text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Blu-ray is a replacement for DVD video, with high definition video and audio. In addition, Blu-ray also includes a solution for more interactive content; Java! The built-in Java platform gives more opportunities than ever ever before. Let us take a look at Blu-ray and Java. The article contains a description of the execution model and contains some sample code.
The Possibilities
The Java platform for Blu-ray players is called BD-J. The BD-J platform is based on Java ME and uses the Personal Basis Profile (PBP) v 1.1. The profile is spiced up with additional APIs such as Java TV and a special Blu-ray API. The PBP v 1.1 is based on a subset of Java v1.4.2. It should be noted that a standard JVM is used, compared to some flavours of Java ME where reflection is not available. Other features include such things as Vector graphics, network support and file system access. The network connectivity is particularly interesting, since it gives opportunities for creating new on-line based applications. Note that a network connection is mandatory for Blu-ray profile 2 (see fast facts). The standard network classes are available, with support for TCP/IP and HTTP. For your security you have the packages for secure connections available, Java Secure Socket Extension. A Blu-ray player could include flash disks, a hard-disc or maybe a USB port for extra memory. The Java file system classes are available for you to access these types of file systems. The GUI toolkit that is used on the BD-J platform is based on AWT, with special components for remote control navigation. The Java Media Framework (JMF) is used for playback of the content on the Blu-ray disc. Last, but not least, we have our beloved Java security sandbox to play in. For example, you are only allowed to access servers that are allowed by the publisher of the disc. All in all you have a rather competent Java platform in your Blu-ray player.
One interesting aspect to consider is the types of applications that could be developed with BD-J. Here are some examples of application types:
- Interactive menus
- In-movie interactivity
- Games
- New content downloading
- On-line shopping
The main reason for incorporating Java in Blu-ray players it to get better possibilities when creating menus, in contrast to DVD players which uses a simple MPEG based technology. Another basic feature is the possibility to add interactivity while playing the movie. Many BD-J applications would probably be games. These Java games could be far more advanced than those included on DVD discs today. But there are other types of applications that could be more interesting. Imagine that you have bought your new Sci-fi movie “Intergalactic Wars III – Special Edition”. You could easily access the library of all movie related downloads, such as games, slideshows, sneak previews of the sequel etc. You could of course also have access to an on-line shop where you could buy your “Intergalactic Wars” action figures etc. All this could be done from your living room, right at your fingertips. Compare it to a DVD where you have to pull out the DVD, start your computer, insert the DVD and start it to access the links to the movie related site.
Ok, that is fine. But what if I want to start develop BD-J applications? What programming environment could be used? In theory you should be able to use Eclipse or NetBeans. The biggest problem is the Blu-ray APIs. Unfortunately, these are not publicly available. If you are willing to pay, you could of course buy a license of a professional Blu-ray authoring tool. These tools seems to be aimed at big production companies, such as the big Hollywood studios, at least when looking at the prices. Another solution is to do a little bit of reverse engineering. There are some people that have made it, see references below. The solution is that you use a PC based Blu-ray player with the appropriate software, such as WinDVD or Power DVD. The software contains the necessary JAR files that you could import into your favourite IDE. This will make it possible to compile your BD-J applications. If you want to buy a book on the subject of programming Blu-ray disc, you do not have that many opportunities. At the time of writing this article, there is ONE book available on the subject and one more announced.
Another tool that is valuable when developing embedded devices is a simulator. There are no Blu-ray simulators available on the market today. There are some environments that are used for Personal Basis Profile development that could be used. As you might imagine, this is not a perfect solution. If you finally have managed to set up a development environment, there is another important thing to take into consideration; performance. The best solution is to make up a set of benchmark and execute them on the target player(s). The best performance you can get today is on a Sony Playstation 3.
Fast Facts: The Blu-ray Profiles
Each Blu-ray player must adhere to a specific profile. These specifies such things as the memory requirement. All profile with video requires a full BD-J implementation. This is a short summary of the current profiles:
- 1.0 – This is the profile that is used today. It requires that the player has at least 64 KB persistent memory.
- 1.1 – This becomes mandatory on all players that are manufactured after November 2007. The memory requirement is increased to 256 MB. The players must have a secondary video decoder and a secondary audio decoder. The secondary video decoder is used for picture in picture, whereas the secondary audio could be used for audio commentary etc.
- 2 – This is called BD-Live and this requires the player to have network connectivity. The memory requirements are set to at least 1 GB. The memory is not limited to built-in memory, but could also be an external memory, such as an USB memory stick.
- 3 – This is an audio only profile. This does not require BD-J.
The Code
Since we use the Personal Basis Profile, the Xlet execution model is used. It is almost the same as the Applet or MIDlet execution model. The difference is that an Xlet does not need an UI. The figure below shows the state machine for the Xlet execution model. When the Xlet has been loaded, the initXlet() method is called. The Xlet then enters the paused state. The startXlet() method is called whenever the Xlet shall start. When the Xlet has started it enters the active state. If the Xlet shall be paused, the pauseXlet() method is called. The destroyXlet() method is called if the user stops the Xlet, and then it enters the destroyed state.
The sample code shows an Xlet class. It contains callback methods for each of the states that an Xlet could be in. The initXlet() method is called the first time the Xlet is executed. This is where you should do your one time initialisations. Next up is the the startXlet() method, where you should have your start-up code. For example, you start your threads here. If you shall have a GUI running, start it in a separate thread. The pauseXlet() is typically used for pausing your threads. The destroyXlet() is called when the Xlet is stopped. This method is used for releasing all the resource that have been allocated, e.g. your file(s) should be closed here.
public class JayViewXlet implements javax.tv.xlet.Xlet { /** * Initialise the Xlet. */ public void initXlet(javax.tv.xlet.XletContext context) throws javax.tv.xlet.XletStateChangeException { // Initialisation code, e.g. GUI components and threads. } /** * Start the Xlet. */ public void startXlet() throws javax.tv.xlet.XletStateChangeException { // This is where it all starts, like threads. } /** * Pause the Xlet. */ public void pauseXlet() { // Pause the Xlet, like pausing your threads, } /** * Destroy the Xlet. */ public void destroyXlet(boolean unconditional) throws javax.tv.xlet.XletStateChangeException { // Destroy the Xlet. Release such things as your file resources. } }
The Conclusion
The Java platform that is included in Blu-ray players brings new possibilities to the publishers of movies. They could bring more added value, such as on-line shopping. There are many obstacles when trying to develop BD-J applications for Blu-ray, but none of them are impossible to solve. Let us hope that we will see many interesting BD-J applications in the future.
h3. The Links
This Post Has 6 Comments
Alex14 Dec 2009
>There are many obstacles when trying to develop BD-J applications for Blu-ray, but none of them are impossible to solve.
Major one which is definitely show stopper for me is price and licensing.
Do you know that you have to pay about $20,000 to obtain working environment with all licenses purchased?
And even if you have decided to do so you should remember that Sony’s license is for one year only. Not only for your environment bit for product too.
So, you can invest money and write good product but later license could be revoked and it will simply not work on any BD player.
Johan Karlsson14 Dec 2009
Actually you could use NetBeans and a software DVD (Blu-ray) player to develop BD-J. If you look at the Blu-ray forum at Sun there are ways of getting a software development version of some player. Of course you do no get a fully fledged environment like the one you obtain for $20.000
Regards
Johan
devinder30 Mar 2010
i want to know about BD Authoring – that what kind of work it is .. is it programming or something else..
Star22 Dec 2012
Everynoe would benefit from reading this post
Pingback: Agarest Senki Zero
Pingback: Best languages to learn how to code | CoderHood | https://blog.jayway.com/2009/12/11/blu-ray-and-java/ | CC-MAIN-2020-05 | refinedweb | 1,663 | 66.74 |
Comparison method is implemented (using Reflector):
public static bool IsNaN(double d) { return (d != d); }
My question is why double.Nan exist? It only creates confusion…
We pay for user submitted tutorials and articles that we publish. Anyone can send in a contributionLearn More
JV Said on Apr 18, 2008 :
You already gave the answer yourself. Because they implemented IEEE 754…
Gabriel Rodriguez Said on Apr 18, 2008 :
any smart reason of why the equality comparison always returns false?
Shahar Y Said on Apr 18, 2008 :
JV,
I am sure they didn’t implement all of the IEEE specs, so they could just ignore this one too.
Anyway, I was just saying that this is a strange behavior. Most of the developers don’t know IEEE specs…
Shahar Y Said on Apr 18, 2008 :
Hi Gabriel Rodriguez,
Nan could be generated by many different mathematical operations and: (Square(-1) == Log(0)) shall indeed return false. But the problem is that (Square(-1) == Square(-1)) will also return false…
Mark S. Rasmussen Said on Apr 18, 2008 :
It’s logic (literally). The same goes for NULL comparisons in any database. You cannot compare two unknowns, and that’s exactly what NaN is, it’s an unknown. All we know is that it’s definitely not a number. Other than that it could be anything.
If I present you with two wrapped boxes and ask you to tell me if the contents are equal, what would you answer?
aalmada Said on Apr 18, 2008 :
double.NaN and double.Infinity exist for mathematical reasons. The following operations, instead of throwing exceptions, return these values:
var x = 1.0 / 0.0; // double.Infinity
var y = Math.Sin(0.0); // double.NaN
SY Said on Apr 18, 2008 :
Sometimes its better to throw exception if your data gets corrupted or unknown. I do think it is much better then using NANs.
Casper Bang Said on Apr 18, 2008 :
On a related note, I find it a much bigger problem in practice, that Double.MIN_VALUE yields 4.9E-324, the smallest *positive* number and not -1.7976931348623157E308, the actual smallest number. Java/JavaScript are the only languages I know where this is the case.
ChrisW Said on Apr 18, 2008 :
Years ago I found a bug in SQL Server. As I recall, if you save a double NAN into a table, and the column that contains that NAN is indexed, then attempts to SELECT that value out via that index will crash SQL Server. This happens even if you’re not using that value in the index anyway (e.g., the index is MyInt + MyDouble, and you select WHERE MyInt=7, and the row with 7 has a NAN, then SQL Server will crash)
I reported this in SQL 7, it remained in SQL 2000. Dunno if it’s been fixed yet.
carsten Said on Apr 18, 2008 :
Apart from the mathematical reason which has already been pointed out, it is very handy to use for variablres where the value is currently undefined; e.g. very handy to get started when computing min or max from a collection. You just have to use it right, ie use isNaN instead of ==.
Matter of fact, you can use ==. x==x is the same as !Double.isNaN(x) since all numnbers which are not NaN are euqal to them selves. But this is clearly just a smart-ass hack. Use Double.isNaN() in real live.
km Said on Jun 8, 2008 :
aalmada, why should Math.Sin(0.0) return a NaN ? Isn’t a sine of 0 just a 0 ?
aalmada Said on Jun 9, 2008 :
@km
Yes, you’re right… :-/
james Said on Oct 4, 2008 :
What about infinity? If x = double.infinity. Then shouldn’t x!=x as well?
Zohrab Broyan Said on Dec 23, 2008 :
To compare double equality you can use Equals() method instead of == operator.
Dean Said on Jan 26, 2009 :
“My question is why double.Nan exist? It only creates confusion…”
How do you suppose you’d assign a NaN value to double without double.NaN? | http://www.dev102.com/2008/04/18/dont-use-doublenan/ | crawl-002 | refinedweb | 683 | 75.5 |
Since this is the first tutorial program, let us comment first on how this tutorial and the rest of the deal.II documentation is supposed to work. The documentation for deal.II comes essentially at three different levels:
Let's come back to the tutorial, since you are looking at the first program (or "step") of it. Each tutorial program is subdivided into the following sections:
The tutorials are not only meant to be static documentation, but you should play with them. To this end, go to the
examples/step-1 directory (or whatever the number of the tutorial is that you're interested in) and type
The first command sets up the files that describe which include files this tutorial program depends on, how to compile it and how to run it. This command should find the installed deal.II libraries as well that were generated when you compiled and installed everything as described in the README file. If this command should fail to find the deal.II library, then you need to provide the path to the installation using the command
instead.
The second of the commands above compiles the sources into an executable, while the last one executes it (strictly speaking,
make run will also compile the code if the executable doesn't exist yet, so you could have skipped the second command if you wanted). This is all that's needed to run the code and produce the output that is discussed in the "Results" section of the tutorial programs. This sequence needs to be repeated in all of the tutorial directories you want to play with.
When learning the library, you need to play with it and see what happens. To this end, open the
examples/step-1/step-1.cc source file with your favorite editor and modify it in some way, save it and run it as above. A few suggestions for possibly modifications are given at the end of the results section of this program, where we also provide a few links to other useful pieces of information.
This and several of the other tutorial programs are also discussed and demonstrated in Wolfgang Bangerth's video lectures on deal.II and computational science. In particular, you can see the steps he executes to run this and other programs, and you will get a much better idea of the tools that can be used to work with deal.II. In particular, lectures 2 and 4 give an overview of deal.II and of the building blocks of any finite element code. (See also video lecture 2, video lecture 4.)
If you are not yet familiar with using Linux and running things on the command line, you may be interested in watching lectures 2.9 and 2.91. (See also video lecture 2.9, video lecture 2.91.) These give overviews over the command line and on what happens when compiling programs, respectively.
Note that deal.II is actively developed, and in the course of this development we occasionally rename or deprecate functions or classes that are still referenced in these video lectures. For example, the step-1 code shown in video lecture 5 uses a class HyperShellBoundary which was replaced with SphericalManifold class later on. Additionally, as of deal.II version 9.0, GridGenerator::hyper_shell() now automatically attaches a SphericalManifold to the Triangulation. Otherwise the rest of the lecture material is relevant.
Let's come back to step-1, the current program. In this first example, we don't actually do very much, but show two techniques: what is the syntax to generate triangulation objects, and some elements of simple loops over all cells. We create two grids, one which is a regularly refined square (not very exciting, but a common starting grid for some problems), and one more geometric attempt: a ring-shaped domain, which is refined towards the inner edge. Through this, you will get to know three things every finite element program will have to have somewhere: An object of type Triangulation for the mesh; a call to the GridGenerator functions to generate a mesh; and loops over all cells that involve iterators (iterators are a generalization of pointers and are frequently used in the C++ standard library; in the context of deal.II, the Iterators on mesh-like containers module talks about them).
The program is otherwise small enough that it doesn't need a whole lot of introduction.
If you are reading through this tutorial program, chances are that you are interested in continuing to use deal.II for your own projects. Thus, you are about to embark on an exercise in programming using a large-scale scientific computing library. Unless you are already an experienced user of large-scale programming methods, this may be new territory for you — with all the new rules that go along with it such as the fact that you will have to deal with code written by others, that you may have to think about documenting your own code because you may not remember what exactly it is doing a year down the road (or because others will be using it as well), or coming up with ways to test that your program is doing the right thing. None of this is something that we typically train mathematicians, engineers, or scientists in but that is important when you start writing software of more than a few hundred lines. Remember: Producing software is not the same as just writing code.
To make your life easier on this journey let us point to some resources that are worthwhile browsing through before you start any large-scale programming:
As a general recommendation: If you expect to spend more than a few days writing software in the future, do yourself the favor of learning tools that can make your life more productive, in particular debuggers and integrated development environments. (See also video lecture 7, video lecture 8, video lecture 8.01, video lecture 25.) You will find that you will get the time spent learning these tools back severalfold soon by being more productive! Several of the video lectures referenced above show how to use tools such as integrated development environments or debuggers.
The most fundamental class in the library is the Triangulation class, which is declared here:
We need the following two includes for loops over cells and/or faces:
Here are some functions to generate standard grids:
Output of grids in various graphics formats:
This is needed for C++ output:
And this for the declarations of the
std::sqrt and
std::fabs functions:
The final step in importing deal.II is this: All deal.II functions and classes are in a namespace
dealii, to make sure they don't clash with symbols from other libraries you may want to use in conjunction with deal.II. One could use these functions and classes by prefixing every use of these names by
::, but that would quickly become cumbersome and annoying. Rather, we simply import the entire deal.II namespace for general use:
In the following, first function, we simply use the unit square as domain and produce a globally refined grid from it.
The first thing to do is to define an object for a triangulation of a two-dimensional domain:
Here and in many following cases, the string "<2>" after a class name indicates that this is an object that shall work in two space dimensions. Likewise, there are versions of the triangulation class that are working in one ("<1>") and three ("<3>") space dimensions. The way this works is through some template magic that we will investigate in some more detail in later example programs; there, we will also see how to write programs in an essentially dimension independent way.
Next, we want to fill the triangulation with a single cell for a square domain. The triangulation is the refined four times, to yield \(4^4=256\) cells in total:
Now we want to write a graphical representation of the mesh to an output file. The GridOut class of deal.II can do that in a number of different output formats; here, we choose scalable vector graphics (SVG) format that you can visualize using the web browser of your choice:
The grid in the following, second function is slightly more complicated in that we use a ring domain and refine the result once globally.
We start again by defining an object for a triangulation of a two-dimensional domain:
We then fill it with a ring domain. The center of the ring shall be the point (1,0), and inner and outer radius shall be 0.5 and 1. The number of circumferential cells could be adjusted automatically by this function, but we choose to set it explicitly to 10 as the last argument:
By default, the triangulation assumes that all boundaries are straight lines, and all cells are bi-linear quads or tri-linear hexes, and that they are defined by the cells of the coarse grid (which we just created). Unless we do something special, when new points need to be introduced the domain is assumed to be delineated by the straight lines of the coarse mesh, and new points will simply be in the middle of the surrounding ones. Here, however, we know that the domain is curved, and we would like to have the Triangulation place new points according to the underlying geometry. Fortunately, some good soul implemented an object which describes a spherical domain, of which the ring is a section; it only needs the center of the ring and automatically figures out how to instruct the Triangulation where to place the new points. The way this works in deal.II is that you tag parts of the triangulation you want to be curved with a number that is usually referred to as "manifold indicator" and then tell the triangulation to use a particular "manifold object" for all places with this manifold indicator. How exactly this works is not important at this point (you can read up on it in step-53 and Manifold description for triangulations). The functions in GridGenerator handle this for us in most circumstances: they attach the correct manifold to a domain so that when the triangulation is refined new cells are placed in the correct places. In the present case GridGenerator::hyper_shell attaches a SphericalManifold to all cells: this causes cells to be refined with calculations in spherical coordinates (so new cells have edges that are either radial or lie along concentric circles around the origin).
By default (i.e., for a Triangulation created by hand or without a call to a GridGenerator function like GridGenerator::hyper_shell or GridGenerator::hyper_ball), all cells and faces of the Triangulation have their manifold_id set to numbers::flat_manifold_id, which is the default if you want a manifold that produces straight edges, but you can change this number for individual cells and faces. In that case, the curved manifold thus associated with number zero will not apply to those parts with a non-zero manifold indicator, but other manifold description objects can be associated with those non-zero indicators. If no manifold description is associated with a particular manifold indicator, a manifold that produces straight edges is implied. (Manifold indicators are a slightly complicated topic; if you're confused about what exactly is happening here, you may want to look at the glossary entry on this topic".) Since the default chosen by GridGenerator::hyper_shell is reasonable we leave things alone.
In order to demonstrate how to write a loop over all cells, we will refine the grid in five steps towards the inner circle of the domain:
Next, we need to loop over the active cells of the triangulation. You can think of a triangulation as a collection of cells. If it were an array, you would just get a pointer that you increment from one element to the next using the operator
++. The cells of a triangulation aren't stored as a simple array, but the concept of an iterator generalizes how pointers work to arbitrary collections of objects (see wikipedia for more information). Typically, any container type in C++ will return an iterator pointing to the start of the collection with a method called
begin, and an iterator point to 1 past the end of the collection with a method called
end. We can increment an iterator
it with the operator
++it, dereference it to get the underlying data with
*it, and check to see if we're done by comparing
it != collection.end().
The second important piece is that we only need the active cells. Active cells are those that are not further refined, and the only ones that can be marked for further refinement. deal.II provides iterator categories that allow us to iterate over all cells (including the parent cells of active ones) or only over the active cells. Because we want the latter, we need to call the method Triangulation::active_cell_iterators().
Putting all of this together, we can loop over all the active cells of a triangulation with
In the initializer of this loop, we've used the
auto keyword for the type of the iterator
it. The
auto keyword means that the type of the object being declared will be inferred from the context. This keyword is useful when the actual type names are long or possibly even redundant. If you're unsure of what the type is and want to look up what operations the result supports, you can go to the documentation for the method Triangulation::active_cell_iterators(). In this case, the type of
it is
Triangulation::active_cell_iterator.
While the
auto keyword can save us from having to type out long names of data types, we still have to type a lot of redundant declarations about the start and end iterator and how to increment it. Instead of doing that, we'll use range- based for loops, which wrap up all of the syntax shown above into a much shorter form:
autokeyword.
Next, we loop over all vertices of the cells. For that purpose we query an iterator over the vertex indices (in 2d, this is an array that contains the elements
{0,1,2,3}, but since
cell->vertex_indices() knows the dimension the cell lives in, the array so returned is correct in all dimensions and this enables this code to be correct whether we run it in 2d or 3d, i.e., it enables "dimension-independent programming" – a big part of what we will discuss in step-4).
If this cell is at the inner boundary, then at least one of its vertices must sit on the inner ring and therefore have a radial distance from the center of exactly 0.5, up to floating point accuracy. So we compute this distance, and if we find a vertex with this property, we flag this cell for later refinement. We can then also break the loop over all vertices and move on to the next cell.
Because the distance from the center is computed as a floating point number, we have to expect that whatever we compute is only accurate to within round-off. As a consequence, we can never expect to compare the distance with the inner radius by equality: A statement such as
if (distance_from_center == inner_radius) will fail unless we get exceptionally lucky. Rather, we need to do this comparison with a certain tolerance, and the usual way to do this is to write it as
if (std::abs(distance_from_center - inner_radius) <= tolerance) where
tolerance is some small number larger than round-off. The question is how to choose it: We could just pick, say,
1e-10, but this is only appropriate if the objects we compare are of size one. If we had created a mesh with cells of size
1e+10, then
1e-10 would be far lower than round-off and, as before, the comparison will only succeed if we get exceptionally lucky. Rather, it is almost always useful to make the tolerance relative to a typical "scale" of the objects being compared. Here, the "scale" would be the inner radius, or maybe the diameter of cells. We choose the former and set the tolerance equal to \(10^{-6}\) times the inner radius of the annulus.
Now that we have marked all the cells that we want refined, we let the triangulation actually do this refinement. The function that does so owes its long name to the fact that one can also mark cells for coarsening, and the function does coarsening and refinement all at once:
Finally, after these five iterations of refinement, we want to again write the resulting mesh to a file, again in SVG format. This works just as above:
Finally, the main function. There isn't much to do here, only to call the two subfunctions, which produce the two grids.
Running the program produces graphics of two grids (grid-1.svg and grid-2.svg). You can open these with most every web browser – in the simplest case, just open the current directory in your file system explorer and click on the file. If you like working on the command line, you call your web browser with the file:
firefox grid-1.svg,
google-chrome grid-1.svg, or whatever the name of your browser is. If you do this, the two meshes should look like this:
The left one, well, is not very exciting. The right one is — at least — unconventional. The pictures color-code the "refinement level" of each cell: How many times did a coarse mesh cell have to be subdivided to obtain the given cell. In the left image, this is boring since the mesh was refined globally a number of times, i.e., every cell was refined the same number of times.
(While the second mesh is entirely artificial and made-up, and certainly not very practical in applications, to everyone's surprise it has found its way into the literature: see [70]. Apparently it is good for some things at least.)
This program obviously does not have a whole lot of functionality, but in particular the
second_grid function has a bunch of places where you can play with it. For example, you could modify the criterion by which we decide which cells to refine. An example would be to change the condition to this:
This would refine all cells for which the \(y\)-coordinate of the cell's center is greater than zero (the
TriaAccessor::center function that we call by dereferencing the
cell iterator returns a Point<2> object; subscripting
[0] would give the \(x\)-coordinate, subscripting
[1] the \(y\)-coordinate). By looking at the functions that TriaAccessor provides, you can also use more complicated criteria for refinement.
In general, what you can do with operations of the form
cell->something() is a bit difficult to find in the documentation because
cell is not a pointer but an iterator. The functions you can call on a cell can be found in the documentation of the classes
TriaAccessor (which has functions that can also be called on faces of cells or, more generally, all sorts of geometric objects that appear in a triangulation), and
CellAccessor (which adds a few functions that are specific to cells).
A more thorough description of the whole iterator concept can be found in the Iterators on mesh-like containers documentation module.
Another possibility would be to generate meshes of entirely different geometries altogether. While for complex geometries there is no way around using meshes obtained from mesh generators, there is a good number of geometries for which deal.II can create meshes using the functions in the GridGenerator namespace. Many of these geometries (such as the one used in this example program) contain cells with curved faces: put another way, we expect the new vertices placed on the boundary to lie along a circle. deal.II handles complex geometries with the Manifold class (and classes inheriting from it); in particular, the functions in GridGenerator corresponding to non-Cartesian grids (such as GridGenerator::hyper_shell or GridGenerator::truncated_cone) attach a Manifold object to the part of the triangulation that should be curved (SphericalManifold and CylindricalManifold, respectively) and use another manifold on the parts that should be flat (FlatManifold). See the documentation of Manifold or the manifold module for descriptions of the design philosophy and interfaces of these classes. Take a look at what they provide and see how they could be used in a program like this.
We also discuss a variety of other ways to create and manipulate meshes (and describe the process of attaching Manifolds) in step-49.
We close with a comment about modifying or writing programs with deal.II in general. When you start working with tutorial programs or your own applications, you will find that mistakes happen: your program will contain code that either aborts the program right away or bugs that simply lead to wrong results. In either case, you will find it extremely helpful to know how to work with a debugger: you may get by for a while by just putting debug output into your program, compiling it, and running it, but ultimately finding bugs with a debugger is much faster, much more convenient, and more reliable because you don't have to recompile the program all the time and because you can inspect the values of variables and how they change.
Rather than postponing learning how to use a debugger till you really can't see any other way to find a bug, here's the one piece of advice we will provide in this program: learn how to use a debugger as soon as possible. It will be time well invested. (See also video lecture 25.) The deal.II Frequently Asked Questions (FAQ) page linked to from the top-level deal.II webpage also provides a good number of hints on debugging deal.II programs.
It is often useful to include meshes into your theses or publications. For this, it may not be very useful to color-code the cells by refinement level, and to print the cell number onto each cell. But it doesn't have to be that way – the GridOut class allows setting flags for each possible output format (see the classes in the GridOutFlags namespace) that control how exactly a mesh is plotted. You can of course also choose other output file formats such as VTK or VTU; this is particularly useful for 3d meshes where a 2d format such as SVG is not particular useful because it fixes a particular viewpoint onto the 3d object. As a consequence, you might want to explore other options in the GridOut class. | https://dealii.org/developer/doxygen/deal.II/step_1.html | CC-MAIN-2021-04 | refinedweb | 3,792 | 56.89 |
Progress
Progress is used to display the progress status for a task that takes a long time or consists of several steps.
import { CProgress } from '@chakra-ui/vue'
Editable Example
<c-progress :
You can add
hasStripe prop to any progressbar to apply a stripe via a CSS
gradient over the progress bar’s background color.
Editable Example
<c-progress has-stripe :
There are two ways you can increase the height of the progressbar:
- You can add
sizeprop to increase the height of the progressbar.
- You can also use the
heightprop to manually set a height.
Editable Example
<c-stack : <c-progress <c-progress <c-progress <c-progress </c-stack>
You can add
color prop to any progressbar to apply any color that exists in
the theme.
Editable Example
<c-progress color="gray" has-stripe />
The striped gradient can also be animated. Just add
isAnimated and
hasStripe
prop to the progressbar to animate the stripes right to left via CSS3
animations.
Editable Example
<c-progress has-stripe is-animated />
- Progress has a
roleset to
progressbarto denote that it's a progress.
- Progress has
aria-valuenowset to the percentage completion value passed to the component, to ensure the progress percent is visible to screen readers.
❤️ Contribute to this page
Caught a mistake or want to contribute to the documentation? Edit this page on GitHub! | https://vue.chakra-ui.com/progress | CC-MAIN-2020-40 | refinedweb | 222 | 61.26 |
For the last few weeks I've played around with Angular 2 in an ASP.NET Core application. To start writing Angular 2 components there is some necessary preparation. In the first part of this small Angular 2 series, I'm going to show you how to prepare your project to start working with Angular.
Prerequisites
I'm trying to create a real single page application (SPA) which is really easy with Angular 2. This is why I created an empty ASP.NET project without any controllers, views, and other stuff in it. It only contains a Startup.cs project.json and a Project_Readme.html. I'll create some API Controllers later on to provide some data to Angular post, I'll just use a mock of that interface, which will provide objects generated by GenFu.
Let's Start
Let's create a new empty ASP.NET Core project. We don't need any views, but just a single Index.html in the wwwroot folder. This file will be the host of our single page application.
The NuGet Dependencies
We also need some NuGet dependencies in our project:
-*" },
We need MVC just for the Web API to provide the data. The StaticFiles library is needed to serve the Index.html and all the CSS, images, and JavaScript files to run the SPA. We also need some logging and configuration.
- GenFu is just used to generate some mock data.
- Gos.Tools.Azure is the already mentioned Azure library to wrap the connection to the Azure Table Storage.
- Gos.Tools.Cqs is a small library which provides the infrastructure to use the "Command & Query Segregation" pattern in your app. These three libraries are not yet relevant for the part one of this series.
Prepare the Startup.cs
To get the static files (Index.html, CSS, images and JavaScripts) we need to add the necessary middleware:
app.UseDefaultFiles(); app.UseStaticFiles(); app.UseMvcWithDefaultRoute();
We also need to add MVC with the default routes to activate the Web API. Because we'll use attribute routing, we don't need to configure a special routing here.
To enable Angular 2 routing and deep links in our SPA, we need a separate error handling: In case of any 404 Exception, we need to call the Index.html because the called URL could be an Angular 2 a 404 status and if there's no call to a file (
!Path.HasExtension()) and then we start the pipeline again.
I placed this code before the previously mentioned middleware to provide the static files.
I also need to add MVC to the services in the ConfigureServices method: 2 and its dependencies and Gulp to prepare our scripts. To do this, I added an npm configuration file called package" } }
By the way, if you add a new file in Visual Studio, you can easily select predefined files for client side techniques in the "add new items" dialog:
Visual Studio 2015 also starts downloading the dependencies just after saving the file; npm needs some more time to download all the dependencies.
Preparing the JavaScript
Bower will load the dependencies into the lib folder in the wwwroot. npm stores the files outside the wwwroot in the Node_Modules. We want to move just the necessary files to the wwwroot, too. To get this done we use Gulp. Just create a new gulpfile.js with the "add new items" dialog and add the following lines in it:
/*']);
Now, you can use the Task Runner Explorer in Visual Studio 2015 to run the "copy-deps" task to get the files to the right location.
Preparing the Index.html
In the header of the Index.html we just need a meaningful title, a base href to get the Angular routing working and a reference to the bootstrap CSS:
<base href="/" /> <link rel="stylesheet" href="lib/bootstrap/dist/css/bootstrap.css" />
At the end of the body, we need a little more. Add the following JavaScript references:
>
After that, we have to add some configuration and to initialize our Angular 2 app:
<script> System.config({ packages: { 'app': { defaultExtension: 'js' }, 'lib': { defaultExtension: 'js' }, } }); System.import('app/boot') .then(null, console.error.bind(console)); </script>
This code calls a boot.js in the folder app inside the wwwroot. This file is the Angular 2 bootstrap we need to create later on.
Just after the starting body, we need to call the directive of our first Angular 2 component:
<my-app>Loading...</my-app>
The string "Loading..." will be displayed until the Angular 2 app is loaded. I'll show the Angular 2 code a little later.
Configure TypeScript
Since the AngularJS team is using TypeScript to create Angular 2, it makes a lot sense to write the app using TypeScript instead of plain JavaScript. TypeScript is, too. I prefer to work in a separate Scripts folder outside the wwwroot and to transpile the JavaScript into the wwwroot/app folder. To do this, we need a TypeScript configuration called tsconfig.json. This file tells the TypeScript compiler how to compile and.
Enable ES6
To use ECMAScript 6 features in TypeScript we need to add the es6-shim definition to the scripts folder. Just download it from the DefinitelyTyped repository on GitHub.
That's pretty much it to start working with Angular 2, TypeScript, and ASP.NET Core. We haven't seen very much of the ASP.NET Core stuff until yet, but we will see some more things in one of the next posts about it.
Let's Create the First App
Now we have the project set up to write Angular 2 components using TypeScript and to use the transpiled code in the Index.html, which hosts the app.
As already mentioned, we first need to bootstrap the application. I did this by creating a file called boot.ts inside the scripts folder. This file contains just four lines of code:
///<reference path="../node_modules/angular2/typings/browser.d.ts"/> import {bootstrap} from 'angular2/platform/browser' import {AppComponent} from './app' bootstrap(AppComponent);
It references and imports the angular 2/platform/browser component and the AppComponent which needs to be created in the next step.
The last line starts the Angular 2 App by passing the root component to the bootstrap method.
The AppComponent is in another TypeScript file called app.ts:
import {Component} from 'angular2/core'; @Component({ selector: 'my-app', template: '<p></p>' }) export class AppComponent { Title: string; constructor() { this.Title = 'Hello World'; } }
This pretty simple component just defines the directive we already used in the Index.html and contains a simple template. Instead of 2 logs pretty detailed information about problems on the client.
Conclusion
This is just a simple "Hello World" example, but this will show you whether the configuration is working or not.
If this is done and if all is working we can start creating some more complex things. But, let me show this in another blog post.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/aspnet-core-and-angular-2-part-1 | CC-MAIN-2017-51 | refinedweb | 1,163 | 66.33 |
Somewhat over-engineered blinking LEDs
My Raspberry Pi arrived a couple of weeks ago and I've been working on turning it into a mini-audio server to connect to the home stereo in the living room.
As part of the project I'd like to drive an analog VU meter from the sound signal.
This week my (enthusiastic!) wife and I played around with attaching some basic electronics to the GPIO ports so that we could get more comfortable the Raspberry Pi. Our electronics knowledge is more than a little rusty but surprisingly everything we tried worked first go.
On the first night we attached a LED in line with a resistor directly to a GPIO port and were able to programmatically turn it on and off. Easily done.
Then we took that a little further by using a transistor to switch the LED from the Pi's 5V power supply. This is a better option because the circuit can be arranged so that a minimal current is pulled from the GPIO pin but a higher current can be put through the LED to give a nice bright light. The amount of current you can put through the GPIO pins without damaging the Pi is limited so this is a safer option (although not strictly needed for a single LED). There's an excellent page on elinux.org which explains this arrangment.
Here's the result (sorry about the dodgy video quality):
The Python code driving the LED looks like:
import itertools import time import RPi.GPIO as GPIO # to use Raspberry Pi board pin numbers GPIO.setmode(GPIO.BCM) # set up GPIO output channel GPIO.setup(17, GPIO.OUT) pin_values = itertools.cycle([GPIO.HIGH, GPIO.LOW]) for pin_value in pin_values: GPIO.output(17, pin_value) time.sleep(1)
On the second night we duplicated this circuit 5 times to drive 5 LEDs:
We were quite chuffed that we managed to pack everything neatly into one end of the breadboard so that all the LEDs were in a line.
The plan now is to get PulseAudio working. It provides a nice way to intercept the sound going to the audio output. It should be possible to use that to drive these LEDs like the lights on a retro 80's hi-fi system.
And after comes driving an analog meter which will require digital-to-analog conversion either by using the PWM channels or an external DAC chip. More on that to come. | http://freshfoo.com/posts/overengineered_blinking_leds/ | CC-MAIN-2016-44 | refinedweb | 413 | 71.34 |
Algorithm to find cliques of a given size k【O(n^k) time complexity】
Get FREE domain for 1st year and build your brand new site
Reading time: 35 minutes
In this article, we will go through a simple yet elegant algorithm to find a clique of a given size. Clique is an interesting topic in itself given that the clique decision problem is NP-Complete and clique arises in almost all real-life applications involving graphs. Before we go into the wonderful algorithm, we will go through some basic ideas.. So we can say that a clique in an undirected graph is a subgraph that is complete.Learn more about Clique in general and related ideas and problems.Learn why the Clique decision problem is NP-Complete
A clique of size
k in a graph
G is a clique of graph
G containing
k vertices, i.e. the degree of each vertex is
k-1 in that clique.
So particularly, if there is a subset of
k vertices that are connected to each other in the graph
G, we say that graph contains a k-clique.
A k-clique can be a maximal clique or can be a subset of a maximal clique, so if a graph contains a clique of size more than
k then it definitely contains a clique of size
k.
For example the graph shown below:
Algorithm
We can find all the 2-cliques by simply enumerating all the edges.
To find k+1-cliques, we can use the previous results. Compare all the pairs of k-cliques. If the two subgraphs have k-1 vertices in common and graph contains the missing edge, we can form a k+1-clique.
The above algorithm of finding k-clique in a graph G takes polinomial time for its execution. The algorithm starts from 2-clique pairs and use this as base data to find 3-cliques and more.
To generate 3-cliques from 2-cliques we take each combination pair of 2-cliques and take intersection of the pair, if the intersection is an edge and it is present in the graph then the union of the pair is a clique of size 3. By doing intersection of the pair we find the missing edge so that the 2-clique can be extended to 3-clique, and if the edge is present in the graph then we extend the 2-clique pair into 3-clique and store it. In similar way we generate k+1-clique from k-clique.
Let's understand with it with a graph with 4 vertices:
To find k-cliques we iterate the same method O(k) times. The method which finds the p+1-clique from p-clique takes O(n) time where n is number of vertices. So in overall the algorithm takes O(nk) time in the worst case.
Implementation
Code in Python3
from itertools import combinations import networkx as nx def print_cliques(graph, size_k): for k, cliques in k_cliques(graph): if k == size_k: print('%d-cliques = %d, %s.' % (k, len(cliques), cliques)) nodes, edges = 6, 10 size_k = 3 graph = nx.Graph() graph.add_nodes_from(range(nodes)) graph.add_edge(1, 2) graph.add_edge(1, 3) graph.add_edge(1, 5) graph.add_edge(2, 3) graph.add_edge(2, 4) graph.add_edge(2, 6) graph.add_edge(3, 4) graph.add_edge(3, 6) graph.add_edge(4, 5) graph.add_edge(4, 6) print_cliques(graph, size_k)
Output
3-cliques = 5, [{3, 4, 6}, {2, 3, 6}, {2, 4, 6}, {1, 2, 3}, {2, 3, 4}].
Complexity
Time Complexity
- The k-clique algorithm takes O(nk) (i.e. polynomial) time in the worst case.
Space Complexity
- The k-clique algoorithm takes O(n2) auxiliary space in the worst case.
Related articlesUsing Bron Kerbosch algorithm to find maximal cliques in O(3^(N/3))
Greedy approach to find a single maximal clique in O(V^2) time complexity | https://iq.opengenus.org/algorithm-to-find-cliques-of-a-given-size-k/ | CC-MAIN-2021-43 | refinedweb | 650 | 72.76 |
Important: Please read the Qt Code of Conduct -
Error on my application: (process:9105): GLib-ERROR **: Creating pipes for GWakeup: Too many open files
Well I think it's about threads, the limitation os System!
But I can't find the problem!
I think the problem is here:
#include "restservices.h" RestServices::RestServices(QObject *parent) : QObject(parent) { } void RestServices::send(QString data) { QStringList lista= data.split("$", QString::SkipEmptyParts); for (int i=0; i<lista.size(); i++) { qDebug() << "$" << lista.at(i); sendRequest("$" + lista.at(i)); } } void RestServices::sendRequest(QString data) { QNetworkAccessManager *manager = new QNetworkAccessManager(this); QObject::connect(manager, SIGNAL(finished(QNetworkReply *)), SLOT(slotRequestFinished(QNetworkReply *))); QNetworkRequest request; request.setUrl(QUrl("")); request.setHeader(QNetworkRequest::ContentTypeHeader, "text/plain"); QNetworkReply *reply = 0; reply = manager->post(request, data.toUtf8()); } void RestServices::slotRequestFinished(QNetworkReply *reply) { if (reply->error() > 0) { qDebug() << reply->errorString(); } else { qDebug() << "Retornou: " << reply->readAll(); } }
So when I call to many times the method "send" I got error " GLib-ERROR **: Creating pipes for GWakeup: Too many open files".
I think that is about Threads, but I can't find a Way to avoid this problem!
Can anyone help me?
I remove the line:
´´´
reply = manager->post(request, data.toUtf8());
´´´
And them stop to crash the app...
This post is deleted!
I see this at log output:
Erro: "Out of resources"
´´´
void RestServices::slotRequestFinished(QNetworkReply *reply)
{
if (reply->error() > 0) {
qDebug() << "Erro: " << reply->errorString();
} else {
qDebug() << "Retornou: " << reply->readAll();
}
}
´´´
Still getting erros.
Anyone?
Hi,
You are creating a new QNetworkAccessManager each time you call sendRequest and never delete it, the same goes for the QNetworkReply.
You should have only one QNetworkAccessManager for your application and take care of cleaning up the QNetworkReply.
I try to do that I got another errors.
I will check this again, I try to clear and remove the reference and etc... But I don't found anything about it.
I will try more!
Thanks
Well I use 3 QNetworkAccessManager and 1 QTcpSocket:
- Check Internet Connection (With google), checked every second;
- Check Server Connection, checked every second too;
- The main: Send data received from QTcpSocket (I read data from another App thru TCP Socket).
So, appears to be an error on item 3. After start send data do server I get the crash, so...
- I change all QNetworkAccessManager to use each one single Instance (can I call Instance in C/C++) and this change:
void RestServices::slotRequestFinished(QNetworkReply *reply) { reply->close(); delete myManager; }
So each "send" create a new myManager and after send DELETE it...
Appears more stable (for longer) at this moment. I will need test more.
Yeap... The app take longerrrrr to crash, but I got same error...
You can use a single QNetworkAccessManager for your application. It will handle parallel connections for you (up to six at the same time and it will queue other requests made IIRC)
@SGaist I can instantiate on MainWindow, at start up and pass the pointer to other classes, that's correct?
And I can make any connection withs signals to handle each request that I make?
You can yes.
Since you are making a REST service interface. You should rather have an object that represent that service and have your widgets call functions on that object. That way you can modify the REST service if you want and only have one place in your code that you will need to modify. | https://forum.qt.io/topic/56682/error-on-my-application-process-9105-glib-error-creating-pipes-for-gwakeup-too-many-open-files | CC-MAIN-2021-31 | refinedweb | 558 | 57.37 |
.
How to Get an API key
1- Login to your Google Cloud Console
2- From top navigation bar click “Select a project”
3- In the new window click “New project”
4- Type a name for your project and click on “Create”
5- From left side navigation go to “APIs & Services > Library”
6- From Maps section select “Places API” or search for “Places API” in the search box.
8- Go to Credentials tab
9- Click on “Create Credentials”
10- Click on “API Key”
11- Copy your generated API Key and store it somewhere.
Congratulations! You’ve got your Google Places API key successfully. Now let’s get started with coding.
Dummy Class Object
Let’s create a class which does nothing for now. You will pass your API key in the class constructor and set the apiKey attribute so you can access it later easily. We will complete our class step by step.
To get place details, you need to search for places and get the place IDs first. Fortunately there is an API endpoint for this.
With this endpoint you will send a GPS Coordinate and a radius to the API and it will return the nearby places by your defined radius. Also there is a filter called types which can filter out only the types of the places that you are interested in. Like school or restaurant .
Note: Here is a list of all valid types:
Lets add this search function to our class.
Note: Google places API can return the results in JSON or XML format. We will be using JSON format in this tutorial.
As you can see we are sending 4 parameters to the api and get back our json result. Then load it in a Python dictionary using json.loads function.
But still there is something missing in our function. Each search can return maximum 60 results and there will be 20 results per page. So you will need to paginate through the results if there is more that 20 results in your search.
If there is more pages the api will return next_page_token with the results and you have to submit this value to the api with the same search parameters to get the rest of the results.
Note: There is a delay until the next_page_token is issued and validated. So you need to put a small sleep time like 2 seconds between each request. Otherwise, you will get an INVALID_REQUEST status.
So let’s add pagination to our function. At the beginning we are creating an empty list for the found places and extending it with results from the search API.
Now our function with return a list containing the search results (Max 60 places) but still we don’t have all the details like user reviews.
Place Details
To get the complete details we have to use another API endpoint. So let’s create another function to get the place details.
Again we have to submit some parameters to the place details API to get the results. Some parameters are required some while others are optional.
Required parameters:
key : your API key.
placeid : An identifier that uniquely identifies a place, returned from the Place Search.
Optional parameters:
language : The language code, indicating in which language the results should be returned. See the list of supported languages and their codes (default is EN).
fields : One or more fields, specifying the types of place data to return, separated by a comma.
There is 3 categories for the fields parameter.
Basic: address_component, adr_address, alt_id, formatted_address, geometry, icon, id, name, permanently_closed, photo, place_id, plus_code, scope, type, url, utc_offset, vicinity
Contact: formatted_phone_number, international_phone_number, opening_hours, website
Atmosphere: price_level, rating, review
So let’s create our function to get the place details.
It’s going to be very similar to our first function. We just need to use different URL and parameters.
This is how this function looks like.
Note: This function expects the fields parameter to be a list of strings (Valid fields string from above) then in convert the list to a comma separated string using ",".join(fields)
So here is our complete class.
Now let’s use this class to retrieve some real information.
Example: Getting User Reviews for a Place Using GooglePlaces Class
In this example, you will see how to retrieve some information about places including place name, user reviews and ratings, address, phone number and website.
First you need to search for places as stated before. So go to maps.google.com and search for the area you are interested in. Then click anywhere on the map and a small box will show up in the bottom of the page.
Copy the GPS coordinates. We will use it to search the area.
We have everything we need now. Let’s write some code.
Initialize GooglePlaces class with your API key.
Search the and store the results in a list.
Note: This will return nearby restaurant up to 100 meters away and maximum of 60 places.
Now places variable contains our search results. Every place has a place_id and we will use the place identifier to retrieve more details about it.
Let’s define the fields which we want to retrieve.
fields = ['name', 'formatted_address', 'international_phone_number', 'website', 'rating', 'review']
And finally retrieve the details.
You have all the details in details dictionary now. Here I’m just going to print the details but you can store it in CSV or Excel file or even a database.
If you run this code in your terminal, it will print something like the image below. I have added some separators so it’ easier to read.
As you can see in the image all the info we requested is printed on the the screen so you can process them easily.
Note: some variable ( website,name, addrees, phone_number ) are wrapped inside a try/except block just in case if the API doesn’t return those info.
Limitations
Number of Reviews
Currently, the Google Places API returns only 5 last reviews.
Being able to retrieve all the reviews for a given business is only supported if you are a verified business owner and can be done via the Google My Business API:
Number & Type of Requests
Using Google Places API for free, you can only send up to 100,000 requests per month, and retrieve “Basic Data”.
The Basic Data SKU is triggered when any of these fields are requested:
address_component,
adr_address,
formatted_address,
geometry,
icon, name,
permanently_closed,
photo, place_id,
plus_code,
type,
url,
utc_offset,
vicinity.
On the other hand, there are request types that are paid from the first request. You can check rate limits and pricing details here:
Completed Code
I speak Python!
Majid Alizadeh is a freelance developer specialized in web development, web scraping and automation. He provides high quality and sophisticated software for his clients. Beside Python he works with other languages like Ruby, PHP and JS as well.
17 Replies to “Google Places API: Extracting Location Data & Reviews”
Thanks. I have 300 addresses in a dataframe. How we can handle them?
Hi Abbas! What do you want to do with these addresses? Generally speaking, you can check this question (read all the answers) to know how to iterate a dataframe.
Thanks a lot for sharing the knowledge. I have used your query but getting an error “SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1056)”. Any help would really be appreciated. Thanks!!
@soumi Try adding the
verify=Falseto this line, as follows:
requests.get(endpoint_url, params = params, verify=False)
I have a list of addresses that I would like to know the business name that is at each location.
@Jack What about “iterating” the list of addresses using a for loop?
hello
i have got empty places 🙁 i only change GooglePlaces(“AIzaSyCy-HpjsoOoQAbwbhGdhfE8wCwrtpvqj14”)
===================PLACE===================
(‘Name:’, ”)
(‘Website:’, ”)
(‘Address:’, ”)
(‘Phone Number’, ”)
==================REWIEVS==================
===================PLACE===================
(‘Name:’, ”)
(‘Website:’, ”)
(‘Address:’, ”)
(‘Phone Number’, ”)
==================REWIEVS==================
@leena Are you sure the code written/copied correctly and that you are using the latest versions of all the libraries?
hi,
how to store it in excel file ?
@leena You can use a Python library like Pandas or CSV to store data into a CSV file.
Traceback (most recent call last):
File “C:/Users/varsayemlak/AppData/Local/Programs/Python/Python37-32/asdad.py”, line 1, in
import requests
ModuleNotFoundError: No module named ‘requests’
@evisofis You must install the Python library “requests” beforehand.
I really need to get in contact with Majid. I need some help and am not asking for you to do it for free. Please contact me.
@Chase – Regarding the malicious Telegram group you found, you can simply “report” it to Telegram on their website, or send them an email at “abuse@telegram.org” and they can take the proper action. This has nothing to do with us or what we do here. All the best! -admin
thank you it is work now, but how to receive more than 5 reviews?
and is it possible to specify the language for the 5 reviews? | https://python.gotrained.com/google-places-api-extracting-location-data-reviews/ | CC-MAIN-2020-34 | refinedweb | 1,501 | 73.47 |
On Mon, Sep 08, 2003 at 02:16:21AM +1000, Damien Elmes wrote: > Mirian Crzig Lennox <address@hidden> writes: > > > Why "gah?" The alternative is to clutter the user's source tree with > > magical names like "{arch}", ",,what-changed.foo" and ++log.bar", > > requiring added complexity to tell them apart from actual source. > > Directories are the canonical way to partition namespace in Unix, so > > we may as well use them. Let the kernel do our work for us rather > > than requiring every utility know how to recognise junk paths from > > source. > > > > This is also the rationale behind the common practice of keeping build > > directories separate from one's source trees. > > The ,,what-changed stuff no longer appears by default in recent tla > releases. The log filename could probably be shortened a bit so it > doesn't mess up the directory listing wrapping, but it's not a huge > problem. {arch} is a simple grep -v away if it's a bother. > > I remember being quite turned off by all the funnily named files when > I first started arch. But really I couldn't care anymore - I know for > one that having to do something like "cd project/actual-tree" where > project/ contained {arch} etc would be far more of a pain than having > the files sitting inside the same tree. Maybe there's some scope for > improvements here, but it's certainly not the priority I once > considered it. It can be convenient to have those "arch droppings" in > easy view, and the ,, files are easy to remove. . > A tool like "cvs export" would probably a good thing though. i would like this, i personally don't want to ship any version control data with releases. in any event its easy enough to remove this without an export command. -- Ethan Benson
pgpEKoU_qZpG1.pgp
Description: PGP signature | http://lists.gnu.org/archive/html/gnu-arch-users/2003-09/msg00240.html | CC-MAIN-2013-20 | refinedweb | 305 | 73.07 |
#include "BCP_vector.hpp"
Include dependency graph for BCP_vector_sanity.hpp:
This graph shows which files directly or indirectly include this file:
Go to the source code of this file.
A helper function to test whether a set positions is sane for a vector.
The set of positions consists of the entries in
[firstpos,lastpos). The length of the vector is
maxsize.
The set of positions is defined to be sane if and only if the entries are in increasing order, there are no duplicate entries, no negative entries and the largest entry is smaller than
maxsize.
If the sanity check fails the function throws a [
BCP_fatal_error]{BCP_fatal_error.html} exception.
Referenced by BCP_vec< T >::erase_by_index(), BCP_vec< T >::keep_by_index(), BCP_vec< int >::update(), BCP_vec< double >::update(), BCP_vec< char >::update(), and BCP_vec< T >::update(). | http://www.coin-or.org/Doxygen/CoinAll/_b_c_p__vector__sanity_8hpp.html | crawl-003 | refinedweb | 129 | 64.2 |
This is the Java Program to Implement the pow() Function.
Given a value say x, and an exponent n, write a program to calculate x raised to the power n.
The value x raised to the power n can be calculated, using a simple recursive algorithm. If the exponent is 1 return the variable x. Otherwise, recursively call the function on half of the exponent. If the exponent is even, multiply the result obtained from the recursive call by itself and in case if the exponent is odd also multiply x after multiplying the result.
Here is the source code of the Java Program to Implement the pow() Function. The program is successfully compiled and tested using IDE IntelliJ Idea in Windows 7. The program output is also shown below.
// Java Program to Implement the pow() Function
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Power {
// Function to calculate power of x raised to n
static double pow(double x, int n){
if(n<0){
return pow(1/x,-n);
}
if(n==1)
return x;
else if(n%2 == 0){
double y = pow(x,n/2);
return y*y;
}
double y = pow(x,n/2);
return y*y*x;
}
// Function to read user input
public static void main(String[] args) {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
double x;
int n;
try {
System.out.println("Enter the number");
x = Double.parseDouble(br.readLine());
System.out.println("Enter the exponent (in integer)");
n = Integer.parseInt(br.readLine());
} catch (Exception e) {
System.out.println("An error occurred");
return;
}
double result = pow(x,n);
System.out.printf(x + " raised to " + n + " is %f", result);
}
}
1. In function pow(), firstly the exponent is checked. If it is negative, then the recursive call is made on the reciprocal of x, with negative of the exponent.
2. If the exponent is 1, then simply x is returned.
3. If the exponent is even, then a recursive call with half of the exponent is made and finally the result is squared.
4. If the exponent is odd, then a recursive call with half of the exponent is made and finally the result is squared and multiplied with x.
Time Complexity: O(log(n)) where n is the exponent.
Case 1 (Simple Test Case): Enter the number 3.5 Enter the exponent (in integer) 5 3.5 raised to 5 is 525.218750 Case 2 (Simple Test Case - another example): Enter the number 2 Enter the exponent (in integer) -2 2.0 raised to -2 is 0.250000
Sanfoundry Global Education & Learning Series – Java Programs. | https://www.sanfoundry.com/java-program-implement-pow-function/ | CC-MAIN-2019-26 | refinedweb | 430 | 58.79 |
11 May 2011 03:35 [Source: ICIS news]
PITTSBURGH, Pennsylvania (ICIS)--The exponential growth of North American shale gas production is a once-in-a-lifetime gift, NOVA Chemicals CEO, Randy Woelfel, said on Tuesday, but he cautioned that the feedstock windfall must be secured with sustainable economic, social and environmental policies.
Speaking at the close of the Pittsburgh Chemical Day conference, Woelfel described the recent eruption of shale gas supply as akin to the 1850s California gold rush in that it has reversed the once poor fortunes for the continent’s chemicals sector.
“As recently as 2005, funeral music was being played for the North American chemicals industry,” Woelfel said.
“The industry was facing a dwindling supply of feedstock, higher costs and a sophisticated but mature market, and 97% of new capital investment was going into Asia and the ?xml:namespace>
“No new polyethylene [PE] plants had been built in
“We were dependent on LNG [liquefied natural gas] imports and the pipeline from
“But just since 2005 - a heartbeat in industrial time - we have gone from import terminals to export facilities,” he said, adding that “today our industry is back, and once again we are a competitive force to be reckoned with on the global stage”.
“There is talk of a ‘super cycle’ for the North American chemicals sector," he said, "and it will require a tremendous investment in new plants just to keep up."
“If I may quote Mark Twain, it really does seem like the impending death of our industry was a bit exaggerated,” Woelfel said.
He said the windfall in shale gas feedstock supply that has revived North American chemicals will provide a payoff to the broader manufacturing sectors and the general economy, “and we need to share this fact with policymakers”.
“But we also need to be cautious,” he said. “In the race to grow, the decisions we make now have to last and be good for years to come.”
He argued that the boom in shale gas development was indeed like the famous 1850s California gold rush in that “something local became global, small towns became boom towns and there is the risk of thinking about the dollar for today without thought for tomorrow”.
“For the shale gas boom to be a real success, it has to be sustainable economically, socially and environmentally,” he said. “This is our triple bottom line.”
He said that the chemicals and gas industries must work to ensure the environmental sustainability of shale gas by meeting the challenge of public concerns over the chemicals and water volumes used in hydraulic fracturing.
“If society does not embrace the right balance of risk and reward in shale gas, that opportunity will close for us. Society holds our permit to operate, not us," Woelfel said.
The Pittsburgh Chemical Day conference was held on | http://www.icis.com/Articles/2011/05/11/9458730/NOVA-Chemical-CEO-sees-golden-opportunity-in-shale-gas.html | CC-MAIN-2014-10 | refinedweb | 471 | 52.23 |
.
Asia - Pacific OMICRON electronics Asia Ltd. Unit 2812-19, 28/F, The Metropolis Tower 10 Metropolis Drive, Hunghom Kowloon, Hong Kong S.A.R. Phone: +852 3767 5500 E-Mail: support@asia.omicron.at This manual is a publication of OMICRON electronics GmbH. Web: All rights including translation reserved. Reproduction of any kind, for example, photocopying, microfilming, optical character recognition and/or storage in electronic data processing systems, Europe, Africa, Middle East requires the explicit consent of OMICRON electronics. Reprinting, wholly or in part, is not permitted. OMICRON electronics GmbH Oberes Ried 1, The product information, specifications, and technical data embodied in this manual represent A-6833 Klaus, Austria the technical status at the time of writing and are subject to change without prior notice. Phone: +43 59495 4444 We have done our best to ensure that the information given in this manual is useful, accurate and entirely reliable. However, OMICRON electronics does not assume responsibility for any E-Mail: support@omicron.at inaccuracies which may be present. Web: The user is responsible for every application that makes use of an OMICRON product. OMICRON electronics translates this manual from the source language English into a number For addresses of OMICRON electronics offices with customer service centers, regional sales of other languages. Any translation of this manual is done for local requirements, and in the offices or offices for training, consulting and commissioning please visit our website. event of a dispute between the English and a non-English version, the English version of this manual shall govern.
Preface Introduction Quick Current Voltage Transformer Resistance Others Common Technical Data CP TD1 CP CU1 CP SB1 CP CB2 Transformer Transformer FunctionsPrefaceCPC 100 V 3.0
About this User Manual Safety Instructions for the CPC 100 and its AccessoriesThe purpose of this User Manual is to get you started quickly. It guides you directly to the various Orderly Measures Caution: The CPC 100 must be used in observance of all existing safety require-CPC 100 application fields, shows the typical test setup, the corresponding CPC 100 test card, ments from national standards for accident prevention and environmental protec- • This User Manual only complements the CPC 100 Reference Manual available in PDFand outlines the parameters used for this test in a compact form. tion. format on the CPC 100 Toolset CD-ROM and the CPC 100 Start Page. However, it does notSince the scope of this User Manual is confined to the most important information about a replace it.specific subject, the CPC 100 User Manual complements the CPC 100 Reference Manual, Before operating the CPC 100, read the following safety instructions carefully. It is not • Either this User Manual or the CPC 100 Reference Manual should always be available onhowever, it does not replace it. The CPC 100 Reference Manual is available in PDF format on recommended that the CPC 100 be used (or even turned on) without understanding the the site where the CPC 100 is being used.the CPC 100 Toolset CD-ROM and the CPC 100 Start Page. information in this manual. If some points of the safety instructions are unclear, contact • Personnel assigned to use the CPC 100 should carefully read the CPC 100 User Manual/ OMICRON electronics. Reference Manual - in particular the section on safety instructions - before beginning to workReading the CPC 100 User Manual alone does not release the user from the duty of complyingwith all national and international safety regulations relevant for working with the CPC 100, for with it. On principle, this also applies to personnel who only occasionally work with theexample, the regulation EN50191 "Erection and Operation of Electrical Test Equipment" as well Principle Use According to Regulations CPC 100.as the applicable regulations for accident prevention in the country and at the site of operation. • The CPC 100 should only be used in a safe manner, mindful of the dangers while paying • Do not undertake any modifications, extensions, or adaptations to the CPC 100. attention to the User Manual, and when it is in a technically sound condition and when its • Use the CPC 100 in conjunction with original accessories only.Conventions and Symbols Used use is in accordance with the regulations. In particular, avoid disruptions that could in turn affect safety. Operator Qualifications and Primary ResponsibilitiesIn this manual, the following symbols indicate paragraphs with special safety relevant meaning: • DANGER: If you have a cardiac pacemaker, do not use the CPC 100! Before operating theSymbol Description CPC 100, make sure there is no person with a cardiac pacemaker in the immediate vicinity. Warning: Testing with the CPC 100 should only be performed by authorized and • The CPC 100 is exclusively intended for the application fields specified in detail in qualified personnel. Clearly establish the responsibilities. Equipment damage or loss of data possible. ”Designated Use” on page Preface-2. Any other use is deemed not to be according to the regulations. The manufacturer/distributor is not liable for damage resulting from improper Personnel receiving training, instruction, direction, or education on the CPC 100 should remain usage. The user alone assumes all responsibility and risk. under the constant supervision of an experienced operator while working with the equipment. Personal injury or severe damage to objects possible. • Following the instructions provided in this User Manual and in the CPC 100 Reference Manual available in PDF format on the CPC 100 Toolset CD-ROM and the CPC 100 Safe Operation Start Page is also considered part of being in accordance with the regulations. When putting the CPC 100 into operation, follow the instructions in section "Putting CPC 100 • Do not open the CPC 100 housing. into Operation" in the CPC 100 Reference Manual (available in PDF format on the • If you do not use the CPC 100 anymore, turn the safety key to "lock" (vertical) and remove CPC 100 Toolset CD-ROM or the CPC 100 Start Page). the key to avoid anybody accidentally turning on the CPC 100. • Store key and the CPC 100 separately to prevent unauthorized personnel from using the Note: Never use the CPC 100, any accessory or the CP TD1 equipment trolley without a solid CPC 100. connection to earth with at least 6 mm². Use a ground point as close as possible to the operator.
Preface - 1CPC 100 V 3.0
Designated Use Safety Instructions for the CPC 100 and its AccessoriesThe CPC 100, in conjunction with its accessories or as a stand-alone unit, is a multi- Warning: Do not enter the high-voltage area if the red warning light of the Generalpurpose primary test set for commissioning and maintaining substation equipment. It • Before connecting or disconnecting test objects and/or cables, turn off the CPC 100 by CPC 100 is on since all outputs carry dangerous voltage or current! Always obeyperforms current transformer (CT), voltage transformer (VT) and power transformer (TR) either the POWER ON/OFF switch or the Emergency Stop button. Never connect or the five safety rules and follow the detailed safety instructions in the respectivetests. Furthermore, it is used for contact and winding resistance testing, polarity checks as user manuals. disconnect a test object while the outputs are active.well as primary and secondary protection relay testing. Note: Even if you switched off the CPC 100, wait until the red I/O warning light is fullyThe various, partly automated tests are defined and parameterized via the front panel control of extinguished. As long as this warning light is lit, there is still voltage and/or current potentiala built-in embedded PC. on one or more of the outputs.The functionality scope of the CPC 100 is described in detail in the chapter "Designated Use" of • Make sure that a test object’s terminals that are to be connected to the CPC 100 do not carrythe CPC 100 Reference Manual available in PDF format on the CPC 100 Toolset CD-ROM or any voltage potential. During a test, the only power source for a test object may be thethe CPC 100 Start Page. CPC 100.Note: Any other use of the CPC 100 but the one mentioned above is considered improper use, • At their output sockets and especially in the cables connected to them, in operation the high-and will not only invalidate all customer warranty claims but also exempt the manufacturer from current outputs 400A DC and 800A AC generate a significant amount of heat (approx.its liability to recourse. 300W/m at 800A). To prevent burns, use gloves when touching the cables while in operation or a short while after. • Do not insert objects (e.g., screwdrivers, etc.) into any input/output socket. FOR YOUR OWN SAFETY • Never use the test cards Quick and Resistance to measure the resistance of windings with a high inductance because turning off the DC source results in life-threatening voltage Always follow the 5 safety rules: levels. For this kind of measurement only use either the special winding resistance test card 1. Insulate RWinding or the test card TRTapCheck! 2. Secure to prevent reconnecting Warning: When measuring the ratio of voltage and power transformers make 3. Check isolation sure that the test voltage is connected to the corresponding high-voltage winding, 4. Earth and short-circuit and the voltage of the low-voltage winding is the one that is measured. Acciden- tally mixing up the windings can generate life-threatening voltages within the 5. Cover or shield neighboring live parts transformer.
Safe area High-voltage area Warning: Make sure that when testing a current transformer by feeding a test current into its primary winding, all secondary windings are shorted. On open sec- Example for the separation of safe and high-voltage area using different OMICRON electronics ondary windings, life-threatening voltages can be induced! GmbH devices
Preface - 2
PrefaceCPC 100 V 3.0
– For the high-voltage and current output connectors on the left-hand side of the test set Warning: However, in case of an internal insulation fault these outputs may carry Power Supply (2kV AC, 400A DC and 800A AC, Ext. Booster), only use the specially manufactured up to 300 V. Consider these outputs life-hazardous! • Supply the CPC 100 only from a power outlet that has protective earth (PE). cables supplied by OMICRON electronics (refer to the chapter "Accessories" of the • An error message (313) appears if either the PE connection is defective or the power supply CPC 100 Reference Manual available in PDF format on the CPC 100 Toolset CD-ROM has no galvanic connection to ground. In this case, make sure that the PE connection is – Always lock connectors properly. or the CPC 100 Start Page). intact. If the PE connection is intact and the error message still appears, select the "Disable The counterpart of the high-current sockets are locking connectors. ground check" check box at the Device Setup tab in the Options view. – One end of the high-voltage cable has a coaxial safety plug that is certified for a voltage level of 2kV AC. The other end is equipped with a safety banana plug that is insulated To lock these connectors safely, insert them carefully until you feel a • Ground the isolating transformer outputs or generators used to supply the CPC 100 on the with a shrink tube. "click" position. Now they are locked. Confirm this by trying to pull them N (neutral) output or select the "Disable ground check" check box as described above. out. This should not be possible now. • Instead of supplying the CPC 100 from phase - neutral (L1-N, A-N), it may also be supplied Warning: When the CPC 100 is switched on, consider this part of the cable a from phase - phase (e.g., L1-L2; A-B). However, the voltage must not exceed 240V AC. To remove the locking connectors, unlock them by pushing them in hazard of electric shock! • Fuse-protect the power supply (16A slow-acting fuse). completely first, and then pull them out. • Do not use an extension cable on a cable reel to prevent an overheating of the cord; run out – If you do not use the high-current outputs 400A DC or 800A AC, or the high-voltage the extension cord. output 2kV AC, disconnect any cable that may be plugged in to these sockets. Note: The 400A DC or 800A AC outputs are not switched off by internal relays. Therefore, if a test mode is selected that does not use either one of these two outputs, they still generate current.
Preface - 3CPC 100 V 3.0
If a test object with a big inductance was connected to the CPC 100, short-out the test object additionally before disconnecting it from the CPC 100.
Warning: Use separate clamps for current and voltage connections on both sides of the test object to avoid hazards in case one clamp falls off during the test.
Preface - 4
PrefaceIntroductionCPC 100 V 3.0
Fuse 6.3A T (slow-acting wire fuse 5x20 mm) Test Procedure Overview: Provides an for 3A AC, 6A AC, 130V AC and 6A DC enhanced overview of all test cards of the currently active test procedure. Defines the test procedure AC OUTPUT default. 6A, 3A or 130V output File Operations: Lets you save, load, delete, copy Fuse 3.15A (slow-acting wire fuse 5x20 mm) and rename test procedures. for 3A AC and 130V AC Options: To specify general parameters. V1 AC input V2 AC input 300V AC input 3V AC input Context-dependent menu keys Directly invoke specific commands associated with the DC OUTPUT currently selected control of the test card and view. 6A DC output (fuse-protected with a 6A fuse)
Introduction - 1CPC 100 V 3.0
Introduction - 2
IntroductionCPC 100 V 3.0
CPC 100 Block Diagram Principles of Test Cards and Test Procedures The Components of a Test Card Ext. Test Cards Focus on the data entry field for AC current. Booster The CPC 100 software comprises a number of test cards. A test card carries out one specific The term "focus" designates the currently selected (active) part of the test card. The selected Rectifier & power test, e.g., measuring a CT excitation curve, or testing the ratio of a voltage transformer. component is highlighted or inverted. factor +
corrector 500V A test card holds a number of user-definable test settings and - after the test was run - test The actual function of the context-dependent R 500V results. menu keys depends on the selected view, test 2kV mode, test card and selected test card 500V 500V 4A Switched 1kV 2A R Test Procedure component (i.e., the focus). 2kV 1A mode 500V amplifier A test procedure contains multiple test cards. Temperature and power consumptionL Mains The composition of such a test procedure and the settings of all single test cards can be freely monitoring.100-240V Filter defined by the user. Within a test procedure, each test card and its associated test is executed If an output is activated, both the CPC 100’s 50/60Hz I individually in a user-defined order. power consumption and the current emitted at 65V 130V / 6A ACN 2kV U O the high-current outputs is monitored and, 65VPE I 800A u Report together with the temperature, displayed by this 3.15A t temperature gauge. ADC U p For archiving or reporting purposes, or later processing, a test procedure with all of its test cards, Status of test assessment. The test 6.3A The temperature gauge’s bar therewith U 65V / 6A DC u specific settings and - after the test was run - test results and assessments can be saved. It is assessment is a manual procedure carried represents an indicator for the remaining time DSP I t then considered a report. out by the user. After the test, set the focus (Digital the CPC 100 can output power. Signal U s Such a report can later be opened any time in the CPC 100’s File Operations menu. on the assessment symbol. Use the context- Processor) dependent menu key OK or Failed to assess plenty of spare 6V / 800A AC Note: For detailed information about test cards, test procedures and templates, refer to section the test. no more spare "How to Use The CPC 100 Software" of chapter "Introduction" in the CPC 100 Reference For a few seconds, the status line also Manual available in PDF format on the CPC 100 Toolsets or CPC 100 Start Page. + displays general operation information, e.g. Ethernet 5V / 400A DC "Emergency key pressed". -to 300V AC RS 232 Built-inext. ePC 3V AC I Pressing the Settings menu key opens the Settings page (see page Quick-1) allowing you toPC optional 10A AC/DC n 10V DC set the test cards individually. As a rule, do not set the test cards on the Settings page but set BIN IN p. all test cards of a test procedure using the Device Setup tab in the Options view (see page optional analog or digital interfaces Introduction-5). (plug-in boards)
Introduction - 3CPC 100 V 3.0
The Menus
The Test Procedure Overview lists all test cards of the currently active test procedure in a list Opens the submenu Edit (refer to “Submenu Edit” on page 5)box showing the card’s name, its creation date and time, whether test results are available andthe test card’s assessment status. The CPC 100 file system differentiates two file types: Saves the currently open test, i.e., the test card(s) previously opened in the Test Card View (refer to Note below). With Save As Default, Test Procedure Overview provides a function to save the current test procedure as the test procedure default, i.e., that default the CPC 100 name.xml A test procedure with all of its test cards and specific settings. An .xml file Opens the String Editor. You can save the currently open test under a new name software will start with in future. may also contain test results and assessments that were stored together of your choice (15 characters max.). with the settings as report in the CPC 100 file system for archivingNote: For detailed information refer to section "Test Procedure Overview" of chapter purposes. Use the handwheel or the Up / Down keys to select a test, and press Open to open"Introduction" in the CPC 100 Reference Manual available in PDF format on the it. Changes to Test Card View. name.xmt Test procedure template, i.e., a user-defined template containing one orCPC 100 Toolsets or CPC 100 Start Page. more test cards with all of their specific test settings but without test Closes the current test card(s), changes to Test Card View and opens the test results. procedure default.
Introduction - 4
IntroductionCPC 100 V 3.0
Move to the destination folder of your choice. Press Paste to insert the contents of Select the check box if theOpens the String Editor. You can create a new folder with any name of your choice. the CPC 100 clipboard to this folder. PE connection is intact and an error message (313)Appends the contents of a test file (.xml) or template (.xmt) of your choice to the Press Paste As Templ. to make the contents of the CPC 100 clipboard a test appears.currently open test. procedure template. Operating the CPC 100 (for future use) with the check boxDeletes the currently selected test or folder from the CPC 100’s disk space. selected can cause injury or possibly death of the Closes the Edit submenu and returns to the main File Operations menu. operating staff! Resets all user-specific settings made in theOpens the String Editor that enables you to rename the current test to any new CPC 100 software to factory-defined defaultsname of your choice. Set the default frequency. including: This value will be used for all • the test card defaults(for future use) Note: If a folder is cut or copied to the Clipboard, the selection is recursive, i.e., all of its test cards. • the test procedure default subfolders will also be put to the Clipboard. Cutting or copying a test or folder, and trying to paste it in the same location, opens the String • all settings made at the Device Setup tabCloses the submenu and returns to the main File Operations menu. Auto save automatically (Sets external booster to CB2, sets CT and Editor. saves the current test Since a test or folder cannot exist twice under the same name at the same location, determine VT to "OFF" and sets the default frequency settings in fixed intervals to 50 Hz.) a new name for it using the String Editor. specified to a file named • the String Editor’s template strings lastmeas.xml. If selected, the CPC 100 cools down faster. Thus, the duty cycle can be increased.
Introduction - 5CPC 100 V 3.0
Introduction - 6
IntroductionCPC 100 V 3.0
The command Restore Defaults at the Options tab Device Setup resets all user-specific settings made in the CPC 100 software to factory-defined defaults. This includes the test card defaults and the test procedure default.
Introduction - 7CPC 100 V 3.0
Introduction - 8
IntroductionQuickCPC 100 V 3.0
Quick - 1CPC 100 V 3.0
Note that some of the trigger events offered in the Trigger on: combo box depend on the measured quantity settings below (trigger on measurement). Trigger on "Overload": the occurrence or the clearing of an output overload condition (clearing is delayed by 100 ms to debounce).
Quick - 2
QuickCurrent TransformerCPC 100 V 3.0
CTRatio (and Burden) CTRatio (with Burden) - The Option Measure BurdenUse the CTRatio test card to measure a current transformer’s ratio and burden with injection on Select the check box Measure Burden to measure the burden in VA.the CT’s primary side with up to 800 A from AC OUTPUT. Nominal primary current Note: This option is only useful as long as the injected current I test is about of the magnitude Primary Output range Select to stop test automatically when measurement is of the nominal current I prim. injection current done. Nominal secondary current Use current clamp Actual current rather than IAC injected into CT’s input primary side
Burden CT
Burden CT Phase angle ϕ relative to Iprim
Current Transformer - 1CPC 100 V 3.0
Burden Output frequency CT
Actual injectionBurden in VA: I sec nom × current(V sec act × I sec nom/I sec act) measured via Select to enter input I AC secondary voltage cos ϕ: cosine of angle between I sec and V sec instead of measuring it
Secondary voltage at theNote: For the meaning of the other test card components, refer to page Current Transformer-1. burden, measured at Cosinus of phase input V1 AC, angle ϕ and phase angle Burden in VA: I sec nom × ϕ relative to Isec (V sec act × I sec nom/I sec act)
Current Transformer - 2
Current TransformerCPC 100 V 3.0
Burden CT Actual voltage graph does not work in Auto If fnom < 60 Hz -> ftest = fnom + 10 Hz. Actual current mode. The voltage will then be calculated back to fnom (V = Vmeas * fnom/ftest). With fnom < 60 Hz, the maximum test voltage is reduced up to 20% and with fnom ≥ 60 Hz, the maximum test voltage is increased up to 16%. The exciting current will not be corrected as the influence is very small.
Current Transformer - 3CPC 100 V 3.0
Burden CT Burden CT Transformer’s winding Total elapsed time resistance
Enable/disable temperature T meas: Actual ambient temperature compensation Off before disconnecting for the result T ref: Temperature for which the result is calculated device under test R ref: Calculated resistance. In Centigrade: Rref = (V DC / I DC) x (235 °C + T ref) / (235 °C + T meas) In Fahrenheit: Rref = (VDC / IDC) x (391°F + Tref F) / (391°F + Tmeas F) Note: Formula according to IEC 60076-1
Off before disconnecting device under test Note: If n/a appears in the V DC or R meas box, the V DC input is overloaded.
Current Transformer - 4
Current TransformerCPC 100 V 3.0
+ If the capacity of the CPOL’s battery gets low, the LEDs start flashing. As long asTerminates test when testing the LEDs are flashing, the CPOL’s battery provides sufficient power to continuetime has elapsed flashing working. However, the battery should be changed as soon as possible.
Warning: If you detect a wrong polarity in the current path, turn off the CPC 100Actual test voltage first, and only then disconnect the terminals.Actual test current green LED red LED Never operate the CPOL with an open battery compartment. A life-hazardous The CPC 100 injects a voltage level may occur in the battery compartment if the CPOL’s probe touches a special polarity check signal test point with high-voltage potential!Highest measured current polarity polarity checker checker CPOL CPOLTime span Vtest is applied tothe output
During the test, the test voltage increases in a ramp characteristic from 0 V to V test. V test isthen applied to the output for the specified time span. The measurements are continuouslytaken. Afterwards, V test decreases in a ramp characteristic.
Current Transformer - 5CPC 100 V 3.0
2. define a pulse duty cycle for the output signal: The preferred method for CT ratio measurement is current injection using the CTRatio test card. However, on some GIS CTs or bushing CTs on power transformers where the primary current path is not accessible, the method described in this section is the only solution.T on: time span the signal is applied to the output To measure the CT ratio using the CTRatioV test card, connect the 2kV AC output to the CT’sT off: time span the signal output is paused secondary winding and the V2 AC input to the main conductors, e.g. on a power transformer to the transformer’s bushings of different phases.
Burden CT Warning: Feeding test voltage to a tap of a multi-ratio CT can cause life-threaten- A T on / T off ratio of 2.000 s / 9.000 s means the signal is applied for 2 ing voltages on other taps with higher ratios. seconds, then paused for 9 seconds. After that the cycle repeats.
Select outputrange
Amplitude
Enter resultsmanually
Current Transformer - 6
Ratio errorPolarity: Ratio Iprim. / Isec.: Isec act x (Iprim nom/Iprim act) OK = phase I sec - phase I prim = - 45 ° < 0 ° < + 45 ° and deviation in % ((Kn x Isec - Iprim)/Iprim) x 100% NOTOK = all other cases
Note: If the transformer’s knee point voltage is approximated or exceeded, due to thetransformer’s saturation the measurement results are not correct anymore. If the knee point isextensively exceeded, the transformer can even be damaged.Therefore, the knee point voltage should be known or measured beforehand.
Current Transformer - 7CPC 100 V 3.0
*) Note that the current I sec does not really exist in the system. It is a calculated current only.
Current Transformer - 8
Current TransformerCPC 100 V 3.0
SV-RatioThe SV-Ratio test card is mainly used to check the ratio between the output current or voltage Block diagram of a typical measurement setup: Output rangeand the input current or voltage of the selected merging unit channel according to the IEC 61850 Output frequencystandard. In addition, the SV-Ratio card is also used to determine the polarity of the signal, Primary side Ethernet IRT**whereas the CPC 100 serves as the signal source. The merging units generate the input output CPC 100 input Switch Primary injection Nominal primary current or voltagevoltages or currents. (SV-Ratio) current or Refresh streamThe CPC 100 test system performs closed-loop testing whereby a test signal is injected on the voltageprimary side of the current/voltage sensors. The Merging Unit (MU) converts the sensor output digital Stream information analoginto an SV stream which is published to the substation network. The CPC 100 then reads thedata back from the network in order to perform a variety of different tests. Channel Optical fiber selectionThe CPC 100 transforms the sampled points to the spectral function of the signal. This Fourier-transformed sampled values signal is filtered with a special Hann window to only retrieve the Refer to the"signal" at the selected frequency. This allows frequency-selective measurements to be selectedperformed on SV streams and thereby the noise is suppressed. Selected stream Range (I or V) digitalThe SV-Ratio test card can be accessed from CT, VT or Others.The following tests can be performed: Deviation of Select to choose actual ratio from• Ratio and polarity Substation channels nominal ratio in CT/VT MU*• Automatic MU detection Network automatically %• Frequency-selective current/voltage measurement• Noise level measurement• Magnitude response of the signal processing chain (15 to 400 Hz) * If the MU has an Ethernet output, no IRT switch is required. ** IRT Switch: Industrial Real-Time Switch Channel name; Identifies quality of Output Primary Calculated refers to the the connection current values ratio value selected Range or from Note: The SV-Ratio test card can be used both for current transformers and voltage (either I or V) voltage measured Polarity status transformers alike. Therefore, the description refers to currents and voltages. values
Current Transformer - 9CPC 100 V 3.0
Current Transformer - 10
Current TransformerVoltage TransformerCPC 100 V 3.0
VTRatio VTBurdenUse the VTRatio test card to measure a voltage transformer’s ratio with injection on the VT’s Use the VTBurden test card to measure a voltage transformer’s secondary burden with voltageprimary side with up to 2 kV from AC OUTPUT. Warning: For VT ratio measurement, the CPC 100 output has to be connected to injection on the VT’s secondary side with up to 130 V from AC OUTPUT. the primary side of the VT. Connecting the CPC 100 output to the secondary side To do so, open the circuit as shown in the figure below, and inject the AC voltage from the of the VT will cause hazardous voltages on the primary side. CPC 100’s 130V AC output into the burden. Input I AC measures the current that flows into the Correction factor Nominal primary voltage burden, and input V1 AC the voltage at the burden. A for V prim 1/√3 and 1/3: Correction factors for V sec a A Nominal secondary voltage a Primary injection voltage Select to stop test
Burden VT
Burden automatically VT when Output measurement is frequency done n N Measured n primary voltage N Select to enter secondary voltage Secondary instead of voltage measuring it measured at V1 AC, and its phase angle Ratio and deviation in % relative to the measured Vprim Polarity: OK = phase I sec - phase I prim = - 45 ° < 0 ° < + 45 ° NOTOK = all other cases
Voltage Transformer - 1CPC 100 V 3.0
Burden VT
*) Due to cross-talk between the measuring inputs V1 AC and V2 AC, we suggest not to connecta current clamp to the input V2 AC. Therefore, use a current clamp with current output.
Voltage Transformer - 2
Voltage TransformerCPC 100 V 3.0
VTElectronicsUse the VTElectronics test card to test the ratio of non-conventional electronic voltagetransformers with a very low-level secondary voltage. Correction factor Nominal primary voltage for V prim Electronic protection relay 1/√3 and 1/3: Correction factors for V sec Electronic voltage transformer with low-voltage input Primary injection Nominal secondary voltage voltage Select to stop test automatically Output when frequency measurement is Shielded cable with done twisted wires
Measured primary voltage Select to enter secondary voltage instead of Secondary measuring it voltage measured at V1 AC, and its phase angle relative to the Ratio and deviation in % measured Vprim Polarity: OK = phase I sec - phase I prim = - 45 ° < 0 ° < + 45 ° NOTOK = all other cases
Voltage Transformer - 3CPC 100 V 3.0
Voltage Transformer - 4
Voltage TransformerTransformerCPC 100 V 3.0
Transformer tap identifier and tap V sec Actual voltage measured at V1 AC number for the ° Phase angle of the primary current relative to Vprim measurements nominal. in the respective line of the table :1 Calculated ratio value from the measured values Vprim / Vsec % Deviation of the actual ratio from the nominal ratio
Transformer - 1CPC 100 V 3.0
Transformer - 2
TransformerCPC 100 V 3.0
B V-W / H2-H3 w-v / X3-X2 B V-(U+W) / H2- v-u / X2-X1 Warning: Connect the CP SA1 discharge box to the CPC 100's V DC input sock- (H1+H3) ets to protect yourself and the CPC 100 from high-voltage hazards. C W-U / H3-H1 u-w / X1-X3 C W-(U+V) / H3- w-v / X3-X2Dz6 V/H2 w/X3 u/X1 A U-V / H1-H2 v-u / X2-X1 1 (H1+H2)
Transformer - 3CPC 100 V 3.0
Transformer - 4
TransformerCPC 100 V 3.0
When testing a tap changer, we recommend: After pressing the Auto Keep Result menu key, the CPC 100 waits until stable• To inject the same current value for each phase. results with a deviation less than the defined tolerance (in %) within the defined• To perform tests of each phase, start with the lowest tap through to the highest and continue settling time ( Δ t) are achieved. After then, a new result line is added and the next backwards down to the lowest tap again. Taps may show quite different results depending measurement starts. on the direction of the tap movement and defects can behave differently. An interruption caused by a defective tap changer results in comparatively high measured values for ripple Note: If the CPC 100 is in Auto Keep Result status, the user can end the process by either and slope. pressing Keep Result or by changing to the Tolerance setting and changing the value. The soft key Set Current Deviation resumes the value of the current deviation in the Tolerance field.Example: Results of a tap changer and winding resistance testFor the tap changer test, the last two columns of the table are relevant. Performing a Tap Changer Test
High ripple because inductance is 1. Press the I/O (test start/stop) push-button to start the test. charged 2. Press Keep Result to save the resistance value of this tap or press Auto Keep Result. In this case, the CPC 100 waits until stable results within the set Tolerance and Δ t are achieved. After then, a new result line is added showing Values okay because always in the the number of the next measured tap. same range 3. Move to the next position on the tap changer. 4. Repeat steps 2 and 3 for all taps you want to measure. 5. Press the I/O (test start/stop) push-button to stop the test and wait until the transformer windings are discharged.Tap defective: significantly higher values for ripple and slope. Compared to the properlyfunctioning tap change of line 5, for the defective tap in line 7 the ripple is about 30 times andthe slope about 15 times higher. Warning: Before disconnecting the transformer under test, ground all transformer connections.
Transformer - 5CPC 100 V 3.0
DemagnetizationUse the Demag test card to demagnetize the transformer core. Magnetized transformers may Check box for Vector group ofeasily saturate and draw an excessive inrush current upon energization. Since the forces on the single-phase the transformerwindings due to high inrush current may cause damage or even breakdown, it is desirable to transformersavoid them. Warning: The trans- former can carry life- Measured current The CPC 100 Demag test card requires a Test current CP SB1 transformer switch box. The wiring threatening currents. is the same as for a standard resistance Never touch the cables test plus a connection of the V1 input to the before the automatic Demag status Saturation Demag cycle is com- message threshold switch box. Via the switch box, the CPC 100 injects a constant current from the pleted. If in doubt, use a 6A DC output into the power transformer. grounding or discharging Current Set present The current is led through the I AC / DC rod. saturation level or saturation as the input for measurement. remaining new saturation remanenceIn the Demag test card you need to: threshold level CP SB1• enter the vector group of the transformer, Test card during Demag process transformer• specify whether the test object is a single-phase transformer, and switch box• enter the test current. Demag status messages:In the first step during the demagnetization process, the transformer core is saturated. This Wiring check... Checking for correct wiringprocess stops at predefined thresholds. If a threshold is not reached over a long period of time, Idle. Displayed before the process is startedthe saturation level can be adapted manually. By pressing the Set current saturat. soft key, Test was canceled. Displayed after pushing the Emergency Stop button orthe present saturation level can be set as the new threshold. During the Demag cycle, the confirming an error messageinitial remanence is measured and the currently remaining remanence is constantly displayed.After the test, the core is demagnetized. Saturating core... Core is being saturated Discharging... Core is being discharged Demagnetizing... Actual demagnetization cycle in progress Core is demagnetized. Demag cycle has been successful
Transformer - 6
TransformerResistanceCPC 100 V 3.0
µΩ MeasurementThe Resistance test card provides a total of three output ranges. The test setup depends on 10 mΩ to 10 Ω 10 Ω to 20 kΩthe selected range. Setup for a mΩ measurement in the 6A DC range: Setup for an Ω to kΩ measurement in the V DC (2 wire) range:1 µΩ to 10 mΩSetup for a µΩ measurement in the 400A DC range:
Inject current from the 6A DC output to both sides of the test object. To measure this current, route it via the I AC/DC input as shown in the figure above. Input V DC measures the voltage drop, the software calculates the test object’s resistance. At this range, the DC input V DC outputs the current needed to measure the resistance.
Resistance - 1CPC 100 V 3.0
Burden Highest possible CTSmallest resistancepossibleresistance Select to enter VDC manually instead of measuring it
Actual testcurrent that isinjected into thetest object Measured voltage Calculated resistance of test object, drop at the test object R = V DC / I DC
Resistance - 2
ResistanceCPC 100 V 3.0
RGroundUse the RGround test card to determine earth resistance between a substation’s ground Measuring the Ground Resistance of Small Ground Systems Measuring the Ground Resistance of Large Ground Systemssystem and a remote auxiliary electrode. To measure the earth resistance, the CPC 100 injectsAC current between the substation’s ground system and a temporary remote auxiliary electrode.A second auxiliary electrode is used to measure the voltage potential across the substation’searth resistance.Note: Make sure not to position the auxiliary electrode U too close to the substation’s groundsystem. If you do so, you measure in a range where the earth resistance may not be linear (seefigure below).We suggest to test several points using a longer distance to the substation ground. That wayyou get a better understanding of where the linear range of the earth resistance lies, and wherethe measurements are reliable.Theoretical resistance characteristic of an earth electrode: Auxiliary electrode U 90ºEarth resistance in mΩ (Bird’s-eye view) > 1km 600 ΔU 500 3...5 x a Auxiliary 400 linear range of earth resistance electrode I
300 ≈ 10xa Auxiliary 200 electrode U
Resistance - 3CPC 100 V 3.0
RGroundMeasuring the Soil Resistivity Calculating the soil resistivity: ρ=2πdR Nominal test current
Legend: ρ = soil resistivity Frequency of test current. Select a d = distance between auxiliary electrodes (identical between all electrodes) frequency other than the 50 or R = calculated resistance as indicated at the RGround test card (R(f)) 60Hz mains frequency to prevent With the spacing of "d", the test measures the average soil resistivity between the U auxiliary interferences by stray earth electrodes down to a depth of "d". Therefore, varying "d" also varies the depth of the volume for currents. which the soil resistivity is to be measured.
Caution: The 6A AC output can carry a life-threatening voltage level at high loop impedances or open measuring circuits.
Note: To learn how to measure the resistance of a single ground rod in an earthing system, refer Actual test current (rms value) to the CPC 100 Reference Manual, section "RGround" of chapter "Resistance". The CPC 100 Reference Manual is available in PDF format on the CPC 100 Toolsets or the CPC 100 Start Page. Auxiliary Auxiliary Auxiliary Auxiliary Measured voltage between electrode I electrode U electrode U electrode I substation ground and the ΔU auxiliary electrode U (rms value, Calculated ohmic part Calculated inductive non-selective frequency) and of earth impedance part of earth d d d phase shift between VRMS and (frequency-selective impedance IRMS. measurement) (frequency-selective d= distance measurement)
Resistance - 4
ResistanceOthers: SequencerCPC 100 V 3.0
Others: Sequencer - 1CPC 100 V 3.0
100ms 100ms State 4: "wait for the CB to close" Long dead time. Set to output 50A*) until the "Overload" triggerState 2: "wait for the CB to close" t condition that started state 4 clears.Short dead time. Set to output 50A until the "Overload" trigger The measurement table shows for state 4 that the long dead time + *) *) State 1 State 2 State 3 State 4condition that started state 2 clears. the CB closing time lasted 3.191 s. This time also includes the short long dead timeThe measurement table shows for state 2 that the short dead time + additional time to compensate for the debounce (see note). dead timethe CB closing time lasted 477 ms. This time also includes the The actual value for CB close equals 3.191 s - 100 ms = 3.091 s.additional time to compensate for the debounce (see note). *)Current values < 50A do not initiate an "Overload" when the currentThe actual value for CB close equals 477 ms - 100 ms = 377 ms. circuit opens. For this reason, a nominal current value of 50A was chosen *) State 2 and 4 incl. the additional 100 ms the CPC 100 adds to compensate for the debounce here, even though the CB is open. (see note above).Note that the r.m.s. measurement of IOut reacts slow and thereforethe measurement table does not show the full current.
Others: Sequencer - 2
Others: SequencerOthers: RampingCPC 100 V 3.0
GeneralUse the Ramping test card to define a series of ramps to be applied to a connected test object. The feature Manual Trigger provides a possibility to manually initiate a triggerA series of up to 5 ramps can be defined. The ramps within that series execute sequentially, and signal (i.e., a premature termination) of the current ramp at any time. This manual Irun from a start to an end value within a set period of time. Ramp 2 trigger has the same function as an automatic trigger signal. 200AIt is possible to specify a trigger signal that prematurely terminates either Press the Add Ramp button to define additional ramps. Note that the maximum• the entire series of ramps possible number of ramps is 5.• or the actual ramp only, and then continues with the next one (if any).
Ra 1 mp Example of a series of ramps
mp Start value of ramp
Ra
3Switch off on trigger, i.e., when a trigger condition becomes true Ramp 1 Ramp 2Output range selection & actual Ramp 3output value
Ramped quantity & fixed quantity The three ramps defined in the ramps table shown above result in an output signal like this: 1A t 0s 5s 15s 20sRamps table (ramp-specificsettings):• output quantity settings• ramp duration if no trigger Ramp 1 Ramp 2 Ramp 3 occurs • from 1 A • from 200 A • from 200 A• trigger specification (set at "Start val:") (end value of ramp 1) (end value of ramp 2) • to end value 200 A • to end value 200 A • to end value 0 A (set in line 1 column "A") (set in line 2 column "A") (set in line 3 column "A") • in 5 s • for 10 seconds • in 5 seconds (set in line 1 column "s") (set in line 2 column "s") (set in line 3 column "s")
Others: Ramping - 3CPC 100 V 3.0
Others: Ramping - 4
Others: RampingOthers: AmplifierCPC 100 V 3.0
GeneralUse the Amplifier test card to set the CPC 100 to an "amplifier-like" mode. In this mode, an Starting a high-current outputinput signal fed into a synchronization input drives the high-current output’s magnitude, Display of the measured high-currentfrequency and phase angle. output signal Caution: Depending on the measured input signal, setting the amplification factor Set rangeSelect between I AC, V1 AC and V2 AC as synchronization inputs. can result in unintentionally high currents. If the magnitude of the input signal is unknown or uncertain, it is strongly recommended to set the amplification factor toTo prevent saturation, the output signal follows sudden magnitude changes at the "0" before starting the test.synchronization input slowly. This smoothening effect delays the follow-up of the output currentup to 250 ms. Measured phase Set phase angleBoth the "amplification" factor and the phase angle between input and output are set by the user angle between between input • Set an amplification factor of "0".in the Amplifier test card. input and output and output signal • Press I/O (test start / stop) to start the measurement. signalNote: Changes in frequency and phase angle may result in unwanted effects. Both frequencyand phase must be held stable. Now the display field shows the measured input value.Note: The input frequency is limited to a range of 48 ... 62 Hz. Select • With the measured input value in mind, enter the amplification factor now. synchronization Value measured • Acknowledge this entry by pressing the handwheel or the Enter key to start the at output. synchronization input (refer to ”Starting a high-current output” in the Measured input frequency (48 ... 62 Hz) next column). Set the amplification factor to determine the ratio between input and output signal.
Note: The synchronization input is not automatically range-switching, it is fixed to its maximum value.
Others: Amplifier - 5CPC 100 V 3.0
Test setCMC 256-3
GPS synchronization unit CMGPS
CT 1 CT 2 CT 3 Protection relay
Others: Amplifier - 6
Others: AmplifierOthers: CommentCPC 100 V 3.0
Others: Comment - 7Others: HV Resonance Test SystemCPC 100 V 3.0
State definition Define/set Short-circuit automatic test cycle impedanceSet of the power Total time of test cyclefrequency VT atvalue 100 Hz
EstimatedControlled input power VTchannel ratio with losses
Others: HV ResonanceCommon FunctionsCPC 100 V 3.0
Test AssessmentThe test assessment is a manual procedure carried out by the user. The String Editor is used to name or rename test cards, tests and templates as well as to fill out Important special charactersThe example below shows an assessment made at a VTRatio test card. However, the the Comment card.assessment procedure is carried out in the same fashion on all test cards. Any time such an operation becomes necessary, the String Editor starts automatically. carriage return (line feed)
Change case tab (special function in Form Editor mode; refer to page Others-7).
Available Move right • enter the new test or folder name by consecutively selecting the characters of your Assessment choice from the "on-screen keyboard" with the Up / Down keys or by navigating characters symbol to it with the handwheel Finish editing • acknowledge every selected character by pressing the handwheel or Enter Template Abort editing, phrases discard changes• After the test, set the focus on the assessment symbol by turning the handwheel.
Test not assessed. The number of available characters to choose from depends on the String Editor’s use. If, for example, a user-defined comment is to be entered in the Comment card, the number of available characters is bigger than if a test is to be renamed. This difference are special characters, such as !, ?, _, [ ], etc.• Use the context-dependent menu keys to assess the test.
Test OK
Test failed
Common Functions - 1CPC 100 V 3.0
Common Functions - 2
Common FunctionsCPC 100 Technical DataCPC 100 V 3.0
Generator / Output Section - Current Outputs Generator / Output Section - Voltage Outputs Internal Measurement of OutputsNote: For detailed information refer to the section “Technical Data” in the CPC 100 Reference Range Amplitude5 tmax Imax Powermax5 f Guaranteed accuracy Typical accuracy6Manual available in pdf format on the CPC 100 Toolsets or the CPC 100 Start Page. 0 … 2 kV 1 min 1.25 A 2500 VA 15 … 400 Hz Output Range Amplitude Phase Amplitude PhaseThe output is either voltage or current, and is automatically selected by the software or manually 2kV AC3by the user. Current and voltage outputs are overload and short-circuit proof and protected 0 … 2 kV >2h 0.5 A 1000 VA 15 … 400 Hz Reading Full Full Reading Full Fullagainst over-temperature. error scale scale error scale scale 1kV AC 3 0 … 1 kV 1 min 2.5 A 2500 VA 15 … 400 Hz error error error errorRange Amplitude tmax1 Vmax2 Powermax2 f 0 … 1 kV >2h 1.0 A 1000 VA 15 … 400 Hz 800A AC - 0.20% 0.20% 0.20° 0.10% 0.10% 0.10° 0 … 800 A 25 s 6.0 V 4800 VA 15 … 400 Hz 500V AC3 0 … 500 V 1 min 5.0 A 2500 VA 15 … 400 Hz 400A DC - 0.40% 0.10% - 0.20% 0.05% - 0 … 400 A 8 min 6.4 V 2560 VA 15 … 400 Hz 0 … 500 V >2h 2.0 A 1000 VA 15 … 400 Hz800A AC3 2000 V 0.10% 0.10% 0.20° 0.05% 0.05% 0.10° 0 … 200 A >2h 6.5 V 1300 VA 15 … 400 Hz 130V AC10 0 … 130 V >2h 3.0 A 390 VA 15 … 400 Hz 1000 V 0.10% 0.10% 0.30° 0.05% 0.05% 0.15°6A AC 10 0 …6A >2h 55 V 330 VA 15 … 400 Hz 2kV AC 500 V 0.10% 0.10% 0.40° 0.05% 0.05% 0.20° Output transient characteristics3A AC10 0 …3A >2h 110 V 330 VA 15 … 400 Hz 5A 0.40% 0.10% 0.20° 0.20% 0.05% 0.10° 0 … 400 A 2 min 6.5 V 2600 VA DC Changes from “off” or a low Changes from a high magnitude to 500 mA 0.10% 0.10% 0.20° 0.05% 0.05% 0.10° magnitude to a higher magnitude a lower magnitude or “off”400A DC 0 … 300 A 3 min 6.5 V 1950 VA DC AC current within one period 300 ms maximum; accordingly less Note: For the individual notes, see “Notes regarding Inputs and Outputs” below. 0 … 200 A >2h 6.5 V 1300 VA DC for smaller magnitudes6A DC4, 10 0 …6A >2h 60 V 360 VA DC AC voltage 1200 ms maximum; accordingly less 300 ms maximum; accordingly less 32000A AC with an optional current booster. For more details, refer to page CP CB2-1. for smaller magnitudes for smaller magnitudes
Measuring Inputs Output to Input Synchronization Notes Related to Inputs and Outputs All input/output values are guaranteed over one year within an ambient temperature of 23 °C ± Guaranteed accuracy Typical accuracy6 Test cards Quick, Test card Amplifier 5 ° (73 °F ± 10 °F), a warm-up time longer than 25 min and in a frequency range of 45 … 60 Hz Sequencer, Ramping Amplitude Phase Amplitude Phase or DC. Accuracy values indicate that the error is smaller than ± (value read x reading error + fullInput Imped. Range Frequency range 48 … 62 Hz scale of the range x full scale error). Reading Full Full Reading Full Full error scale scale error scale scale Synchronization inputs V1 AC V1 AC, V2 AC, I AC 1. With a mains voltage of 230 V using a 2 x 6 m high-current cable at an ambient temperature error error error error (automatic range switching) (fixed to maximum range) of 23 °C ± 5 ° (73 °F ± 10 °F) 10A AC 0.10% 0.10% 0.20° 0.05% 0.05% 0.10° 2. Signals below 50 Hz or above 60 Hz with reduced values possible. Input magnitude 10% of input range full scale 1A AC 0.10% 0.10% 0.30° 0.05% 0.05% 0.15° 3. Output can be synchronized with V1 AC in Quick, Sequencer, Ramping and Amplifier. Output magnitude 5% of output range full scaleIAC/DC4, 7 < 0.1 Ω 4. The input / output is protected with lightning arrestors between the connector and against 10A DC 0.05% 0.15% - 0.03% 0.08% - Settling time 100 ms after 5% of output 1000 ms after 5% of output protective earth. In case of energy above a few hundred Joule the lightning arrestors apply 1A DC 0.05% 0.15% - 0.03% 0.08% - magnitude is reached magnitude is reached a permanent short-circuit to the input / output. 300 V 0.10% 0.10% 0.20° 0.05% 0.05% 0.10° Signal changes All quantities must be ramped No changes of frequency and 5. Signals below 50 Hz or above 200 Hz with reduced values possible. within 20 signal periods phase. Magnitude changes 30 V 0.10% 0.10% 0.20° 0.05% 0.05% 0.10° 6. 98% of all units have an accuracy better than specified as typical. 500 kΩ without limitation. OutputV1 AC8 3V 0.20% 0.10% 0.20° 0.10% 0.05% 0.10° follows within 250 ms. 7. Input is galvanically separated from all other inputs 300 mV 0.30% 0.10% 0.20° 0.15% 0.05% 0.10° Phase tolerance 0.5° within the limits as specified above 8. V1 and V2 are galvanically coupled but separated from all other inputs. 3V 0.05% 0.15% 0.20° 0.03% 0.08% 0.10° 9. There are power restrictions for mains voltages below 190V AC.V2 AC 8, 11 10 MΩ 10. Fuse-protected 300 mV 0.15% 0.15% 0.20° 0.08% 0.08% 0.10° 11. When using the CTRogowski test card, the 3V V2 AC input uses an additional software 30 mV 0.20% 0.50% 0.30° 0.10% 0.25% 0.15° based integration method. In the range of 50 Hz < f < 60 Hz, this results in a phase shift of 10 V 0.05% 0.15% - 0.03% 0.08% - 90° as well as an additional phase error of +/- 0.1° and an additional amplitude error of +/- 1V 0.05% 0.15% - 0.03% 0.08% - 0.01%. For frequencies in the range of 15 Hz < f < 400 Hz, the phase error is not specified,V DC4, 7 and the amplitude error can be up to +/- 0.50% higher. 100 mV 0.10% 0.20% - 0.05% 0.10% - 10 mV 0.10% 0.30% - 0.05% 0.15% -
CPC 100 Technical DataCPC 100 V 3.0
GeneralWeight and DimensionsWeight 29 kg (64 lbs), robust case with coverDimensions W x H x D: 468 x 394 x 233 mm (18.4 x 15.5 x 9.2"), cover, without handles.
CPC 100 Technical DataCP TD1CP TD1
• Never remove any cables from the CP TD1 or the test object during a test.• Keep clear from zones in which high voltages may occur. Set up a barrier or establish similar adequate means.
CP TD1 - 1CP TD1
CP TD1 - 2
CP TD1CP TD1
CP TD1 Connected to a Power Transformer CP TD1 Connected to CP CAL1 Putting the CP TD1 into Operation CP TD1 CP TD1 CP CAL1 As the first step, before you set a CPC 100 / CP TD1 measurement setup into 12 kV 12 kV operation, link the CPC 100, CP TD1 and, if applicable, the equipment trolley with IN a min. 6 mm² grounding cable as displayed on page CP TD1-2. Never use the CPC 100 / CP TD1 measurement setup without a solid connection C1 to ground. IN A IN A C2 1. Switch off the CPC 100 at the main power switch. Ix IN B IN B Ix 2. With trolley: C3 Properly connect the CPC 100 and CP TD1 grounding terminals to the trolley’s ground bar. Connect the ground bar to earth. All cables minimum 6 mm². Measurement Measurement Without trolley: PE PE Properly connect the CPC 100 and CP TD1 grounding terminals to earth. Both cables minimum 6 mm². Booster Serial Booster Serial 3. Connect the CP TD1’s "BOOSTER IN" to the CPC 100’s "EXT. BOOSTER" with the OMICRON electronics supplied booster cable. Equipotential CPC 100 ground 4. Connect the CP TD1’s "SERIAL" to the CPC 100’s "SERIAL" with the OMICRON electronics CPC 100 supplied data cable. This cable also provides the power supply for the CP TD1. Power transformer 5. Pull out the measuring cables from the cable drum and connect the test object to the CP TD1’s measuring inputs IN A and IN B. 6. Pull out the high-voltage cables from the cable drum and connect the test object to the When using the CP CAL1 for calibration, we recommend to take C1 as reference and to select CP TD1’s high-voltage output. Equipotential ground the calibration frequency in a range between 50 … 200 Hz. 7. Switch on the CPC 100. 8. Selecting the TanDelta test card from any of the CPC 100’s CT, VT, Transformer or Others test card groups automatically turns on the CP TD1. If no CP TD1 is connected to the CPC 100, an error message appears. 9. Set up your measurement in the TanDelta test card (see page CP TD1-5). 10. Press the CPC 100’s I/O (test start / stop) push-button.
CP TD1 - 3CP TD1
Calibrate the CP TD1 Using a Reference Capacitor Option TH 3631 Application and Test TemplatesBy connecting a reference capacitor (e.g., optional device CP CAL1) with known values of Use the optional device TH 3631 to measure ambient temperature, the test object temperature For detailed information on the CP TD1 applications, refer to the CP TD1 Reference Manualcapacity Cref and dissipation factor DFref, in mode UST-A the values Cx and DFx can be and humidity. Once these values were measured, enter them into the respective entry fields of delivered with the CP TD1 or available in pdf format on the CPC 100 Start Page.measured and then compared to the known reference values. the TanDelta test card’s Settings page at "Compensations" (see page CP TD1-6).If you experience substantial deviations, re-calibrate the CP TD1: Test Templates• Cx = Cref / Cmeas and The test procedures for designated applications are controlled by templates available on the• DF / PF + = DFref - DFmeas CPC 100 Toolsets shipped with your CP TD1 or on the CPC 100 Start Page.as described on page CP TD1-6. Test templates are available for the following areas:A re-calibration of the CP TD1 is also shown in the test report (.xml file). • power transformers • instrument transformersNote: If you change the factory-made calibration, the responsibility for the accuracy of the • rotating machinesCP TD1 will be in your hands. • cables and transmission lines • grounding systemsCalibration tips: • others• For calibration, set the averaging factor to maximum and the filter bandwidth to ± 5 Hz (refer to page CP TD1-5).• To reset to the factory settings, select "DF/PF+" to 0.0 ppm and "Cx" to 1.000 (refer to page CP TD1-6).
CP TD1 - 4
CP TD1CP TD1
CP TD1 - 5CP TD1
TanDelta-PF Test Card - Main Page TanDelta-PF Test Card - Settings PageCompound measurement setting Pressing the Settings button on the TanDelta main page opens the Settings page allowing you Selecting “Compensations" converts the actually measured dissipation or power factor to to set additional measurement options. normalized values corresponding to an ambient temperature of 20 °C. In doing so, the valuesCp, DF (tanδ) = parallel capacitance & dissipation factorCp, PF (cosϕ) = parallel capacitance & power factor The CP TD1 leaves OMICRON entered at "Compensations" represent the existing ambient condition.Cp, Ptest = parallel capacitance & power At "Assessment Limits", set the tolerance of theCp, P@10kV = parallel capacitance & power; linearly interpolated to 10 kV test voltage electronics factory-calibrated. If a main page’s nominal values for the assessment. • Enter oil temperature, ambient temperature (at bushing) and relative humidity first.Qtest, Stest = reactive & apparent power component needs to be exchanged by a For the capacitance, the tolerance is entered in • Then place the cursor on "k".Z = impedance with phase angle spare part, the CP TD1 must be re- percent, for the dissipation factor it’s a multiplier.Cp, Rp = parallel capacitance & parallel resistance The medium the measurement takes place in, oil or air, determines the k-factor.Ls, Rs = serial inductance & serial resistance calibrated. To re-calibrate, set the focus onto the Note: Availability and naming of the entry fields • ANSI C57.12Cp, QF = parallel capacitance & quality factorLs, QF = series inductance & quality factor test card tab designation TanDelta and depend on the measuring mode, e.g., DF and PF The oil temperature is the determining medium for the k-factor. press Edit Calib to enable the entry are the same entry field. • Bushings fields: The air temperature at the respective bushing is the determining medium for the k-factor. Bushings provides three bushing types to select from: RBP (Resin Bonded Paper), RIP • Cx = correction factor for CmeasThe averaging factor determines the (Resin Impregnated Paper) and OIP (Oil Impregnated Paper). The k-factor changes (multiplier)number of measurements. A factor of 3 accordingly. • DF/PF + = corrective value addedmeans: the CP TD1 carries out 3 to dissipation or power factor (canmeasurements whose results are then be + or -)averaged. The higher the factor, the Note: You must enter your name and Select if you use an external CT.more accurate the measurement butthe longer the measuring time. press Update Calib to complete the re- The entered ratio is used to calculate calibration. the measured current accordingly. See also figure CP TD1 ⇔ CP CAL1 on Note: "Use ext. CT" can only be page CP TD1-3. selected if there are no measurementFilter bandwidth of measurement. If selected, the beeper sounds during results yet. the entire test. If cleared, the beeperNote: If the test frequency equals the default frequency (as set at Options | Device Setup), sounds at the beginning and the end ofthe filter bandwidth is always ± 5 Hz, regardless of the set value. This even applies if the option the test only."use default frequency of xx.xx Hz" is not specifically selected.± 5 Hz means that interferences at frequencies with an offset of ≥ ± 5 Hz from the measuring If selected, the CPC 100 checks whether the shield of the high-voltage cable is connected. Forfrequency will not affect the results. some large inductive loads, the CPC 100 can accidentally report shield check error even whenThe smaller the filter bandwidth, the longer the measuring time. the shield is connected. If this is the case, it makes sense to clear the check box. Returns to TanDelta’s main page
CP TD1 - 6
CP TD1CP TD1
CP TD1 - 7CP TD1
Conditions: Signals below 45 Hz with reduced values possible. Capacitive linear loads. Conditions: f0 = 15 … 400 Hz Range Resolution Typical accuracy Conditions
Terminal U/f THD I S tmax Filter bandwidth Meas. time Stop band specification (attenuation) error < 0.05% of reading Ix < 8 mA, f0 ± 5 Hz 2.2 s > 110 dB at fx = f0 ± (5 Hz or more) 1 pF … 3 μF 6 digits + 0.1 pF Vtest = 300 V … 10 kVHigh-voltage 10 … 12 kV AC < 2% 300 mA 3600 VA > 2 minoutput f0 ± 10 Hz 1.2 s > 110 dB at fx = f0 ± (10 Hz or more) error < 0.2% of reading Ix > 8 mA, 15 … 400 Hz 100 mA 1200 VA > 60 min Vtest = 300 V … 10 kV f0 ± 20 Hz 0.9 s > 110 dB at fx = f0 ± (20 Hz or more)Measurements Dissipation factor DF (tanδ) Test current (RMS, selective)Test frequencies Range Resolution Typical accuracy Conditions Terminal Range Resolution Typical accuracy ConditionsRange Resolution Typical accuracy 0 … 10% 5 digits error < 0.1% of reading f = 45 … 70 Hz, IN A or IN Ba 0 …5A AC 5 digits error < 0.3% of reading + Ix < 8 mA (capacitive) I < 8 mA,15 … 400 Hz 0.01 Hz error < 0.005% of reading + 0.005%a 100 nA Vtest = 300 V … 10 kV error < 0.5% of reading Ix > 8 mA 0 … 100 5 digits error < 0.5% of reading Vtest = 300 V … 10 kVTanDelta test card: Column “Hz” of the results table (0 … 10000%) + 0.02%Special displays in the frequency column “Hz” and their meanings: a ) IN A (red) or IN B (blue), depending on the mode.
*50 Hz (*60 Hz) Measurement mode suppressing the mains frequency Test voltage (RMS, selective) Power factor PF (cosϕ) interferences; doubles the measurement time. Range Resolution Typical accuracy Range Resolution Typical accuracy Conditions!30 Hz The selected test voltage is not available in Automatic measurement (applies to frequencies below 45 Hz only). 0 … 12000V AC 1V error < 0.3% of reading + 1 V 0 … 10% 5 digits error < 0.1% of reading f = 45 … 70 Hz, (capacitive) + 0.005%a I < 8 mA,?xx Hz Results with reduced accuracy, e.g., in case of a low testing Vtest = 300 V … 10 kV voltage, influences of partial discharge etc. 0 … 100% 5 digits error < 0.5% of reading Vtest = 300 V … 10 kV + 0.02%
a ) Reduced accuracy of DF and PF at mains frequency or its harmonics. Mains frequency suppression available by precisely selecting a mains frequency of *50 Hz or *60 Hz in the “Hz” column.
CP TD1 - 8
CP TD1CP TD1
1 H … 1000 kH 6 digits error < 0.3% of reading temperature Cables and equipment 16.6 kg Humidity range 5 … 95% relative humidity, no condensation accessories (36.6 lbs)
Quality factor QF Shock IEC68-2-27 (operating), 15 g/11 ms, half-sinusoid equipment and 26.6 kg 680 x 450 x 420 mm casea (58.7 lbs) (26.8 x 17.7 x 16.5”) Vibration IEC68-2-6 (operating), 10 … 150 Hz, acceleration 2 gRange Resolution Typical accuracy continuous (20 m/s²); 5 cycles per axis Equipment trolley equipment 14.5 kg0 … 1000 5 digits error < 0.5% of reading + 0.2% (32 lbs) EMC EN 50081-2, EN 55011, EN 61000-3-2, FCC Subpart B of Part> 1000 5 digits error < 5% of reading 15 Class A, EN 50082-2, IEC 61000-4-2/3/4/8, CE conform equipment & carton 18.9 kg 590 x 750 x 370 mm (89/336/EEC) (41.7 lbs) (23.2 x 29.2 x 14.6”) Safety EN 61010-1, EN 60950, IEC 61010-1, produced and tested in CP TD1, CPC 100, equipment 85 kg 750 x 1050 x 600 mm an EN ISO 9001 certified company. equipment & trolley (187.5 lbs) (29.5 x 41.3 x 23.6”) (without CP CAL1) equipment & packing 125 kg Prepared for IEEE 510, EN 50191, VDE 104 (275.8 lbs)
a ) Case = robust case, IP22
CP TD1 - 9CP TD1
CP TD1 - 10
CP TD1CP CU1CP CU1
CP CU1 - 1CP CU1
To CP CU1 To test object
Surge arrestor
Fuse 30 A Voltmeter
Ground connection Current range switch I OUT output
BOOSTER input CT 100 A : 2.5 A
CP CU1 - 2
CP CU1CP CU1
Measurement Setup Configuring the CPC 100 Connecting the CPC 100 and CP CU1 to Power Lines The CPC 100 must be configured for the CP CU1. To configure the CPC 100: Safety Instructions 1. Press the Options view selector button to open the Options window. Warning: A lightning discharge to the line under test can cause injury or possibly death of the operating staff. Do not connect the measurement setup to overhead lines if there is a possibility of a thunderstorm over any part of the lines to be mea- sured.
Warning: Connecting the measurement setup to overhead lines with a life paral- lel system brings about high-voltage hazards. It is strongly recommended to take all parallel lines out of service before proceeding Dangerous zone
Warning: During the grounding switch at the near end of the power line is open, I AC V1 AC the area around the CP GB1 in the range of 5 m/15 ft and around the CP CU1 inV1 AC I AC the range of 2 m/5 ft is a dangerous zone due to high-voltage and mechanical CPC 100 CP CU1 V SENSE Test object CP GB1 hazards. Do not enter the dangerous zone. Keep the grounding switch open for a EXT. BOOSTER BOOSTER I OUT time as short as possible. (optional)
2. In the External booster combo box, select CU 1. Warning: If you see or hear anything uncommon in the test equipment, e.g. noise The CT and VT settings are set according to the built-in current and voltage transformers of electrical discharge or lightening of surge arrestors, close the grounding switch automatically. before touching the measurement setup. 3. Set the current range of the CP CU1 using the current range switch (see page CP CU1-2) to the value configured by the CPC 100 software.
Warning: Set the current range switch on the CP CU1 front panel only when the CPC 100 is turned off and the test object is connected to ground with closed grounding switch near the measurement setup.
Note: Current range settings on the test card and on the CP CU1 front panel must be the same.
CP CU1 - 3 Yes Yes Yes
CP CU1 - 4 10 km/6 mi? 2 km/1.5 mi?
Use the 10 A 50 km/30 mi?
current range. current range. Start with the 100 A
Set 10 A Set 20 A Set 50 A
No No No Connect a grounding set in parallel to the closed grounding disconnector to each phase. Connect it first to the ground and then to the line. Open the grounding disconnector and measure the currents in all three grounding sets. Use the highest measured value of IGS in the following formula. Close the grounding disconnector after the measurement is done. Connecting the CPC 100 and CP CU1 to Power Lines
Yes
Yes Yes Yes
Is Is Is Is
greater than 50 V?
Set 10 A Set 50 A
Set 20 A
No No No No
Yes Yes Yes 50 V?
500 V? 250 V? 100 V?
Set 50 A
Set 10 A Set 20 A
Try to take parallel lines out of service or to reduce the current flow on the parallel systems.
No No No
CP CU1 Is 1 A possible? Are 12 A possible? Are 30 A possible? Are 60 A possible?
Yes Yes Yes Yes
Applications and Test Templates Line Impedance Measurement Ground Impedance MeasurementThe following application examples show the typical usage of the CP CU1. The test proceduresrunning on the measurement setup are controlled by templates available on theCPC 100 Start Page.For detailed information on the CP CU1 applications, refer to the CP CU1 Reference Manualdelivered with the CP CU1 or available in pdf format on the CPC 100 Start Page. Far end Far end
Overhead line Overhead line
Near endend Near 90°
V1 V1 AC AC II AC AC I AC V1 AC CPC 100 100 CP CU1 V SENSE V1 AC I AC I AC CPC EXT. EXT. BOOSTER BOOSTER BOOSTER CP GB1 CPC 100 CP CU1 I OUT EXT. BOOSTER BOOSTER I OUT CP GB1
There are seven different measurement loops: A-B (shown here), A-C, B-C, A-G, B-G, C-G and ABC in parallel to ground (similar to the next figure).
CP CU1 - 5CP CU1
Measurement of Coupling into Signal Cables Step and Touch Voltage Measurement Technical Data Output Ranges
Accuracy
There are four measurements with different connections. For detailed information, refer to the For the step and touch voltage measurements using the CP AL1 FFT voltmeter, refer to thetemplate or the CP CU1 Reference Manual. CP 0502 Application Note.
CP CU1 - 6
CP CU1CP SB1CP SB1
CP SB1 - 1CP SB1
Functional Components of the CP SB1 Connecting the CPC 100 and CP SB1 to Power Transformers The front panel of the CP SB1 provides the following functional components: Safety InstructionsTransformer High Transformer Low Read the • Transformer High Voltage: • Position the CP SB1 in the safety area and do not enter this area during the entireVoltage Tap Changer manual Voltage – Outputs (Source) for the injection of current or voltage on the individual phases of the measurement. Up Down transformer • Connect the CPC 100 and CP SB1 using the delivered grounding cable. – Inputs (Measure) for the voltage measurement • Connect the grounding cable of the CP SB1 at a safe grounding point at the transformer.
Note: The inputs and outputs of the respective connections (U/H1, V/H2, W/H3, N/H0) are Note: Do not operate the test equipment without safe connection to ground. connected to the transformer using Kelvin clamps. • Make sure that all high-voltage connections of the transformer are removed. • Transformer Low Voltage: • Make sure that all terminals of the transformer are connected to ground. • Switch off the power supply of the tap changer. – Outputs (Source) for the injection of current or voltage on the individual phases of the • Connect the Kelvin clamps to the bushings. transformer • Connect the cables to the Kelvin clamps. Make sure that the cables show upwards and that – Inputs (Measure) for the voltage measurement each colour is connected to a different phase. Note: The inputs and outputs of the respective connections (u/X1, v/x2, w/x3, n/X0) are • Connect the cables from the Kelvin clamps’ voltage sense outputs to the CP SB1’sLEDs connected to the transformer using Kelvin clamps. transformer inputs. Observe the color code. • Make sure to measure the voltage to ground at the terminals of the tap changer. If no voltage • Tap Changer: Two potential-free contacts for switching the tap changer is measured, connect the flexible terminal adapters to the "up" and "down" terminals of the • AC input for connection to the 2KV AC output of the CPC 100 tap changer. • DC input for connection to the 6A DC output and I AC/DC input of the CPC 100 • Connect the cables ("up", "down") to the CP SB1. • AC output for connection to the V1 AC input of the CPC 100 • Connect the CP SB1 to the CPC 100 according to ”Functional Components of the CP SB1” • DC output for connection to the V DC input of the CPC 100 on page CP SB1-2. • Serial interface for the CPC 100 (TRRatio and TRTapCheck test cards) to control the • Switch on the power supply of the tap changer. CP SB1 • Remove all grounding connections of the terminals except one per winding. Use Neutral (N) • Equipotential ground terminal for grounding the CP SB1 close to the position of the for the grounding connection if accessible. operating staff • Start the measurement according to page Transformer-1 and page Transformer-5. Serial Equipotential AC Input DC Input V1 AC Output V DC Output connection ground terminal
CP SB1 - 2
CP SB1CP SB1
CP SB1 - 3CP SB1
CP SB1 - 4
CP SB1CP CB2CP CB2
Burden
BurdenNote: If you select the CP CB2 as external booster on the Device Setup tab in the Optionsmenu, it will be saved as default value for new test cards. However, it is also possible to selectthe external booster individually on the test cards. The settings for already inserted test cardswill only be changed if no test results are available yet.
CP CB2 - 1CP CB2
Technical Data Notes regarding the CP CB2 Current outputs 1. With a mains voltage of 230 V using a 2 x 0.6 m high-current cable at an ambientRange Amplitude tmax1 Vmax2 Powermax2 f temperature of 23 °C ± 5 ° (73 °F ± 10 °F)1000 A AC 0 … 1000 A 25 s 4.90 V 4900 VA 15 … 400 Hz 2. Signals below 50 Hz or above 60 Hz with reduced values possible 0 … 500 A 30 min 5.00 V 2500 VA 15 … 400 Hz Caution: Make sure to establish series or parallel connection, depending on the2000 A AC 0 … 2000 A 25 s 2.45 V 4900 VA 15 … 400 Hz selected range on the test card.
Weight Dimensions (W x H x D)
CP CB2 test set 16 kg (35.3 lbs) 186 x 166 x 220 mm (7.3 x 6.5 x 8.7”), without handle.
test set & case 25 kg (55.1 lbs) 700 x 450 x 360 mm (27.6 x 17.7 x 14.2”)
CP CB2 - 2
CP C. | https://ru.scribd.com/document/401652744/CPC-100-User-Manual-pdf | CC-MAIN-2019-51 | refinedweb | 12,819 | 62.27 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to use compute with One2many?
Hello I want to know how to use compute to fill a One2Many
option with recordsets:
o2m_field = fields.One2many(....,compute="_compute_o2m_field")
@api.one
def _compute_o2m_field(self):
### get recordset of related object, for example with search (or whatever you like):
related_recordset = self.env["the.relation.obj"].search([("some", "condition","here")])
self.o2m_field = related_recordset
option with ids:
o2m_field = fields.One2many(....,compute="_compute_o2m_field")
@api.one
def _compute_o2m_field(self): related_ids = [] # here compute & fill related_ids with ids of related object self.o2m_field.ids = related_ids
together with @api.one add @api.depends(...) as well, if your calculation of related recordset or ids depends on another fields in a model.
Thank you i going to try this but if i want to create record in my on2many it's possible?
you mean make this field editable? in this case you'll have to implement another function as well, the "inverse" function, see doc:
Not exactly in my one2many, i want to fill a result for eg : timesheet_ids = timesheet_obj.search([('employee_id','=', self.id)]) for timesheet_id in timesheet_ids : time_geh[timesheet_id.machine_id.geh_id.id] += timesheet_id.time_amount for geh_id_int in time_geh : geh_id = geh_obj.browse(geh_id_int) my_ids = my_one2many.create({'name':geh_id.name, 'total_time' : time_geh[geh_id]}) self.my_one2many.ids = my_ids
yes, technically it's possible, but I do not see how it may be useful to create records from inside of compute function... as compute function is called several times, each time when you access the field, when field displayed in UI, etc... so new records will be created over and over...
Yeah i understund this is why i try to do each time to make an unlink to empty the table but i have keyerror :! | https://www.odoo.com/forum/help-1/question/how-to-use-compute-with-one2many-87307 | CC-MAIN-2016-50 | refinedweb | 311 | 59.09 |
Difference between revisions of "Talk:Main Page"
Revision as of 23:28, 11 August 2006
Contents.
Language Filters on Wiki
Is it possible to implement a language filter on wiki? I don't mean for swearing, I mean to block out articles that aren't in your normal language from being displayed in the recent changes and random pages. - RodeoClown 00:39, 20 Mar 2006 (PST)
- I would have to say no, because the alternate language articles are in the main namespace. If we had a system set up like Wikipedia then they wouldn't even show up. The Valve staff hasn't even upgraded the wiki software since they installed it (apparently due to their customizations, but making diffs from the base code to the current version and then reimplementing them when they're done shouldn't be as hard..) Anyway, from a logical standpoint, I'd say they're probably not going to. But that's just me. They don't happen that much, either.. --AndrewNeo 13:40, 10 Apr 2006 (PDT)
- Actually, the wiki software has been updated for security and bugfixes. An upgrade to a newer version of the software is forthcoming. Full version upgrades are not trivial, as Mediawiki often makes changes that break existing features, even if you have not made any customizations. For this reason we only makes updates when necessary. --JeffLane 19:16, 10 Apr 2006 (PDT)
I tend to agree that there should be something done about the language issue. One, it's pointless to have an example of something written in another language when you are looking for English. Two, it's annoying to do a search and have something come up not in english. Three, it would be easier for people of another language to find it, if it was in another section. Four, Since this is a mostly English forum, it makes it difficult to maintain the accuracy of posts made in another language. I think at least a seperation of English and non-english would be nice to have. --Carbonice 12:32, 3 May 2006 (PDT)
Since we have so many of the boxes I made to put at the top of games now (ex. Shadowgrounds has four) we should move to the same type of system Wikipedia uses, the sidebar (see Wikipedia:Half-Life 2 for example.) This way we could fit the name, a screenshot, release dates, mod status, and Steam URLs all in one place without six or so different templates. No specifics like Wikipedia (like system requirements and ESRB rating), we need to keep this a developer community and just link to Wikipedia for information like that. I'll begin designing the template and put it in my sandbox for demonstration and consideration in an hour or so. --AndrewNeo 17:20, 10 May 2006 (PDT)
- Did you ever go anywhere with this? It would be alot cleaner for the titles. Wraiyth 02:42, 26 Jul 2006 (PDT)
Separate Wiki for Valve "Game" Community?
There are many articles about "playing" Valve/Steam games(title informations, bug feedbacks, character explanations, etc) lately. How about having a separate wiki for players? It would help many non-developers out there, while keeping VDC focused on development topics. --n-neko 23:21, 7 Jun 2006 (PDT)
- I'm not convinced that the "playing" pages are actually hurting the VDC's focus on development topics. All the development articles are still there and accessible from all the same links and searches. There is already a place for players to discuss Valve's games, Wikipedia and several other Wikis. --Giles 01:44, 8 Jun 2006 (PDT)
Does Valve get involved?
Not trying to be overly critical, but after looking through this wiki I wonder how much valve really cares about us, who are interested in working with the source engine. There seems to be no participation or commitment from valve whatsoever. Am I wrong? Not that we can't deal with things ourselfes, but I wonder if content wouldn't improve/increase if valve was more eager to help (read: browse around now and then and answer questions on topics that may be complex/difficult to understand), which in turn should make people more interested in the source engine.
- Actually, after thinking about it, this critique mostly applies to the code section. Corner 14:07, 20 Jun 2006 (PDT)
- Probably, Valve developers should have created something like a developers' blog. Blog is not meant here as a cool feature of the time, but as a definetely comfortable way of communication with the Valve team via comments and\or users' articles. Kith 02:55, 25 Jul 2006
- They sometimes read the hlcoders list and working now on a SDK update. However a developer's blog would be nice. Better communication would avoid people thinking that Valve doesn't care. --dutchmega 02:47, 25 Jul 2006 (PDT)
- This is one of the reasons I'm a bit turned off Source at the moment. I'm beta testing a game and engine by an indie dev (along with mod tools), and after having alot of contact with him over ICQ, helping out with documentation and him concisely answering any questions I've got... coming back to modding with Source seems tedious. Wraiyth 05:24, 25 Jul 2006 (PDT)
- Does anyone know if somebody of Valve team has his own blog or smth. like that?
Source SDK] | https://developer.valvesoftware.com/w/index.php?title=Talk:Main_Page&diff=prev&oldid=42728 | CC-MAIN-2021-10 | refinedweb | 903 | 70.33 |
A Complete Shotgun Schema
Reading the "private" schema for all the details.
My site-local Shotgun cache is coming together quite nicely; you can see the progress on GitHub, as well as the ongoing results of my reverse-engineering efforts.
As work progresses, there is a greater and greater need for a through understanding of Shotgun's database schema so that I can replicate its behavior. One example that I'm starting to tackle: when you query an
entity or
multi_entity field, the entities will return the (required)
type and
id, but also a
name, however, if you query for that name field you will find that it doesn't actually exist:
>>> # Lets grab the pipeline step of any task: >>> sg.find_one('Task', [], ['step']) {'step': {'type': 'Step', 'id': 4, 'name': 'Matchmove'}, 'type': 'Task', 'id': 2} >>> # Notice the name... ^^^^^^ here. >>> # Let's grab that name directly: >>> sg.find_one('Step', [('id', 'is', 4)], ['name', 'code']) {'code': 'Matchmove', 'type': 'Step', 'id': 4} >>> # No 'name' to be found. Huh.
Another example is the description of back-references of many
multi_entity fields. E.g. if you set
Task.entity to a
Shot, that task will end up in
Shot.tasks as well.
I have reached out to Shotgun support, and they have confirmed to me that the schema returned by the public API's
schema_read (and related) methods would need to be expanded to return the information I need.
There must be another way to get this information, because the Shotgun website works, and it must have it. So lets go digging there...
The "Private" Schema
If you view the source of any Shotgun page (after authentication), you will see a collection of interesting
<script> tags in the
<head>:
<script src="/page/js_globals"></script> <script src="/javascripts/ext_5e14f74dad38933008ea607ddeed863c.js"></script> <script src="/javascripts/base_c42fd9a9fa5653fb541fc60e22005824.js"></script> <script src="/page/schema?page_id=1354"></script> <script src="/javascripts/widgets_a2b85e973eef52e64aaee66b64fd1593.js"></script>
/page/js_globals is interesting, as it contains all of the state for your Shotgun instance, including settings,
ActionMenuItems, details about you and your preferences, etc.. The three in the
/javascripts directory implement the core functionality of the site, and are a great read if you feel like monkey-patching the site to extend
ActionMenuItems. But none of these are what we are here for.
/page/schema contains (among other things) the schema for your Shotgun instance, in blistering detail.
Of particular interest to the two problems outlined above, some fields are marked with
identifier_column: true which I believe indicates the field used for the implicit
name, e.g.
Task.content:
"content": { "id": 264, "entity_type": "Task", "name": "content", "display_name": "Task Name", "data_type": "text", ... "identifier_column": true }, ...
...
entity fields explicitly label their back-references, e.g.
Task.entity:
"entity": { "id": 255, "entity_type": "Task", "name": "entity", "display_name": "Link", "data_type": "entity", ... "inverse_association": [ "Asset.tasks", "Element.tasks", "Scene.tasks", "Sequence.tasks", "Shot.tasks", "PhysicalAsset.tasks", "Project.tasks", "Tool.tasks" ], ... } ...
... and
multi_entity fields describe their association objects, e.g.
Asset.shots:
"shots": { "id": 222, "entity_type": "Asset", "name": "shots", "display_name": "Shots", "data_type": "multi_entity", ... "through_join_field": "asset_shot_connections", "through_join_entity_type": "AssetShotConnection", ... }, ...
Reading the Private Schema
I've known about this file and its contents for a while, and have studied it to understand the design of the schema, but I have never before used its specific contents to drive anything. I did not use it because I thought retrieving it would either be a manual process, or I would need to use the credentials of a user (instead of API keys) to scrape the website. Given the opportunity for the schema to change at any moment, these options always seemed too fragile.
The
python_api3 has always had a
_get_session_token method for retrieving a session cookie. For some time this seemed like it be replaced at any moment as it was documented as:
def _get_session_token(self): """Hack to authenticate in order to download protected content like Attachments ...
However, in a relatively recent commit, this method became part of the public API, and it is now a key part of the new two-factor authentication. At this point, we can read the Javascript file with only API keys:
I'll leave parsing it as an exercise to the reader.
But, is this safe?
The people at Shotgun are very much not the type to perform spontaneous massive refactors of key parts of their framework. So, unless they specifically block access of that page to API-based sessions, I don't see it going away.
My feeling is that any large changes to the availability or format of this schema would coincide with large changes to the API or storage engine, in which case anything that you are writing that is sensitive enough to require such intimate knowledge is going to break anyways.
Ergo, I'm going to go ahead and start using this information to build sgcache. I'll update this post with any information that Shotgun gives me about it. | http://mikeboers.com/blog/2015/07/21/a-complete-shotgun-schema | CC-MAIN-2021-31 | refinedweb | 811 | 53.31 |
Solution:
Modify Css3Mask.gss
.text {
gwt-sprite: "loading"; <----- remove this line
width: auto;
height: auto;
....
Type: Posts; User: Shawn.in.Tokyo
Solution:
Modify Css3Mask.gss
.text {
gwt-sprite: "loading"; <----- remove this line
width: auto;
height: auto;
....
In the following mask, there is always a Status indicator.
HTHM h;
h.getElement().<XElement> cast().mask("Try again");
Is it not possible in V4.x to have a simple text mask without the...
This bug filed against 3.0.0 RC is still present in 3.0.1
The final release (gxt-3.0.0-GPL) suffers this defect as well.
While implementing it as suggested (!before.isCanceled()) does work, more fundamentally, it doesn't seem that the
...
Thanks for your help. It's ok now.
It turns out that in refactoring some models to upgrade to GXT3 I introduced some errors that eclipse didn't flag but that choked the GWT compiler.
FWIW, the...
Hi,
What might the error below be telling me? Is it my improper usage of GXT? Might there be an bug someplace in gxt3 rc2? I see no errors in eclipse but when I try to compile the project, it...
So how do we migtrate?
Specificially I need to create an AppEvent with the constructor AppEvent(GwtEvent.Type<?> type)
I just don't get, even after carefully reading the javadocs, searching...
This has to be a bug:
FormPanel testForm;
testForm= new FormPanel();
TextField<String> TEST= new TextField<String>();
TEST.setId("TEST123:-6");// "TEST" works fine
...
Anyway, this worked...
junk implements IsSerializable
Hi,
Why does "junk" fail but "order" work below? They are both of type LinkedList<long[]>.
public class Sample extends BaseModel {
class junk{
LinkedList<long[]> test=new...
Hi,
Why might I be seeing an error like this when calling layout() on a content panel?
com.google.gwt.core.client.JavaScriptException: (TypeError): Result of expression...
Tried and failed. The data is complicated so I gave up after finding a work around.
It's weird -- really weird. Here are the types of classes:
ClassOne implements serializable
BaseModel1...
Why do I see this NPE which causes my RPC call to fail? :((:((
What makes this inexplicable is that RpcMap initializes the map with:
private transient FastMap<Object> map = new...
Hello,
I recently deployed to Google App Engine where I found code that ran on the development server hung.
It was a simple work around as I was using code splitting via runAsync to initialize...
How can we do that? I just want the dirty triangle to appear when the bound field loses focus.
I tried extending FieldBinding and adding a listener to the constructor but it has no effect.
...
Did you try the suggestion from tortexty?
@SuppressWarnings("unused")
Layout junk = new AnchorLayout();
That seems consistent with what I experienced (error when layout not used soon...
I use Gxt2.1 in Gwt2.0 fine (Code Splitting and all)
Anyway, for your example:
FAILURE - at beginning of onModuleLoad [Invalid memory access of location 0x8 eip=0x4a8aeb] (my stack trace in...
Did you look at the api?
com.extjs.gxt.ui.client.widget.DatePicker.DatePickerMessages
Look for the DatePicker and then scroll through. There is a Nested Class Summary
which had a link to...
How can I make a RFE or find a work around to allow for multiple forms -- i.e., I want more than one FormBinding?
How can I work around
formBindings.setStore((Store) grid.getStore());
to allow...
I don't think it has anything to do with the code itself.
[ERROR] Line 10: The import ms.webclient.ui.HoverText cannot be resolved
Your class is not being found. You have to get it on the...
How can I use a grid to display and edit data that may be different for each row.
Example.
class Country
String name
int government_type
Depending on government_type I want to show and...
For real server pushed I use Comet. Well, the dwr implementation at least.
It's not GWT-RPC, and I don't know if it's the best way but I am comfortable using dwr and the way I can secure it with...
OK, I set my content panel to have a fit layout:
panel1.setLayout(new FitLayout());
Still, it doesn't render fully.
It renders more but it always stops just about the place where...
Hi,
I don't know your solution exactly but ... experienced something similar.
Did you try:
cp.setScrollMode(Scroll.AUTOX); | https://www.sencha.com/forum/search.php?s=cf038c6be93fdbec68d9f08bd8c0100d&searchid=18401667 | CC-MAIN-2016-50 | refinedweb | 732 | 70.5 |
This tutorial explains the basics of C#, a modern object oriented programming language that was designed by Anders Hejlsberg at Microsoft.
This tutorial explains the basics of C#, a modern object oriented programming language that was designed by Anders Hejlsberg at Microsoft.A QUICK INTRODUCTIONThe pursuit to teach any new language usually begins with the classic Hello World program. This tutorial does not break that tradition
/** HelloWorld.cs* Version 1.1* A program that prints Hello, world! on screen*/
Using the first optioncsc HelloWorld.csMicrosoft (R) Visual C# Compiler Version [CLR version]Copyright (C) Microsoft Corp 2000-2001. All rights reserved.
In this admittedly simple case, you do not have to specify anything other than the file to compile. In particular, C# does not use the additional step of linking that is required by C/C++ etc.The default output of the C# compiler is an executable file of the same name, and running this program generates the following output:
C:\\HelloWorld\cs>HelloWorldHello, world!
Programmers familiar with JAVA will be amazed at how close C# syntax is to JAVA As in c, c++, java // /*, */ are valid comment lines in C# also. Any line started by // or contained within /* and */ is ignored by the compiler. However in C#, XML statements may be embedded within comments and may later be used to generate documentation for the program.
Just as the entry point to a c/c++/java program is main, the entry point to a C# program is Main() *with a capital M*. But unlike in the case of the former, no arguments are passed to Main(), and it returns a void (a keyword that indicates the function does not return anything).
public and static are qualifiers for the entry point function Main(). More on this later.
From the Official Documentation:
"The compiler requires the entry point to be called Main. The entry point must also be marked with both public and static. In addition, the entry point takes no arguments and does not return anything (although different signatures for more sophisticated programs are certainly possible)."
Since C# is strictly an Object Oriented language, there needs to be at least one class in every program. In HelloWorld.cs the class that contains Main() is given the name HelloWorld (Note that the class name and the file name need not match. This program can be put into a file called myFirstProgram.cs. It will compile and run properly even then).From the Official Documentation:"In C#, all code must be contained in methods of a class. So, to house the entry-point code, you must first create a class. (The name of the class does not matter here)."
For those familiar to C/C++/JAVA programming System.Console.WriteLine() appears to be a function call. It is! It takes one argument and displays it on the console.
NAMESPACES
Namespaces were used earlier in C++ and now in C#. Namespaces, as the name suggests are spaces that are used to contain a set of programming entities such as classes, methods etc.
"C# programs are organized using namespaces. Namespaces are used both as an "internal" organization system for a program, and as an "external" organization system-a way of presenting program elements that are exposed to other programs"
This has several advantages...In a large project, two different persons working on it may name a class similarly, which may lead to ambiguity. Namespaces offer a neat solution. Any number of independent programmers may give the same name to classes they create, provided they put everything into their own unique namespace.
For example there could be a two namespaces called Console and GUI. Both could contain a function named PutText() that displays text in the command line in case of the former and in a window in the case of the latter. Such functions could be accessed by saying Console.PutText() for the console version and GUI.PutText() for the other.
The above example was a generic programming example. In C# since every method needs to reside within a class, different namespaces could contain different classes with same names.
Getting back to HelloWorld.cs, I said at the end of the previous section that System.Console.WriteLine() is a function call. Well, it is only partly true. Actually System is a namespace that contains a class called Console, and Console contains a static method called Write Line().
A Quick note on static methods:When a class contains a method, it cannot be accessed, like a normal c style function. It needs to be invoked using an object-which is defined as an instance of that class. But it is often required that some methods of a class be accessed without an object. To define such functions we include a qualifier called static. That explains why static is added to the Main() function. The CLR needs to call Main(), the entry point without creating any object for it.
The .NET Framework defines more than 90 namespaces that begin with the word System. System contains a class called Console that mostly contains methods that are related to console operations.
Since HelloWorld.cs only contained one statement that called WriteLine() method in Console class, we fully qualified it using Namespace name . Class name . Method name Image we were to make a thousand calls to it from our program... It would be tedious to reference the method using the Namespace name every time. For this purpose C# includes a keyword called using. Lets now re-write the same example...
/** HelloWorld.cs* Version 1.2* A program that prints Hello, world! on screen*/
From the Official Documentation:"'Using' directives facilitate the use of namespaces and types defined in other namespaces. Using directives impact the name resolution process of namespace-or-type-names (Section 3.8) and simple-names (Section 7.5.2), but unlike declarations, using directives do not contribute new members to the underlying declaration spaces of the compilation units or namespaces within which they are used."
All along I've been talking about, putting classes into namespaces. But in the two versions of out program HelloWorld.cs, our class has not been put inside any namespace. What happens then? They go into the global namespace. Lets now try to write version 3 of this program that uses namespaces profitably.
/** HelloWorld.cs* Version 1.3* A program that prints Hello, world! on screen*/
Namespaces also play an important role in the organization of the .NET framework documentation. All the classes have been organized by the Namespaces under which they have been grouped.
A sample that accompanies the .NET Framework SDK itself is a command-line type locater that supports sub string searches. You can use it to explore classes contained in the various namespaces. (JAVA developers will find it similar to javap). To build and use this sample, follow the instructions contained in the Readme.htm file that is located in the InstallDirectory\Samples\Applications\TypeFinder subdirectory.Here is an example of how to use the type finder:C:\\TypeFinder\CS>findtype String
class System.IO.StringWriter
That concludes the tutorial. Hope you got a feel of C#! | http://www.c-sharpcorner.com/UploadFile/Kaushiks/IntroCSharp11142005080059AM/IntroCSharp.aspx | crawl-002 | refinedweb | 1,189 | 58.69 |
ADC Home > Reference Library > Technical Notes > Hardware & Drivers > Performance >
Introduction
Analysis Tools
The Macintosh Architecture
Mac OS Optimization Strategies
Optimizing C code
PowerPC Assembly
References
Downloadables
Performance tuning is a critical part of all application development. Customers don't like sluggish applications, and are willing to vote with their money. The good news is that small changes in an application can result in solid increases in the overall performance. This technote attempts to gather a significant amount of lore on tuning Mac OS applications for the best possible performance.
The first step to optimizing your application is to define your goals. Are you trying to improve overall performance of the application, or are their specific features in the application that are critical enough to need separate performance tuning? Is there a minimum acceptable performance level? How can it be measured? The more detail you can put into your goals, the easier it will be to determine how to improve your application.
When defining your goals, you need to determine your target platforms. An iMac with 32 megabytes of memory will behave significantly differently than PowerMac G3 with 256 megabytes of memory. You should determine at least one low-end and one high-end configuration, and use these configurations in all of your performance-tuning efforts.
Programmers are routinely bad at guessing where the bottlenecks are in their code. While you should be thinking about optimization during the design process, you will want to focus your optimization efforts on the sections of code that contribute the largest amount to your total execution time. So, the next step in optimizing your code is to instrument your code and generate data that measures the execution of your code. Later on, we'll cover some of the tools that are available, and discuss when each might be useful.
After you've determined the bottlenecks in your code, the next step is to analyze your accumulated data and determine exactly why that particular code is slow. It might be slow because it is calling the operating system, or it could be an incorrect choice of algorithms, or even poorly generated assembly code. Understanding the exact behavior of your code is critical to changing the code correctly.
Low-hanging fruit are simple flaws in the code that can provide immediate performance improvements if repaired. One common example here would be making multiple system calls when only a single set of calls is necessary. This could be setting QuickDraw port variables inside a loop, or it could simply be accidentally calling the same function twice.
After this, the next major place to optimize your code is in the algorithms. This technote covers both Macintosh hardware and Mac OS technologies, so that you can understand how different algorithms may be affected by the underlying system.
Memory management plays a key role in any algorithms you choose to implement in your code. Good memory management in an application can often mean orders of magnitude in performance.
Parallelism is becoming more common in modern computer architectures. With multiprocessor Power Macintoshes already in the marketplace, and Power Macintoshes with AltiVec shipping in the future, you should look at ways to exploit parallelism in your algorithms.
Finally, if the application performance still suffers, specific functions can be hand-optimized in C or assembly language to produce more efficient code.
One golden rule of optimization is to hide the work where the user doesn't notice it. For example, if the user is not currently performing any actions, your code could precalculate data that may be used in the future. This significantly reduces the time required to actually perform the task. For example, many chess programs are computing their next move while waiting for the player to decide on their move. This significantly reduces the perceived time the computer takes to make a move.
This technote describes techniques and algorithms without regard to other considerations, such as flexibility in design, ease of coding, ease of debugging, and so on. As such, they may be inappropriate except in areas of the application where performance is most critical.
Back to top
As described above, it is critical to analyze your application to determine exactly where your program is spending the majority of its time. It is also important to understand exactly why your application is spending its time in that section of the code. Why is almost more important than where, because it will suggest the changes required to improve the code.
With this in mind, let's look at some of the tools that are available, and their best uses.
CodeWarrior ships with a built-in profiling package that makes it simple to profile your code. Just turn on Profiling in the Project options, make a couple of simple changes to your code, and compile.
The Metrowerks Profiler provides a summary of all of the calls in your application, how many times they were called, and the total amount of time spent in each call. It also keeps track of the shortest and longest times each function call was made.
This is a good tool for getting an overall sense of where your application is spending its time. The summary information can quickly give you the top 5 or 10 functions where the application is spending most of its time.
The Metrowerks Profiler doesn't give you any information on how much time you are spending inside the operating system, nor does it really provide the context in which the function calls were made. That is, you can tell that Foo was called 987,200 times, but you don't really know when, or by whom, without performing additional code inspection.
Foo
Still, this profiler is an excellent place to start when looking to optimize your application.
Apple ships a complete SDK that allows you to wire up any application to log trace events. These trace events are stored in a file that can be parsed by the Instrumentation Viewer, and displayed in a number of different formats. In addition to summary information, you can also see a time-based progression that shows any subset of the events you wish to see.
The typical way that this is used is to log a trace event at the beginning and end of a function. The instrumentation viewer translates this into a picture that shows the exact time spent in that function.
Figure 1 - Logging a trace event
One advantage this mechanism has is that you can see exactly when or where events certain events took place in relation to others. For example, you could see exactly how long a function spends before calling another function, then see how long it takes when that function returns. This gives us more detailed information on exactly where each function spends its time, rather than a simple summary.
Other types of events can be logged as well. For example, in addition to start and stop events, you can log middle events at various points. This is useful for functions that are easily broken into different sections. This allows each section of the function to be timed separately. In addition to time information, an application can also log its own numeric information. For example, a memory allocator could log the amount of memory requested by each call. This would allow additional analysis to be performed to determine the frequency of different memory requests.
All this flexibility comes with some cost. It takes additional effort to set up the Instrumentation Profiler, because the support is not automatically in the compiler. Generally speaking, this can be done by running a tool (MrPlus) on an existing binary, and having it instrument a list of CFM imports or exports. This can also be done by directly adding code to your application, which is something that will have to be done in any case to obtain more sophisticated information.
In short, the instrumentation library is particularly useful once you see where your hot spots are, because you can then drill down and analyze the hot spot in detail, and determine exactly what is being called (in the OS or in your own code), how often, and for how long.
The instrumentation library is available on the Apple SDK web page.
The 604 and G3 processors include performance-monitoring features that allow an application to record statistics about its execution. Similarly, the MPC106 PCI controller used on the iMac and 1999 G3 models can also monitor and log detailed performance information.
4PM is a program and a shared library that allows an application to monitor a set of performance statistics. Most of these performance statistics are related to processor-specific features, such as cache misses or PCI performance. As we'll see when we talk about memory issues, some of these are crucial things to consider to get the fastest possible applications.
While 4PM can be used to examine the overall performance, the shared library adds the capability to selectively start and stop the monitor functions from within your application. So, if you've used the Instrumentation Library to determine the hot spots inside your application, and it isn't readily obvious why your code is running slower than normal, 4PM could be used to provide additional details on how your code is executing.
4PM isn't a great tool for examining the application as a whole, but it can provide valuable information to understanding why a section of code is running poorly.
Motorola has created a cycle-accurate simulator of the G4 processor that can test the execution of a block of code under different starting conditions. For example, you could test a snippet of code with both hot and cold caches and compare the performance characteristics.
SIM_G4 works by creating a trace file using tracing features on the PowerPC processor. This trace file is imported into SIM_G4, and then those instructions can be simulated to give you a representation of how your code passed through the various processor pipeline stages. This tool is particularly useful for understanding how your instructions are scheduled and executed. As such, it is essential for scheduling assembly code to run on the G4 processor.
The G4 is different enough from the earlier processors that scheduling for the G4 may not transfer back to the earlier processors. So, while this tool is useful, it will not solve all your problems.
Detailed information about using SIM_G4 is available on the AltiVec web page. SIM_G4 is only made available as part of the Apple Developer seeding program.
Occasionally, you'll find that the existing tools don't provide enough details on what is actually happening in a program. In those cases, it is valuable to design your own tools to accumulate data on the operation of your code.
When designing any subsystem of your code, you should think about the types of information you need to understand the operation of that subsystem. Importantly, this information serves a second purpose: debugging that section of the code when there is a problem.
While any specifics are up to the individual application, the most common thing to log is state changes and parameters that were passed to that code. Note that simple state changes and parameters are easy to log with Instrumentation Library; you only need to write your own logging for complex systems.
PowerPC processors are very efficient at processing data, so much so that memory, not the processor, is often the bottleneck in Mac OS applications. Slow memory accesses can completely stall the processor so that little work is being performed.
Code that exhibits poor memory characteristics will run at speeds no faster than main memory or, even worse, at the speed of virtual memory reading from a hard disk. Clearly, allocation and utilization of memory are critical to achieving the best possible performance in an application.
Programming in a high-level language hides the complexities of the memory subsystems on modern computers, in order to make it easier to write applications. Unfortunately, this is not an advantage when doing performance tuning. Understanding exactly how the memory system works is key to writing code that will work with the memory system rather than against it. To understand this, we're going to discuss the architecture of the Mac in detail. We'll first introduce some common architectural concepts; the following section will cover the Macintosh in detail. This technote can only give a brief introduction to computer architectures; for a more detailed discussion, see the References.
Most computer programs tend to exhibit patterns of memory accesses, which is usually referred to as locality. Spatial Locality refers to memory accesses that appear adjacent to an access you just completed; this is common when reading different pieces of data from a C struct. Temporal locality refers to multiple memory accesses to the same section of memory; in other words, if you just read something from memory, you are likely to read it again in the near future.
Modern computer architectures use a hierarchical memory model to take advantage of spatial and temporal locality. This model is built on caches. A cache is a smaller block of memory that can be accessed faster than the level it sits on top of. When a block of memory is loaded from the lower levels of the hierarchy, it is copied into the cache. Future accesses to the same memory will retrieve the data from the cache, which means the processor is less likely to stall waiting for something to be fetched from memory. Processors are increasing in speed faster than memory technologies, so multiple levels of cache are becoming more common.
Figure 2 - Memory hierarchy
When the block of data is not available in a particular level of memory, that is known as a cache miss. A cache miss is actually slower than the original transaction would have been in a single-level memory system.
Clearly, to take best advantage of a cache, you want to reduce the number of cache misses by taking advantage of locality in your application. Some areas of memory can be marked uncached so that the processor will always fetch them from main memory. This can sometimes be very useful when the data is infrequently accessed.
Caches can be designed in a number of different ways, so we need to discuss some of the parameters that affect a cache's operations. These include:
Increasing the size of the cache increases the probability that previously accessed data will still be in the cache the next time the processor attempts to access it.
Caches are organized into blocks of data, called cachelines. When the processor executes a load instruction, it must fetch one or more cachelines of data in order to fill the request. Cachelines are generally larger than the size of the processor's registers; they need to be large enough to take advantage of spatial locality without taxing the lower levels of the memory subsystem.
Set associativity describes where a new cacheline can be stored in the cache. In a full-associative cache, a new cacheline can be stored into any available cacheline inside the cache. Most processor caches are not fully associative because of the small amount of time available to search such a cache. An n-way associative cache means that the cacheline has n locations that it could theoretically be placed inside the cache. For example, an eight-way associative cache means that any particular block of memory can only be placed in one of eight different cache blocks inside the cache. We'll discuss exactly how this works later on when we discuss the PowerPC processor.
If there are no empty cachelines available when the processor attempts to load a new cacheline, one of the current cachelines must be thrown out. This is known as a replacement strategy. Most processors use a least-recently used (LRU) strategy, but others are possible.
When the processor wants to modify the data stored inside a cacheline, the write-policy determines exactly how this write is performed. A write-through cache writes the data both into the cacheline and into the lower levels of the memory subsystem. The cacheline and memory will always stay in sync; they are said to be cache coherent. If that cacheline must be discarded to make room for newer data, then the processor doesn't need to do any additional work. A copy-back cache defers writes until that block is needed for newer data. When the block is removed from the cache, it is written out to memory. Copy-back caches are generally faster, but require additional synchronization if multiple processors attempt to access the same block of memory. If a processor attempts to access a cacheline of memory held by another processor, it must retrieve the modified data and not the original cacheline from main memory. This may require the cacheline to be written back to main memory, or the processors may be able to communicate directly with one another.
To put this in perspective, we'll examine the iMac/333 in detail. Although ultimately this information will be dated as the Macintosh continues to evolve, it is useful to look at a specific example and how its memory system is architected. For all Macintoshes, the best way to understand the machine is to read the developer note for the specific machine you are targeting.
Figure 3 - Memory Registers
The G3 processor has 32 integer and 32 floating point registers. Generally, any value stored in a register is immediately available to instructions executed by the processor, making this the fastest memory subsystem on the iMac. Clearly, anything that will be accessed repeatedly should stay in a register if possible. Register allocation is usually controlled by the compiler, but the source code can include hints as to what should be placed in a register. For crucial sections of code, assembly language may be able to beat the register allocation performed by the compiler.
The L1 caches are located inside the G3 processor, and any register can be loaded from the L1 cache in two processor cycles. Importantly, multiple load instructions can be pipelined for an overall throughput of 1 load per cycle. In other words, if an application knows that a piece of data is in the cache, it can dispatch a load and expect the data to be there two cycles later.
The L1 cache is actually two separate caches: 32K for data and 32K for instructions. Each cache is eight-way associative with 32 byte cachelines. This means that effectively there are eight 4K rows, which are split into 128 different cachelines.
Figure 4 - L1 cachelines
Whenever a memory access is made, the bottom 12 bits of the address determine exactly where to look in the cache. Of these 12 bits, the top 7 bits [20..26] determine which column to examine, and the bottom 5 bits [27..31] determine which part of the cacheline to read into the register.
Figure 5 - Memory access
When a memory access is made, the specific column is searched in all eight rows simultaneously to see if the required data is already in the cache. If the data isn't in one of those cachelines, then one of those cachelines is flushed out to the L2 cache (using a pseudo-LRU algorithm), and the new data will be loaded into that cacheline.
The L2 cache on the iMac/333 is a 512K, two-way set associative cache with 64 byte cachelines, for a total of 8,192 entries. Typically, the access time to the L2 cache is about 10 to 15 cycles. Inside the L2 cache, bits 14 through 25 of the address will be used to determine which column can hold our data. Because the rows are much longer, there is less chance of two transactions overlapping inside the L2 cache; this is good because collisions will tend to flush useful data outside of the cache sooner.
Figure 6 - L2 cache
The L2 cache holds both instructions and data, so both may be fighting for resources - but only when the code is large enough to spill outside the L1 cache.
When the information isn't available inside the L2 cache, then we must fetch it from main memory. Main memory accesses usually take about twice as long as accesses inside the L2 cache. The memory used in the iMac is SDRAM, which is organized so that adjacent reads from that memory in a short period of time will be more efficient than completely random reads. The same code transformations that improve caching will also help to maximize SDRAM performance.
Up to this point, we've ignored virtual memory, so now we'll look at it in detail. Mac OS organizes virtual memory into pages that are 4K in size. A page table is stored in main memory that maps logical pages to the actual physical pages in memory. When a page is requested that is not presently in memory, it is loaded from disk, possibly ejecting another page back to disk. Virtual memory can be seen to act exactly like a cache.
Given that the page maps are stored in main memory, it might appear that each load and store instruction is going to cause multiple memory transactions - first to read the page table, and then to fetch the actual data. To avoid this, the G3 processor keeps a cache of the most recent page accesses. This cache is known as the Translation Lookaside Buffers, or TLB. This cache is much smaller than the ones we saw for the L1 and L2 caches; a 128-entry, two-way set associative cache. Bits 14 through 19 of the address are used to look up the entries in the TLB.
Figure 7 - Translation Lookaside Buffers
A TLB miss will cause a significant penalty in accessing that block of memory. Worse, if you repeatedly access pages that are not located in memory, repeated page faults will slow the Macintosh down to the speed of the hard disk, about a million times slower than RAM accesses.
You probably didn't want to know how the iMac's memory subsystems work; you really want to know how to write your application so that it accesses memory as quickly as possible. Clearly, the goal is to spend as much time working inside the L1 cache, where memory accesses are fast and predictable, and as little time as possible in the other memory subsystems. The good news is that we can apply most of our effort to optimizing the L1 cache performance; all the other layers will benefit from those optimizations automatically.
Whenever possible, all forms of data should be aligned to their natural size boundary. For example, a 32-bit long should have an address that is a multiple of 4. A 64-bit floating point double should be aligned to an 8-byte address. AltiVec registers should be aligned to a 16-byte boundary.
If an application doesn't properly align a variable, it will take a penalty every time it executes a load or store on that section of memory. For integer instructions, the penalty is one cycle. Remember, floating point doubles; the G3 processor does not handle alignment in hardware. Instead, an exception occurs and the alignment is handled in software. This works correctly, but at a severe performance penalty, and should always be avoided.
AltiVec registers can only be loaded or stored on aligned boundaries. When an load or store generated for one of the AltiVec registers is performed, the bottom four bits of the address are ignored completely. This ensures that all loads and stores are aligned in memory, but requires the programmer to perform the alignment manually. If you need to read or write misaligned data using the altivec registers, consult the Apple AltiVec web page or the AltiVec Programming Environments Manual.
While aligned loads and stores will always fall inside a cacheline, misaligned data can straddle two different cache lines, or even two different virtual memory pages. This can result in additional performance penalties as more data is loaded into the caches or from disk.
Whenever possible, align both your reads and writes. If you have to choose one to align first, align the reads, since most code stalls on reads, not writes.
If you allocate a data structure using PowerPC alignment conventions, then the structure will automatically be padded correctly to maintain proper alignment. However, if your structure includes doubles or vector registers, at least one must appear at the beginning of the struct to maintain proper alignment.
The standard C function malloc does not guarantee any sort of alignment. Mac OS 8.6 aligns all handles and pointer blocks to a 16-byte boundary. If you need to align your own data, allocate a larger block of memory and calculate an aligned pointer to write into it. You need to save the old pointer so that you can properly dispose of the memory block when you are done with it.
malloc
#define alignPtrToBoundary(myPtr,align) (((myPtr)+(align)-1) & ~((align)-1))
enum
{
cacheSize = 32,
vmPageSize = 4096
};
Ptr savePtr, thePtr;
savePtr = NewPtr (mySize + cacheSize);
thePtr = alignPtrToBoundary(savePtr,cacheSize);
The stride is the offset between consecutive memory accesses in your code. A good rule of thumb is to choose the natural length of the item being fetched (a unit stride). For example, the unit stride of an array of UInt32s would be 4 bytes. The unit stride would only touch a new cacheline after every eight loads. Contrast this with a 32-byte stride, which would touch a new cacheline for each load.
UInt32s
If you can't choose a unit stride, you should try to choose a stride that is no larger than a cacheline. Once you choose a larger stride, you may skip cachelines, leaving a percentage of the cache unused. Larger stride values that are powers of 2 will use a dramatically smaller portion of the cache. For example, look back at the diagram of the L1 cache. If we chose a stride of 4096, we would be walking vertically through the cache, using a total of 8 cachelines in the L1 cache. This would actually use less than 1% of the L1 cache, and 2% of the L2 cache. However a stride value of 4092, only 4 bytes lower, will use the entire cache. However, large strides are always bad because they will cause many virtual memory pages to be touched, resulting in many page faults.
This technote comes with an application named Cacheline Optimizer. Given a stride value, it will simulate the cache utilization for the L1 and L2 caches on an iMac, as well as giving an approximate idea of the TLB usage. It gives the percentage utilization as well as how many iterations before you will begin to evict data from the cache.
You should optimize your cache utilization for the data that is used most frequently. For example, let's assume we've got an array of data records on which we need to search and perform operations.
struct Foo
{
int key
int[7] data;
};
Foo records[kDataSize];
for (loop = 0; loop < kDataSize; loop++)
{
if (records[loop].key == keyValue)
{
PerformAction(&records[loop]);
}
}
This code has a stride of 32 bytes; we actually load a cacheline every time we iterate on the search loop. Unless we frequently hit inside the loop, we are actually wasting most of the cacheline on unused data. Instead, this structure could be transformed into a pair of arrays, one for the keys and one for the data.
struct Foo
{
int[7] data;
}
Foo records[kDataSize];
int keys[kDataSize];
for (loop = 0; loop < kDataSize; loop++)
{
if (keys[loop] == keyValue)
{
PerformAction(&records[loop]);
}
}
This results in no wasted space in the cache; we only load data into the cache when we successfully find a key. A test on the code above showed that the second set of search code executes in roughly 60% of the time of the first test.
Another common example is multidimensional arrays. Array accesses should always start with the innermost dimension, working outwards. For example, the following array could be accessed in two different ways:
float matrix[1024][1024];
/* Stride = 4 */
for (loopX = 0; loopX < 1024; loopX++)
{
for (loopY = 0; loopY < 1024; loopY++)
{
sum = sum + matrix[x][y];
}
}
/* Stride = 4096 */
for (loopY = 0; loopY < 1024; loopY++)
{
for (loopX = 0; loopX < 1024; loopX++)
{
sum = sum + matrix[x][y];
}
}
In the first case, we have a unit stride, with the best possible cache utilization. The second case gives us a power of 2 stride, giving extremely poor L1 and L2 cache utilization. In this particular case, the first set of code will run about five times faster than the second.
High Performance Computing (see References) discusses a number of access patterns for optimizing cache utilization of arrays and is a useful place to look for more ideas on transforming your data.
When talking about data, we can't ignore lookup tables. Lookup tables used to be a very efficient way to perform computations, but the processors have become much faster than the memory systems, so it if often faster to do the calculations than to rely on a lookup table. Lookup tables are generally only efficient for small tables that are accessed extremely frequently inside a tight inner loop. Large tables, or tables that are accessed infrequently will only result in displacing other data from the caches.
We've seen how choosing a good stride value improves the spatial locality of a block of code. However, temporal locality is equally important. A block of code should work on data in relatively small chunks, and work on those chunks for as long as possible before moving on to other data. Ideally, those chunks should fit inside the L1 cache, to absolutely minimize the time spent operating on the data.
For example, the following code performs three different functions on our data structure.
struct record
{
int data[8];
};
enum
{
kNumberRecords = 262144
};
record entries[kDataSize];
for (loop= 0; loop < kNumberRecords; loop++)
{
foo(&entries[loop]);
}
for (loop= 0; loop < kNumberRecords; loop++)
{
bar(&entries[loop]);
}
for (loop= 0; loop < kNumberRecords; loop++)
{
baz(&entries[loop]);
}
This code is inefficient because the function walks an eight-megabyte block of data. This is large enough to completely fill the L1 and L2 caches. Since we walk each function separately, this code will completely reload the L1 and L2 caches in each loop. Under cramped memory conditions, it will also thrash the virtual memory system as well.
Instead, this code should perform all three functions on a block of data before moving on to the next one.
for (loop= 0; loop < kDataSize; loop++)
{
foo(&entries[loop]);
bar(&entries[loop]);
baz(&entries[loop]);
}
This will not evict any data from the L1 cache until all three functions have been executed, resulting in about 1/3 the memory accesses of the first example.
The example above shows no dependencies in the code. Most real-world examples exhibit dependencies between different blocks of code. The goal is to calculate as far ahead as possible without evicting useful data out of the L1 cache. For example, if bar required the previous four data records, we would want to perform a few calculations of foo() before starting into the single combined loop.
bar
foo()
A safe number is generally half the size of the L1 cache; this allows for other data and global variables to have space in the cache.
If you optimize your data to fit in the L1 caches, you have already optimized your code to work efficiently with the Mac OS virtual memory system. For example, a block of code executing on a four-byte stride will only incur one page fault every 1,024 iterations. In many cases you can provide additional hints to the Mac OS to further improve your virtual memory performance.
Starting in Mac OS 8.1, there are a set of functions that allow you to prepage VM pages into and out of memory. Complete documentation is in Technote 1121, "Mac OS 8.1", but to summarize:
MakeMemoryResident - pulls a set of pages in from the disk.
MakeMemoryResident
MakeMemoryNonResident - flushes a set of pages out to disk and marks those pages as available to the VM system.
MakeMemoryNonResident
FlushMemory - flushes a set of pages out to disk, but leaves them resident in memory.
FlushMemory
ReleaseMemoryData - marks a set of pages as clean, so that they will not be written out to disk.
ReleaseMemoryData
For example, let's revisit our previous example. We know the code is about to walk an eight-megabyte block of memory linearly. In the worst case, all of our data is currently on disk and we'd take 2,048 page faults to bring in this data and operate on it. Disk and file system overhead will make this operation slow, even though we optimized our memory accesses appropriately. However, we know we're going to walk the entire block, so we can prepage the information in.
struct record
{
int data[8];
};
enum
{
kNumberRecords = 262144
};
record entries[kDataSize];
MakeMemoryResident (&entries, kNumberRecords * sizeof (record));
for (loop= 0; loop < kNumberRecords; loop++)
...
This code hints to the operating system, allowing these pages to be read into memory in much larger chunks. For a piece of code similar to the above test, this roughly doubled the VM performance. Importantly, adding this function call is only useful if the data has likely been previously paged to disk; otherwise, this just adds a small amount of overhead to the function.
Similarly, if the application has modified a large chunk of data, it can call FlushMemory and allow the system to write these pages out to disk efficiently. In many cases, the operating system already combines adjacent dirty pages, but adding a hint can be a good idea. The places where you want to use this are less obvious and using FlushMemory improperly can actually reduce performance, by initiating writes that may never have happened otherwise.
MakeMemoryNonResident is a more extreme version of FlushMemory, since it allows that entire range of VM pages to be used to satisfy new VM requests. It should only be used for data that isn't going to be used again for a while.
Finally, if the application has a chunk of data that doesn't need to be stored to disk (because the entire page of data will be recreated next time it is needed), then the pages can be invalided via ReleaseMemoryData and then made non-resident using the MakeMemoryNonResident calls. This eliminates unnecessary writes to disk and also provides virtual memory pages to the system that it can use to satisfy page faults. This can prevent useful information from being paged out by mistake.
The PowerPC processor has a number of instructions that allow a program to hint to the processor about its cache utilization, much like the function calls we just described for virtual memory. The G4 processor has more advanced streaming instructions that allow even further optimizations to be made. This note will only cover the instructions that are available on the G3 processor; information on the cache streaming architecture is available on Apple's AltiVec web page.
Hinting to the processor allows it to use spare memory bandwidth to prefetch new data before it is needed by the processor. Under the best circumstances, where computations and loading are balanced perfectly, this can result in a doubling of the actual performance of your code.
Compilers will not automatically emit these instructions; you need either to write assembly language or use special compiler intrinsics to add these instructions to your code.
dcbt - data cache block touch
dcbt
The dcbt instruction hints to the processor that you will be using information from a particular block of memory in the near future. If the processor has spare memory bandwidth, it can use them to fetch this data so that it is ready immediately when you need it. This instruction will not cause page faults or other exceptions.
If we look at our previous example, we found that our record fit exactly into a cacheline. If our records were aligned to a cache boundary, we could use a dcbt instruction to prefetch the next record while we're performing calculations on the current record. When the calculation and load times are comparable, this can result in a 100% speed increase.
This instruction is only useful when the number of calculations is large enough that there are idle memory cycles and when our data is usually not already in the L1 cache. For something like a simple memory to memory copy, there are no calculations to overlap with the dcbt instruction, so this is just a waste of processor cycles. If the data is already in the L1 cache, then again, we're just adding additional unnecessary cycles.
Cache touch instructions have a very significant effect on the G4 processors, often doubling the available memory bandwidth. dcbt instructions and their more powerful stream-based counterparts should always be considered when optimizing code specifically for the G4 processor.
dcbz -
The dcbz instruction is not a hint to the processor. Given a memory address, it calculates the correct cacheline, writing that data out exactly as for any other cacheline operation. However, while a load or store instruction would fetch data from memory to fill the cacheline, the dcbz instruction just fills in the cacheline with zeros. This instruction should only be used on data that has been properly aligned, because otherwise you might wipe out useful data by mistake.
dcbz
Beyond being the most efficient way to clear cacheable memory, this instruction offers another tangible benefit. If a block of code is going to completely overwrite a cacheline of data, then it should explicitly clear that cacheline before filling it. For example, the following code fills a one-megabyte block of memory with data.
UInt32 ptr;
for (loop = 0; loop < 1024*1024; loop++)
{
*((UInt32 *)(ptr+loop)) = 0xFFFFFFFFUL;
}
When the first write is done into a cacheline, the write stalls until the cacheline is loaded from main memory. This read is completely unnecessary since the code is going to overwrite the entire contents of the cacheline. For our one-megabyte record, this is 32,768 loads from memory that we didn't need to make!
Instead, we could write this code to use the dcbz instruction.
/* assumes this data is aligned to a cacheline boundary */
UInt32 ptr;
for (loop = 0; loop < 1024*1024; loop+=32)
{
_dcbz (loop, ptr);
/* this could certainly be unrolled for speed */
for (loop2 = loop; loop2 < loop+8; loop2++)
{
*((UInt32 *)(ptr+loop2)) = 0xFFFFFFFFUL;
}
}
This code explicitly clears the cacheline and then fills it with the new data. By clearing it, we've eliminated a significant amount of load bandwidth, in some cases more than doubling the overall throughput in memory.
The dcbz instruction should only be used on cacheable memory. For uncacheable or write-through cacheable memory, it generates a processor exception that handles it correctly, but is much slower than clearing those bytes by hand.
dcbf
dcbi
The dcbf instruction pushes a cacheline of data completely out of the L1 and L2 caches, and marks that cacheline as unused. The dcbi instruction just marks a cacheline as unused, without pushing any data across the bus.
Explicitly flushing data that won't be used in the near future makes those cachelines available for other incoming data. This might prevent useful data from being purged from the caches by mistake. Like the MakeMemoryNonResident function, it should be used sparingly, since it initiates memory bus traffic that may not have otherwise been necessary. In addition, any block flushed in this fashion must be completely reloaded from main memory, resulting in considerably worse memory performance.
The dcbi instruction allows you to mark cache lines of data that do not need to be written to main memory. This is a useful hint to provide for transient data that must be completely recalculated anyway. While the dcbz instruction helped reduce unnecessary loads performed by the processor, the dcbi instruction minimizes extraneous writes.
Let's say we had a piece of code that is going to recalculate a chunk of data every time could use dcbz instructions to clear the cachelines, and dcbi instructions to prevent them from being written to main memory. This will eliminate any extraneous processor bandwidth from being generated for this chunk of memory, at the cost of a few cycles in this code.
Most applications access memory in places the programmers never realized. Compilers often generate unnecessary loads and stores to keep certain variables synchronized between a register and main memory. Global variables can also add additional data into the cache. This section discusses a few ways that this can result and suggests methods to avoid it.
Global variables are stored in the data portion of a shared library and are accessed via a lookup table (known as the TOC). Whenever an application accesses a global variable, it must first look up the address of that variable in the TOC. So, any global is accessed with at least two load instructions. If a global is accessed multiple times in a function, the compiler may cache the global's address in a register. However, this will leave this register unavailable for other computations.
Let's put this in perspective. A function that accesses eight global variables may use anywhere from eight to sixteen registers to hold global data, trading off additional registers for fewer reloads of the address. The large number of registers being used means that more registers must be saved and restored when entering and exiting the function. And finally, all of the TOC lookups mean additional cachelines of data being loaded into the L1 cache. For the above code, the worse case would be where each TOC entry is in a different cacheline.
Globals that are declared as const types are not immune to the problems listed above. Applications that use const for simple integer types should consider using enumerated types, in the same fashion as the Apple Universal Interfaces.
const
There are a few ways that an application can work around the above problem. The easiest is to declare a related set of globals into a single struct. This causes all of the globals to share a single TOC entry, and also allocates the data adjacent in memory, which improves the cacheability of the global information. Because they share a single TOC entry, a single TOC address lookup can be used to the entire set of globals, with no additional penalties over the standard case in the compiler. Using our example above, 8 globals would fit into 9 registers; one that holds the address retrieved from the TOC, and 8 to hold the actual globals. This model of using a single address to look up a set of globals is close to the model used for globals used on Windows, so those of you creating cross-platform code should find that this gives you good performance.
Another important consideration is the scoping of the global. If the global is only used in a single function, or inside a single implementation file, be sure to scope it as a static variable inside the function or file. This offers the compiler additional information it can use when optimizing the code.
Even with both of these mechanisms being used, many accesses to globals will still generate load and store instructions, even when the data hasn't changed in memory. This is known as aliasing; the compiler can't always determine if two different pointers aren't pointing at the same memory, so it gives up and assumes memory is accurate and the register is not.
By creating a local variable, and assigning the global's value to it, you explicitly tell the compiler that this variable is scoped for a given function and it won't change. If you do modify the global, you can copy the value back at the end of the function. Obviously this only works if the function doesn't call another function that will change the value of the global. Like most optimizations, this is best used inside a tight inner loop, where extraneous memory accesses to a single memory location are just wasted cycles.
These same techniques can be applied in other places as well. Values stored inside a structure or C++ class, or anything referenced via a pointer, will suffer the same aliasing problems as a global variable. Caching frequently used items into local variables explicitly tells the compiler and allow it to produce better code. Always scope variables as tightly as possible in order to let the compiler aggressively schedule register usage, preventing unnecessary registers from being saved and restored to the stack.
When programming a function in this manner, it is useful to think of the function as a machine that takes a bunch of data, stores it in registers, crunches everything inside registers, and then stores it out to memory. While doing calculations, it isn't touching memory, offering more bandwidth to other parts of the Mac.
PCI memory transactions are much slower than those going to main memory, so applications which write data to PCI must be optimized more closely than those that touch main memory. The iMac ships with a 33mhz PCI bus, while the '99 G3 and G4 systems ship with 66mhz PCI and AGP. When writing data over PCI, we want to reduce PCI transaction overhead and maximize the number of PCI cycles spent writing data.
Cacheable PCI memory works identically to main memory; the only difference is the slower time to burst cache lines across PCI bus. Because of this, all of the guidelines for L1 and L2 cache management are more critical, because code that thrashes the L1 and L2 caches will be reduced to the speed of PCI reads and writes.
However, many devices on PCI are non-cacheable and require additional work on the part of the programmer. One common case is a video frame buffer, which must be non-cacheable in order to ensure that pixel updates happen immediately. If you are writing data to non-cacheable PCI memory, you should follow the following guidelines:
Misalignment penalties when writing to PCI are extremely high, and it is always worth spending additional instructions to explicitly align a block of data before posting it over PCI. One easy way to do this is to create a buffer on the stack and write your unaligned data into the buffer. Then, write this buffer over PCI using double or vector registers. This buffer could be aligned with caches, and explicitly cleared and invalided, as discussed in the caching section.
When working on a frame buffer, don't write one- or two-byte quantities if you can gather larger sets of writes - a one- or two-byte transaction takes just as long as writing out a four-byte word. If you can, round all your writes to four-byte boundary (minimum) and write out longs.
As an example, the worse case would be code that alternates writing one byte, and then skipping one byte. If we write each byte separately, we are effectively writing to each four-byte block twice. If we round each byte to a four-byte boundary on either side and gather adjacent writes, we immediately half the number of PCI data writes we are making to the buffer. This requires the intermediate unchanged pixels to be coherent with what is presently stored in PCI memory.
In actuality, writing four bytes at a time does not offer the best performance. When larger chunks of data are being written, code should use the floating point registers to move 64 bits at a time. This reduces the PCI transaction overhead, resulting in a 50% gain in speed over writing longs. On machines with AltiVec, writing aligned data with vector registers will result in speeds more than twice that of an equivalent set of integer loads and stores.
Multiprocessing adds an entirely new set of issues to worry about. On the positive side, each processor has its own L1 caches, which allow larger data sets to be stored in L1 cache simultaneously. However, multiple processors will share L2 cache and main memory, which means the processors are competing for a sparse amount of bandwidth.
If your code has already been optimized to make efficient use of the L1 cache, then the load on the L2 and memory subsystems will be lower and your application will automatically have better MP performance.
When possible, a multithreaded application should never write to the same cacheline of data from multiple preemptive tasks simultaneously. While this will give acceptable performance on single processor systems, a multiprocessor system will cause a lot of overhead keeping cache coherency between the processors. This overhead is reduced on the G4 processor, but is still worth avoiding.
Instead, divide your work into multiple sections that don't overlap in memory and give each section to a thread. Each section should follow the rules for spatial and temporal locality.
Having examined the memory system on the current Macintoshes and how it affects the organization of your data, we will now turn to the discussion of how to optimize your code to work with the Mac OS. Applications can't change the code in the Mac OS, but they can change how they call the Mac OS. An understanding of how different components of the Mac OS operate will help determine the correct algorithms and strategies to use in your code.
Don't spend a lot of time rewriting basic operations, such as memory moves or string routines. Instead, you should rely on the low-level operating system services or StdCLib, which implements the C library functions. (Apple can optimize each routine for a specific hardware configuration to maximize performance.) You should avoid bypassing these routines except in cases where you can provide a significant improvement in speed; by using your own code, you may end up running slower on future processors. In addition, your code has to be loaded separately into the caches from the system's code, resulting in other code being evicted from the caches.
StdCLib
Similarly, you should avoid embedding any runtime libraries directly into your application binary; instead, link to StdCLib. Using StdCLib results in only a single copy of the library being loaded into memory at one time, improving virtual memory and caching performance. Applications that embed runtime libraries in their code will have their own copies of the libraries, again, evicting other information from the system caches.
So, initially, when you need a service, you should determine whether the system already has a function that implements that feature. If it does, you should use it, and only replace it when you absolutely must.
When might you need to replace a routine? Let's look at an example. BlockMoveData is a general purpose memory copy function in the Mac OS. It will copy aligned or unaligned data of any size. It correctly deals with overlapping memory blocks. It is very fast, but because of its flexibility, it must perform a number of tests before it can start copying data.
BlockMoveData
If 90% of your BlockMoveData calls are for small, aligned blocks, you can probably beat BlockMoveData by writing a smaller copy routine that eliminates all of the testing overhead of BlockMoveData.
Generally speaking, the easiest way to avoid system overhead is to make fewer system calls on larger chunks of data. We'll see this in a few places throughout the rest of this section.
Finally, while many pieces of the Mac OS run natively, other functions are still implemented in 68K code. 68K code runs significantly slower than PowerPC code. The 68K emulator tends to fill the L1 and L2 caches with data, evicting useful data out of the caches.
One place applications often lose time is inside their event loop. Every time you call WaitNextEvent, you are giving up a lot of CPU cycles to other applications. Because these other applications are getting time, they are also flushing your code and data from the caches on the processor. This breaks the temporal locality of your code and data, slowing down your processing. In general, if you have calculations to perform, you should keep the CPU for as long as possible while still maintaining a responsive user interface.
WaitNextEvent
Studies have shown most users expect the frontmost window to be the most responsive, and to complete work faster. For example, the current Finder uses significantly more time when the frontmost window is a Copy progress dialog; the user is waiting for this copy to complete. Applications should adopt the following guidelines:
Starting with Mac OS 8.6, preemptive tasking is available to all applications, all of the time, and is accessed through improvements to the Multiprocessor API. In fact, the entire Mac OS cooperative environment is now run as a single preemptive task, and is no longer tied to a specific processor.
Preemptive tasks are still limited in what they are allowed to call; calling any 68K code or most operating system functions is not allowed, making them useful primarily for computational activities. Cooperative threads are also useful in this regard, but they can also take advantage of toolbox calls.
When WaitNextEvent is called under the new environment, your application actually sleeps for the sleep time you provide it, unless an event is received for your application. If all applications are sleeping, this allows the Mac OS task to block completely, giving the processors completely to other preemptive tasks, and also allowing power saving features to be enabled, extending battery life on PowerBooks. So, if you are doing most of your work in preemptive tasks, then a high sleep value is preferred, because you want those tasks to get as much time as possible. On the other hand, if you are doing most of your work in cooperative threads, you want to use a small sleep value. Otherwise, you are spending most of your time sleeping and very little of it performing computations.
If you are performing calculations inside your WaitNextEvent loop, you don't want to perform a fixed number of iterations. As processors get faster, the time spent in your code ends up being smaller, and the time spent calling WaitNextEvent will increase. Instead, time your computations using TickCount or one of the other clocks, and only give up time when an event has come in or your time has expired. The sleep and work times should be adjusted dynamically based on the amount of work you have to do, and whether or not your application is in the foreground or the background. The following event loop demonstrates this:
TickCount
enum
{
kMaxSleep = 65535L
};
bool gAppRunning;
bool gAppForeground;
bool gThreaded;
bool gComputations;
int gComputeThreadsActive; // how many compute threads we have
int gAppleEventsSent; // how many Apple Events we've sent
int gPendingAppleEventReplies; // how many Apple Events we need to reply to
long CalculateWorkInterval();
long CalculateSleepInterval();
bool AppPerformingComputations();
bool EventsPending();
void SHEventloop(void)
{
OSStatus theErr = noErr;
UInt32 nextTimeToCheckForEvents;
EventRecord anEvent;
gAppRunning = true;
gAppForeground = true;
gComputations = false;
gComputeTheadsActive = 0;
gPendingAppleEventReplies = 0;
do
{
nextTimeToCheckForEvents = TickCount() + CalculateWorkInterval();
while (AppPerformingComputations() &&
!EventsPending() &&
(nextTimeToCheckForEvents > TickCount()) )
continue;
// retrieve an event, if one is pending, and handle it. An error here
// implies a fatal error in the application.
(void) WaitNextEvent (everyEvent,&anEvent,
CalculateSleepInterval(), AppCursorRegion());
theErr = AppHandleEvent(&anEvent);
} while ((theErr == noErr) && gAppRunning);
}
int CalculateWorkInterval()
(
/*
A more sophisticated version of the code could notice the last
time a keyboard or mouse event came in, and adjust these work
numbers up if the machine has been idle for a while
*/
/*
if we're in the background, and no one else wants data from us,
we should return control as soon as possible.
*/
if (!gAppForeGround && (gPendingAppleEventReplies == 0))
return 0;
/*
if we're waiting on data from other applications,
we'll only take a small time slice because we want
those applications to have time to respond to our requests.
*/
if (gAppleEventsSent > 0)
return 2;
/*
We're frontmost, we don't need anything from anyone else. Take
a big chunk of time to perform computations. If we run out
of computations, the work loop falls through to calling WaitNextEvent.
If we're handling text, we should modify this code to not take more
time than GetCaretTime.
See Inside Macintosh: Macintosh Toolbox Essentials, p. 2-86.
*/
return 15;
)
int CalculateSleepInterval()
(
// if no work to do, sleep for as long as possible.
if ((gComputations == false) &&
(gComputeThreadsActive == 0) &&
(gPendingAppleEventReplies == 0))
return kSHMaxSleep;
/*
If we're waiting on replies from other apps or if we're in the background
we want to sleep for a while to give other applications time to do some
work. Otherwise, we'll sleep for a small bit of time.
*/
if (gAppForeGround && (gAppleEventsSent == 0))
return 1;
else
return 10;
)
bool EventsPending()
{
EventRecord ignored;
return (OSEventAvail (everyEvent, &ignored) || CheckUpdate (&ignored));
}
bool AppPerformingComputations
{
if (gComputations)
PerformComputations();
if (gComputeThreadsActive > 0)
YieldToAnyThread();
return (gComputations || (gComputeThreadsActive > 0));
}
This code always attempts to perform calculations at least once through each event loop, calling out to cooperative threads as necessary. Once all work is completed or an event comes in, the code falls through and calls WaitNextEvent to service the event, or sleep.
The event loop dynamically alters the time it uses for sleeping and waking based on whether it is in the foreground, whether it has any cooperative work to complete, and whether it has sent or received any apple events. For this code to work properly, AppPerformingComputations should return time back to the event loop periodically - rarely more than a tick or two of time per call.
AppPerformingComputations
Using functions to calculate the work and sleep intervals allows complete customization of the event loop. For example, applications could dynamically increase the work interval and decrease the sleep interval based on the number of computational tasks they are currently performing. Games will want to maximize the amount of time used for work, and spend most of their time in AppPerformingComputations. An application that is sitting idle (with no mouse or keyboard motion) could dynamically increase the amount of time spent performing work.
Applications that don't need time should sleep for as long as possible. This provides the maximum amount of time to other applications, and also allows preemptive tasks to get a larger share of the processor. If no tasks need time on the machine, the processor can go to sleep, conserving power on portable systems.
Applications with animated displays (e.g., a blinking caret in a text editing application) should choose a sleep value that permits them to update their graphics often enough. See Inside Macintosh: Macintosh Toolbox Essentials, p. 2-86.
We've already talked about some of the hidden costs of accessing memory. However, allocating and deallocating memory can also take a significant amount of time. Macintosh heaps become inefficient when large number of memory blocks are allocated on them. The memory manager has to walk the heap and move blocks around to make room for new allocations, causing large amounts of memory thrashing.
When you need to allocate a large number of small objects, you should allocate small number of handles or pointers, and suballocate all of your other objects inside that block. The CodeWarrior MSL libraries use a suballocator mechanism for C++ new and delete operations, so you can use these libraries or roll your own suballocators. Doing this can result in orders of magnitude improvements in your application's memory allocations and deallocations.
new
delete
Be wary when allocating system data structures. Many items allocated by the operating system are composed of multiple objects, each of which is a separate pointer or handle block. For example, a GWorld is composed of a CGrafPort, a PixMapHandle, and a GDevice; in total, about 27 handles are generated every time you call NewGWorld. An game that allocated a separate GWorld for every frame of animation would quickly fill the heap with a large number of unnecessary handles. Allocating a single GWorld to hold all of the frames of animation significantly reduces the number of allocations performed by the operating system. (In this example, allocating the GWorld as a vertical strip is more cache friendly than a wide, horizontal strip of images.)
GWorld
CGrafPort
PixMapHandle
GDevice
NewGWorld
File system optimizations are often overlooked, and are a critical way to make a different in application performance. A few simple optimizations can double or triple the speed of your file system code. Technote FL16, "File Manager Performance and Caching", discusses file system performance in detail; this note will touch on some of the important points found in that note.
The file system introduces a fair amount of overhead to retrieve data from a file, so the key to improving file system performance is to request data in large, aligned chunks. Reading individual shorts, ints and floats directly from the file system is highly inefficient. A better solution is to buffer your file I/O. Read an 8K chunk from the file into a block of RAM, and then read the individual bytes from the buffer. This reduces the number of times the application accesses the file system, resulting in dramatic system performance.
Included with this technote is a piece of code, CBuffFileStream, that implements a buffered file system on top of the Mac OS file system.
In order to maximize the benefits of buffered file I/O, you must organize your data so that it can be read from the file sequentially. Random access reads and writes will still work, but at a slight performance penalty. In a sense, your buffer is a cache that holds recently accessed data; accessing data with similar spatial locality will be faster than random accessed memory.
Organizing the data can be done by reordering the file, or it can be done by generating a separate index to the file. This index can be sorted into a sequential order that can be used by the loading code to bring in all of the individual pieces of data. Assembling the data at the other end of the transaction will be more complex in this case, however.
Once the data has been reorganized sequentially, increasing the size of the read-ahead buffer will further improve performance. The application can also prefetch the next buffer of data asynchronously, since it knows that it will be using this data. Under the right conditions, where computations and file loading are roughly equal, this can double the performance of your application.
Also, when loading large files, you are not likely to be rereading that data immediately. You should use the parameter block-based calls and hint to the file system not to cache those blocks. This will save time (because the file system doesn't make an extra copy of the data) and will also keep useful data inside the disk cache.
Resources are a critical part of any Macintosh application, and are thus a crucial topic to discuss when optimizing a Macintosh application. Before we discuss ways to tune performance of resources, it is useful to discuss the mechanics of the resource manager.
Resource forks are just a file with a very specific format. Resources are just chunks of data with a specific type and ID. All of the information about the resources in a particular file are stored in an index, known as the resource map. An application can have multiple resource files opened at once. The resource files are kept in the order that they will be searched; this is known as the resource chain.
When an application makes a call to GetResource, the Resource Manager starts at the top of the chain, and searches the resource map of the first resource file. If that resource exists in the file, then that resource is loaded from the file into a new handle in memory. Otherwise, the Resource Manager will try each other resource fork in the chain, until it finds the resource (or returns a resource not found error). Once a resource has been loaded into memory, subsequent calls to the resource manager will recognize this and not load the data from disk a second time.
GetResource
resource not found
Clearly, if you have a lot of resource files open, this searching process could take a long time. Worse, searching large resource maps will flush your own data from the caches. Clearly, we want to do two things; first, make searching as efficient as possible, and second, minimize the actual amount of disk access the resource manager has to make.
Here are some immediate guidelines for optimizing resource usage in an application:
If your application's resource fork has frequently used resources, you should set the preload and locked flags on those resources. They will be loaded low into the heap when the application is first launched and will always be available when the application needs them. Since users expect a small delay when the application is launched, this resource time is hidden from the user.
You should never release any resource you expect to use again in the near future. When you call GetResource, the system will be forced to reload the resource from disk. Instead, by marking the resource as purgeable, you only need to reload the resource if it has been purged from memory. LoadResource will explicitly check to see if the resource needs to be reloaded and will load it if necessary.
LoadResource
Similarly, opening a resource fork is a costly operation, with multiple file reads and a lot of preparation work on the resource map in RAM. You should avoid closing any resource fork that you may use in the near future.
As mentioned earlier, searching the resource chain is a lengthy operation that touches a significant amount of data. If you know the specific resource file, you should explicitly set the resource file and use a shallow search of that file (e.g., Get1Resource instead of GetResource). This limits the amount of files searched and keeps more of your data inside the caches.
Get1Resource
Similarly, you don't want to put too many resources into a single file. A resource fork cannot be larger than 16 megabytes in size. In addition, the absolute maximum is 2,727 resources; the actual maximum will vary based on the number of different resource types in the fork. All resource manager searches are performed linearly, so the more resources there are in a file, the longer it will take to search that fork for a resource.
On the other hand, putting too few resources into a file can result in many more forks being open on the machine. Mac OS can only have a limited numbers of files open at one time. Also, if you make a call to the regular resource manager routines, it will search the entire resource chain.
Custom resources should be created as a single complex resource rather than using a large number of smaller associated resources. This will improve searching, both because the list of resources is smaller and because fewer searches are being executed.
The resource fork format is flexible and documented, and this allows some additional optimizations to be made. The Resource Manager organizes the data into three basic sections: the resource header, which holds the locations of the other two forks, the resource map, which stores information on all resources in the fork, and the resource data, which contains all of the actual data. These sections are usually organized so that it is easy for the Resource Manager to modify the fork on disk. However, a fork that is expected to be read-only can be reorganized on disk to optimize opening and searching the fork. This provides significant benefits when those files are read off of slower media (e.g., CD-ROMs), but are always useful.
In order to perform these optimizations, you need to have a profile of how frequently you load each resource, and which resources you load at the same time.
When the resource fork is opened, the header is read in, and then the system uses the offsets found there to read in the resource map. This results in multiple file-system reads to different parts of the file system cache. If the resource map immediately follows the header, then fewer file seeks are necessary and the resource fork can be prepared faster.
As mentioned earlier, the resource map is searched linearly, first by resource type and then by name/resource ID. Resources that are loaded frequently should be sorted so that they appear at the beginning of the list; resources that are almost never used should be moved to the end of the search path. This will improve search times and reduce the effect the Resource Manager has on the cache.
Finally, since resource data will be loaded into the file system caches, you can reorganize the data so that resources that are frequently used together are adjacent in the file (generally, in the same 512-byte block). While the first resource loaded will result in a file system call, the other resources will already be in the cache and load much faster. Resources that are infrequently used, and don't work with any other resources should be moved elsewhere in the file. This will improve the file cache utilization.
Macintosh applications tend to have very detailed user interfaces, and thus spend a lot of time inside QuickDraw. This section discusses ways to improve QuickDraw performance and suggest places where bypassing QuickDraw may be more valuable. Optimizing QuickDraw is complicated by hardware acceleration and where your PixMaps are located in memory.
PixMaps
All Macintoshes ship today ship with hardware-accelerated QuickDraw. However, most hardware accelerators can only accelerate calls when the images are being rendered directly into VRAM. Offscreen GWorlds are currently only created in regular memory, so most QuickDraw calls will be accelerated when being drawn to a window and software rendered when being drawn to an offscreen GWorld. Hardware accelerated blitters will almost always beat any software blitter.
GWorlds
However, this ignores the overhead of QuickDraw to get to the actual drawing code. When a QuickDraw call is made, a large complex structure is created that describes the blit. This structure includes any explicit parameters as well as implicit parameter information (usually, the current port and GDevice). This structure is then passed on to each registered accelerator on the machine, who examine the structure and determine if they can accelerate that call. If no accelerator will accept the call, then the software blitter will perform the work. Generating the drawing variables and determining the blitter takes a fair amount of time, and will tend to thrash the caches.
Finally, copying data from system memory to VRAM is bottlenecked by the PCI bus. This tends to affect copying large PixMaps with CopyBits more than if affects simple shape drawing (e.g., PaintRect).
CopyBits
PaintRect
When an application needs to do sophisticated compositing, it is often better to do this drawing into an offscreen GWorld and then copy the final results to the screen. By matching this GWorld to the window, the application can guarantee QuickDraw will choose an efficient blitter to copy the data to the screen. When copying this data to VRAM, a hardware accelerator will probably DMA this data directly over PCI, which still allows a system memory to VRAM copy to run faster than a software blitter. To get the best speed out of QuickDraw blits, you should match the pixel format, color tables, and pixel alignment. You should also perform simple copies with black as the foreground color and white as the background color. Not doing any of these will result in a less efficient blitter being run inside QuickDraw.
Custom blitters are an option, but think carefully before you really try to beat QuickDraw. Hardware accelerators will usually beat a custom blitter for any significantly sized blit. However, for small blits, the overhead of QuickDraw means that a specialized blitter can beat QuickDraw. "Large" and "small" can change depending on the version of the OS and the underlying graphics hardware, so they are left intentionally vague. For best results, you should compare QuickDraw and the custom blitter at runtime and choose whichever one is faster.
When writing a blitter, you should read the memory section of this technote closely, since understanding the memory systems will be key to optimal blitter performance. In addition, the custom blitter should be as specialized as possible; the more general the blitter, the less likely it is that you'll be able to beat QuickDraw.
RAVE and OpenGL have less overhead than QuickDraw and are a good choice when speed is critical. Generally, commands are either dispatched immediately to the hardware or are buffered. Buffering usually allows more efficient utilization of PCI bandwidth.
Technote 1125, "Building a 3D application that calls RAVE", covers RAVE optimizations in detail, but most 3D hardware-accelerated applications are limited by one of three major areas:
The pixel fill rate is a limitation of the 3D hardware. In general, this is a measure of the clock speed of the graphics processor and the number of pixels it can issue on each clock cycle. The larger the frame buffer gets, the more pixels will need to be filled. In other words, if pixel fill rate was the only bottleneck in an application, then rendering a 1280x960 buffer would take four times as long as rendering a 640x480 context. In general, once the pixel fill rate has been reached, there is little that can be done to improve the speed of the graphics engine.
All state changes and geometry must be sent to the 3D accelerator over the PCI bus. Thus, the amount of bandwidth defines the upper limit on the number of polygons that can be passed across the hardware. Large state changes (such as loading new textures) will reduce the amount of bandwidth available for geometry.
Finally, the graphics hardware needs to be configured differently for each graphics mode. In addition to the necessary bandwidth required to communicate a state change, there is usually a time delay for reconfiguring the hardware.
The fastest way to draw a piece of geometry is not to draw it at all. Rather than sending every polygon to the programming interface, you should use a higher-level algorithm to eliminate objects that aren't visible to the user. For example, QuickDraw 3D allows a bounding box to be specified for a piece of geometry; if the bounding box is inside the viewing area, then no other calculations need to be done for that entire geometry. Similarly, many games use BSP trees or portals to cull sections of the world. Not drawing a piece of geometry results in fewer vertices being passed to the hardware, and also may reduce or eliminate other state changes in the hardware.
You should tailor the number of polygons used to draw an object to the size of the object in the scene. Rendering a fully detailed model in the distance may result in triangles that are smaller than a pixel in size. This is a waste of bandwidth; instead, a lower-detail geometry should be used. Similarly, creating geometry with shared vertices will also use bandwidth more efficiently. For example, sending 12 triangles to the hardware is normally 36 vertices worth of data. However, if those triangles are arranged in a strip (where every two triangles share an adjacent edge), then the entire strip can be specified with only 14 vertices, for a better than 2 to 1 savings in space.
When possible, you should sort rendering based on the graphics mode being used. This will reduce the number of state changes being made in the hardware. The more time you spend in a particular mode, the less time you are spending changing the state, and the more time you are spending actually rendering polygons.
Textures are a special form of state change that are worth discussing in more detail. Loading a texture onto the hardware is a significant drain on the available bandwidth. For example, if we assume that a vertex is 32 bytes, then loading a 256x256x32 bit texture is the equivalent of 8,192 vertices! In addition, there is usually a limited amount of VRAM available to hold textures, so textures will need to be removed from the card to make room for other textures.
Like other state changes, sorting your polygon data based on the textures used will help to minimized the amount of times you change the texture. OpenGL uses a least-recently used (LRU) algorithm for determining which textures to throw out. If you use the same texture order every time you draw a frame, you'll actually get poor performance, because your textures are always being thrown out before you need them. Instead, alternate your texture rendering order so that every other frame is drawn backwards through the sort order.
When changing textures in GL, use glBindTexture rather than recreating the texture. This reduces the amount of time required to copy the texture into VRAM, since all of the information has already been generated. Similarly, if you are updating an existing texture, use the glTexSubImage call to change the texture data; this reduces the amount of information required to update the texture.
glBindTexture
glTexSubImage
Finally, if possible, make sure you are providing the textures to the hardware in a format they support natively. This eliminates any costly conversions to a native format.
Finally, Apple has implemented two extensions to OpenGL. Multitexturing allows a multi-pass rendering algorithm to be rendered in a single pass. This reduces the amount of PCI bandwidth required to render a polygon, and also uses the fill rate more efficiently. Similarly, compiled vertex arrays are a mechanism that allows the application to tell OpenGL that a vertex list is not going to change. This allows OpenGL to efficiently pack this information into the command buffer for rendering. Using compiled vertex arrays and glDrawElements calls offer extremely efficient performance.
glDrawElements
Optimizing network operations is covered in detail in Technote 1059, "On Improving Open Transport Network Server Performance."
The key to optimizing sound code on the Macintosh is to optimize your sound formats to the hardware of the machine. If your sound doesn't directly match the sound hardware, the sound manager instantiates additional sound components to convert your sound to the correct hardware characteristics; this introduces latency into your sound playing as well as using additional CPU time. See Inside Macintosh:Sound, Chapter 5.
In order to reduce the latency of the sound being played back, you should optimize your buffers to the buffer size of the hardware. You can find out the native sample type and buffer size by creating a channel, playing a sound on it, and then calling GetSoundOutputInfo on the output component.
GetSoundOutputInfo
/*
Returns the size of the output buffer in bytes.
*/
static long GetSoundOutputBufferSize (Component outputDevice, short
sampleSize, short numChannels, UnsignedFixed sampleRate) {
SoundComponentData outputFormat;
OSErr err;
SndChannelPtr chan = nil;
SndCommand cmd;
ExtSoundHeader sndHeader;
long bufSize = 0;
err = SndNewChannel (&chan, 0, 0, nil);
sndHeader.samplePtr = nil;
sndHeader.numChannels = numChannels;
sndHeader.sampleRate = sampleRate;
sndHeader.loopStart = 0;
sndHeader.loopEnd = 0;
sndHeader.encode = extSH;
sndHeader.baseFrequency = kMiddleC;
sndHeader.numFrames = 0;
sndHeader.markerChunk = nil;
sndHeader.instrumentChunks = nil;
sndHeader.AESRecording = nil;
sndHeader.sampleSize = sampleSize;
sndHeader.futureUse1 = 0;
sndHeader.futureUse2 = 0;
sndHeader.futureUse3 = 0;
sndHeader.futureUse4 = 0;
sndHeader.sampleArea[0] = 0;
// This really isn't needed since the Sound Manager currently ignores this value.
UnsignedFixedTox80 (sampleRate, &sndHeader.AIFFSampleRate);
// Get the sound channel setup so we can query it.
cmd.cmd = soundCmd;
cmd.param2 = (long)&sndHeader;
err = SndDoCommand (chan, &cmd, true);
if (err == noErr) {
err = GetSoundOutputInfo (outputDevice, siHardwareFormat, &outputFormat);
}
bufSize = outputFormat.sampleCount * (sampleSize / 8) * numChannels;
return (bufSize);
}
Time manager tasks are deferred by the virtual memory system so that page faults can be taken inside a task. If you can guarantee that both the data and code for a time manager task are held resident in memory, you can make your tasks run more efficiently and accurately. See Technote 1063, "Time Manager Addenda", for details.
Most of the discussions in this technote have been algorithmic in nature: places where the choice of algorithm affects how you call the operating system or access memory. While algorithmic changes will get you the most significant improvements, additional improvements are possible by simple changes to the C code. These changes are often just hints giving the compiler additional information that allows it to emit more efficient code. These types of changes are most useful inside computation intensive bottlenecks, but can be useful in just about any C code.
Whenever possible, scope variables as tightly as possible. Compilers perform lifetime analysis on every variable in order to allocate the actual PowerPC registers to the variables. Scoping a variable more tightly helps the compiler by allowing it to reuse the same registers for multiple local variables inside a function.
Making a global variable static restricts it to a single source file or function. Either of these allows the compiler to perform additional optimizations on that global variable. And as mentioned earlier, temporarily assigning a static variable to a local variable further restricts the scoping, allowing the compiler to further improve the compiled code.
Most instruction scheduling is performed on basic blocks. A basic block is a set of instructions that are not interrupted by any branch instructions. A key goal in optimizing C code is to increase the size of the basic blocks so that the compiler has more instructions to schedule.
Code copying is one way to increase the size of a basic block. It involves taking the same piece of code and copying it into multiple places inside the source.
For example, let's assume I had the following code, where (b), (c), and (d) are relatively small code blocks.
if (a)
{
b
}
else
{
c
}
d
Since d is a small block, copying it into both conditions of the if statement will increase the size of both basic blocks.
if (a)
{
b
d
}
else
{
c
d
}
This type of operation is less useful when copying a large number of instructions, because it tends to add more cache overhead.
Loop unrolling is a simple form of parallelism that both reduces the amount of loop overhead and increases the size of the basic block. Loop unrolling involves simply duplicating the inner loop multiple times. For example, take the following loop:
for (loop = 0; loop < 1000; loop++)
{
a[i] = b[i]+c[i]*d[i];
}
This loop is a perfect place to exploit parallelism because none of the results rely on the previous set of calculations. We can unroll this loop twice and get:
for (loop = 0; loop< 1000; loop +=4)
{
a[i] = b[i]+c[i]*d[i];
a[i+1] = b[i+1]+c[i+1]*d[i+1];
a[i+2] = b[i+2]+c[i+2]*d[i+2];
a[i+3] = b[i+3]+c[i+3]*d[i+3];
}
First, the loop overhead on this loop will be much smaller than regular loop. Second, we now have four sets of instructions that can be scheduled against each other. While one set of calculations may be stalled, another may be executed. Loop unrolling tends to avoid stalls caused by waiting on a particular instruction unit.
Most of the benefits of unrolling a loop will be found on the first two iterations. Unrolling larger loops or loops with many local variables is often counterproductive. Because each variable is duplicated, excess variables may be written into main memory, significantly hindering performance. In addition, this can significantly increase the size of the code, resulting in more code needing to be loaded into the instruction cache.
Note that loops that can be unrolled should also be examined for places to support vectorization or multiprocessing.
When unrolling a loop, don't continuously increment pointers. Instead, use array accesses and a single increment instruction. This will result in a tighter loop and less unnecessary instructions.
After memory accesses, branches are the next most common place where the PowerPC will stall. Minimizing the number of branches in a code increases the size of basic blocks and reduces the opportunities for branch prediction penalties.
First, the common path through the code should be as efficient as possible. Code that rarely executes should be placed at the end of the function, or in an entirely different function. This will prevent rare code from being prefetched into the instruction caches.
When possible, perform calculations without branching. For example, the abs() function is often calculated using a ternary operator in C.
abs()
long abs_branch (long i)
{
return ((i>=0) ? i : (0-i));
}
00000000: 2C030000 cmpwi r3,0
00000004: 41800008 blt *+8 ; $0000000C
00000008: 48000008 b *+8 ; $00000010
0000000C: 7C6300D0 neg r3,r3
00000010: 4E800020 blr
If this function were inlined, it would have two branches, which would break basic blocks and offer opportunities for mispredicted branches. However, this code can be implemented without branching. The following code is based on an assembly snippet from the PowerPC Compiler Writer's Guide:
long abs_nobranch (long i)
{
long sign, temp, result;
sign = i >> 31;
temp = i ^ sign;
result = temp - sign;
return result;
}
00000000: 7C64FE70 srawi r4,r3,31
00000004: 7C602278 xor r0,r3,r4
00000008: 7C640050 subf r3,r4,r0
0000000C: 4E800020 blr
This version of the code eliminates all branching and combines two adjacent basic blocks, resulting in more efficient code. Similarly, checking to see if a value falls within a range can be reduced to a single branch.
bool InRange(int value, int min, int max)
{
return ((unsigned) (value - min) <= (unsigned) (max - min));
}
AltiVec offers compare and select instructions that offer greater flexibility in generating code. For a sequence of code, two different result vectors can be calculated (one for success and one for failure). A third vector is used to hold the results of a compare instruction. Finally, a select instruction uses the compare results to choose between the success and failure cases. In addition to the added parallelism, this code executes without any branches.
For example, the following code combines a set of 16-bit source pixels with 16-bit destination pixels using compare and select. It uses the top bit of the 16-bit pixel as a mask. Wherever the bit is 1, we replace the destination pixel with the source. This code sets 8 pixels at a time without having to do any comparisons.
// generate a vector of all zeros.
vxor vZero, vZero, vZero
lvx vSourcePixels, 0, rSourcePtr
lvx vDestPixels, 0, rDestPtr
//Since any pixel with the bit set is effectively a negative number,
//we compare against zero to generate the mask.
vcmpgtsh vMask, vZero, vSourcePixels
vsel vDestPixels, vSourcePixels, vMask
stvx vDestPixels, 0, rDestPtr
If you cannot eliminate a branch, place as many instructions as you can between the branch and the condition it is testing against. This ensures that no branch prediction will occur, eliminating any costly penalties for a mispredicted branch.
A working knowledge of PowerPC assembly is useful when optimizing applications for Power Macintosh. While it is rarely essential for anything to be written in PowerPC instructions, it is always useful to disassemble the code generated by the compiler. SIM_G4 will also take the actual code being executed and provide you instruction-by-instruction details about how your code is executing.
If you need that last 10% out of a particular function, then you might consider writing it directly in assembly language. Assembly language is also useful when you need to get at instructions that the compiler won't normally generate.
The key to writing efficient PowerPC assembly language programs is to perform optimizations and use instruction sequences that the compiler will not normally generate. The following section describes a few useful code examples that the compilers will not generate. Also discussed are other areas you should consider when writing your own assembly functions.
Before writing any assembly language code, you should read the appropriate processor manuals along with the PowerPC Compiler Writer's Guide.
The PowerPC instruction set provides load and store instructions that automatically perform byte swapping. If you are frequently loading and storing data in little endian format, these instructions will be faster than the macros provided in Universal Headers (Endian.h).
The update forms of a load or store will access memory at the offset and updates the address register to point to the same memory location. This allows a tighter loop to be generated by eliminating unnecessary addition instructions. However, don't go overboard with these update instructions as too many of them in a row can stall the processor.
The condition register is 32 bits wide and can hold results for up to eight different compares at once. Any compare whose value is not going to change should be placed in one of the condition registers and left there for as long as possible. Some compilers do a poor job of separating compares and branches; leaving a value in the condition register means that branches will always be predicted correctly.
If you have a large number of bit flags, compilers often generate a separate rotate instruction for each test. Since the rotate and branch are often not separated, this must be branch predicted, with the potential misprediction penalties.
Instead, move as many flag bits into the condition register as possible and test on the bits manually.
UInt32 options;
mtcrf 3,options // move the bottom 8 bits of options into CR6-7
bf 31, foo // if flag 0 is false, skip
...
bf 30, bar // if flag 1 is false, skip
...
bt 29, baz // if flag 2 is true, skip
...
This is more efficient than what most compilers will generate, and also allows the GPR that holds the options flag to be reused for other purposes.
Another example is to move the bottom bits of a counter into the condition registers. This is useful for unrolled loops and loops which copy bytes of data. For example, the following code will move 16 bytes at a time, and then use the condition register bits to move the remaining 15 bytes.
; r3 = source
; r4 = destination
; r5 = number of bytes
rlwinm. r6,r5,28,4,31 ; how many 16 byte blocks to move
mtcrf 1, r5 ; move bottom 4 bits into CR7
ble move8 ; no blocks to move, finish the last 15 bytes
; perform the unrolled loop, moving 16 bytes at a time.
; this loop ignores alignment
loop:
subi. r6, r6, 1 ; loop counter
lwz r7, 0(r3)
cmplwi r6,$0000 ; are we done loopingne loop
move8:
bf 28, move4
lwz r7, 0(r3)
lwz r8, 4(r3)
addi r3,r3,8
stw r7,0(r4)
stw r8,4(r4)
addi r4, r4, 8
move4:
bf 29,
lwz r7, 0(r3)
addi r3,r3,4
stw r7,0(r4)
addi r4, r4, 4
move2:
bf 30, move1
lhz r7, 0(r3)
addi r3,r3,2
sth r7,0(r4)
addi r4, r4, 2
move1:
bflr 31
lbz r7, 0(r3)
stb r7, 0(r4)
blr
To summarize, the condition register fields can hold bit flags and other useful data, simplifying the compare-branch portions of your code, and freeing up general purpose registers that compilers might allocate to hold flag data.
The PowerPC architecture includes a dedicated counter register, which can be set to a fixed number of iterations. One big advantage of the counter register is that it always predicts correctly, so that mispredicted branches do not happen. For example, the byte-copying code could have used the following code instead:
rlwinm. r6,r5,28,4,31 ; how many 16 byte blocks to move
mtctr r6 ; move into the counter
ble move8 ; no blocks to move, finish the last 15 bytes
; perform the unrolled loop, moving 16 bytes at a time.
; this loop ignores alignment
loop:
lwz r7, 0(r3)dnz loop
However, some loops aren't always going to benefit from using the counter register. Branches can usually be folded out by the processor, effectively executing an additional instruction on that cycle. The BDNZ instruction, however, must update the counter register, so it takes up a reservation station and must be completed in order.
BDNZ
Decrementing and testing a register can be faster if both the decrement and compare instructions can be scheduled into an otherwise empty pipeline slots. For example, the original loop described above was dominated by the load-store unit, so the integer unit is relatively dormant. In this case, we can schedule the decrement, compare and branch for free. This loop also only takes one more instruction than the counter version of the loop.
So, the counter register is best used in loops where other registers are scarce, or where there aren't enough instructions to hide the decrement and branch.
The counter register is also used as the destination for a branch. This is very useful when you want to precalculate a code path outside of a loop, because inside the loop you can quickly branch to an efficient implementation of that function.
The PowerPC-calling conventions are designed around high-level languages like C and Pascal. In addition, they have to be flexible when dealing with pointers to functions, since those functions could exist in either the application or another shared library.
Within assembly code, many of these ABI restrictions can be relaxed. For example, the standard cross-TOC glue is normally invoked when going through a function pointer:
lwz r0,0(r12)
stw RTOC,r20(SP)
mtctr 0
lwz RTOC,4(r12)
bctr
However, when the function being called is in the same library, the RTOC values will never change; we can simplify this glue code to the minimum possible:
lwz r0,0(r12)
mtctr 0
bctr
A leaf function can use any unused registers in R3 through R10 without having to save their values off to memory. Similarly, registers R11 and R12 are normally used as environment glue by the compiler. In an assembly function, these registers are also available for the function to use however it wishes.
As mentioned earlier, the ABI is designed around a high-level language like C or Pascal. This limits the ABI to a single return value being passed back to the caller via a register. If multiple values need to be returned, then the additional values must be passed through memory. An assembly language program can define its own functions with its own ABI. While it must adhere to the stack frame design, a custom ABI could allow for additional input and output parameters to be stored in registers. This significantly reduces the number of memory transactions and can improve performance.
The rotate instructions on the PowerPC processor are very powerful and should be used to pack and unpack data. Most C compilers now emit these instructions aggressively, but understanding them is important for producing efficient PowerPC assembly functions.
Jon Louis Bentley, Writing Efficient Programs (ISBN 0-13-970244-X)
Kevin Dowd, Charles Severance, High Performance Computing (ISBN 1-56592-312-X)
David A. Patterson, Computer Architecture: A Quantitative Approach (ISBN 1558603298)
Steve C McConnell, Code Complete : A Practical Handbook of Software Construction (ISBN: 1556154844)
Rick Booth, Inner Loops (ISBN 0-201-479860-5)
Apple's Instrumentation SDK
Technote FL16, "File System Performance and Caching"
Technote QD21, "Of Time and Space and _CopyBits"
Technote 1008, "Understanding PCI Bus Performance"
Technote 1059, "On Improving Open Transport Network Server Performance"
Technote 1063, "Time Manager Addenda"
Technote 1109, "Optimizing QuickDraw 3D 1.5.3 Applications For Maximum Performance"
Technote 1125, "Building a 3D application that calls RAVE"
Technote 1121, "Mac OS 8.1"
Taking Extreme Advantage of PowerPC
Performance Tuning
Balance Of Power: Advanced Performance Profiling
Balance of Power: Enhancing PowerPC Native Speed
Balance of Power: Tuning PowerPC Memory Usage
Chiropractic for Your Misaligned Data
PowerPC Compiler Writer's Guide (ISBN: 0-9649654-0-2)
MPC 750 RISC Microprocessor User's Manual (Motorola, MPC750UM/AD) (PDF File)
MPC 750 RISC Microprocessor User's Manual Errata (Motorola, MPC750UM/AD) (PDF FIle)
PowerPC Microprocessor Family: The Programming Environments for 32-bit Microprocessors (PDF File)
PowerPC Microprocessor Family: The Programmer's Reference Guide (PDF file)
Apple's AltiVec Page
Balance of Power: Introducing PowerPC Assembly Language
Understanding the PowerPC Architecture
Understanding PowerPC Assembly Language
Acrobat version of this Note (432K).
Download
Optimization Sample Code (CBuffFileStream and Cacheline Optimizer).
Get information on Apple products.
Visit the Apple Store online or at retail locations.
1-800-MY-APPLE | http://developer.apple.com/technotes/tn/tn1174.html | crawl-001 | refinedweb | 16,092 | 51.18 |
2018-05-24), Vladan Djeric (VDC)
Further Secretariat Updates
(István Sebestyén)
IS: The ECMA opt-out is ending tomorrow. So far we haven't received anything, and I don't expect any opt-outs. We're going to the General Assembly in June.
IS: TC53 deals with variables. One of the work items is related to variables which are based on ECMAScript. Management and participating companies (including Bocoup and Moddable) which are already doing the management. Trying to have our first meeting at the end of September/October. We're trying to explain our goals (and differences between TC39) and advertise this group to prospective group members.
Numeric separators update
(Sam Goto)
SGO: This represents work from Rick, Dan, Leo, and myself. We've converged on a proposal to move forward, but also looking for recommendations. Numeric Separators were Stage 3 when we uncovered a conflict with a Stage 1 Proposal (Extension of Numeric Literals). We blocked that proposal and tried to resolve conflict.
SGO: The first feature allows underscores exclusively for readability. These Numeric Separators are Ignored at runtime, and must be between two numbers to improve reading large numbers. The second feature is numeric literals extensions, which offered to write a number, followed by an underscore and an identifier, which translates to be transformed into that number with units. This was designed to make it easier for userland numbers to be more expressive. The problem was that this conflicted with Numeric Separators—we both use the same delimiter.
SGO: The thing that follows a number is an identifier, so it literally transpiles
1234_i to
_i(Object.freeze(string: "1234")), which assumes the function
_i is defined somewhere.
RW: A couple of alternatives: note that these are listed in order of simplicity/preference.
SGO: Caveat: if you don't like this one, you probably won't like the ones that follow. Alternative 1: to pick a different sigil compatible with the resolution mechanism. (i.e. two underscores for extensions). Problem is it's not very ergonomic, but it is the most sound and simplest solution. Alternative 2: different sigil, but incompatible with current resolution. For example:
\``, e.g.0x12_34_ab`bytes`, which is slightly more ergonomic but it's incompatible with current options. Alternative 3: restrict extension to decimals. Because separators don't allow underscores at the end of the number, we can guarantee that the extension is unambiguous, but doing so means we can't use decimals.
YK: What is the set of things in decimal literal.
WH: You can get integer literals, fractions, and exponents.
SGO: Alternative 4: a pair of separators (_ for extensions, 1 for separators), but this would necessitate a Stage 3 -> Stage 4 regression,
JH: Between 1 and 2, (Alt 1 is basically the same as the original, with a different sigil). 2 seems much more ergonomic.
RW: I can speak to that. When you have all these names floating around in a program, there's a belief that forcing this convention, we reduce the likelihood of variable name collision. Whether or not you agree with that,
DD: Not being able to use imaginary numbers in a for loop seems pretty terrible... With
for (let i = 0; ...) { 123i } the
i would now refer to the wrong thing.
JH: I hear the conflict, but I feel if that one case is an issue, we can use j or k in a for loop, if i conflicts with imaginary numbers, or just to use the more common pattern of iteration.
SGO: Your point is valid.
JH: Using a different sigil seems like the best thing with Alternative 2.
SGO: That works for us.
MS: In Alternative 3, Colors are often described in Hex, so that seems like a potential collision. If someone wants to do a manipulation of colors with the Extension proposal, this would be impossible.
WH: I like any of these except for #3, which doesn't work. You get ambiguities related to decimals: if you define a unit called
_0, that would be ambiguous.
_0_px is also a good one — even with units not containing exclusively decimal digits you get ambiguity.
TST: I like Alternative 4. There are two features that could get quite wide adoption, so this is an area where every day users could get affected. These features are clearly tightly connected, and since they are potentially so big it may make sense to go to the whiteboard and carefully consider the best solution here.
YK: I am pretty viscerally opposed to 1. We shouldn't underestimate the language precedent issue. In a clean slate, I'd prefer #4.
RW: A person named Rick worked on C++ standardization, and helped us with this proposal. He pointed out to us that some users added an underscore extension for literals in C++, and then you couldn't use underscore for separators anymore so they had to roll back the feature proposal. (Ultimately, they used an apostrophe for a separator).
DH: We could be consistent with another language, which seems like a very strong argument to mimic the style C++ chose.
YK: I think users coming from other languages expect #4. C++ is a huge outlier here, so my argument is to match other more popular languages.
SGO: Java, Ruby, C# all use Underscore, C++ is the only one that uses apostrophe.
RW: But, again as Jeffrey from C++ told us, they said if they could go back and redo everything, they would have never used an apostrophe.
DE: A minor point about C++, the user defining numeric literals was from a user extension, not part of the language specification.
DH: Anything that requires multiple underscores is aesthetically gross, it's a non-starter for me. The second one, the single apostrophe also bothers me a lot. I love the let's call them "GDPR" separators—if you squint really hard they seem like a natural compromise between US and European separators. It's unclear how extensive the precedence is, and how far the divergence from precedence is—like the first time you see 1_000_000 might be surprising, but it's an easy thing to figure out in context.
RW: To clarify: row 4 means this might happen when we go back to the drawing board to redo this Spec. Since you said you really like the way that this works, I want to mention this as a caveat; we may not end up choosing the _, so you liking that may not ultimately
SGO: Moving to 2 means going back to the drawing board. Staying at 3 means we're confident enough that we've resolved these conflicts.
DE: Let's extend.
WH: Alternative 2 has a gotcha: if you use the form of alternative 2 that uses spaces to separate the unit from the number, you'll get more ambiguity.
in would be a convenient name for a unit for inches, but if you say
3 in that would be ambiguous.
TAB:
in is a binary operator, but this unary
WH:
3 in / opens up a can of worms — is the slash the start of a RegExp or division? This is not viable.
RW: We wanted to look for how often our extension is used in practice. Unfortunately, this is very difficult to measure.
DD: For C++ user-defined literal extensions are heavily used for sane strings
AP: Bloomberg uses numeric extensions internally for decimal literals.
DH: This is a very valuable way to collect feedback. I would caution, however to attempt to declare any particular consensus.
RW: It sounds like we are going to #4, in that case.
YK: This is a weakly held position, but a lot of people have used underscores in Ruby, and that would be very natural for users
MHK: I don't want to get into a situation, where we have to update parseInt.
RW: If you use parseInt, nothing changes—you're feeding parseInt a string, so the behavior never changes.
RJE: Sounds like we're going with Alternative 4.
RW: Rather than run through pop-quizzes about "what happens when," I suggest you look through the Spec which fully defines the semantics of Numeric Separator Literals, including behavior in the terms of Number, parseInt, parseFloat, etc.
Conclusion/Resolution
- Demoting to Stage 2 from Stage 3.
- Coupling proposals to create a holistic design
Pattern Matching for Stage 1
(Kat Marchán)
KMN: Rust, Elixir etc. all have this pattern matching feature. You can think of this as an advanced
switch. Pattern matching is one of these core features that new users of a language are introduced to and use them all the time from then on. How would this look for JS? (Shows example slide). Another motivating example, in React, you're often matching with deep values within your state (in Redux, for example, and you have a message with a complex structure).
KMN: There are 3 separate proposals here: the core proposal uses existing patterns to make basic pattern matching work, As-patterns allow you to bind to the expected value to a variable more ergonomically, and Collection Literals allow you to map matches to custom data structures.
KMN: Core Proposal: the semantics are based on the structuring assignment. Using
when to say when the property matches this literal, do this.
WH: What does
when 1 mean? (Object Equality) Does
when 0 match -0?
KMN: This applies to numeric literals, and I believe -0 is not a numeric literal. Continuing...
when x is an irrefutable pattern. You can also have guards using something like
when x if ()
DD: What is the match doing here? If you do this on an object that doesn't have a match method, will that throw an exception?
KMN: Yes.
WH: (Question about the scope of a variable
v on the slide)
KMN: It's like
if scoping.
MM: Why did you reject the arrow function body?
KMN: When I was playing around with it and showing people, these cases (continue and break) got very weird. Let's not create a completely new kind of scope.
DD: You also have an example that works with a return which makes this a lot simpler.
WH: What does
continue do in here? It doesn't seem to have it regular meaning of resuming the innermost enclosing loop.
KMN: That's exactly what it would do. There is also no fall-through (not implicitly or explicitly). Anything that is a collection literal can use this. For pathological cases:
Infinity,
NaN,
undefined, etc. we currently have no specific answer.
WH: It's not just
-Infinity, even
-3 wouldn't work here.
KMN: Yeah, I guess negative numbers don't work either.
MM: Does the syntax use parens to wrap expressions?
KMN: I don't think you should be able to do that. I'd rather avoid it.
MS: (something...)
MM: There's too much ambiguity
EFT: You want to basically do a jump table on some hidden class.
DD: Personally, I'd really like to make these things work, it may be special casing but does that bother anyone?
KMN: I think this is a nitty gritty question that we can talk about later.
KMN: It is specced to use NLTH (No line terminator here), I don't want to. I hate NLTH. This is not pretty or great. There's a number of languages that use
case, so I prefer that. Or we could use
super switch which would not be syntactically ambiguous (jokingly). I would rather use proper grammar here (as opposed to NLTH) but this is the core proposal, and for simplicity let's start with that. There's a massive thread on the issue tracker. Please jump into that if you have opinions. There's issues for working with iterators and how many times they get called.
KMN: I want the other proposals to land (especially collection literals), since they would correspond to pattern matching in other languages (any languages with record constructors). The concept behind pattern matching is to use the left-hand side as a simulation or mirror for the match to succeed. It's very important to have that correspondence so that you're not learning a special syntax just for this.
WH: Is a the only thing allowed within
match block a
when clause?
KMN: Correct. We could add default, but it's not necessary. You don't actually need it.
MSL: If the parser sees an
match ( (match plus open parens), does the parser have to parse as a potential function?
KMN: Yes, there's NLTH and a lookahead to handle this case.
MSL: Another point, to resolve the undefined ambiguity, could
when === be an expression for syntactic sugar for
when if()?
KMN: This is not adding very much, I think (over the current if-guard syntax). In general, these kinds of things slow down the pattern matching a lotl
SYG: We looked at this - the iteration protocol is destructive. Multiple array patterns in the same Match statement could exhaust an iterator. Is this desirable?
KMN: It would only iterate over it once - in the lifetime of the match statement. Multiple cases of
when [...] will only cause one iteration. It would also need to call next() one more time to determine the length of an iterator.
SYG: Are unbraced declarations allowed in the body of
when? If so what is the scope?
KMN: Yup. It's the same as
if.
SYG: There are some edge cases, around let
KMN: This just copies the syntax from
if itself. I'm not sure what the issues are around the edge cases.
TST: The behavior you just described, does that also apply to
property.gets? (Yes) Multiple
whens with the same property, is that OK?
KMN: We can make this normative but it was assumed that would be the case.
MF: We can solve some of the problems you mentioned by allowing arbitrary arithmetic expressions. This exists with your true and false and nulls, but we can also add a parenthesized identifier.
KMN: I think that was already mentioned.
MF: It's worth exploring arithmetic expressions though.
KMN: I don't think we need arithmetic expressions. Basically, the negative numbers we can address and we could special case unary literals though - like Infinity/NaN. MF: Do you allow spread destructuring?
KMN: It's not in this example, but if you put a splat here it matches on any length.
MF: that's exactly what I'd expect, and frankly the most common usage for me for this feature.
KMN: I considered special-casing splats (to note include the variable/pattern following
.... That is
[a, b, ..]) because of their frequent use, but I think not doing this is much simpler.
MF: I agree. This not having an expression form makes it less useful to me. Why would you not want expression form?
KMN: Optimization.
MF: Putting a match in expression position?
KMN: If this becomes an expression it needs to answer all the questions
do expressions have.
MF: I think that would be great to answer.
KMN: While
do is alive, I would want to let
do do it's thing. I would much rather all conditional statements use the same semantics for their bodies. Having match define its own semantics, and then having different semantics for if statements within do expressions would be bad. What does it mean for an expression to have a statement? We don't have answers to those questions yet.
MF: I wouldn't feel comfortable conforming to
do's specifics, given that
do hasn't gotten much progress recently and it's basically used as a mechanism to stop more interesting proposals.
KMN: My opinion is all or nothing. Either
do lands or it doesn't.
BN: What about object ...spread?
KMN: Object spread is object spread - including its oddities. For example, the ...rest can only be an identifier, with no further destructuring inside.
DH: First, this is a ton of great work. Years ago, I wanted to do similar things, but not nearly as much work so I appreciate this. One other syntactic option is to keep
switch for the name but use
when for the cases?
KMN: It's been brought up and there's reasons...
DH: If we have
do expressions, having to wrap this in the
do expression just to get a result out. This is way too verbose and there's no ambiguity for the precedence there. You may want to couple the challenges of
do expressions, but that's very difficult, so if you need collaboration from other team members, we can make progress on that. Of course we shouldn't block landing this on
do expressions. We should try to make statement right hand sides to work, and we can discuss do.
KMN: Expressions are useful for
if,
switch etc, not just
do.
DH: In a perfect world we would maintain that symmetry, but unfortunately JavaScript is not that perfect world, and we don't have symmetry already. I'd rather get the new, desirable conditional form without that perfect symmetry, then incrementally advanced to reach that symmetry eventually.
KMN: I would like to keep this as a statement until later stages and when we've looked into more of these questions and answers. I'm even comfortable blocking this on answers for
do, eventually.
DH: As this goes through the stages, I will become more and more uncomfortable with (not having them be) expressions.
YK: People really want
match, plus there's some desire of
match not being an outlier (among other programming languages). Because of eval, we have a global explanation to what completions mean. I think it does make sense for this form to work with expressions.
Eric: Are we strict (as in, exhaustive property matches) on objects in the when clauses or just arrays?
KMN: Just arrays.
WH: I'm uncomfortable with proposals that introduce new no-line-terminator-here restrictions and am glad that you're trying to remove those from the proposal.
WH: We need to have a discussion about us introducing new proposals with a lot of new syntax that JS users have to learn, which came to a head at the end of the last meeting.
WH: This proposal conflates assignments with values. A lot of people use named constants for things—if you have an object with a color field and you say
if { color: red }, this assigns the color field's value to red instead of testing if it's the red constant.
KMN: This is covered by the collection literals proposals. I want to keep the variable-based semantics.
WH: Collection literals doesn't apply to named constants. I would be uncomfortable with this because it conflicts with named constants.
KMN: There's a number of named discussion points in that proposal, so you should consult that. What elixir does, which has similar semantics to JS, it puts a hat before a variable, and that adds a pin to that variable. e.g.
when ^x that would match against a higher-scope x. This is called the pinning operator.
DD: Could I persuade you to add catch guards to the proposal?
KMN: I don't think it makes sense in this proposal, but rather in a separate connected proposal.
DD: This is not nearly as useful to me as catch guards.
??: One cool piece in other languages is for pattern matching is exhaustiveness checking. Have you thought about that?
KMN: I had that in the proposal because it's common in other languages. It's not very hard to do your own exhaustiveness check, but it's kind of either or and open to discussion. I personally like exhaustiveness checks.
WH: Not objecting to Stage 1, but I want to reiterate that I'm uncomfortable because of the issues I raised above.
Conclusion/Resolution
- Stage 1 acceptance
Tagged Collection Literals for Stage 1
(Kat Marchán)
KMN: It would be great to construct a map with an object literal, or a Set with an iterator/array. This gets very cool with destructuring.
MM: Not a Stage 1 blocker, but we'll be fighting over infix bang.
KMN: That's fine, I'm going to win.
KMN: This would make it useful for pattern matching. (Shows example of pattern-matching )
WH: Does this use the current value of x or assign the value to x? If you have x:y inside the tagged collection literal, does it assign to x and y or use the current values of x and y?
KMN: These are fully numeric keys in this case.
DH: In your first presentation, you talked about
new Map, and the structuring/destructuring Map. Does this proposal supplant that? Would this new syntax completely replace
new?
KMN: It would not use the same protocol. For destructuring Maps, it uses this thing called
valueOf which returns an iterator. It's possible you could use the valueOf in both cases, but it would have to use three arguments. It could work off the same protocol.
WH: What comes before
!?
KMN: An expression.
WH: So
when clauses can now take an expression?
KMN: Only left of a bang.
MF: I don't see the value in the kind of destructuring you're doing. (Gives example)
KMN: Checking against the type is a thing I want in pattern matching (doing an instanceof check and the value).
TST: Why would it be so valuable for pattern matching?
KMN: This would be very useful for error matching. They may all have the same property names, but you want to handle special cases for these different errors.
TST: You could use guards.
KG: Syntax is expensive and I don't see how this pays for itself without pattern matching, or frankly even with pattern matching. This is just syntactic sugar. I would not like to see this go in without pattern matching.
DD: This proposal is all about syntax, and I have a lot of issues with the syntax. You are using object literal syntax to put something that isn't a string key, for example. Maybe this pays for itself in pattern matching, but you get the same benefits from things like guards, etc. I'd rather see a specialized case for pattern matching as opposed to adding this syntax generally.
EFT: I'm concerned with how this works with pattern matching, in particular this RegEx thing scares the shit out of me. We want this to all be static and literally, and this has upward bindings and erodes some of the goodness of the pattern matching that you had in the previous proposal.
WH: This is much more complicated than I thought reading the original document. I now don't have a good understanding of what is being proposed after the presentation and don't think this is appropriate for Stage 1.
DH: I have a lot of issues with the syntax, but I would not object to Stage 1. In Rust, I believe there are syntactically characterized subsets that we could base off here.
KMN: It hadn't occurred to me that it would actually look that terrible as a special case only in pattern matching but also in destructuring. I pretty much agree with everything you all said, so I will update the proposal with this feedback.
Conclusion/Resolution
- Collection literals withdrawn
- Researching use of protocol for destructuring
- Pursuing
newas part of pattern matching
Binary AST
(Shu-yu Guo)
SYG: Update on Binary AST. Currently at Stage 1, will remain at Stage 1, likely. Normally parsing source text is fundamentally slow, but we could ship a pre-parsed AST (compliant to 262 syntax grammar) to make things fundamentally faster.
SYG: To enable per-function laziness, we need to enable scope annotations. We can also do a single-pass and streaming code generation (a sort of magic, go fast button). These are not orthogonal, it is possible to do both.
SYG: On the Spec front, we want to incorporate these semantic changes and a new over-the-wire format. We explicitly don't want to "handcuff" people on the spec side or implementation side. The binary AST spec will not limit the possibilities of JS as source code. We're speccing a basic tree grammar. It's a shift off an AST, with some differences (written in a WebIDL-like thing). It works backwards from an implementation—you take a source text and transform it into an AST, the spec takes the AST and transform it into an ?? to "Ecmaify" it and to avoid bifurcating the language. The new semantic stuff like scope annotations (which we call "asserted scopes") are checked during the Ecmaify stage.
MM: The variables that appear to be free within
with, may actually be looked-up in a with block?
SYG: Yes. And that's what implementations must do already.
SYG: The things in the asserted scope are effectively the outputs of static semantics (we're encoding them). This introduces the concept of "free names "
MM: What's the difference between "free" and "captured" names?
SYG: "Free" means identifiers that refer to a variable not declared in the current scope. "Captured" names are declared names that an inner function closes over.
SYG: This opens up a smoother adoption process, where implementers can decide, without changing any existing semantics. What do we do about compatibility? Once we output a tree, we cannot change the semantics, and for forwards compatibility the set of AST nodes is purely additive.
EFT: The implementation of this was quite small—it took less than 5000 lines to implement this in Firefox in roughly 5 weeks. Laziness is a thing that all the engines do already, so it was very cheap to implement. This does not require laziness to be quick; when you turn on the laziness, the front-end gets significantly faster (~50%).
SYG: On CDN, this is a very seamless experience for devs; you feel the benefits super fast.
KG: Personally, I would be very enthusiastic about this only being enabled in Strict mode.
SYG: That sounds fine to me. My main concern is that this would hurt adoption, given how much sloppy code there is. The use case here is we want to just make this a switch they'd press. If we say you must also convert your code to Strict mode too, then it may be a bit more difficult to get adoption.
YK: If there's just occasional sloppy code that you need to get around bugs, that seems extremely punitive to force people to use Strict in order to get these performance benefits.
MF: It's worth pursuing a Strict mode variant or something. You get to drop the variable names from the binary encoding in Strict mode, and size is obviously very important. I do think it's worth going the full route of requiring Strict mode, we should at least take advantage of this in Strict mode.
MS: What was a performance benefit you got from JS to binary AST?
EFT: The parser gets 30-50% faster.
MS: You have to go end to end, because part of this is load time.
EFT: 30% of parse time, and parse time is 25% of load time.
MS: You say you get this improvement of the front-end time, which is like 25% of the load time
VDC: The benefit to us at Facebook is not the load time. You want the cost of additional code is asymptotic as length increases. We vastly improve that cost equation using binary ASTs.
MS: How many Alexa Top 100 sites will use this? Facebook would have different websites that you send to different browsers.
VDC: we ship one, es5 to everyone. And there are additional features we ship (including feature testing) that we conditionally ship to other users.
MS: The kind of Portable device vs. desktop device.
EFT: We gave this encoder to Instagram and they had this up and running in 3 days.
MS: Would a very popular website do this for all browsers? But we're talking 6-7 various browsers, this would effectively double the number of bits in a testing matrix for websites to ship this. We're crafting a feature that's not going to be widely used.
VDC: Additional encoding is not a testing exercise. We can trust that the encoding is sound.
MS: More testing bots.
VDC: If you believe there's an issue with the encoding, then yes, you need more testing. But we don't expect this to be the case.
EFT: Maybe we can hear from other people?
SYG: We need to answer this before stage advancement.
MS: You said it was 30-50% of parsing, and parsing is 25%. All this work for 12% improvement?
SYG: It seems like a big ask for implementers to add this feature. But for websites, this is a small ask.
TDE: We go to way further lengths for smaller gains. We do multiple builds for each browsers, done brotli, etc. From the LinkedIn perspective, this would be a trivial integration for performance gains that we would be happy to see.
MS: Would you be willing to ship all six flavors to get this performance gain?
TDE: Yah.
MS: This doubles those.
TDE: We would be comfortable with that.
MS: You talked about once the spec is frozen, those nodes will always work. We then have the issue of tracking the implementation state for other browsers—if some implement but not other browsers do, this becomes a mess to track for backwards compatibility.
MS: I'm concerned with introducing new nodes that are not available in all browsers. We add 5 new AST nodes, some of them do only some. Having to deal with this will bite us. What happens when someone adds a node that doesn't work in all browsers?
DH: We'll just do what we today with JavaScript.
MS: We can't do that—it's a binary node!
EFT: This is up to the host.
DH: This is just like new syntax. If the browser doesn't understand the syntax, it blows up. It's the same thing for a binary format. People use transpilation to ship towards the least common denominator of browsers they want to support. This is an existing problem.
EFT: It's easier, in fact.
YK: I'm confused about some of the points being made by people. Where are the multi-build concerns coming from?
MS: For websites that have a different set of bits for different browsers, now you have a binary AST, that's 4. You also have to support the older browsers that don't support the feature.
YK: So I have 6 builds, if Chrome adds a binary AST, we'd just replace the build for Chrome with that one (dropping support for old Chromes).
MS: What do you do with a Chrome version that doesn't support binary AST?
YK: If I want to support new syntax, I would choose what is supported by
MS: Normally you do this with polyfilling, but that doesn't work for syntax.
YK: I work at a tiny company—7 engineers. We use Ember. We would also find it trivial to adopt this feature. We would add this feature to our build pipeline, so all Ember users would get this feature. We'd definitely allow you to ask for a binary AST. There's a question that it would be hard to build an binary AST, but frankly this is the same step conceptually as using a minifier. People go to extraordinary lengths that take more time and produce less benefit. For people that only target evergreen, as soon as they all support Binary AST people will only target Binary AST. Same as transpiling to old JS.
DFV: Linkedin and Salesforce are in the same boat. Particularly we ship enormous pieces of JS and we have to go through the length already to make the compatible. If this gives us 10% improvements would be amazing, we are doing today crazy things with much weaker effects. We would love the performance benefit, whatever small % we can get. We are very supportive of this proposal.
SG: Just to reiterate what's been said: my personal one not Googles, count users not domains,not devs. I have an intuition, no data, if you count 5-6 devs that constitute a good proportion of web traffic - gmail, linkedin, facebook . these are well funded 500 engineering teams that are, google.com you'd be surprised how much effort goes into that last bit of perf. My intuition is to prioritize for number of users. Even if adoption is small, we can pay the cost.
MS: I work on JS core, so I agree with you. If we do this we're not doing something else. I also don't think the numbers that Eric is quoting will be the same on all browsers. So, maybe 3% performance gains out of Safari, instead of something else.
SG: Agree. Not advocating for this solution. Problems are valid. If it's less accessible to some devs but more accessible to users, that's fine.
MS: I don't think this is solving a long-term problem for the web.
TST: Curious why it won't solve for lots of content producers. If it solves what ... wants to get it, content producers should treat it like GZIP. Accept-header should switch it to CDN. You do not have the explosion in test surface. If you don't believe it will pan out I want to know what the concerns are.
MS: Experience tells me the test explosion will happen.
TST: You think the cloudflare thing just will not work out.
MS: At past companies, it's not as simple as putting a tool out. You're putting a piece of software that you don't have control over.
MS: I'm concerned that the devil is in the details.
EFT: You can just not pass the accept header if you don't think it will go faster.
SYG: We'd like actionable feedback. Would you like numbers on JSC?
MS: We ask for the different implementations, how much time would it take to parse this code on this page. I think it would be pretty easy to ask these implementers for metrics so we can use this data to decide if it's worth it.
YK: A relevant factor. One trouble is writing tools to target lazy-sniffing browsers. It would be good to look at how often people are hitting eager parsing inappropriately. What would the saving be if you didn't hit this.
EFT: You don't have to lex the whole thing.
VDC: WRT to the actual wins, at Facebook we get 10-15% wins out of squeezing efficiency from the parser. The problem is you're still trying to get efficiency from ALL the code that gets sent. Then we want to support more code and more paths, but you need to do the optimization again to get back to the previous level. That's the fundamental appeal of binary ASTs - it's not just a % it really changes the cost structure of the page.
MS: I talked about signed modules in the cache.
WH: First, if you do this, is the intent to change the APIs for things like Modules, Realms to allow ASTs instead of strings for inputs?
SYG: I have not thought about it. It is not my intent at this time.
WH: On the topic of forwards compatibility. It's a different problem than with text — if you interpret text, it's just a string when not being interpreted. On the other hand, an AST might not be representable in the implementation even if it's not being evaluated. So you couldn't even store the input to an
eval even if you don't call the
eval on that input.
SYG: We're not changing eval to take binary ASTs.
WH: Had a long discussion with the presenters at dinner last night. Just want to make sure that we're all on the same page as to what compatibility means with regard to ASTs. Your message is that compatibility means existing nodes strips compiled continue to work as we upgrade the language. But as ECMAScript evolves the same text source code may compile to different nodes even if they don't use new constructs.
EFT: That's correct.
WH: There are examples like changing associativity of
|| that came up recently that show why that may be important. The Hyrum's Law challenge is that folks may come to rely on source text compiling to a specific AST. What are we going to do if we make such a change and it turns out to break the web?
KG: We can avoid that by not exposing the binary AST to users.
MF: Can you explain how a user would rely on a particular AST?
WF: In this committee it came up discussing changing associativity of
|| from left-to-right to right-to-left in order to properly support one of the variants of the
?? proposal. It's invisible from within ECMAScript but would change which AST gets generated. There was opposition related to the effect this would have on Babel's ASTs.
EFT: What I was able to glean from talking to you last night was that Babel has a particular AST API that will be independent from this.
MF: What happens with the binary encoding of the AST?
SYG: The TC39 is most concerned with the tree grammar. The question of the binary encoding - it's just an encoding. It's out of scope of TC39. Should we as a committee own the binary encoding parts? Or defer to another standards body?
MM: Of course we should spec the binary part of the binary AST. I also think that once it's introduced, programmatic access would follow.
SYG: I think that's dangerous given the compatibility constraints.
MM: Programmatic access will follow—allowing evaluators to accept ASTs in addition to strings. It's fine to leave that out now, but I think this is a natural part to a dynamic and it should be part of the language standard to support.
DD: Maybe Mark is talking about something a bit different. We should own the AST and the encoding, but not sure we have the expertise to handle the binary encoding.
SYG: To clarify, if you see the spec that says byte 2 means this, etc.
DD: We need to be extremely involved, like what should be in the default dictionary? We should do a corpus crawl...
MM: The result of applying the expertise, wherever we find it must be included in our spec.
MF: I'm not sure that's true. I think this could have a pluggable encoding for the AST. There may be a different encoding for a different platform.
MM: Having it pluggable gives flexibility we could consider - the utility of having an agreed concrete format for the concrete representation that corresponds to language semantics. There's no need to tie to hosting environment.
BFS: I somewhat agree that this should be in the specification. It should be controlled to some extent by TC39.
TST: I don't think there's one particular group that stands out as being an expert on this subject. IETF, the WASM community group does great things here, but I don't think there's a clear group here for us to spec this out.
EFT: There will be a future proposal where someone invents an arcane encoding.
TST: Luke Wagner specifically offered to help with this, so we should take him up on this offer.
EFT: There are several people pursuing how best to structure it.
DE: In the recent pass, we decided about making this more specific—what should be resolved by stage advancement. There were number of questions that were raised about the binary format, and the level of evidence that it's a sufficient performance improvement. I'm most concerned about these two question—I'm not sure if these are Stage 2, 3, or 4 concerns.
SYG: Stage 2, I think.
DE: I'm talking about entrance criteria—to get into Stage 2 or to get into Stage 3.
SYG: I think Michael's point asking for evidence is important. We need to address for Stage 2. The surface binary stuff is a Stage 3 concern. It does not introduce new semantics.
EFT: If we claim that we have mostly complete spec text to get to Stage 3. How would we go about doing that?
DE: If you have some draft binary format, I think that's sufficient. It will need to be ratified by Stage 3.
SYG: I believe this proposal at this stage is independent of what the binary encoding will look like. The encoding will not impact the utility of this feature.
MBS: I may be off on this, but we're talking about the binary advantage of this on websites?
SYG: Because of compatibility constraints and the nodes being not for tools. It doesn't work with the perf goals we have set. They look very different for the two use-cases.
MBS: I'm interested if you have any numbers for that.
SYG: The new trees might cease to be good for your use-case. We can create a starting point, but not after that.
YK: It may be more useful than you think because there's a lot of AST to AST transformations today and there's not a consistent intermediate format. You could imagine these kinds of transformation being much simpler if there's a universally accepted intermediate AST format.
MBS: If webpack/babel/rollup could all share the same intermediate format there's huge value.
SGN: Regarding numbers, we've seen Facebook's profile is different to other websites. So we want to see other sites. Facebook don't hit the code cache. Binary AST is only for the code start. I want to make it clear that this is not a one-time cost. Spec authors have to spec both Binary AST and JS spec, tooling has to be updated to support both, and the implementers have to implement both. This is not trivial.
MB: I want to see data for other engines. There are other techniques. V8 supports streaming parsing, we parse as the bytes are still being downloaded. I imagine the relative gains from binary AST are different compared to the SpiderMonkey numbers you've seen. Would be interesting to explore.
EFT: I'm open to getting more numbers, but I don't want to corner myself to a place where I need three compatible Binary AST implementations to get to Stage 3.
MB: Still worth exploring other options that do not have the same ongoing cost in parallel. E.g. streaming parsing is an optimization that comes for free for users and developers. There's no opt-in.
EFT: I haven't heard of other ones with this much potential upside, but I'd be open to hear that.
YK: I have been the consumer of many of these improvements. The cost of targeting is very high. If you figure out how to target engines, a year later it's wrong.
DD: Earlier, the analogy to Brotl was brought up. I find it interesting. Not implementing something and still be just as good is not a loss - it is unique for this proposal. It may be interesting exploring this as an optional thing. Maybe not in TC39. If you think about this more as an compression format, then there are different conclusions you can make with different committees, for example.
SYG: A complete spec with the well-understood caveat that the binary encoding may change drastically. Sufficient evidence that we get the perf wins we are aiming for. Some like "across implementations". We hope to work with implementers to find less than 1 engineer year ways to show promise. We can't do 5 years engineering and hope something happens.
SYG: I'd like to reiterate the other point that MB brought up to look at the other design specs to see if there are other potential alternatives that could give us similar performance wins.
EFT: We can talk about the pragma, but it leaves performance on the table.
SYG: In case we progress, Sathya mentioned it will incur spec author cost. The idea is that implementers are free to implement in whatever format they choose. The surface syntax specs should be in lock step. I will personally offer help to spec authors.
Conclusion / Resolution
- Stage 2 entrance criteria:
- A complete spec with the well understood caveat that the binary encoding may change drastically.
- Sufficient evidence that we get the perf wins we are aiming for across websites.
- Some like "across implementations" We hope to work with implementers to find less than 1 engineer year ways to show promise. We can't do 5 years engineering and hope something happens.
Function.prototype.toString() censorship for stage 2 (continued discussion)
(Domenic Denicola)
DE: The notes say Stage 2 with objections, but what does that mean? Can we come to a consensus? One theory is that we expect not to get memory savings? Is this a Stage 2 blocker?
KS: By entering Stage 2, are we committing to solving ? or solving with pragmas? I don't think we want to commit to pragmas at this time.
DE: By Stage 2 we need to figure out if we're going with pragmas.
DD: We're committing to something that works at a Stage 2 level. I have not explored alternatives in that design space.
BT: Probably a good idea to explore alternatives in that design space. Could we consider directive prologues? I'm worried that the pragma is the right approach for that.
DD: If cross cutting concerns revealed that that approach wasn't good, we obviously wouldn't use pragmas.
BT: The reality is that once the pragma exists, if it has any benefits, then this is a pragma that will exist in all JavaScript programs. It's effectively a free turbo button, so why wouldn't you use that? I feel pretty weird that this would be such a clear win, and yet it will still be opt-in.
DD: If we ever saw signs that this was optimizable, we would certainly seek that. I completely agree that this could quickly become a cross-cutting discussion.
YK: If we have Binary AST, then why would you use this?
WH: The question of misspelled pragmas comes up—it's easy to mess up a long name like what's proposed here (unlike "use strict"), in which case nothing happens (silently). Another concern is retroactivity — a pragma inside a function takes effect considerably before the pragma is seen, which caused the "use strict" pragma to be banned there in some situations. If we're addressing cross-cutting concerns of pragmas, we should look into both of these things.
TST: There does seem to be enough of a consensus that this is worth pursuing. WH seems to have some concerns with using a pragma.
WH: I do not. For this proposal, I would say a pragma is the best solution. The issues I raised are generic to all pragmas (and we should explore them). I support this proposal for stage 2.
TST: Would there be anybody in Stage 3 who would object to this?
DD: If you want to block of it because it's a pragma, we should talk about it now.
MS: I'm worried this opens the door for more pragmas (with misspelling). I do share Brian's concern where if I do this, I should use this everywhere.
LBR: I'm concerned for a precedent of adding new pragmas. Following Brian's concerns, if we are going to use these pragmas everywhere, we should first collect data from the host implementation.
DD: I can just repeat my presentation yesterday...
DE: It seems exceeding clear that we don't have a consensus.
MS: I really don't want the pragma.
BT: This is our chance to say no.
MM: The history—when we first introduced "use strict". Doug Crockford suggested both a "use strict"; with the quotes and a use strict; without the quotes. The first is ignored on earlier versions of the platform that don't recognize it, falling back to sloppy mode. The second causes a static rejection on platforms that don't recognize it. The first pragma form is the only syntactic marker we've got that is ignored, rather than causing failure, on earlier versions of the platform.
DD: The thing that would be most helpful is if we decide to not go to Stage 2, to give reasons why not. I don't
Kevin: The takeaway here, the pragma is a very bitter pill to swallow—even if it is the best solution. When we introduced modules, it was a great win that we got rid of boilerplate. Looking at this proposal, the fear is that it is boilerplate.
DD: Definitely, and I think more work can be done in the proposal to clean up that confusion.
DD: This comes down to the question of, are there people who have any objections to adding new pragmas? I need to know, is a pragma OK, ever? MM has successfully communicated that pragmas are the best option here.
MM: The graceful degradation of what happens with the pragma with other implementations is a very critical point that I want heard. We've only ever added one pragma to the language. Why do we have an enormous tolerance for new syntactic features, and no tolerance for new pragmas?
??: Pragmas are the best solution (better than symbols)
SGO: If you take that it needs to be in the source code itself, than pragmas seem to be the only viable mechanism for that. This is also importantly a feature that the developer can use to protect itself. This is done on an author basis, not a the language level—done at userland. My intuition is on the lines of whether the feature itself pays for itself, than pragmas seem to be the only option for it.
JH: How does this interact with error stacks?
DD: Line numbers are tricky.
JH: An error stack trace with some functions that are censored seem impossible to reconcile. But this may be better for a later discussion.
DD: I appreciate the Stage 3 concerns, do we have Stage 2 approval?
Conclusion/Resolution
- Stage 2 acceptance
SuperProperty evaluation order
(Justin Ridgewell)
JRL: (Presenting slides)
WH: Sounds great MM: Yeah SGN: Sounds good MB: Agreed
Conclusion/Resolution
- Consensus
Symbol.thenable for stage 1 (or 2?)
JHD: Exporting a named
then function from a module makes the module a thenable. This logically follows the promise protocol, but means there is now no way to dynamically import a module with a then function. Refactoring hazards exist. Someone could write a module that blocks itself from dynamic import. It's super weird conceptually, but it logically follows from the way Promises work, but it is a problem. This came up with members trying to implement custom module loader in node. There is no way to get a dynamically imported module record. V8 can provide hooks, there are workaround but it surfaced the issue. DD said it would be bad if module namespace objects were magick where they weren't thennable. Namespace objects are thennable now, they shouldn't be through dynamic import - it should be a static picture of the module. Its weird if we make them magick.
import('bar') would give different output to
import foo from 'bar'. Generic solution is to make a thennable object not be thennable.
DD: I was saying this is a generic problem, not that we should solve it.
JHD: Fair, it's a generic problem. The premise here is that we're faced with *.isFoo, or promise.foo, so it's not a WebCompat issue yet, but it could become one.
DD: 4/4 browsers ship this I think
JHD: This will be rarely used Im sure except when someone wants to exploit it. Another solution is to block
export then. This is weird though - only current forbidden is
export default which is not really forbidden, just default export. I think this proposal with
Symbol.thennable = false to block the object being a coerced to Promises is a good one. Hoping for stage 1 today - we have spec text for stage 2. Do we want to pursue this or not? If we do, we want to go rapidly.
BN: Module authors can't choose to export a Symbol.thenable-named export (because it's not an identifier), so this would have to be a blanket policy. Should module authors have control over this?
JHD: No choice, all module namespace objects (which are frozen) would have
Symbol.thenable = false
WH: Why do we need to do this? Yes, it's a strange problem to have. Not sure the solutions are worth the complexity.
JHD: Doing nothing is always the default option. The conceptual weirdness of a namespaced object—using dynamic import you cannot always guarantee the shape of the module, which undermines the benefit of the module in the first place. The concept of
import * as gives you the ability to get all the symbols without having to know them in advance.
WH: I agree it is a problem I just don't agree a solution is needed.
DD: You could make a wrapper module as the consumer or author. This echoes the general solution for the then problem: if you aren't expecting a thenable, wrap it.
JHB: This isn't something in the language spec so we cannot guarantee it
DD: You could guarantee with a wrapper module
JHB: We could make it return a wrapper module then
DD: It's not web-compatible. Dynamic import is shipping and in production
WH: The problem is that adding the symbol will significantly complicate the promise protocol. Different things (userland and built-in) will disagree about whether an object is thenable. That extra complexity in a different area of the language is not justified just to let folks export a module containing a top-level
then export.
JHD: I think it's rare that people will
export function then, I also think it is rare people will specifically create userland objects that do
Symbol.thenable = false.
CM: We're reaping the consequences of the
then magic now. But this feels inside out. This is almost
Symbol.unthenable. The marker for
Symbol.thenable = false to mark something as unthenable seems backwards to me.
JHD: This is not a stage 1 concern
CM: Well it strikes the heart of the proposal. The semantics are odd
JHD: I agree - boolean absence is not the same as true. I would want to update it, but we'll see.
CM: Domenic's proposal feels less wrong
SGN: Promises are already super complicated, let's not complicate it more. The status quo is fine.
MM: Some people will be confused. Principle of least surprise.
Symbol.unthenable creates confusion. When we cannot avoid surprises the way to chose is: static rejection is the least surprising. Static rejection is surprising but I can handle it ahead of time. This suggest the solution which was glossed over: banning "then" as an export - it seems weird but its a good solution considering the human factor.
JHD: I'm content with that, I cannot imagine any reasonable case when you'd need to do that. It would take a lot less spec test, wouldn't complicate promises.
MM: Anyone feel a strong preference to introduce a new static check?
DE: Would that be web compatible?
DD: It's iffy. People are doing this. This is almost a feature.
MM: People are exporting
then?
DD: People are doing it in lieu of top-level
await.
MM: Oh my god. In that case I withdraw my suggestion. Status quo is sufficient - we should simply explain the issue. It is just something JS devs will have to understand. Asynchronous constructs --- promise, async iterators, etc --- only promise non-thenables. Thennables are plumbing through which they rech the non-thenables. JavaScript programmers already have to understand that.
JHD: What happens if top-level await lands? How do we feel about that? People are then immediately provided a migration path.
MM: People leave companies, things stop getting maintained. If there is a hard line to switch, we can't let this happen for webcompat reasons.
DD: This is a generic issue in JS. Same thing for String concat will do toString, Number concat valueOf, its all just protocol hooks. Does this pay for itself? Should we had
Symbol.unvalueOfable or
Symbol.untoStringable?
JHD: I agree but module namespaces are super special. The others do not follow this as much.
DD: The + operator does this
JHD: But there is a way around the coercion.
DD: There is no way. If you want to add something with
valueOf it'll invoke
valueOf
JHD: We dont have to dig into it here and now — I think there is a subtle difference of thennable than
valueOf. I agree a generic anti protocol approach is bad though.
BFS: We're talking about webcompat concerns. Is dynamic import shipped?
MB: Chrome does
MM: Safari too
DD: I'm not sure but I think Edge too.
BFS: A boolean seems odd. I'll remind everyone of
Symbol.toPrimitive though. Instead of as a Symbol maybe we could look to a different design. Could we change the Symbol to be a function or look at dynamic imports that... . I think this might help the situation.
BN: I've changed my mind. Initially I was okay with static rejection of
then exports, since I agree with Mark that static failures are not so bad, but then I thought of a legitimate use case. If you have a module with one default export and consider it unergonomic for your users to have to do
import("your-module").then(ns => ns.default).then(x => ...), then you could export a
then function that returns the default export. Importantly, this is not the same as the namespace object, so it won't cause an infinite resolution loop like the namespace object would. Then your users could do either
import x from "your-module" or
import("your-module").then(x => ...) or
const x = await import("your-module") without worrying about accessing the
default property explicitly. This is a slight but real ergonomic win, for module authors who know what they're doing. So I have changed my mind and prefer the status quo.
JHD: Sticking with status quo means that's just a babel transform (and one that I would be tempted to use!). It seems we won't get Stage 2 - Stage 1 means examining the problem. So is this something TC39 wants to hear from me again on or do you want to reject for stage 1?
WH: I'm in favor of keeping the status quo. I'm not in favor of the complexity
JHD: Complexity is a stage 2 concern.
WH: It's not worth the complexity of symbol.thenable to solve the problem at all here. It's just not worth doing, which is a stage 1 concern.
JHD: Based on your concerns this solution will not advance, but if you feel I cannot come back to you with any solution then you can block stage 1. Is that the case?
CM: I don't think we have consensus that this needs to be solved. It's fine for this to stay at stage 0. If you came back with a different crack that fits well then I'd be more amenable but I'm not saying never ever.
JHD: I will come back with something else then.
Conclusion/Resolution
- Remains at Stage 0; will discuss further on GitHub. If unable to come up with compelling solution, will withdraw.
"Blöcks" syntax for Stage 0
(Domenic Denicola)
DD: (Presenting)
Conclusion/Resolution
- To discuss further on GitHub
RegExp Match array offsets for Stage 1
(Ron Buckton)
RBN: (Presenting)
MM: There is no precedent within ECMAScript for extending existing APIs with new arguments.
MM: For a capture group, within an expression, you don't get all the captures, just the first one.
MB: Regarding MM's comment extending APIs, there is precedent for this on the Web platform outside of ECMAScript. There's even a case where a previously existing boolean argument eventually converted to an options bag (
addEventListener with passive event listeners) in a web-compatible way. This is certainly possible.
WH: This seems like a generally good idea. I'm also sympathetic to existing concerns about how this affects all other users.
WH: Instead of creating a new output channel, could you just add a new atom which tells you where it is?
RBN: One of the motivating use cases is TextMate grammars, which need to be cross-platform without changing the regular expressions. In my opinion, that would not be a good compromise to this solution.
WH: OK.
CP: One of the hoops you have to jump through is when to turn this on/off due to performance issues.
RBN: The cost is that in a world with GC, by the time JS gets the match from the exec, any garbage has already been collected or memory has been released to the heap for reuse.
CP: Is the scale issue due to the complexity of the expression, or the volume of the matches?
RBN: Any regular expressions have to take on that cost, even if they're not using it. It would be good to have specific numbers on this.
Conclusion/Resolution
- Stage 1 acceptance
Meeting Planning Update
DE: Some concerns about the 2019 meeting at JS Interactive, please don't consider that to be final. | https://esdiscuss.org/notes/2018-05-24 | CC-MAIN-2019-18 | refinedweb | 10,350 | 74.69 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
I try to use camera and opencv library to detect my fist.
I can use the cascade file in example code , but when i tried to use other code on internet , it doesn't work.
for example :
I downloaded another aGest.xml from different place, it works well.
The code looks like the same as the code in the link above. but cascade file in Aravindlivewire's github can not be applied to my program.
Here is my code :
import gab.opencv.*; import java.awt.Rectangle; import KinectPV2.*; KinectPV2 kinect; OpenCV opencv;
Rectangle[] faces;
PImage apples;
void setup() {
opencv = new OpenCV(this, 480,270); size(960, 540, P3D); kinect = new KinectPV2(this); kinect.enableColorImg(true); kinect.init(); //opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); opencv.loadCascade("aGest.xml"); apples = new PImage(480,270,RGB);
}
void draw() {
background(0); PImage img = kinect.getColorImage(); apples.copy(img,0,0,1920,1080,0,0,480,270); opencv.loadImage(apples); opencv.useColor(); image(opencv.getSnapshot(), 0, 0); faces = opencv.detect(); noFill(); stroke(0, 255, 0); strokeWeight(3); for (int i = 0; i < faces.length; i++) { rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); println(faces[i].x +" "+ faces[i].y);
} } | https://forum.processing.org/two/discussion/25021/why-can-t-i-use-other-cascades-in-opencv-library | CC-MAIN-2019-43 | refinedweb | 218 | 60.61 |
Is it possible to make an array of functions? eg funcarray[2]();
and code the functions later?
Printable View
Is it possible to make an array of functions? eg funcarray[2]();
and code the functions later?
err.... nope :)
If you want to declare a function and "code it later" (aka define it). You can build a prototype ot it.
int sum(int a, int b); //this is the prototype. Note the semicolon
//lot's of code goes here
int sum(int a, int b) //this is the definition of the function.
{
return a+b;
}
This way, even if the function sum() get's called before it's definition, the compiler will know where to look for it.
yes you can make an array of function pointers...
err... I don't thing that's what he asked, ninebit.
I could be wrong, but I got the impression he wants to create a function array, not an array of functions.
you would create an ordinary function pointer and then point to that function pointer with an void* in the list. you must know what functions you have in advance and what params to use... if the function is called wrongly with wrong params there is unexpected behaviour... So take great care when you use this...
it would be cool to have this for some kind of commandline type of way to run functions like this:
cmdline> assign UP 'w'
code:
//parse the line "assign UP 'w'" assign - function, UP and 'w' is params
cmd = "assign"
paramlist("UP", "w")
ExecuteCommand(cmd, paramlist)
in ExecuteCommand:
fncptr = fncMap(cmd);
//find the case where cmd is "assign"
if strcmp(cmd, "assign")
fncptr(paramlist[0], paramlist[1]); // you know the assign function takes 2 params...
anyway.. just a though and highly teorethical...
Nine-Bit's right, I want an array of functions. And I know what a prototype is
well, ok then.
>> Is it possible to make an array of functions? eg funcarray[2]();
You ask for an array of functions, but then give a "function array" (without better words to describe it) as an example...
Like so?
Code:
#include <stdio.h>
#include <string.h>
// pointer to function typedef
// it gets too messy to write these out in long form
// this just happens to be what strcpy and strcat use
typedef char * (*fnptr)(char *, const char* );
// an array of pointers to functions
fnptr functions[2] = {
strcpy,
strcat
};
int main ( int argc, char *argv[] ) {
char test[100] = "hello";
int i;
// calls strcpy then strcat
for ( i = 0 ; i < 2 ; i++ ) {
functions[i]( test, "world " );
printf( "%s\n", test );
}
return 0;
}
You could use a token-pasting macro to achieve a similiar effect.
... maybe.
No ,like this (this probably isn't right)
Code:
#include <iostream.h>
#include <stdio.h>
void myfunctions[3]();
int main()
{
for (int i=0; i<4; i++)
myfunctions[i];
return 0;
}
myfunctions[0](i)
{
//code....
}
//other functions
That was me, didn't log in
Use function pointers. Arrays and pointers generally are the same anyway, in a manner of speaking. | https://cboard.cprogramming.com/cplusplus-programming/20073-function-arrays-printable-thread.html | CC-MAIN-2017-13 | refinedweb | 505 | 72.56 |
Php form mail tiny jobs
...categories Travel:- Make My Trip , Go Ibibo, Trivago etc. Fashion:- Myntra, Flipkart, Amazon, HnM, koovs etc. Electronics:- Flipkart, Amazon, Snapdealetc. Food:- Food Panda, Tiny Owl, Zomato, Tapzo etc. If someone shops from my app through my affiliate link I will give them cash back in return (less than or .
...of the mask i have in mind with slight adjustments done to it. First, the eye slits must be smaller a bit and the whole front of the mask below his nose must have breathing tiny holes all around it. Im also including a sketch of the whole composition in mind. Its very basic, that is why i need artist. Additional details: Please note that i only need
I want to develop tiny Android application. This applications shouldn't take more than 2 days for experienced developer, also it have very simple functionalities. No fancy stuff. Further details will be shared with the shortlisted candidates Happy bidding!.
...blogs, ecommerce
We are getting following error please check the attached image | https://www.freelancer.com/job-search/php-form-mail-tiny/ | CC-MAIN-2018-34 | refinedweb | 173 | 76.72 |
Event handlers for sending content and triggering actions based on keypresses.
elm package install kmbn/elm-hotkeys
import Hotkeys exposing (onEnterSend)
input [ placeholder "Enter content", onEnterSend NewContent ] []
See
../examples/onEnterSend.elm for a working example
Send content only when enter is pressed. Do not send anything beforehand.
This is an alternative to using
Html.Events.onInput to update the model with
incomplete content on every keystroke and then using another event handler to
indicate that the content collected in the model is now complete.
Instead, we only send content to
update when the content is ready to be
consumed.
MIT
© 2018 Kevin Brochet-Nguyen | https://package.frelm.org/repo/1422/1.0.0 | CC-MAIN-2018-51 | refinedweb | 104 | 50.73 |
Red Hat Bugzilla – Full Text Bug Listing
When systemd creates a private tmp the permissions on the private tmp directory seem to be wrong.
Looking at httpd's private tmp on my system I see that /tmp/systemd-namespace-<random>/private is owned by root.root and has permissions 1755 instead of 1777.
This breaks any process that switch to an unprivileged user (httpd -> apache.apache) and wnats to create temporary files.
Also when restarting the httpd.service the old private tmp is not reused nor removed and is leaked on the file system.
Reusing the old tmp would be prefereable as otherwise any temporary file being in use will be lost on apache restart.
*** Bug 789587 has been marked as a duplicate of this bug. ***
Proposing for alpha blocker status. Due to this systemd bug, no service that runs as non-root and requires access to tmp files will work (apache being a notable example).
Which criteria is this breaking?
I cant see this anymore then NTH for alpha and a blocker for beta
(In reply to comment #3)
> Which criteria is this breaking?
>
> I cant see this anymore then NTH for alpha and a blocker for beta
Sorry, you are correct. I was incorrectly assuming httpd was considered CRITPATH, but it is not. So I agree with your assessment of NTH for alpha and blocker for beta.
what else is known to use private /tmp ?
For cases like httpd this can be fixed fine with an update, so I'm not sure even NTH makes a lot of sense unless there's a case you're going to hit during install or in a live boot.
--
Fedora Bugzappers volunteer triage team
(In reply to comment #5)
> what else is known to use private /tmp ?
See
(not a short list)
I'm a bit concerned that ypbind/ypserv could impact NIS logins on F17. dhcpd, cups and dovecot seem pretty serious too.
Sorry, ypserv/ypbind and dhcpd were closed NOTABUG, not RAWHIDE.
my vote here would be -1 blocker, none of this seems to prevent install or login post install and can be fixed by updates after.
cups and dovecot are too high-level functionality to conceivably impact alpha. I can't really see any package CLOSED RAWHIDE (implying it was implemented) that might constitute a blocker, though a couple may be NTH-worthy (ntpd, openvpn, bind maybe).
--
Fedora Bugzappers volunteer triage team
ntpd runs as a non-privileged user, so it's probably hit by this. cupsd runs as root, so it probably isn't. not sure about the others.
ah, ntpd is no longer default, chronyd is. so ntpd isn't too worrying. chronyd doesn't use private /tmp.
--
Fedora Bugzappers volunteer triage team
The fix is available here
IMHO it would be safer to pull it than to try to guess which package may try to write in a private tmp and fail with strange side effects because of this bug
(In reply to comment #12)
> The fix is available here
(at least for the can not write in private tmp part, it works with clamav and httpd) looks an awful lot like a fix.
--
Fedora Bugzappers volunteer triage team
Yes it's part of the systemd 43 build I just referenced (and tested)
well, I just tried systemd-43, and I'm not sure it's fixed. I installed it at 13:09 and rebooted, new boot happened at 13:11, and I have:
drwx------. 4 root root 4096 Feb 15 13:11 /tmp/systemd-namespace-gmEreP
note the creation date.
--
Fedora Bugzappers volunteer triage team
systemd-43-1.fc17 has been submitted as an update for Fedora 17.
-1 to blocker -1 to nth this should just be fixed via update +1 to beta blocker if still present at that time...
Package systemd-43-1.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing systemd-43-1.fc17'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
actually, I was looking at the wrong dir - it's /tmp/systemd-namespace-blah/private/ that matters, apparently.
so looking at that, I confirm the fix. the /private dirs created *before* the update on my system are drwxr-xr-t . the /private dir created *after* the update is drwxrwxrwt . I see the same in a VM running RC2. So looks like the fix is good in RC2. Update still has to be pushed stable.
--
Fedora Bugzappers volunteer triage team
--
Fedora Bugzappers volunteer triage team
Discussed at 2012-02-17 blocker review meeting. Agreed we can't declare this a blocker as we have no evidence that it actually causes any major issues, but accepted as NTH due to its obvious potential to cause problems and the fact that it can't entirely be fixed with an update (private /tmps created before the update is installed will retain incorrect permissions).
Note the fix was already pulled into RC2 anyway.
systemd-43-1.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
*** Bug 788061 has been marked as a duplicate of this bug. ***
Fixed in F16 as well:
*** Bug 790042 has been marked as a duplicate of this bug. *** | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=790522 | CC-MAIN-2017-30 | refinedweb | 909 | 72.97 |
How to Construct an array from its pair-sum array in Python
In this tutorial we are going to learn how to construct an array from its pair-sum array in Python. It is an array that consists of the sum of all the pairs in an orderly fashion. So, in general pair-sum array for array[0..n-1] is : –
{array[0]+array[1], array[0]+array[2], ……, array[0]+array[n-1],array[1]+array[2], array[1]+array[3], ……., array[2]+array[3], array[2]+array[4], …., array[n-2]+array[n-1]}.
Construct an Array from its pair-sum array in Python
If we are given an array called sum and there are n elements in the array that we are constructing that is named array. And by seeing some examples we can observe that we can calculate the first element of the original array by: –
Dividing the sum[0]+sum[1]-sum[n-1] by 2 that is generally (array[0]+array[1])+(array[2]+array[0])-(array[1]-array[2]). We see that array[1] and array[2] will be canceled out and 2*array[0] will remain that is divided by 2 to get the first element. So, now as we know the value of array[0] we can get the value of array[1] by subtracting array[0] from sum[0]=(array[0]+array[1]). Similarly, we can get all the values by subtracting array[0] from subsequent sum elements.
Now we are going to implement it in Python: –
Code: –
Firstly we are going to create a function based on the above algorithm.
def mainarray(array,sum,n): array[0] = (sum[0]+sum[1]-sum[n-1])//2 for i in range(1,n): array[i] = sum[i-1]-array[0]
Now we are going to initialize the sum array and the array of original numbers.
Code: –
sum=[14, 9, 10, 11, 12, 7] n=4 array=[0]*n #We initialize an array of size n with 0 mainarray(array,sum,n) for i in range(n): print(array[i],end=" ")
We are calling the mainarray function by giving the required parameters.
Output: –
6 8 3 4
We see that we are getting the correct output after calling the function. Similarly, this algorithm can be applied to any language.
Also Read: –
How to find all possible pairs with given sum in Python lists | https://www.codespeedy.com/construct-an-array-from-its-pair-sum-array-in-python/ | CC-MAIN-2020-29 | refinedweb | 401 | 59.13 |
For this project we wanted to control a Lego vehicle with a Python Tkinter app. Next we added a short cut to the Pi desktop and then we used VNC to see the Pi desktop and our app on a tablet.
Hardware Setup
Our hardware components were:
- Raspberry Pi 3
- Pimoroni ExplorerHat Pro – supports bi-directional DC motors
- Dexter Connectors – allow 2 wire connections to Lego Mindstorms parts
- 2 Lego Mindstorms motors
- Portable USB charger
- lots of Lego parts
- 4 jumpers
The Lego Mindstorms parts are little pricey but they allow you to make some pretty funky contraptions. The other thing that we like about the Mindstorms motors is that they have a lot of torque for a 5V DC motor.
There are a few options for the cabling (like cutting the cable and exposing the individual wires) we used the Dexter connectors that are breadboard friendly. ANA and GND connections on the Dexter side go to Motor + and Motor – on the ExplorerHat Pro board.
Python Tkinter
The Tkinter library allows you to create a simple graphic user interface (GUI) with components like: buttons, sliders, lists, text, labels etc.
For our interface we created a grid of 3 rows and 2 columns with 5 buttons. We made a simple motor function where we passed the speed and direction of the wheels. A negative speed is backwards, zero is stop, and a positive speed is forward.
import Tkinter import explorerhat top = Tkinter.Tk() top.title("Car Control") explorerhat.motor.one.speed(0) explorerhat.motor.one.speed(0) #Define the buttons def motor(Left,Right): explorerhat.motor.one.speed(Right) explorerhat.motor.two.speed(Left) B_Left = Tkinter.Button(top, text ="Left", bg = "green", fg = "white", width= 15, height= 5, command = lambda: motor (50,0)).grid(row=1,column=1) B_Right = Tkinter.Button(top, text ="Right", bg = "green", fg = "white", width= 15, height= 5, command = lambda: motor (0,50)).grid(row=1,column=2) B_Forward= Tkinter.Button(top, text ="Forward", bg = "green", fg = "white", width= 15, height= 5, command = lambda: motor (50,50)).grid(row=2,column=1) B_Backward = Tkinter.Button(top, text ="Backward", bg = "green", fg = "white", width= 15, height= 5, command = lambda: motor (-50,-50)).grid(row=2,column=2) B_Stop = Tkinter.Button(top, text ="Stop", bg = "red", fg = "white", width= 33, height= 3, command = lambda: motor (0,0)).grid(row=3,column=1,columnspan=2) top.mainloop()
Pi Shortcut
To create a Pi shortcut, create a file:
nano $HOME/desktop/pi.desktop
Inside this file define the name, path, and icon info for your new application:
[Desktop Entry] Name=Car Controls Comment=Python Tkinter Car Control Panel Icon=/home/pi/car1.png Exec=python /home/pi/mycarapp.py Type=Application Terminal=false Categories=None;
VNC (Virtual Network Computing)
VNC is install on the Raspbian image. To enable VNC run:
sudo raspi-config
Then select the interfacing option, and then select VNC and enable.
Finally you will need to define a VNC password and load some VNC software on your Tablet. There are a lot of packages to choose from. We have an Android table and we used RemoteToGo without any problems.
Note, when your Pi boots without a HDMI monitor connected the desktop resolution will be at a low setting (probably 800×600) this can be adjusted. For us we simply resized the desktop to fit our tablet screen. | https://funprojects.blog/2017/05/07/le-gocart-python-tkinter-gui/ | CC-MAIN-2022-40 | refinedweb | 559 | 56.86 |
052043 Super Learning Kit for Arduino
Contents
- 1 052043 Super Learning Kit for Arduino
- 2 Kit Contents
- 3 Project Details
- 3.1 Project 1: Hello World
- 3.2 Project 2: LED Blinking
- 3.3 Project 3: PWM
- 3.4 Project 4: Traffic Light
- 3.5 Project 5: LED Chasing Effect
- 3.6 Project 6: Button-controlled LED
- 3.7 Project 7: Active Buzzer
- 3.8 Project 8: Passive Buzzer
- 3.9 Project 9: RGB LED
- 3.10 Project 10: Photo Resistor
- 3.11 Project 11: Flame Sensor
- 3.12 Project 12: LM35 Temperature Sensor
- 3.13 Project 13: Tilt Switch
- 3.14 Project 14: IR Remote Control
- 3.15 Project 15: Analog Value Reading
- 3.16 Project 16: 74HC595
- 3.17 Project 17: 1-digit LED Segment Display
- 3.18 Project 18: 4-digit LED Segment Display
- 3.19 Project 19: 8*8 LED Matrix
- 3.20 Project 20: 1602 LCD
- 3.21 Project 21: Servo Control
- 3.22 Project 22: 5V Stepper Motor
- 3.23 Project 23: PIR Motion Sensor
- 3.24 Project 24: Analog Gas Sensor
- 3.25 Project 25: ADXL345 Three Axis Acceleration Module
- 3.26 Project 26: HC-SR04 Ultrasonic Sensor
- 3.27 Project 27: Joystick Module
- 3.28 Project 28: 5V Relay Module
- 3.29 Project 29: DS3231 Clock Module
- 3.30 Project 30: DHT11 Temperature and Humidity Sensor
- 3.31 Project 31: Soil Humidity Sensor
- 3.32 Project 32: RC522 RFID Module
- 4 Resources
052043 Super Learning Kit for Arduino
Inland super learning kit is suitable for Arduino Contents
NOTE:052043 KIT Includes MEGA 2560. in the Arduino world!
Hardware Required:
1. Arduino board x1
2. USB cable x1
Sample Code:
After installing driver for Arduino, let's open Arduino software and compile the code that enables Arduino to print"Hello World!" under your instruction. Of course, you can compile the code for Arduino to continuously echo "Hello World!" without instruction.
A simple If () statement will do the instruction trick. With the onboard LED connected to pin 13, you can instruct the LED to blink first when Arduino gets an instruction and then print the character:
Click serial port monitor
Input R
LED 13 will blink once;
PC will receive information from Arduino: Hello World
After choosing the right port, the experiment is very, you will need extra parts as below:
Hardware Required:
1. Red M5 LED *1
2. 220Ω resistor *1
3. Breadboard * 1
4. Breadboard jumper wire * 2
We follow below diagram from the experimental schematic link. Here we use digital pin 10.
Connect an LED to a 220 ohm resistor to avoid high current damaging the LED.
Connection for UNO R3 and 2560 R3:
Sample Code: interfaces on Arduino, namely digital pin 3, 5, 6, 9, 10, and 11. In previous experiments, we have done "button-controlled LED", using digital signal to control digital pin. This time, we will use a potentiometer to control the brightness of LED.
Hardware Required:
- 1. Potentiometer*1
- 2. Red M5 LED*1
- 3. 220Ω resistor
- 4. Breadboard*1
- 5. Breadboard jumper wire *6
The input of potentiometer is analog, so we connect it to analog port, and LED to PWM port. Different PWM signal can regulate the brightness of the LED.
Connection for UNO R3 and 2560 R3:
display the analog value on the screen. }
Result:
After uploading the program, when you rotate the potentiometer knob, you can see the value change, and also obvious change of the LED brightness.
Project 4: Traffic Light
Introduction:
In the previous program, we have done the LED blinking experiment with one LED. Now, it’s time to up the stakes and do a bit more complicated experiment-traffic light. Actually, these two experiments are similar. While in this traffic light experiment, we use 3 LEDs with different color rather than one LED.
Hardware Required:
1. Arduino board *1
2. USB cable *1
3. Red M5 LED*1
4. Yellow M5 LED*1
5. Green M5 LED*1
6. 220Ω resistor *3
7. Breadboard*1
8. Breadboard jumper wires* 4
Connection for UNO R3 and 2560 R3:
Sample Code:
Since it is a simulation of traffic light, the blinking time of each LED should be the same with those in traffic light second digitalWrite(redled, LOW);// turn off red LED }
Result:
When the uploading process is completed,.
Experiment is now completed. Thank you.
Project 5: LED Chasing Effect
Introduction:
We often see billboards composed of colorful LEDs. They are constantly changing to form various effects. In this experiment, we compile a program to simulate chase effect.
Hardware Required:
- Arduino Board x1
- Led x6
- 220Ω resistor x6
- breadboard jumper wire x 13
Connection for UNO R3 and 2560 R3:
its OUTPUT function.
In this experiment, we will try to use the input function, which is to read the output value of device connecting to it.:
- Arduino board *1
- Button switch*1
- Red M5 LED*1
- 220Ω resistor*1
- 10KΩ resistor*1
- Breadboard*1
- Breadboard jumper wire *6
Connection for UN RO3/2560 R3:
Sample Code:
Now, let's begin the compiling. When the button is pressed, the LED will be on. Based on the previous study, the coding may be easy for you. In this program, we add a statement of judgment. Here, we use an if() statement.
Arduino IDE is based on C language, so statements of C language such as while, switch, etc.. So the button controlled LED experiment is completed.
The simple principle of this experiment is widely used in a variety of circuit and electric appliances. You can easily come across it in your everyday life. One typical example is when you press a certain key of your phone, the backlight will be on.
Project 7: Active Buzzer
Introduction:
Active buzzer is widely used as a sound making element on computer, printer, alarm, electronic toy, telephone, timer, etc. It has an inner vibration source. Simply connect it with 5V power supply, it can buzz continuously.
2043-14.png
Hardware Required:
- Buzzer*1
- Control board*1
- Breadboard*1
- Breadboard jumper wire*2
Connection for UNO R3/2560 R3:
When connecting the circuit, pay attention to the positive and the negative poles of the buzzer.
In the photo, you can see there are red and black lines. When the circuit is finished, you can begin the programming.
Sample Code:
Program is simple. You see the buzzer is ringing.
Project 8: Passive Buzzer
You can use Arduino to make many interactive works.The most commonly used. You can use Arduino to code the melody of a song, which is quite fun and simple.
Hardware Required:
- Passive buzzer *1
- Control board *1
- Breadboard *1
- Breadboard jumper wire * 2
Connection for UNO R3 and 2560 R3:
delay(2);//2ms delay digitalWrite(buzzer,LOW);//not sound delay(2);//2ms delay } } }
Result:
After downloading the program, the buzzer experiment is completed.
Project 9: RGB LED
Introduction:
Tricolor principle to display various colors;
PWM controlling ports to display full color;
Can be driven directly by Arduino PWM interfaces.
Hardware Required:
- Arduino controller × 1
- USB cable × 1
- Full-color LED module × 1
- Resistor *3
- Breadboard jumper wire*5
Connection for UNO R3/2560 R3:
wire *5
Connection for UNO R3 and 2560 R3:
.
_21<<
Hardware Required:
- Arduino board *1
- Flame sensor*1
- Buzzer*1
- 10K resistor_22<<_23<<
Hardware Required:
- Arduino board *1
- LM35*1
- Breadboard*1
- Breadboard jumper wire *5
Connection for UNO R3 and 2560 R3:
Tilt switch controlling the LED ON and OFF.
Hardware Required:
1.Ball switch*1
2.Control board *1
3.Led *1
4.220Ω resistor*1
5.10KΩ resistor*1
6.USB cable *1
7.Breadboard jumper wire *5
Connection for UNO R3:
Connect the ball tilt switch, LED and resistors to control board. Connect the LED to digital pin 8, ball switch to analog pin 5.
Connection for UNO R3 and 2560 R3:
_26<<.
- expend.
Result:
Connection:
In this experiment, we will convert the resistance value of the potentiometer to analog ones and display it on the screen. This is an application you need to master well for our future experiments.
Connection circuit as below:
Connection for UNO R3 and 2560 R3:
the one in the PC's software set up.
Otherwise, the display will be messy codes or no display at all. In the lower right corner of the Arduino software monitor window, there is a button for baud rate set up. The set up.
Below is the analog value it reads.
When you rotate the potentiometer knob, you can see the displayed value change. The reading of analog value is a very common function for most sensors output analog value. After calculation, you can get the corresponding value you need.
The experiment is now completed. 168, it has only 20 I/O including analog ports. To save port resources, we use 74HC595 to reduce the number of ports it needs. Using 74HC595 enables us to use 3 digital I/O port to control 8 LEDs!
2043-34.png
Hardware Required:
- 74HC595 chip*1
- UNO board *1
- Red M5 LED*4
- Green M5 LED*4
- 220Ω resistor*8
- Breadboard*1
- USB cable *1
- Breadboard jumper wires*several
Note: for pin 13 OE port of 74HC595, it should be connected to GND
Connection for UNO R3 and 2560 R3:
. It's widely applied on displays of electromagnetic oven, full automatic washing machine, water temperature display, electronic clock, etc. It is necessary for us common anode display and common cathode display._31<<
Each segment of the display consists of an LED. So when you use it, you also need to_32<<.
Another method is to connect one resistor to each pin. It guarantees consistent brightness, but requires more resistors. In this experiment, we use 8 220Ω resistors (we use 220Ω resistors because no 100Ω resistor available. If you use 100Ω, the displaying is more brighter).
Connection:
For 4-digit displays, there are 12 pins in total. When you place the decimal point downward (see below photo position), the pin on the lower left part is refer to as 1, the upper left part 12.
Manual for LED segment display:
Connection for UNO R3 and 2560 R3:
.
It will display the number 1234.
Note: if it’s not displaying correctly, check the wiring.
Thank you.
Project 19: 8*8 LED Matrix
Introduction:
With low-voltage scanning, LED dot-matrix display have some advantages such as power saving, long service life, low cost, high brightness, wide angle of view, long visual range, waterproof, and numerous specifications. LED dot-matrix display can meet the needs of different applications,_36<< be on. If you want to light_37<<_38<<.
Connection for UNO R3 and 2560 R3:
scree, cursor position returns to 0 delay(100); LcdCommandWrite(0x80); // display setup // turn on the monitor, cursor on, no flickering delay(20); } void loop (void) { LcdCommandWrite(0x01); // clear the scree,_40<<
After the connection, upload below code to the controller board and see how it goes.
Sample Code B:
/* 9 * LCD D5 pin to digital pin 8 * LCD D6 pin to digital pin 7 * LCD D7 pin to digital pin 6 *, 9, 8, 7,); }.
_44<<: 5V Stepper Motor.
Parameters of Stepper Motor 28BYJ-48:
-)
Connection for UNO R3 and 2560 R3:
Sample Code:
#include <Stepper.h> #define STEPS 100 Stepper stepper(STEPS, 8, 9, 10, 11); int previous = 0; void setup() { stepper.setSpeed(90); } void loop() { int val = analogRead(0); stepper.step(val - previous); previous = val; } its size is much more bigger, with complex circuit and lower reliability.
Now we launch this new pyroelectric infrared motion sensor,which is specially designed for Arduino. It uses an integrated digital body pyroelectric infrared sensor, and
Connection for UNO R3 and mega2560:
); }
Test Result: Analog Gas Sensor
Introduction:
This analog gas sensor - MQ2 is used in gas leakage detecting equipment in both consumer electronics and industrial markets. This sensor is suitable for LPG, I-butane, propane, methane, alcohol, Hydrogen and smoke detection. It has high sensitivity and quick response. In addition, the sensitivity can be adjusted by the potentiometer.
Specification:
- Power supply: 5V
- Interface type: Analog
- Wide detecting scope
- Quick response and high sensitivity
- Simple drive circuit
- Stable and long lifespan
Connection for UNO R3 and 2560 R3:
Sample Code:
void setup() { Serial.begin(9600); //Set serial baud rate to 9600 bps } void loop() {int val; val=analogRead(0);//Read Gas value from analog 0 Serial.println(val,DEC);//Print the value to serial port delay(100); }
Result
Done wiring and powered up, upload well the code, then open the serial monitor and set the baud rate as 9600, you will see the analog value. When detecting the gas, the value will make a change.
Project 25: measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion or shock. Its high resolution (4 mg/LSB) enables measurement of inclination change
Connection for UNO R3 and 2560 R3:
); }
Result
Wiring as the above diagram and uploading the code,then open the serial monitor to display the triaxial acceleration of sensor and its status, as the graph shown below.
Project.
- Working Voltage: DC 5V
- Working Current: 15mA
- Working Frequency: 40Hz
- Max Range: 4m
- Min Range: 2cm
- Measuring Angle: 15 degree
- Trigger Input Signal: 10µS TTL pulse
- Echo Output Signal Input TTL lever signal and the range in proportion
Connection for UNO R3 and 2560 R3:
27: cable supplied.
Specification:
- Supply Voltage: 3.3V to 5V
- Interface: Analog x2, Digital x1
Connection for UNO R3 and 2560 R3:
); }
Result
Wiring well and uploading the code, open the serial monitor and set the baud rate to 9600, push the joystick, you will see the value shown below.
Project 28: for UNO R3 and 2560 R3 :
29: DS3231 Clock Module
Introduction:
DS3231 is equipped with integrated TCXO and crystal, which makes it a cost-effective I2C real time clock with high precision. The device carries a battery input, so even)
Connection for UNO R3 and MEGA 2560:
This module adopts the IIC test method, so only need to connect ‘SDA’ to Arduino A4, ‘SCL’ to A5, ‘+’ to VCC and ‘-’ to GND as follows:
); }
Before compiling the code, you’d better put DS3231 library under file into Arduino catalogue.
Result:
Done uploading the code to arduino, open the serial monitor and get the following results:
Project 30: widely applied application and even the most demanding one. Convenient connection and special package can be provided according to your need.
Specification:
Supply Voltage: +5 V
Temperature Range: 0-50 °C error of ± 2 °C
Humidity: 20-90% RH ± 5% RH error
Interface: Digital
Connection for UNO R3 and 2560 R3:
This module adopts the IIC test method, so only need to connect ‘SDA’ to Arduino A4, ‘SCL’ to A5, ‘+’ to VCC and ‘-’ to GND as follows:
); }
Result
Wire it up well and upload the above code to UNO board.
Then open the serial monitor and set the baud rate to 9600, you will see the current temperature and humidity value.
Project 31: Soil Humidity Sensor
Introduction:
This is a simple soil humidity sensor aims to detect the soil humidity. If the soil is lack of water, the analog value output by the sensor will decrease, otherwise, it will increase. If you use this sensor to make an automatic watering device, it can detect whether your botany is thirsty so as to prevent it from withering when you go out. Using the sensor with Arduino controller makes your plant more comfortable and your garden smarter.
The soil humidity sensor module is not as complicated as you might think. the_54<<
Connection Method:
This module adopts the IIC test method, so only need to connect ‘SDA’ to Arduino A4, ‘SCL’ to A5, ‘+’ to VCC and ‘-’ to GND as follows:
Connection for UNO R3 and 2560 R3:
Sample Code:
/* # 0 ~300 dry soil # 300~700 humid soil # 700~950 in water */ void setup(){ Serial.begin(57600); } void loop(){ Serial.print("Moisture Sensor Value:"); Serial.println(analogRead(0)); delay(100); }
Project 32: RC522 RFID Module
Introduction:
MF522-AN module adopts Philips MFRC522 original reader circuit chip design, easy to use, low cost, suitable for equipment development, development of advanced applications, the need for the user of RF card terminal design/production. It can be loaded directly into a variety of readers molds. Module uses voltage of 3.3V, through the SPI interface using simple few lines, it can be directly connected to any CPU board communication modules to guarantee stable and reliable work and reader distance.
Electrical Parameters:
-
- Dimensions: 40mm * 60mm
- Environmental operating temperature: -20-80 degrees Celsius
- Environment storage temperature: -40-85 degrees Celsius
- Relative humidity: 5% -95%
Circuit Connection:
Connection for UNO R3 and 2560 R3:
This module adopts the IIC test method, so we only need to connect ‘SDA’ to Arduino A4, ‘SCL’ to A5, ‘+’ to VCC and ‘-’ to GND as follows:}, xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xff,0x07,0x80,0x69, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, };; /); } initial value AntennaOn(); // open antenna } uchar MFRC522_Request(uchar reqMode, uchar *TagType) { uchar status; uint backBits; // received data bits: //; } uchar MFRC522_Auth(uchar authMode, uchar BlockAddr, uchar *Sectorkey, uchar *serNum) { uchar status; uint recvBits; uchar i; uchar buff[12]; // Verification commands + block address + sector password + card sequence number++) // write 16Byte data into FIF; }, you can see it shown on the monitor window.
Note: if you want to use MEGA 2560 R3, please in the code change
const int chipSelectPin = 10;//if the controller is UNO,328,168
into
const int chipSelectPin = 53;//if the controller is MEGA 2560
Resources
- Video:
- Download all the information here: | https://wiki.keyestudio.com/052043_Super_Learning_Kit_for_Arduino | CC-MAIN-2020-05 | refinedweb | 2,923 | 64.61 |
Conor McBride wrote: > Neither Oleg nor Bruno translated my code; they threw away my > structurally recursive on-the-fly automaton and wrote combinator parsers > instead. That's why there's no existential, etc. The suggestion that > removing the GADT simplifies the code would be better substantiated if > like was compared with like. The following code specifically tries to be as close to the original Conor McBride's code as possible. I only removed local type annotations (turn out unnecessary) and replaced `ap' with `liftM' as a matter of personal preference. > Where we came in, if you recall, was the use of a datatype to define > regular expressions. I used a GADT to equip this type with some useful > semantic information (namely a type of parsed objects), and I wrote a > program (divide) which exploited this representation to compute the > regexp the tail of the input must match from the regexp the whole input > must match. This may be a bit of a weird way to write a parser, but it's > a useful sort of thing to be able to do... That particular feature is fully preserved. Data types are used to define regular expression: > p = (Star (Mult (Star (Check (== 'a'))) (Star (Check (== 'b'))))) > > testp = parse p "abaabaaabbbb" Moreover, *RX> :t p p :: Star (Mult (Star (Check Char)) (Star (Check Char))) the regular expression is apparent in the type of the `parser' (or its representation, to be precise). One can easily define data types Digit, Alphanumeric, WhiteSpace, etc. and so type will look more informative. Here's one step to regular types... Instead of GADT, a type class is used to associate semantic information with the data type (with labels). Typing is more explicit now. > And that's the point: not only does the GADT have all the disadvantages > of a closed datatype, it has all the advantages too! A type class can be closed too, thanks to functional dependencies. You were right: the code below is just the mechanical reshuffling of the original code, using the translation I come across a year ago. `empty' and `division' are now grouped by the parser type, which I personally like. It is hard to see any loss of expressiveness by getting rid of GADTs in this case. More explicit type is an advantage. Also an advantage is the fact that `pattern match' is guaranteed exhaustive. If I forgot to implement some particular parser, that would be a type error (rather than a pattern-match run-time error, as is the case with ADT/GADT). > Moreover, once you have to start messing about with equality > constraints, the coding to eliminate GADTs becomes a lot hairier. I > implemented it back in '95 (when all these things had different names) > and I was very glad not to have to do it by hand any more. The discovery of the presence of type equality in Haskell changes things a great deal. It would be interesting to see how your code of '95 will look now. Writing an equality and even _dis-equality_ constraint is just as simple as |TypeEq a b HTrue| or |TypeEq a b HFalse|. {-# OPTIONS -fglasgow-exts #-} module RX where import Control.Monad class RegExp r tok a | r tok -> a where empty :: r -> [tok] -> Maybe a divide :: tok -> r -> Division tok a data Zero = Zero data One = One newtype Check tok = Check (tok -> Bool) data Plus r1 r2 = Plus r1 r2 data Mult r1 r2 = Mult r1 r2 newtype Star r = Star r data Empty = Empty data Division tok x = forall r y. RegExp r tok y => Div r (y -> x) parse :: (RegExp r tok x) => r -> [tok] -> Maybe x parse r t@[] = empty r t parse r (t : ts) = case divide t r of Div q f -> liftM f $ parse q ts nullt :: tok -> [tok] nullt _ = [] instance RegExp Zero tok Empty where empty _ _ = mzero divide _ _ = Div Zero naughtE instance RegExp One tok () where empty _ _ = return () divide _ _ = Div Zero naughtE instance RegExp (Check tok) tok tok where empty _ _ = mzero divide t (Check p) | p t = Div One (const t) divide _ _ = Div Zero naughtE instance (RegExp r1 tok a, RegExp r2 tok b) => RegExp (Plus r1 r2) tok (Either a b) where empty (Plus r1 r2) t = (liftM Left $ empty r1 t) `mplus` (liftM Right $ empty r2 t) divide t (Plus r1 r2) = case (divide t r1, divide t r2) of (Div q1 f1, Div q2 f2) -> Div (Plus q1 q2) (f1 +++ f2) instance (RegExp r1 tok a, RegExp r2 tok b) => RegExp (Mult r1 r2) tok (a,b) where empty (Mult r1 r2) t = liftM2 (,) (empty r1 t) (empty r2 t) divide t (Mult r1 r2) = case (empty r1 (nullt t),)) instance RegExp r tok a => RegExp (Star r) tok [a] where empty _ _ = return [] divide t (Star r) = case (divide t r) of Div q f -> Div (Mult q (Star r)) (\ (y, xs) -> (f y : xs)) (***) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d) (f *** g) (a, c) = (f a, g c) (+++) :: (a -> b) -> (c -> d) -> Either a c -> Either b d (f +++ g) (Left a) = Left (f a) (f +++ g) (Right c) = Right (g c) naughtE :: Empty -> x naughtE = undefined p = (Star (Mult (Star (Check (== 'a'))) (Star (Check (== 'b'))))) testp = parse p "abaabaaabbbb" | http://www.haskell.org/pipermail/haskell-cafe/2005-October/011752.html | CC-MAIN-2013-48 | refinedweb | 887 | 59.87 |
A Verified Compiler from Isabelle/HOL to CakeML
Abstract.
KeywordsIsabelle CakeML Compiler Higher-order term rewriting
1 Introduction
Many theorem provers have the ability to generate executable code in some (typically functional) programming language from definitions, lemmas and proofs (e.g. [6, 8, 9, 12, 16, 27, 37]). This makes code generation part of the trusted kernel of the system. Myreen and Owens [30] closed this gap for the HOL4 system: they have implemented a tool that translates from HOL4 into CakeML, a subset of SML, and proves a theorem stating that a result produced by the CakeML code is correct w.r.t. the HOL functions. They also have a verified implementation of CakeML [24, 40]. We go one step further and provide a once-and-for-all verified compiler from (deeply embedded) function definitions in Isabelle/HOL [32, 33] into CakeML proving partial correctness of the generated CakeML code w.r.t. the original functions. This is like the step from dynamic to static type checking. It also means that preconditions on the input to the compiler are explicitly given in the correctness theorem rather than implicitly by a failing translation. To the best of our knowledge this is the first verified (as opposed to certifying) compiler from function definitions in a logic into a programming language.
We erase types right away. Hence the type system of the source language is irrelevant.
We merely assume that the source language has a semantics based on equational logic.
- 1.
The preprocessing phase eliminates features that are not supported by our compiler. Most importantly, dictionary construction eliminates occurrences of type classes in HOL terms. It introduces dictionary datatypes and new constants and proves the equivalence of old and new constants (Sect. 7).
- 2.
The deep embedding lifts HOL terms into terms of type \(\mathsf {term}\), a HOL model of HOL terms. For each constant c (of arbitrary type) it defines a constant \(c'\) of type \(\mathsf {term}\) and proves a theorem that expresses equivalence (Sect. 3).
- 3.
There are multiple compiler phases that eliminate certain constructs from the \(\mathsf {term}\) type, until we arrive at the CakeML expression type. Most phases target a different intermediate term type (Sect. 5).
The first two stages are preprocessing, are implemented in ML and produce certificate theorems. Only these stages are specific to Isabelle. The third (and main) stage is implemented completely in the logic HOL, without recourse to ML. Its correctness is verified once and for all.1
2 Related Work
There is existing work in the Coq [2, 15] and HOL [30] communities for proof producing or verified extraction of functions defined in the logic. Anand et al. [2] present work in progress on a verified compiler from Gallina (Coq’s specification language) via untyped intermediate languages to CompCert C light. They plan to connect their extraction routine to the CompCert compiler [26].
Translation of type classes into dictionaries is an important feature of Haskell compilers. In the setting of Isabelle/HOL, this has been described by Wenzel [44] and Krauss et al. [23]. Haftmann and Nipkow [17] use this construction to compile HOL definitions into target languages that do not support type classes, e.g. Standard ML and OCaml. In this work, we provide a certifying translation that eliminates type classes inside the logic.
Compilation of pattern matching is well understood in literature [3, 36, 38]. In this work, we contribute a transformation of sets of equations with pattern matching on the left-hand side into a single equation with nested pattern matching on the right-hand side. This is implemented and verified inside Isabelle.
Besides CakeML, there are many projects for verified compilers for functional programming languages of various degrees of sophistication and realism (e.g. [4, 11, 14]). Particularly modular is the work by Neis et al. [31] on a verified compiler for an ML-like imperative source language. The main distinguishing feature of our work is that we start from a set of higher-order recursion equations with pattern matching on the left-hand side rather than a lambda calculus with pattern matching on the right-hand side. On the other hand we stand on the shoulders of CakeML which allows us to bypass all complications of machine code generation. Note that much of our compiler is not specific to CakeML and that it would be possible to retarget it to, for example, Pilsner abstract syntax with moderate effort.
Finally, Fallenstein and Kumar [13] have presented a model of HOL inside HOL using large cardinals, including a reflection proof principle.
3 Deep Embedding
Starting with a HOL definition, we derive a new, reified definition in a deeply embedded term language depicted in Fig. 1a. This term language corresponds closely to the term datatype of Isabelle’s implementation (using de Bruijn indices [10]), but without types and schematic variables.
To establish a formal connection between the original and the reified definitions, we use a logical relation, a concept that is well-understood in literature [20] and can be nicely implemented in Isabelle using type classes. Note that the use of type classes here is restricted to correctness proofs; it is not required for the execution of the compiler itself. That way, there is no contradiction to the elimination of type classes occurring in a previous stage.
Notation. We abbreviate \(\mathsf {App}\;t\;u\) to t $ u and \(\mathsf {Abs}\;t\) to \(\varLambda \;t\). Other term types introduced later in this paper use the same conventions. We reserve \(\lambda \) for abstractions in HOL itself. Typing judgments are written with a double colon: \(t\, {:}{:}\, \tau \).
Small-Step Semantics. Figure 1b specifies the small-step semantics for \(\mathsf {term}\). It is reminiscent of higher-order term rewriting, and modelled closely after equality in HOL. The basic idea is that if the proposition \(t = u\) can be proved equationally in HOL (without symmetry), then \(R \vdash {\left\langle t\right\rangle } \longrightarrow ^* {\left\langle u\right\rangle }\) holds (where \(\textit{R}\, {:}{:}\, (\mathsf {term} \times \mathsf {term})\;\mathsf {set}\)). We call \(\textit{R}\) the rule set. It is the result of translating a set of defining equations \( lhs = rhs \) into pairs \((\left\langle lhs \right\rangle , \left\langle rhs \right\rangle ) \in \textit{R}\).
Rule Step performs a rewrite step by picking a rewrite rule from R and rewriting the term at the root. For that purpose, \(\mathsf {match}\) and \(\mathsf {subst}\) are (mostly) standard first-order matching and substitution (see Sect. 4 for details).
Our semantics does not constitute a fully-general higher-order term rewriting system, because we do not allow substitution under binders. For de Bruijn terms, this would pose no problem, but as soon as we introduce named bound variables, substitution under binders requires dealing with capture. To avoid this altogether, all our semantics expect terms that are substituted into abstractions to be closed. However, this does not mean that we restrict ourselves to any particular evaluation order. Both call-by-value and call-by-name can be used in the small-step semantics. But later on, the target semantics will only use call-by-value.
Embedding Relation. We denote the concept that an embedded term t corresponds to a HOL term a of type \(\tau \) w.r.t. rule set \(\textit{R}\) with the syntax \(\textit{R} \vdash t \approx a\). If we want to be explicit about the type, we index the relation: \(\approx _\tau \).
The induction principle for the proof arises from the use of the Open image in new window command that is used to define recursive functions in HOL [22]. But the user is also allowed to specify custom equations for functions, in which case we will use heuristics to generate and prove the appropriate induction theorem. For simplicity, we will use the term (defining) equation uniformly to refer to any set of equations, either default ones or ones specified by the user. Embedding partially-specified functions – in particular, proving the certificate theorem about them – is currently not supported. In the future, we plan to leverage the domain predicate as produced by Open image in new window to generate conditional theorems.
4 Terms, Matching and Substitution
The compiler transforms the initial \(\mathsf {term}\) type (Fig. 1a) through various intermediate stages. This section gives an overview and introduces necessary terminology.
Preliminaries. The function arrow in HOL is \(\Rightarrow \). The cons operator on lists is the infix \(\#\).
Throughout the paper, the concept of mappings is pervasive: We use the type notation \(\alpha \rightharpoonup \beta \) to denote a function \(\alpha \Rightarrow \beta \;\mathsf {option}\). In certain contexts, a mapping may also be called an environment. We write mapping literals using brackets: \([a \Rightarrow x, b \Rightarrow y, \ldots ]\). If it is clear from the context that \(\sigma \) is defined on a, we often treat the lookup \(\sigma \;a\) as returning an \(x\, {:}{:}\, \beta \).
The functions \(\mathsf {dom}\, {:}{:}\, (\alpha \rightharpoonup \beta ) \Rightarrow \alpha \;\mathsf {set}\) and \(\mathsf {range}\, {:}{:}\, (\alpha \rightharpoonup \beta ) \Rightarrow \beta \;\mathsf {set}\) return the domain and range of a mapping, respectively.
Dropping entries from a mapping is denoted by \(\sigma - k\), where \(\sigma \) is a mapping and k is either a single key or a set of keys. We use \(\sigma ' \subseteq \sigma \) to denote that \(\sigma '\) is a sub-mapping of \(\sigma \), that is, \(\mathsf {dom}\;\sigma ' \subseteq \mathsf {dom}\;\sigma \) and \(\forall a \in \mathsf {dom}\;\sigma '.\; \sigma '\;a = \sigma \;a\).
Merging two mappings \(\sigma \) and \(\rho \) is denoted with \(\sigma \mathbin {+\!\!+}\rho \). It constructs a new mapping with the union domain of \(\sigma \) and \(\rho \). Entries from \(\rho \) override entries from \(\sigma \). That is, \(\rho \subseteq \sigma \mathbin {+\!\!+}\rho \) holds, but not necessarily \(\sigma \subseteq \sigma \mathbin {+\!\!+}\rho \).
All mappings and sets are assumed to be finite. In the formalization, this is enforced by using subtypes of \(\rightharpoonup \) and \(\mathsf {set}\). Note that one cannot define datatypes by recursion through sets for cardinality reasons. However, for finite sets, it is possible. This is required to construct the various term types. We leverage facilities of Blanchette et al.’s Open image in new window command to define these subtypes [7].
Standard Functions. All type constructors that we use (\(\rightharpoonup \), \(\mathsf {set}\), \(\mathsf {list}\), \(\mathsf {option}\), ...) support the standard operations \(\mathsf {map}\) and \(\mathsf {rel}\). For lists, \(\mathsf {map}\) is the regular covariant map. For mappings, the function has the type \((\beta \Rightarrow \gamma ) \Rightarrow (\alpha \rightharpoonup \beta ) \Rightarrow (\alpha \rightharpoonup \gamma )\). It leaves the domain unchanged, but applies a function to the range of the mapping.
Function \(\mathsf {rel}_\tau \) lifts a binary predicate \(P\, {:}{:}\, \alpha \Rightarrow \alpha \Rightarrow \mathsf {bool}\) to the type constructor \(\tau \). We call this lifted relation the relator for a particular type.
Definition 1 (Set relator)
Definition 2 (Mapping relator)
Term Types. There are four distinct term types: \(\mathsf {term}\), \(\mathsf {nterm}\), \(\mathsf {pterm}\), and \(\mathsf {sterm}\). All of them support the notions of free variables, matching and substitution. Free variables are always a finite set of strings. Matching a term against a pattern yields an optional mapping of type \(\mathsf {string} \rightharpoonup \alpha \) from free variable names to terms.
Note that the type of patterns is itself \(\mathsf {term}\) instead of a dedicated pattern type. The reason is that we have to subject patterns to a linearity constraint anyway and may use this constraint to carve out the relevant subset of terms:
Definition 3
A term is linear if there is at most one occurrence of any variable, it contains no abstractions, and in an application \(f\mathbin {\$}x\), f must not be a free variable. The HOL predicate is called \(\mathsf {linear}\, {:}{:}\, \mathsf {term} \Rightarrow \mathsf {bool}\).
Because of the similarity of operations across the term types, they are all instances of the \(\mathsf {term}\) type class. Note that in Isabelle, classes and types live in different namespaces. The \(\mathsf {term}\) type and the \(\mathsf {term}\) type class are separate entities.
Definition 4
\(\mathsf {matchs}\) matches a list of patterns and terms sequentially, producing a single mapping
\(\mathsf {closed}\;t\) is an abbreviation for \(\mathsf {frees}\;t = \emptyset \)
\(\mathsf {closed}\;\sigma \) is an overloading of \(\mathsf {closed}\), denoting that all values in a mapping are closed
Additionally, some (obvious) axioms have to be satisfied. We do not strive to fully specify an abstract term algebra. Instead, the axioms are chosen according to the needs of this formalization.
A notable deviation from matching as discussed in term rewriting literature is that the result of matching is only well-defined if the pattern is linear.
Definition 5
An equation is a pair of a pattern (left-hand side) and a term (right-hand side). The pattern is of the form \(f\mathbin \$p_1\mathbin \$\ldots \mathbin \$p_n\), where f is a constant (i.e. of the form \(\mathsf {Const}\; name \)). We refer to both f or \( name \) interchangeably as the function symbol of the equation.
Following term rewriting terminology, we sometimes refer to an equation as rule.
4.1 De Bruijn terms ( Open image in new window )
The definition of \(\mathsf {term}\) is almost an exact copy of Isabelle’s internal term type, with the notable omissions of type information and schematic variables (Fig. 1a). The implementation of \(\beta \)-reduction is straightforward via index shifting of bound variables.
4.2 Named Bound Variables ( Open image in new window )
The \(\mathsf {nterm}\) type is similar to \(\mathsf {term}\), but removes the distinction between bound and free variables. Instead, there are only named variables. As mentioned in the previous section, we forbid substitution of terms that are not closed in order to avoid capture. This is also reflected in the syntactic side conditions of the correctness proofs (Sect. 5.1).
4.3 Explicit Pattern Matching ( Open image in new window )
Functions in HOL are usually defined using implicit pattern matching, that is, the terms \(p_i\) occurring on the left-hand side \(\left\langle \mathsf {f}\;p_1\;\ldots \;p_n\right\rangle \) of an equation must be constructor patterns. This is also common among functional programming languages like Haskell or OCaml. CakeML only supports explicit pattern matching using case expressions. A function definition consisting of multiple defining equations must hence be translated to the form \(f = \lambda x.\;\mathsf {\mathbf {case}}\;x\;\mathsf {\mathbf {of}}\;\ldots \). The elimination proceeds by iteratively removing the last parameter in the block of equations until none are left.
In our formalization, we opted to combine the notion of abstraction and case expression, yielding case abstractions, represented as the \(\mathsf {Pabs}\) constructor. This is similar to the fn construct in Standard ML, which denotes an anonymous function that immediately matches on its argument [28]. The same construct also exists in Haskell with the LambdaCase language extension. We chose this representation mainly for two reasons: First, it allows for a simpler language grammar because there is only one (shared) constructor for abstraction and case expression. Second, the elimination procedure outlined above does not have to introduce fresh names in the process. Later, when translating to CakeML syntax, fresh names are introduced and proved correct in a separate step.
The set of pairs of pattern and right-hand side inside a case abstraction is referred to as clauses. As a short-hand notation, we use \(\varLambda \{ p_1 \Rightarrow t_1, p_2 \Rightarrow t_2, \ldots \}\).
4.4 Sequential Clauses ( Open image in new window )
In the term rewriting fragment of HOL, the order of rules is not significant. If a rule matches, it can be applied, regardless when it was defined or proven. This is reflected by the use of sets in the rule and term types. For CakeML, the rules need to be applied in a deterministic order, i.e. sequentially. The \(\mathsf {sterm}\) type only differs from \(\mathsf {pterm}\) by using \(\mathsf {list}\) instead of \(\mathsf {set}\). Hence, case abstractions use list brackets: \(\varLambda [p_1 \Rightarrow t_1, p_2 \Rightarrow t_2, \ldots ]\).
4.5 Irreducible Terms ( Open image in new window )
CakeML distinguishes between expressions and values. Whereas expressions may contain free variables or \(\beta \)-redexes, values are closed and fully evaluated. Both have a notion of abstraction, but values differ from expressions in that they contain an environment binding free variables.
Consider the expression \((\lambda x. \lambda y. x)\,(\lambda z. z)\), which is rewritten (by \(\beta \)-reduction) to \(\lambda y. \lambda z. z\). Note how the bound variable x disappears, since it is replaced. This is contrary to how programming languages are usually implemented: evaluation does not happen by substituting the argument term t for the bound variable x, but by recording the binding \(x \mapsto t\) in an environment [24]. A pair of an abstraction and an environment is usually called a closure [25, 41].
Note the nested structure of the closure, whose environment itself contains a closure.
5 Intermediate Semantics and Compiler Phases
5.1 Side Conditions
Patterns must be linear, and constructors in patterns must be fully applied.
Definitions must have at least one parameter on the left-hand side (Sect. 5.6).
The right-hand side of an equation refers only to free variables occurring in patterns on the left-hand side and contain no dangling de Bruijn indices.
There are no two defining equations \( lhs = rhs _1\) and \( lhs = rhs _2\) such that \( rhs _1 \ne rhs _2\).
For each pair of equations that define the same constant, their arity must be equal and their patterns must be compatible (Sect. 5.3).
There is at least one equation.
Variable names occurring in patterns must not overlap with constant names (Sect. 5.7).
Any occurring constants must either be defined by an equation or be a constructor.
The conditions for the subsequent phases are sufficiently similar that we do not list them again.
In the formalization, we use named contexts to fix the rules and assumptions on them (locales in Isabelle terminology). Each phase has its own locale, together with a proof that after compilation, the preconditions of the next phase are satisfied. Correctness proofs assume the above conditions on R and similar conditions on the term that is reduced. For brevity, this is usually omitted in our presentation.
5.2 Naming Bound Variables: From Open image in new window to Open image in new window
Isabelle uses de Bruijn indices in the term language for the following two reasons: For substitution, there is no need to rename bound variables. Additionally, \(\alpha \)-equivalent terms are equal. In implementations of programming languages, these advantages are not required: Typically, substitutions do not happen inside abstractions, and there is no notion of equality of functions. Therefore CakeML uses named variables and in this compilation step, we get rid of de Bruijn indices.
The “named” semantics is based on the \(\mathsf {nterm}\) type. The rules that are changed from the original semantics (Fig. 1b) are given in Fig. 3 (Fun and Arg remain unchanged). Notably, \(\beta \)-reduction reuses the substitution function.
For the correctness proof, we need to establish a correspondence between \(\mathsf {term}\)s and \(\mathsf {nterm}\)s. Translation from \(\mathsf {nterm}\) to \(\mathsf {term}\) is trivial: Replace bound variables by the number of abstractions between occurrence and where they were bound in, and keep free variables as they are. This function is called \(\mathsf {nterm\_to\_term}\).
The other direction is not unique and requires introduction of fresh names for bound variables. In our formalization, we have chosen to use a monad to produce these names. This function is called \(\mathsf {term\_to\_nterm}\). We can also prove the obvious property \(\mathsf {nterm\_to\_term}\;(\mathsf {term\_to\_nterm}\;t) = t\), where t is a \(\mathsf {term}\) without dangling de Bruijn indices.
Generation of fresh names in general can be thought of as picking a string that is not an element of a (finite) set of already existing names. For Isabelle, the Nominal framework [42, 43] provides support for reasoning over fresh names, but unfortunately, its definitions are not executable.
Theorem 1 (Correctness of compilation)
5.3 Explicit Pattern Matching: From Open image in new window to Open image in new window
Usually, functions in HOL are defined using implicit pattern matching, that is, the left-hand side of an equation is of the form \(\left\langle \mathsf {f}\;p_1\;\ldots \;p_n\right\rangle \), where the \(p_i\) are patterns over datatype constructors. For any given function \(\mathsf {f}\), there may be multiple such equations. In this compilation step, we transform sets of equations for \(\mathsf {f}\) defined using implicit pattern matching into a single equation for \(\mathsf {f}\) of the form \(\left\langle \mathsf {f}\right\rangle = \varLambda \;\textit{C}\), where \(\textit{C}\) is a set of clauses.
Semantics. The target semantics is given in Fig. 4 (the Fun and Arg rules from previous semantics remain unchanged). We start out with a rule set \(\textit{R}\) that allows only implicit pattern matching. After elimination, only explicit pattern matching remains. The modified Step rule merely replaces a constant by its definition, without taking arguments into account.
This compatibility constraint ensures that any two overlapping patterns (of the same column) \(p_{i,k}\) and \(p_{j,k}\) are equal and are thus appropriately grouped together in the elimination procedure. We require all defining equations of a constant to be mutually compatible. Equations violating this constraint will be flagged during embedding (Sect. 3), whereas the pattern elimination algorithm always succeeds.
While this rules out some theoretically possible pattern combinations (e.g. the diagonal function [36, Sect. 5.5]), in practice, we have not found this to be a problem: All of the function definitions we have tried (Sect. 8) satisfied pattern compatibility (after automatic renaming of pattern variables). As a last resort, the user can manually instantiate function equations. Although this will always lead to a pattern compatible definition, it is not done automatically, due to the potential blow-up.
5.4 Sequentialization: From Open image in new window to Open image in new window
The semantics of \(\mathsf {pterm}\) and \(\mathsf {sterm}\) differ only in rule Step and Beta. Figure 5 shows the modified rules. Instead of any matching clause, the first matching clause in a case abstraction is picked.
5.5 Big-Step Semantics for Open image in new window
This big-step semantics for \(\mathsf {sterm}\) is not a compiler phase but moves towards the desired evaluation semantics. In this first step, we reuse the \(\mathsf {sterm}\) type for evaluation results, instead of evaluating to the separate type \(\mathsf {value}\). This allows us to ignore environment capture in closures for now.
All previous \(\longrightarrow \) relations were parametrized by a rule set. Now the big-step predicate is of the form \(\textit{rs}, \sigma \vdash t \downarrow t'\) where \(\sigma \, {:}{:}\, \mathsf {string}\rightharpoonup \mathsf {sterm}\) is a variable environment.
This semantics also introduces the distinction between constructors and defined constants. If \(\mathsf {C}\) is a constructor, the term \(\left\langle \mathsf {C}\;t_1\;\ldots \;t_n\right\rangle \) is evaluated to \(\left\langle \mathsf {C}\;t'_1\;\ldots \;t'_n\right\rangle \) where the \(t_i'\) are the results of evaluating the \(t_i\).
The full set of rules is shown in Fig. 6. They deserve a short explanation:
- Const.
Constants are retrieved from the rule set \(\textit{rs}\).
- Var.
Variables are retrieved from the environment \(\sigma \).
- Abs.
In order to achieve the intended invariant, abstractions are evaluated to their fully substituted form.
- Comb.
Function application \(t \;\$\; u\) first requires evaluation of t into an abstraction \(\varLambda \;\textit{cs}\) and evaluation of u into an arbitrary term \(u'\). Afterwards, we look for a clause matching \(u'\) in \(\textit{cs}\), which produces a local variable environment \(\sigma '\), possibly overwriting existing variables in \(\sigma \). Finally, we evaluate the right-hand side of the clause with the combined global and local variable environment.
- Constr.
For a constructor application \(\left\langle \mathsf {C}\;t_1\;\ldots \right\rangle \), evaluate all \(t_i\). The set constructors is an implicit parameter of the semantics.
Lemma 1 (Closedness invariant)
If \(\sigma \) contains only closed terms, \(\mathsf {frees}\;t \subseteq \mathsf {dom}\;\sigma \) and \(\textit{rs}, \sigma \vdash t \downarrow t'\), then \(t'\) is closed.
Correctness of the big-step w.r.t. the small-step semantics is proved easily by induction on the former:
Lemma 2
By setting \(\sigma = []\), we obtain:
Theorem 2 (Correctness)
\(\textit{rs}, [] \vdash t \downarrow u \wedge \mathsf {closed}\;t \rightarrow \textit{rs}\vdash t \longrightarrow ^* u\)
5.6 Evaluation Semantics: Refining Open image in new window to Open image in new window
At this point, we introduce the concept of values into the semantics, while still keeping the rule set (for constants) and the environment (for variables) separate. The evaluation rules are specified in Fig. 7 and represent a departure from the original rewriting semantics: a term does not evaluate to another term but to an object of a different type, a \(\mathsf {value}\). We still use \(\downarrow \) as notation, because big-step and evaluation semantics can be disambiguated by their types.
The evaluation model itself is fairly straightforward. As explained in Sect. 4.5, abstraction terms are evaluated to closures capturing the current variable environment. Note that at this point, recursive closures are not treated differently from non-recursive closures. In a later stage, when \(\textit{rs}\) and \(\sigma \) are merged, this distinction becomes relevant.
- Abs.
Abstraction terms are evaluated to a closure capturing the current environment.
- Comb.
As before, in an application \(t\mathbin {\$}u\), t must evaluate to a closure \(\mathsf {Vabs}\;\textit{cs}\;\sigma '\). The evaluation result of u is then matched against the clauses \(\textit{cs}\), producing an environment \(\sigma ''\). The right-hand side of the clause is then evaluated using \(\sigma '\mathbin {+\!\!+}\sigma ''\); the original environment \(\sigma \) is effectively discarded.
- RecComb.
Similar as above. Finding the matching clause is a two-step process: First, the appropriate clause list is selected by name of the currently active function. Then, matching is performed.
- Constr.
As before, for an n-ary application \(\left\langle \mathtt {C}\;t_1\;\ldots \right\rangle \), where \(\mathsf {C}\) is a data constructor, we evaluate all \(t_i\). The result is a \(\mathsf {Vconstr}\) value.
Conversion Between Open image in new window and Open image in new window . To establish a correspondence between evaluating a term to an \(\mathsf {sterm}\) and to a \(\mathsf {value}\), we apply the same trick as in Sect. 5.2. Instead of specifying a complicated relation, we translate \(\mathsf {value}\) back to \(\mathsf {sterm}\): simply apply the substitutions in the captured environments to the clauses.
The translation rules for \(\mathsf {Vabs}\) and \(\mathsf {Vrecabs}\) are kept similar to the Abs rule from the big-step semantics (Fig. 6). Roughly speaking, the big-step semantics always keeps terms fully substituted, whereas the evaluation semantics defers substitution.
Similarly to Sect. 5.2, we can also define a function \(\mathsf {sterm\_to\_value}\, {:}{:}\, \mathsf {sterm} \Rightarrow \mathsf {value}\) and prove that one function is the inverse of the other.
Matching. The \(\mathsf {value}\) type, instead of using binary function application as all other term types, uses n-ary constructor application. This introduces a conceptual mismatch between (binary) patterns and values. To make the proofs easier, we introduce an intermediate type of n-ary patterns. This intermediate type can be optimized away by fusion.
Correctness. The correctness proof requires a number of interesting lemmas.
Lemma 3 (Substitution before evaluation)
Assuming that a term t can be evaluated to a value u given a closed environment \(\sigma \), it can be evaluated to the same value after substitution with a sub-environment \(\sigma '\). Formally: \(\textit{rs}, \sigma \vdash t \downarrow u \wedge \sigma ' \subseteq \sigma \rightarrow \textit{rs}, \sigma \vdash \mathsf {subst}\;\sigma '\;t \downarrow u\)
This justifies the “pre-substitution” exhibited by the Abs rule in the big-step semantics in contrast to the environment-capturing Abs rule in the evaluation semantics.
Theorem 3 (Correctness)
Let \(\sigma \) be a closed environment and t a term which only contains free variables in \(\mathsf {dom}\;\sigma \). Then, an evaluation to a value \(\textit{rs}, \sigma \vdash t \downarrow v\) can be reproduced in the big-step semantics as \(\textit{rs}', \mathsf {map}\;\mathsf {value\_to\_sterm}\;\sigma \vdash t \downarrow \mathsf {value\_to\_sterm}\;v\), where \(\textit{rs}' = [( name , \mathsf {value\_to\_sterm}\; rhs ) \;|\; ( name , rhs ) \leftarrow \textit{rs}]\).
Instantiating the Correctness Theorem. The correctness theorem states that, for any given evaluation of a term t with a given environment \(\textit{rs}, \sigma \) containing \(\mathsf {value}\)s, we can reproduce that evaluation in the big-step semantics using a derived list of rules \(\textit{rs}'\) and an environment \(\sigma '\) containing \(\mathsf {sterm}\)s that are generated by the \(\mathsf {value\_to\_sterm}\) function. But recall the diagram in Fig. 2. In our scenario, we start with a given rule set of \(\mathsf {sterm}\)s (that has been compiled from a rule set of \(\mathsf {term}\)s). Hence, the correctness theorem only deals with the opposite direction.
It remains to construct a suitable \(\textit{rs}\) such that applying \(\mathsf {value\_to\_sterm}\) to it yields the given \(\mathsf {sterm}\) rule set. We can exploit the side condition (Sect. 5.1) that all bindings define functions, not constants:
Definition 6 (Global clause set)
The mapping \(\mathsf {global\_css}\, {:}{:}\, \mathsf {string} \rightharpoonup ((\mathsf {term} \times \mathsf {sterm})\;\mathsf {list})\) is obtained by stripping the \(\mathsf {Sabs}\) constructors from all definitions and converting the resulting list to a mapping.
For each definition with name f we define a corresponding term \(v_f = \mathsf {Vrecabs}\;\mathsf {global\_css}\;f\;[]\). In other words, each function is now represented by a recursive closure bundling all functions. Applying \(\mathsf {value\_to\_sterm}\) to \(v_f\) returns the original definition of f. Let \(\textit{rs}\) denote the original \(\mathsf {sterm}\) rule set and \(\textit{rs}_\text {v}\) the environment mapping all f’s to the \(v_f\)’s.
The variable environments \(\sigma \) and \(\sigma '\) can safely be set to the empty mapping, because top-level terms are evaluated without any free variable bindings.
Corollary 1 (Correctness)
\(\textit{rs}_\text {v}, [] \vdash t \downarrow v \rightarrow \textit{rs}, [] \vdash t \downarrow \mathsf {value\_to\_sterm}\;v\)
Note that this step was not part of the compiler (although \(\textit{rs}_\text {v}\) is computable) but it is a refinement of the semantics to support a more modular correctness proof.
5.7 Evaluation with Recursive Closures
- Const/Var.
Constant definition and variable values are both retrieved from the same environment \(\sigma \). We have opted to keep the distinction between constants and variables in the \(\mathsf {sterm}\) type to avoid the introduction of another term type.
- Abs.
Identical to the previous evaluation semantics. Note that evaluation never creates recursive closures at run-time (only at compile-time, see Sect. 5.6). Anonymous functions, e.g. in the term \(\left\langle \mathsf {map}\;(\lambda x.\;x)\right\rangle \), are evaluated to non-recursive closures.
- Comb.
Identical to the previous evaluation semantics.
- RecComb.
Almost identical to the evaluation semantics. Additionally, for each function \(( name , cs ) \in \textit{css}\), a new recursive closure \(\mathsf {Vrecabs}\;\textit{css}\; name \;\sigma '\) is created and inserted into the environment. This ensures that after the first call to a recursive function, the function itself is present in the environment to be called recursively, without having to introduce coinductive environments.
- Constr.
Identical to the evaluation semantics.
Conflating Constants and Variables. By merging the rule set \(\textit{rs}\) with the variable environment \(\sigma \), it becomes necessary to discuss possible clashes. Previously, the syntactic distinction between \(\mathsf {Svar}\) and \(\mathsf {Sconst}\) meant that \(\left\langle x\right\rangle \) and \(\left\langle \mathsf {x}\right\rangle \) are not ambiguous: all semantics up to the evaluation semantics clearly specify where to look for the substitute. This is not the case in functional languages where functions and variables are not distinguished syntactically.
Instead, we rely on the fact that the initial rule set only defines constants. All variables are introduced by matching before \(\beta \)-reduction (that is, in the Comb and RecComb rules). The Abs rule does not change the environment. Hence it suffices to assume that variables in patterns must not overlap with constant names (see Sect. 5.1).
Correspondence Relation. Both constant definitions and values of variables are recorded in a single environment \(\sigma \). This also applies to the environment contained in a closure. The correspondence relation thus needs to take a different sets of bindings in closures into account.
Hence, we define a relation \(\approx _\text {v}\) that is implicitly parametrized on the rule set \(\textit{rs}\) and compares environments. We call it right-conflating, because in a correspondence \(v \approx _\text { v} u\), any bound environment in u is thought to contain both variables and constants, whereas in v, any bound environment contains only variables.
Definition 7 (Right-conflating correspondence)
Consequently, \(\approx _\text {v}\) is not reflexive.
Correctness. The correctness lemma is straightforward to state:
Theorem 4 (Correctness)
Let \(\sigma \) be an environment, t be a closed term and v a value such that \(\sigma \vdash t \downarrow v\). If for all constants x occurring in t, \(\textit{rs}\;x \approx _\text { v} \sigma \;x\) holds, then there is an u such that \(\textit{rs}, [] \vdash t \downarrow u\) and \(u \approx _\text { v} v\).
As usual, the rather technical proof proceeds via induction over the semantics (Fig. 8). It is important to note that the global clause set construction (Sect. 5.6) satisfies the preconditions of this theorem:
Lemma 4
Because \(\approx _\text {v}\) is defined coinductively, the proof of this precondition proceeds by coinduction.
5.8 CakeML
CakeML is a verified implementation of a subset of Standard ML [24, 40]. It comprises a parser, type checker, formal semantics and backend for machine code. The semantics has been formalized in Lem [29], which allows export to Isabelle theories.
Our compiler targets CakeML’s abstract syntax tree. However, we do not make use of certain CakeML features; notably mutable cells, modules, and literals. We have derived a smaller, executable version of the original CakeML semantics, called CupCakeML, together with an equivalence proof. The correctness proof of the last compiler phase establishes a correspondence between CupCakeML and the final semantics of our compiler pipeline.
For the correctness proof of the CakeML compiler, its authors have extracted the Lem specification into HOL4 theories [1]. In our work, we directly target CakeML abstract syntax trees (thereby bypassing the parser) and use its big-step semantics, which we have extracted into Isabelle.2
CakeML does not combine abstraction and pattern matching. For that reason, we have to translate \(\varLambda \;[p_1 \Rightarrow t_1, \ldots ]\) into \(\varLambda x.\;\mathsf {\mathbf {case}}\;x\;\mathsf {\mathbf {of}}\;p_1 \Rightarrow t_1 \;|\; \ldots \), where x is a fresh variable name. We reuse the \(\mathsf {fresh}\) monad to obtain a bound variable name. Note that it is not necessary to thread through already created variable names, only existing names. The reason is simple: a generated variable is bound and then immediately used in the body. Shadowing it somewhere in the body is not problematic.
CakeML has two distinct syntactic categories for identifiers (that can represent variables or functions) and data constructors. Our term types however have two distinct syntactic categories for constants (that can represent functions or data constructors) and variables. The necessary prerequisites to deal with this are already present in the ML-style evaluation semantics (Sect. 5.7) which conflates constants and variables, but has a dedicated Constr rule for data constructors.
Types. During embedding (Sect. 3), all type information is erased. Yet, CakeML performs some limited form of type checking at run-time: constructing and matching data must always be fully applied. That is, data constructors must always occur with all arguments supplied on right-hand and left-hand sides.
Fully applied constructors in terms can be easily guaranteed by simple pre-processing. For patterns however, this must be ensured throughout the compilation pipeline; it is (like other syntactic constraints) another side condition imposed on the rule set (Sect. 5.1).
The shape of datatypes and constructors is managed in CakeML’s environment. This particular piece of information is allowed to vary in closures, since ML supports local type definitions. Tracking this would greatly complicate our proofs. Hence, we fix a global set of constructors and enforce that all values use exactly that one.
Correspondence Relation. We define two different correspondence relations: One for values and one for expressions.
Definition 8 (Expression correspondence)
We will explain each of the rules briefly here.
- Var.
Variables are directly related by identical name.
- Const.
As described earlier, constructors are treated specially in CakeML. In order to not confuse functions or variables with data constructors themselves, we require that the constant name is not a constructor.
- Constr.
Constructors are directly related by identical name, and recursively related arguments.
- App.
CakeML does not just support general function application but also unary and binary operators. In fact, function application is the binary operator \(\mathsf {Opapp}\). We never generate other operators. Hence the correspondence is restricted to \(\mathsf {Opapp}\).
- Fun/Mat.
Observe the symmetry between these two cases: In our term language, matching and abstraction are combined, which is not the case in CakeML. This means we relate a case abstraction to a CakeML function containing a match, and a case abstraction applied to a value to just a CakeML match.
There is no separate relation for patterns, because their translation is simple.
The value correspondence (\(\mathsf {rel\_v}\)) is structurally simpler. In the case of constructor values (\(\mathsf {Vconstr}\) and \(\mathsf {Cake.Conv}\)), arguments are compared recursively. Closures and recursive closures are compared extensionally, i.e. only bindings that occur in the body are checked recursively for correspondence.
Correctness. We use the same trick as in Sect. 5.6 to obtain a suitable environment for CakeML evaluation based on the rule set \(\textit{rs}\).
Theorem 5 (Correctness)
If the compiled expression \(\mathsf {sterm\_to\_cake}\;t\) terminates with a value u in the CakeML semantics, there is a value v such that \(\mathsf {rel\_v}\;v\;u\) and \(\textit{rs}\vdash t \downarrow v\).
6 Composition
The complete compiler pipeline consists of multiple phases. Correctness is justified for each phase between intermediate semantics and correspondence relations, most of which are rather technical. Whereas the compiler may be complex and impenetrable, the trustworthiness of the constructions hinges on the obviousness of those correspondence relations.
Fortunately, under the assumption that terms to be evaluated and the resulting values do not contain abstractions – or closures, respectively – all of the correspondence relations collapse to simple structural equality: two terms are related if and only if one can be converted to the other by consistent renaming of term constructors.
This theorem directly relates the evaluation of a term t in the full CakeML (including mutability and exceptions) to the evaluation in the initial higher-order term rewriting semantics. The evaluation of t happens using the environment produced from the initial rule set. Hence, the theorem can be interpreted as the correctness of the pseudo-ML expression \(\mathsf {\mathbf {let\ rec}}\;\textit{rs}\;\mathsf {\mathbf {in}}\;t\).
7 Dictionary Construction
Isabelle’s type system supports type classes (or simply classes) [18, 44] whereas CakeML does not. In order to not complicate the correctness proofs, type classes are not supported by our embedded term language either. Instead, we eliminate classes and instances by a dictionary construction [19] before embedding into the term language. Haftmann and Nipkow give a pen-and-paper correctness proof of this construction [17, Sect. 4.1]. We augmented the dictionary construction with the generation of a certificate theorem that shows the equivalence of the two versions of a function, with type classes and with dictionaries. This section briefly explains our dictionary construction.
Figure 9 shows a simple example of a dictionary construction. Type variables may carry class constraints (e.g. \(\alpha \, {:}{:}\, \mathsf {add}\)). The basic idea is that classes become dictionaries containing the functions of that class; class instances become dictionary definitions. Dictionaries are realized as datatypes. Class constraints become additional dictionary parameters for that class. In the example, class \(\mathsf {add}\) becomes \(\mathsf {dict\_add}\); function f is translated into \(f'\) which takes an additional parameter of type \(\mathsf {dict\_add}\). In reality our tool does not produce the Isabelle source code shown in Fig. 9b but performs the constructions internally. The correctness lemma \(\mathsf {f'\_eq}\) is proved automatically. Its precondition expresses that the dictionary must contain exactly the function(s) of class \(\mathsf {add}\). For any monomorphic instance, the precondition can be proved outright based on the certificate theorems proved for each class instance as explained next.
8 Evaluation
We have tried out our compiler on examples from existing Isabelle formalizations. This includes an implementation of Huffman encoding, lists and sorting, string functions [39], and various data structures from Okasaki’s book [34], including binary search trees, pairing heaps, and leftist heaps. These definitions can be processed with slight modifications: functions need to be totalized (see the end of Sect. 3). However, parts of the tactics required for deep embedding proofs (Sect. 3) are too slow on some functions and hence still need to be optimized.
9 Conclusion
For this paper we have concentrated on the compiler from Isabelle/HOL to CakeML abstract syntax trees. Partial correctness is proved w.r.t. the big-step semantics of CakeML. In the next step we will link our work with the compiler from CakeML to machine code. Tan et al. [40, Sect. 10] prove a correctness theorem that relates their semantics with the execution of the compiled machine code. In that paper, they use a newer iteration of the CakeML semantics (functional big-step [35]) than we do here. Both semantics are still present in the CakeML source repository, together with an equivalence proof. Another important step consists of targeting CakeML’s native types, e.g. integer numbers and characters.
Evaluation of our compiled programs is already possible via Isabelle’s predicate compiler [5], which allows us to turn CakeML’s big-step semantics into an executable function. We have used this execution mechanism to establish for sample programs that they terminate successfully. We also plan to prove that our compiled programs terminate, i.e. total correctness.
The total size of this formalization, excluding theories extracted from Lem, is currently approximately 20000 lines of proof text (90 %) and ML code (10 %). The ML code itself produces relatively simple theorems, which means that there are less opportunities for it to go wrong. This constitutes an improvement over certifying approaches that prove complicated properties in ML.
Footnotes
- 1.
All Isabelle definitions and proofs can be found on the paper website:, or archived as.
- 2.
Based on a repository snapshot from March 27, 2017 (0c48672).
References
- 1.The HOL System Description (2014).
- 2.Anand, A., Appel, A.W., Morrisett, G., Paraskevopoulou, Z., Pollack, R., Bélanger, O.S., Sozeau, M., Weaver, M.: CertiCoq: a verified compiler for Coq. In: CoqPL 2017: Third International Workshop on Coq for Programming Languages (2017)Google Scholar
- 3.Augustsson, L.: Compiling pattern matching. In: Jouannnaud, J.P. (ed.) Functional Programming Languages and Computer Architecture, pp. 368–381. Springer, Heidelberg (1985)CrossRefGoogle Scholar
- 4.Benton, N., Hur, C.: Biorthogonality, step-indexing and compiler correctness. In: Hutton, G., Tolmach, A.P. (eds.) ICFP 2009, pp. 97–108. ACM (2009)Google Scholar
- 5.Berghofer, S., Bulwahn, L., Haftmann, F.: Turning inductive into equational specifications. In: Berghofer, S., Nipkow, T., Urban, C., Wenzel, M. (eds.) TPHOLs 2009. LNCS, vol. 5674, pp. 131–146. Springer, Heidelberg (2009). Scholar
- 6.Berghofer, S., Nipkow, T.: Executing higher order logic. In: Callaghan, P., Luo, Z., McKinna, J., Pollack, R., Pollack, R. (eds.) TYPES 2000. LNCS, vol. 2277, pp. 24–40. Springer, Heidelberg (2002). Scholar
- 7.Blanchette, J.C., Hölzl, J., Lochbihler, A., Panny, L., Popescu, A., Traytel, D.: Truly modular (co)datatypes for Isabelle/HOL. In: Klein, G., Gamboa, R. (eds.) ITP 2014. LNCS, vol. 8558, pp. 93–110. Springer, Cham (2014). Scholar
- 8
- 9.Boyer, R.S., Strother Moore, J.: Single-threaded objects in ACL2. In: Krishnamurthi, S., Ramakrishnan, C.R. (eds.) PADL 2002. LNCS, vol. 2257, pp. 9–27. Springer, Heidelberg (2002). Scholar
- 10.de Bruijn, N.G.: Lambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the church-rosser theorem. Indag. Math. (Proceedings) 75(5), 381–392 (1972)MathSciNetCrossRefGoogle Scholar
- 11.Chlipala, A.: A verified compiler for an impure functional language. In: Hermenegildo, M.V., Palsberg, J. (eds.) POPL 2010, pp. 93–106. ACM (2010)Google Scholar
- 12.Crow, J., Owre, S., Rushby, J., Shankar, N., Stringer-Calvert, D.: Evaluating, testing, and animating PVS specifications. Technical report, Computer Science Laboratory, SRI International, Menlo Park, CA, March 2001Google Scholar
- 13.Fallenstein, B., Kumar, R.: Proof-producing reflection for HOL. In: Urban, C., Zhang, X. (eds.) ITP 2015. LNCS, vol. 9236, pp. 170–186. Springer, Cham (2015). Scholar
- 14.Flatau, A.D.: A verified implementation of an applicative language with dynamic storage allocation. Ph.D. thesis, University of Texas at Austin (1992)Google Scholar
- 15.Forster, Y., Kunze, F.: Verified extraction from coq to a lambda-calculus. In: The 8th Coq Workshop (2016)Google Scholar
- 16.Greve, D.A., Kaufmann, M., Manolios, P., Moore, J.S., Ray, S., Ruiz-Reina, J., Sumners, R., Vroon, D., Wilding, M.: Efficient execution in an automated reasoning environment. J. Funct. Program. 18(1), 15–46 (2008)CrossRefGoogle Scholar
- 17.Haftmann, F., Nipkow, T.: Code generation via higher-order rewrite systems. In: Blume, M., Kobayashi, N., Vidal, G. (eds.) FLOPS 2010. LNCS, vol. 6009, pp. 103–117. Springer, Heidelberg (2010). Scholar
- 18.Haftmann, F., Wenzel, M.: Constructive type classes in Isabelle. In: Altenkirch, T., McBride, C. (eds.) TYPES 2006. LNCS, vol. 4502, pp. 160–174. Springer, Heidelberg (2007). Scholar
- 19.Hall, C.V., Hammond, K., Jones, S.L.P., Wadler, P.L.: Type classes in Haskell. ACM Trans. Program. Lang. Syst. 18(2), 109–138 (1996)CrossRefGoogle Scholar
- 20.Hermida, C., Reddy, U.S., Robinson, E.P.: Logical relations and parametricity - a Reynolds programme for category theory and programming languages. Electron. Notes Theoret. Comput. Sci. 303, 149–180 (2014)MathSciNetCrossRefGoogle Scholar
- 21.Hupel, L.: Dictionary construction. Archive of Formal Proofs, May 2017., Formal proof development
- 22.Krauss, A.: Partial and nested recursive function definitions in higher-order logic. J. Autom. Reason. 44(4), 303–336 (2010)MathSciNetCrossRefGoogle Scholar
- 23.Krauss, A., Schropp, A.: A mechanized translation from higher-order logic to set theory. In: Kaufmann, M., Paulson, L.C. (eds.) ITP 2010. LNCS, vol. 6172, pp. 323–338. Springer, Heidelberg (2010). Scholar
- 24.Kumar, R., Myreen, M.O., Norrish, M., Owens, S.: CakeML: a verified implementation of ML. In: POPL 2014, pp. 179–191. ACM (2014)Google Scholar
- 25.Landin, P.J.: The mechanical evaluation of expressions. Comput. J. 6(4), 308–320 (1964)CrossRefGoogle Scholar
- 26.Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009).
- 27.Letouzey, P.: A new extraction for Coq. In: Geuvers, H., Wiedijk, F. (eds.) TYPES 2002. LNCS, vol. 2646, pp. 200–219. Springer, Heidelberg (2003). Scholar
- 28.Milner, R., Tofte, M., Harper, R., MacQueen, D.: The Definition of Standard ML (Revised). MIT Press, Cambridge (1997)Google Scholar
- 29.Mulligan, D.P., Owens, S., Gray, K.E., Ridge, T., Sewell, P.: Lem: reusable engineering of real-world semantics. In: ICFP 2014, pp. 175–188. ACM (2014)Google Scholar
- 30.Myreen, M.O., Owens, S.: Proof-producing translation of higher-order logic into pure and stateful ML. JFP 24(2–3), 284–315 (2014)MathSciNetzbMATHGoogle Scholar
- 31.Neis, G., Hur, C.K., Kaiser, J.O., McLaughlin, C., Dreyer, D., Vafeiadis, V.: Pilsner: a compositionally verified compiler for a higher-order imperative language. In: ICFP 2015, pp. 166–178. ACM, New York (2015)Google Scholar
- 32.Nipkow, T., Klein, G.: Concrete Semantics. Springer, Cham (2014). Scholar
- 33.Nipkow, T., Wenzel, M., Paulson, L.C. (eds.): Isabelle/HOL—A Proof Assistant for Higher-Order Logic. LNCS, vol. 2283. Springer, Heidelberg (2002).. 218p.CrossRefzbMATHGoogle Scholar
- 34.Okasaki, C.: Purely Functional Data Structures. Cambridge University Press, Cambridge (1999)zbMATHGoogle Scholar
- 35.Owens, S., Myreen, M.O., Kumar, R., Tan, Y.K.: Functional big-step semantics. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 589–615. Springer, Heidelberg (2016). Scholar
- 36.Peyton Jones, S.L.: The Implementation of Functional Programming Languages. Prentice-Hall Inc., Upper Saddle River (1987)zbMATHGoogle Scholar
- 37.Shankar, N.: Static analysis for safe destructive updates in a functional language. In: Pettorossi, A. (ed.) LOPSTR 2001. LNCS, vol. 2372, pp. 1–24. Springer, Heidelberg (2002). Scholar
- 38.Slind, K.: Reasoning about terminating functional programs. Ph.D. thesis, Technische Universität München (1999)Google Scholar
- 39.Sternagel, C., Thiemann, R.: Haskell’s show class in Isabelle/HOL. Archive of Formal Proofs, July 2014., Formal proof development
- 40.Tan, Y.K., Myreen, M.O., Kumar, R., Fox, A., Owens, S., Norrish, M.: A new verified compiler backend for CakeML. In: Proceedings of 21st ACM SIGPLAN International Conference on Functional Programming - ICFP 2016. Association for Computing Machinery (ACM) (2016)Google Scholar
- 41.Turner, D.A.: Some history of functional programming languages. In: Loidl, H.-W., Peña, R. (eds.) TFP 2012. LNCS, vol. 7829, pp. 1–20. Springer, Heidelberg (2013). Scholar
- 42.Urban, C.: Nominal techniques in Isabelle/HOL. J. Autom. Reason. 40(4), 327–356 (2008). Scholar
- 43.Urban, C., Berghofer, S., Kaliszyk, C.: Nominal 2. Archive of Formal Proofs, February 2013. Formal proof development:
- 44.Wenzel, M.: Type classes and overloading in higher-order logic. In: Gunter, E.L., Felty, A. (eds.) TPHOLs 1997. LNCS, vol. 1275, pp. 307–322. Springer, Heidelberg (1997).. | https://link.springer.com/chapter/10.1007/978-3-319-89884-1_35 | CC-MAIN-2018-39 | refinedweb | 8,253 | 56.66 |
Edit: Point #2 of this post has been revised to be more understandable (and creepier) in a reader’s perspective. Thank you to the user on dev.to who emailed me about the previous confusion!
A lot of us have fallin in love with the react library for several reasons. It can be incredibly painless to create complex interactive user interfaces. The greatest part of it all is being able to compose components right on top of another without breaking other composed components.
And it's amazing that even social media giants like Facebook, Instagram and Pinterest made heavy use of them while creating a seamless user experience with huge APIs like Google Maps .
If you're currently building an application using react or thinking of using react for upcoming projects, then this tutorial is for you. I hope this tutorial will help you on your journey to make great react applications too by exposing a few code implementations that you ought to think twice about.
Without further ado, here are 8 Practices In React That Will Crash Your App In The Future:
1. Declaring Default Parameters Over Null
I mentioned this topic in an earlier article, but this is one of those creepy "gotchas" that can fool a careless developer on a gloomy friday! After all, apps crashing is not a joke--any type of crash can result in money loss at any point in time if not handled correctly.
I was once guilty of spending a good amount of time debugging something similar to this:
const SomeComponent = ({ items = [], todaysDate, tomorrowsDate }) => { const [someState, setSomeState] = useState(null) return ( <div> <h2>Today is {todaysDate}</h2> <small>And tomorrow is {tomorrowsDate}</small> <hr /> {items.map((item, index) => ( <span key={`item_${index}`}>{item.email}</span> ))} </div> ) } const App = ({ dates, ...otherProps }) => { let items if (dates) { items = dates ? dates.map((d) => new Date(d).toLocaleDateString()) : null } return ( <div> <SomeComponent {...otherProps} items={items} /> </div> ) }
Inside our App component, if dates ends up being falsey, it will be initialized with null.
If you're like me, our instincts tell us that items should be initialized to an empty array by default if it was a falsey value. But our app will crash when dates is falsey because items is null. What?
Default function parameters allow named parameters to become initialized with default values if no value or undefined is passed!
In our case, even though null is falsey, it's still a value!
So the next time you set a default value to null, just make sure to think twice when you do that. You can simply just initialize a value to an empty array if that is the expected type of the value.
2. Grabbing Properties With Square Brackets
Sometimes the way properties are being grabbed may influence the behavior of the app. If you're wondering what that behavior is, it's the app crashing. Here is an example of performing object lookups with square brackets:
const someFunction = function() { const store = { people: { joe: { age: 16, gender: 'boy', }, bob: { age: 14, gender: 'transgender', } } } return { getPersonsProfile(name) { return store.people[name] }, foods: ['apple', 'pineapple'], } } const obj = someFunction() const joesProfile = obj.getPersonsProfile('joe') console.log(joesProfile) /* result: { age: 16, gender: boy, } */
These are actually 100% valid use cases and there's nothing really wrong with them besides being slower than object key lookups.
Anyhow, the real problem starts to creep up on your app when an unintentional issue occurs, like a tiny typo:
const someFunction = function () { const store = { people: { joe: { age: 16, gender: 'boy', }, bob: { age: 14, gender: 'transgender', } } } return { getPersonsProfile(name) { return store.people[name] }, foods: ['apple', 'pineapple'], } } const obj = someFunction() const joesProfile = obj.getPersonsProfile('Joe') const joesAge = joesProfile.age console.log(joesAge)
If you or one of your teammates were implementing some enhancement to this snippet and made a minor mistake (such as capitalizing the J in joe), the result will immediately return undefined, and a crash will occur:
"TypeError: Cannot read property 'age' of undefined at tibeweragi.js:24:29 at at"
The creepy part is, the app will not crash until a part of your code attempts to do a property lookup with that undefined value!
So in the mean time, joes profile (undefined in disguise) will be passed around your app and no one will be able to know that this hidden bug is creeping around until a piece of a code performs some property lookup, like joesProfile.age, because joesProfile is
undefined!
What some developers do to avoid a crash is to initialize some default valid return value if a lookup ends up becoming unsuccessful:
const someFunction = function () { const store = { people: { joe: { age: 16, gender: 'boy', }, bob: { age: 14, gender: 'transgender', } } } return { getPersonsProfile(name) { return store.people[name] || {} }, foods: ['apple', 'pineapple'], } }
At least now the app won't crash. The moral of the story is, always handle an invalid lookup case when you're applying lookups with square bracket notation!
For some, it might be a little hard to explain the severity of this practice without a real world example. So I'm going to bring up a real world example. The code example I am about to show you was taken from a repository that dates 8 months back from today. To protect some of the privacy that this code originated from, I renamed almost every variable but the code design, syntax and architecture stayed exactly the same:
import { createSelector } from 'reselect' // supports passing in the whole obj or just the string to correct the video type const fixVideoTypeNaming = (videoType) => { let video = videoType // If video is a video object if (video && typeof video === 'object') { const media = { ...video } video = media.videoType } // If video is the actual videoType string if (typeof video === 'string') { // fix the typo because brian is an idiot if (video === 'mp3') { video = 'mp4' } } return video } /* ------------------------------------------------------- ---- Pre-selectors -------------------------------------------------------- */ ] /* ------------------------------------------------------- ---- Selectors -------------------------------------------------------- */ export const getWeeklyCycleSelector = createSelector( getSpecificWeekSelector, (weekCycle) => weekCycle || null, ) export const getFetchingTotalStatusSelector = createSelector( (state) => state.app[fixVideoTypeNaming(state.app.media.video.videoType)].options.total .fetching, (fetching) => fetching, ) export const getFetchErrorSelector = createSelector( (state) => state.app[fixVideoTypeNaming(state.app.media.video.videoType)].options.total .fetchError, (fetchError) => fetchError, )
fixVideoTypeNaming is a function that will extract the video type based on the value passed in as arguments. If the argument is a video object, it will extract the video type from the .videoType property. If it is a string, then the caller passed in the videoType so we can skip first step. Someone has found that the videoType .mp4 property had been mispelled in several areas of the app. For a quick temporary fix around the issue, fixVideoTypeNaming was used to patch that typo.
Now as some of you might have guessed, the app was built with redux (hence the syntax).
And to use these selectors, you would import them to use in a connect higher order component to attach a component to listen to that slice of the state.
const withTotalCount = (WrappedComponent) => { class WithTotalCountContainer extends React.Component { componentDidMount = () => { const { total, dispatch } = this.props if (total == null) { dispatch(fetchTotalVideoTypeCount()) } } render() { return <WrappedComponent {...this.props} /> } } WithTotalCountContainer.propTypes = { fetching: PropTypes.bool.isRequired, total: PropTypes.number, fetchError: PropTypes.object, dispatch: PropTypes.func.isRequired, } WithTotalCountContainer.displayName = `withTotalCount(${getDisplayName( WrappedComponent, )})` return connect((state) => { const videoType = fixVideoTypeNaming(state.app.media.video.videoType) const { fetching, total, fetchError } = state.app.media.video[ videoType ].options.total return { fetching, total, fetchError } })(WithTotalCountContainer) }
UI Component:
const TotalVideoCount = ({ classes, total, fetching, fetchError }) => { if (fetching) return <LoadingSpinner /> const hasResults = !!total const noResults = fetched && !total const errorOccurred = !!fetchError return ( <Typography variant="h3" className={classes.root} error={!!fetched && !!fetchError} primary={hasResults} soft={noResults || errorOccurred} center > {noResults && 'No Results'} {hasResults && `$${formatTotal(total)}`} {errorOccurred && 'An error occurred.'} </Typography> ) }
The component receives all of the props that the HOC passes to it and displays information following the conditions adapting from the data given from the props. In a perfect world, this would be fine. In a non-perfect world, this would temporarily be fine.
If we go back to the container and look at the way the selectors are selecting their values, we actually might have planted a ticking timebomb waiting for an open opportunity to attack: ]
When developing any sort of application, common practices to ensure higher level of confidence and diminishing bugs during the development flow is implementing tests in-between to ensure that the application is working as intended.
In the case of these code snippets however, if they aren't tested, the app will crash in the future if not handled early.
For one, state.app.media.video.videoType is four levels deep in the chain. What if another developer accidentally made a mistake when he was asked to fix another part of the app and state.app.media.video becomes undefined? The app will crash because it can't read the property videoType of undefined.
In addition, if there was another typo issue with a videoType and fixVideoTypeNaming isn't updated to accomodate that along with the mp3 issue, the app risks another unintentional crash that no one would have been able to detect unless a real user comes across the issue. And by that time, it would be too late.
And it's never a good practice to assume that the app will never ever come across bugs like these. Please be careful!
3. Carelessly Checking Empty Objects When Rendering
Something I used to do long ago in the golden days when conditionally rendering components is to check if data had been populated in objects using
Object.keys. And if there were data, then the component would continue to render if the condition passes:
const SomeComponent = ({ children, items = {}, isVisible }) => ( <div> {Object.keys(items).length ? ( <DataTable items={items} /> ) : ( <h2>Data has not been received</h2> )} </div> )
Lets pretend that we called some API and received items as an object somewhere in the response. With that said, this may seem perfectly fine at first. The expected type of items is an object so it would be perfectly fine to use Object.keys with it. After all, we did initialize items to an empty object as a defense mechanism if a bug were to ever appear that turned it into a falsey value.
But we shouldn't trust the server to always return the same structure. What if items became an array in the future?
Object.keys(items) would not crash but would return a weird output like
["0", "1", "2"]. How do you think the components being rendered with that data will react?
But that's not even the worst part. The worst part in the snippet is that if items was received as a null value in the props, then
items will not even be initiated to the default value you provided!
And then your app will crash before it begins to do anything else:
"TypeError: Cannot convert undefined or null to object at Function.keys (<anonymous>) at yazeyafabu.js:4:45 at at"
Again, please be careful!
4. Carelessly Checking If Arrays Exist Before Rendering
This can be a very similar situation as with #3, but arrays and objects are used quite often interchangeably that they deserve their own sections.
If you have a habit of doing this:
render() { const { arr } = this.props return ( <div> {arr && arr.map()...} </div> ) }
Then make sure you at least have unit tests to keep your eyes on that code at all times or handle
arr correctly early on before passing it to the render method, or else the app will crash if
arr becomes an object literal. Of course the
&& operator will consider it as truthy and attempt to .map the object literal which will end up crashing the entire app.
So please keep this in mind. Save your energy and frustrations for bigger problems that deserve more of your special attention! ;)
5. Not Using a Linter
If you aren't using any type of linter while you're developing apps or you simply don't know what they are, allow me to elaborate a little about why they are useful in development.
The linter I use to assist me in my development flow is ESLint, a very known linting tool for JavaScript that allows developers to discover problems with their code without even executing them.
This tool is so useful that it can act as your semi-mentor as it helps correct your mistakes in real time--as if someone is mentoring you. It even describes why your code can be bad and suggests what you should do to replace them with!
Here's an example:
The coolest thing about eslint is that if you don't like certain rules or don't agree with some of them, you can simple disable certain ones so that they no longer show up as linting warnings/errors as you're developing. Whatever makes you happy, right?
6. Destructuring When Rendering Lists
I've seen this happen to several people in the past and it isn't always an easy bug to detect. Basically when you have a list of items and you're going to render a bunch of components for each one in the list, the bug that can creep up on your app is that if there comes a time in the future where one of the items in the list is not a value you expect it to be, your app may crash if it doesn't know how to handle the value type.
Here's an example:
const api = { async getTotalFrogs() { return { data: { result: [ { name: 'bob the frog', tongueWidth: 50, weight: 8 }, { name: 'joe the other frog', tongueWidth: 40, weight: 5 }, { name: 'kelly the last frog', tongueWidth: 20, weight: 2 }, ], }, } }, } const getData = async ({ withTongues = false }) => { try { const response = await api.getTotalFrogs({ withTongues }) return response.data.result } catch (err) { throw err } } const DataList = (props) => { const [items, setItems] = useState([]) const [error, setError] = useState(null) React.useEffect(() => { getData({ withTongues: true }) .then(setItems) .catch(setError) }, []) return ( <div> {Array.isArray(items) && ( <Header size="tiny" inverted> {items.map(({ name, tongueWidth, weight }) => ( <div style={{ margin: '25px 0' }}> <div>Name: {name}</div> <div>Width of their tongue: {tongueWidth}cm</div> <div>Weight: {weight}lbs</div> </div> ))} </Header> )} {error && <Header>You received an error. Do you need a linter?</Header>} </div> ) }
The code would work perfectly fine. Now if we look at the api call and instead of returning this:
const api = { async getTotalFrogs() { return { data: { result: [ { name: 'bob the frog', tongueWidth: 50, weight: 8 }, { name: 'joe the other frog', tongueWidth: 40, weight: 5 }, { name: 'kelly the last frog', tongueWidth: 20, weight: 2 }, ], }, } }, }
What if somehow there was an issue with how the data flow was handled when an unexpected condition occurred in the api client and returned this array instead?
const api = { async getTotalFrogs() { return { data: { result: [ { name: 'bob the frog', tongueWidth: 50, weight: 8 }, undefined, { name: 'kelly the last frog', tongueWidth: 20, weight: 2 }, ], }, } }, }
Your app will crash because it doesn't know how to handle that:
Uncaught TypeError: Cannot read property 'name' of undefined at eval (DataList.js? [sm]:65) at Array.map (<anonymous>) at DataList (DataList.js? [sm]:64) at renderWithHooks (react-dom.development.js:12938) at updateFunctionComponent (react-dom.development.js:14627)
So to prevent your app from crashing instead, you can set a default object on each iteration:
{ items.map(({ name, tongueWidth, weight } = {}) => ( <div style={{ margin: '25px 0' }}> <div>Name: {name}</div> <div>Width of their tongue: {tongueWidth}cm</div> <div>Weight: {weight}lbs</div> </div> )) }
And now your users won't have to make judgements about your technology and expertise when they don't see a page crashing in front of them:
However, even though the app no longer crashes I recommend to go further and handle the missing values like returning null for entire items that have similar issues instead, since there isn't any data in them anyways.
7. Not Researching Enough About What You're Going To Implement
One crucial mistake i've made in the past was being overly confident with a search input I had implemented, trusting my opinions too early in the game.
What do I mean by this? Well, its not the search input component that I was overly confident with. The component should have been an easy task... and it was.
The real culprit of an issue that occurred with the whole the search functionality was the characters being included in the queries.
When we're sending keywords as queries to a search API, it's not always sufficient to think that every key the user types is valid, even though they're on the keyboard for that reason.
Just be 100% sure that a regex like this works just as intended and avoids leaving out any invalid characters that can crash your app:
const hasInvalidChars = /^.*?(?=[\+\^#%&$\*:<>\?/\{\|\}\[\]\\\)\(]).*$/g.test( inputValue, )
That example is the most up to date, established regular expression for a search API.
Here is what it was before:
const hasInvalidChars = /^.*?(?=[\+\^#%&$\*:<>\?/\{\|\}\[\]\)\(]).*$/g.test( inputValue, ) const callApi = async (keywords) => { try { const url = `{keywords}/` return api.searchStuff(url) } catch (error) { throw error } }
As you can see the slash
/ is missing, and that was causing the app to crash! if that character ends up being sent to an API over the wire, guess what the API thinks the URL is going to be?
Also, I wouldn't put 100% of my trust in the examples you find on the internet. A lot of them aren't fully tested solutions and there isn't really a standard for majority of use cases when it comes to regular expressions.
7. Not Restricting The Sizes of File Inputs
Restricting the sizes of files that users select is a good practice because most of the time you don't really need a rediculously large file when it can be compressed in some way without losing any noticeable signs of reduction in quality.
But there's a more important reason why restricting sizes to a certain limit is a good practice. At my company, we've noticed users in the past occasionally get "frozen" while their images are being uploaded. Not everyone has an Alienware 17 R5 in their possession, so you must take certain circumstances of your users in consideration.
Here's an example of restricting files to a limit of 5 MB (5,000,000 bytes):
import React, { useState, useEffect } from 'react' const useUploadStuff = () => { const [files, setFiles] = useState([]) // Limit the file sizes here const onChange = (e) => { const arrFiles = Array.from(e.target.files) const filesUnder5mb = arrFiles.filter((file) => { const bytesLimit = 5000000 if (file.size > bytesLimit) { // optionally process some UX about this file size } return file.size < bytesLimit }) setFiles(filesUnder5mb) } useEffect(() => { if (files.length) { // do something with files } }, [files]) return { files, onChange, } } const UploadStuff = () => { const { onChange } = useUploadStuff() return ( <div> <h2 style={{ color: '#fff' }}>Hi</h2> <div> <input style={{ color: '#fff' }} onChange={onChange} type="file" placeholder="Upload Stuff" multiple /> </div> </div> ) } export default UploadStuff
You wouldn't want users to be uploading video games when they're supposed to be uploading documents!
Conclusion
And that concludes the end of this post!
There will be a part 2 as I've only gotten through half of my list (yikes!)
Anyhow, Thank you for reading and make sure to follow me for future updates! Happy 4th of july!
Discussion
Another point to add is that using a type checker like Flow or TypeScript is super helpful in catching some of these bugs for you!
Yea almost all of the points made in the article would be pointed out to you by the compiler if you were using TypeScript. Excellent article though! It just confirmed my bias towards type safety.
type errors never happen when process is correct, but types add a ton of overhead and slow down the team, and don't forget type correctness !== program correctness.
Can you explain why types add overhead? And which type of overhead— speed? Quality? Value?
This is nice, I do fall for some of those sometimes, using quick shortcuts. This is a good reminder to avoid that :)
About #1 though, it's good to filter the app from
nulls. I parse the server responses and anything that libraries might return and check for nulls. Typescript also helps!
From my experience, I think most of the problems in this article will be easy to solve, if you use Typescript.
This.
Nice article! Points 3 and 4 is interesting because it's a very commonly used pattern. What solutions would you suggest implement aside from unit tests?
About 5:
Understand the rules. You can trick the linter into believing the code is right and still not be solving the issue. The most common case I've seen is using unique and stable values as keys in elements that are produced by iterating arrays. You could call a function that always generates a different id, and ESLint will stop complaining about it but your keys will be even worse than array indexes.
Read what the rule is about and why does it exist. Don't go just for disappearing the red line, that's how you get into worse problems than the one the linter is trying to avoid.
As others have mentioned, this post is a great example of why TypeScript or another type checker vastly improve your code reliabilty as it would have solved the first six issues.
The last two are about validating user input which is always a good idea. With the particular example given in number 7 (the first one), values passed in the querystring should be URL encoded first and foremost which would solve the problem with the '/'. Additional validation to restrict the characters could then be done with a Regex but always with appropriate server-side validation also in place.
This is a Javascript thingie, not React specific.
Edit: As far as I can see none of those issues listed have nothing to do with React API, all of these are just Javascript gotchas, and you can have the same issues in the Vue, Angular, [nameYourFrameworkHere] projects.
By using type checker helper like Flow or Typescript you can get rid of the half problems listed here, if you don't want to do that, guard your logic for unforeseen situations and name your variables correctly, so that you know what is an array and what is an object.
Your posts are always so informative. I'm getting better at coding.
I am glad I can help!
I'm sorry but none of these are related to react practices, this is just JS in general, and that example with an API returning undefined is pure facepalm, I cannot even realistically imagine something like that.
nice article!
regarding accessing deeply nested object properties, i prefer lodash get. lodash.com/docs/4.17.11#get
You most definitely don't want to do this. You're only eliminating the crash without fixing the bug.
Almost all of these problems are completely eliminated by using Typescript.
This would make a great article if the title was: How Typescript saved the front-end world!
I'd recommend to the author to work over variable naming, since it looks he struggles a lot to understand if his variable is an object or an array.
This article is somehow like "8 mistakes that will break your leg", and the first one is "jumping from the 15'th floor" | https://dev.to/jsmanifest/8-practices-in-react-that-will-crash-your-app-in-the-future-2le5 | CC-MAIN-2020-50 | refinedweb | 3,869 | 61.46 |
The components we will write in this book will all implement any given number of system interfaces. “System” in this context (no pun intended) means that these interfaces have already been defined by Microsoft. They are documented, and you can read all about them in the Platform SDK (though the details may be a little murky sometimes).
You can think of an interface
as a defined functionality. When a component
implements an interface, it is really saying,
“I support this functionality!” Consider a Triangle
component. It implements the interface
Shape.
Shape defines two methods:
Draw
and
Color. Therefore, you could expect to access
the following functionality through Triangle:
Triangle.Draw Triangle.Color
Because the Circle, Square, and Trapezoid components
also implement
Shape, you
would expect these objects to have the same functionality as well.
This is what it means to implement an interface.
The components in this book all implement some functionality that is required by the shell. This means that when the shell loads our components, it will be able to gain access to our component through a defined mechanism: an interface.
With that said, let’s talk about the interfaces a context menu handler component needs to implement before it can be loaded by the shell.
IShellExtInit
contains one method (besides the
IUnknown portion
of the interface),
Initialize, as shown in Table 4.1.
IShellExtInit::Initialize
is the first method called by the shell after it loads the context
menu handler; it is the context menu handler’s equivalent of a
class constructor in C++ programming or the Class_Initialize event
procedure of a class in VB. Typically, this method is used by the
context menu handler to determine which file objects are currently
selected within Explorer. Initialize is defined as follows:
HRESULT Initialize(LPCITEMIDLIST
pidlFolder, IDataObject *
lpdobj, HKEY
hkeyProgID);
All three arguments are provided by the shell and passed to the
context menu handler when it is invoked, which is indicated by the
[in] notation in the following argument list. The
three arguments are:
pidlFolder
[in] A pointer to an
ITEMIDLIST
structure (commonly referred to in shell parlance as a PIDL) with
information about the folder containing the selected objects. If you
want more information on PIDLs and what you can do with them, see
Chapter 12. We are not going to use this member,
and we are not even going to discuss it (yet), because the topic of
PIDLs is a universe unto itself. All you need to know is that a PIDL
provides a location of something (such as the path of a file or
folder object) within the Windows
namespace.
lpdobj
[in]
A pointer to an
IDataObject interface that provides information
about the selected objects. The
IDataObject
interface is discussed in the following section.
hKeyProgID
[in] The handle of the registry key containing the
programmatic identifier of the selected file. For instance, if a Word
.doc file was right-clicked,
hKeyProgID would be a handle to the
HKEY_CLASSES_ROOT\Word.Document.8 key on systems
with Office 2000 installed. Once the handle to this key is available,
it is a trivial matter to find the host application that is
responsible for dealing with this file type, which in the case of our
example happens to be Microsoft Word. The context menu handler can
then defer any operations to the host application, if necessary.
The only parameter in which we are interested is the second,
lpdobj, which is a pointer to an
IDataObject interface. Like the first parameter,
IDataObject is also a world unto itself.
Fortunately for us, we don’t need to know too much about the
interface at this juncture. In Chapter 8, when we
create a data handler, we will put this interface under the knife, so
to speak, but until then let’s just cover what we need to know.
The shell uses this interface to communicate to us the files that
were clicked on in Explorer. We’ll see how this works
momentarily.
Now that we know a little bit about this interface, let’s get on to how we are actually going to implement it. There are some problems ahead.
IShellExtInit
, like most of the interfaces in this
book, is a VB-unfriendly interface. An
unfriendly interface contains datatypes that are not automation
compatible. You can think of an automation-compatible type as
basically anything that will fit into a
Variant.
Table 4.2 lists all of the datatypes that are
considered OLE automation compatible.
Now, to implement
IShellExtInit successfully, the
interface will have to be redefined with automation-compatible types
and made available through a type library. This interface contains
one method,
Initialize. Let’s tear it apart
to see what we need to do in order to make this interface work for
us.
Consider the first parameter of the
Initialize method, which is an
LPCITEMIDLIST. The documentation for the interface
states that this is an address of an
ITEMIDLIST.
(We’ll talk about
ITEMIDLIST in Chapter 11.) The structure is defined like this:
typedef struct _ITEMIDLIST { SHITEMID
mkid; } ITEMIDLIST;
As
you can see, the one and only member of this structure is another
structure called
SHITEMID, which is not an
automation-compatible type. This means we cannot define this
parameter as a pointer to an
ITEMIDLIST when we
define the
IShellExtInit interface. What can we
do? Well, a pointer is four bytes wide, so the automation-compatible
type that can be used in place of
LPCITEMIDLIST is
a
long. When we create our type library, we will
just redefine
LPCITEMIDLIST to mean a
long, like so:
typedef [public] long LPCITEMIDLIST;
When we actually define the
Initialize method (see
Example 4.1), we can still use
LPCITEMIDLIST for the datatype of the first
parameter. Then, when VB displays the parameters for the method via
IntelliSense, rather than seeing
long, we will see
LPCITEMIDLIST. This acts as a reminder of what the
original definition is supposed to be.
We’ll do the same thing for the third parameter, which is an
HKEY. An
HKEY is a handle to a
registry key. Handles to anything are four bytes, so a
long works in this case, too:
typedef [public] long HKEY;
We don’t have to redefine anything as far as the second
parameter goes. It’s an
IDataObject
interface pointer.
And interface pointers that are
derived from
IUnknown or
IDispatch are automation compatible, so this
portion of the definition is fine as is.
Let’s talk about these parameters we have redefined for a
moment. As it turns out, we will not need the first or the third
parameters of this method in order to implement a context menu
handler. But what if we did? After all, these types have been
redefined as long values. Well, an
HKEY is really
a void pointer—that is, a pointer that does not point to any
specific datatype. As a
long, you can use this
value as is with any of the registry API functions that take
HKEYs.
How do we access the pointer to
the ITEMIDLIST when all we have is a long value? We can use the
RtlMoveMemory API (a.k.a.
CopyMemory) to make a local copy of the UDT.
This API call is defined like so:
Public Declare Sub CopyMemory Lib "kernel32" _ Alias "RtlMoveMemory" (pDest As Any, _ pSource As Any, _ ByVal ByteLen As Long)
The code on the VB side would then look something like the following: idlist As ITEMIDLIST CopyMemory idlist,
ByVal pidlFolder, len(idlist)
Notice, though, that the second parameter to
CopyMemory (our
ITEMIDLIST
that has been redefined as a long) is passed to the function
ByVal. This is because this long value represents
a raw address. We’ll talk more about this later, since we will
use techniques similar to this throughout the course of this book.
Example 4.1 shows the modified definition for the
IShellExtInit interface as it exists in our type
library.
Example 4-1. IShellExtInit Interface
typedef [public] long HKEY;
typedef [public] long LPCITEMIDLIST;[ uuid(000214E8-0000-0000-C000-000000000046), helpstring("IShellExtInit Interface"), odl ] interface IShellExtInit : IUnknown { [helpstring("Initialize")] HRESULT Initialize([in]
LPCITEMIDLISTpidlFolder, [in] IDataObject *pDataObj, [in]
HKEYhKeyProgID); }
The
[public] attribute used in Example 4.1 makes the
typedef values
available through the type library; otherwise, they would just be
available for use inside of the library itself.
The
[odl] attribute is required for all interfaces
compiled with MKTYPLIB. MIDL supports this attribute as well, but
only for the sake of backward compatibility. The attribute itself
does absolutely nothing.
The
[helpstring] attribute, as you can probably
guess, denotes the text that will be displayed for a library or an
interface from within Object Browser or the Project/References
dialog.
The
[in] attribute is known as a directional
attribute. This indicates that the parameter is passed from the
caller to the COM component. (In the case of our context menu
handler, it indicates that the shell is passing our COM component a
parameter.) Another attribute,
[out], specifies
the exact opposite, which is a parameter that is passed from the
component to the caller. All parameters to a method have a
directional attribute. This is either
[in],
[out], or
[in,
out]. But VB cannot handle
[out]-only parameters. Parameters designated as
[out] usually require the caller to free memory.
VB likes to shield responsibility from the programmer whenever
possible, especially when it comes to memory management.
Look at the GUID for
IShellExtInit,
(000214E8-0000-0000-C000-000000000046). This GUID
comes straight from the registry. It has been defined by Microsoft as
the GUID for
IShellExtInit. It is important that
you use the correct GUID for interfaces already defined by the
system, because, after all, that is their true name. The GUID for the
library block (see Appendix A ), on the other hand,
can be anything since it’s being defined by us—but not
anything you can think of off the top of your head. Whenever you need
to define your own GUID, you should use GUIDGEN (see Figure 4.4). GUIDGEN is a program used for generating
GUIDs that guarantees them to be unique (theoretically) and copies
them to the clipboard. GUIDGEN ships with Visual Studio, but if you
don’t have it, you can always make your own, as Example 4.2 demonstrates.
Example 4-2. Source Code for a Self-Created GUIDGEN Utility
Option Explicit Private Type GUID Data1 As Long Data2 As Integer Data3 As Integer Data4(7) As Byte End Type Private Declare Function CoCreateGuid Lib "ole32.dll" _ (g As GUID) As Long Private Declare Sub CopyMemory Lib "kernel32" Alias _ "RtlMoveMemory" (pDst As Any, pSrc As Any, _ ByVal ByteLen As Long) Private Declare Function StringFromCLSID Lib "ole32.dll" _ (pClsid As GUID, lpszProgID As Long) As Long Private Sub StrFromPtrW(pOLESTR As Long, strOut As String) Dim ByteArray(255) As Byte Dim intTemp As Integer Dim intCount As Integer Dim i As Integer intTemp = 1 'Walk the string and retrieve the first byte of each WORD. While intTemp <> 0 CopyMemory intTemp, ByVal pOLESTR + i, 2 ByteArray(intCount) = intTemp intCount = intCount + 1 i = i + 2 Wend 'Copy the byte array to our string. CopyMemory ByVal strOut, ByteArray(0), intCount End Sub Private Sub Command1_Click( ) Dim g As GUID Dim lsGuid As Long Dim sGuid As String * 40 If CoCreateGuid(g) = 0 Then StringFromCLSID g, lsGuid StrFromPtrW lsGuid, sGuid End If InputBox "This is your GUID!", "GUID", sGuid End Sub
Figuring out the details of this code is an exercise for you. However, this will be much easier to do after you have finished this book, since we will discuss all of the functions in this listing extensively.
IDataObject
is not implemented by the context menu handler directly, but rather,
it is a parameter to
IShellExtInit::Initialize.
Therefore, it has to be defined in the type library.
IDataObject provides the means to determine which
files have been right-clicked within the shell.
IDataObject is a fairly complex interface that
contains nine methods:
GetData,
GetDataHere,
QueryData,
GetCanonicalFormat,
SetData,
EnumFormatEtc,
DAdvise,
DUnadvise, and
EnumDAdvise.
This interface is the soul of OLE data transfers.
In regards to context menu
handlers, there is only one method,
GetData, that
we will use to implement the extension. Its syntax is:
HRESULT GetData(FORMATETC * pFormatetc, STGMEDIUM *
pmedium);
Its parameters are:
pFormatetc
[in] Pointer to a
FORMATETC
structure. The
FORMATETC structure represents a
generalized clipboard format. It’s defined like this:
typedef struct { long cfFormat; long ptd; DWORD dwAspect; long lindex; TYMED tymed; } FORMATETC;
pmedium
[in] Pointer to a
STGMEDIUM
structure.
STGMEDIUM is a generalized
global-memory handle used for data-transfer operations. It is defined
like this:
typedef struct tagSTGMEDIUM {;
Because VB does not support unions, our type library will contain a more generalized definition of this structure:
typedef struct { TYMED tymed; long pData; IUnknown *pUnkForRelease; } STGMEDIUM;
Admittedly, the discussion of
FORMATETC and
STGMEDIUM is rather cryptic here. This is
intentional. When we implement
IShellExtInit later
in the chapter, just understand that the shell is using
IDataObject to transfer a list of files to us.
IDataObject is the primary interface involved in
OLE data transfers. That’s about all you need to know right
now. We will learn much more about this interface in Chapter 8.
As
Table 4.3 shows,
IContextMenu
contains three methods:
GetCommandString,
InvokeCommand, and
QueryContextMenu. This is the core of the context
menu handler. The methods of this interface provide the means to add
items to a file object’s context menu, display help text in
Explorer’s status bar, and execute the selected command,
respectively. We’ll discuss each of these methods in turn.
GetCommandString
allows the handler to specify the text that will be displayed in the
status bar of Explorer. This occurs when a particular context menu
item is selected. Its syntax is:
HRESULT GetCommandString( UINT
idCmd, UINT
uFlags, UINT *
pwReserved, LPSTR
pszName, UINT
cchMax);
Its parameters are:
idCmd
The ordinal position of the selected menu item.
uFlags
A flag specifying the information to return.
pwReserved
Unused; handlers must ignore this parameter, which should be set to
NULL.
pszName
A pointer to the string buffer that holds the null-terminated string to be displayed.
cchMax
Size of the buffer defined by
pszName.
When the method is invoked by the shell, the shell passes the
following items of information to the
GetCommandString method:
The
idCmd argument to indicate which menu
item is selected.
The
uFlags argument to indicate what
string the method is expected to return. This can be one of the
following values:
The
cchMax argument to indicate how many
bytes of memory have been allocated for the string that the method is
to pass back to the shell.
The method can then place the desired string in the
pszName buffer. As a general rule, the
string should be 40 characters or less and should not exceed
cchMax.
The shell calls this method to execute the command selected in the context menu. Its syntax is:
HRESULT InvokeCommand(LPCMINVOKECOMMANDINFO
lpici);
with the following parameter:
lpici
A pointer to a
CMINVOKECOMMANDINFO structure that
contains information about the command to execute when the menu item
is selected.
The
CMINVOKECOMMANDINFO structure is defined in
the Platform SDK as follows:
typedef struct _CMInvokeCommandInfo{ DWORD cbSize; DWORD fMask; HWND hwnd; LPCSTR lpVerb; LPCSTR lpParameters; LPCSTR lpDirectory; int nShow; DWORD dwHotKey; HANDLE hIcon; } CMINVOKECOMMANDINFO, *LPCMINVOKECOMMANDINFO;
Its members are:
cbSize
The size of the structure in bytes.
fMask
Zero, or one of the following values:
hwnd
The handle of the window that owns the context menu.
lpVerb
Contains the zero-based menu item offset in the low-order word.
lpParameters
Not used for shell extensions.
lpDirectory
Not used for shell extensions.
nShow
If the command opens a window, specifies whether it should be visible
or not visible. Can be either
SW_SHOW or
SW_HIDE.
dwHotKey
fMask must contain
CMIC_MASK_HOTKEY for this value to be valid. It
contains an optional hot key to assign to the command.
hIcon
Icon to use for any application activated by the command.
This method is called by the shell to allow the handler to add items to the context menu. Its syntax is:
HRESULT QueryContextMenu( HMENU
hmenu, UINT
indexMenu, UINT
idCmdFirst, UINT
idCmdLast, UINT
uFlags);
with the following parameters:
hmenu
Handle of the menu.
indexMenu
Zero-based position at which to insert the first menu item.
iCmdFirst
Minimum value that the handler can use for a menu-item identifier.
iCmdLast
Maximum value that the handler can use for a menu-item identifier.
uFlags
Flags specifying how the context menu can be changed. These flags are discussed later in this chapter.
In invoking the method, the shell provides the context menu handler
with all of the information needed to customize the context menu. The
QueryContextMenu method can then use this
information when calling the Win32
InsertMenu
function to modify the context menu.
The documentation for the interface states that
QueryContextMenu should return the menu identifier
of the last menu item added, plus one. This presents an interesting
problem, because VB does not allow access to the
HRESULT. Fortunately, there is a workaround. We
will discuss this in detail when we actually implement the interface.
The complete IDL listing for IContextMenu is shown in Example 4.3.
Example 4-3. IContextMenu
typedef [public] long HMENU; typedef [public] long LPCMINVOKECOMMANDINFO; typedef [public] long LPSTRVB; typedef [public] long UINT; [ uuid(000214e4-0000-0000-c000-000000000046), helpstring("IContextMenu Interface"), odl ] interface IContextMenu : IUnknown { HRESULT QueryContextMenu([in] HMENU hmenu, [in] UINT indexMenu, [in] UINT idCmdFirst, [in] UINT idCmdLast, [in] QueryContextMenuFlags uFlags); HRESULT InvokeCommand([in] LPCMINVOKECOMMANDINFO lpcmi); HRESULT GetCommandString([in] UINT idCmd, [in] UINT uType, [in] UINT pwReserved, [in] LPSTRVB pszName, [in] UINT cchMax); }
Notice the last parameter of
QueryContextMenu,
which takes a type of
QueryContextMenuFlags. This
is actually an enumeration defined within the type library.
Enumerations are a good way to restrict the range of values that can
be accepted as a method parameter. We will define many such
enumerations throughout the course of this book. This provides some
type safety for this method, though not much. The enum does not
require an attributes block, although you could add one if you
wanted.
QueryContextMenuFlags is defined as
follows:
typedef enum { CMF_NORMAL = 0x00000000, CMF_DEFAULTONLY = 0x00000001, CMF_VERBSONLY = 0x00000002, CMF_EXPLORE = 0x00000004, CMF_NOVERBS = 0x00000008, CMF_CANRENAME = 0x00000010, CMF_NODEFAULT = 0x00000020, CMF_INCLUDESTATIC = 0x00000040, CMF_RESERVED = 0xffff0000 } QueryContextMenuFlags;
No credit card required | https://www.oreilly.com/library/view/vb-shell-programming/1565926706/ch04s04.html | CC-MAIN-2019-43 | refinedweb | 3,046 | 62.27 |
Red Hat Bugzilla – Bug 167367
std::showbase fails for hex value 0
Last modified: 2007-11-30 17:11:12 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050511
Description of problem:
std::showbase combined with std::hex in <iostream> fails to show the base if the value is 0. The following program should produce "0x0" as the output in all lines which start with:
std::cout << std::hex << std::showbase...
Instead this program produces output "0", skipping the "0x" base.
// main.cxx
#include <iostream>
int main()
{
unsigned int var = 0;
std::cout << var << "\n";
// This is a failure line, it outputs "0" instead of "0x0".
std::cout << std::hex << std::showbase << var << "\n";
var = 1;
std::cout << var << "\n";
std::cout << std::hex << std::showbase << var << "\n";
return 0;
}
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Use std::hex and std::showbase in a std::cout, then feed 0 to the stream.
Actual Results: Outputs "0".
Expected Results: Outputs "0x0".
Additional info:
gcc-c++-4.0.1-4.fc4
*** This bug has been marked as a duplicate of 166735 *** | https://bugzilla.redhat.com/show_bug.cgi?id=167367 | CC-MAIN-2016-50 | refinedweb | 200 | 66.13 |
Advanced Namespace Tools blog
11 January 2018
Plan 9 has had independent ip addresses all along
Years ago I wrote "almost everything I want to do with computers can be done with a few lines of rc in Plan 9" and I just re-learned that lesson very powerfully, and got a good lesson in the difference between vague, superficial understanding and real comprehension. For a long time, I had thought that the idea of giving separate ip addresses to the independent namespaces ANTS creates was some kind of difficult-to-implement feature that I'd have to tackle at some point in the future. Nope, Plan 9 has always had this available, I was just too dense to get what the manpages were telling me. Thanks to the 9front irc crew for clearing up my confusion.
How VMX handles networking
After always being frustrated by the complexity of virtual machine networking in Linux, with more or less incomprehensible tun/tap command strings and vm configurations that I was rarely able to get to work right even following along with guides, the ease and simplicity of networking vmx virtual machines in 9front was refreshing - with no configuration needed other than telling the vm to use the host ethernet, the vm would happily claim its own IP address via DHCP and everything just worked naturally.
This led me to thinking "well, since this works so nicely for a vm, maybe we could have the same functionality in userpsace?" I started poking around in the vmx code, trying to understand how this magical trick of letting the vm get an independent IP on the same box using the same ethernet address worked. Being a Bear of Little Brain I didn't get very far with this, mostly ending up very impressed with the fact that Aiju had created the vmx implementation as a solo effort in a short time.
I expose my foolishness in IRC
So, thinking I might have thought up something clever and useful, I ask in irc if it would be possible to extract this magical code from vmx to allow userspace processes to communicate on independent IP addresses without a full virtualization layer. The response from those in-the-know was unanimous. "WTF are you talking about? vmx doesn't do what you think it does." After a little bit of confusion where I asked questions based on my misunderstandings, the real picture was patiently explained to me. I'm clearly not very good at getting at the essence of things from reading manpages, because once I started to get it via the dialectic of irc, a succession of "aha! so THATS what all that stuff in the manpages I've read so many times" moments followed.
Namespacing multiple addresses on the same IP stack
The way to achieve the goal is just to bind a new ip address onto an existing IP configuration, and then direct listeners to use it. So, my box is DHCP to a given address in the 192.168.0.x range by my LAN router. I can just find an unused ip address and do:
ip/ipconfig -g 192.168.0.1 ether /net/ether0 add 192.168.0.99 255.255.255.0 aux/listen1 tcp!192.168.0.99!2500 /bin/fortune
And then from another box, ifI do
telnet tcp!192.168.0.99!2500
I receive a random fortune. All that is happening here is that we are adding a new name, 192.168.0.99, to an existing ip stack on an interface without removing the previous address, and then telling the listener to only listen for connections to that ip-name. The usual central parameter in aux/listen is usually '*' which directs the listener to use all interfaces, but by giving it a specific number, we can have multiple listeners on the same port on the same box, but using different ip address namespaces.
Digging into the kernel devices
This is a good opportunity to understand the underlying mechanisms a little more clearly. Basic network config in Plan 9 works like this:
bind -b '#l0' /net bind -b '#I0' /net ip/ipconfig #standard DHCP, or ip/ipconfig -g ip.gateway ether /net/ether0 ip.address net.mask
Let's pull this apart to see the details of what is going on. The first bind command of #l0 is for the hardware device. /net starts empty, and after the first bind command, contains nothing but the /net/ether0 directory. The next bind command doesn't refer to the hardware, it refers to the in-kernel ip software stack. It contains a series of protocol directories such as tcp, udp, the il protocol (Plan 9's original preferred ip protocol) and control and information files like ipifc and iproute and ipselftab. The ipconfig command "glues" the ip software stack together with the hardware interface in a specific configuration.
With multiple hardware interfaces available, it is easy to create multiple ip stacks bound in different places. On my vultr vms, I use this to make the intra-grid service connections on a secondary, private networking interface. At bootup, venti, fossil, and tcp boot cpus configure the secondary interface on /net.alt to use the private subnet. See the blog post on using net.alt. | http://doc.9gridchan.org/blog/180111.ip.stack.addresses | CC-MAIN-2021-21 | refinedweb | 885 | 58.92 |
In the previous few NestJS tutorials we have been discussing how to set up a basic REST API that we can interact with using an Ionic application. We have covered making both GET and POST requests to the backend, but so far we have just been using dummy data and placeholder code - our API doesn’t really do anything yet.
In this tutorial, we will be covering how to integrate MongoDB into a NestJS backend and how to use that to add and retrieve records from an Ionic application. We will be walking through an example where we
POST data to the NestJS backend which will then be stored in a MongoDB database, and we will also make a
GET request to the NestJS backend in order to retrieve data from the MongoDB database to display in our Ionic application.
If you are not already familiar with creating a NestJS server, or if you are unfamiliar with making
GET and
POST requests from an Ionic application to a NestJS server, I would recommend reading through the other tutorials in this series first:
- An Introduction to NestJS for Ionic Developers
- Using Providers and HTTP Requests in a NestJS Backend
- Sending Data with POST Requests to a NestJS Backend
We will be continuing on from the code in the last tutorial above. Although you do not have to complete the previous tutorials in order to do this one, if you want to follow along step-by-step it will help to have already completed the previous tutorials.
MongoDB and Mongoose
This tutorial is going to focus on covering how to integrate MongoDB with Ionic and NestJS more than explaining the concept of MongoDB in general. In short, MongoDB is a document based NoSQL database and Mongoose is a library that allows us to define objects/schemas that represent the types of data/documents we want to store. Mongoose provides methods that make it easier to create and retrieve the data we are working with (and it also does a lot more than just that). MongoDB is the database that stores the data, and Mongoose makes it easier to interact with that database.
If you would like more information on both of these technologies, this tutorial is a particularly good introduction.
1. Installing MongoDB and Mongoose
In order to work with MongoDB on your machine, you will need to have it installed. If you do not already have MongoDB installed on your machine, you can find information on how to do that here. I also released a video recently that covers installing MongoDB on macOS: Installing MongoDB with Homebrew on macOS.
Once you have installed MongoDB, you will need to make sure to open a separate terminal window and run the following command:
mongod
This will start the MongoDB daemon, meaning that the database will be running in the background on your computer and you will be able to interact with it. You will also need to install the mongoose package and the NestJS package associated with that by running the following command in your NestJS server project:
npm install --save @nestjs/mongoose mongoose
The NestJS Backend
First, we are going to work on our NestJS server. We will walk through setting up a connection to the database, creating a schema to represent the data we want to store, creating a service to handle adding and retrieving records, and setting up the appropriate routes in the controller.
1. Connecting to MongoDB
In order to connect to MongoDB in our NestJS server, we need to add the
MongooseModule to our root module. We will use the
forRoot method to supply the connection address, which is exactly the same as what we would use if we were just using the standard
mongoose.connect method described here.
Modify src/app.module.ts to reflect the following:
import { Module, HttpModule } from '@nestjs/common'; import { MongooseModule } from '@nestjs/mongoose'; import { AppController } from './app.controller'; import { AppService } from './app.service'; @Module({ imports: [ HttpModule, MongooseModule.forRoot('mongodb://localhost/mydb') ], controllers: [AppController], providers: [AppService] }) export class AppModule {}
As you can see, we supply
mongodb://localhost/mydb to the
MongooseModule which will set up a connection to the
mydb MongoDB database on
localhost. Keep in mind that in a production environment this connection address would be different.
If you have been following along with the previous tutorials you might notice that we have made some other changes to this module. Previously, we had included a
QuotesService and a
MessagesController in this module. We will be getting rid of the
QuotesService as this was just an example for a previous tutorial. Instead of adding the
MessagesController directly to the root module, we are going to give the messages functionality its own module that we will import into the root module later. Since the
Messages functionality is becoming a little more complex now, it is going to be neater to organise the functionality into its own module. Even though that isn’t strictly required, it does allow for better code organisation.
2. Create a Message Schema
As I mentioned before, we can use Mongoose to define the type of data we want to store in the database. A “schema” represents the structure of the data that we want to store - if you are familiar with types and interfaces it is basically the same concept.
Create a file at src/messages/message.schema.ts and add the following:
import * as mongoose from 'mongoose'; export const MessageSchema = new mongoose.Schema({ content: String, submittedBy: String });
We create a new Schema with
new mongooose.Schema and we supply the properties that we want that schema to contain along with the types for those properties. The type of data that we are adding to this schema is the same as what we have defined in the message.dto.ts file in the previous tutorials (this represents the Data Transfer Object (DTO) used to
POST data to a NestJS server). It makes sense that these match because we intend to store the same data that we will
POST to the NestJS server in the MongoDB database.
3. Create the Messages Module
We are going to create the
Messages module now as we need this to set up a
Model based on our
MessageSchema which will allow us to interact with messages in our database. This module will set up all of the message related functionality that we need, and then we can import this single module into our main root module (rather than having to add all of these things individually to app.module.ts).
Create a file at src/messages/messages.module.ts and add the following:
import { Module } from '@nestjs/common'; import { MongooseModule } from '@nestjs/mongoose'; import { MessagesController } from './messages.controller'; import { MessagesService } from './messages.service'; import { MessageSchema } from './message.schema'; @Module({ imports: [MongooseModule.forFeature([{name: 'Message', schema: MessageSchema}])], controllers: [MessagesController], providers: [MessagesService] }) export class MessagesModule { }
We use
MongooseModule.forFeature to set up our
Message model that we will make use of in a moment - this is based on the
MessageSchema that we just created. Also, notice that we are importing
MessagesService and adding it as a provider - we haven’t created this yet but we will in the next step.
4. Create a Messages Service
Now we are going to create a messages service that will handle adding documents to the MongoDB database and retrieving them.
Create a file at src/messages/messages.service.ts and add the following:
import { Injectable } from '@nestjs/common'; import { Model } from 'mongoose'; import { InjectModel } from '@nestjs/mongoose'; import { MessageDto } from './message.dto'; import { Message } from './message.interface'; @Injectable() export class MessagesService { constructor(@InjectModel('Message') private messageModel: Model<Message>){ } async createMessage(messageDto: MessageDto): Promise<Message>{ const message = new this.messageModel(messageDto); return await message.save(); } async getMessages(): Promise<Message[]>{ return await this.messageModel.find().exec(); } async getMessage(id): Promise<Message>{ return await this.messageModel.findOne({_id: id}); } }
We are creating three different methods here:
- A
createMessagemethod that will add a new document to the database
- A
getMessagesmethod that will return all message documents from the database
- A
getMessagemethod that will return one specific document from the database (based on its
_id)
In the
constructor we add
@InjectModel('Message') which will inject our
Message model (which we just set up in the messages module file) into this class. We will be able to use this model to create new messages and retrieve them from the database.
Our
createMessage method accepts the
messageDto which will
POST from our Ionic application to the NestJS backend. It then creates a new message model using the data from this DTO, and then calls the
save method which will add it to the MongoDB database. We are returning the result of the
save operation which will allow us to see the document that was added.
The
getMessages method we call the
find method on the messages model which will return all message documents from the database (as an array).
The
getMessage method will accept an
id parameter, and then it will return one document from the database that matches the
id that was supplied. MongoDB
_id fields are generated automatically if you add a document to the database that does not contain an
_id.
Before this will work, we will need to define an
interface for our messages since we are using
Message as a type in this service.
Create a file at src/messages/message.interface.ts and add the following:
export interface Message { content: string; submittedBy: string; }
5. Create the Routes
Now we need to create the appropriate routes in the message controller.
Modify src/messages/messages.controller.ts to reflect the following:
import { Controller, Get, Post, Body, Param } from '@nestjs/common'; import { MessageDto } from './message.dto'; import { MessagesService } from './messages.service'; @Controller('messages') export class MessagesController { constructor(private messagesService: MessagesService){ } @Post() async createMessage(@Body() message: MessageDto){ return await this.messagesService.createMessage(message); } @Get() async getMessages(){ return await this.messagesService.getMessages(); } @Get(':id') async getMessage(@Param('id') id: String){ return await this.messagesService.getMessage(id); } }
I’ve already covered the concepts used above in the previous tutorials, so if you aren’t familiar with what is happening here make sure to check those out.
6. Import the Messages Module
Finally, we just need to import our messages module into the root module for the application.
Modify src/app.module.ts to reflect the following:
import { Module, HttpModule } from '@nestjs/common'; import { MongooseModule } from '@nestjs/mongoose'; import { AppController } from './app.controller'; import { AppService } from './app.service'; import { MessagesModule } from './messages/messages.module'; @Module({ imports: [ HttpModule, MessagesModule, MongooseModule.forRoot('mongodb://localhost/mydb') ], controllers: [AppController], providers: [AppService] }) export class AppModule {}
As you can see above, we can set up all of that messages functionality we just created with one clean and simple import in our root module.
The Ionic Frontend
Now we just need to update our frontend to make use of the new backend functionality - fortunately, there isn’t much we need to do here. It doesn’t really have to be an Ionic/Angular application either, as we are just interacting with a REST API.
1. Modify the Messages Service
Our frontend also has a messages service to handle making the calls to the backend, we will need to update that.
Modify src/app/services/messages.service.ts to reflect the following:
import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { Observable } from 'rxjs' @Injectable({ providedIn: 'root' }) export class MessagesService { constructor(private http: HttpClient) { } createMessage(message): Observable<Object> { return this.http.post('', { content: message.content, submittedBy: message.submittedBy }); } getMessages(): Observable<Object> { return this.http.get(''); } getMessage(id): Observable<Object> { return this.http.get(`{id}`); } }
- Trigger Calls to the API
Now we just need to make some calls to the API we created. We will just trigger these(){ let testMessage = { content: 'Hello!', submittedBy: 'Josh' }; let testId = '5c04b73880159ab69b1e29a9' // Create a test message this.messages.createMessage(testMessage).subscribe((res) => { console.log("Create message: ", res); }); // Retrieve all messages this.messages.getMessages().subscribe((res) => { console.log("All messages: ", res); }); // Retrieve one specific message this.messages.getMessage(testId).subscribe((res) => { console.log("Specific message: ", res); }); } }
This will trigger three separate calls to each of the different routes we added to the server. In the case of retrieving a specific message, I am using a
testId - in order for you to replicate this, you will first need to add a document to the database and then copy its
_id as the
testId here. The first time you serve this application in the browser you will be able to see the
_id of one of the documents that are created.
To run this code you will need to:
- Make sure that the MongoDB daemon is running by executing the
mongodcommand in a separate terminal window
- Make sure that your NestJS server is running by executing the
npm run startcommand in another terminal window
- Serve your Ionic application by executing the
ionic servecommand in another terminal window
When you run the application, you should see something like this output to the console:
You can see the document that was just created, all of the documents currently in the database, and the specific document that matches the
testId (assuming that you have set up a
testId that actually exists).
Summary
The built-in support for MongoDB that NestJS provides makes for a really smooth experience if you want to use MongoDB as your database. You will be able to use all the features you would expect from Mongoose within an Angular style application architecture. We have only scratched the surface of what you might want to do with a MongoDB integration here, in future tutorials we are going to focus on more advanced concepts like adding authentication. | https://www.joshmorony.com/using-mongodb-with-ionic-and-nestjs/ | CC-MAIN-2021-04 | refinedweb | 2,283 | 50.36 |
std::include() function in C++
Here we will learn more about std::include() function in C++. This function recognizes the matched numbers in both containers. The objective is achieved by “include”, defined in the header.
Let us look at an example that gives a brief explanation of the same:
Example-1 of std::include() function in C++
using namespace std; int main() { vector<int> array1 = { 1, 2, 3,4, 5 }; vector<int> array2 = { 1, 2, 3 }; sort(array1.begin(), array1.end()); sort(array2.begin(), array2.end()); if(includes(array1.begin(), array1.end(), array2.begin(), array2.end())) cout << "The elements are matched"; else cout << "The elements are not matched"; }
Output: The elements are matched
Example-2 of this function
using namespace std; int main() { vector<int> a = { 1, 2, 3, 3, 4, 5 , 6 }; vector<int> b = { 1, 2, 3, 4 }; sort(a.begin(), a.end()); sort(a.begin(), b.end()); if(includes(a.begin(), a.end(), b.begin(), b.end())) cout << "Matched"; else cout << "Not matched"; }
The examples match all the elements of the passes array to provide the result. Some real-life applications can be seen in the lottery decisions or in card games. In the code, it is necessary to sort the elements first. It also throws an exception on an operation on an iterator and it causes undefined behavior for invalid parameters. It gives linear time complexity.
Example-3 Complete code for std::include() function
#include <iostream> #include <algorithm> #include <vector> using namespace std; int main() { vector<int> vector1 = {1, 2, 3, 4, 5}; vector<int> vector2 = {1, 2, 3}; bool test; test = includes(vector1.begin(), vector1.end(), vector2.begin(), vector2.end()); if (test == true) cout << "Matched" << endl; test = includes(vector1.begin(), vector1.end(), vector2.begin(), vector2.end()); if (test == false) cout << "Not matched" << endl; return 0; }
This was a basic concept about using the std:: include() function for matching elements.
Also read: Increment and Decrement Operator in C++ | https://www.codespeedy.com/stdinclude-function-in-cpp/ | CC-MAIN-2020-45 | refinedweb | 322 | 51.68 |
But sometimes we just need down and dirty explanation with some examples. That's what this article is.
Don't be afraid of the idea of creating a custom event. You have probably already worked with events. If you have ever put a button on a form, then double-clicked it to create a method that handles the button.click event, then you have already worked with events.
private void button1_Click(object sender, EventArgs e) { // Here is where you put your code for what to do when the button is clicked. }An event handler receives two parameters: The object that sent the event, and an EventArgs. You can define any kind of event arguments you want. Maybe your arguments only need to be a string... maybe your arguments for an event is a picture... maybe you don't need any custom arguments because you only need to be notified with some task is done.
We're going to make:
- A string event argument
- An event that uses the string event argument
- A method that raises the event
- A method that handles the raised event
First the EventArgs, which in this example is just a string:
public class TextArgs : EventArgs { #region Fields private string szMessage; #endregion Fields #region ConstructorsH public TextArgs(string TextMessage) { szMessage = TextMessage; } #endregion Constructors #region Properties public string Message { get { return szMessage; } set { szMessage = value; } } #endregion Properties }We have a private field, a public constructor and a public property. That's it: Nothing scary. When you make a new TextArgs you will be providing a string to become the Message to be passed.
Now for the event:
public partial class Form1 : Form { public Form1() { InitializeComponent(); } #region Events public event EventHandler<TextArgs> Feedback; #endregion Events }
That's it in line 9: A new event called "Feedback" that uses your new TextArgs. Basically this a way for your Form1 to yell something to any other form (or class) that is listening.
There is no point raising an event if nobody is listening. So we are going to use a method to check first. If there is a subscriber to the event, then we raise the event. If nobody is listening, then we do nothing.
private void RaiseFeedback(string p) { EventHandler<TextArgs> handler = Feedback; if (handler != null) { handler(null, new TextArgs(p)); } }
That's it! You have created a custom argument, a custom event that uses the argument, and a method to raise the event if someone is listening. To use this in your program yo might do something like this
private void ProcessMyData() { RaiseFeedback("Data process starting..."); variableOne = variableTwo / variableThree * variableFour; string results = variableOne.ToString(); // Do a bunch of cool charting stuff RaiseFeedback("Stage one complete at: " + DateTime.Now.ToString()); // Do the more complex stuff as part of stage two RaiseFeedback("Stage two complete at: " + DateTime.Now.ToString()); }
Notice that while Form1 does it's processing it is not directly trying to force any other work. It is not logging. It is not trying to make Form6 display a MessageBox. It is not trying to force Form3 to display the feedback in a ListBox. It doesn't know or care about anything other than it's own job. This is an important concept. Each class of your program should do one thing, do it well, and do no more. If you need 6 things done then write six methods. Don't try to make one all-encompassing method that does 6 things. It will just make your life tough later when you need to change the sequence of those 6 things, or add things 7, 8, 9 but temporarily stop thing 4. Whatever you do in response to a Feedback event is NOT tightly bound to the process that raises the event and thus the two won't break each other due to minor changes.
Let's subscribe to the Feedback event:
private void Form1_Load(object sender, EventArgs e) { // Do your initial setup of the form once it loads Feedback += new EventHandler<TextArgs>(Feedback_Received); }
and create the event handling method:
void Feedback_Received(object sender, TextArgs e) { HistoryListBox.Items.Add(e.Message); }Notice the e.Message. That comes from the TextArgs you made earlier. It is the public property Message.
Let's walk through what actually happens when you use this:
// Do some processing; RaiseFeedback("Igor, it's alive"); // Do some MORE processing;
Code jumps to the RaiseFeedback method where a new TextArgs is created putting "Igor, it's alive" into the Message property.
The event Feedback is raised with the new TextArgs.
Execution then splits. Once the event is raised program flow returns to the next line: // Do some more processing
But, execution also starts in the Feedback_Received event handling method which is going to put the Message of the TextArgs into our HistoryListBox
======= 2 weeks later =======
As your program grows you realize that everything you are sending to the HistoryListBox really should also go to a text file as a log of what your program is doing. You don't have to go through hundreds of places where you called the Feedback. You just create a logging method, and subscribe it to your Feedback event. Boom! Everything to screen now also goes to a text file.
private void Form1_Load(object sender, EventArgs e) { // Do your initial setup of the form once it loads Feedback += new EventHandler<TextArgs>(Feedback_Received); Feedback += new EventHandler<TextArgs>(LogFeedback); }
void LogFeedback(object sender, TextArgs e) { //Write e.Message to my log text file }
======== Form1 and Form2 =======
Or maybe you need Form2 to react to something that happens in Form1
private void Form1_Load(object sender, EventArgs e) { // Do your initial setup of the form once it loads Feedback += new EventHandler<TextArgs>(Feedback_Received); Feedback += new EventHandler<TextArgs>(LogFeedback); Form2 myForm2 = new Form2(); Feedback += new EventHandler<TextArgs>(myForm2.FeedbackResponse); }
public partial class Form2 : Form { public void FeedbackReponse(object sender, TextArgs e) { // Handle feedback from some other form } }Notice that Form2 doesn't have to know where the feedback is coming from. Its is happy in it's ignorance. Some other form subscribed it to it's feedback event. You could have 10 other forms all raise a feedback event and have this Form2 subscribed to all of them. It then becomes a single location to show the entire running operation of your program. | http://www.dreamincode.net/forums/topic/176796-quick-and-easy-custom-events/ | CC-MAIN-2018-09 | refinedweb | 1,045 | 62.48 |
I
View Complete Post
I have set up my buddy class with validators. However I only want the validators to fire of the associated fields are used in that particular screen. When I split them across 2 screens I find my validation doesn't work. How do I fix this? I have had to remove the validation from some properties in order to get it to work on one screen;
See the class here;
Scott Allen shows how to improve your user input validation with new features coming out in ASP.NET MVC 2 that easily allow validation on both the client and server.
Scott Allen
MSDN Magazine March 2010,
I decided to use jquery validations because asp.net validation controls are so crazy inside the update panel.
But I need to plase the error messages in the specified div whose class is putmehere ! I am unable to do that. here is my code;
<asp:Content
<script src="../JQuery/jquery-1.4.2.js" type="text/javascript"></script>
<script src="../JQuery/Validation/jquery.validate.js" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
$("#aspnetForm").validate({
rules: {
<%=TextBox1.UniqueID %>: {
minlength: 2,
required: true
},
<%=TextBox2.UniqueID %>: {
required: true,
}
}, messages: {
<%=TextBox1.UniqueID %>:{
required: "* Required Field *",
minlength: "* Please enter atleast 2 characters *"
} implemented aps.net mvc validation and it is working fine. Only issue I have "Input-validation-error" css class is not getting applied if model has complex types.
I am using my own data annotations for validation.
Any idea how to resolve this.
if you use domain service classes you have the option to produce a meta class. this allows you to add attributes like [Required] etc for simplifying validation.
i have been trying to orgainize my assemblies by splitting out the domain service from the model.
during that splitting out process i have found that the meta class must reside in the same namespace AND assembly
if they get seperated physically to another project, the code in the meta classes fail to get hit despit being in the same namespace and despite being a partial class.
weird.
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/43252-metadata-buddy-class-validation-issue.aspx | CC-MAIN-2018-30 | refinedweb | 364 | 58.48 |
Configuring Diagnostics for Azure Cloud Services and Virtual Machines
Updated: June 5, 2015
When you need to troubleshoot an Azure cloud service or Azure virtual machine, you can configure Azure diagnostics more easily by using Visual Studio. Azure diagnostics captures system data and logging data on the virtual machines and virtual machine instances that run your cloud service and transfers that data into a storage account of your choice. You can also access Azure Diagnostics programmatically and by editing configuration files directly. See Collect Logging Data by Using Azure Diagnostics for more information.
This topic shows you how to enable and configure Azure diagnostics in Visual Studio, both before and after deployment, as well as in Azure virtual machines. It also shows you how to select the types of diagnostics information to collect and how to view the information after it's collected.
You can configure Azure Diagnostics in the following ways:
- You can change diagnostics configuration settings through the Diagnostics Configuration dialog box in Visual Studio. The settings are saved in a file called diagnostics.wadcfgx (diagnostics.wadcfg in Azure SDK 2.4 or earlier). Alternatively, you can directly modify the configuration file. If you manually update the file, the configuration changes will take effect the next time you deploy the cloud service to Azure or run the service in the emulator.
- Use Server Explorer to change the diagnostics settings for a running cloud service or virtual machine.
For Azure SDK 2.6 projects in Visual Studio, the following changes were made.
- The local emulator now supports diagnostics. This means you can collect diagnostics data and ensure your application is creating the right traces while you're developing and testing in Visual Studio. The connection string UseDevelopmentStorage=true enables.
- In Azure SDK 2.4 and earlier, the connection string was used as a runtime by the diagnostics plugin to get the storage account information for transferring diagnostics logs.
- In Azure SDK 2.6, the diagnostics connection string is used by Visual Studio.
When migrating from Azure SDK 2.5 to Azure SDK 2.6,, please note the changes in how connection strings are treated in Azure SDK 2.6 as specified in the previous section.
-.
In Visual Studio, you can choose to collect diagnostics data for roles that run in Azure, when you run the service in the emulator before deploying it. All changes to diagnostics settings in Visual Studio are saved in the diagnostics.wadcfgx configuration file. These configuration settings specify the storage account where diagnostics data is saved when you deploy your cloud service.
On the shortcut menu for the role that interests you, choose Properties, and then choose the Configuration tab in the role’s Properties window.
In the Diagnostics section, make sure that the Enable Diagnostics check box is selected.
Choose the ellipsis (…) button to specify the storage account where you want the diagnostics data to be stored.
The storage account you choose will be the location where diagnostics data is stored.
In the Create Storage Connection String dialog box, specify whether you want to connect using the Azure Storage Emulator, an Azure subscription, or manually entered credentials.
- If you choose the Microsoft Azure Storage Emulator option, the connection string is set to UseDevelopmentStorage=true.
- If you choose the Your subscription option, you can choose the Azure subscription you want to use and the account name. You can choose the Manage Accounts button to manage your Azure subscriptions.
- If you choose the Manually entered credentials option, you're prompted to enter the name and key of the Azure account you want to use.
Choose the Configure button to view the Diagnostics configuration dialog box..
Run your Azure cloud service project in Visual Studio as usual. As you use your application, the log information that you enabled is saved to the Azure storage account you specified.
In Visual Studio, you can choose to collect diagnostics data for Azure virtual machines.
In Server Explorer, choose the Azure node and then connect to your Azure subscription, if you're not already connected.
Expand the Virtual Machines node. You can create a new virtual machine, or select one that's already there.
On the shortcut menu for the virtual machine that interests you, choose Configure. This shows the virtual machine configuration dialog box.
If it's not already installed, add the Microsoft Monitoring Agent Diagnostics extension. This extension lets you gather diagnostics data for the Azure virtual machine. In the Installed Extensions list, choose the Select an available extension drop-down menu and then choose Microsoft Monitoring Agent Diagnostics.
Choose the Add button to add the extension and view its Diagnostics configuration dialog box.
Choose the Configure button to specify a storage account and then choose the OK button..
Save the updated project.
You'll see a message in the Microsoft Azure Activity Log window that the virtual machine has been updated.
After you enable diagnostics data collection, you can choose exactly what data sources you want to collect and what information is collected. The following is a list of tabs in the Diagnostics configuration dialog box and what each configuration option means.
Application logs contain diagnostics information produced by a web application. If you want to capture application logs, select the Enable transfer of Application Logs check box. You can increase or decrease the number of minutes when the application logs are transferred to your storage account by changing the Transfer Period (min) value. You can also change the amount of information captured in the log by setting the Log level value. For example, you can choose Verbose to get more information or choose Critical to only capture critical errors. If you have a specific diagnostics provider that emits application logs, you can capture them by adding the provider’s GUID to the Provider GUID box.
See Enable diagnostic logging for Web Apps for more information about application logs.
If you want to capture Windows event logs, select the Enable transfer of Windows Event Logs check box. You can increase or decrease the number of minutes when the event logs are transferred to your storage account by changing the Transfer Period (min) value. Select the check boxes for the types of events that you want to track.
If you're using Azure SDK 2.6 and want to specify a custom data source, enter it in the <Data source name> text box and then choose the Add button next to it., such as in the following example.
Performance counter information can help you locate system bottlenecks and fine-tune system and application performance. See Create and Use Performance Counters in an Azure Application for more information. If you want to capture performance counters, select the Enable transfer of Performance Counters check box. You can increase or decrease the number of minutes when the event logs are transferred to your storage account by changing the Transfer Period (min) value. Select the check boxes for the performance counters that you want to track.
To track a performance counter that isn’t listed, enter it by using the suggested syntax and then choose the Add button. The operating system on the virtual machine determines which performance counters you can track. For more information about syntax, see Specifying a Counter Path.
If you want to capture infrastructure logs, which contain information about the Azure diagnostic infrastructure, the RemoteAccess module, and the RemoteForwarder module, select the Enable transfer of Infrastructure Logs check box. You can increase or decrease the number of minutes when the logs are transferred to your storage account by changing the Transfer Period (min) value.
See Collect Logging Data by Using Azure Diagnostics for more information.
If you want to capture log directories, which contain data collected from log directories for Internet Information Services (IIS) requests, failed requests, or folders that you choose, select the Enable transfer of Log Directories check box. You can increase or decrease the number of minutes when the logs are transferred to your storage account by changing the Transfer Period (min) value.
You can select the boxes of the logs you want to collect, such as IIS Logs and Failed Request Logs. Default storage container names are provided, but you can change the names if you want.
Also, you can capture logs from any folder. Just specify the path in the Log from Absolute Directory section and then choose the Add Directory button. The logs will be captured to the specified containers.
If you use Event Tracing for Windows (ETW) and want to capture ETW logs, select the Enable transfer of ETW Logs check box. You can increase or decrease the number of minutes when the logs are transferred to your storage account by changing the Transfer Period (min) value.
The events are captured from event sources and event manifests that you specify. To specify an event source, enter a name in the Event Sources section and then choose the Add Event Source button. Similarly, you can specify an event manifest in the Event Manifests section and then choose the Add Event Manifest button.
The ETW framework is supported in ASP.NET through classes in the System.Diagnostics namespace. The Microsoft.WindowsAzure.Diagnostics namespace, which inherits from and extends standard System.Diagnostics classes, enables the use of System.Diagnostics as a logging framework in the Azure environment. For more information, see Take Control of Logging and Tracing in Microsoft Azure and Enabling Diagnostics in Azure Cloud Services and Virtual Machines.
If you want to capture information about when a role instance crashes, select the Enable transfer of Crash Dumps check box. (Because ASP.NET handles most exceptions, this is generally useful only for worker roles.) You can increase or decrease the percentage of storage space devoted to the crash dumps by changing the Directory Quota (%) value. You can change the storage container where the crash dumps are stored, and you can select whether you want to capture a Full or Mini dump.
The processes currently being tracked are listed. Select the check boxes for the processes that you want to capture. To add another process to the list, enter the process name and then choose the Add Process button.
See Take Control of Logging and Tracing in Microsoft Azure for more information.
After you’ve collected the diagnostics data for a cloud service or a virtual machine, you can view it.
Deploy your cloud service as usual and then run it.
You can view the diagnostics data in either a report that Visual Studio generates or tables in your storage account. To view the data in a report, open Server Explorer, open the shortcut menu of the node for the role that interests you, and then choose View Diagnostic Data.
A report that shows the available data appears...
On the shortcut menu for the virtual machine, choose View Diagnostics Data.
This opens the Diagnostics summary window..
If you're investigating a problem with a cloud service that already running, configuration settings, see Configure diagnostics data sources in this topic. For information about how to view the diagnostics data, see View the diagnostics data in this topic.
If you change data collection in Server Explorer, these changes remain in effect until you fully redeployx (or .wadcfg) file as set through the Properties editor for the role. If you update your deployment, Azure keeps the old settings.
If you experience problems with your cloud service projects, such as a role that gets stuck in a "busy" status, repeatedly recycles, or throws an internal server error, there are tools and techniques you can use to diagnose and fix these problems. For specific examples of common problems and solutions, as well as an overview of the concepts and tools used to diagnose and fix such errors, see Windows Azure PaaS Compute Diagnostics Data./.wadcfgx. The following three timestamp columns in the log tables are used.
- PreciseTimeStamp is the ETW timestamp of the event. That is, the time the event is logged from the client.
- TIMESTAMP is PreciseTimeStamp rounded down to the upload frequency boundary. So, if your upload frequency is 5 minutes and the event time 00:17:12, TIMESTAMP will be 00:15:00.
- Timestamp is the timestamp at which the entity was created in the Azure table.. | https://msdn.microsoft.com/en-us/library/dn186185.aspx | CC-MAIN-2015-35 | refinedweb | 2,040 | 55.54 |
I have a .csv file that could have anywhere from 50 rows to over 10,000. The first 32 rows (geographical header information) will always be ignored. The 2nd column of data will have 0.00 for (x) rows. Once that column starts to read a value I want to start storing that data up the max number in column3 before column3 starts to decrease. I will then want to store all of the data in the 2nd and 3rd columns that were in the same rows. Then I want to output that data to another file(colum3, column2).
Here is a sample of the data:
Column2 at value 0
19/08/2013 10:39:47.000,0,0.009,29.621,-0.002,0.014,-4.227,1508.28
more rows with column2 at 0.
Column2 starts reading a value
19/08/2013 10:51:32.000,1547.122,1.543,29.552,59.068,35.812,22.495,1545.548 Start Data Storage
Column3 reaches its max value
19/08/2013 10:58:23.000,1502.544,223.176,12.228,41.002,35.662,28.057,1502.078 End Data Storage
Where do I start after accessing the .csv file?
#include <iostream> #include <fstream> #include <string> #include <sstream> using namespace std; int main() { string soundvelocity; string pressure; ifstream myfile; myfile.open ("C:\\Program Files\\DataLog Express\\SV Cast_Test_Data.csv"); myfile.close(); return 0; }
I know i will need a for loop for accessing column2 and column3. Is this the right approach?
// Getting data from 3rd column(pressure) for(i=0;i<3;++i) { getline(input_string_stream,entry2, ','); } //Getting data from 2nd column(soundvelocity) for(i=0;i<2;++i) { getline(input_string_stream,entry1, ','); }
I know this is long but any help would be appreciated. | https://www.daniweb.com/programming/software-development/threads/465487/reading-in-parts-of-a-csv-and-outputting-2-columns | CC-MAIN-2017-47 | refinedweb | 295 | 67.86 |
Awesome Autocomplete: Trigram Search in Rails and PostgreSQL
PostgreSQL mostly known as Postgres, is one of the most mature and robust databases out there. It’s a multi-platform, open source and the second most used open source DBMS.
Today we will see about how to implement a basic autocomplete search using a trigram in Postgres and Rails. The tutorial is split into three parts:
- What is a trigram?
- Trigram in Postgres
- Implementing trigram in a sample rails app
What is a Trigram?
A trigram is nothing but an n-gram with three letter sequence. So, what is an n-gram? From wikipedia,
In the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sequence of text or speech.
Well, what does that mean, exactly? It means finding matching words by maintaining variable sequences of characters in a word.
In a trigram the variable sequence in n-gram is 3. For finding the similarity between two words, wordA and wordB, wordA is split into three letter sequences and compared with the three letter sequence combinations computed from wordB. The comparison aims to find the number of shared sets between the two words. The more number of sequence matches means the high similarity between the words. This becomes very useful in terms of autocompletion.
Each word is treated with two spaces prefixed and one space suffixed to balance the number of trigrams for a n-character word. That’s confusing, so let’s have an example.
Assume we have a word group that consists of three words
[google, toddle, beagle] and the search term is googlr. We need to find the best matching word from the batch for the search term. First the batch words are split with three letter groups:
google - g, go, goo, oog, ogl, gle, le toddle - t, to, tod, odd, ddl, dle, le beagle - b, be, bea, eag, agl, gle, le
The trigram groups of three letters will be calculated for the search term and compared to the words in for the batch for the sequences they share:
g, go, goo, oog, ogl, glr, lr google - 5 toddle - 0 beagle - 0
The similarity is calculated using the number of trigrams they share, which in our case is quite trivial. So
For the second use case, let’s say the search term is just
gle. The trigram is:
g, gl, gle, le Matches - google - 3 toddle - 1 beagle - 2
For this,
beagle. Once you are comfortable with this concept, we can move on to how trigrams are implemented in Postgres.
Trigram in PostgreSQL
Postgres supports trigrams through an extension called pg_trgm which is an officially supported extension. It’s worth noting that pgtrgm ignores special characters while calculating trigram.
The following list consists of the features that the extension comes with, which helps in doing trigram searches:
similarity(text1, text2)– Calculates the similarity index between
text1and
text2in the scale of 0 to 1, with 0 being the least similar.
show_trgm(text)– Lists the possible trigrams that could be calculated from the given
text, like we did above.
show_limit()– The set filter used by the
%method (look below). Similarity indices above this limit are only shown while performing a trigram search. The default limit is 0.3.
set_limit(real)– Sets the limit to be used by
%method.
text1 % text2– Returns true if the similarity between
text1and
text2is above the set limit.
text1 <-> text2– Distance operator, which is the inverse of similarity. Returns the distance between
text1and
text2.
gist\_trgm\_opsand
gin\_trgm\_ops– Builds the GiST or GIN index, respectively, over a text column for faster similarity search.
Let’s get started with implementing a trigram search in a Rails app.
Implementing a Trigram in Rails
Our sample app is going to be very simple with only one model,
Post, which has two columns
title and
content. Let’s quickly create the app, model, and controller using the commands below. I am using Rails 4.2:
rails new app_name -d postgresql cd app_name rails generate model post title content rake db:create && rake db:migrate rails generate controller posts index
Seed the database with some fake data. I’m using the Faker gem. Below is the seed file:
(0..100).each do |p| Post.create(title: Faker::Commerce.product_name, content: Faker::Company.catch_phrase) puts "Created #{p}" end
Let’s also add some basic content to the controller:
def index if params[:q].present? @posts = Post.where(title: params[:q]) else @posts = Post.all end end
In the app/views/post/index.html.erb file, add the below lines, which includes a basic search box along with the list of all the posts:
<form method="GET" action="/"> <input placeholder="Search" id="search" type="text" name="q" /> <input type="submit"> </form> <table> <tr> <th>Title</th> <th>Content</th> </tr> <% @posts.each do |post| %> <tr> <td><%= post.title %></td> <td><%= post.content %></td> </tr> <% end %> </table>
We now have a basic application with a single model, 100 rows of posts, an index page, and a search option that matches only the full title of the post. Let’s plug a trigram search into it.
Install the pg_trgm Extension
As mentioned before, Postgres provides trigram functionality via the
pg_trgm extension . Install it in the app using a migration instead of doing it directly in the psql console. Create a migration using the below command:
rails generate migration AddPgTrgmExtensionToDB
Add the following to the migration:
execute "create extension pg_trgm;"
Run the migration. This will install the pg_trgm extension in Postgres.
Add a Search Index
When we’re at it, let’s also add an index to the column that we are going to search. GiST (Generalized Search Tree) and GIN (Generalized Inverted Index) are two kinds of indices in Postgres. Adding an index is not mandatory, but desirable to speed up queries. At this point, I really can’t recommend GiST or GIN since I’ve had varying performance differences between them in the past. Primary differences between the two indexes and how to choose one can be found here. Add whichever works for you best.
Create a migration and add the below line to it:
add_index :posts, :title
Run the migration and that’s it! We’re all set on the database side. Quickly add the search query to make use of the trigram similarity.
Search Method
To add the search option, add a method in our
Post model:
class Post < ActiveRecord::Base def self.text_search(query) self.where("similarity(title, ?) > 0.3", query).order("similarity(title, #{ActiveRecord::Base.connection.quote(query)}) DESC") end end
Let’s also replace the search line in our controller from
@posts = Post.where(title: params[:q])
to
@posts = Post.text_search(params[:q])
That’s it. Start the server, then go and search with typos and see the magic of similar words showing up.
In the
text_search method in post.rb, the threshold score is set to
0.3. Feel free to tune this to meet your requirements. The more the threshold, the less results and the stricter the searches.
One way of improving the speed of the search is by having a separate column that will hold all the trigram sequences of the title column. Then, we can perform the search against the pre-populated column. Or we can make use of
ts_vector, but that would become useless with fuzzy words.
Conclusion
There is a gem called pg_search that provides the trigram search functionality out of the box, but for some reason the trigram search from this gem is slower for me than the raw SQL. I may cover this gem in the future.
All the code used for sample in this is hosted on GitHub. Feel free to fork an play with it.
Thank you for reading. I hope you find this useful in your Rails application development. | https://www.sitepoint.com/awesome-autocomplete-trigram-search-in-rails-and-postgresql/ | CC-MAIN-2019-04 | refinedweb | 1,313 | 65.01 |
...
You rock... Thanks for taking the time to do the polishing...
Does anyone have any links to see these in action? Also, does anyone have any links to public sharepoint sites?.
Installed your web part "What's New" but seem to have a problem. It works fine except for some document libray files that Adobe PDF. It doesn't show the file name only the location. Any idea what's up with that?
Frick, you can control this by using the "List fields" property.
Jan, I have an issue where a particular site and it's sublevels aren't showing in the tree when I'm on one particular site (2nd level).
My structure:
Sites: Internal: 2003 Planning: Sub Site 1: Sub Site 1.1
This tree usually shows on the when I click "Start from root", but DOES NOT show when I'm in the "2003 Planning" site.
Why this one site causes the whole site tree to not show, but if I go into "Internal" (above) or "Sub Site 1" (below) the whole tree shows for all levels?
I'm having the same problem as Frick (PDFs not showing in the What's New webpart).
Chris
1) Are you using SharePoint Portal Server or Windows SharePoint Service.
2) You should use the List fields property to tweak the title of the item in the list.
My question has to do with either the breadcrumbs or the navigation web part. I can get it to display on my site fine but it seems that it only shows site and subsite levels, it doesn't show down to individual pages. For instance if I go into edit a contact should it show something like rootsite/subsite/contacts/edit contact ?
I looked at all the settings but I don't see anything.
Brett, this is not possible. If you want that behavior I suggest you alter the code a little bit.
When I install the Lead-it SharePoint RSSReader Webpart 1.0.0.1.msi.....it never prompts me for the installation location.
Does this require a 2003 Server ? Or will it work on a 2000 Server ?
Steve, it should work on both of them.
I am having the same navigation problem as Chris, some subsites don't show the correct tree. I have the following:
Site
|-Sub1
|--SubSub1
|-Sub2
It shows the whole tree at Site and SubSub1, but Sub1 only shows:
Site
|-Sub2
and Sub1 shows only:
Site
|-Sub1
|--SubSub1
Start from root and 5 levels is selected for all of them.
For reference I am running WSS on 2K3 server using MSDE.
When I download the source code for the RSS Reader, the project references an RSS directory that contatins the RSS and RSSFeed namespaces, but the files seem to be missing from the ZIP archive.
Please help.
It seems like I can use the files from version 1.0.0.0. Please let me know if this might cause a problem.
I placed the navigation web on the main page of my portal. It shows all of my areas. However, it doesn't display anything from or below. The CorasWorks workplace navigation webpart doesn't seem to be able to handle this either. Anyone aware of a webpart that can show sites from the portal on down into WSS sites?
Hi
When i try to add the useralert web part I get the question for user an password, and access denied. Seems that whatever i try to autenticate with i get a problem. Can anyone tell me what the problem is, and what rights the user should have ? I am running as administrator on aa test-server.
Idid run into the same problem with the
Snorre
Updated my list of links: 3rd Party Webparts
For those who aggregate my feed and do not often visit the blog iteself... I've updated my SharePoint...
Mike, does this RSS web part work with authenticated proxies? I have a client that is looking for a rss feed web part but needs it to work under these conditions.
You can reach me at mlisidoro@gmail.com.
Thanks
Miguel
When i add a feed url this message appears:
There was an exception reading the RSS Feed.
The ServicePointManager does not support proxies of server scheme.
what´s the trouble, and how can i fix it?
I have successfully installed the Lead-it SharePoint Essential Webparts 1.1.1.0.msi file onto all sites under my portal. If I wish to uninstall, how would I proceed? help?
many thanks.
Hi
I need microsoft.Sharepoint.dll file
please send to me
Advance thanks
Thanks
Jenkins
Lead-it SharePoint Essential Webparts 1.1.1.0.msi installation failure.
When we ran the MSI, we get the followoing error: Setup has encountered an error and must close.
The error referanced the wppackager.log for more information... but the log was empty.
Has anyone exeperienced a complete failure during installation? I'm at a loss for a starting point to troubleshoot.
Thanks for any help
R2
On 2003 SmallBusinessServer and WSS I had the same problem as R2:
Setup has encountered an error and must close.
The error referanced the wppackager.log for more information... but the log was empty.
thanks
Nicola
The MSI installer isn't installing the files where they need to go, but it thinks that it is installed. Is there a .cab file or a solution to this problem?
Thanks,
Ashkan
same error, what can I do??
hi Jan,
I installed your webpart, but now all of my SPS sites display the error "Server Error in '/' Application."
I've tried Microsoft KB articles but i hasnt helped. Please help!
I tried uninstalling but i still have the problem. Any ideas what could cause this?
Regards,
Martin
martin.keen@porthosp.nhs.uk
Once i turned Customerr logging to OFF - i now get:
Parser Error Message: Unable to read the security policy file for trust level 'wss_custom'.
When i examine my web.config file i see a line:
<trustLevel name="wss_custom" policyFile="C:\Program Files\Common
Files\Microsoft Shared\Web Server
Extensions\60\config\wss_custom_wss_minimaltrust.config" />
However, when I navigate to this folder there is no file called
'wss_custom'.
Jan.. Does your install/uninstall routine change any of these things?
HI,
I have 'resolved' this problem by making a copy of the wss_mediumtrust.config file and renaming it.
Do you forsee any implications in doing so?
I also had to make a couple of changes in the web.config file.
still stumped as to why it was referring to the missing wss_custom_wss_minimaltrust.config. Was this anything to do with your webpart?
Cheers,
MK *PHEW!
you guys rock --- love the RSS web part for Share Point. Keep on keeping on.
I was just hired by a client who is experiencing authentication problems on their WSS sites (subsites only). There are users who have read access to a parent site and read access to a subsite. They can access the parent site properly but when they try to access subsite (with Lead-It Alert webpart) they keep getting prompted for a login. If I make the user an Admin for the subsite they don't get prompted. If I remove the Lead-It part and then put them back to Reader they can access the subsite properly. Does lead-it need elevated permissions to work. I can see how it needs permission to add an alert but just to paint the page it should not require elevated privledges, should it?
Bill_Brace@hotmail.com
Lucy! Please call me,Lucy! Please call me
<a href= >a synonym for important</a> <a href= >a mother and father prayer</a> <a href= >a+ software practice tests</a> <a href= >a fine romance with no kisses</a> <a href= >a kiss so deadly</a>
<a href= 43="goodurl_megabase.txt >iva[!3!]@[!2!].[!5!].edu</a> [url=43="goodurl_megabase.txt]iva[!3!]@[!2!].[!5!].edu[/url]
<a href= >a company call pds parcel</a> <a href= >a bird of prey jim grimsley</a> <a href= >a muscleman</a> <a href= >a mummy shaped amulet that held farm tools</a> <a href= >a very good production inc.</a>
<a href= >tetris rom snes</a> <a href= >javascript digital klocka</a> <a href= >haulocaust timeline</a> <a href= >so called life lyrics</a> <a href= >online money converters</a>
I tried to download the 1.1.1.0 version, and had the following error message:
This page has an unspecified potential security flaw. Would you like to continue.
I'm hesitant to click the yes button.
Simon
I'm getting the following error despite trying many server addresses. Any ideas?
========= WPPackager install log started 07/06/2007 15:40:22 =========
07/06/2007 15:40:35: Error: Error while installing from 'Lead-it SharePoint Essential Webparts'. Invalid virtual server 'server address'.
07/06/2007 15:40:35: Error: Error message: 'The server instance specified was not found. Please specify the server's address and port.' while installing Web Part package to virtual server ''.
07/06/2007 15:40:35: Critical Error: Unable to install package to any virtual servers
Free SharePoint Web Parts (3rd Party) Konrad Brunner - UGS's Web Parts (broken link 8/25) Document
I'm also getting the error message:
02-07-2007 14:37:14: Error: Error while installing from 'Lead-it SharePoint Administration Webparts'. Invalid virtual server ...
And I'm 100% sure that it's a correct URL I have typed.
I installed Lead-it RSSReader successfully and is there a limitation on the feed URL to be just one. How can I separate if I am using more than one Feed URL ?
getting this error
There was an exception reading the RSS Feed.
The underlying connection was closed: Unable to connect to the remote server.
Hi, nice very nice page..!
high school fund raiser - allfunds.free-site-host.com/provident-fund/index.html
<a href="allfunds.free-site-host.com/non-profit-fund-raising/index.html">globe fund</a>
[URL=allfunds.free-site-host.com/nonprofit-fund-raising/index.html]high school fund raiser[/URL]
church fund raising idea - allfunds.free-site-host.com/nonprofit-fund-raising/index.html
<a href="allfunds.free-site-host.com/non-profit-fund-raising/index.html">non profit fund raising</a>
Good luck !
PS: do you listen Linkin Park ?
Too bad about the idjuts from Angelfire and High School Fund raiser. Go away. This site is for developers. We are trying to get things done. Jerk off in your bedroom.
I ran the msi and Add Remove Programs shows Lead-It RSS Webpart is installed, but I can't see it in WSS 3.0. Any help Jan? We need people like you leading the way. Should I build, package and install? Is it worth my time? Will it work in 3.0? Smiling Goat component only works sometimes and we have had to turn it off. We want RSS and I would hate to start from scratch. What stops Lead-It RSS from showing on my WSS site webparts? Is it only installing on default site? I have many sites. Can I point msi to my dev site for evaluation. Help is appreciated. Thanks for your time.
I can send you my email if you do not post. See my post from moments ago for I-Lead RSS question. Should I send you my email address? Post response for a few days and then remove. I will return and look for it. Thank you. Jan. Idjuts are nothing but that. Let's not let them ruin our world. Peace and beer, Hypercat.
Hi lads,
I am new to sharepoint. I have successfully installed the rssreader webpart to my WSS 3.0. The problem is i can not connect any feed generated by SharePoint it self to this webpart.
After doing some investigation, it seems like sharepoint doesn't pass through the authentication to the webpart because i receive authorization error 401.
Is it built like this? or am i doing something wrong?
Why am i trying to do this? because i want the announcement on the top site to be able to be shared/viewed on all the subsites with rss reader. Or maybe i am doing it the wrong way?
If any of you could help :D
Thank you in advance
You should be able to render the weather RSS using the buildt-in DataView, why go trough the trouble of installing a WebPart?
MSDN Webcast: Microsoft Office SharePoint Designer 2007: It's for Developers Too (Level 300) (ID:1032345471) | http://weblogs.asp.net/jan/archive/2004/03/11/88118.aspx | crawl-002 | refinedweb | 2,100 | 76.42 |
I do a lot of data conversion, and my
experience with Excel to Access is that if the
entire column in Excel is formatted as text
before you use the import wizard it transfers it
as a text field. It is when you have not
specified a format that it uses "General" which
tries to discern a format from the Excel file.
Also, I know this is automated in a VBA script,
but it is always more useful for me to import
the information into a new table and then use
copy/paste-append to add it to an existing
table so that any errors in format, etc. can be
caught before the data is in the ongoing table.
General discussion
All Comments
I do a lot of data conversion, and
import excel bombs
This conversation is currently closed to new comments. | https://www.techrepublic.com/forums/discussions/import-excel-bombs/ | CC-MAIN-2019-09 | refinedweb | 142 | 51.52 |
Using mxmlc / Flex Builder, you can safely store any kind of binary data, text, or XML directly in your SWF if loading it at runtime is not possible or not desirable. Find out how below.
The Setup
My friend EJ was wondering if there was some way to compile XML directly into an application without putting it inline in code. Of course E4X allows you to type XML literals, but doing so with large blocks of XML can be problematic for a few reasons. You can’t edit this XML with an XML editor. The XML parsing inside Flex Builder can be faulty on occasion, causing ActionScript in the same file to be misinterpreted. Placing XML in your source files can make them unwieldy and large, and slow to parse.
An Idea, but Not the Right One
My initial idea was to use include. Even though #include is deprecated in AS3, the little-known include (no hash mark!) uses the same syntax and can include code in much the same way. The Flex framework uses this to include a common version number as a static field in many classes:
mx_internal static const VERSION:String = "3.0.0.0";
This line, found in the file Version.as is placed in the class definition of classes where include "Version.as" is found. However, you can’t just include arbitrary copy. The included code has to be one or more full lines. So my idea of using
protected var xml:XML = include "static.xml" ;
wasn’t going to work. It did work when I included the whole variable declaration in the XML file, but then the file wasn’t valid XML any more! Not even close. And not too much better than just typing inline.
The Solution
The solution here is much, much more powerful than the original hack might have been. The MXML and AS3 compiler, mxmlc, which is used by Flex Builder as well, has the ability to embed all kinds of video, graphics, other swfs, and fonts into a SWF. The [Embed] metadata tag / compiler directive works whether you are using Flex or simply AS3. [Embed] associates a bit of data with a class that is capable of representing it.
If you haven’t seen it used before, here’s a simple example where we embed an image in the SWF:
[Embed(source="assets/photo.jpg")] private const PhotoImage:Class;
And then you can use it:
var myPhoto:DisplayObject = DisplayObject(new PhotoImage());
So, I don’t know how Adobe is really implementing things under the hood, but I say the following with some certainty. The compiler, happening across your [Embed] directive, retrieves the source, and examines it to see what kind of file it is. Depending on the type of file you’ve asked the compiler to embed, the runtime class reference will produce a subclass of a particular kind of class which is appropriate for the data you’ve embedded. The compiler will take the source of that file, transcode it, preparing the data to be inlined in the SWF itself, and possibly preprocessing it, for example parsing SVG into vector shapes.
You should also know that depending on whether you’re using the Flex framework or not, I believe you will end up with different superclasses associated with your embedded assets. If you use Flex, you might see a FontAsset subclass where you would simply get a Font subclass were you only using ActionScript.
You can also embed SWFs and symbols from within SWFs, a technique I quite like. Simply use a symbol attribute in the compiler directive:
[Embed(source="MenuAssets.swf", symbol="com.partlyhuman.assets.TopMenuItem")] protected const TopMenuItem:Class; addChild(Sprite(new TopMenuItem()));
The transcoder should automatically know what to do with these kinds of assets:
- JPG/JPEG image files
- GIF image files
- PNG image files
- SVG/SVGZ vector image files (supports a subset of SVG 1.1)
- MP3 sound files
- TTF font files
- Installed system fonts
- SWF files and specific symbols inside SWFs
No, I Know What I’m Doing, Really
But get this! There are more things you can convince the transcoder to accept. By manually specifying the MIME type of the file, you can force the transcoder to interpret the data in some format. This is the solution to embedding any XML at compile time, and then some. In fact, you can embed any binary data you want in a SWF, as long as you know how to interpret it on the other side.
There may be additional interesting MIME types that are registered by the transcoder. These are, as far as I know, undocumented, so if you find an interesting one that’s not covered by the types above or these two introduced here, leave a comment.
Here, we see that we can import an XML file with MIME type text/xml, and it embedded as a subclass of XML.
[Embed(source="test.xml", mimeType="text/xml")] protected const EmbeddedXML:Class; var x:XML = XML(new EmbeddedXML()); trace(x.toXMLString()); //it should work!
To get this to work, you should keep the XML prolog in your XML file. With this technique, EJ didn’t have to load the XML asynchronously, it was embedded right in his SWF for instant access, binary compression, and easy deployment, and tight coupling with the build itself. He could also now use a normal, full-featured XML editor to mess with the XML source.
But it gets better! You can embed any binary data whatsoever in a SWF, as long as you know how to interpret it on the way out. To do this, use the directive to embed any file with MIME type application/octet-stream (an octet is just a fancy word for a byte, by the way, as a byte is eight bits, so a stream of octets is a fancy way of saying “a bunch of bytes”). The class comes out in ActionScript as a subclass of, can you guess? ByteArray!
Here, ActionScript genius Max knows how to parse a WAD file, which Doom and other ID games used for levels and sprites and all kinds of business. He puts it right in the SWF by embedding it as application/octet-stream and interpreting it as a ByteArray:
public class DoomTest extends Sprite { [Embed(source="doom1-shareware.wad", mimeType="application/octet-stream")] private const DoomWad:Class; public function DoomTest() { var wad:ByteArray = new DoomWad() as ByteArray; new DoomCore(this,wad); } }
(Full source here) And, passing it to his Doom playing engine, you start playing the shareware level of Doom that was embedded right in the SWF!
You can apply this technique to any binary data you’ve cleverly figured out how to parse. Of course, Flash knows very well how to parse all the file types described in the list above, but with some creative coding, that’s just the beginning!
Holly Hell. That is quite intense. Someone do Hexen as well!
i just could not get the xml embedding to work as you said.
i kept getting Syntax Error: xmltagstartend unexpected if i included a doctype, or my xml would come out as null.
if anyone else is having this problem, i tried your second method and it worked:
[Embed(source="info.xml", mimeType="application/octet-stream")]
protected var EmbeddedXML:Class;
var ba : ByteArray = (new EmbeddedXML()) as ByteArray;
var s : String = ba.readUTFBytes( ba.length );
xml = new XML( s );
xml.ignoreWhitespace = true;
oh and thanks for everything roger!!!! :)
HI,
I have been trying to get the Embed with mimeType = “text/xml” to work with no luck,Keep getting a syntax errors. here is my code :
package {
import flash.display.*;
public class Embed extends MovieClip {
[Embed(source="/assets/employees.xml", mimeType ="text/xml")]
var theClass:Class;
public function Embed() {
var xml:XML = XML(new theClass());
trace(xml.toXMLString());
}
}
}
any idea ?
Thanks, it worked.
Hello,
I have the same problems mentioned in the other comments. I get also the error xmltagstartend unexpected. So how does it really work to embed XML?
Please help!
Hey Roger, are you sure the XML-example works for you? I tried it out in Flex Builder (2) and all I got was the toString() representation (”[object Model_EmbeddedXML]“)..
Hello Roger,
I’m having issues using your xml embed technique. I have a feeling it may be my prolog.
Would you be open to providing us with test.xml that you used in the example above so that we can test with an XML file that works?
Thanks!
Hey Roger :)
That is cool example u have here.
Is there a chance it will work with *.flv ?
Actually I managed to embed it as byte array, but i could not figure how to translate it backwards.
The problem I’m dealing with is that my client wants to put the player I made on a CD and he wants the videos to be part of the swf.
Another way could be embedding the whole folder into swf.
Is there a chance someone ever had to do it ?
Thanks a lot.
Fell free to contact me on my skype, the address is : atlaseli
@Eli, I’m not sure you will have success playing back embedded FLVs, since the NetStream object seems to only want to load FLVs from remote locations… The one way I thought might work, though not exactly what you want, is to compress the videos as SWFs instead of FLVs, and then embed them and display with Loader.loadBytes().
Hi Roger,
I am trying to embed an SWF created in Flash (CS3). After embedding it I want to access a parameter from it (a:String).
I am not able to do it.
[Embed(source="3.swf")]
private const SWFAsset:Class;
and using it as
var swfMovie:MovieClip = new SWFAsset() as MovieClip;
now I want to access a variable “a” from the movie clip.
tried trace(”Val ” + swfMovie['a'].toString());
Any ideas??
yeah I have to concurr this example does not seem to work or at least not in FB2.01 as an Actionscript project. Real shame as I am going nuts trying to embed XML too sensitive to load remotely. Do we have to set compiler properties or compile outside of flex builder using mxmlc to get this to work? Or will the XML parser only work within a Flex project?
looks like a bug in the transcoder perhaps or the relevant transcoder is unavailable for an Actionscript project this seems to work:
package
{
import flash.display.MovieClip;
import flash.utils.ByteArray;
public class Test extends MovieClip
{
[Embed(source="data.xml", mimeType="application/octet-stream")]
private static const MyData:Class;
public function Test():void{
var byteArray:ByteArray = new MyData() as ByteArray;
var xml:XML = new XML(byteArray.readUTFBytes(byteArray.length));
trace(xml.toXMLString());
}
}
}
PS. Thanks Dan.
Actually, this works just fine here (Flex Builder 3)
[Embed(source="/../xml/TacticalMaps.xml",mimeType="application/octet-stream")]
protected const EmbeddedXML:Class;
public function LoadMapXML():void
{
var x:XML = XML(new EmbeddedXML());
trace(x.toXMLString());
for each (var map:XML in x.maps)
{
var tmd:TacticalMapData = new TacticalMapData(map);
maps[map.@name] = tmd;
}
yeah u gotta use:
mimeType=”application/octet-stream”)]
not text/xml
…. :/
Hey everyone,
Yes, I’ve had problems getting this to work with text/xml continuously as well. I swear, though, it did work once. Then I wrote this. Then it promptly broke. I assert that with noodling you can get it to work with text/xml (I swear it did!) but I’ve been using application/octet-stream ever since. Sorry for the misinformation!
In my experience, specifying text/xml doesn’t make a difference. Flash will create an object class, with data and prototype. The xml you want is in the data fork. So your first bit of code would be:
[Embed(source = "test.xml")]
var EmbeddedXML:Class;
var myXML:Object = EmbeddedXML.data;
// process your xml – the data object behaves as xml allowing the same methods
var myvar:String = myXML.noderef.toString(); // replace noderef with target node
It’s buggy with reading in XML, the first line has to be a node – not a comment or a doctype declaration.
I noticed that any XML with a doctype declaration screws up the encoder with text/XML. Like the following …
Removing that works, or else, Application/Octet-stream seems to work really well.
Great post! The Adobe live docs list the acceptable mime-types used for transcoding, and text/xml is not among them. I’ve been using this technique (with octet-stream) for a while to compress xml documents for lighter deployment (I’m sure you’ve noticed that the xml file size shrinks by about 75% when using the embed technique. ) If XML is included as a literal in the code, rather than being embedded with the [Embed] directive, you get close to 90% compression (120kb xml files drop to circa 12kb as swfs!). I’m thinking about developing an app that dynamically builds an actionscript file from a source xml doc, that can then be compiled with ant… to semi-automate the process for clients with lots of xml docs. Definitely an under-documented, but cool, affordance! | http://dispatchevent.org/roger/embed-almost-anything-in-your-swf/ | crawl-002 | refinedweb | 2,189 | 64.1 |
Andre-Littoz
2012-02-26
Topic opened on behalf of Zhang Qi
REMINDER: never file a bug when you only need help. This is the place where you can get help and where anybody can contribute. Do not pollute the bug database. Thank you all.
Original question:
The first time I genxref the linux kernel, I found that if you want to locate some header file like asm/io.h, when you chose the arm arch, it should point to the arch/arm/include/asm/io.h. But I got nothing, and I found the path displayed on the page point to something like /linux/asm-arm/io.h which is a wrong path. So I review the lxr.conf about subdirectory section. I found that incprefix and maps may be wrong. Then I search the web and found a page . This seems can solve the problem. Now I got a quetion, that page says change the incprefix like , what the xxx means? Do it means all the arch that can be found or I need to specify a arch like x86 or arm? If I replace the xxx with $a ( the variable), will it work?
I run the apache2 on centos, using the lxr-0.10.2
The Linux kernel has a rather sophisticated directory structure to handle the different hardware it has been ported on. This directory structure has evolved with time and the historical comment in lxr.conf are no longer valid for the present kernel. The example in (Rewrite Rules for Include Paths) comes from a quick analysis of the present directory structure.
Architecture specific include files are stored in /arch/port_name/include/
Architecture independent inclue files are in /include
This defines the content of incprefix which should be written:
'incprefix' => [ '/include' , '/arch/i386/include' , '/arch/ppc/include' , 'arch/arm/include' [i]... and all other hardware ...[/i] ]
This is not convenient. You can use a short-hand notation in association with a rewrite rule.
'incprefix' => [ '/include' , '/arch/virtual/include' ]
Include directives are coded in the kernel either as:
#include <[i]subsystem[/i]/file.h>
or
#include <asm/file.h>
As you noticed, the /asm directory is deeply buried inside the /arch hierarchy. You must find a way to make
asm/
point towards the right /arch/*/include directory.
The previous
'incprefix'
will give:
A- architecture independent include
/include/[i]subsystem[/i]/file.h
will hit in subsystem (with subsystem equal to linux, acpi, …).
B- architecture dependent include
/arch/virtual/include/asm/file.h
but there is no
virtual
directory. The word virtual must be transformed into something meaningful. If we suppose that variable
a
has a
'range'
attribute containing the list of all /arch subdirectories, the rewrite rule is:
"_Replace word 'virtual' by each element in _
'range'
list until we get a hit"
But this rewrite rule must only be applied if we are testing an /arch subdirectory. This is written as:
'maps' => { '\/arch\/virtual\/' => '\/arch\/$a\/' }
I use virtual here, while the article in lxr.sourceforge.net uses xxx. Just replace the word in both places, its sole role is to create a level in the path hierarchy. However, if you change it, you must change it in two locations, which is error-prone. The rewrite rule may be modified as:
'maps' => { '\/arch\/[^\/]+\/' => '\/arch\/$a\/' }
which means "if you meet /arch followed by any name, apply the rule".
While writing this answer, I thought of an even simpler incprefix/maps pair, but I did not test it to see if is is free of side effects:
'incprefix' => [ '/include' , '/arch/include' ]
coupled with:
'maps' => { '\/arch\/' => '\/arch\/$a\/' }
Since virtual xxx are not used in the right side of the rewrite rule, there no point to add them in incprefix
End of this rather long answer. Please give feedback if that solve your problem.
ajl
Thanks for your help.
I genxref the linux kernel 3.2.7 twice. The first time I made some mistake so it failed. Today I re-genxref the source tree. This time I checked the config file twice and it succeed. I used your new incprefix-maps you mentioned but it didn't work. But I changed the maps to 'maps' => { '/arch/' => '/arch/$a/' and it works quiet well. All the /arch/xxx/include/asm/yyy.h work pretty well. It can locate thoes header files. That's awesome. But I got a new problem.
It can locate the .h files included using the absolute path like #include <linux/io.h> or #include <asm/xxx.h>, But, if it is using the relative path like #include " xxx.h" or #include " yyy.c" which xxx.h is in the same dir with the file inclcude it, it won't locate the xxx.h file.
I have another question. I found that some identifier could not be linked automatically. But if you search them, you can find them in some .h files or .c files. And I found that, if aaa.c include bbb.h and bbb.h include ccc.h. An identifier or function declared in ccc.h could be linked automatically in the bbb.h or bbb.c but when aaa.c or aaa.h use the identifier or function, it won't be linked automatically.
Andre-Littoz
2012-02-27
Problem 1: can't located included file in current directory
Give me an example with name of current directory, name of including file, name of included file. This could be an adverse side effect of over simplified 'maps' rule.
Problem 2: identifier not being recognised
This is a known ctags issue. ctags does not capture all variables and functions. Its C parser is approximately good but not totally satisfactory. Its Perl parser captures only function names; you get identifier references (but not definitions) if they happen by chance in other languages.
Give me name of some identifiers and language in which they are defined.
Free-text search is a known work-around when language parser fails but is not an exact replacement.
Problem 3: transitive inclusion
I do not understand. Identifier definitions are captured in an independent pass of genxref. Then the references are collected in a following independent pass. It does not matter that a file is included or not; whenever a declared identifier is found, it is decorated with a <span class='fid'> tag. You can even get wrong decoration because LXR does not check the definition language. It does not either take consideration for any order of appearance.
Give an example of good context and one of bad context.
Andre-Littoz
2012-02-27
Forgot to ask a question.
How long does it take to index your 3.2.7 kernel? Give information about your computer: clock frequency, memory size, CPU type, OS name and version. Thanks.
For Q1, please open /arch/arm/kernel/compat.c, see that
0018 #include <linux/types.h> 0019 #include <linux/kernel.h> 0020 #include <linux/string.h> 0021 #include <linux/init.h> 0022 0023 #include <asm/setup.h> 0024 #include <asm/mach-types.h> 0025 #include <asm/page.h> 0026 0027 #include <asm/mach/arch.h> 0028 0029 #include "compat.h"
in my lxr page, the compat.h is not linked. And a new problem is the asm/mach/arch.h is also not linked.
go to /arch/x86/kernel/asm-offsets.c
in this file in my lxr page, the asm-offsets_32.c and asm-offsets_64.c are also not linked
see line 0032~0035, like
0033 OFFSET(TI_status, thread_info, status);
I thought TI_status is a variable or macro define before, but I did a general search and found that it is a function. This is not linked either. But
0032 OFFSET(TI_flags, thread_info, flags)
the TI_flags is also a function and it is linked. This is weird. By the way, lxr is excellent, maybe not powerfull enough, it is awesome.
I install Centos in vmware8 under win7 ultimat.
My pc has AMD X6 1090T 3.2G + 8G ram + 1T 7200rpm HDD. I give vmware 4 cores to run Centos. I didn't notice how long does it take to index the kernel. It seems take more than 3 hours.
Thank you
asm/mach/arch.h
Same for me if I enter /arch/arm/kernel/compat.c with default architecture setting, i.e. x86. Just set the 'a' variable (labeled architecture) to arm through a link or a menu item and everything becomes OK.
REMINDER:
If you want to read architecture-specific kernel parts, do not forget to set the 'a' variable (architecture) to the appropriate value.
While trying to understand why it worked for me and not for you, I realised that correctly displaying and hyperlinking #include "generated-something" requires extra 'incprefix/'maps' with new variables. However I need to think a lot about it before suggesting rules because they seem to be rather complex.
#include "compat.h"
It looks like you discovered a bug. I investigate.
/arch/x86/kernel/asm-offsets.c and #include "asm-offsets_32/64.c": same bug
OFFSET and symbols not defined (consequently not hyperlinkable)
Same for me. I did a "general search" with TI_status which did not find any explicit definition. TI_status is defined by OFFSET macro in a "concealed" way. With "concealed", I mean definition tricky enough that ctags can't discover it. And if ctags does not report a definition, LXR won't know anything about that symbol. And the symbol is not hyperlinked.
Remember that LXR is not a compiler. It looks only at "first level" textual appearance of symbols. It is good enough for the majority of projects. The Linux kernel case is an extreme one. It pushes gcc and its macro interpreter to its bleeding edge technical capacities. LXR cannot keep abreast with all the cute tricks in the kernel. This is why the "free-text search" feature is provided to work around its limitations.
Nothing can be done, unless to use gcc itself as the internal LXR engine. The consequence would be catastrophic indexing and display performances.
Included files not hyperlinked from current directory
This was caused by a too-naive rewrite rule in 'incprefix'/'maps' as I feared. The suggested
'maps' => { '\/arch\/' => '/arch/$a/'
rule blindly adds a nom path component. Correct if the '/arch/include' was added by an 'incprefix' item because the architectecture-specific part is missing. But when current directory is '/arch/x86/', the 'maps' rule gives '/arch/x86/x86/' (architecture part duplicated) which does not exist.
Consequently, the correct 'incprefix' and 'maps' directives are:
, 'incprefix' => [ '/include' , '/arch/%VIRTUAL%/include' ] , 'maps' => { '\/arch\/%VIRTUAL%\/' => '/arch/$a/' }
where I use %VIRTUAL% to mark the generated part. If %VIRTUAL% is a real directory, change that marker to anything not existing in the source-tree.
There was no bug in LXR code. This illustrates how cautious you must be when writing 'maps' rules.
Thanks for your help!
I changed the lxr.conf as your suggestion. it works. The header file or the c file can be located correctly and it works well.
But this afternoon I found a new problem, yesterday I have found it but I didn't notice. See the file linux-3.2.7/arch/arm/mach-at91/at91rm9200.c
0015 #include <asm/irq.h> 0016 #include <asm/mach/arch.h> 0017 #include <asm/mach/map.h> 0018 #include <mach/at91rm9200.h> 0019 #include <mach/at91_pmc.h> 0020 #include <mach/at91_st.h> 0021 #include <mach/cpu.h>
those including asm/something works find but the mach/XXX.h is not hyperlinked.
while, there is a include directory in the linux-3.2.7/arch/arm/mach-at91/ directory. the mach is in the include dir.
So this afternoon, I test several times, I realized that the incprefix/map can only find those header file in the path like
/arch/xxx/include/ or /root/include
but, like the at91rm9200.c, it is under /arch/arm/mach-at91, it needs header file from the include dir in mach-at91.
Your incprefix/map does not work on this condition. I tried change or add some regex expression in the lxr.conf, but it didn't work. So please help me again.
Thanks!!
A quick answer: it is a simple matter of manipulation the include path with 'incprefix' and 'maps'.
Step 1:
Create a new variable like
, 'arm_cpu' => { 'name' => 'Arm processor type' , 'range' => [qw(at91 bcmring clps711x cns3xxx davinci)] }
I put in 'range' only several partial directory names found in /arch/arm as _mach-/i]xxx/. Add the others if you want to be exhaustive. It is likely you need an equivalent trick for plat-xxx/.
Step 2:
Add a new include prefix
_
, 'incprefix' => [ '/include' , '/arch/%-VIRTUAL-%/include' , '/arch/arm/%-CPU-%/include' ]
Step 3:
Add a new rewrite rule to point to /arch/arm/mach-*/include/
, 'maps' => { '\/arch\/%-VIRTUAL-%\/' => '/arch/$a/' , '\/arch\/arm\/%-CPU-%\/' => '/arch/arm/mach-${arm_cpu}/' }
_
Run LXR, it works!
I think an equivalent trick should be used for thr other architectures. A new feature should be added to LXR as "conditional variable", i.e. a variable valid only if another variable has a given value. I defined above variable arm_cpu to list the arm CPU variants. Obviously, I need the same for other CPUs. But it is much better to have an individual list per architecture because we could have duplicate entries and also many entries are not meaningful for a given architecture.
I'll do that after 0.11 release._
Hello there
I changed as you did. It indeed can locate the header files or c files. But I found a new bug.
, 'armcpu' => { 'name' => 'ARM CPU type' , 'range' => [ readfile('/home/zhang/source/linux_kernel/mach') ] } , 'armplat' => { 'name' => 'ARM Platform' , 'range' => [ readfile('/home/zhang/source/linux_kernel/plat') ] } , 'incprefix' => [ '/include' , '/arch/%xxx%/include' , '/arch/arm/%ARM-CPU%/include' , '/arch/arm/%ARM-PLAT%/include' ] , 'maps' => { '\/arch\/%xxx%\/' => '/arch/$a/' , '\/arch\/arm\/%ARM-CPU%\/' => '/arch/arm/mach-${armcpu}/' , '\/arch\/arm\/%ARM-PLAT%\/' => '/arch/arm/plat-${armplat}/' },
the file mach and plat contains arm cpu and platform dir name. this is ok , it works.
The bug is when I open /lxr/source, it look good, but when I chose ARM arch as arm or some else, and hit change, the bug comes. On the top, the same row with Architecture and Version select column, also there are ARM plat and ARM arch, there is a new select column which name has nothing but colon and it a blank select column.
For example, if I select arm , s5pv210, samsung, and click change, I am sure lxr know what I choosed because I can see the result from the url like
/lxr/source/?a=arm&%24a=arm&%24armcpu=s5pv210&%24armplat=samsung&%24v=3.2.7
but the page display that nothing changed except arm arch. the arm cpu still is at91 and arm plat also iop, like I didn't click the change button. I test these a lot of time and I still don't know what wrong.
I checked the html source file after change something and click change button, I found something strange like
<td class="banner"><span class="banner"><a class='banner' href="/lxr/source/?a=arm; Linux kernel release 3.x <> These are the release notes for Linux version 3. Read them carefully, as they tell you what this is all about, explain how to install the kernel, and what to do if something goes wrong. WHAT IS LINUX?=samsung">linux-3.2.7</a>/</span></td> </tr> </table> </td>
and before I click the change button, this part of cold should like
<td class="banner"><span class="banner"><a class='banner' href="/lxr/source/?a=arm">linux-3.2.7</a>/</span></td> </tr> </table>
below is an url, that is copied after I click the change and open a hyperlink
/lxr/source/arch/?a=arm;Linux%20kernel%20release%203.x%20<>These%20are%20the%20release%20notes%20for%20Linux%20version%203.%20%20Read%20them%20carefully,as%
the Revert button also does not work.
Andre-Littoz
2012-02-29
Revert button:
Its "revert" capability is very limited. As the doc says: "it reverts the variables to the value they had on entering this view." You "enter a view" either by clicking on a link or by clicking on the "Change" button. Once you click on this button, changes in variables take effect and "revert" will give you these new values.
Revert is only intended to cancel changes before clicking on "Change".
Faulty URL:
Something is wrong because text has somehow crept into the 'href'. I can't diagnose without further information. Send me at ajlittoz (at) users (dot) sf (dot) net:
- copy of your lxr.conf,
- name of file you displayed when you got such a strange link in 'banner'
- if possible, screen snapshot (mainly header area of LXR window) so that I can see the 'variables' row.
Hi, could you give me another email address pls? I have sent you an email to the address you give me, but it failed. I don't why, a reply msg says DNS could not be found.
Andre-Littoz
2012-03-02
Try page74010-sf (at) yahoo (dot) fr
Replace the (…) by @ and . respectively. This is a robot protection measure. If you did not replace the characters in ajlittoz (at) users (dot) sf (dot) net, the address was not valid.
Sorry for the delay, my Internet connection has been down for one day and a half. | http://sourceforge.net/p/lxr/discussion/86145/thread/72029b89/ | CC-MAIN-2014-15 | refinedweb | 2,899 | 68.36 |
Python is the top most programming language these days. I have wrote a lot of python tutorials, here I am providing Python Interview Questions and Answers that will help you in python interview. These python interview questions are good for beginners as well as experienced programmers. There are coding questions too to brush up your coding skills.
Python Interview Questions
Python is getting a lot of attention, specially in the field of data science, pen testing, scientific and mathematical algorithm development, machine learning, artificial intelligence etc.
I have been working on Python for more than 5 years now, all these python interview questions are coming from my learning on the job as well as the interviews I have taken for Python developers role. You should bookmark this post as I will keep on adding more interview questions to this list in future.
- What is Python? What are the benefits of using Python?
- What is Python? What are the benefits of using Python?
- What is PEP 8?
- What are the differences between Python 2.x and Python 3.x?
- Why do you need to make your code more readable?
- How many Keywords are there in Python? And why should we know them?
- What are the built-in data-types in Python?
- How many types of operators Python has? Give brief idea about them
- What is the output of the following code and why?
- What is PEP 8?
- What should be the output of the following code and why?
- What is the statement that can be used in Python if the program requires no action but requires a statement syntactically?
- What are the advantages of Python Recursion?
- What are the disadvantages of Python Recursion?
- What is lambda in python?
- Why don’t Python lambda have any statement?
- What do you understand by Python Modules?
- A module print_number given, what will be the output of the following code?
- What do you understand by Python Package?
- What will be the output of the following code?
- Will this code output any error? Explain.
- What will be the output of the following code?
- What will be the output of the following code2? Explain
- What is namespace in Python?
- Why do we need Python Directories
- How to get current directory using Python?
- Why Should We Use File Operation?
- Why should we close files?
- What are python dictionaries?
- What are the differences between del keyword and clear() function?
- What is Python Set?
- How will you convert a string to a set in python?
- What a blank curly brace initialize? A dictionary or a set?
- Explain split() and join() function.
- What is Python Decorator?
- What do you understand by Python Generator?
- What do you understand by Python iterator and Iterable elements?
- What do you know about iterator protocol?
- What will be output of the following code? Explain (Python Inheritance)
- Why do we need operator overloading?
- What is the difference between tuples and lists in Python?
- How to compare two list?
- How can you sort a list?
- How can you sort a list in reverse order?
- How will you remove all leading and trailing whitespace in string?
- How can you pick a random item from a list or tuple?
- How will you change case for all letters in string?
- In Python what is slicing?
- How will you get a 10 digit zero-padded number from an original number?
- What is negative index in Python?
Python Interview Questions and Answers
What is Python? What are the benefits of using Python?
Python is a high level object-oriented programming language. There are many benefits of using Python. Firstly, Python scripts are simple, shorter, portable and open-source. Secondly, Python variables are dynamic typed. So you don’t need to think about variable type while coding. Thirdly, Python classes has no access modifiers which Java have. So, you don’t need to think about access modifiers. Lastly, Python provides us different library, data-structure to make our coding easier.
Does Python use interpreter or compiler? What’s the difference between compiler and interpreter?
Python uses interpreter to execute its scripts. The main difference between an interpreter and a compiler is, an interpreter translates one statement of the program to machine code at a time. Whereas, a compiler analyze the whole script and then translate it to machine code. For that reason the execution time of whole code executed by an interpreter is more than the code executed by compiler.
What is PEP 8?
Basically PEP 8 is a style guide for coding convention and suggestion. The main objective of PEP 8 is to make python code more readable.
What are the differences between Python 2.x and Python 3.x?”).
Why do you need to make your code more readable?
We need to make our code more readable so that other programmer can understand our code. Basically for a large project, many programmers work together. So, if the readability of the code is poor, it will be difficult for other to improve the code later.
How many Keywords are there in Python? And why should we know them?
There are 33 keywords in Python. We should know them to know about their use so that in our work we can utilize them. Another thing is, while naming a variable, the variable name cannot be matched with the keywords. So, we should know about all the keywords.
What are the built-in data-types in Python?
The built-in data-types of Python are
- Numbers
- Strings
- Tuples
- List
- Sets
- Dictionary
Among them, the first three are immutable and the rest are mutable. To know more, you can read our
Python Data Types tutorial.
How many types of operators Python has? Give brief idea about them
Python has five types of operators. They are
- Arithmetic Operators : This operators are used to do arithmetic operations
- Comparison Operators : This operators are used to do compare between two variables of same data-type.
- Bitwise Operators : This kind of operators are used to perform bitwise operation between two variable
- Logical Operators : This operators performs logical AND, OR, NOT operations among two expressions.
- Python Assignment Operators : This operators are used to perform both arithmetic and assignment operations altogether.
Read more at Python Operators tutorial.
What is the output of the following code and why?
a = 2 b = 3 c = 2 if a == c and b != a or b == c: print("if block: executed") c = 3 if c == 2: print("if block: not executed")
The output of the following code will be
if block: executed
This happens because logical AND operator has more precedence than logical OR operator. So a == c expression is true and b != a is also true. So, the result of logical AND operation is true. As one variable of OR operation is true. So the result of Logical operation is also true. And that why the statements under first if block executed. So the value of variable c changes from 2 to 3. And, As the value of C is not true. So the statement under second block doesn’t execute.
Write a program that can determine either the input year is a leap year or not
The following code will determine either the input year is a leap year or not.
try: print('Please enter year to check for leap year') year = int(input()) except ValueError: print('Please input a valid year') exit(1) if year % 400 == 0: print('Leap Year') elif year % 100 == 0: print('Not Leap Year') elif year % 4 == 0: print('Leap Year') else: print('Not Leap Year')
Below image shows the sample output of above program.
What should be the output of the following code and why?
a = 10 while a > 0: print(a) else: print('Now the value of a is ',a); break
The following code will result in SyntaxError. Because the break statement is not in a loop. It should be under the scope of a loop.
What is the statement that can be used in Python if the program requires no action but requires a statement syntactically?
Python pass statement can be used if the program requires no action but requires a statement syntactically. Python pass statement has no action. But it is a statement. Read more at python pass statement tutorial.
What are the advantages of Python Recursion?
Implementing something using Python recursion requires less effort. The code we write using recursion will be comparatively smaller than the code that is implemented by loops. Again, code that are written using recursion are easier to understand also.
What are the disadvantages of Python Recursion?
Python.
For examples, see our Python Recursion example.
What is lambda in python?
Python lambda is a single expression anonymous function which has no name. Therefore, we can use Python lambda for a small scope of program.
Why doesn’t Python lambda have any statement?
Python lambda doesn’t have any statement because statement does not return anything while an expression returns some value. The basic syntax of python lambda is
lambda arguments : expression
The value of the expression for those arguments is returned by Python lambda.
To know more with examples, read our Python Lambda tutorial.
What do you understand by Python Modules?
A file containing Python definitions and statements is called a python module. So naturally, the filename is the module name which is appended with the suffix .py.
A module print_number given, what will be the output of the following code?
# module name: print_number def printForward(n): #print 1 to n for i in range(n): print(i+1) def printBackwards(n): #print n to 1 for i in range(n): print(n-i)
from print_number import printForward as PF PF(5)
The output of the program will be like this.
1 2 3 4 5
Because PF refers the function printForward. So it passes the argument to the function and the result will be like given one.
Read our tutorial on Python modules to have clear idea on this.
What do you understand by Python Package?
Python package is a collection of modules in directories that give a package hierarchy. More elaborately, python packages are a way of structuring python’s module by using “dotted module names”. So A.B actually indicates that B is a sub module which is under a package named A.
What will be the output of the following code? Explain the output
print(10) print(0x10) print(0o10) print(0b10)
The output of the following code will be:
10 16 8 2
Because
0x10 is a hexadecimal value which decimal representation is 16. Similarly
0o10 is a octal value and
0b10 is a binary value.
Will this code output any error? Explain.
a = 3 + 4j
This will not produce any error. Because
3 + 4j is a complex number. Complex number is a valid data-type in Python.
Read more at Python Number tutorial for more details.
What will be the output of the following code?
def func(): try: return 1 finally: return 2 print(func())
The code will output 2. Because whatever statements the try block has, the finally block must execute. So it will return two.
What will be the output of the following code2? Explain
def func(): a = 2 try: a = 3 finally: return a return 10 print(func())
The code will output 3. As no error occurs, the try block will execute and the value a is changed from 2 to 3. As the return statement of
finally block works. The last line of the function will not execute. So the output will be 3, not 10.
What is namespace in Python?
Namespace is the naming system to avoid ambiguity and to make name uniques. Python’s namespace is implemented using Python Dictionary. That means, Python Namespace is basically a key-value pair. For a given key, there will be a value.
Why do we need Python Directories?
Suppose, you are making some a a file or read a file from that directory. To do so, Python has introduced this facility.
How to get current directory using Python?
To get current Directory in Python, we need to use
os module. Then, we can get the location of the current directory by using
getcwd() function. The following code will illustrate the idea
import os #we need to import this module print(os.getcwd()) #print the current location
To get more examples, see our tutorials on Python Directories.
Why Should We Use File Operation?
We cannot always rely on run-time input. For example, we are trying to solve some problem. But we can’t solve it at once. Also, the input dataset of that problem is huge and we need to test the dataset over and over again. In that case we can use Python File Operation. We can write the dataset in a text file and take input from that text file according to our need over and over again.
Again, if we have to reuse the output of our program, we can save that in a file. Then, after finishing our program, we can analysis the output of that program using another program. In these case we need Python File Operation. Hence we need Python File Operation.
How to close file? Why should we close files?
To.
To know more, see our tutorial on Python File.
What are python dictionaries?
Python dictionary is basically a sequence of key-value pair. This means, for each key, there should be a value. All keys are unique. We can initialize a dictionary closed by curly braces. Key and values are separated by semicolon and and the values are separated by comma.
What are the differences between del keyword and clear() function?
The difference between
del keyword and
clear() function is, del keyword remove one element at a time. But clear function removes all the elements. The syntax to use the
del keyword is:
del dictionary[‘key']
While the syntax for
clear() function is:
dictionary.clear()
To know more see our tutorial on Python Dictionary.
What is.
How will you convert a string to a set in python?
We can convert a string to a set in python by using
set() function. For examaple the following code will illustrate the idea
a = 'Peace' b = set(a) print(b)
What a blank curly brace initialize? A dictionary or a set?
Well, both Python Dictionary and Python Set requires curly braces to initialize. But a blank curly brace or curly brace with no element, creates a dictionary. To create a blank set, you have to use
set() function.
Explain split() and join() function.
As the name says, Python’s
split() function helps to split a string into substrings based on some reference sequence. For example, we can split Comma Separated Values(CSV) to a list. On the other hand,
join() function does exactly the opposite. Given a list of values you can make a comma separated values using join function.
What is Python Decorator?
Python.
What do you understand by Python Generator?
Python generator is one of the most useful and special python function ever. We can turn a function to behave as an iterator using python generator function. So, as similar to the iterator, we can call the next value return by generator function by simply using
next() function.
What do you understand by Python iterator and Iterable elements?
Most of the objects of Python are iterable. In python, all the sequences like Python String, Python List, Python Dictionary etc are iterable. On the other hand, an iterator is an object which is used to iterate through an iterable element.
What do you know about iterator protocol?
Python Iterator Protocol includes two functions. One is iter() and the other is next().
iter() function is used to create an iterator of an iterable element. And the
next()function is used to iterate to the next element.
What will be output of the following code? Explain())
The output to the given code will be Richard. The name when printed is ‘Richard’ instead of ‘John’..
Why do we need operator overloading?
We need Python Operator Overloading to compare between two objects. For example all kind of objects do not have specific operation what should be done if plus(+) operator is used in between two objects. This problem can be resolved by Python Operator Overloading. We can overload compare operator to compare between two objects of same class using python operator overloading.
What is the difference between tuples and lists in Python?
The main differences between lists and tuples are, Python List is mutable while Python Tuples is immutable. Again, Lists are enclosed in brackets and their elements and size can be changed, while tuples are enclosed in parentheses and cannot be updated.
How to compare two list?
Two compare we can use
cmp(a,b) function. This function take two lists as arguments as
a and
b. It returns -1 if a<b, 0 if a=b and 1 if a>b.
How can you sort a list?
We can sort a list by using
sort() function. By default a list is sorted in ascending order. The example is given
listA.sort()
How can you sort a list in reverse order?
We can sort a Python list in reverse order by using
sort() function while passing the value for key
’sorted’ as false. The following line will illustrate the idea.
listA.sort(reverse=True)
How will you remove all leading and trailing whitespace in string?
Removing all leading whitespace can be done my by using
rstrip() function. On the other hand, all trailing whitespace can be removed by using
lstrip() function. But there is another function by which the both operation can be done. That is,
strip() function.
How can you pick a random item from a list or tuple?
You can pick a random item from a list or tuple by using
random.choice(listName) function. And to use the function you have import
random module.
How will you toggle case for all letters in string?
To toggle case for all letters in string, we need to use
swapcase() Then the cases of all letters will be swapped.
In Python what is slicing?
Python slicing is the mechanism to select a range of items from a sequence like strings, list etc.
The basic syntax of of slicing is listObj[start:end+1], here the items from
start to
end will be selected.
How will you get a 10 digit zero-padded number from an original number?
We can get a 10 digit zero-padded number from an original number by using
rjust() function. The following code will illustrate the idea.
num = input('Enter a number : ') print('The zero-padded number is : ', str(num).rjust(10, '0'))
What is negative index in Python?
There are two type of index in python. Non-negative and negative. Index 0 addresses the first item, index 1 address the second item and so on. And for the negative indexing, -1 index addresses the last item, -2 index addresses the second last item and so on.
So, That’s all for python interview questions and answers. We wish you success on python interview. Best of Luck!
Very helpful Q-A article for beginner level.Well explained with examples.
this question is very light question. Most probably this type of question maynot come in interview | https://www.journaldev.com/15490/python-interview-questions | CC-MAIN-2019-39 | refinedweb | 3,219 | 68.67 |
I have a function that solves equations of a certain form without needing to know all about it in an attempt to keep it generic. This can be achieved by requiring all equations that need solving to specify args even if they don't use it, but I think this is a bit hacky. Is there a better way to handle this?
- Code: Select all
def dif_eq1(x, y, args):
''' We need to specify args even if we don't use it '''
return - 2 * y + x + 4
def dif_eq2(x, y, args):
a, b = args
return - 2 * y + x * a + 4 / b
def RK4(dydx, dx, x, y, args):
''' Arguments: (function to solve,
initial x value,
initial y value,
arguments required by function to solve)
This function should not have to know what args is '''
x = [x]
y = [y]
steps = 100
h = dx / steps
for n in xrange(steps):
k1 = h * dydx(x[n], y[n], args)
k2 = h * dydx(x[n] + h/2, y[n] + k1/2, args)
k3 = h * dydx(x[n] + h/2, y[n] + k2/2, args)
k4 = h * dydx(x[n] + h, y[n] + k3, args)
k = (k1 + 2 * k2 + 2 * k3 + k4) / 6
x.append(x[n] + h)
y.append(y[n] + k)
return x, y
''' Here we do need to know what args is '''
x1, y1 = RK4(dif_eq1, 2., 0, 1, None)
x2, y2 = RK4(dif_eq2, 2.5, 0, 1, (2, 3)) | http://www.python-forum.org/viewtopic.php?p=12843 | CC-MAIN-2014-15 | refinedweb | 239 | 78.93 |
Hello guys I want a suggestion on how to control outputs using the timer1 of the atmega16.
I was able to turn on the led for a second.I want to do a couple more things from this timer1:-
1. I want to light up the led for 8mins and then turn that off(long delay which is new for me), I was able to learn more about my problem on the internet people we saying that the Watchdog timer can do this job well (It can produce long delays fairly easily)If that is the case someone can explain that to me.
2.All this led blinking I want with a button(I hadn't used the interrupts as I don't require that milisecond response , a not so noticeable lag will also suffice). I was actually successful with the button but lost to the long delays .
3. Also please check that is my code for avoiding button debounce is correct or not ,with correct i mean that it works well but is it the right way to do this ??
Can someone advice me ?
Below is the simple code that I had written:-
#include <avr/io.h> #include <avr/interrupt.h> #include <util/delay.h> void timer1_init() { TCCR1B |= (1 << WGM12) | (1 << CS12); TCNT1 = 0; OCR1A = 31250; TIMSK |= (1 << OCIE1A); sei(); } ISR (TIMER1_COMPA_vect) { PORTD ^= (1<<PD6); /*I want this led to be ON for 8 mins when the button is pressed and then turn off until button is pressed again*/ } int main(void) { DDRD = 0xff; PORTD = 0x00; DDRA = 0x00; PORTA = 0xff; while(1) { if(!(PINA & (1<<PA0))) { timer1_init(); _delay_ms(200);//debounce delay } } }
Sorry I am just new here please don't be harsh on me for asking so many questions on a single thread.
Remember the multitasking tutorial? Your answer is there.
Your debouncing is poor. You have already been given the answer in your previous thread.
Top
- Log in or register to post comments
Set your timer to 100ms using ctc mode, then when the overflow flag is set, increment a long counter, 10 times is one second, so when the counter reached the desired time out value, do what is needed.
Jim
(Possum Lodge oath) Quando omni flunkus, moritati.
"I thought growing old would take longer"
Top
- Log in or register to post comments
Ok. I too firmly believe that the my answer is there but it's in a way hidden from me let me try to solve my problem using the multitasking code you have provided and I will be back with something, and again thanks for the help.
Thank you for your reply, let me test this.
anshumaan kumar
Top
- Log in or register to post comments
I didn't set the timer to 100ms I had set to count in 1 sec. Well I was a little bit confused on your answer then patiently read your answer a couple of times and then I have done this. This works flawlessly I required a delay of 3 minutes and this code fulfills that, please check this for any suggestions.
Thanks Jim I have now fully understood the concept you are talking about and I am happy about it and now I will try to implement it in the Multitasking code written by Kartman. But before that let me check my debouncing.
Very much pleasured to be in a kind and helping community.
anshumaan kumar
Top
- Log in or register to post comments | https://www.avrfreaks.net/comment/2807431 | CC-MAIN-2021-25 | refinedweb | 577 | 77.47 |
On Wed, Jun 27, 2012 at 07:47:43PM +0100, Ben Hutchings wrote: > > That said, there was this bug report saying "We'll RM celt anytime soon, > > so don't use it anymore" and that's it. > Surely you were aware that CELT was experimental and this was due > to happen? Yep. And upstream code is cluttered with #if CELT_0_5 #elif CELT_0_7 #else #endif so I appreciate the arrival of opus. > CELT never had a stable bitstream format, so this sort of codec > compatibility issue could occur even if all parties had some version > of it. > > It's now dead upstream, and support for any version of CELT would have > become less and less useful during the lifetime of wheezy. Nothing to argue with that, that's why I dropped CELT support immediately. Cheers -- mail: adi@thur.de PGP/GPG: key via keyserver | https://lists.debian.org/debian-devel/2012/06/msg01009.html | CC-MAIN-2016-44 | refinedweb | 143 | 71.55 |
In our last post we learned about writing data into a file,in this tutorial we shall learn how to read the data from the file.To read the data from the file we need to use the object of the class File streams.
There are different types of File Stream classes in java.Reading data from the file is same as getting the data.Therefore we need to use FileInputStream object to read the data.
The methods provided by the FileInputStream class are available(), read().The procedure for reading the data is, first we need to bind the text file to the object of the FileInputStream.This is done by
FileInputStream fis = new FileInputStream("Name_of_the_File");
After binding the file to the object we need to find the available data in the file using the method available().We invoke this method using the object to which the file is bounded. After knowing the amount of data present in the file, we need to read the data.Since the data is written in the bytes we need to typecast the data into char for displaying the data to the user.Now the entire program is
import java.util.*; import java.io.*; class fileread { public static void main(String arg[]) { try { FileInputStream fis = new FileInputStream("f1.txt"); int available = fos.available(); System.out.print("\n\n\t\t"); for(int i=0;i<available;i++) { System.out.print((char)fos.read()); } fos.close(); } catch(Exception e) { System.out.print(e); } } }
Pingback: Copying data from one file to other file using java | letusprogram...!!! | https://letusprogram.com/2014/01/11/java-program-to-read-the-data-from-the-file/ | CC-MAIN-2021-17 | refinedweb | 260 | 66.64 |
In addition to testing your Code Engine code in the UI, you can also test your code programmatically. Alooma supports all the testing experiences you would expect:
For this example, let's assume we have some mobile applications
sending data to Alooma, and we have an event type which contains a
field called
price. Most of the time,
price contains numeric values, but in some
versions of the mobile application
price values are sent as strings. Moreover, in
some rare cases, these strings are prefixed by a dollar sign. So,
possible values of
price include
0,
1,
1.1,
"1.1", and
"$1.1". If you map the
price field in the Mapper to a column of type float, the mapper will handle the
first four values, but it will fail to handle the last one,
"$1.1".
You come up with the following function to fix this issue:
def fix_numeric(num): if str(num).startswith('$'): return float(num[1:]) return float(num)
and you call it by using:
def transform(event): event['price'] = fix_numeric(event['price']) return event
Save your code (the functions
transform and
fix_numeric shown above), as a file called
my_transform.py. We now describe how to test the code
above.
Unit testing your Code Engine code means testing it locally on
your own computer before deploying it to Alooma. This should not be
any different from how you would test any other code, so feel free
to use your favorite testing tool. For this example we will use
basic
unittest functionality only.
Create another file called
my_tests.py with the following content:
import my_transform import unittest class TestCodeEngine(unittest.TestCase): def test_fix_numeric(self): self.assertEqual(0, my_transform.fix_numeric(0)) self.assertEqual(1, my_transform.fix_numeric(1)) self.assertEqual(1.1, my_transform.fix_numeric(1.1)) self.assertEqual(1.1, my_transform.fix_numeric('1.1')) self.assertEqual(1.1, my_transform.fix_numeric('$1.1'))
You can run the tests using:
python -m unittest my_tests
Which should print something like:
. -------------------- Ran 1 test in 0.000s OK
You can get sample events from your Alooma instance to be used in your unit tests. To do this, you need to install Alooma's Python API, which can be done by:
pip install alooma
Then, launch a Python interpreter, and run the following:
import alooma alooma_api = alooma.Client( '<your-username>', '<your-password>', '<your-account>') samples = alooma_api.get_samples( event_type='<event-type-you-wish-to-test>')
Alooma.get_samples, when used
without any parameters, as we have done here, simply returns 10
randomly selected events from your stream. Each event is
represented by a dictionary containing the following keys:
sample - a Python object
containing the event itself;
timestamp - the time in which this event was
sampled, in milliseconds since epoch;
status - the status of this event;
eventType - the event type of this
event.
Alooma.get_samples can also
receive 2 optional parameters:
event_type - a string containing a name of an
event type you wish to samples. If this parameter is provided, only
events of this event type will be returned in the
sample.
error_codes - a list of
strings representing status codes you wish to sample. The full list
of possible status codes can be retrieved by
Alooma.get_sample_status_codes().
Now
samples contains a list of
events that actually flowed through Alooma, which you can use in
your tests. For example:
import json json.dumps(samples[0]['sample'])
The above code would print the event as follows:
'{" } }'
You can use this serialized event in your unit test for the
transform function, by modifying
my_tests.py like this:
import my_transform import unittest import json class TestCodeEngine(unittest.TestCase): event_a = '{" } }' def test_transform(self): event = json.loads(self.event_a) transformed_event = my_transform.transform(event) self.assertEqual(20, transformed_event['price']) # add more assertions here...
There are 2 reasons for keeping a serialized version of the event in the test code (instead of getting a sample from Alooma when running the test):
It runs much faster, because you don't have to send an HTTP request to Alooma to get the event.
The
Alooma.get_samplesfunction may return different events every time you call it, because it updates its sample from events streaming through your Alooma system continuously. Therefore, if you change your code, run the tests on a new sample, and get a test failure, it will be difficult to determine the cause: It might be caused by the change in your code that introduced a regression, or it could be that a new event that came in caused the failure.
Even with the 2 reasons mentioned above, it is also a good idea to test your code against live samples. An example could run like this:
import alooma import my_transform import unittest class TestCodeEngineLiveData(unittest.TestCase): def test_on_live_data(self): alooma_api = alooma.Client( '<your-username>', '<your-password>', '<your-account>') samples = alooma_api.get_samples( event_type='<event-type-you-wish-to-test>') for sample in samples: try: my_transform.transform(sample['sample']) except Exception: self.fail( 'Failed on event "%s"\n%s' % ( sample['sample'], traceback.format_exc()))
Notice that this test does not assert anything about the output
of the
transform but only verifies
that it runs without raising exceptions. Also note that
Alooma.get_samples returns 10
sample events. To run on more events, you would have to introduce
another loop that will call
get_samples again.
Before deploying new code, it is recommended to run it in
Alooma. This can be done by reading the code as text, and
submitting it with a sample event to test on, to
Alooma.test_transform as follows:
import alooma import unittest class TestCodeEngineIntegration(unittest.TestCase): def test_on_alooma(self): with open('my_transform.py') as f: code = f.read() alooma_api = alooma.Client( '<your-username>', '<your-password>', '<your-account>') samples = alooma_api.get_samples( event_type='<event-type-you-wish-to-test>') for sample in samples: output = alooma_api.test_transform( sample['sample'], code) if 'errorMessage' in output: self.fail( 'Failed on event "%s"\n%s' % ( sample['sample'], output['errorMessage']))
Alooma.test_transform returns a
dictionary with the following keys:
output - strings printed to stdout by the code
while being executed;
result - the
resulting event;
runtime -
execution time of the code, in milliseconds.
errorMessage - an optional key that exists
only if an error occurred during the execution of the code. In this
test we only check the existence of
errorMessage in the returned dictionary. It is
also possible to make assertions regarding the structure of the
returned event (which can be accessed through
output['result']).
We'd love to hear your ideas for improving our testing infrastructure, and we're always here to answer your questions.
Happy testing!
Article is closed for comments. | https://support.alooma.com/hc/en-us/articles/360000679812-Testing-Your-Code-Programmatically | CC-MAIN-2018-17 | refinedweb | 1,089 | 56.55 |
This seems like it should be pretty trivial, but I am new at Python and want to do it the most Pythonic way.
I want to find the n'th occurrence of a substring in a string.
There's got to be something equivalent to what I WANT to do which is
mystring.find("substring", 2nd)
Mark's iterative approach would be the usual way, I think.
Here's an alternative with string-splitting, which can often be useful for finding-related processes:
def findnth(haystack, needle, n): parts= haystack.split(needle, n+1) if len(parts)<=n+1: return -1 return len(haystack)-len(parts[-1])-len(needle)
And here's a quick (and somewhat dirty, in that you have to choose some chaff that can't match the needle) one-liner:
'foo bar bar bar'.replace('bar', 'XXX', 1).find('bar') | https://codedump.io/share/sikYh0owSDwB/1/find-the-nth-occurrence-of-substring-in-a-string | CC-MAIN-2017-39 | refinedweb | 143 | 71.75 |
We introduced Rcpp 0.10.0 with a number of very nice new features a
few days ago, and the activity on the rcpp-devel mailing list has been pretty
responsive which is awesome.
But because few things beat a nice example, this post tries to build some more excitement. We will illustrate how Rcpp attributes
makes it really easy to add C++ code to R session, and that that code is as easy to grasp as R code.
Our motivating example is everybody’s favourite introduction to Monte Carlo simulation: estimating π. A common method uses the fact
the unit circle has a surface area equal to π. We draw two uniform random numbers
x and
y, each between zero
and one. We then check for the distance of the corresponding point
(x,y) relative to the origin. If less than one (or equal),
it is in the circle (or on it); if more than one it is outside. As the first quadrant is a quarter of a square of area one, the area of
the whole circle is π — so our first quadrant approximates π over four. The following figure, kindly borrowed from Wikipedia
with full attribution and credit, illustrates this:
Now, a vectorized version (drawing
N such pairs at once) of this approach is provided by the following R function.
piR <- function(N) { x <- runif(N) y <- runif(N) d <- sqrt(x^2 + y^2) return(4 * sum(d < 1.0) / N) }
And in C++ we can write almost exactly the same function thanks the Rcpp sugar vectorisation available via Rcpp:
#include <Rcpp.h> using namespace Rcpp; // [[Rcpp::export]] double piSugar(const int N) { RNGScope scope; // ensure RNG gets set/reset NumericVector x = runif(N); NumericVector y = runif(N); NumericVector d = sqrt(x*x + y*y); return 4.0 * sum(d < 1.0) / N; }
Sure, there are small differences: C++ is statically typed, R is not. We need one include file for declaration, and we need one instantiation
of the
RNGScope object to ensure random number draws remain coordinated between the calling R process and the C++ function
calling into its (compiled C-code based) random number generators. That way we even get the exact same draws for the same seed.
But the basic approach is identical: draw a vector
x and vector
y, compute the distance to the origin and then
obtain the proportion within the unit circle — which we scale by four. Same idea, same vectorised implementation in C++.
But the real key here is the one short line with the
[[Rcpp::export]] attribute. This is all it takes (along with
sourceCpp() from Rcpp 0.10.0) to get the C++ code into R.
The full example (which assumes the C++ file is saved as
piSugar.cpp in the same directory) is now:
#!/usr/bin/r library(Rcpp) library(rbenchmark) piR <- function(N) { x <- runif(N) y <- runif(N) d <- sqrt(x^2 + y^2) return(4 * sum(d < 1.0) / N) } sourceCpp("piSugar.cpp") N <- 1e6 set.seed(42) resR <- piR(N) set.seed(42) resCpp <- piSugar(N) ## important: check results are identical with RNG seeded stopifnot(identical(resR, resCpp)) res <- benchmark(piR(N), piSugar(N), order="relative") print(res[,1:4])
and it does a few things: set up the R function, source the C++ function (and presto: we have a callable C++ function just like that),
compute two simulations given the same seed and ensure they are in fact identical — and proceed to compare the timing in a benchmarking
exercise. That last aspect is not even that important — we end up being almost-but-not-quite twice as fast on my machine for different
values of
N.
The real takeaway here is the ease with which we can get a C++ function into R — and the new process completely takes care of passing
parameters in, results out, and does the compilation, linking and loading.
More details about Rcpp attributes are in the new vignette. Now enjoy the π.
Update:One somewhat bad typo fixed.
Update:Corrected one background... | http://www.r-bloggers.com/rcpp-attributes-a-simple-example-making-pi-2/ | CC-MAIN-2015-48 | refinedweb | 676 | 62.78 |
poll()
Multiplex input/output over a set of file descriptors
Synopsis:
#include <sys/poll.h> int poll( struct pollfd fds*, nfds_t nfds, int timeout );
Since:
BlackBerry 10.0.0.
Polling can interfere with the kernel's efforts to manage power usage. For more information, see the Tick, Tock: Understanding the Microkernel's Concept of Time chapter of the BlackBerry 10 OS Programmer's Guide.
The fds argument is an array of pollfd structures:
struct pollfd { int fd; short events; short revents; };
The members are:
- fd
- The file descriptor to be examined.
- events
- Flags that indicate the type of events to look for.
- revents
- Returned both POLLIN and POLLHUP events..>.
Returns:
Errors:
- EAGAIN
- The allocation of internal data structures failed, but a subsequent request may succeed.
- EINTR
- A signal was caught during poll().
- EFAULT
- The fds argument pointed to a nonexistent portion of the calling process's address space.
Examples:
; }
Classification:
Caveats:
Not all managers support POLLPRI, POLLWRBAND, POLLERR, and POLLHUP.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/poll.html | CC-MAIN-2017-26 | refinedweb | 181 | 59.19 |
?? 'new' and 'protected' Modifiers on Structs ??
- From: "Tom Baxter" <tlbaxter99@xxxxxxxxxxxxxxxx>
- Date: Tue, 30 Oct 2007 23:33:15 -0500
Hi everyone,
Has anyone looked at section 18.1.1 of the C# spec? It indicates 'new' and 'protected' are valid modifiers on struct declarations. First, how can 'protected' be valid on a struct, since structs cannot be inherited? The compiler gives an error (as I expect it should) if you try this:
protected struct MyStruct {}
so I'm wondering if the spec is wrong when it says 'protected' is a valid struct-modifier.
The second point about section 18.1.1 is that it indicates 'new' is an allowable modifier on a struct. Actually, 18.1.1 indicates, "The modifiers of a struct declaration have the same meaning as those of a class declaration", referring the reader to section 17.1.1 on class declaration modifiers.
I looked at section 17.1.1 and sure enough, it says:
"The new modifier is permitted on nested classes.
It specifies that the class hides an inherited member
by the same name, as described in section 17.2.2. It
is a compile-time error for the new modifier to appear
on a class declaration that is not a nested class declaration."
My testing indicates 'new' is not allowed on class nor struct declarations (nested or not). It seems this is how 'new' should be used as a class modifier, according to the spec (sec. 17.1.1):
class MyClass {
public virtual int M() { return 1; }
}
class Outer {
new class Inner : MyClass { // ERROR on 'new' modifier
public new int M() { return 2; }
}
}
The above code gives a warning on the declaration of Inner.
So, am I way off base here? Is there something wrong with sections 18.1.1 and 17.1.1?
Thanks
--
Tom Baxter
.
- Follow-Ups:
- Re: ?? 'new' and 'protected' Modifiers on Structs ??
- From: Marc Gravell
- Re: ?? 'new' and 'protected' Modifiers on Structs ??
- From: John B
- Prev by Date: Re: Getting namespace errors on compile
- Next by Date: Re: Getting namespace errors on compile
- Previous by thread: Getting namespace errors on compile
- Next by thread: Re: ?? 'new' and 'protected' Modifiers on Structs ??
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2007-10/msg04119.html | crawl-002 | refinedweb | 364 | 66.74 |
In this article, we are going to setup the environment and perform the basic test using multiple browsers. For the sake of simplicity we are going to use Firefox and Chrome browser. By default Firefox driver is shipped with the selenium installation. So you don’t need to install any specific Firefox driver for running tests on Firefox browser. However in case of chrome, you’re required to download official chrome webdriver. Let’s start with installation phase.
Installation
You need to install some of the following in order for executing tests with python.
- Python (3.5+ if possible). You can download the most recent version from the official website.
- Chrome WebDriver Installation. Watch the video – Chrome webdriver Setup on Windows.
- Selenium Webdriver.
- Text editor (Try Visual Studio Code or Sublime Text etc).
After you finish installation of Python and Chrome Driver, you need to install Selenium webdriver. We are going to use PIP to install Selenium. Use the following command to install selenium webdriver.
pip install selenium
This will take a minute or two to download all the dependencies and necessary webdriver files. After installation of webdriver package, we are going to write some code and execute our first test with firefox browser. Open code editor and type the following code in it. Save the file with name demo.py or any other name of your choice.
import time
from selenium import webdriver
driver = webdriver.Firefox()
time.sleep(15)
driver.quit()
Code explanation: First line in the code calls for time module. It is required to use the sleep method in our program. We need sleep method to pause the webdriver script for few seconds. After sleep method call, we are going to close the browser using quit().
In second line, we are calling selenium package to import webdriver class.
In third line, we are creating webdriver instance with driver variable.
In fourth line, we are using time.sleep() call to halt the execution for few seconds. You can specify your number in the method.
In fifth line, quite method is being called to close the browser after specific time interval.
Execution:
In order to execute the following code, you need to type the following in the command prompt.
python demo.py
Your filename here could be different. Just replace the name accordingly. When you execute this code you’d be able to use Firefox browser for the test.
Chrome Driver
If you wish to use the Chrome browser for the test then you code needs a bit of modification. Check the code below:
import time
from selenium import webdriver
driver = webdriver.Chrome()
time.sleep(15)
driver.quit()
If you check the third line of the code then you can see that chrome webdriver is called with Chrome() method. Do note that you need to have Chrome webdriver installed in order for this code to work properly.
Now that we have seen how to open a browser, we can move ahead with more specific actions. In case of website testing we’d be needing to do particular actions in order to pass or fail our tests. Here are some of such scenario.
- Open browser
- Go to website.
- Find an element.
- Click on that element.
- Verify the resulted URL.
- Close the browser.
When you create selenium test for above test scenario, you’d be needing to perform various actions using the webdriver. In the below code you can check out how to do just that.
import time
from selenium import webdriver
driver = webdriver.Chrome()
driver.get(‘’)
elm = driver.find_element_by_name(‘btnK’)
elm.click()
print(driver.current_url)
time.sleep(15)
driver.quit()
Code explanation: Here we used method find_element_by_name() to point the browser to this location. And also used click() method to click on the element. You can watch the video below to see how you can find and use the Element by ID.
Conclusion
This doesn’t stop here and we have plenty of advanced testing scenarios that we can execute with the Selenium webdriver. In next few tutorials, we’ll tackle some of the advanced testing scenarios. I hope the explanation here helps to understand Selenium WebDriver binding for Python. If you have any questions or feedback, then feel free to share that below comment form. | http://onecore.net/selenium-webdriver-for-python.htm | CC-MAIN-2017-09 | refinedweb | 706 | 68.06 |
How to Set Constraints on Kubernetes Resources.
Configure Constraints on Container Resources
The most basic resource metrics for a pod are CPU and memory.
Kubernetes provides requests and limits to pre-allocate resources and limit resource usage, respectively.
Limits restrict the resource usage of a pod as follows:
- If its memory usage exceeds the memory limit, this pod is out of memory (OOM) killed.
- If its CPU usage exceeds the CPU limit, this pod is not killed, but its CPU usage is restricted to the limit.
Testing the Memory Limit
Testing the CPU Limit
Container QoS:
- Guaranteed: Limits and requests are set for all containers in a pod. Each limit is equal to the corresponding request. If a limit is set but the corresponding request is not set, the request is automatically set to the limit value.
- Burstable: Limits are not set for certain containers in a pod, or certain limits are not equal to the corresponding requests. During node scheduling, this type of pod may overclock nodes.
- BestEffort: Limits and requests are not set for any containers in a pod.
Code for querying QoS:
Impact of Different QoS Levels on Containers
O.
.
Source Coderequest:
Pod Eviction
If the memory and CPU resources of a node are insufficient and this node starts to evict its pods, the QoS level also affects the eviction priority as follows:
- The kubelet preferentially evicts pods whose QoS level is BestEffort and pods whose QoS level is Burstable with resource usage larger than preset requests.
- Then, the kubelet evicts pods whose QoS level is Burstable with resource usage smaller than preset requests.
- At last, the kubelet evicts pods whose QoS level is Guaranteed. The kubelet preferentially prevents pods whose QoS level is Guaranteed from being evicted due to resource consumption of other pods.
- If pods have the same QoS level, the kubelet determines the eviction priority based on the pod priority.
ResourceQuota
Kubernetes provides the ResourceQuota object to set constraints on the number of Kubernetes objects by type and the amount of resources (CPU and memory) in a namespace.
- One or more ResourceQuota objects can be created in a namespace.
- If the ResourceQuota object is configured in a namespace, requests and limits must be set during deployment; otherwise, pod creation is rejected.
- To avoid this problem, the LimitRange object can be used to set the default requests and limits for each pod.
- For more information about extended resources supported in versions later than Kubernetes V1.10, see
apiVersion: v1
kind: ResourceQuota
metadata:
name: mem-cpu-demo
namespace: example
spec:
hard:
requests.cpu: "3"
requests.memory: 1Gi
limits.cpu: "5"
limits.memory: 2Gi
pods: "5"
LimitRange:
- default: indicates default limits.
- defaultRequest: indicates default requests.
- max: indicates maximum limits.
- min: indicates minimum requests.
- maxLimitRequestRatio: indicates the maximum ratio of a limit to a request. Because a node schedules resources based on pod requests, resources can be oversold. The maxLimitRequestRatio parameter indicates the maximum oversold ratio of pod resources.
Summary.
References
-
-
To learn more about Alibaba Cloud Container Service for Kubernetes, visit
Reference: | https://alibaba-cloud.medium.com/how-to-set-constraints-on-kubernetes-resources-96fde5734f2c | CC-MAIN-2021-10 | refinedweb | 505 | 56.66 |
Hi uses human-readable descriptions of software user requirements as the basis for software tests.
BDD approach helps us to define the tests in common or shared vocabulary between stakeholders, domain experts, and of course, the engineers involved. A simple given, when and then vocabulary helps to make our test more readable. And due to this reason, even a non-technical person can get an idea of the testing that is being carried out.
Let me give you an example so that you can get an idea of why BDD is simple to understand.
I’ve written a test where we are navigating through a site, and we are searching its blog section to search for cypress’s blogs. The screenshot below shows the test script for the same scenario, written as a regular javascript test in cypress.
With just one glance, one may assert that it will be difficult for a non-technical person to understand the test script. However, if we convert this to BDD standards, it will look much simpler and very much readable. As depicted in the screenshot below.
The screenshot above is the corresponding BDD code for the test script mentioned earlier. As you can see, the line of code reduced drastically. And it is much readable now.
I hope, now we know why to opt BDD framework. Moving on, let’s see how we can achieve this in cypress.
Steps to integrate cucumber with Cypress
To integrate cucumber we would have to make use of an external cypress-cucumber plugin in our project. Please follow the following steps to install and integrate the plugin.
- Install the cucumber plugin by running the following command
npm install --save-dev cypress-cucumber-preprocessor
This will fetch the latest version of this plugin that is present on the npm.
- After installing the plugin, add this code snippet in the index.js file under the plugins folder.
const cucumber = require('cypress-cucumber-preprocessor').default module.exports = (on, config) => { on('file:preprocessor', cucumber()) }
This will make this plugin recognizable to cypress.
Until now, we were writing test in a file with .js extension. But with the introduction of cucumber thing changes, according to the cucumber standards, we would have to use .feature file to write the test cases. But cypress only recognises .js files until now, a question arises here that how are we going to run the test that we have written in feature file with gherkins syntax.
- What we have to do is, we would have explicitly tell cypress to run the .feature file extension as well. For this, we have to add this line mentioned below in our cypress.json file.
{ "testFiles": "**/*.feature" }
Please note that this will make cypress ignore the .js extension in your test runner. If you want both the extension to be recognized by the cypress, add the following in your cypress.json file instead of the one mentioned above.
{ "testFiles": "**/*.{feature,js}" }
- After that, we also have to add the following in the package.json file.
"cypress-cucumber-preprocessor": { "nonGlobalStepDefinitions": true }
This will help cypress to convert the mocha test to a feature file having gherkins syntax.
- And lastly, we need to add an extension to our IDE so that it can recognise the .feauture extension. In our case, we have used Visual Code Studio. you can install the plugin in it by
- pess ctrl+P
- type ‘ext install alexkrechik.cucumberautocomplete’
- press enter
And the cucumber extension will get configured for our IDE.
Converting test script in Gherkins syntax
For cypress, we can write our test in Javascript only. However, since we are using cucumber with cypress, we need to convert the Java Script test to Gherkins. Gherkins consist mainly of four main keywords, ‘Given’, ‘When’, ‘And’ and ‘Then’. In cypress, we are going to write our test in Javascipt and then we will link these Javascript test to these Gherkins keywords as a step definition. For example. consider a scenario we are simply visiting a website.
To visit a website in cypress, we use the cy.visit() command. A corresponding Gherkin command of this can be defined with the ‘Given’ keyword. We are going to link our cy.visit() command to the Given keyword of Gherkins.
To understand better, please have a look at the Ghekinks and JavaScript code below.
Given We visit knoldus Website
We have to link this Given command to javaScript test, so that cypress can understand it.
Given("We visit knoldus Website", function(){ cy.visit("") })
Please note that the description given in the feature file with the Given keyword should be the same when defining the step definition. The corresponding JavaScript code of Gherkins should always be defined in .js file. And this step definition .js file should be placed in the folder having the identical name as of the feature file.
For example, if we a file Test1.feature for writing our BDD cucumber test then, the corresponding step definition javaScript file should be placed under the folder having the name Test1 only. I hope I was able to clear it out.
Implementation
The approach has defined already in the ealier section. To understand better, let’s have a look at the code snippets.
import { Given,When,Then, And } from "cypress-cucumber-preprocessor/steps"; // have to import these, so that cypress can recognise cucumber keywords Given("We visit knoldus Website", function(){ // we can make an anonymus function as well here, use "()=>" instead of function() cy.visit("") }) When("We click on blogs and search for cypress",()=>{ cy.contains("Blog").invoke("removeAttr","target").click({force: true}) // using {force: true} as the element is being overlapped by some other webElement // we have used here, .invoke, it enables cypress to use jquery functions and to remove attribute we used removeAttr() which is a jquery function. cy.wait(1000) cy.get("#tophead > div > a").click() // to click on search option cy.wait(1000) cy.get("input[placeholder='Search …']").type("cypress{enter}") // to search for 'cypress' and then pressing enter. }) And("Open the blog Cypress-101",function(){ cy.contains("Cypress – 101").click() // to click on the desired blog }) Then("assert",()=>{ cy.url().should("include","") // to assert that we succesfully move to the redirected url })
The code above is the step defination for the feature file mentioned below.
Feature: Sample feature file for the demo Here, you can add some description for the feature file. Please note that before moving to the next statement, press enter and then tab button. Scenario: Knoldus Website Given We visit knoldus Website When We click on blogs and search for cypress And Open the blog Cypress-101 Then assert
For your refrence, this is the folder structure that I’ve used.
In this project, we have a BDD folder under the default integration folder. Inside the BDD folder, we have a sampleTest.feature file where we have defined our test as per the cucumber standards. As mentioned earlier, the corresponding step defination definition file is under the sampleTest folder. The name of the feature file and the name of the folder that contains the step definition javaScript file should always be the same.
That’s all folks. I hope you may have found this usefiul. Thanks! | https://blog.knoldus.com/cypress-with-cucumber/ | CC-MAIN-2021-31 | refinedweb | 1,199 | 65.83 |
By far the most commonly used functionality of the STL library are the STL container classes. If you need a quick refresher on container classes, check out lesson 10.6 -- Container classes..
The following program inserts 6 numbers into a vector and uses the overloaded [] operator to access them in order to print them.
This program produces the result:
10 9 8 7 6 5
This program produces the result:
8 9 10 0 1 2
We’ll talk more about both linked lists and iterators in future lessons.
Associative Containers
Associative containers are containers that automatically sort their inputs when those inputs are inserted into the container. By default, associative containers compare elements using operator<.
Container Adapters
Container adapters are special predefined containers that are adapted to specific uses. The interesting part about container adapters is that you can choose which sequence container you want them to use.
Why aren't the code example for Sequence Containers properly indented and shown inside of "
?" Even 16.3--STL iterators overview lacks it.
In the section on sequence containers, why don’t you have bullets with information/examples for array and forward_list ? Or at least give them a brief mention like you did for vector, deque and list.
And why do you mention “STL strings“ but not “basic_string“ in the final bullet ?
This just feels “mysterious”, and I guess (short of reading lesson by lesson or going back to the Index), that I’m missing something ...
What further reading is recommended for the STL containers? I'm trying to read Effective STL by Scott Myers since that's STL focused but it looks a lot like I should get to know the container classes and iterators he's talking about before doing that so I'd like to know what fills the gap here. I want more than a reference since that doesn't tell me when to choose which and how they defer exactly. I'd love to have a look at all the "...Effective..." books by him since they seem to give sound advice to intermediate to advanced programmers but I'm not sure whether the older one are still mostly applicable.
Alan Talbot's talk at cppcon 2019 (available on youtube) should give you a good overview.
I'm gradually getting to know the STL. However, I'm finding std::ref confusing.
Since std::ref is often used with std::bind, I made a simple version of std::bind() which I've called bond() better to understand how std::ref() works:
I know what std::ref does, but I don't know how it works. To demonstrate what I mean:
On the other hand:
This is, of course, exactly how std::ref is supposed to work. But how does it do this exactly? From https ://en.cppreference.com/w/cpp/utility/functional/reference_wrapper, I can see that the object returned by std::ref is a std::reference_wrapper, which stores its internal object as a pointer. This makes sense, since pointers are ultimately the mechanism by which references are implemented, and I've somehow got to have access to the address of the internal object to achieve the desired behaviour. So, to begin with, I believe that a std::reference_wrapper object is copy-constructed (i.e. by value) through the bond() helper function into member variable bond::arg_ of the bond_t object.
If the object is *not* a std::ref, what happens next is simple. Let's say that it's an int with value 5, as in the example code. The line 'return func_(arg_);' is executed, passing the int to the function stored in the function pointer, and '5' is printed. But, if the object is a std::ref, how can I make a deferenced pointer to an internal object be printed? Because this is what needs to happen for the '7' to be printed. Perhaps it's an overload? But what overload and where?
Thanks :)
`std::ref` isn't a type, it's a function that returns an `std::reference_wrapper`.
`std::reference_wrapper<T>` has a conversion operator to `T&` ( ) which gets used when you call `print`.
Oops, that was a typo calling std::ref an object. Yes, the object returned is the std::reference_wrapper object. I have not seen a conversion operator to T& before.
So am I correct in saying that it is specifically the 'int&' in the 'void print(int& n)' function that triggers the use of the std::reference_wrapper's T& overload? Or put another way, it's the fact that the function pointer 'S(*func_)(T&)' has the 'T&'?
The conversion operator gets used when you call the function in line 11
At the risk of sounding like an annoying child who keeps asking 'why?', can I ask why the conversion operator & gets used when one calls the function in line 11? :)
I've been experimenting, and I'm surprised to see that the result is the same whether a reference is used as the argument in the function pointer or not. I.e. this works the same:
So when you say that the conversion operator get used, I'm just not seeing how. I thought perhaps that the copy constructor and assignment operator had been altered to use a reference, but that isn't it I guess. I'm lost!
All of these are fine. When the types don't match, eg. `std::reference_wrapper<int>` and `int`, your compiler will try to convert them. `std::reference_wrapper<int>` can be converted to `int&` and `int&` can be converted to `int`.
> can I ask why the conversion operator & gets used when one calls the function in line 11?
Yes go ahead.
> // S = the return type, T = the class type
Template names don't have to be single letters.
But.. but..
When a std::reference_wrapper<int> object is passed into a @bond_t object, how does the compiler know to pass the @int, held as a pointer inside the @bond_t object, to @func_ as its @arg_?
And I know your answer will be:
>`std::reference_wrapper<T>` has a conversion operator to `T&` ( ) which gets used when you call `print`.
From that site, I see there's this:
operator T& () const noexcept;
This.. erm.. how does this work? I'm sorry, but I can't see how this grabs the pointer, dereferences it, and passes it to @func_.
EDIT:
So perhaps it's defined like this:
If that's the case, then I'm just having problems visualising what a T& operator is. Something like a ++ operator, on the other hand, is easy to visualise.
Give lesson 9.10 ( ) another read, seems like this is what's giving you trouble.
Please edit comments instead of deleting and re-posting them. Syntax highlighting works after refreshing the page.
I'm inserting an edit here, since it makes sense to the flow. I meant:
Finally I understand:) Thank you so much for your patience. Yes, I did not see that it was a cast overload. Anyway, I wrote my own version of std::ref<T> and std::reference_wrapper<T> to test it out, and yes it works!:
I'm glad you figured it out :)
Coming up with own implementations of standard function really helps to understand them, keep it up!
Which category does std::pair belong to ?
std::pair isn't considered a container in this context. It's just a templated data type that holds 2 values.
Ok, thanks
why you don't include "using namespace std" in any of your code ? Is there any specific reason or you are used to write code like this.
See
thanks man now i get it.
Hi Alex,
Forgive me if this is a somewhat silly question: In the vector and deque code samples, why have you used variable names like "nCount" and "nIndex" instead of "count" and "index" respectively?
Old naming convention that is no longer recommended. This lesson hasn't been updated yet to be compliant with modern naming standards.
I hope this isn't too trivial, however the link at the top that says,"10.4 -- Container Classes" I think is supposed to say, "10.6 -- Container Classes".
Nope, not too trivial. Fixed! Thank you for pointing out the error.
In the Associative Containers section I am missing the std::unordered_map, sometimes called hashmap. Is there a reason you left that one out? I think it is quite important in many cases.
std::unordered_map was defined in C++11, and this lesson hasn't been rewritten to be C++11 compliant yet. I'm intending to do a whole chapter on containers soon, so this lesson will probably disappear and get replaced by a much weightier discussion.
Hi Alex!
which is prefered to use vector or deque? and why?
You should use whichever one is more appropriate for the problem you are trying to solve. I'm intending to rewrite this lesson some point soon to talk more about the pros and cons of the various containers.
hello Alex,
As we know Sequence containers are container classes that maintain the ordering of elements in the container in a continuous dynamic array but how come list can be called a sequence container.
To be a sequence container, a class must allow the user to access elements in sequential order without recursion. Linked lists meet this criteria by allowing you to start from the head and walk through each node in sequence until you reach the tail.
I want to know more about iterators In c++
What about std::array? Isn't that a sequence container too?
Yes, as is std::forward_array, both added in C++11. I've updated the article to account for this.
#typo "Associative contains are containers that automatically" -> containers are containers
Fixed. Thanks!
Hi Alex, a typo: "containerss" --> "containers"
thanks Alex;
Hi Alex :
Now :
I only have problems in the derived class with a Boolean variable
run online :
Aah, I see. The problem is that you've declared a vector of base. When you push_back a derived, the base portion of derived is copied into your vector. So when you call it->print(), it's calling the base version because there is no derived portion of the class.
You can solve this by making the vector a vector of base* instead of base.
please help , these things I can't understand :
_ why there is a semicolon after the derived constructor
_ how can u push_back a base* to normal base vector
and thank's
1) It's extraneous.
2) He dereferenced the pointer before pushing it onto the vector.
Could you please elaborate your response?
I understood what he is asking, but your solution to the problem isn't clear to me.
Thanks
Ah, yes, I think I answered a question that he wasn't asking. :)
Rather than answer this question here, let me direct you to the lesson on object slicing which discusses the underlying causes of exactly this case in more detail (because it's a common mistake), and includes solutions to the issue.
Hi alex
1. Why not return bool
2. Will be more optimized code comment
I don't understand your question.
Hi Alex
I ask of you to compile code
If memory allocation is done properly then why not set a boolean variable in derived class
I still don't understand what you're asking. You're referring to this line of code:
One of two things will happen here. Either:
1) The derived class will be allocated, and the address assigned to base pointer p. If you get a valid address for p, you know the derived class allocated properly.
2) The system will be out of memory, or the allocation of the derived class will fail for some other reason (e.g. the constructor threw an exception), in which case an exception will be thrown for you to handle.
If P exists, it was created correctly. So why would it need a boolean to track whether it was allocated properly?
Hi Alex
Is it possible to build a vector classes several different types of data?
Are you asking whether you can build a single std::vector that can hold more than one data type? The answer is yes, sort of, but not directly. There are a few ways to do this:
1) Build a std::vector of a struct, class, or union that contains multiple types.
2) Build a std::vector of a pointer to a base class, and then insert objects of derived classes into the vector.
There may be others.
Hi Alex,
Kudos to your efforts in maintaining this site. Bless you.
A special request:
Can you develop lectures for using data structures and algorithms in C++? I am a beginner C++ programmer and our course teacher has asked us to use STL for algorithms course.
It's on my to do list, but it's unlikely I'll get to it before you finish your course. :(
Very beautifully written. I always refer this website for c++. I can not see linked list lesson. Can anyone tell me if it is explained in detail because in this lesson it is written that we will talk more about linked list in future lessons.
Thanks.
The lesson on linked lists has unfortunately not been written yet.
VERY GOOD
Not needed to change int nCount to some other type.
It is just to iterate the loop.
For the vector program to compile correctly, I believe this:
int nCount
has to be changed to:
size_t nCount
or
unsigned int nCount
Is that true?
A super site....thanks
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/16-2-stl-containers-overview/ | CC-MAIN-2020-29 | refinedweb | 2,287 | 73.37 |
Code Style
After reading this article, you’ll know:
- Why it’s a good idea to have consistent code style
- Which style guide we recommend for JavaScript code
- How to set up ESLint to check code style automatically
- Style suggestions for Meteor-specific patterns, such as Methods, publications, and more
Benefits of consistent style
Countless hours have been spent by developers throughout the years arguing over single vs. double quotes, where to put brackets, how many spaces to type, and all kinds of other cosmetic code style questions. These are all questions that have at best a tangential relationship to code quality, but are very easy to have opinions about because they are so visual.
While it’s not necessarily important whether your code base uses single or double quotes for string literals, there are huge benefits to making that decision once and having it be consistent across your organization. These benefits also apply to the Meteor and JavaScript development communities as a whole.
Easy to read code
The same way that you don’t read English sentences one word at a time, you don’t read code one token at a time. Mostly you just look at the shape of a certain expression, or the way it highlights in your editor, and assume what it does. If the style of every bit of code is consistent, that ensures that bits of code that look the same actually are the same - there isn’t any hidden punctuation or gotchas that you don’t expect, so you can focus on understanding the logic instead of the symbols. One example of this is indentation - while in JavaScript, indentation is not meaningful, it’s helpful to have all of your code consistently indented so that you don’t need to read all of the brackets in detail to see what is going on.
Automatic error checking
Having a consistent style means that it’s easier to adopt standard tools for error checking. For example, if you adopt a convention that you must always use
let or
const instead of
var, you can now use a tool to ensure all of your variables are scoped the way you expect. That means you can avoid bugs where variables act in unexpected ways. Also, by enforcing that all variables are declared before use, you can easily catch typos before even running any code!
Deeper understanding
It’s hard to learn everything about a programming language at once. For example, programmers new to JavaScript often struggle with the
var keyword and function scope. Using a community-recommended coding style with automatic linting can warn you about these pitfalls proactively. This means you can jump right into coding without learning about all of the edge cases of JavaScript ahead of time.
As you write more code and come up against the recommended style rules, you can take that as an opportunity to learn more about your programming language and how different people prefer to use it.
JavaScript style guide
Here at Meteor, we strongly believe that JavaScript is the best language to build web applications, for a variety of reasons. JavaScript is constantly improving, and the standards around ES2015 have really brought together the JavaScript community. Here are our recommendations about how to use ES2015 JavaScript in your app today.
An example of refactoring from JavaScript to ES2015
Use the
ecmascript package
ECMAScript, the language standard on which every browser’s JavaScript implementation is based, has moved to yearly standards releases. The newest complete standard is ES2015, which includes some long-awaited and very significant improvements to the JavaScript language. Meteor’s
ecmascript package compiles this standard down to regular JavaScript that all browsers can understand using the popular Babel compiler. It’s fully backwards compatible to “regular” JavaScript, so you don’t have to use any new features if you don’t want to. We’ve put a lot of effort into making advanced browser features like source maps work great with this package, so that you can debug your code using your favorite developer tools without having to see any of the compiled output.
The
ecmascript package is included in all new apps and packages by default, and compiles all files with the
.js file extension automatically. See the list of all ES2015 features supported by the ecmascript package.
To get the full experience, you should also use the
es5-shim package which is included in all new apps by default. This means you can rely on runtime features like
Array#forEach without worrying about which browsers support them.
All of the code samples in this guide and future Meteor tutorials will use all of the new ES2015 features. You can also read more about ES2015 and how to get started with it on the Meteor Blog:
Follow a JavaScript style guide
We recommend choosing and sticking to a JavaScript style guide and enforcing it with tools. A popular option that we recommend is the Airbnb style guide with the ES6 extensions (and optionally React extensions).
Check your code with ESLint
“Code linting” is the process of automatically checking your code for common errors or style problems. For example, ESLint can determine if you have made a typo in a variable name, or some part of your code is unreachable because of a poorly written
if condition.
We recommend using the Airbnb eslint configuration which verifies the Airbnb styleguide.
Below, you can find directions for setting up automatic linting at many different stages of development. In general, you want to run the linter as often as possible, because it’s the fastest and easiest way to identify typos and small errors.
Installing and running ESLint
To setup ESLint in your application, you can install the following npm packages:
Meteor comes with npm bundled so that you can type meteor npm without worrying about installing it yourself. If you like, you can also use a globally installed npm command.
You can also add a
eslintConfig section to your
package.json to specify that you’d like to use the Airbnb config, and to enable ESLint-plugin-Meteor. You can also setup any extra rules you want to change, as well as adding a lint npm command:
To run the linter, you can now simply type:
For more details, read the Getting Started directions from the ESLint website.
Integrating with your editor
Linting is the fastest way to find potential bugs in your code. Running a linter is usually faster than running your app or your unit tests, so it’s a good idea to run it all the time. Setting up linting in your editor can seem annoying at first since it will complain often when you save poorly-formatted code, but over time you’ll develop the muscle memory to just write well-formatted code in the first place. Here are some directions for setting up ESLint in different editors:
Sublime Text
You can install the Sublime Text packages that integrate them into the text editor. It’s generally recommended to use Package Control to add these packages. If you already have that setup, you can just add the these packages by name; if not, click the instructions links:
- Babel (for syntax highlighting – full instructions)
- SublimeLinter (full instructions)
- SublimeLinter-contrib-eslint (full instructions)
To get proper syntax highlighting, go to a .js file, then select the following through the View dropdown menu: Syntax -> Open all with current extension as… -> Babel -> JavaScript (Babel). If you are using React .jsx files, do the same from a .jsx file. If it’s working, you will see “JavaScript (Babel)” in the lower right hand corner of the window when you are on one of these files. Refer to the package readme for information on compatible color schemes.
A side note for Emmet users: You can use \
Atom
Using ESLint with Atom is simple. Just install these three packages:
Then restart (or reload by pressing Ctrl+Alt+R / Cmd+Opt+R) Atom to activate linting.
WebStorm
WebStorm provides these instructions for using ESLint. After you install the ESLint Node packages and set up your
package.json, just enable ESLint and click “Apply”. You can configure how WebStorm should find your
.eslintrc file, but on my machine it worked without any changes. It also automatically suggested switching to “JSX Harmony” syntax highlighting.
Linting can be activated on WebStorm on a project-by-project basis, or you can set ESLint as a default under Editor > Inspections, choosing the Default profile, checking “ESLint”, and applying.
Visual Studio Code
Using ESLint in VS Code requires installation of the 3rd party ESLint extension. In order to install the extension, follow these steps:
- Launch VS Code and open the quick open menu by typing
Ctrl+P
- Paste
ext install vscode-eslintin the command window and press
Enter
- Restart VS Code
Meteor code style
The section above talked about JavaScript code in general - you can easily apply it in any JavaScript application, not just with Meteor apps. However, there are some style questions that are Meteor-specific, in particular how to name and structure all of the different components of your app.
Collections
Collections should be named as a plural noun, in PascalCase. The name of the collection in the database (the first argument to the collection constructor) should be the same as the name of the JavaScript symbol.
Fields in the database should be camelCased just like your JavaScript variable names.
Methods and publications
Method and publication names should be camelCased, and namespaced to the module they are in:
Note that this code sample uses the ValidatedMethod package recommended in the Methods article. If you aren’t using that package, you can use the name as the property passed to
Meteor.methods.
Here’s how this naming convention looks when applied to a publication:
Files, exports, and packages
You should use the ES2015
import and
export features to manage your code. This will let you better understand the dependencies between different parts of your code, and it will be easy to know where to look if you need to read the source code of a dependency.
Each file in your app should represent one logical module. Avoid having catch-all utility modules that export a variety of unrelated functions and symbols. Often, this can mean that it’s good to have one class, UI component, or collection per file, but there are cases where it is OK to make an exception, for example if you have a UI component with a small sub-component that isn’t used outside of that file.
When a file represents a single class or UI component, the file should be named the same as the thing it defines, with the same capitalization. So if you have a file that exports a class:
This class should be defined inside a file called
ClickCounter.js. When you import it, it’ll look like this:
Note that imports use relative paths, and include the file extension at the end of the file name.
For Atmosphere packages, as the older pre-1.3
api.export syntax allowed more than one export per package, you’ll tend to see non-default exports used for symbols. For instance:
Templates and components
Since Spacebars templates are always global, can’t be imported and exported as modules, and need to have names that are completely unique across the whole app, we recommend naming your Blaze templates with the full path to the namespace, separated by underscores. Underscores are a great choice in this case because then you can easily type the name of the template as one symbol in JavaScript.
If this template is a “smart” component that loads server data and accesses the router, append
_page to the name:
Often when you are dealing with templates or UI components, you’ll have several closely coupled files to manage. They could be two or more of HTML, CSS, and JavaScript files. In this case, we recommend putting these together in the same directory with the same name:
The whole directory or path should indicate that these templates are related to the
Lists module, so it’s not necessary to reproduce that information in the file name. Read more about directory structure below.
If you are writing your UI in React, you don’t need to use the underscore-split names because you can import and export your components using the JavaScript module system. | https://guide.meteor.com/code-style.html | CC-MAIN-2018-09 | refinedweb | 2,074 | 57 |
Objects are a powerful software engineering construct, and Java uses them extensively. In fact, it encourages the use of objects so much that developers sometimes forget the costs behind the construct. The result can be object churn, a program state in which most of your processor time is soaked up by repeatedly creating and then garbage collecting objects.
Java performance programming: Read the whole series!
- Part 1. Learn how to reduce program overhead and improve performance by controlling object creation and garbage collection
- Part 2. Reduce overhead and execution errors through type-safe code
- Part 3. See how collectionsalternatives measure up in performance, and find out how to get the most out of each type
This is the first in a series of articles focused on performance issues in Java. In this series, we'll examine a number of areas in which Java performance can be less than ideal, and provide techniques for bypassing many of these performance roadblocks. Actual timing measurements will be used throughout to demonstrate the performance improvements possible with the right coding techniques. memory management.
The developer generally doesn't need to be directly involved in this garbage collection process. Objects drop out of the reachable set and become eligible for recycling as they're replaced with other objects, or as methods return and their variables are dropped from the calling thread's stack. The JVM runs garbage collection periodically, either when it can, because the program threads are waiting for some external event, or when it needs to, because it's run out of memory for creating new objects. Despite the automatic nature of the process, it's important to understand that it's going on, because it can be a significant part of the overhead of Java programs.
Besides the time overhead of garbage collection, there's also a significant space overhead for objects in Java. The JVM adds internal information to each allocated object to help in the garbage collection process. It also adds other information required by the Java language definition, which is needed in order to implement such features as the ability to synchronize on any object. When the storage used internally by the JVM for each object is included in the size of the object, small objects may be substantially larger than their C/C++ counterparts. Table 1 shows the user-accessible content size and actual object memory size measurements for several simple objects on various JVMs, illustrating the memory overhead added by the JVMs.
This space overhead is a per object value, so the percentage of overhead decreases with larger objects. It can lead to some unpleasant surprises when you're working with large numbers of small objects, though -- a program juggling a million
Integers will have most systems down on their knees, for example!
Comparison with C/C++
For most operations, Java performance is now within a few percent of C/C++. The just-in-time (JIT) compilers included with most JVMs convert Java byte codes to native code with amazing efficiency, and in the latest generation (represented by IBM's JVM and Sun's HotSpot) they're showing the potential to start beating C/C++ performance for computational (CPU intensive) applications.
However, Java performance can suffer by comparison with C/C++ when many objects are being created and discarded. This is due to several factors, including initialization time for the added overhead information, garbage collection time, and structural differences between the languages. Table 2 shows the impact these factors can have on program performance, comparing C/C++ and Java versions of code repeatedly allocating and freeing arrays of byte values.
For both short- and long-term allocations, the C++ program is considerably faster than the Java program running on any JVM. Short-term allocations have been one focus area for optimization in HotSpot. Results show that -- with the Server 2.0 beta used in this test -- this is the closest any JVM comes to the C++ code, with a 50 percent longer test time. For long-term allocations, the IBM JVM gives better performance than the HotSpot JVM, but both trail far behind the performance of the C++ code for this type of operation.
Even the relatively good performance of HotSpot on short-term allocations is not necessarily a cause for joy. In general, C++ programs tend to allocate short-lived objects on the stack, which would give a lower overhead than the explicit allocation and deallocation used in this test. C++ also has a big advantage in the way it allocates composite objects, using a single block of memory for the combined entity. In Java, each object needs to be allocated by its own block.
We'll certainly see more performance improvements for object allocation as vendors continue to work on their VMs. Given the above advantages, though, it seems unlikely the performance will ever match C++ in this area.
Does this mean your Java programs are eternally doomed to sluggish performance? Not at all -- object creation and recycling is just one aspect of program performance, and, providing you're sensible about creating objects in heavily used code, it's easy to avoid the object churn cycle! In the remainder of this article we'll look at ways to keep your programs out of the churn by reducing unnecessary object creation.
Keep it primitive
Probably the easiest way to reduce object creation in your programs is by using primitive types in place of objects. This approach doesn't apply very often -- usually there's a good reason for making something an object in the first place, and just replacing it with a primitive type is not going to fill the same design function. In the cases where this technique does apply, though, it can eliminate a lot of overhead.
The primitive types in Java are
boolean,
byte,
char,
double,
float,
int,
long, and
short. When you create a variable of one of these types, there is no object creation overhead, and no garbage collection overhead when you're done using it. Instead, the JVM allocates the variable directly on the stack (if it's a local method variable) or within the memory used for the containing object (if it's a member variable).
Java defines wrappers for each of these primitive types, which can sometimes confuse Java novices. The wrapper classes represent immutable values of the corresponding primitive types. They allow you to treat values of a primitive type as objects, and are very useful when you need to work with generic values that may be of any type. For instance, the standard Java class libraries define the
java.util.Vector,
java.util.Stack, and
java.util.Hashtable classes for working with object collections. Wrapper classes provide a way to use these utility classes with values of primitive types (not necessarily a good approach from the performance standpoint, for reasons we'll cover in the next article in this series, but a quick and easy way to handle some common needs).
Except for such special cases, you're best off avoiding the usage of the wrapper classes and staying with the base types. This avoids both the memory and performance overhead of object creation.
Besides the actual wrapper types, other classes in the class libraries take values of primitive types and add a layer of semantics and behavior. Classes such as
java.util.Date and
java.awt.Point are examples of this type. If you're working with a large number of values of such types, you can avoid excessive object overhead by storing and passing values of the underlying primitive types, only converting the values into the full objects when necessary for use with methods in the class libraries. For instance, with the
Point class you can access the internal
int values directly, even combining them into a
long so that a single value can be returned from a method call. The following code fragment illustrates this approach with a simple midpoint calculation:
... // Method working with long values representing Points, // each long contains an x position in the high bits, a y position // in the low bits. public long midpoint(long a, long b) { // Compute the average value in each axis. int x = (int) (((a >> 32) + (b >> 32)) / 2); int y = ((int) a + (int) b) / 2; // Return combined value for midpoint of arguments. return (x << 32) + y; } ...
Reuse objects
Now let's consider another approach to reducing object churn: reusing objects. There are at least two major variations of this approach, depending on whether you dedicate the reused object for a particular use or use it for different purposes at different times. The first technique -- dedicated object reuse -- has the advantages of simplicity and speed, while the second -- the free pool approach -- allows more efficient use of the objects.
Dedicated object reuse
The simplest case of object reuse occurs when one or more helper objects are required for handling a frequently repeated task. We'll use date formatting as an example, since it's a function that occurs fairly often in a variety of applications. To just generate a default string representation of a passed date value (represented as a
long, let's say, given the preceding discussion of using primitive types), we can do the following:
... // Get the default string for time. long time = ...; String display = DateFormat.getDateInstance().format(new Date(time)); ...
This is a simple statement that masks a lot of complexity and object creation behind the scenes. The call to
DateFormat.getDateInstance() creates a new instance of
SimpleDateFormat, which in turn creates a number of associated objects; the call to
format then creates new
StringBuffer and
FieldPosition objects. The total memory allocation resulting from this one statement actually came out to about 2,400 bytes, when measured with JRE 1.2.2 on Windows 98.
Since the program only uses these objects (except the output string) during the execution of this statement, there's going to be a lot of object churning if this use and discard approach is implemented in frequently executed code. Dedicated object reuse offers a simple technique for eliminating this type of churn.
Owned objects
By using some one-time allocations, we can create a set of objects required for formatting, then reuse these dedicated objects as needed. This set of objects is then owned by the code which uses them. For example, if we apply this approach using instance variables, so that each instance of the containing class owns a unique copy of the objects, we'd have something like this: | http://www.javaworld.com/article/2076539/build-ci-sdlc/java-performance-programming--part-1--smart-object-management-saves-the-day.html | CC-MAIN-2015-18 | refinedweb | 1,754 | 50.87 |
In the following I tried to answer to Allen and Roman postings at the same time (as they both refer to the same mail), quoting both of their mails.... ["Module" vs "Pipeline" vs "ModuleBuilder" vs "module_builder_t] Allen Bierbaum wrote: >. Roman Yakovenko wrote: > I would like to call it generator_t, but module_builder_t is also okay. I'm fine with "Module Builder" as well. As to the exact spelling, I'd vote for "ModuleBuilder" for two reasons: 1) It's the same guideline used in the standard Python library (and also most other packages I've come across) 2) It separates the high level API classes from the internal lower level API classes. So the user will notice when he's about to use stuff that requires some "deeper understanding" of the internals of pyplusplus (and that is more likely to break his script in future versions as I'd still argue that the low level API is more likely to change than the high level API from version to version). 3) It's the naming convention I'm used to because I use it for my own stuff as well... ;-) ok, these are three reasons, I'd vote for it for three reasons. Allen Bierbaum wrote: >>. Roman Yakovenko wrote: > I think object oriented interface is fine. I will not let global variables to > come into pyplusplus without good reason. More over, I think, that within same > script it should be possible to use more then one module_builder_t( I > like this name ). I introduced the global functions simply as an abbreviation so that I could be more concise in my script. My point is not to forbid explicit usage of the module builder class. Even in my version of the API you could have used the builder class explicitly as Allen did in his version. Well, having global functions and an internal global builder class is a rather minor feature, I could also live without. My argument is just that introducing the global functions has absolutely no effect on those people who instantiate the builder themselves, whereas leaving the functions out affects those people who would prefer to use them. So why not let the user decide for themselves? Roman Yakovenko wrote: >> configure pygccxml parser > - code creators factory configuration > keeps all data to configure module_creator.creator_t class > - declarations > returns declarations > within this property, files will be parsed, only once, and > declaration tree will be returned. > - module_creator > returns module_t code creator that has been created using by creator_t class > > In this way user don't need to think "parsing" and "code creators > factory", but rather I have a set of declarations, lets do some adaptation. I have to admit that I caught myself forgetting to call parse() before trying to access the declaration tree when I was setting up a simple pyplusplus example. But on the other hand, triggering such a "big" operation like parsing the headers just by accessing an attribute sounds unusual. But then, you didn't say how the attribute access would look like. The parse() step could really be done internally once the user calls any of the Class(), Method(), etc. methods which is basically what you were proposing. I think this is not such a bad idea at all, I'm in favor of trying it out. :) Allen Bierbaum. I agree that the decision whether there should be two declaration wrapper classes or only one is really just an implementation detail. I suppose the question rather is what interface we would like to have on that declaration wrapper(s) and whether the interface should depend on a) the number of contained declarations and b) on the type of the contained declaration(s). Our implementations agreed in that they did not base the interface on the declaration type (which means there should already be test/handling code in each method). I also didn't base the interface on the number of contained declarations because I thought whenever I call a method on a MultiDecl object I could just as well iterate over the contained declarations and call that method on each of them individually. And that's basically what I'm doing, relieving the user from having to write that loop himself. Roman Yakovenko wrote: > mb = module_builder_t( ... ) > mb.class_ = mb.class_group > > You replace function that return 1 class with function that returns > many classes. Your code will work without changes. If the basic idea behind this can be rephrased as "let the user customize the API", then I think I can agree, but I'd do it the other way. Instead of replacing methods by new methods I would just allow to set options that alter the semantics of the methods a little bit. For example, you could provide new default values for arguments (like the recursive flag mentioned somewhere below) or you could enable/disable the automatic assertion feature that I've mentioned in an earlier mail. If I want to reference several classes at once I could disable automatic assertion for class queries. Whereas if I want to be sure to get exactly the class I have specified I enable automatic assertion with a count of 1. Roman Yakovenko wrote: > Also we can not join between decl_wrapper and multi_decl_wrapper. > Every declaration > has set of unique properties like parent or location. Those properties > will not be in interface of multi_decl_wrapper. As mentioned above neither Allen's nor my API bases the declaration interface on the *types* of the contained declaration. So currently, you don't have that anyway (but this hasn't been a problem for me, and obviously neither for Allen as the main purpose of the DeclWrapper class is to *decorate* the declarations, the selection has been done earlier). Allen Bierbaum. Right, you *could*, but you don't *have to*. The above code would work in my version just as well with the same semantics, i.e. class1_method would only contain the method of class1 and not the one from class2 because you called Method() on a previous selection of exactly one class. Only if you would call Method() on the main namespace (which by default also contains all children nodes) or on a class selection that references both classes would you get the "method1" methods from both classes. Suppose I modify the above code and add a line like this (assuming your version of the API): classes = ns.Class("class.*") This would already address both classes and return a MultiDeclWrapper object in the above case. This means, I couldn't call Method() on them to further refine my query. But if the library only had a class1 class but no class2 class, the above call would return a DeclWrapper object and I would get a different interface where Method() is available. In my version I wanted to prevent such cases as I consider this to be somewhat inconsistent (you cannot tell what interface the returned object has just by looking at the above line. You can only answer that question by knowing the contents of the headers that were parsed). The bottom line is that my main argument for my approach would be the same as above. Together with auto assertions my approach doesn't affect the way you use your API whereas limiting the flexibility affects the way I was creating my wrappers. So again, why not letting the users decide for themselves which approach suits them best? Roman Yakovenko wrote: > But some time it should be possible to say something > like this: give me all declarations that their names start with QXml or QDom. That's already possible in both versions by using a regular expression (such as QDom.*) on the name. Allen Bierbaum wrote: >>. Note however, that by using the global Method() function I more or less explicitly stated that I really wanted to search the entire declaration tree. When I would have wanted to restrict the query to a particular class I would have written: Class("Foo").Method("operator()", retval=["float &", "double &"]).ignore() > Maybe something like this instead: > > ns = mod.Namespace("test_ns") > ns.Method("operator()", retval=["float &", "double &"], recursive=True).ignore() > > (notice the explicit request to recursively search). Well, I could argue that calling Method() on a namespace and explicitly setting recursive to True is sort of redundant. ;) But apart from that I'm fine with it. (Could we also agree on making the default value for recursive customizable? Then it almost feels like home... :) Roman Yakovenko wrote: > Well, I think that for the first version we will implement Matthias's API. > Using it, is very easy to mimic what you want: > > mb = module_builder_t(...) > > class1 = mb.class( name="class1" ) > class1_method1 = mb.member_function( name=class1.decl_string + "method1" ) Ah, it seems there is still some confusion about how my version of the API is used. Even though I was mainly using the global query functions (that act on the main namespace) you are by no means restricted to them. Of course, you can use the respective methods on a previously made selection, so the last line in the above example would rather look like this: class1_method1 = class1.member_function( name="method1" ) That is, you would call member_function() on the class and not on the namespace. Allen Bierbaum wrote: > In my personal opinion (and I am higly biased) I would summarize the > comparison by saying that the prototype I put together may be further > ahead on features in general but could definitely be helped out with > more expressiveness of queries. I agree with that summary. :) >? As our APIs are close enough I don't think we have to start over from scratch again. Feel free to take anything you need from my version and post any updates as soon as you have them finished so that I can test it and maybe even add some stuff. In the meantime, I'll refrain from doing more changes to my version. Personally, I think the following items have to be sorted out as quickly as possible: - Where is the main version of the "experimental" API kept? Ideally, this should be a cvs/subversion repository that we can all access. I guess the only repository that is already there is the pyplusplus repository itself. But this would mean Roman would have to reserve an area in his repository and give us write access to it. Alternatively, I'm fine with keeping the main sources in Allen's hands and sending him patches whenever someone actually does changes to the code (I'd recommend to announce such attempts here so that everyone knows what everyone else is up to. Maybe this is really the time to start using the wiki). - What is the internal "decoration" API of pyplusplus? Does the patch from Allen already contain everything that is needed? Was this part of the patch accepted and applied to cvs? Where is this API documented? - What are the guidelines for writing doc strings and which tool will be used to create reference documentation? (I think pyplusplus itself is also in dire need of doc string and now that I keep looking at the sources I could just as well provide some doc strings myself. But for this, I need to know what guidelines I have to follow (should it be plain text or is it ok to add some markup for a specific tool? And if so, which tool? epydoc? doxygen? etc)) - Matthias - | http://mail.python.org/pipermail/cplusplus-sig/2006-February/010051.html | CC-MAIN-2013-20 | refinedweb | 1,894 | 60.95 |
.
/!\ as of linux 3.11, the audit framework is not yet compatible with the namespace implementation, if you use namespaces, consider deactivate the audit framework. It may also affects the performance of the system.
Contents
Installation
audit is available in the community repository, it can be installed with:
# pacman -S audit
# systemctl start auditd.service # systemctl enable
j and break the relevancy of your logs. add it to the /etc/audit/audit.rules :
-w /etc/audit/audit.rules -p rwxa -w /etc/security/
Audit syscalls
The audit framework allow you to audit the syscalls performed.
using pid
you can search events related to a particular pid using :
# ausearch -p 1
This command will show you all the events logged according to your rules related to the PID1 (i.e. systemd).
using keys
You can use the -k option in your rules to be able to find related events easily :
# auditctl -w /etc/passwd -p rwxa -k KEY_pwd
A lot of other options are available, see man ausearch
# ausearch -k KEY_pwd
A similar search with a key set up will give you
Look for anormalies
The aureport tool can be used to quicly report any anormal event performaed on the system, it include network interface used in promiscous mode, process or thread crashing or exiting with ENOMEM error etc.
The easiest way to use aureprt is :
# aureport -n
You. | https://wiki.archlinux.org/index.php?title=Audit_framework&oldid=278323 | CC-MAIN-2018-26 | refinedweb | 228 | 51.48 |
IRC log of ldp on 2014-04-17
Timestamps are in UTC.
12:56:23 [RRSAgent]
RRSAgent has joined #ldp
12:56:23 [RRSAgent]
logging to
12:56:25 [trackbot]
RRSAgent, make logs public
12:56:25 [Zakim]
Zakim has joined #ldp
12:56:27 [trackbot]
Zakim, this will be LDP
12:56:27 [Zakim]
ok, trackbot; I see DATA_LDPWG()8:30AM scheduled to start 26 minutes ago
12:56:28 [trackbot]
Meeting: Linked Data Platform (LDP) Working Group Teleconference
12:56:28 [trackbot]
Date: 17 April 2014
13:01:51 [Zakim]
DATA_LDPWG()8:30AM has now started
13:01:58 [Zakim]
+MIT-F2F-group
13:03:18 [nmihindu]
Zakim, what's the code?
13:03:18 [Zakim]
the conference code is 53794 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), nmihindu
13:03:33 [Ashok]
Ashok has joined #ldp
13:03:46 [Zakim]
+[IPcaller]
13:03:54 [codyburleson]
Zakim, IPcaller is me.
13:03:54 [Zakim]
+codyburleson; got it
13:04:15 [codyburleson]
Zakim, who is talking?
13:04:15 [Zakim]
+??P17
13:04:22 [Zakim]
+??P16
13:04:24 [BartvanLeeuwen]
Zakim, ??P17 is me
13:04:24 [Zakim]
+BartvanLeeuwen; got it
13:04:26 [Zakim]
codyburleson, listening for 11 seconds I could not identify any sounds
13:04:34 [nmihindu]
Zakim, ??P16 is me
13:04:34 [Zakim]
+nmihindu; got it
13:04:39 [nmihindu]
Zakim, mute me
13:04:39 [Zakim]
nmihindu should now be muted
13:05:45 [Ashok]
scribenick: Ashok
13:05:48 [JohnArwe]
JohnArwe has joined #ldp
13:06:47 [Ashok]
Topic: Agenda
13:06:55 [Zakim]
-BartvanLeeuwen
13:07:14 [Ashok]
Arnaud: Access control and Patch format are on the agenda with testing in the afternoon.
13:07:18 [Zakim]
+??P17
13:07:20 [BartvanLeeuwen]
Zakim, ??P17 is me
13:07:20 [Zakim]
+BartvanLeeuwen; got it
13:07:46 [Ashok]
... I think we need to talk about the spec again. Esp. about JSON-LD.
13:07:58 [Ashok]
... also talk about paging a bit more.
13:07:59 [codyburleson]
*has also the noise
13:08:29 [Ashok]
Arnaud: We can delay testing
13:09:04 [deiu]
deiu has joined #ldp
13:09:13 [Ashok]
... start with Access Control Note now. Then spec and paging
13:09:23 [Ashok]
... and patch
13:09:24 [deiu]
Zakim, who is on the call?
13:09:24 [Zakim]
On the phone I see MIT-F2F-group, codyburleson, nmihindu (muted), BartvanLeeuwen
13:09:52 [Ashok]
Arnaud: I don't see a quick solution to PATCH.
13:10:01 [Ashok]
Topic: Access Control
13:10:18 [TallTed]
TallTed has joined #ldp
13:10:25 [deiu]
Zakim, MIT-F2F-group has Arnaud, Ashok, betehess, JohnArwe, roger, sandro, SteveS, TallTed, deiu
13:10:25 [Zakim]
+Arnaud, Ashok, betehess, JohnArwe, roger, sandro, SteveS, TallTed, deiu; got it
13:10:30 [Ashok]
Arnaud: This was a compromise we came to when we wrote the charter
13:11:12 [Ashok]
... did not want to put on charter because it's a hard issue. So we decided on a separate note.
13:11:48 [codyburleson]
Annoying noise just stopped
13:11:56 [Ashok]
... on yesterday's wish list discussion Access Control was high priority
13:12:10 [Ashok]
... we need agreement on usescases
13:13:01 [deiu]
Ashok: we started on this and we have 2 requirements
13:13:09 [deiu]
... one: you need some form of authentication
13:13:29 [deiu]
... however, we don't want to specify the authentication protocol
13:13:58 [MiguelAraCo]
MiguelAraCo has joined #ldp
13:14:14 [deiu]
... second: once you are authenticated (and say you get a token), then you can access various things, update them, etc., so there must be some way to specify what you can do
13:15:10 [deiu]
... the question is: where do we specify that? In LDP, in HTTP? How do we specify what the ACL privileges are?
13:15:41 [deiu]
... our wiki page has example we can build on
13:16:01 [deiu]
... we haven't gotten further on this because there wasn't a lot of enthusiasm in the group
13:16:07 [deiu]
Arnaud: I agree with that part
13:16:40 [deiu]
... but we should not get sidetracked into discussing the solutions, but instead try to figure out the questions we need to ask ourselves
13:17:12 [deiu]
Ashok: does fine-grained ACL means access to one attribute?
13:17:15 [deiu]
Arnaud: yes
13:17:31 [deiu]
Ashok: people also want access to a group of resources, and to specify that group is hard
13:17:45 [deiu]
Arnaud: that's not what we're trying to decide now
13:17:50 [deiu]
... now we want to know what the requirements are
13:17:55 [betehess]
betehess has joined #ldp
13:18:09 [deiu]
... let's look at the doc together and decide how we can improve it
13:18:50 [deiu]
... why don't we go through the use-cases first?
13:20:59 [deiu]
Arnaud: do people agree that the use-case involving ACL for group is important?
13:22:02 [roger]
roger has joined #ldp
13:22:25 [TallTed]
Use Case: granting access to a (group of) resources to attendees of a particular session at a conference
13:24:24 [TallTed]
Requirements: group entitites; grant permissions to the group; set permissions on a (group of) resources
13:26:17 [Ashok]
Ted: Granularity of access control is important
13:33:50 [TallTed]
Requirement: Grant permissions for (set restrictions on) individual (enumeration) entity/resource.
13:34:32 [TallTed]
Requirement: Group entities/resources by enumeration (closed ended.)
13:34:32 [TallTed]
Requirement: Group entities/resources by attribute (open ended.)
13:35:59 [Ashok]
Arnaud: Do we need 3.3?
13:36:40 [Ashok]
Ted: Disagrees
13:37:36 [Ashok]
... that is should be removed
13:38:02 [Ashok]
s/is/it/
13:40:31 [TallTed]
s/Use Case: granting/generic requirement: granting/
13:43:26 [Ashok]
Usecase: Ted wants to access/update some resource ... he wants his friends to get acess ... he wants to acess related resources
13:44:08 [Ashok]
Ted: Change title of 3.4
13:46:01 [Ashok]
Andre: Do we need access control in LDP?
13:46:16 [Ashok]
... underlying store will have policies
13:46:29 [Ashok]
... how to expose these policies to client
13:47:49 [Ashok]
Ted: Talks about distinction between usecases and requirements
13:49:14 [Ashok]
Alexandre: Is there anyting special about access control for LDP?
13:49:32 [Ashok]
Ted: Is there a system that satisfies these requirements?
13:49:46 [Ashok]
... there is no W3C spec that tells us
13:49:58 [TallTed]
Question: Granularity. LDPC? LDPR? attribute within LDPR ("triple level")?
13:51:43 [Zakim]
+ericP
13:53:43 [Ashok]
Steve: We have application-specific access control
13:54:12 [nmihindu]
people who are doing similar things today with Linked Data without LDP (Victor from UPM, Serena from Inria) do it as dataset, graph, triple levels as far as I know
13:54:59 [codyburleson]
We should escape Identification and Authentication. We need only a URI to represent ANY Principal. Once we have that, we can focuse specifically on Authorization and make no claim about how the Principal URI is derived.
13:55:08 [Ashok]
Andre: We use Wen Acess Control to specify policies ... LDP just uses them
13:55:25 [TallTed]
Access Control == Identification (e.g., WebID, Username, OpenID) + Authentication (e.g., WebID+TLS, Password, Password+Token) + Authorization (permissions, policies)
13:57:57 [Ashok]
s/Wen/Web/
13:59:20 [ericP]
q+ to ask whether an LDP Resource can have different triples (or Container have different members) depending on the authentication
13:59:25 [Ashok]
Ashok: The question is -- Is Access Control our problem
13:59:47 [Ashok]
Steve: If we don't do it, who will do it?
14:00:43 [ericP]
if resources can look different, you enable fine-grained access control.
14:01:15 [Ashok]
Ashok: Should we write this as a charter for a WG?
14:01:32 [Ashok]
Andre: There is a lot of work to be done here
14:01:32 [ericP]
(i don't think there's anything that says that can't be different, though HTTP purists may argue that two representations with different triples can't represent the same resource at the same time)
14:04:59 [Ashok]
Ashok: #.5 follows from 3.4 ... part of 3.4 actually
14:06:03 [codyburleson]
Problem Statement: Any platform for developing web applications would be incomplete without a mechanism for Authentication and Authorization. Without this functionality, the platform could serve only light, utilitarian purposes at best. Without security, it would not even be proper to call the system a "platform".
14:06:11 [Arnaud]
ack ericP
14:06:11 [Zakim]
ericP, you wanted to ask whether an LDP Resource can have different triples (or Container have different members) depending on the authentication
14:07:10 [Ashok]
Eric: Is there something in LDP that depends on auathentication?
14:07:39 [Ashok]
Ted: Nothing says two user have to see the same thing
14:09:22 [Ashok]
Andre: Let's ask folks who have implemented access control to send usecases
14:09:36 [Ashok]
\Sandro: I like very specific usecases
14:10:40 [Ashok]
Sandro: 3 paras that would define scope of work in a charter would be good
14:10:56 [Ashok]
s/\sandro/sandro/
14:11:31 [Ashok]
s/\Sandro/Sandro/
14:12:23 [codyburleson]
+q
14:12:51 [codyburleson]
-q
14:15:39 [Ashok]
Move last usecase to intro section
14:16:02 [Ashok]
Move 4.2 to section 5
14:17:57 [Ashok]
Arnaud: Do we need section 5?
14:20:15 [Ashok]
... move to other document?
14:21:28 [Ashok]
Sandro: Start another wiki page with the 3 paras that could go into a charter
14:21:41 [Ashok]
Ted: make that the conclusion of this document
14:21:47 [Ashok]
Arnaud: Yes
14:23:03 [Ashok]
Nandana, do you have a comment?
14:23:28 [nmihindu]
Ashok, no I've just put my mind completely to the primer :)
14:24:03 [Ashok]
Great!
14:24:54 [BartvanLeeuwen]
break now ?
14:25:05 [TallTed]
break until 10:35 local
14:26:18 [Zakim]
-BartvanLeeuwen
14:35:45 [sandro]
topic; json
14:35:53 [Ashok]
Topic: ISSUE-97 Should we use JSON in addition to Turtle?
14:36:07 [sandro]
issue-97
14:36:07 [sandro]
topic: JSON
14:36:07 [trackbot]
issue-97 -- Json instead of (in addition to?) turtle -- raised
14:36:07 [trackbot]
14:36:25 [Ashok]
Arnaud: We could put in Best Practices doc.
14:36:41 [Ashok]
... don't want to go to another Last Call.
14:37:34 [Ashok]
... We could put a SHOULD in the spec. We can do this w/o going to another Last Call
14:37:39 [codyburleson]
+1 "SHOULD" support JSON-LD
14:37:58 [ericP]
imo, that's n'th last call
14:38:13 [betehess]
q+
14:38:16 [Ashok]
PROPOSAL: Add SHOULd support JSON-LD in spec
14:38:39 [bblfish_]
bblfish_ has joined #ldp
14:38:57 [Ashok]
... we can also add "who supports JSON-LD" when we go to CR
14:39:19 [Ashok]
Sandro: Are we saying need to convert formats?
14:39:37 [Arnaud]
ack betehess
14:39:41 [Ashok]
... need translation on output or store both formats
14:41:12 [Ashok]
Steve: Or say you match the format of request
14:41:52 [betehess]
my take: MUST Turtle / SHOULD JSON-LD does not sound like a so great idea. not sure that it solves exactly
14:41:58 [Zakim]
+??P4
14:42:04 [BartvanLeeuwen]
Zakim, ??P4 is me
14:42:04 [Zakim]
+BartvanLeeuwen; got it
14:42:18 [Ashok]
Sandro: We should gather data to go into Director meeting
14:42:59 [betehess]
q+
14:43:05 [Ashok]
Sandro: We will put in spec ... if Director obejcts move to BP
14:43:27 [Ashok]
Eric: Can we put into separate doc that put to REC
14:43:38 [Ashok]
Sandro: One line ... too much hassle
14:45:16 [Ashok]
Eric: There is mapping from JSON-LD to Turtle .... 1 to m mapping ...
14:45:29 [Ashok]
... need context
14:45:32 [Arnaud]
ack betehess
14:45:35 [SteveS]
q+
14:45:51 [Ashok]
Sandro: We only support JSON-LD which has context embedded in ti
14:46:00 [Ashok]
s/ti/it/
14:46:40 [ericP]
right, but that's a mapping from json to one namespace unless you want to use the very ugly json-ld with no short names
14:46:46 [Ashok]
Alexandre: Discussion about where SHOULD goes
14:47:03 [ericP]
(Sandro)
14:47:56 [Arnaud]
ack Steves
14:47:57 [Ashok]
... also SHOULD vs. MUST
14:48:10 [sandro]
ericP, no, the servers and the clients get to figure out the @context to use
14:48:46 [Ashok]
Steve: We may have to go to another call if we get significant comments in CR.
14:49:03 [Ashok]
... so we can put in BP and then add to spec later
14:49:28 [Ashok]
Sandro: Let's ask director. If he says NO we move to BP.
14:49:39 [ericP]
sandro, ahh, so we don't ahve a standard serialization in json
14:49:54 [Ashok]
Arnaud: can we add to AT RISK
14:50:11 [Ashok]
Sandro: Maybe
14:51:18 [betehess]
betehess: if there was no LC issue, I would like to see MUST implement JSON-LD (no Turtle mandatory)
14:51:26 [roger]
+1
14:51:44 [deiu]
+1
14:52:57 [Ashok]
Ashok: You are making a marketing asessment
14:53:33 [Ashok]
Sandro: Clear that JSON has market ... not clear if JSON-LD has market
14:54:45 [Ashok]
Arnaud: We are not forcing servers to convert ... it's a SHOULD
14:55:09 [Zakim]
-nmihindu
14:55:50 [Ashok]
Ted: MUST for both format is best
14:56:51 [sandro]
PROPOSED: Close issue-97 with adding JSON-LD as a SHOULD in the spec, if we can in W3C process without another Last Call; if it'll require another LC, then we advocate for it in BP.
14:56:55 [Ashok]
Arnaud: We can put a SHOULD for JSON-LD in spec or put in BP
14:57:03 [codyburleson]
There is a marketing factor at play here that shouldn't be discounted. Turtle is meaningless to the "average" web developer. JSON-LD provides an option that is meaningful for them. If we want to successful, we need to appeal to the broader audience. So, I agree that is SHOULD be in the spec; not BPs.
14:57:19 [TallTed]
+1
14:57:24 [BartvanLeeuwen]
+1
14:57:33 [codyburleson]
+1
14:57:35 [MiguelAraCo]
+1
14:57:37 [SteveS]
+1
14:57:37 [Ashok]
+1
14:57:42 [roger_]
roger_ has joined #ldp
14:57:44 [sandro]
+1
14:57:44 [deiu]
+1 (just to help with adoption, but I would rather see a MUST instead)
14:57:58 [roger_]
+1
14:58:03 [JohnArwe]
+1
14:58:15 [MiguelAraCo]
(I agree with deiu)
14:58:16 [ericP]
-.1 # i'm uncomfortable engouth with sneaking this in after LC to whine about it, but not uncomfortable do something else
14:58:21 [betehess]
+0
14:58:23 [sandro]
RESOLVED: Close issue-97 with adding JSON-LD as a SHOULD in the spec, if we can in W3C process without another Last Call; if it'll require another LC, then we advocate for it in BP.
14:58:28 [Ashok]
Some leaning towards MUST
14:58:49 [Ashok]
Topic: PATCH
14:59:40 [Ashok]
Arnaud: There are different solutions but have no agreement on requirements
14:59:47 [Ashok]
... and usecases
15:00:48 [Ashok]
Sandro: Need to be able to patch from any graph to any graph and you need every patch to be tractable
15:01:06 [sandro]
(those are my constraints, other people have others)
15:01:07 [SteveS]
Some prior discussion: Use Cases
15:01:24 [Arnaud]
15:01:37 [SteveS]
Requirements:
15:01:48 [Ashok]
Alexandre presents slides
15:02:09 [sandro]
15:04:34 [Ashok]
SPARQL patch+skolemization, SPARQL patch w/o skolemization, RDF Patch
15:06:03 [Ashok]
q+
15:09:10 [SteveS]
JSON Merge Patch
15:09:21 [Ashok]
Ashok: Issue with patching large arrays
15:12:55 [Ashok]
q-
15:13:48 [sandro]
q+
15:15:12 [Ashok]
Web Payments using JSON-LD and JSON patch
15:21:51 [Ashok]
Arnaud: Are you agruing for RDF Patch?
15:21:59 [Ashok]
Alexandre: Yes
15:22:46 [Arnaud]
ack sandro
15:24:15 [Ashok]
Sandro: Easy if you don't have blank nodes. So I said use Skolemization.
15:24:32 [Ashok]
... Eric argues that Skolemization is expensive
15:25:36 [Ashok]
Sandro: You could serialize triples to add and triples to delete in Turtle
15:25:42 [SteveS]
q+ to ask if there really is a single universal solution for patch
15:27:42 [Arnaud]
ack steves
15:27:42 [Zakim]
SteveS, you wanted to ask if there really is a single universal solution for patch
15:27:43 [Ashok]
Discussion about blank nodes can be identified
15:28:52 [Ashok]
Arnaud: Either we agree to something that's not perfect or we have no solution at all.
15:29:00 [Ashok]
q+
15:29:35 [sandro]
Pick your poison: blank-node-identifiers or worst-case-fails
15:30:08 [sandro]
q+
15:30:30 [sandro]
q+ to say where LDP *needs* patch (huge containers)
15:30:33 [Ashok]
Steve: My usecase is more towards a lightweight RDF Patch. Limited requirements.
15:31:00 [Arnaud]
ack ashok
15:32:33 [Ashok]
Ashok: Alexandre, is there a document we can point to?
15:33:06 [Arnaud]
ack sandro
15:33:06 [Zakim]
sandro, you wanted to say where LDP *needs* patch (huge containers)
15:33:50 [Ashok]
Sandro: What do LDP users really need?
15:34:23 [Ashok]
... add/delete triple from huge graph ... no blank nodes
15:34:42 [Ashok]
... cannot use PUT
15:37:29 [betehess]
Arnaud, define "big" :-)
15:39:46 [Ashok]
Discussion about Skolemization and how expensive it is
15:43:39 [sandro]
sandro: One could also do a nice, efficient streaming protocol for maintaining sync
15:43:50 [Ashok]
Arnaud: This seems to be brainstorming. How to we make progress? Can we reach a compromise?
15:45:11 [Ashok]
Sandro: Requirement: There is a big graph in a triple store and you want to change a few triples in it
15:45:52 [Ashok]
... must be able to patch any graph
15:46:19 [betehess]
also, remember that there will be a time to see what is supported in implementations... who is planning to implement one of the solutions?
15:48:42 [betehess]
question: do we want to be able to patch _any_ graph? or do we think "realistic" (define realistic) graphs are just ok
15:48:50 [Arnaud]
STRAWPOLL: a) I'd rather keep it simple and accept a limited solution, b) I want a general solution and am willing to accept the additional cost
15:49:18 [betehess]
strong a)
15:49:29 [Ashok]
q+
15:50:35 [Arnaud]
ack ashok
15:50:46 [ericP]
this puts universality and tractibility at odds
15:50:58 [TallTed]
require universality; require tractability; require both; require neither...
15:52:58 [sandro]
STRAWPOLL: If we suggest one PATCH format, we make it (a) fail on certain pathological graphs, or (b) require the server to maintain Skolemization maps
15:54:19 [betehess]
sandro, that's not a strawpoll, that's a fact
15:54:30 [sandro]
:-)
15:54:52 [betehess]
my answer: yes :-)
15:55:06 [sandro]
STRAWPOLL: If we suggest one PATCH format, we make it (a) fail on certain pathological graphs, or (b) require the server to maintain Skolemization maps. Vote for which branch you'd rather live with;
15:55:38 [sandro]
STRAWPOLL: (Assuming we suggest one PATCH format) should it (a) fail on certain pathological graphs, or (b) require the server to maintain Skolemization maps.
15:56:21 [nmihindu]
nmihindu has joined #ldp
15:58:23 [Arnaud]
STRAWPOLL: I'd rather have a solution that (a) doesn't address certain pathological graphs, or (b) requires the server to maintain Skolemization maps
15:58:47 [betehess]
strong (a)
15:58:48 [deiu]
a) I don't want to pay a high price every time, regardless of case (while also maintaining skolemized versions), AND because I also want to do the PATCH operation in one request
15:58:49 [ericP]
a
15:58:55 [sandro]
-1 go to lunch :-)
15:59:17 [SteveS]
a +1, b -0 [we do a) first and can do b) later if needed]
15:59:21 [ericP]
OBJECT
15:59:40 [deiu]
a) +1, b) +0
15:59:43 [sandro]
a -0, b +0
15:59:47 [ericP]
a +1, b -.5
15:59:48 [MiguelAraCo]
a) +1
15:59:49 [betehess]
(a) +1 (b) -.9
16:00:32 [BartvanLeeuwen]
a) +1
16:00:45 [JohnArwe]
a +1, b (if needed as fallback) +0.5 ... I would prefer a better understanding of which graphs are considered pathological
16:01:03 [roger]
(a) +1, (b) -0.5, but, mostly plan on using domain specific ways to do PATCH like things
16:01:37 [Ashok]
a
16:01:41 [TallTed]
general solution for all but pathological case; once that's recognized, fall back to Skolemnize
16:02:07 [codyburleson]
a) +1
16:02:36 [Ashok]
Sandro: Still question on expressiveness
16:03:08 [Ashok]
Alexandre: Do we want to handle blank nodes or not
16:04:26 [Ashok]
Eric: Question is whether you have variables and xxx
16:06:39 [Ashok]
Eric: Not hard to produce a spec on SPARQL patch
16:07:58 [Zakim]
-codyburleson
16:08:03 [Arnaud]
lunch break until 12:45 local
16:08:06 [BartvanLeeuwen]
enjoy lunch
16:08:11 [Zakim]
-BartvanLeeuwen
16:08:12 [Arnaud]
zakim, mute ericp
16:08:13 [Zakim]
ericP should now be muted
16:08:41 [ericP]
i'll see your mutation and raise you a departure
16:08:47 [Zakim]
-ericP
16:09:00 [Ashok]
BREAK UNTIL 1PM EASTERN
16:09:12 [JohnArwe]
i.e. for ~51 mins
16:11:49 [BartvanLeeuwen]
thats gone be a quick dinner for me ;)
16:24:05 [jmvanel]
jmvanel has joined #ldp
16:39:25 [Arnaud]
Arnaud has joined #ldp
16:50:50 [deiu]
scribenick: deiu
16:51:10 [deiu]
Arnaud: resuming meeting
16:51:26 [deiu]
... we can spend 1h on PATCH and maybe another hour on paging
16:52:35 [deiu]
... the poll was a useful exercise, so now we know what are the problems we need to solve
16:52:45 [deiu]
... the question is: is there a solution?
16:53:01 [deiu]
... what can we agree on to make progress?
16:53:23 [deiu]
betehess: we have two solutions: ericP's (with BGP) and Pierre-Antoine's solution
16:53:41 [deiu]
sandro: what about RDF patch?
16:53:47 [deiu]
betehess: it needs skolemization
16:54:05 [Zakim]
+[IPcaller]
16:54:12 [deiu]
sandro: Tim's is not expressed in concrete terms...
16:54:41 [deiu]
betehess: that's basically ericP's solution, with additional constraints
16:54:55 [deiu]
Arnaud: let's have a straw poll on these two options
16:55:15 [deiu]
sandro: the big difference is that one feels like SPARQL and the other one doesn't
16:55:31 [deiu]
... "feeling" like SPARQL is a negative point for LDP adoption
16:56:08 [betehess]
solutions are: ericP's SPARQL Update with constrained BGP vs Pierre-Antoine's RDF Patch + property path
16:56:15 [Arnaud]
STRAWPOLL: pursue a) ericP's (with BGP) or b) Pierre-Antoine's solution
16:56:59 [deiu]
a) 0 b) +1
16:57:02 [roger]
a) -0.5, b) 0.5
16:57:04 [betehess]
a) +0 (not disagreeing with ericP's view) b) +1
16:57:19 [sandro]
a -0.5 b 0.5
16:57:30 [TallTed]
a +0.5 b +0.25
16:58:02 [SteveS]
a) +.1 b) +.9
16:58:10 [Ashok]
0, 1
17:00:03 [MiguelAraCo]
a) +0.5 b) -.5
17:00:56 [Zakim]
+??P1
17:01:05 [nmihindu]
Zakim, ??P1 is me
17:01:05 [Zakim]
+nmihindu; got it
17:01:12 [nmihindu]
Zakim, mute me
17:01:12 [Zakim]
nmihindu should now be muted
17:01:53 [Zakim]
+[IBM]
17:02:20 [betehess]
q+ to comment on syntax
17:02:35 [Arnaud]
ack betehess
17:02:35 [Zakim]
betehess, you wanted to comment on syntax
17:03:07 [Zakim]
-[IBM]
17:03:29 [betehess]
Arnaud: we are not married to this syntax, could be JSON
17:03:58 [deiu]
Arnaud: what do we take from this?
17:04:10 [deiu]
... the majority seems to prefer b)
17:04:49 [deiu]
... what's the status of PA's proposal? is it written somewhere?
17:05:00 [deiu]
betehess: no, it isn't, but I plan to do it
17:05:44 [deiu]
... I can also provide a test suite and implementation
17:06:10 [deiu]
Arnaud: do we agree this is the next step? (start drafting the PATCH spec)
17:06:31 [deiu]
... then we have consensus
17:06:53 [deiu]
sandro: we can make it a REC track
17:07:42 [deiu]
Arnaud: there's a big difference, not just in the outcome but in what we do towards it
17:07:42 [Zakim]
+??P12
17:07:47 [BartvanLeeuwen]
Zakim, ??P12 is me
17:07:47 [Zakim]
+BartvanLeeuwen; got it
17:08:11 [betehess]
LD Patch
17:08:21 [deiu]
Arnaud: how do we name it?
17:09:11 [deiu]
betehess: the full name can be "LD patch format" and the short name can be "LD patch"
17:09:31 [betehess]
LD Patch Format, would live at
17:09:42 [sandro]
Linked Data Patch Format
17:09:45 [betehess]
Linked Data Patch Format, would live at
17:09:48 [sandro]
ldpatch
17:09:59 [sandro]
ld-patch
17:09:59 [deiu]
s/LD Patch Format/Linked Data Patch Format/g
17:11:10 [sandro]
PROPOSED: We encourage Alexandre to draft a Linked Data Patch Format, along the lines of Pierre-Antoine's proposal
17:12:10 [SteveS]
+1 (encourage yes, require/mandate is even better ;)
17:12:42 [deiu]
+1
17:12:43 [nmihindu]
+1
17:12:45 [sandro]
+1
17:12:45 [TallTed]
+1
17:12:46 [betehess]
+1
17:12:54 [Ashok]
+1
17:12:57 [roger]
+1
17:13:15 [codyburleson]
+0
17:13:19 [betehess]
/me feels encouraged
17:13:31 [sandro]
RESOLVED: We encourage Alexandre to draft a Linked Data Patch Format, along the lines of Pierre-Antoine's proposal
17:14:17 [deiu]
Arnaud: the resolution is that as a group we will start working on PA's proposal, while betehess will write it down in the document
17:15:11 [Arnaud]
s/We encourage Alexandre/We agree/
17:15:32 [deiu]
Arnaud: now betehess can take an action to do it
17:16:29 [deiu]
... then we're done with PATCH for today!
17:16:33 [betehess]
ACTION: betehess to draft a Linked Data Patch Format, along the lines of Pierre-Antoine's proposal
17:16:33 [trackbot]
Created ACTION-139 - Draft a linked data patch format, along the lines of pierre-antoine's proposal [on Alexandre Bertails - due 2014-04-24].
17:16:47 [deiu]
... I think everyone's happy with this
17:17:08 [deiu]
Topic: paging
17:17:19 [deiu]
Arnaud: Ashok has a proposal for us
17:17:34 [deiu]
Ashok: it looks we're not agreeing on one solution right now
17:17:49 [deiu]
... most solutions have caveats
17:18:18 [deiu]
... so we could add a warning, saying that if you do paging, the collection may change
17:18:29 [deiu]
Arnaud: I think we have agreed that we can do better
17:18:49 [deiu]
... today we're not providing any mechanisms in that regard
17:19:21 [deiu]
... yesterday we were left with 2 options: what we have in the spec + notification (which doesn't stop the client from continuing); the second option was to pursue sandro's proposal
17:19:43 [deiu]
TallTed: adding this editorially to the existing spec makes it clear what you get
17:19:49 [deiu]
sandro: it's clear in the spec that it is lossy
17:20:08 [deiu]
... the spec has the word "lossy"
17:21:10 [deiu]
Arnaud: so basically sandro wants to veto this
17:22:00 [deiu]
... we've agreed that we will do the notification (which is supposed to be mandatory) so the clients know if there was a change during paging
17:22:07 [deiu]
sandro: ok, I can live with that
17:23:06 [sandro]
WARNING: YOU MIGHT NOT SEE INSERTIONS OR DELETTIONS THAT MIGHT HAPPEN DURING PAGING.
17:23:13 [SteveS]
SteveS has joined #ldp
17:23:14 [sandro]
+1 that's all I've ever asked for.
17:23:58 [TallTed]
we do also have -- 7.1.1 A LDP client SHOULD NOT present paged resources as coherent or complete, or make assumptions to that effect. [RFC5005].
17:24:03 [sandro]
(because it implies you WILL see triples that are there the whole time)
17:24:12 [deiu]
Arnaud: triples that were there when you started and are still there when you end, are definitely seen by the client
17:26:47 [deiu]
... this is a clarification of how lossy paging is
17:27:10 [deiu]
sandro: is everyone ok with the wording?
17:27:18 [deiu]
TallTed: I'm not ok with any wording so far
17:27:35 [Arnaud]
Arnaud has joined #ldp
17:27:41 [deiu]
sandro: are you ok with having test cases that cover the lossy behavior?
17:29:20 [SteveS]
Anyone have a reference to a source code copyright/license header for W3C test suites?
17:29:28 [deiu]
TallTed: if I'm on the page with items 11-20, and someone deletes 19, what is the first item on the next page?
17:30:10 [deiu]
sandro: I would like to have a test case for those cases
17:30:59 [deiu]
Ashok: if the server remembers the first and last triples it displays, then if you do a delete, then it's ok
17:31:17 [deiu]
... so the triples won't move around between pages
17:31:35 [deiu]
TallTed: things will appear to shift if you scroll back and forth (or if you reload the same page)
17:32:02 [deiu]
... if you reload the same page you may not see the same data (same for scrolling) -> these are the warnings
17:32:33 [deiu]
Arnaud: people are starting to see the value in sandro's proposal
17:32:43 [deiu]
... we still need to agree on how to word it
17:33:57 [deiu]
Arnaud: I think the lossly aspect is especially important in the case where the client doesn't choose when things get paged
17:35:14 [deiu]
Ashok: when the client starts to page, it caches the collection and then pages over the cache, but it may not have enough space
17:35:26 :35:26 [TallTed]
such. All pages should be tagged NoCache.
17:37:31 [deiu]
[people don't like the NoCache bit]
17:37:37 :37:38 [TallTed]
such. Caching flags TBD.
17:38:07 :41:01 [SteveS]
+1
17:41:13 [sandro]
+1 as long as we're clear this is really an implementation technique, and the key point is the underlying invariant
17:41:16 [betehess]
+0
17:41:26 [Ashok]
+1
17:41:28 [deiu]
+0.9(9999)
17:41:31 [TallTed]
+1
17:41:43 [roger]
+0.8
17:41:43 [SteveS]
17:41:50 [nmihindu]
+1
17:41:53 [BartvanLeeuwen]
+1
17:42:15 [MiguelAraCo]
+.5
17:42:20 [MiguelAraCo]
+0.5
17:42:28 [sandro]
sandro: for eample, you could have pages that are determined by the content
17:42:33 [deiu]
Ashok: suppose I don't care about updates and I just want to page, and if the contents change then I'm ok with it, then am I allowed to do that?
17:42:37 [Arnaud]
RESOLVED::43:32 [Arnaud]
STRAWPOLL: I prefer paging to be controlled by a) the client b) the server
17:44:13 [sandro]
sandro: If the server wants to implement by doing a snapshot, it's welcome to. That meets the invariant.
17:44:43 [deiu]
sandro: the client sends a "preferred page size header" to initiate paging
17:49:38 [sandro]
sandro: it's not aboiut the client being resource limited, so much as the client wanting to focus on a particular bit.
17:49:55 [Arnaud]
STRAWPOLL: I prefer paging to be initiated by a) the client b) the server
17:50:43 [Ashok]
a)
17:50:56 [TallTed]
c - either
17:51:08 [deiu]
a)
17:52:44 [Arnaud]
q?
17:53:06 [sandro]
a
17:53:09 [betehess]
c - potentially both
17:54:04 [sandro]
How about: Prefer: Page-Size-KB=100
17:54:51 [sandro]
How about: Prefer: Page-Size-KB=unlim
17:56:15 [deiu]
=*
17:57:35 [sandro]
sandro: Is it the case that the client MUST understand paging?
17:58:50 [sandro]
STRAWPOLL: The server MAY do paging even if the client hasn't asked for it
17:59:07 [sandro]
(today in the spec, means the client MUST understand paging.)
17:59:29 [sandro]
(as in the spec today)
17:59:43 [TallTed]
+1
17:59:46 [SteveS]
+1
17:59:46 [Ashok]
+1
17:59:47 [sandro]
+0
17:59:48 [deiu]
-0.9
17:59:53 [MiguelAraCo]
+1
18:00:02 [BartvanLeeuwen]
+1
18:00:12 [betehess]
+1
18:00:28 [codyburleson]
+0
18:01:23 [deiu]
Arnaud: so we have consensus
18:01:37 [sandro]
STRAWPOLL: We'll allow for clients to ask for paging, ask for no paging, and ask for page size
18:01:52 [deiu]
... we can talk about page sizes or no page, or what are the preferences the clients can convey to servers
18:02:53 [TallTed]
+1
18:04:42 [SteveS]
+0 (could defer until a LDP.next)
18:05:00 [deiu]
+0 (same as SteveS)
18:05:28 [betehess]
+0
18:05:44 [nmihindu]
+0.5 (would nice to have if possible)
18:05:53 [Ashok]
+1
18:06:17 [roger]
+0.5
18:08:55 [deiu]
deiu: paging can be replaced by sorting+filtering+limit
18:10:05 [TallTed]
s/+limit/+limit+offset/
18:13:30 [deiu]
Arnaud: filtering is pretty complicated
18:13:40 [deiu]
betehess: what about the scope of bnodes between pages
18:15:03 [deiu]
Arnaud: the client should have a say regarding the paging preference
18:16:25 [TallTed]
HTTP code 413 Payload Too Large -- as a result of the client asking for max-result=10KB + whatever request
18:17:23 [deiu]
roger: you could SPARQL to page over the results
18:19:00 [deiu]
... we can use subsets of SPARQL for paging and/or patch
18:21:00 [deiu]
sandro: if a clients says "I want the top 10 items", it also know more about the shape of the graphs than the server
18:21:12 [deiu]
s/know/knows
18:21:35 [deiu]
... there could be a "group by subject" clause to define the items that will be returned
18:23:33 [deiu]
Arnaud: I still think the best way is to allow the client to say "I want paging" or "I don't want paging"
18:28:32 [deiu]
[sandro propses a way to do paging over periods of time - i.e. sending data over 100 ms ]
18:29:53 [deiu]
SteveS: we couldn't come up with something that made sense in OSLC
18:30:12 [deiu]
... small/medium/large are very relative
18:31:09 [deiu]
sandro: then what about time? (if size in kb is not good)
18:31:49 [deiu]
betehess: if you're the client, then you do paging based on a rough idea of the ration between the triple and the size
18:31:55 [deiu]
... so I guess the triple is fine
18:38:51 [deiu]
Arnaud: people are now saying that maybe we can give a page size in triples
18:41:31 [sandro]
PROPOSED: We'll provide a way for the client to express a desired page size hint to the server, including whether or not to do paging at all
18:42:06 [sandro]
PROPOSED::42:23 [deiu]
+1
18:42:35 [TallTed]
+1
18:42:38 [betehess]
+1
18:42:41 [SteveS]
+1
18:42:44 [Ashok]
+1
18:42:47 [BartvanLeeuwen]
+1
18:42:50 [nmihindu]
+1
18:42:51 [sandro]
+0.5 (only because number-of-triples isn't the right metric)
18:43:27 [bblfish]
bblfish has joined #ldp
18:43:28 [Ashok]
Roger: +1
18:43:54 [sandro]
PROPOSED: We'll provide a way for the client to express a desired page size hint to the server, including whether or not to do paging at all. Size in number of KILOBYTES, but we know the server might be doing associated-chunks of triples, like around a blank node, or the same container item.
18:43:56 [roger]
roger has joined #ldp
18:44:05 [sandro]
+1 :-)
18:44:17 [betehess]
-0.1
18:44:20 [deiu]
0
18:44:23 [SteveS]
0
18:44:25 [TallTed]
+1
18:44:27 [Ashok]
-1
18:44:30 [roger]
0
18:44:33 [BartvanLeeuwen]
0
18:44:43 [codyburleson]
0
18:45:12 [Arnaud]
RESOLVED::45:29 [deiu]
Arnaud: we can discuss the details later
18:46:55 [deiu]
... are there more issues re. paging?
18:47:10 [deiu]
... we have tackled the most important ones
18:47:19 [sandro]
how about: Prefer: Page-Size=100
18:47:19 [sandro]
and Prefer: Page-Size=* (for no paging) or Page-Size=No-Paging
18:49:13 [deiu]
Ashok: do you want to add membership triples to the top of the page?
18:49:33 [deiu]
sandro: I think there are one per item (one membership and one containment)
18:49:51 [sandro]
PROPOSED: If a Container has membership triples and containment triples included, the membership triples and containment triples MUST be on the same page as each other.
18:54:21 [sandro]
PROPOSED: If a Container has membership triples and containment triples included, the membership triples and containment triples for a given resource MUST be on the same page as each other.
18:54:51 [sandro]
arnaud: No one is going to have the triples on the same page.
18:55:09 [sandro]
PROPOSED: If a Container has membership triples and containment triples included, the membership triples and containment triples for a given (contained/member) resource MUST be on the same page as each other.
18:55:25 [sandro]
arnaud: No one is going to have the membership and containment triples on the same page.
18:56:24 [TallTed]
+1
18:56:39 [deiu]
0
18:56:48 [sandro]
+1
18:56:53 [SteveS]
-0.1 (let impls do what makes sense for triples they have)
18:57:34 [betehess]
+0 (not sure how useful it is)
18:58:16 [sandro]
RESOLVED: If a Container has membership triples and containment triples included, the membership triple and containment triple for a given (contained/member) resource MUST be on the same page as each other.
18:58:44 [roger]
+0.5
18:58:51 [Ashok]
+1
18:59:14 [deiu]
Arnaud: I think we have achieved a lot!
19:00:03 [deiu]
... people are welcome to stick around for interop testing
19:00:25 [BartvanLeeuwen]
:)
19:00:51 [deiu]
Arnaud: let's adjourn the meeting
19:01:04 [deiu]
... on Monday I will host an informative call
19:01:40 [deiu]
... next formal meeting is on the 28th, when I expect all drafts to be ready
19:01:41 [Zakim]
-BartvanLeeuwen
19:02:12 [Arnaud]
adjourned
19:02:18 [Zakim]
-nmihindu
19:02:18 [sandro]
woo hoo!
19:02:22 [deiu]
+1
19:02:28 [Zakim]
-[IPcaller]
19:02:34 [codyburleson]
codyburleson has left #ldp
19:06:49 [Arnaud]
trackbot, end meeting
19:06:49 [trackbot]
Zakim, list attendees
19:06:49 [Zakim]
As of this point the attendees have been codyburleson, BartvanLeeuwen, nmihindu, Arnaud, Ashok, betehess, JohnArwe, roger, sandro, SteveS, TallTed, deiu, ericP, [IPcaller], [IBM]
19:06:57 [trackbot]
RRSAgent, please draft minutes
19:06:57 [RRSAgent]
I have made the request to generate
trackbot
19:06:58 [trackbot]
RRSAgent, bye
19:06:58 [RRSAgent]
I see 1 open action item saved in
:
19:06:58 [RRSAgent]
ACTION: betehess to draft a Linked Data Patch Format, along the lines of Pierre-Antoine's proposal [1]
19:06:58 [RRSAgent]
recorded in | https://www.w3.org/2014/04/17-ldp-irc | CC-MAIN-2021-31 | refinedweb | 6,858 | 63.63 |
Options for creating a source node for a sender or receiver. More...
#include <source_options.hpp>
Options for creating a source node for a sender or receiver.
Options can be "chained". For more information see proton::connection_options.
Normal value semantics: copy or assign creates a separate copy of the options.
Control whether messsages are browsed or consumed.
The default is source::MOVE, meaning consumed.
Control the persistence of the source node.
The default is source::NONDURABLE, meaning non-persistent.
The expiry period after which the source is discarded.
The default is no timeout.
Control when the clock for expiration begins.
The default is source::LINK_CLOSE.
Unsettled API - Specify a filter mechanism on the source that restricts message flow to a subset of the available messages. | http://qpid.apache.org/releases/qpid-proton-0.18.1/proton/cpp/api/classproton_1_1source__options.html | CC-MAIN-2018-34 | refinedweb | 124 | 62.95 |
Step 1) Download the MinGW Installation Manager (mingw-get) automated GUI installer assistant to install the MinGW software. The mingw-get-setup.exe is the file that needs to be downloaded the button for which is at the top of the page.
NOTE: That downloaded will look something like mingw-get-setup.exe(Date: 2017-09-06, Size: 91.00 KB) which might appear dated. Ignore this fact as this is just an installer and when used it will install and up to date compiler toolset.
Step 2) Run that installer downloaded from the above step:
This will bring up the minimal Web installer that will bring up an install wizard to guide you through the installation process as shown below: To install the software do the following:
Code: Select all
c:\temp\mingw-get-setup.exe
a. Select the packages you wish to install by right clicking on the items to be installed.
b. To install the selected packages, use the Installation, Apply Changes menu and select the Apply option in the resulting dialog.
NOTE: At a minimum select the mingw32-base (C compiler) and mingw32-gcc-g++ (C++ compiler) package options or alternatively select all the packages as shown in the image above.
Step 3) When running the installation using the wizard take note of the install folder used. Lets assume the install folder used as shown below:
Step 3) Once the installation is complete use the Windows Control Panel to open up the System icon and use the Environment Variables button found in the Advanced system settings link to add this bin installation folder to the PATH environment variable.
Code: Select all
C:\Program Files (x86)\mingw32\
IMPORTANT: Take extreme care when editing this PATH environment variable. Only add to it. Never delete from it. More details about the PATH can be found here:
For the installation folder noted earlier, this bin folder will need to be added to the PATH:
If the installation folder used had been this:
Code: Select all
C:\Program Files\mingw32\bin
Then the bin folder that needs to be added PATH would be as follows:
Code: Select all
C:\mingw32\
In either case make sure the bin folder used above exists and contains both the C++ compiler (g++.exe ) and the C compiler (gcc.exe) files.
Code: Select all
C:\mingw32\bin
In the steps that follow it is assumed the GNU C++ compiler is being used, but if the GNU C compiler is required, just replace the g++.exe command with the gcc.exe command.
Step 4) Test the PATH settings using the Windows Start Button and run the cmd executable to bring up a command prompt.
From inside that command prompt type in this C++ compiler command line:
Running this command line should result in the following output:
Code: Select all
g++.exe --version
To test the C compiler use this command line:
Code: Select all
g++ (MinGW.org GCC Build-20200227-1) 9.2.0 Copyright (C) 2019 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If you don't see output for either of those commands then the PATH has not been correctly configured or the installation has not worked.
Code: Select all
gcc.exe --version
Step 5)
Test the compiler by creating a simple c:\temp\test.cpp test file using the C++ code shown below:\ g++.exe test.cpp -o test.exe dir test.exe
Running those commands should produce the following output:
Running the c:\temp\test.exe executable produced should result in this output:
Code: Select all
c:\>cd c:\temp\ c:\Temp>g++.exe test.cpp -o test.exe c:\Temp>dir test.exe Volume in drive D is DATA Volume Serial Number is 06EC-1105 Directory of c:\Temp 21/05/2016 02:30 PM 2,707,662 test.exe 1 File(s) 2,707,662 bytes 0 Dir(s) 1,824,970,645,504 bytes free
Extra Zeus IDE Configuration
Code: Select all
Hello world...
g++.exe "$fn" -o "$fdd$fb.exe"
NOTE: The options above are for the GNU C++ compiler. To use the GNU C compiler you need to use the gcc.exe and set the options for that compiler. More details on options for the GNU C compiler refer to this link.
If you only want to syntax check the current file change the command line to be this:
To make the compiler always compile as c++, to increase the warning levels and to turn on the C++11 features just change the command line to be this:
Code: Select all
g++.exe -c "$fn"
To make the compiler treat the file as C code change the command line to be this:
Code: Select all
g++.exe -x c++ -Wall -std=c++11 "$fn" -o "$fdd$fb.exe"
For more compiler options use the Tools, DOS Command Line menu and type in this command line:
Code: Select all
g++.exe -x c -Wall -std=c90 "$fn" -o "$fdd$fb.exe"
To test the compiler settings use the Templates button found on the navigator panel, select the C/C++ Document Type from the list at the top and click on the New File template.
Code: Select all
g++.exe --help
Save the resulting file to c:\temp\test.cpp and then use the Compiler, Compile menu to compile the file.
Code: Select all
#include <iostream> using namespace std; int main() { cout << "Hello world..." << endl; return 0; }
This will produce a c:\temp\test.exe and again use the DOS Command Line menu to run the executable and you should see this output:
Running the Executable for Inside Zeus
Code: Select all
Hello world... MinGW post for details on how to fix this.
Setting up the Language Server
To setup the C/C++ language server follow the instructions found here:
Cheers Jussi | https://www.zeusedit.com/phpBB3/viewtopic.php?f=5&t=8164&p=13236&sid=43be59c1f6932532c727206a2e6e2b83 | CC-MAIN-2021-43 | refinedweb | 993 | 72.46 |
The program below illustrates functions and methods you can use to investigate the allocated (virtual) memory of your process. For instance, you can get and print the limit of your data section, guess the limit of your stack and so on.
One curious thing you’ll notice is that small mallocs appear to allocate memory inside the heap as expected (sbrk(0) will return the top of the heap). But you request a larger memory section from malloc it will actually not use the heap and instead use an mmap() system call to map a file to memory. You can read more about this on the StackOverflow thread I started.
#include <stdio.h> #include <stdlib.h> #include <unistd.h> int globalVar; int main(){ int localVar; int *ptr; printf("localVar address (i.e., stack) = %p\n",&localVar); printf("globalVar address (i.e., data section) = %p\n",&globalVar); printf("Limit of data section = %p\n",sbrk(0)); ptr = malloc(sizeof(int)*1000); printf("ptr address (should be on stack)= %p\n",&ptr); printf("ptr points to: %p\n",ptr); printf("Limit of data section after malloc= %p\n",sbrk(0)); return 0; } | https://www.programminglogic.com/c-program-to-investigate-with-memory-sections/ | CC-MAIN-2019-13 | refinedweb | 190 | 55.24 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Adding Our Context3:03 with James Churchill
Let's add a context class to our project, so we can persist and retrieve data from our database.1.6 -b adding-our-context
Additional Learning
- 0:00
Before we can persist and
- 0:01
retrieve data from our database, we need to add a context class to our project.
- 0:06
The context class is our gateway to the database.
- 0:09
All communication from our application to the database flows through the context.
- 0:14
The context defines the available Entity sets and
- 0:17
manages the relationships between those entities.
- 0:21
It's used to retrieve entities from the database, persist new and
- 0:25
changed entities to the database, and even to remove entities from the database.
- 0:30
When retrieving entities from the database, the context is responsible for
- 0:34
materializing the data from the database into Entity Object Instances.
- 0:39
The context also caches those entity object instances for its lifetime, however
- 0:44
short or long that might be so that it can track changes to those entities.
- 0:50
As we learn about EF and develop our projects,
- 0:52
we'll interact with the context again and again.
- 0:56
Let's see how to add a context class to our project.
- 0:59
Right-click on the project and select Add > Class.
- 1:05
Name the class Context and click Add.
- 1:11
Just like we did with the entity class, go ahead and add the public access modifier.
- 1:17
Then inherit from the EF DbContext class.
- 1:23
Visual Studio will complain that it can't find the type.
- 1:26
So go ahead and add the missing using statement for
- 1:29
the System.Data.Entity namespace.
- 1:34
The DbContext class is a higher level abstraction of EF's object context class.
- 1:40
before the DbContext class was added to EF,
- 1:44
object context was used to load and persist entities.
- 1:48
While object context isn't deprecated,
- 1:51
it's almost never used directly now that we have the DbContext class.
- 1:56
Given that, we'll focus on learning how to use the DbContext class.
- 2:00
Our context class needs to contain a collection of Db set properties.
- 2:05
One property for each indie that we need to write queries for.
- 2:09
Let's add a Db set property for the ComicBook entity.
- 2:14
Public DbSet of type ComicBook.
- 2:20
Add the missing namespace, or ComicBookGalleryModel.Models, and
- 2:27
use the plural version of our entity class name for the property name, ComicBooks.
- 2:34
While not necessary, using the plural version of the entity class name
- 2:37
is a common convention for DB set property names.
- 2:41
Often, you'll add a DB set property for
- 2:43
each entity class that you have in your model.
- 2:46
But sometimes you won't need to add a DB set property for an entity.
- 2:50
We'll see an example of that later in this course.
- 2:53
For now, this is all of the code that our context class needs to contain.
- 2:57
Next, we'll update our console app to persist and
- 3:00
retrieve data using our context. | https://teamtreehouse.com/library/adding-our-context | CC-MAIN-2018-09 | refinedweb | 584 | 72.97 |
Note: This tutorial uses pointers pretty heavily. If don't, you could define a really big array like so:
int array[100000];
There are several problems with this. No matter how big the array is, the user could still have more input. If the user doesn't have that much input, you have wasted memory.
When you are using malloc(), realloc() and free() you need the following header file:
#include <stdlib.h>
First, I will show you how to allocate memory for a pointer. You can declare a pointer like so:
int *pointer;
The pointer can point to any location at first. You should always make it point to something or you can allocate some memory that your pointer will point to. To do this, you need to use the malloc() function. Use it like so:
pointer=malloc(2*sizeof(int));
malloc() returns a void pointer and takes an argument of how many bytes to allocate. Because pointer points to an integer, we use the 2*sizeof(int). Using malloc like the above is similar to doing this:
int array[2];
Error checking:
If the operating system can't allocate more memory for your program, malloc will fail and return a NULL value. It's always a good idea to make sure malloc is successful:
pointer=malloc(1*sizeof(*pointer)); if (pointer==NULL) { printf("Error allocating memory!\n"); //print an error message return 1; //return with failure }
Now I will show you how to use realloc(). You use realloc after you have used malloc to give a pointer more or less memory. Let's say you want to give a pointer 5 integers of memory. The code should look like this:
int *temp = realloc(pointer, 5*sizeof(int)); if ( temp != NULL ) //realloc was successful { pointer = temp; } else //there was an error { free(pointer); printf("Error allocating memory!\n"); return 1; }
This is just like malloc, except realloc takes two arguments. The first argument is the pointer you want to copy the data from. The above code copies pointer to temp, then copies temp back to pointer if everything goes correctly. You may have noticed a new function though, and that is free().
Free is used to free the memory you have allocated with malloc or realloc. All memory that you allocate should be freed when you are done using it. Free takes a pointer as an argument like so:
free(pointer);
Here is an example program that makes use of a dynamic array. Everything you need to know is in the comments.
#include <stdio.h> #include <stdlib.h> /* This program takes input and outputs everything backwards */ int main() { int *data,*temp; data=malloc(sizeof(int)); int c; /* c is the current character */ int i; /* i is the counter */ for (i=0;;i++) { c=getchar(); /* put input character into c */ if (c==EOF) /* break from the loop on end of file */ break; data[i]=c; /* put the character into the data array */ temp=realloc(data,(i+2)*sizeof(int)); /* give the pointer some memory */ if ( temp != NULL ) { data=temp; } else { free(data); printf("Error allocating memory!\n"); return 1; } } /* Output data backwards one character at a time */ for (i--;i>=0;i--) putchar(data[i]); /* Free the pointer */ free(data); /* Return success */ return 0; }
So that's it. Any suggestions for improvement are welcome. +Reputation is very much appreciated.
Edited by Roger, 29 September 2014 - 04:43 PM.
Improved wording | http://forum.codecall.net/topic/51010-dynamic-arrays-using-malloc-and-realloc/ | CC-MAIN-2015-11 | refinedweb | 569 | 64.2 |
Count of palindromic substrings in a string in C++
Problem statement:
Given a string, the task is to count the number of palindromic substrings present in this string. If any 2 substrings are identical but they have different starting or ending index, then they are considered to be different
Example 1:
Input: "abc" Output: 3 Explanation: Three palindromic strings: "a", "b", "c".
Example 2:
Input: "aaa" Output: 6 Explanation: Six palindromic strings: "a", "a", "a", "aa", "aa", "aaa".
Palindromes:
Palindromes are the sequence of characters that reads the same forward and backward. To read more about it, click here.
ex: aaaa, abba, 56765
Naive approach:
Simply, generate all the substrings of the given string and check whether they are palindrome or not. If they are palindrome, increment the count value by 1 and simply return the count value.
Since checking a string for palindrome takes linear time and generating all substrings of a string takes quadratic time so, total time complexity will be cubic. Submitting a solution with cubic time complexity on an online judge will obviously lead to TLE.
Optimized approach:
Idea is to count all the palindrome substrings with odd and even lengths. We can do so by taking every index as a midpoint.
- In the case of odd length palindromes, fix the current index position as the center and iterate in both directions until we hit any extreme end position or the character on the right side of the center index is different than that of the character on the left side.
- For even length palindrome, we take center as the current index and compare it with the character on just one index lower than the center. we keep on iterating to the left side character and right side character and comparing them.
C++ implementation of the above concept:
#include<bits/stdc++.h> using namespace std; int countSubstrings(string s) { int tps = 0, odd, even; int n = s.length(); for (int i = 0; i < n; i++) { odd = 1; while (i - odd >= 0 && i + odd < n && s[i - odd] == s[i + odd]) // for calculating odd-length palindromes with centres s[i] { odd++; } even = 0; while (i - even - 1 >= 0 && i + even < n && s[i - even - 1] == s[i + even]) //for calculating even-length palindromes with centres s[i] { even++; } tps += (odd + even); } return tps; } int main() { string s; cin >> s; cout << countSubstrings(s); return 0; }
If this post added any value to your algorithmic knowledge, please share your valuable feedback in the comment section below. Thank you! | https://www.codespeedy.com/count-of-palindromic-substrings-in-a-string-in-c/ | CC-MAIN-2021-17 | refinedweb | 419 | 64.24 |
Week 13 - Networking and communication
Reading the 10bit-ADC with I2C from a Pi!
Contents
- Contents
- I2C protocol
- Board Tests
- Communication via I2C
- Final Project task
I2C protocol
For this area, I followed the following tutorials on SparkFun and NXP-Philips. I detail some of my learnings from the SparkFun tutorial below (what I grasped).
Very simply, the Inter-integrated Circuit (I2C) Protocol is a protocol intended to allow multiple slave digital integrated circuits (chips) to communicate with one or more master chips. Like the Serial Peripheral Interface (SPI), it is only intended for short distance communications within a single device. Compared to UART (asynchronous RX/TX) where devices need to talk with agreed data clock speed, SPI (the one of MOSI/MISO/SCK.. ), where the number of wires rises with the number of devices, I2C is more convenient since it allows for several masters to be in the same system and for only the masters to be able to drive the data line high. This means that no slave will be able to lock the line in case other is talking:
Image Source: NXP semiconductors
So, I2C bus consists of two signals: SCL and SDA. SCL is the clock signal, and SDA is the data signal. The clock signal is always generated by the current bus master and both lines are pulled up, and therefore the lines are driven high when not used by any slave. Normal values for the pull up resistors could be around 5kΩ. The actual protocol works with messages.
Image Source: Sparkfun
For the I2C protocol on a Attiny device, we need to look into some short of solucion as in here (more on this below).
Notes on Clock-Stretching
Clock stretching is a technique used when the slave is not able to provide the data, either because it’s not ready or because it’s busy with other things. In these situation, it is possible to do clock stretching. Put simply: normally, the clock is driven by the master devices and slaves simply put data on the bus, or take data off the bus in response to the master’s clock pulses. At any point in the data transfer process, an addressed slave can hold the SCL line low after the master releases it. The master is required to refrain from additional clock pulses or data transfer until such time as the slave releases the SCL line.
However, there are some implementations in the Raspberry Pi that don’t allow clock-stretching with the slaves. Sometimes if the AttinyX4 is run at 8MHz, it can provoke this problem, but it’s not guaranteed that one can get away with higher clock speeds. Nevertheless and for this reason, an external resonator with 20MHz will be used.
Board Tests
In this section I will detail the process I followed to obtain readings from the board using a Raspberry Pi. The board was already tested in the Input Devices week and here I will focus on reading it via I2C.
Communication via I2C
The communication between the AttinyX4 and the Raspberry Pi will be done over I2C. The Raspberry Pi I will be using is a model 3, and the pinout can be found in this link.
I will be connecting the 5V power supply to the I2C grove connector, and the SDA, SCL and GND lines to the grove connector ones. This connection right now is done via jumper cable, but it will be substituted by a Raspberry Pi Hat as part of my final project.
For reference, these connectors are in the board:
Setting up the Raspberry Pi
First thing, I will detail my workflow to set up the Raspberry Pi with a MAC. First thing is to download and mount a Raspbian version with Etcher onto the Pi. This can be done easily under the instructions of the official rasperry pi documentation.
Next, would be to connect to the Pi via ssh or VNC (thanks again Victor for the guidance). These are network communication ways to interact with the Pi without the use of a keyboard and a screen attached to the Pi.
- SSH: stands for Secure Shell and it’s a secure way to connect to the Pi’s command line (and really to any known IP address on our same network). It can also redirect programs through the screen via output redirection.
- VNC: stands for Virtual Network Computing and with it we will have access to the Pi’s screen and use the origin’s keyboard and mouse. This will be the procedure I will be using, with VNC Viewer for MAC.
Now, for both these procedures, we need the Pi to be connected to the our same network in order to use it’s IP. In order to discover the Pi’s address, we can use a command like this in MAC:
MY_RANGE=$(ip addr | grep "UP" -A3 | grep '192' -A0 | awk '{print $2}') && nmap -sn $MY_RANGE && arp -na | grep b8:27:eb
Where we are using
grep and
awk in order to retrieve the host network (normally something like 192.168.0.1/24). Then, this is used by a network scanner such as nmap to find a device with a MAC address that contains the
b8:27:eb part of the Pi.
This command would give us something like
192.168.0.133 and it can be used for us to connect to the Pi.
Next, we need to turn on the Pi’s I2C with:
sudo raspi-config
And then going to Interface > I2C > Enable I2C. Next, we need to install a couple of libraries (depending on the Pi’s version) for the Pi to detect and interact with the I2C interface. Normally, they are present in the newest versions, but in case not, we can activate it through this command.
Finally, as a last check, we need to install
i2c-tools in the Pi:
sudo apt-get install -y i2c-tools
And with it we can perform a first check on the Pi’s network, if anything connected (for this test I connected a SHT31 temperature sensor, which address is normally 0x44):
pi@raspberrypi:~/$ i2cdetect -y 1 0 1 2 3 4 5 6 7 8 9 a b c d e f 00: -- -- -- -- -- -- -- -- -- -- -- -- -- 10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 40: -- -- -- -- 44 -- -- -- -- -- -- -- -- -- -- -- 50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 70: -- -- -- -- -- -- -- --
Now we are all set up with the Pi’s network and we can move on to the Attiny84.
Attiny I2C library
Now, the Attiny has no TWI (two wire interface) in hardware for the I2C communication, however, they come with a USI (Universal Serial Interface) that can be used to implement via software a TWI interface compatible with I2C (page 121 of the datasheet) and also explained in a very complex way in this Atmel ANN.
For the Attiny Slave side, I will be using this implementation for the USI as a TWI in the Attiny84. There is a good example in this repository that reads a photoresistor with an Attiny and sends it over via I2C, onto which I will be basing my code.
A picture of the setup is shown below:
Through the FabISP we will be programming the board as specified above, and reading the sensor using a Python code in the Raspberry Pi. The values we will be reading are 10-bits (the resolution of the ADC in the tiny) and therefore we should be splitting them in at least 4 bytes: 2 bytes is the case used by the example, but if we want to send full resolution, we need to go from 16 bits to 32 (not possible to go to 24 since it’s not power of 2). With this operation we can split the values into 4 bytes and store each of the in the variable
i2c_regs:
i2c_regs[0] = pressureSmooth >> 24 & 0xFF; i2c_regs[1] = pressureSmooth >> 16 & 0xFF; i2c_regs[2] = pressureSmooth >> 8 & 0xFF; i2c_regs[3] = pressureSmooth & 0xFF;
We also need to specify the address in the code and set it up (I chose the address 13). Also, I include below the definition of the i2c registers and the library initialisation as a SLAVE:
/* * Set I2C Slave address */ #define I2C_SLAVE_ADDRESS 0x13 #ifndef TWI_RX_BUFFER_SIZE #define TWI_RX_BUFFER_SIZE ( 16 ) #endif // I2C Stuff volatile uint8_t i2c_regs[] = { 0, //older 8 0, 0, 0 //younger 8 }; void setup() { /* * Setup I2C */ TinyWireS.begin(I2C_SLAVE_ADDRESS); TinyWireS.onRequest(requestEvent); }
Finally, the library comes with an interrupt callback under I2C request to send the data over I2C. This function will be triggered everytime the master requests a value and will send the value needed:
void requestEvent() { TinyWireS.send(i2c_regs[reg_position]); reg_position++; if (reg_position >= reg_size) { reg_position = 0; } }
We need to time this properly between both of them, so that the Pi receives the data in order. For that, I will be taking averages of the measurements in the attiny with a smoothing function, using a timer to take these measurements:
int smooth(int data, float filterVal, long smoothedVal){ if (filterVal > 1){ // check to make sure params are within range filterVal = .99; } else if (filterVal <= 0){ filterVal = 0; } smoothedVal = (data * (1 - filterVal)) + (smoothedVal * filterVal); return (int)smoothedVal; } void loop() { // What time is it? unsigned long currentMillis = millis(); // Check if we have passed the minimum time between measurements: if (abs(currentMillis - lastReadout) > MAX_TICK) { int sensorReading = analogRead(SENSOR); /* ** ** Convert the values ** */ // Smooth them pressureSmooth = smooth(pressure, LPF_FACTOR, pressureSmooth); // in Pa // Send it over I2C i2c_regs[0] = pressureSmooth >> 24 & 0xFF; i2c_regs[1] = pressureSmooth >> 16 & 0xFF; i2c_regs[2] = pressureSmooth >> 8 & 0xFF; i2c_regs[3] = pressureSmooth & 0xFF; // Update the time lastReadout = currentMillis; }
WiringPi and SM.bus libraries
I tested two libraries for the I2C communication: WiringPi and SM.bus. Both of them connect properly to the I2C and read the data, but I finally used SM.bus for the final example (I found it more robust, but very likely I am not doing it that well with the WiringPi). Nevertheless, I detail below both workflows for reference:
WiringPi
The code for it is below, assuming 4 packs of data MSB (most significant first):
// Compile this using g++ SHT31.cpp -lwiring -o SHT31 // And then run it with ./SHT31 #include <wiringPiI2C.h> #include <iostream> using namespace std; int fd, reading; int transmission; int i = 0; int packet_size = 2; int main(){ fd = wiringPiI2CSetup(0x13); while (1) { transmission = wiringPiI2CRead (fd); //Print out the result cout << "Transmission" << endl; cout << transmission << endl; reading += (transmission << 8*(i+1)); i++; if (i == packet_size) { cout << "Reading" << endl; std::cout << reading << std::endl; reading = 0; i = 0; } //~ //Print out the result //~ cout << "Transmission" << endl; //~ cout << transmission << endl; //~ cout << "Reading" << endl; //~ std::cout << reading << std::endl; } }
In order to compile and execute the program, we need to use g++ (gnu C compiler) and link it with WiringPi (important!) in the terminal
g++ TestWiringPi.cpp -lwiringPi -o TestWiringPi
Next, when we run it in the terminal:
./TestWiringPi
SM.Bus
The code is below, with the same 4 packs assumption MSB:
import smbus import time bus = smbus.SMBus(1) # Indicates /dev/i2c-1 address = 0x13 packet_size = 4 def ReadSensor(_address): i = 0 _value = 0 while (i < packet_size): _measure = bus.read_i2c_block_data(_address, 0, 1) #~ print "Measure" #~ print _measure _value |= _measure[0] << (8*(packet_size-(1+i))) i+=1 return _value while True: result = ReadSensor(address) #~ print "Result" print result time.sleep(1)
And then run it (no need to compile it since python is an interpreted language):
python TestSMBus.py
With SM.Bus, the results in DPa (I know, weird units) are:
Which are very representative of a normal sea level atmospheric pressure (~100kPa)!
As a final note, below, we find how the result is built:
Where 40 corresponds to 40 « 8 and 10240 (to be summed to the last value).
Final Project task
Note: All the designs below are available here
Here, I will detail the process followed to mill and design a Raspberry Pi Hat with 4 I2C Grove connectors that will connect to the different elements on my Final Project.
Design in KiCad
The schematic is pretty simple. I will be using the already created I2C connectors from previous assignments, as well as the generic 2x20 header:
The PCB layout looks like:
Then, the different milling strategies are exported to png. For the traces:
For the inner cuts (remember these ones first!):
For the outter cut:
Final Result
These are cut in the Modela MDX-20, with the following result after soldering:
| http://fab.academany.org/2018/labs/barcelona/students/oscar-gonzalezfernandez/2018/05/02/Week-13-Networking-and-communication.html | CC-MAIN-2021-43 | refinedweb | 2,055 | 63.22 |
Suppose, we are given a binary tree and a node that is situated at the leaf of the binary tree.We have to make the leaf node the root node of the binary tree. We can do it in the following way −
If a node has a left child, it becomes the right child.
A node's parent becomes its left child. In this process, the parent node's link to that node becomes null, so it will have only one child.
The node structure of the tree is like below −
TreeNode: data: <integer> left: <pointer of TreeNode> right: <pointer of TreeNode> parent: <pointer of TreeNode>
We have to return the root of the converted tree.
So, if the input is like
and the new root is 8; then the inorder representation of the converted tree will be − 2, 3, 4, 5, 7, 6, 8,
The new root node of the tree is 8.
To solve this, we will follow these steps −
Define a function helper() . This will take node, new_par
if node is same as root, then
parent of node := new_par
if left of node is same as new_par, then
left of node := null
if right of node is same as new_par, then
right of node := null
return root
if left of node is not null, then
right of node := left of node
if left of parent of node is same as node, then
left of parent of node := null
left of node := helper(parent of node, node)
parent of node := new_par
return node
return helper(leaf, null)
Let us see the following implementation to get better understanding −
import collections class TreeNode: def __init__(self, data, left = None, right = None, parent = None): self.data = data self.left = left self.right = right self.parent = parent def insert(temp,data): que = [] que.append(temp) while (len(que)): temp = que[0] que.pop(0) if (not temp.left): if data is not None: temp.left = TreeNode(data, parent = temp) else: temp.left = TreeNode(0, parent = temp) break else: que.append(temp.left) if (not temp.right): if data is not None: temp.right = TreeNode(data, parent = temp) else: temp.right = TreeNode(0, parent = temp) break else: que.append(temp.right) def make_tree(elements): Tree = TreeNode(elements[0]) for element in elements[1:]: insert(Tree, element) return Tree def search_node(root, element): if (root == None): return None if (root.data == element): return root res1 = search_node(root.left, element) if res1: return res1 res2 = search_node(root.right, element) return res2 def print_tree(root): if root is not None: print_tree(root.left) print(root.data, end = ', ') print_tree(root.right) def solve(root, leaf): def helper(node, new_par): if node == root: node.parent = new_par if node.left == new_par: node.left = None if node.right == new_par: node.right = None return root if node.left: node.right = node.left if node.parent.left == node: node.parent.left = None node.left = helper(node.parent, node) node.parent = new_par return node return helper(leaf, None) root = make_tree([5, 3, 7, 2, 4, 6, 8]) root = solve(root, search_node(root, 8)) print_tree(root)
root = make_tree([5, 3, 7, 2, 4, 6, 8]) root = solve(root, search_node(root, 8))
2, 3, 4, 5, 7, 6, 8, | https://www.tutorialspoint.com/program-to-change-the-root-of-a-binary-tree-using-python | CC-MAIN-2021-39 | refinedweb | 534 | 76.22 |
41455/flask-pagedown-and-mathjax
Hey, all the tools that you have ...READ MORE
If you want to concatenate int or ...READ MORE
You probably want to use np.ravel_multi_index:
[code]
import numpy ...READ MORE
show() is just a convenience function for ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
Enumerate() method adds a counter to an ...READ MORE
You absolutely can use nameko and Flask together.
In that ...READ MORE
raw_input() is not used in Python 3. Use input() ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/41455/flask-pagedown-and-mathjax | CC-MAIN-2022-21 | refinedweb | 129 | 79.06 |
The Object Manager is the Windows NT Executive subsystem that receives the least amount of attention or recognition. Ironically, the Object Manager provides a resource management support infrastructure that all other NT Executive subsystems (including the Memory Manager, I/O Manager, and Process Manager) rely on. The Object Manager is a support subsystem that performs its work behind the scenes. As an NT systems administrator or a Win32 programmer, you probably will never interact directly with the Object Manager; however, almost everything you do, from opening files to starting a program to viewing the Registry, requires its assistance.
In this tour of the Object Manager, I'll first describe where the Object Manager fits into NT's architecture, the role it plays in common operations, and the services it provides. Next, I'll explain how NT's subsystems define object types and what kind of information the Object Manager tracks. Finally, I'll look at the Object Manager namespace, which is the doorway to file system namespaces and the Registry namespace.
Resource Management
A major part of NT's role is managing a computer's physical and logical resources, such as physical and virtual memory, disks, files, processes, threads, synchronization primitives (semaphores, events, etc.), printers, and video displays. NT must provide a mechanism whereby programs can look up resources, share them, protect them, read and modify their attributes, and interact with them. Thus, resource management encompasses tracking the state of resources, allowing only actions consistent with their state, and an API so that programs can manipulate them.
Files are a visible example of a common resource. NT provides an API for creating and opening files; modifying their attributes (hidden, read-only, etc.); and on NTFS, ensuring that programs honor file security settings. Programs expect these capabilities, and NT implements its resource management (i.e., the way NT efficiently keeps track of a file's state) hidden from applications. The operating system does the work of tracking the state and protecting resources.
In NT, the typical way a program accesses a resource such as a file is to open or create the resource and then manipulate it. Usually, a resource is assigned a name when it's created so that programs can share it. To look up or open an existing shared resource or a global resource (e.g., a file system, disk, or other physical device attached to the system), a program specifies the resource's name. However, programs often create unnamed resources, which typically are logical resources (e.g., synchronization primitives), that an application will privately use.
Regardless of whether resources are physical resources (such as disk drives and keyboards) or logical resources (such as files and shared virtual memory), NT represents them as object data structures, which the Object Manager defines. Objects are shells that other NT Executive subsystems can fill in so that they can build custom object types to represent the resources they manage. The Object Manager tracks information that is independent of the type of resource an object represents; the subsystem-specific core of an object contains data relevant to a particular resource.
The reason NT builds objects using the Object Manager's infrastructure is simple: Each subsystem does not need to reinvent the wheel to track the system-related state of an object, look up its resources, charge applications for the memory required to allocate an object, and protect resources with security. By concentrating these functions in the Object Manager, Microsoft can share code across subsystems, write and validate NT's security code once, and apply the same naming conventions to all resources.
Object Types
The Object Manager's duties involve creating object types, creating and deleting objects, querying and setting an object's attributes, and locating objects. As I mentioned previously, Executive subsystems (including the Object Manager) create objects that represent the resources they manage. Before a subsystem (e.g., the I/O Manager) can tell the Object Manager to make an object (e.g., a file), the Object Manager must define the underlying object type that the subsystem will instantiate the object from. Object types, like classes in C++ parlance, store information common to all objects representing the same type of resource. This information includes statistical information and pointers to the method procedures that the subsystems will invoke when they perform actions on an object of the corresponding type. Thus, when the Executive subsystems initialize, they call a function in the Object Manager to define their object types.
For example, open files require resource management, and the I/O Manager subsystem builds file objects (with help from the Object Manager) to track open files. Inside the file object's body, the I/O Manager keeps track of the file's name, its logical volume, and whether the file is marked for deletion; the file object also provides storage for private data associated with the file system driver that the file belongs to.
Other examples of subsystem object types include process objects, which represent active processes; section objects, which represent memory-mapped files or shared memory; and device objects, which represent physical or logical devices. Table 1 summarizes the 23 object types NT 4.0 defines.
As with objects, NT implements object types as data structures. Figure 1 shows typical information an object type data structure stores. The object type's statistical data is useful for system monitoring. This data includes information such as the name of the object type, how many objects of that type currently exist, the maximum number of objects that have existed at any time, and the default amount of memory that NT charges a process each time the subsystem creates an object of that type.
Object type procedures or methods are what really differentiate object types. When a subsystem creates an object type, the subsystem passes to the Object Manager a data structure that contains pointers to all the object type procedures. The Object Manager calls these procedures when the subsystem requires actions performed on an object. For example, if a subsystem wants to close an object, the Object Manager first calls the Okay-To-Close Procedure (if the subsystem has specified one). If that procedure returns a FALSE value, an error returns to the closer to signal that the object cannot be closed. If the procedure returns a TRUE value, the Object Manager then calls the Close Procedure so that the subsystem responsible for the object can clean up the object's state. Each object type procedure receives a predefined list of parameters that includes information pertinent to the requested operation. The call-out mechanism the Object Manager uses for object types lets subsystems see and control the actions taken on the objects (and therefore, the resources) that they manage.
I'll describe the purpose of most of the object type procedures throughout the rest of the article, but I'll comment on two unrelated procedures that appear in Figure 1 now. The Dump Procedure is not defined for any object type, nor does NT ever reference it. Microsoft developers probably relied on dump procedures in the early days of NT, but they removed the code after they had the basic object building blocks in place and working. The Security Procedure lets subsystems implement object security schemes that differ from NT's default security policies. NT falls back on its default security policies if a subsystem does not define a Security Procedure.
Objects
When a subsystem directs the Object Manager to create an object, the subsystem passes the Object Manager a pointer to an object type, which serves as the connection to the object type's global data and procedures. Other parameters include an optional name and security information to be applied to the new object, the size of the subsystem-specific body of the object, and pool charges that can override the default charges in the object type.
To the subsystem creating the new object, the Object Manager returns a pointer to the body of the new object. In this body area, subsystems can store data that tracks the state of the resource the object represents. Preceding this body (effectively private to the Object Manager) is an object header, which Figure 2 illustrates. The object header stores the name of the object, the parameters passed to the creation function, the object's security attributes, and a pointer to the object type.
Two fields in the object header, Handle Count and Reference Count, track different kinds of references to the object. Handle Count refers to the number of times applications have opened the object. When a program opens or creates a resource, NT's APIs return a special value, or handle, that the program can use to refer to the open resource. APIs that manipulate a resource's state use the resource's handle in lieu of its name; thus, handles provide a convenient way to refer to open resources.
Although handles are opaque values to applications, handles reference entries in a process' handle table (also shown in Figure 2). A handle table is a dynamically managed array (i.e., no hard upper ceiling exists for how large the array can become) that the Object Manager indexes via handles to locate the objects the handles refer to. Handles are process specific, so two processes can have different handle values for the same open resource.
When a program closes an object, the Object Manager calls the object type's Close Procedure and passes it the object's handle count. One example of a subsystem that monitors object handle counts is the I/O Manager, which notifies file systems when it closes all handles for a file object so that the file system can perform necessary cleanup operations. At that time, a file system will delete a file marked for deletion, because no application is using it.
The second field tracked in object headers, Reference Count, is the total number of references to an object. Operating system components can reference or create objects without going through the NT API, and consequently, do not require handles. Reference Count records the number of handles for an object plus the number of active references that operating system components make to the object. The Object Manager uses this count to determine when the system no longer needs an object. When Reference Count drops to zero, nothing in the system is using the object, so the system can remove the object's state and storage. The Object Manager will call an object type's Delete Procedure (which eliminates the object, not the resource the object represents) with the object as a parameter.
Locating Objects
Up to this point, I've avoided the details about how applications open an object by specifying the object's name. Every NT object that has a name lives within the Object Manager namespace. This namespace, which is very much like the familiar file system namespace, consists of directories that contain subdirectories or objects. In fact, you enter the file system namespace (with names like C:\temp\file.txt) and the Registry namespace (with names like Registry\Machine) via the Object Manager namespace. First, let's look at the Object Manager namespace, and then we'll look at how NT embeds alternative namespaces within it.
If you study Table 1, you'll see the Object Manager's Directory and SymbolicLink object types. NT uses these object types to define the Object Manager namespace. The optional name given to an object when it's created locates the object within the namespace. When NT is initializing, various subsystems create directories in the Object Manager namespace. The I/O Manager creates a \device subdirectory that it will use to store named device objects, and the Object Manager creates a subdirectory called \??. The \?? subdirectory holds objects accessible via the Win32 API. Thus, any Object Manager resource referenced from Win32 must have a corresponding named object in this subdirectory.
For example, serial ports named COM1, COM2, and so forth in Win32 have objects with those names in the \?? subdirectory. You'll also find C: and other drive letters in this directory. The objects with those names are symbolic link objects, which point (with alternative names) at objects elsewhere in the namespace. Drive letters point to the \device subdirectory at device objects that have names associated with the hard disk partitions they reside on. For example, C: might point at \device\harddisk0\partition1.
An object type's Parse Procedure is what lets NT connect alternative namespaces to the Object Manager namespace. When the Object Manager performs a name lookup and encounters an object, the Object Manager checks to see whether the object's type has a Parse Procedure, and calls it. The subsystem managing the object type can then take the remaining portion of the name and perform a lookup within the subsystem's namespace.
The same sequence of events happens when you open C:\temp\file.txt from a Win32 program. First, Win32 translates the name to \??\C:\temp\file.txt. Next, NT calls the Object Manager's name-parsing routine, which locates the C: symbolic link object in the \?? directory. The Object Manager then looks up \device\harddisk0\partition1, which the symbolic link points to, and finds a device object. The Object Manager passes the rest of the name, \temp\file.txt, to the I/O Manager's device object type Parse Procedure, which locates the file system responsible for C: and hands it the name.
You enter the Registry namespace similarly via the \Registry key object type Parse Procedure. Figure 3 presents a simplified depiction of these three namespaces and how they are connected. (In Figure 3, HKLM stands for HKEY_CU stands for HKEY_CURRENT_USER.)
Most Win32 programmers and systems administrators don't know about the Object Manager namespace because they don't need to know about it to open files and Registry keys. However, you can use native NT system services to obtain information about what's in the namespace. With the Win32 software development kit (SDK), Microsoft provides the WinObj tool, which will display the namespace as if you were browsing with Explorer. Unfortunately, the WinObj tool has several significant bugs that cause it to display incorrect information (e.g., inaccurate handle and reference counts).
Another version of the WinObj tool, which you can get at, doesn't suffer from the same problems as Microsoft's WinObj tool, and it displays additional information about certain object types. Screen 1 shows the view of the Object Manager's \?? directory that this alternative WinObj tool displays. One subdirectory worth noting is the ObjectTypes subdirectory, which contains all the defined object types.
A Little Knowledge Goes a Long Way
Although you can get along just fine managing or programming NT without knowing about the Object Manager, some familiarity with it is useful. For example, using the Control Panel's Ports applet, you unfortunately can direct NT to create invalid serial ports. The WinObj tool lets you look in the \?? subdirectory of the Object Manager namespace for COM objects and determine which serial ports really exist. Even if you don't run into such problems, knowledge of NT object management can give you a better understanding of NT's architecture and the Win32 API. | http://www.itprotoday.com/management-mobility/inside-nts-object-manager | CC-MAIN-2018-09 | refinedweb | 2,538 | 51.58 |
This was another shield that grabbed my attention, this LCD was used on some older nokia phones. You can find it on the internet at very reasonable rates , I have purchased this as a shield for an Arduino but you can also get a bare module and wire the display up to your Arduino
LCD Features
- SPI Interface (using Arduino Digital Pin 2,3,4,5,6)
- A Reset button
- A 5 degree joystick (using Arduino Analog Pin 0)
- Backlit control (Arduino DIO Pin 7)
Below the LCD is a four-way joystick with a built in button. This is connected to analog pin 0 via a resistor network. Here you can see a photograph of the shield and its running the example you will create later
lcd4884 shield
Reading the joystick position is accomplished via analogRead(0);. Setup some Serial debugging, open the Serial monitor and move the joystick around and take note of the values. In your sketch you can use an if conditional statement to perform an action based on the value returned
The shield requires an Arduino library which is in the link at the bottom of the page.
The example below initiliases the display, clears the screen and the displays some text. Method 1 uses constants for the x and y positions, method 2 hard codes these values. As you can see there is a function called lcd.LCD_write_string that displays text on the screen
OK, download the library from the link at the bottom, copy it into the Libraries folder and lets get started
Code
#include "LCD4884.h" #define MENU_X 1 #define MENU_Y 1 void setup() { lcd.LCD_init(); lcd.LCD_clear(); lcd.LCD_write_string(MENU_X, MENU_Y, "test screen 1", MENU_HIGHLIGHT ); lcd.LCD_write_string(1, 3, "test screen 2", MENU_HIGHLIGHT ); } void loop() { }
Useful Links
LCD4484s library Download Link
Amazon US link – SainSmart Graphic LCD4884 Shield for Arduino
Amazon UK link – ATmega2560 + Graphic LCD4884 Shield for Arduino | http://www.arduinoprojects.net/lcd-projects/lcd4884-shield-example.php | CC-MAIN-2020-50 | refinedweb | 318 | 64.64 |
You need to sign in to do that
Don't have an account?
Trigger to update lead status when activity is logged
trigger changeLeadStatus on Task (before insert, before update) {
String desiredNewLeadStatus = 'Working';
List<Id> leadIds=new List<Id>();
for(Task t:trigger.new){
if(t.Status=='Completed'){
if(String.valueOf(t.whoId).startsWith('00Q')==TRUE){//check if the task is associated with a lead
leadIds.add(t.whoId);
}//if 2
}//if 1
}//for
List<Lead> leadsToUpdate=[SELECT Id, Status FROM Lead WHERE Id IN :leadIds AND IsConverted=FALSE];
For (Lead l:leadsToUpdate){
l.Status=desiredNewLeadStatus;
}//for
try{
update leadsToUpdate;
}catch(DMLException e){
system.debug('Leads were not all properly updated. Error: '+e);
}
}//trigger
The process did not set the correct Type value on submitting for approval
Challenge not yet complete... here's what's wrong:
The process did not set the correct Type value on submitting for approval
I'm not sure why it isn't approving. Please help!
Here is mine version of that Approval process which worked perfectly.
You will notice that in Approval steps I don't have any rejection step which is marked as red bold in your Approval process. remove that criteria and it should work.
Thanks,
Himanshu
.
Create an Apex class that uses the @future annotation to update Account records.
Create an Apex class with a method using the @future annotation that accepts a List of Account IDs and updates a custom field on the Account object with the number of contacts associated to the Account. Write unit tests that achieve 100% code coverage for the class.
Create a field on the Account object called 'Number_of_Contacts__c' of type Number. This field will hold the total number of Contacts for the Account.
Create an Apex class called 'AccountProcessor' that contains a 'countContacts' method that accepts a List of Account IDs. This method must use the @future annotation.
For each Account ID passed to the method, count the number of Contact records associated to it and update the 'Number_of_Contacts__c' field with this value.
Create an Apex test class called 'AccountProcessorTest'.
The unit tests must cover all lines of code included in the AccountProcessor class, resulting in 100% code coverage.
Run your test class at least once (via 'Run All' tests the Developer Console) before attempting to verify this challenge.
public class AccountProcessor
{
@future
public static void countContacts(Set<id> setId)
{
List<Account> lstAccount = [select id,Number_of_Contacts__c , (select id from contacts ) from account where id in :setId ];
for( Account acc : lstAccount )
{
List<Contact> lstCont = acc.contacts ;
acc.Number_of_Contacts__c = lstCont.size();
}
update lstAccount;
}
}
and
@IsTest
public class AccountProcessorTest {
public static testmethod void TestAccountProcessorTest(){
Account a = new Account();
a.Name = 'Test Account';
Insert a;
Contact cont = New Contact();
cont.FirstName ='Bob';
cont.LastName ='Masters';
cont.AccountId = a.Id;
Insert cont;
set<Id> setAccId = new Set<ID>();
setAccId.add(a.id);
Test.startTest();
AccountProcessor.countContacts(setAccId);
Test.stopTest();
Account ACC = [select Number_of_Contacts__c from Account where id = :a.id LIMIT 1];
System.assertEquals ( Integer.valueOf(ACC.Number_of_Contacts__c) ,1);
}
}
Error "Executing against the trigger does not work as expected."
I have checked the name of the class, task name as mentioned in the challenge description.
I have copied the code below :
trigger ClosedOpportunityTrigger on Opportunity (before insert, before update) {
List<Task> taskList = new List<Task>();
//If an opportunity is inserted or updated with a stage of 'Closed Won'
// add a task created with the subject 'Follow Up Test Task'.
for (Opportunity opp : [SELECT Id,Name FROM Opportunity
WHERE Id IN :Trigger.new AND StageName = 'Closed Won']) {
//add a task with subject 'Follow Up Test Task'.
taskList.add(new Task(Subject='Follow Up Test Task', WhatId = opp.id ));
}
if (taskList.size() > 0) {
insert taskList;
}
Thank you
Pierre-Alain
Please select this as a best answer.
Is there a way to perform validation for apex:inputField?
For example, there are 2 field, users name and email. Users cannot leave the name field blank, if so, I want to show a message of "This field is required to fill in" beside the field. And for email field, I want to have another message to remind users to insert a valid email.
Is there anyway in visualforce to do so? Thanks for your help.
You can add something like this:
1. Check the Empty
<apex:page
<script>
function show()
{
var name=document.getElementById('page:f1:p1:ip1').value;
if(name== "" || name==null)
{
document.getElementById("page:f1:p1:op2").innerHTML = "Please enter your name";
}
}
</script>
<apex:form
<apex:pageblock
<apex:outputlabel
<apex:inputtext
<apex:commandbutton
<apex:outputlabel
</apex:pageblock>
</apex:form>
</apex:page>
---> Besides salesforce has the on field validation, so u can have that option.
2. create formula syntax
Regression : '([a-zA-Z0-9_\\-\\.]+)@(((\\[a-z]{1,3}\\.[a-z]{1,3}\\.[a-z]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3}))'
Thanks
Aniket
Difference between Process Builder and Flows with example ?
Thanks in Advance.
AJ
Process Builder:
Process Builder is.
I suggest you, to complete the trailhead of process builder it has a better example to make you clear about it.
Flow:
Flow is a powerful business automation tool that can manipulate data in Salesforce in a variety of ways. Such an application can be created right from the org’s setup with just drag-drop/point-click. The ease of creating flows makes it the number one go-to tool when it comes to complex business requirements
The trailhead for process builder:
Useful link for Flow with an example:
I hope you find the above solution helpful. If it does, please mark as Best Answer to help others too.
Thanks and Regards,
Deepali Kulshrestha.
Generate an Apex class using WSDL2Apex and write a test class.
The Challenge is as follows:
Generate an Apex class using WSDL2Apex and write a test class.
Generate an Apex class using WSDL2Apex for a SOAP web service, write unit tests that achieve 100% code coverage for the class using a mock response, and run your Apex tests.
Use WSDL2Apex to generate a class called 'ParkService' in public scope using this WSDL file. After you click the 'Parse WSDL' button don't forget to change the name of the Apex Class Name from 'parksServices' to 'ParkService'.
Create a class called 'ParkLocator' that has a 'country' method that uses the 'ParkService' class and returns an array of available park names for a particular country passed to the web service. Possible country names that can be passed to the web service include Germany, India, Japan and United States.
Create a test class named ParkLocatorTest that uses a mock class called ParkServiceMock to mock the callout response.
The unit tests must cover all lines of code included in the ParkLocator class, resulting in 100% code coverage.
Run your test class at least once (via 'Run All' tests the Developer Console) before attempting to verify this challenge.
The error I receive when checking the challencge is:
Challenge Not yet complete... here's what's wrong:
Executing the 'country' method on 'ParkLocator' failed. Make sure the method exists with the name 'country', is public and static, accepts a String and returns an array of Strings from the web service.
Here is the code I am using:
public class ParkLocator { public static String[] country(String ctry) { ParkService.ParksImplPort prk = new ParkService.ParksImplPort(); return prk.byCountry(ctry); } }
and
@isTest global class ParkServiceMock implements WebServiceMock { global void doInvoke( Object stub, Object request, Map<String, Object> response, String endpoint, String soapAction, String requestName, String responseNS, String responseName, String responseType) { // start - specify the response you want to send ParkService.byCountryResponse response_x = new ParkService.byCountryResponse(); List<String> myStrings = new List<String> {'Park1','Park2','Park3'}; response_x.return_x = myStrings; // end response.put('response_x', response_x); } }
and
@isTest private class ParkLocatorTest { @isTest static void testCallout() { // This causes a fake response to be generated Test.setMock(WebServiceMock.class, new ParkServiceMock()); // Call the method that invokes a callout List<String> result = new List<String>(); List<String> expectedvalue = new List<String>{'Park1','Park2','Park3'}; result = ParkLocator.country('India'); // Verify that a fake result is returned System.assertEquals(expectedvalue, result); } }
Any help which can be provided is greatly appreciated. If you could advise me at raadams173@gmail.com if you reply with a solution, I can log in to check it.
Thanks.
Ryan
Use below code for ParkLocator class.
public class ParkLocator { public static String[] country(String country){ ParkService.ParksImplPort parks = new ParkService.ParksImplPort(); String[] parksname = parks.byCountry(country); return parksname; } }If this not resolves the problem then use a new Developer Org for completing the Challenge.
Let me know if this helps :)
How to automatically authorize an application without user interaction
I dont know much about PHP, i am posting generic CURL statement that will be hlepful to understand.
- Use Username and passwords to login to saleforce and get the access token, which when appended with REST calls to salesforce will give data back.For this someone has to create an App inside Salesforce whic will provide client secret and ID
curl https://<instance>.salesforce.com/services/oauth2/token -d "grant_type=password" -d "client_id=myclientid" -d "client_secret=myclientsecret" -d "mylogin@salesforce.com" -d "password=mypassword123456"
</pre>
This will return JSON format data which will contain acceess_token to be used for further calls. and than use REST URL as follows
<pre>
curl -H "Authorization: Bearer "THIS_IS_THE_ACCESS_TOKEN_RECEIVED_EARLIER>"
</pre>.
1) u create project obj and API project__C.
2) choose date type of project...text (not for any number)
3).in this create field priority with API priority__C.
4) in security controlls owd set public read only..on project obj& also give sharing rule name as ur wish.(i am given that private)
5) if all ready exist Training Coordinator in ur role highrarchey then don.t add..its..if not there add Training Coordinator.on under ceo role.
then u got 500 points........ok bye..
You need to create the same trigger on the Event Object. Events and Tasks are both activities, but they are separate objects (They are special in that way). You already have the code, just copy it for Events and you should be good!
Hope this helps! | https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2 | CC-MAIN-2019-35 | refinedweb | 1,678 | 57.67 |
No project description provided
Project description
CoreNLG
Contents :
- General Information
- Default text processing
- CoreNLG functions
- CoreNLG classes
- Quick start
General Information
CoreNLG is an easy to use and productivity oriented Python library for Natural Language Generation.
It aims to provide the essential tools for developers to structure and write NLG projects.
Auto-agreement tools based on extra-resources are not provided in this library.
Default text processing
Typographical conventions
You can chose a language (French or English) and typography will be automatically handled based on it.
For example:
In French 'Ma liste d'éléments:' becomes "Ma liste d'éléments :".
In English "My list of items :" will become "My list of items:"
A period will always be followed by a capitalized word.
Automatic contractions
Contractions are automatically handled based on the selected language (French or English).
word_1 = 'le dépassement' word_2 = 'les hausses' self.free_text('À cause de', word_1) # "À cause du dépassement" self.free_text('À cause de', word_2) # "À cause des hausses"
CoreNLG functions
free_text
The free_text method takes multiple strings or nested list/tuple of strings and return a string where each parameter is separated by a space. It aims to avoid forgetting the spaces between each element of a string when concatenating it.
self.free_text( "The variation of the", indicator.label, "is", "positive" if indicator.variation > 0 else "negative" if indicator.variation < 0 else "not significant", "compared to last year." ) self.free_text( "We can also use collection of strings as parameter,", ( "if the next is true", "this text will be written" ) if test else ( "else, we will", "have this text" ), "." )
nlg_syn and post_eval
The nlg_syn method takes multiples strings as parameters and return a string based on two modes.
def synonym(self, *words, mode="smart")
- "random": one of the strings in parameter will be chosen randomly.
- "smart": the chosen string will be the best as possible considering previously chosen synonyms in order to avoid repetitions.
# Basic use self.free_text( 'I was', self.nlg_syn('hungry', 'starving'), 'so I decided to eat', self.nlg_syn('one apple', 'three apples'), '.' ) # Synonyms trees can be made self.free_text( 'I was', self.nlg_syn( 'hungry so I decided to eat ' + self.nlg_syn('one apple', 'three apples'), 'starving and I went to the restaurant' ), '.' )
As you build complex structure, you will want to know at some point what word will be chosen to be able to match the rest of the sentence with it.
Instead of a string, you can send a tuple as an argument to the nlg_syn method :
self.nlg_syn( 'one', ('three', 'PLURAL') )
You can now use the post_eval method which is defined as follow :
def post_eval( key_to_check, string_to_write_if_active='', string_to_write_if_inactive='', deactivate_the_key=False )
You can now build sentences like that :
self.free_text( 'I decided to eat', self.nlg_syn( 'one', ('three', 'PLURAL') ), self.post_eval('PLURAL', 'apples', 'apple', True), '.' ) # This will give you either "I decided to eat one apple." or "I decided to eat three apples." # The 'PLURAL' key is now deactivated so next post_eval method would not find it.
nlg_enum and nlg_iter
The nlg_enum method takes a list of element and an instance of IteratorConstructor class as parameters. It returns a string.
The IteratorConstructor object allows to specify several criterion for creating the output string.
def enum(self, my_list_of_elements, iterating_parameters=None) class IteratorConstructor: def __init__( self, # maximum number of elements of the list that will be displayed max_elem=None, # if the size of the list is superior to this number, it will create a bullet-point list nb_elem_bullet=None, # the output string will begin with this string begin_w="", # the output string will end with this string end_w="", # separator for each element except the last sep=",", # separator for the last item last_sep="and", # each beginning of bullet-point should be capitalized capitalize_bullets=True, # if the list is empty, this string will appear text_if_empty_list="", # at the end of each bullet point except the last end_of_bullet = "", # at the end of the last bullet-point end_of_last_bullet = "" )
my_list = ["six apples", "three bananas", "two peaches"] self.nlg_enum(my_list) # "six apples, three bananas and two peaches" self.nlg_enum(my_list, IteratorConstructor(last_sep="but also")) # "six apples, three bananas but also two peaches" my_list = ['apples', 'bananas', 'peaches'] self.nlg_enum( my_list, IteratorConstructor(max_elem=2, nb_elem_bullet=2, begin_w='Fruits I like :', end_w='Delicious, right ?', end_of_bullet=',', end_of_last_bullet='.') ) """ Fruits I like : - Apples, - Bananas. Delicious, right ? """ my_list = ['apples', 'bananas'] self.nlg_enum([self.free_text( fruit, self.nlg_syn('so', '') + ' ' + self.nlg_syn('succulent', 'tasty') ) for fruit in my_list], IteratorConstructor(begin_w='I find', end_w='.') ) """ One of the following: I find apples so tasty and bananas succulent. I find apples tasty and bananas so succulent. I find apples so succulent and bananas tasty. I find apples succulent and bananas so tasty. """
The nlg_enum method is a wrapper of nlg_iter which allows to do a bit more complex things.
Instead of a list of elements, it takes a list of lists and strings. Through the iteration it maps every element with its associated ones. It then stops when there is no more elements in the smaller list.
my_list_of_fruits = ['apples', 'bananas', 'peaches'] my_list_of_syno = [self.nlg_syn('succulent', 'tasty') for i in range(2)] self.nlg_iter([ my_list_of_fruits, "are", my_list_of_syno ]) # apples are tasty and bananas are succulent
nlg_num
The nlg_num method allows to transform a number in a string following several criterion.
def nlg_num(self, num, short="", sep=".", mile_sep=" ", dec=None, force_sign=False, remove_trailing_zeros=True) my_number = 10000.66028 self.nlg_num(my_number, dec=3, force_sign=True) # +10 000.66 # The remove_trailing_zeros parameter will remove the last decimal even though we indicated 3 decimals because it is a 0.
nlg_tags
The nlg_tags method allows to create HTML tags with attributes and encapsulate text into them.
def nlg_tag(self, tag,<h1>My content</h1></div>
no_interpret
The no_interpret method allows to deactivate the nlg interpretation (automatic contractions and typographical conventions) for a given string.
# "This is a string.with a dot inside ." becomes "This is a string. With a dot inside." after NLG processing. self.no_interpret("This is a string.with a dot inside .") # This is a string.with a dot inside .
CoreNLG classes
Datas
The Datas class is used to store the input you receive.
It should be inherited by your own custom data classes.
class Datas: def __init__(self, json_in) class MyDatas(Datas) def __init__(self, json_in) super().__init__(json_in) my_datas = MyDatas(input)
Document
The Document class is your final document wrapper.
class Document: def __init__(self, datas, title="", log_level="ERROR", css_path="css/styles.css", lang="fr", freeze=False) my_datas = MyDatas(input) document = Document(my_datas)
It takes at least an instance of a Datas class (or your custom one) as parameter.
The 'freeze' parameter means that for every nlg_syn call, the chosen string will always be the first. It is useful for non-regression tests.
Section
The Section class is a text zone of your document independant of others for the draw of synonyms.
It is created from the Document class with the new_section method.
You can give a HTML tag name in parameter (by_default 'div') and HTML attributes.
my_datas = MyDatas(input) document = Document(my_datas) first_paragraph_section = document.new_section(html_elem_attr={"id": "firstParagraph"}) second_paragraph_section = document.new_section(html_elem_attr={"id": "secondParagraph"}) document.write()
You should write your sections in the document with the write method of the class Document.
You can also write each section separately to manage the order of the sections in the document with the write_section method.
def write_section(self, section, parent_elem=None, parent_id=None)
You should not confuse a Section with a simple text zone.
If you want your first and second paragraph to be independant, you create sections like we saw it above.
If you just want to have two separates text zone in your document but without indepedancy on the synonyms, you create tags with nlg_tags.
paragraph_section = document.new_section() paragraph_section.text = ( paragraph_section.tools.add_tag('div', id='first_paragraph', text='First paragraph text'), paragraph_section.tools.add_tag('div', id='two_paragraph', text='Second paragraph text') )
You will never use this way of calling the nlg_tags function because we created the TextClass object.
TextClass
A TextClass is a class in which you will write your text. You should create your own sub-class for each part of your text.
A TextClass takes a Section as parameter.
class MyDatas(Datas) def __init__(self, json_in) super().__init__(json_in) self.Hello everyone.<br> <b>Nice to meet you.</b> I am a developer.</div>
The TextClass is a powerful object wich allows you to call all the CoreNLG functions with self.
You can also access every attributes of your Datas class the same way.
The self.text write your text in the Section that was send as a parameter to your TextClass.
You can use it with strings, nested lists or tuples and it will do the same job as the free_text function.
Don't be afraid ! The '=' operator is override, to enjoy all the possibility of it, you should do :
self.text = "Hello," self.text = "this is one sentence" self.text = ( "that I am", "writing here." ) # Hello, this is one sentence that I am writing here.
TextVar
The TextVar is a simple object, sub-class of str, whose '+=' operator is overloaded.
It's the same principle as free_text and self.text, it works with strings and nested lists/tuples.
It aims to ease the concatenation of strings.
class MyText(TextClass): def __init__(self, section): super().__init__(section) self.text = self.nlg_tags('b', self.text_with_free_text()) self.text = self.nlg_tags('b', self.text_with_text_var()) def text_with_free_text(self): return self.free_text( "first test is true" if test_1 else "first test is false", "and", ( "second test", "is true" ) if test_2 else ( "second test", "is false" ) ) def text_with_text_var(self): my_text = TextVar() if test_1: my_text += "first test is true" else: my_text += "first test is false" my_text += "and" if test_2: my_text += "second test", "is true" else: my_text += ( "second test", "is false" ) return my_text
In this example, the two methods returns equivalent strings. You can use both depending on which one you find the simpler to understand and the number of nested tests you have to write.
Quick start
Install the library:
pip install CoreNLG
Create a basic template with cookiecutter:
pip install cookiecutter cookiecutter
You should obtain this architecture of project:
MyProject |-- ProjectEntryPoint.py |-- MyProject | |-- Datas | | |-- MyDatas.py | |-- TextClass | | |-- Introduction.py | | |-- Content.py | |-- Resources | |-- Tools |-- inputs | |-- test.json
ProjectEntryPoint.py will be your main, you can use it to test locally your application.
Run this file and you will see the HTML result in your console and your browser will render it automatically.
Happy coding !
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/CoreNLG/ | CC-MAIN-2019-47 | refinedweb | 1,754 | 50.23 |
Parallax Forums
>
General Forums
>
Propeller 1 Multicore Microcontroller
> Storing Decimal Value in Eeprom?
PDA
View Full Version :
Storing Decimal Value in Eeprom?
Carl Lawhon
01-07-2009, 05:08 AM
Is there any way I can store a decimal value in the eeprom? For example, how could I store something like 123.456?
Erik Friesen
01-07-2009, 05:53 AM
Using a modified version of James burrows/Javalins i2c code it would be something like this
var
long test
i2c:"i2c"
pub
i2c.writepage(eepromaddress,0,8,@test,8,1)'write data to eeprom
'to read
i2c.readpage(eepromaddress,0,8,@test,8,1)
The above examples require some modification when used with James code. In my code the 8 stands for the number of bytes to send.
I probably haven't answered your question directly but this is a way to store two longs in memory.
mcstar
01-07-2009, 05:56 AM
You need to convert your number to a floating point number first, then save it using the code that Erik posted or something similar. Floating point numbers are represented as a long in memory with the first bit set.
John Abshier
01-07-2009, 06:01 AM
You need to know what 123.456 actually is. If it is a floating point number, it occupies 4 bytes. You have 2 options to writing it to eeprom. First is to write 4 bytes, that is the floating point number. The other is to convert it to a string representation and then write the string, 8 bytes with a $00 byte added to terminate the string. If this is data you pulled off the GPS, it may already be a string.
John Abshier
Paul Baker
01-07-2009, 08:24 AM
Another way to do it is to use BCD encoding where each digit is 4 bits, or 2 numbers per byte. Valid digits are 0-9, you can encode the decimal point as one of the unused values (say $A), then to indicate the end of the number you can use another unused value (like $F). This scheme maintains the simplicity of strings but encodes it at double the density. The number 123.456 would be stored as $12, $3A, $45, $6F.
You can go even fancier by assigning other unused values, like - to $B and E to $E. So a scientific number like -1.74E-6 would be $B1, $A7, $4E, $B6, $F0. The last digit is a "don't care", it's just padding to make the number take 8 bits.)
Post Edited (Paul Baker) : 1/7/2009 1:39:58 AM GMT
grasshopper
01-07-2009, 10:31 AM
I just store it like any other variable using the BasicObject. using this
var
long FloatingData 'floating point number
BasicObj.Write_HighMem(1200,FloatingData,4)
Carl Lawhon
01-08-2009, 02:03 AM
I hate to be noobish, but I'm still a little confused. Anyone want to hold my hand through it using basic_i2c_driver and the example value 123.456? (Thanks to everyone who has posted above, I certainly don't mean to say that you haven't been helpful).
Carl Lawhon
Mike Green
01-08-2009, 02:08 AM
You haven't said yet what form the number is in. Is it a string of characters? If so, what marks the end of the string. Is there a zero byte or carriage return (13) or do you have a known length for the string? Is it a floating point number (stored in a long)? First you have to specify what you have, then we can suggest a way to get it copied to an EEPROM.
Carl Lawhon
01-08-2009, 02:54 AM
Ah. It is a floating point value converted from the gps's deg, mn, and minfrac strings.
Mike Green
01-08-2009, 03:26 AM
In that case, the value fits in a long variable and you can use the read/write long methods in the Basic_I2C_Driver. I don't have a copy of the driver handy, but you'll see simplified read/write routines for words and longs. Important things to remember:
1) The 4 bytes of the long have to fit in an EEPROM "page". The best way to do this is simply to require the EEPROM address to be a multiple of 4.
2) A write operation takes roughly 5ms. You can put in a 5ms wait like WAITCNT(CLKFREQ/200+CNT) or you can use the test for completion of the write that's shown in the comments at the beginning of the driver.
3) The pin # of the SCL signal can be specified separately or combined with the EEPROM address as described in the comments. The pin # of the SDA signal is always the next pin.
grasshopper
01-08-2009, 03:41 AM
You could do this
object is at link
obex.parallax.com/objects/30/ ()
Var
LONG GPS_DATA
Obj
EEPROM : "BS2_Function"
Pub Main
EEprom.Start(31,30)
GPS_DATA := 123.325
StoreEEProm
Pub StoreEEProm
'Write_HighMem(address, value, size) This will write to the Variable called GPS_DAta to an address
EEprom.Write_HighMem(10,GPS_DATA , 4)
Post Edited (grasshopper) : 1/7/2009 8:47:20 PM GMT | http://forums.parallax.com/archive/index.php/t-109241.html | CC-MAIN-2014-15 | refinedweb | 870 | 73.58 |
class Person
{
private String Name = null;
private String Surname = null;
public Person(String name, surname)
{
Name = name;
Surname = surname;
}
public String getName()
{
return Name;
}
public String getSurname()
{
return Surname;
}
}
you should create a new class to store your person details and then use something like:
List people = new ArrayList();
while(rs.next())
{
name = rs.getString(1);
surname = rs.getString(2);
people.add(new Person(name, surname));
}
Need expert help—fast? Use the Help Bell for personalized assistance getting answers to your important questions.
C:\Program Files\Apache Group\Tomcat 4.1\work\Standalone\localh
symbol : class Person
location: class org.apache.jsp.applicants_
people.add(new Person(name, surname));
<%@ page import = "Person" %>
If your Person class in the package then import the class with the package path. Eg: Your Person class is in com\jsp\util\Person.class then the import will be like
<%@ page import = "com.jsp.util.Person"%>
I hope it help.
C:\Program Files\Apache Group\Tomcat 4.1\work\Standalone\localh
import classfiles.*;
my folder structure is C:\Project\WEB-INF\classes
and i import the class files like this <%@ page import="classfiles.*" %>
Raftor. | https://www.experts-exchange.com/questions/20948478/arrays-in-jsp.html | CC-MAIN-2018-26 | refinedweb | 188 | 52.46 |
UTIME(3V) UTIME(3V)
NAME
utime - set file times
SYNOPSIS
#include <<utime.h>>
int utime(path, times)
char *path;
struct utimbuf *times;
DESCRIPTION
utime() sets the access and modification times of the file named by
path.
If times is NULL, the access and modification times are set to the cur-
rent time. The effective user ID (UID) of the calling process must
match the owner of the file or the process must have write permission
for the file to use utime() in this manner.
If times is not NULL, it is assumed to point to a utimbuf structure,
defined in <<utime.h>> as:
struct utimbuf {
time_t actime; /* set the access time */
time_t modtime; /* set the modification time */
};
The access time is set to the value of the first member, and the modi-
fication time is set to the value of the second member. The times con-
tained in this structure are measured in seconds since 00:00:00 GMT Jan
1, 1970. Only the owner of the file or the super-user may use utime()
in this manner.
Upon successful completion, utime() marks for update the st_ctime field
of the file.
RETURN VALUES
utime() returns:
0 on success.
-1 on failure and sets errno to indicate the error.
ERRORS
EACCES Search permission is denied for a component of the
path prefix of path.
EACCES The effective user ID is not super-user and not the
owner of the file, write permission is denied for
the file, and times is NULL.
EFAULT path or times points outside the process's allo-
cated address space.
EIO An I/O error occurred while reading from or writing
to the file system.
ELOOP Too many symbolic links were encountered in trans-
lating path.
ENAMETOOLONG The length of path exceeds {PATH_MAX}.
A pathname component is longer than {NAME_MAX}
while {_POSIX_NO_TRUNC} is in effect (see path-
conf(2V)).
ENOENT The file referred to by path does not exist.
ENOTDIR A component of the path prefix of path is not a
directory.
EPERM The effective user ID of the process is not super-
user and not the owner of the file, and times is
not NULL.
EROFS The file system containing the file is mounted
read-only.
SYSTEM V ERRORS
In addition to the above, the following may also occur:
ENOENT path points to an empty string.
SEE ALSO
pathconf(2V), stat(2V), utimes(2)
21 January 1990 UTIME(3V) | http://modman.unixdev.net/?sektion=3&page=utime&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 403 | 72.05 |
DEBSOURCES
Skip Quicknav
sources / python-netaddr / 0.7.18-1~bpo8
---------------
Release: 0.7.18
---------------
Date: 4 Sep 2015
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.17
^^^^^^^^^^^^^^^^^^^^
* cidr_merge() algorithm is now O(n) and much faster.
Thanks to Anand Buddhdev (aabdnn) and Stefan Nordhausen (snordhausen).
* nmap target specification now fully supported including IPv4 CIDR
prefixes and IPv6 addresses.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 100:
- nmap.py - CIDR targets
FIXED Issue 112:
- Observation: netaddr slower under pypy
---------------
Release: 0.7.17
---------------
Date: 31 Aug 2015
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.16
^^^^^^^^^^^^^^^^^^^^
* Fixed a regression with valid_mac due to shadow import in the
netaddr module.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 114:
- netaddr.valid_mac('00-B0-D0-86-BB-F7')==False for 0.7.16 but True for 0.7.15
---------------
Release: 0.7.16
---------------
Date: 30 Aug 2015
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.15
^^^^^^^^^^^^^^^^^^^^
* IPv4 networks with /31 and /32 netmasks are now treated according to
RFC 3021. Thanks to kalombos and braaen.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 109:
- Identify registry of global IPv6 unicast allocations
FIXED Issue 108:
- One part of docs unclear?
FIXED Issue 106:
- Eui64 Updated (pull request for Issue 105)
FIXED Issue 105:
- Support dialects for EUI-64 addresses
FIXED Issue 102:
- 0.7.15 tarball is missing tests.
FIXED Issue 96:
- Wrong hosts and broadcasts for /31 and /32 networks.
---------------
Release: 0.7.15
---------------
Date: 29 Jun 2015
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.14
^^^^^^^^^^^^^^^^^^^^
* Fix slowness in IPSet.__contains__. Thanks to novas0x2a for noticing.
* Normalize IPNetworks when they are added to an IPSet
* Converted test suite to py.test
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 98:
- Convert test suite to py.test
FIXED Issue 94:
- IPSet.__contains__ is about 40 times slower than the equivalent IPRange
FIXED Issue 95:
- Inconsistent Address Handling in IPSet
---------------
Release: 0.7.14
---------------
Date: 31st Mar 2015
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.13
^^^^^^^^^^^^^^^^^^^^
* Fix weird build breakage in 0.7.13 (wrong Python path, incorrect OUI DB).
* EUI, OUI, and IAB objects can now be compared with strings. You can do
my_mac = EUI("11:22:33:44:55:66")
my_mac == "11:22:33:44:55:66"
and Python will return True on the "==" operator.
* Implement the "!=" operator for OUI and IAB under Python2. It was already
working under Python3.
* 64 bit EUIs could only be created from strings with "-" as a separator.
Now, ":" and no seperator are supported, which already worked for 48 bit EUIs.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 80:
- Compare L2 addresses with their representations
FIXED Issue 81:
- OUI database tests fail in 0.7.13
FIXED Issue 84:
- Incorrect python executable path in netaddr-0.7.13-py2.py3-none-any.whl
FIXED Issue 87:
- Handle eui64 addresses with colon as a delimiter and without delimeter.
---------------
Release: 0.7.13
---------------
Date: 31st Dec 2014
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.12
^^^^^^^^^^^^^^^^^^^^
* IPAddress objects can now be added to/subtracted from each other
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 73:
- Adding IP Addresses
FIXED Issue 74:
- compute static global ipv6 addr from the net prefix and mac address
FIXED Issue 75:
- add classifiers for python 3.3 and 3.4 support
---------------
Release: 0.7.12
---------------
Date: 6th Jul 2014
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.11
^^^^^^^^^^^^^^^^^^^^
* Added method IPSet.iter_ipranges().
* bool(IPSet()) works now for large IPSets, e.g. IPSet(['2405:8100::/32']).
* IPNetwork.iter_hosts now skips the subnet-router anycast address for IPv6.
* Removed function fbsocket.inet_aton because it is unused and unnecessary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 69:
- Add __nonzero__ method to IPSet
FIXED Pull Request 68:
- Fixed a bug related to allowing ::0 during iter_hosts for v6
FIXED Issue 67:
- Remove function fbsocket.inet_aton
FIXED Pull Request 66:
- Added Function to create list of IPRange for non-contiguous IPSet
---------------
Release: 0.7.11
---------------
Date: 19th Mar 2014
^^^^^^^^^^^^^^^^^^^^
Changes since 0.7.10
^^^^^^^^^^^^^^^^^^^^
* Performance of IPSet increased dramatically, implemented by
Stefan Nordhausen and Martijn van Oosterhout. As a side effect,
IPSet(IPNetwork("10.0.0.0/8")) is now as fast as you'd expect.
* Various performance improvements all over the place.
* netaddr is now hosted on PyPI and can be installed via pip.
* Doing "10.0.0.42" in IPNetwork("10.0.0.0/24") works now.
* IPSet has two new methods: iscontiguous() and iprange(), thanks to Louis des Landes.
* Re-added the IPAddress.netmask_bits() method that was accidently removed.
* Networks 128.0.0.0/16, 191.255.0.0/16, and 223.255.255.0/24 are not marked as
reserved IPv4 addresses any more. Thanks to marnickv for pointing that out.
* Various bug fixes contributed by Wilfred Hughes, 2*yo and Adam Goodman.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 58:
- foo.bar doesn't throw AddrFormatError
FIXED Issue 57:
- netaddr packages not hosted on PyPI
FIXED Issue 56:
- Fix comparison with large IPSet()
FIXED Issue 55:
- Fix smallest_matching_cidr and all_matching_cidrs
FIXED Issue 53:
- Exclude 128.0.0.0/16 and possibly others from reserved range set?
FIXED Issue 51:
- Encoding errors in netaddr/eui/oui.txt
FIXED Issue 46:
- len(IPSet()) fails on python3
FIXED Issue 43:
- Method to check if IPSet is contiguous
FIXED Issue 38:
- netmask_bits is missing from the IPAddress
FIXED Issue 37:
- Test failures with Python 3.3
---------------
Release: 0.7.10
---------------
Date: 6th Sep 2012
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.9
^^^^^^^^^^^^^^^^^^^
* A bunch of Python 3.x bug fixes. Thanks Arfrever.
* Extended nmap support to cover full target specification.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 36 -
- ResourceWarnings with Python >=3.2
FIXED Issue 35 -
- netaddr-0.7.9: Test failure with Python 3
FIXED Issue 34 -
- netaddr.ip.iana.SaxRecordParser.endElement() incompatible with Python 3.1
FIXED Issue 33 -
- netaddr script not installed with Python 3
FIXED Issue 23 -
- valid_nmap_range() does not validate nmap format case.
FIXED Issue 22 -
- all_matching_cidrs: documentation incorrect
--------------
Release: 0.7.9
--------------
Date: 28th Aug 2012
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.8
^^^^^^^^^^^^^^^^^^^
* Re-release to fix build removing Sphinx dependency.
--------------
Release: 0.7.8
--------------
Date: 28th Aug 2012
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.7
^^^^^^^^^^^^^^^^^^^
* New SAX parser for IANA data source files (contributed by Andrew Stromnov)
* Fixed pickling failures with EUI, OUI and IAB classes.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 31 -
- Exclude '39.0.0.0/8' network from reserved set. Thanks Andrew Stromnov
FIXED Issue 28 -
- Fix algorithm in ipv6_link_local to fully conform to rfc4291. Thanks Philipp Wollermann
FIXED Issue 25 -
- install_requires is too aggressive? Thanks Adam Lindsay and commenters.
FIXED Issue 21 -
- deepcopy for EUI fails. Thanks Ryan Nowakowski.
--------------
Release: 0.7.7
--------------
Date: 30th May 2012
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.6
^^^^^^^^^^^^^^^^^^^
* Comprehensive documentation update! It's only taken 4 years
to get around to using Sphinx and I can confirm it is
**TOTALLY AWESOME!**
* Various bug fixes
* Refreshed IEEE OUI and IAB data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 24 -
- Fixed TypeError when comparing BaseIP instance with non-BaseIP objects. Thanks pvaret
FIXED Issue 17 -
- For large ipv6 networks the .subnet() method fails. Thanks daveyss
FIXED Issue 20 -
- Test failure with Python 3. Thanks Arfrever
--------------
Release: 0.7.6
--------------
Date: 13th Sep 2011
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.5
^^^^^^^^^^^^^^^^^^^
* A bug fix point release
* Refreshed 3rd party data caches
* Tested against Python 3.2.x and PyPy 1.6.x
* Fixed unit tests under for Mac OSX
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 15 -
- Incorrect and invalid glob produced when last octet is not *
FIXED Issue 13 -
- Added support for IPython 0.11 API changes. Thanks juliantaylor
FIXED Issue 11 -
- Calling valid_glob on cidr raises ValueError. Thanks radicand
FIXED Issue 7 -
- Unpickling Bug in IPSet. Thanks LuizOz and labeneator
FIXED Issue 2 -
- UnboundLocalError raised in IPNetwork constructor. Thanks keesbos
^^^^^^^^^^^
Miscellanea
^^^^^^^^^^^
- Has a famous soft drink company started making it own NICs?
--------------
Release: 0.7.5
--------------
Date: 5th Oct 2010
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.4
^^^^^^^^^^^^^^^^^^^
* Python 3.x is now fully supported. The paint is still drying on this so
please help with testing and raise bug tickets when you find any issues!
New Issue Tracker -
* Moved code hosting to github. History ported thanks to svn2git.
- ()
* All netaddr objects now use approx. 65% less memory due to the use of
__slots__ in classes throughout the codebase. Thanks to Stefan Nordhausen
and his Python guru for this suggestion!
* Applied many optimisations and speedups throughout the codebase.
* Fixed the behaviour of the IPNetwork constructor so it now behaves in
a much more sensible and expected way (i.e. no longer uses inet_aton
semantics which is just plain odd for network addresses).
* One minor change to behaviour in this version is that the .value property
on IPAddress and IPNetwork objects no longer support assignment using a
string IP address. Only integer value assignments are now valid. The impact
of this change should be minimal for the majority of users.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 49 -
- Incorrect IP range recognition on IPs with leading zeros
FIXED Issue 50 -
- CIDR block parsing
FIXED Issue 52 -
- ipv6 cidr matches incorrectly match ipv4 [sic]
FIXED Issue 53 -
- Error in online documentation
FIXED Issue 54 -
- IP recognition failure
FIXED Issue 55 -
- Support for Python 3.x
FIXED Issue 56 -
- checking IPAddress in IPNetwork
FIXED Issue 57 -
- netaddr objects can't pickle
FIXED Issue 58 -
- IPSet operations should accept the same arguments as IPAddress
FIXED Issue 59 -
- netaddr fails to load when imported by a PowerDNS coprocess
^^^^^^^^^^^
Miscellanea
^^^^^^^^^^^
- Welcome back to standards.ieee.org which seems to have been down for weeks!
- Goodbye Sun Microsystems + Merrill Lynch, hello Oracle + Bank of America ...
--------------
Release: 0.7.4
--------------
Date: 2nd Dec 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.3
^^^^^^^^^^^^^^^^^^^
* Applied speed patches by S. Nordhausen
* Fixed an inconsistency between EUI and IPAddress interfaces. Made
EUI.packed and EUI.bin properties (previously methods) and added a
words() property.
--------------
Release: 0.7.3
--------------
Date: 14th Sep 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.2
^^^^^^^^^^^^^^^^^^^
* Added __add__, __radd__, __sub__, __rsub__ operators to the IPAddress class.
* Added support for validation and iteration of simple nmap style IPv4 ranges
(raised in Issue 46).
* Removed some unused constants from fallback socket module.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 44 -
- int/long type error
FIXED Issue 46 -
- Question about IPv4 ranges
FIXED Issue 47 -
- IPNetwork cannot be evaluated as a boolean when it has a large size
--------------
Release: 0.7.2
--------------
Date: 20th Aug 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.7.1
^^^^^^^^^^^^^^^^^^^
FIXED a boundary problem with the iter_iprange() generator function
and all associated calls to it throughout the codebase, including
unit test coverage and adjustments.
* Replaced regular expressions in cidr_merge() with pre-compiled equivalents
for a small speed boost.
* Adjustments to README raised by John Eckersberg.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 43 -
- IPNetwork('0.0.0.0/0') not usable in for loop
--------------
Release: 0.7.1
--------------
Date: 14th Aug 2009
^^^^^^^^^^^^^^^^^
Changes since 0.7
^^^^^^^^^^^^^^^^^
* Renamed the netaddr shell script from 'nash' to plain 'netaddr'. This
is to avoid a potentially nasty clash with an important Linux tool
with the same name.
Thanks to John Eckersberg for spotting this one early!
* Updated IANA and IEEE data files with latest versions.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 42 -
- Bug in cidr_merge() function when passed the CIDRs 0.0.0.0/0 and/or ::/0
------------
Release: 0.7
------------
Date: 11th Aug 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.6.x
^^^^^^^^^^^^^^^^^^^
Please Note - This release represents a major overhaul of netaddr. It breaks
backward compatibility with previous releases. See the API documentation for
full details of what is available.
Some highlights of what has changed :-
* Internal module hierarchy has been completely overhauled and redesigned. This
fixes up a lot of inconsistencies and problems with interdependent imports.
All public classes, objects, functions and constants are still published via
the main netaddr module namespace as in previous releases.
* No more AT_* and ST_* 'constants'.
* The Addr base class is gone. This removes the link between EUI and IP
functionality so the library is can now easily be split into distinct units
without many interdependencies between layer 2 and layer 3 functionality.
* The use of custom descriptor classes has been completely discontinued.
* Strategy classes and singleton objects have been replaced with a group of
strategy modules in their own netaddr.strategy namespace. Each IP or EUI
address object now holds a reference to a module rather than a singleton
object.
* Many operations that were previously static class methods are now presented as
functions in the relevant modules. See the API documentation for details.
* The IP and CIDR classes have been replaced with two new classes called
IPAddress and IPNetwork respectively. This name change is important as the IP
part of netaddr has been completed redesigned. The notion of an individual IP
address and an IP network or subnet has been made more obvious. IPAddress
objects are now true scalars and do not evaluate in a list or tuple context.
They also do not support any notion of a netmask or CIDR prefix; this is the
primary function of an IPNetwork object.
* Abritrary IP ranges and are still supported but a lot of their functionality
has also been exposed via handy functions.
* IP globbing routines (previous known as Wildcards) have been moved into
their own submodule.
* Added a new IPSet class which fully emulates mutable Python sets. This
replaces a lot of half-baked experimental classes found in 0.5.x and 0.6.x
such as IPRangeSet and CIDRGroup. See documentation for details.
* All methods and properties that previously used or supported the 'fmt'
formatting property no longer do so. In all cases, objects are now returned to
correctly support pass through calls without side effects. It is up to the
user to extract data in the right format from the objects IPAddress objects
returned as required.
* Unit tests have been completed re-written to support docstring style tests
bundled into test suites. These are handy as they double up as documentation
being combined with wiki syntax. Implemented code coverage checking using
coverage 3.x.
* nash - a nascent shell like tool for the netaddr library (requires IPython).
* Support for RFC 1924 added ;-)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 13 -
- Searching for a match in a list of CIDR objects
FIXED Issue 26 -
- Refactor out use of isinstance()
FIXED Issue 28 -
- Add support for network block operations
FIXED Issue 34 -
- Addition issue?
--------------
Release: 0.6.4
--------------
Date: 11th Aug 2009
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 40 -
- Building RPM wth "python setup.py bdist_rpm" fails, multiple errors
--------------
Release: 0.6.3
--------------
Date: 23rd Jun 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.6.2
^^^^^^^^^^^^^^^^^^^
* Fixed line endings in a number of new files created under Windows.
* Tweaked the ordering of values in tuple passed into the hash() function in
the __hash__ method of the IP and IPRange classes to make it the same as
the values used for comparisons implemented in the __eq__ method (Python
best practice).
* Added a number of unit tests to improve code coverage.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 33 -
- CIDR subtraction is broken for out-of-range CIDR objects
FIXED Issue 35 -
- install error (on Python interpreters where socket.has_ipv6 is False)
FIXED Issue 36 -
- netaddr.CIDR fails to parse default route CIDR
FIXED Issue 37 -
- Bug in bitwise AND operator for IP addresses
FIXED Issue 38 -
- Feature request: Addr.__nonzero__
FIXED Issue 39 -
- CIDR.abbrev_to_verbose() not applying implicit classful netmask
rules consistently
--------------
Release: 0.6.2
--------------
Date: 13th Apr 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.6.1
^^^^^^^^^^^^^^^^^^^
* Refreshed IEEE and IANA data files with latest revisions from their
respective URLs.
- IANA IPv4 Address Space Registry (last updated 2009-03-11)
- Internet Multicast Addresses (last updated 2009-03-17)
- IEEE OUI and IAB files (last updated 2009-04-13)
* Added get_latest_files() functions to both the netaddr.eui and
netaddr.ip modules to assist in automating release builds.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 32 -
- Addr.__ne__ returns wrong answer
--------------
Release: 0.6.1
--------------
Date: 6th Apr 2009
^^^^^^^^^^^^^^^^^
Changes since 0.6
^^^^^^^^^^^^^^^^^
* Added COPYRIGHT file with details and attribution for all 3rd party files
bundled with netaddr.
* Minimum Python version required is now 2.4.x changed from 2.3.x.
- Python 2.3 compatibility code in many sections of code have been removed.
- the @property and @staticmethod decorators are now used throughout the
code along with the reversed() and sorted() builtin iterators.
- A specific version check has also been added that will raise RuntimeError
exceptions if you run netaddr on a Python interpreter version < 2.4.x.
* Integer addresses passed to the IP() and EUI() constructors no longer
require a mandatory second address type (AT_*) argument in most cases. This
is now only really required to disambiguate between IPv4/IPv6 addresses with
the same numerical value. The same behaviour applies to EUI-48/EUI-64
identifiers. A small speed boost is achieved if the 2nd address type
argument is explicitly provided.
* IPv6 addresses returned by EUI.ipv6_link_local() now always have a subnet
prefix of /64.
* Default sort order of aggregate classes (IPRange, CIDR and Wildcard) has
been changed (again). They now sort initially by first address and then
by network block size from largest to smallest which feels more natural.
* Fixed a bug in the CIDR.abbrev_to_verbose() static method where IPv4
addresses with 4 octets (i.e. non-partial addresses) were being assigned
subnet prefixes using abbreviated rules. All complete IPv4 addresses should
always get a /32 prefix where it is not explicitly provided.
* Abbreviated address expansion in the CIDR constructor is now optional and
can be controlled by a new 'expand_abbrev' boolean argument.
* Added the new CIDR.summarize() static method which transforms lists of IP
addresses and CIDRs into their most compact forms. Great for trimming down
large ad hoc address lists!
* Added the previous() and next() methods to the CIDR classes which return
the CIDR subnets either side of a given CIDR that are of the same size.
For the CIDR 192.0.2.0/24, previous will return 192.0.1.0/24 and next
will return 192.0.3.0/24. Also accepts and optional step size (default
is 1).
* Added the supernet() method to the CIDR class which returns a generator of
all the subnets that contain the current CIDR found by decrementing the
prefixlen value for each step until it reaches zero.
* Changed the way the fallback code works when the socket module is missing
important constants and functions.
* Removed the uppercase options from the Strategy constructors and internals
as this behaviour can be easily replicated using the word_fmt option
instead and requires less code (word_fmt='%X').
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FIXED Issue 23 -
- Improve IPv6 IPv4 mapped/compatible address formatting
FIXED Issue 24 -
- bug in CIDR.subnet() when using the fmt argument
FIXED Issue 29 -
- CIDR.subnet method's count argument isn't working as documented
FIXED Issue 30 -
- not compatible with Python 2.3
FIXED Issue 31 -
- byte order in documentation confusing or wrong
------------
Release: 0.6
------------
Date: 20th Jan 2009
^^^^^^^^^^^^^^^^^^^
Changes since 0.5.x
^^^^^^^^^^^^^^^^^^^
* Namespace changes
3 new sub namespaces have been added :-
- netaddr.eui
Currently contains IEEE OUI and IAB classes and lookup code.
- netaddr.ip
Currently contains IANA IPv4, IPv6 and IPv4 multicast lookup code.
- netaddr.core
Currently contains only a couple of classes that are shared between code in
netaddr.eui and netaddr.ip.
Please Note: This change is part of a two stage internal restructuring of
netaddr. In future releases, layer-2 MAC/EUI functionality will be separated
from and layer-3 IP, CIDR and Wildcard functionality. All shared code will
be moved to netaddr.core. When the migration is complete (expected in 0.7)
the netaddr.address and netaddr.strategy namespaces will be removed. Please
endeavour to access everything you need via the top-level netaddr namespace
from this release onwards. See netaddr.__all__ for details of constants,
objects, classes and functions intended for the public interface.
* Addition of IEEE and IANA informational lookups
- the IP() and EUI() classes now have an additional info() method through
which contextual information about your addresses can be accessed. This
data is published by IANA and the IEEE respectively and sourced directly
from text files bundled with netaddr that are available for download
publically online. Details are available in the docstring of the relevant
parsing classes. Subsequent netaddr releases will endeavour to keep
up-to-date with any updates to these files.
- the EUI() class has been updated with the addition of the OUI() and IAB()
classes. They provide object based access to returned via the EUI.info()
method. Please see API docs included with netaddr for details.
- added new NotRegisteredError exception that is raised when an EUI doesn't
match any currently registration entries in the IEEE registry files.
* Addr() class removed from the public interface
- This class is only ever meant to be used internally and its usage may soon
be deprecated in favour converting it into an abstract base class in
future releases.
* Deletion of AddrRange() class
- replaced with the more specific IPRange() class. AddrRange() wasn't
very useful in practice. Too much time has been spent explaining its
theoretical merits over its actual practicality for every day use.
* Addition of new IPRange() class
- the new base class for CIDR() and Wildcard().
- a 'killer feature' of this new class are the new methods iprange(),
cidrs() and wildcard() which allow you to use and switch between all
3 formats easily. IPRange('x', 'y').cidrs() is particularly useful
returning all the intervening CIDRs between 2 arbitrary IP addresses.
- IPRange() is a great place to expose several new methods available to
sub classes. They are issupernet(), issubnet(), adjacent() and overlaps().
- previous method called data_flavour() has been renamed (again) to a more
suitable format().
* IP() class updates
- is_netmask() and is_hostmask() methods have been optimised and are now
both approximately 4 times faster than previously!
- added wildcard() and iprange() methods that return pre-initialised
objects of those classes based on the current netmask / subnet prefix.
- copy constructor methods ipv4() and ipv6() now preserve the value of the
prefixlen property now also support IPv6 options for returning IPv4-mapped
or IPv4-compatible IPv6 addresses.
- added new methods is_loopback(), is_private(), is_link_local(),
is_ipv4_mapped() and is_ipv4_compat() which are all self explanatory.
- added a bin() method which provides an IP address in the same format
as the standard Python bin() builtin type ('0bxxx') now available in
Python 2.6.x and higher.
- added a packed() method which provides an IP address in packed binary
string format, suitable for passing directly to Python socket calls.
* nrange() generator function updates
- by default this now returns IP() objects instead of Addr() objects.
* CIDR() class updates
- the 'strict_bitmask' option in the CIDR class constructor has been had a
name change and is now just 'strict' (less typing).
- support for Cisco ACL-style (hostmask) prefixes. Also available to the
IP() class. They are converted to their netmask equivalents before being
applied to the base address.
- added a new subnet() generator method that returns iterators to subnet
CIDRs found within the current CIDR object's boundaries e.g. a /24 CIDR
can provide address with subnet prefixes between a /25 and /32.
- added a new span() method which takes a list of IP, IPRange, CIDR and/or
Wildcards returning a single CIDR that 'spans' the lowest and highest
boundary addresses. An important property of this class is that only a
single CIDR is returned and that it (potentially) overlaps the start and
end addresses. The most important aspect of this method is that it
identifies the left-most set of bits that are common to all supplied
addresses. It is the plumbing that makes a lot of other features function
correctly.
- although IPv6 doesn't support the concept of a broadcast address, after
some pondering I've decide to add network() and broadcast() methods to the
CIDR class. It is an interface quirk that users expect so it has been
added for ease of use.
- the methods network(), broadcast(), hostmask() and netmask() have been
wrapped in property() builtin calls to make them appear as read-only
properties.
* Many more MAC and IPv4 string address representation are now supported
- Improvements to both EUI and IP classes. They now accept many more valid
address formats than previously. Thanks for all the bugs tickets raised.
* ``__repr__()`` method behaviour change
- Using ``repr()`` now assume that you have performed a ``from netaddr import *``
before you execute them. They no longer specify the originating namespace
of objects which is a bit unnecessary and a lot to read on-screen.They
will also be moving around within the namespace shortly anyway so its
best not to think of them as being anywhere other than directly below
netaddr itself.
* 'klass' property renamed to 'fmt' (format)
- now referred to as the 'format callable' property. An unfortunately but
necessary change. 'klass' was a bad initial name choice as it most often
doesn't even reference a class object also supporting references to Python
types, builtin functions and user defined callables.
* Complete re-work and consolidation of unit tests.
- now over 100 tests covering all aspects of the API and library
functionality.
- Moved all tests into a single file. Lots of additional tests have been
added along with interface checks to ensure netaddr's always presents
a predictable set of properties and methods across releases.
* Nascent support for Python eggs and setuptools.
- Help is need to test this as it is not something I use personally.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Finally fixed the IPv6 string address compression algorithm so that it
is now compliant with the socket modules inet_ntop() and inet_pton() calls.
(not available on all platforms).
^^^^^^^^^^^^^^^^^^^^^
Experimental Features
^^^^^^^^^^^^^^^^^^^^^
* added bitwise operators to the IP class
- does what it says on the tin. Does not effect that value of the IP object
itself but rather, returns a new IP after the operation has been applied.
* IPRangeSet() class added (EXPERIMENTAL).
- the intention with this class is to allows you to create collections of
unique IP(), IPRange(), CIDR() and Wildcard() objects. It provides
iteration over IPs in the collection as well as several membership based
operations such as any_match() all_matches(), min_match() and max_match().
- lots more work to do here. Please raise bugs and feature requests against
this as you find them. Improvements to this are coming in 0.7.
--------------
Release: 0.5.2
--------------
Date: 29th Sep 2008
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Fixed Issue 15 in bug tracker. Bad validation and conversion of IPv4
mapped IPv6 address values in IPv6Strategy class. Covered with unit
test cases.
* Updated PrefixLenDescriptor() class so that modifications to the property
CIDR.prefixlen also update CIDR.first and CIDR.last keeping them in sync.
Covered by unit test cases.
* IP.hostname() method returns None when DNS lookup fails.
--------------
Release: 0.5.1
--------------
Date: 23rd Sep 2008
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specific bug fixes addressed in this release
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* CIDR constructor was throwing a TypeError for valid unicode string addresses
which worked in previous releases. Fixed and covered with a unit test case.
* The methods CIDR.netmask() and CIDR.hostmask() contained code errors that
were causing them to fail. Problem fixed and covered with unit test case.
------------
Release: 0.5
------------
Date: 19th Sep 2008
^^^^^^^^^^^^^^^^^^^
Changes since 0.4.x
^^^^^^^^^^^^^^^^^^^
**General**
* Access to all important object attributes in all netaddr classes now takes
place via custom Python descriptor protocol classes. This has greatly
simplified internal class logic and made external attributes changes much
safer and less error prone. It has also made aggregate classes such as CIDR
and Wildcard effectively read-write rather than read-only which they have
been up until this release.
* Ammended the way sort order is calculated for Addr and AddrRange (sub)class
instances so that the address type is taken into account as well as as the
numerical value of the address or address range. The ascending sort order
is IPv4, IPv6, EUI-48 and EUI-64. Sequences of AddrRange (sub)class
instances now sort correctly!
* Comparisons between instances of Addr and AddrRange (sub)classes now return
False, rather than raising an AttributeError.
* Added checks and workaround code for Python runtime environments that suffer
from the infamous socket module inet_aton('255.255.255.255') bug. This was
discovered recently in Python 2.4.x on PowerPC under MacOS X. The fix also
applies in cases where the socket module is not available (e.g. on Google
App Engine).
* All general Exception raising in the strategy module has now been replaced
with more specific exceptions, mainly ValueError (these were unintentionally
missed out of the 0.4 release).
* Implemented __hash__() operations for the Addr and AddrStrategy classes. This
allows you to use IP, CIDR and Wildcard objects as keys in dictionaries and
as elements in sets. Please note - this is currently an experimental feature
which may change in future releases.
* Added __ne__() operation to Addr and AddrRange classes.
* Obeying the 'Law of Demeter', the address type of Addr and AddrRange
(sub)class instances can be accessed using the property directly :-
obj.addr_type # 0.5 onwards
rather than having to go via the strategy object :-
obj.strategy.addr_type # 0.4 and earlier
* Renamed the AT_DESCR lookup dictionary to AT_NAMES. Removed invalid and
duplicated imports from all modules.
**Addr class changes**
* Removed the setvalue() method from the Addr class and replaced all uses of
__setattr__() replaced by custom descriptors throughout.
**IP class changes**
* Removed the ambiguity with masklen and prefixlen attributes in the IP class.
prefixlen now denotes the number of bits that define the netmask for an IP
address. The new method netmask_bits() returns the number of non-zero bits
in an IP object if the is_netmask() method returns True. A prefixlen value
other than /32 for an address where is_netmask() returns True is invalid
and will raise a ValueError exception.
* Removed the family() method from the IP class. It duplicates information
now provided by the prefixlen property.
* IP class has several new methods. is_multicast() and is_unicast() quickly
tell you what category of IP address you have and while ipv4() and ipv6()
act as IPv4 <-> IPv6 conversions or copy constructors depending on context.
* Reverse DNS lookup entries now contain a trailing, top-level period (.)
character appended to them.
* Added the hostname() method to IP instances which performs a reverse DNS
* The IP class __str__() method now omits the subnet prefix is now implicit
for IPv4 addresses that are /32 and IPv6 addresses that are /128. Subnet
prefix is maintained in return value for all other values.
**AddrRange class changes**
* The AddrRange class no longer stores instances of Addr (sub)classes for the
first and last address in the range. The instance variables self.start_addr
and self.stop_addr have been renamed to self.first and self.last and the
methods obj.first() and obj.last() have been removed.
Instead, self.first and self.last contain integer values and a reference
to a strategy object is stored. Doing this is a lot more useful and cleaner
for implementing internal logic.
To get Addr (sub)class objects (or strings, hex etc when manipulating the
the klass property) use the index values obj[0] and obj[-1] as a substitute
for obj.first() and obj.last() respectively.
* AddrRange (sub)class instances now define the increment, __iadd__(), and
decrement, __isub__(), operators. This allows you to 'slide' CIDRs and
Wildcards upwards and downwards based on their block sizes.
* The _retval() method has now been renamed data_flavour() - yes, the UK
spelling ;-) You shouldn't really care much about this as it mostly for
internal use. I gave it a decent name as I didn't see any real need to hide
the functionality if users wanted it.
**CIDR class changes**
* The strictness of the CIDR class constructor in relation to non-zero bits
once the prefix bitmask has been applied can be disabled use the optional
argument strict_bitmask=False. It is True (strictness enabled) by default.
* Fixed a bug in abbreviated CIDR conversion. Subnet prefix for multicast
address 224.0.0.0 is now /4 instead of /8.
* The CIDR class now supports subtraction between two CIDR objects, returning
a list of the remainder. Please note that the bigger of the two CIDR objects
must be on the left hand side of the the expression, otherwise an empty list
is return. Sorry, you are not allowed to create negative CIDRs ;-)
* The function abbrev_to_cidr() has been renamed to and turned into the static
method CIDR.abbrev_to_verbose(). No major changes to the logic have been
made.
**Wildcard class changes**
* The Wildcard class now defines a static method Wildcard.is_valid() that
allows you to perform validity tests on wildcard strings without fully
instantiation a Wildcard object.
------------
Release: 0.4
------------
Date: 7th Aug 2008
^^^^^^^^^^^^^^^^^^^
Changes since 0.3.x
^^^^^^^^^^^^^^^^^^^
* All general Exception raising has been replaced with more specific
exceptions such as TypeError and ValueError and with the addition of two
custom exception classes, AddrFormatError and AddrConversionError.
* The IP class now accepts a subnet prefix. It is *NOT* strict about non-zero
bits to the right of implied subnet mask, unlike the CIDR class (see below).
* The CIDR class is now completely strict about non-zero bits to the right of
the implied subnet netmask and raises a ValueError if they exist, with a
handy hint as to the correct CIDR to be used based on the supplied subnet
prefix.
* The CIDR class now also supports abbreviated CIDR ranges and uses older
classful network address rules to decided on a subnet prefix if one is not
explicitly provided. Supported forms now include 10, 10/8 and 192.168/16.
Currently only supports these options for IPv4 CIDR address ranges.
* __repr__() methods have been defined for all classes in the netaddr module
producing executable Python statements that can be used to re-create the
state of any object.
* CIDR and Wildcard classes now have methods that support conversions between
these two aggregate types :-
* CIDR -> Wildcard
* Wildcard -> CIDR
^^^^^^^^^^^^^^^^^^^^
Housekeeping Changes
^^^^^^^^^^^^^^^^^^^^
* Massive docstring review and tidy up with the inclusino of epydoc specific
syntax to spruce up auto-generated API documentation.
* Thorough review of code using pylint.
* Netaddr module now has the special __version__ variable defined which is
also referenced by setup.py.
* Some minor changes to setup.py and MANIFEST.in.
* Constants and custom Exception classes have been moved to __init__.py from
strategy.py
* An import * friendly __all__ has been defined for the netaddr namespace
which should remove the need to delve too much into the address and strategy
submodules.
* Fixed a number of line-ending issues in several files. | https://sources.debian.org/src/python-netaddr/0.7.18-1~bpo8+1/CHANGELOG/ | CC-MAIN-2019-43 | refinedweb | 5,755 | 58.48 |
in reply to Re^2: Challenge: Algorithm To Generate Bubble Blast 2 Puzzles With Difficultyin thread Challenge: Algorithm To Generate Bubble Blast 2 Puzzles With Difficulty
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int won_game(int *copy);
void hit(int *copy, int pos);
int n_neighbor(int *copy, int pos);
int s_neighbor(int *copy, int pos);
int e_neighbor(int *copy, int pos);
int w_neighbor(int *copy, int pos);
int main (void) {
static const char filename[] = "games.dat";
FILE *file = fopen (filename, "r");
if (file != NULL) {
char line[32];
srand(time(NULL));
while (fgets(line, sizeof line, file) != NULL) {
int i; // looping variable
int j; // looping variable
int hits; // index where the current n
+umber of hits is stored
int curr; // index of current item bei
+ng worked on
int last; // index of last item in our
+ work queue
int games_played; // keep track of how many ga
+mes we have played
int stat[10]; // Keeps track of number of
+hits to win a game
int work[1000][31]; // Work queue (DFS will neve
+r need more than 1000)
games_played = 0;
hits = 30;
// Populate stats to all be 0
for (i = 0; i < 10; i++) {
stat[i] = 0;
}
// Populate first item on work queue
for (i = 0; i < 30; i++) {
work[0][i] = ((int) line[i]) - 48;
}
work[0][hits] = 9; // number of hits allowed t
+o play a board
last = 1;
while (last > 0) {
int possible[30]; // maximum number of possibl
+e hits
int last_idx; // index of last possible hi
+t for "this" board
int temp; // used for Fisher-Yates shu
+ffle
/* Takes too long to play games exhaustively, even in
+C */
if (games_played > 500000) {
break;
}
curr = last;
--work[curr][hits]; // drop the number of hits
// populate possible hits for "this board"
last_idx = -1;
for (i = 0; i < 30; i++) {
if (work[curr][i] > 0) {
++last_idx;
possible[last_idx] = i;
}
}
// Fisher-Yates shuffle them so that games played are
+all distinct but random
for (i = last_idx; i > 0; i--) {
j = rand() % (i + 1); // yes, I know this isn't pe
+rfect
temp = possible[i];
possible[i] = possible[j];
possible[j] = temp;
}
// Loop over all possible hits on the board, hitting e
+ach one
for (i = 0; i <= last_idx; i++) {
// copy of the board to manipulate
int copy[31];
for (j = 0; j < 31; j++) {
copy[j] = work[curr][j];
}
hit(copy, possible[i]);
// Game won
if (won_game(copy) == 1) {
++games_played;
++stat[work[curr][hits]];
}
// Game lost
else if (work[curr][hits] == 0) {
++games_played;
++stat[9];
}
// Still playing so put it back on the work queue
else {
for (j = 0; j < 31; j++) {
work[last][j] = copy[j];
}
}
}
}
for (i = 0; i < 30; i++) {
printf("%c", line[i]);
}
for (i = 0; i < 10; i++) {
printf(",%i", stat[i]);
}
printf("\n");
}
fclose (file);
}
else {
perror(filename);
}
return 0;
}
int won_game(int *copy) {
int i;
for (i = 0; i < 30; i++) {
if (copy[i] > 0) {
return 0;
}
}
return 1;
}
void hit(int *copy, int pos) {
if (pos < 0 || copy[pos] == 0) {
return;
}
--copy[pos];
// If we exploded, we have to hit our neighbors
if (copy[pos] == 0) {
hit(copy, n_neighbor(copy, pos));
hit(copy, s_neighbor(copy, pos));
hit(copy, e_neighbor(copy, pos));
hit(copy, w_neighbor(copy, pos));
}
return;
}
int n_neighbor(int *copy, int pos) {
while (1) {
pos = pos - 5;
if (pos < 0) {
return -1;
}
if (copy[pos] > 0) {
return pos;
}
}
}
int s_neighbor(int *copy, int pos) {
while (1) {
pos = pos + 5;
if (pos > 29) {
return -1;
}
if (copy[pos] > 0) {
return pos;
}
}
}
int e_neighbor(int *copy, int pos) {
int min_val;
min_val = (pos / 5) * 5;
while (1) {
--pos;
if (pos < min_val) {
return -1;
}
if (copy[pos] > 0) {
return pos;
}
}
}
int w_neighbor(int *copy, int pos) {
int max_val;
max_val = ((pos / 5) * 5) + 4;
while (1) {
++pos;
if (pos > max_val) {
return -1;
}
if (copy[pos] > 0) {
return pos;
}
}
}
[download]
Even in my neophyte C, I have realized an enormous performance increase. Even in C, it still takes too long to exhaustively play all possible games for a board when you allow the player to use up to 9 hits. For this reason, I duplicated the same changes I did to the original Perl (play a limited number of distinct but random games). The difference is that I am now able to use up to 9 hits and play 500K games in less time then it was taking me to play 6 hits for 40K games.
Cheers - L~R
It has a number of distinctions from the previous version.
It no longer chooses to play distinct random games. It arranges the places to hit by choosing all 1 hits before 2 hits before 3 hits before 4 hits. The idea being you can't finish a board without hitting a 1-hit bubble and they can lead to cascading bursts.
It keeps track of more statistics. In addition to how many ways it won with 1-9 hits and the number of games lost, it also keeps track of the following fields for 1 - 9 hits:
I am going to let it run for a while and see if it produces more usable data.
It has been a number of years since I have done anything of significance in C, so some best practices may have changed, but from a quick glance, I see a couple of things:
Other than that, do you have a profiler that shows any hotspots?
--MidLifeXis
I agree. It grew over time. I am also more verbose than I need to be. I am using -O3 for gcc which I believe inlines whatever it can. I am potentially considering a new direction entirely so if I touch it again, I will certainly make more use of functions.
Macros are incredibly powerful but if you don't code enough in C to make them second nature they can have the opposite effect as intended (making code harder to understand rather than easier). I prefer to explicitly say what I am doing when working in a language I am not comfortable in so that I understand the code when I come back to it later. As far as magic numbers - yeah. I started to make variables to give them meaning but wasn't consistent as I tried to finish the code. In a re-write I would probably have some global static variables defined outside of main.
No. I was expecting someone to look at the way I copy one array to another and say a far more efficent way of doing that was memcopy with a working example but alas, no such luck.
Some of the repeated board manipulations or constants should probably be put into macros
Macros are incredibly powerful but if you don't code enough in C to make them second nature they can have the opposite effect as intended
Macros are incredibly powerful but if you don't code enough in C to make them second nature they can have the opposite effect as intended
Don't use macros, use functions marked inline instead:
inline int func( int arg ) {
// stuff
return 1;
}
[download]
All the benefits of macros with none of the nasty side-effects or opacity.
1. Keep it simple
2. Just remember to pull out 3 in the morning
3. A good puzzle will wake me up
Many. I like to torture myself
0. Socks just get in the way
Results (284 votes). Check out past polls. | http://www.perlmonks.org/?node_id=1053345 | CC-MAIN-2016-44 | refinedweb | 1,229 | 63.87 |
TI Home
»
TI E2E Community
»
Support Forums
»
Microcontrollers
»
MSP430™ Microcontrollers
»
MSP430 Ultra-Low Power 16-bit Microcontroller Forum
»
Using Pointer to help send multiple AD Conversions out Serially
Hey guys,
I am trying to use a DMA and Pointers to help me send A/D values through serial. Right now, I can only get the code to display the values coming in from ADC12MEM0
on my terminal program. The other values being dumped into my RAM are not being transmitted to the terminal. Therefore, I am trying to set up a pointer that will keep track of
where I am in RAM and send every conversion to the terminal. Here is my code, I tried using the comments to explain what I was thinking
# include <msp430xG46x.h>int ADCSample;int *Pointer1;int ADCSample2;void main(void){WDTCTL = WDTPW+WDTHOLD; // Stop watchdogP5DIR |= 0x02;P5OUT |= 0x02;// Initialization of ADC12//P6SEL |= 0x01; // Enable A/D channel A0ADC12CTL0 &= ~ENC; // Disable ConversionsADC12CTL0 = ADC12ON + SHS_1 + REFON + REF2_5V + MSC; // turn on ADC12, set samp timeADC12CTL1 = SHP+CONSEQ_2; // Use sampling timerADC12MCTL0 = SREF_1+INCH_0; // Vr+=VeREF+ (external)//ADC12IFG = 0;//Timer ATACCR0 = 1500; // Delay to allow Ref to settleTADC12CTL0 |= ENC; // Enable conversions// Initialization of Rs-232//FLL_CTL0 |= XCAP14PF; // Configure load capsdo/TXDUCA0CTL1 |= UCSSEL_1; // CLK = ACLKUCA0BR0 = 0x03; // 32k/9600 - 13.65UCA0BR1 = 0x00; //UCA0MCTL = 0x06; // ModulationUCA0CTL1 &= ~UCSWRST; // **Initialize USCI state machine**IE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt;// Initialize DMADMACTL0 = DMA0TSEL_6; // ADC12IF setDMACTL1 = DMAONFETCH;__data16_write_addr((unsigned short) &DMA0SA, (unsigned long) &ADC12MEM0); // Source address__data16_write_addr((unsigned short) &DMA0DA, (unsigned long) 0x001108); // Destination single addressDMA0SZ = 0x0FFF; // Set DMA Block size ADCSample sizeDMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE; // Repeat single, inc dst, interruptsADCSample = ADC12MEM0; //Set integer to value of ADC12MEM0Pointer1 = &ADCSample; //Set Pointer1 to the address of ADCSample(Not sure where this address, I want it to be at 0x001108)DMA0CTL |= DMAEN; //Enable DMAADC12CTL0 |= ADC12SC; //Start Conversions//Serial Loopwhile(*Pointer1 <= 0x030FF) // Execute loop until loop reaches 0x030FF address{ADCSample2 = *Pointer1; //Set integer to value stored in Pointer1UCA0TXBUF = ADCSample2 >> 8; //Send upper byte from Pointer1 to Serialwhile(!(IFG2 & UCA0TXIFG)){__delay_cycles(1000);// wait for first transmit}UCA0TXBUF = ADCSample2;*Pointer1 = (*Pointer1 + 1) &0x030FF; //Increment Pointer1 from address 0x001108 to 0x030FF__bis_SR_register(LPM0_bits + GIE);}}
Martin Novotny102956IE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt;
You didn't post any ISR, so I assume you have none, so the first interrupt 8after GIE has been set) will jump into the void, crashing and resetting the MSP.
Martin Novotny102956ADC12CTL0 |= ADC12SC; //Start Conversions
Martin Novotny102956while(*Pointer1 <= 0x030FF) // Execute loop until loop reaches 0x030FF address
The DMA probably does the transfer of 4k of words (= 8k data) from ADC12MEM0 to 0x1108 to 0x3106 (overwriting everything that happens to be placed there by the linker), but it would be pure coincidence if this changes *Pointer1 without also messing up Pointer1 itself.
Martin Novotny102956while(!(IFG2 & UCA0TXIFG)){__delay_cycles(1000);// wait for first transmit}
Martin Novotny102956*Pointer1 = (*Pointer1 + 1) &0x030FF; //Increment Pointer1 from address 0x001108 to 0x030FF
What thsi does is: it takes the value pointed to by Pointer1, increments this value by 1, then does a bit-wise AND with 0x30ff and stores the result back to the memory location Pointer1 points to (whcih still is either ADCSample variable, or maybe anywhere int the addressing range, depending on what damage teh DMA has done.
The whole construct makes no sense, sorry.I guess, you didn't really understand what pointers in C are and how they are used.
____________________________________.
Sorry, I guess I'm not sure why you bothered to even reply to my post when all you did was tell me everything I did wrong and offered no means of assistance. Of course, the code does not do what I was describing and NO I do not have a great understanding of what pointers in C are, but no where in your post did you make any attempt to help me learn or suggest ways to improve my code. Therefore do not bother responding to this reply because I won't be coming back to this forum. Its embarassing that you are to the go to expert on these forums as I posted a problem and instead of posting possible fix or even some good advice you simply told me everything already know which is that my code is obviously flawed.
Jens-Michael Gross is one of more tolerant gurus. You must have caught him at a bad moment.
I am guessing you are trying to DMA a number of samples from ADC to a buffer and simultaneously sending the buffer to the serial port. I think your code is using unallocated memory as though it was an array. This is not safe as the unallocated memory can move. Better to allocate yourself an array and ask the DMA to fill that array. The questions is not really about pointers but more of buffer arrays and DMA transfer. My background is not with MSP430...here's my "straw man" version of your code. See if others will knock it down.
# include <msp430xG46x.h>#define SAMPLES 256volatile int ADCSamples[SAMPLES];void main(void){ int i; int x; WDTCTL = WDTPW+WDTHOLD; // Stop watchdog P5DIR |= 0x02; P5OUT |= 0x02; // Initialization of ADC12// P6SEL |= 0x01; // Enable A/D channel A0 ADC12CTL0 &= ~ENC; // Disable Conversions ADC12CTL0 = ADC12ON + SHS_1 + REFON + REF2_5V + MSC; // turn on ADC12, set samp time ADC12CTL1 = SHP+CONSEQ_2; // Use sampling timer ADC12MCTL0 = SREF_1+INCH_0; // Vr+=VeREF+ (external) //ADC12IFG = 0; //Timer A TACCR0 = 1500; // Delay to allow Ref to settle T ADC12CTL0 |= ENC; // Enable conversions // Initialization of Rs-232// FLL_CTL0 |= XCAP14PF; // Configure load caps do {/TXD UCA0CTL1 |= UCSSEL_1; // CLK = ACLK UCA0BR0 = 0x03; // 32k/9600 - 13.65 UCA0BR1 = 0x00; // UCA0MCTL = 0x06; // Modulation UCA0CTL1 &= ~UCSWRST; // **Initialize USCI state machine** IE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt; // Initialize DMA // Repeat single src, inc dst, interrupts DMACTL0 = DMA0TSEL_6; // ADC12IF set DMACTL1 = DMAONFETCH; __data16_write_addr((unsigned short) &DMA0SA, (unsigned long) &ADC12MEM0); // Source __data16_write_addr((unsigned short) &DMA0DA, (unsigned long) ADCSamples); // Dest DMA0SZ = SAMPLES; // Set DMA Block size ADCSample size BYTES? ITEMS????? DMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE; DMA0CTL |= DMAEN; //Enable DMA ADC12CTL0 |= ADC12SC; //Start Conversions // Need to wait for first sample to complete here. How to? //Serial Loop //Send samples from DMA buffer to serial port. //Assumes that ADC samples faster that the serial port transmits. for(i=0; i<SAMPLES; i++) { x = ADCSamples[i]; // Get a 16 bit sample UCA0TXBUF = x >> 8; // Send upper byte to Serial while(!(IFG2 & UCA0TXIFG)) continue; // Wait for first transmit UCA0TXBUF = x; // Send lower byte to Serial while(!(IFG2 & UCA0TXIFG)) continue; // Wait for second transmit __bis_SR_register(LPM0_bits + GIE); // What does this do? }}
I've put questions marks '?" where I'm not sure what is your intent. Note that the code sends binary values across the serial port. A terminal program such as Hyperterminal wants ASCII characters. Maybe your terminal program has a mode where it will print out binary values in human readable form.
Martin Novotny102956Sorry, I guess I'm not sure why you bothered to even reply to my post when all you did was tell me everything I did wrong and offered no means of assistance.
It won't help you if I jsut write a workign code version for you. You wouldn't learn anything and come back to the forum with your next non-working code. Telling you where you did wrong (and giving you the advice to take a class about usage of C pointers) will, if accepted by you, increase your knowledge and therefore your ability to do it right by yourself, while at the same time decreasing the probability that you come back for more assistance soon - with the very same mistakes.
Martin Novotny102956 I do not have a great understanding of what pointers in C are, but no where in your post did you make any attempt to help me learn or suggest ways to improve my code
Martin Novotny102956you simply told me everything already know which is that my code is obviously flawed
And I never provide complete code. First because I don't have the time to write it and test it (I surely don't want to release untested code), then I don't have the equipment to test the code (there are ~400 MSP derivates, not counting the required external circuitry for each case). And finally, nobody learns walking if he's carried around all the time.I may lend a hand while some tries to walk, but if someone hasn't even discovered that he has legs... well, I have given private lessons in the past. Mainly chemistry, physics and math. But for cash. And back then I had the time for doing so. Today, I have a fulltime job, and it's not being the 'go to expert'. It's not even for TI. I do this is my spare time. Free of charge.
Norman WongJens-Michael Gross is one of more tolerant gurus. You must have caught him at a bad moment.
But now to your code. I didn't check the clock and por tinitialization. However, there are a few things...
Norman Wong //__bis_SR_register(LPM3_bits + GIE); // Wait for delay, Enable interrupts
TACCTL0 &= ~CCIFG; clear interrupt flagwhile (!(TACCTL0&CCIFG)); // wait until TAR has counted to TACCR0
Norman WongIE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt;
Norman Wong DMA0SZ = SAMPLES; // Set DMA Block size ADCSample size BYTES? ITEMS?????
Norman Wong DMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE;
Norman Wong // Need to wait for first sample to complete here. How to?
Norman Wong __bis_SR_register(LPM0_bits + GIE); // What does this do?
However, there is no ISR at all, so this line is not doing any good.It should be replaced by something that checks DMA0SZ:
while(DMA0SZ>=(SAMPLES. | http://e2e.ti.com/support/microcontrollers/msp430/f/166/t/181938.aspx | CC-MAIN-2014-41 | refinedweb | 1,597 | 56.89 |
OFF Parser Documentation¶
About¶
This package provides a class for parsing .OFF (Object File Format) 3D model files for manipulation of the data using Python. In addition, it provides helper functions for downloading and parsing data from the below datasets.
This package is in no way affiliated with these datasets. For terms of dataset usage, please consult each datasets respective licensing and conditions.
Installation¶
The latest development version of this package may be installed via pip:
pip install git+
Parsing Files¶
An .OFF file may be loaded by passing the file path to the class constructor:
from off_parser import OffParser p = OffParser('path/file.off')
The points and faces contained in the file can then be accessed via the points and faces properties.
The 3D model may also be plotted using matplotlib by calling the plot method:
p.plot()
The functions specified in the table in the About section may be used to load their respective datasets. The first time a function is called, the dataset will be downloaded to the operating system’s temporary directory. If the download is interrupted or you encounter a problem with the file, delete the file from the temporary directory and try calling the function again. The dataset files can take some time to download so please be patient. | https://off-parser.readthedocs.io/en/latest/ | CC-MAIN-2021-39 | refinedweb | 215 | 63.49 |
A special type used to provide an address that identifies a set of related analyses. More...
#include "llvm/IR/PassManager.h"
A special type used to provide an address that identifies a set of related analyses.
These sets are primarily used below to mark sets of analyses as preserved.
For example, a transformation can indicate that it preserves the CFG of a function by preserving the appropriate AnalysisSetKey. An analysis that depends only on the CFG can then check if that AnalysisSetKey is preserved; if it is, the analysis knows that it itself is preserved.
Definition at line 82 of file PassManager.h. | https://www.llvm.org/doxygen/structllvm_1_1AnalysisSetKey.html | CC-MAIN-2021-43 | refinedweb | 103 | 54.73 |
I tried to use OMX_GetComponentsOfRole/OMX_GetRolesOfComponent functions, but I couldn't make them work. Is it because they haven't been implemented yet, or it's me not using them correctly?
If it hasn't been implemented yet, when it's going to?
Is there any other way (api documentation) to do it with vcilcs since there is already some functions such as vcil_in_component_role_enum in vcilscs header file ?
here is the code I use:
Code: Select all
#include <stdio.h> #include <stdlib.h> #include <string.h> #include "bcm_host.h" #include "ilclient.h" int main(int argc, char **argv) { bcm_host_init(); if (OMX_Init() != OMX_ErrorNone) { return -1; } OMX_U32 NumRoles; OMX_U8 **roles; OMX_GetRolesOfComponent("OMX.broadcom.audio_decode", &NumRoles, NULL); roles = (OMX_U8**) malloc(NumRoles * sizeof (OMX_U8*)); printf("Number of Roles %d \n", NumRoles); int c; for (c = 0; c < NumRoles; c++) { roles[c] = (OMX_U8*) malloc(OMX_MAX_STRINGNAME_SIZE); } OMX_GetRolesOfComponent("OMX.broadcom.audio_decode", &NumRoles, roles); for (c = 0; c < NumRoles; c++) { printf("Roles:%s\n", roles[c]); } for (c = 0; c < NumRoles; c++) { free(roles[c]); roles[c] = NULL; } free(roles); roles = NULL; OMX_Deinit(); return 0; } | https://lb.raspberrypi.org/forums/viewtopic.php?f=70&t=33827 | CC-MAIN-2019-18 | refinedweb | 177 | 60.01 |
CGI::Ex::App - Anti-framework application framework.
A basic example:
-------- File: /cgi-bin/my_cgi -------- #!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); __PACKAGE__->navigate; exit; sub main_file_print { return \ "Hello World!"; }
Properly put content in an external file...
-------- File: /cgi-bin/my_cgi -------- #!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); __PACKAGE__->navigate; sub template_path { '/var/www/templates' } -------- File: /var/www/templates/my_cgi/main.html -------- Hello World!
Adding substitutions...
-------- File: /cgi-bin/my_cgi -------- #!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); __PACKAGE__->navigate; sub template_path { '/var/www/templates' } sub main_hash_swap { my $self = shift; return { greeting => 'Hello', date => sub { scalar localtime }, }; } -------- File: /var/www/templates/my_cgi/main.html -------- [% greeting %] World! ([% date %])
Add forms and validation (inluding javascript validation)...
-------- File: /cgi-bin/my_cgi -------- #!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); __PACKAGE__->navigate; sub template_path { '/var/www/templates' } sub main_hash_swap { {date => sub { scalar localtime }} } sub main_hash_fill { return { guess => 50, }; } sub main_hash_validation { return { guess => { required => 1, compare1 => '<= 100', compare1_error => 'Please enter a value less than 101', compare2 => '> 0', compare2_error => 'Please enter a value greater than 0', }, }; } sub main_finalize { my $self = shift; my $form = $self->form; $self->add_to_form({was_correct => ($form->{'guess'} == 23)}); return 0; # indicate to show the page without trying to move along } -------- File: /var/www/templates/my_cgi/main.html -------- <h2>Hello World! ([% date %])</h2> [% IF was_correct %] <b>Correct!</b> - The number was [% guess %].<br> [% ELSIF guess %] <b>Incorrect</b> - The number was not [% guess %].<br> [% END %] <form name="[% form_name %]" method="post"> Enter a number between 1 and 100: <input type="text" name="guess"><br> <span id="guess_error" style="color:red">[% guess_error %]</span><br> <input type="submit"> </form> [% js_validation %]
There are infinite possibilities. There is a longer "SYNOPSIS" after the process flow discussion and more examples near the end of this document. It is interesting to note that there have been no databases so far. It is very, very difficult to find a single database abstraction that fits every model. CGI::Ex::App is Controller/Viewer that is somewhat Model agnostic and doesn't come with any default database abstraction. things, in a simple manner, without getting in the developer's way. However, there are various design patterns for CGI applications that CGI::Ex::App handles for you that the other frameworks require you to bring in extra support. The entire CGI::Ex suite has been taylored to work seamlessly together. Your mileage in building applications may vary.
If you build applications that submit user information, validate it, re-display it, fill in forms, or separate logic into separate modules, then this module may be for you. If all you need is a dispatch engine, then this still may be for you. If all you want is to look at user passed information, then this may still be for you. If you like writing bare metal code, this could still be for you. If you don't want to write any code, this module will help - but you still need to provide your key actions and html.
One of the great benefits of CGI::Ex::App vs. Catalyst or Rails style frameworks is that the model of CGI::Ex::App can be much more abstract. And models often are abstract.
The following pseudo-code describes the process flow of the CGI::Ex::App framework. Several portions of the flow are encapsulated in hooks which may be completely overridden to give different flow. All of the default actions are shown. It may look like a lot to follow, but if the process is broken down into the discrete operations of step iteration, data validation, and template printing the flow feels more natural.
The process starts off by calling ->navigate.
navigate { eval { ->pre_navigate ->nav_loop ->post_navigate } # dying errors will run the ->handle_error method ->destroy }
The nav_loop method will run as follows:
nav_loop { ->path (get the array of path steps) # ->path_info_map_base (method - map ENV PATH_INFO to form) # look in ->form for ->step_key # make sure step is in ->valid_steps (if defined) ->pre_loop($path) # navigation stops if true foreach step of path { ->require_auth (hook) # exits nav_loop if true ->morph (hook) # check ->allow_morph (hook) # ->morph_package (hook - get the package to bless into) # ->fixup_after_morph if morph_package exists # if no package is found, process continues in current file ->path_info_map (hook - map PATH_INFO to form) ->run_step (hook) ->refine_path (hook) # only called if run_step returned false (page not printed) ->next_step (hook) # find next step and add to path ->set_ready_validate(0) (hook) ->unmorph (hook) # ->fixup_before_unmorph if blessed to current package # exit loop if ->run_step returned true (page printed) } end of foreach step ->post_loop # navigation stops if true ->default_step ->insert_path (puts the default step into the path) ->nav_loop (called again recursively) } end of nav_loop
For each step of the path the following methods will be run during the run_step hook.
run_step { ->pre_step (hook) # skips this step if true and exit nav_loop ->skip (hook) # skips this step if true and stays in nav_loop ->prepare (hook - defaults to true) ->info_complete (hook - ran if prepare was true) ->ready_validate (hook) ->validate_when_data (hook) # returns false from info_complete if ! ready_validate ->validate (hook - uses CGI::Ex::Validate to validate form info) ->hash_validation (hook) ->file_val (hook) ->vob_path (defaults to template_path) ->base_dir_rel ->name_module ->name_step ->ext_val # returns true if validate is true or if nothing to validate ->finalize (hook - defaults to true - ran if prepare and info_complete were true) if ! ->prepare || ! ->info_complete || ! ->finalize { ->prepared_print ->hash_base (hook) ->hash_common (hook) ->hash_form (hook) ->hash_fill (hook) ->hash_swap (hook) ->hash_errors (hook) # merge form, base, common, and fill into merged fill # merge form, base, common, swap, and errors into merged swap ->print (hook - passed current step, merged swap hash, and merged fill) ->file_print (hook - uses base_dir_rel, name_module, name_step, ext_print) ->swap_template (hook - processes the file with Template::Alloy) ->template_args (hook - passed to Template::Alloy->new) ->fill_template (hook - fills the any forms with CGI::Ex::Fill) ->fill_args (hook - passed to CGI::Ex::Fill::fill) ->print_out (hook - print headers and the content to STDOUT) ->post_print (hook - used for anything after the print process) # return true to exit from nav_loop } ->post_step (hook) # exits nav_loop if true } end of run_step
It is important to learn the function and placement of each of the hooks in the process flow in order to make the most of CGI::Ex::App. It is enough to begin by learning a few common hooks - such as hash_validation, hash_swap, and finalize, and then learn about other hooks as needs arise. Sometimes, it is enough to simply override the run_step hook and take care of processing the entire step yourself.
Because of the hook based system, and because CGI::Ex::App uses sensible defaults, it is very easy to override a little or a lot which ends up giving the developer a lot of flexibility.
Additionally, it should be possible to use CGI::Ex::App with the other frameworks such as CGI::Application or CGI::Prototype. For these you could simple let each "runmode" call the run_step hook of CGI::Ex::App and you will instantly get all of the common process flow for free.
The default out of the box configuration will map URIs to steps as follows:
# Assuming /cgi-bin/my_app is the program being run URI: /cgi-bin/my_app STEP: main FORM: {} WHY: No other information is passed. The path method is called which eventually calls ->default_step which defaults to "main" URI: /cgi-bin/my_app?foo=bar STEP: main FORM: {foo => "bar"} WHY: Same as previous example except that QUERY_STRING information was passed and placed in form. URI: /cgi-bin/my_app?step=my_step STEP: my_step FORM: {step => "my_step"} WHY: The path method is called which looks in $self->form for the key ->step_key (which defaults to "step"). URI: /cgi-bin/my_app?step=my_step&foo=bar STEP: my_step FORM: {foo => "bar", step => "my_step"} WHY: Same as before but another parameter was passed. URI: /cgi-bin/my_app/my_step STEP: my_step FORM: {step => "my_step"} WHY: The path method is called which called path_info_map_base which matched $ENV{'PATH_INFO'} using the default regex of qr{^/(\w+)$} and place the result in $self->form->{$self->step_key}. Path then looks in $self->form->{$self->step_key} for the initial step. See the path_info_map_base method for more information. URI: /cgi-bin/my_app/my_step?foo=bar STEP: my_step FORM: {foo => "bar", step => "my_step"} WHY: Same as before but other parameters were passed. URI: /cgi-bin/my_app/my_step?step=other_step STEP: other_step FORM: {step => "other_step"} WHY: The same procedure took place, but when the PATH_INFO string was matched, the form key "step" already existed and was not replaced by the value from PATH_INFO.
The remaining examples in this section are based on the assumption that the following method is installed in your script.
sub my_step_path_info_map { return [ [qr{^/\w+/(\w+)/(\d+)$}, 'foo', 'id'], [qr{^/\w+/(\w+)$}, 'foo'], [qr{^/\w+/(.+)$}, 'anything_else'], ]; } URI: /cgi-bin/my_app/my_step/bar STEP: my_step FORM: {foo => "bar"} WHY: The step was matched as in previous examples using path_info_map_base. However, the form key "foo" was set to "bar" because the second regex returned by the path_info_map hook matched the PATH_INFO string and the corresponding matched value was placed into the form using the keys specified following the regex. URI: /cgi-bin/my_app/my_step/bar/1234 STEP: my_step FORM: {foo => "bar", id => "1234"} WHY: Same as the previous example, except that the first regex matched the string. The first regex had two match groups and two form keys specified. Note that it is important to order your match regexes in the order that will match the most data. The third regex would also match this PATH_INFO. URI: /cgi-bin/my_app/my_step/some/other/type/of/data STEP: my_step FORM: {anything_else => 'some/other/type/of/data'} WHY: Same as the previous example, except that the third regex matched. URI: /cgi-bin/my_app/my_step/bar?bling=blang STEP: my_step FORM: {foo => "bar", bling => "blang"} WHY: Same as the first sample, but additional QUERY_STRING information was passed. URI: /cgi-bin/my_app/my_step/one%20two?bar=three%20four STEP: my_step FORM: {anything_else => "one two", bar => "three four"} WHY: The third path_info_map regex matched. Note that the %20 in bar was unescaped by CGI::param, but the %20 in anything_else was unescaped by Apache. If you are not using Apache, this behavior may vary. CGI::Ex::App doesn't decode parameters mapped from PATH_INFO.
See the path method for more information about finding the initial step of the path.
The form method calls CGI::Ex::form which uses CGI::param to retrieve GET and POST parameters. See the form method for more information on how GET and POST parameters are parsed.
See the path_info_map_base method, and path_info_map hook for more information on how the path_info maps function.
Using the following code is very useful for determing what hooks have taken place:
use CGI::Ex::Dump qw(debug); sub post_navigate { my $self = shift; debug $self->dump_history, $self->form; }
CGI::Ex::App uses CGI::Ex::Validate for its data validation. See CGI::Ex::Validate for more information about the many ways you can validate your data.
The default hash_validation hook returns an empty hashref. This means that passed in data is all valid and the script will automatically call the step's finalize method.
The following shows how to add some contrived validation to a step called "my_step".
sub my_step_hash_validation { return { username => { required => 1, match => 'm/^(\w+)$/', match_error => 'The $field field may only contain word characters', max_len => 20, }, password => { required => 1, max_len => 15, }, password_verify => { validate_if => 'password', equals => 'password', }, usertype => { required => 1, enum => [qw(animal vegetable mineral)], }, }; }
The step will continue to display the html form until all of the fields pass validation.
See the hash_validation hook and validate hook for more information about how this takes place.
You must first provide a hash_validation hook as explained in the previous section.
Once you have a hash_validation hook, you would place the following tags into your HTML template.
<form name="[% form_name %]" method="post"> ... </form> [% js_validation %]
The "form_name" swap-in places a name on the form that the javascript returned by the js_validation swap-in will be able to find and check for validity.
See the hash_validation, js_validation, and form_name hooks for more information.
Also, CGI::Ex::validate.js allows for inline errors in addition to or in replacement of an alert message. To use inline errors, you must provide an element in your HTML document where this inline message can be placed. The common way to do it is as follows:
<input type="text" name="username"><br> <span class="error" id="username_error">[% username_error %]</span>
The span around the error allows for the error css class and it provides a location that the Javascript validation can populate with errors. The [% username_error %] provides a location for errors generated on the server side to be swapped in. If there was no error the [% username_error %] tag would default to "".
All variables returned by the hash_base, hash_common, hash_form, hash_swap, and hash_errors hooks are available for swapping in templates.
The following shows how to add variables using the hash_swap hook on the step "main".
sub main_hash_swap { return { color => 'red', choices => [qw(one two three)], "warn" => sub { warn @_ }, }; }
You could also return the fields from the hash_common hook and they would be available in both the template swapping as well as form filling.
See the hash_base, hash_common, hash_form, hash_swap, hash_errors, swap_template, and template_args hooks for more information.
The default template engine used is Template::Alloy. The default interface used is TT which is the Template::Toolkit interface. Template::Alloy allows for using TT documents, HTML::Template documents, HTML::Template::Expr documents, Text::Tmpl documents, or Velocity (VTL) documents. See the Template::Alloy documentation for more information.
All variables returned by the hash_base, hash_common, hash_form, and hash_fill hooks are available for filling html fields in on templates.
The following shows how to add variables using the hash_fill hook on the step "main".
sub main_hash_fill { return { color => 'red', choices => [qw(one two three)], }; }
You could also return the fields from the hash_common hook and they would be available in both the form filling as well as in the template swapping.
See the hash_base, hash_common, hash_form, hash_swap, hash_errors, fill_template, and fill_args hooks for more information.
The default form filler is CGI::Ex::Fill which is similar to HTML::FillInForm but has several benefits. See the CGI::Ex::Fill module for the available options.
CGI::Ex::App tries to help your applications use a good template directory layout, but allows for you to override everything.
External template files are used for storing your html templates and for storing your validation files (if you use externally stored validation files).
The default file_print hook will look for content on your file system, but it can also be completely overridden to return a reference to a scalar containing the contents of your file (beginning with version 2.14 string references can be cached which makes templates passed this way "first class" citizens). Actually it can return anything that Template::Alloy (Template::Toolkit compatible) will treat as input. This templated html is displayed to the user during any step that enters the "print" phase.
Similarly the default file_val hook will look for a validation file on the file system, but it too can return a reference to a scalar containing the contents of a validation file. It may actually return anything that the CGI::Ex::Validate get_validation method is able to understand. This validation is used by the default "info_complete" method for verifying if the submitted information passes its specific checks. A more common way of inlining validation is to return a validation hash from a hash_validation hook override.
If the default file_print and file_val hooks are used, the following methods are employed for finding templates and validation files on your filesystem (they are also documented more in the HOOKS AND METHODS section.
Absolute path or arrayref of paths to the base templates directory. Defaults to base_dir_abs which defaults to ['.'].
Relative path inside of the template_path directory where content can be found. Default "".
Directory inside of base_dir_rel where files for the current CGI (module) will be stored. Default value is $ENV{SCRIPT_NAME} with path and extension removed.
Used with ext_print and ext_val for creating the filename that will be looked for inside of the name_module directory. Default value is the current step.
Filename extensions added to name_step to create the filename looked for inside of the name_module directory. Default is "html" for ext_print and "val" for ext_val.
It may be easier to understand the usage of each of these methods by showing a contrived example. The following is a hypothetical layout for your templates:
/home/user/templates/ /home/user/templates/chunks/ /home/user/templates/wrappers/ /home/user/templates/content/ /home/user/templates/content/my_app/ /home/user/templates/content/my_app/main.html /home/user/templates/content/my_app/step1.html /home/user/templates/content/my_app/step1.val /home/user/templates/content/another_cgi/main.html
In this example we would most likely set values as follows:
template_path /home/user/templates base_dir_rel content name_module my_app
The name_module method defaults to the name of the running program, but with the path and extension removed. So if we were running /cgi-bin/my_app.pl, /cgi-bin/my_app, or /anypath/my_app, then name_module would default to "my_app" and we wouldn't have to hard code the value. Often it is wise to set the value anyway so that we can change the name of the cgi script without effecting where template content should be stored.
Continuing with the example and assuming that name of the step that the user has requested is "step1" then the following values would be returned:
template_path /home/user/templates base_dir_rel content name_module my_app name_step step1 ext_print html ext_val val file_print content/my_app/step1.html file_val /home/user/templates/content/my_app/step1.val
The call to the template engine would look something like the following:
my $t = $self->template_obj({ INCLUDE_PATH => $self->template_path, # defaults to base_dir_abs }); $t->process($self->file_print($step), \%vars);
The template engine would then look for the relative file inside of the absolute paths (from template_path).
The call to the validation engine would pass the absolute filename that is returned by file_val.
The name_module and name_step methods can return filenames with additional directories included. The previous example could also have been setup using the following values:
template_path /home/user/templates base_dir_rel name_module content/my_app
In this case the same values would be returned for the file_print and file_val hooks as were returned in the previous setup.
This example script would most likely be in the form of a cgi, accessible via the path (or however you do CGIs on your system. About the best way to get started is to paste the following code into a cgi script (such as cgi-bin/my_app) and try it out. A detailed walk-through follows in the next section. There is also a longer recipe database example at the end of this document that covers other topics including making your module a mod_perl handler.
### File: /var/www/cgi-bin/my_app (depending upon Apache configuration) ### -------------------------------------------- #!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); use CGI::Ex::Dump qw(debug); __PACKAGE__->navigate; # OR # my $obj = __PACKAGE__->new; # $obj->navigate; exit; ###------------------------------------------### sub post_navigate { # show what happened debug shift->dump_history; } sub main_hash_validation { return { 'general no_alert' => 1, 'general no_confirm' => 1, 'group order' => [qw(username password password2)], username => { required => 1, min_len => 3, max_len => 30, match => 'm/^\w+$/', match_error => 'You may only use letters and numbers.', }, password => { required => 1, min_len => 6, }, password2 => { equals => 'password', }, }; } sub main_file_print { # reference to string means ref to content # non-reference means filename return \ "<h1>Main Step</h1> <form method=post name=[% form_name %]> <input type=hidden name=step> <table> <tr> <td><b>Username:</b></td> <td><input type=text name=username><span style='color:red' id=username_error>[% username_error %]</span></td> </tr><tr> <td><b>Password:</b></td> <td><input type=text name=password><span style='color:red' id=password_error>[% password_error %]</span></td> </tr><tr> <td><b>Verify Password:</b></td> <td><input type=text name=password2><span style='color:red' id=password2_error>[% password2_error %]</span></td> </tr> <tr><td colspan=2 align=right><input type=submit></td></tr> </table> </form> [% js_validation %] "; } sub main_finalize { my $self = shift; if ($self->form->{'username'} eq 'bar') { $self->add_errors(username => 'A trivial check to say the username cannot be "bar"'); return 0; } debug $self->form, "Do something useful with form here in the finalize hook."; ### add success step $self->add_to_swap({success_msg => "We did something"}); $self->append_path('success'); $self->set_ready_validate(0); return 1; } sub success_file_print { \ "<div style=background:lightblue> <h1>Success Step - [% success_msg %]</h1> Username: <b>[% username %]</b><br> Password: <b>[% password %]</b><br> </div> "; } __END__
Note: This example would be considerably shorter if the html file (file_print) and the validation file (file_val) had been placed in separate files. Though CGI::Ex::App will work "out of the box" as shown it is more probable that any platform using it will customize the various hooks to their own tastes (for example, switching print to use a templating system other than Template::Alloy).
This section goes step by step over the previous example.
Well - we start out with the customary CGI introduction.
#!/usr/bin/perl -w use strict; use base qw(CGI::Ex::App); use CGI::Ex::Dump qw(debug);
Note: the "use base" is not normally used in the "main" portion of a script. It does allow us to just do __PACKAGE__->navigate.
Now we need to invoke the process:
__PACKAGE__->navigate; # OR # my $obj = __PACKAGE__->new; # $obj->navigate; exit;
Note: the "exit" isn't necessary - but it is kind of nice to infer that process flow doesn't go beyond the ->navigate call.
The navigate routine is now going to try and "run" through a series of steps. Navigate will call the ->path method which should return an arrayref containing the valid steps. By default, if path method has not been overridden, the path method will default first to the step found in form key named ->step_name, then it will fall to the contents of $ENV{'PATH_INFO'}. If navigation runs out of steps to run it will run the step found in ->default_step which defaults to 'main'. So the URI '/cgi-bin/my_app' would run the step 'main' first by default. The URI '/cgi-bin/my_app?step=foo' would run the step 'foo' first. The URI '/cgi-bin/my_app/bar' would run the step 'bar' first.
CGI::Ex::App allows for running steps in a preset path or each step may choose the next step that should follow. The navigate method will go through one step of the path at a time and see if it is completed (various methods determine the definition of "completed"). This preset type of path can also be automated using the CGI::Path module. Rather than using a preset path, CGI::Ex::App also has methods that allow for dynamic changing of the path, so that each step can determine which step to do next (see the goto_step, append_path, insert_path, and replace_path methods).
During development it would be nice to see what happened during the course of our navigation. This is stored in the arrayref contained in ->history. There is a method that is called after all of the navigation has taken place called "post_navigate". This chunk will display history after we have printed the content.
sub post_navigate { debug shift->dump_history; } # show what happened
Ok. Finally we are looking at the methods used by each step of the path. The hook mechanism of CGI::Ex::App will look first for a method ${step}_${hook_name} called before falling back to the method named $hook_name. Internally in the code there is a call that looks like $self->run_hook('hash_validation', $step). In this case the step is main. The dispatch mechanism finds our method at the following chunk of code.
sub main_hash_validation { ... }
The process flow will see if the data is ready to validate. Once it is ready (usually when the user presses the submit button) the data will be validated. The hash_validation hook is intended to describe the data and will be tested using CGI::Ex::Validate. See the CGI::Ex::Validate perldoc for more information about the many types of validation available.
sub main_file_print { ... }
The navigation process will see if user submitted information (the form) is ready for validation. If not, or if validation fails, the step needs to be printed. Eventually the file_print hook is called. This hook should return either the filename of the template to be printed, or a reference to the actual template content. In this example we return a reference to the content to be printed (this is useful for prototyping applications and is also fine in real world use - but generally production applications use external html templates).
A few things to note about the template:
First, we add a hidden form field called step. This will be filled in automatically at a later point with the current step we are on.
We provide locations to swap in inline errors.
<span style="color:red" id="username_error">[% username_error %]</span>
As part of the error html we name each span with the name of the error. This will allow for us to have Javascript update the error spots when the javascript finds an error.
At the very end we add the TT variable [% js_validation %]. This swap in is provided by the default hash_base hook and will provide for form data to be validated using javascript.
Once the process flow has deemed that the data is validated, it then calls the finalize hook. Finalize is where the bulk of operations should go. We'll look at it more in depth.
sub main_finalize { my $self = shift; my $form = $self->form;
At this point, all of the validated data is in the $form hashref.
if ($form->{'username'} eq 'bar') { $self->add_errors(username => 'A trivial check to say the username cannot be "bar"'); return 0; }
It is most likely that though the data is of the correct type and formatting, it still isn't completely correct. This previous section shows a hard coded test to see if the username was 'bar'. If it was then an appropriate error will be set, the routine returns 0 and the run_step process knows that it needs to redisplay the form page for this step. The username_error will be shown inline. The program could do more complex things such as checking to see if the username was already taken in a database.
debug $form, "Do something useful with form here in the finalize hook.";
This debug $form piece is simply a place holder. It is here that the program would do something useful such as add the information to a database.
### add success step $self->add_to_swap({success_msg => "We did something"});
Now that we have finished finalize, we add a message that will be passed to the template engine.
$self->append_path('success'); $self->set_ready_validate(0);
The program now needs to move on to the next step. In this case we want to follow with a page that informs us we succeeded. So, we append a step named "success". We also call set_ready_validate(0) to inform the navigation control that the form is no longer ready to validate - which will cause the success page to print without trying to validate the data. It is normally a good idea to set this as leaving the engine in a "ready to validate" state can result in an recursive loop (that will be caught).
return 1; }
We then return 1 which tells the engine that we completed this step successfully and it needs to move on to the next step.
Finally we run the "success" step because we told it to. That step isn't ready to validate so it prints out the template page.
For more of a real world example, it would be good to read the sample recipe db application included at the end of this document.
CGI::Ex::App's dispatch system works on the principles of hooks (which are essentially glorified method lookups). When the run_hook method is called, CGI::Ex::App will look for a corresponding method call for that hook for the current step name. It is perhaps easier to show than to explain.
If we are calling the "print" hook for the step "edit" we would call run_hook like this:
$self->run_hook('print', 'edit', $template, \%swap, \%fill);
This would first look for a method named "edit_print". If it is unable to find a method by that name, it will look for a method named "print". If it is unable to find this method - it will die.
If allow_morph is set to true, the same methods are searched for but it becomes possible to move some of those methods into an external package.
See the discussions under the methods named "find_hook" and "run_hook" for more details.
Some hooks expect "magic" values to be replaced. Often they are intuitive, but sometimes it is easy to forget. For example, the finalize hook should return true (default) to indicate the step is complete and false to indicate that it failed and the page should be redisplayed. You can import a set of constants that allows for human readible names.
use CGI::Ex::App qw(:App__finalize); OR use MyAppPkg qw(:App__finalize); # if it is a subclass of CGI::Ex::App
This would import the following constants: App__finalize__failed_and_show_page (0), App__finalize__finished_and_move_to_next_step => (1 - default), and App__finalize__finished_but_show_page ("" - still false). These constants are provided by CGI::Ex::App::Constants which also contains more options for usage.
The following is the alphabetical list of methods and hooks.
Should return true if this step is allowed to "morph" the current App object into another package. Default is false. It is passed a single argument of the current step. For more granularity, if true value is a hash, the step being morphed to must be in the hash.
If the returned value is "1", and the module doesn't exist, then the App will continue to run blessed into the current package. If there is an error requiring the module or if the module doesn't exist and the return value is "2" (true but not 1), then App will die with the appropriate error.
To enable morphing for all steps, add the following: (Packages that don't exists won't be morphed to)
sub allow_morph { 1 }
To force morphing for all steps add the following:
sub allow_morph { 2 }
To enable morph on specific steps, do either of the following:
sub allow_morph { return { edit => 1, delete => 2, # must morph }; } # OR sub allow_morph { my ($self, $step) = @_; return 1 if $step eq 'edit'; return 2 if $step eq 'delete'; return; }
See the morph "hook" for more information.
Arguments are the steps to append. Can be called any time. Adds more steps to the end of the current path.
Should return a hashref that will be passed to the auth_obj method which should return a CGI::Ex::Auth compatible object. It is augmented with arguments that integrate it into CGI::Ex::App.
See the get_valid_auth method and the CGI::Ex::Auth documentation.
sub auth_args { return { login_header => '<h1>My login header</h1>', login_footer => '[% TRY %][% INCLUDE login/login_footer.htm %][% CATCH %]<!-- [% error %] -->[% END %]', secure_hash_keys => [qw(aaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbb ccccccccccccccccccccccc 2222222222222)], # use_blowfish => 'my_blowfish_key', }; }
Contains authentication data stored during the get_valid_auth method. The data is normally blessed into the CGI::Ex::Auth::Data package which evaluates to false if there was an error and true if the authentication was successful - so this data can be defined but false.
See the get_valid_auth method.
Passed auth_args. Should return a CGI::Ex::Auth compatible object. Default is to call CGI::Ex::Auth->new with the passed args.
Used as the absolute base directory to find template, validation and conf files. It may return a single value or an arrayref of values, or a coderef that returns an arrayref or coderef of values. You may pass base_dir_abs as a parameter in the arguments passed to the "new" method.
Default value is ['.'].
For example, to pass multiple paths, you would use something similar to the following:
sub base_dir_abs { return ['/my/path/one', '/some/other/path']; }
The base_dir_abs value is used by template_path along with the base_dir_rel, name_module, name_step, ext_print and ext_values for determining the values returned by the default file_print and file_val hooks. See those methods for further discussion.
See the section on FINDING TEMPLATES for further discussion.
The base_dir_abs method is also used as the default value for conf_path and vob_path.
Added as a relative base directory to content under the base_dir_abs directory.
Default value is "".
The template_path method is used as top level where template includes may pull from, while the base_dir_rel is directory relative to the template_path where the content files will be stored.
A value for base_dir_rel may passed as a parameter in the arguments passed to the new method.
See the template_path and base_dir_abs methods for more discussion.
See the section on FINDING TEMPLATES for further discussion.
Used as a hook during get_valid_auth. Allows for cleaning up the username. See the get_valid_auth method.
sub cleanup_user { my ($self, $user) = @_; return lc $user; }
If the same CGI::Ex::App based object is used to run multiple navigate sessions, the clear_app method should be called which will attempt to clear as much session information as it can. The following items will be cleared:
cgix vob form cookies stash path path_i history _morph_lineage_start_index _morph_lineage hash_errors hash_fill hash_swap hash_common
Used by default in init_from_conf if load_conf returns true. Will try to read the file returned by the conf_file method using the object returned by conf_obj using that object's read method. If conf_validation returns a non-empty hashref, the conf hash will be validated using $self->vob->validate (see the validate method).
This method may be used for other purposes as well (including when load_conf is false)..
Caches results in $self->{'conf'}.
If the conf_file can't be found, the method will die unless conf_die_on_fail returns 0 (defaults to true).
Used by conf_obj.
Defaults to $self->{'conf_args'} which defaults to {}. Will have paths => $self->conf_path added before passing to CGI::Ex::Conf->new.
Used by conf for finding the configuration file to load. Defaults to $self->{'conf_file'} which defaults $self->name_module with the extention returned by $self->ext_conf added on. For example, if name_module returns "my_app" and ext_conf returns "ini" the value returned will be "my_app.ini".
The value returned can absolute. If the value will be searched for in the paths passed to conf_obj.
The ext_conf may be any of those extentions understood by CGI::Ex::Conf.
Used by the conf method to load the file returned by conf_file. Defaults to conf_obj which defaults to loading args from conf_args, adding in paths returned by conf_path, and calling CGI::Ex::Conf->new.
Any object that provides a read method that returns a hashref can be used.
Defaults to $self->{'conf_path'} which defaults to base_dir_abs. Should be a path or an arrayref of paths to look the configuration file returned by conf_file when that file is not absolute.
Used by default conf method. Defaults to an empty hashref. If non-empty hashref is passed, the hashref returned by conf_obj->read will be validated using the hashref returned by conf_validation.
Returns the current step that the nav_loop is functioning on.
Step to show if the path runs out of steps. Default value is the 'default_step' property which defaults to 'main'.
If nav_loop runs of the end of the path (runs out of steps), this method is called, the step is added to the path, and nav_loop calls itself recursively.
Called at the end of navigate after all other actions have run. Can be used for undoing things done in the ->init method called during the ->new method.
Show simplified trace information of which steps were called, the order they were called in, the time they took to run, and a brief list of the output (to see the full response returned by each hook, pass a true value as the only argument to dump_history - $self->dump_history(1)). Indentation is also applied to show which hooks called other hooks.
The first line shows the amount of time elapsed for the entire navigate execution. Subsequent lines contain:
Step - the name of the current step. Hook - the name of the hook being called. Found - the name of the method that was found. Time - the total elapsed seconds that method took to run. Output - the response of the hook - shown in shortened form.
Note - to get full output responses - pass a true value to dump_history - or just call ->history. Times displayed are to 5 decimal places - this accuracy can only be provided if the Time::HiRes module is installed on your system (it will only be used if installed).
It is usually best to print this history during the post_navigate method as in the following:
use CGI::Ex::Dump qw(debug); sub post_navigate { debug shift->dump_history }
The following is a sample output of dump_history called from the sample recipe application at the end of this document. The step called is "view".
debug: admin/Recipe.pm line 14 shift->dump_history = [ "Elapsed: 0.00562", "view - require_auth - require_auth - 0.00001 - 0", "view - run_step - run_step - 0.00488 - 1", " view - pre_step - pre_step - 0.00003 - 0", " view - skip - view_skip - 0.00004 - 0", " view - prepare - prepare - 0.00003 - 1", " view - info_complete - info_complete - 0.00010 - 0", " view - ready_validate - ready_validate - 0.00004 - 0", " view - prepared_print - prepared_print - 0.00441 - 1", " view - hash_base - hash_base - 0.00009 - HASH(0x84ea6ac)", " view - hash_common - view_hash_common - 0.00148 - HASH(0x8310a20)", " view - hash_form - hash_form - 0.00004 - HASH(0x84eaa78)", " view - hash_fill - hash_fill - 0.00003 - {}", " view - hash_swap - hash_swap - 0.00003 - {}", " view - hash_errors - hash_errors - 0.00003 - {}", " view - print - print - 0.00236 - 1", " view - file_print - file_print - 0.00024 - recipe/view.html", " view - name_module - name_module - 0.00007 - recipe", " view - name_step - name_step - 0.00004 - view", " view - swap_template - swap_template - 0.00161 - <html> ...", " view - template_args - template_args - 0.00008 - HASH(0x865abf8)", " view - fill_template - fill_template - 0.00018 - 1", " view - fill_args - fill_args - 0.00003 - {}", " view - print_out - print_out - 0.00015 - 1", " view - post_print - post_print - 0.00003 - 0" ];
Defaults to "__error". The name of a step to run should a dying error be caught by the default handle_error method. See the handle_error method.
This method should not normally used but there is no problem with using it on a regular basis. Essentially it is a "goto" that allows for a long jump to the end of all nav_loops (even if they are recursively nested). This effectively short circuits all remaining hooks for the current and remaining steps. It is used to allow the ->goto_step functionality. If the application has morphed, it will be unmorphed before returning. Also - the post_navigate method will still be called.
Used by the default conf_file method. Defaults to $self->{'ext_conf'} which defaults to 'pl' meaning that the read configuration file should return a valid perl hashref.
Added as suffix to "name_step" during the default file_print hook.
Default value is 'html'.
For example, if name_step returns "foo" and ext_print returns "html" then the file "foo.html" will be searched for.
See the section on FINDING TEMPLATES for further discussion.
Added as suffix to "name_step" during the default file_val hook.
Default value is 'val'.
For example, if name_step returns "foo" and ext_val returns "val" then the file "foo.val" will be searched for.
See the section on FINDING TEMPLATES for further discussion.
Returns a hashref of args that will be passed to the CGI::Ex::Fill::fill. It is augmented with the template to swap and the fill hash. This could be useful if you needed to only swap a particular form on the template page. Arguments are passed directly to the fill function.
sub fill_args { {target => 'my_form'} }
Arguments are a template and a hashref. Takes the template that was prepared using swap_template, and swaps html form fields using the passed hashref. Overriding this method can control the fill behavior.
Calls the fill_args hook prior to calling CGI::Ex::Fill::fill
Returns a filename of the content to be used in the default print hook. Adds method base_dir_rel to hook name_module, and name_step and adds on the default file extension found in $self->ext_print which defaults to the property $self->{ext_print} which will default to ".html". Should return a filename relative to template_path that can be swapped using Template::Alloy, or should be a scalar reference to the template content that can be swapped. This will be used by the hook print.
sub template_path { '/var/www/templates' } sub base_dir_rel { 'content' } sub name_module { 'recipe' } sub ext_print { 'html' } # default # ->file_print('this_step') # would return 'content/recipe/this_step.html' # the template engine would look in '/var/www/templates' # for a file by that name
It may also return a reference to a string containing the html template. This is useful for prototyping applications and/or keeping all of the data for the application in a single location.
Returns a filename containing the validation. Performs the same as file_print, but uses ext_val to get the extension, and it adds vob_path (which defaults to template_path which defaults to base_dir_abs) onto the returned value (file_print is relative to template_path, while file_val is fully qualified with vob_path). If vob_path returns an arrayref of paths, then each path is checked for the existence of the file.
The file should be readable by CGI::Ex::Validate::get_validation.
This hook is only necessary if the hash_validation hook has not been overridden. 5B This method an also return a hashref containing the validation - but then you may have wanted to override the hash_validation hook.
Defaults to true. Used to do whatever needs to be done with the data once prepare has returned true and info_complete has returned true. On failure the print operations are ran. On success navigation moves on to the next step.
This is normally were there core logic of a script will occur (such as adding to a database, or updating a record). At this point, the data should be validated. It is possible to do additional validation and return errors using code such as the following.
if (! $user_is_unique) { $self->add_errors(username => 'The username was already used'); return 0; }
Called by run_hook. Arguments are a hook name, a step name. It should return an arrayref containing the code_ref to run, and the name of the method looked for. It uses ->can to find the appropriate hook.
my $code = $self->hook('finalize', 'main'); ### will look first for $self->main_finalize; ### will then look for $self->finalize;
This system is used to allow for multiple steps to be in the same file and still allow for moving some steps out to external sub classed packages (if desired).
If the application has successfully morphed via the morph method and allow_morph then it is not necessary to add the step name to the beginning of the method name as the morphed packages method will override the base package (it is still OK to use the full method name "${step}_hookname").
See the run_hook method and the morph method for more details.
Returns the first step of the path. Note that first_step may not be the same thing as default_step if the path was overridden.
Defaults to "__forbidden". The name of a step to run should the current step name be invalid, or if a step found by the default path method is invalid. See the path method.
Returns a hashref of the items passed to the CGI. Returns $self->{form} which defaults to CGI::Ex::get_form.
Return the name of the form to attach the js validation to. Used by js_validation.
This method is passed a username and the authentication object. It should return the password for the given user. See the get_pass_by_user method of CGI::Ex::Auth for more information. Installed as a hook to the authentication object during the get_valid_auth method.
If require_auth hook returns true on any given step then get_valid_auth will be called.
It will call auth_args to get some default args to pass to CGI::Ex::Auth->new. It augments the args with sensible defaults that App already provides (such as form, cookies, and template facilities). It also installs hooks for the get_pass_by_user, cleanup_user, and verify_user hooks of CGI::Ex::Auth.
It stores the $auth->last_auth_data in $self->auth_data for later use. For example, to get the authenticated user:
sub require_auth { 1 } sub cleanup_user { my ($self, $user) = @_; return lc $user; } sub get_pass_by_user { my ($self, $user) = @_; my $pass = $self->some_method_to_get_the_pass($user); return $pass; } sub auth_args { return { login_header => '<h1>My login header</h1>', login_footer => '[% TRY %][% INCLUDE login/login_footer.htm %][% CATCH %]<!-- [% error %] -->[% END %]', }; } sub main_hash_swap { my $self = shift; my $user = $self->auth_data->{'user'}; return {user => $user}; }
Successful authentication is cached for the duration of the nav_loop so multiple steps will run the full authentication routine only once.
Full customization of the login process and the login template can be done via the auth_args hash. See the auth_args method and CGI::Ex::Auth perldoc for more information.
This method is not normally used but can solve some difficult issues. It provides for moving to another step at any point during the nav_loop. Once a goto_step has been called, the entire nav_loop will be exited (to simply replace a portion of a step, you can simply run_hook('run_step', 'other_step')). The method goto_step effectively short circuits the remaining hooks for the current step. It does increment the recursion counter (which has a limit of ->recurse_limit - default 15). Normally you would allow the other hooks in the loop to carry on their normal functions and avoid goto_step. (Essentially, this hook behaves like a goto method to bypass everything else and continue at a different location in the path - there are times when it is necessary or useful to do this).
The method jump is an alias for this method.
Goto_step takes a single argument which is the location in the path to jump to. This argument may be either a step name, the special strings "FIRST, LAST, CURRENT, PREVIOUS, OR NEXT" or the number of steps to jump forward (or backward) in the path. The default value, 1, indicates that CGI::Ex::App should jump to the next step (the default action for goto_step). A value of 0 would repeat the current step (watch out for recursion). A value of -1 would jump to the previous step. The special value of "LAST" will jump to the last step. The special value of "FIRST" will jump back to the first step. In each of these cases, the path array returned by ->path is modified to allow for the jumping (the path is modified so that the path history is not destroyed - if we were on step 3 and jumped to one, that path would contain 1, 2, 3, *1, 2, 3, 4, etc and we would be at the *). If a step name is not currently on the path, it will be replace any remaining steps of the path.
# goto previous step (repeat it) $self->goto_step($self->previous_step); $self->goto_step('PREVIOUS'); $self->goto_step(-1); # goto next step $self->goto_step($self->next_step); $self->goto_step('NEXT'); $self->goto_step(1); $self->goto_step; # goto current step (repeat) $self->goto_step($self->current_step); $self->goto_step('CURRENT'); $self->goto_step(0); # goto last step $self->goto_step($self->last_step); $self->goto_step('LAST'); # goto first step (repeat it) $self->goto_step($self->first_step); $self->goto_step('FIRST');
If anything dies during execution, handle_error will be called with the error that had happened. Default action is to try running the step returned by the error_step method.
A hash of base items to be merged with hash_form - such as pulldown menus, javascript validation, etc. It will now also be merged with hash_fill, so it can contain default fillins as well. It can be populated by passing a hash to ->add_to_base. By default a sub similar to the following is what is used for hash_common. Note the use of values that are code refs - so that the js_validation and form_name hooks are only called if requested:
sub hash_base { my ($self, $step) = @_; return $self->{hash_base} ||= { script_name => $ENV{SCRIPT_NAME}, js_validation => sub { $self->run_hook('js_validation', $step) }, form_name => sub { $self->run_hook('form_name', $step) }, }; }
Almost identical in function and purpose to hash_base. It is intended that hash_base be used for common items used in various scripts inheriting from a common CGI::Ex::App type parent. Hash_common is more intended for step level populating of both swap and fill.
Called in preparation for print after failed prepare, info_complete, or finalize. Should contain a hash of any errors that occurred. Will be merged into hash_form before the pass to print. Each error that occurred will be passed to method format_error before being added to the hash. If an error has occurred, the default validate will automatically add {has_errors =>1}. To the error hash at the time of validation. has_errors will also be added during the merge in case the default validate was not used. Can be populated by passing a hash to ->add_to_errors or ->add_errors.
Called in preparation for print after failed prepare, info_complete, or finalize. Should contain a hash of any items needed to be filled into the html form during print. Items from hash_form, hash_base, and hash_common will be layered together. Can be populated by passing a hash to ->add_to_fill.
By default - forms are sticky and data from previous requests will try and populate the form. You can use the fill_template hook to disable templating on a single page or on all pages.
This method can be used to pre-populate the form as well (such as on an edit step). If a form fails validation, hash_fill will also be called and will only want the submitted form fields to be sticky. You can use the ready_validate hook to prevent pre-population in these cases as follows:
sub edit_hash_fill { my $self = shift; my $step = shift; return {} if $self->run_hook('ready_validate', $step); my %hash; ### get previous values from the database return \%hash; }
Called in preparation for print after failed prepare, info_complete, or finalize. Defaults to ->form. Can be populated by passing a hash to ->add_to_form.
Called in preparation for print after failed prepare, info_complete, or finalize. Should contain a hash of any items needed to be swapped into the html during print. Will be merged with hash_base, hash_common, hash_form, and hash_errors. Can be populated by passing a hash to ->add_to_swap.
The hash will be passed as the second argument to swap_template.
Returns a hash of the validation information to check form against. By default, will look for a filename using the hook file_val and will pass it to CGI::Ex::Validate::get_validation. If no file_val is returned or if the get_validation fails, an empty hash will be returned. Validation is implemented by ->vob which loads a CGI::Ex::Validate object.
Returns an arrayref which contains trace history of which hooks of which steps were ran. Useful for seeing what happened. In general - each line of the history will show the current step, the hook requested, and which hook was actually called.
The dump_history method shows a short condensed version of this history which makes it easier to see what path was followed.
In general, the arrayref is free for anything to push onto which will help in tracking other occurrences in the program as well.
Calls the ready_validate hook to see if data is ready to validate. If so it calls the validate hook to validate the data. Should make sure the data is ready and valid. Will not be run unless prepare returns true (default).
Called by the default new method. Allows for any object initilizations that may need to take place. Default action does nothing.
Called by the default new method. If load_conf is true, then the conf method will be called and the keys returned will be added to the $self object.
This method is called after the init method. If you need to further fix up values added during init_from_conf, you can use the pre_navigate method.
Arguments are the steps to insert. Can be called any time. Inserts the new steps at the current path location.
Returns true if the object has successful authentication data. It returns false if the object has not been authenticated.
Return the URI path where the CGI/Ex/yaml_load.js and CGI/Ex/validate.js files can be found. This will default to "$ENV{SCRIPT_NAME}/js" if the path method has not been overridden, otherwise it will default to "$ENV{SCRIPT_NAME}?step=js&js=" (the latter is more friendly with overridden paths). A default handler for the "js" step has been provided in "js_run_step" (this handler will nicely print out the javascript found in the js files which are included with this distribution. js_run_step will work properly with the default "path" handler.
Will return Javascript that is capable of validating the form. This is done using the capabilities of CGI::Ex::Validate and CGI::Ex::JSONDump. This will call the hook hash_validation which will then be encoded into json and placed in a javascript string. It will also call the hook form_name to determine which html form to attach the validation to. The method js_uri_path is called to determine the path to the appropriate validate.js files. In order to make use of js_validation, it must be added to the variables returned by either the hash_base (default), hash_common, hash_swap or hash_form hook (see examples of hash_base used in this doc).
Alias for the goto_step method.
Returns the last step of the path.
Defaults to ->{load_conf} which defaults to false. If true, will allow keys returned by the conf method to be added to $self during the init_from_conf method.
Enabling this method allows for out-of-the-box file based configuration.
Allows for temporarily "becoming" another object type for the execution of the current step. This allows for separating some steps out into their own packages.
Morph will only run if the method allow_morph returns true. Additionally if the allow_morph returns a hash ref, morph will only run if the step being morphed to is in the hash. Morph also passes the step name to allow_morph.
The morph call occurs at the beginning of the step loop. A corresponding unmorph call occurs before the loop is exited. An object can morph several levels deep. For example, an object running as Foo::Bar that is looping on the step "my_step" that has allow_morph = 1, will do the following:
Call the morph_package hook (which would default to returning Foo::Bar::MyStep in this case) Translate this to a package filename (Foo/Bar/MyStep.pm) and try and require it, if the file can be required, the object is blessed into that package. Call the fixup_after_morph method. Continue on with the run_step for the current step.
At any exit point of the loop, the unmorph call is made which re-blesses the object into the original package.
Samples of allowing morph:
sub allow_morph { 1 } # value of 1 means try to find package, ok if not found sub allow_morph { {edit => 1} } sub allow_morph { my ($self, $step) = @_; return $step eq 'edit' }
Used by morph. Return the package name to morph into during a morph call. Defaults to using the current object type as a base. For example, if the current object running is a Foo::Bar object and the step running is my_step, then morph_package will return Foo::Bar::MyStep.
Because of the way that run_hook works, it is possible that several steps could be located in the same external file and overriding morph_package could allow for this to happen.
See the morph method.
Return the name (relative path) that should be pre-pended to name_step during the default file_print and file_val lookups. Defaults to the value in $self->{name_module} which in turn defaults to the name of the current script.
cgi-bin/my_app.pl => my_app cgi/my_app => my_app
This method is provided so that each cgi or mod_perl application can have its own directory for storing html for its steps.
See the file_print method for more information.
See the section on FINDING TEMPLATES for further discussion.
Return the step (appended to name_module) that should used when looking up the file in file_print and file_val lookups. Defaults to the current step.
See the section on FINDING TEMPLATES for further discussion.
This is the main loop runner. It figures out the current path and runs all of the appropriate hooks for each step of the path. If nav_loop runs out of steps to run (which happens if no path is set, or if all other steps run successfully), it will insert the ->default_step into the path and run nav_loop again (recursively). This way a step is always assured to run. There is a method ->recurse_limit (default 15) that will catch logic errors (such as inadvertently running the same step over and over and over because there is either no hash_validation, or the data is valid but the set_ready_validate(0) method was not called).
Takes a class name or a CGI::Ex::App object as arguments. If a class name is given it will call the "new" method to instantiate an object by that class (passing any extra arguments to the new method). All returns from navigate will return the object.
The method navigate is essentially a safe wrapper around the ->nav_loop method. It will catch any dies and pass them to ->handle_error.
This starts the process flow for the path and its steps.
Same as the method navigate but calls ->require_auth(1) before running. It will only work if the navigate_authenticated method has not been overwritten. See the require_auth method.
Object creator. Takes a hashref of arguments that will become the initial properties of the object. Calls the init method once the object has been blessed to allow for any other initilizations.
my $app = MyApp->new({name_module => 'my_app'});
As a method it returns the next step in the path - if the path has more steps left.
It is also used as a hook by the refine_path hook. If there is no more steps, it will call the next_step hook to try and find a step to append to the path.
Return an arrayref (modifiable) of the steps in the path. For each step the run_step hook and all of its remaining hooks will be run.
Hook methods are looked up and ran using the method "run_hook" which uses the method "find_hook" to lookup the hook. A history of ran hooks is stored in the array ref returned by $self->history.
If path has not been defined, the method will look first in the form for a key by the name found in ->step_key. It will then look in $ENV{'PATH_INFO'}. It will use this step to create a path with that one step as its contents. If a step is passed in via either of these ways, the method will call valid_steps to make sure that the step is valid (by default valid_steps returns undef - which means that any step is valid). Any step beginning with _ can not be passed in and are intended for use on private paths. If a non-valid step is found, then path will be set to contain a single step of ->forbidden_step.
For the best functionality, the arrayref returned should be the same reference returned for every call to path - this ensures that other methods can add to the path (and will most likely break if the arrayref is not the same).
If navigation runs out of steps to run, the default step found in default_step will be run. This is what allows for us to default to the "main" step for many applications.
Used to map path_info parts to form variables. Similar to the path_info_map_base method. See the path_info_map_base method for a discussion of how to use this hook.
Called during the default path method. It is used to custom map portions of $ENV{'PATH_INFO'} to form values. If should return an arrayref of arrayrefs where each child arrayref contains a regex qr with match parens as the first element of the array. Subsequent elements of the array are the key names to store the corresponding matched value from the regex under. The outer arrayref is iterated until it one of child arrayrefs matches against $ENV{'PATH_INFO'}. The matched values are only added to the form if there is not already a defined value for that key in the form.
The default value returned by this method looks something like the following:
sub path_info_map_base { return [[qr{^/(\w+)}, $self->step_key]]; }
This example would map the following PATH_INFO string as follows:
/my_step # $self->form->{'step'} now equals "my_step"
The following is another example:
sub path_info_map_base { return [ [qr{^/([^/]+)/(\w+)}, 'username', $self->step_key], [qr{^/(\w+)}, $self->step_key], ]; } # the PATH_INFO /my_step # still results in # $self->form->{'step'} now equals "my_step" # but with the PATH_INFO /my_user/my_step # $self->form->{'step'} now equals "my_step" # and $self->form->{'username'} equals "my_user"
In most cases there is not a need to override the path_info_map_base method, but rather override the path_info_map hook for a particular step. When the step is being run, just before the run_step hook is called, the path_info_map hook is called. The path_info_map hook is similar to the path_info_map_base method, but is used to allow step level manipulation of form based on elements in the $ENV{'PATH_INFO'}.
sub my_step_path_info_map { return [[qr{^/my_step/(\w+)$}, 'username']]; } # the PATH_INFO /my_step/my_user # results in # $self->form->{'step'} equal to "my_step" because of default path_info_map_base # and $self->form->{'username'} equals "my_user" because of my_step_path_info_map
The section on mapping URIs to steps has additional examples.
Ran after all of the steps in the loop have been processed (if prepare, info_complete, and finalize were true for each of the steps). If it returns a true value the navigation loop will be aborted. If it does not return true, navigation continues by then inserting the step $self->default_step and running $self->nav_loop again (recurses) to fall back to the default step.
Called from within navigate. Called after the nav_loop has finished running but within the eval block to catch errors. Will only run if there were no errors which died during the nav_loop process.
It can be disabled from running by setting the _no_post_navigate property.
If per-step authentication is enabled and authentication fails, the post_navigate method will still be called (the post_navigate method can check the ->is_authed method to change behavior). If application level authentication is enabled and authentication fails, none of the pre_navigate, nav_loop, or post_navigate methods will be called.
A hook which occurs after the printing has taken place. Is only run if the information was not complete. Useful for cases such as printing rows of a database query after displaying a query form.
Ran at the end of the step's loop if prepare, info_complete, and finalize all returned true. Allows for cleanup. If a true value is returned, execution of navigate is returned and no more steps are processed.
Called right before the navigation loop is started (at the beginning of nav_loop). At this point the path is set (but could be modified). The only argument is a reference to the path array. If it returns a true value - the navigation routine is aborted.
Called at the very beginning of the navigate method, but within the eval block to catch errors. Called before the nav_loop method is started. If a true value is returned then navigation is skipped (the nav_loop is never started).
Ran at the beginning of the loop before prepare, info_compelete, and finalize are called. If it returns true, execution of nav_loop is returned and no more steps are processed..
Defaults to true. A hook before checking if the info_complete is true. Intended to be used to cleanup the form data.
Called when any of prepare, info_complete, or finalize fail. Prepares a form hash and a fill hash to pass to print. The form hash is primarily intended for use by the templating system. The fill hash is intended to be used to fill in any html forms.
List the step previous to this one. Will return '' if there is no previous step.
Take the information generated by prepared_print, format it using swap_template, fill it using fill_template and print it out using print_out. Default incarnation uses Template::Alloy which is compatible with Template::Toolkit to do the swapping. Arguments are: step name (used to call the file_print hook), swap hashref (passed to call swap_template), and fill hashref (passed to fill_template).
During the print call, the file_print hook is called which should return a filename or a scalar reference to the template content is
Called with the finished document. Should print out the appropriate headers. The default method calls $self->cgix->print_content_type and then prints the content.
The print_content_type is passed $self->mimetype (which defaults to $self->{'mimetype'} which defaults to 'text/html') and $self->charset (which defaults to $self->{'charset'} which defaults to '').
Should return true if enough information is present to run validate. Default is to look if $ENV{'REQUEST_METHOD'} is 'POST'. A common usage is to pass a common flag in the form such as 'processing' => 1 and check for its presence - such as the following:
sub ready_validate { shift->form->{'processing'} }
Changing the behavior of ready_validate can help in making wizard type applications.
You can also use the validate_when_data hook to change the behavior of ready_validate. If valiate_when_data returns true, then ready_validate will look for keys in the form matching keys that are in hash_validation - if they exist ready_validate will be true. If there are no hash_validation keys, ready_validate uses its default behavior.
Called at the end of nav_loop. Passed a single value indicating if there are currently more steps in the path.
The default implementation returns if there are still more steps in the path. Otherwise, it calls the next_step hook and appends it to the path with the append_path method, and then calls the set_ready_validate hook and passes it 0.
This allows you to simply put
sub edit_next_step { '_edit_success' }
In your code and it will automatically do the right thing and go to the _edit_success step.
Default 15. Maximum number of times to allow nav_loop to call itself. The recurse level will increase every time that ->goto_step is called, or if the end of the nav_loop is reached and the process tries to add the default_step and run it again.
If ->goto_step is used often - the recurse_limit will be reached more quickly. It is safe to raise this as high as is necessary - so long as it is intentional.
Often the limit is reached if a step did not have a validation hash, or if the set_ready_validate(0) method was not called once the data had been successfully validated and acted upon.
Arguments are the steps used to replace. Can be called any time. Replaces the remaining steps (if any) of the current path.
Defaults to self->{require_auth} which defaults to undef. If called as a method and passed a single value of 1, 0, or undef it will set the value of $self->{require_auth} to that value. If set to a true value then any subsequent step will require authentication (unless its hook has been overwritten).
Any of the following ways can be used to require authentication on every step.
sub require_auth { 1 }
__PACKAGE__->navigate_authenticated; # instead of __PACKAGE__->navigate;
__PACKAGE__->new({require_auth => 1}->navigate;
sub init { shift->require_auth(1) }
Because it is called as a hook, the current step is passed as the first argument. If the hook returns false, no authentication will be required on this step. If the hook returns a true, non-hashref value, authentication will be required via the get_valid_auth method. If the method returns a hashref of stepnames to require authentication on, the step will require authentication via the get_valid_auth method if the current step is in the hashref. If authentication is required and succeeds, the step will proceed. If authentication is required and fails at the step level the current step will be aborted, authentication will be asked for (the post_navigate method will still be called).
For example you could add authentication to the add, edit, and delete steps in any of the following ways:
sub require_auth { {add => 1, edit => 1, delete => 1} }
sub add_require_auth { 1 } sub edit_require_auth { 1 } sub delete_require_auth { 1 }
sub require_auth { my ($self, $step) = @_; return 1 if $step && $step =~ /^(add|edit|delete)$/; return 0; }
If however you wanted to require authentication on all but one or two methods (such as requiring authentication on all but a forgot_password step) you could do either of the following:
sub require_auth { my ($self, $step) = @_; return 0 if $step && $step eq 'forgot_password'; return 1; # require auth on all other steps }
sub require_auth { 1 } # turn it on for all steps sub forgot_password_require_auth { 0 } # turn it off
See the get_valid_auth method for what occurs should authentication be required.
There is one key difference from the 2.14 version of App. In 2.14 and previous versions, the pre_navigate and post_navigate methods would not be called if require_auth returned a true non-hashref value. In version 2.15 and later, the 2.15 pre_navigate and post_navigate methods are always called - even if authentication fails. Also in 2.15 and later, the method is called as a hook meaning the step is passed in.
Arguments are a hook name and the step to find the hook for. Calls the find_hook method to get a code ref which it then calls and returns the result passing any extra arguments to run_hook as arguments to the code ref.
Each call to run_hook is logged in the arrayref returned by the history method. This information is summarized in the dump_history method and is useful for tracing the flow of the program.
The run_hook method is part of the core of CGI::Ex::App. It allows for an intermediate layer in normal method calls. Because of run_hook, it is possible to logically override methods on a step by step basis, or override a method for all of the steps, or even to break code out into separate modules.
Similar to run_hook - but allows for temporarily running a hook in another package.
sub blah_morph_package { 'SomeOther::Module' } my $hash = $self->run_hook_as('hash_swap', 'blah'); # runs as SomeOther::Module # OR my $hash = $self->run_hook_as('hash_swap', 'SomeOther::Module');
Note that the second form will use 'SomeOther::Module' as the step name which will be somewhat misleading in looking up names.
Runs all of the hooks specific to each step, beginning with pre_step and ending with post_step (for a full listing of steps, see the section on process flow). Called after ->morph($step) has been run. If this hook returns true, the nav_loop is exited (meaning the run_step hook displayed a printed page). If it returns false, the nav_loop continues on to run the next step.
This hook performs the same base functionality as a method defined in CGI::Applications ->run_modes. The default run_step method provides much more granular control over the flow of the CGI.
Arguments are the steps to set. Should be called before navigation begins. This will set the path arrayref to the passed steps.
This method is not normally used.
Sets that the validation is ready (or not) to validate. Should set the value checked by the hook ready_validate. Has no affect if validate_when_data flag is set.
The following would complement the "processing" flag example given in ready_validate description:
sub set_ready_validate { my $self = shift; my ($step, $is_ready) = (@_ == 2) ? @_ : (undef, shift); if ($is_ready) { $self->form->{'processing'} = 1; } else { delete $self->form->{'processing'}; } return $is_ready; }
Note that for this example the form key "processing" was deleted. This is so that the call to fill in any html forms won't swap in a value of zero for form elements named "processing."
Also note that this method may be called as a hook as in
$self->run_hook('set_ready_validate', $step, 0) # OR $self->set_ready_validate($step, 0);
Or it can take a single argument and should set the ready status regardless of the step as in:
$self->set_ready_validate(0);
Ran at the beginning of the loop before prepare, info_complete, and finalize are called. If it returns true, nav_loop moves on to the next step (the current step is skipped).
Returns a hashref that can store arbitrary user space data without worrying about overwriting the internals of the application.
Should return the keyname that will be used by the default "path" method to look for in the form. Default value is 'step'.
Takes the template and hash of variables prepared in print, and processes them through the current template engine Template::Alloy.
Arguments are the template and the swap hashref. The template can be either a scalar reference to the actual content, or the filename of the content. If the filename is specified - it should be relative to template_path (which will be used to initialize INCLUDE_PATH by default).
The default method will create a template object by calling the template_args hook and passing the returned hashref to the template_obj method. The default template_obj method returns a Template::Alloy object, but could easily be swapped to use a Template::Toolkit based object. If a non-Template::Toolkit compatible object is to be used, then the swap_template hook can be overridden to use another templating engine.
For example to use the HTML::Template engine you could override the swap_template method as follows:
use HTML::Template; sub swap_template { my ($self, $step, $file, $swap) = @_; my $type = UNIVERSAL::isa($file, 'SCALAR') ? 'scalarref' : UNIVERSAL::isa($file, 'ARRAY') ? 'arrayref' : ref($file) ? 'filehandle' : 'filename'; my $t = HTML::Template->new(source => $file, type => $type, path => $self->template_path, die_on_bad_params => 0, ); $t->param($swap); return $t->output; }
Uou could also simply do the following to parse the templates using HTML::Template::Expr syntax.
sub template_args { return {SYNTAX => 'hte'}; }
For a listing of the available syntaxes, see the current Template::Alloy documentation.
Returns a hashref of args that will be passed to the "new" method of Template::Alloy The method is normally called from the swap_template hook. The swap_template hook will add a value for INCLUDE_PATH which is set equal to template_path, if the INCLUDE_PATH value is not already set.
The returned hashref can contain any arguments that Template::Alloy would understand.
sub template_args { return { PRE_CHOMP => 1, WRAPPER => 'wrappers/main_wrapper.html', }; }
See the Template::Alloy documentation for a listing of all possible configuration arguments.
Called from swap_template. It is passed the result of template_args that have had a default INCLUDE_PATH added via template_path. The default implementation uses Template::Alloy but can easily be changed to use Template::Toolkit by using code similar to the following:
use Template; sub template_obj { my ($self, $args) = @_; return Template->new($args); }
Defaults to $self->{'template_path'} which defaults to base_dir_abs. Used by the template_obj method.
Allows for returning an object back to its previous blessed state if the "morph" method was successful in morphing the App object. This only happens if the object was previously morphed into another object type. Before the object is re-blessed the method fixup_before_unmorph is called.
See allow_morph and morph.
Called by the default path method. Should return a hashref of path steps that are allowed. If the current step is not found in the hash (or is not the default_step or js_step) the path method will return a single step of ->forbidden_step and run its hooks. If no hash or undef is returned, all paths are allowed (default). A key "forbidden_step" containing the step that was not valid will be placed in the stash. Often the valid_steps method does not need to be defined as arbitrary method calls are not possible with CGI::Ex::App.
Any steps that begin with _ are also "not" valid for passing in via the form or path info. See the path method.
Also, the pre_step, skip, prepare, and info_complete hooks allow for validating the data before running finalize.
Passed the form from $self->form. Runs validation on the information contained in the passed form. Uses CGI::Ex::Validate for the default validation. Calls the hook hash_validation to load validation hashref (an empty hash means to pass validation). Should return true if the form passed validation and false otherwise. Errors are stored as a hash in $self->{hash_errors} via method add_errors and can be checked for at a later time with method has_errors (if the default validate was used).
There are many ways and types to validate the data. Please see the CGI::Ex::Validate module.
Upon success, it will look through all of the items which were validated, if any of them contain the keys append_path, insert_path, or replace_path, that method will be called with the value as arguments. This allows for the validation to apply redirection to the path. A validation item of:
{field => 'foo', required => 1, append_path => ['bar', 'baz']}
would append 'bar' and 'baz' to the path should all validation succeed.
Defaults to "validate_when_data" property which defaults to false. Called during the ready_validate hook. If returns true, ready_validate will look for keys in the form matching keys that are in hash_validation - if they exist ready_validate will be true. If there are no hash_validation keys, ready_validate uses its default behavior.
Installed as a hook to CGI::Ex::App during get_valid_auth. Should return true if the user is ok. Default is to always return true. This can be used to abort early before the get_pass_by_user hook is called.
sub verify_user { my ($self, $user) = @_; return 0 if $user eq 'paul'; # don't let paul in return 1; # let anybody else in }
Often in your program you will want to set cookies or bounce to a differnt URL. This can be done using either the builtin CGI::Ex object or the built in CGI object. It is suggested that you only use the CGI::Ex methods as it will automatically send headers and method calls under cgi, mod_perl1, or mod_perl2. The following shows how to do basic items using the CGI::Ex object returned by the ->cgix method.
### CGI::Ex::App prints headers for you, ### but if you are printing custom types, you can send your own $self->cgix->print_content_type; # SAME AS # $self->cgix->print_content_type('text/html');
$self->cgix->set_cookie({ -name => "my_key", -value => 'Some Value', -expires => '1y', -path => '/', });
$self->cgix->location_bounce(""); $self->exit_nav_loop; # normally should do this to long jump out of navigation
my $data = {foo => "bar", one => "two or three"}; my $query = $self->cgix->make_form($data); # $query now equals "foo=bar&one=two%20or%20three"
my $form = $self->form;
In this example $form would now contain a hashref of all POST and GET parameters passed to the server. The form method calls $self->cgix->get_form which in turn uses CGI->param to parse values. Fields with multiple passed values will be in the form of an arrayref.
my $cookies = $self->cookies;
In this example $cookies would be a hashref of all passed in cookies. The cookies method calls $self->cgix->get_cookies which in turn uses CGI->cookie to parse values.
See the CGI::Ex and CGI documentation for more information.
The concepts used in CGI::Ex::App are not novel or unique. However, they are all commonly used and very useful. All application builders were built because somebody observed that there are common design patterns in CGI building. CGI::Ex::App differs in that it has found more common design patterns of CGI's than other application builders and tries to get in the way less than others.
CGI::Ex::App is intended to be sub classed, and sub sub classed, and each step can choose to be sub classed or not. CGI::Ex::App tries to remain simple while still providing "more than one way to do it." It also tries to avoid making any sub classes have to call ->SUPER:: (although that is fine too).
And if what you are doing on a particular is far too complicated or custom for what CGI::Ex::App provides, CGI::Ex::App makes it trivial to override all behavior.
There are certainly other modules for building CGI applications. The following is a short list of other modules and how CGI::Ex::App is different.
CGI::Application
Seemingly the most well know of application builders. CGI::Ex::App is different in that it:
* Uses Template::Toolkit compatible Template::Alloy by default. CGI::Ex::App can easily use another toolkit by simply overriding the ->swap_template method. CGI::Application uses HTML::Template. * Offers integrated data validation. CGI::Application has had custom plugins created that add some of this functionality. CGI::Ex::App has the benefit that validation is automatically available in javascript as well. * Allows the user to print at any time (so long as proper headers are sent. CGI::Application requires data to be pipelined. * Offers hooks into the various phases of each step ("mode" in CGI::Application lingo). CGI::Application provides only ->runmode which is only a dispatch. * Support for easily jumping around in navigation steps. * Support for storing some steps in another package. * Integrated authentication * Integrated form filling * Integrated PATH_INFO mapping
CGI::Ex::App and CGI::Application are similar in that they take care of handling headers and they allow for calling other "runmodes" from within any given runmode. CGI::Ex::App's ->run_step is essentially equivalent to a method call defined in CGI::Application's ->run_modes. The ->run method of CGI::Application starts the application in the same manner as CGI::Ex::App's ->navigate call. Many of the hooks around CGI::Ex::App's ->run_step call are similar in nature to those provided by CGI::Application.
CGI::Prototype
There are actually many similarities. One of the nicest things about CGI::Prototype is that it is extremely short (very very short). The ->activate starts the application in the same manner as CGI::Ex::App's ->navigate call. Both use Template::Toolkit as the default template system (CGI::Ex::App uses Template::Alloy which is TT compatible). CGI::Ex::App is differrent in that it:
* Offers more hooks into the various phases of each step. * Support for easily jumping around in navigation steps. * Support for storing only some steps in another package. * Integrated data validation * Integrated authentication * Integrated form filling * Integrated PATH_INFO mapping
The following example shows the creation of a basic recipe database. It requires the use of DBD::SQLite, but that is all. Once you have configured the db_file and template_path methods of the "recipe" file, you will have a working script that does CRUD for the recipe table. The observant reader may ask - why not use Catalyst or Ruby on Rails? The observant programmer will reply that making a framework do something simple is easy, but making it do something complex is complex and any framework that tries to do the those complex things for you is too complex. CGI::Ex::App lets you write the complex logic but gives you the ability to not worry about the boring details such as template engines, or sticky forms, or cgi parameters, or data validation. Once you are setup and are running, you are only left with providing the core logic of the application.
### File: /var/www/cgi-bin/recipe (depending upon Apache configuration) ### -------------------------------------------- #!/usr/bin/perl -w use lib qw(/var/www/lib); use Recipe; Recipe->navigate; ### File: /var/www/lib/Recipe.pm ### -------------------------------------------- package Recipe; use strict; use base qw(CGI::Ex::App); use CGI::Ex::Dump qw(debug); use DBI; use DBD::SQLite; ###------------------------------------------### sub post_navigate { # show what happened debug shift->dump_history; } sub template_path { '/var/www/templates' } sub base_dir_rel { 'content' } sub db_file { '/var/www/recipe.sqlite' } sub dbh { my $self = shift; if (! $self->{'dbh'}) { my $file = $self->db_file; my $exists = -e $file; $self->{'dbh'} = DBI->connect("dbi:SQLite:dbname=$file", '', '', {RaiseError => 1}); $self->create_tables if ! $exists; } return $self->{'dbh'}; } sub create_tables { my $self = shift; $self->dbh->do("CREATE TABLE recipe ( id INTEGER PRIMARY KEY AUTOINCREMENT, title VARCHAR(50) NOT NULL, ingredients VARCHAR(255) NOT NULL, directions VARCHAR(255) NOT NULL, date_added VARCHAR(20) NOT NULL )"); } ###----------------------------------------------------------------### sub main_info_complete { 0 } sub main_hash_swap { my $self = shift; my $<h2>[% success %]</h2></span>[% END %] <table style="border:1px solid blue"> <tr><th>#</th><th>Title</th><th>Date Added</th></tr> [% FOR row IN recipies %] <tr> <td>[% loop.count %].</td> <td><a href="[% script_name %]/view?id=[% row.id %]">[% row.title %]</a> (<a href="[% script_name %]/edit?id=[% row.id %]">Edit</a>) </td> <td>[% row.date_added %]</td> </tr> [% END %] <tr><td colspan=2 align=right><a href="[% script_name %]/add">Add new recipe</a></td></tr> </table> </html> File: /var/www/templates/content/recipe/edit.html ### -------------------------------------------- <html> <head> <title>[% step == 'add' ? "Add" : "Edit" %] Recipe</title> </head> <h1>[% step == 'add' ? "Add" : "Edit" %] Recipe</h1> <form method=post name=[% form_name %]> <input type=hidden name=step> <table> [% IF step != 'add' ~%] <tr> <td><b>Id:</b></td><td>[% id %]</td></tr> <input type=hidden name=id> </tr> <tr> <td><b>Date Added:</b></td><td>[% date_added %]</td></tr> </tr> [% END ~%] <tr> <td valign=top><b>Title:</b></td> <td><input type=text name=title> <span style='color:red' id=title_error>[% title_error %]</span></td> </tr> <tr> <td valign=top><b>Ingredients:</b></td> <td><textarea name=ingredients rows=10 cols=40 wrap=physical></textarea> <span style='color:red' id=ingredients_error>[% ingredients_error %]</span></td> </tr> <tr> <td valign=top><b>Directions:</b></td> <td><textarea name=directions rows=10 cols=40 wrap=virtual></textarea> <span style='color:red' id=directions_error>[% directions_error %]</span></td> </tr> <tr> <td colspan=2 align=right> <input type=submit</td> </tr> </table> </form> (<a href="[% script_name %]">Main Menu</a>) [% IF step != 'add' ~%] (<a href="[% script_name %]/delete?id=[% id %]">Delete this recipe</a>) [%~ END %] [% js_validation %] </html> File: /var/www/templates/content/recipe/view.html ### -------------------------------------------- <html> <head> <title>[% title %] - Recipe DB</title> </head> <h1>[% title %]</h1> <h3>Date Added: [% date_added %]</h3> <h2>Ingredients</h2> [% ingredients %] <h2>Directions</h2> [% directions %] <hr> (<a href="[% script_name %]">Main Menu</a>) (<a href="[% script_name %]/edit?id=[% id %]">Edit this recipe</a>) </html> ### --------------------------------------------
Notes:
The dbh method returns an SQLite dbh handle and auto creates the schema. You will normally want to use MySQL or Oracle, or Postgres and you will want your schema to NOT be auto-created.
This sample uses hand rolled SQL. Class::DBI or a similar module might make this example shorter. However, more complex cases that need to involve two or three or four tables would probably be better off using the hand crafted SQL.
This sample uses SQL. You could write the application to use whatever storage you want - or even to do nothing with the submitted data.
We had to write our own HTML (Catalyst and Ruby on Rails do this for you). For most development work - the HTML should be in a static location so that it can be worked on by designers. It is nice that the other frameworks give you stub html - but that is all it is. It is worth about as much as copying and pasting the above examples. All worthwhile HTML will go through a non-automated design/finalization process.
The add step used the same template as the edit step. We did this using the add_name_step hook which returned "edit". The template contains IF conditions to show different information if we were in add mode or edit mode.
We reused code, validation, and templates. Code and data reuse is a good thing.
The edit_hash_common returns an empty hashref if the form was ready to validate. When hash_common is called and the form is ready to validate, that means the form failed validation and is now printing out the page. To let us fall back and use the "sticky" form fields that were just submitted, we need to not provide values in the hash_common method.
We use hash_common. Values from hash_common are used for both template swapping and filling. We could have used hash_swap and hash_fill independently.
The hook main_info_complete is hard coded to 0. This basically says that we will never try and validate or finalize the main step - which is most often the case.
It may be useful sometimes to separate some or all of the steps of an application into separate files. This is the way that CGI::Prototype works. This is useful in cases were some steps and their hooks are overly large - or are seldom used.
The following modifications can be made to the previous "recipe db" example that would move the "delete" step into its own file. Similar actions can be taken to break other steps into their own file as well.
### File: /var/www/lib/Recipe.pm ### Same as before but add the following line: ### -------------------------------------------- sub allow_morph { 1 } ### File: /var/www/lib/Recipe/Delete.pm ### Remove the delete_* subs from lib/Recipe.pm ### -------------------------------------------- package Recipe::Delete; use strict; use base qw(Recipe); sub skip { shift->edit_skip(@_) } sub info_complete { 1 } sub finalize { my $self = shift; $self->dbh->do("DELETE FROM recipe WHERE id = ?", {}, $self->form->{'id'}); $self->add_to_form(success => "Recipe deleted from the database"); return 1; }
Notes:
The hooks that are called (skip, info_complete, and finalize) do not have to be prefixed with the step name because they are now in their own individual package space. However, they could still be named delete_skip, delete_info_complete, and delete_finalize and the run_hook method will find them (this would allow several steps with the same "morph_package" to still be stored in the same external module).
The method allow_morph is passed the step that we are attempting to morph to. If allow_morph returns true every time, then it will try and require the extra packages every time that step is ran. You could limit the morphing process to run only on certain steps by using code similar to the following:
sub allow_morph { return {delete => 1} } # OR sub allow_morph { my ($self, $step) = @_; return ($step eq 'delete') ? 1 : 0; }
The CGI::Ex::App temporarily blesses the object into the "morph_package" for the duration of the step and re-blesses it into the original package upon exit. See the morph method and allow_morph for more information.
The previous samples are essentially suitable for running under flat CGI, Fast CGI, or mod_perl Registry or mod_perl PerlRun type environments. It is very easy to move the previous example to be a true mod_perl handler.
To convert the previous recipe example, simply add the following:
### File: /var/www/lib/Recipe.pm ### Same as before but add the following lines: ### -------------------------------------------- sub handler { Recipe->navigate; return; } ### File: apache2.conf - or whatever your apache conf file is. ### -------------------------------------------- <Location /recipe> SetHandler perl-script PerlHandler Recipe </Location>
Notes:
Both the /cgi-bin/recipe version and the /recipe version can co-exist. One of them will be a normal cgi and the other will correctly use mod_perl hooks for headers.
Setting the location to /recipe means that the $ENV{SCRIPT_NAME} will also be set to /recipe. This means that name_module method will resolve to "recipe". If a different URI location is desired such as "/my_cool_recipe" but the program is to use the same template content (in the /var/www/templates/content/recipe directory), then we would need to explicitly set the "name_module" parameter. It could be done in either of the following ways:
### File: /var/www/lib/Recipe.pm ### Same as before but add the following line: ### -------------------------------------------- sub name_module { 'recipe' } # OR sub init { my $self = shift; $self->{'name_module'} = 'recipe'; }
In most use cases it isn't necessary to set name_module, but it also doesn't hurt and in all cases it is more descriptive to anybody who is going to maintain the code later.
Having authentication is sometimes a good thing. To force the entire application to be authenticated (require a valid username and password before doing anything) you could do the following.
### File: /var/www/lib/Recipe.pm ### Same as before but add ### -------------------------------------------- sub get_pass_by_user { my $self = shift; my $user = shift; my $pass = $self->lookup_and_cache_the_pass($user); return $pass; } ### File: /var/www/cgi-bin/recipe (depending upon Apache configuration) ### Change the line with ->navigate; to ### -------------------------------------------- Recipe->navigate_authenticated; # OR ### File: /var/www/lib/Recipe.pm ### Same as before but add ### -------------------------------------------- sub require_auth { 1 } # OR ### File: /var/www/lib/Recipe.pm ### Same as before but add ### -------------------------------------------- sub init { shift->require_auth(1) }
See the require_auth, get_valid_auth, and auth_args methods for more information. Also see the CGI::Ex::Auth perldoc.
Sometimes you may only want to have certain steps require authentication. For example, in the previous recipe example we might want to let the main and view steps be accessible to anybody, but require authentication for the add, edit, and delete steps.
To do this, we would do the following to the original example (the navigation must start with ->navigate. Starting with ->navigate_authenticated will cause all steps to require validation):
### File: /var/www/lib/Recipe.pm ### Same as before but add ### -------------------------------------------- sub get_pass_by_user { my $self = shift; my $user = shift; my $pass = $self->lookup_and_cache_the_pass($user); return $pass; } sub require_auth { {add => 1, edit => 1, delete => 1} }
We could also enable authentication by using individual hooks as in:
sub add_require_auth { 1 } sub edit_require_auth { 1 } sub delete_require_auth { 1 }
Or we could require authentication on everything - but let a few steps in:
sub require_auth { 1 } # turn authentication on for all sub main_require_auth { 0 } # turn it off for main and view sub view_require_auth { 0 }
That's it. The add, edit, and delete steps will now require authentication. See the require_auth, get_valid_auth, and auth_args methods for more information. Also see the CGI::Ex::Auth perldoc.
The following corporation and individuals contributed in some part to the original versions.
Bizhosting.com - giving a problem that fit basic design patterns. Earl Cahill - pushing the idea of more generic frameworks. Adam Erickson - design feedback, bugfixing, feature suggestions. James Lance - design feedback, bugfixing, feature suggestions. Krassimir Berov - feedback and some warnings issues with POD examples.
This module may be distributed under the same terms as Perl itself.
Paul Seamons <perl at seamons dot com> | http://search.cpan.org/~rhandom/CGI-Ex-2.38/lib/CGI/Ex/App.pod | CC-MAIN-2016-18 | refinedweb | 15,532 | 63.29 |
Routing in WebAPI is very similar to routing in MVC. In MVC application routing maps a URL to action methods in Controller.Similarly in WebAPI routing is used to map a URL to action method in Controller.
In a WebAPI project routing is defined in the file WebApiConfig.cs in the App_Start folder.By default following route is defined in the WebApiConfig file
public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); }
There are few important things to note in the WebApiConfig file
1.It imports the System.Web.Http namespace instead of using System.Web.Mvc unlike RouteConfig class in MVC
using System.Web.Http
2.It uses the MapHttpRoute method for adding the routes to the routing table.It is MapRoute method in MVC.
config.Routes.MapHttpRoute()
3.Route template does not include the action method name as in the case of MVC.
routeTemplate: "api/{controller}/{id}"
Instead of declaring action method name as a segment variable the action method is selected based on the HTTP method used to make the request.Though we can also include the action method name in a route if required as:
api/{controller}/{action}/{id}
In this case action method is explicitly specified and the route will map to URLs which includes action method name in URL.
4.Route template has api prefix.This is to distinguish WebAPI URL from the MVC URL. | https://codecompiled.com/asp-net/page/3 | CC-MAIN-2021-49 | refinedweb | 242 | 59.5 |
unit test our code; there is no point in unit testing framework code, lets issue a little trust to the designers of the framework code and take it on good faith that they unit tested their code. We don’t want to unit test UI code because well emulating mouse clicks and keyboard entries to exercise GUI elements doesn’t isolate the change (its more of an integration or consumer test then a unit test). So what does that leave us with? Business logic.
How do I Setup The Unit Tests
A unit test is just a method that executes the code under test (CUT) and verifies the expected result. To actual run the unit test something needs to call that method; this something is generally another executable typically referred to as a Test Runner or Test Harness application. These Test Runner applications are just console executable that require no user input to run through all the unit tests. They generally print out formatted text to the console to let the user know how many test passed and which ones failed and can generate the same information as an XML file that can be used for reporting, build systems like Jenkins can generate graphs and formatted reports based off of the XML data.
How do I add a Test Runner Application Project and Link it with my Existing Project?
In Visual Studio speak we want to create a solution and include both our production project and our Test Runner project to it. Qt Creator does not have any notion of solution like Visual Studio does however Qt Creator does have a project template that lets you create hierarchy.
Creating a Project Hierarchy. Now if you.
Creating a Test Runner Application
Now that we have a hierarchical project structure we can start adding additional projects to our solution. Right-click on the root project and select New Subproject… this will bring up the New Subproject wizard. We’re going to create our Test Runner/Test Harness application which heiarchy should look like this:
Writing Our Test Runner Application
We could start writing some custom code to loop through all our test methods, print out messages explaining the relative success or failure of the tests, and to compile an XML output file but why re-invent the wheel when so many others have already written complete frameworks which do all that and more. I’ll publish another article on the different Unit Testing Frameworks that exist but for the sake of this tutorial we’re going to use QTest which is a Unit Test Framework that comes with Qt (so the good news is that you already have it installed since by this point you must have already installed the Qt Framework).
Commercial break...
Configuring our Test Runner Application’s Project Settings
Open the QtQuickSampleTest.pro file you should see the following which is the default boiler plate stuff for a Console application.
QT += core QT -= gui TARGET = QtQuickSampleTest CONFIG += console CONFIG -= app_bundle TEMPLATE = app SOURCES += main.cpp
As we are going to use QTest we need to add the QTestLib module to the project via the QT qMake variable and add testcase to the configuration.
QT += core testlib QT -= gui TARGET = QtQuickSampleTest CONFIG += console CONFIG -= app_bundle CONFIG += testcase TEMPLATE = app SOURCES += main.cpp
Creating a Test Class
All of our Unit Test methods that we write need to go into a class, I’ll refer to this class as the Test Class; our main.cpp will call all the unit test methods the Test Class exposes. To create the Test Class right-click on the QtQuickSampleTest project folder and select Add New… to bring up the New File Wizard. Add a new C++ Class called QtQuickSampleApplicationTest that extends QObject. That is all you need to do for now, we’ll add tests later on.
Configuring our Test Runner Applications Main
Open the main.cpp, by default it has a boiler plate main() method. We need to modify it so that it will run our Test Class, luckily QTest has a MACRO that will do that for us. Delete everything that is currently in your main.cpp and replace it with the following:
#include <QtTest/QTest> #include "QtQuickSampleApplicationTest.h" QTEST_APPLESS_MAIN( QtQuickSampleApplicationTest )
There are three MACROs you can use QTEST_MAIN, QTEST_GUILESS_MAIN, and QTEST_APPLESS_MAIN.
- QTEST_MAIN – Implements a main() function that instantiates a QApplication object and the Test Class, then executes all tests in the order they were defined.
- QTEST_GUILESS_MAIN – The same as QTEST_MAIN but instead of instantiating a QApplication object it instantiates a QCoreApplication object.
- QTEST_APPLESS_MAIN – The same as QTEST_MAIN but does not instantiates a QApplication object, just the Test Class.
In our case the QTEST_APPLESS_MAIN will work as we are writing unit tests not integration tests, that is we are just testing business logic here.
At this point you can build and run the QtQuickSampleTest project (you might have to run qmake first to update the QtQuickSampleSolution.pro file) and you’ll get a console output showing two passed tests. These are built in setup and tare down methods that get run at the beginning and end of your test suite (before and after executing all your unit tests). We don’t have any actual unit tests yet but we are ready to start adding them…. or are we?
How do I get Access to My Business Logic from My Test Runner?
You might have noticed now that all our business logic is in our production application how is our Test Runner suppose to access them? Well you hit on the first thing one must come to adopt and writing Unit Tests and that is that the code under test needs to be share-able between the production assembly and the Test Runner assembly; meaning that it can’t be solely located within the production assembly (assuming the production assembly isn’t a DLL or static lib which in our case it isn’t). What we have to do at this point is to move our existing business logic out of the QtQuickSampleApp.exe and into a library file that both QtQuickSampleApp.exe and QtQuickSampleTest.exe can share.
We can use either a Static or Dynamic lib for this, I personally don’t care one way or the other; I don’t have strong feelings towards one type of lib over another but for this case since QtQuickSampleApp.exe was previously released as a single EXE I’d like to keep the footprint the same so I’ll go with a Static Lib.
Adding a Business Logic Library Project
Right-click on the QtQuickSampleAppSolution and select New Subproject… to open the New Subproject wizard. This time select Libraries and C++ Library.
Make sure the Type drop down is set to Statically Linked Library and call it QtQuickSampleLib. Only the QtCore module is needed for this library. Your Solution should now be looking something like this:
Moving the Existing Business Logic to the Lib
In the Windows Explorer go to the QtQuickSampleApp folder and do a cut and paste of the MyCalculatorViewModel.h and MyCalculatorViewModel.cpp files from the QtQuickSampleApp\QtQuickSampleApp folder to the QtQuickSampleApp\QtQuickSampleLib folder.
Now back in QtCreator open QtQuickSmapleApp.pro. On line 10 notice the SOURCES qMake variable, remove the reference to MyCalculatorViewModel.cpp. On line 20 notice the HEADERS qMake variable, since we no longer have any headers in the QtQUickSampleApp project you can delete this qMake variable all together.
Now open QtQuickSampleLib.pro and add MyCalculatorViewModel.cpp to its SOURCES qMake variable and MyCalculatorViewModel.hpp to its HEADERS qMake variable.
Once you save these changes wait for QtCreator to re-index the project (done automatically, should only take a second or two to kick off) then expand the Headers and Sources filters under the QtQuickSampleLib project and you notice that the MyCalculatorViewModel class is now part of that project.
Updating both my Production and Test Runner applications to use the Shared Library
First thing we need to do is make sure our build order is setup to ensure that the library assembly is built first and that the applications get re-compiled if there is a change in the library. we need to let the two application projects know where to look for the MyCalculatorViewModel header file when including and where the static library assembly can be found when linking. Open QtQuickSampleApp.pro; add the INCLUDEPATH qMake variable with a relative path to the QtQuickSampleLib folder and using the LIBS qMake variable tell it what library file to link to. Because the library file can be either under the releases or debug sub folder we’re going to use the CONFIG qMake variable to add some logic to the settings file.
# Add more folders to ship with the application, here folder_01.source = qml/QtQuickSampleApp() # Adds the QtQuickSampleLib project path to the header file include lookup path INCLUDEPATH += $$PWD/../QtQuickSampleLib # Adds the QtQuickSampleLib.lib to the linker win32:CONFIG(release, debug|release): LIBS += -L$$OUT_PWD/../QtQuickSampleLib/release/ -lQtQuickSampleLib else:win32:CONFIG(debug, debug|release): LIBS += -L$$OUT_PWD/../QtQuickSampleLib/debug/ -lQtQuickSampleLib
Do the same for QtQuickSampleTest then right-click on the QtQuickSampleAppSolution root folder and select Run qmake to re-generate the make file then select Build -> Build All to recompile the entire solution. You shouldn’t get any errors and if you run QtQuickSampleApp you’ll be looking at your favorite Calculator again.
Commercial break... after initialization of the MyCalculatorViewModel class that we get the expected default value of zero. To do this we need to add a method to our Test Class which will execute this test.
#ifndef QTQUICKSAMPLEAPPLICATIONTEST_H #define QTQUICKSAMPLEAPPLICATIONTEST_H #include <QObject> class QtQuickSampleApplicationTest : public QObject { Q_OBJECT public: explicit QtQuickSampleApplicationTest( QObject *parent = 0); private slots: void myCalculatorViewModelUserInputDefaultValuesTest(); }; #endif // QTQUICKSAMPLEAPPLICATIONTEST_H
The implementation of the
myCalculatorViewModelUserInputDefaultValuesTest() Test Method is pretty straight forward; we instantiate the Code Under Test (CUT), perform any test setup and assertions to ensure that the initial state of the test environment is what we expect it to be going into the test, then execute the CUT and verify its results.
In this test since we’re just testing for the default value there isn’t any setup required, just instantiate an instance of the
MyCalculatorViewModel class. We then use the build in MACRO provided by QTest called
QVERIFY2 to verify that the actual value we get from
getUserInput() is what we expect it to be. If
getUserInput() does not return the expected value the test will be marked as failed.
#include "QtQuickSampleApplicationTest.h" #include
#include "MyCalculatorViewModel.h" ///////////////////////////////////////////////////////////////////////////// QtQuickSampleApplicationTest::QtQuickSampleApplicationTest(QObject *parent) : QObject(parent) { } ///////////////////////////////////////////////////////////////////////////// void QtQuickSampleApplicationTest::myCalculatorViewModelUserInputDefaultValuesTest() { // Setup the test MyCalculatorViewModel model; // Test QVERIFY2( model.getUserInput() == 0, "Expect the user input to be zero by default"); }
If you build and run the QtQuickSmapleTest project now you’ll see 3 passed tests, two being the built in setup/tare down methods and one being our first unit test
myCalculatorViewModelUserInputDefaultValuesTest().
If getUserInput() didn’t return zero for some reason the test would be marked as failed and the output would look like this:
Notice because we used
QVERIFY2 our message is being displayed stating that this test failed because we were expecting the method to return zero. We could make our message more easier to understand by having it state clearly what we got and what was expected by adding those values to the message. Lets add another Unit Test which verifies that we can get non-default values from
getUserInput() and make the error message more human readable.
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// void QtQuickSampleApplicationTest::myCalculatorViewModelUserInputTest() { // Setup the test MyCalculatorViewModel model; QCOMPARE( model.getUserInput(), 0 ); //Expect the user input to be zero by default int expect( 100 ); model.setUserInput( expect ); // Test - we're actually testing both the set and get method here, not as isolated as I would like but ok for what it is. int actual = model.getUserInput(); QVERIFY2( actual == expect, QString("Expect the user input to be [%1] but actually got [%2] instead.").arg(expect).arg(actual).toStdString().c_str()); }
In this test we again instantiate the MyCalculatorViewModel class because each test is independent from every other test in the suite but this time we use
QCOMPARE to assert that the default value is 0; we do this because we’re not testing the default value here but we want to make sure that
getUserInput() isn’t already returning the expected values which would invalidate the test to make sure we can set the user input value (if its already 100 before I set it to 100 how can I prove that my set method really worked?). If the
QCOMPARE statement fails the test is aborted; however, if it evaluate to true then I complete my test setup by setting the user input value to something other then the default value (in this case I set it to the integer value of 100).
In the actual test phase of this Test Method notice that this time I’m storing the return value of
getUserInput() into a variable, I’m doing this so that I can use the value in the
QVERIFY2 statement for evaluation and also in the error message. By doing it this way I’m also making the code more readable because its clearly stating which value is expected and which value is the actual obtained value.
If you run the Test Runner now and the
myCalculatorViewModelUserInputTest() fails you’ll get the following output.
Notice that now the error message is clearly stating that it expected the value of 100 but got a value of 0 which should indicate to us that the
setUserInput() method failed. Fixing the error in the offending method and running the suite again and you’ll get a happy and successful test run.
And there you have it, you can now start adding all the Unit Tests required to get the MyCalculatorViewModel class under test and you can follow the same pattern as described in this tutorial to add additional business logic classes to the calculator application and get those classes similarly under test.
You can download a version of the QtQuick Sample Application which has a full set of unit tests by clicking this link.
Thank you I hope you have enjoyed and found this tutorial helpful. Feel free to leave any comments or questions you might have below and I’ll try to answer them as time permits.
Until next time think imaginatively and design creatively
I came here for a search on “QTest Jenkins” – just when I thought “here’s what I’m looking for”, the article ended 🙂 Could you please elaborate how you use this project in Jenkins?
Not a problem, I’ve just uploaded a new post which should elaborate on how to report QTest test results within Jenkins. You can find it here: How the Heck do you Report QTest results in Jenkins?
Thanks for the article, but I was wondering what is the advantage of your method of incorporating unit-tests over this method:
Your method is more complicated and requires more work (separating business logic, creating static/dynamic libs, etc.) but I’m not sure what is the benefit
Hi spagheticat, I read the article you linked to and it is a sound way of setting up a project with unit tests however I feel that its more complicated then the one I described above unless the project in question is small and short lived.
With my approach yes you need to separate your UI logic from your business logic but that is generally viewed as a good thing. The separation gives me more options as I can now share code between projects and create completely different UIs which utilize the same business logic code. For example I could create a UI project for a desktop application and another one for an Android application (since Qt can be used for both platforms) then have both UI projects link against the same business logic without any duplication. This in its self is probably a good topic for another post; I’ll see about writing something up on why separating business logic is a good thing.
Secondly I find that the approach in the articular requires more manual work and complicates the core application with if/else statements. When I add a new test suite in my applications I right click on the unit test project in Qt Creator and click Add New… then follow the wizard for adding a new C++ Class. This wizard creates two new files, the CPP and HPP files, with appropriate snippets and adds the class to my *.pro file so that they get automatically added to my make file. With the linked to article’s approach you would have to go in and manually move the test suite files into the appropriate condition; that is move them within the test scope to ensure they are only built and included in the test configuration. Add the fact that there will also be a condition to select which main to run, the test’s main or the actual applications main, it to me looks like more effort and is something that would likely be missed often on larger projects.
Also as the author of the linked to article points out the new test configuration is stored in the *.user.pro file so each team member has to remember to create their own test configuration if they want to run the unit tests. This might become more problematical when you think about the fact that the build server also needs to be able to run the unit tests and this would require some manual setup on the actual build server to ensure it has a test configuration. If your build server setup works with multiple nodes this might become rather error prone.
Lastly I would find it unpleasant to have to continually switch between my Debug and Test configurations while working since I run my unit tests often. I’d feel that unless you were really dedicated to unit testing you might start running them less often and might even end up forgetting to run them due to the inconvenience. Unit testing is an absolutely fantastic and powerful tool but tools which are hard to use or not as convenient as we would like don’t get used no matter how helpful they really are.
So the approach in the linked to article is not wrong and might work well on small short lived projects but on larger projects worked on by multiple developers over the coarse of years I feel that it would be to inconvenient and error prone and the approach I outlined above would be the better fit. There are no right or wrong ways of doing things in software development all options have trade offs and some approaches fit a particular problem better then others. I’ll leave it up to you to choose which solution best fits your needs. I hope this answered your question. Until next time think imaginatively and design creatively
Great tutorial
For unix, just add this
unix {
LIBS += -L$$OUT_PWD/../QtQuickSampleLib -lQtQuickSampleLib
} | http://imaginativethinking.ca/qtest-101-writing-unittests-qt-application/ | CC-MAIN-2017-34 | refinedweb | 3,176 | 56.29 |
Unit Testing With TestNG and JMockit
TestNG is a testing framework for unit test development. JMockit is a framework for mock objects that provides mock object functionality using the java.lang.instrument package of jdk 1.5. Together, these frameworks can provide the tools to create very robust test cases without design limitations of other testing frameworks currently available.
In part one of this two part tutorial, we will cover the creation of a test case and the implementation of the related class to be tested. We will reuse the same scenario as in my tutorial: Unit testing with JUnit and EasyMock. If you are new to unit testing in general, I suggest you check out the JUnit tutorial as well for it's section on unit testing in general. In part two, we will cover scenarios that favor the TestNG and jmockit frameworks including testing using mocks without injection and organizing tests into groups.
To begin, complete my Unit testing with JUnit and EasyMock tutorial. Then run TestNG's JUnit converter. That's it. Tutorial over. Just kidding (although TestNG does have a converter that you can run to convert existing JUnit tests to TestNG tests).
Setup!
The scenario.
The interfaces
As stated before, we will be testing the same scenario as we did in the previous tutorial. To review,);
}
TestNG*
The before and after series of annotations tell the test runner to execute the annotated method either before or after the specified test, group, method, suite or class.
@Test
The
@Testannotation indicates to the runner that a method is a test method..
JMockit:
- It requires an interface. Don't get me wrong, programming to interfaces is a good thing, but not everything requires an interface. EasyMock does have an extension that handles classes without an interface, but it can be a pain to have two different
EasyMockobjects included in your test class to do basically the same thing.
- Dependency injection of some kind is required. Don't get me wrong, I love Spring and use it on my current project. But what if you don't want to have Spring manage everything. You cannot mock an object that is created in your method. The only way to use mock objects is if you control the instantiation of an object elsewhere (in a factory typically)..
The test case.
import mockit.Expectations;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
public class LoginServiceTest extends Expectations {
private LoginServiceImpl service;
UserDAO mockDao;
@BeforeTest.
Next we have our
setupMocks method. This has TestNG's
@BeforeTest annotation. This tells TestNG to run this method once before each test. This will give us a fresh instance and mock to test so that we don't have to be concerned with any clean-up that may be required. In our
setupMocks method, we create a mock DAO by calling jmockit's
Mockit.setUpMock() method and passing it our interface. is the fully qualified path to the lib folder of your project):
-javaagent:;
};
}
}
Conclusion!
Originally posted at
- Login or register to post comments
- 10518 reads
- Printer-friendly version
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Alexander Shvets replied on Tue, 2008/06/24 - 1:25pm
Michael,
nice introduction. You can get more from JUnit4 not extending from TestCase:
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;
public class LoginServiceTest {
@Before
public void setup() {
...
}
@Test
public void testRosyScenario() {
...
}
}
Jess Holle replied on Wed, 2008/06/25 - 7:53am
It is good to see someone cover jmockit.
I'm tired of articles saying "this code is no good and needs to be refactored for unit testing" when the only issue is that they're using bad tooling. Not everything needs to be set up for dependency injection by pre-Java-5 mechanisms like EasyMock. It just isn't so.
praveenc7 replied on Tue, 2009/04/14 - 3:07am | http://java.dzone.com/articles/unit-testing-with-testng-and-j | crawl-002 | refinedweb | 661 | 57.47 |
I my application, i have below requests:
1. There has one thread will regularly record some logs in file. The log file will be rollovered in certain interval. for keeping the log files small.
2. There has another thread also will regularly to process these log files. ex: Move the log files to other place, parse the log's content to generate some log reports.
But, there has a condition is the second thread can not process the log file that's using to record the log. in code side, the pseudocode similars like below:
#code in second thread to process the log files
for logFile in os.listdir(logFolder):
if not file_is_open(logFile) or file_is_use(logFile):
ProcessLogFile(logFile) # move log file to other place, and generate log report....
try:
myfile = open(filename, "r+") # or "a+", whatever you need
except IOError:
print "Could not open file! Please close Excel!"
try:
os.remove(filename) # try to remove it directly
except OSError as e:
if e.errno == errno.ENOENT: # file doesn't exist
break
An issue with trying to find out if a file is being used by another process is the possibility of a race condition. You could check a file, decide that it is not in use, then just before you open it another process (or thread) leaps in and grabs it (or even deletes it).
Ok, let's say you decide to live with that possibility and hope it does not occur. To check files in use by other processes is operating system dependant.
On Linux it is fairly easy, just iterate through the PIDs in /proc. Here is a generator that iterates over files in use for a specific PID:
def iterate_fds(pid): dir = '/proc/'+str(pid)+'/fd' if not os.access(dir,os.R_OK|os.X_OK): return for fds in os.listdir(dir): for fd in fds: full_name = os.path.join(dir, fd) try: file = os.readlink(full_name) if file == '/dev/null' or \ re.match(r'pipe:\[\d+\]',file) or \ re.match(r'socket:\[\d+\]',file): file = None except OSError as err: if err.errno == 2: file = None else: raise(err) yield (fd,file)
On Windows it is not quite so straightforward, the APIs are not published. There is a sysinternals tool (
handle.exe) that can be used, but I recommend the PyPi module
psutil, which is portable (i.e., it runs on Linux as well, and probably on other OS):
import psutil for proc in psutil.process_iter(): try: flist = proc.get_open_files() if flist: print(proc.pid,proc.name) for nt in flist: print("\t",nt.path) # This catches a race condition where a process ends # before we can examine its files except psutil.NoSuchProcess as err: print("****",err) | https://codedump.io/share/mcPIet2DOUEz/1/check-if-a-file-is-not-open-not-used-by-other-process-in-python | CC-MAIN-2017-34 | refinedweb | 452 | 67.86 |
Dask is a parallel computing library in python. It provides a bunch of API for doing parallel computing using data frames, arrays, iterators, etc very easily. Dask APIs are very flexible that can be scaled down to one computer for computation as well as can be easily scaled up to a cluster of computers. Python already has a list of libraries for doing parallel computing like
multiprocessing,
concurrent.futures,
threading,
pyspark,
joblib,
ipyparallel, etc. All of these libraries have some kind of limitations that are nicely tackled by dask APIs. We'll be introducing various APIs available in dask for introduction purpose but our main concentration as a part of this tutorials will be
dask.bag API.
Below we have given a list of APIs available in dask.
pysparkwhich lets us work on a list of items or iterators while applying some functions on each item. This API is commonly used when working with big data. We can combine more than one function by applying them one by one on a list of values. This API divides list/iterators, works on list of values in parallel, and then combines them.
pandasdata frames but it can handle quite big data frames and do computation on them in parallel.
numpyarray but it can handle very large arrays as well as perform computation on them in parallel.
lazy evaluationand creates the
computation graphof each computation in sequence. It then optimizes these computations based on graph when lazy objects are evaluated. Delayed objects do not complete computation immediately when called, instead, they evaluate when they are explicitly called to evaluate. This, in turn, evaluates all delayed objects in the graph in parallel.
concurrent.futureswhich is a flexible API to submit tasks to threads, processes and even on clusters. It lets us submit tasks that can be run in parallel on one computer or cluster of computers.
We'll be focusing on
dask.bag API as a part of this tutorial. It provides bunch methods like
map,
filter,
groupby,
product,
max,
join,
fold,
topk etc. The list of all possible methods with
dask.bag API can be found on this link. We'll explain their usage below with different examples. We can also combine these methods on our iterator/list of values to perform complicated computations.
dask.bag API is commonly used when working with unstructured data like JSON files, text files, log files, etc. It's almost similar to
pyspark which can benefit a person having a bit of
spark background.
Benefit: One of the benefits of dask is that it sidelines GIL when working on pure python objects and hence fasten the parallel computations even more.
Drawback: The main drawback of
dask.bag API is that it's not suitable when data needs to be passed between multiple processes/workers. It can work embarrassingly fast when data passing is as minimal as possible between workers/processes. In short, it's not suitable for a situation when processes communicate a lot for computations.
This ends our small intro of dask. We'll be exploring various functions available through API now.
So without further delay, let’s get started with the coding part.
We'll start by importing all the necessary libraries.
import dask import dask.bag as db import sys print("Python Version : ", sys.version) print("Dask Version : ", dask.__version__)
Python Version : 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] Dask Version : 2.15.0
dask.bagAPI. ¶
The usage of dask.bag API involves a list of steps which are commonly used to perform complicated computations in parallel which are as below:
from_sequence(),
from_delayed()or
from_url().
map(),
filter(),
groupby(), etc one by one on lazy dask bag object created from step 1.
compute()method on final bag object from step 2 which was created after calling all operations.
Please make a note that dask creates directed graphs of lazy objects when you call methods on it one after another from step 1 & 2 above and will only evaluate and run all methods when compute() from step 3 is called on final lazy object. When compute() method is called on final object than it'll run all operations in parallel and return final result.
Dask provides an optional dashboard which we can utilize to see the performance of operations getting performed in parallel. This step is optional if you don't want to analyze results but it can be very useful to debug things..
We'll first create lazy objects using methods available from dask.bag API.
from_sequence()to turn list/iterators into Lazy Dask Bag Objects¶
The
from_sequence() method is commonly used as a starting point in converting a list of operations into dask compatible operations so that they run in parallel. The
from_sequence() method accepts a list of values, iterators and converts it to a lazy dask bag object consisting of a list of values which will go input to the next methods called on it.
bag1 = db.from_sequence(range(1000)) bag1
dask.bag<from_sequence, npartitions=100>
By default
from_sequence() will divide data into 100 partitions. We can also explicitly pass a number of values to keep in each partition as well as partition size by setting
npartitions and
partition_size parameter.
bag2 = db.from_sequence(range(1000000), partition_size=1000, npartitions=1000) bag2
dask.bag<from_sequence, npartitions=1000>
We'll check size of
bag object using
sys.getsizeof() which returns the size of the object in bytes.
sys.getsizeof(bag1), sys.getsizeof(bag2)
(56, 56)
We can see that both have a size of 56 bytes even though input to both are different size lists.
We'll now apply a list of commonly available functions to perform various computations on the list of values. These methods will also generate another lazy dask bag object.
Below are list of commonly used operations:
bag_object.map(function):It'll apply function passed to
mapon all individual entry of bag_object.
bag_object.filter(condition):It'll check condition passed to
filteron all individual entry of bag_object and only keep entries which satisfies condition.
bag_object.product(another_bag):It calculates cross product of both bags and creates another bag of that values.
bag_object.max():It returns maximum from list.
bag_object.min():It returns minimum from list.
bag_object.accumulate():It takes as input binary function which operates on two input values and returns one value. This value is given as input as first parameter in next iteration.
bag_object.count():It returns number of values in a bag object.
bag_object.sum():It returns sum of all values of list.
bag_object.std():It returns standard deviation.
bag_object.frequencies():- It returns frequency of each value in bag.
bag_object.groupby():- It groups all values in list based on some key specified. We can then perform operations on these grouped values.
bag_object.join():- It joins one list with another list based on key specified. It merges values where key matches.
bag_object.topk():- It joins one list with another list based on key specified. It merges values where key matches.
Please make a note that the above steps will also create another lazy bag object only. It'll only perform actual computation when we call
compute() on the final bag object.
final_bag1 = bag1.map(lambda x: x*2) final_bag1
dask.bag<lambda, npartitions=100>
final_bag2 = bag2.filter(lambda x: x%100 == 0) final_bag2
dask.bag<filter-lambda, npartitions=1000>
len(final_bag1.compute())
1000
final_bag2.compute()[:10]
[0, 100, 200, 300, 400, 500, 600, 700, 800, 900]
We can evaluate bag objects from step 1 and it'll return the actual list. We can even directly call the method
list passing bag object and it'll also return all values.
final_list = bag1.compute() print("Size : %d bytes"%sys.getsizeof(final_list)) print("Length of Values : ", len(final_list))
Size : 8544 bytes Length of Values : 1000
list(bag1)[:10]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
We'll now explain the usage of various ways to use
dask.bag API with examples.
map()&
filter()Usage ¶
Below we have created one simple example which loops through 10Mn numbers takes the square of each and only keeps number which is divisible by 100. We have first implemented it with a loop in pure python and then converted it to the dask version.
final_list = [] for i in range(1000000): x = i*2 if x%100 == 0: final_list.append(x) final_list[:10]
[0, 100, 200, 300, 400, 500, 600, 700, 800, 900]
bag1 = db.from_sequence(range(1000000)) result = bag1.map(lambda x: x*2).filter(lambda x : x%100 == 0) result.compute()[:10]
[0, 100, 200, 300, 400, 500, 600, 700, 800, 900]
final_list = [] for i in range(10): for j in range(10): if i != j: final_list.append(i+j) final_list[:10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 1]
bag1 = db.from_sequence(range(10)) bag2 = db.from_sequence(range(10)) result = bag1.product(bag2)\ .filter(lambda x : x[0]!=x[1])\ .map(lambda x : x[0]+x[1]) result.compute()[:10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 1]
sum()&
mean()usage ¶
Below we have explained another example where we are creating an array of random numbers of size 100x100, looping through each row of the array, taking every 5th element, summing them up, and adding final summed value to list. We are then taking the mean of that list of summed numbers. We have explained example with both normal python loop and using
dask.bag API as well.
import numpy as np rnd_state = np.random.RandomState(100) x = rnd_state.randn(100,100) result = [] for arr in x: result.append(arr[::5].sum()) final_result = sum(result) / len(result) print(final_result)
-0.5237035655317279
bag1 = db.from_sequence(x) result = bag1.map(lambda x: x[::5].sum()).mean() final_result = result.compute() print(final_result)
-0.5237035655317279
import itertools list(itertools.accumulate(range(100)))[:10]
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
bag1 = db.from_sequence(range(100)) result = bag1.accumulate(lambda x, y : x+y).compute() result[:10]
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
final_dict = {} for key, val in [("a",100), ("b",200), ("c",300), ("d",400), ("e",500), ("a",200), ("e",300)]: if key not in final_dict: final_dict[key] = val list(final_dict.items())
[('a', 100), ('b', 200), ('c', 300), ('d', 400), ('e', 500)]
bag1 = db.from_sequence([("a",100), ("b",200), ("c",300), ("d",400), ("e",500), ("a",200), ("e",300)]) bag1.distinct(key=lambda x: x[0]).compute()
[('a', 100), ('b', 200), ('c', 300), ('d', 400), ('e', 500)]
frequencies()usage ¶
Below we have given example where we are explaining the usage of
frequencies() method of
dask.bag API. We are looping through 1000 random number between 1-100, taking only numbers which are divisible by 5 and counting frequencies of each. We have implemented both normal python and dask.bag API versions of code.
from collections import Counter x = np.random.randint(1,100, 1000) result = [] for i in x: if i % 5 == 0: result.append(i) list(Counter(result).items())[:10]
[(95, 9), (35, 13), (5, 17), (85, 7), (20, 7), (55, 6), (90, 13), (15, 15), (40, 12), (50, 8)]
bag1 = db.from_sequence(x) bag1.filter(lambda x: x%5 == 0).frequencies().compute()[:10]
[(95, 9), (35, 13), (5, 17), (85, 7), (20, 7), (55, 6), (90, 13), (15, 15), (40, 12), (50, 8)]
x = [("a",100), ("b",200), ("c",300), ("d",400), ("e",500), ("a",200), ("e",300)] result = {} for key, val in x: if key in result: result[key] += val else: result[key] = val list(result.items())
[('a', 300), ('b', 200), ('c', 300), ('d', 400), ('e', 800)]
bag1 = db.from_sequence([("a",100), ("b",200), ("c",300), ("d",400), ("e",500), ("a",200), ("e",300)]) bag1.groupby(lambda x: x[0]).map(lambda x: (x[0], sum([i[1] for i in x[1]]))).compute()
[('a', 300), ('b', 200), ('c', 300), ('d', 400), ('e', 800)]
x = [("a",100), ("b",200), ("c",300), ("d",400), ("e",500)] y = [("a",150), ("b",250), ("c",350), ("d",450), ("e",550)] result = {} for key, val in x+y: if key in result: result[key] += val else: result[key] = val list(result.items())
[('a', 250), ('b', 450), ('c', 650), ('d', 850), ('e', 1050)]
bag1 = db.from_sequence(x) bag1.join(y, lambda x: x[0]).map(lambda x: (x[0][0], sum([val[1] for val in x]))).compute()
[('a', 250), ('b', 450), ('c', 650), ('d', 850), ('e', 1050)]
x = [("a",100), ("b",200), ("c",300), ("d",400), ("e",500)] y = [("a",150), ("b",250), ("c",350), ("d",450), ("e",550)] result = {} for key, val in x+y: if key in result: result[key] += val else: result[key] = val sorted(result.items(), key=lambda x : x[1], reverse=True)[:2]
[('e', 1050), ('d', 850)]
bag1 = db.from_sequence(x) bag1.join(y, lambda x: x[0])\ .map(lambda x: (x[0][0], sum([val[1] for val in x])))\ .topk(2, key=lambda x: x[1])\ .compute()
[('e', 1050), ('d', 850)]
x = np.random.randint(1,100,1000) sorted(x)[-2:]
[99, 99]
bag1 = db.from_sequence(x) bag1.topk(2).compute()
[99, 99]
It's normally advisable to save the output of the bag to save it to another file after computation is complete. It might happen that output after performing
compute() is very big and can not be held in memory than its better to save it to disk and then verify results.
Dask bag provides a list of methods for converting the output to another format and saving it to various file formats. Below are three methods available for converting a bag of values to another format and saving it.
to_avro()- It can be used to save a bag of values as Avro files.
to_dataframe()- It can be used to convert bag of values to dask data frames which is available through the
dask.dataframemodule and lets us work on pandas data frames in parallel.
to_textfiles()- It can be used to save a bag of values as text files.
This ends our small tutorial on dask.bag API. Please feel free to let us know your views in the comments section.
Below is list of other python libraries for performing the computation in parallel on a single computer. | https://coderzcolumn.com/tutorials/python/dask-bag-parallel-computing-in-python | CC-MAIN-2021-04 | refinedweb | 2,366 | 67.45 |
.
OPC Data Access Techniques
OPC Toolbox lets you discover, access, and read raw and processed data from any data historian compliant with the OPC Historical Data Access standard. You can also access live data from an OPC Data Access server in three ways:
- Execute all OPC Toolbox functions directly from the MATLAB® command line or incorporate them into your own MATLAB applications
- Use the OPC Client app to rapidly connect to OPC DA servers; to create and configure OPC Toolbox objects; and to read, write, and log data
- Use the Simulink® blocks from OPC Toolbox to read and write data to and from an OPC DA server while simulating a system
OPC Data Access Object
When Data Access if the simulation runs more slowly than the system clock.
OPC DA Data Reading and Writing
Once). You can log data to memory or disk.
In Simulink, OPC Read and OPC Write blocks retrieve and transmit data synchronously or asynchronously to and from the OPC DA server. The blocks contain a client manager that lets you specify and manage the OPC DA server, select items, and define block sample times.
OPC Historical Data Access
You create an OPC Historical Data Access Client object to connect to an OPC HDA server. This client lets you browse the server namespace and retrieve fully qualified IDs of each item stored on the server. You use these IDs to request historical data from the server. You can retrieve raw or processed data stored on the OPC HDA server, specifying the IDs you want to retrieve, a time period for which to retrieve data, and optional parameters. OPC Toolbox supports the following read operations:
- Retrieve raw data as it was stored on the server.
- Retrieve data aggregated (processed) by the server. Each server implements different aggregate types, such as minimum, maximum, and average.
- Retrieve data from specific time intervals. The server interpolates data from surrounding raw measurements.
- Retrieve data that has been modified on the server. Some OPC HDA servers allow historical values to be modified, storing a history of values that have changed.
Data is retrieved into OPC HDA Data objects, which allow you to visualize and preprocess the historical data for further analysis in the MATLAB environment. Preprocessing operations include resampling, data conversion, and data display functions.
OPC UA Data Access
With the toolbox, you can browse for available OPC UA servers. You then connect to an OPC UA server by creating an OPC UA Client object. The toolbox provides functions that enable you to browse and search the nodes in the server namespace to determine available data nodes. You can interact with multiple nodes at the same time by creating an OPC UA node array. When you read the current value for a node or node array, you receive the value, timestamp, and an estimate of the quality of the data, and can determine whether the data is a raw or interpolated value. You can also write a current value to a node.
OPC UA Historical Data Access
You can read historical data from nodes on the UA server. To find the nodes available on your server, you can use the Browse Name Space graphical utility function. The browser displays the index and IDs for all the nodes on the server.
To read the data into MATLAB, you specify the nodes and a time range for which you would like to read data. OPC UA servers also provide aggregate functions for returning preprocessed data to clients. You can query the aggregate functions that your server supports, and read the preprocessed data that results from applying aggregate functions to the nodes. Examples of aggregate functions include average, maximum, minimum, and delta.
All OPC UA historical data is stored in OPC UA Data objects which contain datetime objects to represent the timestamp. You can then easily visualize and process the data for further analysis in the MATLAB environment. | https://fr.mathworks.com/products/opc.html | CC-MAIN-2020-05 | refinedweb | 652 | 51.99 |
Well, I have finished one of my programs and have been tracing it over and over. Trying to figure out my problem and I just can't seem to figure it out so I have decided to try and let another pair of eyes look at it. A very simple program.
The idea of it is to let the user enter the sides of a dice, the number of dice to roll, and the number of rolls.
/*Kevin, september 14 * DiceRolls.java from Chapter 10 * Generate counts of dice roll outcomes. */ /** * Dice are rolled and the outcome of each roll is counted. */ import java.util.Scanner; import java.util.Random; public class DiceRolls1 { public static void main(String[] args) { int[] outcomes; Scanner input = new Scanner(System.in); int numRolls; int sidedDice; int outcome; int max; int numDiceRolled; Random rand = new Random(); outcome = 0; System.out.println("Enter the number of sides on the dice "); sidedDice = input.nextInt(); System.out.println("Enter the number of dice to roll"); numDiceRolled = input.nextInt(); /* prompt user for number of rolls */ System.out.println("How many rolls? "); numRolls = input.nextInt(); max = (sidedDice * numDiceRolled + 1); outcomes = new int[max]; //array with indices whatever the possibilities are for (int roll = 0; roll < numRolls; roll++) { /* roll dice and add up outcomes */ for (int die = 0; die < numDiceRolled; die++) { outcome += ( rand.nextInt(sidedDice) + 1); outcomes[outcome] += 1; } } /* show counts of outcomes */ for (int i = numDiceRolled; i <= max; i++) { System.out.println(i + ": " + outcomes[i]); } } } | https://www.daniweb.com/programming/software-development/threads/55092/dice-rolls-troubleshooting | CC-MAIN-2018-13 | refinedweb | 245 | 66.23 |
.1 anton 325: 1.27 pazsan 326: slowvoc @ 327: slowvoc on 1.1 anton 328: vocabulary new-locals 1.27 pazsan 329: slowvoc ! 1.1 anton 330: new-locals-map ' new-locals >body cell+ A! \ !! use special access words 331: 332: variable old-dpp 333: 334: \ and now, finally, the user interface words 1.14 anton 335: : { ( -- addr wid 0 ) \ gforth open-brace 1.1 anton 336: dp old-dpp ! 337: locals-dp dpp ! 338: also new-locals 339: also get-current locals definitions locals-types 340: 0 TO locals-wordlist 341: 0 postpone [ ; immediate 342: 343: locals-types definitions 344: 1.14 anton 345: : } ( addr wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace 1.1 anton 346: \ ends locals definitions 347: ] old-dpp @ dpp ! 348: begin 349: dup 350: while 351: execute 352: repeat 353: drop 354: locals-size @ alignlp-f locals-size ! \ the strictest alignment 355: set-current 356: previous previous 357: locals-list TO locals-wordlist ; 358: 1.14 anton 359: : -- ( addr wid 0 ... -- ) \ gforth dash-dash 1.1 anton 360: } 1.9 anton 361: [char] } parse 2drop ; 1.1 anton: 1.28 anton 453: \ Implementation: 1.1 anton 454: 1.3 anton 455: \ explicit scoping 1.1 anton 456: 1.14 anton 457: : scope ( compilation -- scope ; run-time -- ) \ gforth 1.3 anton 458: cs-push-part scopestart ; immediate 459: 1.14 anton 460: : endscope ( compilation scope -- ; run-time -- ) \ gforth 1.3 anton 461: scope? 1.1 anton 462: drop 1.3 anton 463: locals-list @ common-list 464: dup list-size adjust-locals-size 465: locals-list ! ; immediate 1.1 anton 466: 1.3 anton 467: \ adapt the hooks 1.1 anton 468: 1.3 anton 469: : locals-:-hook ( sys -- sys addr xt n ) 470: \ addr is the nfa of the defined word, xt its xt 1.1 anton 471: DEFERS :-hook 472: last @ lastcfa @ 473: clear-leave-stack 474: 0 locals-size ! 475: locals-buffer locals-dp ! 1.3 anton 476: 0 locals-list ! 477: dead-code off 478: defstart ; 1.1 anton 479: 1.3 anton 480: : locals-;-hook ( sys addr xt sys -- sys ) 481: def? 1.1 anton 482: 0 TO locals-wordlist 1.3 anton 483: 0 adjust-locals-size ( not every def ends with an exit ) 1.1 anton 484: lastcfa ! last ! 485: DEFERS ;-hook ; 486: 1.28 anton: 1.27 pazsan 500: : (then-like) ( orig -- addr ) 501: swap -rot dead-orig = 502: if 503: drop 504: else 505: dead-code @ 506: if 507: set-locals-size-list dead-code off 508: else \ both live 509: dup list-size adjust-locals-size 510: locals-list @ common-list dup list-size adjust-locals-size 511: locals-list ! 512: then 513: then ; 514: 515: : (begin-like) ( -- ) 516: dead-code @ if 517: \ set up an assumption of the locals visible here. if the 518: \ users want something to be visible, they have to declare 519: \ that using ASSUME-LIVE 520: backedge-locals @ set-locals-size-list 521: then 522: dead-code off ; 523: 524: \ AGAIN (the current control flow joins another, earlier one): 525: \ If the dest-locals-list is not a subset of the current locals-list, 526: \ issue a warning (see below). The following code is generated: 527: \ lp+!# (current-local-size - dest-locals-size) 528: \ branch <begin> 529: 530: : (again-like) ( dest -- addr ) 531: over list-size adjust-locals-size 532: swap check-begin POSTPONE unreachable ; 533: 534: \ UNTIL (the current control flow may join an earlier one or continue): 535: \ Similar to AGAIN. The new locals-list and locals-size are the current 536: \ ones. The following code is generated: 537: \ ?branch-lp+!# <begin> (current-local-size - dest-locals-size) 538: 539: : (until-like) ( list addr xt1 xt2 -- ) 540: \ list and addr are a fragment of a cs-item 541: \ xt1 is the conditional branch without lp adjustment, xt2 is with 542: >r >r 543: locals-size @ 2 pick list-size - dup if ( list dest-addr adjustment ) 544: r> drop r> compile, 545: swap <resolve ( list adjustment ) , 546: else ( list dest-addr adjustment ) 547: drop 548: r> compile, <resolve 549: r> drop 550: then ( list ) 551: check-begin ; 552: 553: : (exit-like) ( -- ) 554: 0 adjust-locals-size ; 555: 1.1 anton 556: ' locals-:-hook IS :-hook 557: ' locals-;-hook IS ;-hook 1.27 pazsan 558: 559: ' (then-like) IS then-like 560: ' (begin-like) IS begin-like 561: ' (again-like) IS again-like 562: ' (until-like) IS until-like 563: ' (exit-like) IS exit-like 1.1 anton 564: 565: \ The words in the locals dictionary space are not deleted until the end 566: \ of the current word. This is a bit too conservative, but very simple. 567: 568: \ There are a few cases to consider: (see above) 569: 570: \ after AGAIN, AHEAD, EXIT (the current control flow is dead): 571: \ We have to special-case the above cases against that. In this case the 572: \ things above are not control flow joins. Everything should be taken 573: \ over from the live flow. No lp+!# is generated. 574: 575: \ About warning against uses of dead locals. There are several options: 576: 577: \ 1) Do not complain (After all, this is Forth;-) 578: 579: \ 2) Additional restrictions can be imposed so that the situation cannot 580: \ arise; the programmer would have to introduce explicit scoping 581: \ declarations in cases like the above one. I.e., complain if there are 582: \ locals that are live before the BEGIN but not before the corresponding 583: \ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN). 584: 585: \ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is 586: \ used on a path starting at the BEGIN, and does not live at the 587: \ corresponding AGAIN. This is somewhat hard to implement. a) How does 588: \ the compiler know when it is working on a path starting at a BEGIN 589: \ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How 590: \ is the usage info stored? 591: 592: \ For now I'll resort to alternative 2. When it produces warnings they 593: \ will often be spurious, but warnings should be rare. And better 594: \ spurious warnings now and then than days of bug-searching. 595: 596: \ Explicit scoping of locals is implemented by cs-pushing the current 597: \ locals-list and -size (and an unused cell, to make the size equal to 598: \ the other entries) at the start of the scope, and restoring them at 599: \ the end of the scope to the intersection, like THEN does. 600: 601: 602: \ And here's finally the ANS standard stuff 603: 1.14 anton 604: : (local) ( addr u -- ) \ local paren-local-paren 1.3 anton 605: \ a little space-inefficient, but well deserved ;-) 606: \ In exchange, there are no restrictions whatsoever on using (local) 1.4 anton 607: \ as long as you use it in a definition 1.3 anton 608: dup 609: if 610: nextname POSTPONE { [ also locals-types ] W: } [ previous ] 611: else 612: 2drop 613: endif ; 1.1 anton 614: 1.4 anton 615: : >definer ( xt -- definer ) 616: \ this gives a unique identifier for the way the xt was defined 617: \ words defined with different does>-codes have different definers 618: \ the definer can be used for comparison and in definer! 1.18 anton 619: dup >code-address [ ' spaces >code-address ] Literal = 1.4 anton 620: \ !! this definition will not work on some implementations for `bits' 621: if \ if >code-address delivers the same value for all does>-def'd words 622: >does-code 1 or \ bit 0 marks special treatment for does codes 623: else 624: >code-address 625: then ; 626: 627: : definer! ( definer xt -- ) 628: \ gives the word represented by xt the behaviour associated with definer 629: over 1 and if 1.13 anton 630: swap [ 1 invert ] literal and does-code! 1.4 anton 631: else 632: code-address! 633: then ; 634: 1.23 pazsan 635: :noname 636: ' dup >definer [ ' locals-wordlist >definer ] literal = 637: if 638: >body ! 639: else 640: -&32 throw 641: endif ; 642: :noname 1.21 anton 643: 0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal } 1.28 anton 644: comp' drop dup >definer 1.21 anton 645: case 646: [ ' locals-wordlist >definer ] literal \ value 647: OF >body POSTPONE Aliteral POSTPONE ! ENDOF 1.25 anton 648: [ comp' clocal drop >definer ] literal 1.21 anton 649: OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF 1.25 anton 650: [ comp' wlocal drop >definer ] literal 1.21 anton 651: OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF 1.25 anton 652: [ comp' dlocal drop >definer ] literal 1.21 anton 653: OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF 1.25 anton 654: [ comp' flocal drop >definer ] literal 1.21 anton 655: OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF 656: -&32 throw 1.23 pazsan 657: endcase ; 1.24 anton 658: interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local 1.1 anton 659: 1.6 pazsan 660: : locals| 1.14 anton 661: \ don't use 'locals|'! use '{'! A portable and free '{' 1.21 anton 662: \ implementation is compat/anslocals.fs 1.8 anton 663: BEGIN 664: name 2dup s" |" compare 0<> 665: WHILE 666: (local) 667: REPEAT 1.14 anton 668: drop 0 (local) ; immediate restrict | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.29;f=h;only_with_tag=MAIN;ln=1 | CC-MAIN-2022-27 | refinedweb | 1,557 | 67.65 |
?
It is quite rare to see a desktop PC user interacting with an on-screen keyboard, but what about other form factors, like Surface RT or WP8? An on-screen keyboard is quite common, and always in a predictable location. Can these users' privacy or security be compromised by this issue? If so, it seems like at least mobile IE9 and all versions of IE10 should be patched, and quickly.
And surface had extra security measures for on screen keyboard. Blog post says site can't tell what is under the mouse anyway….
@Matt H
The mouse cursor doesn't move when the user use the on screen keyboard with his fingers.
Touch events don't necessarily cause the mouse pointer to move.
so I guess Windows 8 tablet users are safe from this bug.
Well written Dean… A classic product manager trying to skate around the issue.
It's a security bug that's existed in IE for over a decade… Just fix it so that IE isn't the embarrassing sIEve we all know it to be!
I have to agree with Steve.
All fluff about ad companies, no useful information and no time frame for a fix. Why not just own up to your mistakes? That would be a better way to become "the browser I loved to hate".
The fact of the matter is that this behavior allows arbitrary sites to gather potential private information from the users desktop, i.e. outside of the site's intended scope. As such it should be fixed swiftly, before more people than just some ad networks use these techniques.
So when you fix the global event object from leaking info when the focus, and cursor are not even within the IE window are you also going to fix the security bug with named popup/opened windows sharing a common namespace even when they are not tied to the same domain?
This is the other trick "advertisers" yeah (air-quotes) advertisers use to track secure information without user concent or knowledge.
Should we bring up the ieframe.dll leaks too? Or is 2 gaping security holes in IE enough for one day!
@Dean Hachamovitch, Corporate Vice President, Internet Explorer
Please fix your post title: "alleged" needs to be removed as you yourself pointed out that Microsoft Internet Explorer does indeed enable a website to track mouse movements and certain keystrokes even though IE does not have the active focus, nor is even under the cursor.
By definition alone this is a security hole!
You can argue about how severe it is all day long but call a spade a spade – it's a security breach!
You lose major credibility every time you come on this blog and put PR spin on your posts. You should have posted that you acknowledged there was a security hole issue posted online, that BECAUSE it was NOT filed in CONNECT it gained INSTANT attention and your team is working on a patch they expect to release as a critical update within a few days.
Dancing around the issue and trying to put the blame on the issue reporter is a classic sign of a weak manager/company trying to point the finger in hopes of distracting the public from the real issue – IE has a long standing security issue that it is taking its sweet time about fixing!
Good one Dean!
Virtual online keyboards is not very commonly used, except in South Africa where some of the major banks still tries to force you to use virtual keyboards and IE is still the predominant browser. So in a case here it is a major risk.
is this a windows bug or IE big?
The underlying issue is that IE doesn't properly restrict reporting to it's own window coordinates. I have books on my shelf explaining how to do this from over a decade ago.
The real issue here is that it takes eons for Microsoft to patch bugs in IE. It has always been this way and I don't see this changing. No amount of PR spin will alter the fact.
Either improve the bug fixing process and provide real timelines for when and how bugs in IE will be fixed, or developers will continue trash talking IE and refusing to work with it.
This is not an issue, even for people with virtual keyboards.
For example, clicks are not registered.
The evil ad-company will just have a large list of mouse-coordinates.
Even if they could guess what window/item was beneath the cursor, it would require some
really clever heuristic to detect whether the mouse stopped because of a click or otherwise.
They would never be able to reliably guess passwords or anything valueable.
@DavidH "The real issue here is that it takes eons for Microsoft to patch bugs in IE"
The bug was privately reported October 1 2012. Lets see if it takes an aeon (a long indefinite period, aka 1B years), of it it is fixed by the time IE10 is released for Windows 7.
Gentlemen, dean is right on this one. Save your anger for real issues. Bitching about stuff like this only delays fixing real sec threats.
when will IE 10 be finished for windows 7?
"This is not an issue" Are you kidding me?
This is very much a big issue. IE is leaking information. A real world demonstration was made that shows how easy it is to determine the phone number of someone called in Skype. All by just having IE open. And to top it off, it works even while IE IS MINIMIZED.
"Really clever heuristic"?
It doesn't take much at all to determine which buttons a user clicks on if they move to click, click, then abruptly move away. This is predictable and already exploited.
The discussion of this being something that merely affects business competition is a very disingenuous. This is a big security risk, period. Not as bad as ActiveX was 'in the browser we loved to hate', but still big.
Additionally, the possibilities for evil to use this exploit are staggering. Suppose evil people get a list of alllllll the positions your cursor has been for the past few days. By using this technique + finding uniqueness (and oh yes, bad people with botnets and compromised ad servers CAN, in fact, uniquely identify you), they can have a better picture of what exactly you are doing. Oh what's that? People who click here, here, here, and move here are most likely using Notepad? Oh, what's that, people who click here are most likely using Skype? Remember how Stuxnet profiled Iran's nuclear reactors? Remember how it determined WHO to attack, and then attack ONLY THEM?
This vulnerability can get them in the door to determine who is worth profiling and then other exploits are used to exploit further.
Bottom line, hackers don't just use ONE exploit, they use several. And this is a nice way to determine if your compromised machine is worth saying 'hello' to.
🙂
Dean,
Just fix it. You would have been better off not posting at all.
All the comments above are right on the money.
Its a security breech just because you can't think of a way to exploit doesn't mean it can't be done. People will find ways to use this to their advantage.
To give you a better understanding of the issue, a few seconds of googling pulls up images like this:
web.media.mit.edu/…/cheese-list.jpg
I don't know if images are allowed on the blog here, but the image itself shows mouse movement super imposed on a website. I assure you that the information recorded is enough to probabilistic determine useful information of what sites you might be on. Especially if a little number crunching is provided by a bot net of some sort. Now just imagine this leaked information extends outside of IE and on your entire computer. It is breaking out of the sandbox that the frickin web browser is supposed to provide!
If you take away one thing from my posts, it should be this: This is just ONE itty bitty vulnerability. While it doesn't give an attacker much, it gives away more than you realize and it is enough to be used IN CONJUNCTION with other vulnerabilities to really ruin your day/week/life.
Steve and Peter are same person.
@Security, what if someone installs an ad serving toolbar on IE and visits their bank site, which requires them to click in their PIN code? if the toolbar could track those statistics and send it back to the main server, couldn't you simply overlay the coords on top of a screenshot of the page and deduce their PIN that way?
Still seems relatively unsafe. Also, this blog post is a joke. You guys can't even fix a simple problem like loading JS/CSS/HTML on demand outside of an iframe. Trident is a joke. IE is a joke. Sorry guys, try again. Maybe use WebKit this time? 🙂
so the mouse position is being broadcast to all windows in the OS and it is up to the program's discretion to make use of the data?
Guys Guys !
Just stop using IE and start using Chrome.
Because IE will provide your mouse movement advertising agencies who cannot figure out what is on the screen just some numeric pixel values.
With Chrome, they will NOT let anyone capture your mouse coordinates BUT send ALL activities to Google server which Google (with its habit of harvesting user info) will use to profile you and sell to highest bidder (agencies interested in profiling people). How else they make billions of dollars annual revenue if everything they are offering is free? By selling YOU out.
Well I don't give a crrrap about either of them.
Lee said, "Are you kidding me?"
No sir, we are kidding ourselves.
Welcome to the internet !!
Anything running on BGP protocol is NOT SAFE!
Those are completely different mechanisms. It's disingenuous to try to put them in the same boat.
The mechanism that is built into Chrome to transmit browse history is CONSENSUAL and BY DESIGN. In fact, part of that mechanism is a check box in Chrome's settings. The issue with IE is a security vulnerability that happens WITHOUT CONSENT and NOT INTENDED due to the way the [incorrect handling of] Javascript works.
The vulnerability in IE can send that information to ANYONE. IT's not a matter of the browser-maker knowing what you are doing, it's a matter of ANYONE WHO MAKES A MALICIOUS web page ad or scriptlet can know what you are doing.
Stop trying to blame the messenger.
Stop trying to deflect blame to competitor.
Stop trying to say 'lol other companies do it too'.
The word 'Chrome' or any other company besides Microsoft has no business in this discussion.
This is a VULNERABILITY as discovered that exists in INTERNET EXPLORER.
Extra points:
FYI: The mechanism that you ARE talking about, the one that Google Chrome uses to share information with Google, is also present in Internet Explorer. So, yes, IE ALSO shares information with Microsoft. But again, this discussion is not about that consensual feature.
Sorry, I didn't mean to sound like a dick.
I just hate it when companies try to blatantly dismiss vulnerabilities such as this and whitewash it with 'lol well it's low risk, don't worry about it' when I know for a fact it's much more important than they make it out to be.
@Lee, Microsoft does not sell your information and make profit. They sell products like Windows Servers, DataCenter, Pro, RT, Office, SharePoint, Dynamics, Xbox, Phones, Surface, PixelSense etc. and provide service for those products.
Unless you are a Google employee; then no matter what Google claims, how your bias refrains and make harder on you to even think about it impartially, when you practically realize some karma slaps it will definitely make you feel like dick and leave you wondering "What was I thinking?"
i agree with these claims. i received email from oracle education on gmail. next hours i was watching videos on youtube and every ad was about oracle courses. youtube is google service which i witness. i don't know with how many other companies they shard that info? their terms says 'we share your information with our trusted parties'. what about user? how do they know if every user also trust their personal information with those parties which google trust in? this issue of internet explorer is nothing compared to what google does. this issue should be fixed. but just mouse movement information leak is a very minor vulnerability. there are big problems such as chrome.
I do not work for Google. I admit, however, that I have a pro-Google bias because I like the products they make, but I am also pro-Apple because I like some of their products and yes, even pro-Microsoft products (I'm a .NET developer by trade and passion).
That said, my character has nothing to do with this issue.
This is about acknowledging there is a vulnerability and taking accountability in offering a solution.
I am criticizing this post because it's a complete dismissal of the problem in the typical corporate save-face fashion and does not help us, the people who use the Internet.
@Lee
IE only collects data for telemetry (to improve the software). The data is anonymous and its only confined to the components of browser frame (like how many times control X was used) as opposed to what's going inside the browser. You can always disable the telemetry agent and Microsoft will collect no information, whatsoever.
Meaning unless I have a Xbox membership or using Microsoft billing, they will never know which Bank I am affiliated with even if I use IE for Internet Banking. Can't say the same thing about Google. Because now I know they even know how many bucks I have in my account when I was carelessly banking on my Android phone browser.
@Lee – Nice for you (yes, that's sarcasme), but anyway, I've news for you: this blog is about the development of IE, not about how "awesome" Chrome is (it's not awesome, like V for Vendetta). You know, there is a lot of news on that IE has an exploit that tracks the position of your mouse. Anyway: it isn't useful at al! You don't know if it's hovering or clicking, you don't know what it does in wich window. So, what's the problem? Chrome is one big privacy hole, so let's stop using Chrome and start using IE. Like Microsoft say it:
The Browser You LoveD to Hate.
I am not talking about Chrome. We should stop talking about Chrome and Google.
Chrome may have issues, but stop deflecting blame here…
Like I just discussed, this is a Internet Explorer VULNERABILITY found ONLY IN INTERNET EXPLORER.
The only thing that I want is for Microsoft to actually admit that.
@mocax : The issue is caused because in Internet Explorer javascript handler for mouse movement is continuously executes even though the mouse has exceeded the bounds of the browser window. This leaks more information about the outside world (the os) from the browser, which is supposed to be a sandbox, isolated place. IT IS NOT SUPPOSED TO DO THIS AT ALL.
From there, it is trivial to stream this back to malware servers to determine what you are looking at and what you are doing.
Some points:
1. Each person has a unique mouse dexterity.
2. Moving mouse to position, then stopping is not the same a click.
3. Computers have different resolutions and window sizes – content is placed differently.
4. Applications that let Windows position themselves are placed on screen "almost at random".
5. Toolbars and ActiveX can already monitor everything on the computer, no need for JavaScript exploits.
6. Repeated letters cannot be captured.
This is a bug, yes, but it is not a security bug.
Finally, this bug is also CONSENSUAL. I am certain that Microsoft's terms state, that they are not resposible for any damage, bla bla bla.
Lee, you must have accepted these terms when you installed Windows and IE. 😉
That is not what I mean by consensual. Not at all. 🙂
I don't mean consent as a waiver of liability. I mean consent as in giving this information away to 3rd parties is NOT BY DESIGN.
In any case, let me counter with other points:
– The uniqueness can be tracked. Which actually makes this vulnerability worse.
– Don't downplay a vulnerability by saying there are many other vulnerabilities on the operating system. That doesn't inspire confidence in Microsoft. 🙂
– Windows cascade and display in known ways. If someone were monitoring the overall aggregate data acquired by mouse movements, one could determine the normal layout of your screen and the sizes of the windows that you interact with on a daily basis. Got ya.
– Any time information leaks, it is an issue. Splitting hairs by calling it not a 'security bug' is not important. IT CAN BE USED TO COMPROMISE SECURITY, period.
I saw this on HackerNews. Apparently IE even leaks mouse position through events that are totally unrelated to the mouse. marquee onbounce (!!!) is the example given: news.ycombinator.com/item
I'm disgusted Dean – you have a wide open security breach with example code publicly available and you are blaming advertisers.
You should not have posted anything about a security breach until you have a patch available!
As for this post now that you've made it you need to correct it ASAP.
REMOVE the word ALLEGED immediately! You have clearly indicated as has anyone that has viewed the exploit code that this is NOT alleged, we've seen the exploit in action and all witnessed it first hand!
Next add an addendum to this post indicating that it was extremely unprofessional to use this blog to point fingers at an ad network and That Dean & Microsoft should be solely focused on fixing the security breach ASAP and you expect to have a patch available as fast as possible.
I'm switching from IIS to apache Monday morning this type of behavior from Microsoft on web security is absolutely disgusting.
Alternatively if you are unfit your resignation Dean will also be accepted.
Well that was easy.
Open google.com and bing.com in two tabs. Type the same keyword in both and hit search button.
Now, in both results pages hover with your mouse on different result links and observe the URL in status bar at the bottom of browser window.
Bing will show you the direct URL to resource. With Google, you will get URL to Google server. When you click, they will take you to their server first, then redirect you to the original resource.
Q: Why would they do that?
A: They are collecting information.
Q: Why would they do that?
A: They want to make money.
Q: How will they make money?
A: (read the above comments again to find out)
@Lee
First of all, stop shouting and no one is "deflecting" the blame. Its because Google has the biggest stake in advertising world and no wonder if this so called "exploit" exhibition all over the Internet is funded by them. More people using Chrome means more money starting to flow in their pocket without user consent.
Secondly, the mouse movement tracking is not a security bug. You cannot capture anything besides dummy coordinates with no underlying content and no real-time page-state to "guess" what was on the page.
Even on virtual keyboard, the click on keyboard cannot be captured. You cannot tell remotely, which key was clicked on the keyboard. And by the way who use virtual keyboard with mouse when touch is there? And touch events are different than that of mouse.
Finally, the "behavior bug" fix is coming so don't cry about it.
Is this a bug? Yes.
Can this be exploited? No.
Is this a security issue? Not really.
Stop whining and go back to Chrome. Thanks.
@Victor, don't be naive. Bing does it too. They do it through JavaScript though.
@Lee,…/ccs11.pdf. Point being, mouse movement is quite unique from person to person.
@Security, NAIVE? I don't understand where are you getting those "hypertheticals"… be realistic willya?
Bing does NOT collect click information via JavaScript. You can check using fiddler, network monitor or any network sniffer? No XMLHTTPRequest packet will sent back to Bing when you click a result link on Bing result page.
With Bing its a straight deal.
With Google its a betrayal.
@Lee: the point you keep missing is that for the mouse data to be useful, you actually have to have at least some idea of what is on screen. And that simply isn't possible to determine using this alone. The images of mouse cursor activity overlaid on a webpage falls apart if you overlay the data over completely random webpages instead. The so-called demonstration of being able to read a Skype phone number could only work if Skype is open and located at a known position on screen, which again is impossible to determine.
You could argue that heuristics could be used to determine certain application targets, but it's very much grasping at straws, the sheer number of combinations and behaviours leads to a signal to noise ratio that would be deeply unfavourable to malware attempting to use the data.
@Victor
Go to Bing, search. Open IE dev tools and turn on network inspection.
Click on a search result without releasing the mouse (otherwise you will be taken to the page).
Observe in dev tool that Bing downloads an image from this url:
/fd/ls/GLinkPing.aspx?IG=ebed87486be644048edabb29866355cd&CID=2B01F2B8EB956AB207C6F68FEA926A1A&PM=Y&&ID=SERP,5097.1
What do you think this extremly detailed url is for? Or look at the name, LinkPing!
As Microsoft said at the header, it is not a flaw, it is a behavior. That is, is a deliberate feature of invasion of privacy to analyze where the mouse pointer is on the screen, so that she (microsoft) is not even trying to "fix" this failure.
@Flávio, this is not invasion of privacy. Even if the eavesdropper, otherwise the hacker, come in contact with that information, he/she:
– cannot know on which window was active.
– cannot know what content the mouse was moving on.
– cannot know about clicks.
– cannot know what were you typing.
Those bunch of numbers will not come back and bite you on your butt.
So take a deep breath, roll your eyes and Chill !
Does IE 10 use more watt than chrome?
@Dead – the word "Alleged" is still in this post title even after its been proven (including by you and Eric Lawrence) to be a bug AND after many people have asked you to be professional and remove it.
There's no excuse for ignoring these requests when you've clearly already updated the article once.
Current respect for @Dean = zero
Current credibility for @Dean = zero
@Dean's representation for The Microsoft Internet Explorer Team? = 100%
That was meant to be @Dean (there was no spite intended there… Just over zealous auto correct)
@Mr. – Do you mean watt as in electric? In that case, no, IE10 use less energie.
@Lee – You're seriously missing the point, like everybody here say. Pleas stop boycotting IE on it's own blog, and go back to Google, we want to follow the IE development without stupid reactions, thanks.
For developers, how to test IE for Xbox? Is there an emulator?
spider.io/…/responsible-disclosure
Uhm were still waiting Dean! You've failed to remove the word Alleged from your article about Microsoft's secuity hole that exists in every currently supported browser that you've shipped.
Hardly seems like something you want to just sit back on when the media is upset that you didn't take full responsibility for the IE bug when you first posted it.
Now that several sources have confirmed the bug is real it seems incredibly childish for you to continue this charade without at least accepting responsibility like a professional.
@Concrete Blonde, listening to "Long time ago" 😀
Here is how you can emulate: Open the target page in IE10. Hit F12 and you will find Tools menu in the F12 developer tools. Under tools, hover over "Change user agent string" and you will get submenu with handful of devices: Windows Phone, IE for Xbox, Safari, Chrome etc.
A web-applications may be aware of various kinds of devices and alter its behavior, at client-side, accordingly. In case the website is supposed to run on Xbox, it is imperative to consume CSS3 "@media tv" rules.
@Dwane @hAl
Spider IO is an infamous advertising company, trying to make some significance and now closely in bed with Google. Every step of the way, they are promoting Google and its product Chrome and cussing Microsoft and its products or any competitor to Google. How can their reports be fair?
The mouse pointer movement data is worthless if there is no information about the underlying content or if there is no information about the clicks. But they are still exploiting it as if its some kind of horror movie.
Mouse movement tracking must be least of your worries. The only "advantage" these advertising companies will get is to undermine IE, convince users that there is nothing bad about how other companies, such as Google, are tracking your privacy and selling the information, just because Microsoft has cut their supply line by setting DNT flag on! They are after the reputation ever since and of course the biggest animal in that jungle is Google, pulling strings, ripping off privacy and getting richer.
There is no report of anyone getting hurt by this bug unless you have exact idea what is on the screen and you are capturing mouse click event. And if you happen to have information about mouse click event, then you have everything and you don't need the mouse movement data. Which means you have hacked someone's website and you are able to inject your code.
@Marshal @Dwayne What is alleged is that it's a security or information disclosure issue, not that the bug exists. So far there is very little actual evidence that it is exploitable in any way.
"Update to Alleged Information and Security Issue with Mouse Position Behavior"
NOT ALLEGED! Information is being leaked. Both mouse movements and CTRL, SHIFT, & ALT keys
"we’ve seen reports alleging abuse of a browser behavior regarding mouse position"
Excuse me? you've already confirmed this is occurring (see this comment):
"We are actively working to adjust this behavior in IE"
You wouldn't need to be actively working on "adjusting" read:Changing! this "behavior" read:Leaking in IE if there wasn't a problem… and you certainly wouldn't need to put a blog post on the IE Blog if there was nothing actually going on… you'd put a cease & desist order out and a simple denial of any leak post out. But you didn't you posted not-quite-so-clear-smothered-in-PR-deflection yes we are aware and are working on something but we are heading into a weekend and we are not planning to work on this data leaking right now when we can blame ad networks that everyone notoriously hates.
As always with any security breach you can't just look at the picture and presume that you can't think of a way to abuse this info therefore there is no risk.
I would never have thought that a standalone iPhone would be able to read my blood pressure but with the flash light on right next to the video camera it is possible to detect the slightest skin tone color change as blood pulses with each beat and from that you can accurately detect blood pressure. I swore this was a scam until I looked into it… but it isn't.
Likewise this info leaking in IE is most definitely, undeniable 100% leaking from the browser as confirmed by EVERYONE that has investigated the issue.
THEREFORE
You seriously need to RETRACT the AMBIGUOUS wording and tone of this article and present the cold hard facts. IE IS LEAKING INFO… maybe not enough in YOUR opinion to be an issue but it most certainly is leaking!
Internet Explorer Sucks…Less
The abuse is alledged.
What spider.io reported as findings in the wild is usage of this feature by competing userstatistics companies but only within the browserwindows (so usage not particular to IE !!!) to track user behavior.
Although this is not the intended use of the mouse positioning script support it does not seem illegal. Spider.io might call it abuse.
However that is correctly described as alledged since others who use this method might see this as a normal business practice.
Almost all tracking activities could be seen by some people as abuse.
Ignoring the DNT setting of IE10 might wel be called abuse.
@"@Jason T" – PS please use your real name or at least a non-@ fake name so we can properly reference your comment!
This post is about IE and the topic of IE leaking mouse movements that exist outside the browser window and also outside the browser even having focus – Period.
On that note IE is most certainly leaking the mouse events that it should not be providing to JavaScript on a page. Therefore the bug is 100% real. There's more details about some of the potential risks listed here: webbugtrack.blogspot.ca/…/bug-593-ie-leaks-all-your-windows-mouse.html
We can argue all day long as to whether it is a huge security hole or just a small one but you can't deny for a second that it is a hole and Dean even discussed the fact that they are actively working on fixing it.
It's for that reason that this post is causing such a stir! Dean/Microsoft would win a lot more developer support if he stepped off his PR podium and spoke the real truth.
"Yes there is a security bug."
"Yes we are working on a fix."
"Microsoft does not feel that this security bug is that major _____(explanation why they feel differently than everyone else)____"
So again we ask that Dean take 2 minutes to update this post to more accurately reflect the fact that this issue has been confirmed and there are test cases available – it is not "Alleged"
The leaking of the mouse positions out side the browser windows could be considered a bug.
However it is not nescesarily a security issue.
The area outside the browserwindows is like a black box.
Knowing where you are in a black box does not reveal what is written on the floor of the box.
You nor anyone else have provided a method of exploiting this leaking of mouse positions in a real life scenarion into getting meaningfull data.
spider.io claimed exploitation on billions of webpages.
However what they were referring to was extually not a bug but was about using mouse positions inside a browser windows. Something which is normal behavior in ALL browser.
All claims on exploitation are alledged and are found to be actually untrue as they refer to normal browser behavior
All claims on abuse are alledged
All claims on posing a real security issue are alledged.
IE 10 is definetely better than IE9.
Make IE10 program thing like google chrome frame is… but IE10 frame for windows xp and vista!!! Come on Microsoft don't you care for your customers?
Official Confession:
We are shameless because we are Google.
In last few months, we have made decisions to take care of our annoying competition, Microsoft:
^ we have made IE9 and IE10 users to pay for it by not letting them download the attachments from Gmail (we don't care how well outlook, skydrive, office365 etc. work on our browser).
^ we have made decision to discontinue development for Windows 8 and Windows Phone apps, our excuse is pretty vague, that is; "we are careful about our investment" (although many freelance developers are able to manage their apps singlehandedly on three platforms Android, iOS and Windows Phone).
^ we have decided to discontinue Gmail service for windows phone, by pulling the plug on supporting ActiveSync (its there since the arrival of Windows Phone in Dec 2010).
^ now we are paying our advertising fellow agencies to conspire against Microsoft corporation by these bogus exploits (when Microsoft is shaking hands with us on forums like cppiso and w3c)
Look, the idea "really" is to make users stop using Microsoft products and lure them to use our cool looking browser. This way we can collect user information, sell it and make profit. The more users join our "free" ecosystem with "all free" products, the more money we'll make and conspiracy / chaos will prevail with privacy ripping becoming a joke.
^ it has all been told in news in last 2 months. All you need is to "figure out" what's going on you stoopid!!
Someone was posting as my name earlier.
No registration required == random asshats.
My issue was with Microsoft, not Google. Stop playing the blame game and just fix the issue.
Stop the childish playground 'We're better than Google' 'no we're better than Microsoft' 'IE sucks' 'No, Chrome sucks'.
Acknowledge that Microsoft fucked up with this vulnerability. Admit that the problem exists. Give us a timeline for a fix. FOLLOW THROUGH.
I don't care about anything else. I don't care how IE is compared to Chrome or Firefox. Just fix this damn Internet Explorer vulnerability and let's get on with our lives.
I also like how the defense for the vulnerability comes down to not that
– The issue is real
– The issue is in the wild
– The issue has been actively used by a company (spider.io? never heard of them until this week)
but instead:
– "welllll it's such a small issue that it's unlikely to be useful by hackers"
– "it's unlikely"
What a terrible logical fallacy… argument from incredulity.
Oh, the hackers are limited by only what YOU can conceivably think of?
What a dangerous line of reasoning.
And because I have some more time to kill:
en.wikipedia.org/…/Keystroke_dynamics
I want you to understand that there is a science to this technique (the principle can be applied to mouse movement just as easily) and that it has been done before. People make a ton of money off of understanding this.
@Dean we are still waiting for you to adjust your post wording about the confirmed bug in IE.
The comments on CNet about this issue are painting you in a bad light not because IE has the bug but because of the lack of clear communication and a lack of estimated timeline.
It's Wednesday already but yet we don't even have confirmation that Microsoft is planning to release an out of band patch or even an expected date for it to land so we can patch up our browsers.
Did you actually say:"From our conversations with security researchers across the industry, we see very little risk to consumers at this time. "?
Unbelievable! Let me just say that. I work in building Internet Banking sites. The virtual keyboard has been a de facto standard to avoid key logging. But since IE now allows tracking the mouse, now any attacker can get the position of the mouse, thus the codes!
I'm guessing that you would suggest to scramble in a random way a Querty keyboard. Yeah, right! Great user experience that would result from that.
Virtual keyboards are security theatre preying on the naive. Any PC vulnerable to a keylogger is just as vulnerable to a mouse-logger.
So is there a fix yet?
Seems a bit odd that Microsoft is just sitting on a security bug affecting all shipped versions of IE without an ETA for the fix.
Seems even worse that instead of properly admitting to the bug and trying desperately to blame the bug reporter for ulterior motives.
No wonder enterprises are moving to non-IE browsers faster than ever.
Maybe if Microsoft committed to working with the community and having proper transparency this wouldn't be an issue?!
ditto to all of the correspondence…. | https://blogs.msdn.microsoft.com/ie/2012/12/13/update-to-alleged-information-and-security-issue-with-mouse-position-behavior/ | CC-MAIN-2017-09 | refinedweb | 6,141 | 72.46 |
React-Draggable
A simple component for making elements draggable.
forked from mzabriskie/react-draggable
Demo
Installing
$.
Exports API
The
<Draggable/> component transparently adds draggable to whatever element is supplied as
this.props.children.
Note: Only a single elementis used along with onMouseUp to keep track of dragging state.
onMouseUpis used along with onMouseDown to keep track of dragging state.
onTouchStartis used along with onTouchEnd to keep track of dragging state.
onTouchEndis used along with onTouchStart to keep track of dragging state.
React.DOM elements support the above six properties by default, so you may use those elements as children without any changes. If you wish to use a React component you created, you might find this React page helpful.
Props:
{ // Called when dragging starts. If `false` is returned from this method, // dragging will cancel. // These callbacks are called with the arity: // (event: Event, // { // position: {left: number, top: number}, // deltaX: number, // deltaY: number // } // ) onStart: Function, // Called while dragging. onDrag: Function, // Called when dragging stops. onStop: Function, // Called whenever the user mouses down. Called regardless of handle or // disabled status. onMouseDown: Function, // Specifies the `x` and `y` that the dragged item should start at. // This is generally not necessary to use (you can use absolute or relative // positioning of the child directly), but can be helpful for uniformity in // your callbacks and with css transforms. start: {x: number, y: number}, // If true, will not call any drag handlers. disabled: boolean, // Specifies a selector to be used to prevent drag initialization. // Example: '.body' cancel: string, // Specifies a selector to be used as the handle that initiates drag. // Example: '.handle' handle: string, // // - An object with `left, top, right, and bottom` properties. // These indicate how far in each direction the draggable // can be moved. bounds: {left: number, top: number, right: number, bottom: number} | string, // Specifies the x and y that dragging should snap to. grid: [number, number], // Specifies the zIndex to use while dragging. zIndex: number }
Note that sending
className,
style, or
transform as properties will error - set them on the child element
directly.
Draggable Usage
var React = require('react'),; var ReactDOM = require('react-dom'); var Draggable = require('react-draggable'); var App = React.createClass({ handleStart: function (event, ui) { console.log('Event: ', event); console.log('Position: ', ui.position); }, handleDrag: function (event, ui) { console.log('Event: ', event); console.log('Position: ', ui.position); }, handleStop: function (event, ui) { console.log('Event: ', event); console.log('Position: ', ui.position); }, render: function () { return ( <Draggable axis="x" handle=".handle" start={{x: 0, y: 0}} grid={[25, 25]} zIndex={100} onStart={this.handleStart} onDrag={this.handleDrag} onStop={this.handleStop}> <div> <div className="handle">Drag from here</div> <div>This readme is really dragging on...</div> </div> </Draggable> ); } }); ReactDOM.render(<App/>, document.body);
For users that require more control, a
<DraggableCore> element is available. This is useful for more programmatic
usage of the element. See React-Resizable and
React-Grid-Layout for some examples of this.
<DraggableCore> is a useful building block for other libraries that simply want to abstract browser-specific
quirks and receive callbacks when a user attempts to move an element. It does not set styles or transforms
on itself.
DraggableCore API
<DraggableCore> takes all of the above
<Draggable> options, with the exception of:
axis
bounds
start
zIndex
Drag callbacks are called with the following parameters:
( event: Event, ui:{ node: Node position: { // lastX + deltaX === clientX deltaX: number, deltaY: number, lastX: number, lastY: number, clientX: number, clientY: number } } ) | https://www.npmtrends.com/@bokuweb/react-draggable-custom | CC-MAIN-2021-43 | refinedweb | 565 | 50.53 |
It is not that hard to get into a situation where you want two or more controllers with the same name.
Assume we have an application that gives users the option to update their personal profile settings. We might have a ProfileController for this. Now we might also need an administration backend which gives administrators the option to update user profiles. ProfileController would again be a good name for handling these kinds of requests.
With Grails 2.3 we can now do this by adding namespaces to controllers using the static namespace property:
package foo.bar.user class ProfileController { static namespace = 'user' // actions that can be accessed by users }
package foo.bar.admin class ProfileController { static namespace = 'admin' // actions that can be accessed by administrators }We can now use the namespace to map the controllers to different URLs within UrlMappings.groovy:
class UrlMappings { static mappings = { '/profile' { controller = 'profile' namespace = 'user' } '/admin/profile' { controller = 'profile' namespace = 'admin' } .. } }To make the namespace part of the URL by default we can use the $namespace variable:
static mappings = { "/$namespace/$controller/$action?"() }Using this way we are able to access our controllers with the following URLs:
/user/profile/<action>
/admin/profile/<action>
Please note that we also need to provide the namespace when building links:
<g:linkProfile admin functions</g:link> | https://www.mscharhag.com/grails/controller-namespaces-in-grails | CC-MAIN-2019-18 | refinedweb | 216 | 54.52 |
MOUNT_NTFS(8) MidnightBSD System Manager’s Manual MOUNT_NTFS(8)
NAME
mount_ntfs — mount an NTFS file system
SYNOPSIS
mount_ntfs [−a] [−i] [−u user] [−g group] [−m mask] [−C charset] [−W u2wtable] special node
DESCRIPTION
The mount_ntfs utility attaches the NTFS file system residing on the device special to the global file system namespace at the location indicated:
−a
Force behaviour to return MS-DOS 8.3 names also on readdir().
−i
Make name lookup case insensitive for all names except POSIX names.
−u user
Set the owner of the files in the file system to user. The default owner is the owner of the directory on which the file system is being mounted.
−g group
Set the group of the files in the file system to group. The default group is the group of the directory on which the file system is being mounted.
−m mask
Specify the maximum file permissions for files in the file system.
−C charset
Specify local charset to convert Unicode file names. Currently only reading is supported, thus the file system is to be mounted read-only.
−W]
‘ATTRTYPE’ is one of the identifiers listed in $AttrDef file of volume. Default is $DATA. ‘ATTR characters. written by Semen Ustimenko 〈semenu@FreeBSD.org〉.
MidnightBSD 0.3 January 3, 1999 MidnightBSD 0.3 | http://www.midnightbsd.org/documentation/man/mount_ntfs.8.html | CC-MAIN-2015-40 | refinedweb | 215 | 56.66 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.