Subject: Re: [boost] [threads] making parts of Boost.Threads header-only
From: Ulrich Eckhardt (doomster_at_[hidden])
Date: 2009-04-08 01:46:28
On Tuesday 07 April 2009 12:11:47 joaquin_at_[hidden] wrote:
>.
There is a "compile-in-place" project in the sandbox, which allows you to
#include <boost/thread/compile-in-place.cpp> in exactly one translation unit
in order to use Boost.Thread. Unfortunately I lack the time/energy to finish
converting newer versions so that this feature can finally be pushed to the
trunk.
Would that be an alternative?
Uli
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
On Thu, Feb 28, 2013 at 1:16 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> On Thu, Feb 28, 2013 at 10:32 AM, Guido van Rossum <guido at.
>
> Could you elaborate on what confusion it might cause?

Well, x.__class__ is different, repr(x) is different, ...

> As to performance relative to dict, this has definitely been my
> primary concern. I agree that the impact has to be insignificant for
> the **kwargs proposal to go anywhere. Obviously OrderedDict in C had
> better be an improvement over the pure Python version or there isn't
> much point. However, it goes beyond that in the cases where we might
> replace current uses of dict with OrderedDict.

What happens to the performance if I insert many thousands (or
millions) of items to an OrderedDict? What happens to the space it
uses? The thing is, what started out as OrderedDict stays one, but its
lifetime may be long and the assumptions around dict performance are
rather extreme (or we wouldn't be redoing the implementation
regularly).

> My plan has been to do a bunch of performance comparison once the
> implementation is complete and tune it as much as possible with an eye
> toward the main potential internal use cases. From my point of view,
> those are **kwargs and class namespace. This is in part why I've
> brought those two up.

I'm fine with doing this by default for a class namespace; the type of
cls.__dict__ is already a non-dict (it's a proxy) and it's unlikely to
have 100,000 entries. For **kwds I'm pretty concerned; the use cases
seem flimsy.

> For instance, one guidepost I've used is that typically **kwargs is
> going to be small. However, even for large kwargs I don't want any
> performance difference to be a problem.

So, in my use case, the kwargs is small, but the object may live a
long and productive life after the function call is only a faint
memory, and it might grow dramatically. IOW I have very high standards
for backwards compatibility here.

>> And I don't recall ever having wanted to know the order of the kwargs
>> in the call.
>
> Though it may sound a little odd, the first use case I bumped into was
> with OrderedDict itself:
>
> OrderedDict(a=1, b=2, c=3)

But because of the self-referentiality, this doesn't prove anything. :-)

> There were a few other reasonable use cases mentioned in other threads
> a while back. I'll dig them up if that would help.

It would.

>> But even apart from that backwards incompatibility I think this
>> feature is too rarely useful to make it the default behavior -- if
>> anything I want **kwargs to become faster!
>
> You mean something like possibly not unpacking the **kwargs in a call
> into another dict?
>
> def f(**kwargs):
>     return kwargs
> d = {'a': 1, 'b': 2}
> assert d is f(**d)
>
> Certainly faster, though definitely a semantic difference.

No, that would introduce nasty aliasing problems in some cases. I've
actually written code that depends on the copy being made here.
(Yesterday, actually. :-)

--
--Guido van Rossum (python.org/~guido)
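[Editor's note: the copy semantics Guido relies on are easy to demonstrate. This sketch is not from the thread; it shows that, under current CPython semantics, **kwargs is always a fresh dict, so mutating it cannot affect the caller's dict, which is exactly the guarantee the proposed optimization would break.]

```python
def f(**kwargs):
    # Under current semantics, kwargs is a brand-new dict,
    # not the dict the caller unpacked with **.
    kwargs['c'] = 3
    return kwargs

d = {'a': 1, 'b': 2}
result = f(**d)

assert result is not d           # a copy is made at the call site
assert d == {'a': 1, 'b': 2}     # the caller's dict is untouched
assert result == {'a': 1, 'b': 2, 'c': 3}
```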
Linq to SQL Stored Procedures with Multiple Results – IMultipleResults
Continuing my post series about Linq to SQL, this post talks about using stored procedures that return multiple result sets in Linq to SQL. If you missed any of my previous posts about Linq to SQL, here is a reminder:
Sql Server supports returning more than a single result type from a stored procedure. This was very useful when we wanted to fill a large Dataset with multiple tables in a single access to the database. Similarly, this is also very useful when using Linq to SQL, where multiple result sets are exposed through the IMultipleResults interface.
So, we create the following stored procedure, based on the schema from the post Linq to SQL Stored Procedures:
CREATE PROCEDURE dbo.GetPostByID
(
@PostID int
)
AS
SELECT p.*
FROM Posts AS p
WHERE p.PostID = @PostID
SELECT c.*
FROM Categories AS c
JOIN PostCategories AS pc
ON (pc.CategoryID = c.CategoryID)
WHERE pc.PostID = @PostID
The calling method in the class the inherits from DataContext should look like:
[Database(Name = "Blog")]
public class BlogContext : DataContext
{
…
[Function(Name = "dbo.GetPostByID")]
[ResultType(typeof(Post))]
[ResultType(typeof(Category))]
public IMultipleResults GetPostByID(int postID)
{
IExecuteResult result =
this.ExecuteMethodCall(this,
((MethodInfo)(MethodInfo.GetCurrentMethod())),
postID);
return (IMultipleResults)(result.ReturnValue);
}
}
Notice that the method is decorated not only with the Function attribute that maps to the stored procedure name, but also with ResultType attributes specifying the types of the result sets that the stored procedure returns. Additionally, the method returns the untyped interface IMultipleResults:
public interface IMultipleResults : IFunctionResult, IDisposable
{
IEnumerable<TElement> GetResult<TElement>();
}
so the program can use this interface in order to retrieve the results:
BlogContext ctx = new BlogContext(…);
IMultipleResults results = ctx.GetPostByID(…);
IEnumerable<Post> posts = results.GetResult<Post>();
IEnumerable<Category> categories = results.GetResult<Category>();
Enjoy!
Thank you for the great series of articles about Linq to SQL.
Please generate and post the T-SQL used to create the 4 tables (Post, Blogs, etc.) and the relations between them.
Hi Zvika,
The SQL Script can be found here:
Enjoy!
Could I do it with a dynamic Linq query (ExecuteQuery) that returns multiple result sets? How?
Thanks Guy! This post as well as your previous post on Stored Procedures was extremely helpful. It was exactly what I was looking for, thanks!
What happens if the first select returns no results? I have a scenario where the first select returns nothing and the second result returns something. I'm getting "Unable to cast object of type '…' to '…'". If the first select returns something, and the second returns nothing, I get "Value cannot be null. Parameter name: source".
I need some direction or info on how LINQ handles multiple results when one or more results has no records coming back.
Tim –
Just make your first select return an empty recordset.
So instead of
BEGIN
IF @param = 1
SELECT a, b FROM c
SELECT d, e FROM f
END
do
BEGIN
SELECT a, b FROM c
WHERE @param = 1
SELECT d, e FROM f
END
Does the function (which gets created for the SP), in this case GetPostByID, get the return type IMultipleResults by default after dragging and dropping onto the .dbml file?
I am trying this with one of my SPs, which returns four tables, but in the .dbml.designer.cs file I see the function return type as ISingleResult.
Can we change the .designer.cs file?
Thanks in Advance,
Pranil
hi,
I checked some sites and came to the conclusion that, if your SP is returning more than one table, you can either use the sqlmetal.exe tool or manually modify the designer.cs class to make the method's return type IMultipleResults.
Now the SP returns 6 tables (which is what I require), however there is data only in the first table and the other tables are empty (which is wrong), since if I execute the SP through Server Explorer I get data in all 6 tables.
Any idea what is going wrong?
Thanks,
Pranil
I see your code, it's good.
I have a problem where the SP returns two tables, both built from different joins.
In your SP, the fields returned by both queries come from a single table even though a join is used in the second query, so it is easy to understand the return types, i.e. Post and Category.
In my case, what return types do I use when both queries return fields from different tables?
e.g
Table Name::
emp(id,ename,salary,deptid)
dept(deptid,dname)
SP::
select ename,salary,dname from emp,dept where emp.deptid=dept.deptid
select dname,sum(salary) from emp,dept where emp.deptid=dept.deptid group by dname
Hi,
I am using a DataSet with stored procedures (SPs), but now I want to do this using Linq, and the SP is still needed.
I have SPs which return more than one result, and the solution for that is IMultipleResults, as given in the forum above.
But I have a problem there: each result of the SP has a join over more than one table, and in my dbml file only one result class is generated automatically.
How can I create another class for my second (or next) result set? Should I do it manually, or is there an easier way?
Please reply as soon as possible.
How do we handle this if the second recordset comes from another SP executed inside the first one?
I am getting an error when I change the context designer from ISingleResult to IMultipleResults.
On executing it says "More than one result type declared for function
'Sp_Name' that does not return IMultipleResults".
Any idea why that is?
Please reply as soon as possible.
The dbml file overrides the modifications to the designer.cs file, and each time I add something to the model I have to re-type the IMultipleResults definition.
Any recommendation?
Leit
What about the classes Post and Category? Where is the mapping between the SP results and those classes?
Uri, in case you're still wondering how to modify the dbml file and avoid the code being overwritten: you can create a partial class for your DataContext and put the code that calls the SP in there.
Eduardo
Anyone have this code in VB.Net?
It is really a fabulous article. It also enabled me to think out of the box.
Thanks a lot for this post
On Tue, Dec 15, 2009 at 6:50 PM, steve donovan <steve.j.donovan@gmail.com> wrote:
> On Tue, Dec 15, 2009 at 5:13 PM, Francesco Abbate <gslshell@gmail.com>
>> Talking about point 2, I don't know but I've got the idea that Lua
>> does not have a really good set of libraries.
>
> No standard libraries - that's one of the Python/Lua differences. The
> Three do the kernel and we're supposed to supply the OS ;) A lot of
> good code has been produced, but not so much consensus.

I wonder if it's time to start thinking about specifying a basic set
of standard libraries; not in the sense of 'endorsed by the Lua
developers' since we know that's not their job.

Another way of putting this: what are the functions you find yourself
needing to rewrite for yourself? And then to agree on a core API which
satisfies as many of those needs, but keeping the total function count
to less than a hundred.

E.g., a candidate would be a split() function which takes a string and
a regular expression and returns a table; nearly every non-trivial Lua
package has one of these and it is actually a more delicate operation
than it looks at first.

Not a new idea, of course; there is stdlib and my Penlight (which was
partly inspired by frustration with stdlib). On the non-Lua side,
there is Mark Edgar's extension proposal (which is BTW not about
extending the core):.

Namespacing is an important concept, which the Perl people understand
well (David M can elaborate on this). Even if we could agree on some
namespaces, that would be progress.

steve d.
pcap_list_datalinks(3)

NAME
pcap_list_datalinks, pcap_free_datalinks - get a list of link-layer header types supported by a capture device, and free that list

SYNOPSIS
#include <pcap/pcap.h>

int pcap_list_datalinks(pcap_t *p, int **dlt_buf);
void pcap_free_datalinks(int *dlt_list);

DESCRIPTION
pcap_list_datalinks() is used to get a list of the supported link-layer header types of the interface associated with the pcap descriptor. pcap_list_datalinks() allocates an array to hold the list and sets *dlt_buf to point to it; the caller is responsible for freeing the list with pcap_free_datalinks(), which frees the list of link-layer header types pointed to by dlt_list. On failure, pcap_geterr() or pcap_perror() may be called with p as an argument to fetch or display the error text.

SEE ALSO
pcap(3), pcap_geterr(3), pcap_datalink_val_to_name(3), pcap-linktype(7)

17 September 2013
libpcap 1.7.2 - Generated Sat Mar 14 06:32:40 CDT 2015 | http://manpagez.com/man/3/pcap_list_datalinks/ | CC-MAIN-2018-34 | en | refinedweb |
In Anticipation of Tiger
Theme 5: Ease of Development
The Java language and syntax are being enhanced to make code more readable, expressive, compact, safer, and easier to develop, without sacrificing compatibility. These new language features are being incorporated after several years of discussion and comparison of Java to other programming languages. These features are the most anticipated among the Tiger release contents and have spawned numerous threads of discussion within the developer community. The changes will come in the form of class file format changes, generics (a.k.a. parameterized types, like C++ templates) support, a 'foreach' type of looping construct, automatic conversion between primitive and corresponding object data types, type-safe enums, an improved syntax for using static constants, and the use of metadata tags which allow the developer to insert simple declarative tags in the source code for which the compiler generates the necessary boilerplate code.
These changes will have an impact on the way developers write Java programs. Taking advantage of them will require the developer to learn new language features and new syntax, which must become part of the developer's arsenal. Let's look deeper into each of these new features.
- Class File Format Changes
These changes will modify and extend the Java Class File format to support updates to the Java platform and language specifications. The changes will add support for quicker and more efficient byte code verification, class literal support directly from the class file format, and will increase certain implicit size limits within the class file. These changes will be transparent to the application developer. No code changes would be required for existing codebases.
- Support for Generics
Generic support is the ability to specify the particular type of object used in a collection, rather than defaulting to the Object type for generic behaviour. These notions of genericity are based on parametric polymorphism. These concepts are best explained with an example illustrating the benefits and syntax changes.
Let's say you have a method that prints all the Strings in the collection passed to it in lower case.
/**
 * Print all the Strings in the Collection in lower case.
 **/
public void printLowerCase(Collection c) {
    Iterator iter = c.iterator();
    while (iter.hasNext()) {
        String str = (String)iter.next();
        System.out.println(str.toLowerCase());
    }
}
Generics provide a way to avoid casting by binding generic collections to specific types at compile time rather than runtime. The same method written using generics would be:
/**
 * Print all the Collection Strings in lower case.
 **/
public void printLowerCase(Collection<String> c) {
    Iterator<String> iter = c.iterator();
    while (iter.hasNext()) {
        String str = iter.next();
        System.out.println(str.toLowerCase());
    }
}
Here, we explicitly specify what type of Objects the Collection will contain, thereby eliminating the need for casting. The programmer's intent is now absolutely clear and is part of the method signature rather than the documentation. If a programmer inadvertently passes a collection of any other type, the calling code would not even compile. This eliminates runtime ClassCastExceptions altogether and provides compile-time type safety.
Generics will improve code readability, expressiveness, and provide safety against runtime exceptions.
- The 'foreach' type of construct: Tiger plans to introduce an additional for loop syntax with capability similar to the 'foreach' looping construct found in so many other languages. This addition is mainly to increase developer productivity and reduce the amount of code required to iterate through loops.
Let's say you have to loop through a Collection of Strings. Currently, you may achieve the loop by using a while statement or a for statement, as illustrated below:
Collection c;

Iterator i = c.iterator();
while (i.hasNext()) {
    // Process element
}

for (Iterator i = c.iterator(); i.hasNext(); ) {
    // Process element.
}
Both these constructs require the creation of an Iterator object and explicit specification on how to iterate through the collection, by using the hasNext() method.
The 'enhanced for' construct makes the code look like:
Collection c;

for (Object o : c) {
    // Process element
}
This reads as 'for each Object o in Collection c'. Combined with generics support, this could be:
Collection<String> c;

for (String o : c) {
    // Process element
}
The 'foreach' type construct eliminates the need for Iterator creation and delegates all details of how to iterate through the collection to the compiler. It also reduces the amount of code required to iterate through collections.
- Automatic conversion between primitive types and corresponding object types: Java has a 'split type' system, which means there is a difference in the way primitive types and object types are handled. This forces the programmer to perform manual conversions between primitive types and corresponding object types in code.
If you would like to add a primitive int to an Integer object, you would have to do:
int i;
Integer j;

int k = i + j.intValue();
Integer kObj = new Integer(k);
In Tiger, the conversion between primitive types and their corresponding Object type will be automatic. This feature is also referred to as 'autoboxing of primitives'.
The code is now as easy as:
int i;
Integer j;

int k = i + j;
Integer kObj = i + j;
This feature almost removes all distinction between primitive types and corresponding object types for the programmer. It further allows the addition of primitive types to Collections, so this
Collection c;
int i;

c.add(new Integer(i));
becomes
c.add(i);
- Typesafe enums: An enum is a bounded set of distinct values that a particular object can take. They are similar to a grouping of constants representing all permissible values for a particular object. Tiger intends to add language-level support for the typesafe enum pattern. This pattern is best described by an example:
/**
 * A typesafe enum representing the states of a traffic light.
 **/
public class TrafficSignal {

    String color;

    /* Clients should not be able to instantiate. This object can only
       take on a fixed set of values. */
    private TrafficSignal(String color) {
        this.color = color;
    }

    public static final TrafficSignal GREEN = new TrafficSignal("GREEN");
    public static final TrafficSignal YELLOW = new TrafficSignal("YELLOW");
    public static final TrafficSignal RED = new TrafficSignal("RED");

    public boolean equals(Object o) {
        if (o.getClass() != getClass()) return false;
        TrafficSignal signal = (TrafficSignal)o;
        return signal.color == color;
    }
}
and use it as:
TrafficSignal signal;

if (signal.equals(TrafficSignal.GREEN)) ...
In Tiger, language-level support for this pattern will be provided, thereby reducing the code to look like:
public enum TrafficSignal {
    GREEN("GREEN"), YELLOW("YELLOW"), RED("RED");

    private String color;

    TrafficSignal(String color) {
        this.color = color;
    }
}
and use it as:
TrafficSignal signal;

if (signal == TrafficSignal.GREEN) ...
You could even use these enums in switch statements, like this:
TrafficSignal signal;

switch (signal) {
    case RED:
        stop();
        break;
    case YELLOW:
        slowDown();
        break;
    case GREEN:
        go();
        break;
    default:
        slowDown();
}
Because the typesafe enum type is just like a class, we could add methods and fields and even use them in collections.
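To illustrate that last point, here is a small sketch (not from the original article) of a Tiger-style enum carrying a field and behaviour of its own:

```java
public enum TrafficSignal {
    GREEN("GREEN"), YELLOW("YELLOW"), RED("RED");

    private final String color;

    TrafficSignal(String color) {
        this.color = color;
    }

    // Enums are full classes, so they can expose methods too.
    public TrafficSignal next() {
        switch (this) {
            case RED:   return GREEN;
            case GREEN: return YELLOW;
            default:    return RED;
        }
    }

    public String getColor() {
        return color;
    }
}
```

Calling `TrafficSignal.RED.next()` yields `GREEN`, and the constants can also be stored in any Collection just like other objects.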
- Improved syntax for using static constants: To use constants in Java, the programmer must either fully qualify the constant reference or implement an interface defining the constants. Consider an example of what is described as the 'Constants Antipattern':
public interface Constants {
    public static final int DAYS_IN_WEEK = 7;
    public static final int DAYS_IN_YEAR = 365;
}
To use these constants, the programmer must either fully qualify the constant reference:
int weeks_in_year = Constants.DAYS_IN_YEAR / Constants.DAYS_IN_WEEK;
or implement the interface:
public class Year implements Constants {
    int weeks_in_year = DAYS_IN_YEAR / DAYS_IN_WEEK;
}
Tiger intends to introduce a 'static import' facility that allows for referencing of static constants without implementing the interface or prefixing the class name of the defining class.
import static Constants.*;

public class Year {
    int weeks_in_year = DAYS_IN_YEAR / DAYS_IN_WEEK;
}
The import statement will make all static members of the Constants class available to the Year class. This compact syntax provides the benefits of not having to fully qualify the constant reference and overcomes the disadvantages of implementing an interface.
- Metadata facility to annotate source code: Programmers writing EJBs and RMI-based apps need to define a lot of boilerplate code to implement their objects. For example, to implement a Remote object (in the RMI sense), a programmer must define the interface.
package service;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface ServiceRunner extends Remote {
    Object executeService(Service s) throws RemoteException;
}
and define the implementation of the remote interface:
package servicerunner;

import java.rmi.*;
import java.rmi.server.*;

public class ServiceRunnerImpl extends UnicastRemoteObject implements ServiceRunner {
    public Object executeService(Service t) {
        ...
    }
}
Using Tiger's metadata facility would allow the programmer to write code using tags that instruct the compiler to generate the necessary boilerplate code for the remote interface:
public class ServiceRunner {
    @Remote
    public Object executeService(Service t) {
        ...
    }
}
This also annotates the source code by marking it to have certain characteristics.
There is still much discussion about this feature, its implementation, extent, and scope. More details on this will be available as we come closer to the release date.
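[Editor's note: for reference, this is how the facility eventually shipped in J2SE 5.0, which the article could not know at the time. Annotation types are declared with @interface and, with RUNTIME retention, can be read back through reflection, which is how tools detect tags and generate boilerplate. The class and method names below reuse the article's hypothetical ServiceRunner example.]

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Declaring the annotation type itself; RUNTIME retention makes it
// visible to reflection at runtime.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Remote {
}

class ServiceRunner {
    @Remote
    public Object executeService(Object service) {
        return service;
    }
}

public class AnnotationDemo {
    static boolean isRemote(String name) throws Exception {
        Method m = ServiceRunner.class.getMethod(name, Object.class);
        return m.isAnnotationPresent(Remote.class);
    }

    public static void main(String[] args) throws Exception {
        // A code generator would look for the tag and emit the
        // boilerplate; here we just confirm the tag is visible.
        System.out.println(isRemote("executeService")); // prints "true"
    }
}
```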
Summary
As you can see, this release is going to be a big one, especially with respect to language changes. The introduction of these changes and additional APIs will come with some learning curve, but will make Java more developer-friendly and easier to work with. The language changes widen Java's reach to cross-language developers. The performance enhancements will make this the fastest Java ever. The focus on XML and Web Services makes Java a language of choice for modern network programming paradigms. This release will strengthen the mind-share and further increase the popularity of the platform.
Resources
- JSR-176 J2SETM 1.5 (Tiger) Release Contents:
- JSR-3 Java Management Extensions (JMX) Specification:
- Implementation of JMX:
- JSR-174 Monitoring and Management Specification for the Java Virtual Machine:
- JSR-163 Java Platform Profiling Architecture:
- JSR-14 Add Generic Types to the Java Programming Language:
- Experimental Java compiler with support for generics:
- Specification Draft for Generics:
- JSR-101 Java APIs for XML RPC:
- JSR-105 XML Digital Signature APIs:
- JSR-106 XML Digital Encryption APIs:
- Sun's Web Services Developer Pack (WSDP):
- JSR-121 Application Isolation API Specification:
- JSR-203 More new I/O APIs for the Java Platform (NIO.2):
- JSR-202 Java Class File Specification Update:
- JSR 201 Extending the JavaTM Programming Language with Enumerations, Autoboxing, Enhanced for loops, and Static Import:
- Josh Bloch's article on New Language Features for Tiger:
About the Author
Apu Shah, a Java developer and avid follower of breaking Java trends since 1995, also writes a biweekly column on the events of the JCP on developer.com. If you have any thoughts on these features or would you like to see other features in Tiger, feel free to contact him at apu@jcpwatch.org.
# # #
Working in a team of developers, the following might happen:
Developer 1 creates a piece of UI and adds a method to load data into said UI. The method he creates is called GetUserInfo(…)
Here comes Developer 2 and he also creates a piece of UI that displays user info. Developer 2 however, needs to display more than just the user info, he also needs the requests made by the user. Thus GetUserInfoWithRequests(…) is created next to the already existing GetUserInfo method.
Developer 3 wants to display some user info as well, but not just the info by itself, and also without the requests but WITH the users team information. Can you guess what happens? … Right. GetUserInfoWithTeamMembers(…) is added to our list.
So we now have three methods that do something similar, but not quite the same:
- GetUserInfo(…)
- GetUserInfoWithRequests(…)
- GetUserInfoWithTeamMembers(…)
If you let this run uncontrolled it will turn into a maintenance nightmare! We could simply end the discussion and say it’s a lack of discipline within the team, but there is a nice way to help prevent this through code!
What is needed is a query-building object that will handle the loading of all this data for us. Without resorting to separate methods that all touch the database.
This query-building object has at least one method: Fetch(). The fetch method is the only method that will touch the database. The constructor of our object will take in the key on which to filter. In our example, this would probably be the primary key for the user in our database. Let's call it a UserInfoRetriever to match the example.
public class UserInfoRetriever
{
    readonly int _userId;

    public UserInfoRetriever(int userId)
    {
        _userId = userId;
    }

    public User Fetch()
    {
        using (var context = new DbContext())
        {
            return context.Users.Where(u => u.UserId == _userId).SingleOrDefault();
        }
    }
}
The code above is what this might look like for the first method in our set of three. But we have more! Let's expand the class a bit further to support the other scenarios.
public class UserInfoRetriever
{
    readonly int _userId;
    bool _withRequests;
    bool _withTeamMembers;

    public UserInfoRetriever(int userId)
    {
        _userId = userId;
    }

    public UserInfoRetriever WithRequests()
    {
        _withRequests = true;
        return this;
    }

    public UserInfoRetriever WithTeamMembers()
    {
        _withTeamMembers = true;
        return this;
    }

    public User Fetch()
    {
        using (var context = new DbContext())
        {
            IQueryable<User> users = context.Users;

            if (_withRequests)
            {
                // Include returns a new query, so the result must be kept.
                users = users.Include("Requests");
            }

            if (_withTeamMembers)
            {
                users = users.Include("TeamMembers");
            }

            return users.Where(u => u.UserId == _userId).SingleOrDefault();
        }
    }
}
This version of the class implements two extra methods. They only change the bools on our class and then return the object itself. This is where the cool stuff is at. Because these methods return the object itself, we can chain them together to form queries as we see fit, like this:
var userInfo = new UserInfoRetriever(10).Fetch();
var userInfo = new UserInfoRetriever(10).WithTeamMembers().WithRequests().Fetch();
var userInfo = new UserInfoRetriever(10).WithRequests().Fetch();
var userInfo = new UserInfoRetriever(10).WithTeamMembers().Fetch();
As you can see, we can load the data any way we like and our data-access code is still all in one place (in the Fetch() method). Adding new scenarios is very easy and it prevents (with a little discipline of course :-)) willy-nilly methods that all handle their own data-access. | https://itq.nl/fluent-data-access/ | CC-MAIN-2018-34 | en | refinedweb |
For my upcoming game, Mouse Dreams, I need to support gamepad input as well as keyboard and touch. It turns out this is not quite as simple as I first thought. The big issue with supporting UI with different input methods is how it will appear to the user. With touch or mouse input, the user simply presses a button, but with controller or keyboard input, you need to indicate to the player which button is currently selected.
Unity’s UI can easily implement colour changes when a button is selected, but this can look weird with touch input, and is not always easy to see. Unity’s UI also doesn’t work with controller input if no UI object is currently selected (or a default set), which is a pain.
I wrote a simple script to help with these problems. It simply scales the currently selected UI button to be a bit larger than the others. The scaling amount can be adjusted in the Inspector, and you can set a default UI object to be selected when the scene starts, which in turn enables controller input.
The Code
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

public class ButtonHighlighter : MonoBehaviour
{
    private Button previousButton;

    [SerializeField]
    private float scaleAmount = 1.4f;

    [SerializeField]
    private GameObject defaultButton;

    void Start()
    {
        if (defaultButton != null)
        {
            EventSystem.current.SetSelectedGameObject(defaultButton);
        }
    }

    void Update()
    {
        var selectedObj = EventSystem.current.currentSelectedGameObject;
        if (selectedObj == null) return;

        var selectedAsButton = selectedObj.GetComponent<Button>();
        if (selectedAsButton != null && selectedAsButton != previousButton)
        {
            if (selectedAsButton.transform.name != "PauseButton")
                HighlightButton(selectedAsButton);
        }

        if (previousButton != null && previousButton != selectedAsButton)
        {
            UnHighlightButton(previousButton);
        }

        previousButton = selectedAsButton;
    }

    void OnDisable()
    {
        if (previousButton != null)
            UnHighlightButton(previousButton);
    }

    void HighlightButton(Button butt)
    {
        if (SettingsManager.Instance.UsingTouchControls) return;
        butt.transform.localScale = new Vector3(scaleAmount, scaleAmount, scaleAmount);
    }

    void UnHighlightButton(Button butt)
    {
        if (SettingsManager.Instance.UsingTouchControls) return;
        butt.transform.localScale = new Vector3(1, 1, 1);
    }
}
How to Use
Just add the script to a UI canvas with multiple buttons. I coded the script specifically for buttons, but you can easily adjust it to work with different UI objects if you need to.
For improved scaling, you can implement some tweening or lerping.
Set the scaling to an appropriate amount. The default is 1.4f, which may be a bit large for some uses. You need to find a balance between looking good and being noticeable to the player.
Set a default button in the Inspector. This will be selected automatically when the scene starts, which has the side effect of enabling controller input. | http://unity.grogansoft.com/2016/12/ | CC-MAIN-2018-47 | en | refinedweb |
React, probably the most popular library to render views these days, created by Facebook developers.
This writing belongs to a series of articles about using components with Vue, React, Polymer and Angular 2.
Introduction to React
According to its authors, the whole reason this framework exists is to compose large web applications through components instead of directives. Those components will only be updated if the data bound to them does.
To achieve this React provides a set of methods to express HTML elements with object notation, an abstraction pattern usually known as virtual DOM.
So, instead of creating and appending elements as usual, you represent them with an object passing tag, properties and children to the createElement function.
let Link = React.createElement(
  'a',
  { href: '', className: 'github-link' },
  'GitHub'
);
In this example we are creating an anchor, passing the href and class properties and a text node as its only child.
It’s necessary to express element’s properties as their JavaScript equivalent, that’s why className is used instead of class.
JSX
JSX is an optional way to describe render trees which basically lets you write React elements in a syntax similar to HTML.
const GitHubLink = ( <a className="github-link" href="" > GitHub </a> );
Seeing tags inside your code might feel weird at first; however, it becomes the better option when writing more complex elements with many children, which can be hard to read as nested createElement calls.
Of course this will need to be transpiled to actually work, but more on that later.
Writing components
As React promises, the best reason for using it is to improve the architecture of your application by dividing the views into reusable components.
One of the many ways to achieve this is extending the component class.
import React, { Component } from 'react';
import { render } from 'react-dom';

class GitHubLink extends Component {
  render() {
    return (
      <a href="" className="github-link">
        { this.props.user } on github
      </a>
    );
  }
}

render(
  <GitHubLink user="jeremenichelli"/>,
  document.querySelector('#example')
);
These components become custom tags you can put inside other components or pass to the render function. Component tags need to be capitalized so JSX can differentiate them from standard HTML ones.
The problem with this component is that the URL is hardcoded, and an anchor always pointing to the same page won't be that reusable.
Props
To customize our components, data values can be passed to them as props.
import React, { Component } from 'react';
import { render } from 'react-dom';

const baseUrl = '';

class GitHubLink extends Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <a href={ baseUrl + this.props.user }>
        { this.props.user } on github
      </a>
    );
  }
}

render(
  <GitHubLink user="jeremenichelli"/>,
  document.querySelector('#example')
);
When declaring components with this pattern, props need to be passed to the super class constructor so they are applied to the component itself.
As you see in the href value, JavaScript expressions can be used inside JSX when wrapped with curly braces to apply more dynamic and readable approaches.
import React, { Component } from 'react';
import { render } from 'react-dom';

const baseUrl = '';

class GitHubLink extends Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <a href={ baseUrl + this.props.user }>
        { this.props.user } on github
      </a>
    );
  }
}

class GitHubUsers extends Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <div>
        <GitHubLink user="jeremenichelli"/>
        <GitHubLink user="iamdustan"/>
      </div>
    );
  }
}

render(
  <GitHubUsers />,
  document.querySelector('#example')
);
The render function in React components always has to return a single root element, which is why the two GitHub links are placed inside a div tag.
Remember you can use JavaScript inside render, which is pretty neat when the number of children is unknown or too big.
import React, { Component } from 'react';

const users = [ 'jeremenichelli', 'iamdustan' ];

class GitHubUsers extends Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <div hidden={ !users.length }>
        { users.map(user => <GitHubLink user={ user }/>) }
      </div>
    );
  }
}
This is a better pattern since now the logic inside render doesn't need to be updated when the data changes, improving the maintainability of the code.
PropTypes
Validation can be added to props, for example specifying type.
GitHubLink.propTypes = { user: PropTypes.string.isRequired };
After passing the type we can go further and use isRequired so the presence of the prop becomes mandatory for rendering the component.
propTypes have been moved to a standalone package.
State
When data values can change internally because a network request or a user interaction happened, we place them in the component's state.
import React, { Component } from 'react';

class AccordionElement extends Component {
  constructor(props) {
    super(props);
    this.state = {
      expanded: false
    };
  }
  render() {
    return (
      <div className={ this.state.expanded ? 'expanded' : '' }>
        <button>{ this.props.heading }</button>
        <p>{ this.props.content }</p>
      </div>
    );
  }
}
Here we are defining an accordion element; the expanded value determines whether the content is visible or not, so it makes sense to define it as state.
To reveal the content we need to toggle the expanded value. To do it we use setState, and React will update the component's view.
import React, { Component } from 'react';

class AccordionElement extends Component {
  constructor(props) {
    super(props);
    this.state = {
      expanded: false
    };
  }
  toggleState() {
    this.setState({
      expanded: !this.state.expanded
    });
  }
  render() {
    return (
      <div className={ this.state.expanded ? 'expanded' : '' }>
        <button onClick={ this.toggleState.bind(this) }>
          { this.props.heading }
        </button>
        <p>{ this.props.content }</p>
      </div>
    );
  }
}
When the toggleState function gets called as an event handler it loses the component as its context, so with bind we set it back to the component.
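The mechanics are plain JavaScript, not React: a method passed around as a bare function reference loses its this, and bind produces a copy permanently tied to the instance. A minimal sketch outside React:

```javascript
class Toggle {
  constructor() {
    this.expanded = false;
  }
  toggleState() {
    this.expanded = !this.expanded;
  }
}

const t = new Toggle();

// a bare method reference loses the instance as its context;
// calling it throws, because class bodies run in strict mode
const unbound = t.toggleState;
let lostContext = false;
try {
  unbound();
} catch (e) {
  lostContext = true;
}

// bind ties the function to the instance for good
const bound = t.toggleState.bind(t);
bound();

console.log(lostContext, t.expanded); // -> true true
```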
Model binding
As the library doesn’t come with directives out of the box, when you need to track properties like the value of an input you will have to do it yourself.
It’s not hard since React encapsulation itself comes handy for this.
import React, { Component } from 'react';

export default class SearchBox extends Component {
  constructor(props) {
    super(props);
    this.state = {
      searchValue: ''
    };
    // bind events
    this.handleChange = this.handleChange.bind(this);
  }
  handleChange(e) {
    this.setState({
      searchValue: e.target.value
    });
  }
  render() {
    return (
      <form action="?">
        <input
          type="text"
          value={ this.state.searchValue }
          onChange={ this.handleChange }
        />
        <button type="submit">Search</button>
      </form>
    );
  }
}
This is a pretty common pattern that allows you to access that state later in case it affects some other rendered section of the component.
Styles
If you’re using React, the main reason should be that you find the separation of concerns and encapsulation the best way to structure your application, so your strategy around styles should match this philosophy.
CSS modules alter original selectors so they are unique and, as a consequence, encapsulate the styles for a set of elements.
import React, { Component } from 'react';
import styles from '../styles/icon.css';

class Icon extends Component {
  render() {
    return (
      <i className={ styles.icon }></i>
    );
  }
}
When you import a style file like this, selectors are changed to unique keys, combinations of letters and numbers. Inside your script an object is returned containing exactly those unique references, which you can use in your components as classes.
Of course this happens at compilation time so you will need external tools like webpack loaders to manage your styles this way.
Routing
The most popular alternative to turn your React project into a single page application is the official react router. As you might have guessed, you need to define the shell of your app and the views as components.
import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="app">
        <h1>React single page application</h1>
        { this.props.children }
      </div>
    );
  }
}

class Home extends Component {
  render() {
    return (
      <div className="home">
        <h2>Home</h2>
        { /* view content ... */ }
      </div>
    );
  }
}

class About extends Component {
  render() {
    return (
      <div className="about">
        <h2>About</h2>
        { /* view content ... */ }
      </div>
    );
  }
}
Next, you pass these views as props to special route components that will mount the app and manage the transitions for you.
import { render } from 'react-dom';
import { Router, Route, IndexRoute } from 'react-router';

render((
  <Router>
    <Route path="/" component={ App }>
      <IndexRoute component={ Home } />
      <Route path="/about" component={ About }>
        <Route path="/about/:author" component={ Author }></Route>
      </Route>
    </Route>
  </Router>
), document.getElementById('app'))
Basically you’re configuring the routes by placing tags defining the structure of your project, meaning you can go deeper placing routes and defining parameters.
import React, { Component } from 'react';
import { Link } from 'react-router';

class Home extends Component {
  render() {
    return (
      <div className="home">
        <h2>Home</h2>
        <Link to="about/jeremenichelli">About me</Link>
        { /* view content ... */ }
      </div>
    );
  }
}
To render anchors pointing to a defined route, a Link component is available.
Ecosystem
Even though it has a great and growing community, its ecosystem is its weakest point.
Trying to learn a new framework always brings a learning curve which, in my opinion, is too steep in React. There are a lot of reasons for that.
The first one is the documentation. I have to admit it is really complete but unorganized, which is a big deal for beginners, and probably a consequence of the fast evolution pace the repository has experienced recently.
The official docs grew organically and need gardening
- Dan Abramov
The second one is JSX itself. Using it really improves the developing experience, but it brings its own tricks and limitations to the yard.
I would suggest trying React without it first or you will find yourself learning two things at the same time, not knowing what’s wrong and where when your script renders nothing.
The last one is tooling. If you're building your application with React, you will need transpiling and a build process to handle the whole thing, since you're choosing this path to structure your project by separating it into smaller parts.
There are a lot of boilerplates, which I think is a symptom of what’s going on with the library nowadays. Apparently developers are having a hard time around decisions when they start a new project which includes React.
Architecture
This heading probably won't be present in all the articles from this series, and the reason is that React actually shaped the structure of my application.
While trying to avoid repeated code and follow best practices, the library itself in combination with CSS modules forced me in one way or another to encapsulate both logic and styles.
Components as modules
Once you’ve resolved the tooling part, enclosing each component as a module is the best way to organize an application that might scale in time.
The SearchBox component example shown previously can be easily imported to compose more complex components or views.
import React, { Component } from 'react';

// components
import Card from '../components/card.js';
import SearchBox from '../components/search-box.js';

export default class SearchView extends Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <Card>
        <SearchBox />
      </Card>
    );
  }
}
Following these design rules, the project grows naturally without overthinking where you should put something, and that's a big win for this library.
Wrap-up
React has something that makes you like it: it does one thing. There's a short set of methods and patterns to learn and you're ready to go, but it needs to improve documentation and tooling to help onboard new adopters without strong concepts around architecture and components.

It definitely forces you to change how you develop and scale an application, but once you've passed the initial learning curve, letting React rule your project's architecture is a relief.
Most of these thoughts came while building a simple web app using tools and approaches mentioned in this article, which you can explore on GitHub.
TypeScript - Understanding TypeScript
By Peter Vogel | January 2015
The question then remains, “Would you rather write your client-side code in this language or in JavaScript?”
TypeScript Is Data-Typed
TypeScript doesn’t have many built-in data types you can use to declare variables—just string, number and Boolean. Those three types are a subtype of the any type (which you can also use when declaring variables). You can set or test variables declared with those four types against the types null or undefined. You can also declare methods as void, indicating they don’t return a value.
This example declares a variable as string:
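A declaration like that is a one-liner (the variable name here is illustrative):

```typescript
// annotate the variable with the string type
let customerName: string;

// only string values may be assigned to it now
customerName = "Peter Vogel";
```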
You can extend this simple type system with enumerated values and four kinds of object types: interfaces, classes, arrays and functions. For example, the following code defines an interface (one kind of object type) with the name ICustomerShort. The interface includes two members: a property called Id and a method called CalculateDiscount:
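A sketch of that interface (the member types are assumptions, since only the member names are given):

```typescript
interface ICustomerShort {
  Id: number;
  CalculateDiscount(): number;
}

// any object with those two members satisfies the interface
let shortCustomer: ICustomerShort = {
  Id: 1,
  CalculateDiscount() { return 0; }
};
```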
As in C#, you can use interfaces when declaring variables and return types. This example declares the variable cs as type ICustomerShort:
You can also define object types as classes, which, unlike interfaces, can contain executable code. This example defines a class called CustomerShort with one property and one method:
Like more recent versions of C#, it’s not necessary to provide implementation code when defining a property. The simple declaration of the name and type is sufficient. Classes can implement one or more interfaces, as shown in Figure 1, which adds my ICustomerShort interface, with its property, to my CustomerShort class.
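A sketch of a class implementing the interface in that way (member types assumed):

```typescript
interface ICustomerShort {
  Id: number;
  CalculateDiscount(): number;
}

// implementing the interface just means declaring
// members with the same names and compatible types
class CustomerShort implements ICustomerShort {
  Id: number = 0;

  CalculateDiscount(): number {
    return 0;
  }
}
```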
As Figure 1 shows, the syntax for implementing an interface is as simple in TypeScript as in C#. To implement the interface’s members you simply add members with the same name instead of tying the interface name to the relevant class’ members. In this example, I simply added Id and CalculateDiscount to the class to implement ICustomerShort. TypeScript also lets you use object type literals. This code sets the variable cst to an object literal containing one property and one method:
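An object type literal in that style can be sketched like this (the member names and types here are placeholders, not the article's originals):

```typescript
// the type annotation is itself an object type literal:
// one property and one method, described inline
let cst: { Age: number; HomePhone(): string } = {
  Age: 61,
  HomePhone() { return "555-1212"; }
};
```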
This example uses an object type to specify the return value of the UpdateStatus method:
Besides object types (class, interface, literal and array), you can also define function types that describe a function’s signature. The following code rewrites CalculateDiscount from my CustomerShort class to accept a single parameter called discountAmount:
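The signature described in the text might be sketched like this (the parameter names inside the function type are assumptions):

```typescript
class CustomerShort {
  Id: number = 0;

  // discountAmount is typed as a function taking a string
  // and a boolean and returning a number
  CalculateDiscount(
    discountAmount: (discountClass: string, multipleDiscounts: boolean) => number
  ): number {
    return discountAmount("Standard", false);
  }
}

const cust = new CustomerShort();
// "Standard".length is 8, and the boolean adds nothing here
const discount = cust.CalculateDiscount((dc, multiple) => dc.length + (multiple ? 1 : 0));
```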
That parameter is defined using a function type that accepts two parameters (one of string, one of boolean) and returns a number. If you’re a C# developer, you might find that the syntax looks much like a lambda expression.
A class that implements this interface would look something like Figure 2.
Like the recent versions of C#, TypeScript also infers the datatype of a variable from the value to which the variable is initialized. In this example, TypeScript will assume the variable myCust is of CustomerShort:
Like C#, you can declare variables using an interface and then set the variable to an object that implements that interface:
Finally, you can also use type parameters (which look suspiciously like generics in C#) to let the invoking code specify the data type to be used. This example lets the code that creates the class set the datatype of the Id property:
This code sets the datatype of the Id property to a string before using it:
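Both steps can be sketched like this (the class name is illustrative):

```typescript
// the type parameter T lets the creating code pick Id's type
class CustomerTyped<T> {
  Id?: T;
}

// fix the datatype of Id to string before using it
const typedCust = new CustomerTyped<string>();
typedCust.Id = "A123";
```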
To isolate classes, interfaces and other public members and avoid name collisions, you can declare these constructs inside modules much like C# namespaces. You’ll have to flag those items you want to make available to other modules with the export keyword. The module in Figure 3 exports two interfaces and a class.
To use the exported components, you can prefix the component name with the module name as in this example:
Or you can use the TypeScript import keyword to establish a shortcut to the module:
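Both styles can be sketched like this (modern TypeScript spells the module keyword as namespace, and the member set is assumed):

```typescript
namespace CustomerModule {
  export interface ICustomerShort {
    Id: number;
  }
  export class CustomerShort implements ICustomerShort {
    Id: number = 0;
  }
}

// prefix the component with the module name...
let c1: CustomerModule.ICustomerShort = new CustomerModule.CustomerShort();

// ...or establish a shortcut with import
import CM = CustomerModule;
let c2: CM.ICustomerShort = new CM.CustomerShort();
```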
TypeScript Is Flexible About Data Typing
All this should look familiar if you’re a C# programmer, except perhaps the reversal of variable declarations (variable name first, data type second) and object literals. However, virtually all data typing in TypeScript is optional. The specification describes the data types as “annotations.” If you omit data types (and TypeScript doesn’t infer the data type), data types default to the any type.
TypeScript doesn’t require strict datatype matching, either. TypeScript uses what the specification calls “structural subtyping” to determine compatibility. This is similar to what’s often called “duck typing.” In TypeScript, two classes are considered identical if they have members with the same types. For example, here’s a CustomerShort class that implements an interface called ICustomerShort:
Here’s a class called CustomerDeviant that looks similar to my CustomerShort class:
Thanks to structural subtyping, I can use CustomerDevient with variables defined with my CustomerShort class or ICustomerShort interface. These examples use CustomerDeviant interchangeably with variables declared as CustomerShort or ICustomerShort:
This flexibility lets you assign TypeScript object literals to variables declared as classes or interfaces, provided they’re structurally compatible, as they are here:
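Put together, the structural-subtyping examples look something like this (the member set of the two classes is assumed):

```typescript
interface ICustomerShort {
  Id: number;
  FullName: string;
}

class CustomerShort implements ICustomerShort {
  Id: number = 0;
  FullName: string = "";
}

// no declared relationship to CustomerShort,
// just the same member names and types
class CustomerDeviant {
  Id: number = 0;
  FullName: string = "";
}

// structural subtyping makes these assignments legal
let cs: CustomerShort = new CustomerDeviant();
let ics: ICustomerShort = new CustomerDeviant();

// an object literal works too, provided it's structurally compatible
let literal: ICustomerShort = { Id: 42, FullName: "PH&VIS" };
```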
This leads into TypeScript-specific features around apparent types, supertypes and subtypes leading to the general issue of assignability, which I’ll skip here. Those features would allow CustomerDeviant, for example, to have members that aren’t present in CustomerShort without causing my sample code to fail.
TypeScript Has Class
The TypeScript specification refers to the language as implementing “the class pattern [using] prototype chains to implement many variations on object-oriented inheritance mechanisms.” In practice, it means TypeScript isn’t only data-typed, but effectively object-oriented.
In the same way that a C# interface can inherit from a base interface, a TypeScript interface can extend another interface—even if that other interface is defined in a different module. This example extends the ICustomerShort interface to create a new interface called ICustomerLong:
The ICustomerLong interface will have two members: FullName and Id. In the merged interface, the members from the interface appear first. Therefore, my ICustomerLong interface is equivalent to this interface:
A class that implements ICustomerLong would need both properties:
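The extension chain can be sketched like this (member types assumed):

```typescript
interface ICustomerShort {
  Id: number;
}

// ICustomerLong picks up Id and adds FullName
interface ICustomerLong extends ICustomerShort {
  FullName: string;
}

// an implementing class needs both members
class CustomerLong implements ICustomerLong {
  Id: number = 0;
  FullName: string = "";
}
```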
Classes can extend other classes in the same way one interface can extend another. The class in Figure 4 extends CustomerShort and adds a new property to the definition. It uses explicit getters and setters to define the properties (although not in a particularly useful way).
class CustomerShort {
  Id: number;
}

class CustomerLong extends CustomerShort {
  private id: number;
  private fullName: string;

  get Id(): number {
    return this.id;
  }
  set Id(value: number) {
    this.id = value;
  }
  get FullName(): string {
    return this.fullName;
  }
  set FullName(value: string) {
    this.fullName = value;
  }
}
TypeScript enforces the best practice of accessing internal fields (like id and fullName) through a reference to the class (this). Classes can also have constructor functions that include a feature C# has just adopted: automatic definition of fields. The constructor function in a TypeScript class must be named constructor and its public parameters are automatically defined as properties and initialized from the values passed to them. In this example, the constructor accepts a single parameter called Company of type string:
Because the Company parameter is defined as public, the class also gets a public property called Company initialized from the value passed to the constructor. Thanks to that feature, the variable comp will be set to “PH&VIS,” as in this example:
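That example, reconstructed from the description:

```typescript
class CustomerShort {
  // a public constructor parameter automatically
  // becomes a property initialized from the argument
  constructor(public Company: string) { }
}

const cust = new CustomerShort("PH&VIS");
const comp = cust.Company; // "PH&VIS"
```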
Declaring a constructor’s parameter as private creates an internal property it can only be accessed from code inside members of the class through the keyword this. If the parameter isn’t declared as public or private, no property is generated.
Your class must have a constructor. As in C#, if you don’t provide one, one will be provided for you. If your class extends another class, any constructor you create must include a call to super. This calls the constructor on the class it’s extending. This example includes a constructor with a super call that provides parameters to the base class’ constructor:
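A sketch of such a derived-class constructor (the base-class members are assumptions):

```typescript
class Customer {
  constructor(public Id: number) { }
}

class CustomerShort extends Customer {
  constructor(Id: number, public Company: string) {
    // forward the parameter to the base class' constructor
    super(Id);
  }
}

const shortCust = new CustomerShort(1, "PH&VIS");
```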
TypeScript Inherits Differently
Again, this will all look familiar to you if you’re a C# programmer, except for some funny keywords (extends). But, again, extending a class or an interface isn’t quite the same thing as the inheritance mechanisms in C#. The TypeScript specification uses the usual terms for the class being extended (“base class”) and the class that extends it (“derived class”). However, the specification refers to a class’ “heritage specification,” for example, instead of using the word “inheritance.”
To begin with, TypeScript has fewer options than C# when it comes to defining base classes. You can’t declare the class or members as non-overrideable, abstract or virtual (though interfaces provide much of the functionality that a virtual base class provides).
There’s no way to prevent some members from not being inherited. A derived class inherits all members of the base class, including public and private members (all public members of the base class are overrideable while private members are not). To override a public member, simply define a member in the derived class with the same signature. While you can use the super keyword to access a public method from a derived class, you can’t access a property in the base class using super (though you can override the property).
TypeScript lets you augment an interface by simply declaring an interface with an identical name and new members. This lets you extend existing JavaScript code without creating a new named type. The example in Figure 5 defines the ICustomerMerge interface through two separate interface definitions and then implements the interface in a class.
Figure 5 The ICustomerMerge Interface Defined Through Two Interface Definitions
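A reconstruction of what that figure shows (the exact member types are assumptions):

```typescript
// two separate declarations of the same interface name...
interface ICustomerMerge {
  MiddleName: string;
}
interface ICustomerMerge {
  Id: number;
}

// ...merge, so the class must supply both members
class CustomerMerge implements ICustomerMerge {
  MiddleName: string = "";
  Id: number = 0;
}
```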
Classes can also extend other classes, but not interfaces. In TypeScript, interfaces can also extend classes, but only in a way that involves inheritance. When an interface extends a class, the interface includes all class members (public and private), but without the class’ implementations. In Figure 6, the ICustomer interface will have the private member id, public member Id and the public member MiddleName.
The ICustomer interface has a significant restriction—you can only use it with classes that extend the same class the interface extended (in this case, that’s the Customer class). TypeScript requires that you include private members in the interface to be inherited from the class that the interface extends, instead of being reimplemented in the derived class. A new class that uses the ICustomer interface would need, for example, to provide an implementation for MiddleName (because it’s only specified in the interface). The developer using ICustomer could choose to either inherit or override public methods from the Customer class, but wouldn’t be able to override the private id member.
This example shows a class (called NewCustomer) that implements the ICustomer interface and extends the Customer class as required. In this example, NewCustomer inherits the implementation of Id from Customer and provides an implementation for MiddleName:
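A sketch of that arrangement (member types assumed):

```typescript
class Customer {
  private id: number = 0;
  get Id(): number { return this.id; }
  set Id(value: number) { this.id = value; }
}

// the interface picks up Customer's members, including the
// private id, so only subclasses of Customer can implement it
interface ICustomer extends Customer {
  MiddleName: string;
}

// NewCustomer inherits the Id implementation and
// provides an implementation for MiddleName
class NewCustomer extends Customer implements ICustomer {
  MiddleName: string = "";
}

const nc = new NewCustomer();
nc.Id = 7;
nc.MiddleName = "Q";
```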
This combination of interfaces, classes, implementation and extension provides a controlled way for classes you define to extend classes defined in other object models (for more details, check out section 7.3 of the language specification, “Interfaces Extending Classes”). Coupled with the ability of TypeScript to use information about other JavaScript libraries, it lets you write TypeScript code that works with the objects defined in those libraries.
TypeScript Knows About Your Libraries
Besides knowing about the classes and interfaces defined in your application, you can provide TypeScript with information about other object libraries. That's handled through the TypeScript declare keyword. This creates what the specification calls “ambient declarations.” You may never have to use the declare keyword yourself because you can find definition files for most JavaScript libraries on the DefinitelyTyped site at definitelytyped.org. Through these definition files, TypeScript can effectively “read the documentation” about the libraries with which you need to work.
“Reading the documentation,” of course, means you get data-typed IntelliSense support and compile-time checking when using the objects that make up the library. It also lets TypeScript, under certain circumstances, infer the type of a variable from the context in which it’s used. Thanks to the lib.d.ts definition file included with TypeScript, TypeScript assumes the variable anchor is of type HTMLAnchorElement in the following code:
The definition file specifies that’s the result returned by the createElement method when the method is passed the string “a.” Knowing anchor is an HTMLAnchorElement means TypeScript knows the anchor variable will support, for example, the addEventListener method.
The TypeScript data type inference also works with parameter types. For example, the addEventListener method accepts two parameters. The second is a function in which addEventListener passes an object of type PointerEvent. TypeScript knows that and supports accessing the cancelBubble property of the PointerEvent class within the function:
In the same way that lib.d.ts provides information about the HTML DOM, the definition files for other JavaScript provide similar functionality. After adding the backbone.d.ts file to my project, for example, I can declare a class that extends the Backbone Model class and implements my own interface with code like this:
If you’re interested in details on how to use TypeScript with Backbone and Knockout, check out my Practical TypeScript columns at bit.ly/1BRh8NJ. In the new year, I’ll be looking at the details of using TypeScript with Angular.
There’s more to TypeScript than you see here. TypeScript version 1.3 is slated to include union datatypes (to support, for example, functions that return a list of specific types) and tuples. The TypeScript team is working with other teams applying data typing to JavaScript (Flow and Angular) to ensure TypeScript will work with as broad a range of JavaScript libraries as possible.
If you need to do something that JavaScript supports and TypeScript won’t let you do, you can always integrate your JavaScript code because TypeScript is a superset of JavaScript. So the question remains—which of these languages would you prefer to use to write your client-side code?
Peter Vogel is a principal with PH&V Information Services, specializing in Web development with expertise in SOA, client-side development and UI design. PH&V clients include the Canadian Imperial Bank of Commerce, Volvo and Microsoft. He also teaches and writes courses for Learning Tree International and writes the Practical .NET column for VisualStudioMagazine.com.
Thanks to the following Microsoft technical expert for reviewing this article: Ryan Cavanaugh
regression test scripting framework for embedded systems developers
Project description
Intro
Using MONK you can write tests like you would write unit tests, except that they are able to interact with your embedded system.

Let's look at an example: an embedded system with a serial terminal and a network interface. We want to write a test which checks whether the network interface receives correct information via DHCP.
The test case written with nosetests:
import nose.tools as nt
import monk_tf.conn as mc
import monk_tf.dev as md

def test_dhcp():
    """ check whether dhcp is implemented correctly """
    # setup
    device = md.Device(mc.SerialConn('/dev/ttyUSB1', 'root', 'sosecure'))
    # exercise
    device.cmd('dhcpc -i eth0')
    # verify
    ifconfig_out = device.cmd('ifconfig eth0')
    nt.ok_('192.168.2.100' in ifconfig_out)
Even for non-Python programmers it should not be hard to guess that this test will connect to a serial interface on /dev/ttyUSB1, send the shell command dhcpc to get a new IP address for the eth0 interface, and in the end check whether the received IP address is the one the tester would expect. There is no need to worry about connection handling, login and session handling.
For more information see the API Docs.
Release 0.1.10/0.1.11 (2014-05-05)
Enables devices to use current connections to find their IP addresses. Example use case: you have a serial connection to your device that you know how to access. The device itself uses DHCP to get an IP address and you want to send HTTP requests to it. Now you can use MONK to find its IP address via the SerialConnection and then send your HTTP requests.
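For the parsing step in that workflow, a small helper like the following would do; the regular expression and the sample output are illustrative and not part of MONK's API:

```python
import re

def parse_inet_addr(ifconfig_out):
    """Extract the IPv4 address from classic busybox ifconfig output."""
    match = re.search(r'inet addr:(\d+\.\d+\.\d+\.\d+)', ifconfig_out)
    return match.group(1) if match else None

# sample output in the style a busybox ifconfig produces
sample = (
    "eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55\n"
    "          inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0\n"
)
print(parse_inet_addr(sample))  # -> 192.168.2.100
```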
Release 0.1.9 (2014-04-14)
- add option to set debug fixture file which overwrites differences between test environment and development environment while developing
Release 0.1.7/0.1.8 (2014-03-27)
- workaround for slower password prompt return times
- there was a problem in the publishing process which lead to changes not being added to the 0.1.7 release
Release 0.1.6 (2014-03-06)
- again bugs got fixed
- most important topic was stabilizing the connect->login->cmd process
- error handling improved with more ifs and more user-friendly exceptions
- it is now possible to completely move from Disconnected to Authenticated even when the target device is just booting.
Release 0.1.5 (2014-02-25)
- fixed many bugs
- most notably 0.1.4 was actually not able to be installed from PyPI without workaround
Release 0.1.4 (2014-01-24)
- fixed some urgent bugs
- renamed harness to fixture
- updated docs
Release 0.1.3
- complete reimplementation finished
- documentation not up to date yet
- Features are:
  * create independent connections with the connection layer
  * example implementation with SerialConnection
  * create complete device abstraction with the dev layer
  * basic device class in the dev layer
  * separate test cases and connection data for reuse with the harness layer
  * example parser for extended INI implemented in the harness layer
Release 0.1.2
- added GPLv3+ (source) and CC-BY-SA 3.0 (docs) Licenses
- updated coding standards
Release 0.1.1
- rewrote documentation
- style guide
- configured for pip, setuptools, virtualenv
- restarted unit test suite with nosetests
- added development and test requirements
Release 0.1
The initial Release.
The included scaffolds are these:
starter
- URL mapping via URL dispatch and no persistence mechanism.
zodb
- URL mapping via traversal and persistence via ZODB. Note that, as of this writing, this scaffold will not run under Python 3, only under Python 2.
alchemy
- URL mapping via URL dispatch and persistence via SQLAlchemy
Creating the Project
In Installing Pyramid, you created a virtual Python environment via the virtualenv command. To start a Pyramid project, use the pcreate command installed within the virtualenv. We'll choose the starter scaffold for this purpose. When we invoke pcreate, it will create a directory that represents our project.
In Installing Pyramid we called the virtualenv directory env; the following commands assume that our current working directory is the env directory.
On UNIX:
$ bin/pcreate -s starter MyProject
Or on Windows:
> Scripts\pcreate -s starter MyProject
The above command uses the pcreate command to create a project with the starter scaffold. To use a different scaffold, such as alchemy, you'd just change the -s argument value. For example, on UNIX:
$ bin/pcreate -s alchemy MyProject
Or on Windows:
> Scripts\pcreate -s alchemy MyProject
Here’s sample output from a run of
pcreate on UNIX for a project we name
MyProject:
$ bin/pcreate -s starter MyProject
Creating template pyramid
Creating directory ./MyProject
# ... more output ...
Running /Users/chrism/projects/pyramid/bin/python setup.py egg_info
As a result of invoking the pcreate command, a directory named MyProject is created. That directory is a project directory which holds a Python package. Installing the project for development is very simple:
$ cd MyProject
$ ../bin/python setup.py develop
Or on Windows:
> cd MyProject
> ..\Scripts\python.exe setup.py develop
Elided output from a run of this command on UNIX is shown below:
$ cd MyProject
$ ../bin/python setup.py develop
...
Finished processing dependencies for MyProject==0.0
This will install a distribution representing your project into the interpreter’s library set so it can be found by import statements and by other console scripts such as pserve, pshell, proutes and pviews.
Running The Tests For Your Application¶
To run unit tests for your application, you should invoke them using the Python interpreter from the virtualenv you created during Installing Pyramid (the python command that lives in the bin directory of your virtualenv).
On UNIX:
$ ../bin/python setup.py test -q
Or on Windows:
> ..\Scripts\python.exe setup.py test -q
Here’s sample output from a test run on UNIX:
$ ...
Running The Project Application¶
Once a project is installed for development, you can run the application it represents using the pserve command against the generated configuration file. In our case, this file is named development.ini.
On UNIX:
$ ../bin/pserve development.ini
On Windows:
> ..\Scripts\pserve development.ini
Here’s sample output from a run of pserve on UNIX:
$ ../bin/pserve development.ini
Starting server in PID 16601.
serving on 0.0.0.0:6543 view at
By default, Pyramid applications generated from a scaffold will listen on TCP port 6543. You can shut down a server started this way by pressing Ctrl-C.
The default server used to run your Pyramid application when a project is created from a scaffold is configured in the generated .ini file. For more information about environment variables and configuration file settings that influence startup and runtime behavior, see Environment Variables and .ini File Settings.
Reloading Code¶
By default, a change to your Pyramid application’s code is not put into effect until the server restarts. To make the server restart automatically whenever code changes, pass the --reload option to pserve. For example, on UNIX:
$ ../bin/pserve development.ini --reload
Starting subprocess with file monitor
Starting server in PID 16601.
serving on
Now if you make a change to any of your project’s .py files or .ini files, you’ll see the server restart automatically.
Viewing the Application¶
Once your application is running via pserve, you may visit it in your browser. You will see something in your browser like what is displayed in the following image:
This is the page shown by default when you visit an unmodified pcreate-generated starter application in a browser.
The Debug Toolbar¶
If you click on the image shown at the right hand top of the page (“^DT”), you’ll be presented with a debug toolbar that provides various niceties while you’re developing. This image will float above every HTML page served by Pyramid while you develop an application, and allows you to show the toolbar as necessary. Click on Hide to hide the toolbar and show the image again.
To turn the debug toolbar off, comment out the pyramid_debugtoolbar line in your development.ini by placing a hash mark in the first column. If you put the hash mark anywhere except the first column instead, for example like this:
pyramid.includes = #pyramid_debugtoolbar
then when you attempt to restart the application you’ll receive an error that ends something like this, and the application will not start:
ImportError: No module named #pyramid_debugtoolbar
The Project Structure¶
The starter scaffold generated a project (named MyProject), which contains a Python package. The package is also named myproject, but it’s lowercased; the scaffold generates a project which contains a package that shares its name except for case.
All Pyramid pcreate-generated projects share a similar structure. The MyProject project we’ve generated has the following directory structure:
MyProject/
|-- CHANGES.txt
|-- development.ini
|-- MANIFEST.in
|-- myproject
|   |-- __init__.py
|   |-- static
|   |-- templates
|   |   `-- mytemplate.pt
|   |-- tests.py
|   `-- views.py
|-- production.ini
|-- README.txt
|-- setup.cfg
`-- setup.py
Note that pserve binds to 0.0.0.0 by default. This means that any remote system which has TCP access to your system can see your Pyramid application.
The sections that live between the markers # Begin logging configuration and # End logging configuration represent Python’s standard library logging module configuration for your application.
The setup.py file describes your project for packaging purposes (see The Hitchhiker’s Guide to Packaging). Among other things, it lists the Python package dependencies needed to install and use your application, and it declares the entry point used by commands such as pserve, pshell, pviews, and others.
Lines 3-10 define a function named main that returns a Pyramid WSGI application. This function is meant to be called by the PasteDeploy framework as a result of running pserve. Within this function, application configuration is performed.
Line 6 creates an instance of a Configurator.
Line 7 registers a static view, which will serve up the files from the myproject:static asset specification (the static directory of the myproject package).
Line 8 adds a route to the configuration. This route is later used by a view in the views module.
Line 9 calls config.scan(), which picks up view registrations declared elsewhere in the package (in this case, in the views.py module).
Line 10 returns a WSGI application to the caller of the function.
templates/mytemplate.pt is an asset specification that specifies the mytemplate.pt file within the templates directory of the myproject package. The asset specification could have also been specified as myproject:templates/mytemplate.pt; the leading package name and colon is optional. The template file it actually points to is a Chameleon ZPT template file.
See Writing View Callables Which Use a Renderer for more information about how views, renderers, and templates relate and cooperate.
Note
Because our development.ini sets pyramid.reload_templates to true, template changes are picked up without a restart during development. In a production application, you would set pyramid.reload_templates to false to increase the speed at which templates may be rendered.
static¶
This directory contains static assets which support the mytemplate.pt template. It includes CSS and images.
templates/mytemplate.pt¶
The single Chameleon template that exists in the project. Its contents are too long to show here, but it displays a default page when rendered. It is referenced by the call to @view_config as the renderer of the my_view view callable in the views.py module.
You can then continue to add view callable functions to the views.py module, but you can also add other .py files which contain view callable functions to the views directory. As long as you use the @view_config directive to register views in conjunction with config.scan(), they will be found and registered.
You aren’t required to start from a Pyramid scaffold, but we strongly recommend keeping the generated .ini configuration while developing your application, because many convenience introspection commands (such as pviews, prequest, proutes and others) are implemented in terms of the availability of this .ini file format. It also configures Pyramid logging and provides the --reload switch for convenient restarting of the server when code changes.
Using an Alternate WSGI Server¶
Pyramid scaffolds. | https://docs.pylonsproject.org/projects/pyramid/en/1.3-branch/narr/project.html | CC-MAIN-2018-47 | en | refinedweb |
I have data such as:
['$15.50']
['$10.00']
['$15.50']
['$15.50']
['$22.28']
['$50']
['$15.50']
['$10.00']
I want to get rid of the dollar sign and turn the strings into floats so I can use the numbers for several calculations. I have tried the following:
array[0] = float(array.text.strip('$'))
which gives me an attribute error, because apparently a 'list' object has no 'text' attribute. My bad. Is there a similar way for 'list' objects to get stripped? Any other suggestions would be welcome too. Thanks in advance.
Try using a list comprehension:
array = [float(x.strip("$")) for x in array]
With regex:
import re
array = [float(re.sub(r"\$", "", x)) for x in array]
In case '$' is not at the end or beginning of the string
This should do:
[float(s.replace(',', '.').replace('$', '')) for s in array]
I have taken the liberty to change your data in order to consider a wider variety of test cases:
array = ['$15.50', '$ 10.00', ' $15.50 ', '$15,50', '$22,28 ', ' 10,00 $ ']
And this is what you get:
In [8]: [float(s.replace(',', '.').replace('$', '')) for s in array]
Out[8]: [15.5, 10.0, 15.5, 15.5, 22.28, 10.0]
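If, unlike the sample just above, the comma in your data is a thousands separator (e.g. '$1,234.56') rather than a decimal comma, neither a plain strip nor a blanket comma-to-dot replace is safe. A small helper that keeps only digits, an optional sign, and the decimal point handles that case (note this assumes thousands-separator commas — it would mangle decimal-comma data like '$15,50'):

```python
import re

def parse_dollars(s):
    # Drop the currency symbol, whitespace, and thousands separators,
    # keeping digits, an optional sign, and the decimal point.
    cleaned = re.sub(r"[^\d.+-]", "", s)
    return float(cleaned)

prices = ['$15.50', '$10.00', '$50', ' $1,234.56 ']
print([parse_dollars(p) for p in prices])  # [15.5, 10.0, 50.0, 1234.56]
```
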
I would like to name a function
draw in multiple modules. (My graph theory module would have a
draw from drawing graphs, my hyperbolic geometry module would have a
draw for drawing things in the hyperbolic plane, and so forth.) At times, I’d like to have a few modules loaded at the same time, but Julia tells me that the name is already taken. I thought, thanks to multiple dispatch, that as long as the argument types are different, this is possible. But I’m having trouble. Suggestions on how to achieve this please?
You need to extend the draw method in the same way as you extend Base.getindex, for example — prefix the definition with the name of the module that defined the actual function, and the rest of the modules extend it.
If all methods that happened to have the same name automatically got merged, that would be kinda chaos.
Two options: If you want them to be completely unrelated functions that happen to share the name
draw, just use the qualified names:
import A, B A.draw() B.draw()
On the other hand, if they are supposed to be part of a family of related functions, so that you can have functions from module
C that act on objects from
A or
B generically, e.g. a function
foo(x) = (dosomething(x); draw(x)) that can act on an
x from
A or
B, then the modules have to know about one another. In this case, as @kristoffer.carlsson says above, you need to extend a common
draw function in
A and
B:
module A using GenericDraw # first module where draw is defined ... GenericDraw.draw(x::Atype) = ... end module B using GenericDraw # first module where draw is defined ... GenericDraw.draw(x::Btype) = ... end
Then both
A and
B are defining different methods of the same
draw function, dispatched by types defined in those modules.
Thanks, but I’m not understanding. Could I have a
Master module that just defines:
function draw() end
And then modules
A,
B, and so on with
import Master: draw function draw(x::Atype) .... end
in module
A, and likewise in
B and
C. Or do I have no choice but to use
A.draw(...) and
B.draw(...) and so on.
I thought with multiple dispatch I could have lots and lots of functions named
draw so long as their arguments were different types.
Very helpful. I think I’ve got it. THANK YOU!
Yes, you could do that.
A generic function has some “higher level concept” that we extend with other types. For example, let’s look at the docstring for
push!
help?> push! search: push! pushfirst! push pushfirst pushdisplay push!(collection, items...) -> collection Insert one or more items at the end of collection.
If you extend
Base.push! then you opt into the “contract” that is specified by the function docstring. By doing so we can write generic code that works for many types of collections.
However, let’s say you are writing a game or something where you have a method called
push! which pushes another player. Then you should not extend
Base.push! because this function has a completely different meaning. There is no way to write generic code with
Base.push! and your game version of
push!.
So presumably, your
draw function in the “Master” module has some higher level concept of drawing associated with it. Then you extend that function using
Master.draw with other types that agree to that concept and we can write generic code using that
draw function. | https://discourse.julialang.org/t/same-function-name-in-multiple-modules/16881 | CC-MAIN-2018-47 | en | refinedweb |
This will walk you through getting up and running from scratch with Apache OpenWhisk on OSX, and setting up an Action Sequence where the output of one OpenWhisk Action is fed into the input of the next Action.
Install OpenWhisk via Vagrant
You should see reams of output, followed by:
SSH into Vagrant machine and run OpenWhisk CLI
Now you can access the OpenWhisk CLI:
Re-run the “Hello world” via:
Hello Go/Docker
I tried following the instructions on James Thomas’ blog for running Go within Docker, but ran into an error (see Disqus comment), and so here’s how I worked around it.
First create a simple Go program and cross compile it. Save the following to exec.go:
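A minimal Go program for the openwhisk/dockerskeleton runtime looks roughly like the following sketch (the greeting logic and the `name` parameter are illustrative — the actual exec.go from the post may differ):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// greeting builds the action's result from the parameters OpenWhisk
// passes to the action.
func greeting(params map[string]interface{}) map[string]string {
	name, ok := params["name"].(string)
	if !ok || name == "" {
		name = "World"
	}
	return map[string]string{"msg": "Hello, " + name + "!"}
}

func main() {
	// The Docker skeleton invokes the binary with the input JSON as
	// the first command-line argument.
	params := map[string]interface{}{}
	if len(os.Args) > 1 {
		json.Unmarshal([]byte(os.Args[1]), &params)
	}
	// The last line written to stdout must be a JSON object; OpenWhisk
	// treats it as the action's result.
	out, _ := json.Marshal(greeting(params))
	fmt.Println(string(out))
}
```
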
Cross compile it for Linux:
Pull the upstream Docker image:
Create a custom docker image based on openwhisk/dockerskeleton:
Build and test:
OpenWhisk Hello Go/Docker
Push up the docker image to dockerhub:
Create the OpenWhisk action:
Invoke the action to verify it works:
Define custom actions
Get a list of AWS users using aws-go-sdk
Save this to main.go:
Build and package into docker image, and push up to docker hub
Create an OpenWhisk action:
Invoke it:
Write to a CloudantDB
Cloudant Setup
Create a Cloudant database via the Bluemix web admin.
Under the Permissions control panel section for the database, choose Generate a new API key.
Check the _writer permission and make a note of the Key and Password
Verify connectivity by making a curl request:
OpenWhisk + Cloudant
I’m currently getting this error:
Switch to BlueMix
At this point I swiched to the OpenWhisk on Bluemix, and downloaded the
wsk cli from the Bluemix website, and configure it with my api key per the instructions. Then I re-installed the action via:
and made sure it worked by running:
Cloudant Setup
Following these instructions:
You can get your Bluemix Org name (maybe the first part of your email address by default) and BlueMix space (dev by default) from the Bluemix web admin.
Refresh packages:
It didn’t work according to the docs, and no bindings were created even though I had created a Cloudant database in the Bluemix admin earlier.
I retried the
package bind command that had failed earlier:
and this time success!!
Try writing to the db with:
and you should get a response like:
Connect them in a sequence
Create a new package binding pinned to a particular db
The /yournamespace/myCloudant/write action expects a dbname parameter, but the upstream fetch_aws_keys doesn’t produce that parameter (and it’s better that it doesn’t, to reduce coupling). So if you try to connect the two actions in a sequence at this point, it will fail.
Create sequence action
Create a sequence that will invoke these actions in sequence:
- Fetch the AWS keys
- Write the doc containing the AWS keys to the testdb database bound to the myCloudantTestDb package
Try it out:
To view the resulting document:
Drive with a scheduler
Let’s say we wanted this to run every minute.
First create an alarm trigger that will fire every minute:
Now create a rule that will invoke the fetch_and_write_aws_keys action (which is a sequence action) whenever the everyMinute feed is triggered:
To verify that it is working, check your cloudant database to look for new docs:
Or you can also monitor the activations: | http://tleyden.github.io/blog/2017/07/02/openwhisk-action-sequences/ | CC-MAIN-2018-47 | en | refinedweb |
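For reference, the scheduling wiring boils down to commands of roughly this shape (the trigger and rule names are illustrative — adapt them and the cron expression to your namespace and setup):

```sh
# Fire a trigger every minute using the built-in alarms package
wsk trigger create everyMinute \
    --feed /whisk.system/alarms/alarm -p cron "* * * * *"

# Run the sequence action whenever the trigger fires
wsk rule create fetchAndWriteRule everyMinute fetch_and_write_aws_keys

# Watch activations stream in to confirm it is running
wsk activation poll
```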
In this series of tutorials, we’ll go through setting up Unity’s built-in Nav Mesh, which enables pathfinding and navigation without the need for complex code.
Before you Begin
For Intermediate Unity Developers
This tutorial series is for intermediate Unity users. I won’t go into detail on Unity basics, and assume you know your way around Unity’s interface and core features. Please refer to the Pong tutorial for beginners if you are not yet familiar enough with Unity to follow this tutorial.
Unity Version
This project was created with Unity version 2017.3.1f1, which is the latest version at the time of writing. You should be OK with any Unity version that is not too much older or newer.
Download
A download link is at the bottom of this tutorial. It contains a project as it should be if all the instructions in this part of the tutorial are followed.
Navigation Basics
Unity comes with a great navigation system that can be set up quickly and easily, and gives your characters the ability to navigate a complex environment by pathfinding and avoiding obstacles.
Nav Mesh
Unity’s navigation solution is the Nav Mesh. The Nav Mesh is a map of the areas in your game where a character can walk. When you tell your character to walk to a position on the Nav Mesh, they will automatically find a path to that position (if possible) and move there. If the character can’t reach the target position, they will get as close as possible.
Nav Agent
For a character to use a Nav Mesh, they must have a Nav Mesh Agent component on their GameObject. This agent contains the capabilities needed for navigation on the Nav Mesh, and you call methods on this object to make the character move, and use the various settings to specify the precise behaviour you want.
Create a Basic Navigating Character
To start with, we will create a simple scene with a navigating character who moves from their starting position to a target position.
The Scene
If you want to skip the basic non-AI related stuff, download the starter project, which includes a pre-built scene with the basics already done for you so you can get straight into the good stuff.
The base project contains a single scene ‘MainScene’, with some ground, an Agent (who we will set up to navigate the world), and a target GameObject which we will use as the place the player will navigate towards.
Create the Base Project and Scene
If you would rather set up the base project yourself, here are the steps required (skip this part if you downloaded the base project, and simply open that project then continue to Add a Nav Mesh).
- Start a new 3D Unity project.
- Create a scene called ‘MainScene’.
- Add a large Plane object to act as the ground
- Add a small sphere or cube called ‘Agent’, and make sure it is above the ground. Place the Agent near one of the corners.
- Create another GameObject called ‘Target’, and give it any 3D shape, such as a cylinder or a cube, and place it far from the Agent object (we will be making the Agent navigate towards the Target).
- Add materials to the objects to separate the objects visually, and to make your scene more interesting.
- Set up the camera so that you can see the whole of the floor and have a good view of the Agent and Target objects.
Here is what my base scene looks like:
Add a Nav Mesh
A Nav Mesh is used differently to the typical Unity components. Instead of adding it to objects, you add objects to it. For this reason, there is a Navigation window that you must use to setup your Nav Mesh.
If you don’t already have the Navigation window visible, go to the Unity menu and select Window > Navigation:
This window has four tabs that present different options and customisations; these are:
- Agents – customise the behaviour of AI characters (agents) using the mesh.
- Areas – customise the different types of terrain (e.g. terrain that is slower to walk on or terrain that can’t be walked on).
- Bake – this is where you apply all your settings and create the Nav Mesh.
- Object – where you select which objects are included in your mesh, and some of their properties.
You will also see that there is a Scene Filter option. This allows you to hide objects in the scene Hierarchy while working on navigation. For example, if you choose the Mesh Renderers option, everything without a Mesh Renderer will be hidden in the Hierarchy, making it easier to find the objects you want to use for navigation. This will come in handy for complex scenes with lots of objects, but for this tutorial we don’t need to worry ourselves about this.
We won’t go into a lot of detail right now. For now, let’s just get something working.
In the Navigation window:
- Select the Object tab.
- Select the ground object in the Hierarchy to make it the active object.
- Check that the ground object is now selected in the Navigation window.
- Set the settings as below:
- Tick Navigation Static.
- Select Walkable in the Navigation Area drop-down box.
- Ignore Generate OffMesh Links for now (we’ll look at it later).
You’ve now told the Nav Mesh that you want the ground to be walkable, and that it is ‘navigation static’ (i.e. the Nav Mesh will ‘see’ it when determining the walkable areas). This is the most basic Nav Mesh setup you need.
Bake It
Although we’ve added the ground to the Nav Mesh and made it walkable and navigation static, we have not actually created the Nav Mesh. We need to ‘bake’ it. Baking is a process of doing something that is too complex or time-consuming to do at runtime during the development process. There is no need to create a navigation mesh during runtime, since the terrain doesn’t change much in a game (and small changes don’t require a complete rebuilding of the mesh). The same is done for complex lighting in many games.
- Select the Bake tab in the Navigation window.
- Click the Bake button.
- If you do not already have the Scene window open, open it to see your Nav Mesh.
In the Scene window, the Nav Mesh is presented as a blue overlay on the ground, which represents where Nav Mesh Agents are able to walk:
In our current project, it will just be a simple blue square, but this will change as we later add some complexity.
Add an Agent
A Nav Mesh is pointless without someone to walk around it. We will now turn the Agent object into a ‘Nav Mesh Agent’ who can navigate the Nav Mesh.
- Select the Agent object in the Hierarchy to make it the active object.
- Add a Nav Mesh Agent component to the object in the Inspector window.
- Set the Speed property to any number (between 5 and 10 would be ideal).
You’ll notice a lot of settings on the Nav Mesh Agent component. Some of these are quite obvious (e.g. Speed), and some are not quite so obvious. For now we’ll ignore these settings.
Script
We need some code to make the player navigate, but it’s probably not as much code as you think. The Nav Mesh Agent takes care of movement and navigation, and we only need to tell the agent where to walk to.
To keep things simple for now, we’ll set a static target for the player to move towards (the Target GameObject in our scene).
Now, create a new C# script called ‘Agent’. I won’t go into detail about the code, as most of it doesn’t directly have anything to do with navigation (and it’s pretty basic Unity code). The only line of code that is navigation specific is:
agent.SetDestination(target.position);
That line tells the agent where it should try to navigate to, and will trigger the agent to start moving if they are not already at the target.
Here is the full code for the Agent script:
using UnityEngine;
using UnityEngine.AI;

public class Agent : MonoBehaviour
{
    [SerializeField] Transform target;
    NavMeshAgent agent;

    void Start()
    {
        // get a reference to the player's Nav Mesh Agent component
        agent = GetComponent<NavMeshAgent>();
        // set the agent's destination
        agent.SetDestination(target.position);
    }
}
- Save the script.
- Attach the script to the Agent GameObject.
- Drag the Target GameObject into the Agent’s Target field in the Inspector:
Yes, that’s all you need for now to get the player navigating towards the target!
Run the Scene
Run the scene and watch the player automatically move towards the target. You can try experimenting with some of the player’s settings, though some of them won’t have any effect on such a basic scene, as the navigation is going to always be in a straight line.
It’s perhaps not too impressive to see the character move in a straight line from start to finish, so in the next part, we’ll add some obstacles and see the pathfinding in action.
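If you want to experiment a little before then, the agent can chase a moving target by re-issuing the destination every frame — a hypothetical variation on the Agent script above (the class name is mine, not part of the project download):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class ChasingAgent : MonoBehaviour
{
    [SerializeField] Transform target;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Re-issue the destination each frame so the agent tracks a
        // moving target; the path is recalculated as needed.
        agent.SetDestination(target.position);
    }
}
```

Calling SetDestination every frame is fine for a single agent; with many agents you may prefer to update destinations on a timer instead.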
Download
Here is the download for the project as it should be at the end of this part of the tutorial. Download it if you get stuck or want to compare your project to mine.
| http://unity.grogansoft.com/tag/navigation/ | CC-MAIN-2018-47 | en | refinedweb |
Workshop : CDOSYS + .Net
As anyone who has used the .Net System.Web.Mail namespace can attest, the default .Net mail classes are woefully lacking in functionality. It’s really a shame too, considering that they’re based off of the very powerful CDONTS/CDOSYS libraries, which allow a lot more functionality.
In this first investigation into unleashing the full power of .Net web mail, we will create a simple web interface to send email messages with a user-uploaded attachment, without saving the uploaded attachment to disk first.
Some of this functionality is likely built into many third party mail packages, but why pay for it if you have time and you can build it for free?
So if you’re still interested, hop on over to the workshop article.
As always, please leave comments, questions and criticism in the post.
A Flutter photo API plugin: you can get images/videos from iOS or Android.
A plugin that provides album APIs, usable on Android and iOS. It has no UI, so it is easy to build your own interface on top of it; you can use the provided APIs to build image-related UI or plugins.
If you just need a picture picker, you can choose to use photo library, a multi-image picker with all of its UI created in Flutter.
dependencies:
  photo_manager: $latest_version
import 'package:photo_manager/photo_manager.dart';
See example/lib/main.dart, or read on.
You must get the user's permission on android/ios.
var result = await PhotoManager.requestPermission();
if (result) {
  // success
} else {
  // fail
  // If the result is false, you can call `PhotoManager.openSetting();`
  // to open the application's settings page so the user can grant permission.
}
List<AssetPathEntity> list = await PhotoManager.getAssetPathList();
or
List<AssetPathEntity> list = await PhotoManager.getImageAsset();
or
List<AssetPathEntity> list = await PhotoManager.getVideoAsset();
List<AssetEntity> imageList = await data.assetList;
AssetEntity entity = imageList[0];
File file = await entity.file; // image file
List<int> fileData = await entity.fullData; // image/video file bytes
Uint8List thumbBytes = await entity.thumbData; // thumbnail data, usable with Image.memory(thumbBytes); size is 64px * 64px
Uint8List thumbDataWithSize = await entity.thumbDataWithSize(width, height); // like thumbData, but with a caller-specified size in px
AssetType type = entity.type; // the type of asset: an enum of other, image, video
Duration duration = await entity.duration; // null if the type is not video
If isCache of getAssetPathList is true, the method returns cached data. If no call with isCache: false has been made before, there is no cache yet and no data can be returned. If the releaseCache method is called, the cache is cleared, and you must call getAssetPathList(isCache: false) again before getAssetPathList(isCache: true).
Create an AssetEntity with an id (the id is AssetEntity.id):
AssetEntity asset = await createAssetEntityWithId(id);
If the image corresponding to the id has been deleted by the time this is called, the return value is null.
Use addChangeCallback to register an observer:
PhotoManager.addChangeCallback(changeNotify);
PhotoManager.startChangeNotify();
To stop observing:
PhotoManager.removeChangeCallback(changeNotify);
PhotoManager.stopChangeNotify();
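Putting the pieces above together, a minimal end-to-end sketch might look like this (it assumes permission is granted and that at least one album containing at least one image exists; the 200×200 thumbnail size is arbitrary):

```dart
import 'dart:typed_data';

import 'package:flutter/material.dart';
import 'package:photo_manager/photo_manager.dart';

/// Returns a widget showing a thumbnail of the first image on the device.
Future<Widget> latestThumb() async {
  // Ask for gallery permission first; bail out with a placeholder if denied.
  if (!await PhotoManager.requestPermission()) {
    return Text('No gallery permission');
  }
  List<AssetPathEntity> paths = await PhotoManager.getImageAsset();
  List<AssetEntity> assets = await paths.first.assetList;
  // 200x200 px thumbnail of the first image in the first album.
  Uint8List bytes = await assets.first.thumbDataWithSize(200, 200);
  return Image.memory(bytes, fit: BoxFit.cover);
}
```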
Because the photo album is privacy-sensitive, you need the user's permission to access it. You must modify the Info.plist file in the Runner project, like this:
<key>NSPhotoLibraryUsageDescription</key>
<string>App needs your consent to access the photo album</string>
In Xcode this looks like the image.
Google recommends completing all Support Library-to-AndroidX migrations in 2019, and provides documentation for doing so. This library was migrated in version 0.2.2, which brings a problem: sometimes your other dependencies have not been migrated yet, and you then need to add an option to deal with the conflict. The complete migration method can be consulted in the gitbook.
The Android implementation uses Glide (version 4.9.0) to create image thumbnail bytes. If another Android library in your project uses a different Glide version, you need to edit your Android project's build.gradle:
rootProject.allprojects {
    subprojects {
        project.configurations.all {
            resolutionStrategy.eachDependency { details ->
                if (details.requested.group == 'com.github.bumptech.glide'
                        && details.requested.name.contains('glide')) {
                    details.useVersion "4.9.0"
                }
            }
        }
    }
}
If your Flutter build prints a log like the one below, see stackoverflow.
Xcode's output: ↳ === BUILD TARGET Runner OF PROJECT Runner WITH CONFIGURATION Debug === "Runner" target. === BUILD TARGET Runner OF PROJECT Runner WITH CONFIGURATION Debug === While building module 'photo_manager' imported from /Users/cai/IdeaProjects/flutter/sxw_order/ios/Runner/GeneratedPluginRegistrant.m:9: In file included from <module-includes>:1: In file included from /Users/cai/IdeaProjects/flutter/sxw_order/build/ios/Debug-iphonesimulator/photo_manager/photo_manager.framework/Headers/photo_manager-umbrella.h:16: /Users/cai/IdeaProjects/flutter/sxw_order/build/ios/Debug-iphonesimulator/photo_manager/photo_manager.framework/Headers/MD5Utils.h:5:9: error: include of non-modular header inside framework module 'photo_manager.MD5Utils': '/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator11.2.sdk/usr/include/CommonCrypto/CommonDigest.h' [-Werror,-Wnon-modular-include-in-framework-module] #import <CommonCrypto/CommonDigest.h> ^ 1 error generated. /Users/cai/IdeaProjects/flutter/sxw_order/ios/Runner/GeneratedPluginRegistrant.m:9:9: fatal error: could not build module 'photo_manager' #import <photo_manager/ImageScannerPlugin.h> ~~~~~~~^ 2 errors generated.
Support flutter 1.6.0 android's thread changes for channel.
Fix customizing album containing folders on iOS.
AssetEntity add property:
originFile
AssetEntity add property:
exists
fix NPE for image crash on android.
add a method to create
AssetEntity with id
add
isCache for method
getImageAsset,
getVideoAsset or
getAssetPathList
add observer for photo change.
add field
createTime for
AssetEntity
add two method to load video / image
getVideoAsset
getImageAsset
add asset size field
release cache method
fix
when number of photo/video is 0, will crash
add video duration
fix bug: Android's latest picture won't be found
update gradle wrapper version.
update kotlin version
Fix Android to get pictures that are empty bug.
support ios icloud image and video
update all path hasVideo property
add a params to help user disable get video
ios get video file is async
fix 'ios video full file is a jpg' problem
support video in android. and will change api from ImageXXXX to AssetXXXX
update for the issue #1 (NPE when request other permission on android)
first version
api for photo
example/README.md
Demonstrates how to use the image_scanner plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies:
  photo_manager: ^0.3.4
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:photo_manager/photo_manager.dart';
Mypy with dynamic typing: example
Have a look at this example that prints the frequency of a word in a file:
import sys
import re

if not sys.argv[1:]:
    raise RuntimeError('Usage: wordfreq FILE')

d = {}
with open(sys.argv[1]) as f:
    for s in f:
        for word in re.sub(r'\W', ' ', s).split():
            d[word] = d.get(word, 0) + 1

# Use a list comprehension to pair each word with its count
l = [(freq, word) for word, freq in d.items()]
for freq, word in sorted(l):
    print('%-6d %s' % (freq, word))
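Since the question is about mypy, here is the same word-frequency logic factored into functions with type annotations that mypy can check statically (the file-reading and argument handling around it are unchanged in spirit):

```python
import re
from typing import Dict, List, Tuple

def word_frequencies(text: str) -> Dict[str, int]:
    # Count each whitespace/punctuation-delimited word.
    d: Dict[str, int] = {}
    for word in re.sub(r'\W', ' ', text).split():
        d[word] = d.get(word, 0) + 1
    return d

def sorted_by_frequency(d: Dict[str, int]) -> List[Tuple[int, str]]:
    # Sort ascending by count, then alphabetically.
    return sorted((freq, word) for word, freq in d.items())

for freq, word in sorted_by_frequency(word_frequencies('the cat saw the dog')):
    print('%-6d %s' % (freq, word))
```

Running `mypy` on this file checks the annotations; the runtime behaviour matches the untyped version above.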
public class MockitoAnnotations extends Object
Consider using MockitoSession, which not only initializes mocks but also adds extra validation for cleaner tests.
Read also about other annotations: @Spy, @Captor, @InjectMocks.
The MockitoAnnotations.initMocks(this) method has to be called to initialize annotated fields.
Board of Education of Westside Community Schools (Dist. 66) v. Mergens, 496 U.S. 226, 110 S.Ct. 2356, 110 L.Ed.2d 191 (1990)
the Act. Pp. 247-253.
(a) Because the Act on its face grants equal access to both secular and
religious speech, it meets the secular purpose prong of the test. Pp. 248249.
, 655,,, 583-584,,.
7,
102 S.Ct. 269, 70 L.Ed.2d 440 (1981), and that Westside's denial of.
11.
12
We granted certiorari, 492 U.S. 917, 109 S.Ct. 3240, 106 L.Ed.2d 587 (1989),
and now affirm.
II
A.
13, n. 14.
14
15
).
16
17 assure that
attendance of students at meetings is voluntary," 4071(f).
B
19.
20
Unfortunately, the Act does not define the crucial phrase "noncurriculum
related student group." Our immediate task is therefore one of statutory
interpretation. We begin, of course, with the language of the statute. See, e.g.,
Mallard v. ."
21
22.
23
24
We think it significant, however, that the Act, which was passed by wide,
bipartisan majorities in both the House and the Senate, reflects at least some
consensus on a broad legislative purpose. The Committee Reports indicate that
26.
27.
28, , 89 S.Ct. 733, 738, 21
L.Ed.2d 731 .
29
are those that " 'cannot properly be included in a public school curriculum' ").
This interpretation of the Act, we are told, is mandated by Congress' intention
to "track our own Free Speech Clause jurisprudence," post, at 279, n. 10, by
incorporating Widmar notion of a "limited public forum" into the language of
the Act. Post, at 271-272.
30, 103 S.Ct. 948, 954-57, 74 L.Ed.2d 794 (1983), and had it intended to
import that concept into the Act, one would suppose that it would have done so
explicitly. Indeed, Congress' deliberate choice to use a different termand to
define that termcan."
31
33:
34
35
See also Garnett v. Renton School Dist. No. 403, 874 F.2d 608, 614 (CA9
1989) ("Complete deference [to the school district] would render the Act
meaningless because school boards could circumvent the Act's requirements
simply by asserting that all student groups are curriculum related").
36 Westside. Moreover, Westside's
principal acknowledged at trial that the Peer Advocates programa service
group that works with special education classesdoes
38
39
same result.
III
40 the club with an official
platform to proselytize other students.
41
42
We think the logic of Widmar applies with equal force to the Equal Access Act.
As an initial matter, the Act's prohibition of discrimination on the basis of, (O'CONNOR, J., concurring in part
and concurring in judgment)).
44
We disagree. First, although we have invalidated the use of public funds to pay
for teaching state-required subjects at parochial schools, in part because of the
risk of creating "a crucial symbolic link between government and religion,initiated, school sponsored, or teacher-led religious, 105 S.Ct. 3180, 3188, 87 L.Ed.2d 220
(1985); see also Rostker v. Goldberg, 453 U.S. 57, 64, 101 S.Ct. 2646, 2651, 69
L.Ed.2d 478 (1981), we do not lightly second-guess such legislative judgments,
particularly where the judgments are based in part on empirical determinations.
46,,.
47
48,
105 S.Ct. 1953, 1963-64, 85 L.Ed.2d 278 , 102 S.Ct., at 275, n. 11.
50.
51
It is so ordered.
53
Opportunities to play are held after school throughout the school year.
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
71
PHOTOGRAPHY CLUBThis is a club for the student who has the interest
and/or ability in photography. Students have an opportunity to take photos of
school activities. A dark room is provided for the students' use. Membership in
this organization begins in the fall of each school year.
ORCHESTRAThis activity is an extension of our regular curriculum.
Performances are given periodically throughout the year. Tryouts are held for
some special groups within the orchestra. All students signed up for that class
have the opportunity to try out.
72
73
74
75
76
77
ZONTA CLUB (Z Club)Is a volunteer club for girls associated with Zonta
International. Approximately one hundred junior and senior girls are involved
in this volunteer organization. Eleventh and twelfth grade students are
encouraged to join in the fall of each school year.
78
79
80
81
82
Justice KENNEDY, with whom Justice SCALIA joins, concurring in part and
concurring in the judgment.
83
84
* .
85.
86.
II
87,).
88.
89
For these reasons, I join Parts I and II of the Court's opinion and concur in the
judgment.
91
92
93
96).endorsed religious practice, we have shown particular "vigilan[ce] in
monitoring compliance with the Establishment Clause in elementary and
secondary schools." Edwards v. Aguillard, 482 U.S. 578, 583-584, (1948)
(invalidating statute providing for voluntary religious education in the public
schools). This vigilance must extend to our monitoring of the actual effects of
an "equal access" policy. If public schools are perceived as conferring the
98, 102 S.Ct. 2799, 2806, 73 L.Ed.2d 435 (1982)
(plurality) (quoting Ambach v. Norwick, 441 U.S. 68, 76-77, 99 S.Ct. 1589,
1594, 60 L.Ed.2d 49 (1979)). Given the nature and function of student clubs at
Westside, the school makes no effort to disassociate itself from the activities
and goals of its student clubs.
99 schools.
103.
104.
105 Moreover, in the absence of a truly robust forum that includes the participation
108
109.
110 Justice STEVENS, dissenting.
111 The dictionary is a necessary, and sometimes sufficient, aid to the judge
confronted with the task of construing an opaque subjectsy.
112 * The Act's basic design is easily summarized: when a public high school has a
"limited open forum," it must not deny any student group access to
116 University of Missouri. In Widmar, we held that the university had created "a
generally open forum," id., at 269, 102 S.Ct., at 274. Over 100 officially
recognized student groups routinely participated in that forum. Id., at 265, 102
S.Ct., at 272., 102 S.Ct., at 276; controversial positions
that a state university's obligation of neutrality prevented it from endorsing.
117
118
119and dictated its nationwide adoptionsimply because
it approved the application of Widmar to high schools. And it seems absurd to
presume that Westside has invoked the same strategy by recognizing clubs like
the Swimming Timing Team and Subsurfers which, though they may not
correspond directly to anything in Westside's course offerings, are no more
controversial than a grilled cheese sandwich.
120 have access to school facilities.7 More importantly,
nothing in that case suggests that the constitutional issue should turn on
whether French is being taught in a formal course while the club is functioning.
121.
122 with his conclusion that,
under a proper interpretation of the Act, this dramatic difference requires a
different result.
123.
124 First, as the majority correctly observes, Congress intended the Act to prohibit
schools from excluding,.
125.
126.
127
128," ibid.;
they are instead the sheet anchors holding fast a debate that would otherwise be
swept away in a gale of confused utterances.16
129, 105 S.Ct. 3439,
3465, 87 L.Ed.2d 567 (1985) (STEVENS, J., dissenting).17 Lawyers and
legislators seeking to capture our distinctions in legislative terminology should
be forgiven if they occasionally stumble.18 Certainly, 110 S.Ct. 960, 973, 108
L.Ed.2d 72 (1990) (STEVENS, J., dissenting).
II
130.
131. In deed,.
132 We have always treated with special sensitivity the Establishment Clause
problems that result when religious observances are moved into the public
schools. Edwards v. Aguillard, 482 U.S. 578, 583-584, 107 S.Ct. 2573, 2577-2578, 96 L.Ed.2d 510 (1987). "The public school is at once the symbol of our
democracy and the most pervasive means for promoting our common destiny.
In no activity of the State is it more vital to keep out divisive forces than in its
schools. . . ." Illinois ex rel. McCollum v. Board of Education. Indeed, the very fact that Congress omitted
any definition in the statute itself is persuasive evidence of an intent to allow
local officials broad discretion in deciding whether or not to create limited
public fora. I see no reasonand no evidence of congressional intentto
constrain that discretion any more narrowly than our holding in Widmar
requires.
III
136."
137 I respectfully dissent.,' "
For an extensive discussion of the phrase and its ambiguity, see Laycock, Equal
Access and Moments of Silence: The Equal Status of Religious Speech by
Private Speakers, 81 Nw.U.L.Rev. 1, 36-41 (1986).).
We would, of course, then have to consider, as the Court does now, whether the
Establishment Clause permits Congress to apply Widmar's reasoning to
secondary schools.
The Court of Appeals also put too much weight upon the existence of a chess
club at Westside. The court quoted an exchange between Senator Gorton and
Senator Hatfield in which Senator Hatfield, a cosponsor of the,). basis of whether a group presented a one-sided view
of controversial subjects. Id., at 706-707.).
13
Under my reading of the statute, for example, a difficult case might be posed if
a district court were forced to decide whether a high school's Nietzsche Club
were concerned with philology or doctrine. None of the very common clubs at
Westside, however, causes any difficulties for this test, while nearly all of them
present close questions if examined pursuant to the Court's rubric. The
Nietzsche Club is a problem that can be dealt with when it actually arises.
14
Senator Gorton proposed replacing the Act with another, which read:
"No public secondary school receiving Federal financial assistance shall
prohibit the use of school facilities for meetings during noninstructional time
by voluntary student groups solely on the basis that some or all of the speech
engaged in by members of such groups during their meetings is or will be
religious in nature." 130 Cong.Rec. 19225 (1984). from Widmar for reasons of
administrative clarity, Congress kept its intent well hidden, both in the statute
and in the debates preceding its passage.
16.
17
See also Farber & Nowak, The Misleading Nature of Public Forum Analysis:
Content and Context in First Amendment Adjudication, 70 Va.L.Rev. 1219,
1223-1225 (1984); L. Tribe, American Constitutional Law 12-24 (2d ed.
1988). public," 454
U.S., at 268, 102 S.Ct., at 273; "a generally open forum," id., at 269, 102 S.Ct.,
at 274; and "a public forum," id., at 270, 102 S.Ct., at 274. The District Court
opinion in Benderan opinion of great concern to Congress when it passed
this Actobserved.
19
20
The difficulty of the constitutional question compounds the problems with the
Court's treatment of the statutory issue. In light of the ambiguity which it
concedes to exist in both the statutory text and the legislative history, the Court
has an obligation to adopt an equally reasonable construction of the Act that
will avoid the constitutional issue. Cf. NLRB v. Catholic Bishop of Chicago,
440 U.S. 490, 500, 99 S.Ct. 1313, 1318, 59 L.Ed.2d 533 (1979).day Saints v. Amos, 483 U.S. 327, 338, 107 S.Ct. 2862, 2869, 97 L.Ed.2d
273 , 108 S.Ct. 562,
98 L.Ed.2d 592 ").
23
The quotation is from Congressman Frank, who spoke in support of the bill on
the House floor. 130 Cong.Rec. 20933 (1984).).
25
26
the community in which they are likely to live as adults. See Hazelwood School
Dist. v. Kuhlmeier, 484 U.S., at 271-272, 108 S.Ct., at 570., 93 S.Ct. 1278, 130507,). local option. Everything is left to the local
administrators and the local school board") (statement of Rep. Goodling). | https://www.scribd.com/document/310840929/Board-of-Ed-of-Westside-Community-Schools-Dist-66-v-Mergens-496-U-S-226-1990 | CC-MAIN-2019-30 | en | refinedweb |
This tutorial shows you how to create a continuous delivery pipeline using Google Kubernetes Engine (GKE), Cloud Source Repositories, Cloud Build, and Spinnaker. When a new image is pushed, Spinnaker detects the image, deploys it to a canary deployment, and tests the canary deployment. After a manual approval, Spinnaker deploys the image to production.
Objectives
- Set up your environment by launching Cloud Shell, creating a GKE cluster, and configuring your identity and user management scheme.
- Download a sample app, create a Git repository, and upload it to a Cloud Source Repositories.
- Deploy Spinnaker to GKE using Helm.
This tutorial uses billable components of Google Cloud Platform (GCP), including:
- GKE
- Cloud Load Balancing
- Cloud Build
Before you begin, enable the GKE, Cloud Build, and Cloud Source Repositories APIs.
Set up your environment
In this section, you configure the infrastructure and identities required to complete the tutorial.
Start a Cloud Shell instance and create a GKE cluster
You run all the terminal commands in this tutorial from Cloud Shell.
Open Cloud Shell:
Create a GKE cluster to deploy Spinnaker and the sample app with the following commands:
gcloud config set compute/zone us-central1-f
gcloud container clusters create spinnaker-tutorial \ --machine-type=n1-standard-2
Configure identity and access management
You create a Cloud Identity and Access Management (Cloud IAM) service account to delegate permissions to Spinnaker, allowing it to store data in Cloud Storage. Spinnaker stores its pipeline data in Cloud Storage to ensure reliability and resiliency. If your Spinnaker deployment unexpectedly fails, you can create an identical deployment in minutes with access to the same pipeline data as the original.
Create the service account:
gcloud iam service-accounts create spinnaker-account \ --display-name spinnaker-account
Store the service account email address and your current project ID in environment variables for use in later commands:
export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" \
  --format='value(email)')
export PROJECT=$(gcloud info --format='value(config.project)')
Bind the storage.admin role to your service account:
gcloud projects add-iam-policy-binding \ $PROJECT --role roles/storage.admin --member serviceAccount:$SA_EMAIL
Download the service account key. You need this key later when you install Spinnaker and upload the key to GKE.
gcloud iam service-accounts keys create spinnaker-sa.json --iam-account $SA_EMAIL
Set up Cloud Pub/Sub to trigger Spinnaker pipelines
Create the Cloud Pub/Sub topic for notifications from Container Registry. This command might fail with the error "Resource already exists in the project", which means that the topic has already been created for you.
gcloud beta pubsub topics create projects/$PROJECT/topics/gcr
Create a subscription that Spinnaker can read from to receive notifications of images being pushed.
gcloud beta pubsub subscriptions create gcr-triggers \ --topic projects/${PROJECT}/topics/gcr
Give Spinnaker's service account permission to read from the gcr-triggers subscription.
export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" \
  --format='value(email)')
gcloud beta pubsub subscriptions add-iam-policy-binding gcr-triggers \
  --role roles/pubsub.subscriber --member serviceAccount:$SA_EMAIL
Deploying Spinnaker using Helm
In this section, you use Helm to deploy Spinnaker from the Charts repository. Helm is a package manager you can use to configure and deploy Kubernetes apps.
Install Helm
Download and install the helm binary:
wget
Unzip the file to your local system:
tar zxfv helm-v2.10.0-linux-amd64.tar.gz
cp linux-amd64/helm .
Grant Tiller, the server side of Helm, the cluster-admin role in your cluster:
kubectl create clusterrolebinding user-admin-binding \
  --clusterrole=cluster-admin --user=$(gcloud config get-value account)
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Grant Spinnaker the cluster-admin role so it can deploy resources across all namespaces:
kubectl create clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:default spinnaker-admin
Initialize Helm to install Tiller in your cluster:
./helm init --service-account=tiller
./helm update
Ensure that Helm is properly installed by running the following command. If Helm is correctly installed, v2.10.0 appears for both client and server.
./helm version
Configure Spinnaker
Create a bucket for Spinnaker to store its pipeline configuration:
export PROJECT=$(gcloud info \
  --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
gsutil mb -c regional -l us-central1 gs://$BUCKET
Create the file (spinnaker-config.yaml) describing the configuration for how Spinnaker should be installed:
export SA_JSON=$(cat spinnaker-sa.json)
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'

dockerRegistries:
- name: gcr
  address:
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com

# Disable minio as the default storage backend
minio:
  enabled: false

# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.10.2
  image:
    tag: 1.12.0
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project $PROJECT \
          --message-format GCR
EOF
Deploy the Spinnaker chart
Use the Helm command-line interface to deploy the chart with your configuration set. This command typically takes five to ten minutes to complete.
./helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 \ --version 1.1.6 --wait
After the command completes, run the following command to set up port forwarding to the Spinnaker UI from Cloud Shell:
export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null &
To open the Spinnaker user interface, click Web Preview in Cloud Shell and click Preview on port 8080.
You see the welcome screen, followed by the Spinnaker UI.
Building the Docker image
In this section, you configure Cloud Build to detect changes to your app source code, build a Docker image, and then push it to Container Registry.
Create your source code repository
In Cloud Shell, download the sample source code.
Create a Cloud Storage bucket to store your Kubernetes manifests:
gsutil mb -l us-central1 gs://$PROJECT-kubernetes-manifests
Enable versioning on the bucket so that you have a history of your manifests.
gsutil versioning set on gs://$PROJECT-kubernetes-manifests
Set the correct GCP project ID in your Kubernetes deployment manifests:
sed -i s/PROJECT/$PROJECT/g k8s/deployments/*
Commit the changes to the repository:
git commit -a -m "Set project ID".
Install the spin CLI for managing Spinnaker
Spin is a command-line utility for managing Spinnaker's applications and pipelines.
Download the latest version of spin.
curl -LO
Make spin executable:
chmod +x spin
Cleaning up
Delete the Spinnaker installation:
../helm delete --purge cd
Delete the sample app services:
kubectl delete -f k8s/services
Remove the service account IAM bindings:
export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" --format='value(email)')
export PROJECT=$(gcloud info --format='value(config.project)')
gcloud projects remove-iam-policy-binding $PROJECT \
  --role roles/storage.admin --member serviceAccount:$SA_EMAIL
Delete the service account:
export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" --format='value(email)')
gcloud iam service-accounts delete $SA_EMAIL
Delete the GKE cluster:
gcloud container clusters delete spinnaker-tutorial --zone=us-central1 Platform features for yourself. Have a look at our tutorials. | https://cloud.google.com/solutions/continuous-delivery-spinnaker-kubernetes-engine?hl=ca | CC-MAIN-2019-30 | en | refinedweb |
You can use the import method to create a product set and products with reference images all at the same time using a CSV file. This page describes how you format the CSV file.
Creating your reference images
Reference images are images containing various views of your products. The following recommendations apply:
- Make sure the size of the file doesn't exceed the maximum size (20MB).
- Consider viewpoints that logically highlight the product and contain relevant visual information.
- Create reference images that supplement any missing viewpoints. For example, if you only have images of the right shoe in a pair, provide mirrored versions of those files as the left shoe.
- Upload the highest resolution image available.
- Show the product against a white background.
- Convert PNGs with transparent backgrounds to a solid background.
Images must be stored in a Google Cloud Storage bucket. If you're authenticating your image create call with an API key, the bucket must be public. If you're authenticating with a service account, that service account must have read access on the bucket.
CSV formatting guidelines
To use the import method, both the CSV file and the images it points to
must be in a Google Cloud Storage bucket. CSV files are limited to a maximum of
20000 lines. To import more images, split them into multiple CSV files.
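Since imports are capped at 20000 lines per file, a small helper can chunk a large row list into compliant batches — a sketch (the split_rows name is illustrative, not part of any API):

```python
def split_rows(rows, max_rows=20000):
    # Break a list of CSV rows into chunks no larger than the import limit
    return [rows[i:i + max_rows] for i in range(0, len(rows), max_rows)]

chunks = split_rows([f"row-{n}" for n in range(45000)])
print([len(chunk) for chunk in chunks])  # [20000, 20000, 5000]
```

Each chunk can then be written out as its own CSV file and imported separately.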
The CSV file must contain one image per line and contain the following columns:
image-uri: The Google Cloud Storage URI of the reference image.
image-id: Optional. A unique value if you supply it. Otherwise, the system will assign a unique value.
product-set-id: A unique identifier for the product set to import the images into.
product-id: A user-defined ID for the product identified by the reference image. A product-id can be associated with multiple reference images.
product-category: Allowed values are homegoods-v2, apparel-v2, toys-v2, and packagedgoods-v1 (beta, v1p4beta1 endpoint only); the category for the product identified by the reference image. Inferred by the system if not specified in the create request. Allowed values are also listed in the productCategory reference documentation.
product-display-name: Optional. If you don't provide a name for the product, displayName will be set to " ". You can update this value later.
labels: Optional. A string (with quotation marks) of key-value pairs that describe the products in the reference image. For example:
"category=shoes"
"color=black,style=formal"
Vision Product Search also allows you to provide multiple values for a single key. For example:
"category=shoes,category=heels"
"color=black,style=formal,style=mens"
bounding-poly: Optional. Specifies the area of interest in the reference image. If a bounding box is not specified:
- Bounding boxes for the image are inferred by the Vision API; multiple regions in a single image may be indexed if multiple products are detected by the API.
- The line must end with a comma.
See the example below for a product without a bounding poly specified.
If you include a bounding box, the boundingPoly column should contain an even number of comma-separated numbers, with the format p1_x,p1_y,p2_x,p2_y,...,pn_x,pn_y. An example line looks like this: 0.1,0.1,0.9,0.1,0.9,0.9,0.1,0.9.
To define a bounding box with the actual pixel values of your image, use non-negative integers. Thus, you could express bounding boxes in 1000 pixel by 1000 pixel images in the following way:
"gs://example-reference-images/10001-001/10001-001_A.jpg","img001","sample-set-summer","sample-product-123","tan summer bag","apparel","style=womens,color=tan","100,150,450,150,450,550,100,550"
"gs://example-reference-images/10001-001/10001-001_A.jpg","img001","sample-set-summer","sample-product-456","blue summer bag","apparel","style=womens,color=blue","670,790,980,790,980,920,670,920"
"gs://example-reference-images/10002-002/10002-002_B.jpg","img002","sample-set-summer","sample-product-123","apparel",,,
Vision Product Search also allows you to use normalized values for bounding boxes. Define a bounding box using normalized values with float values in [0, 1].
Using normalized values, the above reference image rows could also be expressed as:
"gs://example-reference-images/10001-001/10001-001_A.jpg","img001","sample-set-summer","sample-product-123","tan summer bag","apparel","style=womens,color=tan","0.10,0.15,0.45,0.15,0.45,0.55,0.10,0.55"
"gs://example-reference-images/10001-001/10001-001_A.jpg","img001","sample-set-summer","sample-product-456","blue summer bag","apparel","style=womens,color=blue","0.67,0.79,0.98,0.79,0.98,0.92,0.67,0.92"
"gs://example-reference-images/10002-002/10002-002_B.jpg","img002","sample-set-summer","sample-product-123","apparel",,,
Developers enjoy writing code but few developers enjoy writing exception handling code and even fewer do it right. A new book titled Exceptional Ruby by Avdi Grimm attacks the subject and helps developers take the right approach to solid exception handling code.
Exceptional Ruby is a new guide to exceptions and error handling that includes over 100 pages of content and working examples. The book covers everything from how exceptions work to designing for failure in your application.
InfoQ caught up with Avdi Grimm, the book's author, to discuss the book and topic of exception handling.
Most books today covering a programming language or framework, don't go into intricate detail of a subject. Avdi explains what the difference is between most books and his:
Exceptional Ruby is a book about the art of handling failure in Ruby. Where other books spend a chapter or two on the topic of exceptions and dealing with errors, Exceptional Ruby devotes over 100 pages to detailed explanations of how exceptions work, how to tweak Ruby's exception system to work for you, alternatives to exceptions, how to verify that your code is exception-safe, and how to structure a clean and robust failure-management architecture.
Developers are made aware of exceptions early on, but making the best use of them takes a thorough understanding. This book is intended for somewhat more experienced developers:
I wrote the book for intermediate to advanced Ruby programmers. Anyone familiar with the basics of Ruby will be able to get something from it, and even veteran Rubyists will learn some new tricks; I can pretty much guarantee that unless your name is Matz you'll learn something new about the Ruby language and failure handling.
A book of this size on a single subject such as Ruby exception handling makes us wonder what we could expect to learn from the book:
First of all, you should come away with a deep, practical understanding of how Ruby's exception system works--and be able to use that knowledge to write better programs. You'll also have a lot more tools in your toolbox for dealing with exceptional situations: you'll be able to identify cases where raising an exception may not be the best solution, and where an alternative, such as a caller-supplied fallback strategy, is preferable.
You'll learn about some time-tested patterns for structuring resilient programs and avoiding failure cascades. You'll understand the concept of "exception safety", and learn how to verify that your most critical methods are robust in the face of unexpected failures. You'll come away with a better understanding of how to write applications and libraries with failure firmly in mind, rather than as an afterthought.
In general, exception handling in any language is an afterthought by many and then not always done with best practices in mind. On the subject of what Avdi has seen from other developers approach to exception handling:
I think developers suffer from a paucity of guidance on how best to approach failures. One of the reasons I wrote this book (and the talk that preceded it) was that I felt like I needed to get a better handle on the subject for my own projects. I can't tell you how many times, for instance, I've struggled with the question of how to best structure an exception class hierarchy. These are areas where there's a lack of solid guidelines; even the best programming books rarely devote more than a brief section to failure cases.
In addition, I think the Ruby community has suffered to some degree from a lack of exposure to some of the classic literature on this topic. For instance, the exception safety testing technique that I demonstrate is something I first came across years ago in the C++ community. It's actually a much easier exercise in Ruby, but I've never seen it documented in Ruby books or articles.
I think what I see most often is simply that the failure case has been left as an afterthought. I see this in code that is littered with begin/rescue/end blocks that were clearly patched in as a result of some unexpected exception that cropped up in production.
I also see a lot of code that hasn't really taken into account the fact that in Ruby, an exception might be raised literally at *any* point. That's one of the reasons I included a section on exception safety and verifying that methods are exception-safe.
A lot of APIs raise exceptions unnecessarily. When a method raises an exception it is stating "I know for a fact that this event is unexpected". But one programmer's "unexpected" is another's daily reality. A great example is HTTP libraries that raise an exception for "404 Not Found". When client code has to wrap API calls in "rescue" blocks in order to handle predictable events, that says there's something lacking in the API design. In many cases APIs can be improved by delegating the decision about what constitutes a truly exceptional case back to the client code.
Finally, I've seen quite a few subtle bugs crop up as a result of over-broad "rescue" clauses causing failures to be silently ignored. I spend a fair amount of time in the book on techniques for tightening up your "rescue" clauses to be specific about what they are intended to catch, even when 3rd-party libraries raise over-generic exception types.
Having a deep knowledge on any single software development topic takes experience and to acquire the knowledge demonstrated in such an unpopular topic such as error handling requires discipline. When asked about how this knowledge was attained:
I've been working with Ruby for ten years now, so a lot of it came from personal experience. I've spent a lot of time reflecting on the common patterns and anti-patterns I've seen in all the projects I've worked on. I also solicited thoughts on Ruby exceptions from other longtime Rubyists; so for instance I've included a convention for choosing between the "raise" and "fail" synonyms that I learned from Jim Weirich.
I also wanted very much for the book to be grounded in the established programming literature. So I dusted off my library and re-read sections on dealing with failure in classics like "Code Complete", "The Practice of Programming", and "The Pragmatic Programmer". In order to establish definitions, as well as a solid framework for thinking about exceptions and failures, I went back to Bertrand Meyer's "Object-Oriented Software Construction". Meyer's programming language, Eiffel, strongly influenced Ruby's exception system, and his work on the idea of a method having a "contract" with its caller really helps clarify what we mean when we talk about terms like "error", "failure", and "exception".
Developers love to learn about best practices. We wanted to know if the topics discussed in Exceptional Ruby could be considered best practices or obscure methods that would be hard for developers to implement themselves:
I've tried to strike a balance between showing how to implement established, time-tested patterns and idioms in Ruby; and introducing lesser-known techniques to a wider audience. So you'll see things like the Bulkhead pattern--as documented by Steve McConnell and (more recently) Michael Nygard--demonstrated. On the other hand you'll find techniques like using Tag Modules to namespace exceptions, which as far as I know has never been documented before.
Avdi Grimm has been hacking on Ruby code for 10 years, and is still loving it. He's spoken at numerous Ruby conferences and user groups, and he writes about Ruby gotchas, guidelines, and style (among other things) at his blog, Virtuous Code. He is Chief Aeronaut at ShipRise, a consultancy specializing in sustainable software development and facilitating geographically dispersed agile teams. Avdi lives in York County, Pennsylvania with his wife and four children.
More information about Exceptional Ruby can be found at the book's web site where InfoQ readers have been offered $3 off the book by using the discount code INFOQER.
Avdi will be speaking at RailsConf being held this week, May 17-19, 2011 in Baltimore, MD in the event readers would like to meet him.
Currently the VM runcommand scenario use the fixed ip to ssh connect to the instance.
This only works if the fixed IP range is directly accessible, which is the case in very limited deployments.
This blueprint proposes to support attachment of a floating ip to the instance allowing connection in most deployments.
Floating IP management requires modifying the current scenario parameters:
- Currently the network used to connect to the instance is referred as "network".
- The concept of fixed and floating network should be introduced: parameter "network" will be renamed in "fixed_network" and parameter "floating_network" will be added.
- To maintain compatibility with current implementation, a new parameter "use_floatingip" will be added. This parameter will be defaulted to "true" as the use of floating ips is generally preferred.
def boot_runcommand(self, image, flavor,
                    script, interpreter,
                    fixed_network="private", floating_network="public",
                    port=22, use_floatingip=True, **kwargs):
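With both networks available as parameters, the scenario must decide which address to ssh to. The sketch below is purely illustrative of that selection logic (the function name and the simplified `network -> list of IPs` address layout are assumptions, not Rally's actual implementation):

```python
def pick_ssh_ip(server_addresses, fixed_network="private",
                floating_network="public", use_floatingip=True):
    """Return the IP the runcommand scenario should ssh to.

    server_addresses maps a network name to a list of IPs, a simplified
    view of what a Nova server's "addresses" attribute provides.
    """
    network = floating_network if use_floatingip else fixed_network
    ips = server_addresses.get(network, [])
    if not ips:
        raise ValueError("no address on network %r" % network)
    return ips[0]
```

With the default `use_floatingip=True`, the floating network is preferred, matching the blueprint's stated default.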
Blueprint information
- Status: Complete
- Approver: Boris Pavlovic
- Priority: Medium
- Direction: Approved
- Definition: Approved
- Series goal: None
- Implementation: Implemented
- Milestone target: None
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
Add nova floating ip management in VM scenario
This page describes the Query Builder example app from the Cloud Security Command Center (Cloud SCC) application package. The Query Builder app enables you to create and schedule advanced, multi-step queries on Cloud SCC data using a web application interface. When these queries run, the results can be sent to a Cloud Pub/Sub topic where they can be consumed by other apps. You can also configure Query Builder to add Cloud SCC security marks to results.
For example, you could schedule a query to periodically look for network firewalls with port 22 allowed, and use Query Builder to mark the results in Cloud SCC and notify your security team so they can take appropriate action.

To install the Query Builder app package, you need the following Cloud IAM roles:
- Kubernetes Engine Admin -
roles/container.admin
- Service Account Admin -
roles/iam.serviceAccountAdmin
- Service Account Key Admin -
roles/iam.serviceAccountKeyAdmin
- Service Account User -
roles/iam.serviceAccountUser
- Storage Admin -
roles/storage.admin
- Storage Object Admin -
roles/storage.objectAdmin
- Pub/Sub Admin -
roles/pubsub.admin
- Project IAM Admin -
roles/resourcemanager.projectIamAdmin
- Cloud SQL Admin -
roles/cloudsql.admin
- DNS Administrator -
roles/dns.admin
- Compute Admin -
roles/compute.admin
- IAP-secured Web App User -

roles/iap.httpsResourceAccessor

Set the environment variables used throughout the installation:

# The ID of the project you will create for Query Builder
export QUERY_BUILDER_PROJECT_ID=[YOUR_QUERY_BUILDER_PROJECT_ID]
# Your GCP organization ID
export ORGANIZATION_ID=[YOUR_ORGANIZATION_ID]
# A valid billing account ID
export BILLING=[YOUR_BILLING_ACCOUNT_ID]
On the Cloud Shell menu bar, click More > Upload file.
Upload the
scc-query-builder-${VERSION}.zipfile you downloaded during the installation setup.
Unzip the file you uploaded earlier by running:
unzip -qo scc-query-builder-${VERSION}.zip -d ${WORKING_DIR}
Go to the installation working directory:
cd ${WORKING_DIR}
Installing the Query Builder app package
In any of the following sections, you can simulate executions of the commands by
using the option
--simulation.
Step 1: Creating the project
Create the project and enable billing by running:
Create the project:
gcloud projects create ${QUERY_BUILDER_PROJECT_ID} \
    --organization ${ORGANIZATION_ID}
Enable billing:
gcloud beta billing projects link ${QUERY_BUILDER_PROJECT_ID} \
    --billing-account ${BILLING}
Step 2: Enabling APIs
To enable the required Google APIs in the Notifier project, run:
gcloud services enable \
    cloudapis.googleapis.com \
    cloudbuild.googleapis.com \
    clouddebugger.googleapis.com \
    cloudtrace.googleapis.com \
    compute.googleapis.com \
    container.googleapis.com \
    containerregistry.googleapis.com \
    dns.googleapis.com \
    logging.googleapis.com \
    monitoring.googleapis.com \
    oslogin.googleapis.com \
    replicapool.googleapis.com \
    replicapoolupdater.googleapis.com \
    resourceviews.googleapis.com \
    servicemanagement.googleapis.com \
    serviceusage.googleapis.com \
    sourcerepo.googleapis.com \
    --project ${QUERY_BUILDER_PROJECT_ID}

gcloud services enable \
    sql-component.googleapis.com \
    sqladmin.googleapis.com \
    stackdriver.googleapis.com \
    storage-api.googleapis.com \
    pubsub.googleapis.com \
    storage-component.googleapis.com \
    securitycenter.googleapis.com \
    iamcredentials.googleapis.com \
    cloudresourcemanager.googleapis.com \
    iam.googleapis.com \
    --project ${QUERY_BUILDER_PROJECT_ID}
Step 3: Creating the Cloud SCC client service account
This step requires the following Cloud IAM roles:
- Organization Administrator -
roles/resourcemanager.organizationAdmin
- Security Center Admin -
roles/securitycenter.admin
- Service Account Admin -
roles/iam.serviceAccountAdmin
- Service Account Key Admin -
roles/iam.serviceAccountKeyAdmin
You will use these roles to create a service account with the following organizational-level role:
- Security Center Sources Viewer -
roles/securitycenter.sourcesViewer
- Security Center Findings Viewer -
roles/securitycenter.findingsViewer
- Security Center Assets Viewer -
roles/securitycenter.assetsViewer
- Security Center Finding Marks Writer -
roles/securitycenter.findingSecurityMarksWriter
- Security Center Asset Marks Writer -
roles/securitycenter.assetSecurityMarksWriter
Create the service account that will be used to deploy the application, download the key file, and grant the necessary roles by running:
Create the Service Account:
gcloud iam service-accounts create scc-query-builder \
    --display-name "SCC Query Builder SA" \
    --project ${QUERY_BUILDER_PROJECT_ID}
Download the service account key file:
(cd setup; \
 gcloud iam service-accounts keys create \
     service_accounts/scc-query-builder-${QUERY_BUILDER_PROJECT_ID}-service-account.json \
     --iam-account scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com)
Export the absolute path to the service account key file:
export SCC_SA_FILE=[PATH_TO_SERVICE_ACCOUNT_FILE]
Grant the Organization Level roles:
gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/securitycenter.assetsViewer'

gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/securitycenter.findingsViewer'

gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/securitycenter.sourcesViewer'

gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/securitycenter.findingSecurityMarksWriter'

gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:scc-query-builder@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/securitycenter.assetSecurityMarksWriter'
Step 4: Creating an API key
Before you run the Query Builder app, you need an API key that authorizes calls to the Cloud SCC API. To create one, go to the APIs & Services > Credentials page in the GCP Console, click Create credentials, and select API key. Save the generated key; you will use it as the developer_key when you configure the application.

Step 5: Creating the database service account
Create a service account to access the Cloud SQL database and download a key by running:
(cd setup; \
 pipenv run python3 create_service_account.py \
     --name sql-service-account \
     --project_id ${QUERY_BUILDER_PROJECT_ID} \
     --roles_file roles/querybuilder-database.txt \
     --output_file service_accounts/sql-${QUERY_BUILDER_PROJECT_ID}-service-account.json \
     --no-simulation)
Step 6: Creating the scheduler service account
Create a service account to access the Query Builder scheduler component and download a key by running:
(cd setup; \
 pipenv run python3 create_service_account.py \
     --name scheduler-service-account \
     --project_id ${QUERY_BUILDER_PROJECT_ID} \
     --roles_file roles/querybuilder-scheduler.txt \
     --output_file service_accounts/scheduler-${QUERY_BUILDER_PROJECT_ID}-service-account.json \
     --no-simulation)
Step 7: Creating the notification publisher service account
Create a service account to publish query results to Cloud Pub/Sub topics, download a key, and grant the service account the necessary role:
Create the service account and download the key:
(cd setup; \
 pipenv run python3 create_service_account.py \
     --name notification-service-account \
     --project_id ${QUERY_BUILDER_PROJECT_ID} \
     --roles_file roles/querybuilder-notification.txt \
     --output_file service_accounts/notification-${QUERY_BUILDER_PROJECT_ID}-service-account.json \
     --no-simulation)
To send messages to a topic in another project, grant the service account the Pub/Sub Publisher role on that project:
gcloud beta organizations add-iam-policy-binding ${ORGANIZATION_ID} \
    --member="serviceAccount:publisher@${QUERY_BUILDER_PROJECT_ID}.iam.gserviceaccount.com" \
    --role='roles/pubsub.publisher'
Step 8: Configuring the application domain and certificates
For users to access the Query Builder app, the app needs an internet domain or sub-domain properly configured in a DNS. If you already have a domain, you can use it, or you can register a domain name through Google Domains or another domain registrar of your choice. Follow your organization's guidelines and get help from your organization's Network Administrators if needed.
After you have a domain to use for Query Builder, follow the steps below to configure the DNS with Cloud DNS:
Create a DNS zone by running:
# the domain that you created or were given by your Network Administrator
export QUERY_BUILDER_DOMAIN=[YOUR_QUERY_BUILDER_DOMAIN]
# the DNS zone name that will be created on Google Cloud DNS (do not change)
export DNS_ZONE=[QUERY_BUILDER_DNS_ZONE]

(cd setup; pipenv run python3 create_dns_zone.py \
    --custom_domain ${QUERY_BUILDER_DOMAIN} \
    --dns_zone ${DNS_ZONE} \
    --dns_project_id ${QUERY_BUILDER_PROJECT_ID} \
    --no-simulation)
Change the nameserver to configure your domain to use Cloud DNS servers by running:
gcloud dns managed-zones describe "${DNS_ZONE}" \
    --project ${QUERY_BUILDER_PROJECT_ID}
Output example:
creationTime: '2018-01-31T12:13:44.346Z'
description: 'A zone'
dnsName: example.com.
id: '529777864777155386'
kind: dns#managedZone
name: examplecom
nameServers:
- ns-cloud-c1.googledomains.com.
- ns-cloud-c2.googledomains.com.
- ns-cloud-c3.googledomains.com.
- ns-cloud-c4.googledomains.com.
Update the nameserver record for your domain using the
nameServerslisted by the previous command, or send that list to your organization's Network Administrators. For information about how to update the nameserver record, see your domain registrar's documentation.
- Because of the distributed nature of DNS, nameserver changes can take time to propagate to all servers. This is usually complete within a few minutes, but can take up to 24 hours to be fully propagated throughout the internet.
Create a new A record on your DNS zone using the static IP for the Query Builder web app by running:
gcloud compute addresses create querybuilder-web-static-ip \
    --global --project ${QUERY_BUILDER_PROJECT_ID}

export IP=$(gcloud compute addresses describe querybuilder-web-static-ip \
    --global --format 'value(address)' --project ${QUERY_BUILDER_PROJECT_ID})

(cd setup; \
 pipenv run python3 create_records_on_zone.py \
     --dns_zone ${DNS_ZONE} \
     --dns_project_id ${QUERY_BUILDER_PROJECT_ID} \
     --record_name ${QUERY_BUILDER_DOMAIN} \
     --ttl 90 \
     --value ${IP} \
     --record_type A \
     --no-simulation)
Step 9: Generating SSL certificates
To deploy Query Builder in a domain, you will need an SSL certificate. If your Network Administrators provided you with the domain/subdomain, ask them for the SSL certificates. If you already have SSL certificates, upload them to Cloud Shell to be used during Query Builder deploy.
If you don't have SSL certificates, follow the steps below to generate them for your registered domain. The script accepts more than one prefix, so you can generate SSL certificates for the primary domain and multiple sub-domains by passing a comma-separated list of prefixes.
To complete this step, you will need a DNS Zone, DNS domain, and e-mail. During execution, you might need to answer some questions and enter a password to generate the SSH public and private key files if you don't have them already.
Set up environment variables:
# the project created to install the application
export QUERY_BUILDER_PROJECT_ID=[YOUR_QUERY_BUILDER_PROJECT_ID]
# the domain that you got from your registrar
export QUERY_BUILDER_DOMAIN=[YOUR_QUERY_BUILDER_DOMAIN]
# the zone that will be created on your Cloud DNS; choose a name that fits
export DNS_ZONE=[YOUR_DNS_ZONE_ID]
# used to warn you when the certificate is close to expiration
export EMAIL=[YOUR_EMAIL]
Generate the SSL certificate by running the command below. Note that the
--main_domainargument supports only your main domain. If you want to generate SSL certificates for your sub-domains, use the
--prefix_sub_domainsargument:
(cd setup; pipenv run python3 ssl_generate.py \
    --main_domain ${QUERY_BUILDER_DOMAIN} \
    --dns_zone ${DNS_ZONE} \
    --dns_project_id ${QUERY_BUILDER_PROJECT_ID} \
    --email ${EMAIL} \
    --no-simulation)
This script will generate SSL certificates and return the path for a
.zipfile named
certs-${QUERY_BUILDER_DOMAIN}.zipthat contains the certificates.
Unzip the SSL certificates file by running:
unzip certs-${QUERY_BUILDER_DOMAIN}.zip
This creates a folder named
certsthat contains
privkey1.pemand
cert1.pem. You will need these to deploy Query Builder.
Deploying the Query Builder app package
Step 1: Installing Python dependencies
(cd scc-query-builder/setup; \
 pipenv --python 3.5.3; \
 pipenv install --ignore-pipfile)
Step 2: Creating the application's configuration file
The Query Builder app uses a
JSON configuration file to parameterize the
application during the deployment process.
Use the command below to set a configuration file:
(cd scc-query-builder/setup; \
 pipenv run python3 create_query_builder_sample.py \
     --output-file=parameters_file.json)
Change configuration attributes to appropriate values for your installation by editing
parameters_file.jsonin the
./scc-query-builder/setupfolder:
root
organization_id: Your GCP Organization ID.
organization_display_name: Your GCP Organization display name.
query_builder
project_id: The Query Builder Project ID you created.
compute_zone: A Compute Engine zone to create the Google Kubernetes Engine cluster from the list of Regions and Zones.
custom_domain: The Query Builder domain.
dns
zone_name: The Cloud DNS Zone name you created.
scc
service_account: Full path to the client service account file created above to access Cloud SCC API.
developer_key: The API key created above to authorize the use of the Cloud SCC API.
scheduler
service_account: Full path to the scheduler service account file created above.
notification
service_account: Full path to the notification publisher service account file to publish to Pub/Sub topics, created above.
cloud_sql
instance_name: The Cloud SQL instance name to be created. It must be a unique name not used before, because previous deleted instances take some time to be permanently deleted.
database_name: The database name to be created.
user_name: The database user to be created.
user_password: The database password to be created.
service_account: Full path to database service account file created above to access Cloud SQL.
- ssl_certificates
cert_key: Full path to the
cert.pemfile for your domain.
private_key: Full path to the
privkey.pemfile for your domain.
Step 3: Executing the deployment
To create the remaining infrastructure, including networking, GKE Cluster, Cloud SQL instance, and Cloud Pub/Sub topics, and deploy the application, run:
# the project created to install the application
export QUERY_BUILDER_PROJECT_ID=[YOUR_QUERY_BUILDER_PROJECT_ID]

# make sure your gcloud is set to your Query Builder Project
gcloud config set project ${QUERY_BUILDER_PROJECT_ID}

(cd scc-query-builder/setup; \
 pipenv run python3 run_query_builder_setup.py \
     --input-file=parameters_file.json \
     --no-simulation)
Configuring Cloud IAP
After you deploy the infrastructure and application, you need to configure Cloud Identity-Aware Proxy (Cloud IAP) so that only authorized users can access the application.
Step 1: Configuring the OAuth Consent Screen
- Go to the OAuth consent screen page in the GCP Console.
- On the project selector drop-down list, select the project in which you created the Query Builder app.
- Complete the following fields:
- Support email: an email address for user support that will be displayed on the consent screen.
- Application name: the application name displayed on the consent screen, such as "Query Builder".
- Authorized domains: the domain where the application is hosted. This should be the domain you used or created above. Press Enter after you complete this field to save it correctly.
- When you're finished entering details, click Save.
Step 2: Creating the OAuth client ID
- Go to the Credentials page in the GCP Console.
- On the Create credentials drop-down list, select OAuth client ID.
On the Create OAuth client ID page that appears, enter the following details:
- Application type: select Web application.
- Name: enter the name of this Client ID, such as "Query Builder Cloud IAP Client ID".
Authorized redirect URIs: enter a URL to redirect the user to after they have authenticated. The URL should be in the following format where [QUERY_BUILDER_DOMAIN] is the domain you used or created above.
https://[QUERY_BUILDER_DOMAIN]/_gcp_gatekeeper/authenticate
Press Enter after you complete this field to save it correctly.
When you're finished entering details, click Create.
On the OAuth client dialog that appears, copy the client ID and client secret. You'll need these values in the next steps.
Step 3: Turning on Cloud IAP
- Change the following configuration attributes to appropriate values for your installation by editing
parameters_file.jsonin the
./scc-query-builder/setupfolder:
oauth
client_id: the client ID you copied when you created the OAuth client ID above.
client_secret: the client secret you copied when you created the OAuth client ID above.
Configure and turn on Cloud IAP by running:
(cd scc-query-builder/setup; \
 pipenv run python3 run_query_builder_iap_setup.py \
     --input-file=parameters_file.json \
     --no-simulation)
Step 4: Accessing the Query Builder application
To access the Query Builder app, a user must have the following Cloud IAM role at the project level:
- IAP-secured Web App User -
roles/iap.httpsResourceAccessor
To access the application, use the URL
https://[QUERY_BUILDER_DOMAIN]. For
more information about how to use the app, see
Using Query Builder below.
Linking Query Builder to the Notifier app
If you have deployed the Notifier app, you can optionally link it to Query Builder. To link the apps, review the notification publisher service account section above and then run the following:
# the project created to install the Query Builder application
export QUERY_BUILDER_PROJECT_ID=[YOUR_QUERY_BUILDER_PROJECT_ID]
# the project id where the Notifier application was deployed
export NOTIFIER_PROJECT_ID=[YOUR_NOTIFIER_PROJECT_ID]
# App Engine endpoint namespace (do not change)
export NOTIFIER_PROJECT_ENDPOINT=notifier

(cd setup; \
 export NOTIFIER_PUBSUB_PATH=${NOTIFIER_PROJECT_ENDPOINT}-pubsub-dot-${NOTIFIER_PROJECT_ID}; \
 export NOTIFIER_PUSH_ENDPOINT=${NOTIFIER_PUBSUB_PATH}.appspot.com/_ah/push-handlers/receive_message; \
 pipenv run python3 add_subscription.py \
     --topic_name notification \
     --topic_project ${QUERY_BUILDER_PROJECT_ID} \
     --subscription_project ${NOTIFIER_PROJECT_ID} \
     --subscription_name publish-to-notifier \
     --push_endpoint ${NOTIFIER_PUSH_ENDPOINT})
Cleanup
You can optionally choose to deactivate your environment and uninstall Query Builder. If you want to do this, use the script below to:
- Stop the following resources created in the installation:
- Cloud SQL instance
- Delete the following resources created in the installation:
- GKE Cluster
- Static IP
- Network and Subnetwork
- Backend Services
- Target Proxies
- Forwarding Rules
- Health Checks
- URL Map
Edit the
parameters_file.jsonfile you used during Query Builder setup to run the cleanup script. This file is in the
./scc-query-builder/setupfolder.
(cd scc-query-builder/setup; \
 pipenv run python3 run_query_builder_cleanup.py \
     --input-file=parameters_file.json \
     --no-simulation)
Delete the project where you installed Query Builder by running:
export QUERY_BUILDER_PROJECT_ID=[YOUR_QUERY_BUILDER_PROJECT_ID]

gcloud projects delete ${QUERY_BUILDER_PROJECT_ID}
Using Query Builder
You can use the Query Builder app to create, update, and delete multi-step queries for Assets and Findings. Registered queries can be used to perform searches using the Cloud SCC API. Results from executed queries are updated with security marks. You can also run periodic searches with scheduled queries and receive notifications of the results.
Query list
The query list page displays a list of saved queries. This page enables you to perform the following actions:
- To filter the queries that are displayed by name, description, or owner, click Search.
- To run, view, or delete a query, click More next to the query.
- To create a new query, click the Create button on the lower right side.
- To turn notifications for a query on or off, click the toggle next to the query.
To display more information about a query, click the arrow next to the query to expand the details panel. Query details include:
- Last execution date: the last time the query was executed, triggered by a user or by the scheduler.
- Last execution result: the total of Assets and/or Findings that satisfied the query as of the last execution.
- Last updated: the last time the query information was updated.
- Next Execution date: the next time a scheduled query will be executed.
- Mark: a (key, value) pair used by Cloud SCC to filter the results that were marked in the last query execution. You can copy this value into the "Filter by" field on the Cloud SCC Assets or Findings tabs.
Create or edit a query
A query is a series of consecutive steps in which each step is a call to the Cloud SCC API search assets or search findings methods. A query includes the following values:
- Name (required)
- A description (required)
- An optional topic to be used for notifications. The formats accepted are:
- Publishing to a topic from another project:
projects/[PROJECT_ID]/topics/[TOPIC_NAME]
- Publishing to a topic from the current Query Builder project:
[TOPIC_NAME]
- A toggle to turn notifications on or off
- A toggle to turn scheduling on or off. When scheduling is turned on, a detailed scheduling configuration section is displayed.
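The two accepted topic formats above can be normalized into one full Cloud Pub/Sub resource path before publishing. The helper below is an illustrative sketch of that normalization (it is not the app's actual code):

```python
def normalize_topic(topic, default_project):
    """Expand a bare topic name into a full Pub/Sub resource path.

    A topic already in "projects/<project>/topics/<name>" form is kept
    as-is; a bare "<name>" is assumed to live in default_project.
    """
    if topic.startswith("projects/"):
        return topic
    return "projects/%s/topics/%s" % (default_project, topic)
```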
Scheduling Configuration
Queries can be scheduled by hour, day, or week. For each schedule, you can select the frequency and an offset for the start of execution. To display the next three future executions, click Show Next Executions.
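One way to compute a "next executions" preview for a fixed-interval schedule with an offset is to snap the current time to the next point on the offset grid. This sketch is illustrative only; Query Builder's own scheduler implementation is not shown in this document:

```python
def next_executions(now, interval, offset, n=3):
    """Next n execution times (seconds since epoch) for a schedule that
    fires every `interval` seconds, shifted by `offset` seconds."""
    # index of the first firing at or after `now` (ceiling division)
    k = (now - offset + interval - 1) // interval
    return [offset + (k + i) * interval for i in range(n)]
```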
Kind field
The kind field specifies the kind of resource you want to query:
ASSET or
FINDING.
Compare duration field
The Compare duration field is only available for assets. When Compare duration is set, the asset is updated to indicate whether it was added, removed, or remained present during the Compare duration period of time that precedes the Read time. If no value is defined for Read time, Compare duration uses the current time.
For more information, see the
organizations.assets.list
Cloud SCC API documentation.
A duration is a string in the format
{number}w+{number}d+{number}h+{number}m+{number}s where each value
corresponds to the following:
w: weeks
d: days
h: hours
m: minutes
s: seconds
Example formats:
2w- 2 weeks
24h+30m- 24 hours and 30 minutes
48h- 48 hours
30m- 30 minutes
30m+45s- 30 minutes and 45 seconds
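The duration grammar above can be parsed by splitting on `+` and mapping each unit to seconds. A minimal validation sketch (not the app's actual parser):

```python
import re

def parse_duration(s):
    """Parse a duration like '2w', '24h+30m', or '30m+45s' into seconds."""
    units = {"w": 604800, "d": 86400, "h": 3600, "m": 60, "s": 1}
    total = 0
    for part in s.split("+"):
        m = re.fullmatch(r"(\d+)([wdhms])", part)
        if not m:
            raise ValueError("bad duration component: %r" % part)
        total += int(m.group(1)) * units[m.group(2)]
    return total
```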
Read time field
The Read time field is the exact time the search will be executed. It can use a specific date/time or a value in the format w/d/h/m/s to search before the current time.
For more information, see the following Cloud SCC API documentation:
When you complete the Read time section, you'll select if you want to read from a specific timestamp or at a specified time in the future:
- Timestamp enables you to select:
- Date + time in ISO format
- Time zone
- From_now enables you to specify a string in the format
{number}w+{number}d+{number}h+{number}m+{number}s
Filter
Filter accepts an expression based on the attributes, properties, and security marks of the Cloud SCC assets and findings.
For more information, see the following Cloud SCC API documentation:
Threshold
Threshold is a required value that defines a size condition that the query result will be evaluated against. The threshold is a pair of the following operators and an integer value:
Less than
Less or equal
Equal
Not equal
Greater or equal
Greater than
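Evaluating a threshold amounts to applying one of the six comparison operators to the query's result count. An illustrative sketch using Python's `operator` module (the operator labels mirror the list above; this is not the app's actual code):

```python
import operator

OPS = {
    "Less than": operator.lt,
    "Less or equal": operator.le,
    "Equal": operator.eq,
    "Not equal": operator.ne,
    "Greater or equal": operator.ge,
    "Greater than": operator.gt,
}

def threshold_met(result_count, op_name, value):
    """True if `result_count <op> value` holds for the chosen operator."""
    return OPS[op_name](result_count, value)
```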
Multi-step queries
You can combine multiple calls to the Cloud SCC API by adding steps to a query. When you add steps, you need to define:
outJoin: the field that should be read from the result of the first query.
inJoin: the field on the second query that will use the values from the first query.
The
inJoin and
outJoin fields must be a valid
<field> as described in the
Cloud SCC API documentation for
organizations.assets.list query parameters
and
organizations.sources.findings.list query parameters.
The
outJoin field must be a valid field on the first step, and the
inJoin
field must be a valid field on the second step. For example,
as illustrated below, if you select an
outJoin from the assets
name field to
link with an
inJoin from the findings
resourceName field, the query gets
findings in step 2 based on the assets returned in step 1.
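The join described above amounts to collecting the `outJoin` values from the step-1 results and combining them into an OR filter on the `inJoin` field for step 2. A small illustrative sketch of that composition (not the app's actual implementation):

```python
def build_step2_filter(step1_results, out_join, in_join):
    """Build a step-2 filter from outJoin values found in step-1 rows.

    step1_results is a list of dicts, one per step-1 result; out_join and
    in_join are field names as described above (e.g. "name", "resourceName").
    """
    values = {row[out_join] for row in step1_results}
    return " OR ".join(
        '%s = "%s"' % (in_join, v) for v in sorted(values)
    )
```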
| https://cloud.google.com/security-command-center/docs/how-to-install-query-builder | CC-MAIN-2019-30 | en | refinedweb |
On-Boarding H2o.ai and Generic Java Models. A data scientist creates and trains an H2o model (or a generic Java model) using any interface (e.g. Python, Flow, R) provided by H2o.
Prerequisites¶
Java 1.8
The following Released components:
- Java Client v1.11.0 (java_client-1.11.0.jar)
- Generic Model Runner v2.2.3 (h2o-genericjava-modelrunner-2.2.3.jar)
Preparing to On-Board your H2o or a Generic Java Model¶
- Place Java Client jar in one folder locally. This is the folder from which you intend to run the jar. After the jar runs, the created artifacts will also be available in this folder. You will use some of these artifacts if you are doing Web-based onboarding. We will see this later. Note: the versions of the libraries in the screenshots may be outdated.
- Prepare a supporting folder with the following contents. Items of this folder will be used as input for the java client jar.
It will contain:
-
Models - In case of H2o, your model will be a MOJO zip file. In case of Generic Java, the model will be a jar file.
-
Model runner or Service jar - Rename the h2o-genericjava-modelrunner jar downloaded per the Prerequisites section to H2OModelService.jar for an H2o model, or to GenericModelService.jar for a generic Java model, and place it in this folder.
-
CSV file used for training the model - Place the csv file you used for training the model here; its header must use the same column names used for training, without quotes (" "). This is used for autogenerating the .proto file. If you don't have the csv file, you will have to supply the .proto file yourself in the supporting folder and name it default.proto.
-
default.proto - This is only needed if you don't have sample csv data for training. In that case the Java Client cannot autogenerate the .proto file, and you will have to supply it yourself in the supporting folder. Make sure you name it default.proto and that it follows the format below, replacing the fields under DataFrameRow and Prediction as appropriate for your model.

    syntax = "proto3";

    option java_package = "com.google.protobuf";
    option java_outer_classname = "DatasetProto";

    message DataFrameRow {
        string sepal_len = 1;
        string sepal_wid = 2;
        string petal_len = 3;
        string petal_wid = 4;
    }

    message DataFrame {
        repeated DataFrameRow rows = 1;
    }

    message Prediction {
        repeated string prediction = 1;
    }

    service Model {
        rpc transform (DataFrame) returns (Prediction);
    }
-
application.properties file - Specifies, among other settings, the port number on which the service exposed by the model will run.

    server.contextPath=/modelrunner
    # IF WORKING WITH MODEL CONNECTOR AND COMPOSITE SOLUTION, THE
    # server.contextPath will be /
    # NOTE: THIS WILL TAKE AWAY SWAGGER

    # This is the port number you want to run the service on. User may select a convenient port.
    server.port=8336

    spring.http.multipart.max-file-size=100MB
    spring.http.multipart.max-request-size=100MB

    # Linux version
    # if model_type is Generic Java, then default_model will be /models/model.jar
    # if model_type is H2o, then the default_model will be /models/Model.zip
    #default_model=/models/model.jar
    default_model=/models/Model.zip
    default_protofile=/models/default.proto

    logging.file = ./logs/modelrunner.log

    # The value of model_type can be H or G
    # if model is Generic java model, then model_type is G.
    # if model is H2o model, then model_type is H. And the /predict method will use the H2O model; otherwise, it will use the generic Model
    # if model_type is not present, then the default is H
    #model_type=G
    model_type=H

    model_config=/models/modelConfig.properties

    # Linux: some properties are specific to java generic models
    # The plugin_root path has to be outside of ModelRunner root or the code won't work
    # Default proto java file, classes and jar
    # DatasetProto.java will be in $plugin_root\src
    # DatasetProto$*.classes will be in $plugin_root\classes
    # pbuff.jar will be in $plugin_root\classes
    plugin_root=/tmp/plugins
-
modelConfig.properties - Add this file only in case of Generic Java model onboarding. This file contains the modelMethod and modelClassName of the model.

    modelClassName=org.acumos.ml.XModel
    modelMethod=predict
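As noted above, the Java client autogenerates the `DataFrameRow` message of `default.proto` from the training CSV header. The sketch below illustrates that mapping only; it is not the client's actual code, and real column names would need sanitizing into valid proto identifiers:

```python
def proto_from_csv_header(header_line):
    """Build a DataFrameRow proto message from a CSV header line.

    Each column becomes a string field numbered in order, mirroring the
    default.proto example shown above.
    """
    fields = [name.strip() for name in header_line.strip().split(",")]
    lines = ["message DataFrameRow {"]
    for i, name in enumerate(fields, 1):
        lines.append("  string %s = %d;" % (name, i))
    lines.append("}")
    return "\n".join(lines)
```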
Create your modeldump.zip file¶
Java Client jar is the executable client jar file.
For Web-based onboarding of H2o models, the parameters to run the client jar are:
- Current Folder path : Full folder path in which Java client jar is placed and run from
For CLI-based onboarding, the parameters to run the client jar are:
- Onboarding server url.
- Pass the authentication API url for onboarding - this API returns a jwtToken for authenticated users, e.g. http://<hostname>:8090/onboarding-app/v2/auth
- Username of the Portal MarketPlace account.
- Password of the Portal MarketPlace account.
A successful run of the client jar produces the modeldump.zip artifact in the folder from which it was run.
Onboarding to the Acumos Portal
- If you used CLI-based onboarding, you don’t need to perform the steps outlined just below. The Java client has done it for you. You will see a message on the terminal that states the model onboarded successfully.
- If you use Web-based onboarding, you must complete the following steps:
- After you run the client, you will see a modeldump.zip file generated in the same folder from which you ran the Java client.
- Upload this file in the Web-based interface (drag and drop). See On-Boarding a Model Using the Portal UI.
- You will see a success message in the Web interface.
The needed TOSCA artifacts and docker images are produced when the model is onboarded to the Portal, and the model becomes available to you and your teammates.
Addendum: Creating a model in H2o
You must have H2o 3.14.0.2 installed on your machine. For instructions on how to install it, visit the H2o web site.
H2o provides different interfaces to create models, e.g. Python, the Flow GUI, R, etc. As an example, below we show how to create a model using the Python interface of H2o and also using the H2o Flow GUI. You can use the other interfaces too; they have comparable functions to train a model and download it in MOJO format.
Here is a sample H2o iris program that shows how a model can be created and downloaded as a MOJO using the Python interface:
    import h2o
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns

    # for jupyter notebook plotting
    %matplotlib inline
    sns.set_context("notebook")

    h2o.init()

    # Load data from CSV
    iris = h2o.import_file('iris_wheader.csv')

    # Iris data set description
    # -------------------------
    # 1. sepal length in cm
    # 2. sepal width in cm
    # 3. petal length in cm
    # 4. petal width in cm
    # 5. class: Iris Setosa, Iris Versicolour, Iris Virginica

    iris.head()
    iris.describe()

    # training parameters
    training_columns = ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid']
    # response parameter
    response_column = 'class'

    # Split data into train and testing
    train, test = iris.split_frame(ratios=[0.8])
    train.describe()
    test.describe()

    from h2o.estimators import H2ORandomForestEstimator
    model = H2ORandomForestEstimator(ntrees=50, max_depth=20, nfolds=10)

    # Train model
    model.train(x=training_columns, y=response_column, training_frame=train)
    print(model)

    # Model performance
    performance = model.model_performance(test_data=test)
    print(performance)

    # Download the model in MOJO format. Also download the h2o-genmodel.jar file
    modelfile = model.download_mojo(path="/home/deven/Desktop/", get_genmodel_jar=True)

    predictions = model.predict(test)
    predictions
Here is a sample H2o iris example that shows how a model can be created and downloaded as a MOJO using the H2o Flow GUI.
TransferManager Class

Definition
This class contains a collection of methods (and structures associated with those methods) which perform higher-level operations. Whereas operations on the URL types guarantee a single REST request and make no assumptions on desired behavior, these methods will often compose several requests to provide a convenient way of performing more complex operations. Further, we will make our own assumptions and optimizations for common cases that may not be ideal for rarer cases.
public class TransferManager
- Inheritance: java.lang.Object -> TransferManager
If you are self-employed and are planning to file return of income, you should learn the income-tax provisions which will help you to file the return for Assessment Year 2018-19.
Every entrepreneur India has ever produced was once self-employed. The best thing about being self-employed is that you work with passion and are never worried about giving an explanation to anyone. This attitude needs a little rethinking, as the Income-Tax Department can legally ask you to give details of your income.
A self-employed person can be a trader, freelancer, doctor, lawyer, Chartered Accountant, website developer, artist, music composer, cab drivers, so on and so forth. A taxpayer is deemed to be self-employed if he doesn’t carry on his business or profession in any legal form of entity, i.e., Company, Partnership firm, LLP, etc. If you are self-employed, you should understand the basics of income tax so that you continue working for your goals without tax defaults as it can be very punitive.
Deadline for filing of income-tax return (ITR) is around the corner. It is advisable to file your return in time and not to wait for the last date. If you miss the deadline, it would become a costly affair. Late filing fee up to Rs 10,000 shall be levied on late-filers and they also lose the right to carry forward the losses of the current year.
Self-employed persons can file returns in Form ITR 3 or ITR 4. Form ITR 4 can be used by a taxpayer opting for presumptive taxation scheme. If return is to be filed in Form ITR 3, the filing process would be more tedious and complex. To prepare the income-tax return, one has to download the Java or Excel utility from e-filing portal. After preparation of return, the utility shall generate an XML file which is to be uploaded on e-filing portal.
If you are self-employed and are planning to file return of income, you should learn the following important income-tax provisions which will help you to file the return for Assessment Year 2018-19.
1. Books of Account
Income earned by a self-employed person is taxable under the head ‘Profits and Gains of any Business or Profession’. To compute income under this head, the Income-Tax Act allows two options to a self-employed person — first option is to calculate the taxable income on presumptive basis without claiming deduction for any expense. Second option is to compute the real profit after claiming the expenses. If the second option is opted for, the taxpayer shall be required to maintain proper books of account and get them audited if gross turnover exceeds Rs 1 crore. The books of account shall be maintained in a format along with underlying evidences which would enable the Tax Officer to compute the taxable income.
2. Presumptive Taxation Scheme
As small and medium enterprises lack access to tax professionals, the Income-Tax Act allows them to opt for presumptive taxation scheme, wherein income is computed on presumptive basis and taxpayer is exempted from maintaining regular books of account. For a resident taxpayer, the Income-Tax Act has introduced three presumptive tax schemes. Income computed under these schemes shall be chargeable to tax as per the applicable tax rates.
A taxpayer engaged in a business (not profession) can opt for Section 44AD presumptive scheme, provided turnover from the business doesn’t exceed Rs 2 crore. Under this scheme, the income is presumed to be at 8% of gross turnover. If business receipts are in digital mode (cheque, bank transfer, credit cards, etc.) then income is presumed to be at 6% of such digital receipts.
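The percentages translate into simple arithmetic. As a rough sketch (illustrative only; the actual computation has more conditions than shown here):

```python
def presumptive_income_44ad(digital_receipts, other_receipts):
    """Presumptive income under Section 44AD, amounts in whole rupees:
    6% of receipts in digital mode, 8% of the remaining receipts."""
    return (6 * digital_receipts + 8 * other_receipts) // 100

# A trader with Rs 1 crore turnover, Rs 60 lakh of it received digitally:
print(presumptive_income_44ad(60_00_000, 40_00_000))  # -> 680000, i.e. Rs 6.8 lakh
```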
Also See: Income Tax efiling: A beginner’s guide to furnishing ITR Form-1 Sahaj
For professionals, presumptive taxation scheme is available under section 44ADA, provided the gross receipts from the profession do not exceed Rs 50 lakh. In this case, the presumptive income shall be 50% of gross professional receipts. Doctors, lawyers, architects, engineers, or similar professionals can opt for this scheme.
The last presumptive taxation scheme under Section 44AE is for the transporters, engaged in the business of plying, hiring or leasing out of such goods carriages, who don’t own more than ten goods carriages at any time during the previous year.
3. Continue with Presumptive Taxation Scheme
If a taxpayer wishes to opt for presumptive taxation scheme under Section 44AD, then he can’t reverse his choice during the next 5 years. If he does so, he will be excluded from re-opting for the scheme during the next 5 years. Further, during these 5 years, he will also be liable to get his books of account audited if his total income exceeds the maximum amount which isn’t chargeable to tax.
4. Audit of books of account
An individual taxpayer, who isn’t entitled to opt for presumptive taxation scheme, shall get his accounts audited, if turnover from business exceeds Rs 1 crore during the Financial Year. For professionals, it shall be mandatory if gross professional receipts exceed Rs 50 lakh. When a taxpayer opts for presumptive taxation scheme during a financial year but he doesn’t opt for it for the next 5 years, he shall be required to get the accounts audited if income from business or profession exceeds the maximum exemption limit.
5. Applicable ITR Form
A taxpayer maintaining regular books of account is required to file return in ITR-3 Form. Direct e-Filing facility isn't available in case of ITR-3 and the Excel/Java utility has to be used to prepare the return. When a taxpayer opts for the presumptive taxation scheme, the return shall be filed in ITR-4. It can be done either through the online facility at the e-filing portal or through the Excel/Java utility.
6. Payment of Advance Tax
A taxpayer is required to pay advance tax if his estimated income tax liability during the year is Rs 10,000 or more. Advance tax is to be paid in 4 instalments: 15% of total estimated tax by June 15, 45% of estimated tax by September 15, 75% of estimated tax by December 15 and 100% of estimated tax by March 15 of the financial year. In case of any deficiency in payment of advance tax, in aggregate or in any instalment, he shall be liable to pay interest under section 234B and 234C.
Also See: Income Tax Return filing: 6 reasons you should file your ITR on time
However, if presumptive taxation scheme has been opted for, then the whole estimated tax liability has to be paid on or before March 15 of the financial year without any condition of payment of tax throughout the year in instalments.
7. Digital Signature mandatory in case of audit of Books of Account
After filing of the income tax return, it has to be verified by the authorized person, which is generally the taxpayer himself. Verification of income tax return can be done through Digital Signature Certificate (DSC), Aadhaar Based OTP or Net banking facility. Verification of return through DSC is mandatory if books of account are audited under income tax and taxpayer cannot choose EVC or any other mode for verifying ITR. Options shall be available to verify the return through DSC or through EVC if taxpayer opts for presumptive taxation scheme or when books of account aren’t audited.
(By Naveen Wadhwa, DGM, Taxmann.com; with inputs from Rahul Singh.)
The CSE Analysis object. More...
#include "llvm/CodeGen/GlobalISel/CSEInfo.h"
The CSE Analysis object.
This installs itself as a delegate to the MachineFunction to track new instructions as well as deletions. It however will not be able to track instruction mutations. In such cases, recordNewInstruction should be called (for eg inside MachineIRBuilder::recordInsertion). Also because of how just the instruction can be inserted without adding any operands to the instruction, instructions are uniqued and inserted lazily. CSEInfo should assert when trying to enter an incomplete instruction into the CSEMap. There is Opcode level granularity on which instructions can be CSE'd and for now, only Generic instructions are CSEable.
Definition at line 71 of file CSEInfo.h.
Definition at line 82 of file CSEInfo.cpp.
Definition at line 229 of file CSEInfo.cpp.
References llvm::dbgs(), llvm::MachineBasicBlock::empty(), llvm::MachineInstr::getOpcode(), and LLVM_DEBUG.
This instruction was mutated in some way.
Implements llvm::GISelChangeObserver.
Definition at line 227 of file CSEInfo.cpp.
This instruction is about to be mutated in some way.
Implements llvm::GISelChangeObserver.
Definition at line 222 of file CSEInfo.cpp.
Definition at line 158 of file CSEInfo.cpp.
An instruction has been created and inserted into the function.
Note that the instruction might not be a fully fledged instruction at this point and won't be if the MachineFunction::Delegate is calling it. This is because the delegate only sees the construction of the MachineInstr before operands have been added.
Implements llvm::GISelChangeObserver.
Definition at line 221 of file CSEInfo.cpp.
An instruction is about to be erased.
Implements llvm::GISelChangeObserver.
Definition at line 220 of file CSEInfo.cpp.
Use this callback to inform CSE about a newly fully created instruction.
Now insert the new instruction.
We'll reuse the same UniqueMachineInstr to avoid the new allocation.
This is a new instruction. Allocate a new UniqueMachineInstr and Insert.
Definition at line 175 of file CSEInfo.cpp.
References assert(), llvm::dbgs(), llvm::MachineInstr::getOpcode(), and LLVM_DEBUG.
Use this callback to insert all the recorded instructions.
At this point, all of these insts need to be fully constructed and should not be missing any operands.
Definition at line 205 of file CSEInfo.cpp.
Remove this inst from the CSE map.
If this inst has not been inserted yet, it will be removed from the Tempinsts list if it exists.
Definition at line 197 of file CSEInfo.cpp.
Referenced by llvm::CSEMIRBuilder::buildInstr().
Definition at line 257 of file CSEInfo.cpp.
References llvm::dbgs(), and LLVM_DEBUG.
Records a newly created inst in a list and lazily insert it to the CSEMap.
Sometimes, this method might be called with a partially constructed MachineInstr.
Definition at line 168 of file CSEInfo.cpp.
References llvm::dbgs(), llvm::MachineInstr::getOpcode(), and LLVM_DEBUG.
Definition at line 243 of file CSEInfo.cpp.
References MRI, and print().
Referenced by llvm::GISelCSEAnalysisWrapper::releaseMemory().
-----— GISelCSEInfo ----------—//
Definition at line 77 of file CSEInfo.cpp.
References llvm::MachineFunction::getRegInfo(), and MRI.
Referenced by llvm::IRTranslator::runOnMachineFunction().
Definition at line 212 of file CSEInfo.cpp.
References assert(), and llvm::isPreISelGenericOpcode(). | http://www.llvm.org/doxygen/classllvm_1_1GISelCSEInfo.html | CC-MAIN-2019-30 | en | refinedweb |
took to build this system.
For those of you who aren’t yet familiar with what SaltStack is, the best way to describe it is as an asynchronous, reactive event bus with different execution layers built on top of it. One such layer is intended for configuration management of your servers, while another is something called the salt reactor, which allows you to define custom reactions to various events.
The first problem tackled was to automate building of the web tier, which involved installing and configuring PHP, Nginx and uWSGI as well as cloning the application code and installing the necessary dependencies. The formulas used for these instances can be found here and here.
Next was setting up a MongoDB replica set with 3 servers and configuring them to speak to each other. One pattern that was incredibly useful for this was to take advantage of the salt mine to act as a poor man's DNS. To make this work, add the following to a universally applied pillar file.
    mine_functions:
      network.ip_addrs: [eth0]
      network.get_hostname: []
This tells each minion to send the IP address for their eth0 network interface, as well as their hostname. With this information, it becomes possible to add the following to a base state:
    {% for id, addr_list in salt['mine.get']('env:{0}'.format(grains['env']), 'network.ip_addrs', expr_form='grain').items() %}
    {% if id == grains['id'] %}
    self-host-entry:
      host.present:
        - ip: 127.0.0.1
        - names:
          - {{ id }}
    {% else %}
    {{ id }}-host-entry:
      host.present:
        - ip: {{ addr_list|first() }}
        - names:
          - {{ id }}
    {% endif %}
    {% endfor %}
This stores the hostname and IP address of all of the other minions in a given environment in the /etc/hosts file of the minion where the state is executed. This results in easy hostname resolution of the other minions in the deployment without having to manage any DNS infrastructure. Now that host discovery has been taken care of, it becomes trivial to dynamically configure the database connections for the application servers. It also makes it possible to use the following Jinja logic to configure the replica set using only the information available from Salt.
    {% if 'mongo_primary' in grains['roles'] %}
    {% set replset_config = {'_id': salt['pillar.get']('mongodb:replica_set:name', 'repset0'), 'members': []} %}
    {% set member_id = 0 %}
    {% for id, addrs in salt['mine.get']('roles:mongodb_server', 'network.get_hostname', expr_form='grain').items() %}
    {% do replset_config['members'].append({'_id': member_id, 'host': id}) %}
    {% set member_id = member_id + 1 %}
    {% endfor %}
Now that the application and database are configured, we need to manage application deployment using git. To do this we take advantage of the SaltStack reactor system. This lets us execute specific actions in response to events that are triggered on the Salt master. In addition to the reactor system, we need to make sure that Salt API is installed and active.
The deployment pipeline is triggered from GitHub webhooks sent to the Salt API, so it is necessary to disable authentication. The configuration that I used is:
    rest_cherrypy:
      port: 8000
      ssl_crt: /etc/pki/{{ tls_dir }}/certs/{{ common_name }}.crt
      ssl_key: /etc/pki/{{ tls_dir }}/certs/{{ common_name }}.key
      webhook_disable_auth: True
This uses pillar data to define the location and name of your SSL certificate and key, as well as disabling HTTP basic auth for the API. By disabling authentication on the API endpoint, it becomes necessary to handle validation of all requests in the reactor function. Fortunately, GitHub sends all of their webhooks with an HMAC signature.
The verification and deployment of code from the webhook is handled by a custom reactor definition:
    import hashlib
    import hmac


    def run():
        '''Verify the signature for a Github webhook and deploy the appropriate code'''
        _, signature = data['headers'].get('X-Hub-Signature').split('=')
        body = data['body']
        target = tag.split('/')[-1]
        key = __opts__.get('github', {}).get('webhook-key')
        computed_signature = hmac.new(key, body, hashlib.sha1).hexdigest()
        # signature_match = hmac.compare_digest(computed_signature, signature)
        if computed_signature == signature:
            return {
                'github_webhook_deploy': {
                    'local.state.sls': [
                        {'tgt': 'roles:{0}'.format(target)},
                        {'expr_form': 'grain'},
                        {'arg': ['{0}.deploy'.format(target), 'prod']},
                    ]
                }
            }
        else:
            return {}
This uses the python DSL for state files which greatly simplifies the representation of the logic involved. The first thing it does is to check that the HMAC signature is valid by computing what it should be based on a secret key that is defined in the master's configuration and the body of the webhook request. If the signatures match, then the function returns a python dictionary consisting of a state definition that is to be executed. In this case, the state file is one that handles the deployment of the application code. The actual deployment is simply cloning the latest code from git to the servers whose grains match the target role, which is determined from the last portion of the URL to which the webhook was sent (php-web-host). The cloned source is then symlinked to current in the deployment directory, after which the Nginx and uWSGI servers are restarted.
To make this reactor function active, simply add this to the master configuration file:
    reactor:
      - 'salt/netapi/hook/deploy/*':
        - /srv/lta/reactor/code-deploy.sls
This translates to an API endpoint of https://<salt_master_url>/api/hook/<target_role>.
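Before pointing GitHub at the endpoint, it is handy to exercise it manually. GitHub signs the raw request body with HMAC-SHA1 using the shared secret, so you can compute the same header yourself. A sketch (the key, payload and URL are placeholders):

```python
import hashlib
import hmac

def github_signature(key, body):
    """Compute the X-Hub-Signature value GitHub would send for `body`."""
    return 'sha1=' + hmac.new(key, body, hashlib.sha1).hexdigest()

body = b'{"ref": "refs/heads/master"}'
print(github_signature(b'webhook-key', body))
# POST the payload with this value in an X-Hub-Signature header,
# e.g. to https://<salt_master_url>/api/hook/deploy/php-web-host
```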
Now, it is possible to have a git-based workflow similar to what my client had gotten used to with Heroku, with the additional benefit of being able to define specific actions that will trigger the webhook. This adds greater flexibility without any unnecessary additional complexity.
In the next post I explain how I managed creation and scaling of the EC2 nodes with salt-cloud and the reactor system. | https://blog.renaissancedev.com/from-heroku-to-aws-with-saltstack-part-1.html?utm_source=rss&utm_medium=rss | CC-MAIN-2019-30 | en | refinedweb |
#include <CGAL/Nef_polyhedron_S2.h>
Figures figureNefS2SVertexIncidences and figureNefS2SHalfloopIncidences illustrate the incidences of an sface.
An sface is described by its boundaries. Nef_polyhedron_S2<Traits> manages the needed sfaces internally.
CGAL::Nef_polyhedron_S2
CGAL::Nef_polyhedron_S2::SVertex | https://doc.cgal.org/4.7/Nef_S2/classCGAL_1_1Nef__polyhedron__S2_1_1SFace.html | CC-MAIN-2019-30 | en | refinedweb |
core.h Example File

standalone/core.h
    /****************************************************************************
    **
    ** Copyright (C) 2017 Klarälvdalens Datakonsult AB, a KDAB Group company,
    ** info@kdab.com, author Milian Wolff <milian.wolff@kdab.com>
    ** Contact:
    **
    ** This file is part of the QtWebChannel module.
    **
    ****************************************************************************/

    #ifndef CORE_H
    #define CORE_H

    #include "dialog.h"

    #include <QObject>

    /*
        An instance of this class gets published over the WebChannel and is then
        accessible to HTML clients.
    */
    class Core : public QObject
    {
        Q_OBJECT

    public:
        Core(Dialog *dialog, QObject *parent = nullptr)
            : QObject(parent), m_dialog(dialog)
        {
            connect(dialog, &Dialog::sendText, this, &Core::sendText);
        }

    signals:
        /*
            This signal is emitted from the C++ side and the text displayed on the
            HTML client side.
        */
        void sendText(const QString &text);

    public slots:
        /*
            This slot is invoked from the HTML client side and the text displayed
            on the server side.
        */
        void receiveText(const QString &text)
        {
            m_dialog->displayMessage(Dialog::tr("Received message: %1").arg(text));
        }

    private:
        Dialog *m_dialog;
    };

    #endif // CORE_H
Learn the importance of BufferedReader class while writing code for reading the input stream. You will get to know the features of BufferedReader in Java program.
In this article I will explain the benefits and uses of the BufferedReader class in Java. In Java there are classes for reading a file or input stream (the source stream). With the help of InputStreamReader you can read the data as a character stream: the InputStreamReader class reads bytes from the stream and decodes them into characters using a charset.

The BufferedReader class provides more advanced functionality: it reads characters from an input stream and buffers them for high-performance reading of the stream into characters, arrays and lines. If you are writing a program to read a text file, you can use the InputStreamReader and BufferedReader classes together to read the file line by line very efficiently.
The BufferedReader class is very easy to use class of the java.io package in Java. You can easily instantiate the class by passing the Reader object as constructor parameter. In this case default buffer size is used, which is sufficient for most of the programming purposes. But you can also specify the buffer size as second argument of the constructor.
Here is the Constructor details and Description of the BufferedReader class:
BufferedReader(Reader in) - This constructor is used to create the object of BufferedReader class by using the default buffer size.
BufferedReader(Reader in, int sz) - This constructor is used when you have to specify the buffer size to be used while reading the data.
Mostly the read() and readLine() methods of the class is used for reading the buffered data.
The method read() - This method is used to reads a single character from the stream.
The method readLine() - This method is used to read a line of text from the stream.
Benefits of BufferedReader class
Example of BufferedReader class
Our example code given here will read the file line by line and print on the console. Create a new file "mytext.txt" and add the following content into it:
Tutorials:
Hibernate Framework
Struts Framework
Spring Framework
XML
Ajax
JavaScript
Java
Web Services
Database
Technology
Web Development
PHP
Then create a new Java file with the following code:
    import java.io.*;

    /**
     * @author Deepak Kumar
     * Example usage of the BufferedReader class in Java
     */
    public class BufferedReaderExample {
        public static void main(String args[]) throws IOException {
            FileReader fileReader = new FileReader("mytext.txt");
            BufferedReader bufferedReader = new BufferedReader(fileReader);
            String str = "";
            while ((str = bufferedReader.readLine()) != null) {
                System.out.println("Data is: " + str);
            }
            // Closing the BufferedReader also closes the underlying FileReader
            bufferedReader.close();
        }
    }
Compile the program and execute it. After execution it will display the data from the text file on the console.
Posted on: August 21, 2013
Text::Microformat - A Microformat parser
Version 0.02
    use Text::Microformat;
    use LWP::Simple;

    # Parse a document
    my $doc = Text::Microformat->new( get('') );

    # Extract all known Microformats
    my @formats = $doc->find;
    my $hcard = shift @formats;

    # Easiest way to get a value (returns the first one found, else undef)
    my $full_name = $hcard->Get('fn');
    my $family_name = $hcard->Get('n.family-name');
    my $city = $hcard->Get('adr.locality');

    # Get the human-readable version specifically
    my $family_name = $hcard->GetH('n.family-name');

    # Get the machine-readable version specifically
    my $family_name = $hcard->GetM('n.family-name');

    # The more powerful interface (access multiple properties)
    my $family_name = $hcard->n->[0]->family_name->[0]->Value;

    # Dump to a hash
    my $hash = $hcard->AsHash;

    # Dump to YAML
    print $hcard->ToYAML, "\n";

    # Free the document and all the formats
    $doc->delete;
Text::Microformat is a Microformat parser for Perl.
Text::Microformat sports a very pluggable API, which allows not only new kinds of Microformats to be added, but also extension of the parser itself, to allow new parsing metaphors and source document encodings.
Parses the string $content and creates a new Text::Microformat object.
Recognized options:
Specify the content type. Any content type containing 'html' invokes the HTML Parser, and content type containing XML invokes XML Parser. Defaults to 'text/html'. (See HTML::TreeBuilder and XML::TreeBuilder)
Returns an array of all known Microformats in the document.
Deletes the underlying parse tree - which is required by HTML::TreeBuilder to free up memory. Behavior of Text::Microformat::Element::* objects is undefined after this method is called.
This is as easy as creating a new module in the Text::Microformat::Element::* namespace, having Text::Microformat::Element as a super-class. It will be auto-loaded by Text::Microformat.
Every Microformat element has it's own namespace auto-generated, for example:
Text::Microformat::Element::hCard::n::family_name
So it's easy to override the default behavior of Text::Microformat::Element via inheritance.
See existing formats for hints.
This is as easy as creating a new module in the Text::Microformat::Plugin::* namespace. It will be auto-loaded by Text::Microformat. Text::Microformat has several processing phases, and uses NEXT to traverse the plugin chain.
Current processing phases are, in order of execution:
Set default options in $c->opts
Pre-parsing activities (Operations on the document source, perhaps)
Parsing - at least one plugin must parse $c->content into $c->tree
Post-parsing activities (E.g. the include pattern happens here)
Before looking for Microformats
Populate the $c->formats array with Text::Microformat::Element objects
After looking for Microformats
A plugin may add handlers to one or more phases.
See existing plugins for hints.
HTML::TreeBuilder, XML::TreeBuilder,
Keith Grennan,
<kgrennan at cpan.org>
Log bugs and feature requests here:
Project homepage:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/dist/Text-Microformat/lib/Text/Microformat.pm | CC-MAIN-2018-09 | en | refinedweb |
MooX::Types::MooseLike::Email - Email address validation type constraint for Moo.
    package MyClass;
    use Moo;
    use MooX::Types::MooseLike::Email qw/:all/;

    has 'email' => (
        isa      => EmailAddress,
        is       => 'ro',
        required => 1
    );

    has 'message' => (
        isa      => EmailMessage,
        is       => 'ro',
        required => 1
    );
MooX::Types::MooseLike::Email provides Moo type constraints which use Email::Valid, Email::Valid::Loose and Email::Abstract to check for valid email addresses and messages.
An email address
An email address, which allows . (dot) before @ (at-mark)
An object, which is a Mail::Internet, MIME::Entity, Mail::Message, Email::Simple or Email::MIME
    use Scalar::Util qw(blessed);

    has 'message' => (
        is       => 'ro',
        isa      => EmailMessage,
        required => 1,
        coerce   => sub {
            return ( $_[0] and blessed( $_[0] ) and blessed( $_[0] ) ne 'Regexp' )
                ? $_[0]
                : Email::Simple->new( $_[0] );
        },
    );
hayajo <hayajo@cpan.org>
MooX::Types::MooseLike, MooseX::Types::Email, MooseX::Types::Email::Loose
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~hayajo/MooX-Types-MooseLike-Email-0.03/lib/MooX/Types/MooseLike/Email.pm | CC-MAIN-2018-09 | en | refinedweb |
question about programmatically accessing seam component - Gerardo Segura, Nov 22, 2007 4:49 AM
Hello,
I'm in the need of programmatically accessing (I mean not using @In annotation) a component which is created with @Factory . But neither this:
Course course = (Course)Contexts.lookupInStatefulContexts("course") ;
nor this, is working:
Course course=(Course)Contexts.getSessionContext().get("course") ;
Both methods return null, as if the factory method is not been called. Is this correct behavior or could it be something in my code, I reproduce here the related parts.
By the way I'm not able to inject the component because its factory method is declared inside a component with @Restrict("#{identity.loggedIn}") declared, but I'm trying to use it when no login has happened yet. But just before I plan to use the login ocurrs, so at that time the @Restrict should not be a problem. If I try to use @In the injection tries to occur to early.
I think I better wrote the code:
firstly the component with the declared factory
    @Name("courseManager")
    @Scope(ScopeType.SESSION)
    @Restrict("#{identity.loggedIn}")
    public class CourseManagerAction implements CourseManager {

        @Factory
        public Course getCourse() {
            // gets default single course
            Course course = (Course) entityManager
                    .createQuery("select c from Course c").getSingleResult();
            return course;
        }
    }
then the case where I want to use the "course" component:
    @Scope(ScopeType.EVENT)
    @Name("register")
    public class RegisterAction implements Register {

        public void register() {
            // here the user registers itself
            // ...
            // after that do auto-login
            identity.authenticate();
            // now I want to access the course
            Course course = (Course) Contexts.getSessionContext().get("course");
            // but course is null
            // why? shouldn't this call be similar to: @In Course course ??
        }
    }
any comments?
1. Re: question about programmatically accessing seam component
Sean Burns, Nov 22, 2007 5:11 AM (in response to Gerardo Segura)
I use
Component.getInstance("course");
I'm not quite sure how the factory method works in your case... but you could always get courseManager...
CourseManager cm = (CourseManager) Component.getInstance("courseManager");
2. Re: question about programmatically accessing seam component
koen handekyn, Nov 22, 2007 5:26 AM (in response to Gerardo Segura)
Component.getInstance will also work for components defined by factories.
3. Re: question about programmatically accessing seam component
Gerardo Segura, Nov 22, 2007 7:17 PM (in response to Gerardo Segura)
Indeed, it works
thanks!
When you are working with main memory it is crucial to make sure that all the data structures are correctly sized and aligned. A typical approach is to create blocks of data that are processed independently. For the developer, the question is: how large should such blocks be? The answer is that those blocks should be sized to fit the cache.
Now, how large is the cache on the system you are using? You can either run experiments to detect the different cache levels and the cache line sizes, or, if you happen to have an Intel Linux system at hand, simply explore the information as it is stored in the sysfs filesystem exported by the kernel.
However, while parsing this information at development time might be OK, at run time the best way is to adjust the system settings based on the actual configuration. At the moment, libudev does not support reading this information, so you are on your own.
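For illustration, here is a minimal Python sketch of reading the standard sysfs cache attributes yourself. The helper names are my own (this is not part of any library discussed here); the sysfs paths and value formats ("32K" sizes, "0-3,8" CPU lists) are the ones the Linux kernel exports.

```python
import glob
import os

def parse_size(text):
    """Parse a sysfs size string such as '32K' or '8192K' into bytes."""
    text = text.strip()
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1].upper() in units:
        return int(text[:-1]) * units[text[-1].upper()]
    return int(text)

def parse_cpu_list(text):
    """Parse a sysfs CPU list such as '0-3,8' into [0, 1, 2, 3, 8]."""
    cpus = []
    for part in text.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def read_caches(cpu=0):
    """Read the cache hierarchy of one CPU from sysfs (Linux only)."""
    caches = []
    pattern = "/sys/devices/system/cpu/cpu%d/cache/index*" % cpu
    for index in sorted(glob.glob(pattern)):
        entry = {}
        for attr in ("level", "type", "size", "coherency_line_size"):
            path = os.path.join(index, attr)
            if os.path.exists(path):
                with open(path) as f:
                    entry[attr] = f.read().strip()
        if "size" in entry:
            entry["size_bytes"] = parse_size(entry["size"])
        caches.append(entry)
    return caches
```

On a non-Linux system `read_caches` simply returns an empty list, since the glob matches nothing.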
To avoid everybody writing the same code, I took some time to write a small library that reads the information about the CPU caches and the CPU topology and makes it easy to process this information in your program. You can find the most current version of the code, as usual, at Github.
#include <systopo.h>

using namespace systopo;

int main(void)
{
    System s = getSystemTopology();
    return 0;
}
The System structure holds all the data parsed from sysfs and can be reviewed here:
struct Cache {
    size_t coherency_line_size;
    size_t level;
    size_t number_of_sets;
    size_t physical_line_partition;
    size_t size;
    size_t ways_of_associativity;
    std::string type;
};

struct Topology {
    size_t core_id;
    size_t physical_package_id;
    std::vector<size_t> core_siblings;
    std::vector<size_t> thread_siblings;
};

struct CPU {
    std::vector<Cache> caches;
    Topology topology;
};

struct System {
    std::vector<CPU> cpus;
    std::vector<size_t> online_cpus;
    std::vector<size_t> offline_cpus;
};
For more information about the meaning of the Topology, please refer to the Kernel documentation. The meaning of the CPU cache fields should be clear; otherwise refer to "What Every Programmer Should Know About Memory".
If you have feedback, comments or ideas, I’m glad to respond!
– Martin
Name: rlT66838
Date: 08/10/99

If the first line of my javadoc comment ends in a question mark instead of a period, the next line gets included as part of the first line. Not good. Question marks are good for describing boolean attributes or methods that return a boolean value. I know that the requirement is to find a line that ends in a period and then a space, but perhaps this can be extended to include a question mark and a space also.

----- Test Case -----

public class JavadocDemo {

    /** First line. Second line. */
    public boolean correct() {
        return true; // body irrelevant to the javadoc issue
    }

    /** First line? Second line. */
    public boolean notCorrect() {
        return true; // body irrelevant to the javadoc issue
    }
}

(Review ID: 93457)
SYNOPSIS
#include <sys/types.h>
#include "lfc_api.h"
struct lfc_direnrep *lfc_readdirxp (lfc_DIR *dirp, char *pattern, char *se)
DESCRIPTION
lfc_readdirxp reads the LFC directory opened by lfc_opendir in the name server. It does restricted pattern matching on the basename. lfc_readdirxp caches a variable number of such entries, depending on the filename size, to minimize the number of requests to the name server.
- dirp
- specifies the pointer value returned by lfc_opendir.
- pattern
- allows to restrict the listing to entries having the basename starting with this pattern.
- se
- allows to restrict the replica entries to a given SE.
RETURN VALUE
lfc_readdirxp returns a null pointer both at the end of the directory and on error. An application wishing to check for error situations should set serrno to 0, then call lfc_readdirxp; a non-zero serrno afterwards indicates an error. An error is also reported if the length of pattern exceeds CA_MAXNAMELEN or the length of se exceeds CA_MAXHOSTNAMELEN.

ERRORS
- SECOMERR
- Communication error.
- ENSNACT
- Name server is not running or is being shutdown.
AUTHOR
LCG Grid Deployment Team
Iterative Point Matching for Registration of Free-Form Curves and Surfaces
International Journal of Computer Vision, 13:2, (1994). 1994 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Iterative Point Matching for Registration of Free-Form Curves and Surfaces

ZHENGYOU ZHANG
INRIA Sophia-Antipolis, 2004 route des Lucioles, BP 93, F Sophia-Antipolis Cedex, FRANCE
sophia.inria.fr

Received June 23, Revised March 4,

Abstract.

1 Introduction

The work described in this paper was carried out in the context of autonomous vehicle navigation in rugged terrain based on vision. A single view is usually not sufficient for path planning and manipulation, and it is preferable to combine several views to produce a more credible interpretation. The objective of this work is to compute precisely the displacement of the vehicle between successive views in order to register different 3-D visual maps. A 3-D visual map can be a set of curves obtained by using either an edge-based stereovision system (Pollard et al. 1985; Robert and Faugeras 1991) or a range imaging sensor (Sampson 1987). It can also be a dense 3-D map either acquired by an active sensor (e.g., ERIM (Sampson 1987)), or reconstructed by a correlation-based stereovision system (Fua 1992), or obtained by fusing the two. The reader is referred to (Faugeras et al. 1992) for a quantitative and qualitative comparison of some area and feature-based stereo algorithms. The registration step is indispensable for the following reasons:

- better localize the mobile vehicle,
- eliminate errors introduced in stereo matching and reconstruction,
- build a more global Digital Elevation Map (DEM) of the environment.

Geometric matching remains one of the bottlenecks in computer and robot vision, although progress has been made in recent years for some particular applications. There are two main applications: object recognition and visual navigation. The problem in object recognition is to match observed data to a prestored model representing different objects of interest.
The problem in visual navigation is to match data observed in a dynamic scene at different instants in order to recover object motions and to interpret the scene. Registration for inspection/validation is also an important application of geometric matching (Menq et al. 1992). Besl and Jain (1985), and Chin and Dyer (1986) have made two excellent surveys of pre-1985 work on matching in object recognition. Besl (1988) surveys the current methods for geometric matching and geometric representations while emphasizing the latter. Most of the previous work focused on polyhedral objects; geometric primitives such as points, lines and planar patches were usually used. This is of course very limited compared with the real world we live in. Recently, curved objects have attracted the attention of many researchers in computer vision. This paper deals with objects represented by free-form curves and surfaces, i.e., arbitrary space shapes of the type found in practice.

A free-form curve can be represented by a set of chained points. Several matching techniques for free-form curves have been proposed in the literature. In the first category of techniques, curvature extrema are detected and then used in matching (Bolles and Cain 1982). However, it is difficult to localize precisely curvature extrema (Walters 1987; Milios 1989), especially when the curves are smooth. Very small variations in the curves can change the number of curvature extrema and their positions on the curves. Thus, matching based on curvature extrema is highly sensitive to noise. In the second category, a curve is transformed into a sequence of local, rotationally and translationally invariant features (e.g., curvature and torsion). The curve matching problem is then reduced to a 1-D string matching problem (Pavlidis 1980; Schwartz and Sharir 1987; Wolfson 1990; Gueziec and Ayache 1992). As more information is used, the methods in this category tend to be more robust than those in the first category. However, these methods are still subject to noise disturbance because they use arclength sampling of the curves to obtain point sets. The arclength itself is sensitive to noise.

A dense 3-D map is a set of 3-D points. We can divide the methods proposed in the literature for registering two dense 3-D maps in two categories (the reader is referred to Zhang (1991) for a more detailed review):

Primitive-based approach. A set of primitives are first extracted. A dense 3-D map can then be described by a graph with primitives defining the nodes and geometric relations defining the links. The registration of two maps becomes the mapping of the two graphs: subgraph isomorphism. Some heuristics are usually introduced to reduce the complexity.

Surface-based approach. A 3-D map is considered as a surface, having the form (a Monge patch) x(x, y) = [x, y, z(x, y)]^T with (x, y) in R^2. The idea is to find the transformation by minimizing a criterion relating the distance between the two surfaces.

In the primitive-based approach, one often uses some differential properties invariant to rigid transformation, such as Gaussian curvature. The primitives often used are

1. special points (Goldgof et al. 1988; Hebert et al. 1989; Kweon and Kanade 1992), whose curvature is locally maximal and is bigger than a threshold.

2. contours. A contour can indicate where the elevation changes significantly, which is called a cliff in Rodríguez and Aggarwal (1989). It can also be a distance profile (Radack and Badler 1989), each point on which has the same distance to a common point. In certain specific cases, a contour can be a curve of a constant depth (Kamgar-Parsi et al. 1991).

3. surface patches (Kehtarnavaz and Mohan 1989; Liang and Todhunter 1990). Each surface patch is classified into different categories according to the sign of the Gaussian and mean curvatures. This type of primitives is usually used in a limited scene, for example, a scene containing several objects to be recognized. In a natural scene, there will be many surface patches such that the mapping becomes impractical.

Among the surface-based methods, we find

1. a technique similar to correlation (Gennery 1989), applicable when the number of degrees of freedom of the transformation between two maps is reduced (2, for example).

2. a differential technique (Horn and Harris 1991), applicable when the motion between two views is very small or when we have a very good initial estimate of the motion, and when the data are not very noisy.

3. a technique based on the coherence and compatibility between two maps (Hebert et al. 1989; Kweon and Kanade 1992) (quantified by the distance between two surfaces). Szeliski (1988) proposed a similar technique by adding a smoothness constraint.

The main difference between the above two approaches resides in the information to be processed during the registration. The information used in the primitive-based approach is much more concise than in the surface-based approach, and is in general preferable. But in a natural environment, with the state of the art of the current methods, we cannot detect robustly and localize precisely primitives (Walters 1987; Milios 1989). The surface-based approach uses all available information. The large redundancy allows for a precise computation of the transformation between the two maps, but this approach usually requires some a priori knowledge of the transformation.

The primitive-based approach and the curve matching methods cited above exploit global matching criteria in the sense that they can deal with two sets of free-form curves and surfaces which differ by a large motion/transformation. This ability to deal with large motions is usually essential for applications to object recognition. In many other applications, for example, visual navigation, the motion between curves in successive frames is in general either small (because the maximum velocity of an object is limited and the sample frequency is high) or known within a reasonable precision (because a mobile vehicle is usually equipped with several instruments such as odometric and inertial systems which can provide such information). In the latter case, we can first apply the rough estimate of the motion to the first frame to produce an intermediate frame; then the motion between the intermediate frame and the second frame can be considered to be small. A surface-based method is attractive for such applications. This paper describes a method, similar to the third technique of the surface-based approach but much faster, to register two 3-D maps differing by a small motion. The key idea underlying our approach is the following.
3 Iterative Point Matching for Registration of Free-Form Curves and Surfaces 121 cessed during the registration. The information used in the primitive-based approach is much more concise than in the surface-based approach, and is in general preferable. But in a natural environment, with the state of the art of the current methods, we cannot detect robustly and localize precisely primitives (Waiters 1987; Milios t989). The surface-based approach uses all available information. The large redundancy allows for a precise computation of the transformation between the two maps, but this approach usually requires some a priori knowledge of the transformation. The primitive-based approach and the curve matching methods cited above exploit global matching criteria in the sense that they can deal with two sets of free-form curves and surfaces which differ by a large motion/transformation. This ability to deal with large motions is usually essential for applications to object recognition. In many other applications, for example, visual navigation, the motion between curves in successive frames is in general either small (because the maximum velocity of an object is limited and the sample frequency is high) or known within a reasonable precision (because a mobile vehicle is usually equipped with several instruments such as odometric and inertial systems which can provide such information). In the latter case, we can first apply the rough estimate of the motion to the first frame to produce an intermediate frame; then the motion between the intermediate frame and the second frame can be considered to be small. A surface-based method is attractive for such applications. This paper describes a method, similar to the third technique of the surface-based approach but much faster, to register two 3-D maps differing by a small motion. The key idea underlying our approach is the following. 
Given that the motion between two successive frames is small, a point in the first frame is close to the corresponding point in the second frame. By matching points in the first frame to their closest points in the second, we can find a motion that brings the two sets of points closer. Iteratively applying this procedure, the algorithm yields a better and better motion estimate. Recently, several pieces of independent work exploiting the similar ideas have been published. They are Besl and McKay (1992); Chen and Medioni (1992); Menq et al. (1992); Champleboux et al. (1992). A detailed comparison between these methods and ours will be given in Section 8. 2 Problem Statement A parametric 3-D (space) curve segment C is a vector function x : [a, b] --+ R 3, where a and b are scalar. In computer vision applications, the data of a space curve are usually available in the form of a set of chained 3-D points. If we know the type of the curve, we can obtain its description x by fitting, say, conics to the point data (Safaee-Rad et al. 1991; Taubin 1991). A parametric surface S is a vector function x : 1R 2 -+ R 3. In computer vision applications, the data of a surface are usually available in the form of a set of 3-D points. If we know the type of the surface, we can obtain its description x by fitting, say, planes or quadratic surfaces to the point data (Faugeras and Hebert 1986; Taubin 1991). In this work, we shall use directly the chained points for curves and point sets for surfaces, i.e., we are interested in free-form shapes without regard to particular primitives. This is very appropriate for a non-structured environment. In the following, if not explicitly stated, the property that a curve is a set of chained points is not used, i.e., we shall treat curve data in the same way as surface data (a set of points). The word shape (S) will refer to either curves or surfaces. The points in the first 3-D map are noted by xi (i = 1... 
m), and those in the second map are noted by x} (j = 1... n). These points are sampled from S and S', where S = C when curves are in consideration and S = S when surfaces are in consideration. In the noise-free case, if S and S' are registered by a transformation T, then the distance of a point on S, after applying T, to S' is zero, and the distance of a point on S', after applying the inverse of T, to S should be zero, too. The objective of registration is to find the motion between the two frames, i.e., R for rotation and t for translation, such that the following criterion trt f'(r,t) 1 ~ Pi d2(rxi + t, S') -- ~im l Pi i=1 (I) 1 n -1- ~j=l qj "= qj d2(rrxj - Rrt' S) is minimized, where d(x, S) denotes the distance of the point x to S (to be defined below), and pi (resp. qj) takes value 1 if the point xi (resp. x}) can be
matched to a point on S' in the second frame (resp. S in the first frame) and takes value 0 otherwise. The minimum of F(R, t) will be zero in the noise-free case. It is necessary to have the parameters p_i and q_j because some points are only visible from one point of view and some are outliers, as described in Section 8.

The above criterion is symmetric in the sense that neither of the two frames prevails over the other. To economize computation, we shall only use the first part of the right hand side of Equation 1. In other words, the objective function to be minimized is

    F(R, t) = [1 / (sum_{i=1}^m p_i)] sum_{i=1}^m p_i d^2(R x_i + t, S')    (2)

The effect of this simplification is described in Section 7.3. However, the minimization of F(R, t) is very difficult, not only because d(R x_i + t, S') is highly nonlinear (the corresponding point of x_i on S' is not known beforehand) but also because p_i can take either 0 or 1 (an Integer Programming Problem). As said in the introduction, we follow a heuristic approach by assuming the motion between the two frames is small or approximately known. In the latter case, we can first apply the approximate estimate of the motion between the two frames to the first one to produce an intermediate frame; then the motion between the intermediate frame and the second frame can be considered to be small. Small depends essentially on the scene of interest. If the scene is dominated by a repetitive pattern, the motion should not be bigger than half of the pattern distance. For example, in the situation illustrated in Figure 1, our algorithm will converge to a local minimum. In this case, other methods based on more global criteria, such as those cited in the introduction section, could be used to recover a rough estimate of the motion. The algorithm described in this paper can then be used to obtain a precise motion estimate.
3 Iterative Pseudo Point Matching Algorithm

We describe in this section an iterative algorithm for 3-D shape registration by matching points in the first frame, after applying the previously recovered motion estimate (R, t), with their closest points in the second. A least-squares estimation reduces the average distance between the matched points in the two frames. As a point in one frame and its closest point in the other do not necessarily correspond to a single point in space, several iterations are indispensable. Hence the name of the algorithm.

3.1 Finding Closest Points

Let us first define the distance d(x, S') between point x and shape S', which is used in Equation 2. By definition, we have

    d(x, S') = min_{x' in S'} d(x, x')    (3)

where d(x1, x2) is the Euclidean distance between the two points x1 and x2, i.e., d(x1, x2) = ||x1 - x2||. In our case, S' is available as a set of points x'_j (j = 1 ... n). We use the following simplification:

    d(x, S') = min_{j in {1, ..., n}} d(x, x'_j)    (4)

See Section 7.4 for more discussions on the distance. The closest point y in the second frame to a given point x is the one satisfying d(x, y) <= d(x, z) for all z in S'. The worst case cost of finding the closest point is O(n), where n is the number of points in the second frame. The total cost while performing the above computation for each point in the first frame is O(mn), where m is the number of points in the first frame. There are several methods which can considerably speed up the search process, including bucketing techniques and k-d trees (abbreviation for k-dimensional binary search tree) (Preparata and Shamos 1986). k-d trees are implemented in our algorithm; see Appendix A of this article for the details.

3.2 Pseudo Point Matching

For each point x we can always find a closest point y.
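The simplified distance of Equation (4) amounts to one nearest-neighbor query per point. A brute-force NumPy sketch of this query (my own naming, not the paper's implementation; a k-d tree as in Appendix A, e.g. scipy.spatial.cKDTree, returns the same answer faster):

```python
import numpy as np

def closest_points(P, Q):
    """For each point of P (m x 3), return the index of and distance to
    its closest point in Q (n x 3), i.e. Equation (4) applied per point.
    Brute force, O(mn)."""
    # pairwise squared distances, shape (m, n)
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(P)), idx])
    return idx, dist
```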
However, because there are some spurious points in both frames due to sensor error, or because some points visible in one frame are not in the other due to sensor or object motion, it probably does not make any sense to pair x with y. Many constraints can be imposed to remove such spurious pairings. For example, distance continuity in a neighborhood, which is similar to the figural continuity in stereo
matching (Mayhew and Frisby 1981; Pollard et al. 1985; Grimson 1985), should be very useful to discard the false matches. These constraints are not incorporated in our algorithm in order to maintain the algorithm in its simplest form. Instead, we can exploit the following two simple heuristics, which are all unary.

Fig. 1. Our algorithm exploits a local matching technique, and converges to the closest local minimum, which is not necessarily the optimal one.

The first is the maximum tolerance for distance. If the distance between a point x_i and its closest one y_i, denoted by d(x_i, y_i), is bigger than the maximum tolerable distance Dmax, then we set p_i = 0 in Equation 2, i.e., we cannot pair a reasonable point in the second frame with the point x_i. This constraint is easily justified since we know that the motion between the two frames is small and hence the distance between two points reasonably paired cannot be very big. In our algorithm, Dmax is set adaptively and in a robust manner during each iteration by analyzing distance statistics. See Section 3.3.

The second is the orientation consistency. We can estimate the surface normal or the curve tangent (both referred below as orientation vector) at each point. It can be easily shown that the angle between the orientation vector at point x and that at its corresponding point y in the second frame cannot go beyond the rotation angle between the two frames (Zhang et al. 1988). Therefore, we can impose that the angle between the orientation vectors at two paired points should not be bigger than a prefixed value, which is the maximum of the rotation angle expected between the two frames. This constraint is not implemented for surface registration, because the computation of the surface normals from 3-D scattered points is relatively expensive.
For curves, we compute an approximate tangent for each point from the vector linking its neighboring points (Zhang 1992b; Zhang 1992a). It is the only place where the property that points of a curve are chained is used. In our implementation, we set this angle threshold to 60 degrees to take into account the noise effect in the tangent computation. If the tangents can be precisely computed, it can be set to a smaller value. This constraint is especially useful when the motion is relatively big.

3.3 Updating the Matching

Instead of using all matches recovered so far, we exploit a robust technique to discard several of them by analyzing the statistics of the distances. The basic idea is that the distances between reasonably paired points should not be very different from each other. To this end, one parameter, denoted by D, needs to be set by the user, which indicates when the registration between two frames is good. See Section 4.1 for the choice of the value D.

Let D^I_max denote the maximum tolerable distance in iteration I. At this point, each point in the first frame (after applying the previously recovered motion) whose distance to its closest point is less than D^{I-1}_max is retained, together with its closest point and their distance. Let {x_i}, {y_i}, and {d_i} be, respectively, the resulting sets of original points, closest points, and their distances after the pseudo point matching, and let N be the cardinal of the sets. Now compute the mean μ and the sample deviation σ of the distances, which are given by

    μ = (1/N) sum_{i=1}^N d_i,    σ = sqrt[ (1/N) sum_{i=1}^N (d_i - μ)^2 ]

Depending on the value of μ, we adaptively set the maximum tolerable distance D^I_max as shown below:
    if μ < D          /* the registration is quite good */
        D^I_max = μ + 3σ;
    elseif μ < 3D     /* the registration is still good */
        D^I_max = μ + 2σ;
    elseif μ < 6D     /* the registration is not too bad */
        D^I_max = μ + σ;
    else              /* the registration is really bad */
        D^I_max = ξ;
    endif

The explanation of ξ is deferred to Section 4.2.

Fig. 2. A histogram of distances.

At this point, we use the newly set D^I_max to update the matching previously recovered: a pairing between x_i and y_i is removed if their distance d_i is bigger than D^I_max. The remaining pairings are used to compute the motion between the two frames, as described below. Because Dmax is adaptively set based on the statistics of the distances, our algorithm is rather robust to relatively big motion and to gross outliers (as shown in the experiment section). Even if there remain several false matches in the retained set after the update, the use of the least-squares technique still yields a reasonable motion estimate, which is sufficient for the algorithm to converge to the correct solution.

3.4 Computing Motion

At this point, we have a set of 3-D points which have been reasonably paired with a set of closest points, denoted respectively by {x_i} and {y_i}. Let N be the number of pairs. Because N is usually much greater than 3 (three points are the minimum for the computed rigid motion to be unique), it is necessary to devise a procedure for computing the motion by minimizing the following mean-squares objective function

    F(R, t) = (1/N) sum_{i=1}^N ||R x_i + t - y_i||^2    (5)

which is the direct result of Equation 2 with the definition of distance given by Equation 4. Any optimization method, such as steepest descent, conjugate gradient, or simplex, can be used to find the least-squares rotation and translation. Fortunately, several much more efficient algorithms exist for solving this particular problem. They include the quaternion method (Faugeras and Hebert 1986; Horn 1987), singular value decomposition (Arun et al. 1987), the dual number quaternion method (Walker et al. 1991), and the method proposed by Brockett (1989). We have implemented both the quaternion method and the dual number quaternion one. They yield exactly the same motion estimate. For completeness, the dual quaternion method (Walker et al. 1991) is summarized in Appendix B.
Because Dmax is adaptively set based on the staffstics of the distances, our algorithm is rather robust to relatively big motion and to gross outliers (as to be shown in the experiment section). Even if there remain several false matches in the retained set after update, the use of least-squares technique still yields 3.5 Summary We can now summarize the iterative pseudo point matching algorithm as follows: input: Two 3-D frames containing m and n 3-D points, respectively. output: The optimal motion between the two frames. procedure: 1. initialization Dma o x is set to 2079, which implies that every point in the first frame whose distance to its closest point in the second frame is bigget than Dma 0 x is discarded from considera-
7 herative Point Matching for Reg&tration of Free-Form Curves and Surfaces 125 tion during the first iteration. The number 20 is not crucial in the algorithm, and can be replaced by a larger one. 2. preprocessing (a) Compute the tangent at each point of the two frames (only for curves). (b) Build the k-d tree representation of the second frame. 3. iteration until convergence of the computed motion (a) Find the closest points satisfying the distance and orientation constraints, as described in Section 3.2. (b) Update the recovered matches through statistical analysis of distances, as described in Section 3.3. (c) Compute the motion between the two frames from the updated matches, as described in Section 3.4. (d) Apply the motion to all points (and their tangents for curves) in the first frame. Several remarks should be made here. First, the construction and the use of k-d trees for finding closest points are explained in Appendix A. Second, the motion is computed between the original points in the first frame and the points in the second frame. Therefore, the final motion given by the algorithm represents the transformation between the original first frame and the second frame. Last, the iterationtermination condition is defined as the change in the motion estimate between two successive iterations. The change in translation at iteration I is defined as ~t = [[t~ -tm II/lltlll. To measure the change in rotation, we use the rotation axis representation, which is a 3-D vector, denoted by r. Let 0 = Ilrll and n = r/[irll, the relation between r and the quaternion q is q = [sin(o/2)n r, cos(0/2)] r. We do not use the quaternions because their difference does not make much sense. We then define the change in rotation at iteration I as Sr = llri- ri-l II/llr~ll. We terminate the iteration when both Sr and 3t are less than 1%, or when the number of iterations achieves a prefixed threshold (20 for curves and 40 for surfaces). 
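As an illustration only, the loop of Section 3.5 can be condensed into a small NumPy sketch. It solves the motion step with the SVD method of Arun et al. (1987) — one of the closed-form solutions listed in Section 3.4; the paper itself uses the quaternion methods — and replaces the adaptive update of Section 3.3 with a fixed distance threshold; the orientation test and the k-d tree are omitted for brevity:

```python
import numpy as np

def best_rigid_motion(X, Y):
    """Least-squares R, t minimizing sum ||R x_i + t - y_i||^2 (Eq. 5),
    via the SVD method of Arun et al. (1987)."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cy - R @ cx
    return R, t

def icp(P, Q, iterations=40, dmax=np.inf):
    """Iterative pseudo point matching: repeatedly pair each point of P
    with its closest point in Q, reject pairs farther than dmax, and
    re-estimate the motion.  Returns R, t mapping the original P onto Q."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = P @ R.T + t
        d2 = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        dist = np.sqrt(d2[np.arange(len(P)), idx])
        keep = dist < dmax                  # crude stand-in for Section 3.3
        # motion is always computed from the ORIGINAL first frame (Sec. 3.5)
        R, t = best_rigid_motion(P[keep], Q[idx[keep]])
    return R, t
```

On two identical point sets differing by a small rigid motion, the loop locks onto the correct correspondences after a few iterations and then recovers the motion essentially exactly.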
One could also define the termination condition as the absolute change, i.e., δr = ||r_I - r_{I-1}|| and δt = ||t_I - t_{I-1}||. We stop the iteration if δr is less than a threshold, say 0.5 degrees, and δt is less than a threshold, say 0.5 centimeters.

4 Practical Considerations

In this section, we consider several important aspects in practice, including the choice of the parameters D and ξ, and the coarse-to-fine strategy.

4.1 Choice of the Parameter D

The only parameter needed to be supplied by the user is D, which indicates when the registration between two frames can be considered to be good. In other words, the value of D should correspond to the expected average distance when the registration is good. When the motion is big, D should not be very small. Because we set D^0_max = 20D, if D is very small we cannot find any matches in the first iteration and of course we cannot improve the motion estimate. (A solution to this is to set D^0_max bigger, say 30D.) In practice, if we know the precision of the initial estimate, say, within 20 centimeters, we can set D^0_max to that value.

The value of D has an impact on the convergence of the algorithm. If D is smaller than necessary, then more iterations are required for the algorithm to converge because many good matches will be discarded at the step of matching update. On the other hand, if D is much bigger than necessary, it is possible for the algorithm not to converge to the correct solution because possibly many false matches will not be discarded. Thus, to be prudent, it is better to choose a small value for D.

In our implementation, we relate D to the resolution of the data. Let D̂ be the average distance between neighboring points in the second frame. Consider a perfect registration shown in Figure 3. Points from the first frame are marked by a cross and those from the second, by a dot. Assume that a cross is located in the middle of two dots. Then in this case, the mean μ of the distances between the two sets of points is equal to D̂/2. In general, we can expect μ ≥ D̂/2. So, if D̂ is computed, we can set D = D̂. For curves, we do compute D̂ for each run. For surfaces, D is set to 10 centimeters, which corresponds to roughly twice the resolution of a 3-D map reconstructed by a correlation-based stereo for a depth range of about 10 meters. This gives us satisfactory results.
8 126 Zhang ~, 700 "e 1~ 600 i Fig. 3. Illustration of a perfect registration to show how to choose Choice of the Parameter In Section 3.3, we described how to update matches through a statistical analysis of distances, and we have assumed that the distribution of distances is approximately Gaussian when the registration between two frames is good. Because of the local property of the matching criterion used, our algorithm converges to the closest minimum. It is thus best applied in situations where the motion is small or approximately known and a precise estimate of the motion is required. In the case of a very bad initial estimate of the motion between two frames, one observes that the form of the distribution of distances is in general very complex, We show in Figure 4 one such typical histogram. As can be observed, the form of the histogram in Figure 4 is irregular. There are several peaks. Furthermore, many points are found near zero. This shows the difficulty of our approach. When the initial estimate is very bad, we probably find matches having small distances due to occasionally bad alignments, that is, these matches are in fact not reasonable. We cannot guarantee that our algorithm yields the correct estimate of the motion. One possible method is to generate a hypothesis for each peak, And then evaluate each hypothesis in parallel. The criterion for measuring the quality of a hypothesis can be a function of the number of matches and of the final average distance. In the end, the hypothesis which gives the best score is retained as the transformation between the two frames. We have adopted a simpler method. The maximal peak gives in general, at least we expect, a hint of a o distances Fig. 4. Histogram of distances when the initial estimate of the motion is very bad "8 t_ o 0 distance ~ Dmax Fig. 5. How to choose the value of reasonable correspondence between the two flames. 
We have chosen in our implementation the valley after the maximal peak as the value of Dmax (see Figure 5). That is, all matches after the valley are discarded from consideration. To avoid noise perturbation, we impose that the number of points at the valley must not go beyond 60% of the number of points at the peak. In all our experiments, this method provides us with satisfactory results, as shown below.
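The valley-after-the-peak rule can be sketched as follows. This is only our illustrative reading of the heuristic, with the histogram represented as a plain list of bin counts; the function and parameter names are ours.

```python
def choose_dmax(hist, bin_width, valley_ratio=0.6):
    """Pick Dmax as the first valley after the maximal peak of a distance
    histogram.  A bin only counts as the valley if it is a local minimum
    holding fewer than valley_ratio * (peak count) points: the 60% noise
    guard described in the text."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    for i in range(peak + 1, len(hist) - 1):
        local_min = hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]
        if local_min and hist[i] < valley_ratio * hist[peak]:
            # matches whose distance falls beyond this bin are discarded
            return (i + 1) * bin_width
    return len(hist) * bin_width  # no clear valley: keep all matches
```

For example, with bin counts [2, 5, 9, 4, 1, 6, 3] the peak is the third bin and the valley is the fifth, so all matches beyond the fifth bin boundary would be pruned.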
Iterative Point Matching for Registration of Free-Form Curves and Surfaces

4.3 Coarse-to-Fine Strategy

As shown in the next section, we find fast convergence of the algorithm during the first few iterations, which slows down as it approaches the local minimum. We also find that more search time is required during the first few iterations, because the search space is larger at the beginning. Since the total search time is linear in the number of points in the first frame, it is natural to exploit a coarse-to-fine strategy. During the first few iterations, we can use coarser samples (e.g., one out of every five) instead of all sample points on the curve. When the algorithm has almost converged, we use all available points in order to obtain a precise estimate.

5 Experimental Results with Curves

The proposed algorithm has been implemented in C. In order to maintain modularity, the code is not optimized. The program is run on a SUN 4/60 workstation, and any quoted times are given for execution on that machine. This section is divided into three subsections. In the first, the algorithm is applied to synthetic data. The results show clearly the typical behavior of the algorithm to be expected in practice. The second describes the robustness and efficiency of the algorithm using synthetic data with different levels of noise and different samplings. The third describes the experimental results with real data.

5.1 A Case Study

In this experiment, the parametric curve described by x(u) = [u^2, 5u sin(u) + 10u cos(1.5u), 0]^T is used. The curve is sampled twice in different ways. Each sample set contains 200 points. The second set is then rotated and translated with r = [0.02, 0.25, -0.15]^T and t = [40.0, 120.0, -50.0]^T. We thus get two noise-free frames. (The same noise-free data are used in the experiments described in the next section.) For each point, zero-mean Gaussian noise with a standard deviation equal to 2 is added to its x, y and z components.
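As an aside before the case study, the coarse-to-fine schedule described above can be sketched as follows. The function name and the default values (five-fold subsampling for the first five of fifteen iterations, matching the experiment reported later with 40 out of 200 points) are our illustrative choices, not part of the paper's interface.

```python
def coarse_to_fine_schedule(points, coarse_step=5, coarse_iters=5, total_iters=15):
    """Yield (iteration, subset) pairs implementing the coarse-to-fine
    strategy: every coarse_step-th point during the first coarse_iters
    iterations, then all points to refine the estimate."""
    coarse = points[::coarse_step]
    for it in range(total_iters):
        yield it, coarse if it < coarse_iters else points
```

With 200 sample points, the first five iterations would each see 40 points and the remaining ten would see all 200, which is exactly the split used in the coarse-to-fine experiments below.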
We show in Figure 6 the front and top views of the noisy data. For visual convenience, points are linked. The solid curve is the one in the first frame, and the dashed one, in the second frame. The data are used as is; no smoothing is performed. The first step is then to find matches for the points in the first frame. As Dmax is big, each point has a match. We find 200 matches in total, which are shown in Figure 7, where matched points are linked. Many false matches are observed. We then update these matches using the technique described in Section 3.3, and 100 matches survive, which are shown in Figure 8. Even after the updating, there are still some false matches. Because there are more good matches than false matches, the motion estimation algorithm still yields a reasonable estimate. This can be observed in Figure 9, where the estimated motion has been applied to the points in the first frame. We can observe the improvement of the registration of the two curves, especially in the top view. Now we enter the second iteration. We find this time 176 matches, which are shown in Figure 10a. (The top view is not shown, because the two curves are very close.) Several false matches are observable. After updating, 146 matches remain, as shown in Figure 10b. Almost all of these matches are correct. Motion is then computed from these matches. We iterate the process in the same manner. The motion result after 10 iterations is shown in Figure 11. The registration between the two curves is already quite good. After 15 iterations, the algorithm yields the following motion estimate:

r̂ = [2.442 × 10^-2, · × 10^-1, · × 10^-1]^T,
t̂ = [3.879 × 10^1, · × 10^2, · × 10^1]^T.

To measure the precision of the motion estimate, we define the rotation error as

e_r = ||r - r̂|| / ||r|| × 100%,   (6)

where r and r̂ are respectively the real and estimated rotation parameters, and the translation error as

e_t = ||t - t̂|| / ||t|| × 100%,   (7)

where t is the real translation parameter and t̂ is the estimated one.
In Figure 12, we show the evolution of the rotation and translation errors versus the number of iterations. Fast convergence is observed during the first few iterations and relatively slower later. After 15 iterations, the rotation error is 1.6% and the translation error is 4.6%.
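The error measures of Equations (6) and (7) are simply relative Euclidean norms, expressed in percent. A small sketch (function name ours):

```python
import math

def relative_error(true_vec, est_vec):
    """Relative error in percent, as in Equations (6) and (7):
    e = ||true - est|| / ||true|| * 100%."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(true_vec, est_vec)))
    norm = math.sqrt(sum(a * a for a in true_vec))
    return 100.0 * diff / norm

# The synthetic motion used in the case study above:
r_true = [0.02, 0.25, -0.15]
t_true = [40.0, 120.0, -50.0]
```

Comparing an estimated r̂ and t̂ against these ground-truth vectors with this function reproduces the percentages reported in the text.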
Fig. 6. Front and top views of the data.
Fig. 7. Matched points in the first iteration before updating (front and top views).
Fig. 8. Matched points in the first iteration after updating (front and top views).
Fig. 9. Front and top views of the motion result after the first iteration.
Fig. 10. Matched points before and after updating in the second iteration (only the front view).
Fig. 11. Front and top views of the motion result after ten iterations.
Fig. 12. Evolution of the rotation and translation errors versus the number of iterations.

We show in Table 1 several intermediate results during different iterations. The results are divided into three parts. The second to fourth rows indicate the execution time (in seconds) required for finding matches, updating the matching, and computing the motion, respectively. The fifth row shows the values of Dmax used in different iterations. The last row shows the comparison of the numbers of matches found in different iterations before and after updating. We have the following remarks:

Dmax decreases almost monotonically with the number of iterations. This is because the registration becomes better and better, and Dmax is computed dynamically through the statistical analysis of distances.

The time required for finding matches also decreases almost monotonically. This is because of the almost monotonic decrease of Dmax: less search in the k-d tree is required when the search region becomes smaller.

The time required for updating the matching is negligible.

The time required for computing the motion is almost constant, as it is related to the number of matches (here almost constant). Furthermore, the motion algorithm is very efficient: about 0.05 seconds for 145 matches.

The numbers of matches before and after updating do not vary much after the first few iterations. This also implies that the Gaussian assumption of the distance distribution is reasonable.

The total execution time is 6.5 seconds in this experiment.

5.2 Synthetic Data

In this section, we describe the robustness and efficiency of the algorithm using the same synthetic data as in the last section, but with different levels of noise and different samplings. All results given below are the average of ten tries.
The first series of experiments is carried out with respect to different levels of noise. The standard deviation of the noise added to each point varies from 0 to 20. Similarly to Figure 12, we show, as a sample, in Figure 13 and Figure 14 the evolutions of the rotation and translation errors versus the number of iterations with a standard deviation equal to 2 and 8. From these results, we observe that:

The translation error decreases almost monotonically, while the behavior of the rotation error is more complex.

Noise has a stronger impact on the rotation parameters than on the translation parameters. When noise is small, there is in general a smaller error in rotation than in translation. When noise is significant, the inverse is observed.

Fig. 13. Evolution of the rotation and translation errors versus the number of iterations with a standard deviation equal to 2.
Table 1. Several detailed results in different iterations (rows: iteration number; matching time; update time; motion time; Dmax; number of matches before and after updating).

Fig. 14. Evolution of the rotation and translation errors versus the number of iterations with a standard deviation equal to 8.

We think the above phenomena are due to the fact that the relation between the measurements and the rotation parameters is nonlinear, while that between the measurements and the translation parameters is linear. To visually demonstrate the effect of the noise added and the ability of the algorithm, we show in Figure 15 and Figure 16 two sample results. In each figure, the upper row displays the front and top views of the two noisy curves before registration; the lower row displays the front and top views of the two noisy curves after registration. In Figure 15 and Figure 16, we have added, to each of the x, y, and z components of each point of the two curves, zero-mean Gaussian noise with a standard deviation equal to 8 and 16, respectively. Even though the curves are so noisy, the registration between them is surprisingly good.

We now summarize more results in Table 2. The rotation and translation errors are measured in percent, and the execution time in seconds. Each number shown is the average of 10 tries. 15 iterations have been applied. We have the following conclusions:

The errors in rotation and in translation increase with the increase in the noise added to the data, as expected. Noise in the measurements has more effect on the rotation than on the translation.

The algorithm is robust to noise. It yields a reasonable motion estimate even when the data are significantly corrupted.
The execution time also increases with the increase in the noise added to the data. This is because when the data are very noisy the value of Dmax stays big, and the search has to be performed in a large space.

We now investigate the ability of the algorithm with respect to different samplings of curves. The same data are used. Zero-mean Gaussian noise with a standard deviation equal to 2 is added to each of the x, y, and z components of each point of the two curves. We will describe in Section 7.4 the effect of different samplings of the curves in the second frame. Here we vary the sampling of the curve in the first frame from 1 (i.e., all points) to 10 (i.e., one out of every ten points). Ten tries are carried out for each sampling. The errors in rotation and in translation (in percent), and the execution time (in seconds), versus different samplings are shown in Table 3. Two remarks can be made:

Generally speaking, the more samples there are in a curve, the smaller the error in the estimation of the rotation and translation. However, the exact relation is not very clear. Consider sampling = 1 and sampling = 10. The latter has only 20 points
Fig. 15. Front and top views of two noisy curves with a standard deviation equal to 8, before and after registration.

Table 2. A summary of the experimental results with synthetic data (rows: standard deviation; rotation error; translation error; execution time).

while the former has 200 points. The motion error, however, is only twice as large.

The execution time decreases monotonically as the number of sample points decreases. Disregarding the preprocessing time, the execution time is linear in the number of points in the first frame.

In the foregoing discussions we have observed that using coarsely sampled points on the curves in the first frame does not affect much the accuracy of the final motion estimate, but it considerably speeds up the whole process. It is natural to think about using a coarse-to-fine strategy such as that described in Section 4.3. The finding of fast convergence of the algorithm during the first few iterations (see Figure 13 and Figure 14) and the finding of relatively expensive search (see Table 1) justify the following strategy. During the first few iterations, we use coarser, instead of all, sample points, which allows for finding an estimate close to the optimal one. We then use all sample points to refine this estimate. We have conducted ten experiments using the same data as before by adding zero-mean Gaussian noise with a standard deviation equal to 3. During the first five iterations, only 40 points (one out of every five points) are used. These are followed by ten iterations using all points. The average results of the ten experiments are: rotation
Fig. 16. Front and top views of two noisy curves with a standard deviation equal to 16, before and after registration.

Table 3. Results with respect to different samplings (fraction of points: 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10; rows: rotation error; translation error; execution time).

error = 4.56%, translation error = 4.29%, and execution time = 3.39 s. For comparison, we performed 15 iterations using all points. The average results of the ten tries are: rotation error = 4.68%, translation error = 4.14%, and execution time = 7.49 s. Only a little difference between the final motion estimates is observed, but the algorithm is more than twice as fast when exploiting the coarse-to-fine strategy.

5.3 Real Data

In this section, we provide an example with real data. A trinocular stereo system mounted on our mobile vehicle is used to take images of a chair scene (the scene is static but the robot moves). We show in Figure 17 two images taken by the first camera from two different positions. The displacement between the two positions is about 4 degrees in rotation and 100 millimeters in translation. The chair is about 3 meters from the mobile vehicle. The curve-based trinocular stereo algorithm developed in our laboratory (Robert and Faugeras 1991) is used to reconstruct the 3-D frames corresponding to the two positions. There are 36 curves and 588 points in the first frame, and 48 curves and 763 points in the second frame. We show in the upper row of Figure 18
Fig. 17. Images of a chair scene taken by the first camera from two different positions.

Fig. 18. Superposition of two 3-D frames before and after registration: front and top views.

the front view and the top view of the superposition of the two 3-D frames. The curves in the first frame are displayed in solid lines while those in the second frame are in dashed lines. We apply the algorithm to the two frames. The algorithm converges after 12 iterations. It takes in total 32.5 seconds on a SUN 4/60 workstation, and half of the time is spent in the first iteration (so we could speed up the process by setting the initial value D_max^0 to a smaller one). The final motion estimate is

[· × 10^-3, · × 10^-2, · × 10^-3]^T,
Fig. 19. The first triplet of images of a rock scene.
Fig. 20. The second triplet of images of a rock scene.

[· × 10, · × 10, · × 10^2]^T,

where r is in radians and t is in centimeters. The motion change is: δr = 0.78% and δt = 0.53%. The result is shown in the lower row of Figure 18, where we have applied the estimated motion to the first frame. Excellent registration is observed for the chair. The registration of the border of the wall is a little bit worse because more error has been introduced during the 3-D reconstruction, for it is far away from the cameras. Now we exploit the coarse-to-fine strategy. As before, we do coarse matching in the first five iterations by sampling evenly one out of every five points on the curves in the first frame, followed by fine matching using all points. The algorithm converges after 12 iterations and yields exactly the same motion estimate as when only doing fine matching. The execution time, however, decreases from 32.5 seconds to 10.5 seconds, about three times faster. If we now sample evenly one out of every ten points on the curves in the first frame, and do coarse matching in the first five iterations and fine matching in the subsequent ones, the algorithm converges after 13 iterations (one iteration more), and the final motion estimate is

[· × 10^-3, · × 10^-2, · × 10^-3]^T,
[· × 10, · × 10, · × 10^2]^T,

which is almost the same as the one estimated using all points directly. The motion change is: δr = ·% and δt = 0.50%. The execution time is now 8.8 seconds.

6 Experimental Results with Surfaces

We provide in this section two examples. In the first example, two 3-D frames of a rock scene are reconstructed by a correlation-based stereovision system. They are first registered manually. Then we want to see the limit of our algorithm by using different initial estimates. The second example shows the registration of two range images of a head figure.
6.1 A Rock Scene

We show in Figure 19 and Figure 20 two triplets of images of a rock scene. The stereo rig is about 6 meters from the scene. The two positions differ by 30 degrees in rotation and 3.75 meters in translation. The correlation-based stereo system reconstructs points for the first position and points for the second position. For experimental purposes,
we have taken two similar triplets of images having marks put on the rocks. From these marks, we are able to manually compute the displacement between the two positions. This result is shown in Figure 21 (see color figure section), where the first map is drawn in quadrangles, and the second as a grayed surface. The registration is reasonable. One can observe that many points are only visible from one position. In the sequel, we vary the initial motion estimate, run our algorithm on the two frames, and then compare the results obtained by our algorithm with the result obtained manually. Note that the two frames are now expressed in the same coordinate system by applying the manual estimate to the first frame. Thus, the final estimate of the displacement between the two frames is expected to be zero, and the estimate given by the algorithm is directly the motion error with respect to the manual estimate. We have done several tests, and three of them are shown in the following. The initial estimate will be represented by a 6-vector: the first three elements constitute the r vector, and the last three, the t vector.

We first set the initial estimate to [0.0, 0.0, 0.35, 0.5, -2.0, 0.2]^T (i.e., a rotation of 20 degrees and a translation of 2.07 meters). The difference between the two frames corresponding to this estimate is shown in Figure 22 (see color figure section). After 40 iterations, the motion estimate given by our algorithm is

[· × 10^-3, · × 10^-2, · × 10^-3, · × 10^-2, · × 10^-2, · × 10^-2]^T.

Thus, there is a difference of 0.86 degrees in rotation and 5.66 centimeters in translation from the manual estimate. The result after the registration is shown in Figure 23 (see color figure section). We see that even when the initial estimate is very different from the real one, we still obtain satisfactory results. After several more iterations, we obtain a better result. What happens if we increase further the difference between the initial and final estimates?
The initial estimate in this test is [0.0, 0.0, 0.35, -0.5, -2.5, 0.2]^T (i.e., a rotation of 20 degrees and a translation of 2.56 meters). The difference between the two frames corresponding to this estimate is shown in Figure 24 (see color figure section). After 40 iterations, the motion estimate given by our algorithm is

[· × 10^-2, · × 10^-2, · × 10^-1, · × 10^-1, · × 10, · × 10^-1]^T.

The result is mediocre, as shown in Figure 25 (see color figure section), but it is better than the initial estimate. If we continue, the result shows some further improvement. After 80 iterations, the motion estimate is

[· × 10^-2, · × 10^-3, · × 10^-3, · × 10^-2, · × 10^-2, · × 10^-2]^T.

The difference with the manual estimate is 0.92 degrees in rotation and 6.18 centimeters in translation, which is reasonably small, as shown in Figure 26 (see color figure section).

Up to now, the tests we have carried out all involve a rotation around an axis perpendicular to the ground plane. What happens if the vehicle is found on two different slopes (e.g., the vehicle scrambles over a pile of rocks)? Here is an example. The initial estimate is [0.35, 0.0, 0.17, -0.5, -2.5, 0.2]^T. Thus, there is a rotation of 20 degrees with respect to the ground plane, and a rotation of 10 degrees around an axis perpendicular to the ground plane. The translation between the two views is 2.56 meters. The difference between the two frames corresponding to this estimate is shown in Figure 27 (see color figure section). After 40 iterations, the motion estimate is

[· × 10^-2, · × 10^-4, · × 10^-3, · × 10^-3, · × 10^-2, · × 10^-3]^T.

The difference with the manual estimate is 0.65 degrees in rotation and 2.87 centimeters in translation. The registration result is quite good, as shown in Figure 28 (see color figure section).

6.2 A Head Figure

The proposed algorithm is used by Chen for range image registration (Chen 1992). One modification he has made concerns the closest point search procedure.
He uses the technique proposed by Chen and Medioni (1992) (see Section 8). One example is shown in this section. Figure 29a and Figure 29d show two range images of a head figure. For display purposes, they are shown in shaded intensity image form. The coordinate system is defined as follows: the origin is at the center of the image, the x-axis is parallel to the columns of the images (unit in pixels), the y-axis is parallel to the rows (unit in pixels), and the z-axis points out of the paper (unit in grey levels). The two images differ by [·, 0, 0, 0, 0, 0]^T. Instead of using all points, about 150 points on a regular grid are chosen from the first image, as shown
Fig. 29. A head figure. (a) First view; (b) result after one iteration; (c) result after five iterations; (d) result after 39 iterations.

in Figure 29a as small circles overlaid on the original image. The initial motion estimate is [0.0873, 0, 0, 0, 0, 0]^T, i.e., a difference of 15 degrees in rotation from the real transformation. The result after one iteration is shown in Figure 29b, where the points from the first image after transformation (in circles) are projected onto the second image. The result is very bad. After five iterations, the estimated transformation is already reasonable, as shown in Figure 29c. The algorithm converges after 39 iterations, yielding the result shown in Figure 29d. The corresponding motion estimate is [·, ·, .0008, ·, ·, ·]^T. The error is less than 0.3 degrees in rotation and 0.2 pixels in translation.

7 Discussions

7.1 About Complexity

As described earlier, each iteration of our algorithm consists of four main steps. The first is to find closest points, at an expected cost of O(m log n), where m and n are the numbers of points in the first and second frames, respectively. The second is to update the matches recovered in the first step, at a cost of O(m). The third step is to compute the 3-D motion, also at a cost of O(m). The last step is to apply the estimated motion to all points in the first frame, at a cost of O(m). Thus the total complexity of our algorithm is O(m log n). In Zhang (1992a), we show that our algorithm has a lower computational cost than the string-based curve matching algorithms, e.g., Schwartz and Sharir (1987).

7.2 About Convergence

Convergence is always an important issue for an iterative procedure. Our algorithm cannot be guaranteed to reach the global minimum. Although we have observed, given a reasonable starting point, good convergence of our algorithm in the experimental sections, we are not able to show that it converges monotonically to a local minimum. This is unlike the algorithm of Besl and McKay (1992) (see Section 8), which always converges monotonically to a local minimum. The difference is that p_i in our objective function (2) can take a value of either one or zero depending on the situation. As will be clear, however, our algorithm is well-behaved. Let us make a thorough investigation of each iteration. As described in Section 3.2, only points whose distances to their closest points in the second frame are less than D_max^(I-1) are retained as potential matches. The mean squared error d_closest^I of these matches, given by

d_closest^I = (1 / Σ_{i=1}^m p_i) Σ_{i=1}^m p_i ||R^(I-1) x_i + t^(I-1) - y_i||^2,

is upper-bounded by D_max^(I-1), i.e., d_closest^I ≤ D_max^(I-1). Then a statistical analysis of distances is carried out, and a new distance threshold D_max^I is computed. The pairings whose distances are greater than D_max^I are discarded, i.e., their p_i's are set to zero. Thus the mean squared error d_update^I of the updated matches, given by

d_update^I = (1 / Σ_{i=1}^m p_i) Σ_{i=1}^m p_i ||R^(I-1) x_i + t^(I-1) - y_i||^2,

is less than d_closest^I, i.e., d_update^I ≤ d_closest^I. We have of course d_update^I ≤ D_max^I. The least-squares technique described in Section 3.4 is applied to the remaining matches, and a new motion estimate (R^I, t^I) is available.
Let

d_lsq^I = (1 / Σ_{i=1}^m p_i) Σ_{i=1}^m p_i ||R^I x_i + t^I - y_i||^2.

We always have d_lsq^I ≤ d_update^I, because if d_lsq^I > d_update^I, then the zero motion (R^I = I, t^I = 0) would yield a smaller mean squared error, which contradicts the hypothesis. Thus we have

d_lsq^I ≤ d_update^I ≤ min(d_closest^I, D_max^I), and d_closest^I ≤ D_max^(I-1).

Unfortunately, we do not have the inequality d_closest^(I+1) ≤ d_lsq^I. Indeed, d_closest^(I+1) contains two parts. The first consists of the x_i's which are also contained in d_lsq^I. We can easily show that this part is always decreasing. The second part consists of the x_i's which are not contained in d_lsq^I, but whose distances to their closest points are less than D_max^I. The combination of the two parts is not necessarily less than d_lsq^I. As is clear from the above discussion, the objective function is upper-bounded by D_max. As the registration becomes better and better, D_max in general becomes smaller and smaller, but it may occasionally become bigger. In order to ensure a monotonic decrease of D_max, we must impose D_max^I ≤ D_max^(I-1) after computing D_max^I as described in Section 3.3. We have done this and rerun the algorithm with the synthetic and real data presented in Section 5, and exactly the same results have been obtained. We have also rerun the algorithm with the data presented in Section 6. For test 1, the estimate after 40 iterations is

[· × 10^-3, · × 10^-2, · × 10^-3, · × 10^-2, · × 10^-2, · × 10^-2]^T.

There is a difference of 0.11 degrees in rotation and 0.86 centimeters in translation. For test 2, we obtained the same motion estimate after 40 iterations. The estimate after 80 iterations is

[· × 10^-3, · × 10^-3, · × 10^-2, · × 10^-2, · × 10^-2, · × 10^-2]^T.

There is a difference of 0.37 degrees in rotation and 3.28 centimeters in translation. For test 3, the estimate after 40 iterations is

[· × 10^-2, · × 10^-3, · × 10^-3, · × 10^-3, · × 10^-2, · × 10^-2]^T.

There is a difference of 0.24 degrees in rotation and 1.73 centimeters in translation.
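To make the bookkeeping of this section concrete, here is a minimal, illustrative sketch of one iteration in Python; it is not the paper's implementation. Closest points are found by brute force rather than with a k-d tree, and the statistical update of D_max is assumed here to be a simple μ + 3σ rule standing in for the analysis of Section 3.3; the min() enforces the monotonic decrease D_max^I ≤ D_max^(I-1) discussed above.

```python
import math
import statistics

def transform(R, t, p):
    # apply a rigid motion: R is a 3x3 nested list, t a 3-vector
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def closest(p, model):
    """Brute-force closest sample point (the paper uses a k-d tree,
    giving O(m log n) per iteration instead of O(m n))."""
    return min(model, key=lambda q: math.dist(p, q))

def icp_iteration(data, model, R, t, dmax):
    """One iteration of the point matching loop: find closest points
    within dmax, re-estimate dmax from the distance statistics, and
    enforce the monotonic decrease of dmax."""
    pairs, dists = [], []
    for x in data:
        xp = transform(R, t, x)
        y = closest(xp, model)
        d = math.dist(xp, y)
        if d < dmax:
            pairs.append((x, y))
            dists.append(d)
    mu = statistics.mean(dists)
    sigma = statistics.pstdev(dists)
    new_dmax = min(dmax, mu + 3.0 * sigma)  # assumed mu + 3*sigma rule
    kept = [(x, y) for (x, y), d in zip(pairs, dists) if d < new_dmax]
    return kept, new_dmax
```

The kept pairs would then feed the least-squares motion estimation of Section 3.4, and the loop repeats with the new, never-increasing threshold.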
These differences are sufficiently small compared with the resolution of the data (about 5 centimeters). In Figure 30, two graphs are shown. The first plots the evolution of the mean distance (i.e., the objective function) versus iteration number. The second plots the evolution of the number of matches after updating versus iteration number (one example has already been given in the last row of Table 1). The data presented in Section 6 have been used. Note that there are two curves for test 2: "Test 2" for iterations from 1 to 40, and "Test 2bis" for iterations from 41 to 80. About one fourth (17749 points) of the points in the first view have been used. From Figure 30a, we see that the mean distance decreases towards 0.04 meters in all three tests. These curves confirm that our algorithm is well-behaved. As shown in Figure 30b, the number of matches varies continuously through the iterations and finally steadies.

7.3 About Simplifications

For computational reasons, we have made two simplifications. The first is that the non-symmetric matching criterion (2) is used instead of the symmetric one (1). The second is that the approximate distance metric (4) is used, which will be discussed in Section 7.4. The symmetric matching criterion (1) has in fact also been implemented for curves. Table 4 gives a comparison of the results using the two criteria. The synthetic data in Section 5 are used. Different levels of Gaussian noise are added. Ten iterations are applied in each case. Rotation errors, translation errors, and execution times are shown, each being the average of ten tries. The algorithm using the symmetric criterion yields better motion estimates than that using the non-symmetric one. This is expected because the data in the two frames both contribute to the motion estimation and neither of the frames prevails over the other. On the other hand, the execution time using the symmetric criterion is twice as long. We have also carried out an experiment with the real data described in Section 5.
The algorithm using the symmetric criterion converges after 12 iterations, yielding a motion estimate

[· × 10^-3, · × 10^-2, · × 10^-3]^T,
[· × 10, · × 10, · × 10^2]^T.

The difference compared with that using the non-symmetric criterion is 0.6% in both rotation and translation. The execution time is 70.2 seconds on a SUN 4/60 workstation, about twice as long. Thus in time-critical applications the non-symmetric matching criterion is preferred.

Fig. 30. Evolution of (a) the mean distance and (b) the number of matches versus iteration number.

Table 4. Comparison between the matching criteria (1) and (2) (columns: standard deviation; rows: rotation error, translation error, and execution time for each criterion).

7.4 About Sampling

As described earlier, the algorithm developed is based on the use of a simplified, instead of real, definition of the distance between a point and a shape (see Equation 4). That is, we use the minimum of all distances from a given point to each sample point of the shape. Different samplings of a shape (even if the approximation error is negligible) do affect the final estimate of the motion. Take a simple example of curves, as shown in Figure 31. The curve consists of two line segments (Figure 31a). The sampling in the first frame consists of three points, as indicated by the crosses in Figure 31a. We have two samplings in the second frame. The first sampling consists of three points, as indicated by dark dots, and the second sampling consists of five points, obtained by adding two additional ones (indicated by empty dots) to the first sampling, as shown in Figure 31a. The motion result between the two frames with the first sampling is shown in Figure 31b, and that with the second sampling, in Figure 31c. Clearly, more samples give better results. To solve the problem resulting from sampling, we should ideally use the real distance definition (Equation 3), and use the real closest points instead of the closest sample points. However, we then lose the efficiency achieved with sample points. One possible improvement, as proposed in Besl and McKay (1992), could be the following: first, create a piecewise-simplex (line segments or triangles) approximation of the shape in the second frame (e.g., the Delaunay triangulation from the sample points (Faugeras et al. 1990)). Then, given a point in the first frame, a pure Newton minimization procedure can be used to find the real closest point, starting with the closest sample point. There is an easy way to overcome the sampling problem while maintaining the efficiency of the algorithm. It consists in simply increasing the number of sample points through interpolation. The more sample points there are, the less the sampling will affect the final motion estimate.
However, this causes two problems: an increase in the memory required and an increase in the search time (because we also increase the size of the k-d tree). Thus a tradeoff must be found. From the experiments we have carried out, we have obtained satisfactory results using the closest sample points directly, because the sample points are sufficiently dense.

7.5 Uncertainty

The importance of explicitly estimating and manipulating uncertainty is now well recognized by the computer vision and robotics community (Blostein and Huang 1987; Matthies and Shafer 1987; Kriegman et al. 1989; Ayache and Faugeras 1989; Szeliski 1990). This is extremely important when the available data have different uncertainty distributions, for example in stereo, where uncertainty increases significantly with depth. We have shown in Zhang and Faugeras (1991) that accounting for uncertainty in motion estimation (via, e.g., a Kalman filter) yields much better results.

For computational tractability and as a reasonable approximation, the uncertainty in a 3-D point reconstructed from stereo is usually modeled as Gaussian; that is, it is characterized by a 3-D position vector and a 3 x 3 covariance matrix. The algorithm for motion computation described in Section 3.4 is very efficient; however, it assumes each point has equal uncertainty, and unfortunately it is difficult to extend it to take uncertainty fully into account. To fully take uncertainty into account, we can use for example Kalman filtering techniques, which have been widely and successfully applied to quite a number of vision problems (Zhang and Faugeras 1992a). However, there would be a significant increase in computation. The method described below can partially take uncertainty into account. Indeed, we can associate with each pairing between the two frames a scalar weighting factor w_i. Instead of minimizing Equation 5, we compute R and t by minimizing the following weighted objective function:
23 Iterative Point Matching for Registration of Free-Form Carves and Surfaces 141 X (b) (c) Fig. 31. Influence of curve sampling on motion estimation 1 N 5r(R't) = N/~I= willrxi +t-yill 2. (8) The quaternion or dual quaternion method can still be used to compute efficiently R and t. The weighting factor wi should be related to the uncertainty of Rxi -t-t --Yi. Let Axl, Ayl, and Ai be the covariance matrices of xi, yi, and Rxi + t - Yi. Axi and Ayl are given by the sensing system. Ai is then computed as Ai = RAxiR r + Ayi, where R takes the rotation matrix computed during a previous iteration as an approximation. The trace of Ai roughly indicates the magnitude of the uncertainty of Rxi q- t - Yi. Therefore, we choose wi as 1 1 wi - tr(ai) -- tr(axi) + tr(ayl)' Thus, the weighting factor is independent of the motion. We have not implemented this method in the current version. The mechanism for updating the matching, described in Section 3.3, has been designed without considering the different uncertainties in the data points. The same threshold Dmax has been used for all points. If the uncertainties in the data points and that in the motion are modeled, one would like to use a pruning criterion that better reflects the sources uncertainty. 3 The idea is the following, similar to that used in Zhang and Faugeras (1992a) for matching 3-D line segments. Let the point under consideration in the first view be x with covariance matrix Ax. Let the points in the second view be {yi} with covari- ance matrix {Ayi}. Let the motion relating the two views be d with covariance matrix Ad. The vector d could be [r r, tv] r. To be general, we define two functions relating d to the rotation matrix R and the translation t: R = f(d) and t = g(d). The (squared) Mahalanobis distance can be used to take into account the uncertainty. 
It is defined by

    d²_M = (f(d) x + g(d) − y_i)ᵀ Λ_i⁻¹ (f(d) x + g(d) − y_i),

which can be interpreted as the squared Euclidean distance weighted by the uncertainty measure. Λ_i is the covariance matrix of f(d) x + g(d) − y_i and is given, up to first order, by

    Λ_i = f(d) Λ_x f(d)ᵀ + Λ_{yi} + J_d Λ_d J_dᵀ,

where J_d is the Jacobian ∂[f(d) x + g(d)]/∂d. Now the closest point to x is the point y_i having the smallest distance d²_M. The reader is referred to Zhang and Faugeras (1992a) for more details on the Mahalanobis distance.

As described in Section 3.3, we do not want simply to match the closest point y_i with x. In order for y_i and x to be matched, the Mahalanobis distance d²_M must be less than some threshold ε. As d²_M follows a χ² distribution with three degrees of freedom, we can choose an appropriate ε, for example 7.81 for a probability of 95%.

In summary, if uncertainty is considered, we can replace the first two steps of the algorithm described in Section 3.5 by the following:

1. Find, for each point x in the first view, the point y_i having the smallest Mahalanobis distance d²_M.
2. Discard the pairings {(x_i, y_i)} whose d²_M's are larger than the threshold ε.

7.6 About Large Motion

Because of the local property of the matching criterion used, our algorithm converges to the closest minimum. It is thus best applied in situations where the motion is small or approximately known. In the case of large motion, the algorithm can be adapted in two different ways. The first way is to apply first the global methods cited in the introductory section to obtain an estimate, which can then be refined by applying the algorithm described in this paper. The second way is to obtain a set of initial registrations by sampling the 6-D motion space, and then to apply our algorithm to each initial registration. The final estimate corresponding to the global minimum error is retained as the optimal one. A similar method has been used in Besl and McKay (1992) to solve the object recognition problem.

7.7 Multiple Object Motions

In a dynamic environment, there is usually more than one moving object. It is important to have a reliable algorithm for segmenting the scene into objects using motion information; however, little work has been done so far in this direction. We have proposed in Zhang and Faugeras (1992b) a framework to deal with multiple object motions. It consists of two levels. The first level deals with the tracking of 3-D tokens from frame to frame and the estimation of their motions; the processing is completely parallel for each token. The second level groups tokens into objects based on the similarity of motion parameters: tokens coming from a single object should have the same motion parameters. In Zhang and Faugeras (1992b) the tokens used are 3-D line segments, and the experiments have shown that the framework is flexible and powerful. This framework is used in Navab and Zhang (1992) to solve multiple object motions through motion and stereo cooperation.
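The two-step uncertainty-based matching summarized at the end of Section 7.5 can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: it drops the motion-uncertainty term J_d Λ_d J_dᵀ for brevity, the 3 x 3 matrix helpers are written out in pure Python, and all function names are invented.

```python
CHI2_3DOF_95 = 7.81  # 95% quantile of the chi-square distribution, 3 d.o.f.

def mat_vec(A, v): return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
def mat_mul(A, B): return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
def transpose(A): return [[A[j][i] for j in range(3)] for i in range(3)]
def mat_add(A, B): return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def inverse3(A):
    # 3x3 inverse via the adjugate; assumes A is non-singular
    a, b, c = A[0]; d, e, f = A[1]; g, h, i = A[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

def mahalanobis2(x, Cx, y, Cy, R, t):
    # residual r = Rx + t - y and its first-order covariance
    # C = R Cx R^T + Cy   (the motion-uncertainty term is dropped here)
    r = [mat_vec(R, x)[i] + t[i] - y[i] for i in range(3)]
    C = mat_add(mat_mul(mat_mul(R, Cx), transpose(R)), Cy)
    Cr = mat_vec(inverse3(C), r)
    return sum(r[i] * Cr[i] for i in range(3))

def match(x, Cx, candidates, R, t, gate=CHI2_3DOF_95):
    # step 1: candidate with the smallest Mahalanobis distance
    # step 2: accept the pairing only if it passes the chi-square gate
    d2, j = min((mahalanobis2(x, Cx, y, Cy, R, t), j)
                for j, (y, Cy) in enumerate(candidates))
    return (j, d2) if d2 <= gate else (None, d2)
```

With an identity motion and isotropic covariances, a nearby candidate passes the gate while a distant one is rejected, mirroring the discard step above.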
Now if we replace 3-D line segments by 3-D curves and estimate 3-D motion for each curve, the general framework is still applicable. For surfaces, we need to over-segment them into patches such that each patch belongs to only one object. We can then compute motion for each patch and finally group these patches into objects according to motion similarity.

8 Highlights With Respect to Previous Work

As mentioned in the introduction, several pieces of similar but independent work have recently been published, including Besl and McKay (1992), Chen and Medioni (1992), Menq et al. (1992), and Champleboux et al. (1992). The common idea is: iteratively match points in one set to the closest points in another set, given that the transformation between the two sets is small. However, as each algorithm was developed in its own context, different techniques have been used.

One of the main differences lies in the matching criterion. Refer to Equation 2. In our algorithm, p_i can take the value either 1 or 0, depending on whether the point in the first set has a reasonable match in the second set or not. This is determined by the maximum tolerable distance Dmax, which, in turn, is set in a dynamic way by analyzing the statistics of the distances, as described in Section 3.3. Therefore, our algorithm is capable of dealing with the following situations:

- Gross outliers in the data. The outliers are automatically discarded in the matching and thus have no effect on the final motion estimate.
- Appearance and disappearance, in which curves in one set do not appear in the other set. This is usually the case in navigation, where objects may enter or leave the field of view.
- Occlusion. An object may occlude other objects, and it may itself be occluded. This is common in both object recognition and navigation.

Besl and McKay (1992) have developed an algorithm for object recognition and location, where a portion of a given model shape is assumed to be observed. In their algorithm, p_i always takes the value 1.
Thus, their algorithm can only deal with the case in which the first set is a subset of the second set; it is powerless in the situations described above. The quaternion algorithm is used to estimate the transformation between the two sets. The singular-value-decomposition algorithm by Haralick et al. (1989) is suggested to replace it in order to identify outliers.

Chen and Medioni (1992) have developed an algorithm for registering multiple range images in order to create a complete model of an object. About one hundred points on a regular grid in the first range image, called control points, are used in order to save computation time. Only points in smooth areas are selected as control points, in order to find a reliable closest point by their method (see below). The method for motion estimation is not specified. Occlusion and outlier issues are not addressed.

Menq et al. (1992) have developed an algorithm for registering range data points with a CAD model for inspection purposes. In their algorithm, p_i always takes the value 1, too. Occlusion and outlier issues are not addressed. The transformation is estimated by solving a set of nonlinear equations.

Champleboux et al. (1992) have developed an algorithm for the registration of two sets of 3-D points obtained with a laser range finder. Assuming that most (about 99%) of the points in one set match surfaces in the other, an iterative nonlinear least-squares technique (the Levenberg-Marquardt algorithm) is applied to find the rigid transformation between the two sets. When the iterative process converges, the points whose distances to the other set are larger than a prefixed threshold are considered outliers and are rejected. Some more iterations are then applied to the retained points.

Another main difference is in the procedure for closest-point computation. In our applications, dense point sets are available, which are directly sorted in a k-d tree for efficient closest-point search. In Besl and McKay (1992), several methods are proposed to compute the closest point on a geometric entity (point set, curve, or surface) to a given point. In Chen and Medioni (1992), the surface normal for each control point in the first set is computed; the closest point is found, through an iterative process, at the intersection of the surface normal with the digital surface in the second frame.
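The normal-shooting step attributed above to Chen and Medioni (1992) can be illustrated with a small sketch: the closest point to a control point is approximated by intersecting the line through the point along its surface normal with a triangulated version of the second surface, keeping the intersection nearest to the point. The triangulation, the Moller-Trumbore-style ray/triangle routine, and all names below are illustrative choices, not code from the cited papers.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, d, tri, eps=1e-12):
    # Moller-Trumbore style intersection of the line orig + t*d with a
    # triangle; returns the signed parameter t, or None if there is no hit
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                     # line parallel to the triangle plane
    s = sub(orig, v0)
    u = dot(s, p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(d, q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, q) / det

def normal_shoot(point, normal, mesh):
    # intersect the normal line (both directions) with every triangle and
    # keep the hit nearest to the control point
    hits = []
    for tri in mesh:
        t = ray_triangle(point, normal, tri)
        if t is not None:
            hits.append(t)
    if not hits:
        return None
    t = min(hits, key=abs)
    return tuple(point[i] + t * normal[i] for i in range(3))
```

For a control point hovering above a flat triangulated patch, the routine returns its orthogonal projection onto the patch; when the normal misses the surface, no pairing is produced.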
In Menq et al. (1992), as the model is represented by a set of parametric surface patches, the closest point is determined by solving two nonlinear equations. In Champleboux et al. (1992), the first set of 3-D points is converted into an octree-spline, which is a classical octree decomposition of the work volume, followed by a further subdivision near surface points. The Euclidean distances from the nodes to the surface are computed in an exhaustive manner and saved in the octree-spline. This allows them to quickly compute approximate Euclidean distances from points to the surface.

9 Conclusions

We have described an algorithm for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. We have used the assumption that the motion between two frames is small or approximately known, a realistic assumption in many practical applications including visual navigation. A number of experiments have been carried out and good results have been obtained. Our algorithm has the following features:

- It is simple. The reader can easily reproduce the algorithm.
- It is extensible. More sophisticated strategies such as figural continuity can easily be integrated in the algorithm.
- It is general. First, the representation used, i.e., point sets, is general enough to represent arbitrary shapes of the type found in practice. Second, the ideas behind the algorithm are applicable to (many) other matching problems. The algorithm can easily be adapted to solve, for example, 2-D curve matching.
- It is efficient. The most expensive computation is the process of finding closest points, which has a complexity of O(N log N). Exploiting the coarse-to-fine strategy described in Section 4.3 considerably speeds up the algorithm with only a small change in the precision of the final estimate.
- It is robust to gross errors and can deal with appearance, disappearance, and occlusion of objects, as described in Section 8.
This is achieved by analyzing dynamically the statistics of the distances, as described in Section 3.3.
- It yields an accurate estimate, because all available information is used in the algorithm.
- It does not require any preprocessing of the 3-D point data, such as smoothing. The data are used as they are in our algorithm; that is, there is no approximation error.⁴ The registration results do not depend on any derivative estimation (which is sensitive to noise), in contrast with many other feature-based or string-based matching methods. However, imposing the orientation consistency in matching (Section 3.2) increases the convergence range of the algorithm.
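As an illustration of the dynamic-threshold idea, the following sketch shows one way the statistics of the distances can drive the pruning threshold Dmax. The exact update rules are those of Section 3.3; the tiers below (mean plus 3σ, 2σ, or σ depending on how good the registration currently looks, on the scale set by the parameter D) are an assumption modelled on the paper's description, not a verbatim transcription.

```python
import statistics

def adaptive_dmax(distances, D):
    """Adaptive pruning threshold in the spirit of Section 3.3.

    distances: point-to-closest-point distances of the current iteration
    D: expected distance when registration is good (resolution-dependent,
       e.g. 10 centimeters for the terrain maps in the paper)

    The tier boundaries below are assumptions: the smaller the mean
    distance (i.e. the better the registration looks), the tighter the
    statistical gate that is applied.
    """
    mu = statistics.mean(distances)
    sigma = statistics.stdev(distances)
    if mu < D:            # registration is quite good
        return mu + 3.0 * sigma
    elif mu < 3.0 * D:    # registration is still fair
        return mu + 2.0 * sigma
    elif mu < 6.0 * D:    # registration is poor
        return mu + sigma
    else:                 # very bad: fall back to a robust statistic
        return statistics.median(distances)

def prune(pairs, dmax):
    # keep only the pairings closer than the adaptive threshold;
    # each pair is (point_in_frame_1, point_in_frame_2, distance)
    return [(x, y, d) for (x, y, d) in pairs if d < dmax]
```

Because the threshold is recomputed from the current distance statistics at every iteration, gross outliers are discarded automatically as the registration improves.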
Our algorithm can only partially take the uncertainty of the measurements into account. To take the uncertainty fully into account, we should replace the quaternion or dual quaternion algorithm by other methods such as Kalman filtering techniques. This would cause a significant increase in the computational cost of the algorithm.

In our algorithm, one parameter, the parameter D, needs to be set. It indicates when the registration can be considered good, and it has an impact on the convergence rate, as described in Section 4.1. This is a limitation of our algorithm. In our implementation, D is related to the resolution of the available data. This method works well for all the experiments we have carried out, and the result of our algorithm is not sensitive to the value of D: instead of 10 centimeters, we can set D to 8 or 12 centimeters, and the same results are obtained. However, a better method probably exists. One question raised is: can the parameter D be eliminated? The parameter D has been introduced out of concern that the initial estimate could be mediocre. If we have faith in the result provided by instruments such as the odometric and inertial systems on the mobile vehicle, then we can directly use 3σ (the first case in Section 3.3) to update the matching; the parameter D is then not necessary.

Our algorithm converges (not necessarily monotonically) to the closest local minimum, and thus is not appropriate for solving large-motion problems. Two possible extensions of the algorithm to deal with large motions have been described in Section 7.6: coupling with a global method, or sampling the motion space.

The proposed algorithm works better on rugged terrain than on flat ground. This is because, for a flat ground, there are many local minima which are very close to each other. Due to the local technique we exploit, the final motion estimate will in this case depend essentially on the initial one.
By the way, primitive-based methods will not work either: no salient features can be extracted.

A Search for Closest Points With k-d Trees

Several methods exist to speed up the search process for closest points, including bucketing techniques and k-d trees (short for k-dimensional binary search trees). We have chosen k-d trees, because the data points we have are sparse in space. It is not efficient to use bucketing techniques, because only a few buckets would contain many points, and many others nothing.⁵

The k-d tree is a generalization of bisection in one dimension to k dimensions (Preparata and Shamos 1986). In our case, k = 3. A 3-D tree is constructed as follows. First choose a plane parallel to the yz-plane, passing through a data point P, to cut the whole space into two (generalized) rectangular parallelepipeds⁶ such that there are approximately equal numbers of points on either side of the cut. We obtain a left son and a right son. Next, each son is further split by a plane parallel to the xz-plane such that there are approximately equal numbers of points on either side of the cut, and we obtain a left grandson and a right one. We continue splitting each grandson by choosing a plane parallel to the xy-plane, and so on, letting the direction of the cutting plane alternate at each step between the yz-, xz-, and xy-planes. This splitting process stops when we reach a rectangular parallelepiped not containing any point; the corresponding node is a leaf of the tree. A k-d tree can be constructed in O(n log n) time with O(n) storage, both of which are optimal (Preparata and Shamos 1986).

We now investigate the use of the 3-D tree in searching for closest points. The standard way of using k-d trees is to find all points whose distances to x are within a given value. In our case, we want to find the closest point. One possibility is to use the standard technique to find all points within a given distance, and then to find the point having the smallest distance.
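A minimal Python sketch of the 3-D tree construction just described, together with a bounded closest-point query in the spirit of the recursive SEARCH procedure developed in the following paragraphs (the distance bound Dmax shrinks whenever a better point is found). The dictionary-based node layout and the names are illustrative choices.

```python
import math

def build_kdtree(points, depth=0):
    """3-D tree: the cutting plane cycles through the yz-, xz-, and
    xy-planes; the median point along the current axis splits the rest."""
    if not points:
        return None                      # empty cell: a leaf of the tree
    axis = depth % 3
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def closest_point(root, x, dmax):
    """Bounded closest-point query: returns (point, distance), where the
    point is None if nothing lies within dmax of x.  The bound shrinks
    whenever a better point is found, pruning subtrees early."""
    best = [None, dmax]

    def search(v):
        if v is None:
            return
        c1 = x[v["axis"]]
        c2 = v["point"][v["axis"]]       # c2 was used to cut the space
        if abs(c1 - c2) < best[1]:
            d = math.dist(x, v["point"])
            if d < best[1]:
                best[0], best[1] = v["point"], d
        if c1 - best[1] < c2:            # left half may hold a closer point
            search(v["left"])
        if c2 - best[1] < c1:            # right half may hold a closer point
            search(v["right"])

    search(root)
    return best[0], best[1]
```

A small bound makes the query fast: as Dmax is updated at each iteration of the registration, the searches become cheaper as the two frames converge.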
We have developed a recursive algorithm which allows the given distance to vary; the algorithm is thus more efficient. More formally, a node v of the 3-D tree T is characterized by two items (P(v), t(v)). Point P(v) is the point through which the space is cut into two. The parameter t(v), taking the value 0, 1, or 2, indicates whether the cutting plane is parallel to the yz-, xz-, or xy-plane. Two global variables P and D are used to save the point found and the corresponding distance. They are initialized to −1 and Dmax, respectively. At the output, if P is still −1, this implies that we cannot find any point with distance less than Dmax. The search for the closest point to x is conducted by calling SEARCH(root(T), x) of the following procedure:

input: a point x, a 3-D tree T; two global variables P and D, initialized to −1 and Dmax, respectively.
output: the closest point P and the corresponding distance D.
procedure: SEARCH(v, x)
  if (v == leaf) return;
  c1 = x[t(v)];
  c2 = P(v)[t(v)];   /* c2 has been used to cut the space */
  if (|c1 − c2| < D) and (‖x − P(v)‖ < D) then P ← P(v), D ← ‖x − P(v)‖;
  if (c1 − D < c2) then SEARCH(leftson(v), x);
  if (c2 − D < c1) then SEARCH(rightson(v), x);

Unfortunately, the worst-case search time is O(n^{2/3}) with the 3-D tree method (see Preparata and Shamos 1986, p. 77). Other more efficient algorithms exist, such as a direct access method, but they require much more storage. In practice, we observed good performance with 3-D trees. We found that the search time depends on Dmax: when Dmax is small, the search can be performed very fast. As we update Dmax during each iteration, it becomes quite small after a few iterations.

B Motion Computation Using Dual Number Quaternions

For completeness, we summarize in this appendix the dual number quaternion method described in Walker et al. (1991), which can solve a weighted least-squares problem. We can compute R and t by minimizing the following function

    F(R, t) = (1/N) Σ_{i=1}^{N} w_i ‖R x_i + t − y_i‖²,    (9)

where w_i is the positive weighting factor associated with the pairing between x_i and y_i. We can relate w_i to the uncertainty in x_i and y_i as shown in Section 7.5.

A quaternion q can be considered as being either a 4-D vector [q1, q2, q3, q4]ᵀ or a pair (q̄, q4), where q̄ = [q1, q2, q3]ᵀ. A dual quaternion q̂ consists of two quaternions q and s, i.e.,

    q̂ = q + ε s,    (10)

where a special multiplication rule for ε is defined by ε² = 0. Two important matrix functions of quaternions are defined as

    Q(q) = [ q4 I + K(q̄)   q̄ ;  −q̄ᵀ   q4 ],    (11)

    W(q) = [ q4 I − K(q̄)   q̄ ;  −q̄ᵀ   q4 ],    (12)

where I is the identity matrix and K(q̄) is the skew-symmetric matrix defined as

    K(q̄) = [ 0  −q3   q2 ;  q3   0  −q1 ;  −q2   q1   0 ].

A 3-D rigid motion can be represented by a dual quaternion q̂ satisfying the following two constraints:

    qᵀq = 1  and  qᵀs = 0.    (13)

Thus we still have six independent parameters for representing a 3-D motion. The rotation matrix R can be expressed as

    R = (q4² − q̄ᵀq̄) I + 2 q̄ q̄ᵀ + 2 q4 K(q̄),    (14)

and the translation vector t = p̄, where p̄ is the vector part of the quaternion p given by

    p = W(q)ᵀ s.    (15)

The scalar part p4 of p is always zero. A 3-D vector x is identified with the quaternion (x, 0),⁷ and we shall also use x to represent its corresponding quaternion if there is no ambiguity in the context. It can then easily be shown that

    R x + t = W(q)ᵀ s + W(q)ᵀ Q(q) x.

Thus the objective function (Equation 9) can be written as a quadratic function of q and s:

    F = (1/N) [ qᵀ C1 q + W sᵀ s + sᵀ C2 q + const. ],    (16)

where

    C1 = −2 Σ_{i=1}^{N} w_i Q(y_i)ᵀ W(x_i)
       = −2 Σ_{i=1}^{N} w_i [ K(y_i) K(x_i) + y_i x_iᵀ   −K(y_i) x_i ;  −y_iᵀ K(x_i)   y_iᵀ x_i ],    (17)
    C2 = 2 Σ_{i=1}^{N} w_i [ W(x_i) − Q(y_i) ]
       = 2 Σ_{i=1}^{N} w_i [ −K(x_i) − K(y_i)   x_i − y_i ;  −(x_i − y_i)ᵀ   0 ],    (18)

    W = Σ_{i=1}^{N} w_i,    (19)

    const. = Σ_{i=1}^{N} w_i ( x_iᵀ x_i + y_iᵀ y_i ).    (20)

By adjoining the constraints (Equation 13), the optimal dual quaternion is obtained by minimizing

    F′ = (1/N) [ qᵀ C1 q + W sᵀ s + sᵀ C2 q + const. + λ1 (qᵀq − 1) + λ2 (sᵀq) ],    (21)

where λ1 and λ2 are Lagrange multipliers. Taking the partial derivatives gives

    ∂F′/∂q = (1/N) [ (C1 + C1ᵀ) q + C2ᵀ s + 2 λ1 q + λ2 s ] = 0,    (22)

    ∂F′/∂s = (1/N) [ 2 W s + C2 q + λ2 q ] = 0.    (23)

Multiplying Equation 23 by qᵀ gives λ2 = −qᵀ C2 q = 0, because C2 is skew-symmetric. Thus s is given by

    s = −(1/(2W)) C2 q.    (24)

Substituting this into Equation 22 yields

    A q = λ1 q,    (25)

where

    A = (1/2) [ (1/(2W)) C2ᵀ C2 − C1 − C1ᵀ ].    (26)

Thus the quaternion q is an eigenvector of the matrix A, and λ1 is the corresponding eigenvalue. Substituting the above result back into Equation 21 gives

    F′ = (1/N) (const. − λ1).    (27)

The error is thus minimized if we select the eigenvector corresponding to the largest eigenvalue. Having computed q, the rotation matrix R is computed from Equation 14; the dual part s is computed from Equation 24, and the translation vector t can then be solved from Equation 15.

Acknowledgments

This work was carried out in part in the French CNES program gap. The author would like to thank Olivier Faugeras for stimulating discussions during the work, Steve Maybank for carefully reading the draft version, and Xin Chen for providing the result described in Section 6.2. The author would also like to thank the anonymous reviewers for their suggestions and comments, which helped me improve this paper.

Notes

1. Here we assume the distribution of distances is approximately Gaussian when the registration is good. This has been confirmed by experiments; a typical histogram is shown in Figure 2. More strictly, the χ distribution is a better approximation if we use the sum of squared distances.
As is well known in statistics (central limit theorem), the distribution can be well approximated by a Gaussian if a large number of samples is available. Indeed, we have more than one hundred point matches.
2. The double-precision LINPACK rating for the SUN 4/60 is 1.05 Mflops.
3. I thank one of the reviewers for having raised this discussion.
4. It is certain that errors have been introduced during the reconstruction of the 3-D points, and that they have been propagated into the motion estimate.
5. Another possibility is to apply bucketing techniques in 2-D, for example by projecting all points on the ground or on the image plane of the sensors. We have not compared this technique with the k-d trees.
6. A generalized rectangular parallelepiped is possibly an infinite volume.
7. Note that in Walker et al. (1991) a 3-D vector x is identified with the quaternion (x/2, 0).

References

Arun, K., Huang, T. and Blostein, S.: 1987, Least-squares fitting of two 3-D point sets, IEEE Trans. PAMI 9(5).
Ayache, N. and Faugeras, O. D.: 1989, Maintaining representations of the environment of a mobile robot, IEEE Trans. RA 5(6).
Besl, P. and Jain, R.: 1985, Three-dimensional object recognition, ACM Computing Surveys 17(1).
Besl, P. J.: 1988, Geometric modeling and computer vision, Proc. IEEE 76(8).
Besl, P. J. and McKay, N. D.: 1992, A method for registration of 3-D shapes, IEEE Trans. PAMI 14(2).
Blostein, S. and Huang, T.: 1987, Error analysis in stereo determination of a 3-D point position, IEEE Trans. PAMI 9(6).
Bolles, R. and Cain, R.: 1982, Recognizing and locating partially visible objects: the local-feature-focus method, Int'l J. Robotics Res. 1(3).
Brockett, R.: 1989, Least squares matching problems, Linear Algebra and Its Applications 122/123/124.
Champleboux, G., Lavallée, S., Szeliski, R. and Brunie, L.: 1992, From accurate range imaging sensor calibration to accurate model-based 3-D object localization, Proc. IEEE Conf. Comput. Vision Pattern Recog., Champaign, Illinois.
Chen, X.: 1992, Vision-Based Geometric Modeling, Ph.D. dissertation, Ecole Nationale Supérieure des Télécommunications, Paris, France.
Chen, Y. and Medioni, G.: 1992, Object modelling by registration of multiple range images, Image and Vision Computing 10(3).
Chin, R. and Dyer, C.: 1986, Model-based recognition in robot vision, ACM Computing Surveys 18(1).
Faugeras, O. and Hebert, M.: 1986, The representation, recognition, and locating of 3D shapes from range data, Int'l J. Robotics Res. 5(3).
Faugeras, O. D., Lebras-Mehlman, E. and Boissonnat, J.: 1990, Representing stereo data with the Delaunay triangulation, Artif. Intell.
Faugeras, O., Fua, P., Hotz, B., Ma, R., Robert, L., Thonnat, M. and Zhang, Z.: 1992, Quantitative and qualitative comparison of some area and feature-based stereo algorithms, in W. Förstner and S. Ruwiedel (eds), Robust Computer Vision: Quality of Vision Algorithms, Wichmann, Karlsruhe, Germany.
Fua, P.: 1992, A parallel stereo algorithm that produces dense depth maps and preserves image features, Machine Vision and Applications. Accepted for publication.
Gennery, D. B.: 1989, Visual terrain matching for a Mars rover, Proc. IEEE Conf. Comput. Vision Pattern Recog., San Diego, CA.
Goldgof, D. B., Huang, T. S. and Lee, H.: 1988, Feature extraction and terrain matching, Proc. IEEE Conf. Comput. Vision Pattern Recog., Ann Arbor, Michigan.
Grimson, W.: 1985, Computational experiments with a feature based stereo algorithm, IEEE Trans. PAMI 7(1).
Gueziec, A. and Ayache, N.: 1992, Smoothing and matching of 3-D space curves, Proc. Second European Conf. Comput. Vision, Santa Margherita Ligure, Italy.
Haralick, R. et al.: 1989, Pose estimation from corresponding point data, IEEE Trans. SMC 19(6).
Hebert, M., Caillas, C., Krotkov, E., Kweon, I. S. and Kanade, T.: 1989, Terrain mapping for a roving planetary explorer, Proc. Int'l Conf. Robotics Automation.
Horn, B.: 1987, Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America A 4(4).
Horn, B. and Harris, J.: 1991, Rigid body motion from range image sequences, CVGIP: Image Understanding 53(1).
Kamgar-Parsi, B., Jones, J. L. and Rosenfeld, A.: 1991, Registration of multiple overlapping range images: scenes without distinctive features, IEEE Trans. PAMI 13(9).
Kehtarnavaz, N. and Mohan, S.: 1989, A framework for estimation of motion parameters from range images, Comput. Vision, Graphics Image Process. 45.
Kriegman, D., Triendl, E. and Binford, T.: 1989, Stereo vision and navigation in buildings for mobile robots, IEEE Trans. RA 5(6).
Kweon, I. and Kanade, T.: 1992, High-resolution terrain map from multiple sensor data, IEEE Trans. PAMI 14(2).
Liang, P. and Todhunter, J. S.: 1990, Representation and recognition of surface shapes in range images: a differential geometry approach, Comput. Vision, Graphics Image Process. 52.
Matthies, L. and Shafer, S. A.: 1987, Error modeling in stereo navigation, IEEE J. RA 3(3).
Mayhew, J. E. W. and Frisby, J. P.: 1981, Psychophysical and computational studies towards a theory of human stereopsis, Artif. Intell. 17.
Menq, C.-H., Yau, H.-T. and Lai, G.-Y.: 1992, Automated precision measurement of surface profile in CAD-directed inspection, IEEE Trans. RA 8(2).
Milios, E. E.: 1989, Shape matching using curvature processes, Comput. Vision, Graphics Image Process. 47.
Navab, N. and Zhang, Z.: 1992, From multiple objects motion analysis to behavior-based object recognition, Proc. ECAI 92, Vienna, Austria.
Pavlidis, T.: 1980, Algorithms for shape analysis of contours and waveforms, IEEE Trans. PAMI 2(4).
Pollard, S., Mayhew, J. and Frisby, J.: 1985, PMF: a stereo correspondence algorithm using a disparity gradient limit, Perception 14.
Preparata, F. and Shamos, M.: 1986, Computational Geometry: An Introduction, Springer, Berlin, Heidelberg, New York.
Radack, G. M. and Badler, N. I.: 1989, Local matching of surfaces using a boundary-centered radial decomposition, Comput. Vision, Graphics Image Process. 45.
Robert, L. and Faugeras, O.: 1991, Curve-based stereo: figural continuity and curvature, Proc. IEEE Conf. Comput. Vision Pattern Recog., Maui, Hawaii.
Rodríguez, J. J. and Aggarwal, J. K.: 1989, Navigation using image sequence analysis and 3-D terrain matching, Proc. Workshop on Interpretation of 3D Scenes, Austin, TX.
Safaee-Rad, R., Tchoukanov, I., Benhabib, B. and Smith, K. C.: 1991, Accurate parameter estimation of quadratic curves from grey-level images, CVGIP: Image Understanding 54(2).
Sampson, R. E.: 1987, 3D range sensor-phase shift detection, Computer 20.
Schwartz, J. T. and Sharir, M.: 1987, Identification of partially obscured objects in two and three dimensions by matching noisy characteristic curves, Int'l J. Robotics Res. 6(2).
Szeliski, R.: 1988, Estimating motion from sparse range data without correspondence, Proc. Second Int'l Conf. Comput. Vision, IEEE, Tampa, FL.
Szeliski, R.: 1990, Bayesian modeling of uncertainty in low-level vision, Int'l J. Comput. Vision 5(3), 271-301.
Taubin, G.: 1991, Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation, IEEE Trans. PAMI 13(11).
Walker, M. W., Shao, L. and Volz, R. A.: 1991, Estimating 3-D location parameters using dual number quaternions, CVGIP: Image Understanding 54(3).
Walters, D.: 1987, Selection of image primitives for general-purpose visual processing, Comput. Vision, Graphics Image Process. 37(3).
Wolfson, H.: 1990, On curve matching, IEEE Trans. PAMI 12(5).
Zhang, Z.: 1991, Recalage de deux cartes de profondeur denses: l'état de l'art, Rapport VAP de la phase 4, CNES, Toulouse, France.
Zhang, Z.: 1992a, Iterative point matching for registration of free-form curves, Research Report 1658, INRIA Sophia-Antipolis.
Zhang, Z.: 1992b, On local matching of free-form curves, Proc. British Machine Vision Conf., University of Leeds, UK.
Zhang, Z. and Faugeras, O.: 1991, Determining motion from 3D line segments: a comparative study, Image and Vision Computing 9(1).
Zhang, Z. and Faugeras, O.: 1992a, 3D Dynamic Scene Analysis: A Stereo Based Approach, Springer, Berlin, Heidelberg.
Zhang, Z. and Faugeras, O.: 1992b, Three-dimensional motion computation and object segmentation in a long sequence of stereo frames, Int'l J. Comput. Vision 7(3).
Zhang, Z., Faugeras, O. and Ayache, N.: 1988, Analysis of a sequence of stereo scenes containing multiple moving objects using rigidity constraints, Proc. Second Int'l Conf. Comput. Vision, Tampa, FL. Also as a chapter in R. Kasturi and R. C. Jain (eds), Computer Vision: Principles, IEEE Computer Society Press, 1991.
Iterative Point Matching for Registration of Free-Form Curves and Surfaces: Color Figures

Fig. 21. Superposition of the two 3-D maps of a rock scene after a manual registration: front and top views
Fig. 22. Test 1: Superposition of two 3-D maps before registration: front and top views
Fig. 23. Test 1: Superposition of two 3-D maps after registration: front and top views
Fig. 24. Test 2: Superposition of two 3-D maps before registration: front and top views
Fig. 25. Test 2: Superposition of two 3-D maps after 40 iterations: front and top views
Fig. 26. Test 2: Superposition of two 3-D maps after 80 iterations: front and top views
Fig. 27. Test 3: Superposition of two 3-D maps before registration: front and top views
Fig. 28. Test 3: Superposition of two 3-D maps after registration: front and top views
Prototype Pattern
Definition

The Prototype pattern is basically the creation of new instances through cloning existing instances. By creating a prototype, new objects are created by copying this prototype.
Where to use

•When a system needs to be independent of how its objects are created, composed, and represented.
•When adding and removing objects at runtime.
•When specifying new objects by changing an existing object's structure.
•When configuring an application with classes dynamically.
•When trying to keep the number of classes in a system to a minimum.
•When state population is an expensive or exclusive process.
Benefits

•Speeds up instantiation of large, dynamically loaded classes.
•Reduced subclassing.
Drawbacks/consequences

Each subclass of Prototype must implement the Clone operation. This could be difficult with existing classes that have internal objects with circular references or that do not support copying.
Prototype Pattern class-diagram
In the class-diagram above:
•Prototype declares an interface for cloning itself.
•ConcretePrototype implements an operation for cloning itself.
•Client creates a new object by asking a prototype to clone itself.
You could use a PrototypeManager to keep track of the different types of prototypes. The PrototypeManager maintains a list of clone types and their keys. The client, instead of writing code that invokes the "new" operator on a hard-wired class name, calls the clone() method on the prototype.
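As an illustration only, here is a minimal sketch of that registry-plus-clone idea in Python rather than Java (all names in this sketch are invented and are not part of the tutorial's code): prototypes are registered under a key, and clients obtain fresh objects by copying instead of calling a hard-wired constructor.

```python
import copy

class PrototypeManager:
    """Keeps track of prototypes by key, as described above."""
    def __init__(self):
        self._prototypes = {}

    def register(self, key, prototype):
        self._prototypes[key] = prototype

    def create(self, key):
        # deepcopy plays the role of clone(): no "new"/constructor call here.
        return copy.deepcopy(self._prototypes[key])

class Book:
    def __init__(self, sku, pages):
        self.sku = sku
        self.pages = pages

manager = PrototypeManager()
manager.register("B1", Book("B1", 100))
cloned = manager.create("B1")
```

Because create() returns a copy, mutating the clone leaves the registered prototype untouched.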
Prototype Pattern example
import java.util.Hashtable;

public class PrototypeExample {
    Hashtable<String, Product> productMap = new Hashtable<String, Product>();

    public Product getProduct(String productCode) {
        Product cachedProduct = (Product) productMap.get(productCode);
        return (Product) cachedProduct.clone();
    }

    public void loadCache() {
        // Register one prototype of each concrete product.
        Book book = new Book();
        book.setSKU("B1");
        book.setDescription("Oliver Twist");
        book.setNumberOfPages(100);
        productMap.put("B1", book);

        DVD dvd = new DVD();
        dvd.setSKU("D1");
        dvd.setDescription("Superman");
        dvd.setDuration(180);
        productMap.put("D1", dvd);
    }

    public static void main(String[] args) {
        PrototypeExample pe = new PrototypeExample();
        pe.loadCache();

        Book clonedBook = (Book) pe.getProduct("B1");
        System.out.println("SKU = " + clonedBook.getSKU());
        System.out.println("Description = " + clonedBook.getDescription());
        System.out.println("Number of pages = " + clonedBook.getNumberOfPages());

        DVD clonedDVD = (DVD) pe.getProduct("D1");
        System.out.println("SKU = " + clonedDVD.getSKU());
        System.out.println("Description = " + clonedDVD.getDescription());
        System.out.println("Duration = " + clonedDVD.getDuration());
    }
}

/** Prototype class */
public abstract class Product implements Cloneable {
    private String SKU;
    private String description;

    public Object clone() {
        Object clone = null;
        try {
            clone = super.clone();
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
        }
        return clone;
    }

    public String getSKU() { return SKU; }
    public void setSKU(String s) { SKU = s; }
    public String getDescription() { return description; }
    public void setDescription(String d) { description = d; }
}

/** Concrete prototype to clone */
public class Book extends Product {
    private int numberOfPages;
    public int getNumberOfPages() { return numberOfPages; }
    public void setNumberOfPages(int i) { numberOfPages = i; }
}

/** Concrete prototype to clone */
public class DVD extends Product {
    private int duration;
    public int getDuration() { return duration; }
    public void setDuration(int i) { duration = i; }
}

When PrototypeExample is executed the result is:

c:>SKU = B1
c:>Description = Oliver Twist
c:>Number of pages = 100
c:>SKU = D1
c:>Description = Superman
c:>Duration = 180
Usage example

If you are designing a system for performing bank account transactions, you would want to make a copy of the object that holds your account information, perform transactions on it, and then replace the original object with the modified one. In such cases, you would want to use clone() instead of new.
For example, if your mount got lazy umounted (like hal probably does), then it's a floating mount, not one tied to any tree going to the root of any namespace.
I have a JSON file and I'm trying to do a search using the values (not the keys). Is there a built-in function in Python that does so?
[["2778074170846781111111110", "a33104eb1987bec2519fe051d1e7bd0b4c9e4875"],
["2778074170846781111111111", "f307fb3db3380bfd27901bc591eb025398b0db66"]]
import json

def OptionLookUp(keyvalue):
    with open('data.json', 'r') as table:
        x = json.load(table)  # json.load reads from a file object
After your edit I can say that there is no faster/more efficient way than turning your JSON into a Python 2-dimensional list and looping through each node, comparing the second field with your keyvalue.
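Since there is no built-in lookup for this, the loop described above can be sketched like so (the function and file names mirror the question, and data.json is assumed to contain the two-element rows shown earlier):

```python
import json

def option_lookup(keyvalue, path='data.json'):
    with open(path, 'r') as table:
        rows = json.load(table)    # json.load, not json.loads, for file objects
    for first, second in rows:
        if second == keyvalue:     # compare against the value field
            return first
    return None                    # no row holds that value
```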
Prerequisites for ASMX-based code samples in Project 2013
Learn information to help you create projects in Visual Studio by using the ASMX-based code samples that are included in the Project Server Interface (PSI) reference topics.
Last modified: March 09, 2015
Applies to: Project Server 2013
Many of the code samples included in the Project Server 2013 class library and web service reference were originally created for the Office Project 2007 SDK, and use a standard format for ASMX web services. The samples still work in Project Server 2013 and are designed to be copied into a console application and run as a complete unit. Exceptions are noted in the sample.
New PSI samples in the Project 2013 SDK conform to a format that uses Windows Communication Foundation (WCF) services. The ASMX-based samples can also be adapted to use WCF services. This article shows how to use the samples with ASMX web services. For information about using the samples with WCF services, see Prerequisites for WCF-based code samples in Project 2013.
Before running the code samples, you must set up the development environment and configure the application. ASMX web services require that you create a PSI proxy assembly by using the CompileASMXProxyAssembly.cmd script in the Documentation\IntelliSense\WSDL subdirectory of the Project 2013 SDK download. The script creates the ASMX-based ProjectServerServices.dll proxy assembly. For more information, see the [ReadMe_IntelliSense] file in the SDK download.
Create a console application.
When you create a console application, in the drop-down list of the New Project dialog box, select .NET Framework 4. You can copy the PSI example code into the new application.
Add the reference required for ASMX.
In Solution Explorer, add a reference to System.Web.Services (see Figure 1).

Figure 1. Adding a reference in Visual Studio

Set the default namespace of the application. For example, the QueueRenameProject sample has the namespace Microsoft.SDK.Project.Samples.RenameProject. If the name of the Visual Studio project is RenameProject, copy the namespace from the Program.cs file, and then open the project Properties pane (on the Project menu, choose RenameProject Properties). On the Application tab, copy the namespace into the Default namespace text box.
Set the web references.
Most examples require a reference to one or more of the PSI web services. These are listed in the sample itself or in comments that precede the sample. To get the correct namespace of the web references, ensure that you first set the default application namespace.
There are three ways to add an ASMX web service reference for the PSI:
Build a PSI proxy assembly named ProjectServerServices.dll, and then set a reference to the assembly. To get IntelliSense, this is the recommended way to add a PSI reference. See Using a PSI proxy assembly and IntelliSense descriptions.
Add a proxy file from the wsdl.exe output to the Visual Studio solution. See Adding a PSI proxy file.
Add a web service reference by using Visual Studio. See Adding a web service reference.
Using a PSI proxy assembly and IntelliSense descriptions
You can build and use the ProjectServerServices.dll proxy assembly for all ASMX-based web services in the PSI, by using the CompileASMXProxyAssembly.cmd script in the Documentation\IntelliSense\WSDL folder of the Project 2013 SDK download. For a link to the download, see Project 2013 developer documentation.
Following is the GenASMXProxyAssembly.cmd script that generates WSDL output files for the PSI web services, and then compiles the assembly.
@echo off
@ECHO ---------------------------------------------------
@ECHO Creating C# files for the ASMX-based proxy assembly
@ECHO ---------------------------------------------------
REM Replace ServerName with the name of the server and
REM the instance name of Project Web App. Do not use localhost.
(set VDIR=)
(set OUTDIR=.\Source)
REM ** Wsdl.exe is the same version in the v6.0A and v7.0A subdirectories.
(set WSDL="C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\x64\wsdl.exe")
if not exist %OUTDIR% (
    md %OUTDIR%
)
for /F %%i in (Classlist_asmx.txt) do %WSDL% /nologo /l:CS /namespace:Svc%%i /out:%OUTDIR%\wsdl.%%i.cs %VDIR%/%%i.asmx?wsdl
@ECHO ----------------------------
@ECHO Compiling the proxy assembly
@ECHO ----------------------------
(set SOURCE=%OUTDIR%\wsdl)
(set CSC=%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\csc.exe)
(set ASSEMBLY_NAME=ProjectServerServices.dll)
%CSC% /t:library /out:%ASSEMBLY_NAME% %SOURCE%*.cs
The script uses the ClassList_asmx.txt file, which contains the list of web services that are available for third-party developers.
The scripts create an assembly named ProjectServerServices.dll. Avoid confusing it with ProjectServerServices.dll for the WCF-based assembly. The assembly names are the same, to enable using either assembly with the ProjectServerServices.xml IntelliSense file.
The arbitrary namespace created by the scripts for both the ASMX web services and the WCF services is the same, so that the ProjectServerServices.xml IntelliSense file works with either proxy assembly. If a code sample uses a different web service namespace, for example ProjectWebSvc, for IntelliSense to work you must change the sample to use SvcProject so that the namespace matches the proxy assembly.
An advantage to using the ASMX-based proxy assembly is that it includes all PSI web service namespaces; you do not have to create multiple web references. Another advantage is that, if you add the ProjectServerServices.xml file to the same directory where you set a reference to the ProjectServerServices.dll proxy assembly, you can get IntelliSense descriptions for the PSI classes and members. Figure 2 shows the IntelliSense text for the Project.QueueCreateProject method. For more information, see the [ReadMe_IntelliSense] file in the SDK download; you can modify the scripts and the ProjectServerServices.xml IntelliSense file to use different namespaces.
Adding a PSI proxy file
The Project 2013 SDK download includes the source files generated by the Wsdl.exe command for the proxy assembly. The source files are in Source.zip in the Documentation\IntelliSense\ASMX subdirectory. Instead of setting a reference to the proxy assembly, you can add one or more of the source files to a Visual Studio solution. For example, after running the GenASMXProxyAssembly.cmd script, add the wsdl.Project.cs file to the solution. Instead of running the script, you can also run the Wsdl.exe command for a single web service to generate one source file.
To define a Project object as a class variable named project, use the following code. The AddContextInfo method adds the context information to the project object for Windows authentication and Forms-based authentication.
private static SvcProject.Project project;
private static SvcLoginForms.LoginForms loginForms =
    new SvcLoginForms.LoginForms();
. . .
public void AddContextInfo()
{
    // Add the Url property.
    project.Url = "";

    // Add Windows credentials.
    project.Credentials = CredentialCache.DefaultCredentials;

    // If Forms authentication is used, add the Project Server cookie.
    project.CookieContainer = loginForms.CookieContainer;
}
Adding a web service reference
If you do not use the ASMX-based proxy assembly or add a WSDL output file, you can set one or more individual web references. The following steps show how to set a web reference by using Visual Studio 2012.
In Solution Explorer, right-click the References folder, and then choose Add Service Reference.
In the Add Service Reference dialog box, choose Advanced.
In the Service Reference Settings dialog box, choose Add Web Reference.
In the URL text box, type, and then press Enter or choose the Go icon. If you have Secure Sockets Layer (SSL) installed, you should use the HTTPS protocol instead of the HTTP protocol. For example, use the following URL for the Project service on the site for Project Web App:
OR
Open your web browser, and navigate to. Save the file to a local directory, such as C:\Project\WebServices\ServiceName.wsdl. In the Add Web Reference dialog box, for URL, type the file protocol and the path to the file. For example, type:\Project\WebServices\Project.wsdl.
After the reference resolves, type the reference name in the Web reference name text box. Code examples in the Project 2013 developer documentation use the arbitrary standard reference name SvcServiceName. For example, the Project web service is named SvcProject (see Figure 3).

Figure 3. Adding an ASMX web service reference
For application components that must run on the Project Server computer, use impersonation, or have elevated permissions, use a WCF service reference instead of an ASMX web reference. For more information, see Prerequisites for WCF-based code samples in Project 2013.
Project Server applications often use other services, such as SharePoint Server 2013 web services. If other services are required, they are noted in the code sample.
Authentication of on-premises Project Server users, whether by Windows authentication or Forms authentication, is done through claims processing in SharePoint Server 2013. Multiple authentication means that the web application on which Project Web App is provisioned supports both Windows authentication and Forms-based authentication. If that is the case, a call to an ASMX web service that uses Windows authentication will fail, because the claims process cannot determine which type of user to authenticate.
To fix the problem for ASMX, all calls to PSI methods should be to a derived class that is defined for each PSI web service. The derived class must also use the SvcLoginWindows.LoginWindows class to get a cookie for the derived PSI service class. In the following example, the ProjectDerived class derives from the SvcProject.Project class. The derived class adds the EnforceWindowsAuth property and overrides the web request header for every call to a method in the Project class. If the EnforceWindowsAuth property is true, the GetWebRequest method adds a header that disables Forms authentication. If EnforceWindowsAuth is false, Forms authentication can proceed.
To use the following ASMXLogon_MultiAuth sample, create a console application, follow the steps in Creating the application and adding a web service reference, and then add the wsdl.LoginWindows.cs proxy file and the wsdl.Project.cs proxy file. The Main method creates the project instance of the ProjectDerived class. The sample must use the derived LoginWindowsDerived class to get a CookieContainer object for the project.CookieContainer property, which distinguishes Forms authentication from Windows authentication. The project object can then be used to make calls to any method in the SvcProject.Project class.
using System;
using System.Net;
using PSLibrary = Microsoft.Office.Project.Server.Library;

namespace ASMXLogon_MultiAuth
{
    class Program
    {
        private const string PROJECT_SERVER_URL = "";

        static void Main(string[] args)
        {
            bool isWindowsUser = true;

            // Create an instance of the project object.
            ProjectDerived project = new ProjectDerived();
            project.Url = PROJECT_SERVER_URL + "Project.asmx";
            project.Credentials = CredentialCache.DefaultCredentials;

            try
            {
                // The program works on a Windows-auth-only computer if you
                // comment-out the following line. The line is required for
                // multiple authentication.
                project.CookieContainer = GetLogonCookie();

                project.EnforceWindowsAuth = isWindowsUser;

                // Get a list of all published projects.
                // Use ReadProjectStatus instead of ReadProjectList,
                // because the permission requirements are lower.
                SvcProject.ProjectDataSet projectDs = project.ReadProjectStatus(
                    Guid.Empty,
                    SvcProject.DataStoreEnum.PublishedStore,
                    string.Empty,
                    (int)PSLibrary.Project.ProjectType.Project);

                Console.WriteLine(string.Format(
                    "There are {0} published projects.",
                    projectDs.Project.Rows.Count));
            }
            catch (UnauthorizedAccessException ex)
            {
                Console.WriteLine(ex.Message);
            }
            catch (WebException ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                Console.Write("Press any key to continue...");
                Console.ReadKey(false);
            }
        }

        private static CookieContainer GetLogonCookie()
        {
            // Create an instance of the loginWindows object.
            LoginWindowsDerived loginWindows = new LoginWindowsDerived();
            loginWindows.EnforceWindowsAuth = true;
            loginWindows.Url = PROJECT_SERVER_URL + "LoginWindows.asmx";
            loginWindows.Credentials = CredentialCache.DefaultCredentials;
            loginWindows.CookieContainer = new CookieContainer();

            if (!loginWindows.Login())
            {
                // Login failed; throw an exception.
                throw new UnauthorizedAccessException("Login failed.");
            }
            return loginWindows.CookieContainer;
        }
    }

    // Derive from the LoginWindows class; include an additional property and
    // override the web request header.
    class LoginWindowsDerived : SvcLoginWindows.LoginWindows
    {
        public bool EnforceWindowsAuth { get; set; }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            WebRequest request = base.GetWebRequest(uri);
            if (this.EnforceWindowsAuth)
            {
                request.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
            }
            return request;
        }
    }

    // Derive from the Project class; include an additional property and
    // override the web request header.
    class ProjectDerived : SvcProject.Project
    {
        public bool EnforceWindowsAuth { get; set; }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            WebRequest request = base.GetWebRequest(uri);
            if (this.EnforceWindowsAuth)
            {
                request.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
            }
            return request;
        }
    }
}
Using the derived LoginWindows class, and making PSI calls with a web request header that disables Forms authentication, is required for applications that run in a multiple authentication environment. If Project Server uses only claims authentication, it is not necessary to derive a class that adds a web request header. The previous example runs in both environments.
The fix for a WCF-based application is different. For more information, see the Using multiple authentication section in Prerequisites for WCF-based code samples in Project 2013. or other prerequisites in the Project database. For example, use the following query to select the top 200 rows of the pub that you. | https://msdn.microsoft.com/en-us/library/office/aa568853(v=office.15) | CC-MAIN-2017-22 | en | refinedweb |
Here’s a snippet of Winxed code I was running a few days ago on my machine:
function Foo(int a, int b)
{
    ...
}

function Foo(string s)
{
    ...
}

function main[main]()
{
    Foo(1, 2);
    Foo("Hello");
}
Notice anything interesting about it? Up until now, Winxed hasn’t supported multiple dispatch. NotFound didn’t really use the feature and users haven’t been asking for it too loudly, so it never got implemented. However, now that I’m pretending to be a Winxed super haxxors, I decided to take a stab at it. As of yesterday evening, I have a working prototype and am doing some testing and tweaking to get a pull request ready.
This isn’t full-featured MMD, yet. Parrot’s MMD system allows you to dispatch based on types and inheritance and there are wildcards. The current Winxed implementation I’m playing with only dispatches on the four primary register types. We read the parameter list and, if there are multiple functions with the same name in the same scope, we convert them to multis.
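The conversion described above is, in spirit, a dispatch table keyed on argument types. Here is a toy sketch of that idea in Python, purely for illustration; this is not how Winxed or Parrot actually implement MultiSub:

```python
# Register each overload under its parameter types, then pick an
# overload at call time by the runtime types of the arguments.
_registry = {}

def multi(fn):
    params = fn.__code__.co_varnames[:fn.__code__.co_argcount]
    types = tuple(fn.__annotations__[p] for p in params)
    _registry.setdefault(fn.__name__, {})[types] = fn

    def dispatch(*args):
        overloads = _registry[fn.__name__]
        return overloads[tuple(type(a) for a in args)](*args)
    return dispatch

@multi
def foo(a: int, b: int):
    return "two ints"

@multi
def foo(s: str):
    return "one string"

print(foo(1, 2))      # two ints
print(foo("Hello"))   # one string
```

Real MMD systems, of course, also handle inheritance and ambiguity resolution, which this table lookup ignores.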
It’s a simple patch and just the first step towards getting full Multi support. The hardest part about moving forward is not the implementation (I’ll reiterate, Winxed is pretty easy to hack on), but instead picking the syntax we want to use to specify options.
NotFound has been away, and I don't think that this will get merged into master or pulled into the Parrot repo before the release tomorrow. If it passes code-review muster, maybe it could be in place shortly thereafter. Then, we can start on the next step: finding a syntax with which we can specify improved type information supported by the MMD system.
As a quick exploration, we could do something like this:
function Foo(Bar.Baz baz, Bar.Fie fie) { ... }
That would be just fine for a multiple dispatch situation, but specifying types in the signature implies that we are doing some kind of type-checking, which I suspect Winxed would not want to do. If we did that, we could automatically promote every single function with type information in the signature to a MultiSub, even if there was only one of them. Right now, the patch only auto-promotes a Sub to a MultiSub if there are more than one of them with the same name. This promotion, and the dispatch mechanisms associated with MultiSub have non-negligible cost. It runs contrary to expectations to think that adding more type information for the compiler would decrease performance of the generated code.
If we keep the logic that we only promote to MultiSub when there are multiple functions with the same name, we could instead insert type checks into an ordinary Sub that has type information but is not auto-promoted. Of course, then the code generator for Sub has to keep track of storage information in the owner namespace, which gets messy quick. The laziest approach would be to not insert any type-check information at all, and allow a parameter which is declared as Bar.Baz baz to be filled by an object of any type without any indication that it does not match what is written. I suspect that's not what anybody wants.
Perhaps we could do something like adding metadata in tags:
function Foo[multi(Bar.Baz, Bar.Fie)](var baz, var fie) { ... }
But that’s verbose and ugly, and updating the parser to support it is non-trivial. It does have the benefit that you are explicitly telling the compiler that you want it promoted to MultiSub.
Another possible syntax would be this:
multi Foo(Bar.Baz baz, Bar.Fie fie) { ... }
Here, we use the multi keyword if we want to specify type information in the parameter list, to make clear that we only do type checking in a MultiSub, but don't do it for an ordinary function.
One more that just came to mind would be something like:
function Foo(var[Bar.Baz] baz, var[Bar.Fie] fie) { ... }
Here, it's clear that the first parameter is still a var and can be any type, but there is the tag there that says it should be considered a specific type in a multidispatch situation.
Anyway, the easy part is done. With my patch you can do basic multi-dispatch on the four register types in Winxed without any new fancy syntax. That’s easy, it’s implemented, and it works. Doing the next step is harder because we are going to need to add new syntax, and finding such a syntax which is functional, attractive, and does not promise things it does not deliver.
Maybe we don’t find any such syntax, and Winxed never gets an easy syntax for class-based multiple dispatch. That’s a little disappointing to think about, but we’ve come pretty far without any MMD support in Winxed, and we will go much further with the little bit provided in my patch. Maybe that takes us far enough for most uses. | http://whiteknight.github.io/2011/08/15/multiple_dispatch_in_winxed.html | CC-MAIN-2017-22 | en | refinedweb |
I'm struggling to understand how Code First sets up relationships based on the model. Here's the model:
public class Person
{
    [Key]
    public string PersonName { get; set; }

    [Required]
    public virtual Nation NationOfBirth { get; set; }

    [Required]
    public virtual Nation CurrentNationOfResidence { get; set; }
}

public class Nation
{
    [Key]
    public string NationName { get; set; }

    public virtual Person CurrentSecondInCommand { get; set; }
}
At first I got an error while creating the database, which I was able to solve by adding a couple of modelBuilder commands:
modelBuilder.Entity<Person>()
    .HasRequired(p => p.NationOfBirth)
    .WithRequiredDependent()
    .WillCascadeOnDelete(false);

modelBuilder.Entity<Person>()
    .HasRequired(p => p.CurrentNationOfResidence)
    .WithRequiredDependent()
    .WillCascadeOnDelete(false);
And I also added:
modelBuilder.Entity<Nation>()
    .HasOptional(n => n.CurrentSecondInCommand)
    .WithOptionalDependent()
    .WillCascadeOnDelete(false);
Now when I try and populate my database, with the following code:
var america = new Nation { NationName = "America" };
context.Nations.Add(america);

var bush = new Person { PersonName = "Bush", CurrentNationOfResidence = america, NationOfBirth = america };
var obama = new Person { PersonName = "Obama", CurrentNationOfResidence = america, NationOfBirth = america };
var biden = new Person { PersonName = "Biden", CurrentNationOfResidence = america, NationOfBirth = america };

context.People.Add(bush);
context.People.Add(obama);
context.People.Add(biden);

context.SaveChanges();
At the point at which 'bush' is added, both the CurrentNationOfResidence and the NationOfBirth properties are not null for him and are set to be the 'america' object. But as soon as 'obama' is added, the CurrentNationOfReside
The virtual property in the Nation class must be an ICollection<Person> CurrentSecondInCommand; that way you will have "many persons in one nation". In the database the relationship will then be one-to-many, not one-to-one.
The two relations from Person to Nation aren't one to one, but many to one:
So your configuration should be:
modelBuilder.Entity<Person>()
    .HasRequired(p => p.NationOfBirth)
    .WithMany()
    .WillCascadeOnDelete(false);

modelBuilder.Entity<Person>()
    .HasRequired(p => p.CurrentNationOfResidence)
    .WithMany()
    .WillCascadeOnDelete(false);
I'm trying to make a test for checking whether a sys.argv input matches the RegEx for an IP address...
As a simple test, I have the following...
import re

pat = re.compile("\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}")
test = pat.match(hostIP)
if test:
    print "Acceptable ip address"
else:
    print "Unacceptable ip address"
You have to modify your regex in the following way
pat = re.compile("^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
That's because . is a wildcard that stands for "every character"; the \. in the modified regex escapes it so that only a literal dot matches.
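As an illustrative sketch of the corrected pattern in use (the helper name here is my own; note the pattern only checks the shape, so out-of-range octets such as 999.1.1.1 still pass):

```python
import re

# Anchored, with the dots escaped, as in the answer above.
IP_SHAPE = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

def looks_like_ipv4(host_ip):
    # match() combined with the ^...$ anchors means the whole
    # string must fit the dotted-quad shape.
    return IP_SHAPE.match(host_ip) is not None

print(looks_like_ipv4("192.168.0.1"))      # True
print(looks_like_ipv4("abc.def.ghi.jkl"))  # False
```

A stricter validator would also check that each octet is in the range 0-255, which a pure shape regex does not do.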
Keith Beattie <KSBeattie at lbl.gov> wrote:

> How do I pass the namespaces into minidom.parseString(), or
> Domlette.NonvalidatingReader.parseString(), such that they'll be happy
> with the 'unbound prefix'?

I know of no convenient way of doing this with either minidom or domlette. Probably the quickest solution is to hack the input content so it's surrounded with an element declaring all the known namespaces, then ignore the root element of the result.

Alternatively, the DOM Level 3 method parseWithContext would let you insert directly into the relevant part of the document (with namespaces declared above). pxdom supports this method and the domConfig parameter 'canonical-form', so that might be a possibility too.

--
Andrew Clover
mailto:and at doxdesk.com
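A minimal sketch of that wrapper-element workaround (the ex prefix and its URI are invented for the example):

```python
from xml.dom import minidom

def parse_fragment(fragment, namespaces):
    # Wrap the fragment in a throwaway root element that declares the
    # known prefixes, parse, then hand back the fragment's element and
    # ignore the wrapper.
    decls = " ".join('xmlns:%s="%s"' % (prefix, uri)
                     for prefix, uri in namespaces.items())
    doc = minidom.parseString("<wrapper %s>%s</wrapper>" % (decls, fragment))
    return doc.documentElement.firstChild

node = parse_fragment("<ex:item>42</ex:item>",
                      {"ex": "http://example.org/ns"})
print(node.tagName)  # ex:item
```

Without the wrapper, parseString() would reject the fragment for its unbound prefix, which is exactly the situation described in the thread.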
MAY 2013 |
| 1
{cvu}
ISSN 1354-3164
ACCU is an organisation of programmers who care about professionalism in programming. That is, we care about writing good code, and about writing it in a good way. We are dedicated to raising the standard of programming.

ACCU exists for programmers at all levels of experience, from students and trainees to experienced developers. As well as publishing magazines, we run a respected annual developers' conference, and provide targeted mentored developer projects.

The articles in this magazine have all been written by programmers, for programmers – and have been contributed free of charge.

To find out more about ACCU's activities, or to join the organisation and subscribe to this magazine, go to accu.org. Membership costs are very low as this is a non-profit organisation.
The official magazine of ACCU
accu
STEVE LOVE
FEATURES EDITOR
The New Informs The Old
A few weeks ago, I finally finished reading the Freeman and Pryce book Growing Object Oriented Software, Guided by Tests. I bought it a couple of years ago, and (for shame!) have only just got round to reading it. For even more shame, I decided once I'd finished it to read a book that's been left practically untouched for far longer, Kent Beck's Test-Driven Development. The reason I'd not got round to reading that one is that there's so much commentary and discussion about TDD – within ACCU, and on countless forums – I felt I'd already read it, in a way.

What was most interesting for me was the reading of these two books back to back, and in reverse order, so to speak. Going back a decade or so with Kent's book gave me new insights on the more modern practice, which also fed new comprehension of Growing.... The basic premise seems to be very similar, with the keeping of a to-do list, writing code test first, getting fast feedback, keeping the code 'clean'. The New has differences to the Old, for example, getting a 'walking skeleton' in place, with acceptance tests, which drives a slightly different approach to development.

Indeed, this difference of approach seems to have spawned an entire debate: that of 'Classic' (or 'Detroit') versus 'London-style' TDD. This seems to me to be similar to the differences between 'Mockists' and 'Classicists' as described by Martin Fowler in his article 'Mocks aren't stubs', but goes further than that. The idea of aiming first for a full end-to-end test (with the help of Mock Objects) drives design differently to beginning with the simplest piece of functionality that represents measurable progress, as in 'classic' TDD. I've heard this described as 'outside-in' design versus 'inside-out'.

I'm pretty sure I don't yet understand either approach well enough to comment on the better-ness of either one. However, one of those insights I mentioned that came from reading both books was that the more modern approach looks to me like a natural progression of the classic approach, and that the use of Mock Objects to explore the relationships between collaborating objects is complementary to testing publicly visible state. Of one thing I am certain: whether you write tests in 'classic' or 'London' style is less important than your tests being clear and useful – and that you've written some!
Volume 25 Issue 2
May 2013
Features Editor
Steve Love
cvu@accu.org
Regulars Editor
Jez Higgins
jez@jezuk.co.uk
Contributors
Pete Goodliffe, Martin Janzen, Paul F. Johnson, Filip van Laenen, Chris Oldwood, Roger Orr, Richard Polton, Mark Radford
ACCU Chair
Alan Griffiths
chair@accu.org
ACCU Secretary
Giovanni Asproni
secretary@accu.org
ACCU Membership
Craig Henderson
accumembership@accu.org
ACCU Treasurer
R G Pauer
treasurer@accu.org
Advertising
Seb Rose
ads@accu.org
Cover Art
Pete Goodliffe
Print and Distribution
Parchment (Oxford) Ltd
Design
Pete Goodliffe
ADVERTISE WITH US
The ACCU magazines represent an effective, targeted advertising channel. 80% of our readers make purchasing decisions or recommend products for their organisations.

To advertise in the pages of C Vu or Overload, contact the advertising officer at ads@accu.org.

Our advertising rates are very reasonable, and we offer advertising discounts for corporate members.
Some articles and other contributions use terms that are either registered trade marks or claimed as such. The use of such terms is not intended to support nor disparage any trade mark claim. On request we will withdraw all references to a specific trade mark and its owner. No material may be reproduced from C Vu without written permission from the copyright holder.
DIALOGUE
18 Standards Report
Mark Radford looks at some features of the next C++ Standard.
19 Code Critique Competition
Competition 81 and the answers to 80.
24 Letter to the Editor
Martin Janzen reflects on Richard Polton's article.
REGULARS
22 Bookcase
The latest roundup of book reviews.
24 ACCU Members Zone
Membership news.
SUBMISSION DATES
C Vu 25.3: 1st June 2013
C Vu 25.4: 1st August 2013
Overload 116: 1st July 2013
Overload 117: 1st September 2013
FEATURES
3 Bug Hunting
Pete Goodliffe implores us to debug effectively.
6 Tar-Based Back-ups
Filip van Laenen rolls his own with some simple tools.
8 ACCU Conference 2013
Chris Oldwood shares his experiences from this year's conference.
10 Writing a Cross Platform Mobile App in C#
Paul F. Johnson uses Mono to attain portability.
12 Let’s Talk About Trees
Richard Polton puts n-ary trees to use parsing XML.
16 Team Chat
Chris Oldwood considers the benefits of social media in the workplace.
Bug Hunting
Pete Goodliffe implores us to debug effectively.
If debugging is the process of removing software bugs, then
programming must be the process of putting them in.
~ Edsger Dijkstra
It's open season. A year-round season. There are no permits required,
no restrictions levied. Grab yourself a shotgun and head out into the
open software fields to root out those pesky varmints, the elusive bugs,
and squash them, dead.
Well, it's not really as saccharine as that. But sometimes you end up working
on code in which you swear the bugs are multiplying and ganging up on
you. A shotgun is the only response.
The story is an old one, and it goes like this: Programmers write code.
Programmers aren’t perfect. The programmer’s code isn’t perfect. It
therefore doesn’t work perfectly first time. So we have bugs.
If we bred better programmers we’d clearly breed better bugs.
Some bugs are simple mistakes that are obvious to spot and easy to fix.
When we encounter these, we are lucky.
The majority of bugs, the ones we invest hours of effort tracking down,
losing our follicles and/or hair pigment in the search, are the nasty, subtle
issues. These are the odd surprising interactions, or unexpected
consequences of the actions we instigate. The seemingly non-deterministic
behaviour of software that looks so very simple. It can only have been
infected by gremlins.
This isn’t a problem limited to newbie programmers who don’t know any
better. Experts are just as prone. The pioneers of our craft suffered; the
eminent computer scientist Maurice Wilkes wrote in [1]:
I well remember [...] on one of my journeys between the EDSAC room and
the punching equipment that ‘hesitating at the angles of stairs’ the realisation
came over me with full force that a good part of the remainder of my life was
going to be spent in finding errors in my own programs.
So face it. You’ll be doing a lot of debugging. You’d better get used to it.
And you better get good at it. (At least you can console yourself that you’ll
have plenty of chance to practice.)
An economic concern
How much time do you think is spent debugging? Add up the effort of all
of the programmers in every country around the world. Go on, guess.
Greg Law (who provided me with the initial impetus to write this – as well
as collating an amount of excellent material that I have wilfully stolen)
points out that a staggering $312bn per year is spent on the wage bills for
programmers debugging their software. To put that in perspective, that’s
two times all Euro-zone bailouts since 2008! This huge, but realistic, figure
comes from research carried out by Cambridge University’s Judge
Business School [2].
You have a responsibility to fix bugs faster: to save the global economy.
The state of the world is in your hands.
It’s not just the wage bill, though. Consider all the other implications of
buggy software: shipping delays, cancelled projects, the reputation
damage from unreliable software, and the cost of bugs fixed in shipping
software.
An ounce of prevention
It would be remiss of any article on debugging to not stress how much
better it is to actively prevent bugs manifesting in the first place, rather than
attempt a post-bug cure. An ounce of prevention is worth a pound of cure.
If the cost of debugging is astronomical, we should primarily aim to
mitigate this by not creating bugs in the first place.
This, in a classic editorial sleight-of-hand, is material for a different article,
and so we won’t investigate the theme exhaustively here.
Suffice to say, we should always employ sound engineering techniques
that minimise the likelihood of unpleasant surprises. Thoughtful design,
code review, pair programming, and a considered test strategy (including
TDD practices and fully automated unit test suites) are all of the utmost
importance. Techniques like assertions, defensive programming and code
coverage tools will all help minimise the likelihood of errors sneaking past.
We all know these mantras. Don’t we? But how diligent are we in
employing such tactics?
Avoid injecting bugs into your code by employing sound
engineering practices. Don’t expect quickly-hacked out code to
be of high quality.
The best bug-avoidance advice is to not write incredibly ‘clever’ (which
often equates to complex) code. Brian Kernighan states:
Debugging is twice as hard as writing the code in the first place. Therefore,
if you write the code as cleverly as possible, you are, by definition, not smart
enough to debug it.
Martin Fowler reminds us:
Any fool can write code that a computer can understand. Good programmers
write code that humans can understand.
Bug hunting
Beware of bugs in the above code;
I have only proved it correct, not tried it.
~ Donald Knuth
Being realistic, no matter how sound your code-writing regimen, some of
those pernicious bugs will always manage to squeeze through the defences
and require you to don the coder’s hunting cap and an anti-bug shotgun.
How should we go about finding and eliminating them? This can be a
Herculean task, akin to finding a needle in a haystack. Or, more accurately,
a needle in a needle stack.
Finding and fixing a bug is like solving a logic puzzle. Generally the
problem isn’t too hard when approached methodically; the majority of
bugs are easily found and fixed in minutes. There are two ‘vectors’ that
make a bug hard to fix: how reproducible it is, and how long it is between
the cause of the bug itself (the ‘software fault’) and you noticing. When a
Becoming a Better Programmer #80
PETE GOODLIFFE
Pete Goodliffe is a programmer who never stays at the same place in the software food chain. He has a passion for curry and doesn't wear shoes. Pete can be contacted at pete@goodliffe.net or @petegoodliffe
bug scores high on both, it’s almost impossible to track down without sharp
tools and a keen intellect.
If you plot frequency versus time-to-fix you get a curve asymptotically
approaching infinite time to fix. In other words, the hard bugs are few in
number, but that’s where we will spend most of our time.
There are a number of practical techniques and strategies we can employ
to solve the puzzle and locate the fault.
The first, and most important thing, is to investigate and characterise the
bug. Give yourself the best raw material to work with:
Reduce it to the simplest set of reproduction steps possible. Sift out
all the extraneous fluff that isn’t contributing to the problem, and
only serves to distract.
Ensure that you are focusing on a single problem. It can be very easy
to get into a tangle when you don’t realise you’re conflating two
separate – but related – faults into one.
Determine how repeatable the problem is. How frequently do your
repro steps demonstrate the problem? Is it reliant on a simple series
of actions? Does it depend on software configuration, or the type of
machine you’re running on? Do peripheral devices attached make
any difference? These are all crucial data points in the investigation
work that is to come.
In reality, when you’ve constructed a single set of reproduction steps, you
really have won most of the battle.
Here are some useful debugging strategies:
Lay traps
You have errant behaviour. You know a point when the system seems
correct (maybe it’s at start-up, but hopefully a lot later through the repro
steps), and you can get it to a point where its state is invalid. Find places
in the code path between these two points, and set traps to catch the fault.
Add assertions or tests to verify the system invariants that must hold. Add
diagnostic print-outs to see the state of the code so you can work out what’s
going on.
As you do this, you’ll gain a greater understanding of the code, reasoning
more about the structure of the code, and will likely add many more
assertions to the mix to prove your assumptions hold. Some of these will
be genuine assertions about invariant conditions in the code, others will
be assertions relevant to this particular run. Both are valid tools to help you
pinpoint the bug. Eventually a trap will snap, and you’ll have the bug
cornered.
Assertions and logging (even the humble printf) are potent
debugging tools. Use them often.
Many of these diagnostic logs and assertions may be valid to leave in the
code after you’ve found and fixed the problem.
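For example, in Python such traps might look like this (the scenario, names and invariant are invented purely for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("bughunt")

def transfer(accounts, src, dst, amount):
    # Trap 1: record the state we believe must be preserved.
    total_before = sum(accounts.values())
    log.debug("before transfer: %r", accounts)

    accounts[src] -= amount
    accounts[dst] += amount

    # Trap 2: an invariant that must hold if the code between the two
    # traps is correct; if it fires, the fault is cornered between them.
    assert sum(accounts.values()) == total_before, "money created or destroyed!"
    log.debug("after transfer: %r", accounts)
    return accounts

transfer({"current": 100, "savings": 50}, "current", "savings", 30)
```

If the assertion never fires between two such traps, the fault lies elsewhere, and you can move the traps to bracket a different stretch of the code path.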
Learn to binary chop
Aim for a binary-chop strategy, to focus in on bugs as quickly as possible.
Rather than single-stepping through code paths, work out the start of a
chain of events, and the end. Then partition the problem space into two,
and work out if the middle point is good or bad. Based on this information,
you’ve narrowed the problem space to something half the size. Repeat this
a few times, and you’ll soon have homed-in on the problem.
Employ this technique with trap-laying. Or with the other techniques
below.
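The chop itself can be mechanised. Here is a sketch of the idea in Python; the is_bad predicate stands in for "run the reproduction steps at this point in the series", and the sketch assumes the first point is known good and the last known bad:

```python
def find_first_bad(points, is_bad):
    # Binary-chop over an ordered series of points (revisions, steps,
    # code locations). Precondition: points[0] is good, points[-1] is bad.
    # Returns the first bad point.
    lo, hi = 0, len(points) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(points[mid]):
            hi = mid  # the fault lies at or before mid
        else:
            lo = mid  # the fault lies after mid
    return points[hi]

revisions = list(range(1, 101))
print(find_first_bad(revisions, lambda r: r >= 42))  # 42
```

Each probe halves the search space, so even a hundred candidate points need only around seven checks.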
Employ software archaeology
Software archaeology describes the art of mining through the historical
records in your version control system. This can provide an excellent route
into the problem; it’s often a simple way to hunt a bug.
Determine a point in the near past of the codebase when this bug didn’t
exist. Armed with your reproducible test case, step forwards in time to
determine which code changeset caused the breakage. Again, a binary
chop strategy is the best bet here.
Once you find the breaking code change, the cause of the fault is usually
obvious, and the fix self-evident. (This is another compelling reason to
make series of small, frequent, atomic check-ins, rather than massive
commits covering a range of things at once.)
Do not despise tests
Invest time as you develop your software to write a suite of unit tests. This
will not only help shape how you develop and verify the code you’ve
initially written. It acts as a great early warning device for changes you
make later; it acts like the miner’s canary – the test fails long before the
problem becomes complex to find and expensive to fix.
These tests can also act as great points from which to begin debugging
sessions. A simple, reproducible unit test case is a far simpler scaffold to
debug than a fully running program that has to spin up and have a series
of manual actions run to reproduce the fault. For this reason, it’s advisable
to write a unit test to demonstrate a bug, rather than start to hunt it from a
running ‘full system’.
Once you have a suite of tests, consider employing a code coverage tool
to inspect how much of your code is actually covered by the tests. You may
be surprised. A simple rule of thumb is: if your test suite does not exercise
it, then you can’t believe it works. Even if it looks like it’s OK now, without
a test harness it'll be very likely to get broken later.
Untested code is a breeding ground for bugs. Tests are your
bleach.
When you finally determine the cause of a bug, consider writing a simple
test that clearly illustrates the problem, and add it to the test suite before
you really fix the code. This takes some genuine discipline, as once you
find the code culprit, you’ll naturally want to fix it ASAP and publish the
fix. Instead, first write a test harness to demonstrate the problem, and use
this harness to prove that you’ve fixed it. The test will serve to prevent the
bug coming back in the future.
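As a sketch, such a regression test might look like this in Python (the function under test and the bug number are invented):

```python
import unittest

def parse_port(text):
    # The function under test, after the fix: tolerate surrounding
    # whitespace rather than crashing on it.
    return int(text.strip())

class Bug1234Regression(unittest.TestCase):
    """Pins down (hypothetical) bug #1234: a port value with leading
    whitespace used to raise ValueError. Written to demonstrate the
    fault before the fix, and kept in the suite forever after it."""

    def test_port_with_leading_whitespace(self):
        self.assertEqual(parse_port("  8080"), 8080)

# Run with: python -m unittest <this module>
```

Naming the test after the bug report makes it obvious, years later, why the apparently odd input is being exercised.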
Invest in sharp tools
There are many tools that are worth getting accustomed to, including
memory checkers like electric fence, and swiss-army knife tools like
Valgrind. These are worth learning now rather than reaching for them at
the last minute. If you know how to use a tool before you have a problem
that demands it, you’ll be far more effective.
Learning a range of tools will prevent you from cracking a nut with a
pneumatic drill.
Of course, the tool of debugging champions is the debugger. This is the
king of tools that allows you to break into the execution of a running
program, step forwards by a single instruction, or step in – and out of –
functions. Some advanced debuggers even allow you to step backwards.
(Now, that’s real voodoo.)
In some circles there is a real disdain for the debugger. Real programmers
don’t need a debugger. To some extent this is true; being overly reliant on
such a tool is a bad thing. Single-stepping through code mindlessly can
trick you into focusing on the micro, rather than thinking about the overall
shape of the code.
But it’s not a sign of weakness. Sometimes it’s just far easier and quicker
to pull out the big guns. Don’t be afraid to use the right tool for the job.
Learn how to use your debugger well. Then use it at the right
times.
Remove code to exclude it from cause analysis
When you can reproduce a fault, consider removing everything that
doesn’t appear to contribute to the problem to help focus in on the
offending lines of code. Disable other threads that shouldn’t be involved.
Remove subsections of code that do not look like they’re related. It’s
common to discover objects indirectly attached to the ‘problem area’, for
example via a message bus or a notifier-listener mechanism. Physically
disconnect this coupling (even if you’re convinced it’s benign). If you still
reproduce the fault, you have proven your hunch about isolation, and have
reduced the problem space.
Then consider removing or skipping over sections of code leading up to
the error (as much as makes practical sense). Delete, or comment out
blocks that don’t appear to be involved.
Cleanliness prevents infection
Don’t allow bugs to stay in your software for
longer than necessary. Don’t let them linger.
Don’t dismiss problems as known issues. This
is a dangerous practice. It can lead to broken
window syndrome [3]; making it gradually
feel the norm and acceptable to have buggy
behaviour. This lingering bad behaviour can
mask the causes of other bugs you’re hunting.
One project I worked on was demoralisingly bad in this respect. When
given a bug report to fix, before managing to reproduce the initial bug
you’d encounter ten different issues on the way that all also needed to be
fixed, and may (or may not) have contributed to the bug in question.
Oblique strategies
Sometimes you can bash your head against a gnarly problem for hours and
get nowhere. It’s important to learn when you should simply stop and walk
away. A break can give you fresh perspective.
This can help you to think more carefully. Rather than running headlong
back into the code, take a break to consider the problem description and
code structure. Go for a walk and step away from the keyboard. (How many
times have you had those ‘eureka’ moments in the shower? Or in the
toilet?! It happens to me all the time.)
Describe the problem to someone else. Often when describing any problem
(including a bug hunt) to another person, you instantly explain it to yourself
and solve it. Failing another actual, live, person, you can follow the rubber
duck strategy described by the Pragmatic Programmers [4]. Talk to an
inanimate object on your desk to explain the problem to yourself. It’s only
a problem if the rubber duck starts to talk back.
Don't rush away
Once you find and fix a bug, don’t rush mindlessly on. Stop for a moment
and consider if there are other related problems lurking in that section of
code. Perhaps the problem you’ve fixed is a pattern that repeats in other
sections of the code. Is there further work that you could do to shore up
the system with the knowledge you just gained?
Non-reproducible bugs
Having attempted to form a set of reproduction steps, sometimes you
discover that you can’t. It’s just not possible. From time to time we uncover
nasty, intermittent bugs. The ones that seem to be caused by cosmic rays
rather than any direct user interaction. These are the gnarly bugs that take
ages to track down, often because we never get a chance to see them on a
development machine, or when running in a debugger.
How do we go about finding these?
Keep records of the factors that contribute to the fault. Over time
you may spot a pattern that will help you identify the common
causes.
As you get more information start to draw conclusions. Perhaps
identify more data points to keep in the record.
Consider adding more logging and assertions in beta/release builds
to help gather information from the field.
If it's a really pressing problem, set up a test farm to run long-running soak tests. If you can automate driving the system in a
representative manner then you can accelerate the hunting season.
There are a few things that are known to contribute to such unreliable bugs.
You may find they provide hints as to where to start investigating:
Threaded code; as threads entwine and interact in non-deterministic
and hard-to-reproduce ways, they often contribute to freaky
intermittent failures. Often this behaviour is very different when you
pause the code in a debugger, so is hard to observe forensically.
Network interaction, which is by
definition laggy and may drop or stall at
any point in time. Code that presumes
access to local storage works (because,
most often, it does) will not scale to
storage over a network.
The variable speed of storage (spinny
disks, database operations, or network
transactions) may change the behaviour
of your program, especially if you are
balanced precariously on the edge of timeout thresholds.
Memory corruption, where your aberrant code overwrites the stack
or heap, can lead to a myriad of unreproducible strangenesses that
are very hard to detect. Software archaeology is often the easiest
route to diagnose these errors.
Conclusion
Debugging isn’t easy. But it’s our own fault. We wrote the bugs.
Effective debugging is an essential skill for any programmer.
Acknowledgments
The inspiration for this article came from a conversation I had with Greg
Law about his excellent ACCU 2013 conference presentation on
debugging. Greg’s company, Undo Software, creates a most impressive
‘backwards debugger’ that you may want to look at. Check it out at undo-
software.com.
References
[1] Maurice Wilkes, Memoirs of a Computer Pioneer. The MIT Press.
1985. ISBN 0-262-23122-0
[2] Cambridge Research puts the global cost of debugging at $312 billion
annually.
[3] Broken Windows Theory:
https://en.wikipedia.org/wiki/Broken_windows_theory
[4] Andrew Hunt and David Thomas, The Pragmatic Programmer.
Addison Wesley. ISBN 0-201-61622-X.
Questions
1. Assess how much of your time you think you spend debugging.
   Consider every activity that isn’t writing a fresh line of code in a
   system.
2. Do you spend more time debugging new lines of code you have
   written, or on adjustments to existing code?
3. Does the existence of a suite of unit tests for existing code change
   the amount of time you spend debugging, or the way you debug?
4. Is it realistic to aim for bug-free software? Is this achievable? When
   is it appropriate to genuinely aim for bug-free software? What
   determines the optimal amount of ‘bugginess’ in a product?
6 | | MAY 2013
{cvu}
Tar-Based Back-ups
Filip van Laenen rolls his own with some simple tools.
A few months ago, I found out that I had to change the back-up strategy
on my personal laptop. Until then I had used Areca [1], which in
itself worked fine, but I was looking for something that could be
scripted and used from the command line, in addition to being easier
to install and maintain. As often is the case in the Linux world, it turned
out that you can easily script a solution together on your own using some
basic building blocks. For this particular task, the building blocks are
Bash [2], tar, rm and split, together with sha256sum and cmp to build a
conditional copying function.
Why use a script?
What was my problem with Areca? First of all, from time to time, Areca
had to be updated. In the Linux world, this is usually a good thing, but not
if the new version is incompatible with the old archives. This can also cause
problems when restoring archives, e.g. from one computer to another, or
after a complete reinstallation of the operating system. Furthermore, since
Areca uses a graphical user interface, scripting and running the back-up
process from the command line (or crontab) wasn’t possible.
Notice that these problems were generic, and not particular to Areca.
Before deciding to script a solution together, I looked for an alternative
solution that was scriptable and easy to install, but without success. That
is, except for the suggestions to build my own solution using tar.
Getting started
Listing 1 shows the start of my tar-based back-up script. It starts with a
shebang interpreter directive to the Bash shell. Then it checks the number
of arguments that were provided to the script – it should be exactly one,
otherwise the script exits here. Next it sets up four environment variables:
a base directory in BASEDIR, the back-up directory where all archives will
be stored in BACKUPDIR, the number of the current month (two digits) in
MONTH, and the first argument passed to the script in LEVEL. The LEVEL
variable represents the back-up level, i.e. 1 if only the most important
directories should be archived, 2 if some less important directories should
be archived too, etc…
Backing up a directory
Next we define a two-parameter function that backs up a particular
directory to a file. Listing 2 shows how this function looks, together with
some examples of how it can be used. First it logs to the console what it’s
going to back up. Next it uses tar to do the actual archiving. Notice that
the output of tar is redirected to a log file. That way we keep the console
output tidy, and at the same time can browse through the log file if
something went wrong. That’s also why we included v (verbosely list files
processed) in the option list for tar, together with c (create a new archive),
p (preserve file permissions), z (zip) and f (use archive file). Finally the
function creates a SHA-256 [3] digest from the result. This digest can be
used to decide whether two archive files are identical or not without having
to compare large, multi-GB files.
The variable MONTH is used to create rolling archives. In Listing 2, the
directories bin and dev will always be backed up to the same archive file,
but for the Documents and Thunderbird directory, a new one will
be created every month. Of course, if the script is run a second time during
the same month, the archive file for the Documents and Thunderbird
directory will be overwritten. Also, the same will happen when the script
is run a year later: the one-year-old archive file will then be overwritten
with a fresh back-up. If you want some other behaviour, like e.g. a new
archive every week or every day, you simply have to define your own
variable and use date to set it. Tailor to your needs in your own back-up
script!
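For instance, weekly or daily rolling archives could be driven by extra variables along these lines (a sketch; the WEEK and DAY variables are my own additions, not part of the original script):

```shell
#!/bin/bash
# Sketch: rolling-archive suffixes at different granularities.
# MONTH is the article's original; WEEK and DAY are hypothetical additions.
MONTH=`date +%m`   # 01-12: each archive is overwritten once a year
WEEK=`date +%V`    # 01-53: a new archive every week, kept for a year
DAY=`date +%a`     # Mon-Sun: a new archive every day, kept for a week
echo "Documents-${MONTH}.tar.gz"
echo "Documents-W${WEEK}.tar.gz"
echo "Documents-${DAY}.tar.gz"
```

Any of these can be used as the second argument to back_up_to_file in place of the MONTH suffix.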
Listing 3 shows how LEVEL can be used to differentiate between important
and often-changing directories on the one hand, and more stable directories
you do not want to archive every time you run your script on the other hand.
Currently my back-up script has three levels, but I’m considering splitting
off the small archives from level 1 in a separate level, so I could add a line
to crontab to take a quick back-up of some important directories once every
day.
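Such a split could then be driven from crontab along these lines (hypothetical entries; the script location and the schedule are my own assumptions):

```
# m  h  dom mon dow  command
0   23 *   *   *     /home/filip/bin/backup.sh 1   # every night: important dirs
0   3  *   *   6     /home/filip/bin/backup.sh 3   # Saturday morning: everything
```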
Splitting large files
Next, I’d like to split large files into chunks that are easier to handle when
transferring them to external media. This also makes it easier to move
archives between computers. Listing 4 shows a function that splits a large
file into pieces of 4 GB (hence the magic number 4,294,967,296 = 4 × 2^30),
together with a loop that finds all files that should be split.
Let’s start with a look at the function that splits the files. It receives one
parameter, the path to the file. The first thing the function does is to extract
FILIP VAN LAENEN
Filip van Laenen is a chief technologist at the Norwegian
software company Computas. He has a special interest in
software engineering, security, Java and Ruby, and likes
to do some hacking on his Ubuntu laptop in his spare time.
He can be contacted at f.a.vanlaenen@ieee.org
function back_up_to_file {
  echo "Backing up $1 to $2."
  tar -cvpzf ${BACKUPDIR}/$2.tar.gz \
    ${BASEDIR}/$1 &> ${BACKUPDIR}/$2.log
  sha256sum -b ${BACKUPDIR}/$2.tar.gz \
    > ${BACKUPDIR}/$2.sha256
}

back_up_to_file bin bin
back_up_to_file dev dev
back_up_to_file Documents Documents-${MONTH}
back_up_to_file .thunderbird/12345678.default \
  Thunderbird-${MONTH}
Listing 2
#!/bin/bash
## Creates a local back-up. The resulting files
#  can be dumped to a media device.

if [[ $# -ne 1 ]]; then
  echo "Usage:"
  echo "  `basename $0` <LEVEL>"
  echo "where LEVEL is the back-up level."
  exit
fi

BASEDIR=/home/filip
BACKUPDIR=${BASEDIR}/backup
MONTH=`date +%m`
LEVEL=$1
Listing 1
the file name from the path, so that we can log to the console in a nice way
which file we’re going to split. Next it removes any chunks it finds from
the previous run, using option f to suppress any error messages in case
there aren’t any chunks present. Then it does the splitting into chunks of
4 GB, using option d to create numeric suffixes instead of alphabetic. This
means that if the function would split a file called dev.tar.gz, the
names of the resulting chunks would be dev.tar.gz.00,
dev.tar.gz.01, etc… Finally, when the function is done, it removes
the original file, because we don’t need to have it around any more.
The function is called inside a loop, which goes through all files having
the .tar.gz extension. For each file it uses stat to calculate the total size
(-c%s), and then compares it to 4 GB. If the file is larger, our function to
split the file is called.
Done
Finally, at the end of the script, we write to the console that we’re done
(echo "Done."). I like to do that to indicate explicitly that everything
went well, especially since this script can take a while.
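Restoring is not covered by the script itself, but a helper could look something like this (my own sketch, not part of the original; the function name and parameters are assumptions, and it relies on the chunk and digest naming used above):

```shell
#!/bin/bash
# Sketch of a restore helper -- not part of the original script.
# Arguments: the back-up directory and the archive basename,
# e.g. restore_from_backup /home/filip/backup Documents-05
function restore_from_backup {
  BACKUPDIR=$1
  NAME=$2
  cd ${BACKUPDIR} || return 1
  # If the archive was split into 4 GB chunks, glue them back together.
  if [ -e ${NAME}.tar.gz.00 ]; then
    cat ${NAME}.tar.gz.* > ${NAME}.tar.gz
  fi
  # Verify the (reassembled) archive against the stored SHA-256 digest.
  sha256sum -c ${NAME}.sha256 || return 1
  # Unpack, preserving permissions, into the current directory.
  tar -xpzf ${NAME}.tar.gz
}
```

The digest check assumes the .sha256 file names the archive the same way it was created, so a bit-flip during transfer is caught before anything is unpacked.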
Storing the back-ups
There’s a little detail in Listing 1 that we haven’t dealt with yet: where do
we store the back-ups? The script as it stands can be used to create local
back-ups, i.e. putting the back-up files on the same disk as the original data,
on the one hand, or to write the back-ups directly to an external disk on the
other hand. Since I have enough space on my hard disk, I like to create the
back-up files locally first, and then plug in the external disk to transfer the
files. That’s also why I create SHA-256 digests, so I can detect when a
back-up file hasn’t changed and doesn’t need to be transferred to the
external disk.
Listing 5 shows how the back-up files we just created can be copied
conditionally to an external drive. It loops through all the SHA-256 digests
in the directory with the back-up files, and compares them to the SHA-256
digests in the target directory using cmp (silently, through the option s). If
both files exist, and their content is the same, cmp will return 0. In that case,
we don’t need to copy files to the target directory, and can continue with
the next SHA-256 digest. Otherwise we call the function that copies the
set of files associated to the SHA-256 digest.
The function to do that takes one parameter: the basename of the SHA-256
digest file, but without the extension. The set of files we then want to
copy consists of either the back-up file as a whole, or the different chunks
resulting from the split function, in addition to the log file and of course
the SHA-256 digest. We therefore have to start by removing the old back-up
file or the chunks from the split function. Next, we copy the back-up
file or the chunks in a small loop that lets us log to the console what we’re
doing. Finally, we also copy the log file and the file with the SHA-256
digest. Notice that copying the SHA-256 digest file is the last thing we do:
if the script is interrupted, we want to be sure that the next run will try and
copy this file set again.
The code in Listing 5 forms the body of its own script, separate from the
code in the other listings. In fact, the script contains only three more things:
the definition of BACKUPDIR and TARGETDIR, and writing to the console
that we’re done. It assumes that we want to keep a copy of the back-up
files on our hard disk, hence the use of cp to transfer the files to the target
directory. If you’d rather move the back-up files to the target directory, you
should not only use mv instead of cp to transfer the files, but also remember
to remove the set of files from the back-up directory in case of identical
SHA-256 digests.
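A sketch of that ‘move’ variant (my own adaptation, not the author’s script) might look like this: when the digests already match, the local set is deleted instead of skipped, and mv replaces cp everywhere else.

```shell
#!/bin/bash
# Sketch of the "move" variant of Listing 5 -- my own adaptation.
# The directories below are placeholders; adjust to your own setup.
BACKUPDIR=/home/filip/backup
TARGETDIR=/media/external/backup

function move_files {
  rm -f "${TARGETDIR}/$1.tar.gz"*
  for f in ${BACKUPDIR}/$1.tar.gz*
  do
    echo "Moving $(basename $f)."
    mv $f "${TARGETDIR}"
  done
  mv ${BACKUPDIR}/$1.log "${TARGETDIR}"
  # Move the digest last, so an interrupted run is retried next time.
  mv ${BACKUPDIR}/$1.sha256 "${TARGETDIR}"
}

for f in ${BACKUPDIR}/*.sha256
do
  [ -e "$f" ] || continue   # no digests present at all
  FILENAME=$(basename $f)
  BASEFILENAME=`echo ${FILENAME} | sed -e 's/.sha256$//'`
  if cmp -s ${BACKUPDIR}/${FILENAME} "${TARGETDIR}/${FILENAME}"; then
    # Already transferred: just clean up the local copy.
    echo "Removing local ${BASEFILENAME}."
    rm -f ${BACKUPDIR}/${BASEFILENAME}.tar.gz* \
          ${BACKUPDIR}/${BASEFILENAME}.log ${BACKUPDIR}/${FILENAME}
  else
    move_files ${BASEFILENAME}
  fi
done
```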
References
[1] See
[2] See
[3] See
# Backup of directories subject to changes
if [ ${LEVEL} -ge 1 ]; then
  back_up_to_file bin bin-${MONTH}
  back_up_to_file Documents Documents-${MONTH}
  back_up_to_file .thunderbird/12345678.default \
    Thunderbird-${MONTH}
  back_up_to_file dev dev-${MONTH}
  …
fi

# Backup of relatively stable directories
if [ ${LEVEL} -ge 2 ]; then
  back_up_to_file Drawings Drawings
  back_up_to_file Photos/2010 Photos-2010
  back_up_to_file Movies/2013 Movies-2013
  back_up_to_file .fonts fonts
  …
fi

# Backup of stable directories
if [ ${LEVEL} -ge 3 ]; then
  back_up_to_file Music Music
  …
fi
Listing 3
function split_large_file {
  FILENAME=$(basename $1)
  echo "Going to split ${FILENAME}."
  rm -f $1.0*
  split -d -b 4294967296 $1 $1.
  rm $1
}

for f in ${BACKUPDIR}/*.tar.gz
do
  FILESIZE=$(stat -c%s $f)
  if (( $FILESIZE > 4294967296 )); then
    split_large_file $f
  fi
done
Listing 4
function copy_files {
  rm -f "${TARGETDIR}/$1.tar.gz"*
  for f in ${BACKUPDIR}/$1.tar.gz*
  do
    FILENAME=$(basename $f)
    echo "Copying ${FILENAME}."
    cp $f "${TARGETDIR}"
  done
  cp ${BACKUPDIR}/$1.log "${TARGETDIR}"
  cp ${BACKUPDIR}/$1.sha256 "${TARGETDIR}"
}

for f in ${BACKUPDIR}/*.sha256
do
  FILENAME=$(basename $f)
  BASEFILENAME=`echo ${FILENAME} | sed -e 's/.sha256$//'`
  cmp -s ${BACKUPDIR}/${FILENAME} \
    "${TARGETDIR}/${FILENAME}" > /dev/null
  if [ $? -eq 0 ]; then
    echo "Skipping ${BASEFILENAME}."
  else
    copy_files ${BASEFILENAME}
  fi
done
Listing 5
ACCU Conference 2013
Chris Oldwood shares his experiences from
this year’s conference.
It’s April once again and that can only mean one thing – apart from the
school holidays and Easter eggs – it’s the ACCU Conference. This year
saw one of the biggest changes to the conference – a new venue. And not
just a new hotel but in a new city too! For the last 5 years I’ve only ever
been to the same hotel in Oxford and so with much trepidation I headed
down to Bristol. One of my biggest ‘worries’ was what was going to
replace all those long-standing traditions, like ‘Chutneys Tuesday’? But
hey, we’re all agile these days so we should embrace change, right?
Wednesday
Once again I didn’t get to partake in one of the tutorial days on the Tuesday
which was a shame as they looked excellent as usual. Instead I made my
way down on the Wednesday and arrived in the early afternoon. That
meant I missed lunch, but also more importantly the keynote from Eben
Upton about the Raspberry Pi and a talk from Jonathan Wakely about
SFINAE. The comments coming through on Twitter about these, and the
other parallel talks, generated much gnashing of teeth as I cursed my late
arrival.
Not wanting to take things lightly I dived head-first into Johan Herland’s
session about Git. I’ve done a little messing around with Git and have read
the older 1st edition O’Reilly book but I wasn’t sure whether I’d really
understood it. Luckily Johan walked us slowly through how Git works in
theory, and then in practice. I’m glad I saw this as it seems I was on the
right track but he explained some things much better than the book. I also
got to quiz him in the bar later about how some Subversion concepts might
translate to Git, or not as it seems, which was priceless.
Next up, was Pete Goodliffe doing the live version of his C Vu column on
Becoming a Better Programmer. This was split into two parts with the first
being Pete discussing what we even mean by ‘better’. He provided some
of his thoughts and there were the usual array of highly entertaining slides
to back them up – you are always guaranteed a good show.
The second part was provided by various speakers (chosen by Pete) who
got to spend 5 or so minutes discussing a topic that they believe makes
them a better programmer. Unsurprisingly these varied greatly from the
practical, such as Automation (Steve Love), to the more philosophical –
The Music of Programming (Didier Verna). The audience got to have a
quick vote on what they felt was the most useful and Seb Rose’s
Deliberate Practice got the nod.
Once the main sessions have finished for the day the floor is opened up to
everyone in the guise of Lightning Talks. These are short 5 minute affairs
where anyone can let off steam, share a tip or plug something (non-
commercial). Even though it was only the first evening there was a full
program with talks about such topics as Design Sins
(Pete Goodliffe), C++ Active Objects (Calum Grant), BDD with Boost
Test (Guy Bolton King), Communities (Didier Verna) and an attempt at
Just a Minute from Burkhard Kloss. With 12 talks in total it was a good
start.
Although not directly part of the ACCU conference, the Bristol & Bath
Scrum Group held an evening event afterwards where James Grenning
talked about TDD. Although it had been a long day already I couldn’t resist
squeezing one more talk in, especially from someone like this. Being
aimed at a wider audience than just developers meant there were an
interesting assortment of questions afterwards which was useful. One in
particular was the common question of writing the test first versus
immediately after, which always causes an interesting debate.
Thursday
A full English breakfast and plenty of coffee set me up for the day and I
was greeted with my first 2013 keynote, courtesy of Brian Marick. He
started with an interesting tangent about crickets and how they tune in to
a mate and eventually got onto the topic of how to cope with the inevitable
natural decay older programmers will suffer from. The thing that has stuck
with me most is the advice of converting ‘goal attainment’ to ‘maintaining
invariants’. This he explained by showing how a baseball fielder might try
to catch a ball by moving himself so he sees a linear trajectory rather than
trying to anticipate a parabolic path. This was one of the most enjoyable
keynotes I’ve seen.
I didn’t have much choice in what I went to after Brian as it was my turn
to step up to the plate. This was my third year of speaking and I’d like to
say I might finally be getting the hang of it. At least, I don’t think anyone
fell asleep.
Unbeknownst to me until I checked my Twitter feed afterwards, there was
a small bug in the code on one of my slides. This made my next choice
easy – The Art of Reviewing Code with Arjan van Leeuwen. There was
plenty of sound advice here, particularly around the area of getting started
in code reviews, where it’s important to make both parties comfortable to
avoid a sense of personal attack. As Arjan pointed out, time is often the
perceived barrier to doing reviews, but it reminded me how valuable it can
be.
That was only a short session and the other 45 minutes I spent with Ewan
Milne as he discussed Agile Contracts. Although I don’t get involved
(yet?) in that side of the process I still find it useful to comprehend the other
parts of an agile approach. Understanding how the different forms of
contract attempt to transfer the risk from one side to the other was
enlightening – especially when you consider the role lawyers try to play
in the process. Ewan normally has his hands full with organising the
lightning talks so it was good to see
him speak for longer than 30 seconds.
My final session for the day was to
be spent with Michel Grootjans. With a title
of Ruby and Rails for n00bs I felt that suited
my knowledge of Ruby right down to the
ground and hoped I would get to see what
some of the fuss is about. It actually turned
out to be way more useful than I expected
because Michel developed a simple web app using a full-on TDD approach
too. Not only did I get a small taste for what Ruby and Rails is about but
I also saw someone develop a different sort of application using different
tools in a more enterprise-y way.
Once more, after the main sessions had completed, most of us convened
to the main hall to listen to another round of lightning talks. This time there
was a total of 13 topics with an even wider range than the day before.
Notably for me, given my attendance at an earlier session on Git, was a
rant from Charles Bailey about Git being evil. There were also complaints
about poor variable naming (Simon Sebright) and why anyone would use
C++ when D exists (Russel Winder, naturally). On the more useful front
we saw Dmitry Kandalov implement an Eclipse plug-in in 5 minutes and
a C++ technique that seems close to C#’s async/await mechanism (Stig
Sandnes). The abusive C++ award though goes to Phil Nash with his
<- operator for implementing extension methods. Oh, and Anders Schau
Knatten used ‘Science’ to help us decide that C# is in fact the best
programming language.
Friday
What better start to the day than a
keynote from the very person we
have to thank for C++ – Bjarne
Stroustrup. It’s been some years since he graced the ACCU conference
with his presence and so like many I was looking forward to what he had
to say about the modern state of C++. His presentation was generally about
the new features we now have in C++ 11 as he had a separate session
planned for C++ 14. However there was as much about how the
established practices (e.g. RAII) are still the dominant force and
critical to its effectiveness. Naturally there were plenty of
questions and he pulled no punches when airing his opinion on
the relationship between C and C++.
With my C++ side ignited I felt it was only right that I attend Nico Josuttis’
talk about move semantics and how that plays with the exception safety
guarantee of a function like push_back(). He entered the murkier depths
of C++ to show how complex this issue is for those who produce C++
libraries. When someone like Nico says C++ is getting ‘a little scary’ you
know you need to pay attention. My ‘moment of the conference’ happened
here when, in response to a question for Nico about the std::pair class,
Jonathan Wakely instantly rattled off the C++ standard section number to
help him find the right page…
We all love writing fresh, new code, but many of us spend our lives
wallowing in the source code left to us by others. Cleaning Code by Mike
Long was a session that showed you why refactoring is important and
some of the tools and techniques you can use to help in the fight against
entropy. This was a very well attended talk and rightly so, with a good mix
of the theoretical and practical. One tool in particular for finding duplicate
code certainly looked sexy and will definitely be getting a spin.
After another round of coffee I decided to close the day off by listening to
the C Vu editor (Steve Love) explain why C# is such a Doddle to learn and
use. Yes, with his tongue very firmly placed in his cheek. As someone who
uses C# for a living it’s easy to forget certain things that you take for
granted with something like C++, such as the complexity guarantees of the
core containers. Generics also came in for a bit of a bashing as a watered-down
version of templates. Anyone who thinks the world of C# is dragon-free
would have done well to attend.
The final set of lightning talks took their cue
from the volcano fiasco a few years ago.
Back then, due to speaker problems caused by a lack of air transport, a set
of 15 minute lightning keynotes were put together instead and that’s the
length these ones adopted. Seb Rose opened the proceedings with a
response to an earlier lightning talk about whether the term ‘passionate’
is a useful one for describing the kind of people we want to work with,
given its dictionary definition. Much nodding of heads suggested he was
probably right. He was followed by me trying to show how many of the
old texts, such as the papers by David Parnas, are still largely relevant
today. And, more importantly, they’re often cheap. Tom Gilb was next up
to answer a question I had posed in my talk about quantifying robustness.
Let’s face it, we knew he would. Finally Didier Verna got to extend his
earlier slot on The Pete Goodliffe Show to go into more detail about the
similarities he sees between music and programming. I’ve never really
given Jazz a second thought before, and even though we only got a
30 second burst of his own composition my interest is definitely piqued.
The Friday evening always plays host to The Conference Dinner, which
is a sort of banquet where we get to spend a little more time mingling with
the various speakers and attendees. This is a perfect opportunity to corner
a speaker and ask some questions you didn’t get a chance to earlier. Jon
made sure the tables regularly got mixed up to keep the flow of people
moving between courses which helps you mix with people you might not
normally know. After the dinner there was the Bloomberg Lounge to keep
us entertained through the night, if you fancied staying up until silly
o’clock.
Saturday
There was another change to the session structure this year as the Saturday
keynote was moved to the end of the day; instead the normal sessions
started earlier. Sadly I overdid the conference dinner again and so an early
start was never really on the cards.
How to Program Your Way Out of a Paper Bag seemed like the
ideal eventual start to the day. Frances Buontempo had sold the
idea well – is it possible to actually write a program to get out of
a paper bag? Obviously there was a certain amount of artistic
licence, but ultimately she did it, and along the way we got to
find out a whole lot about machine learning. I was a little worried
there might be a bit too much maths at that time of day but it was
well within even my meagre reach.
My final session of the conference was to be with Hubert Matthews – A
History of a Cache. This was a case study of some work he had been
involved in. The session had a wonderful narrative as he started by
explaining how the system was originally designed, and then went on to
drop the bomb on how he needed to find a huge performance boost with
the usual array of ‘impossible’ constraints. Each suggested improvement
brought about a small win, but not enough by itself and that’s what made
it entertaining. It also goes to show what can be achieved sometimes
without going through a rewrite.
Epilogue
I keep expecting the magic of this conference to wear off, but so far it seems
to be holding fast. I have looked around at some of the other conferences
but I’m just not as impressed by the content or the
price for that matter. I thought the new venue
worked well and even though it was a little further
to travel it wasn’t exactly onerous. More of the
talks were filmed this year and so hopefully I
should be able to catch up on some of those I
missed. With 5 concurrent sessions running in
each time slot you’re never going to get to see
everything you want to, but that’s just another reason to keep coming back
year-after-year – to try and catch up on everything you’ve missed in
previous years. Of course in the meantime the world has moved on and
there’s another load of new stuff to see and learn!
Writing a Cross Platform Mobile App in C#
Paul F. Johnson uses Mono to attain portability.
A brief piece of history
Many years back, Ximian (a small bunch of very nice people) decided
to write an open source version of the .NET language based on the
ECMA documentation. Initially for Linux, it soon spread to Mac,
BSD and many other platforms (including Windows). This was good and
fine. Novell then bought Ximian and signed what was considered (in the
non-SuSE part of the open source community at least) a deal with the
devil – the devil being Microsoft.
Time moved on. Novell was bought out and so Xamarin was formed, their
task to carry on developing the open source Mono framework, which was
fast growing to be a recognised force for good.
While all of this was going on, Google moved into the mobile phone
business with Android and Apple released their iPhone. Android (as you
may know) has a Linux kernel at its heart and apps are coded in Java.
iPhones use Objective-C as the language of choice. Google controlled its
app store and Apple, in true Apple fashion, pretty much dictated under the
guise of ‘quality’ what could and could not be distributed through them.
This is fine and dandy with one problem – as with the 8-bit systems of old,
if you wanted your app to run on both iPhone and Android, you had to do
a lot of work to port the code over; that, or employ that rare breed of
developer who can work in both Objective-C and Java.
It doesn’t take a genius to realise that if a company can come up with a
method to write once, deploy many, as was the case with .NET, then they
would win the day and praises be sung. Step forth Xamarin. Using the
Mono framework, they released .NET for both iPhone and Android. While
the UI aspect is not the same, a large amount of core functionality could
be moved between the platforms with minimal work (it is after all just using
the .NET framework that we all love and use), reducing both development
time and final cost. For the iPhone, as the code generated reverts back to
Objective-C and is linked against Apple’s SDK, apps created with
Monotouch (the iOS version) are available in the iOS store.
What this small series is going to show is how simple it is to achieve both
an iPhone and Android version of the same app with essentially the same
code. I will be porting some code I wrote [1] quite a few years back to run
on both platforms. It isn’t going to do anything amazing, but will allow
you to download, read and reply to your gmail.
Xamarin have released versions of monotouch and monodroid that will run
on the emulator (Android) or simulator (iOS) [2] so you can see and test
the final product. The source code for these articles is held online [3].
My recommendation is that you install Xamarin Studio to code with. While
there are plugins for VisualStudio 2010 and 2012, my experience with
them has not been great, whereas Xamarin Studio is rock solid.
Let’s get on with it then
The basis of this app is communicating with the Google servers to allow
a user to read and reply to their emails. To do this, we need a basic SMTP
and POP3 system. SMTP is supported natively, POP3 isn’t, but it’s not
difficult to code a small POP3 library that allows access to the facilities.
A word of warning
When writing code that will work between both iOS and Android, it is not
only the UI that needs to be considered. Monotouch for .NET developers
is a much simpler system to use. Instantiating new classes which generate
new views is very similar to how it is done in a standard Winforms
application:
  NewView nv = new NewView(params);
will create a new instance of NewView with whatever parameters are
needed to be passed in – it is essentially the same as in a Winforms
application.
Android development is not like this. For Android, the safest way to think
about how an app is structured is that there are a lot of small apps (called
Activities) that you need to get to work together. While you can certainly
pass certain objects between activities (the likes of string, int, bool
etc.), passing the likes of classes or bitmaps is not going to happen.
To start a new activity:
  Intent i = new Intent(this, typeof(class));
  StartActivity(i);
where class is the name of the activity class being started.
Passing simple objects can be done with
  string hello = "Hello";
  …
  i.PutExtra("name", hello);
and read back in the receiving class using
  string message =
    base.Intent.GetStringExtra("name");
Alright, it’s not rocket science, but it leads to two problems: portability (it’s
not available in iOS) and propagation (the next activity will also have to
have the same PutExtra/GetExtra code to receive the data).
This difficulty can be overcome by using either a standard interface block
or, better than that, a public static class. The big advantage of having the
static class is that generics, arrays, bitmaps and anything else that can be
bundled into a static class can be used. It is also completely portable
between the platforms – as long as nothing platform-specific is included
in there, of course!
Of course, there is nothing to stop instantiation between classes on Android
in the more usual .NET form, but it will not fire up the activity, so no view
is shown unless a bit of extra legwork is done. I will be avoiding that route
for this series!
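As a sketch of that shared static class idea (my own illustration, not code from the article; the class and member names are hypothetical), it could look like:

```csharp
using System.Collections.Generic;

// Hypothetical shared 'blackboard' class -- usable from any Android
// activity or iOS view controller, with no PutExtra/GetExtra plumbing.
public static class SharedState
{
    public static string UserName { get; set; }
    public static List<string> Messages = new List<string>();
}

// Elsewhere, in any activity:
//   SharedState.UserName = "paul";
//   string who = SharedState.UserName;
```

Because the class lives in the shared, platform-neutral code, both the Android and iOS projects can reference the same state.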
UI design
This app will not win any design awards, but then it’s not meant to. It is
simple and functional. The UI is greatly different between iOS and
Android. To that end, I will concentrate on that aspect for the remainder
of this article.
Android
The way to think about how to design for Android is to think in either
vertical or horizontal boxes. Take the following layout:
PAUL F. JOHNSON
Paul used to teach and was one time editor of a little
known magazine called C Vu. He now writes code
professionally for a living – primarily for Android and
iOS, but only ever in .NET thanks to Xamarin.Android
and Xamarin.iOS.
While it would seem a simple enough design, it has to be considered along
the lines of boxes within boxes, viz: we have a horizontal outer; the next
layer is two horizontals; the one on the right then has 4 verticals, with the
bottom one having two horizontals in it.
Planning can be a bit tricky when deciding which way the layout has to be,
but it’s not that bad. By default, a new layout contains a vertical
LinearLayout.
Within each layout, you can put pretty much any type of view (most of the widget classes are derived from the View class, so you have a TextView, EditText, ImageView and so on). Android here becomes very similar to .NET in that the views map onto the standard .NET controls (for example TextView = Label, EditText = TextBox, ImageView = PictureBox), but like the .NET widgets, these views can be 'themed' using XML.
A view can be used by any number of activities on Android. Monodroid (thankfully) comes with a UI designer as part of the MonoDevelop or VS plug-in.
SetContentView(Resource.Layout.foo);
And that's all it takes to get the UI to display.
Attaching events to widgets is simple as well. My UI has a TextView called textView.
TextView text = FindViewById<TextView>(Resource.Id.textView);
Attaching a click event can be done in a number of ways:
1. text.Click += delegate {…;};
2. text.Click += (object s, EventArgs e) => { someMethod(s,e); };
3. text.Click += delegate(object s, EventArgs e) {…;};
4. text.Click += HandleClick;
Each has a different purpose:
1. Used for performing a particular task where the event parameters can be completely ignored (for example, performing a calculation or calling another method with any number of parameters, the return value of which is used in some way).
2. Used as a standard event handler while also allowing methods to be overloaded (so pass s, e and, say, an int, string and bool as well).
3. Similar to (1), except that now the event parameters are being passed in and can therefore be accessed and worked with.
4. HandleClick does what it says on the tin: this is a call to the method HandleClick, with the object and event args passed into the call. It is handled outside of the OnCreate method.
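As a sketch of option 4 in context (assuming a layout containing a TextView with the id textView; the handler body is illustrative), the code inside an Activity might read:

```csharp
protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);
    SetContentView(Resource.Layout.Main);

    TextView text = FindViewById<TextView>(Resource.Id.textView);
    text.Click += HandleClick;   // option 4: a named handler
}

// Defined outside OnCreate; receives the sender and the event args.
void HandleClick(object sender, EventArgs e)
{
    ((TextView)sender).Text = "Clicked!";
}
```

The named-handler form keeps OnCreate short and lets the same method be wired to several widgets, since the sender identifies which one raised the event.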
iOS
As you may expect from Apple, everything on their devices is a rich experience, and for that to happen the developer has to be free to let their mind roam, not be constrained by box limitations – generally, whatever they want can go wherever they want it.
All iOS UI development has to be done on a Mac. This may change in the future, but for now it's safe to say that all design will be done using XCode, which is free to download from Apple. The most recent version of Xamarin.iOS will allow you to code for Apple devices on a PC, as long as there is a networked Mac on which XCode can be accessed.
With XCode, you can put things wherever you like and don’t have the same
rigid design constraints as you do for Android or Windows Phone.
Unlike Android, though, the communication between the iOS UI and the application is a bit more complex. With iOS you have two types of interface: an outlet and an action. An outlet is a reference through which code can read or manipulate a widget; an action is the method that fires when the widget is interacted with. A widget can have both an action and an outlet. Take the following code
btnClickMe.TouchDown += delegate {
btnClickMe.SetTitle ("Clicked",
UIControlState.Highlighted);
};
Here, btnClickMe is used both ways: the TouchDown handler plays the role of the action, and the reference used to call SetTitle is the outlet. When creating the UI, though, it is usually sufficient to say whether a connection is an outlet or an action.
iOS calls the view into existence when the class it belongs to is instantiated. However, there are some considerations to add to that.
public override void ViewDidLoad()
This method is called immediately after the class has been instantiated. At this point, you can either add in what you want the outlets to do, or call another method to do that for you (which can sometimes be preferable). This is similar to the Android OnCreate method.
public override void ViewDidUnload()
This is called when the class is finished with. It removes the view, freeing up the memory it previously occupied.
Unlike Android, the types of (say) Click are different. Typically in Android you have Click; in iOS there are nine different Touch events covering cancel, drag, clicks inside an object and even a plain normal click (TouchDown). It could be considered overkill, or it could be considered as giving the developer far greater control over every aspect of the development cycle. Either way, there are a lot of them.
Memory management
This is not an issue for iOS. The ViewDidUnload() method removes the view, frees the memory and makes life easy.
Not so on Android.
The reason for this is easy enough. Monodroid is C# on top of Java – think of it more as a glue layer than anything else. When an object is created in C#, the C# garbage collector disposes of the object when it's done with. The problem is this: when dealing with the UI, the glue creates a Java object for (say) a TextView widget, and everything to do with that widget is handled through the glue layer. At the end of its life, the C# GC will clear away only the reference it has used. It does not dispose of the underpinning object from the Java layer – the Java object sits there, hogging memory, until the app falls over dead.
For Android, there are two simple ways to ensure you don't run out of memory:
1. Whenever you can, if a process is memory intensive (typically anything to do with graphics), employ something like using (Bitmap bmp = CreateBitmapFromFile(filename)) { }. Once out of scope, the object is disposed and the memory can be freed.
2. At the end of the activity, explicitly prompt a collection by calling the GC:
protected override void OnDestroy()
{
  base.OnDestroy();
  GC.Collect();
}
will do this for you.
That’s enough for this time. Next time I’ll start to look at code and how it
differs between the platforms to do the same task.
Let’s Talk About Trees
Richard Polton puts n-ary trees to use parsing XML.
This article will show how to define a tree data structure in both C#
and F# and then will proceed to create a tree and load the contents of
an XML file containing SPAN data into it. Let’s start with a quick
recap over tree structures.
The classic binary tree, which contains a value at each node, might be
represented in C# as shown in Listing 1.
As can be seen, the data structure is defined recursively – that is, it is defined in terms of itself. Any node therefore contains zero, one (because the code allows null to be passed) or two subtrees in addition to a value of type T. Such a tree might be initialised using
var t = BinaryTree.Node( 5,
  BinaryTree.Node( 8, BinaryTree.Leaf(9), BinaryTree.Leaf(7)),
  BinaryTree.Node( 2, BinaryTree.Leaf(1), BinaryTree.Leaf(3)));
In F# we might define the tree structure as
type tree<'T> =
  | Node of 'T * tree<'T> * tree<'T>
  | Leaf of 'T
which also makes it clearer that any single (sub-)tree is either a branch
point, containing both a value and left and right branches, or a leaf,
containing only a value. We might then create an object of this type using
let t = Node (5,
Node ( 8, Leaf 9, Leaf 7),
Node ( 2, Leaf 1, Leaf 3))
See [1] and [2] for further information on the definition and traversal of
binary tree structures.
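As a quick sketch of the traversal mentioned above (the function name is mine), an in-order walk of the F# tree collects the values left to right:

```fsharp
let rec inorder t =
    match t with
    | Leaf v -> [v]
    | Node (v, left, right) -> inorder left @ [v] @ inorder right

// For the tree t built above:
// inorder t = [9; 8; 7; 5; 1; 2; 3]
```

Pre- and post-order variants follow the same shape, moving the [v] before or after the two recursive calls.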
Let us now generalise this to an n-ary tree. In C# we might write this as
Listing 2, which is roughly the C# equivalent of the F# tree definition given
by the discriminated union (see [3] for a discussion of Algebraic Data
Types)
type tree<'T> =
  | Node of 'T * tree<'T> list
Let us now pause awhile and divert our attention to the reason why this
subject presented itself in the first place. XML.
XML – it’s supposed to be the Holy Grail of data formats, easily consumed
by both the computer and the lucky human reader. I recently had the
distinct pleasure of working with some SPAN XML files [4] published by
the Australian Stock Exchange. These files are freely available for
download and a snippet from one of these files is reproduced here.
This snippet (Listing 3), lightly edited, was extracted from ASXCLEndOfDayRiskParameterFile130305.spn.
As might be expected, the XML represents a hierarchical data set. The highest-level element in the snippet, clearingOrg, contains both simple
RICHARD POLTON
Richard has enjoyed functional programming ever
since discovering SICP and feels heartened that
programming languages are evolving back to
LISP. He likes ‘making it better’ and enjoys riding
his bike when he can’t. He can be contacted at
richard.polton@shaftesbury.me
<clearingOrg>
<ec>ASXCLF</ec>
<name>ASX Clear Futures</name>
<curConv>
<fromCur>AUD</fromCur>
<toCur>USD</toCur>
<factor>0.000000</factor>
</curConv>
<pbRateDef>
<r>1</r>
<isCust>1</isCust>
<acctType>H</acctType>
</pbRateDef>
<pbRateDef>
<r>4</r>
<isCust>1</isCust>
<acctType>H</acctType>
</pbRateDef>
</clearingOrg>
Listing 3
public class BinaryTree<T>
{
public T Value { get; private set; }
public BinaryTree<T> Left { get; private set; }
public BinaryTree<T> Right { get; private set; }
public BinaryTree(T value, BinaryTree<T> left,
BinaryTree<T> right)
{
Left = left;
Right = right;
Value = value;
}
}
public static class BinaryTree
{
public static BinaryTree<T> Node<T>(T value,
BinaryTree<T> left, BinaryTree<T> right)
{
return new BinaryTree<T>(value, left, right);
}
public static BinaryTree<T> Leaf<T>(T value)
{
return new BinaryTree<T>(value, null, null);
}
}
Listing 1
public class NaryTree<T>
{
public Tuple<T,List<NaryTree<T>>> Node
{ get; private set; }
public List<NaryTree<T>> SubTrees
{ get { return Node.Item2; } }
public NaryTree(T value,
List<NaryTree<T>> subTrees)
{
Node = Tuple.Create(value, subTrees);
}
}
Listing 2
and complex data elements, e.g. name and pbRateDef respectively. (Before you ask, no, I didn't change the names of the elements. They really are called ec and r!)
We want to load the XML and parse it into a data structure using F#. We
want to do this so that we can subsequently query the data set automatically
instead of having to rely on eyeballs and Notepad. I say Notepad because,
although the data sets are not especially large, they do appear to be large
enough to cause both Internet Exploder’s and Visual Studio’s XML
renderers to fail, which leaves the ever-faithful Notepad as our key
inspection vehicle.
The first attempt at parsing this XML made use of discriminated unions
like the below:
type SpanXMLClearingOrg =
| Ec of string
| Name of string
| CurConv of SpanXMLCurConv list
| PbRateDef of SpanXMLPbRateDef list
given prior similar definitions for SpanXMLCurConv and SpanXMLPbRateDef. This layout maps trivially to the XML representation, and so building a parser for it is very easy.
Whilst it may be possible to parse this XML using LINQ to XML using a
dictionary as demonstrated in [5], in this version of the parser, the XML
is read using recursive functions such as seen in Listing 4.
As can be seen, the function makes use of an accumulator (see the article in the previous CVu for a quick intro, or htdp.org [6]) to store the state of the parsed structure up to the current point. In the example code the state is called acc and is a list of SpanXMLClearingOrg. Other than that, the
parser simply repeats the above form for each data structure that is to be
read from the XML. That is, compare the name of the current element with
one of a set of possible names and take the appropriate action, which is
one of converting the element value to a specific data type, e.g. int, or reading an embedded data structure, e.g. PbRateDef. The result is then
prepended to the accumulated list of data structures loaded thus far and
then the function is called again. If the name of the current element does
not match any of the possible names then the function exits returning the
accumulated list to the caller. Thus the tree is built up as the XML is
consumed.
In the end we had a tree of data but unfortunately it turned out to be very
difficult to query. So much so, in fact, that an alternative representation
was sought.
Instead of the ‘natural’ mapping from XML to structures as shown above,
we chose to use a traditional functional tree data structure.
In the literature, for example [7], functional tree structures are presented
for binary trees. They look like this:
type tree =
| Leaf of string
| Node of tree * tree
In other words, every node in the tree contains
either two further trees or a value, in this case a
string. Note that the data structure is defined
recursively.
Our tree, however, is slightly different. It is not a
binary tree but is an n-ary tree (where n depends
on the actual location in the tree). Also each node
has one or more values. Additionally, each of the
different levels of the tree, at least in the XML, can
only be created from a well-defined subset of data
types. We can tackle the fact that a node has a
value as well as a subtree by defining our tree
structure as
type tree =
| Leaf of string
| Node of string * tree * tree
This is a bit unsatisfactory, though, primarily because of the unnecessary distinction between Leaf and Node, as all the nodes in our tree contain data. However, we can modify the definition to accommodate this and extend it to multiple sub-trees using
type tree<'T> =
| Node of 'T * tree<'T> list
Et voilà! Well, almost. We now have a recursive tree structure whose every
node can contain a datum as well as zero (because the list can be empty)
or more sub-trees. The next challenge is how to render our data structures
such that they will fit in this new tree.
We can solve this trivially by defining an algebraic data type to be the union
of all the possible types of data that can be stored at a node. In order to
retain the structure of the original XML, we choose to create records
(which are like ‘C’ structures) that hold the data values and then the union
refers to all the record types. So, for example, we can define the record
type SpanXMLCurConv =
  {
    FromCur : string
    ToCur : string
    Factor : float
  }
to represent the currency conversion data element curConv. This XML element does not contain any complex XML elements itself, but its parent, the XML element clearingOrg, clearly does. We choose to represent clearingOrg as the record
type SpanXMLClearingOrg =
{
Ec : string;
Name : string;
}
Note that the nested complex XML elements are not stored within the
record in this implementation (unlike in the first implementation of the
parser). This is because we will be storing the nested complex elements in
the list of sub-trees. However, we still need to define a union so that it is
possible to store one of a number of distinct data types in the data value
of the node. So we write
type nodeType =
| ....
| SpanXMLCurConv of SpanXMLCurConv
| ...
| SpanXMLClearingOrg of SpanXMLClearingOrg
| ...
where the first of the two names in the union is the name of the
discriminator and the second is the name of the type that is stored therein.
Now we can rewrite our tree type definition as
type tree =
| Node of nodeType * tree list
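As an illustrative sketch (using values from the XML snippet in Listing 3; building the tree by hand rather than via the parser), a small fragment of such a tree could be constructed as:

```fsharp
let cc  = { FromCur = "AUD"; ToCur = "USD"; Factor = 0.0 }
let org = { Ec = "ASXCLF"; Name = "ASX Clear Futures" }

// The clearingOrg node holds its simple fields in the record and its
// complex children (here just one curConv) in the subtree list.
let theTree =
    Node (SpanXMLClearingOrg org,
          [ Node (SpanXMLCurConv cc, []) ])
```

This mirrors the XML: the record carries the element's simple content, while nested complex elements become sub-trees.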
let rec readClearingOrg (reader:System.Xml.XmlReader) acc =
  match reader.Name with
  | "ec" ->
      SpanXMLClearingOrg.Ec (reader.ReadElementContentAsString()) :: acc
      |> readClearingOrg reader
  | "name" ->
      SpanXMLClearingOrg.Name (reader.ReadElementContentAsString()) :: acc
      |> readClearingOrg reader
  | "curConv" ->
      (SpanXMLClearingOrg.CurConv
        (readCurConv (reader.ReadStartElement() ; reader) [] )) :: acc
      |> readClearingOrg (reader.ReadEndElement() ; reader)
  | "pbRateDef" ->
      (SpanXMLClearingOrg.PbRateDef
        (readPbRateDef (reader.ReadStartElement() ; reader) [] )) :: acc
      |> readClearingOrg (reader.ReadEndElement() ; reader)
  | _ -> acc
Listing 4
The advantage of a data structure of this form is the ease with which it can be traversed and, therefore, queried. Given the above definition, we can write queries to extract all curConv elements very simply (Listing 5).
If we want to find a specific conversion, say from GBP for example, then
we could modify our function to take an extra parameter and to use this as
a guard in the ‘match’ (Listing 6).
It couldn’t be easier. This works because of the power of the F# pattern
matching. This is analogous to the switch statement in C-style languages
except that the pattern that is being matched is not constrained to compile-
time constants. Type matching, as here, is commonplace, as are more
sophisticated matches on the return values of functions. Look at Functional.Switch for an example of a similar construct in C# (see both prior editions of CVu and functional-utils-csharp [8] on Google Code).
So it looks like the pain of transforming the XML into our new tree
structure is going to pay dividends (boom! boom!). All that is missing now
is that transformation. The ‘read’ functions all have the same format. On
account of there being so many record
types having such similar structure, we
created a code generator to simplify the
work. This code generator produces the
basic reader function which we then
manually modify to account for the
nested structures. (This was a trade-off;
time to code vs time to edit by hand, and
the latter won the day.) For the terminally curious, the code
generator lives in the span-for-margin project [9].
Notice that, although findAllCurConv is a very simple query function, it has a shortcoming in that it only returns the nodes which satisfy the criterion supplied and does not provide the route taken through the tree to reach them. We want to modify the function so that a path to each successful node is also returned.
First, then, we need to change the internal find function to return a 2-tuple, having the matching node and the path to the matching node as its components. This 2-tuple becomes our accumulator. Therefore, on a successful match we return
(node, (uNode :: path |> List.rev)) :: acc
where node is the Node which has been matched, uNode is the SpanXML record, path is a list of SpanXML records traversed to reach this point and acc is the accumulator. Note that we have to reverse the path list once we have a match because functional lists prepend new items to the head rather than append to the tail.
If the function fails to find a matching node, i.e. we have an
unsuccessful termination condition, then at the bottom of a
given branch we just return the current state of the
accumulator.
In the 'in-between' state, where we have a node which does not match but is not a leaf node (i.e. it has a non-empty list of sub-trees), we need to prepend this node to the path and then call the recursive find function again for each of the subtrees under this node. And so we can write Listing 7.
collect is the F# analogue of SelectMany or, more precisely, SelectMany is based upon the algorithm encapsulated by collect. That is, given a function which accepts a single element and returns a list, evaluate this function for every element in the container and flatten the results into a single list.
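For example, a standalone sketch of collect on a plain list:

```fsharp
// Each element expands to a small list; the results are flattened.
List.collect (fun x -> [x; x * 10]) [1; 2; 3]
// gives [1; 10; 2; 20; 3; 30]
```

This is exactly the map-then-concatenate shape that the tree queries rely on to merge results from all sub-trees.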
Now suppose we want to find the Div nodes in the tree which satisfy some predicate. We could write very similar code to findCurConvWithPath, changing only the nodeType name in the match (Listing 8), using, for example:
let divDateChk date (div:SpanXMLDiv) =
  div.SetlDate = date
findDivs (divDateChk 20130401) theTree
Clearly, the findNodeTypeWithPath pattern will be repeated for all node types to be queried in the tree. Instead of copying the entire function, perhaps there is some way we can generalise the findN function.
Active patterns [7] are the obvious choice here. This would leave us with
let findNodeWithPath actPattern f tree =
let rec findNode tree acc path =
match tree with
| actPattern ....
but the problem with this is that it does not appear to be possible to pass
an Active Pattern as a parameter to a function. If any of you know how to
do this, my email address is in the byline. Otherwise, huh! So much for all
functions being first-class objects in F#. Therefore, we would like to be
let findAllCurConv theTree =
  let rec findAllCurConv' theTree acc =
    match theTree with
    | Node (SpanXMLCurConv (_), _) as node -> node :: acc
    | Node (_, subTrees) ->
        subTrees |> List.collect (fun node -> findAllCurConv' node acc)
  findAllCurConv' theTree []
Listing 5
let findCurConvFrom fromCur theTree =
let rec findCurConvFrom' theTree acc =
match theTree with
| Node (SpanXMLCurConv (curConv), _) as node
when curConv.FromCur = fromCur ->
node :: acc
| Node (_, subTrees) -> subTrees |>
List.collect (fun node -> findCurConvFrom' node acc)
findCurConvFrom' theTree []
let allConversionsFromGBP = findCurConvFrom "GBP" theTree
Listing 6
let findCurConvWithPath fromCur tree =
  let rec findCurConv tree acc path =
    match tree with
    | Node (SpanXMLCurConv (cc) as uNode, _) as node
        when cc.FromCur = fromCur ->
        (node, (uNode :: path |> List.rev)) :: acc
    | Node (_, []) -> acc
    | Node (uNode, trees) ->
        trees |>
        List.collect (fun node -> findCurConv node acc (uNode :: path))
  findCurConv tree [] []
Listing 7
let findDivsWithPath pred tree =
let rec findDivs tree acc path =
match tree with
| Node (SpanXMLDiv (div) as uNode, _) as node
when pred div ->
(node, (uNode :: path |> List.rev)) :: acc
| Node (_, []) -> acc
| Node (uNode, trees) ->
trees |>
List.collect (fun node -> findDivs node acc (uNode :: path))
findDivs tree [] []
let findDivs f tree = findDivsWithPath f tree |> List.map fst
Listing 8
able to define a general Active Pattern which
accepts a parameter. This parameter would then
be the type name that we wish to check.
let (|Check|) theType input =
...
However, this quickly becomes unwieldy leading
to a worse mess of code than we had in the original
problem and so we must seek an alternative
approach. Given that we are not going to be able
to use an Active Pattern, let us pass instead a
predicate-like function that returns an option
(again, see previous CVu and Google Code [8]).
Even though adopting this approach means that we will have to perform
an additional pattern match step outside of our generic function, it should
be an improvement. See Listing 9.
In this function, pattern is a function with signature (tree -> (nodeType * 'a) option). For example, the following function divNode could be used as the pattern.
let divNode input =
match input with
| Node (SpanXMLDiv (record) as uNode, _) as
node -> Some(uNode,node)
| _ -> None
However, this doesn't allow us to filter the Div nodes of interest as we can in findDivsWithPath. If we modify the function to accept an additional parameter, then we can pass a curried function into the find function. So we write
let divNode f input =
match input with
| Node (SpanXMLDiv (record) as uNode, _)
as node when f record -> Some(uNode,node)
| _ -> None
where divNode has been redefined to accept a predicate f. Now we can write
let divs = findNodeWithPath (divNode fn) tree
for some given value of fn to populate divs with all the Div nodes in tree that satisfy fn. An example of fn is
let fn (div:SpanXMLDiv) = div.SetlDate > 20100301
With this solution it is still necessary to copy and edit the XNode function for each of the types in the tree, but this is a simpler piece of code which does nothing more than return a success value or None – a reasonable compromise.
Finally, we present the boiler-plate code to populate one of the SpanXMLxxx records, specifically the SpanXMLClearingOrg. The steps are simple. We initialise a dictionary which records the state (incomplete, for the most part) of the current record being created. This dictionary contains an entry for each of the fields in the record, i.e. each of the simple XML elements contained within the clearingOrg XML element. Next we define a function, read, which transforms each element into a field in the record. The simple elements are read in directly through an appropriate conversion. Again, this could probably be performed using LINQ-to-XML in the manner demonstrated in [5], especially as we are using a dictionary to store the state, but we will persist with the recursive solution for now.
The complex elements are read in using their own equivalent read function and prepended to the state. Note that there are, in principle, two separate vehicles for retaining the state information: the dictionary already discussed for the simple types, and a list for each of the complex types. Having read the clearingOrg element and its constituent parts, we then construct the SpanXMLClearingOrg record, setting the fields accordingly and concatenating all the lists of complex XML elements together into a single list – the list of subtrees. Given equivalent definitions of readCurConv and readPbRateDef, we can write Listing 10.
And there we have it. A lightning-fast discussion of n-ary trees followed
by a somewhat more long-winded, yet still abbreviated, example of one
in action in the Real World [10].
It’s not all work, work, work [11] though. Trees have other uses. For
example, one could have written Colossal Cave [12] using a tree structure.
Suppose we wanted to recreate something like the ‘maze of twisty little
passages, all alike’ or, indeed, the ‘maze of twisty little passages, all
different’.
First we need to design the tree structure. We might choose the mutually-
recursive
type tree =
| Corridor of int * int * room list
| DeadEnd
| Exit
and room =
| Room of int * tree list
The integers would be references into simple arrays of adjectives, so that
the description of the nodes in the tree can be varied.
This tree does not directly support cyclic data. To do that with the above structure it would be necessary to use generator functions and slightly redefine Corridor and Room to refer to delayed objects. In such a way, a previous state could be substituted for a new node in the tree.
let findNodeWithPath pattern tree =
let rec findNode tree acc path =
match pattern tree with
| Some(uNode,node) -> (node, (uNode :: path |> List.rev)) :: acc
| _ ->
match tree with
| Node (_, []) -> acc
| Node (uNode, trees) ->
trees |>
List.collect (fun node -> findNode node acc (uNode :: path))
findNode tree [] []
Listing 9
let readClearingOrg (reader:System.Xml.XmlReader) =
let dict = ["ec","":>obj; "name","":>obj; ] |> toDict
let rec read curConv pbRateDef =
match reader.Name with
| "ec" as name ->
dict.[name] <- readAsString reader ; read curConv pbRateDef
| "name" as name ->
dict.[name] <- readAsString reader ; read curConv pbRateDef
| "curConv" as name ->
read (Node (readCurConv reader) :: curConv) pbRateDef
| "pbRateDef" as name ->
read curConv (Node (readPbRateDef reader) :: pbRateDef)
| _ -> curConv, pbRateDef
reader.ReadStartElement()
let curConv, pbRateDef = read [] []
reader.ReadEndElement()
SpanXMLClearingOrg(
{
Ec = dict.["ec"] :?> string
Name = dict.["name"] :?> string
}), (curConv @ pbRateDef)
Listing 10
This, however, we leave as an exercise for the reader – particularly the reader who feels that they ought to contribute an article to CVu but just can't think of a topic.
References
[1] Tree structures
[2] Traversing trees
[3] Algebraic Data Type: Algebraic_data_type
[4] ASX Risk Parameter file
[5] Linq-to-XML example: 9719526/seq-todictionary
[6] Recursive functions using the Accumulator pattern: 2003-09-26/Book/curriculum-Z-H-39.html
[7] Expert F# v2.0, Don Syme
[8] Functional C#
[9] Span parser on Google Code: span-for-margin/
[10]
[11] All+work+and+no+play+makes+Jack+a+dull+boy
[12] Colossal Cave walkthrough: adventure.html
Team Chat
Chris Oldwood considers the benefits of social media
in the workplace.
As I write this, MSN Messenger is taking its last few breaths before Microsoft confines it to history. Now that they own Skype they have two competing products, and I guess one has to go. I'm sad to see it retired because it was the first instant messaging product I used to communicate with work colleagues whilst I was both in and out of the office.
At my first programming job back in the early 90s the
company used Pegasus Mail for email as they were
running Novell NetWare. They also used TelePathy (a
DOS based OLR) to host some in-house forums and
act as a bridge to the online worlds of CompuServe,
CIX, etc. Back then I hardly knew anyone with an
email address and it was a small company so I barely
got any traffic. The conferencing system (more
affectionately known as TP) on the other hand was a great way to ‘chat’
with my work colleagues in a more asynchronous fashion. Although some
of the conversations were social in nature, having access to the technical
online forums was an essential developer aid. Even the business (a small
software house) used it occasionally, such as to ‘connect’ the marketing
and development teams. Whereas email was used in a closed, point-to-
point manner, the more open chat system allowed for the serendipitous
water-cooler moments through the process of eavesdropping.
As the Internet took off the landscape changed dramatically with the
classic dial-up conferencing systems and bulletin boards (BBS’s) trying
to survive the barrage of web based forums and ubiquitous access to the
Usenet. Although I did a couple of contracts at large corporations I still
had little use for office email and that continued when I joined a small
finance company around the turn of the millennium.
It was nice being back in a small company – working with other fathers
who also had a desire to actually spend time with their families – because
it meant we could set up remote working. The remote access was VPN
based (rather than remote desktop) which meant that we would have to
configure Outlook locally to talk to the Exchange server in the office. This
was somewhat harder back then and it was just another
memory hog to have cluttering up your task bar. A few
of us had been playing with this new MSN Messenger thing which, because we were signed up personally, meant that whether we were at home or in the office we could easily talk to each other. Given our desire to distribute our working hours in a more family-friendly manner, we often found ourselves working (remotely) alongside a team-mate in the evening.
Instant messaging soon became an integral part of how
the team communicated. With the likelihood that at
least one of us was working from home we could still discuss most
problems when needed. Of course there was always the option to pick up
the old fashioned telephone if the limited bandwidth became an issue or
the emoticon count reached epidemic proportions. Even 3- and 4-way
conversations seemed to work quite painlessly. However shared desktops
and whiteboards felt more like pulling teeth, even over a massive 128 Kbps
broadband connection.
Eventually I had to move and I ended up back at one of those big
corporations – one that was the complete opposite of my predecessor. Here
In the Tool Box # 2
everything was blocked, you couldn’t (or shouldn’t) install anything
without approval and instant messaging was blocked by the company
firewall. In this organisation email ruled. This was not really surprising
because The Business, development teams, infrastructure teams, etc. were
all physically separated. Consequently emails would grow and grow like
a snowball as they acquired more recipients, questions
and replies until eventually they would finally die
(probably under their own weight) and just clog up the
backup tapes. The company’s technical forums were
also run using email distribution lists. Anyone brave
enough to post a question had to consider the value of
potentially getting an answer versus spending the next
20 minutes dealing with the deluge of Out of Office
replies from the absent forum participants. They even
had a special ‘Reply All’ plug-in that would pop up a
message box to check if you were really, really, really
sure that every recipient you were intending to spam actually needed to
see your finest display of English prose and vast knowledge of the subject
matter.
Little known to most employees the company actually ran an internal IRC
style chat service. Presumably, in an attempt to reduce the pummelling the
Exchange Server was taking, they forced their developers to ‘discover’ it
by making the chat client start up every time they logged in. They also
disbanded the email distribution lists and set up IRC channels instead.
Even the ACCU had its own channel!
It may sound like a draconian tactic, but it worked, and I for one am really
glad they did. Suddenly the heydays of the conferencing system I had used
back in the beginning were available once more. Although there was a
‘miscellaneous’ topic where a little social chit-chat went on I’d say that
by-and-large the vast majority of the public traffic was work related. Both
junior and senior developers could easily get help from other employees
on a range of technical subjects covering tools and languages. Naturally,
given the tighter feedback loop, the conversations easily escalated to the
level of ‘what problem are you trying to solve exactly?’ which is often
where the real answer lies.
One particular channel was set up to try and enable more cross pollination
of internal libraries and tools. In an organisation of their size I would dread
to think how many logging libraries and thread pools had been
implemented by different teams over the years. Our system also had its
own dedicated channel too which made communicating with our off-shore
teams less reliant on email. Given the number of development branches
and test environments in use this was a blessing that kept the inbox level
sane. The service recorded all conversations, which I’m sure to some
degree kept the chatter honest, but more importantly transcripts were
available via a search engine which made FAQs easier to handle.
When it came time to move contracts once more I was sorely disappointed
to find myself back where I was originally with the last company. Actually
it was worse because there were no internal discussion lists either that I
could find. Determined not to let my inbox get spammed with pointless
chatter I set up a simple IRC server for our team to use. My desire was to
sell its benefits to other teams and perhaps even get some communities
going, even if we had to continue hosting the server ourselves. Internally
the company had Office Communicator (OC), which in the intervening
years had acquired the same chat product my previous client used, but
sadly this extra add-on was never rolled out and so we remained with our
simple IRC setup. Contact with some of the support teams was occasionally
via OC, but for me IRC style communication has been perfect for the more
mundane stuff. For example things like
owning up to a build break, messing with a test
environment, forwarding links to interesting blog
posts or just polling to see if anyone is up for coffee.
Using a persistent chat service (or enabling client side
logging) also allows it to be used as a record of events
which can be particularly useful when diagnosing a production problem.
I suspect that from a company’s perspective they are worried that such a
service will be abused and used for ‘social networking’ instead, which is
probably why they block sites like Twitter and Facebook. However, if
teams are left to their own devices they will fill the void anyway and so a
company is better off providing their own service which everyone expects
will be monitored and so will probably self-regulate. But the biggest
benefit must surely come from the sharing of knowledge in both the
technical and problem domains. As the old saying goes, “A rising tide lifts
all boats.”
we often found ourselves working (remotely) alongside a team-mate in the evening
Write for us!
C Vu and Overload rely on article contributions from members. That’s you! Without articles there are no
magazines. We need articles at all levels of software development experience; you don’t have to write about
rocket science or brain surgery.
What do you have to contribute?
What are you doing right now?
What technology are you using?
What did you just explain to someone?
What techniques and idioms are you using?
For further information, contact the editors: cvu@accu.org or overload@accu.org
Standards Report
Mark Radford looks at some features of the next C++ Standard.
In my last few standards reports I’ve been going on about the
forthcoming ISO C++ standards meeting in Bristol. Well, it is
forthcoming no longer and is currently (at the time of writing) taking
place. The delegates number about 100 (which is very much on the high
side, although not all of them are there every day) and, whereas meetings
have traditionally lasted five days, they are now extended with Bristol
being the first six day meeting. The pre-meeting mailing contained 96
papers (compare with 41 and 71 papers in the pre-meeting mailings for the
early and late 2012 meetings, respectively). Given that the meeting is in
the UK for the first time in six years I was disappointed that, because of
work commitments, I was unable to attend. However I managed to visit
on Wednesday evening, which was a good time to be there owing to the
Concepts Lite presentation which I will talk about below.
In November 2012’s CVu I gave a summary of the structure of the
committee: at the time there were three working groups and six study
groups. Since then activity has increased so that there are now four working
groups and eleven (!) study groups. In addition to the traditional Core,
Library and Evolution groups, there is now a separate Library Evolution
working group. The list of study groups now consists of: Concurrency and
Parallelism (SG1), Modules (SG2), File System (SG3), Networking
(SG4), Transactional Memory (SG5), Numerics (SG6), Reflection (SG7),
Concepts (SG8), Ranges (SG9), Feature Test (SG10), Database Access
(SG11). Note that not all study groups meet daily during the week of the
meeting. For example, the Database group (tasked with ‘creating a
document that specifies a C++ library for accessing databases’) only had
its first meeting on Thursday morning.
Before going any further I’d like to talk briefly about one of the
deliverables a standards committee can produce: that is, a technical
specification, or TS for short. Readers may have come across a technical
report (TR) before, such as TR1 which proposed various extensions to the
library for C++0x. A TR is informational whereas, by contrast, a TS is
normative. More information about this can be found on ISO’s web site [1].
Readers will no doubt be aware of the Concepts proposal and its troubled
journey through the process leading to the C++11 standard, only to be
pulled at the eleventh hour. The story of Concepts, in my opinion, should
serve as a cautionary warning: the original proposal inspired more ideas
and the whole thing grew and grew in complexity. In the end, its removal
from the C++11 (C++0x at the time) standard was a pragmatic necessity
in order to ship the new standard that had become long overdue.
Now, Concepts are back on the agenda for the future of C++, reinvented
in the form of Concepts Lite. The current main source of information on
Concepts Lite is the paper ‘Concepts Lite: Constraining Templates with
Predicates’ by Andrew Sutton, Bjarne Stroustrup, Gabriel Dos Reis
(N3580). There is also a web site [2]. Given the history of this feature
(alluded to above), I had concerns about its reintroduction. Therefore, I was
glad I had the chance to go to the Wednesday evening presentation given
by Andrew Sutton. This was the same presentation he gave at the ACCU
conference and it can be downloaded [3]. I found myself liking Concepts Lite.
My original understanding was (and I can’t remember where it came from)
that the aim was for the feature to be in C++14. However, this matter came
up at the presentation and Andrew Sutton said this wasn’t going to happen,
rather there would be a TS instead. Currently there are no library proposals,
but the TS will probably include some library features (or there may even
be a separate TS for a constrained library). This proposal has generated a
lot of interest among the committee, and I expect it will do so among
the C++ community in general. Therefore, I will spend the rest of this
report on it, and go into some more detail.
Concepts Lite
Concepts Lite offer an effective approach to constraining template
arguments without the complexity of the original Concepts. They do,
however, leave open a migration path to full Concepts. Currently though,
they are much simpler than Concepts were. In particular, there is no
attempt to check the definition of the template: the constraints are checked
only at the point of use. This is a big difference when compared to the
Concepts originally proposed: Concepts Lite are intended to check the use
– and not the definition – of templates. Other good points include: observed
compile-time gains of between 15% and 25% (according to Andrew
Sutton), templates can be overloaded on their constraints, and the
constraint check is syntactic only. That last point is another source of
simplification. Consider an Equality_Comparable constraint: this would
enable the compiler to check that a template argument type is comparable
using operator==, but there is no mechanism for attempting to evaluate
whether or not the operator== has the correct semantics.
Regarding overloading, function templates would be selected on the basis
that the more constrained template is the better match.
The icing on the cake is that much of Concepts Lite has been implemented
on a branch of GCC 4.8 in an experimental prototype [3].
That wraps up another standards report. As usual, N3580 and all the other
submitted papers can be found on the website [4]. Finally, I would like to
thank Steve Love for his flexibility with deadlines.
References
[1]-
all.htm?type=ts
[2]
[3]
[4]
MARK RADFORD
Mark Radford has been developing software for twenty-five years, and
has been a member of the BSI C++ Panel for fourteen of them. His
interests are mainly in C++, C# and Python. He can be contacted at
mark@twonine.co.uk
Code Critique Competition 81
Set and collated by Roger Orr. A book prize is
awarded for the best entry
Please note that participation in this competition is open to all members,
whether novice or expert. Readers are also encouraged to comment on
published entries, and to supply their own possible code samples for the
competition (in any common programming language) to scc@accu.org.
Last Issue's Code
I have been starting to use IPv6 and have tried to write a routine to print
abbreviated IPv6 addresses following the proposed rules in RFC 5952. It’s
quite hard – especially the rules for removing consecutive zeroes. Can you
check it is right and is there a more elegant way to do it?
Here is a summary of the rules:
Rule 1. Suppress leading zeros in each 16bit number
Rule 2. Use the symbol "::" to replace consecutive zeroes. For example,
2001:db8:0:0:0:0:2:1 must be shortened to 2001:db8::2:1. If there
is more than one sequence of zeroes shorten the longest sequence
– if there are two such longest sequences shorten the first of them.
Rule 3. Use lower case hex digits.
The code is in Listing 1.
/* cc80.h */
#include <iosfwd>
void printIPv6(std::ostream & os,
unsigned short const addr[8]);
/* cc80.cpp */
#include "cc80.h"
#include <iostream>
#include <sstream>
namespace
{
// compress first sequence matching 'zeros'
// return true if found
bool compress(std::string & buffer,
char const *zeros)
{
std::string::size_type len =
strlen(zeros);
std::string::size_type pos =
buffer.find(zeros);
if (pos != std::string::npos)
{
buffer.replace(pos, len, "::");
return true;
}
return false;
}
}
void printIPv6(std::ostream & os,
unsigned short const addr[8])
{
std::stringstream ss;
ss << std::hex << std::nouppercase;
for (int idx = 0; idx != 8; idx++)
{
if(idx) ss << ':';
ss << addr[idx];
}
// might be spare colons either side of
// the compressed set
while (compress(buffer, ":::"))
;
os << buffer;
}
/* testcc80.cpp */
#include <iostream>
#include <sstream>
#include "cc80.h"
struct testcase
{
unsigned short address[8];
char const *expected;
} testcases[] =
{
{ {0,0,0,0,0,0,0,0},
"::" },
{ {0,0,0,0,0,0,0,1},
"::1" },
{ {0x2001,0xdb8,0,0,0,0xff00,0x42,0x8329},
"2001:db8::ff00:42:8329" },
};
#define MAX_CASES sizeof(testcases) / sizeof(testcases[0])
int test(testcase const & testcase)
{
std::stringstream ss;
printIPv6(ss, testcase.address);
if (ss.str() == testcase.expected)
{
return 0;
}
std::cout << "Fail: expected: "
<< testcase.expected
<< ", actual: " << ss.str() << std::endl;
return 1;
}
int main()
{
int failures(0);
for (int idx = 0; idx != MAX_CASES; ++idx)
{
failures += test(testcases[idx]);
}
return failures;
}
Listing 1
ROGER ORR
Roger has been programming for over 20 years, most
recently in C++ and Java for various investment banks in
Canary Wharf and the City. He joined ACCU in 1999 and
the BSI C++ panel in 2002. He may be contacted at
rogero@howzatt.demon.co.uk
Critiques
I obviously failed to produce an interesting enough example this time as
nobody wrote a critique. That may of course be because few readers are
interested in IPV6: I believe the readership of this magazine mostly comes
from countries where the shortage of IPV4 addresses is not yet a serious
problem. Take-up of IPV6 is most prevalent in countries where the use of
the Internet is developing rapidly, such as India and China. Either that, or
nobody thought there were any problems with the code.
Commentary
The first trouble with the code above is the use of an array of 8 short
integers to represent an IPV6 address. There may be problems with
network byte ordering if the IP addresses used as examples are passed
unchanged to a network call. It would be a lot better to use the standard
data structures for IP addresses such as, in this case, in6_addr.
It is surprisingly hard to print out (or read in) IPV6 addresses by hand.
Fortunately there are very few cases when this is advisable – using a
standard facility is very strongly recommended.
We do not at present have such a facility in C++ although the networking
study group is discussing proposals for a network address class or classes;
if a consensus is reached and adopted we might have a standard C++ way
to do this before too long. In the meantime you could use boost
(boost::asio::ip::address): see the to_string method of that class.
The function inet_ntop is one standard way to do this in C.
char dst[INET6_ADDRSTRLEN];
if (!inet_ntop(AF_INET6, addr,
dst, sizeof(dst)))
{
// handle error...
}
I would probably try to avoid critiquing the user’s code as provided and
focus their attention on using a standard facility.
However, once this has been accomplished, I might return to their code
and point out that searching the string for "0:…" incorrectly matches the
initial zero against any hex number with a trailing zero digit. The test
cases provided by the user failed to cover this case.
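To make that false match concrete, here is a minimal sketch of the user’s compress() logic applied to an invented address containing the group 10 (the address below is my own test value, not from the user’s test set):

```cpp
#include <string>

// A cut-down version of the user's compress(): textually replace the
// first occurrence of a run of zero groups with "::".
bool compress(std::string& buffer, const std::string& zeros)
{
    std::string::size_type pos = buffer.find(zeros);
    if (pos == std::string::npos)
        return false;
    buffer.replace(pos, zeros.size(), "::");
    return true;
}
```

Applied to "2001:db8:10:0:0:1:2:3", a search for "0:0:0" matches starting at the final digit of the group "10", so the compression corrupts the address.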
Such failings in test coverage are quite common. For example, a bug was
discovered with the streaming of doubles in Visual Studio 2012
(http://connect.microsoft.com/VisualStudio/feedback/details/778982). I am
sure Microsoft have test coverage of this operation; but obviously their
test data set lacked adequate coverage.
The trouble with doing the abbreviation of the longest run of zeros with
the textual representation is that there are boundary conditions at both the
beginning and end of the string. I think the easiest algorithm is to pass
through the binary representation to find the start address and length of the
longest run and then use this information when converting the
representation to characters.
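A sketch of that approach, assuming the same unsigned short[8] representation as the user’s code (and ignoring the IPv4-mapped special case mentioned below):

```cpp
#include <cstdio>
#include <string>

// Find the longest run of zero groups in the binary form first, then
// format the text around it, per RFC 5952 rules 1-3.
std::string formatIPv6(const unsigned short addr[8])
{
    int bestPos = -1, bestLen = 0;
    for (int i = 0; i < 8; ) {
        if (addr[i] != 0) { ++i; continue; }
        int j = i;
        while (j < 8 && addr[j] == 0) ++j;              // zero run [i, j)
        if (j - i > bestLen) { bestPos = i; bestLen = j - i; }
        i = j;                                          // first run wins ties
    }
    if (bestLen < 2) bestPos = -1;  // RFC 5952 4.2.2: never "::" a single 0

    std::string out;
    char buf[8];
    for (int i = 0; i < 8; ++i) {
        if (i == bestPos) {
            out += "::";
            i += bestLen - 1;                           // skip the whole run
            continue;
        }
        if (!out.empty() && out.back() != ':')
            out += ':';
        std::snprintf(buf, sizeof buf, "%x", addr[i]);  // lower case, no leading zeros
        out += buf;
    }
    return out;
}
```

Doing the run-finding on the binary form sidesteps both boundary conditions and the false textual matches described above.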
The algorithm though misses another special case – that of IPv4-mapped
and compatible addresses. These have an alternative convention for display
which emphasises the IPv4 ‘nature’ of the address. So, for example, the
IPv6 address 0:0:0:0:0:ffff:c000:280 would be displayed as
::ffff:192.0.2.128 on many platforms.
This would hopefully provide another reason to reinforce why using the
standard function is normally preferable to writing your own.
Finally I notice that the code to join the eight short integers together
with a colon delimiter is addressed by the recent C++ proposal N3594
(‘std::join(): An algorithm for joining a range of elements’).
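Pending that proposal, the colon-joining step is easy to write as a small helper (the name join and its signature are mine, not N3594’s):

```cpp
#include <sstream>
#include <string>

// Stream each element, inserting the separator *between* elements
// rather than after each one.
template <typename Range>
std::string join(const Range& r, const std::string& sep)
{
    std::ostringstream os;
    bool first = true;
    for (const auto& e : r) {
        if (!first)
            os << sep;
        os << e;
        first = false;
    }
    return os.str();
}
```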
Code Critique 81
(Submissions to scc@accu.org by Jun 1st)
I am new to C++ and trying to write some objects to disk and read them back
in. How can I get the pointer to the objects that are read back in?
Where would you start with trying to help this newcomer?
The code is in Listings 2, 3, 4 and 5 (note: it uses a few C++11 features so
will need modifying to run on a non-conformant compiler):
/*
* Bike.cpp
*/
#include "Bike.h"
//Bike::Bike() {} // TODO Auto-generated stub
Bike::~Bike() {} // TODO Auto-generated stub
std::ostream& operator << (std::ostream& os,
Bike &m){
os << std::left << std::setw(10)
<< m.getAddress() << "\t"
<< m.getName() << "\t"
<< m.getPrice() << "\t" << m.getMake();
return os;
}
Listing 2
/*
* Bike.h
*/
#ifndef BIKE_H_
#define BIKE_H_
#include <iostream>
#include <string>
#include <vector>
#include <iterator>
#include <algorithm>
#include <iomanip>
#include <ios>
class Bike {
Bike* address; // Pointer to Bike object
std::string name;
double price;
std::string make;
public:
//Bike(); // eliminate to avoid ambiguity
Bike(Bike* a, const std::string& n =
"unknown", double p=0.01,
const std::string& m="garage") :
address(a), name(n), price(p), make(m){}
virtual ~Bike();
inline std::string getName(){return name;}
inline double getPrice(){return price;}
inline std::string getMake(){return make;}
inline Bike* getAddress(){return address;}
static void writeToDisk(
std::vector<Bike> &v);
static void readFromDisk(std::string);
static void splitSubstring(std::string);
static void restoreObject(
std::vector<std::string> &);
};
std::ostream& operator << (std::ostream& os,
Bike &b);
#endif /* BIKE */
Listing 3
You can also get the current problem from the accu-general mail list (next
entry is posted around the last issue's deadline) or from the ACCU website
(). This particularly helps overseas members who typically get the
magazine much later than members in the UK and Europe.
/*
* file_io.cpp
*/
#include "Bike.h"
#include <fstream>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <vector>
#include <cstring>
#include <sstream>
#include <algorithm>
// Write objects to disk
void Bike::writeToDisk(std::vector<Bike> &v){
std::ofstream out_2("bike_2.dat");
for (auto b:v){
out_2 << b.getAddress() << ':'
<< b.getName()
<< ':'<< b.getPrice() << ':'
<< b.getMake() << std::endl;
}
out_2.close();
}
//--------------------------------------------
//Read from disk into vector and make objects
void Bike::readFromDisk(
std::string bdat) // "bike_2.dat"
{
std::cout << "\nStart reading: \n";
std::vector<char> v2;
std::ifstream in(bdat);
copy(std::istreambuf_iterator<char>(in),
std::istreambuf_iterator<char>(),
std::back_inserter(v2));
in.close();
for(auto a:v2){
std::cout << a; // Debug output
}
std::string s2(&(v2[0])); // Vector in String
std::cout << "\nExtract members:\n";
while (!s2.empty()){
// objects separated by \n
size_t posObj = s2.find_first_of('\n');
std::string substr = s2.substr(0,posObj);
s2=s2.substr(posObj+1);
splitSubstring(substr);
}
}
void Bike::splitSubstring (std::string t){
// Save the address and the members in v3
std::vector<std::string> v3{(4)};
size_t posM; // [in substring]
int i;
for (i=0; i<4; i++){
posM = t.find_first_of(':');
v3[i] = t.substr(0,posM);
if (posM==std::string::npos) break;
t=t.substr(posM+1);
}
for(auto member:v3){
std::cout << std::setw(10) << std::left
<< member << " \t";}
restoreObject(v3);
std::cout << std::endl;
v3.clear();
}
Listing 4
void Bike::restoreObject(std::vector<std::string>
&v3){
Bike* target; // I want the object here ...
double p;
std::stringstream ss(v3[2]);
ss >> p;
Bike dummy{&(dummy),v3[1], p, v3[3]};
target = &(dummy);
std::cout << "\nRestore: " << *target
<< std::endl;
}
/*
* main_program.cpp
*/
#include "Bike.h"
#include <fstream>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <vector>
#include <cstring>
#include <sstream>
#include <algorithm>
int main(){
std::cout << "start\n";
std::vector<Bike> v;
Bike thruxton{&(thruxton), "Thruxton",
100.00 , "Triumph"};
Bike sanya{&(sanya)};
Bike camino{&(camino), "Camino ",
150.00, "Honda"};
Bike vespa{&(vespa), "Vespa ",
295.00, "Piaggio"};
v.push_back(thruxton);
v.push_back(sanya);
v.push_back(camino);
v.push_back(vespa);
for(Bike b:v) std::cout << b << std::endl;
// using overloaded << operator
Bike::writeToDisk(v);
// restore objects
Bike::readFromDisk("bike_2.dat");
// where are the restored objects??
return 0;
}
Listing 5
Patterns
Refactoring to Patterns
By Joshua Kerievsky, published by
Addison Wesley ISBN: 978-
032121335
Reviewed by Alan Lenton
For some reason this book
escaped my notice until
recently, which is a pity, because it’s a
very useful book indeed. Quite a lot of
programmers, even those using agile
methods, seem to think that patterns are
merely something that you spot at the design
stage. This is not the case, though it’s useful if
you do spot a pattern early on. Programs evolve,
and as they do, patterns become more obvious,
and indeed may not have been appropriate at
earlier stages of the evolution.
The book, as its title implies, deals with evolving
programs, and does it very well. The bulk of the
book takes a relatively small number of patterns
and, using real world examples, gives a step by
step analysis, with Java code, of how to refactor
into the pattern. As long as readers do treat these
as examples, rather than something set in stone,
they will learn a lot about the arts of identifying
patterns and the nitty gritty of refactoring.
I also liked the pragmatism of the author. Unlike some pattern freaks, he
freely admits that there
are times when using a specific pattern is
overkill, especially where the problem is simple.
Most people, myself included, when the idea of
patterns is first grasped, tend to see patterns in
everything and immediately implement them.
This is frequently inappropriate, and rather than
making the program structure clearer, muddies
the waters. There are a number of warnings in
the book against this approach.
I was very impressed by this book. In fact it is one of a small number of
books that has made it
to my work desk, where it fits, both intellectually
and literally, in between the Gang of Four’s
Design Patterns, and Martin Fowler’s
Refactoring!
Highly recommended.
Elemental Design
Patterns
By Jason McC. Smith, published by
Addison-Wesley ISBN: 978-
0321711922
Reviewed by Alan Lenton.
Android
Android Programming
Unleashed
By B.M. Harwan, published by
Sams
Reviewed by Paul F. Johnson
Not recommended – not even
slightly recommended unless
you like levelling up beds and even
then, I can think of better books.
This is my first review in what seems an eternity and unfortunately, it's
not a good one.
The Android market is getting bigger by the minute and with that, more
and more books are
coming out professing to show you how, in 200
pages, you can go from a user to someone who
can create an app that redefines the landscape for
apps out there. This is no exception.
It starts by wasting the first chapter telling you
how to install
the Android SDK. Why? The
installer pretty much does everything for you
now. Sure you may need to know how to set up
the emulators and you may wish to not just
accept the defaults, but why waste a chapter on
it? That said, I have the same issue with most
books of this ilk; “let’s use a chapter to show
some screen dumps of how to install Visual
Studio”. Just annoying.
Okay, that bit over. What’s left? Code errors
everywhere, poor explanations of how things
work and why they’re done like that and did I
mention stuff that plain doesn’t compile? No?
There is quite a bit of it.
Bookcase
The latest roundup of book reviews.
If you want to review a book, your first port of call should be the members section of the ACCU website, which contains a list of all of the books currently available. If there is something that you want to review, but can’t find on there, just ask. It is possible that we can get hold of it.
After you’ve made your choice, email me and if the book checks out on my database, you can have it. I will instruct you from there. Remember though, if the book review is such a stinker as to be awarded the most un-glamorous ‘not recommended’ rating, you are entitled to another book completely free.
Thanks to Pearson and Computer Bookshop for their continued support in providing us with books.
Jez Higgins (publicity@accu.org)

Ok, let’s look at a particular example on page 188. A nice simple media
player.
public class PlayAudioAppActivity extends Activity {
  @override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstance);
    setContentView(R.layout.activity_play_audio_app);
    Button playButton = (Button)findViewById(R.id.playbtn);
    playButton.setOnClickListener(new Button.onClickListener() {
      public void onClick(View v) {
        MediaPlayer mp = MediaPlayer.create(PlayAudioAppActivity.this, R.raw.song1);
        mp.Start();
      }
    });
Looks ok, except for the 2 lines that do anything – (MediaPlayer mp = ...
and mp.Start()).
What’s it doing? That’s right, it creates an instance using the Activity
context and then uses a file in the raw directory. What raw directory?
Unless the author created one, there isn’t one. It then starts the media
player. No sort of trapping or defensive coding, just hey, it works or
I’ll leave you to puzzle out why it didn’t – and given quite a lot of the
code is written in the same way, the poor ol’ user is left scratching
their head and wondering why Sams didn’t include a source code CD, or at
least not limit access to the code to 30 days past purchase of the book.
Now, let’s add some insult to injury. The book
claims to cover
up to 4.1.2, so let’s look at
something that varies massively over the
versions – animation. Prior to version 3,
animation was pretty awful. It worked, but
wasn’t really that good.
Google rewrote huge chunks for version 3 and
animation was there and happy. What does this
book say about the differences? Nothing. It
sticks to what is there prior to version 3. The only
saving grace is the section on using animation
using XML.
Android comes with SQLite databases as standard. Why then does the author
go about creating a custom database using ArrayLists?
I could really rip into this book and I mean seriously rip into it, but at
the end of the day life is too short to waste my time trying to find code
in there that works as it should.
Given the author is also a time-served lecturer
with an ‘easy to understand’ style, I’m amazed
this managed to get past the technical editor’s
eye – unless said technical editor is one of his
students...
Beginning Android 4
Application
Development
By Wei-Meng Lee, published by
Wrox, ISBN 978-1-118-19954-1
Reviewed by Paul F. Johnson
Recommended with reservations.
There are plenty of awful Android books out
there
(see my other review for one such
example). Lots of errors in the code, broken
examples, wasted paper, illogical layouts and
well, pretty much a waste of a tree. This is NOT
one of those books.
This is a rather good book. Not amazing, but still
far better than a lot of things out there. From the
word go, there are screen shots a-plenty, lots of
code examples with the emphasis definitely on
trying things out for yourself. But therein lies the
problem with the book. It is all well and good
having example code, but not when you have to
disappear onto a website and dig around for it (it
is why this review is on Recommended and not
Highly Recommended).
A major omission is the lack of anything on graphics handling. While it
does show you how
to display graphics, there is nothing on drawing
or use of the camera. An omission which while
understandable, does detract from this book
quite a lot. Drawing leads into long and short
presses, drags, canvases and other fun bits and
pieces. Perhaps for the next edition this could be
included? Here’s hoping!
The author of this work does know what he is on about, with a clear way to
his writing style.
I will happily admit that I don’t do Java. I’ve
never understood it and really, it doesn’t make
too much sense to me. I do, however, program
for Android using Xamarin.Android (or
Monodroid as it was). There is only one or two
books out there that are dedicated to using .NET
on Android. The beauty of this book though is
that it explains how the system works and how
events are used and as long as you know the
equivalent in .NET world, this book provides
you with a great resource that is currently
missing.
The book covers just about all of the main parts of Android development
(including data
persistence, maps, messaging and networking)
up to Ice Cream Sandwich. Jellybean doesn’t
appear to be in the book.
All in all, this is one of the better books out there
for Android development. It’s good, but has its
failings.
Miscellaneous
API Design for C++
By Martin Reddy, published by
Morgan Kaufmann ISBN: 978-
0123850034
Reviewed by Alan Lenton
Martin Reddy has written a
very useful book on the art
and science of Application Programming
Interfaces (APIs), and along the way has
produced a book chock full of useful hints and
help for more junior programmers. It is not a
book for someone wanting to learn to program
in C++, but if you have been programming in
C++ for a year or so, then you will find this book
will help you move towards program
design instead of just ‘coding’.
Obviously, the book concentrates on API design, but along the way it
covers selected
patterns, API styles, performance, testing and
documentation. As a bonus it also covers
scripting and extensibility, and I found the
section on plugins particularly useful. An
appendix covers the varied technical issues
involved in building both static and dynamic
libraries on Windows, Mac and Linux.
The only minor disagreement I would have with
the author is with the extent to which he goes to
move internal details out of header files in the
name of preventing the API users from doing
anything that might allow them to access those
features. From my point of view, using the API
is a type of contract between the API writer and
the API user. If the user is foolish enough to
break that contract then he or she has to take the
consequences in terms of broken code when a
new version of the library comes out. In any case
this sort of behaviour should be picked up by
code review in any halfway decent software
studio.
That is, however, a minor niggle, and this book represents a rich seam for
programmers to mine
for good programming practices – even if you
aren’t writing APIs, your use of them will
improve dramatically!
The Essential Guide To
HTML5: Using Games To
Learn HTML5 And
JavaScript (Paperback)
By Jeanine Meyer, published by
FRIENDS OF ED ISBN: 978-
1430233831
Reviewed by Alan Lenton
I really can’t recommend buying this
book. It seems to have been written
mainly for people with a very short attention
span, and therefore skips on explaining why you
do things in a specific way. The chosen way of
displaying program listings, while it might have been useful for
annotating each line, makes it impossible to look at the program flow, or
consider the overall design. The one correct idea
– that of incremental program development –
becomes merely a vehicle for large spaced out
repetitive chunks of code which probably extend
the size of the book by as much as 20%.
The code itself, is, how shall I put it, somewhat
less than optimal, and
not conducive to creating
good coding habits by those learning from the
book. For instance, in the dice game example,
the code for drawing a dot on the dice is repeated
in a ‘cut and paste’ style every time a dot is
drawn, instead of being gathered into a function
and called each time it is needed.
I shudder to think about what sort of web site
someone who learned from this book would put
together. Fortunately, perhaps, they are not
likely to learn enough from the book to make a
web site work.
A triumph of enthusiasm over pedagogy.
Definitely not recommended!
24 | | MAY 2013
ACCU Information
Membership news and committee reports
View from the Chair
Alan Griffiths
chair@accu.org
I’ve been given special dispensation
by the C Vu editor to submit this report ‘late’
(that is, after the conference and AGM). I know
I enjoyed the conference and believe that most of
those who could attend did so too. This was the
first year away from Oxford, from what I saw it
was a mostly successful move. There were a few
‘opportunities for improvement’ but I’m sure
that the conference committee will be
considering carefully what lessons can be
applied for next year.
As this is written after the AGM it is possible to
report on proceedings there. We had a number of
votes and proxies registered on the motions for
constitutional change before the meeting, but
both these and the votes at the meeting were
overwhelmingly in favour of the proposed
changes.
There were two constitutional motions passed:
one proposed by Roger Orr and Ewan Milne to
rationalise the committee posts required by the
constitution; and, a much larger one proposed by
Giovanni Asproni and Mick Brooks to support
voting by members that cannot attend the AGM.
Last year’s AGM made changes to the
constitution that required constitutional motions
be notified in advance and that preregistered and
proxy votes on these motions be accepted. There
was also a call for the committee to be more
transparent about the way the organisation runs.
In line with this we’ve used the members
mailing list to prepare the proposed changes to
the constitution. Drafts of these motions were
posted to the list and updated in response to
feedback; concerns were addressed in advance of
the AGM and the final wording was passed quickly.
The committee has been taking other steps over
the year to make the operation of the
organisation more transparent. As part of this
minutes of the committee meetings are now
published on the accu-members mailing list
once they are approved. Also, while committee
meetings have always been open to members
(subject to prior arrangement with the secretary)
they can now be attended remotely.
At this year’s AGM there was also a call from
the floor to ensure that committee members from
overseas could attend committee meetings. This
didn’t move to a vote as the same technology
that allows members to attend remotely is
already in use by committee members.
One notable failure by the committee was that
we didn’t have the accounts ready for the AGM.
This has happened before, but was unexpected:
the treasurer got the figures to the accountant in
what we believed was good time and we only
realised that the accounts were going to be late
when they didn’t appear as expected. In the end, the
accounts were actually available for the AGM,
but no-one present (neither committee members,
nor the honorary auditors) had had a chance to
review them. We will be investigating further at
the next committee meeting.
As anticipated by my last report we’ve had a
couple of people stand down from the committee
– Mick Brooks has been replaced as
Membership Secretary by Craig Henderson.
Tom Sedge and Stewart Brodie have also
stepped down.
My last report also mentioned the ‘hardship
fund’. This was originally created to support the
memberships of individuals who could not
finance themselves. However, there have been
decreasing calls on this fund over the years and
while it is still possible to donate, nothing has
been paid out for quite some time. The
committee needs guidance on how to proceed:
Should we continue accepting contributions to
the fund? And what do we do with the money
already donated?
We do sometimes offer concessionary
memberships to members in financial
difficulties. So one option for using the hardship
fund could be to ‘make up’ the difference
(effectively transferring the money to our
general budget). It would also be possible to
spend the money in new ways (supporting
attendance at the Conference for example). If
you can suggest something else then the
committee would be pleased to consider it.
We still have no volunteer to moderate the accu-
contacts mailing list. This isn’t an onerous task
(there are a few emails each week to classify as
‘OK’, ‘Spam’ or ‘needs fixing’) but we don’t
currently have a replacement for this role. Is this
something you could do for the ACCU?
Please contact me at chair@accu.org about any
of the items above.
Letter to the Editor
Dear Editor,
Not having used F# before, it was interesting to read Richard Polton's
article, ‘Comparing Algorithms’, in the March 2013 C Vu, in which he
compares a variety of iterative and recursive solutions to the first Project
Euler problem (sum the multiples of 3 or 5 below 1000).
As a didactic exercise, this is all well and good. The problem itself, though,
strikes me as being very similar to the one in the famous story about Carl
Friedrich Gauss; and indeed it turns out that the sum of all multiples of i
less than or equal to n can be found (in plain old C) without any loop at all:
int sum(int i, int n) {
    return i * (n/i * (n/i + 1))/2;
}
We can use this to find the sums of the multiples of 3 and of 5, then remove
the ones we’ve double-counted:
int euler1(int i1, int i2, int n) {
    return sum(i1, n-1) + sum(i2, n-1)
         - sum(i1*i2, n-1);
}
In C++, if the arguments are constants we can convert function arguments
to template parameters, reducing our runtime cost all the way to zero by
doing the work at compile time; and in C++11 we should be able to
accomplish the same thing simply by sticking constexpr
in front of both functions.
I think the lesson is that, while iteration,
recursion, functional programming, templates, C++11, and all our other
flashy tools and techniques each have their place, in our eagerness to try
out our new capabilities we mustn’t lose sight of the problem we originally
set out to solve.
Sincerely,
Martin Janzen
(martin.janzen@gmail.com)
If you read something in C Vu that you
particularly enjoyed, you disagreed with
or that has just made you think, why not
put pen to paper (or finger to keyboard)
and tell us about it?
https://www.techylib.com/en/view/secrettownpanamanian/cvu_accu
package org.jboss.mq.il.http;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

/* An implementation of the HostnameVerifier that accepts any SSL certificate
 * hostname as matching the https URL that was used to initiate the SSL connection.
 * This is useful for testing SSL setup in development environments using self
 * signed SSL certificates.
 *
 * This is a duplicate object found here and in the server module. Like the
 * Base64Encoder, we'll probably want to move it somewhere else. nathan@jboss.org
 *
 * @author Scott.Stark@jboss.org
 * @version $Revision: 37459 $
 */
public class AnyhostVerifier implements HostnameVerifier
{
   public boolean verify(String s, SSLSession sslSession)
   {
      return true;
   }
}
http://kickjava.com/src/org/jboss/mq/il/http/AnyhostVerifier.java.htm
How VPN Pivoting Works (with Source Code)
October 14, 2014
A VPN pivot is a virtual network interface that gives you layer-2 access to your target’s network. Rapid7’s Metasploit Pro was the first pen testing product with this feature. Core Impact has this capability too.
In September 2012, I built a VPN pivoting feature into Cobalt Strike. I revised my implementation of this feature in September 2014. In this post, I’ll take you through how VPN pivoting works and even provide code for a simple layer-2 pivoting client and server you can play with. The layer-2 pivoting client and server combination don’t have encryption, hence it’s not correct to refer to them as VPN pivoting. They’re close enough to VPN pivoting to benefit this discussion though.
The VPN Server
Let’s start with a few terms: The attacker runs VPN server software. The target runs a VPN client. The connection between the client and the server is the channel to relay layer-2 frames.
To the attacker, the target’s network is available through a virtual network interface. This interface works like a physical network interface. When one of your programs tries to interact with the target network, the operating system will make the frames it would drop onto the wire available to the VPN server software. The VPN server consumes these frames, relays them over the data channel to the VPN client. The VPN client receives these frames and dumps them onto the target’s network.
Here’s what the process looks like:
The TAP driver makes this possible. According to its documentation, the TUN/TAP provides packet reception and transmission for user space programs. The TAP driver allows us to create a (virtual) network interface that we may interact with from our VPN server software.
Here’s the code to create a TAP [adapted from the TUN/TAP documentation]:
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int tun_alloc(char *dev)
{
    struct ifreq ifr;
    int fd, err;

    if ((fd = open("/dev/net/tun", O_RDWR)) < 0)
        return tun_alloc_old(dev);   /* fall back to the pre-2.4 allocation path (not shown) */

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
    if (*dev)
        strncpy(ifr.ifr_name, dev, IFNAMSIZ);

    if ((err = ioctl(fd, TUNSETIFF, (void *)&ifr)) < 0) {
        close(fd);
        return err;
    }

    strcpy(dev, ifr.ifr_name);
    return fd;
}
This function allocates a new TAP. The dev parameter is the name of our interface. This is the name we will use with ifconfig and other programs to configure it. The number it returns is a file descriptor to read from or write to the TAP.
To read a frame from a TAP:
int totalread = read(tap_fd, buffer, maxlength);
To write a frame to a TAP:
write(tap_fd, buffer, length);
These functions are the raw ingredients to build a VPN server. To demonstrate tunneling frames over layer 2, we’ll take advantage of simpletun.c by Davide Brini.
simpletun.c is an example of using a network TAP. It’s ~300 lines of code that demonstrates how to send and receive frames over a TCP connection. This GPL(!) example accompanies Brini’s wonderful Tun/Tap Interface Tutorial. I recommend that you read it.
When simpletun.c sends a frame, it prefixes the frame with an unsigned short in big endian order. This 2-byte number, N, is the length of the frame in bytes. The next N bytes are the frame itself. simpletun.c expects to receive frames the same way.
To build simpletun:
gcc simpletun.c -o simpletun
Note: simpletun.c allocates a small buffer to hold frame data. Change BUFSIZE on line 42 to a higher value, like 8192. If you don’t do this, simpletun.c will eventually crash. You don’t want that.
To start simpletun as a server:
./simpletun -i [interface] -s -p [port] -a
The VPN Client
Now that we understand the VPN server, let’s discuss the VPN pivoting client. Cobalt Strike’s VPN pivoting client sniffs traffic on the target’s network. When it sees frames, it relays them to the VPN pivoting server, which writes them to the TAP interface. This causes the server’s operating system to process the frames as if they were read off of the wire.
Let’s build a layer-2 pivoting client that implements similar logic. To do this, we will use the Windows Packet Capture API. WinPcap is the Windows implementation of LibPCAP and RiverBed Technology maintains it.
First, we need to open up the target network device that we will pivot onto. We also need to put this device into promiscuous mode. Here’s the code to do that:
pcap_t * raw_start(char * localip, char * filterip)
{
    pcap_t * adhandle = NULL;
    pcap_if_t * d = NULL;
    pcap_if_t * alldevs = NULL;
    char errbuf[PCAP_ERRBUF_SIZE];

    /* find our interface */
    d = find_interface(&alldevs, localip);

    /* Open the device */
    adhandle = (pcap_t *)pcap_open(d->name, 65536,
        PCAP_OPENFLAG_PROMISCUOUS | PCAP_OPENFLAG_NOCAPTURE_LOCAL,
        1, NULL, errbuf);
    if (adhandle == NULL) {
        printf("\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name);
        return NULL;
    }

    /* filter out the specified host */
    raw_filter_internal(adhandle, d, filterip, NULL);

    /* ok, now we can free our list of interfaces */
    pcap_freealldevs(alldevs);
    return adhandle;
}
Next, we need to connect to the layer-2 pivoting server and start a loop that reads frames and sends them to our server. I do this in raw.c. Here’s the code to ask WinPcap to call a function when a frame is read:
void raw_loop(pcap_t * adhandle, void (*packet_handler)(u_char *, const struct pcap_pkthdr *, const u_char *))
{
    pcap_loop(adhandle, 0, packet_handler, NULL);
}
The packet_handler function is my callback to respond to each frame read by WinPCAP. It writes frames to our layer-2 pivoting server. I define this function in tunnel.c.
void packet_handler(u_char * param, const struct pcap_pkthdr * header, const u_char * pkt_data)
{
    /* send the raw frame to our server */
    client_send_frame(server, (void *)pkt_data, header->len);
}
I define client_send_frame in client.c. This function writes the frame’s length and data to our layer-2 pivoting server connection. If you want to implement a new channel or add encryption to make this a true VPN client, client.c is the place to explore this.
We now know how to read frames and send them to the layer-2 pivoting server.
Next, we need logic to read frames from the server and inject these onto the target network. In tunnel.c, I create a thread that calls client_recv_frame in a loop. The client_recv_frame function reads a frame from our connection to the layer-2 server. The pcap_sendpacket function injects a frame onto the wire.
DWORD ThreadProc(LPVOID param)
{
    char * buffer = malloc(sizeof(char) * 65536);
    int len, result;
    unsigned short action;

    while (TRUE) {
        len = client_recv_frame(server, buffer, 65536);

        /* inject the frame we received onto the wire directly */
        result = pcap_sendpacket(sniffer, (u_char *)buffer, len);
        if (result == -1) {
            printf("Send packet failed: %d\n", len);
        }
    }
}
This logic is the guts of our layer-2 pivoting client. The project is ~315 lines of code and this includes headers. Half of this code is in client.c which is an abstraction of the Windows Socket API. I hope you find it navigable.
To run the layer-2 pivoting client:
client.exe [server ip] [server port] [local ip]
Once the layer-2 client connects to the layer-2 server, use a DHCP client to request an IP address on your attack server’s network interface [or configure an IP address with ifconfig].
Build Instructions
I’ve made the source code for this simple layer-2 client available under a BSD license. You will need to download the Windows PCAP Developer Pack and extract it to the folder where the layer-2 client lives. You can build the layer-2 client on Kali Linux with the included MinGW (Minimalist GNU for Windows) cross-compiler. Just type ‘make’ in its folder.
Deployment
To try this Layer-2 client, you will need to install WinPcap on your target system. You can download WinPcap from RiverBed Technology. And, that’s it. I hope you’ve enjoyed this deep dive into VPN pivoting and how it works.
The layer-2 client is a stripped down version of Cobalt Strike’s Covert VPN feature. Covert VPN compiles as a reflective DLL. This allows Cobalt Strike to inject it into memory. The Covert VPN client and server encrypt the VPN traffic [hence, VPN pivoting]. Covert VPN will also silently drop a minimal WinPcap install and clean it up for you. And, Covert VPN supports multiple data channels. It’ll tunnel frames over TCP, UDP, HTTP, or through Meterpreter.
Sorry, I don’t see how simpletun would crash if leaving BUFSIZE set to its default value? Why should it be raised?
Hi Juan,
Run it and don’t change the BUFSIZE. You will eventually get a crash. I use simpletun.c here, to provide a complete example, so I don’t have a lot of experience with it. My theory is that a call to read will sometimes return multiple frames in one call. This can easily go over the small size of BUFSIZE that the author set. I haven’t looked deeply at simpletun.c to determine if this is the case or not.
Raphael, your tool works fine, but when I’m trying a man-in-the-middle attack with ettercap, the app crashes
In Kali Linux
ettercap-ng -TqM arp // // -i demo0
Can you help me ??
If you’re seeing the simpletun server crash, make sure you heed this comment [from the post]:
“Note: simpletun.c allocates a small buffer to hold frame data. Change BUFSIZE on line 42 to a higher value, like 8192. If you don’t do this, simpletun.c will eventually crash. You don’t want that.”
I made this Raphael, your tool works perfectly!
I made VPN connect between 2 networks
The problem is when i start ARP Spoofing attack
E.g:
ARP SPOOF with Ettercap (client.exe in my WinXP crash)
root@kali# ettercap -TqM arp // // -i demo0
ARP Spoof with ARPspoof
root@kali# arpspoof -i demo0 192.168.1.100 -t 192.168.1.1
root@kali# arpspoof -i demo0 192.168.1.1 -t 192.168.1.100
Start Wireshark
From 192.168.1.100 (Debian)
root@debian# ping 192.168.1.1
Wireshark Capture packets !
But in Debian the connection freezes and man in the middle doesn’t work
Man in the middle does not work
=(
Can u help me?
Your tool is fantastic =)
https://blog.cobaltstrike.com/2014/10/14/how-vpn-pivoting-works-with-source-code/
The Wipe-Eyeglasses is a new invention that automatically wipes your eyeglasses or sunglasses.
Today, eyeglasses and sunglasses are very important in our lives. Seeing well is a priority, so having clean eyeglasses is necessary. Yet when we wipe our glasses, we mechanically use whatever we have at hand: a piece of t-shirt or a tissue, and that's not really convenient. When you're home, there is a solution to keep your eyeglasses clean: the Wipe-Eyeglasses, a machine that wipes your eyeglasses or sunglasses automatically. Moreover, it can be a new place to store your eyeglasses, so they will always be immaculate. The Wipe-Eyeglasses wipes both sides of your eyeglasses or sunglasses with a smart system composed of an Arduino Uno, servo motors and other electrical components, plus optical tissue.
Here are the steps to build it with wood boards, 3D printing or cardboard.
Step 1: What You Need to Build It
Components:
• Wood boards, cardboard or filaments to 3D print the structure
• Cotton
• Optical tissue
• An Arduino Uno
• A breadboard
• 2 servo motors
• Wires of different sizes
• 2 capacitors of 100 microF
• A switch
• A USB-Arduino wire to power the circuit
Tools:
• A cutter, a wood saw, or a 3D printing machine
• Optionally, a soldering iron
Note: Prepare all your components and tools before starting to build an object, it will help you to stay organised and efficient!
Step 2: How to Build the Structure
First, you need to create the structure of the Wipe-Eyeglasses: you can either do it by cutting and gluing wood or cardboard, or by 3D printing the parts that compose it.
Here are the 3D designs of the structure which you need to recreate with the material of your choice and the picture of the result with wood boards.
However, these 3D models are designed for 3D printing, so if you choose to build the Wipe-Eyeglasses with wood boards or cardboard (as I did), you will need to adapt them. Click on "Edit 3D" and use the ruler to get the measurements (the structure is designed at real scale): it will help you to cut your wood boards or cardboard in order to create the different parts that you will assemble afterwards with glue. (For example, the "support_servo" piece can be a simple 16x2 cm wood board.)
Note: To realize my invention, I have built the box with wood and I have only 3D printed the "rotor" design. It is easier and cheaper.
You can also find here the .stl files of these designs if you want to 3D print them:
Step 3: How to Build the Circuit
The electronic circuit of the Wipe-Eyeglasses is based on the Arduino. Here is what you need to reproduce it with the components.
Note: You can obviously adapt it depending on how you built the structure!
Step 4: How to Program It
I used the Arduino Software to program the Wipe-Eyeglasses. Here is the code:
#include <Servo.h>

Servo Droite;   // right servo
Servo Gauche;   // left servo

const int switchPin = 2;
const int greenLed = 3;
const int redLed = 4;
int switchVal;

void setup() {
  Droite.attach(9);
  Gauche.attach(10);
  pinMode(greenLed, OUTPUT);
  pinMode(redLed, OUTPUT);
  pinMode(switchPin, INPUT);
}

void loop() {
  switchVal = digitalRead(switchPin);
  if (switchVal == HIGH) {
    digitalWrite(greenLed, HIGH);
    digitalWrite(redLed, LOW);
    // continuous-rotation servos: values away from 90 set speed and direction;
    // set the speed and time to taste (see note below)
    Droite.write(60);    // wipe one way...
    Gauche.write(120);
    delay(1000);
    Droite.write(120);   // ...then back the other way
    Gauche.write(60);
    delay(1000);
  } else {
    digitalWrite(greenLed, LOW);
    digitalWrite(redLed, HIGH);
  }
  // stop the servos (90 = stop for continuous-rotation servos) and wait
  Droite.write(90);
  Gauche.write(90);
  delay(100000000);
}
Note: I used servo motors with continuous rotation and needed to set the speed and the time, so if you use standard servos you will need to make some changes to the program.
Step 5: How to Assemble Everything
1. Put the Arduino and the circuit inside the structure (with all its parts)
2. Glue or otherwise fix the two "rotor" pieces (3D printed or made yourself) to the servos
3. Put cotton around the ball of the "rotor" piece
4. Wrap optical tissue around the cotton
5. Glue the servos to the structure, verify everything is solid and functional, then plug the cable into the Arduino and into a computer or a USB charger
6. Here it is! You have built the Wipe-Eyeglasses!
Step 6: How It Works
1. (Wet your eyeglasses or sunglasses with water or with a special optical spray)
2. Put your eyeglasses on the Wipe-Eyeglasses.
3. Turn on the switch; it starts to work: cotton balls covered with optical tissue go 30° to the left and then 30° to the right.
4. The Wipe-Eyeglasses stops running; it is finished.
5. Your eyeglasses are clean :)
Step 7: Watch It Working in a Video!
Recently, I was on a French TV channel, M6, with this invention because it won the "Innovez" contest!
In the video, I explain how it works! (In French, but I'm sure you can easily understand the concept)
Note: "Innovez" is an invention contest open to young people, organized by a science magazine for teens: Sciences et Vie Junior!
Here you can find the Video!
Thank you for reading!
Check out my new Instructables: "The Wipe-Eyeglasses"! — Victor Badoual (@VictorBadoual) March 4, 2016
http://www.instructables.com/id/The-Wipe-Eyeglasses-1/
resque-scheduler
Description
Resque-scheduler is an extension to Resque that adds support for queueing items in the future.

require 'resque-scheduler'

# run a job in 5 days, calling SendFollowupEmail.perform(argument)
Resque.enqueue_in(5.days, SendFollowupEmail, argument)

# or run a job at a specific time, calling SomeJob.perform(argument)
Resque.enqueue_at(5.days.from_now, SomeJob, argument)
Documentation
This README covers what most people need to know. If you're looking for details on individual methods, you might want to try the rdoc.
Installation
To install:
gem install resque-scheduler
If you use a Gemfile:
gem 'resque-scheduler'
Adding the resque:scheduler rake task:
require 'resque/scheduler/tasks'
Rake integration

namespace :resque do
  task :setup do
    require 'resque'

    # you probably already have this somewhere
    Resque.redis = 'localhost:6379'
  end

  task :setup_schedule => :setup do
    require 'resque-scheduler'

    # load the schedule here, e.g.:
    # Resque.schedule = YAML.load_file('your_resque_schedule.yml')
  end

  task :scheduler => :setup_schedule
end
The scheduler rake task is responsible for both queueing items from the schedule and polling the delayed queue for items ready to be pushed on to the work queues. For obvious reasons, this process never exits.
rake resque:scheduler
or, if you want to load the environment first:
rake environment resque:scheduler
Standalone Executable
The scheduler may also be run via a standalone resque-scheduler executable, which will be available once the gem is installed.
# Get some help
resque-scheduler --help
The executable accepts options via option flags as well as via environment variables.
Environment Variables
Both the Rake task and standalone executable support the following environment variables:
APP_NAME - Application name used in procline ($0) (default empty)
BACKGROUND - Run in the background if non-empty (via Process.daemon, if supported) (default false)
DYNAMIC_SCHEDULE - Enables dynamic scheduling if non-empty (default false)
RAILS_ENV - Environment to use in procline ($0) (default empty)
INITIALIZER_PATH - Path to a Ruby file that will be loaded before requiring resque and resque/scheduler (default empty)
RESQUE_SCHEDULER_INTERVAL - Interval in seconds for checking if a scheduled job must run (coerced with Kernel#Float()) (default 5)
LOGFILE - Log file name (default empty, meaning $stdout)
LOGFORMAT - Log output format to use (either 'text' or 'json', default 'text')
PIDFILE - If non-empty, write process PID to file (default empty)
QUIET - Silence most output if non-empty (equivalent to a level of MonoLogger::FATAL, default false)
VERBOSE - Maximize log verbosity if non-empty (equivalent to a level of MonoLogger::DEBUG, default false)
Resque Pool integration

For normal work with the resque-pool gem, add the following task to wherever tasks are kept, such as ./lib/tasks/resque.rake:

task 'resque:pool:setup' do
  Resque::Pool.after_prefork do |job|
    Resque.redis.client.reconnect
  end
end

Delayed Jobs

Delayed jobs are one-off jobs that you want to be put into a queue at some point in the future. Once a job's timestamp arrives, the scheduler pushes it onto the appropriate work queue, where workers are free to process it.
Also supported is Resque.enqueue_at, which takes a timestamp to queue the job, and Resque.enqueue_at_with_queue, which takes both a timestamp and a queue name:

Resque.enqueue_at_with_queue(
  'queue_name', 5.days.from_now,
  SendFollowUpEmail, user_id: current_user.id
)
If you need to cancel a delayed job, you can do so like this:

Resque.remove_delayed(SendFollowupEmail, user_id: current_user.id)

# remove jobs matching just the account:
Resque.remove_delayed_selection { |args| args[0]['account_id'] == current_account.id }
# or remove jobs matching just the user:
Resque.remove_delayed_selection { |args| args[0]['user_id'] == current_user.id }
If you need to enqueue a delayed job immediately, rather than waiting for its timestamp, you can do so like this:

# enqueue immediately jobs matching just the account:
Resque.enqueue_delayed_selection { |args| args[0]['account_id'] == current_account.id }
# or enqueue immediately jobs matching just the user:
Resque.enqueue_delayed_selection { |args| args[0]['user_id'] == current_user.id }
Scheduled Jobs (Recurring Jobs)
Scheduled (or recurring) jobs are logically no different than a standard cron job. They are jobs that run based on a schedule which can be static or dynamic.
Static schedules
Static schedules are set when resque-scheduler starts, by passing a schedule file to resque-scheduler initialization like this (see Installation above for a more complete example):
Resque.schedule = YAML.load_file('your_resque_schedule.yml')
If a static schedule is not set, resque-scheduler will issue a "Schedule empty!" warning on startup, but despite that warning setting a static schedule is totally optional. It is possible to use only dynamic schedules (see below).
The schedule file is a list of Resque job classes with arguments and a schedule frequency (in crontab syntax). The schedule is just a hash, but is usually stored in a YAML like this:
CancelAbandonedOrders:
  cron: "*/5 * * * *"

queue_documents_for_indexing:
  cron: "0 0 * * *"
  # you can use rufus-scheduler "every" syntax in place of cron if you prefer
  # every: 1h
IMPORTANT: Rufus every syntax calculates job scheduling time starting from the moment of deploy, so the schedule time is reset on every deploy. It's probably a good idea to use it only for frequent jobs (say, every 10-30 minutes); otherwise, if you use something like every 20h and deploy once or twice per day, the job will be rescheduled for 20 hours after each deploy and may never actually run.
Dynamic schedules
Dynamic schedules are programmatically set on a running resque-scheduler. All rufus-scheduler options are supported when setting schedules.

Dynamic schedules are not enabled by default. To be able to dynamically set schedules, you must pass the following to resque-scheduler initialization (see Installation above for a more complete example):
Resque::Scheduler.dynamic = true
NOTE: In order to delete dynamic schedules via resque-web in the "Schedule" tab, you must include the Rack::MethodOverride middleware (in config.ru or equivalent).

Dynamic schedules allow for greater flexibility than static schedules, as they can be set, unset or changed without having to restart resque-scheduler. You can specify whether a schedule must survive a resque-scheduler restart or not. This is done by setting the persist configuration for the schedule: it is a boolean value; if set, the schedule will persist a restart. By default, a schedule will not be persisted.
The job to be scheduled must be a valid Resque job class. For example, suppose you have a SendEmail job which sends emails. The perform method of the job receives a string argument with the email subject. To run the SendEmail job every hour starting five minutes from now, you can do:

name = 'send_emails'
config = {}
config[:class] = 'SendEmail'
config[:args] = 'POC email subject'
config[:every] = ['1h', {first_in: 5.minutes}]
config[:persist] = true
Resque.set_schedule(name, config)
Schedules can later be removed by passing their name to the remove_schedule method:

name = 'send_emails'
Resque.remove_schedule(name)
Schedule names are unique; i.e. two dynamic schedules cannot have the same name. If set_schedule is passed the name of an existing schedule, that schedule is updated. E.g. if after setting the above schedule we want the job to run every day instead of every hour from now on, we can do:

name = 'send_emails'
config = {}
config[:class] = 'SendEmail'
config[:args] = 'POC email subject'
config[:every] = '1d'
Resque.set_schedule(name, config)
on_enqueue_failure: called with the job config and the exception that was raised when enqueueing a job to resque or an external application fails. Return values are ignored. For example:

Resque::Scheduler.failure_handler = ExceptionHandlerClass

You can also run multiple resque-scheduler processes for redundancy; Resque::Scheduler uses a master lock (Resque::Scheduler::Locking) so that only one instance actually runs the schedule at a time.
You might want to share a redis instance amongst multiple Rails applications, each running a scheduler with a different config yaml file. If this is the case then normally only one scheduler will ever run, leading to undesired behaviour. To allow different scheduler configs to run at the same time on one redis, you can either namespace your redis connections, or supply an environment variable to split the shared lock key resque-scheduler uses, thus:
RESQUE_SCHEDULER_MASTER_LOCK_PREFIX=MyApp: rake resque:scheduler
Logging
There are several options to toggle the way scheduler logs its actions. They are toggled by environment variables:
QUIET will stop logging anything. Completely silent.
VERBOSE opposite of QUIET; will log even debug information
LOGFILE specifies the file to write logs to. (default standard output)
LOGFORMAT specifies either "text" or "json" output format (default "text")
All of these variables are optional and will be given the following default values:
Resque::Scheduler.configure do |c|
  c.quiet = false
  c.verbose = false
  c.logfile = nil # meaning all messages go to $stdout
  c.logformat = 'text'
end
Polling frequency
You can pass a RESQUE_SCHEDULER_INTERVAL option, which is an integer or float representing the polling frequency. The default is 5 seconds, but for a semi-active app you may want to use a smaller value.
$ RESQUE_SCHEDULER_INTERVAL=1 rake resque:scheduler
NOTE: This value was previously INTERVAL but was renamed to RESQUE_SCHEDULER_INTERVAL to avoid clashing with the interval Resque uses for its jobs.
Development
Working on resque-scheduler requires the following:
- A relatively modern Ruby interpreter
- bundler
The development setup looks like this, which is roughly the same thing that happens on Travis CI and Appveyor:
# Install everything
bundle install

# Make sure tests are green before you change stuff
bundle exec rake

# Change stuff
# Repeat
If you have vagrant installed, there is a development box available that requires no plugins or external provisioners:
vagrant up
Deployment Notes
It is recommended that a production deployment of resque-scheduler be hosted on a dedicated Redis database. While making and managing scheduled tasks, resque-scheduler currently scans the entire Redis keyspace, which may cause latency and stability issues if resque-scheduler is hosted on a Redis instance storing a large number of keys (such as those written by a different system hosted on the same Redis instance).
Compatibility Notes
Different versions of the redis and rufus-scheduler gems are needed depending on your version of resque-scheduler. This is typically not a problem with resque-scheduler itself, but when mixing dependencies with an existing application.
This table explains the version requirements for the redis gem
This table explains the version requirements for the rufus-scheduler gem
Contributing
See CONTRIBUTING.md
Authors
See AUTHORS.md | http://www.rubydoc.info/gems/resque-scheduler/frames | CC-MAIN-2017-22 | en | refinedweb |
#include <hallo.h>

Anthony Towns wrote on Tue Dec 17, 2002 um 09:00:32PM:
> Before we spend time, effort or imagination on this, there needs to be
> some evidence that it'll actually benefit people.

Well, how did other distributions manage this situation? Is anyone tracking their changes and willing to make an overview?

My best idea was: create a wrapper for LD_LIBRARY_PATH hackery, call it oldcompat-c++, provide compatibility packages for all old libs, where the files are stored elsewhere. Tell everybody to use this wrapper explicitly for every self-compiled application.

Gruss/Regards,
Eduard.
Standard Template Library(STL) is a general purpose library consisting of containers, generic algorithms, iterators, function objects, allocators, adaptors and data structures. The data structures used by the algorithms are abstract in the sense that the algorithms can be used with (practically) any data type.
The algorithms can process these abstract data types because they are template based. This chapter does not cover template construction (see chapter 21 for that). Rather, it focuses on the use of the algorithms.
Several elements also used by the standard template library have already been discussed in the C++ Annotations. In chapter 12 abstract containers were discussed, and in section 11.10 function objects were introduced. Also, iterators were mentioned at several places in this document.
The main components of the STL are covered in this and the next chapter. Iterators, adaptors, smart pointers, multi threading and other features of the STL are discussed in coming sections. Generic algorithms are covered in the next chapter (19).
Allocators take care of the memory allocation within the STL. The default allocator class suffices for most applications, and is not further discussed in the C++ Annotations.
All elements of the STL are defined in the
standard namespace. Therefore, a
using namespace std or a comparable
directive is required unless it is preferred to specify the required namespace
explicitly. In header files the
std namespace should explicitly
be used (cf. section 7.11.1).
In this chapter the empty angle bracket notation is frequently used. In
code a typename must be supplied between the angle brackets. E.g.,
plus<>
is used in the C++ Annotations, but in code
plus<string> may be encountered.
Before using the predefined function objects the <functional> header file must be included.
Function objects play important roles in generic
algorithms. For example, there exists a generic algorithm
sort
expecting two iterators defining the range of objects that should be sorted,
as well as a function object calling the appropriate comparison operator for
two objects. Let's take a quick look at this situation. Assume strings are
stored in a vector, and we want to sort the vector in descending order. In
that case, sorting the vector
stringVec is as simple as:
sort(stringVec.begin(), stringVec.end(), greater<string>());

The last argument is recognized as a constructor: it is an instantiation of the greater<> class template, applied to strings. This object is called as a function object by the sort generic algorithm. The generic algorithm calls the function object's operator() member to compare two string objects. The function object's operator() will, in turn, call operator> of the string data type. Eventually, when sort returns, the first element of the vector will contain the string having the greatest string value of all.
The function object's
operator() itself is not visible at this
point. Don't confuse the parentheses in the `
greater<string>()' argument
with calling
operator(). When
operator() is actually used inside
sort, it receives two arguments: two strings to compare for
`greaterness'. Since
greater<string>::operator() is defined inline, the
call itself is not actually present in the above
sort call. Instead
sort calls
string::operator> through
greater<string>::operator().
Now that we know that a constructor is passed as argument to (many) generic
algorithms, we can design our own function objects. Assume we want to sort our
vector case-insensitively. How do we proceed? First we note that the default
string::operator< (for an incremental sort) is not appropriate, as it does
case sensitive comparisons. So, we provide our own
CaseInsensitive class,
which compares two strings case insensitively. Using the
POSIX function
strcasecmp, the following program performs the trick. It
case-insensitively sorts its command-line arguments in ascending alphabetic
order:
#include <iostream>
#include <string>
#include <cstring>
#include <algorithm>
using namespace std;

class CaseInsensitive
{
    public:
        bool operator()(string const &left, string const &right) const
        {
            return strcasecmp(left.c_str(), right.c_str()) < 0;
        }
};

int main(int argc, char **argv)
{
    sort(argv, argv + argc, CaseInsensitive());

    for (int idx = 0; idx < argc; ++idx)
        cout << argv[idx] << " ";
    cout << '\n';
}
The default constructor of the
class CaseInsensitive is used to
provide
sort with its final argument. So the only member function that
must be defined is
CaseInsensitive::operator(). Since we know it's called
with
string arguments, we define it to expect two
string arguments,
which are used when calling
strcasecmp. Furthermore,
operator()
function is defined inline, so that it does not produce overhead when
called by the
sort function. The
sort function calls the function
object with various combinations of
strings. If the compiler grants our
inline requests, it will in fact call
strcasecmp, skipping two extra
function calls.
The comparison function object is often a predefined function object. Predefined function object classes are available for many commonly used operations. In the following sections the available predefined function objects are presented, together with some examples showing their use. Near the end of the section about function objects function adaptors are introduced.
Predefined function objects are used predominantly with generic algorithms. Predefined function objects exists for arithmetic, relational, and logical operations. In section 24.3 predefined function objects are developed performing bitwise operations.
For the addition of two values the function object plus<Type> is available. If we replace Type by size_t then the addition operator for size_t values is used; if we replace Type by string, the addition operator for strings is used. For example:
#include <iostream>
#include <string>
#include <functional>
using namespace std;

int main(int argc, char **argv)
{
    plus<size_t> uAdd;          // function object to add size_ts

    cout << "3 + 5 = " << uAdd(3, 5) << '\n';

    plus<string> sAdd;          // function object to add strings

    cout << "argv[0] + argv[1] = " << sAdd(argv[0], argv[1]) << '\n';
}
/*
    Output when called as: a.out going
        3 + 5 = 8
        argv[0] + argv[1] = a.outgoing
*/
Why is this useful? Note that the function object can be used with all kinds of data types (not only with the predefined datatypes) supporting the operator called by the function object.
Suppose we want to perform an operation on a left hand side operand which is always the same variable and a right hand side argument for which, in turn, all elements of an array should be used. E.g., we want to compute the sum of all elements in an array; or we want to concatenate all the strings in a text-array. In situations like these function objects come in handy.
As stated, function objects are heavily used in the context of the generic algorithms, so let's take a quick look ahead at yet another one.
The generic algorithm
accumulate visits all elements specified by an
iterator-range, and performs a requested binary operation on a common element
and each of the elements in the range, returning the accumulated result after
visiting all elements specified by the iterator range. It's easy to use this
algorithm. The next program accumulates all command line arguments and prints
the final string:
#include <iostream>
#include <string>
#include <functional>
#include <numeric>
using namespace std;

int main(int argc, char **argv)
{
    string result =
        accumulate(argv, argv + argc, string(), plus<string>());

    cout << "All concatenated arguments: " << result << '\n';
}
The first two arguments define the (iterator) range of elements to visit,
the third argument is
string. This anonymous string object provides an
initial value. We could also have used
string("All concatenated arguments: ") in which case the cout statement could simply have been
cout << result << '\n'. The string-addition operation is used, called from
plus<string>. The final concatenated string is returned.
Now we define a class
Time, overloading
operator+. Again, we can
apply the predefined function object
plus, now tailored to our newly
defined datatype, to add times:
#include <iostream>
#include <string>
#include <vector>
#include <functional>
#include <numeric>
using namespace std;

class Time
{
    friend ostream &operator<<(ostream &str, Time const &time);

    size_t d_days;
    size_t d_hours;
    size_t d_minutes;
    size_t d_seconds;

    public:
        Time(size_t hours, size_t minutes, size_t seconds);
        Time &operator+=(Time const &rhs);
};

Time operator+(Time const &lhs, Time const &rhs)
{
    Time ret(lhs);
    ret += rhs;
    return ret;
}

Time::Time(size_t hours, size_t minutes, size_t seconds)
:
    d_days(0),
    d_hours(hours),
    d_minutes(minutes),
    d_seconds(seconds)
{}

Time &Time::operator+=(Time const &rhs)
{
    d_seconds += rhs.d_seconds;
    d_minutes += rhs.d_minutes + d_seconds / 60;
    d_hours   += rhs.d_hours   + d_minutes / 60;
    d_days    += rhs.d_days    + d_hours   / 24;

    d_seconds %= 60;
    d_minutes %= 60;
    d_hours   %= 24;

    return *this;
}

ostream &operator<<(ostream &str, Time const &time)
{
    return str << time.d_days << " days, " << time.d_hours <<
        " hours, " << time.d_minutes << " minutes and " <<
        time.d_seconds << " seconds.";
}

int main(int argc, char **argv)
{
    vector<Time> tvector;

    tvector.push_back(Time( 1, 10, 20));
    tvector.push_back(Time(10, 30, 40));
    tvector.push_back(Time(20, 50,  0));
    tvector.push_back(Time(30, 20, 30));

    cout << accumulate(tvector.begin(), tvector.end(),
                       Time(0, 0, 0), plus<Time>()) << '\n';
}
// Displays: 2 days, 14 hours, 51 minutes and 30 seconds.
The design of the above program is fairly straightforward.
Time
defines a constructor, it defines an insertion operator and it defines its own
operator+, adding two time objects. In
main four
Time objects are
stored in a
vector<Time> object. Then,
accumulate is used to compute
the accumulated time. It returns a
Time object, which is inserted into
cout.
While the first example did show the use of a named function object,
the last two examples showed the use of anonymous objects that were
passed to the (
accumulate) function.
The STL supports the following set of arithmetic function objects. The
function call operator (
operator()) of these function objects calls the
matching arithmetic operator for the objects that are passed to the function
call operator, returning that arithmetic operator's return value. The
arithmetic operator that is actually called is mentioned below:
plus<>: calls the binary
operator+;
minus<>: calls the binary
operator-;
multiplies<>: calls the binary
operator*;
divides<>: calls
operator/;
modulus<>: calls
operator%;
negate<>: calls the unary
operator-. This arithmetic function object is a unary function object as it expects one argument.
In the following example the transform generic algorithm is used to toggle the signs of all elements of an array. Transform expects two iterators, defining the range of objects to be transformed; an iterator defining the begin of the destination range (which may be the same iterator as the first argument); and a function object defining a unary operation for the indicated data type.
#include <iostream>
#include <string>
#include <functional>
#include <algorithm>
using namespace std;

int main(int argc, char **argv)
{
    int iArr[] = { 1, -2, 3, -4, 5, -6 };

    transform(iArr, iArr + 6, iArr, negate<int>());

    for (int idx = 0; idx < 6; ++idx)
        cout << iArr[idx] << ", ";
    cout << '\n';
}
// Displays: -1, 2, -3, 4, -5, 6,
The relational operators are called by this category of function objects: ==, !=, >, >=, < and <=.
The STL supports the following set of relational function objects. The
function call operator (
operator()) of these function objects calls the
matching relational operator for the objects that are passed to the function
call operator, returning that relational operator's return value. The
relational operator that is actually called is mentioned below:
equal_to<>: calls
operator==;
not_equal_to<>: calls
operator!=;
greater<>: calls
operator>;
greater_equal<>: calls
operator>=;
less<>: this object's
operator()member calls
operator<;
less_equal<>: calls
operator<=.
An example using a relational function object with the generic algorithm sort is:
#include <iostream>
#include <string>
#include <functional>
#include <algorithm>
using namespace std;

int main(int argc, char **argv)
{
    sort(argv, argv + argc, greater_equal<string>());

    for (int idx = 0; idx < argc; ++idx)
        cout << argv[idx] << " ";
    cout << '\n';

    sort(argv, argv + argc, less<string>());

    for (int idx = 0; idx < argc; ++idx)
        cout << argv[idx] << " ";
    cout << '\n';
}
The example illustrates how strings may be sorted alphabetically and
reversed alphabetically. By passing
greater_equal<string> the strings are
sorted in decreasing order (the first word will be the 'greatest'), by
passing
less<string> the strings are sorted in increasing order (the
first word will be the 'smallest').
Note that
argv contains
char * values, and that the relational
function object expects a
string. The promotion from
char const * to
string is silently performed.
The logical operators are called by this category of function objects: and, or, and not.
The STL supports the following set of logical function objects. The
function call operator (
operator()) of these function objects calls the
matching logical operator for the objects that are passed to the function
call operator, returning that logical operator's return value. The
logical operator that is actually called is mentioned below:
logical_and<>: calls
operator&&;
logical_or<>: calls
operator||;
logical_not<>: calls
operator!.
An example using operator! is provided in the following trivial program, using transform to transform the logical values stored in an array:
#include <iostream>
#include <string>
#include <functional>
#include <algorithm>
using namespace std;

int main(int argc, char **argv)
{
    bool bArr[] = {true, true, true, false, false, false};
    size_t const bArrSize = sizeof(bArr) / sizeof(bool);

    for (size_t idx = 0; idx < bArrSize; ++idx)
        cout << bArr[idx] << " ";
    cout << '\n';

    transform(bArr, bArr + bArrSize, bArr, logical_not<bool>());

    for (size_t idx = 0; idx < bArrSize; ++idx)
        cout << bArr[idx] << " ";
    cout << '\n';
}
/*
    Displays:
        1 1 1 0 0 0
        0 0 0 1 1 1
*/
Binders convert binary function objects into unary function objects, by binding one of the two arguments to a fixed value. For example, if the first argument of the minus<int> function object is bound to 100, then the resulting value is always equal to 100 minus the value of the function object's second argument.
Originally two binder adapters (
bind1st and
bind2nd) binding,
respectively, the first and the second argument of a binary function were
defined. However, in the next C++17 standard
bind1st and
bind2nd
are likely to be removed, as they are superseded by the more general
bind
binder.
Bind itself is likely to become a deprecated function, as it can
easily be replaced by (generic) lambda functions (cf. section
18.7).
As
bind1st and
bind2nd are still available, a short example showing
their use (concentrating on
bind2nd) is provided. A more elaborate
example, using
bind is shown next. Existing code should be modified so
that either
bind or a lambda function is used.
Before using
bind (or the namespace
std::placeholders, see below) the
<functional> header file must be included.
Here is an example showing how to use
bind2nd to count the number of
strings that are equal to a string (
target) in a vector of strings
(
vs) (it is assumed that the required headers and
using namespace std
have been specified):
count_if(vs.begin(), vs.end(), bind2nd(equal_to<string>(), target));

In this example the function object equal_to is instantiated for strings, receiving target as its second argument, and each of the strings in vs is passed in sequence to its first argument. In this particular example, where equality is being determined, bind1st could also have been used.
The
bind adaptor expects a function as its first argument, and then any
number of arguments that the function may need. Although an unspecified number
of arguments may be specified when using
bind it is not a variadic
function the way the C programming language defines them.
Bind is a variadic function template; variadic templates are covered in section 22.5.
By default
bind returns the function that is specified as its first
argument, receiving the remaining arguments that were passed to
bind as
its arguments. The function returned by
bind may then be called. Depending
on the way
bind is called, calling the returned function may or may not
require arguments.
Here is an example:
int sub(int lhs, int rhs);  // returns lhs - rhs

bind(sub, 3, 4);            // returns a function object whose
                            // operator() returns sub(3, 4)

Since bind's return value is a function object it can be called:

bind(sub, 3, 4)();

but more commonly bind's return value is assigned to a variable, which then represents the returned function object, as in:

auto functor = bind(sub, 3, 4); // define a variable for the functor
cout << functor() << '\n';      // call the functor, returning -1.
Instead of specifying the arguments when using
bind, placeholders
(cf. section 4.1.3.1) can be specified. Explicit argument values
must then be specified when the returned functor is called. Here are some
examples:
using namespace placeholders;

auto ftor1 = bind(sub, _1, 4);  // 1st argument must be specified
ftor1(10);                      // returns 10 - 4 = 6

auto ftor2 = bind(sub, 5, _1);  // 2nd argument must be specified
ftor2(10);                      // returns 5 - 10 = -5

auto ftor3 = bind(sub, _1, _2); // Both arguments must be specified
ftor3(10, 2);                   // returns 10 - 2 = 8

auto ftor4 = bind(sub, _2, _1); // Both arguments must be specified
ftor4(10, 2);                   // but in reversed order: returns
                                // 2 - 10 = -8

Alternatively, the first argument can be the address of a member function. In that case, the next argument specifies the object for which the member function is called, while the remaining arguments specify the arguments (if any) that are passed to the member function. Some examples:
struct Object       // Object contains the lhs of a
{                   // subtraction operation
    int d_lhs;

    Object(int lhs)
    :
        d_lhs(lhs)
    {}

    int sub(int rhs)    // sub modifies d_lhs
    {
        return d_lhs -= rhs;
    }
};

int main()
{
    using namespace placeholders;

    Object obj(5);

    auto ftor1 = bind(&Object::sub, obj, 12);
    cout << ftor1() << '\n';    // shows -7
    cout << obj.d_lhs << '\n';  // shows 5: obj not modified, bind
                                // uses a copy

    auto ftor2 = bind(&Object::sub, ref(obj), 12);
    cout << ftor2() << '\n';    // shows -7
    cout << obj.d_lhs << '\n';  // shows -7: obj was modified
}

Note the use of ref in the second bind call: here obj is passed by reference, forwarding obj itself, rather than its copy, to the ftor2 functor. This is realized using a facility called perfect forwarding, which is discussed in detail in section 22.5.2.
If the return type of the function that is called by the functor
doesn't match its context (e.g., the functor is called in an expression where
its return value is compared with a
size_t) then the return type of the
functor can easily be coerced into the appropriate type (of course, provided that the
requested type conversion is possible). In those cases the requested return
type can be specified between pointed brackets immediately following
bind. E.g.,
auto ftor = bind<size_t>(sub, _1, 4);   // ftor's return type is size_t
size_t target = 5;

if (target < ftor(3))       // -1 becomes a large positive value
    cout << "smaller\n";    // and so 'smaller' is shown.

Finally, the example given earlier, using bind2nd, can be rewritten using bind like this:
using namespace placeholders;

count_if(vs.begin(), vs.end(), bind(equal_to<string>(), _1, target));

Here, bind returns a functor expecting one argument (represented by _1) and count_if passes the strings in vs in sequence to the functor returned by bind. The second argument (target) is embedded inside the functor's implementation, where it is passed as second argument to the equal_to<string>() function object.
Analogous to the binders bind1st and bind2nd, two negator function adaptors were predefined: not1 is the negator to use with unary predicates, not2 is the negator to use with binary function objects. In specific situations they may still be usable in combination with the bind function template, but since bind1st and bind2nd will be deprecated in C++17, alternative implementations are being considered for not1 and not2 as well (see, e.g.,).
Since
not1 and
not2 are still part of the C++ standard, their use
is briefly illustrated here. An alternate implementation, suggesting how a
future
not_fn might be designed and how it can be used is provided in
section 22.5.5.
Here are some examples illustrating the use of
not1 and
not2: To count
the number of elements in a vector of strings (
vs) that are alphabetically
ordered before a certain reference string (
target) one of the following
alternatives could be used:
count_if(vs.begin(), vs.end(), bind2nd(less<string>(), target))

or, using bind:

count_if(vs.begin(), vs.end(), bind(less<string>(), _1, target));
The same result is obtained when using the not2 negator:

count_if(vs.begin(), vs.end(), bind2nd(not2(greater_equal<string>()), target));

Here not2 is used as it negates greater_equal's truth value. Not2 receives two arguments (one of vs's elements and target), passes them on to greater_equal, and returns the negated return value of the called greater_equal function.
In this example
bind could also have been used:
count_if(vs.begin(), vs.end(), bind(not2(greater_equal<string>()), _1, target));
It is also possible to use not1 in combination with the bind2nd predicate: here the arguments that are passed to not1's function call operator (i.e., the elements of the vs vector) are passed on to bind2nd's function call operator, which in turn calls greater_equal, using target as its second argument. The value that is returned by bind2nd's function call operator is then negated and subsequently returned as the return value of not1's function call operator:

count_if(vs.begin(), vs.end(), not1(bind2nd(greater_equal<string>(), target)))

When using bind in this example a compilation error results, which can be avoided using not_fn (section 22.5.5).
Before using iterators the <iterator> header file must be included.
Iterators are objects acting like pointers. Iterators have the following general characteristics:
Iterators may be compared using the == and != operators. The ordering operators (e.g., >, <) can usually not be used.
Given an iterator iter, *iter represents the object the iterator points to (alternatively, iter-> can be used to reach the members of the object the iterator points to).
++iter or iter++ advances the iterator to the next element. The notion of advancing an iterator to the next element is consequently applied: several containers support reverse_iterator types, in which the ++iter operation actually reaches a previous element in a sequence.
Pointer arithmetic may be used with iterators of containers supporting random access, like vector and deque. For such containers iter + 2 points to the second element beyond the one to which iter points. See also section 18.2.1, covering std::distance.
#include <vector>
#include <iostream>
using namespace std;

int main()
{
    vector<int>::iterator vi;
    cout << &*vi;       // prints 0
}
Containers usually define members returning iterators (i.e., of type iterator). These members are commonly called begin and end and (for reversed iterators (type reverse_iterator)) rbegin and rend.
Standard practice requires iterator ranges to be
left inclusive. The notation
[left, right) indicates that
left
is an iterator pointing to the first element, while
right is an iterator
pointing just beyond the last element. The iterator range is empty
when
left == right.
The following example shows how all elements of a vector of strings can be
inserted into
cout using its iterator ranges
[begin(), end()), and
[rbegin(), rend()). Note that the
for-loops for both ranges are
identical. Furthermore it nicely illustrates how the
auto keyword can be
used to define the type of the loop control variable instead of using a much
more verbose variable definition like
vector<string>::iterator (see also
section 3.3.6):
#include <iostream>
#include <vector>
#include <string>
using namespace std;

int main(int argc, char **argv)
{
    vector<string> args(argv, argv + argc);

    for (auto iter = args.begin(); iter != args.end(); ++iter)
        cout << *iter << " ";
    cout << '\n';

    for (auto iter = args.rbegin(); iter != args.rend(); ++iter)
        cout << *iter << " ";
    cout << '\n';
}
Furthermore, the STL defines
const_iterator types that must be used
when visiting a series of elements in a constant container. Whereas the
elements of the vector in the previous example could have been altered, the
elements of the vector in the next example are immutable, and
const_iterators are required:
#include <iostream>
#include <vector>
#include <string>
using namespace std;

int main(int argc, char **argv)
{
    vector<string> const args(argv, argv + argc);

    for (
        vector<string>::const_iterator iter = args.begin();
            iter != args.end();
                ++iter
    )
        cout << *iter << " ";
    cout << '\n';

    for (
        vector<string>::const_reverse_iterator iter = args.rbegin();
            iter != args.rend();
                ++iter
    )
        cout << *iter << " ";
    cout << '\n';
}
The examples also illustrate that plain
pointers can be used as iterators. The
initialization
vector<string> args(argv, argv + argc) provides the
args vector with a pair of pointer-based iterators:
argv points to the
first element to initialize
args with,
argv + argc points just beyond
the last element to be used,
++argv reaches the next command line
argument. This is a general pointer characteristic, which is why they too can
be used in situations where
iterators are expected.
The STL defines five types of iterators. These iterator types are expected by generic algorithms, and in order to create a particular type of iterator yourself it is important to know their characteristics. In general, iterators (see also section 22.14) must define:
operator==, testing two iterators for equality,
operator!=, testing two iterators for inequality,
operator++, incrementing the iterator, as prefix operator,
operator*, to access the element the iterator refers to,
InputIterators are used to read from a container. The dereference operator is guaranteed to work as rvalue in expressions. Instead of an InputIterator it is also possible to use (see below) Forward-, Bidirectional- or RandomAccessIterators. Notations like InputIterator1 and InputIterator2 may be used as well. In these cases, numbers are used to indicate which iterators `belong together'. E.g., the generic algorithm inner_product has the following prototype:

    Type inner_product(InputIterator1 first1, InputIterator1 last1,
                       InputIterator2 first2, Type init);

Here, InputIterator1 first1 and InputIterator1 last1 define a pair of input iterators on one range, while InputIterator2 first2 defines the beginning of another range. Analogous notations may be used with other iterator types.

OutputIterators can be used to write to a container. The dereference operator is guaranteed to work as an lvalue in expressions, but not necessarily as rvalue. Instead of an OutputIterator it is also possible to use (see below) Forward-, Bidirectional- or RandomAccessIterators.

ForwardIterators combine InputIterators and OutputIterators. They can be used to traverse containers in one direction, for reading and/or writing. Instead of a ForwardIterator it is also possible to use (see below) Bidirectional- or RandomAccessIterators.

BidirectionalIterators can be used to traverse containers in both directions, for reading and writing. Instead of a BidirectionalIterator it is also possible to use (see below) a RandomAccessIterator.

RandomAccessIterators provide random access to container elements. An algorithm like sort requires a RandomAccessIterator, and can therefore not be used to sort the elements of lists or maps, which only provide BidirectionalIterators.
Using pointer arithmetic to compute the number of elements between two
iterators in, e.g., a
std::list or
std::unordered_map is not possible,
as these containers do not store their elements consecutively in memory.
The function
std::distance fills in that little gap:
std::distance expects two InputIterators and returns the number of
elements between them.
Before using
distance the
<iterator> header file must be included.
If the iterator specified as first argument exceeds the iterator specified as
its second argument then the number of elements is non-positive, otherwise it
is non-negative. If the number of elements cannot be determined (e.g., the
iterators do not refer to elements in the same container), then
distance's
return value is undefined.
Example:
#include <iostream>
#include <unordered_map>
using namespace std;

int main()
{
    unordered_map<int, int> myMap = {{1, 2}, {3, 5}, {-8, 12}};

    cout << distance(++myMap.begin(), myMap.end()) << '\n'; // shows: 2
}
The copy generic algorithm has three parameters. The first two define the range of visited elements, the third defines the first position where the results of the copy operation should be stored.
With the
copy algorithm the number of elements to copy is usually
available beforehand, since that number can usually be provided by pointer
arithmetic. However, situations exist where pointer arithmetic cannot be
used. Analogously, the number of resulting elements sometimes differs from the
number of elements in the initial range. The generic algorithm
unique_copy is a case in point. Here the number of
elements that are copied to the destination container is normally not known
beforehand.
In situations like these an inserter adaptor function can often be used to create elements in the destination container. There are three types of inserter adaptors:
back_inserter: calls the container's push_back member to add new elements at the end of the container. E.g., to copy all elements of source in reversed order to the back of destination, using the copy generic algorithm:

    copy(source.rbegin(), source.rend(), back_inserter(destination));

front_inserter: calls the container's push_front member, adding new elements at the beginning of the container. E.g., to copy all elements of source to the front of the destination container (thereby also reversing the order of the elements):

    copy(source.begin(), source.end(), front_inserter(destination));

inserter: calls the container's insert member, adding new elements starting at a specified starting point. E.g., to copy all elements of source to the destination container, starting at the beginning of destination, shifting up existing elements to beyond the newly inserted elements:

    copy(source.begin(), source.end(),
         inserter(destination, destination.begin()));
Classes used with inserter adaptors must define the following typedefs:

    typedef Data value_type, where Data is the data type stored in the class offering push_back, push_front or insert members (example: typedef std::string value_type);

    typedef value_type const &const_reference
Concentrating on back_inserter: this iterator expects the name of a container supporting a member push_back. The inserter's operator() member calls the container's push_back member. Objects of any class supporting a push_back member can be passed as arguments to back_inserter provided the class adds

    typedef DataType const &const_reference;

to its interface (where DataType const & is the type of the parameter of the class's member push_back). Example:
#include <iostream>
#include <algorithm>
#include <iterator>
using namespace std;

class Insertable
{
    public:
        typedef int value_type;
        typedef int const &const_reference;

        void push_back(int const &)
        {}
};

int main()
{
    int arr[] = {1};
    Insertable insertable;

    copy(arr, arr + 1, back_inserter(insertable));
}
The class template istream_iterator<Type> can be used to define a set of iterators for istream objects. The general form of the istream_iterator iterator is:

    istream_iterator<Type> identifier(istream &in)

Here, Type is the type of the data elements read from the istream stream. It is used as the `begin' iterator in an iterator range. Type may be any type for which operator>> is defined in combination with istream objects.

The default constructor is used as the end-iterator and corresponds to the end-of-stream. For example:

    istream_iterator<string> endOfStream;

The stream object that was specified when defining the begin-iterator is not mentioned with the default constructor.
Using
back_inserter and
istream_iterator adaptors, all strings
from a stream can easily be stored in a container. Example (using anonymous
istream_iterator adaptors):
#include <iostream> #include <iterator> #include <string> #include <vector> #include <algorithm> using namespace std; int main() { vector<string> vs; copy(istream_iterator<string>(cin), istream_iterator<string>(), back_inserter(vs)); for ( vector<string>::const_iterator begin = vs.begin(), end = vs.end(); begin != end; ++begin ) cout << *begin << ' '; cout << '\n'; }
Reading from streambuf objects:
To read from streambuf objects supporting input operations istreambuf_iterators can be used, supporting the operations that are also available for istream_iterator. Different from the latter iterator type, istreambuf_iterators support three constructors:
istreambuf_iterator<Type>: the end iterator of an iterator range is created using the default istreambuf_iterator constructor. It represents the end-of-stream condition when extracting values of type Type from the streambuf.
istreambuf_iterator<Type>(streambuf *): a pointer to a streambuf may be used when defining an istreambuf_iterator. It represents the begin iterator of an iterator range.
istreambuf_iterator<Type>(istream): an istream may also be used when defining an istreambuf_iterator. It accesses the istream's streambuf and also represents the begin iterator of an iterator range.
An example combining istreambuf_iterators and ostreambuf_iterators is given below.
The ostream_iterator<Type> adaptor can be used to pass an ostream to algorithms expecting an OutputIterator. Two constructors are available for defining ostream_iterators:
ostream_iterator<Type> identifier(ostream &outStream); ostream_iterator<Type> identifier(ostream &outStream, char const *delim);
Type is the type of the data elements that should be inserted into an ostream. It may be any type for which operator<< is defined in combination with ostream objects. The latter constructor can be used to separate the individual Type data elements by delimiter strings. The former constructor does not use any delimiters.
The example shows how istream_iterators and an ostream_iterator may be used to copy information of a file to another file. A subtlety here is the statement cin.unsetf(ios::skipws): it clears the ios::skipws flag. As a consequence whitespace characters are simply returned by the extraction operator, and the file is copied character by character. Here is the program:
#include <iostream> #include <algorithm> #include <iterator> using namespace std; int main() { cin.unsetf(ios::skipws); copy(istream_iterator<char>(cin), istream_iterator<char>(), ostream_iterator<char>(cout)); }
Writing to streambuf objects:
To write to streambuf objects supporting output operations ostreambuf_iterators can be used, supporting the operations that are also available for ostream_iterator. Ostreambuf_iterators support two constructors:
ostreambuf_iterator<Type>(streambuf *): a pointer to a streambuf may be used when defining an ostreambuf_iterator. It can be used as an OutputIterator.
ostreambuf_iterator<Type>(ostream): an ostream may also be used when defining an ostreambuf_iterator. It accesses the ostream's streambuf and can also be used as an OutputIterator.
The next program uses istreambuf_iterators and ostreambuf_iterators to copy a stream in yet another way. Since the streams' streambufs are accessed directly, the streams and stream flags are bypassed. Consequently there is no need to clear ios::skipws as in the previous section, while the next program's efficiency probably also exceeds that of the program shown in the previous section.
#include <iostream> #include <algorithm> #include <iterator> using namespace std; int main() { istreambuf_iterator<char> in(cin.rdbuf()); istreambuf_iterator<char> eof; ostreambuf_iterator<char> out(cout.rdbuf()); copy(in, eof, out); }
Before using the unique_ptr class presented in this section the <memory> header file must be included.
When pointers are used to access dynamically allocated memory strict bookkeeping is required to prevent memory leaks: when a pointer variable referring to dynamically allocated memory goes out of scope, the dynamically allocated memory becomes inaccessible and the program suffers from a memory leak. The programmer therefore has to make sure that the dynamically allocated memory is returned to the common pool just before the pointer variable goes out of scope.
When a pointer variable points to a dynamically allocated single value or
object, bookkeeping requirements are greatly simplified when the pointer
variable is defined as a
std::unique_ptr object.
Unique_ptrs are objects masquerading as pointers. Since they are objects, their destructors are called when they go out of scope. Their destructors automatically delete the dynamically allocated memory.
Unique_ptrs have some special characteristics:
When assigning one unique_ptr to another, move semantics is used. If move semantics is not available compilation fails. On the other hand, if compilation succeeds then the used containers or generic algorithms support the use of unique_ptrs. Here is an example:
std::unique_ptr<int> up1(new int); std::unique_ptr<int> up2(up1); // compilation error
The second definition fails to compile as unique_ptr's copy constructor is private (the same holds true for the copy assignment operator). But the unique_ptr class does offer facilities to initialize and assign from rvalue references:
class unique_ptr // interface partially shown { public: unique_ptr(unique_ptr &&tmp); // rvalues bind here private: unique_ptr(const unique_ptr &other); };
In the next example move semantics is used and so it compiles correctly:
unique_ptr<int> cp(unique_ptr<int>(new int));
A unique_ptr object should only point to memory that was made available dynamically, as only dynamically allocated memory can be deleted.
Two unique_ptr objects should not be allowed to point to the same block of dynamically allocated memory. The unique_ptr's interface was designed to prevent this from happening: once a unique_ptr object goes out of scope, it deletes the memory it points to, which would immediately change any other object also pointing to the allocated memory into a wild pointer.
If Derived is derived from Base, then a newly allocated Derived class object can be assigned to a unique_ptr<Base>, without having to define a virtual destructor for Base. The Base * pointer that is returned by the unique_ptr object can simply be cast statically to Derived, and Derived's destructor is automatically called as well, if the unique_ptr definition is provided with a deleter function address. This is illustrated in the next example:
class Base { ... }; class Derived: public Base { ... public: // assume Derived has a member void process() static void deleter(Base *bp); }; void Derived::deleter(Base *bp) { delete static_cast<Derived *>(bp); } int main() { unique_ptr<Base, void (*)(Base *)> bp(new Derived, &Derived::deleter); static_cast<Derived *>(bp.get())->process(); // OK! } // here ~Derived is called: no polymorphism required.
unique_ptr offers several member functions to access the pointer itself or to have a unique_ptr point to another block of memory. These member functions (and unique_ptr constructors) are introduced in the next few sections.
A
unique_ptr (as well as a
shared_ptr, see section 18.4) can
be used as a safe alternative to the now deprecated
auto_ptr.
Unique_ptr also augments auto_ptr: it can be used with containers and (generic) algorithms, and it adds customizable deleters. Arrays can also be handled by unique_ptrs.
There are three ways to define unique_ptr objects. Each definition contains the usual <type> specifier between angle brackets:
The default constructor defines a unique_ptr object that does not point to a particular block of memory; its pointer is initialized to 0 (zero):
unique_ptr<type> identifier;
This form is discussed in section 18.3.2.
The move constructor initializes a unique_ptr object from a temporary unique_ptr object. Following the use of the move constructor its unique_ptr argument no longer points to the dynamically allocated memory and its pointer data member is turned into a zero-pointer:
unique_ptr<type> identifier(another unique_ptr for type);
This form is discussed in section 18.3.3.
The third constructor initializes a unique_ptr object to the block of dynamically allocated memory that is passed to the object's constructor. Optionally a deleter can be provided: a (free) function (or function object) receiving the unique_ptr's pointer as its argument, which is supposed to return the dynamically allocated memory to the common pool (doing nothing if the pointer equals zero):
unique_ptr<type> identifier(new-expression [, deleter]);
This form is discussed in section 18.3.4.
Unique_ptr's default constructor defines a unique_ptr not pointing to a particular block of memory:
unique_ptr<type> identifier;
The pointer controlled by the unique_ptr object is initialized to 0 (zero). Although the unique_ptr object itself is not the pointer, its value can be compared to 0. Example:
unique_ptr<int> ip; if (!ip) cout << "0-pointer with a unique_ptr object\n";
Alternatively, the member get can be used (cf. section 18.3.5).
A unique_ptr may be initialized using an rvalue reference to a unique_ptr object for the same type:
unique_ptr<type> identifier(other unique_ptr object);
The move constructor is used, e.g., in the following example:
void mover(unique_ptr<string> &&param) { unique_ptr<string> tmp(move(param)); }
Analogously, the assignment operator can be used: a temporary unique_ptr object of the same type may be assigned to a unique_ptr object (again move semantics is used). For example:
#include <iostream> #include <memory> #include <string> using namespace std; int main() { unique_ptr<string> hello1(new string("Hello world")); unique_ptr<string> hello2(move(hello1)); unique_ptr<string> hello3; hello3 = move(hello2); cout << // *hello1 << '\n' << // would have segfaulted // *hello2 << '\n' << // same *hello3 << '\n'; } // Displays: Hello world
The example illustrates that hello1 is initialized by a pointer to a dynamically allocated string (see the next section); that unique_ptr hello2 grabs the pointer controlled by hello1 using a move constructor, effectively changing hello1 into a 0-pointer; and that hello3, defined as a default unique_ptr<string>, then grabs its value using move-assignment from hello2 (which, as a consequence, is changed into a 0-pointer as well). If hello1 or hello2 had been inserted into cout a segmentation fault would have resulted. The reason for this should now be clear: it is caused by dereferencing 0-pointers. In the end, only hello3 actually points to the originally allocated string.
A unique_ptr is most often initialized using a pointer to dynamically allocated memory. The generic form is:
unique_ptr<type [, deleter_type]> identifier(new-expression [, deleter = deleter_type()]);
The second (template) argument (deleter(_type)) is optional and may refer to a free function or function object handling the destruction of the allocated memory. A deleter is used, e.g., in situations where a double pointer is allocated and the destruction must visit each nested pointer to destroy the allocated memory (see below for an illustration).
Here is an example initializing a unique_ptr pointing to a string object:
unique_ptr<string> strPtr(new string("Hello world"));
The argument that is passed to the constructor is the pointer returned by operator new. Note that type does not mention the pointer. The type that is used in the unique_ptr construction is the same as the type that is used in new expressions.
Here is an example showing how an explicitly defined deleter may be used to delete a dynamically allocated array of pointers to strings:
#include <iostream> #include <string> #include <memory> using namespace std; struct Deleter { size_t d_size; Deleter(size_t size = 0) : d_size(size) {} void operator()(string **ptr) const { for (size_t idx = 0; idx < d_size; ++idx) delete ptr[idx]; delete[] ptr; } }; int main() { unique_ptr<string *, Deleter> sp2(new string *[10], Deleter(10)); Deleter &obj = sp2.get_deleter(); }
A
unique_ptr can be used to reach the
member functions that are available for
objects allocated by the
new expression. These members can be reached as
if the
unique_ptr was a plain pointer to the dynamically allocated
object. For example, in the following program the text `
C++' is inserted
behind the word `
hello':
#include <iostream> #include <memory> #include <cstring> using namespace std; int main() { unique_ptr<string> sp(new string("Hello world")); cout << *sp << '\n'; sp->insert(strlen("Hello "), "C++ "); cout << *sp << '\n'; } /* Displays: Hello world Hello C++ world */
unique_ptr offers the following operators:
unique_ptr<Type> &operator=(unique_ptr<Type> &&tmp): this operator transfers the memory pointed to by the rvalue unique_ptr object to the lvalue unique_ptr object using move semantics. So, the rvalue object loses the memory it pointed at and turns into a 0-pointer. An existing unique_ptr may be assigned to another unique_ptr by converting it to an rvalue reference first, using std::move. Example:
unique_ptr<int> ip1(new int); unique_ptr<int> ip2; ip2 = std::move(ip1);
operator bool() const: this operator returns false if the unique_ptr does not point to memory (i.e., its get member, see below, returns 0). Otherwise, true is returned.
Type &operator*(): this operator returns a reference to the information accessible via a unique_ptr object. It acts like a normal pointer dereference operator.
Type *operator->(): this operator returns a pointer to the information accessible via a unique_ptr object. It allows you to select members of an object accessible via a unique_ptr object. Example:
unique_ptr<string> sp(new string("hello")); cout << sp->c_str();
The class unique_ptr supports the following member functions:
Type *get(): a pointer to the information controlled by the unique_ptr object is returned. It acts like operator->. The returned pointer can be inspected: if it is zero the unique_ptr object does not point to any memory.
Deleter &unique_ptr<Type>::get_deleter(): a reference to the deleter object used by the unique_ptr is returned.
Type *release(): a pointer to the information accessible via a unique_ptr object is returned. At the same time the object itself becomes a 0-pointer (i.e., its pointer data member is turned into a 0-pointer). This member can be used to transfer the information accessible via a unique_ptr object to a plain Type pointer. After calling this member the proper destruction of the dynamically allocated memory is the responsibility of the programmer.
void reset(Type *): the dynamically allocated memory controlled by the unique_ptr object is returned to the common pool; the object thereupon controls the memory to which the argument that is passed to the function points. It can also be called without argument, turning the object into a 0-pointer. This member function can be used to assign a new block of dynamically allocated memory to a unique_ptr object.
void swap(unique_ptr<Type> &): two identically typed unique_ptrs are swapped.
When a unique_ptr is used to store arrays the dereferencing operator makes little sense, but with arrays unique_ptr objects benefit from index operators. The distinction between a single-object unique_ptr and a unique_ptr referring to a dynamically allocated array of objects is realized through a template specialization. With dynamically allocated arrays the following syntax is available:
The array ([]) notation is used to specify that the smart pointer controls a dynamically allocated array. Example:
unique_ptr<int[]> intArr(new int[3]);
The index operator can be used to access the array's individual elements. Example:
intArr[2] = intArr[0];
The dynamically allocated array is deleted using delete[] rather than delete.
The smart pointer class std::auto_ptr<Type> has traditionally been offered by C++. This class does not support move semantics; instead, when an auto_ptr object is assigned to another, the right-hand object loses its information. The class unique_ptr does not have auto_ptr's drawbacks and consequently using auto_ptr is now deprecated. Auto_ptrs suffer from various drawbacks.
Because of its drawbacks and available replacements the
auto_ptr class is
no longer covered by the C++ Annotations. Existing software should be modified
to use smart pointers (
unique_ptrs or
shared_ptrs) and new software
should, where applicable, directly be implemented in terms of these new smart
pointer types.
In addition to unique_ptr the class std::shared_ptr<Type> is available: a reference counting smart pointer. Before using shared_ptrs the <memory> header file must be included.
The shared pointer automatically destroys its contents once its reference count has decayed to zero. Shared_ptrs support copy and move constructors as well as standard and move overloaded assignment operators. Like unique_ptrs, shared_ptrs may refer to dynamically allocated arrays.
There are several ways to define shared_ptr objects. Each definition contains the usual <type> specifier between angle brackets:
The default constructor defines a shared_ptr object that does not point to a particular block of memory; its pointer is initialized to 0 (zero):
shared_ptr<type> identifier;
This form is discussed in section 18.4.2.
The copy constructor initializes a shared_ptr so that both objects share the memory pointed at by the existing object. The copy constructor also increments the shared_ptr's reference count. Example:
shared_ptr<string> org(new string("hi there")); shared_ptr<string> copy(org); // reference count now 2
The move constructor initializes a shared_ptr with the pointer and reference count of a temporary shared_ptr. The temporary shared_ptr is changed into a 0-pointer. An existing shared_ptr may have its data moved to a newly defined shared_ptr (turning the existing shared_ptr into a 0-pointer as well). In the next example a temporary, anonymous shared_ptr object is constructed, which is then used to construct grabber. Since grabber's constructor receives an anonymous temporary object, the compiler uses shared_ptr's move constructor:
shared_ptr<string> grabber(shared_ptr<string>(new string("hi there")));
The final constructor initializes a shared_ptr object to the block of dynamically allocated memory that is passed to the object's constructor. Optionally a deleter can be provided: a (free) function (or function object) receiving the shared_ptr's pointer as its argument, which is supposed to return the dynamically allocated memory to the common pool (doing nothing if the pointer equals zero):
shared_ptr<type> identifier(new-expression [, deleter]);
This form is discussed in section 18.4.3.
Shared_ptr's default constructor defines a shared_ptr not pointing to a particular block of memory:
shared_ptr<type> identifier;
The pointer controlled by the shared_ptr object is initialized to 0 (zero). Although the shared_ptr object itself is not the pointer, its value can be compared to 0. Example:
shared_ptr<int> ip; if (!ip) cout << "0-pointer with a shared_ptr object\n";
Alternatively, the member get can be used (cf. section 18.4.4).
Usually a shared_ptr is initialized by a dynamically allocated block of memory. The generic form is:
shared_ptr<type> identifier(new-expression [, deleter]);
The second argument (deleter) is optional and refers to a function object or free function handling the destruction of the allocated memory. A deleter is used, e.g., in situations where a double pointer is allocated and the destruction must visit each nested pointer to destroy the allocated memory. It is used in situations comparable to those encountered with unique_ptr (cf. section 18.3.4).
Here is an example initializing a shared_ptr pointing to a string object:
shared_ptr<string> strPtr(new string("Hello world"));
The argument that is passed to the constructor is the pointer returned by operator new. Note that type does not mention the pointer. The type that is used in the shared_ptr construction is the same as the type that is used in new expressions.
The next example illustrates that two
shared_ptrs indeed share their
information. After modifying the information controlled by one of the
objects the information controlled by the other object is modified as well:
#include <iostream> #include <memory> #include <cstring> using namespace std; int main() { shared_ptr<string> sp(new string("Hello world")); shared_ptr<string> sp2(sp); sp->insert(strlen("Hello "), "C++ "); cout << *sp << '\n' << *sp2 << '\n'; } /* Displays: Hello C++ world Hello C++ world */
shared_ptr offers the following operators:
shared_ptr &operator=(shared_ptr<Type> const &other): copy assignment. The reference count of the operator's left hand side operand is reduced. If the reference count decays to zero the dynamically allocated memory controlled by the left hand side operand is deleted. Then it shares the information with the operator's right hand side operand, incrementing the information's reference count.
shared_ptr &operator=(shared_ptr<Type> &&tmp): move assignment. The reference count of the operator's left hand side operand is reduced. If the reference count decays to zero the dynamically allocated memory controlled by the left hand side operand is deleted. Then it grabs the information controlled by the operator's right hand side operand, which is turned into a 0-pointer.
operator bool() const: if the shared_ptr actually points to memory true is returned; otherwise false is returned.
Type &operator*(): a reference to the information stored in the shared_ptr object is returned. It acts like a normal pointer dereference operator.
Type *operator->(): a pointer to the information controlled by the shared_ptr object is returned. Example:
shared_ptr<string> sp(new string("hello")); cout << sp->c_str() << '\n';
The following member functions are supported:
Type *get(): a pointer to the information controlled by the shared_ptr object is returned. It acts like operator->. The returned pointer can be inspected: if it is zero the shared_ptr object does not point to any memory.
Deleter &get_deleter(): a reference to the shared_ptr's deleter (function or function object) is returned.
void reset(Type *): the reference count of the information controlled by the shared_ptr object is reduced and if it decays to zero the memory it points to is deleted. Thereafter the object's information refers to the argument that is passed to the function, setting its shared count to 1. It can also be called without argument, turning the object into a 0-pointer. This member function can be used to assign a new block of dynamically allocated memory to a shared_ptr object.
void shared_ptr<Type>::swap(shared_ptr<Type> &&): two identically typed shared_ptrs are swapped.
bool unique() const: if the current object is the only object referring to the memory controlled by the object true is returned; otherwise (including the situation where the object is a 0-pointer) false is returned.
size_t use_count() const: the number of objects sharing the memory controlled by the object is returned.
Specialized casts are available for shared_ptr objects. Consider the following two classes:
struct Base {}; struct Derived: public Base {};
Of course, a shared_ptr<Derived> can easily be defined. Since a Derived object is also a Base object, a pointer to Derived can be considered a pointer to Base without using casts, but a static_cast could be used to force the interpretation of a Derived * as a Base *:
Derived d; static_cast<Base *>(&d);
However, a plain static_cast cannot be used when initializing a shared pointer to a Base using the get member of a shared pointer to a Derived object. The following code snippet eventually results in an attempt to delete the dynamically allocated Base object twice:
shared_ptr<Derived> sd(new Derived); shared_ptr<Base> sb(static_cast<Base *>(sd.get()));
Since sd and sb point at the same object, ~Base is called for the same object both when sb goes out of scope and when sd goes out of scope, resulting in premature termination of the program due to a double free error.
These errors can be prevented using casts that were specifically designed for being used with shared_ptrs. These casts use specialized constructors that create a shared_ptr pointing to memory while sharing ownership (i.e., a reference count) with an existing shared_ptr. These special casts are:
std::static_pointer_cast<Base>(std::shared_ptr<Derived> ptr): a shared_ptr to a Base class object is returned. The returned shared_ptr refers to the base class portion of the Derived class object to which the shared_ptr<Derived> ptr refers. Example:
shared_ptr<Derived> dp(new Derived()); shared_ptr<Base> bp = static_pointer_cast<Base>(dp);
std::const_pointer_cast<Class>(std::shared_ptr<Class const> ptr): a shared_ptr to a Class class object is returned. The returned shared_ptr refers to a non-const Class object whereas the ptr argument refers to a Class const object. Example:
shared_ptr<Derived const> cp(new Derived()); shared_ptr<Derived> ncp = const_pointer_cast<Derived>(cp);
std::dynamic_pointer_cast<Derived>(std::shared_ptr<Base> ptr): a shared_ptr to a Derived class object is returned. The Base class must have at least one virtual member function, and the class Derived, inheriting from Base, may have overridden Base's virtual member(s). The returned shared_ptr refers to a Derived class object if the dynamic cast from Base * to Derived * succeeded. If the dynamic cast did not succeed the shared_ptr's get member returns 0. Example (assume Derived and Derived2 were derived from Base):
shared_ptr<Base> bp(new Derived()); cout << dynamic_pointer_cast<Derived>(bp).get() << ' ' << dynamic_pointer_cast<Derived2>(bp).get() << '\n';
The first get returns a non-0 pointer value, the second get returns 0.
Unlike the unique_ptr class no specialization exists for the shared_ptr class to handle dynamically allocated arrays of objects. But like unique_ptrs, with shared_ptrs referring to arrays the dereferencing operator makes little sense, while in these circumstances shared_ptr objects would benefit from index operators.
It is not difficult to create a class shared_array offering such facilities. The class template shared_array, derived from shared_ptr, merely has to provide an appropriate deleter to make sure that the array and its elements are properly destroyed. In addition it should define the index operator, and it could optionally remove the dereferencing operators using = delete.
Here is an example showing how
shared_array can be defined and used:
struct X { ~X() { cout << "destr\n"; // show the object's destruction } }; template <typename Type> class shared_array: public shared_ptr<Type> { struct Deleter // Deleter receives the pointer { // and calls delete[] void operator()(Type* ptr) { delete[] ptr; } }; public: shared_array(Type *p) // other constructors : // not shown here shared_ptr<Type>(p, Deleter()) {} Type &operator[](size_t idx) // index operators { return shared_ptr<Type>::get()[idx]; } Type const &operator[](size_t idx) const { return shared_ptr<Type>::get()[idx]; } Type &operator*() = delete; // delete pointless members Type const &operator*() const = delete; Type *operator->() = delete; Type const *operator->() const = delete; }; int main() { shared_array<X> sp(new X[3]); sp[0] = sp[1]; }
Usually a shared_ptr is initialized at definition time with a pointer to a newly allocated object. Here is an example:
std::shared_ptr<string> sptr(new std::string("hello world"));
In such statements two memory allocation calls are used: one for the allocation of the std::string and one used internally by std::shared_ptr's constructor itself.
The two allocations can be combined into one single allocation (which is
also slightly more efficient than explicitly calling
shared_ptr's
constructor) using the
make_shared template. The function template
std::make_shared has the following prototype:
template<typename Type, typename ...Args> std::shared_ptr<Type> std::make_shared(Args ...args);
Before using
make_shared the
<memory> header file must be included.
This function template allocates an object of type
Type, passing
args to its constructor (using perfect forwarding, see section
22.5.2), and returns a
shared_ptr initialized with the address of
the newly allocated
Type object.
Here is how the above
sptr object can be initialized
using
std::make_shared. Notice the use of
auto which frees us from
having to specify
sptr's type explicitly:
auto sptr(std::make_shared<std::string>("hello world"));
After this initialization std::shared_ptr<std::string> sptr has been defined and initialized. It could be used as follows:
std::cout << *sptr << '\n';
In addition to make_shared the function std::make_unique is available. It can be used like make_shared, but returns a std::unique_ptr rather than a shared_ptr.
Consider a class Filter, using pointers to streams:
class Filter { istream *d_in; ostream *d_out; public: Filter(char const *in, char const *out); };
Assume that Filter objects filter information read from *d_in and write the filtered information to *d_out. Using pointers to streams allows us to have them point at any kind of stream, like istreams, ifstreams, fstreams or istringstreams. The shown constructor could be implemented like this:
Filter::Filter(char const *in, char const *out) : d_in(new ifstream(in)), d_out(new ofstream(out)) { if (!*d_in || !*d_out) throw string("Input and/or output stream not available"); }
Of course, the construction could fail.
new could throw an exception; the stream constructors could throw exceptions; or the streams could not be opened, in which case an exception is thrown from the constructor's body. Using a function try block helps. Note that if d_in's initialization throws, there's nothing to worry about: the Filter object hasn't been constructed, its destructor is not called, and processing continues at the point where the thrown exception is caught. But Filter's destructor is also not called when d_out's initialization or the constructor's if statement throws: no object, and hence no destructor is called. This may result in memory leaks, as delete isn't called for d_in and/or d_out. To prevent this, d_in and d_out must first be initialized to 0 and only then can the initialization be performed:
Filter::Filter(char const *in, char const *out) try : d_in(0), d_out(0) { d_in = new ifstream(in); d_out = new ofstream(out); if (!*d_in || !*d_out) throw string("Input and/or output stream not available"); } catch (...) { delete d_out; delete d_in; }
This quickly gets complicated, though.
If Filter harbors yet another data member of a class whose constructor needs two streams, then that data member cannot be constructed, or it must itself be converted into a pointer:
Filter::Filter(char const *in, char const *out) try : d_in(0), d_out(0), d_filterImp(*d_in, *d_out) // won't work { ... } // instead: Filter::Filter(char const *in, char const *out) try : d_in(0), d_out(0), d_filterImp(0) { d_in = new ifstream(in); d_out = new ofstream(out); d_filterImp = new FilterImp(*d_in, *d_out); ... } catch (...) { delete d_filterImp; delete d_out; delete d_in; }
Although the latter alternative works, it quickly gets hairy. In situations like these smart pointers should be used to prevent the hairiness. By defining the stream pointers as (smart pointer) objects they will, once constructed, properly be destroyed even if the rest of the constructor's code throws exceptions. Using a FilterImp and two unique_ptr data members Filter's setup and its constructor become:
class Filter { std::unique_ptr<std::ifstream> d_in; std::unique_ptr<std::ofstream> d_out; FilterImp d_filterImp; ... }; Filter::Filter(char const *in, char const *out) try : d_in(new ifstream(in)), d_out(new ofstream(out)), d_filterImp(*d_in, *d_out) { if (!*d_in || !*d_out) throw string("Input and/or output stream not available"); }
We're back at the original implementation, but this time without having to worry about wild pointers and memory leaks. If one of the member initializers throws, the destructors of previously constructed data members (which are now objects) are always called.
As a rule of thumb: when classes need to define pointer data members they should define those pointer data members as smart pointers if there's any chance that their constructors throw exceptions.
sort (cf. section 19.1.59) and find_if (cf. section 19.1.17) generic algorithms. As a rule of thumb: when a called function must remember its state a function object is appropriate; otherwise a plain function can be used.
Frequently the function or function object is not readily available, and it must be defined in or near the location where it is used. This is commonly realized by defining a class or function in the anonymous namespace (say: class or function A), passing an A to the code needing A. If that code is itself a member function of the class B, then A's implementation might benefit from having access to the members of class B.
This scheme usually results in a significant amount of code (defining the class), or it results in complex code (to make available software elements that aren't automatically accessible to A's code). It may also result in code that is irrelevant at the current level of specification. Nested classes don't solve these problems either. Moreover, nested classes can't be used in templates.
Lambda expressions solve these problems. A lambda expression defines an anonymous function object which may immediately be passed to functions expecting function object arguments, as explained in the next few sections.
According to the C++ standard, lambda expressions provide a concise way to create simple function objects. The emphasis here is on simple: a lambda expression's size should be comparable to the size of inline-functions: just one or maybe two statements. If you need more code, then encapsulate that code in a separate function which is then called from inside the lambda expression's compound statement, or consider designing a separate function object.
Lambda expressions are used inside blocks, classes or namespaces (i.e., pretty much anywhere you like). Their implied closure type is defined in the smallest block, class or namespace scope containing the lambda expression. The closure object's visibility starts at its point of definition and ends where its closure type ends.
The closure type defines a (const) public inline function call operator. Here is an example of a lambda expression:
    []                  // the `lambda-introducer'
    (int x, int y)      // the `lambda-declarator'
    {                   // a normal compound-statement
        return x * y;
    }

The function call operator of the closure object created by this lambda expression expects two int arguments and returns their product. It is an inline const member of the closure type. To drop the const attribute, the lambda expression should specify mutable, as follows:

    [](int x, int y) mutable
        ...

The lambda-declarator may be omitted if no parameters are defined, but when specifying mutable (or constexpr, see below) the lambda-declarator must at least start with an empty set of parentheses. The parameters in a lambda declarator cannot be given default arguments.
Declarator specifiers can be
mutable, or (starting with C++17)
constexpr, or both. A
constexpr lambda-expression is itself a
constexpr, which may be compile-time evaluated if its arguments qualify as
const-expressions. Moreover, if a lambda-expression is defined inside a
constexpr function then the lambda-expression itself must qualify as a
constexpr, and explicitly specifying the
constexpr declarator
specifier is not required. The following function definitions, therefore, are
identical:
    // starting with C++17:
    int constexpr change10(int n)
    {
        return [n]
        {
            return n > 10 ? n - 10 : n + 10;
        }();
    }

    // starting with C++17:
    int constexpr change10(int n)
    {
        return [n] () constexpr
        {
            return n > 10 ? n - 10 : n + 10;
        }();
    }
A closure object as defined by the above lambda expression could be used, e.g.,
in combination with the
accumulate (cf. section 19.1.1) generic
algorithm to compute the product of a series of
int values stored in a
vector:
    cout << accumulate(vi.begin(), vi.end(), 1,
            [](int x, int y)
            {
                return x * y;
            });

The above lambda function uses the implicit return type
decltype(x * y). An implicit return type can be used in these cases:
there is no return statement (i.e., a void lambda expression);
there is a single return statement; or
there are multiple return statements returning values of identical types (e.g., all int values).
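The deduction cases above can be sketched in a small runnable example; the variable names (product, clamp01) are of our own choosing:

```cpp
#include <cassert>

// a single return statement: the return type is deduced
// as decltype(x * y), here int
auto const product = [](int x, int y)
                     {
                         return x * y;
                     };

// multiple return statements of identical types (all double):
// deduction also succeeds
auto const clamp01 = [](double v)
                     {
                         if (v < 0)
                             return 0.0;
                         if (v > 1)
                             return 1.0;
                         return v;
                     };
```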
If there are multiple return statements returning values of different types then the lambda expression's return type must be specified explicitly using a late-specified return type (cf. section 3.3.6):
    [](int x, int y) -> int
    {
        return y < 0 ? x / static_cast<double>(y) : z + x;
    }
Variables that are visible at the location of a lambda expression can be accessed by the lambda expression. How these variables are accessed depends on the contents of the lambda-introducer (the area between the square brackets, called the lambda-capture). The lambda-capture allows passing a local context to lambda expressions.
Visible global and static variables as well as local variables defined in the lambda expression's compound statement itself can directly be accessed and, when applicable, modified. Example:
    int global;

    void fun()
    {
        []()            // [] may contain any specification
        {
            int localVariable = 0;
            localVariable = ++global;
        };
    }
If a lambda expression is defined inside a (non-static) class member function, then using an initial & or = character in the lambda-capture enables the this pointer, allowing the lambda expression access to all class members (data and functions). In that case the lambda expression may modify the class's data members.
If a lambda expression is defined inside a function then the lambda expression may access all the function's local variables which are visible at the lambda expression's point of definition.
An initial
& character in the lambda-capture accesses these local
variables by reference. These variables can then be modified from within the
lambda expression.
An initial
= character in the lambda-capture creates a local copy of
the referred-to local variables. Note that in this case the values of these
local copies can only be changed by the lambda expression if the lambda
expression is defined using the
mutable keyword. E.g.,
    struct Class
    {
        void fun()
        {
            int var = 0;
            [=]() mutable
            {
                ++var;      // modifies the local copy, not fun's var
            };
        }
    };
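The following minimal, runnable variant (the function name is ours) confirms this behavior: the mutable lambda's by-value copy is incremented twice, while the function's own local variable keeps its value:

```cpp
#include <cassert>

int outerAfterMutableLambda()
{
    int var = 0;

    auto lambda = [=]() mutable     // var is captured by value
                  {
                      return ++var; // modifies the lambda's own copy
                  };

    lambda();                       // the copy becomes 1
    lambda();                       // the copy becomes 2

    return var;                     // the local var is still 0
}
```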
In the C++17 standard, when defining lambda-expressions in (non-static)
member functions, the lambda-capture may also contain
*this. Even when not specified, the
lambda expression implicitly captures the
this pointer, and class members
are always accessed relative to
this. But when members are called
asynchronously a problem may arise, because the asynchronously called
lambda function may refer to members of an object whose lifetime ended shortly
after asynchronously calling the lambda function. This potentially arising
problem is solved by using `
, *this' in the lambda-capture if it starts
with
=, e.g.,
[=, *this] (in addition, variables may still
also be captured, as usual). When specifying `
, *this' the object to which
this refers is explicitly captured: if the object's scope ends it is
not immediately destroyed, but is captured by the lambda-expression for the
duration of that expression. In order to use the `
, *this' specification,
the object must directly be available. Consider the following example:
    struct s2
    {
        double ohseven = .007;

        auto f()
        {
            return [this]
            {
                return [*this]
                {
                    return ohseven;     // OK
                };
            }();
        }

        auto g()
        {
            return []
            {
                return [*this]          // error: *this not captured by
                {                       // the outer lambda-expression
                };
            }();
        }
    };
Fine-tuning lambda-captures is also possible. With an initial
=,
comma-separated
&var specifications indicate that the mentioned local
variables should be processed by reference, rather than as copies; with an
initial
&, comma separated
var specifications indicate that local
copies should be used of the mentioned local variables. Again, these copies
have immutable values unless the lambda expression is provided with the
mutable keyword.
Another fine-tuning consists of using
this in the lambda-capture: it also
allows the lambda-expression to access the surrounding class members.
Example:
    class Data
    {
        std::vector<std::string> d_names;

        public:
            void show() const
            {
                int count = 0;
                std::for_each(d_names.begin(), d_names.end(),
                    [this, &count](std::string const &name)
                    {
                        std::cout << ++count << ' ' <<
                                     capitalized(name) << '\n';
                    }
                );
            }
        private:
            std::string capitalized(std::string name);
    };
Although lambda expressions are anonymous function objects, they can be
assigned to variables. Often, the variable is defined using the keyword
auto. E.g.,
    auto sqr = [](int x)
               {
                   return x * x;
               };

The lifetime of such lambda expressions is equal to the lifetime of the variable receiving the lambda expression as its value.
First we consider named lambda expressions. Named lambda expressions nicely fit in the niche of local functions: when a function needs to perform computations which are at a conceptually lower level than the function's task itself, then it's attractive to encapsulate these computations in a separate support function and call the support function where needed. Although support functions can be defined in anonymous namespaces, that quickly becomes awkward when the requiring function is a class member and the support function also must access the class's members.
In that case a named lambda expression can be used: it can be defined inside
a requiring function, and it may be given full access to the surrounding
class. The name to which the lambda expression is assigned becomes the name of
a function which can be called from the surrounding function. Here is an
example, converting a numeric IP address to a dotted decimal string, which can
also be accessed directly from a Dotted object (all implementations
in-class to conserve space):
    class Dotted
    {
        std::string d_dotted;

        public:
            std::string const &dotted() const
            {
                return d_dotted;
            }

            std::string const &dotted(size_t ip)
            {
                auto octet = [](size_t idx, size_t numeric)
                             {
                                 return to_string(numeric >> idx * 8 & 0xff);
                             };

                d_dotted = octet(3, ip) + '.' + octet(2, ip) + '.' +
                           octet(1, ip) + '.' + octet(0, ip);
                return d_dotted;
            }
    };
Next we consider the use of generic algorithms, like
the
for_each (cf. section 19.1.18):
    void showSum(vector<int> const &vi)
    {
        int total = 0;
        for_each(vi.begin(), vi.end(),
            [&](int x)
            {
                total += x;
            }
        );
        std::cout << total << '\n';
    }

Here the variable int total is passed to the lambda expression by reference and is directly accessed by the function. Its parameter list merely defines an int x, which is initialized in sequence by each of the values stored in vi. Once the generic algorithm has completed showSum's variable total has received a value that is equal to the sum of all the vector's values. It has outlived the lambda expression and its value is displayed.
But although generic algorithms are extremely useful, there may not always be
one that fits the task at hand. Furthermore, an algorithm like
for_each
looks a bit unwieldy, now that the language offers range-based for-loops. So
let's try this, instead of the above implementation:
    void showSum(vector<int> const &vi)
    {
        int total = 0;
        for (auto el: vi)
            [&](int x)
            {
                total += x;
            };
        std::cout << total << '\n';
    }

But when showSum is now called, its cout statement consistently reports 0. What's happening here?
When a generic algorithm is given a lambda function, its implementation instantiates a reference to a function. That referenced function is thereupon called from within the generic algorithm. But in the above example the range-based for-loop's nested statement merely represents the definition of a lambda function. Nothing is actually called, and hence total remains equal to 0.
Thus, to make the above example work we not only must define the lambda expression, but we must also call the lambda function. We can do this by giving the lambda function a name, and then calling the lambda function by its given name:
    void showSum(vector<int> const &vi)
    {
        int total = 0;
        for (auto el: vi)
        {
            auto lambda = [&](int x)
                          {
                              total += x;
                          };
            lambda(el);
        }
        std::cout << total << '\n';
    }
In fact, there is no need to give the lambda function a name: the
auto
lambda definition represents the lambda function, which could also
directly be called. The syntax for doing this may look a
bit weird, but there's nothing wrong with it, and it allows us to drop the
compound statement, required in the last example, completely. Here goes:
    void showSum(vector<int> const &vi)
    {
        int total = 0;
        for (auto el: vi)
            [&](int x)
            {
                total += x;
            }(el);          // immediately append the argument list
                            // to the lambda function's definition
        std::cout << total << '\n';
    }
Lambda expressions can also be used to prevent spurious returns from condition_variable's wait calls (cf. section 20.5.3).
The class
condition_variable allows us to do so by offering
wait
members expecting a lock and a predicate. The predicate checks the data's
state, and returns
true if the data's state allows the data's
processing. Here is an alternative implementation of the
down member shown
in section 20.5.3, checking for the data's actual availability:
    void down()
    {
        unique_lock<mutex> lock(sem_mutex);
        condition.wait(lock,
            [&]()
            {
                return semaphore != 0;
            }
        );
        --semaphore;
    }

The lambda expression ensures that wait only returns once semaphore has been incremented.
Lambda expressions are primarily used to obtain functors that are used in a
very localized section of a program. Since they are used inside an existing
function we should realize that once we use lambda functions multiple
aggregation levels are mixed. Normally a function implements a task which can
be described at its own aggregation level using just a few sentences. E.g.,
``the function
std::sort sorts a data structure by comparing its elements
in a way that is appropriate to the context where
sort is called''. By
using an existing comparison method the aggregation level is kept, and the
statement is clear by itself. E.g.,
    sort(data.begin(), data.end(), greater<DataType>());

If an existing comparison method is not available, a tailor-made function object must be created. This could be realized using a lambda expression. E.g.,

    sort(data.begin(), data.end(),
        [&](DataType const &lhs, DataType const &rhs)
        {
            return lhs.greater(rhs);
        }
    );

Looking at the latter example, we should realize that here two different aggregation levels are mixed: at the top level the intent is to sort the elements in data, but at the nested level (inside the lambda expression) something completely different happens. Inside the lambda expression we define how the decision is made about which of the two objects is the greater. Code exhibiting such mixed aggregation levels is hard to read, and should be avoided.
On the other hand: lambda expressions also simplify code because the overhead of defining a tailor-made functor is avoided. The advice, therefore, is to use lambda expressions sparingly. When they are used make sure that their sizes remain small. As a rule of thumb: lambda expressions should be treated like in-line functions, and should merely consist of one, or maybe occasionally two expressions.
auto to define their parameters. When used, an appropriate lambda expression is created by looking at the actual types of arguments. Since they are generic, they can be used inside one function with different types of arguments. Here is an example (assuming all required headers and namespace declaration):
     1: int main()
     2: {
     3:     auto lambda = [](auto lhs, auto rhs)
     4:                   {
     5:                       return lhs + rhs;
     6:                   };
     7:
     8:     vector<int> values {1, 2, 3, 4, 5};
     9:     vector<string> text {"a", "b", "c", "d", "e"};
    10:
    11:     cout <<
    12:         accumulate(values.begin(), values.end(), 0, lambda) << '\n' <<
    13:         accumulate(text.begin(), text.end(), string(), lambda) << '\n';
    14: }
The generic lambda function is defined in lines 3 through 6, and is
assigned to the
lambda identifier. Then,
lambda is passed to
accumulate in lines 12 and 13. In line 12 it is instantiated to add
int values, in line 13 to add
std::string values: the same
lambda
is instantiated to two completely different functors, which are only locally
available in
main.
As a prelude to our coverage of templates (in particular chapter 21), a generic lambda expression is equivalent to a class template. To illustrate: the above example of a generalized lambda function could also be implemented using a class template like this:
    struct Lambda
    {
        template <typename LHS, typename RHS>
        auto operator()(LHS const &lhs, RHS const &rhs) const
        {
            return lhs + rhs;
        }
    };

    auto lambda = Lambda{};

One of the consequences of this identity is that using auto in the lambda expression's parameter list obeys the rules of template argument deduction (cf. section 21.4), which are somewhat different from the way auto normally operates.
Another extension is how lambda expressions capture outer scope variables. In the C++11 standard capture was used either by value or by reference. A consequence of this is that an outer scope variable of a type that only supports move construction cannot be passed by value to a lambda function. This restriction was dropped, allowing variables to be initialized from arbitrary expressions. This not only allows move-initialization of variables in the lambda introducer, but variables may here also be initialized if they do not have a correspondingly named variable in the lambda expression's outer scope. In this case initializer expressions can be used as shown in this example:
    auto fun = [value = 1]
               {
                   return value;
               };

This lambda function (of course) returns 1: the declared capture deduces the type from the initializer expression as if auto had been used.
To use move-initialization
std::move should be used. E.g.,
    std::unique_ptr<int> ptr(new int(10));

    auto fun = [value = std::move(ptr)]
               {
                   return *value;
               };
In generic lambda expressions the keyword
auto indicates that the compiler
determines which types to use when the lambda function is instantiated. A
generic lambda expression therefore is a class template (cf. chapter
22), even though it doesn't look like one. As an example, the
following lambda expression defines a generic class template, which can be
used as shown:
    char const *target = "hello";

    auto lambda = [target](auto const &str)
                  {
                      return str == target;
                  };

    vector<string> vs{ stringVectorFactory() };
    find_if(vs.begin(), vs.end(), lambda);

This works fine, but if the programmer defines lambda this way then he/she should be prepared for complex error messages resulting if the types of the dereferenced iterators and lambda's (silently assumed) str type don't match.
regcomp and regexec), but the dedicated regular expression facilities have a richer interface than the traditional C facilities, and can be used in code using templates.
Before using the specific C++ implementations of regular expressions the
header file
<regex> must be included.
Regular expressions are extensively documented elsewhere (e.g., regex(7), Friedl, J.E.F Mastering Regular Expressions, O'Reilly). The reader is referred to these sources for a refresher on the topic of regular expressions. In essence, regular expressions define a small meta-language recognizing textual units (like `numbers', `identifiers', etc.). They are extensively used in the context of lexical scanners (cf. section 24.8.1) when defining the sequence of input characters associated with tokens. But they are also intensively used in other situations. Programs like sed(1) and grep(1) use regular expressions to find pieces of text in files having certain characteristics, and a program like perl(1) adds some `sugar' to the regular expression language, simplifying the construction of regular expressions. However, though extremely useful, it is also well known that regular expressions tend to be very hard to read. Some even call the regular expression language a write-only language: while specifying a regular expression it's often clear why it's written in a particular way. But the opposite, understanding what a regular expression is supposed to represent if you lack the proper context, can be extremely difficult. That's why, from the onset and as a rule of thumb, it is stressed that an appropriate comment should be provided, with each regular expression, as to what it is supposed to match.
In the upcoming sections first a short overview of the regular expression language is provided, which is then followed by the facilities C++ is currently offering for using regular expressions. These facilities mainly consist of classes helping you to specify regular expression, matching them to text, and determining which parts of the text (if any) match (parts of) the text being analyzed.
regex classes.
C++'s default definition of regular expressions distinguishes the following atoms:
x: the character `x';
.: any character except for the newline character;
[xyz]: a character class; in this case, either an `x', a `y', or a `z' matches the regular expression. See also the paragraph about character classes below;
[abj-oZ]: a character class containing a range of characters; this regular expression matches an `a', a `b', any letter from `j' through `o', or a `Z'. See also the paragraph about character classes below;
[^A-Z]: a negated character class: this regular expression matches any character but those in the class beyond
^. In this case, any character except for an uppercase letter. See also the paragraph about character classes below;
[:predef:]: a predefined set of characters. See below for an overview. When used, it is interpreted as an element in a character class. It is therefore always embedded in a set of square brackets defining the character class (e.g.,
[[:alnum:]]);
\X: if X is `a', `b', `f', `n', `r', `t', or `v', then the ANSI-C interpretation of `\x'. Otherwise, a literal `X' (used to escape operators such as
*);
(r): the regular expression
r. It is used to override precedence (see below), but also to define
ras a marked sub-expression whose matching characters may directly be retrieved from, e.g., an
std::smatchobject (cf. section 18.8.3);
(?:r): the regular expression
r. It is used to override precedence (see below), but it is not regarded as a marked sub-expression;
In addition to these basic atoms, the following special atoms are available (which can also be used in character classes):
\s: a whitespace character;
\S: any character but a whitespace character;
\d: a decimal digit character;
\D: any character but a decimal digit character;
\w: an alphanumeric character or an underscore (
_) character;
\W: any character but an alphanumeric character or an underscore (
_) character.
Atoms may be concatenated. If
r and
s are atoms then the regular
expression
rs matches a target text if the target text matches
r
and
s, in that order (without any intermediate characters
inside the target text). E.g., the regular expression
[ab][cd] matches the
target text
ac, but not the target text
a:c.
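These matching rules can be verified with the regex facilities described below (section 18.8); in this sketch the helper function fullMatch is our own:

```cpp
#include <cassert>
#include <regex>
#include <string>

// true if the complete target text matches the pattern
bool fullMatch(std::string const &target, std::string const &pattern)
{
    return std::regex_match(target, std::regex{ pattern });
}
```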
Atoms may be combined using operators. Operators bind to the preceding
atom. If an operator should operate on multiple atoms the atoms must be
surrounded by parentheses (see the last element in the previous itemization).
To use an operator character as an atom it can be escaped. E.g., * represents an operator,
\* the atom character star. Note that
character classes do not recognize escape sequences:
[\*] represents a
character class consisting of two characters: a backslash and a star.
The following operators are supported (
r and
s represent regular
expression atoms):
r*: zero or more
rs;
r+: one or more
rs;
r?: zero or one
rs (that is, an optional r);
r{m, n}: where
1 <= m <= n: matches `r' at least m, but at most n times;
r{m,}: where
1 <= m: matches `r' at least m times;
r{m}: where
1 <= m: matches `r' exactly m times;
r|s: matches either an `r' or an `s'. This operator has a lower priority than any of the multiplication operators;
^r:
^ is a pseudo operator. This expression matches `r', if appearing at the beginning of the target text. If the
^-character is not the first character of a regular expression it is interpreted as a literal
^-character;
r$:
$ is a pseudo operator. This expression matches `r', if appearing at the end of the target text. If the
$-character is not the last character of a regular expression it is interpreted as a literal
$-character;
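A small sketch exercising several of these operators, using std::regex's default (ECMAScript) grammar; the helper matches is ours:

```cpp
#include <cassert>
#include <regex>
#include <string>

// true if the complete target text matches the pattern
bool matches(std::string const &target, std::string const &pattern)
{
    return std::regex_match(target, std::regex{ pattern });
}
```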
When a regular expression contains marked sub-expressions and multipliers, and
the marked sub-expressions are multiply matched, then the target's final
sub-string matching the marked sub-expression is reported as the text matching
the marked sub-expression. E.g., when using
regex_search (cf. section
18.8.4.3), marked sub-expression (
((a|b)+\s?)), and target text
a a
b, then
a a b is the fully matched text, while
b is reported as the
sub-string matching the first and second marked sub-expressions.
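The example just given can be reproduced with regex_search and a match_results object (both covered below); searchParts is a helper of our own, returning the full match followed by the sub-strings matching the marked sub-expressions:

```cpp
#include <cassert>
#include <regex>
#include <string>
#include <vector>

std::vector<std::string> searchParts(std::string const &target,
                                     std::string const &pattern)
{
    std::vector<std::string> parts;
    std::smatch results;

    if (std::regex_search(target, results, std::regex{ pattern }))
        for (auto const &sub: results)  // [0]: full match, then the
            parts.push_back(sub.str()); //      marked sub-expressions

    return parts;
}
```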
\s, \S, \d, \D, \w, and
\W; the character range operator
-; the end of character class operator
]; and, at the beginning of the character class,
^. Except in combination with the special atoms the escape character is interpreted as a literal backslash character (to define a character class containing a backslash and a
dsimply use
[d\]).
To add a closing bracket to a character class use
[] immediately following
the initial open-bracket, or start with
[^] for a negated character class
not containing the closing bracket. Minus characters are used to define
character ranges (e.g.,
[a-d], defining
[abcd]) (be advised that the
actual range may depend on the locale being used). To add a literal minus
character to a character class put it at the very beginning (
[-, or
[^-) or at the very end (
-]) of a character class.
Once a character class has started, all subsequent characters are added to the
class's set of characters, until the final closing bracket (
]) has been
reached.
In addition to characters and ranges of characters, character classes may also contain predefined sets of character. They are:
    [:alnum:]  [:alpha:]  [:blank:]  [:cntrl:]  [:digit:]  [:graph:]
    [:lower:]  [:print:]  [:punct:]  [:space:]  [:upper:]  [:xdigit:]

These predefined sets designate sets of characters equivalent to the corresponding standard C isXXX function. For example, [:alnum:] defines all characters for which isalnum(3) returns true.
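A short sketch (the helper classMatch is ours) showing predefined sets used inside character classes:

```cpp
#include <cassert>
#include <regex>
#include <string>

// true if the complete target text matches the pattern
bool classMatch(std::string const &target, std::string const &pattern)
{
    return std::regex_match(target, std::regex{ pattern });
}
```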
(w)regex class presented in this section the <regex> header file must be included.
The types
std::regex and
std::wregex define regular
expression patterns. They are, respectively, the basic_regex<char> and basic_regex<wchar_t> types. Below, the class
regex is used, but in the examples
wregex
could also have been used.
Regular expression facilities were, to a large extent, implemented through
templates, using, e.g., the
basic_string<char> type (which is equal to
std::string). Likewise, generic types like OutputIter (output
iterator) and BidirConstIter (bidirectional const iterator) are used with
several functions. Such functions are function templates. Function templates
determine the actual types from the arguments that are provided at
call-time.
These are the steps that are commonly taken when using regular expressions:
regex object.
The way
regex objects handle regular expressions can be configured using a
bit_or combined set of
std::regex_constants values,
defining a
regex::flag_type value. These
regex_constants are:
std::regex_constants::awk:
awk(1)'s (POSIX) regular expression grammar is used to specify regular expressions (e.g., regular expressions are delimited by
/-characters, like
/\w+/; for further details and for details of other regular expression grammars the reader should consult the man-pages of the respective programs);
std::regex_constants::basic:
the basic POSIX regular expression grammar is used to specify regular expressions;
std::regex_constants::collate:
the character range operator (
-) used in character classes defines a locale sensitive range (e.g.,
[a-k]);
std::regex_constants::ECMAScript:
this
flag_typeis used by default by
regexconstructors. The regular expression uses the Modified ECMAScript regular expression grammar;
std::regex_constants::egrep:
egrep(1)'s (POSIX) regular expression grammar is used to specify regular expressions. This is the same grammar as used by
regex_constants::extended, with the addition of the newline character (
'\n') as an alternative for the
'|'-operator;
std::regex_constants::extended:
the extended POSIX regular expression grammar is used to specify regular expressions;
std::regex_constants::grep:
grep(1)'s (POSIX) regular expression grammar is used to specify regular expressions. This is the same grammar as used by
regex_constants::basic, with the addition of the newline character (
'\n') as an alternative for the
'|'-operator;
std::regex_constants::icase:
letter casing in the target string is ignored. E.g., the regular expression
Amatches
aand
A;
std::regex_constants::nosubs:
When performing matches, all sub-expressions (
(expr)) are treated as non-marked (
?:expr);
std::regex_constants::optimize:
optimizes the speed of matching regular expressions, at the cost of slowing down the construction of the regular expression somewhat. If the same regular expression object is frequently used then this flag may substantially improve the speed of matching target texts;
Constructors
The default, move and copy constructors are available. Actually, the
default constructor defines one parameter of type
regex::flag_type, for
which the value
regex_constants::ECMAScript is used by default.
regex():
the default constructor defines a
regexobject not containing a regular expression;
explicit regex(char const *pattern):
defines a
regexobject containing the regular expression found at
pattern;
regex(char const *pattern, std::size_t count):
defines a
regexobject containing the regular expression found at the first
countcharacters of
pattern;
explicit regex(std::string const &pattern):
defines a
regexobject containing the regular expression found at
pattern. This constructor is defined as a member template, accepting a
basic_string-type argument which may also use non-standard character traits and allocators;
regex(ForwardIterator first, ForwardIterator last):
defines a
regexobject containing the regular expression found at the (forward) iterator range
[first, last). This constructor is defined as a member template, accepting any forward iterator type (e.g., plain
charpointers) which can be used to define the regular expression's pattern;
regex(std::initializer_list<Char> init):
defines a
regexobject containing the regular expression from the characters in the initializer list
init.
Here are some examples:
    std::regex re("\\w+");          // matches a sequence of alpha-numeric
                                    // and/or underscore characters
    std::regex re{'\\', 'w', '+'};  // idem
    std::regex re(R"(\w+xxx)", 3);  // idem
Member functions
regex &operator=(RHS):
The copy and move assignment operators are available. Otherwise, RHS may be:
- an NTBS (of type
char const *);
- a
std::string const &(or any compatible
std::basic_string);
- an
std::initializer_list<char>;
regex &assign(RHS):
This member accepts the same arguments as
regex's constructors, including the (optional)
regex_constantsvalues;
regex::flag_type flags() const:
Returns the regex_constants flags that are active for the current regex object. E.g.,

    int main()
    {
        regex re;
        regex::flag_type flags = re.flags();

        cout <<                     // displays: 16 0 0
            (re.flags() & regex_constants::ECMAScript) << ' ' <<
            (re.flags() & regex_constants::icase) << ' ' <<
            (re.flags() & regex_constants::awk) << '\n';
    }

Note that when a combination of flag_type values is specified at construction-time only those flags that were specified are set. E.g., when re(regex_constants::icase) would have been specified the above cout statement would have shown 0 1 0. It's also possible to specify conflicting combinations of flag-values like regex_constants::awk | regex_constants::grep. The construction of such regex objects succeeds, but should be avoided.
locale_type getloc() const:
Returns the locale that is associated with the current
regex object;
locale_type imbue(locale_type locale):
Replaces the
regexobject's current locale setting with
locale, returning the replaced locale;
unsigned mark_count() const:
The number of marked sub-expressions in the regex object is returned. E.g.,

    int main()
    {
        regex re("(\\w+)([[:alpha:]]+)");
        cout << re.mark_count() << '\n';    // displays: 2
    }
void swap(regex &other) noexcept:
Swaps the current
regexobject with
other. Also available as a free function:
void swap(regex &lhs, regex &rhs), swapping
lhsand
rhs.
Once a regex object is available, it can be used to match some target text against the regular expression. To match a target text against a regular expression the following functions, described in the next section (18.8.4), are available:
regex_matchmerely matches a target text against a regular expression, informing the caller whether a match was found or not;
regex_searchalso matches a target text against a regular expression, but allows retrieval of matches of marked sub-expressions (i.e., parenthesized regular expressions);
regex_replacematches a target text against a regular expression, and replaces pieces of matched sections of the target text by another text.
These functions must be provided with a target text and a (const reference
to) a
regex object. Usually another argument, a
std::match_results object is also passed to these
functions, to contain the results of the regular expression matching
procedure.
Before using the
match_results class the
<regex> header file must be
included.
Examples of using
match_results objects are provided in section
18.8.4. This and the next section are primarily for referential
purposes.
Various specializations of the class
match_results exist. The
specialization that is used should match the specializations of the used
regex class. E.g., if the regular expression was specified as a
char
const * the
match_results specialization should also operate on
char
const * values. The various specializations of
match_results have been
given names that can easily be remembered, so selecting the appropriate
specialization is simple.
The class
match_results has the following specializations:
cmatch:
defines
match_results<char const *>, using a
char const *type of iterator. It should be used with a
regex(char const *)regular expression specification;
wcmatch:
defines
match_results<wchar_t const *>, using a
wchar_t const *type of iterator. It should be used with a
regex(wchar_t const *)regular expression specification;
smatch:
defines
match_results<std::string::const_iterator>, using a
std::string::const_iteratortype of iterator. It should be used with a
regex(std::string const &)regular expression specification;
wsmatch:
defines
match_results<std::wstring::const_iterator>, using a
std::wstring::const_iteratortype of iterator. It should be used with a
regex(wstring const &)regular expression specification.
Constructors
The default, copy, and move constructors are available. The default
constructor defines an
Allocator const & parameter, which by default is
initialized to the default allocator. Normally, objects of the class
match_results receive their match-related information by passing them to
the above-mentioned functions, like
regex_match. When returning from these
functions members of the class
match_results can be used to retrieve
specific results of the matching process.
Member functions
match_results &operator=:
The copy and move assignment operators are available;
std::string const &operator[](size_t idx) const:
Returns a (const) reference to sub-match
idx. With
idxvalue 0 a reference to the full match is returned. If
idx >= size()(see below) a reference to an empty sub-range of the target string is returned. The behavior of this member is undefined if the member
ready()(see below) returns
false;
Iterator begin() const:
Returns an iterator to the first sub-match.
Iteratoris a const-iterator for
const match_resultsobjects;
Iterator cbegin() const:
Returns an iterator to the first sub-match.
Iteratoris a const-iterator;
Iterator cend() const:
Returns an iterator pointing beyond the last sub-match.
Iteratoris a const-iterator;
Iterator end() const:
Returns an iterator pointing beyond the last sub-match.
Iteratoris a const-iterator for
const match_resultsobjects;
ReturnType format(Parameters) const:
As this member requires a fairly extensive description, it would break the flow of the current overview. This member is used in combination with the
regex_replacefunction, and it is therefore covered in detail in that function's section (18.8.4.5);
allocator_type get_allocator() const:
Returns the object's allocator;
bool empty() const:
Returns
trueif the
match_resultsobject contains no matches (which is also returned after merely using the default constructor). Otherwise it returns
false;
int length(size_t idx = 0) const:
Returns the length of sub-match
idx. By default the length of the full match is returned. If
idx >= size()(see below) 0 is returned;
size_type max_size() const:
Returns the maximum number of sub-matches that can be contained in a
match_resultsobject. This is an implementation dependent constant value;
int position(size_t idx = 0) const:
Returns the offset in the target text of the first character of sub-match
idx. By default the position of the first character of the full match is returned. If
idx >= size()(see below) -1 is returned;
std::string const &prefix() const:
Returns a (const) reference to a sub-string of the target text that ends at the first character of the full match;
bool ready() const:
No match results are available from a default constructed
match_resultsobject. It receives its match results from one of the mentioned matching functions. Returns
trueonce match results are available, and
falseotherwise.
size_type size() const:
Returns the number of sub-matches. E.g., with a regular expression (abc)|(def) and target defcon three sub-matches are reported: the total match (def); the empty text for (abc); and def for the (def) marked sub-expression.
Note: when multipliers are used only the last match is counted and reported. E.g., for the pattern (a|b)+ and target aaab two sub-matches are reported: the total match aaab, and the last match (b);
std::string str(size_t idx = 0) const:
Returns the characters defining sub-match
idx. By default this is the full match. If idx >= size() (see below) an empty string is returned;
std::string const &suffix() const:
Returns a (const) reference to a sub-string of the target text that starts beyond the last character of the full match;
void swap(match_results &other) noexcept:
Swaps the current
match_resultsobject with
other. Also available as a free function:
void swap(match_results &lhs, match_results &rhs), swapping
lhsand
rhs.
Before the regular expression matching functions can be used the <regex> header file must be included.
There are three major families of functions that can be used to match a target
text against a regular expression. Each of these functions, as well as the
match_results::format member, has a final
std::regex_constants::match_flag_type parameter (see the next section),
which is given the default value
regex_constants::match_default which can
be used to fine-tune the way the regular expression and the matching process
is being used. This
final parameter is not explicitly mentioned with the regular expression
matching functions or with the
format member. The three families of
functions are:
bool std::regex_match(Parameters):
This family of functions is used to match a regular expression against a target text. Only if the regular expression matches the full target text
trueis returned; otherwise
falseis returned. Refer to section 18.8.4.2 for an overview of the available overloaded
regex_matchfunctions;
bool std::regex_search(Parameters):
This family of functions is also used to match a regular expression against a target text. This function returns true once the regular expression matches a sub-string of the target text; otherwise
falseis returned. See below for an overview of the available overloaded
regex_searchfunctions;
ReturnType std::regex_replace(Parameters):
This family of functions is used to produce modified texts, using the characters of a target string, a
regexobject and a format string. This member closely resembles the functionality of the
match_results::formatmember discussed in section 18.8.4.4.
The match_results::format member can be used after regex_replace and is discussed after covering regex_replace (section 18.8.4.4).
The format members and all regular expression matching functions accept a final regex_constants::match_flag_type argument, which is a bit-masked type, for which the bit_or operator can be used. All format members by default specify the argument match_default.
The
match_flag_type enumeration defines the following values (below, `[first, last)' refers to the character sequence being matched).
format_default(not a bit-mask value, but a default value which is equal to 0). With just this specification ECMAScript rules are used to construct strings in
std::regex_replace;
format_first_only:
std::regex_replaceonly replaces the first match;
format_no_copy: non-matching strings are not passed to the output by
std::regex_replace;
format_sed: POSIX sed(1) rules are used to construct strings in
std::regex_replace;
match_any: if multiple matches are possible, then any match is an acceptable result;
match_continuous: sub-sequences are only matching if they start at
first;
match_not_bol: the first character in
[first, last)is treated as an ordinary character:
^does not match
[first, first);
match_not_bow:
\bdoes not match
[first, first);
match_default(not a bit-mask value, but equal to 0): the default value of the final argument that's passed to the regular expression matching functions and
match_results::formatmember. ECMAScript rules are used to construct strings in
std::regex_replace;
match_not_eol: the last character in
[first, last)is treated as an ordinary character:
$does not match
[last,last);
match_not_eow:
\bdoes not match
[last, last);
match_not_null: empty sequences are not considered matches;
match_prev_avail:
--firstrefers to a valid character position. When specified
match_not_boland
match_not_boware ignored;
The function std::regex_match returns true if the regular expression defined in its provided regex argument fully matches the provided target text. This means that match_results::prefix and match_results::suffix must return empty strings. But defining sub-expressions is OK.
The following overloaded variants of this function are available:
bool regex_match(BidirConstIter first, BidirConstIter last, std::match_results &results, std::regex const &re):
this function returns true if the regular expression re fully matches the target text at the iterator range [first, last), returning the match results in results (BidirConstIter is a bidirectional const iterator);
bool regex_match(BidirConstIter first, BidirConstIter last, std::regex const &re):
this function behaves like the previous function, but does not return the results of the matching process in a match_results object;
bool regex_match(char const *target, std::match_results &results, std::regex const &re):
this function behaves like the first overloaded variant, using the characters in
targetas its target text;
bool regex_match(char const *str, std::regex const &re):
this function behaves like the previous function but does not return the match results;
bool regex_match(std::string const &target, std::match_results &results, std::regex const &re):
this function behaves like the first overloaded variant, using the characters in
targetas its target text;
bool regex_match(std::string const &str, std::regex const &re):
this function behaves like the previous function but does not return the match results;
bool regex_match(std::string const &&, std::match_results &, std::regex &) = delete:
the regex_match function does not accept temporary string objects as target strings, as this would result in invalid string iterators in the match_results argument.
The following example shows that regex_match reports a match for the program's first command-line argument (argv[1]) if it starts with 5 digits and then merely contains letters ([[:alpha:]]). The digits can be retrieved as sub-expression 1:

    #include <iostream>
    #include <regex>
    using namespace std;

    int main(int argc, char const **argv)
    {
        regex re("(\\d{5})[[:alpha:]]+");
        cmatch results;

        if (not regex_match(argv[1], results, re))
            cout << "No match\n";
        else
            cout << "size: " << results.size() << ": " <<
                    results.str(1) << " -- " << results.str() << '\n';
    }
Different from regex_match, the regular expression matching function std::regex_search returns true if the regular expression defined in its regex argument partially matches the target text.
The following overloaded variants of this function are available:
bool regex_search(BidirConstIter first, BidirConstIter last, std::match_results &results, std::regex const &re):
this function returns true if the regular expression re matches a sub-sequence of the target text at the iterator range [first, last), returning the match results in results (BidirConstIter is a bidirectional const iterator);
bool regex_search(BidirConstIter first, BidirConstIter last, std::regex const &re):
this function behaves like the previous function, but does not return the results of the matching process in a match_results object;
bool regex_search(char const *target, std::match_results &results, std::regex const &re):
this function behaves like the first overloaded variant, using the characters in
targetas its target text;
bool regex_search(char const *str, std::regex const &re):
this function behaves like the previous function but does not return the match results;
bool regex_search(std::string const &target, std::match_results &results, std::regex const &re):
this function behaves like the first overloaded variant, using the characters in
targetas its target text;
bool regex_search(std::string const &str, std::regex const &re):
this function behaves like the previous function but does not return the match results;
bool regex_search(std::string const &&, std::match_results &, std::regex &) = delete:
the
regex_searchfunction does not accept temporary
stringobjects as target strings, as this would result in invalid string iterators in the
match_results argument.
Here is an example showing how regex_search could be used:

    #include <iostream>
    #include <string>
    #include <regex>

    using namespace std;

    int main()
    {
        while (true)
        {
            cout << "Enter a pattern or plain Enter to stop: ";

            string pattern;
            if (not getline(cin, pattern) or pattern.empty())
                break;

            regex re(pattern);
            while (true)
            {
                cout << "Enter a target text for `" << pattern << "'\n"
                        "(plain Enter for the next pattern): ";

                string text;
                if (not getline(cin, text) or text.empty())
                    break;

                smatch results;
                if (not regex_search(text, results, re))
                    cout << "No match\n";
                else
                {
                    cout << "Prefix: " << results.prefix() << "\n"
                            "Match: " << results.str() << "\n"
                            "Suffix: " << results.suffix() << "\n";
                    for (size_t idx = 1; idx != results.size(); ++idx)
                        cout << "Match " << idx << " at offset " <<
                                results.position(idx) << ": " <<
                                results.str(idx) << '\n';
                }
            }
        }
    }
match_results::format
The format member is a rather complex member function of the class match_results, which can be used to modify text which was previously matched against a regular expression, e.g., using the function regex_search. Because of its complexity and because another regular expression processing function (regex_replace) offers similar functionality it is discussed at this point in the C++ Annotations, just before discussing the regex_replace function.
The
format member operates on (sub-)matches contained in a
match_results object, using a format string, and producing text in
which format specifiers (like
$&) are replaced by
matching sections of the originally provided target text. In addition, the
format member recognizes all standard C escape sequences (like
\n). The
format member is used to create text that is modified with
respect to the original target text.
As a preliminary illustration: if
results is a
match_results object
and
match[0] (the fully matched text) equals `
hello world', then
calling
format with the format string
this is [$&] produces the text
this is [hello world]. Note the specification
$& in this format
string: this is an example of a format specifier. Here is an overview of all
supported format specifiers:
$`: corresponds to the text returned by the
prefixmember: all characters in the original target text up to the first character of the fully matched text;
$&: corresponds to the fully matched text (i.e., the text returned by the
match_results::strmember);
$n: (where
n is an integral natural number): corresponds to the text returned by
operator[](n);
$': corresponds to the text returned by the
suffixmember: all characters in the original target string beyond the last character of the fully matched text;
$$: corresponds to the single
$character.
Four overloaded versions of the
format members are available. All
overloaded versions define a final
regex_constants::match_flag_type
parameter, which is by default initialized to
match_default. This final
parameter is not explicitly mentioned in the following coverage of the
format members.
To further illustrate the way the
format members can be used it is assumed
that the following code has been executed:
    1: regex re("([[:alpha:]]+)\\s+(\\d+)");    // letters blanks digits
    2:
    3: smatch results;
    4: string target("this value 1024 is interesting");
    5:
    6: if (not regex_search(target, results, re))
    7:     return 1;
After calling
regex_search (line 6) the results of the regular
expression matching process are available in the
match_results results
object that is defined in line 3.
The first two overloaded
format functions expect an output-iterator to
where the formatted text is written. These overloaded members return the
final output iterator, pointing just beyond the character that was last
written.
OutputIter format(OutputIter out, char const *first, char const *last) const:
the characters in the range
[first, last)are applied to the sub-expressions stored in the
match_resultsobject, and the resulting string is inserted at
out. An illustration is provided with the next overloaded version;
OutputIter format(OutputIter out, std::string const &fmt) const:
the contents of
fmtare applied to the sub-expressions stored in the
match_resultsobject, and the resulting string is inserted at
out. The next line of code inserts the value 1024 into cout (note that fmt must be a std::string, hence the explicit use of the string constructor):

    results.format(ostream_iterator<char>(cout, ""), string("$2"));
The remaining two overloaded
format members expect an
std::string or
an NTBS defining the format string. Both members return a
std::string
containing the formatted text:
std::string format(std::string const &fmt) const
std::string format(char const *fmt) const
E.g., this way a string can be obtained in which the order of the first and second marked sub-expressions contained in the previously obtained match_results object has been swapped:

    string reverse(results.format("$2 and $1"));
std::regex_replacefunctions
The std::regex_replace family of functions uses a regular expression to perform substitution on a sequence
of characters. Their functionality closely resembles the functionality of the
match_results::format member discussed in the previous section. The
following overloaded variants are available:
OutputIt regex_replace(OutputIter out, BidirConstIter first, BidirConstIter last, std::regex const &re, std::string const &fmt):
OutputIter is an output iterator; BidirConstIter is a bidirectional const iterator.
The function returns the possibly modified text in an iterator range
[out, retvalue), where
outis the output iterator passed as the first argument to
regex_replace, and
retvalueis the output iterator returned by
regex_replace.
The function matches the text at the range
[first, last)against the regular expression stored in
re. If the regular expression does not match the target text in the range
[first, last)then the target text is literally copied to
out. If the regular expression does match the target text then
- first, the match result's prefix is copied to
out. The prefix equals the initial characters of the target text up to the very first character of the fully matched text.
- next, the matched text is replaced by the contents of the
fmtformat string, in which the format specifiers can be used that were described in the previous section (section 18.8.4.4), and the replaced text is copied to
out;
- finally, the match result's suffix is copied to
out. The suffix equals all characters of the target text beyond the last character of the matched text.
The workings of regex_replace are illustrated in the next example:

    1: regex re("([[:alpha:]]+)\\s+(\\d+)");    // letters blanks digits
    2:
    3: string target("this value 1024 is interesting");
    4:
    5: regex_replace(ostream_iterator<char>(cout, ""), target.begin(),
    6:               target.end(), re, string("$2"));
In line 5 regex_replace is called. Its format string merely contains $2, matching 1024 in the target text. The prefix ends at the word value, the suffix starts beyond 1024, so the statement in line 5 inserts the text

    this 1024 is interesting

into the standard output stream.
OutputIt regex_replace( OutputIter out, BidirConstIter first, BidirConstIter last, std::regex const &re, char const *fmt):
This variant behaves like the first variant. When using, in the above example,
"$2"instead of
string("$2"), then this variant would have been used;
std::string regex_replace(std::string const &str, std::regex const &re, std::string const &fmt):
This variant returns a
std::stringcontaining the modified text, and expects a
std::stringcontaining the target text. Other than that, it behaves like the first variant. To use this overloaded variant in the above example the statement in line 5 could have been replaced by the following statement, initializing the
string result:

    string result(regex_replace(target, re, string("$2")));
std::string regex_replace(std::string const &str, std::regex const &re, char const *fmt):
After changing, in the above statement,
string("$2")into
"$2", this variant is used, behaving exactly like the previous variant;
std::string regex_replace(char const *str, std::regex const &re, std::string const &fmt):
This variant uses a
char const *to point to the target text, and behaves exactly like the previous but one variant;
std::string regex_replace(char const *str, std::regex const &re, char const *fmt):
This variant also uses a
char const *to point to the target text, and also behaves exactly like the previous but one variant;
Before using the random number generators and statistical distributions described in this section the <random> header file must be included.
The STL offers several standard mathematical (statistical) distributions. These distributions allow programmers to obtain randomly selected values from a selected distribution.
These statistical distributions need to be provided with a random number
generating object. Several of such random number generating objects are
provided, extending the traditional
rand function that is part of the
C standard library.
These random number generating objects produce pseudo-random numbers, which are then processed by the statistical distribution to obtain values that are randomly selected from the specified distribution.
Although the STL offers various statistical distributions their functionality
is fairly limited. The distributions allow us to obtain a random number from
these distributions, but
probability density functions
or
cumulative distribution functions
are currently not provided by the STL. These functions (distributions as well
as the density and the cumulative distribution functions) are, however,
available in other libraries, like the
boost math library (specifically:).
It is beyond the scope of the C++ Annotations to discuss the mathematical characteristics of the various statistical distributions. The interested reader is referred to the pertinent mathematical textbooks (like Stuart and Ord's (2009) Kendall's Advanced Theory of Statistics, Wiley) or to web-locations like.
The
linear_congruential_engine random number generator computes

    value[i + 1] = (a * value[i] + c) % m

It requires the specification of the multiplicative constant a; the additive constant c; and the modulo value m. Example:

    linear_congruential_engine<unsigned, 10, 3, 13> lincon;

(the standard requires an unsigned result type). The linear_congruential_engine generator may be seeded by providing its constructor with a seeding-argument. E.g., lincon(time(0)).
The
subtract_with_carry_engine random number generator computes

    value[i] = (value[i - s] - value[i - r] - carry[i - 1]) % m

It requires the specification of the modulo value m; and the subtractive constants s and r. Example:

    subtract_with_carry_engine<unsigned, 13, 3, 13> subcar;

(the standard requires an unsigned result type). The subtract_with_carry_engine generator may be seeded by providing its constructor with a seeding-argument. E.g., subcar(time(0)).
The predefined
mersenne_twister_engine mt19937 (predefined using a
typedef defined by the
<random> header file) is used in the examples
below. It can be constructed using
`
mt19937 mt' or it can be seeded by providing its
constructor with an argument (e.g.,
mt19937 mt(time(0))).
Other ways to initialize the
mersenne_twister_engine are beyond the
scope of the C++ Annotations (but see Lewis et
al. (
Lewis, P.A.W., Goodman, A.S., and Miller, J.M. (1969), A pseudorandom
number generator for the System/360, IBM Systems Journal, 8, 136-146.) (1969)).
The random number generators may also be seeded by calling their members
seed accepting
unsigned long values or generator functions (as in
lc.seed(time(0)), lc.seed(mt)).
The random number generators offer members
min and
max
returning, respectively, their minimum and maximum values (inclusive). If a
reduced range is required the generators can be nested in a function or class
adapting the range.
In the following sections RNG is used to indicate a Random Number Generator and URNG is used to indicate a Uniform Random Number Generator. With each distribution a struct param_type is defined containing the distribution's parameters. The organization of these param_type structs depends on (and is described at) the actual distribution.
All distributions offer the following members (result_type refers to the type name of the values returned by the distribution):
result_type max() const:
returns the distribution's maximum value;
result_type min() const:
returns the distribution's minimum value;
param_type param() const:
returns the distribution's parameters, stored in a param_type struct;
void param(const param_type &param):
redefines the parameters of the distribution;
void reset():
clears all of its cached values;
All distributions support the following operators (distribution-name
should be replaced by the name of the intended distribution, e.g.,
normal_distribution):
template<typename URNG> result_type operator()(URNG &urng):
calls the function object urng, returning the next random number selected from a uniform random distribution;
template<typename URNG> result_type operator()(URNG &urng, param_type &param):
behaves like the previous operator, using the distribution's parameters as stored in the param struct. The function object urng returns the next random number selected from a uniform random distribution;
std::istream &operator>>(std::istream &in, distribution-name &object):
the parameters of the distribution are extracted from an std::istream;
std::ostream &operator<<(std::ostream &out, distribution-name const &bd):
the parameters of the distribution are inserted into an std::ostream.
The following example shows how the distributions can be used. Replacing
the name of the distribution (
normal_distribution) by another
distribution's name is all that is required to switch distributions. All
distributions have parameters, like the mean and standard deviation of the
normal distribution, and all parameters have default values. The names of the
parameters vary over distributions and are mentioned below at the individual
distributions. Distributions offer members returning or setting their
parameters.
Most distributions are defined as class templates, requiring the specification
of a data type that is used for the function's return type. If so, an empty
template parameter type specification (
<>) will get you the default
type. The default types are either
double (for real valued return types)
or
int (for integral valued return types). The template parameter type
specification must be omitted with distributions that are not defined as
template classes.
Here is an example showing the use of the statistical distributions, applied to the normal distribution:
    #include <iostream>
    #include <ctime>
    #include <random>
    using namespace std;

    int main()
    {
        std::mt19937 engine(time(0));
        std::normal_distribution<> dist;

        for (size_t idx = 0; idx < 10; ++idx)
            std::cout << "a random value: " << dist(engine) << "\n";

        cout << '\n' << dist.min() << " " << dist.max() << '\n';
    }
The bernoulli_distribution is used to generate logical truth (boolean) values with a certain probability p. It is equal to a binomial distribution for one experiment (cf 18.9.2.2).
The bernoulli distribution is not defined as a class template.
Defined types:
    typedef bool result_type;

    struct param_type
    {
        explicit param_type(double prob = 0.5);
        double p() const;       // returns prob
    };
Constructor and members:
bernoulli_distribution(double prob = 0.5):
constructs a bernoulli distribution with probability prob of returning true;
double p() const:
returns prob;
result_type min() const:
returns false;
result_type max() const:
returns true;
The binomial_distribution<IntType = int> is used to determine the probability of the number of successes in a sequence of n independent success/failure experiments, each of which yields success with probability p.
The template type parameter
IntType defines the type of the generated
random value, which must be an integral type.
Defined types:
    typedef IntType result_type;

    struct param_type
    {
        explicit param_type(IntType trials, double prob = 0.5);
        IntType t() const;      // returns trials
        double p() const;       // returns prob
    };
Constructors and members and example:
binomial_distribution<>(IntType trials = 1, double prob = 0.5):
constructs a binomial distribution for trials experiments, each having probability prob of success;
binomial_distribution<>(param_type const &param):
constructs a binomial distribution according to the values stored in the param struct;
IntType t() const:
returns trials;
double p() const:
returns prob;
result_type min() const:
returns 0;
result_type max() const:
returns trials;
The cauchy_distribution<RealType = double> looks similar to a normal distribution. But cauchy distributions have heavier tails. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of how sensitive the tests are to heavy-tail departures from normality.
The mean and standard deviation of the Cauchy distribution are undefined.
Defined types:
    typedef RealType result_type;

    struct param_type
    {
        explicit param_type(RealType a = RealType(0),
                            RealType b = RealType(1));
        double a() const;
        double b() const;
    };
Constructors and members:
cauchy_distribution<>(RealType a = RealType(0), RealType b = RealType(1)):
constructs a cauchy distribution with specified a and b parameters;
cauchy_distribution<>(param_type const &param):
constructs a cauchy distribution according to the values stored in the param struct;
RealType a() const:
returns the distribution's a parameter;
RealType b() const:
returns the distribution's b parameter;
result_type min() const:
returns the lowest result_type value;
result_type max() const:
returns the maximum value of result_type;
The chi_squared_distribution<RealType = double> with n degrees of freedom is the distribution of a sum of the squares of n independent standard normal random variables.
Note that even though the distribution's parameter
n usually is an
integral value, it doesn't have to be integral, as the chi_squared
distribution is defined in terms of functions (
exp and
Gamma) that
take real arguments (see, e.g., the formula shown in the
<bits/random.h>
header file, provided with the Gnu
g++ compiler distribution).
The chi-squared distribution is used, e.g., when testing the goodness of fit of an observed distribution to a theoretical one.
Defined types:
    typedef RealType result_type;

    struct param_type
    {
        explicit param_type(RealType n = RealType(1));
        RealType n() const;
    };
Constructors and members:
chi_squared_distribution<>(RealType n = 1):
constructs a chi_squared distribution with the specified number of degrees of freedom;
chi_squared_distribution<>(param_type const &param):
constructs a chi_squared distribution according to the value stored in the param struct;
RealType n() const:
returns the number of degrees of freedom;
result_type min() const:
returns 0;
result_type max() const:
returns the maximum value of result_type;
The extreme_value_distribution<RealType = double> is related to the Weibull distribution and is used in statistical models where the variable of interest is the minimum of many random factors, all of which can take positive or negative values.

It has two parameters: a location parameter a and a scale parameter b.
See also
Defined types:
    typedef RealType result_type;

    struct param_type
    {
        explicit param_type(RealType a = RealType(0),
                            RealType b = RealType(1));
        RealType a() const;     // the location parameter
        RealType b() const;     // the scale parameter
    };
Constructors and members:
extreme_value_distribution<>(RealType a = 0, RealType b = 1) constructs an extreme value distribution with specified a and b parameters;
extreme_value_distribution<>(param_type const &param) constructs an extreme value distribution according to the values stored in the param struct.
RealType a() const returns the distribution's location parameter;
RealType b() const returns the distribution's scale parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
exponential_distribution<RealType = double>is used to describe the lengths between events that can be modelled with a homogeneous Poisson process. It can be interpreted as the continuous form of the geometric distribution.
Its parameter lambda defines the distribution's rate parameter. Its expected value and standard deviation are both 1 / lambda.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType lambda = RealType(1)); RealType lambda() const; };
Constructors and members:
exponential_distribution<>(RealType lambda = 1) constructs an exponential distribution with specified lambda parameter.
exponential_distribution<>(param_type const &param) constructs an exponential distribution according to the value stored in the param struct.
RealType lambda() const returns the distribution's lambda parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
fisher_f_distribution<RealType = double>is intensively used in statistical methods like the Analysis of Variance. It is the distribution resulting from dividing two Chi-squared distributions.
It is characterized by two parameters, being the degrees of freedom of the two chi-squared distributions.
Note that even though the distribution's parameter n usually is an integral value, it doesn't have to be integral, as the Fisher F distribution is constructed from Chi-squared distributions that accept a non-integral parameter value (see also section 18.9.2.4).
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType m = RealType(1), RealType n = RealType(1)); RealType m() const; // The degrees of freedom of the numerator RealType n() const; // The degrees of freedom of the denominator };
Constructors and members:
fisher_f_distribution<>(RealType m = RealType(1), RealType n = RealType(1)) constructs a fisher_f distribution with specified degrees of freedom.
fisher_f_distribution<>(param_type const &param) constructs a fisher_f distribution according to the values stored in the param struct.
RealType m() const returns the degrees of freedom of the numerator;
RealType n() const returns the degrees of freedom of the denominator;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
gamma_distribution<RealType = double>is used when working with data that are not distributed according to the normal distribution. It is often used to model waiting times.
It has two parameters, alpha and beta. Its expected value is alpha * beta and its variance is alpha * beta^2.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType alpha = RealType(1), RealType beta = RealType(1)); RealType alpha() const; RealType beta() const; };
Constructors and members:
gamma_distribution<>(RealType alpha = 1, RealType beta = 1) constructs a gamma distribution with specified alpha and beta parameters.
gamma_distribution<>(param_type const &param) constructs a gamma distribution according to the values stored in the param struct.
RealType alpha() const returns the distribution's alpha parameter;
RealType beta() const returns the distribution's beta parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
geometric_distribution<IntType = int>is used to model the number of bernoulli trials (cf. 18.9.2.1) needed until the first success.
It has one parameter, prob, representing the probability of success in an individual bernoulli trial.
Defined types:
typedef IntType result_type; struct param_type { explicit param_type(double prob = 0.5); double p() const; };
Constructors, members and example:
geometric_distribution<>(double prob = 0.5) constructs a geometric distribution for bernoulli trials each having probability prob of success.
geometric_distribution<>(param_type const &param) constructs a geometric distribution according to the values stored in the param struct.
double p() const returns the distribution's prob parameter;
param_type param() const returns the distribution's param_type structure;
void param(const param_type &param) redefines the parameters of the distribution;
result_type min() const returns the distribution's minimum value (0);
result_type max() const returns the distribution's maximum value;
template<typename URNG> result_type operator()(URNG &urng) returns the next random value;
template<typename URNG> result_type operator()(URNG &urng, param_type &param) returns the next random value, using the parameter values stored in the param struct.
#include <iostream>
#include <ctime>
#include <random>

int main()
{
    std::linear_congruential_engine<unsigned, 7, 3, 61> engine(0);
    std::geometric_distribution<> dist;

    for (size_t idx = 0; idx < 10; ++idx)
        std::cout << "a random value: " << dist(engine) << "\n";

    std::cout << '\n' << dist.min() << " " << dist.max() << '\n';
}
lognormal_distribution<RealType = double> is a probability distribution of a random variable whose logarithm is normally distributed. If a random variable X has a normal distribution, then Y = e^X has a log-normal distribution.
It has two parameters, m and s, representing, respectively, the mean and standard deviation of ln(X).
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType m = RealType(0), RealType s = RealType(1)); RealType m() const; RealType s() const; };
Constructor and members:
lognormal_distribution<>(RealType m = 0, RealType s = 1) constructs a log-normal distribution for a random variable whose mean and standard deviation are, respectively, m and s.
lognormal_distribution<>(param_type const &param) constructs a log-normal distribution according to the values stored in the param struct.
RealType m() const returns the distribution's m parameter;
RealType s() const returns the distribution's s parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
normal_distribution<RealType = double>is commonly used in science to describe complex phenomena. When predicting or measuring variables, errors are commonly assumed to be normally distributed.
It has two parameters, mean and standard deviation.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType mean = RealType(0), RealType stddev = RealType(1)); RealType mean() const; RealType stddev() const; };
Constructors and members:
normal_distribution<>(RealType mean = 0, RealType stddev = 1) constructs a normal distribution with specified mean and stddev parameters. The default parameter values define the standard normal distribution;
normal_distribution<>(param_type const &param) constructs a normal distribution according to the values stored in the param struct.
RealType mean() const returns the distribution's mean parameter;
RealType stddev() const returns the distribution's stddev parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
negative_binomial_distribution<IntType = int>probability distribution describes the number of successes in a sequence of Bernoulli trials before a specified number of failures occurs. For example, if one throws a die repeatedly until the third time 1 appears, then the probability distribution of the number of other faces that have appeared is a negative binomial distribution.
It has two parameters: (IntType) k (> 0), being the number of failures until the experiment is stopped, and (double) p, the probability of success in each individual experiment.
Defined types:
typedef IntType result_type; struct param_type { explicit param_type(IntType k = IntType(1), double p = 0.5); IntType k() const; double p() const; };
Constructors and members:
negative_binomial_distribution<>(IntType k = IntType(1), double p = 0.5) constructs a negative_binomial distribution with specified k and p parameters;
negative_binomial_distribution<>(param_type const &param) constructs a negative_binomial distribution according to the values stored in the param struct.
IntType k() const returns the distribution's k parameter;
double p() const returns the distribution's p parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
poisson_distribution<IntType = int>is used to model the probability of a number of events occurring in a fixed period of time if these events occur with a known probability and independently of the time since the last event.
It has one parameter, mean, specifying the expected number of events in the interval under consideration. E.g., if on average 2 events are observed in a one-minute interval and the duration of the interval under study is 10 minutes, then mean = 20.
Defined types:
typedef IntType result_type; struct param_type { explicit param_type(double mean = 1.0); double mean() const; };
Constructors and members:
poisson_distribution<>(double mean = 1) constructs a poisson distribution with specified mean parameter.
poisson_distribution<>(param_type const &param) constructs a poisson distribution according to the values stored in the param struct.
double mean() const returns the distribution's mean parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
student_t_distribution<RealType = double>is a probability distribution that is used when estimating the mean of a normally distributed population from small sample sizes.
It is characterized by one parameter: the degrees of freedom, which is equal to the sample size - 1.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType n = RealType(1)); RealType n() const; // The degrees of freedom };
Constructors and members:
student_t_distribution<>(RealType n = RealType(1)) constructs a student_t distribution with the indicated degrees of freedom.
student_t_distribution<>(param_type const &param) constructs a student_t distribution according to the values stored in the param struct.
RealType n() const returns the distribution's degrees of freedom;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
uniform_int_distribution<IntType = int>can be used to select integral values randomly from a range of uniformly distributed integral values.
It has two parameters, a and b, specifying, respectively, the lowest value that can be returned and the highest value that can be returned.
Defined types:
typedef IntType result_type; struct param_type { explicit param_type(IntType a = 0, IntType b = max(IntType)); IntType a() const; IntType b() const; };
Constructors and members:
uniform_int_distribution<>(IntType a = 0, IntType b = max(IntType)) constructs a uniform_int distribution for the specified range of values.
uniform_int_distribution<>(param_type const &param) constructs a uniform_int distribution according to the values stored in the param struct.
IntType a() const returns the distribution's a parameter;
IntType b() const returns the distribution's b parameter;
result_type min() const returns the distribution's a parameter;
result_type max() const returns the distribution's b parameter;
uniform_real_distribution<RealType = double> can be used to select RealType values randomly from a range of uniformly distributed RealType values.
It has two parameters, a and b, specifying, respectively, the half-open range of values ([a, b)) that can be returned by the distribution.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType a = 0, RealType b = max(RealType)); RealType a() const; RealType b() const; };
Constructors and members:
uniform_real_distribution<>(RealType a = 0, RealType b = max(RealType)) constructs a uniform_real distribution for the specified range of values.
uniform_real_distribution<>(param_type const &param) constructs a uniform_real distribution according to the values stored in the param struct.
RealType a() const returns the distribution's a parameter;
RealType b() const returns the distribution's b parameter;
result_type min() const returns the distribution's a parameter;
result_type max() const returns the distribution's b parameter;
weibull_distribution<RealType = double>is commonly used in reliability engineering and in survival (life data) analysis.
It has two or three parameters; the two-parameter variant is offered by the STL. The three-parameter variant has a shape (or slope) parameter, a scale parameter and a location parameter. The two-parameter variant implicitly uses the location parameter value 0. In the two-parameter variant the shape parameter (a) and the scale parameter (b) are provided.
Defined types:
typedef RealType result_type; struct param_type { explicit param_type(RealType a = RealType(1), RealType b = RealType(1)); RealType a() const; // the shape (slope) parameter RealType b() const; // the scale parameter };
Constructors and members:
weibull_distribution<>(RealType a = 1, RealType b = 1) constructs a weibull distribution with specified a and b parameters;
weibull_distribution<>(param_type const &param) constructs a weibull distribution according to the values stored in the param struct.
RealType a() const returns the distribution's shape (slope) parameter;
RealType b() const returns the distribution's scale parameter;
result_type min() const returns the distribution's minimum result_type value;
result_type max() const returns the distribution's maximum value of type result_type;
| http://ftp.icce.rug.nl/documents/cplusplus/cplusplus18.html | CC-MAIN-2017-22 | en | refinedweb |
Algorithms Software
- Genetic Algorithms
Program Execution Monitor
PEM - A tool like a visual debugger with a rich user interface (but there is much more to this than a debugger); it is best described using the example of the Eclipse debugger. This tool inverts the debugger to develop code.
Programming Linux with C
A whole lot of source code related to programming Linux systems with C.
Project CIF(Core Information Foundation)
The project CIF now created with new philosophy of JPM (Joint Project Module) with SDK style. Now simply include CIF.h and use all of functions, macros and classes. BTS (Basic Text Support) Library Used in TOM (Text Object Module). TOM for now with CAnsiString , CAnsiStringLite And Text namespace. CAnsiString created with more functionality then typical string and text classes. EVS (Enhanced Visual Support) provide COpenFileDialog And CSaveFileDialog. JCM (Joint Core Module) used as base in project. Properly support for a class available TPT (Template Properly Type) and VTM (Value Type Macro). BTS,EVS,JCM,TOM,TPT,VTM are parts of project CIF. CIF5SDK used in project Bytes Counter. Project Core Information Foundation (CIF) is created by Muhammad Arshad Latti.
Promote Library
Promote is an easy to use, generic data structure and utility library for C,C++, and possibly more.
Quine-McCluskey minimizer
Simplifies boolean functions with the Quine-McCluskey algorithm.
Raiden Block Cipher
Raiden block cipher: An extremely lightweight and fast block cipher, developed using genetic programming, with the intention to be an alternative to TEA. This cipher is as fast as TEA, and without many of its known weaknesses.
Random Projection Trees
Random Projection Trees is a recursive space partitioning datastructure which can automatically adapt to the underlying (linear or non-linear) structure in data. It has strong theoretical guarantees on rates of convergence and works well in practice.
Real Time Free Surface Solver (RTFSS)
A fast MAC-based 3D free surface fluid solver. Capable also of simulating viscoelastic fluids. Includes also a wave equation solver for simulating shallow water phenomena.
RootContest
Objective of this project is to DEVELOP OPENSOURCE portal written in PHP/MySQL to practise websecurity in real time. Rootcontest portal will help users to understand websecurity in a better way.
Ropes for python
Ropes are a scalable string implementation that is designed for efficient operation by utilizing lazy operations such as concatenation and slicing. This is a python C module which implements ropes for python.
SCS: Simple Contest System
SCS - Simple Contest System - is a system for making programming contests in a simple way.
SEARCHTHELAN
SEARCHTHELAN is a SEARCHER/INDEXER of files/directories in a local area network....
SHOBHIT-Advance String Search
SHOBHIT-Advance String Search is a pattern search Algorithm
SMART
SMART (Shape Matching Algorithm Research Tool) enables you to implement 2D and 3D shape matching algorithms as plugins. Plugins can either be implemented in Java or as native plugins, i.e. in C/C++.
SSEPlus
SSEPlus is a SIMD function library. It provides optimized emulation for newer SSE instructions. It also provides a rich set of high performance routines for common operations such as arithmetic, bitwise logic, and data packing and unpacking.
Scex
Computer Science documentation project. This project intends to provide full documentation and academic information for studies: all courses taken, all exams with solutions, lots of examples and more.
Sensory Networks--BUPT
BUPT Sensor Networks (北邮传感网络): the research of sensor networks in BUPT, Beijing, China!
Shobhit-Improved String Search
SHOBHIT-Improved String Search is new improved string search algorithm
Shortest Path
Solving the Travelling Salesman problem is not our objective. We are writing an algorithm which will sort out the traffic woes of transport companies. In turn, it will help in saving traveling cost and time for the people involved and the transport companies.
Silicis- formal [verification] framework
Currently, all existing formal tools are designed to serve as formal verifiers, using one implementation or another. NO tool is providing a global framework to develop algorithms. Silicis is a new formal framework for designing [verification] algorithms. | https://sourceforge.net/directory/development/algorithms/language%3Ac/?sort=name&page=8 | CC-MAIN-2017-22 | en | refinedweb |
Opened 6 years ago
Closed 6 years ago
#14815 closed (duplicate)
app "labels" are ambiguous and cause bugs in manage.py
Description
In several places in django/db/models/loading.py, apps are looked up
via a non-namespaced name: that is, if your settings.INSTALLED_APPS
contains 'django.contrib.admin', the string 'admin' is used to look up
apps. This is done eg. in get_app().*
This is bad because it requires the last part of the dotted name - the
"app label" - to be unique across all installed apps, which I think is
an undocumented assumption. If there are duplicates, it makes the
behavior of manage.py commands ambiguous at best.
Here's one bug: my settings.INSTALLED_APPS contains among other things ('obadmin.admin', ... , 'django.contrib.admin', ...)
If I do manage.py test admin, it runs only the tests from obadmin.admin.
If I do manage.py sqlall admin, it prints only the SQL for django.contrib.admin.
This is broken in several obvious ways:
- The app chosen is inconsistent between the two commands
- It is impossible for me to tell manage.py test to test only django.contrib.admin
- It is impossible for me to tell manage.py sqlall to give me only the SQL for obadmin.admin
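The collision can be sketched in plain Python; the dict below is a simplified stand-in for Django's AppCache, not its actual code:

```python
# Simplified stand-in for a label-keyed app cache.
INSTALLED_APPS = ('obadmin.admin', 'django.contrib.admin')

app_cache = {}
for dotted_name in INSTALLED_APPS:
    label = dotted_name.rsplit('.', 1)[-1]  # both entries reduce to 'admin'
    app_cache[label] = dotted_name          # the second insert silently overwrites the first

print(app_cache['admin'])  # 'django.contrib.admin' -- obadmin.admin is unreachable by label
```

Whichever app happens to be registered last wins, which is exactly the inconsistency described above.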
"Namespaces are one honking great idea. Let's do more of those."
I don't know how hard it would be to solve this while preserving
backward compatibility. Would it be possible to allow specifying the
full dotted name of the app, and fall back to the current behavior if
the name doesn't contain dots? Maybe the various dicts on AppCache
could have both labels and dotted names as keys?
- Resolving the label to a module is also done in get_models(app_mod),
via the cache, even though the app_mod argument is already a
module. This might actually qualify as a separate bug: get_models
should always return models belonging to the app_mod passed in; it
should never use the name of that module to find models in a different
module.
See a related ticket #3591 and this branch: (it doesn't solve exactly this problem, but makes a step forward a reasonable solution). Backwards compatibility is the main problem here. | https://code.djangoproject.com/ticket/14815 | CC-MAIN-2017-22 | en | refinedweb |
The object will not be editable in the Inspector.
// Create a plane and don't let it be modifiable in the Inspector
// nor in the Scene view.
function Start() {
    var createdGO : GameObject = GameObject.CreatePrimitive(PrimitiveType.Plane);
    createdGO.hideFlags = HideFlags.NotEditable;
}
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void Start() {
        GameObject createdGO = GameObject.CreatePrimitive(PrimitiveType.Plane);
        createdGO.hideFlags = HideFlags.NotEditable;
    }
}
| https://docs.unity3d.com/ScriptReference/HideFlags.NotEditable.html | CC-MAIN-2017-22 | en | refinedweb |
import "github.com/spf13/hugo/tpl/cast"
cast.go docshelper.go init.go
Namespace provides template functions for the "cast" namespace.
New returns a new instance of the cast-namespaced template functions.
ToInt converts the given value to an int.
ToString converts the given value to a string.
Package cast imports 5 packages (graph) and is imported by 1 package. Updated 2017-05-22. | https://godoc.org/github.com/spf13/hugo/tpl/cast | CC-MAIN-2017-22 | en | refinedweb |
tgext.menu 1.0rc3
Automatic menu/navbar/sidebar generation extension for TurboGears
== Welcome to the TurboGears Menu Extension ==
tgext.menu provides a way for controllers to be more easily reused,
while providing developers a way to stop worrying about the menus in
their application. In a nutshell, that is what you get.
One thing that all web applications have in common is the need for
menus to help users navigate the application. The menus can be small,
single level, easily displayed across the top of the page. Or they can
be large, complex, hierarchical, and highly variable depending on
permissions and a whole host of other factors. tgext.menu helps out in
any of these cases.
When you use tgext.menu, you need to do the following three steps:
1. Add it to your project. This is accomplished by modifying your setup.py. Add "tgext.menu" to your **install_requires** list.
2. Get JQuery installed. tgext.menu supports working with both ToscaWidgets 1 and 2. You will need to manually choose and install the correct jquery for your application. If you are using ToscaWidgets1, then just run this command:
{{{
easy_install tw.jquery
}}}
If you are using ToscaWidgets2, you will need to do the following:
{{{
easy_install tw2.forms tw2.core
hg clone
cd tw2jquery
python setup.py develop
}}}
3. Update your project/config/app_cfg.py. If you already have a line like this:
{{{
base_config.variable_provider = my_function
}}}
Change the {{{variable_provider}}} part to {{{tgext_menu_sub_variable_provider}}}
4. Add the following lines to project/config/app_cfg.py at the bottom of the file.
{{{
import tgext.menu
base_config.variable_provider = tgext.menu.menu_variable_provider
}}}
5. Add it to your master template. This step varies depending on whether you are using Mako or Genshi for your templating engine. In either case, where you have the code to render your navigation bar, put this code:
**Mako**
{{{
#!html+mako
${render_navbar()|n}
}}}
**Genshi**
{{{
#!genshi
${HTML(render_navbar())}
}}}
6. Add tgext.menu into your controllers. Place this code in any controllers that will be using tgext.menu:
{{{
#!python
from tgext.menu import navbar, sidebar, menu
}}}
In front of any @expose'd methods, place a call to add them to your navigation bar, like so:
{{{
#!python
@navbar('TestHome')
@expose('genshi:tgext.menu.test.templates.index')
def index(self, *p, **kw):
return dict()
}}}
To nest menus, use || as separators, like so:
{{{
#!python
@navbar('My || Sub || Menu')
@expose('genshi:tgext.menu.test.templates.index')
def index(self, *p, **kw):
return dict()
}}}
And that's it, your menus will now render automatically. When you
add more items to your navbar using @navbar, they will appear without
your having to update any templates at all.
== Full Documentation ==
tgext.menu is built around the idea that a given web application can
have any number of menus. In this extension, each of them are
named. Two are so common that they are given shortcuts in the API:
navbar and sidebar. If you have other menus you wish to add to, you
will need to name them specifically. Fortunately, this is not as hard
as it sounds.
In your controller code, you will be using one of three decorators:
* @navbar('menu path')
* @sidebar('menu path')
* @menu('menu path', 'menu name')
Using @menu will be rare. Normally, you will only be using @navbar or
@sidebar.
=== Appending and Removing Entries ===
In addition, you may be using one of six functions in your code:
* menu_append('menu path', 'menu name')
* navbar_append('menu path')
* sidebar_append('menu path')
* menu_remove('menu path', 'menu name')
* navbar_remove('menu path')
* sidebar_remove('menu path')
The above functions are meant to allow you to easily add and remove menu
entries programmatically.
The decorators and methods all take a common set of parameters:
* menu path
* permission
* url
* extension
* extras
* sortorder
* right
* icon
menu path is simply a string of entries, separated by ||, indicating
where this particular entry lives in the menu hierarchy. An example
would be "Help || About" or "Edit || Copy", or "My || Deeply || Nested || Submenu".
Spaces around || will be stripped off, and are only put
in those strings for readability.
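In other words, the path is just split on || with surrounding whitespace stripped; here is a pure-Python sketch of that rule (not tgext.menu's internals):

```python
path = 'My || Deeply || Nested || Submenu'
segments = [segment.strip() for segment in path.split('||')]
print(segments)  # ['My', 'Deeply', 'Nested', 'Submenu']
```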
permission is the permission to check in order to display this item. It may be
a simple string or a full predicate from repoze.what.
url is the place the menu item points to. It will always be prefixed by the
path to the controller calling any of these methods *unless* the url is an
absolute url (i.e.: it has a ":" in it somewhere).
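The absolute-URL test described above (a ":" anywhere in the url) can be illustrated with a self-contained sketch; `resolve` is a hypothetical helper written for this example, not part of tgext.menu's API:

```python
def resolve(controller_prefix, url):
    # Absolute URLs (anything containing ':') are left untouched;
    # everything else is prefixed with the calling controller's path.
    if ':' in url:
        return url
    return controller_prefix.rstrip('/') + '/' + url.lstrip('/')

print(resolve('/admin', 'users'))                # /admin/users
print(resolve('/admin', 'http://example.com/'))  # http://example.com/
```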
extension is the file extension to put at the end. The dot will be supplied
for you, so you would only need to add "js" for javascript, for instance.
extras is a dictionary of extra attributes to place on the <li> tag
which contains the actual <a> tag for this menu item. All extras will
be placed as is, no filtering or escaping of any sort will be done.
Note that one "special" tag exists: extratext. This is extra text that
will be rendered after the link. This will allow you to place a
comment after the link and before the next menu item. For instance,
this can be used to have a sidebar with links and descriptions on
those links.
sortorder is the value of this menu item's sortorder. See "sortorder" under
"Configuration Options" (below) for details on how to use this.
right is a shortcut for adding the HTML class "right" to the menu item. This
is just a boolean value.
icon is a URL pointing to the icon image file you wish to assign to this
entry. While you would normally do this with CSS, you have the option of doing
this in your code if you wish.
If you use the @menu decorator or menu_append or menu_remove, you will need to
specify which menu this particular menu entry belongs to. This will allow you
to provide a specific menu for a specific section of your application. An
example would be if you are writing up an admin module, and want to provide an
admin-only menu.
Finally, for menu_append, navbar_append, and sidebar_append, you *must* supply
one additional parameter named "base". "base" is the class instance that is
the root of the path to be appended. Typically, this will be "self", though if
you know of another class instance that can be found in the hierarchy, you can
use that instead (of course, if that is appropriate to do). Failure to supply
"base" is an error, and will result in an exception being thrown.
=== url_from_menu and get_entry ===
This function allows you to locate the URL that a given menu path
has. This will be useful, for example, in templates, especially in
other TurboGears extensions. For instance, if you have the following
controller code:
{{{
#!python
from tg import require, TGController
from tgext.menu import navbar
class MyController(TGController):
@navbar('FooBaz')
def foobaz(self, *p, **kw):
return {}
}}}
Then, in your template, you can place this code:
{{{
#!genshi
<a href="${tg.url(url_from_menu('navbar', 'FooBaz'))}">Go To FooBaz!</a>
}}}
And the actual link to FooBaz will be rendered at that location,
regardless of where in your controller hierarchy that particular
controller was mounted.
get_entry will allow you to retrieve the menu object used to store the
data about the menu entry. This will allow you to do such things as
update extra attributes, permissions, and the like. It uses the same
parameters as url_from_menu.
=== switch_template ===
The default template that is provided, while useful, may not meet all of your
needs. You may write your own Mako template, load it as a single string, and
pass that string to this function. It will then be compiled and used for all
menu templates from that point on.
Note that you must pass in one single string, and that string must be a valid
Mako template, not the name of a template file. For instance, you can use
something similar to the way the default template is loaded in your own code.
It is currently loaded this way:
{{{
from pkg_resources import Requirement, resource_string
tmpl_string = resource_string(Requirement.parse("tgext.menu"),"tgext/menu/templates/divmenu.mak")
}}}
After that, you may call {{{switch_template(tmpl_string)}}} to begin using
your template.
=== Render The Menu ===
In your template, you will need to render the menu. This follows a
similar pattern from above. You will have the following methods
available to you in your template:
* render_navbar(vertical=False, active=None)
* render_sidebar(vertical=False, active=None)
* render_menu(menu_name, vertical=False, active=None)
The "vertical" parameter is used to tell jdMenu that this should be a vertical menu.
The "active" parameter is used to tell tgext.menu the menu path for
the currently rendered page. When this link is rendered in the menus,
it will be given a CSS class of active. Note that if you leave it as
None, then no link will ever be given that CSS class.
=== User Defined Callbacks ===
Finally, you may register callbacks from the menu system. These
callbacks will call your methods and add the results of those methods
into the user's menus. Note that this happens on every page request,
so you may have a callback that adds items depending on the parameters
of the request (the user, the group the user is in, etc). Those
callback registration functions follow a similar pattern as the
append/remove functions, and are named as follows:
* register_callback('menu name', function)
* register_callback_navbar(function)
* register_callback_sidebar(function)
* deregister_callback('menu name', function)
* deregister_callback_navbar(function)
* deregister_callback_sidebar(function)
Any given callback is expected to return a list of
tgext.menu.entry. This class uses the same parameters that the
decorators do, with the same names.
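A callback might look like the following sketch; `entry` here is a minimal stand-in with the same parameter names as tgext.menu.entry, so the example stays self-contained:

```python
class entry:
    """Minimal stand-in for tgext.menu.entry (same parameter idea)."""
    def __init__(self, path, url=None, permission=None):
        self.path = path
        self.url = url
        self.permission = permission

def admin_links():
    # Called on every page request, so the returned list may vary
    # per user, per group, or per request parameters.
    return [entry('Admin || Users', url='/admin/users'),
            entry('Admin || Groups', url='/admin/groups')]

# register_callback_navbar(admin_links)  # real registration requires tgext.menu
```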
== Configuration Options ==
In your app_cfg, you may have a section named "base_config.tgext_menu". The
way to add this section is to produce code like this:
{{{
#!python
base_config.tgext_menu = {}
base_config.tgext_menu['inject_css'] = True
}}}
This has three possible parameters in it: inject_css, inject_js and sortorder.
=== inject_css : default False ===
The "inject_css" parameter is used to inject the default CSS that comes from the jdMenu plugin. It is set to False by default, so as to allow you to specify your own CSS.
=== inject_js : default True ===
The "inject_js" parameter allows you to turn off the javascript. In this case,
you will have just a set of ul/li/a tags in your page representing the menu,
and may do as you see fit with it.
=== sortorder : default None ===
"sortorder" is a bit more complex to explain. Basically, using this, you can
set up the sort ordering for your menus. This is done by assigning entries a
numeric value, with higher values going towards the right/bottom, and lower
values towards the left/top. An entry is a single path segment. This is
accomplished by assigning a dictionary to the sortorder configuration
parameter. Unassigned entries will be given a value of 999999. All of this is
best to explain by example.
Assume we have the following menu entries:
ExitApp
Foo Spot || Bar
Foo Spot || Baz
Foo Spot || Foo
Now assume we have set the following dictionary as our sort order:
{ 'ExitApp': 20, 'Foo': 10 }
This will result in the menus being ordered like so:
ExitApp
Foo Spot || Foo
Foo Spot || Bar
Foo Spot || Baz
'ExitApp' will be compared with 'Foo Spot'. 'Foo Spot' has the value 999999,
and ExitApp has 20. ExitApp goes first.
In the sub menu for 'Foo Spot', 'Foo' has the value 10, while the others have
the value 999999. 'Foo' goes first.
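The rule just described boils down to a sort keyed on the configured value, with 999999 as the default for unassigned entries. A plain-Python illustration of that rule (this is not tgext.menu's actual code):

```python
DEFAULT_ORDER = 999999

def order_entries(entries, sortorder):
    # Unassigned entries get 999999; lower values sort toward the left/top.
    return sorted(entries, key=lambda name: sortorder.get(name, DEFAULT_ORDER))

sortorder = {'ExitApp': 20, 'Foo': 10}
print(order_entries(['ExitApp', 'Foo Spot'], sortorder))  # ['ExitApp', 'Foo Spot']
print(order_entries(['Bar', 'Baz', 'Foo'], sortorder))    # ['Foo', 'Bar', 'Baz']
```

The tie between 'Bar' and 'Baz' (both 999999) is left in their original order, matching the example output above.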
== Drawbacks ==
Adding the code for rendering a Google sitemap should be relatively
easy. However, it has not been done as yet.
=== @require ===
This isn't so much a drawback as it is an issue with getting
permissions to be honored properly by tgext.menu. As it stands right
now, tgext.menu honors the allow_only attribute. However, it cannot
honor the @require decorator without some additional work on your
part.
The reason for this is because @require is actually a function in its
own right. It takes a repoze.what Predicate, validates that the
current user passes the Predicate check, and then calls the wrapped
method. Therefore, when TurboGears calls your method, it is *actually*
calling the require function, which validates the security, then calls
your code, which returns data back up the stack to TurboGears.
By doing this, the Predicate becomes a local variable inside of
@require that cannot be accessed by outside code. The only way to
ensure that tgext.menu will properly honor such predicates is to do
something like this:
{{{
#!python
from tg import require, TGController
from repoze.what.predicates import Not, is_anonymous
from tgext.menu import navbar
class MyController(TGController):
    is_not_anon = Not(is_anonymous())

    @require(is_not_anon)
    @navbar('FooBaz', permission=is_not_anon)
    def foobaz(self, *p, **kw):
        return {}
}}}
I do wish there were a better way, but it just doesn't exist yet.
== Doc To-Do Items ==
* Document the code internally
- Author: Michael Pedersen
- Keywords: turbogears2.widgets
- Package Index Owner: pedersen, percious, amol
- DOAP record: tgext.menu-1.0rc3.xml
Welcome to Cisco Support Community.
Hello Community,
I've converted an EEM to Tcl using the tool on this site. However, I'm getting the following error:
R3(config)#event manager policy OjectTracking.tcl
EEM Register event failed: Error empty reg spec, policy does not start with EEM registration commands.
EEM configuration: failed to retrieve intermediate registration result for policy OjectTracking.tcl
Can someone please take a little look at the script and let me know what's wrong?
::cisco::eem::event_register_track tag SLA1 1 state down
::cisco::eem::event_register_track tag SLA2 2 state down
::cisco::eem::event_register_track tag SLA3 3 state down
::cisco::eem::trigger {
    ::cisco::eem::correlate event SLA1 or event SLA2 or event SLA3
}
#
# This EEM tcl policy was generated by the EEM applet conversion
# utility at
# using the following applet:
#
# event manager applet object
# event tag SLA1 track 1 state down
# event tag SLA2 track 2 state down
# event tag SLA3 track 3 state down
# trigger
# correlate event SLA1 or event SLA2 or event SLA3
# action 1.0 syslog msg "Track $_event_tag1 is down"
#
namespace import ::cisco::eem::*
namespace import ::cisco::lib::*
array set arr_einfo [event_reqinfo]
action_syslog msg "Track $_event_tag1 is down"
Cheers
Carlton
On what version of IOS are you trying to register this Tcl policy?
Hi Joseph,
The IOS is:
Cisco IOS Software, 7200 Software (C7200-SPSERVICESK9-M), Version 15.0(1)M9, RELEASE SOFTWARE (fc1)
Cheers
I should mention I'm using GNS3
Cheers
Try converting it again and use that version.
Joseph,
I'm not sure what you mean by 'that version'?
Hi Joseph,
Just so you know, I tried loading the script to a 3660, just in case the problem was with the 7206. When I do so I get the following on the 3660:
R3(config)#even ma p OjectTracking.tcl
Compile check and registration failed:policy file does not start with event register cmd
Tcl policy execute failed: policy file does not start with event register cmd
Embedded Event Manager configuration: failed to retrieve intermediate registration result for policy OjectTracking.tcl: Unknown error 0
Sorry Joseph, I understand what you meant.
I recreated the script and applied it but still get the following error:
R3(config)#even ma p objectv2.tcl
Compile check and registration failed:policy file does not start with event register cmd
Tcl policy execute failed: policy file does not start with event register cmd
Embedded Event Manager configuration: failed to retrieve intermediate registration result for policy objectv2.tcl: Unknown error 0
R3(config)#
Hi Joseph,
Back on the 7206 and I get the following error:
R3(config)#even manager pol objectv2.tcl
EEM Register event failed: Error empty reg spec, policy does not start with EEM registration commands.
EEM configuration: failed to retrieve intermediate registration result for policy objectv2.tcl
Reconvert it again. That version should work.
I can register this policy just fine. Make sure you're trying THIS version.
Joseph,
Sorry, but I don't know what you mean when you say THIS version.
Do you mean the version from the site that i use to convert the EEM? If so, I can only use the version that is provided on the site.
Again sorry but I don't understand
Cheers
Sent from Cisco Technical Support iPhone App
I mean the version you attached to this thread or the version you'd get by converting your applet now will register. I tried it so I know it works.
Joseph,
I re-converted but still getting the following error:
R3(config)#even mana pol objectv4.tcl
EEM Register event failed: Error empty reg spec, policy does not start with EEM registration commands.
EEM configuration: failed to retrieve intermediate registration result for policy objectv4.tcl
It works for me. Perhaps your device doesn't support the track ED?
Hi Joseph,
Thanks for getting back to me, I do believe the device supports ED:
R3#show event manager version
Embedded Event Manager Version 3.10
Component Versions:
eem: (v310_throttle)4.1.23
eem-gold: (v310_throttle)1.0.7
eem-call-home: (v310_throttle)1.0.6
Event Detectors:
Name Version Node Type
application 01.00 node0/0 RP
syslog 01.00 node0/0 RP
track 01.00 node0/0 RP
resource 01.00 node0/0 RP
routing 02.00 node0/0 RP
cli 01.00 node0/0 RP
counter 01.00 node0/0 RP
interface 01.00 node0/0 RP
ioswdsysmon 01.00 node0/0 RP
none 01.00 node0/0 RP
oir 01.00 node0/0 RP
snmp 01.00 node0/0 RP
snmp-notification 01.00 node0/0 RP
timer 01.00 node0/0 RP
ipsla 01.00 node0/0 RP
snmp-object 01.00 node0/0 RP
test 01.00 node0/0 RP
config 01.00 node0/0 RP
env 01.00 node0/0 RP
gold 01.00 node0/0 RP
nf 01.00 node0/0 RP
rpc 01.00 node0/0 RP
R3#
Paste the exact output when you "more" the file as you have copied it to the router.
You need to upload the file as a plain ASCII text file. I'm not sure how you're transferring it to the router, but that's clearly wrong.
Joseph,
I've uploaded the file using tftp server.
I've successfully uploaded other .tcl files from this site.
I don't think I'm doing anything different....
I don't know what more to tell you. This is a hex dump of an ASCII file, not the raw file itself. You need to get the raw ASCII file loaded on flash.
Hi Joseph,
I'll play around and see what I can do.
Thanks for your help anyway mate.
Joseph,
Just so you know the file is being loaded onto disk0:
Joseph,
Do you think you could convert it for me - just in case I'm converting incorrectly?
Cheers
I just downloaded the attached file you posted, copied it to my router using TFTP, and registered it. It worked. It's not a question of your file, but how it appears on flash. Either how you're saving it locally or how you're transferring it is not preserving the original structure.
Success!
Joseph,
You were correct. The problem is the way I copy the file to the tftp server.
I simply extracted object4.tcl.zip, you tested in the previous post and I got no errors registering.
Now got to try and test it out.
Cheers mate.
Hi Joseph,
The problem I have now is that I need the environment variable $_event_tag1. But that variable represents SLA1 or SLA2 or SLA3
*Jul 18 22:28:37.643: %TRACKING-5-STATE: 2 ip sla 2 reachability Up->Down
can't read "_event_tag1": no such variable
while executing
"action_syslog msg "Track $_event_tag1 is down""
invoked from within
"$slave eval $Contents"
(procedure "eval_script" line 7)
invoked from within
"eval_script slave $scriptname"
invoked from within
"if {$security_level == 1} { #untrusted script
interp create -safe slave
interp share {} stdin slave
interp share {} stdout slave
..."
Multi-event is different in Tcl. You have to test for each tag:
array set mar_einfo [event_reqinfo_multi]
set tag {}
foreach tag [list "SLA1" "SLA2" "SLA3"] {
    if { [info exists mar_einfo($tag)] } {
        break
    }
}
action_syslog msg "Track $tag is down"
Wow!
Where would I add that to the Tcl?
Seriously, if its too much trouble just point me in the right direction and I'll try and figure it out - you've helped me a great deal...
Joseph,
I got it to work.
You're a genius! Seriously, I was about to give up.
Thank you so much mate.
MPI_Op_free
Frees a user-defined combination function handle
int MPI_Op_free( MPI_Op *op );
Parameters
- op
- [in] operation (handle)
Remarks
Marks a user-defined reduction operation for deallocation and sets op to MPI_OP_NULL on exit.
Null Handles
The MPI 1.1 specification, in the section on opaque objects, explicitly
disallows freeing a null MPI_Op.
Errors
- MPI_ERR_ARG
- Invalid argument; the error code associated with this error indicates an attempt to free an MPI permanent operation (e.g., MPI_SUM).
See Also
MPI_Op_create
Example Code
The following sample code illustrates MPI_Op_free.
#include "mpi.h"
#include <stdio.h>

void addem ( int *, int *, int *, MPI_Datatype * );

void addem(int *invec, int *inoutvec, int *len, MPI_Datatype *dtype)
{
    int i;
    for ( i=0; i<*len; i++ )
        inoutvec[i] += invec[i];
}

int main( int argc, char **argv )
{
    int rank, size, i;
    int data;
    int errors=0;
    int result = -100;
    int correct_result;
    MPI_Op op;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    data = rank;
    MPI_Op_create( (MPI_User_function *)addem, 1, &op );
    MPI_Reduce ( &data, &result, 1, MPI_INT, op, 0, MPI_COMM_WORLD );
    MPI_Bcast ( &result, 1, MPI_INT, 0, MPI_COMM_WORLD );
    MPI_Op_free( &op );

    correct_result = 0;
    for (i=0; i<size; i++)
        correct_result += i;
    if (result != correct_result) errors++;

    MPI_Finalize();
    return errors;
}
Creating Visual Web Parts for Sharepoint 2010 using Visual Studio 2010 – Part 1
Visual Studio 2010 development tools for Sharepoint 2010 provide an easy way to develop custom solutions for Sharepoint with minimum effort. Developing for Sharepoint 2010 is now as easy as developing ASP.NET web applications.
Development tools like project template specific for Sharepoint 2010, Workflow templates and Visual Web Parts templates greatly enhances developer productivity and saves lot of development time.
In this article, we’re going to see how to use visual web part template and develop a custom solution in Sharepoint 2010 with minimum code and effort.
Case Study
We’re going to create a web part that will display a list of available SharePoint lists in the current web site. Once the web part is developed and deployed into the site, it can be used in any Sharepoint page.
Solution
Follow these steps to create a new project in Visual Studio 2010.
Open Visual Studio 2010, Select File, New, Project from the menu.
Select Visual C#, SharePoint, 2010 from Installed Templates
Select Visual Web Part and type name and location for your project files
Select where you want to deploy your project
Once project is successfully created, Visual Studio 2010 will provide you with a default web part called VisualWebPart1. We can add as many web parts as we want for our project.
Customizing Visual Web Part
Design the web part to your requirements, in this case we’ve added a Label and Listbox control from the Toolbox.
Go to the Page_Load event of the control and add the following code accordingly:
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;

namespace VisualWebPartProject1.vwpList
{
    public partial class vwpListUserControl : UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            SPWeb web = SPContext.Current.Web;
            lbLists.Items.Clear();
            foreach (SPList list in web.Lists)
            {
                lbLists.Items.Add(new ListItem(list.Title, list.ID.ToString()));
            }
        }
    }
}
The code is simple, we’re creating a SPWeb object for the current sharepoint site and looping through the lists and populating the list control with sharepoint list name and id.
That’s it, run the project (press F5) from Visual Studio 2010 to deploy and test the solution.
The Run command will deploy the custom solution to the sharepoint site we’ve mentioned while creating the project.
Insert the web part into sharepoint page by doing the following:
Select Edit to edit the sharepoint page.
Go to the Insert tab and click Web Part
Select Custom from Categories and the web part name from the Web Parts section
Click Add to insert our new web part into the page.
Summary
In just about minutes we’ve developed a fully functional web part that is ready to be used across sharepoint sites. Though we’ve started simple, in the coming articles we’ll explore more options about how to work with sharepoint object like lists, document libraries, workflows etc. and create custom solutions with minimum effort.
Thanks to u….i have created my first visual web part and used in SPT2010
Image.blend(image1, image2, alpha)
Returns: Image that creates a new image by morphing image1 and image2, using a constant alpha.
outImage = image1 * (1.0 - alpha) + image2 * alpha
Note: Both images must have the same size.
In this task, you will create a function called morphPicture that has two arguments representing the file names of image1 and image2. Your function will use a for loop to call Image.blend with the following alpha values: 0, 1/10, 2/10, … 1. Your function then saves and displays the result image for each alpha.
The name of the files created by your function must have the following format: “morph”+str(k)+”.gif”. Note that (k) means the value of the variable that controls the for loop.
Here is my code so far. I keep getting an erro message that says, 'str' object has no attribute 'load'. Can someone please lead me in the righ direction.
def morphPicture(image1,image2):
    myImage=Image.open(image1)
    myImage2=Image.open(image2)
    image1.load()
    image2.load()
    alpha=[0,.1,.2,.3,.4,.5,.6,.7,.8,.9,1]
    for k in range(len(alpha)):
        if myImage.size==myImage2.size:
            value=alpha[k]
            Image.blend(image1,image2,value)
            outImage=image1*(1.0-value)+image2*value
            outImage.show()
            file=outImage[:-4]
            file="morph"+str(k)+".gif"
This post has been edited by atraub: 14 October 2012 - 03:02 PM
Reason for edit:: added code tags
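The error message itself points at calling load() on the filename strings rather than on the opened images, but the blend math quoted in the assignment is worth seeing on its own. A minimal, PIL-free sketch of that per-pixel arithmetic (the pixel lists here are made-up sample data, not part of the assignment):

```python
def blend_pixels(pixels1, pixels2, alpha):
    # out = pixel1 * (1.0 - alpha) + pixel2 * alpha, applied per pixel
    return [int(p1 * (1.0 - alpha) + p2 * alpha) for p1, p2 in zip(pixels1, pixels2)]

a = [0, 100, 200]
b = [100, 200, 250]
print(blend_pixels(a, b, 0.0))  # [0, 100, 200] -- all of image1
print(blend_pixels(a, b, 0.5))  # [50, 150, 225] -- halfway morph
print(blend_pixels(a, b, 1.0))  # [100, 200, 250] -- all of image2
```

Alpha 0 returns a copy of the first image and alpha 1 the second, matching the endpoints described for Image.blend.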
I've a desktop application to detect faces, written as a Python script using OpenCV and NumPy.
I want to put these Python files into Flask and run it. Would it run without problems? Like this:
import cv2
import numpy as np
from flask import Flask
app = Flask(__name__)
## define my functions here
@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    #call the functions here
    app.run()
Yes, it would work. One thing you should know is that if you do it like below, the HTTP request won't return until after the processing is done, e.g.
@app.route('/webcam')
def webcam_capture():
    """ Returns a picture snapshot from the webcam """
    image = cv2...  # call a function to get an image
    response = make_response(image)  # make an HTTP response with the image
    response.headers['Content-Type'] = 'image/jpeg'
    response.headers['Content-Disposition'] = 'attachment; filename=img.jpg'
    return response
Otherwise, if you put it in the main function like below
if __name__ == '__main__':
    # <-- No need to put things here, unless you want them to run before
    # the app is ran (e.g. you need to initialize something)
    app.run()
Then your flask app won't start until the init/processing is done.
This question really makes no sense in Java. We can't just make a pointer to the middle of a String... We have to make a new String anyway, so the problem is trivial using basic String methods.
public class Solution {
    public String strStr(String haystack, String needle) {
        int pos = haystack.indexOf(needle);
        if (pos == -1) return null;
        return haystack.substring(pos);
    }
}
I just modified the code definition to return the index instead of String. (Similar to Java's indexOf method.)
Hopefully this makes more sense.
Note: Click on the reload button to reset the code definition if you still see the old code definition.
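With the revised, index-returning definition, the same triviality holds in other languages as well; a Python sketch for comparison (not from the thread):

```python
def str_str(haystack, needle):
    # Return the index of the first occurrence of needle in haystack,
    # or -1 if it does not occur (the behavior of Java's indexOf).
    return haystack.find(needle)

print(str_str("hello", "ll"))   # 2
print(str_str("aaaaa", "bba"))  # -1
```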
TypeScript Fundamentals.
Welcome and Setup
We’ll introduce ourselves and meet each other, and ensure everyone is properly set up for the course.
Strange JavaScript & The Benefits of Types
JavaScript has some quirky characteristics that can lead to considerable confusion, particularly from those who come from a background using a strongly-typed language like C++ or Java.
Using the TypeScript Compiler
The Typescript compiler is a robust tool for turning your code into JavaScript. We’ll look at how the tool is typically used, and the vast array of important configuration parameters we can tweak to get just what we need.
EXERCISE: Compiling TypeScript into JavaScript
We'll use our knowledge of the tsc command to compile some TypeScript files into appropriate JavaScript code.
Coffee Break
Coffee Break
2 — Today's JavaScript is Yesterday's TypeScript
Starting with the ES2015 revision of the JavaScript language standard, it has begun to adopt many well-loved features of TypeScript. Because of this, it's common to see some things that the JavaScript community would consider "experimental" used widely in TypeScript programs. We'll look at some of these areas in detail.
Agenda
Modules
Modules have been a first class part of TypeScript since before people started using them widely in JavaScript. We’ll look at how to stitch encapsulated and modular pieces of code together, and cover some best practices regarding namespaces, named and default exports, and importing.
EXERCISE: Refactor into modules
Using our best practices for modules, refactor the code for this exercise so that it’s separated into distinct modules, and more easily unit testable.
Classes and Prototypes
Classes are an important abstraction built on top of JavaScript’s prototypal inheritance, which reduces the propensity for developers to tread into a counterintuitive territory. TypeScript makes heavy use of the concept of a JavaScript class and adds some unique features that you won’t see in the JS world.
EXERCISE: Color picker: class edition
Implement a color picker using JavaScript classes.
Lunch
Break for lunch
Decorators
Decorators allow us to modify and annotate things like classes, functions and values in an easy and declarative way. While they’re starting to look like promising additions to the JavaScript language spec, you’ll see them in TypeScript all the time.
EXERCISE:.
Enhanced Objects & Property Descriptors
Enhanced object literals allow us to do common things like add methods and properties more clearly and easily than ever before. We’ll look at how object literals have evolved since the ES5 JavaScript standard, and then explore additional features that TypeScript bring to the party.
EXERCISE: Getter/Setter based properties
Using a property descriptor, define a property on an object that’s derived from other values. We should be able to get and set this property, just as if it was value based. For the getter and setter you define, you should take care of keeping all dependencies properly in sync.
Iterators and Generators
Iterators and generators are the foundation for many higher-level JavaScript language features.
Async & Await
Async and await are starting to creep into the JavaScript world, but these keywords have been broadly used in TypeScript programs for many years. We’ll learn about how these new keywords allow us to write async code that looks almost like the synchronous (blocking) equivalent!
Recap & Wrap Up
We’ll recap what we’ve covered today, and set our sights on a homework assignment and tomorrow’s agenda
HOMEWORK: Async Task Runner
One of the most powerful things we can build on top of generator functions is an “async task runner”. You have an “autocomplete” use case already set up, that involves running several async operations in sequence.
Welcome & Solution to Homework
We’ll go through the agenda for today and the solution to last night’s homework exercise.
Type Annotations
Type annotations, which can be used anywhere a value is declared, passed or returned, are the basis for some fantastic editor features and static code analysis. We’ll look at some of the most basic type annotation use cases, and demonstrate how those ugly areas of JavaScript quickly begin to go away.
EXERCISE: Typed Color Picker
Add type annotations to our color picker. Your solution should result in no warnings emitted by the TypeScript compiler.
Ambient Types
Ambient types allow us to provide type information for any JavaScript code that’s included in our project.
EXERCISE: Adding types for an existing JS library
Use our knowledge of the standard setup for *.d.ts files to supply type information for our color conversion library.
Coffee Break
Coffee Break
Optionals
Adding type annotations to your code can start to apply some unexpected constrains – one of which is that “optional” arguments must be explicitly defined as such. We’ll look at how this is done in TypeScript, and how to decide between “optionals” or arguments with default values when designing functions.
EXERCISE: Making our color picker more robust
Using our knowledge of optionals and default parameter values, make the provided edge and corner test cases pass with no TypeScript compiler warnings.
Interfaces
Interfaces allow us to go way beyond the default basic types that we're provided with, and to define complex types of our own. We'll look at how interfaces can be used for "structural typing" and how we can implement multiple interfaces in an ES2015 class using the implements keyword.
EXERCISE: Structural typing with interfaces
Use an interface to represent a color as an object with r, g, and b channels. Update the rest of your code so that all tests pass with this new color representation, with no TypeScript compiler warnings or errors.
Lunch
Break for lunch
Generics
Generics allow us to define classes or functions in ways that are type-agnostic, meaning that they work across a broad range of types, while still providing the benefits of type safety. We’ll look at how this works, and walk through some common use cases.
EXERCISE: Generics
Solve the provided exercise so that all tests pass, and the TypeScript compiler emits no warnings or errors.
Type Guards, Coersion, Casting and Assertion
There are several ways we can guard against and convert values to get the type we need, but these language features are a little different than what you may be used to due to TypeScript not having any kind of runtime reflection API. We’ll look at the broad range of options available, and then narrow down to best practices that will serve you well, even in very complex applications.
Working with TSX
TypeScript can be used with JSX very easily, but there are certain types of TypeScript syntax that interfere with JSX parsing. We’ll identify these issues, and provide some TSX-friendly workarounds that’ll let us get the same things done, even in React components.
Access Modifiers
The public, private and protected access modifiers allow us to control what our classes expose down their inheritance chain, and out to the rest of the world. We'll study how structural type matching is affected by these modifiers, and provide some guidance and best practices to strike the appropriate balance between safety and flexibility.
Readonly and Static
The readonly and static keywords further enhance what we can do with JavaScript classes.
EXERCISE: Access modifiers
Solve the provided exercise, such that all tests pass, and the TypeScript compiler emits no warnings or errors
Enums
Enums allow us to group a collection of related values together. TypeScript provides us with a robust and full-featured solution in this area, with some options that let us strike the balance between full-featured and lightweight.
Mixins, Abstract Classes and Interfaces
When it comes to inheritance, we have many options to choose from. We’ll look at the appropriateness of abstract classes, interfaces, and mixins for various use cases, highlighting the pros and cons of each.
EXERCISE: Data Modeling
Solve the provided exercise, such that all tests pass, and the TypeScript compiler emits no warnings or errors
Code Style
Odds are, you’re probably used to writing plain JavaScript. We’ll go over some code style best practices that you may want to add to your tslint typescript linting configuration.
Using TypeScript with React
React components provide us with some excellent opportunities to reap the benefits of what we’ve learned so far. We’ll review React’s DefinitelyTyped library type descriptions and see how much our editor helps us, compared to what we’d see were we using “vanilla JavaScript”.
EXERCISE: A typed react component
Rebuild the UI component for our color picker, using interfaces for the component state and props.
5 — Migrating to TypeScript
As Typescript works side-by-side with JavaScript easily and conveniently, the overhead to start using Typescript is very low. We'll discuss some topics related to moving a conventional JavaScript app to TypeScript, while striking the balance between capability and productivity.
Agenda
Adding Types Incrementally
One of the core requirement of TypeScript is that it must be conveniently usable side-by-side with regular JavaScript. We’ll look at what it would take to add TypeScript to an existing project, and then incrementally add type information over time.
Using TypeScript with Babel
You will often use TypeScript and Babel together. Because both of these libraries are responsible for taking something other than browser-friendly JavaScript and transforming it to ES5, there can be some strange behavior depending on how things are set up. We’ll provide some guidelines for a setup that maximizes the benefit you get from both of these tools while minimizing confusion.
Wrap up and recap
We'll recap everything we've covered today, and provide some recommendations for further reading and learning.
Add animate.css
" npm install animate.css --save" in your project directory
import in main.js
import Vue from 'vue' import Quasar from 'quasar' import 'animate.css/animate.min.css'
- add <transition> to the element/component you want to animate
more info here:
<div class="row"> <button @ toggleIt </button> <transition name="custom-classes-transition" enter- <button v-ANIMATION</button> </transition> </div>
It’s basically just wrapping the vue <transition> tag around an element with attribute like v-if etc. (that can listen to vue transitions.)
Then add the enter and leave classes.
“animated”: a “signal” for animate.css to do something
"bounceInLeft etc" : animate.css animations
You can control speed by adding your own class:
.slow { -webkit-animation-duration : 1s; }
Excellent. There’s even a package called
vue-animate, but it’s unclear if it works with Vue 2.
Is there any way to use transition on Quasar tab content?
Haven’t analyzed this for v0.13, but you’ll be able to do this on v0.14 for sure.
@rstoenescu Can’t wait to see it then!!! :)
Any news about release date?
We’re a few weeks from releasing it. Currently working on documentation. | http://forum.quasar-framework.org/topic/199/add-animate-css | CC-MAIN-2017-47 | en | refinedweb |
In the last article we set up a new project with NPM and installed some dependencies including Webpack. In Node.js we build code in modules and export functionality that is required by other modules. This module system is called CommonJS and we will be using this approach for both our server side Node.js code and our front end React.js code. CommonJS is not natively supported in web browsers so we need tooling to help us develop and deploy our code in this way. This is where we want to start developing with Webpack. Webpack is a module bundler that provides us with a set of tools for building modules for the browser. With Webpack we build and bundle our code into a single file (or set of files) including the static dependencies of those modules such as CSS and images.
To see how Webpack works we need to start writing some code! First we create an index.html file in the project directory from the last article named react-isomorphic. This file is the entry point to our application from a web browser. In the index.html file we want to include the JavaScript file that will be the resulting bundle produced by Webpack and a root element that we will render our React.js application into. The approach we will take in our development on this project will be mobile first so we will also need a viewport meta tag to ensure the page displays correctly on mobile devices. Write out and save the following code in the index.html file using the code editor of your choice.
Open this file in a web browser and you should see the text "Hello world from HTML" displayed on a white page. You can open a file in most browsers, including Chrome, from the File menu under Open or Open File.
Next we want to create our first JavaScript module so that we can build and view the bundle for the first time. This module will be the entry point to our application and will be responsible for initiating and rendering the different parts of the web page using React.js. But, before we can create this file we must install the required React.js packages using NPM.
npm install react react-dom --save
This command installs the core React.js framework (react) and the React.js DOM library (react-dom). Before we build our main module let's do a little set up and create a new directory inside our project folder to house our React.js application. Add a new folder named client in the root of the react-isomorphic project directory and create a new file named index.js within this folder. Recreate the following code in the index.js file, feel free to copy and paste this code but we recommend typing it out so that you get a better understanding of what it does and how to debug the code as is changes.
We begin this file with some import statements. Import statements are how we specify the external modules we wish to import into the current module so that we are able to use them in our own code. Here we import the React core library which we need for our JSX code and the React DOM library we need to render our root component into the main tag with the id of “app” in our index.html page. For now our root component is just a div with a message.
Let’s build the bundle with Webpack for the first time to test it is working. Run the following command from the root of the react-isomorphic project directory.
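The command was stripped during extraction, but it is quoted verbatim in the comments below this article; normalised to plain ASCII quotes it is:

```shell
webpack ./client/index.js ./bundle.js --module-bind 'js=babel?{"presets": ["es2015", "react"]}'
```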
You should see a success message with some stats about the size of the resulting bundle.js file. If not, try to correct the errors shown before continuing. Breaking down this command, we call the webpack command line utility program and tell it that the entry point to our application code is ./client/index.js and that the output for our bundle is the ./bundle.js file. We are using JavaScript 2015 (or ES6) to write our code so we must first run it through Babel to transpile the code into a format that is able to run in all modern browsers. To do this we use the --module-bind option and inform it that all files with the extension .js must be loaded with the Babel loader, which handles the transpilation. We also pass the presets we installed for React and es2015 in the query string as a JSON object. If talk of transpilers and the different versions of JavaScript goes over your head right now don’t worry, as we will be explaining the code in all of the examples in this series as we work through them.
Open the index.html file in a web browser again and you should now see the message “Hello world from React” which proves to us that the bundle is built and loaded and that our React code is working. Yay!
Building self-contained modules
Now we have some working code we can start to add the main component modules of our application. We will be developing these components in a self-contained way where each component lives in its own folder. We do this so that we can keep all of the files that make up each component together, to ensure that our application code is well organised and easy to restructure should we ever need to do so in the future. Let’s start this approach by creating a root component named App and importing it. Update ./client/index.js to import this new module.
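The updated ./client/index.js was not preserved; a reconstruction (the 'components/app' import path matches the one discussed in the following paragraphs) would be:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import App from 'components/app';

ReactDOM.render(<App />, document.getElementById('app'));
```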
Here we added a new import statement to import the App component we are about to build. If we try to build our bundle with Webpack now we will see an error message telling us that the App component can’t be found so let’s add it. Before we do so we need to create two new directories. First create a folder named components inside of the client directory where our index.js file lives then inside of the components directory create another folder named app. Write out and save the following code in a file named index.js within the new app folder.
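The App component listing itself was lost in extraction; a version consistent with the surrounding text (the "app" class name is an assumption made so the stylesheet added later in the article has something to target) is:

```jsx
import React, { Component } from 'react';

class App extends Component {
  render() {
    return <div className="app">Hello world from a React component</div>;
  }
}

export default App;
```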
We name this file index.js so that when we require it from another module using the name “components/app” the module importer knows implicitly which file is the entry point to that module. You can read more about folders as modules in the node documentation.
In the App module code we import two separate things from React. First is the default export from React, which is the core library with all of its features, and second we see the name of the Component feature in braces. We have this named import so that we can reference the Component class by just specifying the name Component in our code, but we could have just as easily written this as follows.
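That is, skipping the named import and referencing the class through the default export instead:

```jsx
import React from 'react';

class App extends React.Component {
  // ...
}
```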
The Component class is the base class for all React components which provides a set of life cycle methods that we can choose to implement to make our own components behave in a way that React understands. One method we must provide is the render method so that React can render the component into the page. We set the App class as the default export from our module and this is the only thing that we export.
Let us try to build our bundle again.
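The command shown at this point was the same long CLI invocation described earlier in the article (reconstructed here with plain ASCII quotes):

```shell
webpack ./client/index.js ./bundle.js --module-bind 'js=babel?{"presets": ["es2015", "react"]}'
```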
Running this should show you an error. The new component we are trying to import from ./client/index.js can’t be found. This is due to the path portion of the import statement “components/app”. We can fix this by prefixing the path “./components/app” so that the import knows to start looking for the components directory from the ./client folder where the index.js file lives or we can tell Webpack explicitly where to find the component modules. Let’s do the latter so that our code remains clean and maintainable.
For this to work we have to create a configuration file for webpack, much like the package.json configuration file for NPM. The default Webpack configuration file is named webpack.config.js and from this file we export a Node.js module containing the configuration options. We add the options we have been using up to this point with additional options for resolving our component imports.
We use the older CommonJS syntax seen in Node.js to export from our module in this file which may be a little confusing at this point. We will write our Node.js code in JS2015 format at a later stage using babel-node but here we will stick to this format to save additional setup.
In the configuration we tell Webpack the same information as before but in a different format. We define where our source code lives with the context option and which file is the entry point. We provide the output options for the bundle in the output section and list which loaders we require to use to build our code in the modules section. For each loader we define how to test the file name, using a simple regular expression, to determine which files the loader should be applied to. In addition there is now the resolve section with an alias telling Webpack where to resolve the components directory that it was unable to find before. With this file in the root of our react-isomorphic directory we can build our bundle by just running.
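The command that followed was stripped; with the configuration file in place it reduces to simply:

```shell
webpack
```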
Once successfully built open the index.html file once again in the browser and you should see the message “Hello world from a React component”.
Component style imports
Although it may not feel like it right now, you have come a long way toward building some meaningful components for our application. Before we move on to these though, let us first install some new loaders for Webpack that will allow us to include styling for our components. In this series we will be using LESS to write our CSS code so we will need to install the less, css and style loaders.
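The install command was not preserved; based on the loaders named here it would have been along these lines (the `less` compiler package is also needed by less-loader):

```shell
npm install --save-dev style-loader css-loader less-loader less
```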
The style-loader is responsible for injecting the component styles into the head tag of the index.html page, the css-loader for parsing the css rules from our stylesheets and the less loader for generating these rules from our LESS files. For these loaders to work we need to add a new loader entry to our webpack.config.js file to handle files with the .less file extension.
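The new entry (in webpack 1's chained-loader syntax, applied right to left: less, then css, then style) would look something like this fragment, added to the loaders array:

```javascript
{
  test: /\.less$/,
  loader: 'style!css!less'
}
```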
We can now add a style sheet for our App component. In the ./client/components/app directory create a new file named style.less containing the following styles and save it.
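The stylesheet itself was lost; a guess that matches the result described in the article (grey page background, centred white box), where the `.app` class selector is an assumption about what the component renders:

```less
@page-background: #eeeeee;

body {
  background: @page-background;
}

.app {
  max-width: 400px;
  margin: 50px auto;
  padding: 20px;
  background: #ffffff;
  text-align: center;
}
```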
Import these styles into the App component in ./client/components/app/index.js by adding a new import statement for the style sheet.
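The added line is just an import with no binding:

```javascript
import './style.less';
```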
Note that the import statement specifies no name for the module being imported as we do not require any reference to the imported styles in our code. All we want here is for Webpack to build them into the page for us. Run the webpack command once again to build the bundle and open the index.html page in the browser. If successfully built you should see the component message displayed on a grey background in a white box in the centre of the page.
We’ve covered a lot in this article but there is much more in Webpack for us to explore yet. In the next article we will be setting up our application server with Express so that we are able to serve our index.html page dynamically. After that we will return to building React.js components with the addition of development tooling that will automatically build our bundle after each change and reload our application for us.
If you’ve enjoyed the series so far why not sign up for free and receive updates about new articles as we release them.
I want to use stylus so the code is
I was going to ask how to hookup browser-sync with webpack so you don’t have to keep doing webpack to rebuild the script, but I hope that’s coming in the next article.
Great tutorial so far!
Yes you can use Stylus or Sass interchangeably for LESS if you wish.
We will be setting up webpack-dev-server, hot loading and nodemon very soon so stay tuned for that!
So I tried the first webpack command and it threw an error. Not knowing what to do, I just tried my hand at creating the webpack.config.js that is later in the tutorial and the command worked after that.
Just wondering how it’s working for others without creating this file, but not for me.
Hey Sam, what was the error you were seeing? If you post the stack trace I may be able to help you understand what was wrong.
This one in my case :
C:\Users\thomas\Desktop\aciiid\test_aciiid>webpack ./client/index.js ./bundle.js –module-bind ‘js=babel?{“presets”: [“es2015″,”react”]}’
Hash: 90d2ddada0a452b50606
Version: webpack 1.12.9
Time: 82ms
Asset Size Chunks Chunk Names
[es2015,react]}’ 1.67 kB 0 [emitted] main
[0] multi main 40 bytes {0} [built] [1 error]
[1] ./client/index.js 0 bytes [built] [failed]
[2] ./bundle.js 0 bytes {0} [built]
ERROR in ./client/index.js
Module parse failed: C:\Users\thomas\Desktop\aciiid\test_aciiid\client\index.js Line 1: Unexpected token
You may need an appropriate loader to handle this file type.
| import React from ‘react’;
| import ReactDOM from ‘react-dom’;
|
@ multi main
Possibly a version issue, you could try moving the presets to a .babelrc file as documented here
Amazing article. I’ve been looking at webpack/React tutorials all day and yours is the only one that is not only concise, but also up to date with the most recent versions.
Thanks for the great tutorial! Small heads up — the second webpack.config.js snippet is missing the babel query presets.
Thanks, I probably moved them to .babelrc but forgot to document it!
Anxiously awaiting the next article.
I’ll do it soon! I’ve been very busy recently but plan to crack on with these ASAP.
Thanks!
It would be extremely helpful if you could add it back. I was following the tutorial and got stuck, tried to fix it for a while, then started over cutting and pasting your code exactly which didn’t help. It took me several hours to figure this out. I was thinking it was my machine config since after the first try, I cut and pasted the code.
When dealing with a tutorial like this, I assume I did something wrong and kept rechecking my code. I am sure a lot of newbies will get stuck here and won’t be able to do the rest of the tutorial.
Thanks for the tutorial.
Done, apologies for that!
I agree with previous comments, it is awesome that you have written these articles. Thank you very much for taking your time to do so, this really helps to get onboard with React.js and Node.
Thanks, I’m glad that you appreciate them.
Hi thanks for this awesome tutorial!
I had an error while building the app with the React component :
ERROR in ./client/index.js
Module not found: Error: Cannot resolve module ‘components/app’ in /Users/damien/Documents/react-iso/client
@ ./client/index.js 11:11-36
To fix this i’ve replaced
import App from ‘components/app’;
By
import App from ‘./components/app’;
It could be a version issue (my babel-core version is 6.5.1).
Thanks. Only reason I can think you saw that error would be not having the components alias set in your webpack configuration.
Any advice on writing tests for the hello world? thanks!
Thanks for this great tutorial!
I received some Errors when running:
webpack ./client/index.js ./bundle.js –module-bind ‘js=babel?{“presets”: [“es2015″,”react”]}’
I had to replace the single quation marks with the double ones and escaped the double one in the text:
webpack ./client/index.js ./bundle.js –module-bind “js=babel?{\”presets\”: [\”es2015\”,\”react\”]}”
This worked for me.
Yes, I had the same problem, but i got around it by using double quotes for js=babel……to end
and used single quotes for thee json block within
webpack ./client/index.js ./bundle.js
–module-bind “js=babel?{‘presets’: [‘es2015’, ‘react’]}”
I think thats it. Thanks sooo much! struggled for over an hour! I just reversed the singles and double quotes BTW. “js=babel?{‘presets’: [‘es2015′,’react’]}” and it worked. Makes sense now. The terminal didn’t like the single quotes.
Not a bad tut so far. My only challenge has been you kind of skipped the part about what to do about the index.html file. It’s gone in the sample code, and you keep mentioning all this dynamic stuff happening. If it’s a static file, it’s always going to give me the same message. Not sure where react was supposed to come into play.
index.html gets swapped for the server views which are built with handlebars and render the application on the server side initially then on the client once the scripts are loaded.
Great tutorial. Having issues when using ‘babel-node server’ when adding react to the express index.js. I keep getting and error:
ReferenceError: react is not defined
at Object. (app.js:98:19)
It seems like import react but this is in the generated file so am unsure what to do.
Any suggestions?
Hey, sorry I didn’t respond sooner! Sounds like something to do with the Webpack build, do you have React installed for the server module?
NB: npm install for the current project gives a warning (“WARN deprecated graceful-fs@3.0.8”). The dependency is express-handlebars: that 3.0.0 version (from January 2016) now depends on a current version of graceful-fs. I updated package.json to say:
“express-handlebars”: “^3.0.0”,
and all is well
Hi Phil, excellent tutorial series compared with many others.
I like that you show code and then explain what’s happening. Saves significant time and also ramps up speed of understanding. I look forward to more 🙂
Great tutorial.
In the webpack.config.js, I needed to add this:
const path = require(‘path’);
Only then the webpack command stopped failing.
Scratch that 🙂 I actually scrolled down and missed the require statement:) My bad.
reducer
The form reducer. Should be mounted to your Redux state at `form`.
If you absolutely must mount it somewhere other than `form`, you may provide a `getFormState(state)` function to the `reduxForm()` decorator, to get the slice of the Redux state where you have mounted the `redux-form` reducer.
If you're using Immutablejs to manage your Redux state, you MUST import the reducer from 'redux-form/immutable'.
ES5 Example
```js
var redux = require('redux');
var formReducer = require('redux-form').reducer;
// Or with Immutablejs:
// var formReducer = require('redux-form/immutable').reducer;

var reducers = {
  // ... your other reducers here ...
  form: formReducer
};
var reducer = redux.combineReducers(reducers);
var store = redux.createStore(reducer);
```
ES6 Example
```js
import { createStore, combineReducers } from 'redux';
import { reducer as formReducer } from 'redux-form';
// Or with Immutablejs:
// import { reducer as formReducer } from 'redux-form/immutable';

const reducers = {
  // ... your other reducers here ...
  form: formReducer
};
const reducer = combineReducers(reducers);
const store = createStore(reducer);
```
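Why the mount point matters can be sketched without the real libraries: `combineReducers` simply calls each reducer with its own slice of the state, so the key (`form`) determines where the form data lives. The functions below are hand-rolled stand-ins, not the actual redux or redux-form implementations.

```javascript
// Hand-rolled stand-ins for illustration only
function combineReducers(reducers) {
  return function (state, action) {
    state = state || {};
    var next = {};
    Object.keys(reducers).forEach(function (key) {
      // Each reducer only ever sees its own slice of the state
      next[key] = reducers[key](state[key], action);
    });
    return next;
  };
}

// Toy form reducer: the real one tracks registered fields, values, etc.
function formReducer(state, action) {
  return state || { fields: {} };
}

var rootReducer = combineReducers({ form: formReducer });
var state = rootReducer(undefined, { type: '@@INIT' });
console.log(Object.keys(state)); // → [ 'form' ]
```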
Created by Erik Rasmussen
AOP – Encrypting with AspectJ using an Annotation
This is a continuation of AOP – Encrypting with AspectJ. It is expected you have read (or skimmed) that article first. This article uses the same project. The files in the project so far are these:
- Encrypt.java
- EncryptFieldAspect.aj
- FakeEncrypt.java
- Main.java
- Person.java
The final version of EncryptFieldAspect.aj in the previous article is not exactly a “good” Aspect. The problem with it is that it isn’t very reusable. There is a one to one relationship between this Aspect and the field it encrypts. We want a one to many relationship where one Aspect works for encrypting many fields.
Another problem is there is nothing in the objects that would inform a developer that any Aspect Oriented Programming is occurring.
Step 9 – Making the Aspect reusable
Lets make the aspect reusable. Lets make one aspect with a single pointcut and a single around advice work for multiple setters in a class.
- In the Person class, add a fourth field, DriversLicense. This field should also be encrypted.
- Add a getter and setter for the DriversLicense field.
- In the Main class, add sample code to set and later print out the DriversLicense field.
- In the EncryptFieldAspect, change it to work for all setters by using a * after the word set and making it work for any method that takes a string, not just the SSN setter.
```java
package AOPEncryptionExample;

public aspect EncryptFieldAspect {

    pointcut encryptStringMethod(Person p, String inString) :
        call(void Person.set*(String))
        && target(p)
        && args(inString)
        && !within(EncryptFieldAspect);

    void around(Person p, String inString) : encryptStringMethod(p, inString) {
        proceed(p, FakeEncrypt.Encrypt(inString));
        return;
    }
}
```
Ok, now it is reusable but we now have another problem. It is encrypting all the fields, including FirstName and LastName. It is definitely reusable, but now we need something to differentiate what is to be encrypted from what isn’t.
Step 10 – Create an Encrypt annotation
Lets add and use an annotation to mark setters that should use encryption.
- Right-click on the package in Package Explorer and choose New | Annotation.
Note: The package should already be filled out for you.
- Give the annotation a name.
Note: I named mine Encrypt.
- Click Finish.
- Maybe add a comment that this annotation is for use by the Encrypt Aspect.
```java
public @interface Encrypt {
    // Handled by EncryptFieldAspect
}
```
Step 11 – Add the @Encrypt annotation to setters
- In the Person class, add @Encrypt above the setSSN setter.
- Also add @Encrypt above the setDriversLicense setter.
```java
@Encrypt
public void setSSN(String inSSN) {
    SSN = inSSN;
}
```
```java
@Encrypt
public void setDriversLicense(String inDriversLicense) {
    DriversLicense = inDriversLicense;
}
```
Step 12 – Alter the EncryptFieldAspect to work for multiple objects
Well, now that it works for any setter in the Person object that is marked with @Encrypt, the next step is to make it work for any setter marked with @Encrypt no matter what object it is in.
- Remove any reference to Person. We don’t even need it or the parameter ‘p’.
```java
package AOPEncryptionExample;

public aspect EncryptFieldAspect {

    pointcut encryptStringMethod(String inString) :
        call(@Encrypt * *(String))
        && args(inString)
        && !within(EncryptFieldAspect);

    void around(String inString) : encryptStringMethod(inString) {
        proceed(FakeEncrypt.Encrypt(inString));
        return;
    }
}
```
Great, now let’s test it.
Step 13 – Test Using @Encrypt on multiple objects
Ok, let’s create a second object that has an encrypted field to test this. Let’s create a Credentials.java file with UserName and Password fields, getters and setters. Of course the Password should be encrypted.
- Right-click on the package in Package Explorer and choose New | Class.
Note: The package should already be filled out for you.
- Give the class a name.
Note: I named mine Credentials.
- Click Finish.
- Add both a UserName and Password field.
- Add a getter and setter for both.
- Add the @Encrypt annotation above the setPassword setter.
```java
package AOPEncryptionExample;

public class Credentials {

    // UserName
    private String UserName;

    public String getUserName() {
        return UserName;
    }

    public void setUserName(String userName) {
        UserName = userName;
    }

    // Password
    private String Password;

    public String getPassword() {
        return Password;
    }

    @Encrypt
    public void setPassword(String password) {
        Password = password;
    }
}
```
Now lets create a Credentials object in the main() method and print out the values.
```java
package AOPEncryptionExample;

public class Main {

    public static void main(String[] args) {
        Person p = new Person();
        p.setFirstName("Billy");
        p.setLastName("Bob");
        p.setSSN("123456789");
        p.setDriversLicense("987654321");

        System.out.println("Person:");
        System.out.println("    FirstName: " + p.getFirstName());
        System.out.println("    LastName: " + p.getLastName());
        System.out.println("    SSN: " + p.getSSN());
        System.out.println("    Driver's License: " + p.getDriversLicense());
        System.out.println();

        Credentials c = new Credentials();
        c.setUserName("billybob");
        c.setPassword("P@sswd!");

        System.out.println("Person:");
        System.out.println("    UserName: " + c.getUserName());
        System.out.println("    Password: " + c.getPassword());
    }
}
```
Ok, test it. The output should be as follows:
```
Person:
    FirstName: Billy
    LastName: Bob
    SSN: #encrypted#123456789#encrypted#
    Driver's License: #encrypted#987654321#encrypted#

Person:
    UserName: billybob
    Password: #encrypted#P@sswd!#encrypted#
```
Well, now you have learned how to encrypt using Aspect Oriented Programming.
Here are a couple of benefits of encrypting with AOP.
- The crosscutting concern of security is now modular.
- If you wanted to change/replace the encryption mechanism, including the encryption method names, you can do that in a single place, the EncryptFieldAspect object.
- The code to encrypt any field in your entire solution is nothing more than an annotation, @Encrypt.
- The @Encrypt annotation can provide documentation in the class and in API documentation so the fact that encryption is occurring is known.
- You don’t have to worry about developers implementing encryption differently as it is all done in a single file in the same predictable manner.
Congratulations on learning to encrypt using AOP, specifically with AspectJ and Annotations.
What about decryption?
By the way, I didn’t show you how to decrypt. Hopefully I don’t have to, and after reading this you know how. Maybe in your project the design is that the getters return the encrypted values and you don’t need to decrypt. However, maybe in your design you need your getters to decrypt. Well, you should be able to implement a DecryptFieldAspect and a @Decrypt annotation for that.
If you use decrypt on the getter, the output is the same as if there were no encrypt/decrypt occuring. However, it is still enhanced security because the value is stored encrypted in memory and not as plain text in memory.
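A sketch of what that pair might look like, mirroring the encryption aspect in this article. The pointcut here matches annotated getters (no arguments, returning String), and a FakeEncrypt.Decrypt method is assumed to exist alongside FakeEncrypt.Encrypt; this is an illustration, not the downloadable project's exact code.

```java
// Decrypt.java
public @interface Decrypt {
    // Handled by DecryptFieldAspect
}

// DecryptFieldAspect.aj
public aspect DecryptFieldAspect {

    // Any no-argument method returning String that is marked @Decrypt
    pointcut decryptStringMethod() :
        call(@Decrypt String *()) && !within(DecryptFieldAspect);

    String around() : decryptStringMethod() {
        // Decrypt the stored value before returning it to the caller
        return FakeEncrypt.Decrypt(proceed());
    }
}
```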
However, I have a version of the project that decrypts that you can download.
AOP Encryption example project Download
Download the desired version of the project here:
Return to Aspect Oriented Programming – Examples
django-ipgeo application
What is this?
django-ipgeo provides an API for working with the database from ipgeobase.ru
How to use it?
1. Install the django-ipgeo package via pip.
2. Add "ipgeo" to INSTALLED_APPS.
3. Run syncdb.
4. Run "manage.py ipgeo_update".
Use it like:
```python
from ipgeo.models import Range

try:
    rang = Range.objects.find(request.META['REMOTE_ADDR'])
except Range.DoesNotExist:
    print 'Unknown location'
else:
    print 'The country is', rang.country
    if rang.location:
        print 'The city is', rang.location.name
```
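Under the hood, lookups like this typically convert the dotted IP address to an integer and binary-search sorted (start, end) ranges. A dependency-free sketch of that idea in modern Python follows; the sample ranges and helper names are invented for illustration and are not django-ipgeo's actual implementation.

```python
import bisect
import socket
import struct

def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to its integer form."""
    return struct.unpack('!I', socket.inet_aton(ip))[0]

# Hypothetical (start, end, country) ranges, sorted by start address
RANGES = [
    (ip_to_int('10.0.0.0'), ip_to_int('10.255.255.255'), 'private'),
    (ip_to_int('77.88.0.0'), ip_to_int('77.88.63.255'), 'RU'),
]

def find_range(ip):
    """Return the country for ip, or None if no range contains it."""
    value = ip_to_int(ip)
    starts = [start for start, _, _ in RANGES]
    index = bisect.bisect_right(starts, value) - 1
    if index >= 0:
        start, end, country = RANGES[index]
        if start <= value <= end:
            return country
    return None

print(find_range('77.88.10.1'))  # → RU
print(find_range('8.8.8.8'))     # → None
```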
Introduction to Visual Basic and Visual C++ Array. What is Array? Why use Array?
Transcription
Introduction to Visual Basic and Visual C++, Lesson 7: Declare and Assign Array (I154-1-A, Peter Lo)

What is Array? Why use Array?
- An array is a collection of data of the same type. It is implemented in Visual Basic as an object.
- Each individual item in an array that contains a value is called an element.
- Arrays provide access to data by using a numeric index, or subscript, to identify each element in the array.
- Default values: any numeric data type = 0; reference data type = Null; Boolean data type = False.

Suppose that you want to evaluate the exam grades for 30 students and to display the names of the students whose marks qualify. Rather than thirty pairs of variables:

```
Dim student1 As String, mark1 As Integer
Dim student2 As String, mark2 As Integer
Dim student3 As String, mark3 As Integer
```

declare arrays instead (student and mark are the array names, 30 is the upper bound of subscripts in the array, and String and Integer are the data types):

```
Dim student(30) As String
Dim mark(30) As Integer
```

Array Terminology
- Syntax for array declaration: Dim arrayname(n) As datatype
  - 0 is the lower bound of the array
  - n is the upper bound of the array (the last available subscript in this array)
  - The number of elements is the size of the array
- Syntax for assigning values into an array: arrayname(arrayindex) = Value
- Arrays may be initialized when they are created: Dim arrayname() As vartype = {value0, value1, value2, ..., valuen}

Creating a Two-Dimensional Array
- A two-dimensional array holds data that is arranged in rows and columns.
- A two-dimensional array can also be described as an array of arrays.

ReDim Statement
- The size of an array may be changed after it is created: ReDim arrayname(m), where arrayname is the name of the already declared array and m is an Integer literal, variable, or expression.
- To keep any data that has already been stored in the array when resizing it, use ReDim Preserve arrayname(m).

Copying Arrays
- If arrayone() and arraytwo() have been declared with the same data type, then the statement arrayone = arraytwo makes arrayone() an exact duplicate of arraytwo(). It will have the same size and contain the same information.

Using the Length Property
- The Length property of an array contains the number of elements in the array (the upper bound is 1 less than the Length).

The For Each Loop

Searching an Array
- Searching each element in an array is called a sequential search.
- The BinarySearch method searches a sorted array for a value using a binary search algorithm.
- The binary search algorithm searches an array by repeatedly dividing the search interval in half.

Sorting an Array

Passing an Array by Reference
- An array can be passed as an argument to a Sub procedure or a Function procedure.

Menu: Creating a Menu Bar

MenuStrip Object
- A menu bar is a strip across the top of a window that contains one or more menu names.
- A menu is a group of commands, or items, presented in a list.

Standard Items for a Menu
- Visual Basic 2008 contains an Action Tag that allows you to create a full standard menu bar commonly provided in Windows programs.
- Action tags provide a way for you to specify a set of actions, called smart actions, for an object as you design a form.
- Drag the MenuStrip component onto the Windows Form object.
- Click the Action Tag on the MenuStrip object.
- Click Insert Standard Items on the MenuStrip Tasks menu.
- Click File on the menu bar to view the individual menu items and their associated icons on the File menu.

File Handling: Read and Write Text File
- To process data more efficiently, many developers use text files (or binary files) to store and access information for use within an application.
- Text files have an extension that ends in .txt.
- A simple text file is called a sequential file (actually it is usually called an ASCII text file).

Read Text File (StreamReader)

Handling End of File
- ReadToEnd works best when you need to read all the input from the current position to the end of the stream. If more control is needed over how many characters are read from the stream, use Read(Char[], Int32, Int32), which generally results in better performance.
- ReadToEnd assumes that the stream knows when it has reached an end. For interactive protocols, in which the server sends data only when you ask for it and does not close the connection, ReadToEnd might block indefinitely and should be avoided.
- Note that when using the Read method, it is more efficient to use a buffer that is the same size as the internal buffer of the stream. If the size of the buffer was unspecified when the stream was constructed, its default size is 4 kilobytes (4096 bytes).

Save Text File
- To add the ability to write to a file via the application, use the StreamWriter class.
- StreamWriter is designed for character output in a particular encoding, whereas the Stream class is designed for byte input and output.
- Use StreamWriter for writing lines of information to a standard text file.

Opening a Text File
- To open a text file, you need an object available in the System.IO namespace called a StreamReader:

```
Dim objreader As IO.StreamReader
If IO.File.Exists("e:\file.txt") Then
    objreader = IO.File.OpenText("e:\file.txt")
Else
    MsgBox("Error! File not Exist.")
    Me.Close()
End If
```

Reading a Text File
- To determine whether the end of the file has been reached, use the Peek procedure of the StreamReader object.

Writing to a Text File
- Writing to a text file is similar to reading a text file. The System.IO namespace also includes the StreamWriter class, which is used to write a stream of text to a file.
- The StreamReader should be closed at the end.

Multiple Forms: Multiple-Document Interface (MDI)
- Multiple-document interface (MDI) applications allow you to display multiple documents at the same time, with each document displayed in its own window.
- MDI applications often have a Window menu item with submenus for switching between windows or documents.

Creating an Instance of a Windows Form Object
- To display a second or subsequent form, the initial step in displaying the form is to create an instance of the Windows Form object.
- When creating multiple Windows Form objects, Visual Basic allows you to generate two types of forms:
  - A Modal Form retains the input focus while open: ShowDialog()
  - A Modeless Form allows you to switch the input focus to another window: Show()

Startup Objects
- Every application begins executing a project by displaying the object designated as the Startup object.

Application Class
- If you have an application with multiple forms (windows), you can exit the application and close all the open forms (windows) by using the Exit method of the Application class: Application.Exit()

The FormClosing Event
- Occurs when a form is about to be closed by the program code or by the user.
- Allows you to trap the closing action and take any necessary actions, such as saving data.
- Can be used to cancel the close action: set e.Cancel = True to cancel the closing action.
- A form may be closed by the Me.Close() statement or by the user clicking the Close button on the title bar.

Example of Form Closing Event

Message Box: Interactive Dialog Box

The MessageBox.Show Method
- Displays a message box with text, one or more buttons, and an icon.
- When a message box is displayed, the program waits until the user selects a button.
- MessageBox.Show returns an integer value indicating which button the user selected.
- DialogResult values include: Windows.Forms.DialogResult.Yes, Windows.Forms.DialogResult.No

How to use MessageBox.Show

Style for Message Dialog

Drawing: Bitmap, Graphic & Pen

Graphic Object
- The Graphics object provides methods for drawing a variety of lines and shapes. Simple or complex shapes can be rendered in solid or transparent colors, or using user-defined gradient or image textures.
- Lines, open curves, and outline shapes are created using a Pen object.
- To fill in an area, such as a rectangle or a closed curve, a Brush object is required.

Pen and Brush Object

Catching Errors: Exception Handling
- Exceptions are handled in Visual Basic by displaying an error message and then abruptly terminating the application.
- The Try ... Catch statement is used to take control of the exception handling in your code.
- Error capture and handling (from Clearly Visual Basic: Programming with Visual Basic).

Exception Handling Example
- The Try-Catch set of statements detects exceptions and takes corrective action.

Exception Type

Deploying with an MSI File: Overview
- Deploy your application to users.
- Windows Installer creates .msi files.
- Proven technology but no panacea; the best option for traditional deployment.
- Extended to support the needs of .NET.
- Visual Studio .NET 2003: uses Windows Installer 2.0, with Setup and Deployment project templates.

Deploying the .NET Framework
- The Framework is not installed automatically.
- Distribute using Dotnetfx.exe; it cannot be included in a Visual Studio .NET Setup Project.
- Deployment options: manual installation, bootstrapper, Electronic Software Distribution, Active Directory or SMS.

Publishing an Application with ClickOnce Deployment
- After an application is completely debugged and working properly, you can deploy the project.
- Deploying a project means placing an executable version of the program on your hard disk, on a Web server, or on a network server.
- When programming using Visual Basic 2008, you can create a deployed program by using ClickOnce Deployment.
- The deployed version of the program you create can be installed and executed on any computer that has the .NET Framework installed.
I’m creating a new method in the SPI definition of the TinyCLR port for the G120. How does that method show up in Visual Studio?
How do new methods show up in Visual Studio when you update TinyCLR?
Creating a new definition in SPI where? I’m not sure what you mean
I’m trying to create a new method in the C++ file: LPC17_SPI.cpp
TinyCLR_Result LPC17_Spi_TransferFullDuplexWithOffsets(const TinyCLR_Spi_Provider* self, const uint8_t* writeBuffer, size_t& writeOffset, size_t& writeLength, uint8_t* readBuffer, size_t& readOffset, size_t& readLength, size_t& startReadOffset) {
}
in the SPI source file. But how do I get this method to show up in Visual Studio IntelliSense after I compile the firmware?
So then, it didn’t compile everything. How do I do a complete compile?
You don’t; the only way would be to get John to add it, as all the magic is closed and precompiled.
Looks like “…Read the code; fork the code; fix the code; problem solved…” does not work.
I guess not, Kevin.
Hmm, this question’s doing my head in!!!
So it was an excuse: I pulled down the TinyCLR porting source and managed to compile the firmware for a G120 for the first time.
That was fun, enjoyed seeing that get built and seeing those G120Firmware.glb files appear!
Um, so the reason this is doing my head in (I’m stupid so it’s easy!) is a few reasons, and I might be having a brain-fade day, but I’ve just confused myself big time thinking about this. So I thought I’d embarrass myself, as maybe I’ll learn something!
First I thought this sounds doable, but then went ummm err!!!??
I thought of using a mock extension method, as a workaround just to get it to appear in IntelliSense, but…
From what I think you’re saying, you roughly want to add a method to the SPI controller in the G120 firmware, and then see that new method pop up in the IntelliSense in Visual Studio?
Um… but isn’t this native code on the hardware, and C# CLR managed code in VS?
The DLLs in VS, which are installed via the NuGet packages, are C# libraries.
To call the ‘new method’ in the G120 firmware, would you have to call out ‘invoke’ from C# to native code?
Oh man! This is doing my head in.
Can some smart person please explain all this as my head is spinning!!!
Can we get an overview of the firmware / what the source ‘builds’ / what’s in the NuGet GHIElectronics.TinyCLR packages, and how those DLLs are built (do we not have access to the source for these?)
I’m also confused about what ‘source.zip’ is at
Seems both links pull down “TinyCLR_Core.0.6.0.zip” even though the source link points to “v0.6.0.zip”; maybe GitHub works like that and I’ve never noticed?
Just adding a new method in the CPP file isn’t enough. There are two separate concepts in TinyCLR: interops and APIs.
Interops are what allow you to call from managed code into native code. You can’t add more interops to an existing class in our library just like you can’t add new methods to our libraries. You can, however, create your own library that has its own interops. Interops are very similar to our old RLP, but now they’re much more integrated with the core and easier to use (there are many more improvements coming).
APIs are just a published way of interacting with various services. All the services we define are listed in TinyCLR.h. You cannot add your own methods to this file, but you can create your own API. You’ll just have to distribute your definition to other users. If you reuse an existing TinyCLR_Api_Type, you’ll be expected to conform to the corresponding API we defined. 0x80000000 and up are reserved for custom types. Particularly useful is the "API Provider" API itself. It is used to find other APIs in the system, and an instance of it is passed to each interop call and to the firmware port on startup.
The real power is when these two are combined. In an interop, you’re running native code so you can access any device registers you need. What may be easier is to find and interact with an existing API in the system. Some APIs you may have to, like time and memory. The interop API is very useful since it allows you to marshal data to and from managed code, raise managed events, and interact with and create managed objects.
For GHIElectronics.TinyCLR.Devices (and our other libraries), the managed and interop APIs are very similar. But you’re free to create your own if you need to. The TinyCLR core no longer has any knowledge of any specific device peripheral. All that it needs is passed to the various
TinyCLR_Startup functions in the firmware
main(). Particularly: heap location, device name and version, the debugger API, plus the deployment, time, interrupt, and power APIs. The main.cpp we provide is just a reference implementation. You can always create your own as long as you call the functions as required.
Based on your other thread, it seems the SPI API is a bit inefficient since it requires exact-sized arrays. I agree it’s not ideal, but we’re following the Windows IoT API in our official GHIElectronics.TinyCLR.Devices package. While we’re investigating a more low-level package that Devices will build on top of, in the meantime you can always create your own custom SPI API if you need.
Just clone ports and get the latest core library. Clone whichever device you’re starting with, say the G120, and change the name and various IDs as required in the Device.h file. Depending on the changes you want to make, cloning a new port may be required. In either case, change
LPC17_Spi_GetApi to use the API struct that you define (make sure to follow the pattern from TinyCLR.h exactly and use function pointers) and update the read write functions to take an additional index and offset. Update the API name and author, type too to custom.
Now that you’ve defined your API, you actually need to use it. So you’ll want to create a new class library that has whatever interops you deem appropriate, likely mirroring what you have in your custom API. Design the managed API as you see fit as well. Then in the native code for your interops, you’ll get an instance of the API provider API from the
TinyCLR_Interop_MethodData parameter in the interop. You can use this to get an instance of your custom SPI provider that you interact with by calling the function pointers. You’ll also need to get an instance of the interop api so you can read the parameters passed to the function.
Of course, you could just implement the SPI functions directly in your interop and not use an API at all. Defining an API has the benefit that other systems in the firmware can use your API. A long-term goal of ours is that you can distribute that API and interop source with a NuGet package that gets built and shipped along to the device by the build system. So you’ll add a reference to some other NuGet package and get to use its native functionality.
The reason you need to use the Custom API type for another SPI provider is that when fetching APIs, if you get one back that has the SpiProvider API type, it is expected to conform to the SPI API we define in TinyCLR.h.
You can find and interact with the various APIs and interops registered on the system with the types available under the
System.Runtime.InteropServices namespace.
Keep in mind master is the stable 0.6.0 release while dev is changing a lot between releases. The STM32F4 port is currently the cleanest but there is still a lot of work we want to do on all of them.
One unfortunate limitation in the current interop setup is that you need to compile an interop to a specific window in memory and use that in your linker script, then pass the address of a specific object to the interop Add method. So you’ll need to create custom compilations for each device you want to support. To support this, we’ve set aside a few KB in RAM in each device that you can use to put your interops. We want to improve this story going forward, perhaps with dynamic loading and fixups.
The interop and API docs under porting have some more specific info and steps as well.
Had to get a cup of coffee to sit down and read that overview.
Ok, Onward! | https://forums.ghielectronics.com/t/how-do-new-methods-show-up-in-visual-studio-when-you-update-tinyclr/20867 | CC-MAIN-2019-04 | en | refinedweb |
Buttons communicate the action that will occur when the user touches them.
Material buttons trigger an ink reaction on press. They may display text, imagery, or both. Flat buttons and raised buttons are the most commonly used types.
Flat buttons are text-only buttons. They may be used in dialogs, toolbars, or inline. They do not lift, but fill with color on press.
Outlined buttons are text-only buttons with medium emphasis. They behave like flat buttons but have an outline and are typically used for actions that are important, but aren’t the primary action in an app.
Raised buttons are rectangular-shaped buttons. They may be used inline. They lift and display ink reactions on press.
A floating action button represents the primary action in an application. Shaped like a circled icon floating above the UI, it has an ink wash upon focus and lifts upon selection. When pressed, it may contain more related actions.
Only one floating action button is recommended per screen to represent the most common action.
The floating action button animates onto the screen as an expanding piece of material, by default.
A floating action button that spans multiple lateral screens (such as tabbed screens) should briefly disappear, then reappear if its action changes.
The Zoom transition can be used to achieve this. Note that since both the exiting and entering
animations are triggered at the same time, we use
enterDelay to allow the outgoing Floating Action Button's
animation to finish before the new one enters.
Icon buttons are commonly found in app bars and toolbars.
Icons are also appropriate for toggle buttons that allow a single choice to be selected or deselected, such as adding or removing a star to an item.
Sometimes you might want to have icons for certain buttons to enhance the UX of the application, as we recognize logos more easily than plain text. For example, if you have a delete button you can label it with a dustbin icon.
If you have been reading the overrides documentation page but you are not confident jumping in, here's an example of how you can change the main color of a Button.
The Flat Buttons, Raised Buttons, Floating Action Buttons and Icon Buttons are built on top of the same component: the
ButtonBase.
You can take advantage of this lower level component to build custom interactions.
One common use case is to use the button to trigger a navigation to a new page.
The
ButtonBase component provides a property to handle this use case:
component.
Given that a lot of our interactive components rely on
ButtonBase, you should be
able to take advantage of it everywhere:
import { Link } from 'react-router-dom'
import Button from '@material-ui/core/Button';

<Button component={Link} to="/open-collective">
  Link
</Button>
or if you want to avoid properties collisions:
import { Link } from 'react-router-dom'
import Button from '@material-ui/core/Button';

const MyLink = props => <Link to="/open-collective" {...props} />

<Button component={MyLink}>
  Link
</Button>
Note: Creating
MyLink is necessary to prevent unexpected unmounting. You can read more about it here. | https://material-ui-next.com/demos/buttons/ | CC-MAIN-2019-04 | en | refinedweb |
This is a very exciting day for me, as two major projects I am deeply involved with are having a major launch. First of all, Fedora Workstation 24 is out, which crosses a few critical milestones for us. Maybe most visible is that this is the first time you can use the new graphical update mechanism in GNOME Software to take you from Fedora Workstation 23 to Fedora Workstation 24. This means that when you open GNOME Software it will show you an option to do a system upgrade to Fedora Workstation 24. We have been testing and doing a lot of QA work around this feature, so my expectation is that it will provide a smooth upgrade experience for you.
The second major milestone is that we feel Wayland is now in a state where the vast majority of users should be able to use it on a day-to-day basis. We have been working through the kinks and resolving many corner cases during the previous six months, with a lot of effort put into making sure that the interaction between applications running natively on Wayland and those running using XWayland is smooth. For instance, one item we crossed off the list early in this development cycle was adding middle-mouse-button cut and paste, as we knew that was a crucial feature for many long-time Linux users looking to make the switch. So once you have updated, I ask all of you to try switching to the Wayland session by clicking on the little cogwheel in the login screen, so that we get as much testing of Wayland as possible during the Fedora Workstation 24 lifespan. Feedback provided by our users during the Fedora Workstation 24 lifecycle will be crucial information to allow us to make the final decision about Wayland as the default for Fedora Workstation 25. Of course the team will be working ardently during Fedora Workstation 24 to make sure we find and address any niggling issues left.
In addition to that there is also of course a long list of usability improvements, new features and bugfixes across the desktop, both coming in from our desktop team at Red Hat and from the GNOME community in general.
There was also the formal announcement of Flatpak today (be sure to read that press release), which is the great new technology for shipping desktop applications. For those of you who have read my previous blog entries, you have probably seen me talking about this technology under its old name, xdg-app. Flatpak is an incredible piece of engineering designed by Alexander Larsson that we developed alongside a lot of other components.
Because as Matthew Garrett pointed out not long ago, unless we move away from X11 we cannot really produce a secure desktop container technology, which is why we kept such a high focus on pushing Wayland forward for the last year. It is also why we invested so much time into Pinos, which is, as I mentioned in my original announcement of the project, our video equivalent of PulseAudio (and yes, a proper Pinos website is getting close :). Wim Taymans, who created Pinos, has also been working on patches to PulseAudio to make it more suitable for use with sandboxed applications, and those patches have recently been taken over by community member Ahmed S. Darwish, who is trying to get them ready for merging into the main codebase.
We are feeling very confident about Flatpak as it has a lot of critical features designed in from the start. First of all it was built to be a cross-distribution solution from day one, meaning that making Flatpak run on any major Linux distribution out there should be simple. We already have Simon McVittie working on Debian support, we have Arch support, and there is also an Ubuntu PPA that the team put together that allows you to run Flatpaks fully featured on Ubuntu. And Endless Mobile has chosen Flatpak as their application delivery format going forward for their operating system.
We use the same base technologies as Docker, such as namespaces, bind mounts and cgroups, for Flatpak, which means that any system out there wanting to support Docker images would also have the necessary components to support Flatpaks. It also means that we will be able to take advantage of the investment and development happening around server-side containers.
Flatpak also makes heavy use of another exciting technology, OSTree, which was originally developed by Colin Walters for GNOME. This technology is actually seeing a lot of investment and development these days as it became the foundation for Project Atomic, which is Red Hat's effort to create an enterprise-ready platform for running server-side containers. OSTree provides us with a lot of important features like efficient storage of application images and a very efficient transport mechanism. For example, one core feature OSTree brings us is de-duplication of files, which means you don't need to keep multiple copies of the same file on your disk; so if ten Flatpak images share the same file, then you only keep one copy of it on your local disk.
Another critical feature of Flatpak is its runtime separation, which basically means that you can have different runtimes for some families of usecases. So for instance you can have a GNOME runtime that allows all your GNOME applications to share a lot of libraries yet giving you a single point for security updates to those libraries. So while we don’t want a forest of runtimes it does allow us to create a few important ones to cover certain families of applications and thus reduce disk usage further and improve system security.
Going forward we are looking at a lot of exciting features for Flatpak. The most important of these is the thing I mentioned earlier, Portals.
In the current release of Flatpak you can choose between two options: either make it completely sandboxed or not sandboxed at all. Portals are basically the way you can sandbox your application yet still allow it to interact with your general desktop and storage. For instance, Pinos's and PulseAudio's role for containers is to provide such portals for handling audio and video. Of course more portals are needed, and during the GTK+ hackfest in Toronto last week a lot of time was spent on mapping out the roadmap for Portals. Expect more news about Portals as they are developed going forward.
I want to mention that we of course realize that a new technology like Flatpak should come with a high-quality developer story, which is why Christian Hergert has been spending time working on support for Flatpak in the Builder IDE. There is some support in place already, but expect to see us flesh this out significantly over the next months. We are also working on adding more documentation to the Flatpak website, covering how to integrate more build systems and the like with Flatpak.
And last, but not least Richard Hughes has been making sure we have great Flatpak support in Software in Fedora Workstation 24 ensuring that as an end user you shouldn’t have to care about if your application is a Flatpak or a RPM.
16 comments ↓
FWIW, is still a complete showstopper for Wayland for me. I cannot use it for practical work until that’s fixed. Most anyone who uses a password manager will tell you the same. I can hack manually re-typing long, random passwords full of special characters for about a day, then I run screaming back to X…
Thanks for pointing me to that Adam, I will check with the guys what the plan is and let you know.
FWIW, for me is close to the point where “I cannot use wayland for practical work until that’s fixed.”
Agreed. While I doubt I’ll ever use GNOME 3 (too much difference of opinion on how a desktop should work and how much it should weigh), this will definitely be a major factor in slowing my move to Wayland.
I type 20+-character hard-random passwords using Win+P and I value configurable window tiling enough that I wrote and maintain QuickTile, which exploits X11’s insecure design philosophy to monkey-patch it into any WM.
15 years ago, when I was a teenager, I’d have been excited about Wayland. Now, I’ve got actual work to get done, I can just as easily enjoy the thrill of being on the cutting edge by learning Rust, and I don’t need another source of drudgework to burn me out.
Drudgework? I don’t think you really thought this through did you? There’s practically zero work involved in switching to Wayland for users.
Yet another change averse moaner….
Fair point, that’s not implemented yet because that’s kind of a layering break. clutter/libst need to learn about mutter’s clipboard implementation, and a x11-only one should be available too.
In 3.21/master we’ve folded an internal copy of clutter into mutter, so it will be certainly easier to implement now.
Could you please compare Flatpak and AppImage?
I’m disapointed about à new format war instead of an unified solution in the Linux world.
Well I think it would be better if more or less neutral third parties did that comparison as any comparison I would do would likely be met with claims of bias. In general though I think the important differences you would find are things like the runtime separation, ostree functionality like de-duplication and the sandboxing model with portals.
Mouse pointer movements are not smooth enough under Wayland; I see some annoying lags.
I think that is a fixed libinput issue; at least I know there were some issues like that caused by missing, but now implemented, libinput support.
Seamless upgrade. Good job.
I was so excited when middle-click paste landed in Wayland and then I discovered it’s not fully backwards-compatible. :(
I use vim with X11 clipboard support and :set clipboard=unnamed, inside a gnome-terminal tab, which means I'm used to copying text in vim using 'yy' and pasting it into Chromium using the middle mouse button. And also selecting text in Chromium and then pasting it into vim by hitting 'p'. Neither of these work in Wayland. :(
I suspect it’s because Xwayland doesn’t export the PRIMARY selection.
Who cares how you accomplish copying and pasting…? Just use the Wayland equivalent and move on.
Marius, what mechanism does vim use to detect the availability of a X11 clipboard? mutter 3.20 indeed implemented the PRIMARY selection to X11 clients, that’s mapped to the wp_primary_selection protocol for gtk+ wayland applications, so c&p using middle click works across clients.
So whatever happens in your case, I think it’s around proper detection of this feature.
– Does the feature work if you run on GDK_BACKEND=x11 gnome-terminal?
– Does the feature work if you run on xterm?
– Does middle click pasting work otherwise between those clients?
Anyhow, if you file a bug, I’ll have a look at this.
I know little about vim internals, but I believe it uses Xlib directly to talk to the X server when DISPLAY is set.
Middle-click paste works fine for Xwayland clients *when I use middle-click to paste*. It doesn’t work when I use key mappings to ask vim to copy/paste.
I seem to remember something about a Wayland security feature that the contents of the selection aren’t made available to a Wayland client unless the user actually used the middle mouse button on a window that belongs to that client, to prevent passive password sniffing attacks. I thought that was the reason.
Thank you for the encouragement of filing a bug! I probably will, once I get over my current burnout phase.
Christian, is there a target date in mind for the ‘Software’ update that will allow the upgrade from F23 to F24?
The suspense is killing me!
Enter Docker containers. Containers make it possible to isolate applications into small, lightweight execution environments that share the operating system kernel. Typically measured in megabytes, containers use far fewer resources than virtual machines and start up almost immediately. They can be packed far more densely on the same hardware and spun up and down en masse with far less effort and overhead.
Thus containers provide a highly efficient and highly granular mechanism for combining software components into the kinds of application and service stacks needed in a modern enterprise, and for keeping those software components updated and maintained.
Docker container basics
Docker containers are the most modern incarnation of an idea that has been in Unix operating systems such as BSD and Solaris for decades—the idea that a given process can be run with some degree of isolation from the rest of the operating environment.
Virtual machines provide isolation by devoting an entire operating system instance to each application that needs compartmentalizing. This approach provides almost total isolation, but at the cost of significant overhead. Each guest operating system instance eats up memory and processing power that could be better devoted to apps themselves.
Containers take a different approach. Each application and its dependencies use a partitioned segment of the operating system’s resources. The container runtime (Docker, most often) sets up and tears down the containers by drawing on the low-level container services provided by the host operating system.
To understand Linux containers, for example, we have to start with cgroups and namespaces, the Linux kernel features that create the walls between containers and other processes running on the host. Linux namespaces, originally developed by IBM, wrap a set of system resources and present them to a process to make it look like they are dedicated to that process.
Linux cgroups, originally developed by Google, govern the isolation and usage of system resources, such as CPU and memory, for a group of processes. For example, if you have an application that takes up a lot of CPU cycles and memory, such as a scientific computing application, you can put the application in a cgroup to limit its CPU and memory usage.
Namespaces deal with resource isolation for a single process, while cgroups manage resources for a group of processes. Together, cgroups and namespaces were used to create a container technology called, appropriately enough, Linux Containers, or LXC.
How the virtualization and container infrastructure stacks stack up.
How Docker changed containers
The original Linux container technology, LXC, is a Linux operating-system-level virtualization method for running multiple isolated Linux systems on a single host. Namespaces and cgroups make LXC possible. Docker built on top of these primitives, bringing cloud-like flexibility to any infrastructure capable of running containers.
Docker also provides a way to create container images—specifications for which software components a given container would run and how. Docker’s container image tools allow a developer to build libraries of images, compose images together into new images, and launch the apps in them on local or remote infrastructure.
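As a concrete, purely hypothetical illustration of such a container image specification, a Dockerfile for a small Python application might look like the sketch below; the file names and commands are assumptions for the sake of the example, not taken from this article.

```dockerfile
# Hypothetical example: an image for a small Python app.
# Start from an official base image layer:
FROM python:3-slim
WORKDIR /app
# Bake the dependencies into an image layer:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add the application code and define the startup process:
COPY . .
CMD ["python", "app.py"]
```

Building this (for example with `docker build -t myapp .`) produces an immutable image that can be launched locally, composed with other images, or pushed to a registry.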
Docker also makes it easier to coordinate behaviors between containers, and thus build application stacks by hitching containers together. More advanced versions of these behaviors—what’s called container orchestration—are offered by third-party products, such as Kubernetes. But Docker provides the basics.
By taking the LXC concept and building an API and ecosystem around it, the developers of Docker have made working with containers far more accessible to developers and far more useful to enterprises.
Finally, although Docker was originally built atop LXC, eventually the Docker team created its own runtime, called libcontainer. Libcontainer not only provides a richer layer of services for containers, but also makes it easier for the Docker team to develop Docker container technology separately from Linux.
Today, Docker is a Linux or Windows utility that can efficiently create, ship, and run containers. Containers are not a cure-all, though: some of their limitations are by design, and some are by-products of their nature.
Microsoft offers two types of containers on Windows that blur the lines slightly between containers and virtual machines:
- Windows Server Containers are essentially Docker-style containers on Windows. Microsoft provided the Windows kernel with some of the same mechanisms used in Linux to perform the isolation, so Docker containers could have the same behaviors on both platforms.
- Hyper-V Containers are containers that run in their own virtual machine with their own kernel for additional isolation. Thus Hyper-V Containers can run different versions of the Windows kernel if needed. Conventional containers can be converted to Hyper-V Containers if the need arises.
Keep in mind that, while Hyper-V Containers run on the Hyper-V hypervisor and take advantage of Hyper-V isolation, they are still a different animal than full-blown virtual machines.
Docker containers don’t provide bare-metal speed
Containers don’t have nearly the overhead of virtual machines, but their performance impact is still measurable. If you have a workload that requires bare-metal speed, a container might be able to get you close enough—much closer than a VM—but you’re still going to see some overhead.
Docker containers are stateless and immutable
Containers boot and run from an image that describes their contents. That image is immutable by default—once created, it doesn’t change.
Consequently, containers don’t have persistency. If you start a container instance, then kill it and restart it, the new container instance won’t have any of the stateful information associated with the old one.
This is another way containers differ from virtual machines. A virtual machine has persistency across sessions by default, because it has its own file system. With a container, the only thing that persists is the image used to boot the software that runs in the container; the only way to change that is to create a new, revised container image.
On the plus side, the statelessness of containers makes the contents of containers more consistent, and easier to compose predictably into application stacks. It also forces developers to keep application data separate from application code.
If you want a container to have any kind of persistent state, you need to place that state somewhere else. That could be a database or a stand-alone data volume connected to the container at boot time.
Docker containers are not microservices
I mentioned earlier how containers lend themselves to creating microservices applications. That doesn’t mean taking a given application and sticking it into a container will automatically create a microservice. A microservices application must be built according to a microservice design pattern, whether it is deployed in containers or not. It is possible to containerize an application as part of the process of converting it to a microservice, but that’s only one step among many.
When virtual machines came along, they made it possible to decouple applications from the systems they ran on. Docker containers take that idea several steps further—not just by being more lightweight, more portable, and faster to spin up than virtual machines, but also by offering scaling, composition, and management features that virtual machines can’t.
So, I have a new Raspberry Pi 3, a new Raspberry Pi Camera module and a Twitter account. What can we build from this? How about a Raspberry Pi that will search out a specific hashtag and when it finds it sends a tweet to the originator of the tweet with the hashtag. Now building on that, we would like to be able to send a command that the Pi will carry out when it reads it. Oh wait, I just found a PIR sensor in my toolbox. We can also use this Pi as a security monitor as well.
Let’s run down the requirements:
- Only certain users can request a picture
- Limited users can send a command to the Pi
- Pi needs to run headless
- Check for only new hashtag tweets
- The security monitor must run as a separate thread from the main program
- System must remember authorized users as well as the last tweet received
- Camera needs to be able to flip via Tweet command
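To preview how the separate-thread requirement can be met, here is a minimal sketch using Python's standard threading module. The function and flag names here are illustrative placeholders, not the ones used in the final script:

```python
import threading
import time

stop_threads = False  # shared flag the worker checks so the main program can stop it

def security_monitor():
    # Placeholder loop: the real script polls the PIR sensor here.
    while not stop_threads:
        time.sleep(0.1)

# Run the monitor alongside the main program.
worker = threading.Thread(target=security_monitor)
worker.daemon = True  # a daemon thread won't keep the interpreter alive on its own
worker.start()
```

The daemon flag matters for a headless script: when the main program exits, a daemon thread is torn down automatically instead of hanging the process.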
Shopping List:
- Raspberry Pi (I used a Pi 3, but I think any internet connected Pi will work)
- Standard 8G SD card with the latest Raspbian image
- Power cord
- Raspberry Pi Camera Module
- Network connection either Wifi or Cat5 connection to the internet
- Pyroelectric “Passive” InfraRed Sensor (PIR)
- LED & resistor
- 5 GPIO connection wires
Now before we start any of this, we need to have a Twitter API set up. I decided to set up a separate Twitter account for listening for my chosen hashtag as well as sending replies. I could go into how to set this up, but there are already a number of others who have written this out very well, so I'll let them walk you through it. Just go to your favorite search engine and search for "How to Register a Twitter App". Basically, you will go to the Twitter apps site, sign in, click on "Create New App", answer a few questions and there you go. You will need to note down the following information, which will be used to connect your Python script to your Twitter account.
CONSUMER_KEY _______________________________________
CONSUMER_SECRET _______________________________________
ACCESS_KEY _______________________________________
ACCESS_SECRET _______________________________________
Before you start you should ensure that your software is up to date.
sudo apt-get update
Now you are going to need PIP, a package management system used to install and manage software packages written in Python.
sudo apt-get install python-pip
The last thing to install is tweepy, via pip:
sudo pip install tweepy
Let’s start scripting. I will break down the script into chunks and describe what each section is doing. At the end, I will display the entire script so that you can use it to cut and paste into your PI. You will of course, need to modify some of the code that I will identify as we go. This isn’t a lesson on how to program, so I am assuming that you have some basic Python programing experience.
First off, my script is in the user pi’s home directory within a subdirectory of TwitterPi. The script itself is called twitterPi.py. In the TwitterPi directory, there will be 2 files that will accompany the script. We will identify those files when we get to them in the description.
The first line of the script is the shebang, which lets us run the script directly as sudo ./twitterPi.py instead of typing sudo python ./twitterPi.py. Once the script is complete, you will have to change the permissions on the script to make it executable: chmod 550 twitterPi.py
#!/usr/bin/python
Next, we will import the libraries that we need to run the program.
tweepy – used to interface with Python and Twitter.
time – we will use this for pausing within the program.
os – this allows us to interface with the Raspbian operating system.
threading – Threading allows us to “split” off parts of the program to operate independently.
Picamera – Pulling in the PiCamera library gives us communication with the Pi camera.
GPIO – GPIO gives us the ability to read information from the PIR as well as light up the LED.
import tweepy, time, os, threading
from picamera import PiCamera
import RPi.GPIO as GPIO
Imports done, we'll move on to setting up the GPIOs. I am using GPIO 26 for my LED's positive pin and any ground pin for the negative side, with the resistor connected in series between the Pi and one of the LED's pins. For the PIR, I decided to use GPIO 4 as the input. Connect the PIR's ground pin to a ground pin on the Pi and its power pin to one of the 5V pins. I am using the GPIO mode BCM, which identifies the pins by their GPIO numbers rather than their physical locations on the Pi. Then we set up the PIR pin as an input and the LED pin as an output, and turn the LED off.
LED = 26
PIR = 4
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR, GPIO.IN)
GPIO.setup(LED, GPIO.OUT)
GPIO.output(LED, GPIO.LOW)
While I want to have a list of people that can request pictures to be taken by the Pi and sent to them via Twitter, I want to limit the people that can run commands on the Pi to an even smaller group. I am assigning the latter group to the list called SuperUser. You will swap out the YOURTWITTERID with the Twitter accounts you want to be able to control your Pi.
SuperUser = ['YOURTWITTERID']
Identify the working directory where the program lives, which is also the directory in which files created by the program will be saved.
WorkDir = "/home/pi/TwitterPi"
With the use of the Twitter commands you will be able to turn on/off the blinking LED if you don’t wish to have an LED flashing regularly. I personally like to have a “heart beat” blinking so that I know my script is still running. We will turn on the “heart beat” by setting the BlinkLED variable to 1. Setting this variable to 0 will deactivate the LED flashing.
BlinkLED = 1
Now we are going to get into the functions of the script. This first function sets the LED to one state for half a second and then to the other state for half a second. It takes two arguments, either (0, 1) or (1, 0), which lets us turn the LED off then on, or on then off. (0, 1) is used when we want to leave the LED on at the end, ready for taking a picture; (1, 0) is used for a normal half-second blink. We also check whether BlinkLED is equal to 1; if it is set to 0 then there will be no blinking at all.
def BlinkLed(OnOff, OffOn):
    if BlinkLED == 1:
        GPIO.output(LED, OnOff)
        time.sleep(.5)
        GPIO.output(LED, OffOn)
        time.sleep(.5)
This next function is used to snap the photo as requested. We save the photo to the same file name each time; if we want to keep a copy, we can just download it from Twitter. We blink the LED off/on four times, take the photo, and then turn off the LED when we are done.
def TakePicture():
    PathToFile = WorkDir + "/mypic.JPG"
    for i in range(0, 4):
        BlinkLed(0, 1)  # Blink off then on
    camera.capture(PathToFile)
    GPIO.output(LED, 0)  # LED off
The GetStringID is used to read in the tweet ID number that we dealt with the last time we were running the program. If we do not find a file, then we are probably running the script for the first time and will start with the number 1.
def GetStringID():
    PathToFile = WorkDir + "/stringID.txt"
    try:
        file = open(PathToFile, "r")
        NUMBER = int(file.read())
        file.close()
    except IOError:
        NUMBER = 1
    return NUMBER
This next function receives the latest tweet ID number found and saves it to our file so that we know what tweet ID to start looking at first.
def SaveNewStringID(id_string):
    PathToFile = WorkDir + "/stringID.txt"
    file = open(PathToFile, "w")
    file.write(id_string)
    file.close()
After we find a tweet with our hashtag from an authorized user and have taken a picture, we will use this function to send out our tweet in reply with the photo we took.
def SendReply(user_name):
    PathToFile = WorkDir + "/mypic.JPG"
    tweet = "@" + user_name + " " + str(time.strftime("%c"))
    status = api.update_with_media(PathToFile, tweet)
From time to time, we may want to find out what the current status of the program is. This function sends a reply to the SuperUser with the current setting for blinking the LED, whether the camera view is normal or flipped, verifies that the thread for the alarm system is still running and finally if the motion detection is active.
def SendStatus(user_name):
    if BlinkLED == 1:
        BLINK = "True"
    else:
        BLINK = "False"
    if motionDetection == 1:
        MD = "ON"
    else:
        MD = "OFF"
    tweet = "@" + user_name + " " + str(time.strftime("%c")) \
        + "\nFlip = " + str(camera.vflip) \
        + "\nBlink LED = " + BLINK \
        + "\nThread Running = " + ThreadRunning \
        + "\nMotion Detection = " + MD
    status = api.update_status(status=tweet)
From time to time, we may want to allow new users to be able to request a picture be sent via Twitter. This function is used when a SuperUser sends a command to add a new user to our list. We wipe out the current file, then add all of the users back in along with the new user given by the SuperUser.
def UpdateUsers(newUser):
    PathToFile = WorkDir + "/Authorized.txt"
    Authorized.append(newUser)
    # Truncate the existing file, then write the full list back out
    file = open(PathToFile, "w")
    file.write("")
    file.close()
    file = open(PathToFile, "a")
    for u in Authorized:
        file.write(u + "\n")
    file.close()
Depending on the position of your camera, you may need to flip the image vertically. Since we don't want to modify the script whenever we move the camera, this function flips the camera via a Twitter command.
def CamFlip():
    if camera.vflip == True:
        camera.vflip = False
    else:
        camera.vflip = True
This function is where we check for commands received via the tweet with our hashtag. The tweet will look something like this “cmd:COMMAND:extra #YOURHASHTAG”. Since we use this function to modify some variables from outside of the function, we need to identify those “global” variables at the beginning of the function. For this version of the script we have the following commands but more commands could always be added to this list.
All cmd:xxxxxxx:yyyyy should be lowercase and include the #YOURHASHTAG.
cmd:adduser:NEWUSER Adds a user to the authorized list.
cmd:flip Flip the images captured vertically.
cmd:blinkoff Disable the LED from blinking
cmd:blinkon Enable the LED blinking.
cmd:alerton Turns on the motion detection
cmd:alertoff Disables the motion detection.
cmd:status Send a tweet to SuperUser with status
cmd:reboot Will reboot your Pi
cmd:shutdown Shutdown the Pi
cmd:stop Halts the program
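Before walking through the full command-handling function, here is a small self-contained sketch of how these command strings get taken apart; the helper name and example hashtag are my own, for illustration only:

```python
def parse_command(tweet_text):
    """Split a tweet like 'cmd:COMMAND:extra #TAG' into its colon-separated parts."""
    first_word = tweet_text.split(" ")[0]  # ignore the hashtag and anything after the first space
    return first_word.split(":")

print(parse_command("cmd:adduser:NEWUSER #MyHashtag"))  # ['cmd', 'adduser', 'NEWUSER']
print(parse_command("cmd:flip #MyHashtag"))             # ['cmd', 'flip']
print(parse_command("just a picture please"))           # ['just'] -- no command present
```

The function in the script below uses exactly this two-step split, then checks whether "cmd" appears in the resulting list.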
If the command returns 1, the program executes the command and then takes a picture and posts it to Twitter. If it returns 0, the command is executed but no tweet is sent. If no command is found in the tweet, the function returns 1, a picture is taken, and the tweet is sent to the requestor.
def CheckForCommands(commandString, user_name):
    global BlinkLED
    global motionDetection
    global stopThreads
    tmpList = commandString.split(" ")
    cmdString = tmpList[0]
    cmdList = cmdString.split(":")
    if "cmd" in cmdList:
        if "adduser" in cmdList:
            UpdateUsers(cmdList[2])
            return 0
        elif "flip" in cmdList:
            CamFlip()
            return 1
        elif "blinkoff" in cmdList:
            BlinkLED = 0
            return 0
        elif "blinkon" in cmdList:
            BlinkLED = 1
            return 0
        elif "alerton" in cmdList:
            BlinkLED = 0
            motionDetection = 1
            SendStatus(user_name)
            return 0
        elif "alertoff" in cmdList:
            BlinkLED = 1
            motionDetection = 0
            SendStatus(user_name)
            return 0
        elif "status" in cmdList:
            SendStatus(user_name)
            return 0
        elif "reboot" in cmdList:
            GPIO.cleanup()
            os.system("reboot")
        elif "shutdown" in cmdList:
            GPIO.cleanup()
            os.system("shutdown now")
        elif "stop" in cmdList:
            stopThreads = 1
            GPIO.cleanup()
            exit()
        else:
            return 0
    else:
        return 1
The MonitorTweets function is the main function of the program. It runs in a continuous while True loop, searching for new tweets with the hashtag you identify in the search_text variable. Once a new tweet is found, the new tweet ID number is saved and then the tweet's user ID is verified against the Authorized list. If the user isn't in the Authorized list, the tweet is ignored. If this is an authorized user, we then check whether it is a SuperUser, to see if we need to look for a command (cmd) tweet. If not a SuperUser, we just take a photo and tweet it at the authorized user. If this is a SuperUser, we pass the information to the CheckForCommands function. We also blink the LED during the sleep section. We only check for new tweets once a minute to ensure that we don't surpass the maximum number of Twitter API interactions within a 15-minute period.
def MonitorTweets():
    global id_string
    while True:
        search_text = "#YOURHASHTAG"
        search_result = api.search(search_text, rpp=1, since_id=id_string)
        for i in search_result:
            id_string = i.id_str
            SaveNewStringID(id_string)
            tweet = api.get_status(id_string)
            user_name = tweet.user.screen_name
            if user_name in Authorized:
                if user_name in SuperUser:
                    check = CheckForCommands(i.text, user_name)
                else:
                    check = 1
                if check == 1:
                    TakePicture()
                    SendReply(user_name)
        BlinkLed(1, 0)  # Blink on then off
        time.sleep(30)
        BlinkLed(1, 0)
        time.sleep(28)
This is the function that reads the motion detector when the “alerton” command has been received. Because we don’t want to just check for motion once a minute, we need to run this in a separate thread from the rest of the program. Once the motion detection is active, we will constantly watch for motion. Once we capture motion, we will wait one second to allow the subject to get closer and then take a photo. The photo is then attached to a tweet and sent to the Twitter user ID identified in the function.
def SecurityAlert():
    global ThreadRunning
    while True:
        try:
            if stopThreads == 1:
                exit()
            if motionDetection == 1:
                if GPIO.input(PIR):
                    time.sleep(1)
                    TakePicture()
                    SendReply(SuperUser[0])
                    time.sleep(60)
        except:
            ThreadRunning = 'NO'
            SendReply(SuperUser[0])
            exit()
Done with the functions, we now set up the Twitter API information to allow tweepy to access Twitter. This is the section where you will plug in the information that you gathered when you set up your Twitter API on the Twitter web site. Once that information has been set, tweepy variables are used to finish setting up the tweepy configuration for access.
CONSUMER_KEY = 'YOUR CONSUMER KEY GOES HERE'
CONSUMER_SECRET = 'YOUR CONSUMER SECRET GOES HERE'
ACCESS_KEY = 'YOUR ACCESS KEY GOES HERE'
ACCESS_SECRET = 'YOUR ACCESS SECRET GOES HERE'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
Now we are going to set up the PiCamera module. The normal position of my Pi and camera requires that I flip the camera, so I am going to set camera.vflip to True. Depending on the normal position of your PiCamera, you may want to set this value to False. With the command tweets you can flip the camera if needed after the program is up and running regardless.
camera = PiCamera()
camera.vflip = True
When the program first starts, we need to identify the last tweet ID number that we found the last time the program was running. This section pulls in that number from the GetStringID function.
id_string = GetStringID()
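Neither SaveNewStringID nor GetStringID is shown in this excerpt. A minimal sketch of what they might look like, assuming the last tweet ID is kept in a small text file (both the LastID.txt filename and the WorkDir default here are assumptions, not the author's actual code):

```python
import os

WorkDir = "."  # assumed; the full program sets its own working directory

def SaveNewStringID(id_string):
    # Persist the most recent tweet ID so it survives a restart.
    with open(os.path.join(WorkDir, "LastID.txt"), "w") as f:
        f.write(str(id_string))

def GetStringID():
    # Read the last seen tweet ID back in; fall back to "1" on a first run.
    try:
        with open(os.path.join(WorkDir, "LastID.txt")) as f:
            return f.read().strip()
    except IOError:
        return "1"
```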
We don't want everyone with a Twitter account to be given the ability to request a picture, so we are going to build a list of Twitter user names that are allowed to request that a picture be taken and sent to them via Twitter. If there aren't any saved users, then the program will assume that only the SuperUser is authorized to request pictures.
try:
    PathToFile = WorkDir + "/Authorized.txt"
    Authorized = []
    with open(PathToFile) as file:
        for line in file:
            line = line.strip()
            Authorized.append(line)
except IOError:
    Authorized = list(SuperUser)  # fall back to just the SuperUser list
By default, we don't want the Pi set up as a motion-detecting security system, so we will set motionDetection to 0. This can be changed while the program is running by sending an "alerton" command tweet.
motionDetection = 0
I had some issues with the SecurityAlert thread dying without notice, so I have set up a variable that is reported during a status request. If that thread dies, it will change this variable to NO so we will know.
ThreadRunning = "YES"
Now that we are up and running for the most part, we want to send a status tweet to our primary Twitter address.
SendStatus(SuperUser[0])
Finally, we are going to start up our SecurityAlert thread and then start monitoring Twitter for our designated hashtag. stopThreads is set to 0 to keep the thread running; when we change stopThreads to 1 it will shut down the SecurityAlert thread. This will happen when we press Control-C while running the program from the command line, or if we send a command tweet to stop the program.
try:
    stopThreads = 0
    securityalert = threading.Thread(name='SecurityAlert', target=SecurityAlert)
    securityalert.start()
    MonitorTweets()
except KeyboardInterrupt:
    stopThreads = 1
    GPIO.cleanup()
    exit()
For a PDF of this program that you can cut and paste from, click here: twitterPi | https://alaskaraspi.wordpress.com/2017/03/11/alaska-raspi-twitterpi/ | CC-MAIN-2019-04 | en | refinedweb |
SDK Documentation: Super Users
Why Send User Data?
Sending user data to us will annotate your users’ sessions, giving you more insight into who is using your app (Super Users) and who is seeing which in-app UI flows (Release Notes, Onboarding, etc.).
You can add as little or as much information as you'd like. We recommend you at least provide a userId (something that you, or your server, provides that identifies your user). For your convenience, you may add the user's name and/or email address, if that makes it easier for you to visualize the data.
IMPORTANT: Make sure providing the user information to a trusted 3rd party like us is covered in your Privacy Policy.
When A User Logs In
Add this code when the user logs in, or when a logged-in user starts the app. This will associate your user's information with their session here.
#import <AppToolkit/AppToolkit.h>
...
// When the user finishes logging in, or is already logged in on start
[[AppToolkit sharedInstance] setUserIdentifier:@"qxb49bd" email:@"bob@loblaw.org" name:@"Bob Loblaw"];
...
import AppToolkit
...
// When the user finishes logging in, or is already logged in on start
AppToolkit.sharedInstance().setUserIdentifier("qxb49bd", email: "bob@loblaw.org", name: "Bob Loblaw")
...
When A User Logs Out
In order to clear the AppToolkit session of the user’s information, add this code when the user logs out.
#import <AppToolkit/AppToolkit.h>
...
[[AppToolkit sharedInstance] setUserIdentifier:nil email:nil name:nil];
import AppToolkit
...
AppToolkit.sharedInstance().setUserIdentifier(nil, email: nil, name: nil)
Accessing Super Status
Once you have configured Super Users in your Dashboard, you can check inside the app whether the current user qualifies as super or not.
if (ATKAppUserIsSuper()) {
    // User is a Super User, so you can perform different tasks for that user.
    [self enableAwesomeFeature];
}
if (ATKAppUserIsSuper()) {
    // User is a Super User, so you can perform different tasks for that user.
    enableAwesomeFeature();
}
Now that we have established the basic functionality for our API, and put it through rigorous testing, it's time to take the next step and hook up our frontend.
Wait, but didn't we say in this lesson that we would not be focusing on .hbs templates and routing for our API - that we are providing raw data for users to query, not a frontend?
Yes, that is true: We won't be touching any .hbs templates this week, but we still need to create routes that fire the correct methods to add, read, update and delete the data the API serves, so we'll still be working with our App.java controller file - the routes are, in a sense, our frontend!
Let's get started on that now by creating an App.java file in our src/main/java directory and adding our main method to it.
Our App.java file will be the file that communicates with our DAO files, so we'll need to supply basic data in order to connect to our database, just as we did for To Do List.
Let's also declare our first route. There will be a lot of new concepts and code here, but we'll take it step by step.
The first route we'll need to have and handle is the ability for
Restaurants to get added to the API - after all, what is an API if it doesn't return any data?
Make your build.gradle look like the one below, and refresh your gradle imports.
...
dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    compile "com.sparkjava:spark-core:2.6.0"
    compile 'org.slf4j:slf4j-simple:1.7.21'
    compile 'org.sql2o:sql2o:1.5.4'
    compile 'com.google.code.gson:gson:2.5' // to be explained shortly
}
...
Next, add the following code to your App.java file, so that it looks like this:
import com.google.gson.Gson;
import dao.Sql2oFoodtypeDao;
import dao.Sql2oRestaurantDao;
import models.Restaurant;
import org.sql2o.Connection;
import org.sql2o.Sql2o;

import static spark.Spark.*;

public class App {

    public static void main(String[] args) {
        Sql2oFoodtypeDao foodtypeDao;
        Sql2oRestaurantDao restaurantDao;
        Connection conn;
        Gson gson = new Gson();

        String connectionString = "jdbc:h2:~/jadle.db;INIT=RUNSCRIPT from 'classpath:db/create.sql'";
        Sql2o sql2o = new Sql2o(connectionString, "", "");
        restaurantDao = new Sql2oRestaurantDao(sql2o);
        foodtypeDao = new Sql2oFoodtypeDao(sql2o);
        conn = sql2o.open();

        post("/restaurants/new", "application/json", (req, res) -> {
            Restaurant restaurant = gson.fromJson(req.body(), Restaurant.class);
            restaurantDao.add(restaurant);
            res.status(201);
            res.type("application/json");
            return gson.toJson(restaurant);
        });
    }
}
The upper portion of App.java, before the route handler, should look familiar: We are importing things, setting up our main() method, making our Sql2o objects, and setting up our database connection. Cool.
Notice that, just like To Do List, we opted to use an in-memory database for testing, but here we are actually writing to the filesystem to persist data (if the computer should shut down, for example). Make sure you adjust that string here.
But then the code looks similar, yet also quite different from our previous routes. There is a lot to unpack.
Let's jump right in.
In our previous applications, our routes looked something like this:
...
//post: process new category form
post("/categories/new", (request, response) -> { //new
    Map<String, Object> model = new HashMap<>();
    String name = request.queryParams("name");
    Category newCategory = new Category(name);
    categoryDao.add(newCategory);
    List<Category> categories = categoryDao.getAll(); //refresh list of links for navbar.
    model.put("categories", categories);
    return new ModelAndView(model, "success.hbs");
}, new HandlebarsTemplateEngine());
...
Because we were interacting with our backend through a web-based frontend, we were able to capture data submitted by a form (the queryParams part). Then, we performed a set of actions, and sent the user a nice template to see the result of those actions. But because we now no longer have a web-based frontend, we have to explore a new way of getting data both IN and OUT of our application. How are we gonna do that? ALL in good time.
Once more, this is the new format for our route, this time, with some comments:
post("/restaurants/new", "application/json", (req, res) -> { //accept a request in format JSON from an app
    Restaurant restaurant = gson.fromJson(req.body(), Restaurant.class); //make java from JSON with GSON
    restaurantDao.add(restaurant); //Do our thing with our DAO
    res.status(201); //A-OK! But why 201??
    res.type("application/json");
    return gson.toJson(restaurant); //send it back to be displayed
});
In order to store data, we will: Send a request to our server using JSON, then transform that JSON to Java objects, and write those Java objects into our database with SQL.
In order to retrieve data, we will: Retrieve data with SQL, build Java objects out of them, then transform those into JSON, and return them to the user.
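To see the two directions in miniature, here is a hedged sketch in Python standing in for the Java/Gson pipeline: the standard json module plays Gson's role, a plain class plays Restaurant, and the DAO step is just a comment.

```python
import json

class Restaurant:
    def __init__(self, name, zipcode):
        self.name = name
        self.zipcode = zipcode

# IN: a request body arrives as raw JSON text...
body = '{"name": "Fish Witch", "zipcode": "97232"}'
restaurant = Restaurant(**json.loads(body))  # JSON -> object (gson.fromJson's job)
# ...this is where the DAO would write the object out with SQL...

# OUT: rebuild JSON from the object for the response (gson.toJson's job)
response = json.dumps(restaurant.__dict__)
```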
What a trip! But why, I can hear you asking, can't we simply transform data directly from JSON to database data via SQL? What the blazes is GSON? Is this a typo?
These are good questions. Let's answer them by learning a little more about JSON.
You may have already heard the term JSON during Intro to Programming, and we will spend a significant amount of time on JSON in both JavaScript unit and React unit.
JSON stands for JavaScript Object Notation and is basically a way of representing data that looks like JavaScript objects.
One of our objects in JSON might look a bit like this:
{
  "restaurant": {
    "name": "Fish Witch",
    "address": "214 NE Broadway",
    "zipcode": "97232",
    "phone": "503-402-9874",
    "website": "",
    "email": "[email protected]"
  }
}
Neat. This is fairly easy to read - our data is stored as key/value pairs, similar to the HashMap that we are now familiar with. If it looks a lot like a JavaScript Object Literal - that's because it is! Sweet! But why do we care about this in Java?
JSON is everywhere in modern web applications. It's readable, it's lightweight, and it works super well with applications written in JavaScript, as it is JavaScript. But is also comparatively easy to get applications written in other languages to read it and generate it as well - including Java. This means that an API that returns JSON can be accessed by an application written in Java, Ruby, Python, JS, PHP and many more. This makes an API scalable and platform independent. Aha! Scalable! Platform Independent! Good words, powerful words, $$ words.
But because it is not Java per se, we cannot transform it directly to SQL, and as it is very difficult to connect to a SQL based database server with JavaScript (for reasons that are too complicated to explain here), we will have to transform our JSON data to Java first, then we can use our Java/SQL interface (JDBC) to write our data to the database server.
Our browser also can't understand Java or SQL commands. Because browsers can only parse HTML, CSS and JavaScript, and JSON is JavaScript, we need to transform Java back into JSON to present it to our user.
In order to easily move Java to JSON, we're going to ask for some help - this is what GSON is for.
GSON is a library created by Google in 2008 and freely available on Github. Give the repo a look; this is what a large scale project on a very professional level looks like. It's interesting to see. (Oh look! Their commit messages complete the phrase "it will" just like yours. Good job.)
In order to use Gson, we'll need to instantiate a Gson object and call methods on that Gson object, passing Java objects to it as arguments.
The rest of the things you can see in the route - res.status(201); and return gson.toJson(restaurant) - will be easier to understand once we view the result of what we are trying to submit.
But wait - if we don't have a UI, as in, no .hbs templates, no webpages, no forms - how do we interact with our routes?
Another good question! Another exciting new thing to learn about.
We'll use an awesome, super cool and industry standard tool called Postman to fire JSON at our API and get JSON back when we make a request. You'll be using Postman pretty much any time you work with an API for the foreseeable future, so make sure you mention it on your LinkedIn profile as a tool you use!
Postman is already installed on Epicodus computers. You can download it for your home environment at the Postman website. You can choose to sign up for an account or use Postman without an account.
Let's go over the basics of making an API call in Postman. Let's query the Star Wars API to get some information about the planet Tatooine. It's an API that returns JSON, just like ours is about to be!
Note: don't copy the URL shown in the image, use the URL mentioned in step 2.
We choose the kind of request we'd like to make. In this case, it's a GET request.
We input the API URL in the URL box. You can then click on the Params button to add parameters as key-value pairs, but we do not need to do this for this simple query.
Click on the Headers tab to add any headers. We don't need to add anything here. The Body field next to it is grayed-out because this is a GET request, but you can specify a request body if you're making a POST or PUT request.
This is where you'll find the HTTP status code that accompanies the response. Aha! The call returned a 200 OK status. (why not 201? Google it.)
The JSON format can be returned in "Pretty" form. This makes it very easy to read the JSON response and see how it's nested.
You can work with multiple API calls at once by using tabs.
The Send button is self-explanatory. Once you've configured the call, click Send to make the request.
You can even save API calls for later use. This is particularly useful when working on multi-day projects.
Go ahead and hit send, and you'll see a LOT of information returned to you - information in JSON that looks similar to what we are creating in our app!
Nice.
Let's use Postman to interface with our own app next! | https://www.learnhowtoprogram.com/java/api-development-extended-topics/jadle-frontend-json-gson-and-postman | CC-MAIN-2019-04 | en | refinedweb |
Material-UI tries to make composition as easy as possible.
In order to provide the maximum flexibility and performance,
we need a way to know the nature of the child elements a component receives.
To solve this problem we tag some of our components when needed
with a muiName static property.
However, users like to wrap components in order to enhance them.
That can conflict with our muiName solution.
If you encounter this issue, you need to forward the properties to the wrapped component and set the same muiName tag on your wrapping component.
Let's see an example:
const WrappedIcon = props => <Icon {...props} />;
WrappedIcon.muiName = 'Icon';
Material-UI allows you to change the root node that will be rendered via a property called component.
The component will render like this:
return React.createElement(this.props.component, props)
For example, by default a List will render a <ul> element. This can be changed by passing a React component to the component property.
The following example will render the List component with a <nav> element as root node instead:
<List component="nav">
  <ListItem>
    <ListItemText primary="Trash" />
  </ListItem>
  <ListItem>
    <ListItemText primary="Spam" />
  </ListItem>
</List>
This pattern is very powerful and allows for great flexibility, as well as a way to interoperate with other libraries, such as react-router or your favorite forms library. But it also comes with a small caveat!
Using an inline function as an argument for the component property may result in unexpected unmounting, since you pass a new component to the component property every time React renders.
For instance, if you want to create a custom ListItem that acts as a link, you could do the following:
const ListItemLink = ({ icon, primary, secondary, to }) => (
  <li>
    <ListItem button component={props => <Link to={to} {...props} />}>
      {icon && <ListItemIcon>{icon}</ListItemIcon>}
      <ListItemText inset primary={primary} secondary={secondary} />
    </ListItem>
  </li>
);

However, since an inline function is passed to the component property here, React sees a brand-new component on every render, and will unmount and remount the link each time ListItemLink renders, updating the DOM unnecessarily. The solution is to avoid the inline function and pass a static component to the component property instead.
Let's change our ListItemLink to the following:
class ListItemLink extends React.Component {
  renderLink = itemProps => <Link to={this.props.to} {...itemProps} />;

  render() {
    const { icon, primary, secondary, to } = this.props;
    return (
      <li>
        <ListItem button component={this.renderLink}>
          {icon && <ListItemIcon>{icon}</ListItemIcon>}
          <ListItemText inset primary={primary} secondary={secondary} />
        </ListItem>
      </li>
    );
  }
}
renderLink will now always reference the same component. Here is a demo with react-router: | https://material-ui-next.com/guides/composition/ | CC-MAIN-2019-04 | en | refinedweb |
#include <pcap/pcap.h>

int pcap_get_selectable_fd(pcap_t *p);
Some network devices opened with pcap_create(3PCAP) and pcap_activate(3PCAP), or with pcap_open_live(3PCAP), do not support those calls (for example, regular network devices on FreeBSD 4.3 and 4.4, and Endace DAG devices), so PCAP_ERROR is returned for those devices. In that case, those calls must be given a timeout less than or equal to the timeout returned by pcap_get_required_select_timeout(3PCAP) for the device for which pcap_get_selectable_fd() returned PCAP_ERROR, the device must be put in non-blocking mode with a call to pcap_setnonblock(3PCAP), and an attempt must always be made to read packets from the device when the call returns. If pcap_get_required_select_timeout() returns NULL, it is not possible to wait for packets to arrive on the device in an event loop.
Note that in: FreeBSD prior to FreeBSD 4.6; NetBSD prior to NetBSD 3.0; OpenBSD prior to OpenBSD 2.4; Mac OS X prior to Mac OS X 10.7;
select(), poll(), and kevent() do not work correctly on BPF devices; pcap_get_selectable_fd() will return a file descriptor on most of those versions (the exceptions being FreeBSD 4.3 and 4.4), but a simple select(), poll(), or kevent() call will not indicate that the descriptor is readable until a full buffer's worth of packets is received, even if the packet timeout expires before then. To work around this, code that uses those calls to wait for packets to arrive must put the pcap_t in non-blocking mode, and must arrange that the call have a timeout less than or equal to the packet buffer timeout, and must try to read packets after that timeout expires, regardless of whether the call indicated that the file descriptor for the pcap_t is ready to be read or not. (That workaround will not work in FreeBSD 4.3 and later; however, in FreeBSD 4.6 and later, those calls work correctly on BPF devices, so the workaround isn't necessary, although it does no harm.)
Note also that poll() and kevent() do not work on character special files, including BPF devices, in Mac OS X 10.4 and 10.5, so, while select() can be used on the descriptor returned by pcap_get_selectable_fd(), poll() and kevent() cannot be used on it in those versions of Mac OS X. poll(), but not kevent(), works on that descriptor in Mac OS X releases prior to 10.4; poll() and kevent() work on that descriptor in Mac OS X 10.6 and later.
pcap_get_selectable_fd() is not available on Windows. | https://www.tcpdump.org/manpages/pcap_get_selectable_fd.3pcap.html | CC-MAIN-2019-04 | en | refinedweb |
Recently I also saw this on Twitter:
In #React-land: is it legit to have a component that only *does* stuff, but isn't visible? i.e. for setting cookie from a dispatched redux action, or kick off a background task, etc.— @rem (@rem) November 30, 2017
The idea is interesting, so I decided to experiment and see the pros and cons. Imagine how we add/compose functionality with markup only. Instead of doing it in a JavaScript function, we just drop a tag. But let's do a couple of examples and see how it looks.
No matter what we use for our React applications, we always have that mapping between the logic layer and the rendering layer. In the Redux land this is the so-called connect function, where we say map this portion of the state to these props, or map these actions to these props.
function Greeting({ isChristmas }) {
  return (
    <p>
      { isChristmas ? 'Merry Christmas' : 'Hello' } dear user!
    </p>
  );
}

const mapStateToProps = state => ({
  isChristmas: state.calendar.isChristmas
});

export default connect(mapStateToProps)(Greeting);
isChristmas is just a boolean for Greeting. The component doesn't know where this boolean is coming from. We may easily extract the function into an external file, which will make it completely blind to Redux and friends. That is fine and it works well. But what if we have the following:
import IsChristmas from './IsChristmas.jsx';

export default function Greeting() {
  return (
    <div>
      <IsChristmas>
        {
          answer => answer ? 'Merry Christmas dear user!' : 'Hello dear user!'
        }
      </IsChristmas>
    </div>
  );
}
Now Greeting does not accept any properties but still does the same job. It is the IsChristmas component that has the wiring and fetches the knowledge from the state. Then we have the render props pattern to decide what string to render.
// IsChristmas.jsx
const IsChristmas = ({ isChristmas, children }) => children(isChristmas);

export default connect(
  state => ({ isChristmas: state.calendar.isChristmas })
)(IsChristmas);
Using this technique we are shifting the dependency of the state to an external component. Greeting becomes a composition layer with zero knowledge of the application state.
This example is a simple one and looks pointless. Let’s go with a more complicated scenario:
function UserProfile() {
  return (
    <UserDataProvider>{
      user => (
        <ActionsProvider>{
          actions => (
            <section>
              Hello, { user.fullName }, please
              <a onClick={ actions.purchase }>order</a> here.
            </section>
          )
        }</ActionsProvider>
      )
    }</UserDataProvider>
  )
}
We have two providers, the role of which is to deliver (a) some data for the current user and (b) a Redux action creator purchase, so we can fire it when the user clicks on the order link. These providers are nothing more than functions that use the children prop as a regular function:
// UserDataProvider.jsx
function UserDataProvider({ children }) {
  return children({ fullName: 'Jon Snow' });
}
connect(state => ({ user: state.user }))(UserDataProvider);

// ActionsProvider.jsx
function ActionsProvider({ children }) {
  return children({ purchase: () => alert('Woo') });
}
connect(null, dispatch => ({
  purchase: () => dispatch(purchaseActionCreator())
}))(ActionsProvider);
This idea shifts the dependency resolution into JSX syntax, which to be honest I really like. We don't have to know about wiring, and at a later stage we may completely swap the provider by just re-implementing the component. For example, in the code above, if we say that the user's data comes from a cookie and not from the Redux store, we may just change the body of UserDataProvider.
Of course I do see some problems with this approach. First, testing-wise we still need the same setup to make our main component testable. UserProfile still needs the Redux stuff because its internal components are using it. If we did the wiring directly to UserProfile instead, we would get user and purchase as props and could mock them. Second, the code looks a little bit ugly if we need to use the render props pattern.
Overall, I don't know :) The idea seems interesting but, as with most patterns, it cannot be applied to every case. Let's see how it evolves and I will post an update soon.
REST interface for JBPMAlexei none Mar 17, 2010 7:29 AM
1. Re: REST interface for JBPMRonald van Kuijk Mar 17, 2010 8:18 AM (in response to Alexei none)
Would be nice if you could write a blog post about this.
Curious though why you used jaxb for a rest interface... jaxb=xml, rest!=xml right?
2. Re: REST interface for JBPMMaciej Swiderski Mar 17, 2010 9:08 AM (in response to Alexei none)
BPM Console has a REST interface as well, with JSON thou but perhaps you have the different functionality at your interface.
Either way could be interesting to see what you have developed.
Cheers,
Maciej
3. Re: REST interface for JBPMAlexei none Mar 17, 2010 9:41 AM (in response to Alexei none)I used JAXB-annotated classes to wrap JBPM classes and interfaces. I also tryed castor mapping and apache cxf, but they didn fit. So, I used JAXB to serialize JBPM object to xml and Jersey to bind JBPM services to certain URLs. I attached sources, but removed jbpm.jar & jdbc driver from lib directory (to reduce archive size to 15M). Looking for Bazaar repository.
- jbpm-rest.zip 14.5 MB
4. Re: REST interface for JBPMAlexei none Mar 17, 2010 9:49 AM (in response to Maciej Swiderski)Would you please give me link to any description of BPM Console's REST interface? Cause I actually missed it hope my work was not useless!
Don't be 2 strict! I'm still working on it
5. Re: REST interface for JBPMMaciej Swiderski Mar 17, 2010 10:33 AM (in response to Alexei none)
it is deployed as part of the gwt-console-server application, so the URL, assuming that it is on your local machine with the default port for JBoss AS, will be:
Once you enter that URL you will get a page with a link to entire description of available URIs. It is located (if I remember correctly) at:
Cheers
Maciej
6. Re: REST interface for JBPMRonald van Kuijk Mar 17, 2010 12:57 PM (in response to Alexei none)
Ahhh.. ok, jaxb for the responses etc... cool... Personally (that is me, not the jBPM project) I like this way of doing it. In fact I even thought of doing this directly on the real jBPM services... (why not?) Not everybody doing (j)BPM needs or wants to run a full console to be able to do rest.
Keep up this good work... I personally am certainly interested.
Cheers,
Ronald
7. Re: REST interface for JBPMAlexei none Mar 18, 2010 8:15 AM (in response to Alexei none)
By the way, I've shared code at Launchpad
The project is not totally portable (there are still several easy-to-resolve dependencies in Eclipse's project files), but I'm gonna fix it.
So, the idea is to represent JBPM as hypermedia. For example, I got the repository service at a URL, and if I wanna get available process definitions, I have to access the corresponding URL. The result will be like
<definitionList>
  <processDefinitions>
    <id>Order-1</id>
    <key>Order</key>
    <name>Order</name>
    <version>1</version>
    <suspended>false</suspended>
    <deployment ref=""/>
  </processDefinitions>
</definitionList>
as you can see, there is a hypermedia reference to another service. If you try to access this URL you will get something like
<deployment>
  <id>1</id>
  <state>active</state>
  <timestamp>0</timestamp>
</deployment>
and so on. For example, a task has a reference to its corresponding execution, and an execution to its process definition.
Now I got 3 services (execution, repository, task) & approx. 20 resolvable URLs. My next step will be to add the remaining services. And then I'm gonna add servlet filters for retrieving actor & group information from request attributes and for unwrapping references, i.e. for a link like the deployment reference above, the result should be like
<definitionList>
  <processDefinitions>
    <id>Order-1</id>
    <key>Order</key>
    <name>Order</name>
    <version>1</version>
    <suspended>false</suspended>
    <deployment>
      <id>1</id>
      <state>active</state>
      <timestamp>0</timestamp>
    </deployment>
  </processDefinitions>
</definitionList>
8. Re: REST interface for JBPMAlexei none Mar 25, 2010 12:46 PM (in response to Alexei none)
I've added unwrapping filter. Time to create small wiki, I guess.
Can anybody tell me how to remove xml namespaces & prefixes in a proper way (i.e. not a regexp way)?
9. Re: REST interface for JBPMAlexei none Mar 27, 2010 4:24 PM (in response to Alexei none)
So, currently supported URLs:
Repository service
GET - list of available process definitions
GET{id} - definition identified by id
GET - list of available deployments
GET{id} - deployment identified by id
POST - accepts multipart/form-data i.e. file from <input type="file" ... />
Execution service
GET{id}
GET{id}/variableNames
GET{id}/variable/{variableName}
POST{id}/variables - accepts application/x-www-form-urlencoded where variables represented as key-value pairs. You can modify or add variables to execution by means of this method
GET{id}
POST{processDefinitionId}/start - starts process with variables accepted by means of key-value pairs, just the same as POST{id}/variables
GET{processDefinitionId}/start - starts process without any variables attached
POST{processDefinitionId}/start - ends process with state passed by form param named state
GET
Managment service
GET{jobId}/execute
GET
GET{processInstanceId}
Task service
GET{userId}
GET{groupId}
GET{id}
GET{id}/outcomes
GET{id}/variableNames
GET{id}/variable/{variableName}
POST{id}/variables
GET{id}/comments
POST{id}/end/{outcome} - ends task with given outcome and parameters passed by key-value pairs
History service
not implemented yet but wrapping classes already done.
You can also add unwrap path param to force replacement of hypermedia links with actual values without additional http requests.
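As a client-side illustration, here is a sketch in Python of driving the POST .../variables style of URL with form-encoded key-value pairs. The base URL and the exact paths are placeholders (the concrete URLs were stripped from this post), so treat them as hypothetical:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical base URL; substitute wherever the jbpm-rest app is deployed.
BASE = "http://localhost:8080/jbpm-rest"

def set_execution_variables(execution_id, variables):
    # Variables are sent as ordinary form fields
    # (application/x-www-form-urlencoded), per the POST .../variables entry above.
    body = urlencode(variables).encode()
    return Request(
        BASE + "/execution/" + execution_id + "/variables",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = set_execution_variables("Order.1", {"customer": "ACME", "amount": "42"})
```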
So, I got several questions:
What do you think is the proper way to add jBPM users & groups to requests? I was going to pass them as request attributes, but I'm not sure, 'cause I'm not too good with the JBPM4 identity model.
And the second question, if anybody knows how to remove namespaces & prefixes from JAXB-generated xml?
P.S. Sorry for my poor English (it's not my native language, as you can see from my profile), but I'm ready to clear up any misunderstandings
10. Re: REST interface for JBPMAlexei none Apr 2, 2010 3:50 AM (in response to Alexei none)
Namespace & prefixes cleanup added
11. Re: REST interface for JBPMKoen Aers Apr 2, 2010 11:56 AM (in response to Alexei none)
Alexei,
It would be good if you could document this a bit more on a wiki page. I think that it probably would be valuable to include it somewhere in our codebase.
Cheers,
Koen
12. Re: REST interface for JBPMAlexei none Apr 2, 2010 5:18 PM (in response to Koen Aers)
I've never written a wiki before, so I hope somebody will help me (at least to make my English better a little bit). So here is my article
13. Re: REST interface for JBPMAlexei none Jul 8, 2010 7:34 AM (in response to Alexei none)
Up!
Need ur feedback 4 future improvements
14. Re: REST interface for JBPMAlexei none Jul 20, 2010 8:52 AM (in response to Alexei none)
Identity service added & LDAP IdentitySession (if anybody cares ) | https://developer.jboss.org/message/532596 | CC-MAIN-2018-05 | en | refinedweb |
// MetaphoneCOM.cpp : Implementation of DLL Exports.
#include "stdafx.h"
#include "resource.h"
// The module attribute causes DllMain, DllRegisterServer and DllUnregisterServer to be automatically implemented for you
[ module(dll, uuid = "{845FE5AF-CC53-4C37-9464-EE20A866B9B0}",
name = "MetaphoneCOM",
helpstring = "Adam J. Nelson's Double Metaphone Implementations",
resource_name = "IDR_METAPHONECOM") ];
//Expose the value assigned to short alternate keys when no alternate key is available
//This way, VB users can use "MetaphoneKey.Invalid" to test for the absence of alternate keys
[ export, library_block]
enum MetaphoneKey {
Invalid = ) | https://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=4620&zep=MetaphoneCOM%2FMetaphoneCOM.cpp&rzp=%2FKB%2Fstring%2Fdmetaphone1%2F%2Fdmetaphone_src.zip | CC-MAIN-2018-05 | en | refinedweb |
Television Listings and XMLTV
February 18, 2004
Introduction
For several years I've wanted to assemble my own PC. Every time I decided to replace my computer, I would say that maybe this time I will get around to building my own. This resolution lasted about as long as it took for me to go to Dell and price its latest, comparing that to the amount of free time I had, which is very little.
The emergence of Linux-based packages for building personal video recorders (PVR) like TiVO -- something I would probably never be able to justify buying just for its own sake -- offered me the chance I waiting for. A mini PC with a TV capture card, a WiFi card, a monster hard drive (you can get up to a quarter terabyte nowadays), and a Linux package like MythTV can not only do almost everything a TiVO can do, but can also serve up MP3 files, act as a Windows file server with Samba, run a web server, and more.
One critical element of a DIY TiVO is TV listings. Without these all the fancy hardware in the world won't do much good. But there's an open source, Perl XML-based solution by Edward Avis called XMLTV that many of the TV-on-your-PC packages like Freevo and MythTV support. With support for screen-scraping data for many country's cable systems, XMLTV can take various sources and create a consistent stream of XML.
Here's a snippet to give you an idea of the kind of information you can get:
<tv>
  <programme channel="C54amc.zap2it.com" start="20031230002000 -0500"
             stop="20031230022000 -0500">
    <title>Mystic Pizza</title>
    <desc>Three teenage girls come of age one summer working in a pizza parlor in Mystic, Conn.</desc>
    <date>1988</date>
    <category>Comedy</category>
    <rating system="VCHIP">
      <value>14</value>
    </rating>
    <rating system="MPAA">
      <value>R</value>
    </rating>
    <star-rating>
      <value>2.5/4</value>
    </star-rating>
  </programme>
</tv>
If you have an iCal-compliant viewer (like Mozilla) you can even convert this to a calendar using Irving Probst's XSLT stylesheet (screenshot).
Getting started
As a first step I grabbed the latest Windows version of XMLTV from the SourceForge project. (For OS X, RPM-based Linux systems, and Debian package-based systems you also get packages; see the home page for details.) This gives you a binary "xmltv.exe" at the top level of the directory where you unpack the ZIP file. Like any good tool with a UNIX heritage, XMLTV is meant to act as a filter chained together with other programs. Once you set it up (in my case to point to the North American listings), you can run the program and get a stream of XML suitable for your homegrown electronic program guide:
C:\writing\xmltv-0.5.24-win32>xmltv tv_grab_na --configure
Timezone is -0500
Welcome to XMLTV 0.5.24 (tv_grab_na V3.20031101) for Canada and US tv listings
Please report any problems, bugs or suggestions to: xmltv-users@lists.sourceforge.net
For more information consult
checking XMLTV release information..
Warning: failed to get current release information from:
If this problem persists, look for a new XMLTV release.
starting manual configuration process..
how many times do you want to retry on www site failures ? (default=2)
how many seconds do you want to between retries ? (default=30)
what is your postal/zip code ? 11375
getting list of providers for postal/zip code 11375, be patient..
Choose a service provider:
0: DIRECTV New York - New York (128766)
1: DISH New York - New York (128719)
2: RCN Cable (Microwave) - New York - Digital Rebuild (70946)
3: RCN Cable (Microwave) - New York - Rebuild (70945)
4: RCN Cable (Microwave) - New York (70944)
5: Time Warner Cable - Brooklyn - Cable Ready (71328)
6: Time Warner Cable - Brooklyn - Digital (71329)
7: Time Warner Cable - Brooklyn (71327)
8: Time Warner Forest Hills - Forest Hills - Cable Ready (71440)
9: Time Warner Forest Hills - Forest Hills (71439)
10: C-Band - USA (87341)
11: DIRECTV - USA (62044)
12: DISH Network - USA (62046)
13: VOOM - USA (179304)
14: Local Broadcast Listings (137303)
Select one: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14 (default=0)] 6
you chose 71329 # Time Warner Cable - Brooklyn - Digital
getting channel list, be patient..
After a few moments you get:
got channel list
add channel 1 NY1 ? [yes,no,all,none (default=yes)] A
. . .
add channel 1000 MOONDEM ? yes
add channel 1020 ADONDEM ? yes
add channel 1031 HBODEM ? yes
add channel 1032 CINDEM ? yes
add channel 1033 SHOWDEM ? yes
add channel 1034 TMCDEM ? yes
updating C:\/.xmltv/tv_grab_na.conf..
configuration step complete, let the games begin !
My first impression? Digital cable gives you way too many choices for good health.
Looking at the Format
Preamble
The top-level element, <tv>, contains no big surprises:

<tv date="20031230133339 -0500" generator- ... </tv>

In date, the timestamp given (including the GMT offset for timezone) lets you know when the original source generated the listing data. The attributes source-info-url and source-info-name provide a glimpse into how xmltv the program works: for the U.S. it screen-scrapes HTML from a website providing channel listings by ZIP code. We'll be reading right past this information for our example program below.
This brings up an important question: what's the legal status of XMLTV? The Zap2IT license seems to be broad enough to allow for it.
While you may interact with or download a single copy of any portion of the Content for your own personal, non-commercial entertainment, information or use, you may not and may not authorize others to reproduce, sell, publish, distribute, modify, display, repost or otherwise use any portion of the Content in any other way or for any other purpose without the prior written consent of TMS. Requests regarding use of the Content for any purpose other than personal, non-commercial use should be directed to Feedback at Zap2it.com.
Other services in other countries have shut out XMLTV. And it's possible that they'd make a bigger issue of it if there were more Linux PVRs out there pulling down their data. Even if there were no legal concerns about XMLTV sourcing, there is also a technical risk: every time the HTML layout on Zap2IT changes, XMLTV will break. It would be hard to make a profit off homegrown DIY users wanting commercial-grade TV listings, especially given the risk of providing the data in a format which is so easy to redistribute. The whole issue brings to mind the MP3 debate: do people use software like XMLTV because there's no good pay alternative, or because they wouldn't use it unless it was free?
No matter what happens with the listing sources, XMLTV itself is still useful to understand and handle, and it's a good example of XML's strengths in syndication and bridging diverse applications.
Channel information
Next up in the format we have multiple <channel> tags describing all the available channels in your area. XMLTV maps this information to the program listings by an ID which we'll see again later; the ID should follow RFC 2838: Uniform Resource Identifiers for Television Broadcasts, but the DTD obviously can't enforce this. Channels can include an optional icon and an optional URL.
<channel id="C2wcbs.zap2it.com">
  <display-name>2 WCBS</display-name>
  <display-name>2</display-name>
  <icon src=""/>
</channel>
XMLTV supports basic localization via a "lang" attribute, e.g. fr_FR. (In a perfect world the DTD would have used xml:lang instead.) It thus allows for multiple display names. Thankfully, one variant offered (at least in my feed) is the channel number itself, which PVR software will need.
Program information
The mother lode of information in XMLTV is in the program listings: what programs play on what channel ID, starting and stopping at what times. Here's an example:
<programme channel="C2wcbs.zap2it.com" start="20031230043000 -0500"
           stop="20031230050000 -0500">
  <title>CBS Morning News</title>
  <desc>News reports on current events.</desc>
  <category>News</category>
  <audio>
    <stereo>stereo</stereo>
  </audio>
  <subtitles type="teletext"/>
</programme>
The DTD allows for a lot of optional information, including icon, URL, language, year, country, credits (director, actor, writer, etc.), star ratings, audio metadata, video aspect ratio, whether it has subtitles, and more. We're going to stick with title for the example; for a serious application you might need a commercial feed (should one ever become available) with more reliable and detailed information.
Episodes
Episodic programs get special treatment in the XMLTV format. Here's an example from the feed I pulled:
<programme channel="C2wcbs.zap2it.com" start="20031230030700 -0500"
           stop="20031230033700 -0500">
  <title>Becker</title>
  <sub-title>Small Wonder</sub-title>
  <desc>Reggie and the gang dispute Becker's crazy theory that little people are bad luck.</desc>
  <episode-num . . 0/3</episode-num>
  <audio>
    <stereo>stereo</stereo>
  </audio>
  <subtitles type="teletext"/>
  <rating system="VCHIP">
    <value>PG</value>
  </rating>
</programme>
The "system" attribute in <episode-num> has two allowed values: "xmltv_ns", which is used here, and "onscreen". The latter provides the human-displayable version; the former has more structured data. It's supposed to be three numbers (with "." as a separator): the season number, the episode number within the entire series, and finally the part number. Slashes indicate out of how many, and numbers begin at zero; so "0/3" means the first of three. The DTD provides a good set of examples.
Easy, right? But look at the actual data. As you can probably guess, this "Becker" episode is not a three-parter, and the first two fields are missing entirely. We're looking at dirty data: no season number, no episode number, and an unreliable last segment. You couldn't run a real electronic program guide off of XMLTV, which is probably good for the developer's legal exposure.
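Since the field layout is simple, the value is easy to pick apart mechanically. Here is a minimal Python sketch (the function name is mine, not part of XMLTV) that splits an xmltv_ns value into zero-based (index, total) pairs and tolerates the missing fields in dirty data like the Becker entry:

```python
def parse_xmltv_ns(value):
    """Parse an xmltv_ns episode-num value such as " . . 0/3".

    Returns one entry per "."-separated field (season, episode, part);
    each entry is None when the field is absent, or an (index, total)
    tuple where total is None when no "/" suffix is given. Indexes are
    zero-based in the raw format, so (0, 3) means "part 1 of 3".
    """
    fields = []
    for field in value.split("."):
        field = field.strip()
        if not field:
            fields.append(None)  # field missing, as in the dirty Becker data
            continue
        if "/" in field:
            index, total = field.split("/")
            fields.append((int(index), int(total)))
        else:
            fields.append((int(field), None))
    return fields

# The Becker example from the feed: no season, no episode, first part of three
print(parse_xmltv_ns(" . . 0/3"))  # [None, None, (0, 3)]
```

A guide application can then decide how to display partial data instead of trusting the feed blindly.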
Playing Around
grep is a good way to scan through XML for fragments of interest, but if you want to process XMLTV programmatically you'll want heftier tools. One of my favorite tools for processing XML with minimal programming effort is XPath. The Jaxen project provides a good implementation in Java, my language of choice, but the open source community has provided a wealth of options in your pick of languages. If your only goal is to produce HTML, you could also consider using XSLT.
XPath packs a lot of information into a very small space, so mixing it with your procedural and OO code can make for compact, expressive code. It's also very easy to store XPath fragments in XML, databases, and property files, so you can make your program more configurable. Here's the path to find all programs:
//programme
and then all programs on CBS, using the channel ID for our area:
//programme[@channel='C2wcbs.zap2it.com']
and all programs with a rating of PG or G:
//rating[value='PG' or value='G']
Let's say you want to develop a "coming up" program schedule for a fan homepage for Becker. You might even be thinking of turning the fragment into a portlet to collect all those Becker fan pages. (I promise the code will be more realistic than the premise.) We can find all the Becker episode titles with a single line of XPath code:
//programme[@channel='C2wcbs.zap2it.com' and title='Becker']/sub-title/text()
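If you'd rather prototype without Java at all, the same query can be sketched against Python's standard library; xml.etree implements a small XPath subset (chained predicates instead of "and") that happens to cover this case. The fragment below is abbreviated from the feed examples above:

```python
import xml.etree.ElementTree as ET

# Abbreviated feed, shaped like the <programme> examples above
feed = """
<tv>
  <programme channel="C2wcbs.zap2it.com">
    <title>Becker</title>
    <sub-title>Small Wonder</sub-title>
  </programme>
  <programme channel="C2wcbs.zap2it.com">
    <title>CBS Morning News</title>
  </programme>
</tv>
"""

doc = ET.fromstring(feed)
# ElementTree has no "and" operator, but predicates chain to the same effect
becker = doc.findall(".//programme[@channel='C2wcbs.zap2it.com'][title='Becker']")
episodes = [p.findtext("sub-title") for p in becker]
print(episodes)  # ['Small Wonder']
```

The real feed is of course much larger, but the query itself does not change.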
Next we need source data. You can get the next 14 days' worth of data in a nightly cron job. After configuring your feed source, you can run the following to get a full two weeks of source data.
xmltv tv_grab_na --days 14 > feed.xml
Next we need to process it. The following sample Java code loads the file into a DOM Document and uses Jaxen to select and print the episode titles under the nodes. (Note this example excludes all the error handling, reasonable argument processing, and modular design you'd expect from production code.)
import java.io.File;
import java.util.List;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.jaxen.XPath;
import org.jaxen.dom.DocumentNavigator;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XMLTV {
    public static final void main(String[] args) throws Exception {
        // set up Java XML processing
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder docBuilder = dbf.newDocumentBuilder();

        // parse the feed
        File srcFile = new File(args[0]);
        Document doc = docBuilder.parse(srcFile);

        // get an instance of Jaxen's DOM handler
        DocumentNavigator navigator = DocumentNavigator.getInstance();

        // pre-compile the XPath expressions
        XPath channelXpath = navigator.parseXPath("/tv/channel");
        XPath beckerXpath = navigator.parseXPath("//programme[title='Becker']");

        // create a mapping from ID to display name
        Map channelMap = new HashMap();
        List channelNodes = channelXpath.selectNodes(doc);
        for (int ii = 0; ii < channelNodes.size(); ii++) {
            Element channelElem = (Element) channelNodes.get(ii);
            Element displayNameElem = (Element) channelElem.
                    getElementsByTagName("display-name").item(0);
            channelMap.put(channelElem.getAttribute("id"),
                    displayNameElem.getFirstChild());
        }

        // find the episode nodes!
        List nodeList = beckerXpath.selectNodes(doc);
        System.out.println(nodeList.size() + " matches found");
        for (int ii = 0; ii < nodeList.size(); ii++) {
            Element programElem = (Element) nodeList.get(ii);
            Element subTitleElem = (Element) programElem.
                    getElementsByTagName("sub-title").item(0);
            System.out.print("Episode title = " + subTitleElem.getFirstChild());
            System.out.println("'; channel = " + channelMap.get(
                    programElem.getAttribute("channel")));
        }
    }
}
The example does a little more than get the episode title. It first maps channel ID to channel name, then finds all the elements. This is something that you can do very quickly in Perl or Java but that might take a little more work in XSLT. Of course, emitting HTML based on output would be much easier in XSLT, arguing for a combination of the two -- creating a pipeline with an XMLTV producer, a Java processor, and then a stylesheet using Cocoon might be one way to do it.
For a real tool you might consider SAX2 despite the greater complexity, and implement page caching using a package like OSCache, or produce the HTML in a nightly batch as well. XMLTV creates a lot of data, and a web app that transforms even a large static file on each request has the potential to be very slow.
A Wish List
XMLTV is an evolving format; the version covered in this article is 0.5. A revised but convertible 0.6 format is on the way. For the future, I have a short wish list, all XML technical issues. (The content aspect already seems quite complete.)
- It would be nice to have a standard namespace so one could consider weaving XMLTV content together with other XML vocabularies.
- An XML schema would be useful here to allow stricter validation; DTD can't cover the typed data XMLTV carries around. It would also provide a structured way to make visible the great documentation hidden away in comments in the DTD now.
- The application itself emits a DOCTYPE with a relative location for the DTD; an HTTP URL might be more appropriate, especially since the application already requires access to the Web.
Wrapping Up
Good software can be used as a building block to make other software, and by this measure XMLTV -- both the de facto standard and the software -- is very useful. Although people's dreams of combining computers with televisions have yet to pan out, now there are solid mechanisms that let you combine Internet data with live video, and insert your own software in between. People have already done work combining closed captioning with full-text indexing to find video clips of interest; obviously a lot of work has been done to enable PVR functionality. But the really exciting element is not what has been done, but the convergence of interesting information, ease of access and processing with XML-based formats like XMLTV, with freely-available, powerful software. With those building blocks I am sure we will see more and more innovative combinations of television and computing in the future.
ProgrammerGuy123
Updating Entities in a Chunk Based System
ProgrammerGuy123 posted a topic in General and Gameplay Programming.
Game Design With Update and Draw Functions
ProgrammerGuy123 posted a topic in General and Gameplay Programming.
Hovercraft Physics
ProgrammerGuy123 replied to ProgrammerGuy123's topic in Math and Physics
Ah yes, thank you. Very informative links.
Hovercraft Physics
ProgrammerGuy123 posted a topic in Math and Physics
I'm trying to implement hovercraft-like physics into my game, but I need help. An important thing to note is that the hovercraft will be pretty high off the ground, unlike realistic hovercraft. After searching and asking questions I arrived at this source code: [CODE] double); } [/CODE] [url=""]this[/url] picture shows. How would I do that? Thanks.
Whats Wrong With My Perlin Noise Function?
ProgrammerGuy123 replied to ProgrammerGuy123's topic in General and Gameplay Programming
Thank you, and everyone else! I just needed the /100 and it works perfectly now!
Whats Wrong With My Perlin Noise Function?
ProgrammerGuy123 replied to ProgrammerGuy123's topic in General and Gameplay Programming
That is indeed how I'm calling the method:
[CODE]
byte[,] temp = new byte[height, width];
for(int x = 0; x < width; x++)
{
    byte value = (byte)((PerlinNoise2D((double)x) * 64) + 64);
    temp[value, x] = 1;
}
[/CODE]
This is exactly why I was using integers for everything in my previous attempts, and not decimals. Is there any way I could efficiently convert the integers into decimals and back into integers again? Or should I go back to the way I was doing it and have everything calculated with integers? Right now I'm multiplying the perlin value by 64 just for testing, as you can see, then adding the same value to get rid of any negatives. But like you said, I'm only going to get values based on integer boundaries, so what should I do?
Whats Wrong With My Perlin Noise Function?
ProgrammerGuy123 replied to ProgrammerGuy123's topic in General and Gameplay Programming
[quote name='FLeBlanc' timestamp='1338478848' post='4945015']
You have revised it to be even more wrong than before. By just calling Next without the seeding operation, you guarantee you get a different value for any given input point each time it is called. Random isn't really what you want to use here. You want a hash function that correlates an input, x, to an output, y. The hash needs to ensure that the hash for x+1 is in no way correlated to the hash for x, so that patterns do not appear in the output. jtippets linked to the reference implementation for Perlin noise. It might be a good idea to look and see how it does hashing, interpolation, etc... Also I note that you seem to be using a lot of integer coordinates and values. There is a problem with this. The algorithm you are using will generate pseudo-random values at integer coordinates, but since you never sample the "between" points (non-integer), the result is probably going to look a great deal like white noise.
[/quote]
Ok, so I tried to fix all of these problems, but the function is still acting up and I'm really confused about why, because I scrapped the whole thing to use double values instead of integers, and I even took the complete RNG from [url=""]this[/url] article about perlin noise. This solves the problem of having to use hash tables, because now the random number for x and the one for x+1 are completely different. However, now it seems to return negative values, which makes absolutely no sense considering I use his RNG, not .NET's. Also, the InterpolatedNoise method is almost exactly the same; I'm just not smoothing the values because I want to simplify it. Then there is the CosineInterpolate method. To me, logically, this must be the reason for my negative values, because as stated before I took the entire RNG from the article, so it can't be that.
The CosineInterpolate method was taken from [url=""]this[/url] article. None of the methods are my own; all the methods responsible for generating perlin values are from reliable sources. What is going on?
[CODE]
public double PerlinNoise2D(double x)
{
    int octaves = 1;
    double freq = 16;
    double amp = 1;
    double total = 0;
    for(int i = 0; i < octaves; i++)
    {
        total += InterpolateNoise(x * freq) * amp;
    }
    return total;
}

double InterpolateNoise(double x)
{
    int xInt = (int)x;
    double xFrac = x - xInt;
    double v1 = Noise(xInt);
    double v2 = Noise(xInt + 1);
    return CosineInterpolate(v1, v2, xFrac);
}

double Noise(int x)
{
    x = (x<<13) ^ x;
    return ( 1.0 - ((x * (x * x * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}

double CosineInterpolate(double a, double b, double x)
{
    double f = (1 - Math.Cos(x * Math.PI)) / 2;
    return (a * (1 - f) + b * f);
}
[/CODE]
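Not part of the thread, but for what it's worth: the hash in Noise() can be sanity-checked outside C#. A quick Python port (the final & 0x7fffffff keeps only the low 31 bits, which also matches what the C# integer wraparound leaves behind) shows that negative outputs are by design: the function maps integers into roughly (-1, 1], so values below zero are expected rather than a bug.

```python
def noise(x):
    # Same integer hash as the C# Noise() method quoted above
    x = (x << 13) ^ x
    n = (x * (x * x * 15731 + 789221) + 1376312589) & 0x7FFFFFFF
    return 1.0 - n / 1073741824.0

values = [noise(x) for x in range(100)]
print(min(values) < 0)          # True: negatives are part of the normal range
assert all(-1.0 < v <= 1.0 for v in values)
assert noise(42) == noise(42)   # deterministic: same input, same output
```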
Whats Wrong With My Perlin Noise Function?
ProgrammerGuy123 replied to ProgrammerGuy123's topic in General and Gameplay Programming
[quote name='jefferytitan' timestamp='1338441167' post='4944882']
If I recall correctly, each time around the main loop you're meant to double the frequency and decrease the amplitude. You seem to be keeping them constant.
[/quote]
The values were not changing because I only had one octave for simplicity, therefore there was no need for them to change.
[quote name='Olof Hedman' timestamp='1338448657' post='4944897']
You are also using Random() wrong. You should initialize it once, with a single seed of your choosing, and then just continuously call "next" to read out the random value.
[/quote]
This was a mistake on my part; I was thinking I needed a random value based on "x", which means if "x" is ever the same it should generate the same number. I have revised the code, I think, to fix this problem, but no luck. First I need just one octave to work, then I can generate more and add them up, but for now I'm focusing on one octave, as you will see in my code:
[CODE]
public int PerlinNoise2D(int x)
{
    int total = 0;
    int octaves = 1; //only one octave for simplicity
    for(int i = 0; i < octaves; i++)
    {
        int freq = 16; //16 for simplicity; I only have one octave so it doesn't need to change
        int amp = 1;   //1 for simplicity again ^^^
        total += InterpolateNoise(x * freq) * amp;
    }
    return total / octaves;
}

int InterpolateNoise(int x)
{
    int v1 = random.Next(64);
    int v2 = random.Next(64);
    return CosineInterpolate(v1, v2, x, 16); // the 16 is from the frequency
}

int CosineInterpolate(int a, int b, int x, int length)
{
    return (int)((1 + Math.Cos(3.1415f * x / length)) / 2 * (a - b) + b);
}
[/CODE]
Whats Wrong With My Perlin Noise Function?
ProgrammerGuy123 posted a topic in General and Gameplay Programming
I understand the concept behind perlin noise, but I can't understand why the method doesn't return perlin values:
[CODE]
public int PerlinNoise2D(int x)
{
    int freq = 4; //initFrequency;
    int amp = 1;  //initAmplitude;
    int total = 0;
    for(int i = 0; i < octaves; i++)
    {
        total += InterpolateNoise(x * freq) * amp;
    }
    return total;
}

int InterpolateNoise(int x)
{
    int v1 = new Random(x).Next(64); //this returns a pseudo-random number less than 64 with "x" as the seed
    int v2 = new Random(x + 1).Next(64);
    return CosineInterpolate(v1, v2, x, 4);
}

int CosineInterpolate(int a, int b, int x, int length)
{
    return (int)((1 + Math.Cos(3.1415f * x / length)) / 2 * (a - b) + b);
}
[/CODE]
It generates repeating values that are not perlin, and as explained before I need help fixing it. Thanks.
2D Perlin Noise
ProgrammerGuy123 replied to ProgrammerGuy123's topic in General and Gameplay Programming
[quote name='JTippetts' timestamp='1336344285' post='4937893']
You interpolate the top-left and top-right corner values based on the x coordinate, interpolate the bottom-left and bottom-right corners based on the x coordinate, then interpolate these two intermediate values based on the y coordinate.
[/quote]
What do you mean by "corner values"? The octaves are made up of multiple squares. Am I getting the corner values of those squares? Wouldn't they be the same value then? Could you explain what you mean by this?
2D Perlin Noise
ProgrammerGuy123 posted a topic in General and Gameplay Programming
I'm trying to create perlin noise, but I'm having trouble understanding how interpolation works. I understand how to do it in a single dimension but can't wrap my head around it in two. Right now I have this for an octave: [img][/img] Obviously this is not coherent, and I need to interpolate to get something similar to this: [img][/img] Most sources such as [url=""]this[/url] explain how to do it in one dimension but don't in two. Quote: "You can, of course, do the same in 2 dimensions." So could someone explain the concept of how [u]linear[/u] interpolation works in two dimensions? Thanks.
Sean Winard commented on FLINK-4977:
------------------------------------
It looks like changing the access modifier of the enum can make this work. In the case of
my small example program, you can make the enum {{public}} and it will work. In my real program,
I have a separate top-level enum class which is package-private, which is then in a POJO which
is streamed. If I change the enum to {{public}}, then it will work fine. Note that the access
modifier of regular classes does not seem to affect their ability to be serialized properly,
in my testing. And again note that the proposed change for the EnumSerializer seems to work
for enums with any access modifier.
> Enum serialization does not work properly
> -----------------------------------------
>
> Key: FLINK-4977
> URL:
> Project: Flink
> Issue Type: Bug
> Affects Versions: 1.1.3
> Environment: Java SE 1.8.0_91
> Ubuntu 14.04.4 LTS (trusty)
> Reporter: Sean Winard
> Priority: Minor
>
> Enums produce serialization failures whether they are by themselves or part of a POJO
in the stream. I've tried running in IntelliJ IDEA and also via {{flink run}}. Here is a small
program to reproduce:
> {code:java}
> package org.apache.flink.testenum;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> public class TestEnumStream {
> private enum MyEnum {
> NONE, SOMETHING, EVERYTHING
> }
> public static void main(String[] args) throws Exception {
> final StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
> environment.setParallelism(1);
> environment.fromElements(MyEnum.NONE, MyEnum.SOMETHING, MyEnum.EVERYTHING)
> .addSink(x -> System.err.println(x));
> environment.execute("TestEnumStream");
> }
> }
> {code}
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: Cannot access the constants of
the enum org.apache.flink.testenum.TestEnumStream$MyEnum
> at org.apache.flink.api.common.typeutils.base.EnumSerializer.createValues(EnumSerializer.java:132)
> at org.apache.flink.api.common.typeutils.base.EnumSerializer.<init>(EnumSerializer.java:43)
> at org.apache.flink.api.java.typeutils.EnumTypeInfo.createSerializer(EnumTypeInfo.java:101)
> at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.fromCollection(StreamExecutionEnvironment.java:773)
> at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.fromElements(StreamExecutionEnvironment.java:674)
> {noformat}
> I took a look at that line in EnumSerializer.java and swapped out the reflection on the
"values" method for the simpler `enumClass.getEnumConstants()`, and that seems to work after
I install my custom flink-core jar. I believe this is because []
specifically states you cannot reflect on the "values" method since it is implicitly generated
at compile time.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Set up the namespace with full DES authentication, even if the domains will operate in NIS-compatibility mode. Use the NIS+ scripts described in Solaris Naming Setup and Configuration Guide to set up your namespace; see Solaris Naming Administration Guide for more explanation of NIS+ structure and concepts. Then perform the following steps:
Set up the root domain.
If you are going to run the root domain in NIS-compatibility mode, use nisserver. (If you choose not to use the setup scripts, use the -Y flag of rpc.nisd and nissetup.)
Populate the root domain tables.
You can use nispopulate to transfer information from NIS maps or text files. Of course, you can also create entries one at a time with nistbladm or nisaddent.
Set up clients of the root domain.
Set up a few clients in the root domain so that you can properly test its operation. Use full DES authentication. Some of these client machines will later be converted to root replica servers and some will serve as workstations for the administrators who support the root domain. NIS+ servers should never be an individual's workstation.
Create or convert site-specific NIS+ tables.
If the new NIS+ root domain requires custom, site-specific NIS+ tables, create them, with nistbladm and transfer the NIS data into them with nisaddent.
Add administrators to root domain groups.
Remember, the administrators must have LOCAL and DES credentials (use nisaddcred). Their workstations should be root domain clients and their root identities should also be NIS+ clients with DES credentials.
Update the sendmailvars table, if necessary.
If your email environment has changed as a result of the new domain structure, populate the root domain's sendmailvars table with the new entries.
Set up root domain replicas.
First convert the clients into servers (use rpc.nisd with -Y for NIS compatibility and also use -B if you want DNS forwarding), then associate the servers with the root domain by running nisserver -R.
For NIS compatibility, run rpc.nisd with the -Y option and edit the /etc/init.d/rpc file to remove the comment symbol (#) from the EMULYP line. For DNS forwarding, use the -B option with rpc.nisd.
Test the root domain's operation.
Develop a set of installation-specific test routines to verify a client's functioning after the switch to NIS+. This will speed the transition work and reduce complaints. You should operate this domain for about a week before you begin converting other users to NIS+.
Set up the remainder of the namespace.
Do not convert any more clients to NIS+, but go ahead and set up all the other domains beneath the root domain. This includes setting up their master and replica servers. Test each new domain as thoroughly as you tested the root domain until you are sure your configurations and scripts work properly.
Test the operation of the namespace.
Test all operational procedures for maintenance, backup, recovery, and other scenarios. Test the information-sharing process between all domains in the namespace. Do not proceed to Phase II until the entire NIS+ operational environment has been verified.
Customize the security configuration of the NIS+ domains.
This may not be necessary if everything is working well; but if you want to protect some information from unauthorized access, you can change the default permissions of NIS+ tables so that even NIS clients are unable to access them. You can also rearrange the membership of NIS+ groups and the permissions of NIS+ structural objects to align with administrative responsibilities.
Problem with the toString() method and a "result" field
Given this protocol structure, with a field with name="result":
<struct name="styx_test">
    <var name="result" type="int8" />
</struct>
The corresponding generated Java class is the following:
public class StyxTest implements ProtocolObject, Visitable {
    ...
    public byte result;
    ...
    public String toString() {
        StringBuilder result = new StringBuilder("StyxTest :");
        result.append(" result["+result+"]");
        return result.toString();
    }
}
The appended variable is not the correct one. It should be something like "this.result"
Good catch! Feel free to submit a pull request with a fix.. :)
Done.
Solved and merged
The basic idea of this program is to recursively compute the heights of the left and right subtrees. If the two returned heights are both non-negative and differ by at most 1, return the larger one. Otherwise, return -1, which indicates that some pair of subtree heights differs by more than 1.
```java
/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode(int x) { val = x; }
 * }
 */
public class Solution {
    public boolean isBalanced(TreeNode root) {
        return compareHeight(root, 0) >= 0;
    }

    public int compareHeight(TreeNode node, int h) {
        if (node == null) return h;
        int left_h = compareHeight(node.left, h + 1);
        int right_h = compareHeight(node.right, h + 1);
        return (left_h >= 0 && right_h >= 0 && Math.abs(left_h - right_h) <= 1)
                ? Math.max(left_h, right_h) : -1;
    }
}
```
TypeScript JSX syntax as a typed DSL
TypeScript has support for JSX and TypeScript’s compiler provides really nice tools to customize how JSX will be compiled and ultimately gives the ability to write DSL over JSX that will be type-checked on compile time. This article is exactly about that — how to implement a DSL over JSX.
📦 Repository with complete working example.
As an example of a JSX DSL, I won’t use anything related to web or React to give you a hint that TypeScript’s JSX is not limited anyhow to React components or rendering. I’ll implement DSL for type-checked rich Slack message templates.
For example, this is a Slack message template constructed with objects.
It looks OK, but here is a thing that we can improve — readability. For example, take a look at the `color` property in the attachments, or `title_link` along with those `_` (italics in Slack) in the `text`. They mess with the content and make it harder to distinguish what's important and what's not. Our template DSL can solve this problem.
The next example describes exactly the same message, but with the DSL we are going to implement.
The second example is much better — a clear separation of content from styling.
🔮✨ Implementing the DSL
⚙️ Project config
First of all, we have to enable JSX syntax in TypeScript project and tell the compiler that we don’t use React and need JSX to be compiled differently.
The option `"jsx": "react"` enables support for JSX syntax in the project and compiles all JSX elements into calls to `React.createElement`. With the `"jsxFactory"` option we then tell the compiler that we don't use React, and that it should compile JSX tags into calls to a `Template.create` function.
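A `compilerOptions` fragment matching that description might look like this (file paths and other options omitted):

```json
{
  "compilerOptions": {
    "jsx": "react",
    "jsxFactory": "Template.create"
  }
}
```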
And now a JSX expression such as `<message>Text</message>` roughly compiles to `Template.create("message", null, "Text")`.
🏷 JSX tags
Now the compiler knows which JavaScript function calls JSX syntax should be compiled to, and it's time to actually define the DSL tags.
For this purpose, we will use a really cool feature of TypeScript: in-project namespace definitions. In fact, we need to define the name and attributes of each JSX tag, and to do so we define a namespace `JSX` with an interface `IntrinsicElements`; TypeScript's compiler and language services will pick them up and use them for type-checking and autocompletion.
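The declaration that belongs here is elided in this copy; its general shape (tag names taken from the article, attribute sets assumed for illustration) would be:

```typescript
declare namespace JSX {
  interface IntrinsicElements {
    // tag name on the left, attribute definition on the right
    message: {};
    i: {};
    attachment: { color?: string };
    title: { link?: string };
  }
}
```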
Here we defined all JSX tags from the example with all their attributes. The name of a key in the interface is the actual name of the tag, and the right side is the definition of its attributes. Note that some tags, like `i` and `message`, don't have any attributes, while others have optional attributes.
🛠 Template.create
Now it’s time to define factory function for the JSX tags. Remember that
Template.create from the
tsconfig.json? Now it’s time to implement it.
Aha, now we have a basic definition of `Template`.
Tags that just add styling to the text, like the `i` tag, are easy: we just return their content as a string wrapped in `_`. But with more complex tags it's not that obvious what to do. In fact, this part is where I spent most of my time, trying to come up with a good solution. So what's the problem?
The problem right now is that the TypeScript compiler infers the type of `<message>Text</message>` to be `any`, which isn't close to the goal of a type-checked DSL. And the thing is, there is no way to explain to the TypeScript compiler the result type for each tag, because of a limitation of JSX in general: all tags produce the same type (which works for React, where everything is a `React.Component`).
So the solution I came up with is really simple: describe some common type for each tag and use it as an intermediate state. The good news is that TypeScript allows defining a type that will be used for all tags.
We just added an `Element` type, and TypeScript now infers the type of each JSX block to be `Element`. This is the default behavior of the TypeScript compiler on the JSX namespace: it uses the interface named `Element` as the type for each JSX block.
Now we can go back to `Template` and finish its implementation to return objects matching this interface.
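Putting the pieces together, here is a runnable sketch of what such a factory can look like. It is illustrative, not the article's original code: only the `i` and `message` tags are handled, `toMessage()` is an assumed method name, and `MessageElement` stands in for the article's JSX `Element` interface.

```typescript
interface SlackMessage {
  text: string;
}

// Stands in for the article's JSX `Element` interface.
interface MessageElement {
  toMessage(): SlackMessage;
}

class Template {
  // The compiler emits a call to this for every JSX tag (see jsxFactory).
  static create(tag: string, attrs: object | null, ...children: any[]): any {
    const text = children.map(c => String(c)).join("");
    switch (tag) {
      case "i":
        // Slack renders _text_ in italics.
        return "_" + text + "_";
      case "message":
        return { toMessage: () => ({ text: text }) };
      default:
        return text;
    }
  }
}

const el: MessageElement = Template.create(
  "message", null, "Hello ", Template.create("i", null, "world")
);
```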
This is it: now we can compile the example, and it will produce a correct Slack message object.
I’m pretty sure TypeScript guys didn’t intent JSX syntax to be used this way, but this seems to be useful or at least very interesting to play around.
📦 Repository with complete working example. | https://medium.com/@dempfi/typescript-jsx-syntax-as-typed-dsl-97c052b825c8 | CC-MAIN-2018-05 | en | refinedweb |
I am pleased to announce that the new version of nScale, 0.16, has been released today! This new release includes several new features and improvements, including: AWS Auto Scaling groups support, deployment of multiple versions of the same microservice, and a clean start/stop cycle. Update it with:
npm install nscale -g
How nScale Auto Scaling support works
Auto Scaling is one of the most powerful, but complex, features in AWS. Thanks to Auto Scaling, AWS can automatically adjust the number of instances depending on the traffic your application is facing. However, setting up and configuring these new instances is tricky, and different solutions have been developed along the years. The problem is that AWS spins up a freshly-minted instance based on an AWS Machine Image (AMI) you specify. In order to deploy a new version of your application, one has to create a new AMI, and change it in the Auto Scaling group. nScale and Docker simplifies all that flow, thanks to a clever usage of the AWS SDK for Node.js.
nScale creates a few things for you: an Auto Scaling group, a launch configuration, a Simple Notification Service (SNS) topic, and a Simple Queue Service (SQS) queue. These new entities are added to one already supported: Elastic Load Balancers, Security Groups, and bare instances.
Flow diagram of nScale Auto Scaling support
The diagram above shows the message flow that allows us to support Auto Scaling: nScale sets up the Auto Scaling group to publish new Auto Scaling events into the SNS topic, and connect the SQS queue to receive the notification from the SNS topic. Then, it starts listening for new messages on the SQS queue. When a new Auto Scaling event occurs, nScale performs a fix cycle: it redeploys the current version. Thanks to the homeostasis property, nScale detects that the instances run no containers, and it deploys the containers you have specified.
Event, Analyze, Fix: The nScale Auto Scaling loop
nScale is becoming more intelligent, and in fact it is now similar to a robot. My background is on the Internet of Things, and I studied the basics of automation control: the system works as a loop, where some events happen in the real (or virtual!) world, then nScale detects these events, and finally acts on them. So, nScale will soon be part of Skynet. :-)
Setting up an Auto Scaling group with nScale
You can start using Auto Scaling groups right now inside your nScale system, you just need to add some definitions (if you are not familiar with nScale, check out the docs):
```js
exports.webAutoscaling = {
  type: 'aws-autoscaling',
  specific: {
    ImageId: 'ami-xxxxxxxx', // AMI with Docker installed
    MinSize: 2,
    MaxSize: 5
  }
};

exports.autoMachine = {
  type: 'blank-container'
};
```
And then use them in your system.js:
```js
exports.name = 'sudc-system';
exports.namespace = 'sudc';
exports.id = '62999e58-66a0-4e50-a870-f2673acf6c79';
exports.topology = {
  autoscaling: {
    awsWebElb: [{
      awsWebSg: [{
        webAutoscaling: {
          autoMachine: ['web', 'doc', 'hist', 'real']
        }
      }]
    }]
  }
};
```
You can use as many Auto Scaling groups you want, and nScale will manage them accordingly.
nScale leaves you in charge, so it does not enable any scaling policies for you. However, you can enable them very easily from the AWS console. In order to enable an Auto Scaling policy, you need to associate it with an alarm, associated with Auto Scaling metrics, ELB metrics, or SQS metrics. For Auto Scaling metrics, you can do this editing the scaling policy in the AWS console, by clicking action and then edit. If you want to use ELB metrics of SQS metrics, you will have to first create an alarm from the Cloudwatch interface and then set the Auto Scaling policy to react on it.
Then, you need to click on the create new alarm and fill in the relevant details.
You should repeat these settings for both the nscale-scaling-down and nscale-scaling-up policies.
The development of this feature was lead by me (Matteo Collina), with a great help from Luca Lanziani, without which this would not be possible. If you want to help contributing, check out this guide.
New startup process
One of the major culprits of nScale was the messy start/stop commands, which might leave some processes running even in case of stop, if you tried out the development workflow.
The start/stop commands were entirely rewritten by Darragh Hayes, and he also added a fantastic logo on startup. You can now start nScale via an ‘nscale start’ and stop it with ‘nscale stop’. Finally, he added an nscale status command.
The new nScale start/stop/status commands
Running two versions of the same service
nScale now fully supports running two versions of the same service, one alongside the other. The only catch is that you need to specify two different names in the definitions, and add the ‘checkoutDir: “path-in-workspace”’ inside the specific block, like so:
```js
exports.doc = {
  type: 'docker',
  override: {
    process: {
      type: 'process'
    }
  },
  specific: {
    repository_url: '',
    processBuild: 'npm install',
    checkoutDir: 'doc-v1',
    execute: {
      args: '-p 9002:9002 -d',
      process: 'srv/doc-srv.js'
    }
  }
};
```
This feature was ideated by Richard Rodger and Peter Elger.
Improved development workflow
The development workflow using bare operating system processes has been drastically improved, and we added support for rolling back and forward, as well as analysis, and some more clarity in the codebase. Peter Elger championed this work in the last few days at an impressive pace!
Breaking changes
Unfortunately, I have to announce that we introduced a breaking change in 0.16. Due to the way we define containers, e.g. web-abc1234, we assumed that on AWS these names will not clash with each other, so you could deploy more than one environment per region. Unfortunately, this is not true for the Elastic Load Balancer. Thus, we had to change the algorithm for generating that hash so that it includes the environment too. In practice, this means on AWS nScale will recreate the security groups, ELB and instances that you previously created.
Luca Lanziani spotted this bug, and proposed a change.
About nScale
nScale is a toolkit that makes deployment and management of distributed software systems easy. Try it on your software system today and let us know how you get on. Visit nscale.nearform.com to download and install. If you want to get involved in nScale development, please check out the new guide.
nScale represents nearForm’s ongoing commitment to helping node become a mainstream technology and open source is one of the ways that we support that.
If you’d like to know more about nearForm, get to know us here and the team here.
Want to work for nearForm? We’re hiring.
Phone +353-1-514 3545 | https://www.nearform.com/blog/nscale-0-16-now-deploying-aws-auto-scaling-group/ | CC-MAIN-2017-39 | en | refinedweb |
secure-filters is a collection of Output Sanitization functions ("filters") to provide protection against Cross-Site Scripting (XSS) and other injection attacks.
```
npm install --save secure-filters
```
- `html(value)` - Sanitizes HTML contexts using entity-encoding.
- `js(value)` - Sanitizes JavaScript string contexts using backslash-encoding.
- `jsObj(value)` - Sanitizes JavaScript literals (numbers, strings, booleans, arrays, and objects) for inclusion in an HTML script context.
- `jsAttr(value)` - Sanitizes JavaScript string contexts in an HTML attribute using a combination of entity- and backslash-encoding.
- `uri(value)` - Sanitizes URI contexts using percent-encoding.
- `css(value)` - Sanitizes CSS contexts using backslash-encoding.
- `style(value)` - Sanitizes CSS contexts in an HTML `style` attribute.
XSS is the #3 most critical security flaw affecting web applications for 2013, as determined by a broad consensus among OWASP members.
To effectively combat XSS, you must combine Input Validation with Output Sanitization. Using one or the other is not sufficient; you must apply both! Also, simple validations like string length aren't as effective; it's much safer to use whitelist-based validation.
The generally accepted flow in preventing XSS looks like this:
Whichever Input Validation and Output Sanitization modules you end up using, please review the code carefully and apply your own professional paranoia. Trust, but verify.
secure-filters doesn't deal with Input Validation, only Output Sanitization.
You can roll your own input validation or you can use an existing module. Either way, there are many important rules to follow.
This Stack-Overflow thread lists several input validation options specific to node.js.
One of those options is node-validator (NPM, github). It provides an impressive list of chainable validators. Validator also has a 3rd party express-validate middleware module for use in the popular Express node.js server.
Input Validation can be specialized to the data format. For example, the jsonschema module (NPM, github) can be useful for providing strict validation of JSON documents (e.g. bodies in HTTP).
Output Sanitization (also known as Output Filtering) is what secure-filters is responsible for.
In order to properly santize output you need to be sensitive to the context in which the data is being output. For example, if you want to place text in an HTML document, you should HTML-escape the text.
But what about CSS or JavaScript contexts? You can't use the HTML-escape filter; a different escaping method is necessary. If the filter doesn't match the context, it's possible for browsers to misinterpret the result, which can lead to XSS attacks!
secure-filters aims to provide the filter functions necessary to do this type of context-sensitive sanitization.
"Sanitization" is an overloaded term and can be confused with other security techniques.
For example, if you need to store and sanitize HTML, you'd want to parse, validate and sanitize that HTML in one hybridized step. There are tools like Google Caja to do HTML sanitization. The sanitizer module packages-up Caja for node.js/CommonJS usage.
secure-filters can be used with EJS or as normal functions.
```
npm install --save secure-filters
```
⚠️ CAUTION: If the `Content-Type` HTTP header for your document, or the `<meta charset="">` tag (or equivalent) specifies a non-UTF-8 encoding, these filters may not provide adequate protection! Some browsers can treat some characters at Unicode code-points `0x00A0` and above as if they were `<` if the encoding is not set to UTF-8!
To configure EJS, simply wrap your `require('ejs')` call. This will import the filters using the names pre-defined by this module.

```js
var ejs = require('secure-filters').configure(require('ejs'));
```
Then, within an EJS template:
```html
<!-- variable names are illustrative -->
Welcome <%-: userName |html%>
<a href="javascript:activate('<%-: userId |jsAttr%>')">Click here to activate</a>
```
There's a handy cheat sheet showing all the filters in EJS syntax.
Rather than importing the pre-defined names we've chosen, here are some other ways to integrate secure-filters with EJS.
As of EJS 0.8.4, you can replace the `escape()` function during template compilation. This allows `<%= %>` to be safer than the default.

```js
var escapeHTML = require('secure-filters').html;
var templateFn = ejs.compile(str, { escape: escapeHTML });
```
It's possible that the filter names pre-defined by this module interfere with existing filters that you've written. Or, you may wish to import a sub-set of the filters. In which case, you can simply assign properties to the `ejs.filters` object.

```js
var secureFilters = require('secure-filters');
var ejs = require('ejs');
ejs.filters.secJS = secureFilters.js;
```
Or, you can namespace using a parametric style, similar to how EJS' pre-defined `get:'prop'` filter works:

```js
var secureFilters = require('secure-filters');
var ejs = require('ejs');
ejs.filters.sec = function(val, context) {
  return secureFilters[context](val);
};
```
The filter functions are just regular functions and can be used outside of EJS.

```js
var htmlEscape = require('secure-filters').html;
var escaped = htmlEscape('<script>');
assert(escaped.indexOf('<') === -1);
```
You can simply include the `lib/secure-filters.js` file itself to get started. We've also added an AMD module definition to `secure-filters.js` for use in Require.js and other AMD frameworks. We don't pre-define a name, but suggest that you use 'secure-filters'.
By convention in the Contexts below, `USERINPUT` should be replaced with the output of the filter function.
Sanitizes output for HTML element and attribute contexts using entity-encoding.
Contexts:

```html
Hello, USERINPUT
```
⚠️ CAUTION: this is not the correct encoding for embedding the contents of a `<script>` or `<style>` block (plus other blocks that cannot have entity-encoded characters).
Any character not matched by `/[\t\n\v\f\r ,\.0-9A-Z_a-z\-\u00A0-\uFFFF]/` is replaced with an HTML entity. Additionally, characters matched by `/[\x00-\x08\x0B\x0C\x0E-\x1F\x7F-\x9F]/` are converted to spaces to avoid browser quirks that interpret these as non-characters.
<%= %>
You might be asking "Why provide `html(var)`? EJS already does HTML escaping!". Prior to 0.8.5, EJS doesn't escape the `'` (apostrophe) character when using the `<%= %>` syntax. This can lead to XSS accidents! Consider the template:
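The template snippet itself is elided from this copy; one of the shape the text describes (the tag and attribute are illustrative) would be:

```html
<img src='<%= userAvatar %>'>
```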
When given user input `x' onerror='alert(1)`, the above gets rendered as:
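The elided rendering would then look along these lines (element and attribute illustrative), with the injected attribute live:

```html
<img src='x' onerror='alert(1)'>
```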
Which will cause the `onerror` JavaScript to run. Using this module's filter should prevent this.
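An equivalent template using the filter (again with illustrative markup) reads:

```html
<img src='<%-: userAvatar |html%>'>
```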
When given user input `x' onerror='alert(1)`, the above gets rendered as:
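The elided safe rendering would be along these lines: the apostrophes come out entity-encoded, so the attribute cannot be closed early (exact entities are the filter's choice):

```html
<img src='x&#39; onerror=&#39;alert(1)'>
```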
Which will not run the attacking script.
Sanitizes output for JavaScript string contexts using backslash-encoding.
⚠️ CAUTION: you need to always put quotes around the embedded value; don't assume that it's a bare int/float/boolean constant!
⚠️ CAUTION: this is not the correct encoding for the entire contents of a `<script>` block! You need to sanitize each variable in turn.
Any character not matched by `/[,\-\.0-9A-Z_a-z]/` is escaped as `\xHH` or `\uHHHH`, where `H` is a hexadecimal digit. The shorter `\x` form is used for characters in the 7-bit ASCII range (i.e. code point <= 0x7F).
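The rule above is mechanical enough to re-implement in a few lines. This is an illustration of the documented behaviour, not the module's actual source (uppercase hex is assumed, matching the `\x5D\x5D\x3E` example further below):

```javascript
// Illustrative re-implementation of the documented js() rule:
// every character outside [,\-\.0-9A-Z_a-z] becomes \xHH
// (code point <= 0x7F) or \uHHHH.
function jsEscape(str) {
  return String(str).replace(/[^,\-\.0-9A-Z_a-z]/g, function (ch) {
    var code = ch.charCodeAt(0);
    if (code <= 0x7f) {
      return "\\x" + ("00" + code.toString(16).toUpperCase()).slice(-2);
    }
    return "\\u" + ("0000" + code.toString(16).toUpperCase()).slice(-4);
  });
}
```

For example, `jsEscape("it's")` yields `it\x27s`, which is safe inside a single- or double-quoted JavaScript string.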
Sanitizes output for a JSON string in an HTML script context.
This function escapes certain characters within a JSON string. Any character not matched by `/[",\-\.0-9:A-Z\[\\\]_a-z{}]/` is escaped consistent with the `js(value)` escaping above. Additionally, the sub-string `]]>` is encoded as `\x5D\x5D\x3E` to prevent breaking out of CDATA context.
Because `<` and `>` are not matched characters, they get encoded as `\x3C` and `\x3E`, respectively. This prevents breaking out of a surrounding HTML `<script>` context.
For example, with a JSON string like `'{"username":"Albert </script><script>alert(\"Pwnerton\")"}'`, `json()` gives output:
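The example output is missing from this copy. Deriving it mechanically from the allow-list above (space, `<`, `/`, `>`, `(` and `)` all fall outside it, while `"` and `\` are allowed), the result should be close to:

```
{"username":"Albert\x20\x3C\x2Fscript\x3E\x3Cscript\x3Ealert\x28\"Pwnerton\"\x29"}
```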
Sanitizes output for a JavaScript literal in an HTML script context.
This function encodes the object with `JSON.stringify()`, then escapes using `json()` as detailed above.
For example, with a literal object like `{username:'Albert </script><script>alert("Pwnerton")'}`, `jsObj()` gives output:
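The output example is elided here as well. Since `JSON.stringify` turns the literal into the same JSON text as in the previous section, the mechanically derived result is the same:

```
{"username":"Albert\x20\x3C\x2Fscript\x3E\x3Cscript\x3Ealert\x28\"Pwnerton\"\x29"}
```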
Article: JSON isn't a JavaScript Subset.
JSON is almost a subset of JavaScript, but for two characters: `LINE SEPARATOR` (U+2028) and `PARAGRAPH SEPARATOR` (U+2029). These two characters can't legally appear in JavaScript strings and must be escaped.
Due to the ambiguity of these and other Unicode whitespace characters, secure-filters will backslash encode U+2028 as `\u2028`, U+2029 as `\u2029`, etc.
Sanitizes output for embedded HTML scripting attributes using a special combination of backslash- and entity-encoding.
Contexts:

```html
<!-- handler names are illustrative -->
<a href="javascript:activate('USERINPUT')">click to activate</a>
<button onclick="display('USERINPUT')">Click To Display</button>
```
The string `<ha>, 'ha', "ha"` is escaped to `&lt;ha&gt;, \&#39;ha\&#39;, \&quot;ha\&quot;`. Note the backslashes before the apostrophe and quote entities.
Sanitizes output in URI component contexts by using percent-encoding.
The ranges 0-9, A-Z, a-z, plus hyphen, dot and underscore (`-._`) are preserved. Every other character is converted to UTF-8, then output as `%XX` percent-encoded octets, where `X` is an uppercase hexadecimal digit.
Note that if composing a URL, the entire result should ideally be HTML-escaped before insertion into HTML. However, since Percent-encoding is also HTML-safe, it may be sufficient to just URI-encode the untrusted components if you know the rest is application-supplied.
Sanitizes output in CSS contexts by using backslash encoding.
⚠️ CAUTION: this is not the correct filter for a `style=""` attribute; use the `style(value)` filter instead!
⚠️ CAUTION: even though this module prevents breaking out of CSS context, it is still somewhat risky to allow user-controlled input into CSS and `<style>` blocks. Be sure to combine CSS escaping with whitelist-based input sanitization! Here's a small sampling of what's possible:
The ranges a-z, A-Z, 0-9, plus Unicode U+10000 and higher, are preserved. All other characters are encoded as `\h `, where `h` is one or more lowercase hexadecimal digits, including the trailing space.
Confusingly, CSS allows `NO-BREAK SPACE` (U+00A0) to be used in an identifier. Because of this confusion, it's possible browsers treat it as whitespace, and so secure-filters escapes it.
Since the behaviour of NUL in CSS2.1 is undefined, it is replaced with `\fffd`, `REPLACEMENT CHARACTER` (U+FFFD).
For example, the string `<wow>` becomes `\3c wow\3e ` (note the trailing space).
Encodes values for safe embedding in HTML style attribute context.
USAGE: all instances of `USERINPUT` should be sanitized by this function.
⚠️ CAUTION even though this module prevents breaking out of style-attribute context, it is still somewhat risky to allow user-controlled input (see caveats on css above). Be sure to combine with whitelist-based input sanitization!
Encodes the value first as in the `css()` filter, then HTML entity-encodes the result.
For example, the string `<wow>` becomes `\3c wow\3e`.
Please see the Contribution Guide.
Support is provided via github issues.
For responsible disclosures, email Salesforce Security.
This release changes the behavior of secure-filters, but should be backwards-compatible with 1.0.5.
- The `js`, `jsObj` and `jsAttr` filters now use a strict allow-list for characters in strings. This is safer, but does increase the size of these strings slightly. Compliant JSON and JavaScript parsers will not be affected negatively by this change.
- The documentation for `jsAttr` was incorrect. It previously stated that `<ha>, 'ha', "ha"` was escaped to `<ha>, \'ha\', \"ha\"`.
© 2014 salesforce.com
Licensed under the BSD 3-clause license. | https://www.npmjs.com/package/secure-filters | CC-MAIN-2017-39 | en | refinedweb |
UPDATE 15 February 2014: BoostPro is no more. You may find this alternative post useful in setting up the Boost libraries that require separate compilation.
A number of Windows-based Boost libraries are not “header-only” and require that you must get them compiled. One way is to compile them yourself. A possibly easier way is to do this via the prebuilt installer packages from BoostPro.
Say for example you wish to use the Boost serialize facilities in your program:
```cpp
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
```
Until you have installed the serialize package you will probably get the following error message when trying to use the above headers:
```
fatal error C1083: Cannot open include file: 'boost/archive/text_oarchive.hpp': No such file or directory
```
To resolve this using BoostPro, go to the downloads page and obtain the installer that is appropriate for the version of Boost you are using, in my case version 1.46.1:
Once downloaded, accept the licence agreements etc. and work through the wizard, eventually selecting the default variants:
And then choose which of the Boost components you want to be installed:
And then finish the remainder of the installation:
The last stage in getting the Boost serialize to work is to then set the Visual Studio properties, in the form of the additional include and library directories. In Visual Studio 2010 and onwards, these can be set globally in the VC++ Directories settings in the configuration properties: | http://www.technical-recipes.com/2012/using-boostpro-to-install-boost-library-packages/ | CC-MAIN-2017-39 | en | refinedweb |
Analyzing my Spotify Music Library With Jupyter And a Bit of Pandas
Or the quest to cleanup my saved songs to play everything on random.
Are you a Spotify user? By now, that’s the way I consume most of my music.
Recently, I have been developing a web app which uses the Spotify API in part for a series of posts about developing a small real-life application and web development in general. The app is written in Python and Flask for displaying full-sized album art for the songs which are playing on a user’s Spotify account.
While playing around with the Spotify web API, and building a login flow in the app, it was pretty easy to get an access token for my account with all kinds of permissions for access to my data. Among others, it’s good for everything needed to analyze the heck out of your whole music library - information about songs and albums in particular.
The Motive
My music library on Spotify is quite big. What I like to do is take all of my "saved" songs and put them on shuffle play. I'm not sure the library is meant for such use; the way songs get "saved" gets in the way a bit.
To listen to songs in offline mode, you can “download” them on a particular device. Doing so also “saves” them, thus adding the complete album to your music library. You can get around that with playlists, but that’s a few more clicks.
As I’m in the habit of giving complete albums a listen every once in a while, and forgetting about the stuff I download-saved, my music library is now a bit messy. This leads to song playing which are not as enjoyable without the album context. Oh the pain.
The Cure
Having the amazing data crunching powers of the forementioned TOKEN, I thought it’d be cool to take a look at what albums I have in my library. And maybe clean up obvious mis-saves, but also to see what stuff I really seem to like. Here’s the plan:
- Get data on all songs in my library (especially which album they belong to)
- Get data for each album referenced (total number of songs, how many of those are in my library)
- Do something with the information (aka cleanup)
There are further possible stuff one could do with the data, for example to handcraft an own simple artist/album recommendation script, but that’s for anonther time.
Before we Proceed
A word of caution:
The code below was never meant to be reliable or presentable. It doesn’t mean it’s bad in general. I wrote it to satisfy my own curiosity, and with one person in mind who would ever see it: yours truly. The samples are not as polished as other specimen you will find elsewhere, and might not be the best learning resource regarding best practices.
That said, it works, does what it’s supposed to do given the context and priorities of and probably looks very similar to many real-world projects, where results and time constraints are significant factors.
The whole code is integrated in the article, but if you want to get a copy of the jupyter notebook (sans secrets and my data), just drop me your email in the form at the bottom of the page, and let me know that you’d like to have it by responding to the mail :)
Alright! Let’s jump into it!
The Setup Step by Step
Here’s what I usually do when starting an exploratory data project with Python.
When tinkering with data years ago, I used to rely on running tiny Python scripts and saving intermediate results. Then I got introduced to ipython notebooks and never looked back. By now, jupyter notebooks are what you should use. It’s the perfect tool for working with Python and data when you’re trying stuff out and playing with an early-stage data project.
With Python projects, I like setting up a virtual environments for each one. This makes it easy to isolate dependencies, install EXACTLY what is needed in the version which is needed and make sure that the code runs out of the box.
So I went ahead and created a new virtual environment using virtualenv wrapper. It’s really convenient, if you don’t know it and like Python, check it out! In the following, code blocks starting with $ is what happens in a terminal, while everything else is Python code.
$ mkvirtualenv spotify
Jupyter is a python module, and can be installed using pip:
$ pip install jupyter
For data-tastic Python fun, I usually install a few modules by default, to make sure I can do basic data crunching, plotting and http requests without much effort. Here is my choice of tools for all of the above:
- requests - A great library for talking to web apis and fetching single web pages.
- furl - For creating urls and requests in general with parameters for GET / POST stuff.
- pandas - Makes python feel a bit like R, namely able to work with data frames and crunch data without getting too verbose about it.
- matplotlib - To give pandas plotting-superpowers.
Here’s how you’d install it all:
$ pip install pandas matplotlib requests furl
In this investigation, I did not really use much pandas, nor matplotlib apart from a tiny diversion. Don’t be discouraged by them appearing to be of little value, they shine when the data handling gets more challenging.
Once everything was installed, I went into a new directory created for the project and started the Jupyter notebook server with:
$ jupyter notebook
Which also opens the Jupyter web interface in a new tab in your browser. Everything that was left to do at this point, was to create a new notebook, give it a name and start the fun.
Hello API
The code samples below, are what I typed into the Jupyter notebook cells. They are depending on each other, use previous variables as you would in a single script, but can be re-executed as needed. It’s perfect to experiment and get things to work without enduring unnecessary waiting time.
But anyway, to access the Spotify Web API, we need the API access token. You could use the big album art project in development mode. You’ll need a private Spotify app, insert the tokens as environment variables, add a “print” statement in the home view, run the app localy and you’ll be set. Don’t forget to adjust the permission request (see a few lines below).
Otherwise, I will write about the application in a later article as it’s a whole different topic. If you’d like to be notified when it’s published, just drop me your email adress in the form below :)
The permission scopes we’ll need are:
"user-read-private user-read-email user-read-playback-state user-read-currently-playing user-library-read"
I just made the app print the token value, and copied it into the first cell of the notebook. The token expires after a while, so you might need to refresh it when working on the code.
TOKEN = "???"
I like to put constants in the beginning of a project and write the name in UPPER_CASE_NOTATION. We will need a few Python modules for convenience - the ones we installed previously. Let’s import them:
```python
import json
import requests
from furl import furl
from math import ceil
# to save some typing
import pandas as pd
import matplotlib
# to display plots in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
```
The code above also makes it possible for plots to appear inline in the notebook, which looks nice. We’re ready to get the data!
The First Request
Now that we have the token and tools in place, it’s really simple to get the first bit of information.
As we want to get the tracks of our user, we’ll go ahead and take a peek at how many there are. The API is well documented, and can be browsed here. The tracks endpoint is what we need. The response is paginated, but for the field we need this does not matter. The total number of songs is available under the “total” key.
url = "https://api.spotify.com/v1/me/tracks"
headers = {'Authorization': "Bearer {}".format(TOKEN)}
r = requests.get(url, headers=headers)
parsed = json.loads(r.text)
count_songs = parsed["total"]
print "Total number of songs: {}".format(count_songs)
We make the request using requests, add a header with the authentication info (as described in the docs), parse the response into JSON and - without checking for errors - take the data field into a variable. If the request goes wrong for some reason, we will get an exception in the notebook and should be able to fix it and rerun the thing. So no fancy edge-case handling is needed.
Apparently I have 2416 songs in the library.
Getting Track Data
User libraries can be quite large. Thousands of songs. It would be unnecessary and at some point impractical to return all songs on every single request to the tracks endpoint. That’s why it is paginated. We need all of those of course.
We got the number of songs, and the first "page" of songs in the user library. The maximum number of songs returned per request is 50 if you specify it. So all we need is the total number and a for loop.
Once again, as this code is pretty unlikely to fail, and can be re-executed if needed, we don't care about crashing if anything goes wrong.
# paginate over all tracks
all_songs = []
for i in range(int(ceil(count_songs/50.0))):
    offset = 50*i
    url = "https://api.spotify.com/v1/me/tracks?limit=50&offset={}".format(offset)
    headers = {'Authorization': "Bearer {}".format(TOKEN)}
    r = requests.get(url, headers=headers)
    parsed = json.loads(r.text)
    all_songs.extend(parsed["items"])
print "Number of gathered songs: {}".format(len(all_songs))
The printed number equals the one above. Neato. If the responses failed, we'd notice. Now we have the complete user library of songs. Imagine the possibilities.
How Many Albums Are Referenced?
Staying on track. We'd like to get the albums which are referenced by the songs. The track data does not have everything we need unfortunately - there's only the album id and a bit of other info. We'll need to get detailed data on each relevant album, especially the number of tracks each has in total.
Using a Python set, we can get a unique list of all album ids which we will need.
album_ids = set()
for song in all_songs:
    album_id = song["track"]["album"]["id"]
    album_ids.add(album_id)
print "Number of albums: {}".format(len(album_ids))
For me that prints 1307 as the number of albums. Roughly half of my library song count. Huh. Who would have thought.
Lots of Requests Later
With the album ids at hand, we can proceed. I usually get the raw data and produce derived datasets in later steps. This way we can go back and use fields which we ignored at first. This also suits the iterative superpowers of Jupyter - single computation steps can be re-executed in isolation, not requiring the previous ones to run again.
This part is anything but elegant. In fact it’s a bit rude. We could request multiple album ids and only use 1/20th of the current requests. Also, it would be faster if we watched the rate limits and the Retry-After header. Once the waiting times are long enough, or when there would be multiple users I’d reconsider being more polite and less lazy.
Running this takes a few minutes.
# gather information on all albums
album_info_by_id = {}

from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

# silly solution, but good enough
# could handle the header entry 'Retry-After' for better manners
s = requests.Session()
retries = Retry(total=3000, backoff_factor=1,
                status_forcelist=[502, 503, 504, 429])
s.mount('https://', HTTPAdapter(max_retries=retries))

for album_id in album_ids:
    url = "https://api.spotify.com/v1/albums/{}".format(album_id)
    headers = {'Authorization': "Bearer {}".format(TOKEN)}
    r = s.get(url, headers=headers)
    #TODO: check for error apart from dumb retries?
    parsed = json.loads(r.text)
    album_info = parsed
    #TODO: sanity checks?
    try:
        a = album_info["tracks"]
    except KeyError:
        print "Entry seems wrong. Fix it:"
        print album_info
        break
    album_info_by_id[album_id] = album_info
In this step, we actually care about the data quality, as mistakes might be more subtle than in previous steps. We want to get a snapshot of the album data, and really can't have the data be incomplete or wrong for some reason. By trying to access the "tracks" field in each item we are making sure that the data is not formatted in a way we don't expect. If unexpected stuff happens, we prefer to crash instead of quietly saving garbage.
API-related rate limits could be handled more gracefully, but our case is still relatively small, and silly retries on expected errors are what we roll with. The total number of retries is limited to a still-large-but-not-huge number. Just in case.
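For the record, honoring the Retry-After header would not take much code. Here is a rough sketch - not what I actually ran, the helper name and attempt limit are made up - that sleeps for the server-suggested time on a 429 response instead of retrying on a blind backoff schedule:

```python
import time

def polite_get(session, url, headers, max_attempts=5):
    """GET that sleeps for the server-suggested Retry-After on a 429
    instead of retrying blindly on a fixed backoff schedule."""
    for attempt in range(max_attempts):
        r = session.get(url, headers=headers)
        if r.status_code != 429:
            r.raise_for_status()  # crash loudly on real errors, notebook-style
            return r
        # The server tells us how long to back off; default to 1s if absent.
        time.sleep(int(r.headers.get("Retry-After", 1)))
    raise RuntimeError("still rate-limited after {} attempts".format(max_attempts))
```

It would drop into the loop above as a replacement for the plain s.get() call.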
Obviously, we really don’t want to be so lazy for really large requests or with many users.
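If I ever revisit this, the batched variant could look roughly like the sketch below. The function names are mine, and it leans on the albums endpoint accepting a comma-separated list of up to 20 ids per request (the "1/20th" remark above):

```python
def chunked(ids, size=20):
    """Split the album ids into batches of at most `size`."""
    ids = list(ids)
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def fetch_albums_batched(session, album_ids, token):
    """Roughly 1/20th of the requests: one call per batch of 20 album ids."""
    info_by_id = {}
    headers = {'Authorization': "Bearer {}".format(token)}
    for batch in chunked(album_ids):
        url = "https://api.spotify.com/v1/albums?ids={}".format(",".join(batch))
        r = session.get(url, headers=headers)
        r.raise_for_status()
        for album in r.json()["albums"]:
            info_by_id[album["id"]] = album
    return info_by_id
```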
Putting it All Together
After the previous chunk of code ran through without complaining, we have everything we need in our notebook for more elaborate tinkering. No more API requests are needed and we can work with the data we have. Neat-o!
This is a nice time to create a checkpoint :)
How do we handle the raw data? I really like to put convenient wrapper classes around the data entries. This way, we can perform basic tasks without giving them too much thought and keep the code readable.
For accessing album data and computing simple derived values, I created an “AlbumBin” class. It will provide information on all of the user’s tracks which are related to an individual album and accessors to raw album metainformation.
You can read up on the data we are working with in the albums API endpoint docs.
class AlbumBin:
    def __init__(self, album_id, album_info):
        self.album_id = album_id
        self.album_info = album_info
        self.my_tracks = []

    def add(self, song):
        self.my_tracks.append(song)

    def my_track_count(self):
        return len(self.my_tracks)

    def total_track_count(self):
        return len(self.album_info["tracks"]["items"])

    def get_completeness_ratio(self):
        return (1.0 * self.my_track_count() / self.total_track_count())

    def get_name(self):
        return self.album_info["name"]

    def get_artists(self):
        """ comma separated list of artists for pretty printing"""
        return ", ".join(map(lambda x: x["name"], self.album_info["artists"]))
So much for the data class. It can help us see how "complete" an album is (what the ratio of songs is which is saved in the library), and offers straightforward ways of getting the name of the album as well as the artists without caring about the underlying raw data structure. Now, we can create a binning of tracks (aka put tracks into album classes).
binning = {}

# create album bins
for album_id in album_ids:
    album_info = album_info_by_id[album_id]
    the_bin = AlbumBin(album_id, album_info)
    binning[album_id] = the_bin

# fill album bins with songs
for song in all_songs:
    album_id = song["track"]["album"]["id"]
    the_bin = binning.get(album_id)
    the_bin.add(song)
My naming is on the inconsistent side here - songs and tracks are pretty much interchangeable in my mind it seems.
Printing First Results - Terribly
This first one is a part I'm not proud of. I could have been way lazier - but chose to copy-paste stuff, tweaking the numbers, instead of thinking for a bit. Originally I just wanted to get a grasp of the almost-completely-saved album counts, but later got interested in the complete distribution. Bear with me, we will handle this better with pandas. But first, here's the copy-paste bit for you to behold:
album_bins = binning.values()

def fi(at_least, at_most):
    """ at_most is non-inclusive """
    return filter(lambda x: at_least <= x.get_completeness_ratio()
                            and at_most > x.get_completeness_ratio(),
                  album_bins)

album_count = len(album_bins)
albums_100_percent = fi(1.0, 9000.0)
albums_90_percent = fi(0.9, 1.0)
albums_80_percent = fi(0.8, 0.9)
albums_70_percent = fi(0.7, 0.8)
albums_60_percent = fi(0.6, 0.7)
albums_50_percent = fi(0.5, 0.6)
albums_40_percent = fi(0.4, 0.5)
albums_30_percent = fi(0.3, 0.4)
albums_20_percent = fi(0.2, 0.3)
albums_10_percent = fi(0.1, 0.2)
albums_0_percent = fi(0.0, 0.1)

print "Album count: {}".format(album_count)
print ""
print "Album count at/over 100%: {}".format(len(albums_100_percent))
print "Album count over 90% but under 100%: {}".format(len(albums_90_percent))
print "Album count over 80% but under 90%: {}".format(len(albums_80_percent))
print "Album count over 70% but under 80%: {}".format(len(albums_70_percent))
print "Album count over 60% but under 70%: {}".format(len(albums_60_percent))
print "Album count over 50% but under 60%: {}".format(len(albums_50_percent))
print "Album count over 40% but under 50%: {}".format(len(albums_40_percent))
print "Album count over 30% but under 40%: {}".format(len(albums_30_percent))
print "Album count over 20% but under 30%: {}".format(len(albums_20_percent))
print "Album count over 10% but under 20%: {}".format(len(albums_10_percent))
print "Album count over 0% but under 10%: {}".format(len(albums_0_percent))
URGH. Research code, right? It did what it was supposed to do, but it’s anything but good. One more function getting a list of arguments would have totally fixed this.
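For what it's worth, the "one more function" fix could look like this sketch (the names are mine; the bucket semantics mirror the fi() calls: half-open [low, high) buckets, with everything at 100% and above landing in the last one):

```python
def count_by_completeness(ratios, edges):
    """Count ratios per half-open bucket [edges[i], edges[i+1]);
    the last bucket is unbounded above, so fully saved albums land there."""
    uppers = edges[1:] + [float("inf")]
    return [sum(1 for r in ratios if lo <= r < hi)
            for lo, hi in zip(edges, uppers)]

# 0.0, 0.1, ..., 1.0 -> eleven buckets, the last one for complete albums
edges = [i / 10.0 for i in range(11)]
```

One loop over the edges replaces eleven copy-pasted filter calls and print statements.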
Oh well. We have a first impression of the "completeness" distribution of saved albums:
Album count: 1307

Album count at/over 100%: 172
Album count over 90% but under 100%: 5
Album count over 80% but under 90%: 1
Album count over 70% but under 80%: 0
Album count over 60% but under 70%: 2
Album count over 50% but under 60%: 23
Album count over 40% but under 50%: 8
Album count over 30% but under 40%: 25
Album count over 20% but under 30%: 90
Album count over 10% but under 20%: 236
Album count over 0% but under 10%: 745
There are a whopping 172 albums which are at 100%. This can be explained by single-song releases (singles). But I don’t think all of those are. Otherwise, I seem to be a picky listener. The albums at 50% or 30% might be worth revisiting, as they are either well-pruned or have potential to be just my type.
Printing and Visualizing Results - a bit better
Let’s pause for a moment, and look at how we could handle the distribution issue with pandas instead of lots of copy-pasted code.
There are many ways to visualize data in Pandas. To help us understand the distribution, and maybe tell a story, a histogram would be nice. A box plot would make the distribution even easier to grasp.
Pandas feels really convenient for common data-crunching tasks. Just as R does. The code might look a bit daunting at first, but if you get into the topic, you'll be comfortable reading and writing this flavor of Python.
completeness_ratios = map(lambda x: x.get_completeness_ratio(), album_bins)
df = pd.DataFrame(completeness_ratios)
df.describe()
Basically, we create a list of “completeness ratios” for each album, and put it into a “data frame”. The ‘describe’ function outputs useful numbers to understand the distribution of the underlying data.
Simple, right? However, that’s not very intuitive and takes time to understand. Plots are better for this, and can be generated without much hassle:
ax = df.plot(kind='hist', title="Album Completeness Histogram",
             figsize=(15, 10), legend=False, fontsize=12)
ax.set_xlabel("Completeness Percent", fontsize=12)
ax.set_ylabel("Album Count", fontsize=12)
plt.show()

ax = df.plot(kind='box', title="Album Completeness Distribution Boxplot",
             figsize=(15, 10), legend=False, fontsize=12)
ax.set_xlabel("All tracks in the user library", fontsize=12)
ax.set_ylabel("Completeness Percent", fontsize=12)
plt.show()
The results are two plots:
To get a bin overview as in the code above, we can offload a huge chunk of the work onto Pandas.
# originally I used
#bins = list(range(0,101,10))
# but I wanted 100 to be a special case, so:
bins = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99, 100]

bin_labels = []
for i in range(len(bins)-1):
    bin_labels.append("({}, {}]".format(bins[i], bins[i+1]))

bins = map(lambda x: x/100.0, bins)

# results in bins (0, 10], so 0 is not included, but 10 is
# this is different than our handling above, with [0, 10)
# and results in different numbers
df["bins"] = pd.cut(df[0], bins, labels=bin_labels)
print df["bins"].groupby(df["bins"]).count()
The output looks as you would expect:
Alright, back on track.
List of Albums Which Are Completely Saved - Ready For Pruning
With our list of albums, and easy way to compute completeness ratios, we can just filter the albums for every entry which is
- Not from a one-song album
- Completely saved in the library
With a bit of sorting based on track count, we get the information I was interested in originally:
albs = fi(1.0, 9000.0)
albs_big = filter(lambda x: x.total_track_count() != 1, albs)
albs_big.sort(key=lambda x: -x.total_track_count())
for a in albs_big:
    print u"[{tracks_total}] '{name}' - {artists}".format(
        name=a.get_name(),
        tracks_total=a.total_track_count(),
        artists=a.get_artists())
The top part of the results (of 48 non-single-albums):
[32] 'Zirkus Zeitgeist (Deluxe)' - Saltatio Mortis
[24] 'Meet The EELS: Essential EELS 1996-2006 Vol. 1' - Eels
[23] 'Подмосковные вечера (Имена на все времена)' - Владимир Трошин
[21] 'Great Divide (Deluxe Version)' - Twin Atlantic
[20] 'Opposites (Deluxe)' - Biffy Clyro
[20] 'Holy Wood [International Version (UK)]' - Marilyn Manson
[18] 'Come What[ever] May' - Stone Sour
That’s way more than I expected. And in the very first results are some entries I’d rather not keep around in their entirety. Hooray! That’s all the actionable information I needed to begin pruning.
Just How Many Songs From Complete Albums Are in my Library?
Easy to answer. Interesting to know.
count_of_songs_on_albums = sum(map(lambda x: x.my_track_count(), albs_big))
print "Songs from 'complete' albums: {}".format(count_of_songs_on_albums)
print "Total number of songs: {}".format(count_songs)
percentage = 100.0*count_of_songs_on_albums/count_songs
print "Percent: {:.2f}%".format(percentage)
Turns out, it’s quite a lot:
Songs from 'complete' albums: 652
Total number of songs: 2416
Percent: 26.99%
Over one fourth. Alright, that’s why I started noticing. That took a while.
Conclusion
After a bit of coding, I got all the answers I was interested in. My Spotify library can use a bit of cleaning up, and I'm really looking forward to improved all-random listening enjoyment.
Python spiced with Jupyter and other helpful modules is great for tinkering with data. Requests + Pandas are amazing libraries, which you should add to your arsenal if you're working with data. Pandas did not really shine here; I think it deserves its own article with a few harder questions to justify its use and demonstrate its full potential. Do you have an idea about an interesting way to look at the data, or a question you'd be interested in answering? Just drop me a mail :) Also, the Spotify API is cool and well documented.
As mentioned in the beginning, if you enter your favourite email in the form below, I'll send you the Jupyter notebook I used (sans secrets and data), so you can plug your own token in, get results for your account and have a good starting point to begin exploring the data. Regarding the token-getting-app business: I'm working on the article soon, stay tuned for more.
Thanks a lot for reading. I’d be thrilled to hear from you! If you have any questions, remarks or cool projects in mind just write me a mail. | https://vsupalov.com/analyze-spotify-music-library-with-jupyter-pandas/ | CC-MAIN-2017-39 | en | refinedweb |
Odoo Help
How to Set Default Directory for Attachments Depedning on Object
I am implementing the Knowledge Management module in odoo v8 and I have come across a major shortcoming. The directory field in the attachments is not automatically set by the system even if I configure the "Folders per resource" part in Knowledge.
For example, I have a Clients folder whose parent folder is documents. This folder is set as "Folders per resource" for the partner model. Now, if I attach an attachment on a customer, it is not being placed in the clients folder by default.
Does anyone know how to make attachments in odoo go to a specific directory by default, so I don't have to manually set the directory field in the attachment form? I feel this is a vital feature as far as document organisation is concerned.
(Please upvote the question so that I am able to comment)
I found some great explanation here (Antony Lesuisse's answer) on how to set odoo to save the attachments in the file system.
I then used the following piece of code to set a default for 'parent_id' field in the attachment. (ir.attachment class)
from openerp.osv import osv, fields
from openerp.tools.translate import _

class ir_attachment(osv.osv):
    #extend document
    _inherit = "ir.attachment"

    def create(self, cr, uid, vals, context=None):
        if context is None:
            context = {}
        if not vals.get('res_model', False) and context.get('default_res_model', False):
            vals['res_model'] = context.get('default_res_model', False)
        if not vals.get('parent_id', False):
            parent_id = self.pool.get('document.directory').search(cr, uid, [('ressource_type_id', '=', vals.get('res_model'))])
            if parent_id and parent_id[0]:
                vals['parent_id'] = parent_id[0]
            else:
                vals['parent_id'] = self.pool.get('document.directory')._get_root_directory(cr, uid, context)
        return super(ir_attachment, self).create(cr, uid, vals, context)

ir_attachment()

class document_directory(osv.osv):
    _inherit = 'document.directory'
    _sql_constraints = [
        ('dir_uniq', 'unique (ressource_id,ressource_parent_type_id)',
         'The Directory name must be unique per Resource Type and Resource !'),
    ]
Hello Cyrus,
The functionality "Folders by resource" only works in combination with the "document_ftp" module. That is, when you access your repository using FTP, the application dynamically constructs one folder per resource (e.g. one folder for each sales order), based on the flag that you set on the 'Customers' folder in Odoo. These are not physical but 'virtual' folders.
What you ask for is interesting, but would require a specific development, so that each object in Odoo would create its own physical folder per resource.
Regards,
Jordi.
Unfortunately, document_ftp has not been ported to odoo v8. Please see how I achieved what I wanted in my answer below.
Exchange Server 2003 goes into Extended Support on 4/14/2009. Details on our lifecycle policy for Exchange 2003:

Products Released                        | General Availability Date | Mainstream Support Retired | Extended Support Retired | Service Pack Retired | Notes
Exchange Server 2003 Enterprise Edition  | 9/28/2003                 | 4/14/2009                  | 4/8/2014                 | 5/25/2005            |
Exchange Server 2003 Service Pack 1      | 5/25/2004                 | Not Applicable…            |                          |                      |
Year: 2009
Outlook Rules!
PidTagRuleSequence: Rules are evaluated in sequence according to the increasing order of this value. The evaluation order for rules that have the same value in this property is undefined.

The Rule.ExecutionOrder property could be used to assign priority to rules. Rules.Item(1) represents a rule with ExecutionOrder being 1, Rules.Item(2) represents a rule with ExecutionOrder being…
Outlook 2007 and Simple MAPI
We have seen a lot of applications that use Simple MAPI. Unlike Extended MAPI (or MAPI), it is supported in managed code as well. Also, a lot of developers have come across issues with Simple MAPI and Outlook 2007. Starting with Outlook 2007, Simple MAPI is no longer supported. However, it's still supported by Exchange 2003.
Using Powershell to send email message using CDOSYS
Scripts are used to do a pre-check before getting started with debugging an application, VB Script being the favourite on the Microsoft platform. Most common being CDOSYS issues. Below is the PowerShell script to send email using the System.Net.Mail namespace.

[System.Net.Mail.MailMessage]$message = New-Object System.Net.Mail.MailMessage("from@contoso.com", "to@contoso.com", "This is Subject", "This is body")
[System.Net.Mail.SmtpClient]$client = New-Object System.Net.Mail.SmtpClient("XXX.XXX.XXX.XXX")
$client.Timeout = 100
$client.Send($message)

Long live Powershell!
I have set up a code in a class but I am not sure if this is the class that I would use. I have to put the logic of the game in a class and then use the methods from the class to run the game. Right now I have some methods in a class but I am not sure how I would go about getting it to work and calling the functions. I would like help on possibly telling me if I need another class and how to fix my methods in the class. Also, could you point me in the right direction on how I would use the methods and then call them in another file, so I can make the BlackJack program work. Please and thanks for the help.
from random import *

class DeckOfCards:
    def __init__(self):
        #Make the Deck Of Cards
        self.__cards = [2,3,4,5,6,7,8,9,10,10,10,10,11] * 4
        #keeps the score for the player and computer
        playerScore = 0
        computerScore = 0
        #holds the cards
        player = []
        computer = []
        #boolean to see who wins
        PlayerLoser = False
        ComputerLoser = False

    def drawCard(self):
        #draw a card for the player
        player.append(choice(self.__cards))

    def drawCardComp(self):
        #draw a card for the computer
        computer.append(choice(self.__cards))

    def totalOfPlayer(self):
        #the total the player has in his hands
        self.__totalOfP = total(player)
        return self.__totalOfP

    def totalOfComputer(self):
        #the total the computer has in his hands
        self.__totalOfC = total(computer)
        return self.__totalOfC

    def playGame(self): #i dunno if this is the method i am going to use here
        while True:
            #i want it to call the function in the class to draw a card
            drawCard()
            drawCard()
            if self.__totalOfP > 21:
                print 'You Lost'
                playerLoser = True
            elif self.totalOfP == 21:
                print 'hey congrats you got BlackJack :)'
            else:
                goAgain = raw_input("Hit or Stand (h or s): ").lower()
                if goAgain == 'h':
                    #i want it to call the function in the class to get another class
                    drawCard()

        while True: # loop for the computer's play
            #draw the computer cards
            drawCardComp()
            drawCardComp()
            while True:
                if totalOfC < 18:
                    #draw the card if its less than 18
                    print "the computer has %s for a total of %d" % (computer, self.__totalOfComputer)

            # now figure out who won ...
            if self.__totalOfComputer > 21:
                print "The computer is busted!"
                ComputerLoser = True
                if PlayerLoser == False:
                    print 'The Player Wins'
                    playerScore += 1
            elif totalOfC > totalOfP:
                print 'the Computer wins'
                computerScore += 1
            elif totalOfC == totalOfP:
                print 'TIE'
            elif totalOfP > totalOfC:
                if PlayerLoser == False:
                    print 'The Player Wins'
                    playerScore += 1
                elif ComputerLoser == False:
                    print 'The Computer Wins'
                    computerScore += 1
            print
            print "Wins, player = %d computer = %d" % (playerScore, computerScore)
            exit = raw_input("Press Enter (q to quit): ").lower()
            print