Abstract

This document defines a standard profile for X.509 certificates for the purpose of supporting validation of assertions of "right-of-use" of Internet Number Resources (INRs). The certificates issued under this profile are used to convey the issuer's authorization of the subject to be regarded as the current holder of a "right-of-use" of the INRs that are described in the certificate. This document contains the normative specification of Certificate and Certificate Revocation List (CRL) syntax in the Resource Public Key Infrastructure (RPKI). This document also specifies profiles for the format of certificate requests and specifies the Relying Party RPKI certificate path validation procedure.

Status of This Memo

This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6487.

Table of Contents

1. Introduction
   1.1. Terminology
2. Describing Resources in Certificates
3. End-Entity (EE) Certificates and Signing Functions in the RPKI
4. Resource Certificates
   4.1. Version
   4.2. Serial Number
   4.3. Signature Algorithm
   4.4. Issuer
   4.5. Subject
   4.6. Validity
        4.6.1. notBefore
        4.6.2. notAfter
   4.7. Subject Public Key Info
   4.8. Resource Certificate Extensions
        4.8.1. Basic Constraints
        4.8.2. Subject Key Identifier
        4.8.3. Authority Key Identifier
        4.8.4. Key Usage
        4.8.5. Extended Key Usage
        4.8.6. CRL Distribution Points
        4.8.7. Authority Information Access
        4.8.8. Subject Information Access
        4.8.9. Certificate Policies
        4.8.10. IP Resources
        4.8.11. AS Resources
5. Resource Certificate Revocation Lists
6. Resource Certificate Requests
   6.1. PKCS#10 Profile
        6.1.1. PKCS#10 Resource Certificate Request Template Fields
   6.2. CRMF Profile
        6.2.1. CRMF Resource Certificate Request Template Fields
        6.2.2. Resource Certificate Request Control Fields
   6.3. Certificate Extension Attributes in Certificate Requests
7. Resource Certificate Validation
   7.1. Resource Extension Validation
   7.2. Resource Certification Path Validation
8. Design Notes
9. Operational Considerations for Profile Agility
10. Security Considerations
11. Acknowledgements
12. References
    12.1. Normative References
    12.2. Informative References
Appendix A. Example Resource Certificate
Appendix B. Example Certificate Revocation List

1.
Introduction

This document defines a standard profile for X.509 certificates [X.509] for use in the context of certification of Internet Number Resources (INRs), i.e., IP addresses and Autonomous System (AS) numbers. Such certificates are termed "resource certificates". A resource certificate is a certificate that conforms to the PKIX profile [RFC5280], and that conforms to the constraints specified in this profile. A resource certificate attests that the issuer has granted the subject a "right-of-use" for a listed set of IP addresses and/or Autonomous System numbers.

This document is referenced by Section 7 of the "Certificate Policy (CP) for the Resource Public Key Infrastructure (RPKI)" [RFC6484]. It is an integral part of that policy and the normative specification for certificate and Certificate Revocation List (CRL) syntax used in the RPKI. The document also specifies profiles for the format of certificate requests, and the relying party (RP) RPKI certificate path validation procedure.

Resource certificates are to be used in a manner that is consistent with the RPKI Certificate Policy (CP) [RFC6484]. They are issued by entities that assign and/or allocate public INRs, and thus the RPKI is aligned with the public INR distribution function. When an INR is allocated or assigned by a number registry to an entity, this allocation can be described by an associated resource certificate. This certificate is issued by the number registry, and it binds the certificate subject's key to the INRs enumerated in the certificate. One or two critical extensions, the IP Address Delegation and AS Identifier Delegation extensions [RFC3779], enumerate the INRs that were allocated or assigned by the issuer to the subject.

Relying party (RP) validation of a resource certificate is performed in the manner specified in Section 7.1.
This validation procedure differs from that described in Section 6 of [RFC5280], in that:

- additional validation processing imposed by the INR extensions is required,
- a confirmation of a public key match between the CRL issuer and the resource certificate issuer is required, and
- the resource certificate is required to conform to this profile.

This profile defines those fields that are used in a resource certificate that MUST be present for the certificate to be valid. Any extensions not explicitly mentioned MUST be absent. The same applies to the CRLs used in the RPKI, which are also profiled in this document. A Certification Authority (CA) conforming to the RPKI CP MUST issue certificates and CRLs consistent with this profile.

1.1. Terminology

It is assumed that the reader is familiar with the terms and concepts described in "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile" [RFC5280], and "X.509 Extensions for IP Addresses and AS Identifiers" [RFC3779]. The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2. Describing Resources in Certificates

The framework for describing an association between the subject of a certificate and the INRs currently under the subject's control is described in [RFC3779]. This profile further requires that:

- Every resource certificate MUST contain either the IP Address Delegation or the Autonomous System Identifier Delegation extension, or both.
- These extensions MUST be marked as critical.
- The sorted canonical format describing INRs, with maximal spanning ranges and maximal spanning prefix masks, as defined in [RFC3779], MUST be used for the resource extension field, except where the "inherit" construct is used instead.
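The sorted canonical form (maximal spanning ranges and prefix masks) can be illustrated with Python's standard `ipaddress` module. This is a sketch, not part of the profile: `canonicalize` is a hypothetical helper, and it assumes IPv4 and IPv6 resources are handled separately, as [RFC3779] encodes each address family in its own block.

```python
import ipaddress

def canonicalize(prefixes):
    # Hypothetical helper: merge adjacent and contained prefixes into the
    # maximal spanning prefixes, returned in sorted order. All inputs must
    # belong to the same address family.
    nets = [ipaddress.ip_network(p) for p in prefixes]
    # collapse_addresses() merges adjacent/overlapping networks and yields
    # them sorted, which matches the "sorted canonical" requirement.
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

print(canonicalize(["192.0.2.0/25", "192.0.2.128/25", "198.51.100.0/24"]))
# The two /25s merge into the maximal spanning prefix 192.0.2.0/24.
```

An RP can apply the same canonicalization when checking that a received extension is in the required form.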
When validating a resource certificate, an RP MUST verify that the INRs described in the issuer's resource certificate encompass the INRs of the resource certificate being validated. In this context, "encompass" allows for the issuer's INRs to be the same as, or a strict superset of, the subject's INRs.

3. End-Entity (EE) Certificates and Signing Functions in the RPKI

As noted in [RFC6480], the primary function of end-entity (EE) certificates in the RPKI is the verification of signed objects that relate to the usage of the INRs described in the certificate, e.g., Route Origin Authorizations (ROAs) and manifests. The private key associated with an EE certificate is used to sign a single RPKI signed object, i.e., the EE certificate is used to validate only one object. The EE certificate is embedded in the object as part of a Cryptographic Message Syntax (CMS) signed-data structure [RFC6488]. Because of the one-to-one relationship between the EE certificate and the signed object, revocation of the certificate effectively revokes the corresponding signed object.

An EE certificate may be used to validate a sequence of signed objects, where each signed object in the sequence overwrites the previous instance of the signed object in the repository publication point, such that only one instance of the signed object is published at any point in time (e.g., an EE certificate MAY be used to sign a sequence of manifests [RFC6486]). Such EE certificates are termed "sequential use" EE certificates. EE certificates that are used to validate only one instance of a signed object, and are not used thereafter or in any other validation context, are termed "one-time-use" EE certificates.

4. Resource Certificates

A resource certificate is a valid X.509 public key certificate, consistent with the PKIX profile [RFC5280], containing the fields listed in this section. Only the differences from [RFC5280] are noted below.
Unless specifically noted as being OPTIONAL, all the fields listed here MUST be present, and any other fields MUST NOT appear in a conforming resource certificate. Where a field value is specified here, this value MUST be used in conforming resource certificates.

4.1. Version

As resource certificates are X.509 version 3 certificates, the version MUST be 3 (i.e., the value of this field is 2). RPs need not process version 1 or version 2 certificates (in contrast to [RFC5280]).

4.2. Serial Number

The serial number value is a positive integer that is unique for each certificate issued by a given CA.

4.3. Signature Algorithm

The algorithm used in this profile is specified in [RFC6485].

4.4. Issuer

The value of this field is a valid X.501 distinguished name. An issuer name MUST contain one instance of the CommonName attribute and MAY contain one instance of the serialNumber attribute. If both attributes are present, it is RECOMMENDED that they appear as a set. The CommonName attribute MUST be encoded using the ASN.1 type PrintableString [X.680]. Issuer names are not intended to be descriptive of the identity of the issuer. For reasons of security, the RPKI does not rely on issuer names being globally unique. However, it is RECOMMENDED that issuer names be generated in a fashion that minimizes the likelihood of collisions. See Section 8 for (non-normative) suggested name-generation mechanisms that fulfill this recommendation.

4.5. Subject

The value of this field is a valid X.501 distinguished name [RFC4514], and is subject to the same constraints as the issuer name. In the RPKI, the subject name is determined by the issuer, not proposed by the subject [RFC6481]. Each distinct subordinate CA and EE certified by the issuer MUST be identified using a subject name that is unique per issuer. In this context, "distinct" is defined as an entity and a given public key.
An issuer SHOULD use a different subject name if the subject's key pair has changed (i.e., when the CA issues a certificate as part of re-keying the subject). Subject names are not intended to be descriptive of the identity of the subject.

4.6. Validity

The certificate validity period is represented as a SEQUENCE of two dates: the date on which the certificate validity period begins (notBefore) and the date on which the certificate validity period ends (notAfter). While a CA is typically advised against issuing a certificate with a validity period that spans a greater period of time than the validity period of the CA's certificate that will be used to validate the issued certificate, in the context of this profile, a CA MAY have valid grounds to issue a subordinate certificate with a validity period that exceeds the validity period of the CA's certificate.

4.6.1. notBefore

The "notBefore" time SHOULD be no earlier than the time of certificate generation. In the RPKI, it is valid for a certificate to have a value for this field that pre-dates the same field value in any superior certificate. Relying Parties SHOULD NOT attempt to infer from this time information that a certificate was valid at a time in the past, or that it will be valid at a time in the future, as the scope of an RP's test of validity of a certificate refers specifically to validity at the current time.

4.6.2. notAfter

The "notAfter" time represents the anticipated lifetime of the current resource allocation or assignment arrangement between the issuer and the subject. It is valid for a certificate to have a value for this field that post-dates the same field value in any superior certificate. The same caveats apply to an RP's assumptions relating to the certificate's validity at any time other than the current time.

4.7. Subject Public Key Info

The algorithm used in this profile is specified in [RFC6485].

4.8.
Resource Certificate Extensions

The following X.509 v3 extensions MUST be present in a conforming resource certificate, except where explicitly noted otherwise. Each extension in a resource certificate is designated as either critical or non-critical. A certificate-using system MUST reject the certificate if it encounters a critical extension it does not recognize; however, a non-critical extension MAY be ignored if it is not recognized [RFC5280].

4.8.1. Basic Constraints

The Basic Constraints extension field is a critical extension in the resource certificate profile, and MUST be present when the subject is a CA, and MUST NOT be present otherwise. The issuer determines whether the "cA" boolean is set. The Path Length Constraint is not specified for RPKI certificates, and MUST NOT be present.

4.8.2. Subject Key Identifier

This extension MUST appear in all resource certificates. This extension is non-critical. The Key Identifier used for resource certificates is the 160-bit SHA-1 hash of the value of the DER-encoded ASN.1 bit string of the Subject Public Key, as described in Section 4.2.1.2 of [RFC5280].

4.8.3. Authority Key Identifier

This extension MUST appear in all resource certificates, with the exception of a CA that issues a "self-signed" certificate. In a self-signed certificate, a CA MAY include this extension, and set it equal to the Subject Key Identifier. The authorityCertIssuer and authorityCertSerialNumber fields MUST NOT be present. This extension is non-critical. The Key Identifier used for resource certificates is the 160-bit SHA-1 hash of the value of the DER-encoded ASN.1 bit string of the issuer's public key, as described in Section 4.2.1.1 of [RFC5280].

4.8.4. Key Usage

This extension is a critical extension and MUST be present. In certificates issued to certification authorities only, the keyCertSign and cRLSign bits are set to TRUE, and these MUST be the only bits set to TRUE.
In EE certificates, the digitalSignature bit MUST be set to TRUE and MUST be the only bit set to TRUE.

4.8.5. Extended Key Usage

The Extended Key Usage (EKU) extension MUST NOT appear in any CA certificate in the RPKI. This extension also MUST NOT appear in EE certificates used to verify RPKI objects (e.g., ROAs or manifests). The extension MUST NOT be marked critical. The EKU extension MAY appear in EE certificates issued to routers or other devices. Permitted values for the EKU OIDs will be specified in Standards Track RFCs issued by other IETF working groups that adopt the RPKI profile and that identify application-specific requirements that motivate the use of such EKUs.

4.8.6. CRL Distribution Points

This extension MUST be present, except in "self-signed" certificates, and it is non-critical. In a self-signed certificate, this extension MUST be omitted. In this profile, the scope of the CRL is specified to be all certificates issued by this CA issuer. The CRL Distribution Points (CRLDP) extension identifies the location(s) of the CRL(s) associated with certificates issued by this issuer. The RPKI uses the URI [RFC3986] form of object identification. The preferred URI access mechanism is a single rsync URI ("rsync://") [RFC5781] that references a single inclusive CRL for each issuer.

In this profile, the certificate issuer is also the CRL issuer, implying that the CRLIssuer field MUST be omitted, and the distributionPoint field MUST be present. The Reasons field MUST be omitted. The distributionPoint MUST contain the fullName field, and MUST NOT contain a nameRelativeToCRLIssuer. The form of the generalName MUST be of type URI. The sequence of distributionPoint values MUST contain only a single DistributionPoint. The DistributionPoint MAY contain more than one URI value. An rsync URI [RFC5781] MUST be present in the DistributionPoint and MUST reference the most recent instance of this issuer's CRL.
Other access form URIs MAY be used in addition to the rsync URI, representing alternate access mechanisms for this CRL.

4.8.7. Authority Information Access

In the context of the RPKI, this extension identifies the publication point of the certificate of the issuer of the certificate in which the extension appears. In this profile, a single reference to the publication point of the immediate superior certificate MUST be present, except for a "self-signed" certificate, in which case the extension MUST be omitted. This extension is non-critical.

This profile uses a URI form of object identification. The preferred URI access mechanism is "rsync", and an rsync URI [RFC5781] MUST be specified with an accessMethod value of id-ad-caIssuers. The URI MUST reference the point of publication of the certificate where this issuer is the subject (the issuer's immediate superior certificate). Other accessMethod URIs referencing the same object MAY also be included in the value sequence of this extension.

A CA MUST use a persistent URL name scheme for CA certificates that it issues [RFC6481]. This implies that a reissued certificate overwrites a previously issued certificate (to the same subject) in the publication repository. In this way, certificates subordinate to the reissued (CA) certificate can maintain a constant Authority Information Access (AIA) extension pointer and thus need not be reissued when the parent certificate is reissued.

4.8.8. Subject Information Access

In the context of the RPKI, the Subject Information Access (SIA) extension identifies the publication point of products signed by the subject of the certificate.

4.8.8.1. SIA for CA Certificates

This extension MUST be present and MUST be marked non-critical. This extension MUST have an instance of an accessMethod of id-ad-caRepository, with an accessLocation form of a URI that MUST specify an rsync URI [RFC5781].
This URI points to the directory containing all published material issued by this CA, i.e., all valid CA certificates, published EE certificates, the current CRL, manifest, and signed objects validated via EE certificates that have been issued by this CA [RFC6481]. Other accessDescription elements with an accessMethod of id-ad-caRepository MAY be present. In such cases, the accessLocation values describe alternate supported URI access mechanisms for the same directory. The ordering of URIs in this accessDescription sequence reflects the CA's relative preferences for access methods to be used by RPs, with the first element of the sequence being the most preferred by the CA.

This extension MUST have an instance of an AccessDescription with an accessMethod of id-ad-rpkiManifest,

```
id-ad OBJECT IDENTIFIER ::= { id-pkix 48 }
id-ad-rpkiManifest OBJECT IDENTIFIER ::= { id-ad 10 }
```

with an rsync URI [RFC5781] form of accessLocation. The URI points to the CA's manifest of published objects [RFC6486] as an object URL. Other accessDescription elements MAY exist for the id-ad-rpkiManifest accessMethod, where the accessLocation value indicates alternate access mechanisms for the same manifest object.

4.8.8.2. SIA for EE Certificates

This extension MUST be present and MUST be marked non-critical. This extension MUST have an instance of an accessMethod of id-ad-signedObject,

```
id-ad-signedObject OBJECT IDENTIFIER ::= { id-ad 11 }
```

with an accessLocation form of a URI that MUST include an rsync URI [RFC5781]. This URI points to the signed object that is verified using this EE certificate [RFC6481]. Other accessDescription elements MAY exist for the id-ad-signedObject accessMethod, where the accessLocation value indicates alternate URI access mechanisms for the same object, ordered in terms of the EE's relative preference for supported access mechanisms. Other AccessMethods MUST NOT be used for an EE certificate's SIA.
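For reference, the OID arcs quoted in this section resolve to dotted-decimal form as sketched below; the arc values for id-pkix and id-ad-caRepository come from [RFC5280], and the variable names are illustrative only.

```python
# Resolving the SIA accessMethod OID arcs to dotted-decimal form.
# id-pkix = 1.3.6.1.5.5.7 (RFC 5280).
ID_PKIX = (1, 3, 6, 1, 5, 5, 7)
ID_AD = ID_PKIX + (48,)              # id-ad              ::= { id-pkix 48 }
ID_AD_CA_REPOSITORY = ID_AD + (5,)   # id-ad-caRepository ::= { id-ad 5 }
ID_AD_RPKI_MANIFEST = ID_AD + (10,)  # id-ad-rpkiManifest ::= { id-ad 10 }
ID_AD_SIGNED_OBJECT = ID_AD + (11,)  # id-ad-signedObject ::= { id-ad 11 }

def dotted(oid):
    # Render an OID arc tuple in the familiar dotted notation.
    return ".".join(str(arc) for arc in oid)

print(dotted(ID_AD_RPKI_MANIFEST))  # 1.3.6.1.5.5.7.48.10
print(dotted(ID_AD_SIGNED_OBJECT))  # 1.3.6.1.5.5.7.48.11
```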
4.8.9. Certificate Policies

This extension MUST be present and MUST be marked critical. It MUST include exactly one policy, as specified in the RPKI CP [RFC6484].

4.8.10. IP Resources

Either the IP Resources extension, or the AS Resources extension, or both, MUST be present in all RPKI certificates, and if present, MUST be marked critical. This extension contains the list of IP address resources as per [RFC3779]. The value may specify the "inherit" element for a particular Address Family Identifier (AFI) value. In the context of resource certificates describing public number resources for use in the public Internet, the Subsequent AFI (SAFI) value MUST NOT be used. This extension MUST either specify a non-empty set of IP address records, or use the "inherit" setting to indicate that the IP address resource set of this certificate is inherited from that of the certificate's issuer.

4.8.11. AS Resources

Either the AS Resources extension, or the IP Resources extension, or both, MUST be present in all RPKI certificates, and if present, MUST be marked critical. This extension contains the list of AS number resources as per [RFC3779], or it may specify the "inherit" element. Routing Domain Identifier (RDI) values are NOT supported in this profile and MUST NOT be used. This extension MUST either specify a non-empty set of AS number records, or use the "inherit" setting to indicate that the AS number resource set of this certificate is inherited from that of the certificate's issuer.

5. Resource Certificate Revocation Lists

Each CA MUST issue a version 2 CRL that is consistent with [RFC5280]. RPs are NOT required to process version 1 CRLs (in contrast to [RFC5280]). The CRL issuer is the CA. CRLs conforming to this profile MUST NOT be Indirect or Delta CRLs. The scope of each CRL MUST be all certificates issued by this CA. The issuer name is as in Section 4.4 above.
Where two or more CRLs are issued by the same CA, the CRL with the highest value of the "CRL Number" field supersedes all other CRLs issued by this CA. The algorithm used in CRLs issued under this profile is specified in [RFC6485]. The contents of the CRL are a list of all non-expired certificates that have been revoked by the CA.

An RPKI CA MUST include the two extensions, Authority Key Identifier and CRL Number, in every CRL that it issues. RPs MUST be prepared to process CRLs with these extensions. No other CRL extensions are allowed. For each revoked resource certificate, only the two fields, Serial Number and Revocation Date, MUST be present, and all other fields MUST NOT be present. No CRL entry extensions are supported in this profile, and CRL entry extensions MUST NOT be present in a CRL.

6. Resource Certificate Requests

A resource certificate request MAY use either PKCS#10 or the Certificate Request Message Format (CRMF). A CA MUST support certificate issuance via PKCS#10, and a CA MAY support CRMF requests. Note that there is no certificate response defined in this profile. For CA certificate requests, the CA places the resource certificate in the repository, as per [RFC6484]. No response is defined for EE certificate requests.

6.1. PKCS#10 Profile

This profile refines the specification in [RFC2986], as it relates to resource certificates. A Certificate Request Message object, formatted according to PKCS#10, is passed to a CA as the initial step in issuing a certificate. With the exception of the SubjectPublicKeyInfo and the SIA extension request, the CA is permitted to alter any field in the request when issuing a certificate.

6.1.1. PKCS#10 Resource Certificate Request Template Fields

This profile applies the following additional requirements to fields that MAY appear in a CertificationRequestInfo:

Version
  This field is mandatory and MUST have the value 0.

Subject
  This field MAY be omitted.
If present, the value of this field SHOULD be empty (i.e., NULL), in which case the CA MUST generate a subject name that is unique in the context of certificates issued by this CA. This field is allowed to be non-empty only for a re-key/reissuance request, and only if the CA has adopted a policy (in its Certification Practice Statement (CPS)) that permits the reuse of names in these circumstances.

SubjectPublicKeyInfo
  This field specifies the subject's public key and the algorithm with which the key is used. The algorithm used in this profile is specified in [RFC6485].

Attributes
  [RFC2986] defines the attributes field as key-value pairs where the key is an OID and the value's structure depends on the key. The only attribute used in this profile is the extensionRequest attribute as defined in [RFC2985]. This attribute contains certificate extensions. The profile for extensions in certificate requests is specified in Section 6.3.

This profile applies the following additional constraint to fields that MAY appear in a CertificationRequest object:

signatureAlgorithm
  The signatureAlgorithm value is specified in [RFC6485].

6.2. CRMF Profile

This profile refines the Certificate Request Message Format (CRMF) specification in [RFC4211], as it relates to resource certificates. A Certificate Request Message object, formatted according to the CRMF, is passed to a CA as the initial step in certificate issuance. With the exception of the SubjectPublicKeyInfo and the SIA extension request, the CA is permitted to alter any requested field when issuing the certificate.

6.2.1. CRMF Resource Certificate Request Template Fields

This profile applies the following additional requirements to fields that may appear in a Certificate Request Template:

version
  This field SHOULD be omitted. If present, it MUST specify a request for a version 3 certificate.

serialNumber
  This field MUST be omitted.

signingAlgorithm
  This field MUST be omitted.

issuer
  This field MUST be omitted in this profile.
Validity
  This field MAY be omitted. If omitted, the CA will issue a certificate with Validity dates as determined by the CA. If specified, then the CA MAY override the requested values with dates as determined by the CA.

Subject
  This field MAY be omitted. If present, the value of this field SHOULD be empty (i.e., NULL), in which case the CA MUST generate a subject name that is unique in the context of certificates issued by this CA. This field is allowed to be non-empty only for a re-key/reissuance request, and only if the CA has adopted a policy (in its CPS) that permits the reuse of names in these circumstances.

Public Key
  This field MUST be present.

extensions
  The profile for extensions in certificate requests is specified in Section 6.3.

6.2.2. Resource Certificate Request Control Fields

The following control fields are supported in this profile:

Authenticator Control
  The intended model of authentication of the subject is a "long term" model, so the guidance offered in [RFC4211] is that the Authenticator Control field be used.

6.3. Certificate Extension Attributes in Certificate Requests

The following extensions MAY appear in a PKCS#10 or CRMF Certificate Request. Any other extensions MUST NOT appear in a Certificate Request. This profile places the following additional constraints on these extensions:

BasicConstraints
  If this extension is omitted, then the CA will issue an EE certificate (hence no BasicConstraints extension will be included). The pathLengthConstraint is not supported in this profile, and this field MUST be omitted. The CA MAY honor the cA boolean if set to TRUE (CA Certificate Request); if this bit is set, it indicates that the subject is requesting a CA certificate. The CA MUST honor the cA bit if set to FALSE (EE Certificate Request), in which case the corresponding EE certificate will not contain a Basic Constraints extension.
KeyUsage
  The CA MAY honor KeyUsage extensions of keyCertSign and cRLSign if present, as long as this is consistent with the BasicConstraints SubjectType sub-field, when specified.

ExtendedKeyUsage
  The CA MAY honor ExtendedKeyUsage extension values if present, as long as this is consistent with the BasicConstraints SubjectType sub-field, when specified.

SubjectInformationAccess
  This field MUST be present, and the field value SHOULD be honored by the CA if it conforms to the requirements set forth in Section 4.8.8. If the CA is unable to honor the requested value for this field, then the CA MUST reject the Certificate Request.

7. Resource Certificate Validation

This section describes the resource certificate validation procedure. This refines the generic procedure described in Section 6 of [RFC5280].

7.1. Resource Extension Validation

The IP Resources and AS Resources extensions [RFC3779] define critical extensions for INRs. These are ASN.1-encoded representations of IPv4 and IPv6 address ranges and AS number sets. Valid resource certificates MUST have a valid IP address and/or AS number resource extension. In order to validate a resource certificate, the resource extension MUST also be validated. This validation process relies on the following definitions for the comparison of resource sets:

more specific
  Given two contiguous IP address ranges or two contiguous AS number ranges, A and B, A is "more specific" than B if range B includes all IP addresses or AS numbers described by range A, and if range B is larger than range A.

equal
  Given two contiguous IP address ranges or two contiguous AS number ranges, A and B, A is "equal" to B if range A describes precisely the same collection of IP addresses or AS numbers described by range B. The definition of "inheritance" in [RFC3779] is equivalent to this "equality" comparison.
encompass
  Given two IP address and AS number sets, X and Y, X "encompasses" Y if, for every contiguous range of IP addresses or AS numbers in set Y, that range is either "more specific" than or "equal" to a contiguous range element within the set X.

Validation of a certificate's resource extension in the context of a certification path (see Section 7.2) entails verifying that, for every adjacent pair of certificates in the certification path (certificates 'x' and 'x + 1'), the number resources described in certificate 'x' "encompass" the number resources described in certificate 'x + 1', and that the resources described in the trust anchor information "encompass" the resources described in the first certificate in the certification path.

7.2. Resource Certification Path Validation

Validation of signed resource data using a target resource certificate consists of verifying that the digital signature of the signed resource data is valid, using the public key of the target resource certificate, and also validating the resource certificate in the context of the RPKI, using the path validation process. This path validation process verifies, among other things, that a prospective certification path (a sequence of n certificates) satisfies the following conditions:

1. for all 'x' in {1, ..., n-1}, the subject of certificate 'x' is the issuer of certificate ('x' + 1);
2. certificate '1' is issued by a trust anchor;
3. certificate 'n' is the certificate to be validated; and
4. for all 'x' in {1, ..., n}, certificate 'x' is valid.

Certificate validation entails verifying that all of the following conditions hold, in addition to the certification path validation criteria specified in Section 6 of [RFC5280]:

1. The certificate can be verified using the issuer's public key and the signature algorithm.

2. The current time lies within the certificate's Validity From and To values.

3.
The certificate contains all fields that MUST be present, as defined by this specification, and contains values for selected fields that are defined as allowable values by this specification.

4. No field, or field value, that this specification defines as MUST NOT be present is used in the certificate.

5. The issuer has not revoked the certificate. A revoked certificate is identified by the certificate's serial number being listed on the issuer's current CRL, as identified by the CRLDP of the certificate, where the CRL is itself valid and the public key used to verify the signature on the CRL is the same public key used to verify the certificate itself.

6. The resource extension data is "encompassed" by the resource extension data contained in a valid certificate where this issuer is the subject (the previous certificate in the context of the ordered sequence defined by the certification path).

7. The certification path originates with a certificate issued by a trust anchor, and there exists a signing chain across the certification path where the subject of certificate 'x' in the certification path matches the issuer of certificate 'x + 1' in the certification path, and the public key in certificate 'x' can verify the signature value in certificate 'x + 1'.

A certificate validation algorithm MAY perform these tests in any chosen order. Certificates and CRLs used in this process MAY be found in a locally maintained cache, maintained by regular synchronization across the distributed publication repository structure [RFC6481].

There exists the possibility of encountering certificate paths that are arbitrarily long, or of attempts to generate paths with loops, as a means of mounting a potential denial-of-service (DoS) attack on an RP. An RP executing this procedure MAY apply further heuristics to bring the certification path validation process to a halt in order to avoid some of the issues associated with attempts to validate such malformed certification path structures.
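The resource comparison relations defined in Section 7.1 ("more specific", "equal", and "encompasses") and the pairwise path condition can be sketched as follows. This is a hypothetical illustration, not part of the profile: it models each contiguous resource range as a closed integer pair and skips the RFC 3779 ASN.1 decoding that a real implementation must perform.

```python
def more_specific(a, b):
    """A is 'more specific' than B: B includes all of A and B is larger."""
    return b[0] <= a[0] and a[1] <= b[1] and (b[1] - b[0]) > (a[1] - a[0])

def equal(a, b):
    """A 'equals' B: both describe precisely the same range."""
    return a == b

def encompasses(x_set, y_set):
    """X 'encompasses' Y: every range in Y is 'more specific' than or
    'equal' to some range in X (Section 7.1)."""
    return all(
        any(more_specific(y, x) or equal(y, x) for x in x_set)
        for y in y_set
    )

def path_resources_valid(path_resources):
    """Check the Section 7.1 path condition: each certificate's resources
    must encompass those of the next certificate in the path."""
    return all(
        encompasses(path_resources[i], path_resources[i + 1])
        for i in range(len(path_resources) - 1)
    )
```

Note that, under the definition as stated, a range in Y must fit within a single range in X; a range that is covered only by the union of several X ranges does not satisfy the relation.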
Implementations of resource certificate validation MAY halt with a validation failure if the certification path length exceeds a locally defined configuration parameter.

8. Design Notes

The following notes provide some additional commentary on the considerations that lie behind some of the design choices made in this certificate profile. These notes are non-normative, i.e., this section of the document does not constitute a formal part of the profile specification, and the interpretation of key words as defined in RFC 2119 is not applicable in this section of the document.

Certificate Extensions:
   This profile does not permit the use of any other critical or non-critical extensions. The rationale for this restriction is that the resource certificate profile is intended for a specific defined use. In this context, having certificates with additional non-critical extensions that RPs may treat as valid without understanding the extensions is inappropriate, because such an extension, were the RP in a position to understand it, might contradict or qualify this original judgment of validity in some way. This profile takes the position of minimalism over extensibility. The specific goal for the associated RPKI is to precisely match the INR allocation structure through an aligned certificate structure that describes the allocation and its context within the INR distribution hierarchy. The profile defines a resource certificate that is structured to meet these requirements.

Certification Authorities and Key Values:
   This profile uses a definition of an instance of a CA as a combination of a named entity and a key pair. Within this definition, a CA instance cannot roll over a key pair. However, the entity can generate a new instance of a CA with a new key pair and roll over all the signed subordinate products to the new CA [RFC6489].
This has a number of implications in terms of subject name management, CRL scope, and repository publication point management.

CRL Scope and Key Values:
   For CRL scope, this profile specifies that a CA issues a single CRL at a time, and that the scope of the CRL is all certificates issued by this CA. Because the CA instance is bound to a single key pair, this implies that the CA's public key, the key used to validate the CA's CRL, and the key used to validate the certificates revoked by that CRL are all the same key value.

Repository Publication Point:
   The definition of a CA affects the design of the repository publication system. In order to minimize the amount of forced re-certification on key rollover events, a repository publication regime that uses the same repository publication point for all CA instances that refer to the same entity, but with different key values, will limit the extent of re-generation of certificates to only the immediate subordinate certificates. This is described in [RFC6489].

Subject Name:
   This profile specifies that subject names must be unique per issuer, and does not specify that subject names must be globally unique (in terms of assured uniqueness). This is due to the nature of the RPKI as a distributed PKI, implying that there is no ready ability for certification authorities to coordinate a simple RPKI-wide unique name space without resorting to additional critical external dependencies. CAs are advised to use subject name generation procedures that minimize the potential for name clashes. One way to achieve this is for a CA to use a subject name practice that uses the CommonName component of the Distinguished Name as a constant value for any given entity that is the subject of CA-issued certificates, and to set the serialNumber component of the Distinguished Name to a value derived from the hash of the subject public key value.
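The hash-derived naming practice just described can be sketched as below. This is a hypothetical illustration (the function name and DER input are ours, not from the profile): the CommonName stays constant for a given subject, and the serialNumber is derived from a hash of the subject public key. The choice of SHA-1 here is our assumption; a 160-bit digest yields 40 ASCII hex digits, matching the length of the example name values shown in this profile.

```python
import hashlib

def subject_dn(common_name: str, subject_public_key_der: bytes) -> dict:
    """Build a Distinguished Name dict with a constant CommonName and a
    serialNumber derived from a hash of the subject public key, so that
    distinct keys yield distinct names with overwhelming probability."""
    # SHA-1 (assumed): 160 bits rendered as 40 ASCII hex digits.
    serial = hashlib.sha1(subject_public_key_der).hexdigest()
    return {"commonName": common_name, "serialNumber": serial}
```

Because the serialNumber is a deterministic function of the key, re-issuing a certificate for the same entity and key reproduces the same subject name, while a key rollover automatically produces a fresh one.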
If the CA elects not to use the serialNumber component of the Distinguished Name, then it is considered beneficial for the CA to generate CommonNames that themselves have a random component that includes significantly more than 40 bits of entropy. Some non-normative recommendations to achieve this include:

1) A hash of the subject public key (encoded as ASCII HEX).
   example: cn=999d99d564de366a29cd8468c45ede1848e2cc14

2) A Universally Unique IDentifier (UUID) [RFC4122].
   example: cn=6437d442-6fb5-49ba-bbdb-19c260652098

3) A randomly generated ASCII HEX encoded string of length 20 or greater:
   example: cn=0f8fccc28e3be4869bc5f8fa114db05e1
   (A string of 20 ASCII HEX digits would have 80 bits of entropy.)

4) An internal database key or subscriber ID combined with one of the above.
   example: cn=<DBkey1> (6437d442-6fb5-49ba-bbdb-19c2606520980)
   (The issuing CA may wish to be able to extract the database key or subscriber ID from the commonName. Since only the issuing CA would need to be able to parse the commonName, the database key and the source of entropy (e.g., a UUID) could be separated in any way that the CA wants, as long as it conforms to the rules for PrintableString.)

9. Operational Considerations for Profile Agility

This profile requires that relying parties reject certificates or CRLs that do not conform to the profile. (Through the remainder of this section, the term "certificate" is used to refer to both certificates and CRLs.) This includes certificates that contain extensions that are prohibited, but that are otherwise valid as per [RFC5280]. This means that any change to the profile (e.g., extensions, permitted attributes or optional fields, or field encodings) for certificates used in the RPKI will not be backward compatible. In a general PKI context, this constraint probably would cause serious problems. In the RPKI, several factors minimize the difficulty of effecting changes of this sort.
Note that the RPKI is unique in that every relying party (RP) requires access to every certificate issued by the CAs in this system. An important update of the certificates used in the RPKI must be supported by all CAs and RPs in the system, lest views of the RPKI data differ across RPs. Thus, incremental changes require very careful coordination. It would not be appropriate to introduce a new extension, or authorize use of an extant, standard extension, for a security-relevant purpose on a piecemeal basis.

One might imagine that the "critical" flag in X.509 certificate extensions could be used to ameliorate this problem. However, this solution is not comprehensive and does not address the problem of adding a new, security-critical extension. (This is because such an extension needs to be supported universally, by all CAs and RPs.) Also, while some standard extensions can be marked either critical or non-critical, at the discretion of the issuer, not all have this property, i.e., some standard extensions are always non-critical. Moreover, there is no notion of criticality for attributes within a name or for optional fields within a field or an extension. Thus, the critical flag is not a solution to this problem.

In typical PKI deployments, there are few CAs and many RPs. However, in the RPKI, essentially every CA is also an RP. Thus, the set of entities that will need to change in order to issue certificates under a new format is the same set of entities that will need to change to accept these new certificates. To the extent that this is literally true, CA/RP coordination for a change is tightly linked anyway. In reality, there is an important exception to this general observation. Small ISPs and holders of provider-independent allocations are expected to use managed CA services, offered by Regional Internet Registries (RIRs) and potentially by wholesale Internet Service Providers (ISPs).
This reduces the number of distinct CA implementations that are needed and makes it easier to effect changes for certificate issuance. It seems very likely that these entities also will make use of RP software provided by their managed CA service provider, which reduces the number of distinct RP software implementations. Also note that many small ISPs (and holders of provider-independent allocations) employ default routes, and thus need not perform RP validation of RPKI data, eliminating these entities as RPs.

Widely available PKI RP software does not cache large numbers of certificates, which is an essential strategy for the RPKI. It does not process manifest or ROA data structures, which are essential elements of the RPKI repository system. Experience shows that such software deals poorly with revocation status data. Thus, extant RP software is not adequate for the RPKI, although some open source tools (e.g., OpenSSL and cryptlib) can be used as building blocks for an RPKI RP implementation. Thus, it is anticipated that RPs will make use of software that is designed specifically for the RPKI environment and is available from a limited number of open sources. Several RIRs and two companies are providing such software today. Thus, it is feasible to coordinate changes to this software among the small number of developers/maintainers.

If the resource certificate profile is changed in the future, e.g., by adding a new extension or changing the allowed set of name attributes or the encoding of these attributes, the following procedure will be employed to effect deployment in the RPKI. The model is analogous to that described in [RPKI-ALG], but is simpler. A new document will be issued as an update to this RFC. The CP for the RPKI [RFC6484] will be updated to reference the new certificate profile. The new CP will define a new policy OID for certificates issued under the new certificate profile. The updated CP also will define a timeline for transition to the new certificate (CRL) format.
This timeline will define 3 phases and associated dates:

1. At the end of phase 1, all RPKI CAs MUST be capable of issuing certificates under the new profile, if requested by a subject. Any certificate issued under the new format will contain the new policy OID.

2. During phase 2, CAs MUST issue certificates under the new profile, and these certificates MUST coexist with certificates issued under the old format. (CAs will continue to issue certificates under the old OID/format as well.) The old and new certificates MUST be identical, except for the policy OID and any new extensions, encodings, etc. The new certificates, and associated signed objects, will coexist in the RPKI repository system during this phase, analogous to what is required by an algorithm transition for the RPKI [RPKI-ALG]. Relying parties MAY make use of the old or the new certificate formats when processing signed objects retrieved from the RPKI repository system. During this phase, a relying party that elects to process both formats will acquire the same values for all certificate fields that overlap between the old and new formats. Thus, if either certificate format is verifiable, the relying party accepts the data from that certificate. This allows CAs to issue certificates under the new format before all relying parties are prepared to process that format.

3. At the beginning of phase 3, all relying parties MUST be capable of processing certificates under the new format. During this phase, CAs will issue new certificates ONLY under the new format. Certificates issued under the old OID will be replaced with certificates containing the new policy OID. The repository system will no longer require matching old and new certificates under the different formats. At the end of phase 3, all certificates under the old OID will have been replaced.
The resource certificate profile RFC will be replaced to remove support for the old certificate format, and the CP will be replaced to remove reference to the old policy OID and to the old resource certificate profile RFC. The system will have returned to a new, steady state.

10. Security Considerations

A resource certificate PKI cannot in and of itself resolve any forms of ambiguity relating to uniqueness of assertions of rights of use in the event that two or more valid certificates encompass the same resource. If the issuance of resource certificates is aligned to the status of resource allocations and assignments, then the information conveyed in a certificate is no better than the information in the allocation and assignment databases.

This profile requires that the key used to sign an issued certificate be the same key used to sign the CRL that can revoke the certificate, implying that the certification path used to validate the signature on a certificate is the same as that used to validate the signature of the CRL that can revoke the certificate. It is noted that this is a tighter constraint than required in X.509 PKIs, and there may be a risk in using a path validation implementation that is capable of using separate validation paths for a certificate and the corresponding CRL.

If there are subject name collisions in the RPKI as a result of CAs not following the guidelines provided here relating to ensuring sufficient entropy in constructing subject names, and this is combined with the situation that an RP uses an implementation of validation path construction that is not in conformance with this RPKI profile, then it is possible that the subject name collisions can cause an RP to conclude that an otherwise valid certificate has been revoked.

11.
Acknowledgements

The authors would like to particularly acknowledge the valued contribution from Stephen Kent in reviewing this document and proposing numerous sections of text that have been incorporated into the document. The authors also acknowledge the contributions of Sandy Murphy, Robert Kisteleki, Randy Bush, Russ Housley, Ricardo Patara, and Rob Austein in the preparation and subsequent review of this document. The document also reflects review comments received from Roque Gagliano, Sean Turner, and David Cooper.

12. References

12.1. Normative References

12.2. Informative References

Appendix A. Example Resource Certificate

The following is an example resource certificate.

Certificate Name: 9JfgAEcq7Q-471wMC5CJJr6EJs.cer
Data:
  Version: 3 (0x2)
  Serial: 1500 (0x5dc)
  Signature Algorithm: SHA256WithRSAEncryption
  Issuer: CN=APNIC Production-CVPQsukLy7pOXdNeWVGvFX_0s
  Validity
    Not Before: Oct 25 12:50:00 2008 GMT
    Not After : Jan 31 00:00:00 2010 GMT
  Subject: CN=A91872ED
  Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
    RSA Public Key: (2048 bit)
      Modulus (2048 bit):
        70:34:e9:3f:d7:e4:24:cd:bb:e0:0f:8e:80:eb:11:
        1f:bb:cc:57:e0:05:8e:5c:7b:96:26:ff:2c:17:30:7d:
        4a:ad:3c:4a:94:bf:74:4c:30:72:9b:1e:f5:8b:00:
        4d:e3
      Exponent: 65537 (0x10001)
  X509v3 extensions:
    X509v3 Subject Key Identifier:
      F4:97:E0:00:47:2A:ED:0F:B8:EC:8C:0C:0B:90:89:20:9A:FA:10:9B
    X509v3 Authority Key Identifier:
    X509v3 Key Usage: critical
      Certificate Sign, CRL Sign
    X509v3 Basic Constraints: critical
      CA:TRUE
    X509v3 CRL Distribution Points:
      URI:rsync://rpki.apnic.net/repository/A3C38A24D60311DCAB08F31979BDBE39/CVPQSgUkLy7pOXdNeVGvnFX_0s.crl
    Authority Information Access:
      CA Issuers - URI:rsync://rpki.apnic.net/repository/8BDFC7DED5FD11DCB14CF4B1A703F9B7/CVPQSgUkLy7pOXdNeVGvnFX_0s.cer
    X509v3 Certificate Policies: critical
      Policy: 1.3.6.1.5.5.7.14.2
    Subject Information Access:
      CA Repository - URI:rsync://rpki.apnic.net/member_repository/A91872ED/06A83982887911DD813F432B2086D636/
      Manifest - URI:rsync://rpki.apnic.net/member_repository/A91872ED/06A83982887911DD813F432B2086D636/9JfgeAECq7Q-47IwMC5CJI7r6EJs.mft
    sbgp-autonomousSysNum: critical
      Autonomous System Numbers:
        24021
        38610
        131072
        131074
    sbgp-ipAddrBlock: critical
      IPv4:
        203.133.248.0/22
        203.147.108.0/23
  Signature Algorithm: sha256WithRSAEncryption
    a:5f:97:71

Appendix B. Example Certificate Revocation List

The following is an example Certificate Revocation List.
CRL Name: q66IrWSGuBE7jqx8PAUHA1HCqRw.crl
Data:
  Version: 2
  Signature Algorithm: Hash: SHA256, Encryption: RSA
  Issuer: CN=Demo Production APNIC CA - Not for real use, E=ca@apnic.net
  This Update: Thu Jul 27 06:30:34 2006 GMT
  Next Update: Fri Jul 28 06:30:34 2006 GMT
  Authority Key Identifier:
    Key Identifier: ab:ae:88:ad:64:86:b8:11:3b:8e:ac:7c:3c:05:07:02:51:c2:a9:1c
  CRLNumber: 4
  Revoked Certificates: 1
    Serial Number: 1
      Revocation Date: Mon Jul 17 05:10:19 2006 GMT
    Serial Number: 2
      Revocation Date: Mon Jul 17 05:12:25 2006 GMT
    Serial Number: 4
      Revocation Date: Mon Jul 17 05:40:39 2006 GMT
  Signature:
    b2:5a:e8:7c:bd:a8:00:0f:03:1a:17:fd:40:2c:46:
    17:c8:0e:ae:8c:89:ff:00:f7:81:97:03:a1:6a:0a:
    02:5b:2a:d0:8a:7a:33:0a:d5:ce:be:b5:a2:7d:8d:
    d9

Authors' Addresses

Geoff Huston
APNIC
EMail: gih@apnic.net
URI: http://www.apnic.net

George Michaelson
APNIC
EMail: ggm@apnic.net
URI: http://www.apnic.net

Robert Loomans
APNIC
EMail: robertl@apnic.net
URI: http://www.apnic.net
A HYPOTHETICAL DIALOGUE EXHIBITING A KNOWLEDGE BASE FOR A PROGRAM-UNDERSTANDING SYSTEM

Cordell Green, et al
Stanford University

Prepared for: Advanced Research Projects Agency
January 1975

DISTRIBUTED BY: National Technical Information Service, U.S. DEPARTMENT OF COMMERCE

A Hypothetical Dialogue Exhibiting a Knowledge Base for a Program-Understanding System

by Cordell Green and David Barstow

ABSTRACT

A hypothetical dialogue with a fictitious program-understanding system is presented. In the interactive dialogue the computer carries out a detailed synthesis of a simple insertion sort program for linked lists. The content, length and complexity of the dialogue reflect the underlying programming knowledge which would be required for a system to accomplish this task. The nature of the knowledge is discussed and the codification of such programming knowledge is suggested as a major research area in the development of program-understanding systems.

This research was supported by the Advanced Research Projects Agency of the Department of Defense under Contract DAHC 15-73-C-0435. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as necessarily representing the official policies, either expressed or implied, of Stanford University, ARPA, or the U.S. Government. Reproduced in the U.S.A. Available from the National Technical Information Service, Springfield, Virginia 22151.

TABLE OF CONTENTS

I. INTRODUCTION
   (a) SUMMARY
   (b) DOMAIN OF DISCOURSE
II.
A DIALOGUE
   (a) INTRODUCTION
   (b) PART 1: Setting Up the Main Tasks
   (c) PART 2: Synthesizing the Selector
   (d) PART 3: Synthesizing the Constructor
   (e) PART 4: Completing the Program
III. TYPES OF PROGRAMMING KNOWLEDGE
IV. SUMMARY AND CONCLUSIONS
V. ACKNOWLEDGEMENTS
VI. REFERENCES

INTRODUCTION

(a) SUMMARY

The overall objective of our research is to gain more insight into the programming process as a necessary step toward building program-understanding systems. Our approach has been to examine the process of synthesizing very simple programs in the domain of sorting. We hope that by beginning with this simple domain and developing and implementing a reasonably comprehensive theory, we can then gauge what is required to create more powerful and general program-understanding systems. Toward this end, we are working on first isolating and codifying the knowledge appropriate for the synthesis and understanding of programs in this class and then embedding this knowledge as a set of rules in a computer program. Along the way, we have developed some preliminary views about what a program-understanding system should know.

Our goal in this particular paper is to present a dialogue with a hypothetical program-understanding system. A dialogue was chosen as a method of presentation that would exemplify, in an easily understood fashion, what such a system should know. The subject of the dialogue is the synthesis of a simple insertion sort program. Each step in the dialogue corresponds to the utilization of one or more pieces of suggested programming knowledge. Most of this knowledge is stated explicitly in each step.

The dialogue presented here is a highly fictional one, although some portions of the reasoning shown in the dialogue have been tested in an experimental system. We are now in the process of formulating the necessary programming knowledge as a set of synthesis rules. However, the scope of this paper does not include the presentation of the current state of our rules.
So far some 110 rules have been developed and are being refined in a rule-testing system. The synthesis tasks on which these rules are being debugged include two insertion sorts, one selection sort, and a list reversal. We hope to present in a later paper a description of the set of rules.

As will become apparent in the dialogue, one of our conjectures is that a program-understanding system will need very large amounts of many different kinds of knowledge. This seems to be the key to the flexibility necessary to synthesize, analyze, modify, and debug a large class of programs. In addition to the usual types of programming knowledge, such as the semantics of programming languages or techniques of local optimization, many other types are needed. These include, at least, high-level programming constructs, strategy or planning information, domain-specific and general programming knowledge, and global optimization techniques. In Section III we discuss this further and show where these kinds of knowledge occur in the dialogue.

(b) DOMAIN OF DISCOURSE

Topics mentioned in the dialogue include data structures, low-level operations, and high-level programming constructs. The main data structures mentioned in our dialogue are ordered sets represented by lists. The low-level operations mentioned include assignment, pointer manipulation, list insertion, etc. Some of the higher-level (in some sense) notions or constructs we consider are permutation, ordering (by various criteria), set enumeration, generate and test, generate and process, proof by induction, conservation of elements during a transfer, and methods of temporary marking (or place-saving) of positions and elements. Time and space requirements for various methods are not discussed. The target language is LISP, in particular the INTERLISP language [10]. However, in the dialogue we represent the programs in a fictitious meta-LISP.

II.
A DIALOGUE

(a) INTRODUCTION

In this section we wish to exhibit what we consider to be a reasonable level of understanding on the part of a program-understanding system. It is not obvious how best to present this in a way that is easy for the reader to follow, since the synthesis process is rather complex. We hope that an English language dialogue is adequate. We have added to the English several "snapshots" of the developing program that help to indicate where the system is in the programming process. These diagrams are similar to the stepwise refinements used in structured programming [1]. Our dialogue may be considered as a continuation of the technique of presentation used by Floyd for a program verifier-synthesizer [2], although our more hypothetical system has been allowed to know more about program synthesis for its domain of discourse.

In certain ways we feel that the dialogue is not representative of how a program-understanding system would appear to the user during the synthesis process, although such a low-level dialogue would at times have its place. For expository purposes the dialogue has certain choices and inferences made by the machine and others made by the user. Depending upon the application, these decisions and inferences could reasonably be re-distributed between user and machine, with some made by other automated "experts", such as an efficiency expert, and other decisions forced by the context in which the program is written. For example, the data structures for this insertion sort might be determined in advance if the sort routine were part of a larger system, and all choices made by the user could be made instead by rules calculating the efficiency of each branch. A more typical user-machine dialogue would usually be conducted at a much higher level and in the words used to talk about the application area for the program.
But to present a hypothetical dialogue in which the program merely said, "OK, I'm all finished," or, "I wrote a sort program as part of your operating system," wouldn't explicate very much programming knowledge.

The style of the dialogue is not intended to be a strong suggestion about what the process of specifying a program should be. In fact, we feel that program specification is a difficult problem in automatic programming research [4], and we touch on only a few aspects of it here. However, some of the techniques used in the dialogue will probably find some, as yet undetermined, place in program-understanding systems. The obvious techniques include interactive specification, natural language communication, and the periodic presentation of summaries of the structure and content of the evolving program.

We note that we have endowed our hypothetical system with various abilities, including an inference capability, a program simplification capability, and the ability to illustrate and summarize where it is in the programming process. Further, the dialogue sometimes takes on the character of a decision-tree, sometimes of goal-oriented problem-solving, and sometimes of step-wise refinement. These all seem appropriate to the task. Various decisions, such as how large to make a step, when to suppress or explicate an inference, and when to present a diagram, are somewhat arbitrary, but were made primarily on the basis of what seemed to be the clearest exposition.

Before the start of the dialogue, we wish to remind the reader that the focus is not on the particular mechanisms which might be employed to produce the hypothesized behavior, but rather on the reasoning itself.

(b) PART 1: Setting Up the Main Tasks

1. User: Please write me a sort program. Use linked lists to represent the input and output, which are ordered sets. The output set should be ordered so that each element is numerically greater than the preceding element.
The input set will be called INPUT and the output set will be called OUTPUT.

2. Machine: As I understand it, a sort program produces as output a permutation (of the input) that satisfies a test. In this case, the test is that the elements be ordered on the basis of the above-mentioned ordering relation, "numerically greater than". Please choose one of the two known methods* for finding a permutation that satisfies a test:

(a) Generate and test permutations until the desired one is found.
(b) Carefully construct the desired permutation.

3. User: Carefully construct the desired permutation.

4. Machine: Should the output set be constructed in the same list cells as the input set, or should a new list be created?

5. User: A new list should be created.†

6. Machine: Do you want an iterative or recursive program?

7. User: Iterative. [We wish to remind the reader that the distinctions between "user" and "machine" are somewhat arbitrary. Yet a decision must be made, so we have chosen to let our hypothetical "user" make it. Furthermore there is no reason to expect the user to have a preference for an iterative or recursive program. We have our user choose an iterative structure over what might be a more natural recursive structure, because there are fundamental aspects of the algorithm which are more explicit in its iterative form.]

8. Machine: Then the program will be in the form of a transfer program which must be total. The program will transfer all elements from the input to positions in the output, such that the final output is a permutation (of the input) which satisfies the above ordering relation, namely increasing by numerical value.

* The limitation to only two "known" choices is arbitrary, but it is reasonable to expect a system to have a limited number of choices.

† This does not preclude overwriting the input set during the sort. Overwriting might be done in order to indicate which elements have been selected from the input.
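The two methods offered in step 2 can be contrasted with a small Python sketch. This is our illustration, not the paper's code: Python lists stand in for the dialogue's linked lists, and `itertools.permutations` stands in for a permutation generator.

```python
# Sketch contrasting the two methods of step 2: (a) generate and test
# permutations until one satisfies the ordering test, versus (b) the
# careful construction the user chooses.
from itertools import permutations

def ordered(s):
    # the test: each element numerically greater than the preceding one
    return all(a < b for a, b in zip(s, s[1:]))

def sort_by_generate_and_test(input_set):
    for p in permutations(input_set):   # method (a): try permutations
        if ordered(p):
            return list(p)

def sort_by_construction(input_set):
    output = []                         # method (b): build it directly
    for element in input_set:
        i = 0
        while i < len(output) and output[i] <= element:
            i += 1
        output.insert(i, element)       # place element where it belongs
    return output
```

Method (a) is exponential in the size of the input, which is one motivation for the user's choice of careful construction in step 3.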
† By "total", we mean that the transfer loop must be repeated until all elements have been transferred.

The transfer paradigm:†

```
transfer program:
  LOOP UNTIL all elements of input have been processed;
    selector: select the next element from input;
    constructor: add this element to output;
  REPEAT.
```

† As illustrated, the transfer program has a "selector" part and a "constructor" part. The selector produces the elements one at a time, and the constructor builds the new set. Either the selector or the constructor may re-order the set, i.e., the selector can produce the elements in an order different from the stored order, or the constructor can build a set with an ordering different from the order of receipt of the elements.

† We will generally present in our diagrams only those parts which seem relevant to the current context.

9. Machine: In order to write such a transfer program, I must achieve several goals:

(1) Write a selector. On each call, the selector will find that element of the input which comes next in the chosen transfer order. The selector consists of two parts:
(a) selector initializer
(b) selector body

(2) Write a constructor. On each call, the constructor will produce a new output (or modify the old output), so that the new element is in the appropriate position but that otherwise the order of elements in the output is unchanged. The constructor consists of two parts:
(a) constructor initializer
(b) constructor body

Before I can write these parts, I must:

(3) Select a transfer order--the order in which the variables will be transferred from the input to the output.

(4) Select a transfer termination method--a test that all elements have been transferred, and a position in the program appropriate to the type of test selected.

(5) Select a name for the transfer variable--a variable which holds as its value the element being transferred.
After writing the parts, I must:

(6) Combine the selector, constructor and transfer termination test into a generate and process loop. This completes the final program.

The basic structure of the program (including test position):†

```
transfer initializer:
  selector initializer:
  constructor initializer:
transfer body:
  LOOP UNTIL DONE:
    transfer termination test:
      IF all elements have been transferred THEN DONE;
    selector body:
      select the next element from input;
    constructor body:
      add the element to output;
  REPEAT;
DONE: HALT.
```

† Our syntax for iterative statements and exits is that proposed by Zahn [11] and supported by Knuth [6]. Loops are exited upon encountering an event, such as DONE in the above example. The program exits to the occurrence of the event label after the body of the loop. Events are declared after LOOP UNTIL. REPEAT denotes the end of the loop.

10. Machine: Should the transfer order be linear with respect to the initial order or the final order?†

† We define the "initial" and "final" orders to be the conventional list-representation ordering of the input set and of the desired output set, respectively. By "linear with respect to an order" we mean either identical to that order or the converse of that order.
The user’s choice of "linear with respect to the initial order" amounts to choosing an insertion sort algorithm, rather than a selection sort algorithm. The full effect of this choice will be seen when choosing the "construction re-ordering" during the synthesis of the constructor.

11. User: Linear with respect to the initial order.

12. Machine: Should the transfer order be identical to the initial order or the converse of the initial order?

13. User: Identical to the initial order.

14. Machine: Now let's select a transfer termination method. This will consist of a test and a position for the test. Since the transfer will be total, the transfer termination test must test whether all elements have been transferred. This is equivalent to testing whether no elements remain to be selected. This is the same test which is used by the selector to stop its generation of elements. I will use this test as the transfer termination test. While writing the selector I will thus have to write a third part: the selector termination test. Since this test must precede a call to the selector body, I will put the test at the beginning of the loop.†

15. Machine: I will use ELEMENT as the transfer variable.†

† For ease in presentation the preceding diagram showed the result of this choice (i.e., the test appeared in the proper location), although the choice had not yet been made. Considering all of the tests which might have been used and all of the places where such a test might have been put, one could imagine a great deal of planning going into making this rather difficult decision.

† To choose meaningful variable names requires knowledge about communication with the user and is a difficult problem.

(c) PART 2: Synthesizing the Selector

We are about to work on the indicated parts of the selector.
```
transfer initializer:
  selector initializer:
  constructor initializer:
transfer body:
  LOOP UNTIL DONE:
    transfer termination test:
      IF all elements have been transferred
      ⇒ all elements have been selected
      THEN DONE;
    selector body:
      select the next element from input;
    constructor body:
      add the element to output;
  REPEAT;
DONE: HALT.
```

† Throughout this dialogue, we will use special characters and type fonts to indicate different sections of these diagrams. We present the meaning of the fonts here in case confusion arises, but the meaning should be clear from the context.

- areas not under consideration
- areas currently under consideration
- PROGRAM PARTS JUST COMPLETED
- RESERVED WORDS
- BASIC LISP FUNCTIONS
- USER- OR MACHINE-DEFINED LABELS
- ⇒ the right arrow indicates a step in the refinement process

16. Machine: I will now choose a selection re-ordering. I will use the term "selection re-ordering" to refer to the relation between (a) the order of the elements in the input (positional order) and (b) the order in which the elements are selected by the selector (temporal order). Since the transfer order is linear and forward (that is, identical to the input order), the selection re-ordering must also be linear and forward.

17. Machine: A selector is a type of generator, a "total" generator that produces each element of the input once and only once. Corresponding to the three parts of a selector, this type of generator also has three parts:

(1) a body which actually produces the elements
(2) an initializer which can set up the generator before it is called the first time
(3) a termination test (to be used as the transfer termination test) which tests whether all elements have been generated

18. Machine: I must now select the state-saving scheme by which the generator will save its state after each call.
In order to save its state, the generator must remember how the input set is divided into two sets:

(1) those elements which have already been generated
(2) those elements which are yet to be generated

Recall that the generator is moving linearly through the input list from first to last to find each new element. Therefore all elements before its current position in the list have already been generated, and all elements after its current position are yet to be generated. To save the state of the generator, I need only save a marker for the generator's current position in the input list. One pointer will be adequate for this purpose. Let us call this pointer REST.

† Technically, before these parts can be written, the production re-ordering for the generator must be chosen. In this case, it is obviously the same as the selection re-ordering, so we have omitted it from the dialogue in the interests of clarity. The initializer is written after the producer, since in general the form of an initializer is dependent on the body of the loop to be initialized.

We are about to work on the indicated parts of the selector body.

19. Machine: Now I will synthesize the body of the generator.† I need code for two pieces:

(1) increment the state of the generator
(2) produce the next element

20. Machine: To increment the state of the generator, I will advance the position pointer by one. Since the representation is a linked list, the code for this is merely:

```
REST := CDR(REST)
```

21. Machine: Now I will write the producer. Since the state-saving scheme uses a pointer, REST, to separate those elements generated from those not yet generated, and since the next element to be generated is the first element of the non-generated set, then the process of saving the state will also effectively mark the next element to be generated.
To produce the next element, then, I need merely take the first element of the non-generated set, which is pointed to by REST. Since the representation is linked lists, the subprogram for this is:†

```
ELEMENT := CAR(REST)
```

22. Machine: I now put the pieces of the generator body together into a block:†

```
ELEMENT := CAR(REST);
REST := CDR(REST)
```

† Throughout, we will use the standard LISP terminology:

- CAR(L) gives the first element of L
- CDR(L) gives the remaining elements of L
- CONS(X L) creates a new list with first element X and with the rest of the elements being those of the list L
- NIL is the empty list (and list terminator)

† In a selection sort, the code to find the next element to be generated would be considerably more complex since it would need to find the largest of the remaining elements.

† There are situations in which the value of the element or a pointer into the list might be returned, rather than the element itself.

We have just finished the indicated parts:

```
transfer initializer:
  ...
transfer body:
  LOOP UNTIL DONE:
    ...
    selector body:
      select the next element from input
      ⇒ generate the next element from input
          produce element
          ⇒ ELEMENT = CAR(REST);
          increment state of generator
          ⇒ REST = CDR(REST);
    ...
  REPEAT;
DONE: HALT.
```

23. Machine: I will now write the generator initializer and the generator termination test. Since I am using the pointer REST to mark the elements not generated, and since initially no elements have been generated, I must initialize it to the entire input set:

```
REST = INPUT
```

24. Machine: Now let's write the generator termination test (which will be used as the transfer termination test). To test whether all elements have been generated, I will test whether there are no elements remaining. Since the elements are being generated in a forward linear order, I simply test for being at the end of the input list.
Since the representation is a linked list, this is simply a test for emptiness:

```
REST = NIL ?
```

We have just finished the indicated parts:

```
transfer initializer:
  selector initializer:
    generator initializer:
    ⇒ REST = INPUT;
  ...
transfer body:
  LOOP UNTIL DONE:
    transfer termination test:
      IF all elements have been transferred
      ⇒ all elements have been selected
      ⇒ all elements have been generated
      ⇒ no elements remain
      ⇒ REST = NIL
      THEN DONE;
    selector body:
      select the next element from input
      ⇒ generate the next element from input
          produce element
          ⇒ ELEMENT = CAR(REST);
          increment state of generator
          ⇒ REST = CDR(REST);
    ...
  REPEAT;
DONE: HALT.
```

25. Machine: I have finally finished the generator which the selector needed. In fact, that is all there is to the entire selector, so the selector is finished.

(d) PART 3: Synthesizing the Constructor

We are now working on the indicated parts of the constructor:

```
transfer initializer:
  selector initializer:
  constructor initializer:
transfer body:
  LOOP UNTIL DONE;
    transfer termination test:
      IF all elements have been transferred
      ⇒ all elements have been selected
      THEN DONE;
    selector body:
      select the next element from input;
    constructor body:
      add the element to output;
  REPEAT;
DONE: HALT.
```

26. Machine: One way to create an output set that is a permutation of the elements received by the constructor is to write an inductive constructor that satisfies the following constraints:

(1) The initial output set must be empty.
(2) Each construction operation must be such that the new output set consists of the old set plus one new element.
(3) After each construction operation, the new set must satisfy our ordering relation.

27. Machine: To write such a constructor, two decisions must first be made:

(1) Choose a construction re-ordering.
† It is sometimes convenient to think of the constructor as producing a sequence of output sets, and sometimes as successively modifying one output set.

(2) Choose a construction method for building a new set, given the old set, the new element, and the position where the new element should occur in the set.

After these choices are made, we will produce code for the constructor body and then the constructor initializer.

28. Machine: I will now choose a construction re-ordering. I will use the term "construction re-ordering" to refer to the relation between (a) the order in which the elements are received by the constructor (temporal order) and (b) the order of the elements in the output (positional order). The re-orderings may be linear or complex, where linear means same as or converse of. Since the order in which the elements are received (the transfer order) is the same as the input order, and the desired output order is not necessarily the same as (nor the converse of) the input order, the construction re-ordering must be complex (non-linear).† This re-ordering will be based on the "numerically greater than" ordering relation. This means that the correct position of each element in the output list will be such that all preceding elements are less than or equal to the new element and all following elements are greater than the new element.

† Note that we have implicitly decided that the construction consists of two steps: (1) finding the position; (2) adding the element to the set. One might imagine schemes in which finding a position and modifying the set are interleaved operations.

† This choice of a complex construction re-ordering, along with the choice of a linear selection re-ordering, completes the specification of an insertion sort, all of which follow from the user's choice of a transfer order "linear with respect to the initial order".
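The inductive constructor specified in steps 26 through 28 can be sketched in Python before the linked-list details are settled. This is our illustration, not the paper's code: a Python list stands in for the output set, and the `assert` makes constraint (3) explicit.

```python
# Sketch of an inductive constructor obeying the three constraints of
# step 26: start empty, add exactly one element per construction step,
# and keep the output ordered after every step. The position chosen for
# each element realizes the complex construction re-ordering of step 28.

def construct(received):
    output = []                           # constraint (1): initially empty
    for element in received:              # elements arrive in transfer order
        # correct position: all preceding elements less than or equal to
        # the new element, all following elements greater than it
        i = 0
        while i < len(output) and not (output[i] > element):
            i += 1
        output.insert(i, element)         # constraint (2): old set plus one
        assert output == sorted(output)   # constraint (3): still ordered
    return output
```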
† For clarity, we will refer throughout to an element being "less than" or "greater than" another element and mean that the first element precedes or follows the second element according to the chosen ordering relation.

29. Machine: Now we must select a construction method for building a new set from the old one. The choices are:

(a) List insertion: Modify the structure of the old list by inserting a new cell containing the new element.
(b) Shifting: Allocate a new space at the beginning (or end) of the set and shift all elements before (after) the new element by one position. Then put the new element into the vacant place.
(c) Copying: Copy the old set into a new space, one cell larger than the old space, with the new element taking up the extra position.

Which shall I use?†

30. User: List insertion.

31. Machine: Now I will write the body of the constructor. The body consists of two parts:

(1) a position finder, which finds the position where the new element belongs
(2) a list inserter, which inserts the element at this position

32. Machine: When a position has been found, there are several ways to indicate that position. First we must select one of these representations for a position. We are considering positions to be between two elements in a list. Since the construction method involves inserting a new cell into the list, we must choose a representation appropriate to insertions into linked lists. I know about two types of such representations:

(a) Use one pointer as illustrated:

[Diagram: a linked list with one pointer, BEFORE, pointing to the cell immediately preceding the position.]

In using this representation, a special unique value for BEFORE must be used if the correct position is at the front of the list. "FRONT" will be used for this purpose.

† Perhaps, since the "shifting" operation is such an unlikely choice when lists are being used, it shouldn't be presented to the user. It could be called forth only upon questioning or in unusual programming contexts.
(b) Use two pointers as illustrated:

[Diagram: a linked list with BEFORE pointing to the cell immediately preceding the position and AFTER pointing to the cell immediately following it.]

In using this representation, a special unique value for BEFORE must be used if the correct position is at the front of the list. "FRONT" will be used for this purpose. If the correct position is at the end of the list, AFTER will point to the empty list.

Which shall I use?

33. User: Please use two pointers. [This choice might be based either on looking ahead or on knowing at this point that the BEFORE pointer is useful for inserting into the list and the AFTER pointer is useful for incrementing the state of the position finder and for testing the correctness of a position.]

34. Machine: I need variable names for the two pointers. I will use BEFORE and AFTER.

We are about to work on the position finder as indicated:

```
transfer initializer:
transfer body:
  LOOP UNTIL DONE;
    ...
    constructor body:
      find the position for the new element;
      insert the new element into the position;
  REPEAT;
DONE: HALT.
```

35. Machine: I am now writing the program that finds the position. First we will select a search strategy for searching the output list to find the correct position. The only strategy which I know about for linked lists is a forward linear scan, so I will use that strategy.

36. Machine: Since the search strategy is a forward linear scan of a finite list, the search strategy is total; that is, all possible positions will be considered. Since the old output set is well-ordered, and since the ordering relation applies to all elements involved, one of these possible positions must be correct. Therefore, the search for a correct position must succeed, and no exit will be required for the case of not finding a correct position. So I will use a loop which consists only of a test and exit on the correctness of the position and an incrementer.
The incrementer will increment the state of the search according to the search strategy and specify the next position to be considered. I will also need to initialize the loop.

37. Machine: Since the search strategy is forward linear, the correctness test and the incrementer are independent.† This means that the next position specified by the incrementer does not depend on the results of the correctness test. This allows a simple non-branching incrementer.

† If the search strategy had been to use a binary chop, for example, then the next position to be tested would be based on the results of the test.

We are about to work on the indicated parts of the position finder:

```
transfer initializer:
transfer body:
  LOOP UNTIL DONE;
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element
          initialize position finder loop:
          LOOP UNTIL CORRECT POSITION FOUND:
            IF position is correct
            THEN CORRECT POSITION FOUND;
            increment state and specify position;
          REPEAT;
          CORRECT POSITION FOUND:
        insert the new element into the position;
  REPEAT;
DONE: HALT.
```

38. Machine: Now let's design the incrementer. First we must select a scheme for representing the state of the search. The incrementer will increment this state and will also specify the next position to be considered.

39. Machine: Since the search strategy is a forward linear scan of the linked list, and since the two-pointer position representation has been selected, the two pointers (BEFORE and AFTER), which are already being used to represent the position, are also sufficient to remember the state.†

40. Machine: An incrementer for a forward linear scan using the two-pointer position representation requires two pieces of code: one to reset BEFORE to the value of AFTER, and one to increment AFTER.
This is adequate both for the general case in the interior of a linked list, and in the special case at the front of a list, where BEFORE originally has the special value "FRONT" and is reset to point to the first cell in the list. The pieces will be put together in that order.

41. Machine: I am now writing the code to reset BEFORE to the value of AFTER. The code to do that is:

```
BEFORE = AFTER
```

42. Machine: I am now writing the code to increment AFTER. Since the representation is a linked list, the code to do that is:

```
AFTER = CDR(AFTER)
```

† The position memory and the state memory need not always be the same. For example, if only BEFORE were used to remember the position, a good choice of state memory would probably include a pointer playing the same role as AFTER.

43. Machine: Putting these two pieces together, we have the following incrementer for the position finder:

```
BEFORE = AFTER;
AFTER = CDR(AFTER)
```

44. Machine: I will also need a position initializer which initializes both the state of the position finder and the position representation to the first position according to the search strategy.

45. Machine: Since the position representation is also used to represent the state, we need only initialize the position representation.

46. Machine: Since the search strategy is forward linear scan, we must initialize the position representation to the position at the front of the list. The convention for representing the position at the front of the list is that BEFORE has the unique value "FRONT" and that AFTER points to the entire list.

47. Machine: The code to set BEFORE to "FRONT" is:

```
BEFORE = "FRONT"
```

48. Machine: The code to set AFTER to the entire list is:

```
AFTER = OUTPUT
```

49.
Machine: Putting these two pieces together, the initializer for the position finder is:

```
BEFORE = "FRONT";
AFTER = OUTPUT
```

We have just finished the indicated parts of the position finder:

```
transfer initializer:
  ...
transfer body:
  LOOP UNTIL DONE;
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element
          initialize position finder loop
          ⇒ BEFORE = "FRONT";
            AFTER = OUTPUT;
          LOOP UNTIL CORRECT POSITION FOUND;
            IF position is correct
            THEN CORRECT POSITION FOUND;
            increment state and specify position
            ⇒ BEFORE = AFTER;
              AFTER = CDR(AFTER);
          REPEAT;
          CORRECT POSITION FOUND:
        insert the new element into the position;
  REPEAT;
DONE: HALT.
```

50. Machine: I will now work on the exit test for the correct position. I will first try to infer the necessary pieces of this test.

51. Machine: Since the set is ordered, it follows from transitivity that the new element need not be compared against all of the elements of the set, but merely against the elements immediately adjacent to the position being considered.

52. Machine: Since the search strategy is forward linear scan, at any point in the search, all of the elements occurring before the current position in the list have already been tested and found to be less than or equal to the new element. This holds for all positions, including the initial position, in which no elements occur before the position in the list.

53. Machine: Thus, the test for correctness of position need only include a test that all elements occurring after the current position in the list are greater than the new element.

54. Machine: To test that all elements occurring after the current position are greater than the new element, the program must first test whether there are no elements occurring after the current position. Then, if there are such elements, it must test whether all such elements are greater than the new element.

55.
Machine: A test that there are no elements occurring after the current position is simply a test whether the list pointed to by the second pointer (AFTER) is empty:

```
AFTER = NIL ?
```

56. Machine: As noted previously, the correctness test need only consider the immediately adjacent elements, so that a test that all elements occurring after the current position are greater than the new element is simply a test that the first element occurring after the current position is greater than the new element. To write such a test I need code to extract the first element from the list pointed to by AFTER and a test whether this element is greater than the new element.

57. Machine: The code to extract the first element from a list is:

```
CAR(AFTER)
```

58. Machine: The code to test whether this element is greater than the new element is:

```
CAR(AFTER) > ELEMENT ?
```

59. Machine: Given the two partial tests, the complete correctness test is an ordered OR† of the two partial tests:

```
AFTER = NIL OR CAR(AFTER) > ELEMENT ?
```

† By "ordered OR" we mean that the second argument is evaluated only if the first argument is false.

The indicated parts complete the position finder:

```
transfer initializer:
  ...
transfer body:
  LOOP UNTIL DONE:
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element
          initialize position finder loop
          ⇒ BEFORE = "FRONT";
            AFTER = OUTPUT;
          LOOP UNTIL CORRECT POSITION FOUND:
            IF position is correct
            ⇒ no elements occur after position
              ⇒ AFTER = NIL
              OR next element is greater
              ⇒ CAR(AFTER) > ELEMENT
            THEN CORRECT POSITION FOUND;
            increment state and specify position
            ⇒ BEFORE = AFTER;
              AFTER = CDR(AFTER);
          REPEAT;
          CORRECT POSITION FOUND:
        insert the new element into the position;
  REPEAT;
DONE: HALT.
```

60. Machine: Having completed the position finder, we must now write the list
inserter which takes a new element, a position represented by the two-pointer schema, and the output set, and produces a new output set with the new element in the indicated position. The method of inserting an element into a list depends on whether or not the indicated position is at the front of the list. I will need to write a test to separate the "front of the list" case from the "anywhere else" case, and use a different method for each case.

We are about to work on the indicated parts of the list inserter:

```
transfer initializer:
  ...
transfer body:
  LOOP UNTIL DONE:
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element;
        insert the new element into the position
        ⇒ IF position is at front
          THEN add element onto front
          ELSE insert into interior
  ...
```

61. Machine: A test for being at the front of the list is a test whether the value of the first pointer is "FRONT". The code for this is:

```
BEFORE = "FRONT" ?
```

62. Machine: The addition of the new element in the "front of the list" case entails no replacement into the list, but instead an addition to the front of the list. The code to do this is:

```
OUTPUT = CONS(ELEMENT OUTPUT)
```

63. Machine: The addition of the new element anywhere else in the list requires a replacement into a cell in the list.† The code to do this is:

```
CDR(BEFORE) = CONS(ELEMENT AFTER)
```

The marked parts complete the list inserter:

```
transfer initializer:
  ...
transfer body:
  LOOP UNTIL DONE:
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element;
        insert the new element into the position
        ⇒ IF position is at front
          ⇒ BEFORE = "FRONT"
          THEN add element onto front
          ⇒ OUTPUT = CONS(ELEMENT OUTPUT)
          ELSE insert into interior
          ⇒ CDR(BEFORE) = CONS(ELEMENT AFTER);
  REPEAT;
DONE: HALT.
```

64. Machine: Having completed the list inserter (and thus the constructor body), we must now write the constructor initializer.
Since we are writing an inductive constructor that must begin with an empty output list, we simply set the output list to the empty list:

```
OUTPUT = NIL
```

† LISP programmers will recognize this as RPLACD.

The initializer completes the constructor:

```
transfer initializer:
  ...
  constructor initializer
  ⇒ OUTPUT = NIL;
transfer body:
  LOOP UNTIL DONE;
    ...
    constructor body:
      add the element to output
      ⇒ find the position for the new element
          initialize position finder loop
          ⇒ BEFORE = "FRONT";
            AFTER = OUTPUT;
          LOOP UNTIL CORRECT POSITION FOUND;
            IF position is correct
            ⇒ no elements occur after position
              ⇒ AFTER = NIL
              OR next element is greater
              ⇒ CAR(AFTER) > ELEMENT
            THEN CORRECT POSITION FOUND;
            increment state and specify position
            ⇒ BEFORE = AFTER;
              AFTER = CDR(AFTER);
          REPEAT;
          CORRECT POSITION FOUND:
        insert the new element into the position
        ⇒ IF position is at front
          ⇒ BEFORE = "FRONT"
          THEN add element onto front
          ⇒ OUTPUT = CONS(ELEMENT OUTPUT)
          ELSE insert into interior
          ⇒ CDR(BEFORE) = CONS(ELEMENT AFTER);
  REPEAT;
DONE: HALT.
```

65. Machine: We have now finished the constructor.
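The finished constructor can be sketched in Python on an explicit linked-list representation. This is our rendering, not the paper's LISP: cells are mutable two-slot lists (`[head, tail]`), `NIL` is `None`, and `FRONT` is a sentinel value, all assumptions made for the sketch.

```python
# Sketch of the completed constructor body: the BEFORE/AFTER position
# finder of steps 35-59 followed by the list inserter of steps 60-63.
NIL = None
FRONT = "FRONT"

def cons(x, l): return [x, l]
def car(l): return l[0]
def cdr(l): return l[1]

def add_to_output(output, element):
    before, after = FRONT, output            # position initializer
    # correctness test: Python's `or` short-circuits, matching the
    # "ordered OR" of step 59 (CAR is taken only when AFTER is non-empty)
    while not (after is NIL or car(after) > element):
        before, after = after, cdr(after)    # incrementer
    if before is FRONT:                      # front case: CONS onto front
        return cons(element, output)
    before[1] = cons(element, after)         # interior case: RPLACD
    return output
```

The interior case mutates the tail of BEFORE in place, which is why the cells must be mutable; the front case is the only one that returns a new head cell.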
PART 4: Completing the Program

The final program:

```
transfer initializer:
  selector initializer:
    generator initializer:
    ⇒ REST = INPUT;
  constructor initializer:
  ⇒ OUTPUT = NIL;
transfer body:
  LOOP UNTIL DONE;
    transfer termination test:
      IF all elements have been transferred
      ⇒ all elements have been selected
      ⇒ all elements have been generated
      ⇒ no elements remain
      ⇒ REST = NIL
      THEN DONE;
    selector body:
      select the next element from input
      ⇒ generate the next element from input
          produce element
          ⇒ ELEMENT = CAR(REST);
          increment state of generator
          ⇒ REST = CDR(REST);
    constructor body:
      add the element to output
      ⇒ find the position for the new element
          initialize position finder loop
          ⇒ BEFORE = "FRONT";
            AFTER = OUTPUT;
          LOOP UNTIL CORRECT POSITION FOUND;
            IF position is correct
            ⇒ no elements occur after position
              ⇒ AFTER = NIL
              OR next element is greater
              ⇒ CAR(AFTER) > ELEMENT
            THEN CORRECT POSITION FOUND;
            increment state and specify position
            ⇒ BEFORE = AFTER;
              AFTER = CDR(AFTER);
          REPEAT;
          CORRECT POSITION FOUND:
        insert the new element into the position
        ⇒ IF position is at front
          ⇒ BEFORE = "FRONT"
          THEN add element onto front
          ⇒ OUTPUT = CONS(ELEMENT OUTPUT)
          ELSE insert into interior
          ⇒ CDR(BEFORE) = CONS(ELEMENT AFTER);
  REPEAT;
DONE: HALT.
```

66. Machine: Having completed the pieces of the selector and the pieces of the constructor, I can assemble the pieces into the final program using a generate and process paradigm with the termination test before the generator.

67.
Machine: The final program, without any meta-code or comments, and its translation into LISP are given below:

```
REST = INPUT;
OUTPUT = NIL;
LOOP UNTIL DONE;
    IF REST = NIL THEN DONE;
    ELEMENT = CAR(REST);
    REST = CDR(REST);
    BEFORE = "FRONT";
    AFTER = OUTPUT;
    LOOP UNTIL CORRECT POSITION FOUND;
        IF AFTER = NIL OR CAR(AFTER) > ELEMENT THEN CORRECT POSITION FOUND;
        BEFORE = AFTER;
        AFTER = CDR(AFTER);
    REPEAT;
    CORRECT POSITION FOUND:
    IF BEFORE = "FRONT"
        THEN OUTPUT = CONS(ELEMENT OUTPUT)
        ELSE CDR(BEFORE) = CONS(ELEMENT AFTER);
REPEAT;
DONE: HALT.
```

```lisp
(LAMBDA NIL
  (PROG NIL
        (SETQ OUTPUT NIL)
        (SETQ REST INPUT)
    L1  (COND ((NULL REST) (GO L2)))
        (SETQ ELEMENT (CAR REST))
        (SETQ REST (CDR REST))
        (SETQ BEFORE "FRONT")
        (SETQ AFTER OUTPUT)
    L3  (COND ((OR (NULL AFTER) (GREATERP (CAR AFTER) ELEMENT))
               (GO L4)))
        (SETQ BEFORE AFTER)
        (SETQ AFTER (CDR AFTER))
        (GO L3)
    L4  (COND ((EQUAL BEFORE "FRONT")
               (SETQ OUTPUT (CONS ELEMENT OUTPUT)))
              (T (RPLACD BEFORE (CONS ELEMENT AFTER))))
        (GO L1)
    L2  (RETURN NIL)))
```

III. TYPES OF PROGRAMMING KNOWLEDGE

On reviewing the dialogue, we can see that there are several types of knowledge involved. We first note that there is significant use of a kind of strategy or planning knowledge. On one level, we see this in steps 9 and 14, where the system discusses what must be done to write a transfer program. In step 9, for example, sub-steps 3 and 4, where the transfer order and the transfer termination method are chosen, are really a kind of strategy for determining the form that the basic algorithm will take. On a different level, we see a kind of global optimization in steps 21 and 39, where the system decides that information structures designed for one purpose are sufficient for another.
In step 21, for example, the pointer originally chosen to save the state of the selector (by marking the dividing point between those elements generated and those not yet generated) is found to be adequate for the purpose of indicating the next element to be generated. One could imagine, as an alternative to this type of planning, the use of more conventional local optimization such as post-synthesis removal or combination of redundant portions. We also see that the system makes considerable use of inference and simplification knowledge. Inference plays a role in the global optimization planning mentioned above, and also appears in steps 16 and 28, where the selection and construction re-orderings are determined. Simplification and inference are both apparent in steps 50 through 56, where the test for the correctness of the position was reduced to a simple test on the variable AFTER. Simplification and inference are also needed in step 36, where the system decides that an error exit (for the case of no position being found) is unnecessary. Additionally, there are types of knowledge which are spread throughout the dialogue. Relatively domain-specific knowledge (in this case, about sorting) is particularly necessary in the earlier stages. Language-specific knowledge (in this case, about LISP) is necessary when the final code is being generated. General programming knowledge, such as knowledge about set enumeration and linked lists, is necessary throughout the synthesis process. Further, one could imagine significant use of efficiency information, although it is not present in our particular dialogue. The variety of types and amounts of knowledge used in the dialogue would tend to indicate that much more information is required for automatic synthesis of sorting programs than appeared in earlier, computer-implemented, systems for writing sort programs [3, 7, 11].
Ruth has developed a formulation of the knowledge involved in interchange and bubble sort programs [9]. His formulation is aimed primarily at the analysis of simple student programs in an instructional environment, and the analysis task as defined does not seem to require the same depth and generality of knowledge suggested by our dialogue. Our intuition is that a significantly greater depth of programming knowledge would be required to extend his formulation to a larger class of programs. It is also interesting to compare the information involved in our dialogue to that found in non-implemented (and not intended for machine implementation) human-oriented guides for sort-algorithm selection and in textbooks on sorting. Martin [8] gives methods for selecting a good algorithm for a particular sorting problem. Those algorithms are much more powerful than those we deal with, and their derivation would require considerably more information. We note that at the level of algorithm description presented, little explicit information is available to allow pieces of algorithms to be fitted together or to allow slight modification of existing algorithms. A sorting textbook such as [5] gives several orders of magnitude more information on sorting than is required for our dialogue. Can we measure or estimate in some way how much knowledge is necessary for program-understanding systems? The fact that the dialogue describing the synthesis took some seventy steps (with some of the steps rather complex) is an indication that considerable information is involved. From our experiments, we estimate that about one or two hundred explicitly stated "facts" or rules would get a synthesis system through the underlying steps of this dialogue. Furthermore, it is our guess that at least this much knowledge density will be required for other similar tasks, in order to have the flexibility necessary for the many aspects of program understanding.
Although we are suggesting that such information must be effectively available in some form to a system, we are not in a position to estimate how much of this information should be stated explicitly (as, say, rules), how much should be derivable (from, say, meta-rules), how much should be learned from experience, or available in any other fashion. IV. Summary and Conclusions In this paper we have tried to exemplify and specify the knowledge appropriate for a program-understanding system which can synthesize small programs, by presenting a dialogue between a hypothetical version of such a system and a user. Our conjecture is that unless a system is capable of exceeding the reasoning power, and even some of the communication abilities, exemplified by the dialogue, the system will not effectively "understand" what it is doing well enough to synthesize, analyze, modify, and debug programs. It appears that a system which attempts to meet this standard must have large amounts of many different kinds of knowledge. Most such programming knowledge remains to be codified into some form of machine implementable theory. In fact, the codification of such knowledge is one of the main research problems in program-understanding systems. As for our own work, in the near future we expect to refine our experimental system until it approaches (as closely as seems useful and possible) the standard suggested by our dialogue (but without the actual language interface). We hope then to extend the system to deal with several different types of sorting programs. Perhaps then we will be in a better position to estimate the requirements of larger program-understanding systems. V. ACKNOWLEDGEMENTS The authors gratefully acknowledge the helpful suggestions given by Avra J. Cohn, Brian P. McCune, Richard J. Waldinger, and Elaine Kant after numerous readings of earlier drafts of this paper. 
Computer time for our preliminary tests was made available by the Artificial Intelligence Center of the Stanford Research Institute. VI. REFERENCES
Copyright

Copyright © 2003 BEA Systems, Inc. All Rights Reserved.

Restricted Rights Legend

This software and documentation is subject to and made available only pursuant to the terms of the BEA Systems License Agreement and may be used or copied only in accordance with the terms of that agreement. It is against the law to copy the software except as specifically allowed in the agreement. This document may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine readable form without prior consent, in writing, from BEA Systems, Inc.

Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the BEA Systems License Agreement and in subparagraph (c)(1) of the Commercial Computer Software-Restricted Rights Clause at FAR 52.227-19; subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013; subparagraph (d) of the Commercial Computer Software--Licensing clause at NASA FAR supplement 16-52.227-86; or their equivalent.

Information in this document is subject to change without notice and does not represent a commitment on the part of BEA Systems. THE SOFTWARE AND DOCUMENTATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. FURTHER, BEA Systems DOES NOT WARRANT, GUARANTEE, OR MAKE ANY REPRESENTATIONS REGARDING THE USE, OR THE RESULTS OF THE USE, OF THE SOFTWARE OR WRITTEN MATERIAL IN TERMS OF CORRECTNESS, ACCURACY, RELIABILITY, OR OTHERWISE.

Trademarks or Service Marks

All other trademarks are the property of their respective companies.

Contents

- About This Document
  - What You Need to Know
  - e-docs Web Site
  - How to Print the Document
  - Related Information
  - Contact Us!
  - Documentation Conventions
- Getting Started with Liquid Data
  - From Multiple Data Sources to a Web Application in 60 Minutes (or Less)
  - Step 1. Start the Liquid Data Samples Server
    - Windows
    - UNIX
  - Step 2. Start the WebLogic Administration Console
    - Starting the Console
    - Windows
    - UNIX
    - Logging Into the Administration Console
  - Step 3. Configure Data Sources
    - Configuring a Relational Data Source Description
    - Configure an XML File Data Source Description
  - Step 4. Start Data View Builder and Verify Data Sources
  - Step 5. Construct Your Query
    - View a Demo
    - Open Relevant Data Source Schemas
    - Select a Target Schema From the Sample Liquid Data Repository
    - Map Source Elements to the Target Schema
    - Define Query Conditions
    - Create a Join Between Wireless : CUSTOMER and Wireless : CUSTOMER_ORDER
    - Create a Join between Wireless : CUSTOMER_ID and Broadband : CUSTOMER_ID
    - Create a Query Parameter
    - Set the Target Namespace
  - Step 6. View and Test Your Query
    - Entering Test Mode
    - The Automatically Generated Query
    - Testing Your Query
    - Running Your Query
    - Summary
  - Step 7. Save Your Liquid Data Project
  - Step 8. Deploy Your Query
  - Step 9. Creating a Liquid Data Control in WebLogic Workshop
    - Creating a Liquid Data Control
  - Step 10. Creating a Liquid Data-Powered Web Application
    - Generating a Page Flow File
    - Obtaining Preliminary Query Results
    - Modifying Page Flow Source
    - Adding Data to the Application
    - Adding Field Labels
    - Adding Data Fields
    - Add Repeater Fields For Order Arrays

About This Document

This document provides quick start steps that explain how to start the BEA Liquid Data for WebLogic server, bring up the WebLogic Server Administration Console and use it to configure some data sources, use the Data View Builder to map source and target schemas and construct a query, and run the query against the server.

What You Need to Know

This document is intended mainly for business users/developers and XML data designers who will be using Liquid Data to design distributed data solutions and construct queries. System Administrators will also find the initial steps useful as a snapshot example of the configuration process.

e-docs Web Site

BEA product documentation is available on the BEA corporate Web site. From the BEA Home page, click on Product Documentation or go directly to the "e-docs" Product Documentation page at http://e-docs.bea.com.

How to Print the Document

You can print a copy of this document from a Web browser, one file at a time, by using the File—>Print option on your Web browser. A PDF version of this document is available on the Liquid Data documentation Home page on the e-docs Web site (and also on the documentation CD). You can open the PDF in Adobe Acrobat Reader and print the entire document (or a portion of it) in book format. To access the PDFs, open the Liquid Data documentation Home page, click the PDF files button and select the document you want to print. If you do not have the Adobe Acrobat Reader, you can get it for free from the Adobe Web site at http://www.adobe.com/.

Related Information

For more information in general about Java and XQuery, refer to the following sources.

- The Sun Microsystems, Inc.
Java site at: http://java.sun.com/ - The World Wide Web Consortium XML Query section at: http://www.w3.org/XML/Query For more information about BEA products, refer to the BEA documentation site at: http://edocs.bea.com/ Contact Us! Your feedback on the BEA Liquid Data documentation is important to us. Send us e-mail at docsupport@bea.com if you have questions or comments. Your comments will be reviewed directly by the BEA professionals who create and update the Liquid Data documentation. In your e-mail message, please indicate that you are using the documentation for the BEA Liquid Data for WebLogic 1.0 release. If you have any questions about this version of Liquid Data, or if you have problems installing and running Liquid Data, contact BEA Customer Support through BEA WebSupport at www.bea.com. You can also contact Customer Support by using the contact information provided on the Customer Support Card, which is included in the product package. When contacting Customer Support, be prepared to provide the following information: - Your name, e-mail address, phone number, and fax number - Your company name and company address - Your machine type and authorization codes - The name and version of the product you are using - A description of the problem and the content of pertinent error messages # Documentation Conventions The following documentation conventions are used throughout this document. <table> <thead> <tr> <th>Convention</th> <th>Item</th> </tr> </thead> <tbody> <tr> <td><strong>boldface text</strong></td> <td>Indicates terms defined in the glossary.</td> </tr> <tr> <td>Ctrl+Tab</td> <td>Indicates that you must press two or more keys simultaneously.</td> </tr> <tr> <td><em>italics</em></td> <td>Indicates emphasis or book titles.</td> </tr> <tr> <td><strong>monospace text</strong></td> <td>Indicates code samples, commands and their options, data structures and their members, data types, directories, and file names and their extensions. 
Monospace text also indicates text that you must enter from the keyboard.</td> </tr> <tr> <td><em>monospace italic text</em></td> <td>Identifies significant words in code.</td> </tr> <tr> <td><strong>monospace boldface text</strong></td> <td>Identifies variables in code.</td> </tr> <tr> <td><strong>UPPERCASE TEXT</strong></td> <td>Indicates device names, environment variables, and logical operators.</td> </tr> </tbody> </table>

Examples:

- `#include <iostream.h>`
- `void main ( )`
- `chmod u+w *`
- `\tux\data\ap`
- `.doc`
- `tux.doc`
- `BITMAP`
- `float`
- `String expr`
- `LPT1`
- `SIGNON`
- `OR`

<table> <thead> <tr> <th>Convention</th> <th>Item</th> </tr> </thead> <tbody> <tr> <td>{ }</td> <td>Indicates a set of choices in a syntax line. The braces themselves should never be typed.</td> </tr> <tr> <td>[ ]</td> <td>Indicates optional items in a syntax line. The brackets themselves should never be typed. <em>Example:</em> <code>buildobjclient [-v] [-o name ] [-f file-list]... [-l file-list]...</code></td> </tr> <tr> <td>|</td> <td>Separates mutually exclusive choices in a syntax line. The symbol itself should never be typed.</td> </tr> <tr> <td>...</td> <td>Indicates one of the following in a command line: that an argument can be repeated several times in the command line; that the statement omits additional optional arguments; or that you can enter additional parameters, values, or other information. The ellipsis itself should never be typed. <em>Example:</em> <code>buildobjclient [-v] [-o name] [-f file-list]... [-l file-list]...</code></td> </tr> <tr> <td>. . .</td> <td>Indicates the omission of items from a code example or from a syntax line. The vertical ellipsis itself should never be typed.</td> </tr> </tbody> </table>
| Getting Started with Liquid Data You can use this 10-step tutorial guide to learn how to: - Configure data sources in Liquid Data - Associate data sources with a target schema - Generate and run queries in the Data View Builder - Build a BEA WebLogic Workshop application around Liquid Data queries - Display formatted query results as part of a web application **From Multiple Data Sources to a Web Application in 60 Minutes (or Less)** This chapter illustrates how you can use Liquid Data to create a query that treats multiple disparate data sources as a single data source. It assumes a large Internet company that has grown through mergers or acquisitions and, as a consequence, finds that its real-time order data must be accessed from two very separate data sources: Wireless order data from an RDBMS system; BroadBand order data via an XML document. As you work through this Order Report example, keep in mind that each action you take generates source code, in some cases significant amounts. When you are done, you will find that your application contains nearly 500 lines of Java code, 97% of which was automatically generated. (For details see “Generated Files and Code” on page 54.) Getting Started contains numerous illustrations in the form of screen captures. The goal is to enable you to create a query accessing multiple data sources and use that query when building and running a simple but complete web application — in less than 60 minutes! The 10 steps in building up the Order Report web application are: - Step 1. Start the Liquid Data Samples Server - Step 2. Start the WebLogic Administration Console - Step 3. Configure Data Sources - Step 4. Start Data View Builder and Verify Data Sources - Step 5. Construct Your Query (View a Demo) - Step 6. View and Test Your Query - Step 7. Save Your Liquid Data Project - Step 8. Deploy Your Query - Step 9. Creating a Liquid Data Control in WebLogic Workshop - Step 10. 
Creating a Liquid Data-Powered Web Application

Example directories

Examples used in this chapter assume that the Liquid Data 8.1 sample domain is located at: `<WL_HOME>/samples/domains/liquiddata/` where `WL_HOME` is the root location of the WebLogic Server 8.1. You can find many other examples and samples in *Liquid Data by Example*, including the Avitek Customer Self-Service Sample Application, which also combines the back-end data retrieval capabilities of Liquid Data with WebLogic Workshop to create a more fully-functional customer self-service web application.

**Step 1. Start the Liquid Data Samples Server**

The Liquid Data Samples server is simply a WebLogic Server 8.1 that contains Liquid Data samples, including a Liquid Data repository. It can be started from Windows or UNIX.

**Windows**

To start the Liquid Data samples server under Windows choose the following menu option:

Start —— Programs —— BEA WebLogic Platform 8.1 —— BEA Liquid Data for WebLogic 8.1 SP2 —— Liquid Data Samples —— Samples Server

A terminal window called Samples Server will appear. When the Samples server is ready, you will see Server started in RUNNING mode.

**UNIX**

To start the Liquid Data samples server under UNIX run the startWeblogic.sh shell command in the following domain:

<WL_HOME>/samples/domains/liquiddata/startWeblogic.sh

When the Samples server is ready you will see Server started in RUNNING mode.

Note: For more information on setting up and starting the Samples server, see Post-Installation Tasks in the Liquid Data Installation Guide. For more information on starting Liquid Data servers in the various preconfigured domains, see Starting and Stopping the Server in the Liquid Data Administration Guide.

**Step 2. Start the WebLogic Administration Console**

The Liquid Data repository is managed and configured through the Liquid Data node of the WebLogic Administration Console.
It is here that data sources are defined, security policies established, and so forth.

Starting the Console

The WebLogic Administration Console can be accessed from Windows or UNIX.

**Windows**

To start the WebLogic Administration Console under Windows choose the following menu option:

Start —— Programs —— BEA WebLogic Platform 8.1 —— BEA Liquid Data for WebLogic 8.1 SP2 —— Liquid Data Samples —— Admin Console

**UNIX**

To start the WebLogic Administration Console for the Liquid Data Samples server running on your local machine, enter the following URL in a web browser:

http://localhost:7001/console

Logging Into the Administration Console

Log in to the console by providing the following default username and password for the Samples server:

<table> <thead> <tr> <th>Field</th> <th>Defaults</th> </tr> </thead> <tbody> <tr> <td>Username</td> <td>system</td> </tr> <tr> <td>Password</td> <td>security</td> </tr> </tbody> </table>

After you log in, the Console home page is displayed. Note that Liquid Data is the bottom element in the left pane (Figure 2).

Figure 2 WebLogic Administration Console

Note: For more information on using the Liquid Data node of the WebLogic Administration Console, see the Liquid Data Administration Guide.

**Step 3. Configure Data Sources**

This section describes how to use the WebLogic Administration Console to configure the two sample data sources used in the tutorial customer order report:

- A relational database data source description for customer and order information.
- An XML data source description for order information.

Configuring a Relational Data Source Description

A first step in making a data source available to Liquid Data is to register your data source with the Administration Console. To add a description of your RDBMS data source to the Liquid Data Server registry follow these steps:

1. In the left pane, click the Liquid Data node (bottom icon in the left pane; see Figure 2).
2.
In the right pane, click the Data Sources tab (under Configuration).
3. Click the Relational Databases tab.
4. Click the Configure a New Relational Database Data Source Description link to display the Configure tab (Figure 4).

Figure 3 Creating a New Relational Database Data Source

5. Fill in the fields for your data source as described in Table 2:
6. Click Create.

If you look in the list of relational database sources, you will find your newly created data source definition. (For additional information see Configuring Access to Relational Databases in the Liquid Data Administration Guide.)

Configure an XML File Data Source Description

To create a description of your XML data source, follow these steps:

1. In the left pane, click the Liquid Data node.
2. Click the XML Files tab (next to Relational Databases).
3. Click the Configure a New XML Data Source Description link to access the Configure dialog box (Figure 5).

Figure 5 Defining an XML Data Source Definition

4. Fill in the fields for your XML data source as described in Table 3.

Table 3 Liquid Data Data Source Description for My-PB-WL Data Source

<table> <thead> <tr> <th>Field</th> <th>Value to Enter</th> </tr> </thead> <tbody> <tr> <td>Name</td> <td>MyBroadBand-LD-DS</td> </tr> <tr> <td></td> <td>This is the name of the data source description used to register the XML data source with the Liquid Data server.</td> </tr> <tr> <td>Data File</td> <td>b-co.xml</td> </tr> <tr> <td></td> <td>This is the XML data file name. It must correspond to an XML data file located in the Liquid Data server repository. (The example file name provided above resides in the Samples server repository.)</td> </tr> <tr> <td>Schema File</td> <td>b-co.xsd</td> </tr> <tr> <td></td> <td>Schema associated with the XML file. The schema you specify here must be in the Liquid Data server repository.</td> </tr> <tr> <td>Namespace URI</td> <td>Not needed for this basic example.</td> </tr> </tbody> </table>

5.
Click Create. If you look in the list of XML data sources, you will find your newly created data source definition. (For additional information see Configuring Access to XML Files in the Liquid Data Administration Guide.)

Step 4. Start Data View Builder and Verify Data Sources

Next, you are ready to start the Windows-based Data View Builder. This is where you will create and test your query. You can start the Data View Builder in either of two ways:

1. Choose: Start —> Programs —> BEA WebLogic Platform 8.1 —> BEA Liquid Data for WebLogic 8.1 SP2 —> Data View Builder
2. Or start the Data View Builder by double-clicking on the file: <WL_HOME>\liquiddata\DataViewBuilder\bin\DVBuilder.cmd

If you are not already connected to the Liquid Data Samples server, enter the following address in the Server URL field in the Welcome... dialog box (Figure 6): t3://localhost:7001 No username or password is needed; leave those fields blank.

Figure 6 Data View Builder Login Dialog

3. Click the Login button. The Data View Builder workspace and tools appear. You can click the buttons on the left Navigation panel to view the various data sources available from the Liquid Data server to which the Data View Builder is connected.
4. Once configured, your data sources are immediately available. In the Sources area:
- Click Relational Databases to view available RDBMS data sources. This list should include your new relational database data source: MyWireless-LD-DS
- Click XML Files to view available XML data sources. This list should include your new XML data source: MyBroadBand-LD-DS

Step 5. Construct Your Query

The Data View Builder allows you to build queries (and data views) through graphical representations of your data and any query conditions. You first define your query by dragging and dropping (mapping) elements from source to target schemas. Constants, parameters, and functions are available to fully express a query statement. The customer order report query you are building assumes that you have customer orders stored in two data sources (Wireless in a relational database and BroadBand in an XML file). The query will return the order information for any particular customer. To construct the example customer order report query, you need to:

- Open Relevant Data Source Schemas
- Select a Target Schema From the Sample Liquid Data Repository
- Map Source Elements to the Target Schema
- Define Query Conditions
- Create a Query Parameter
- View a Demo

Constructing the “Order Query” Demo... If you are looking at this documentation online, you can click the Demo button to view an animated demonstration of how to build up the conditions and create mappings similar to those in this example. With minor variations, the Order Query demo previews Steps 1-7 of this chapter.

Open Relevant Data Source Schemas

To start building a query, first select the schemas for the data sources you need.

1. Open the data source schemas from the Builder toolbar as follows:
- In the XML Files area double-click on MyBroadBand-LD-DS to move the associated schema into your work area.
- Click Relational Databases and double-click on MyWireless-LD-DS to open the associated schema.

The schemas for each of the data sources appear. Note: Move data sources around on the workspace as needed.

Select a Target Schema From the Sample Liquid Data Repository

A target schema represents the structure of your query results. (For additional information on target schemas see “Understanding Target Schemas” in the Schemas and Namespaces in Liquid Data chapter of Building Queries and Data Views.) For this example, use customerOrderReport.xsd as your target schema. To add the target schema:

1. Choose the menu command File —> Set Target Schema. This brings up the Liquid Data file browser.
2. Scroll to the Liquid Data Repository, the topmost item in the scroll list.

**Figure 8 Selecting a Target Schema from the Liquid Data Repository**

3.
Select `customerOrderReport.xsd` and click Open to set this as your target schema. 4. In the target schema panel of the Data View Builder, right-click on the topmost node of the target schema (CustomerOrderReport) and choose Expand to display all the schema elements (Figure 8). **Note:** Schemas are located in the following Liquid Data repository directory: ```<WL_HOME>/samples/domains/liquiddata/ldrepository/schemas``` At this point, your work area contains two data source schemas and a target schema.

**Map Source Elements to the Target Schema**

The Data View Builder generates queries based on your mapping of source elements to your target schema. The CustomerOrderReport target schema includes a CUSTOMER_ORDER node for both Wireless and BroadBand. Map CUSTOMER_ORDER elements from each data source to the corresponding CUSTOMER_ORDER elements in the target schema as follows:

- Drag and drop all elements in the Wireless data source CUSTOMER_ORDER into the corresponding elements in the target schema's wireless_orders : CUSTOMER_ORDER node (Figure 10).
- Drag and drop all elements contained in the BroadBand data source CUSTOMER_ORDER into the corresponding elements in the target schema's broadband_orders : CUSTOMER_ORDER node (Figure 11).

Figure 10 Mapping Elements in Wireless : CUSTOMER_ORDER to the Target Schema

In both cases, you mapped the sub-elements ORDER_DATE, ORDER_ID, CUSTOMER_ID, SHIP_METHOD, and TOTAL_ORDER_AMOUNT to corresponding elements in your target schema.

**Note:** As you map elements from source to target, descriptions appear in the lower part of the workspace (Figure 13). If you need to delete an incorrect mapping, click in the Mapping ID field to highlight the row you wish to delete. Then click the Trash icon on your lower left.

Use the Wireless data source schema to populate the CUSTOMER element in the target schema.
- Drag and drop the sub-elements contained in the Wireless data source CUSTOMER into the corresponding sub-elements in the target schema CUSTOMER node.

As you can see, not all elements in a source schema need to be mapped to the target. At this point, you have completed the source-to-target mappings and your target schema is ready. The 16 source-to-target mappings you created appear on the Mappings tab (Figure 13).

Getting Started with Liquid Data

Figure 13 Complete List of Source-to-Target Mappings

<table> <thead> <tr> <th>Source</th> <th>Target</th> </tr> </thead> <tbody> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/ORDER_DATE</td> <td>customerOrder/wireless_orders/CUSTOMER_ORDER/ORDER_DATE</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/ORDER_ID</td> <td>customerOrder/wireless_orders/CUSTOMER_ORDER/ORDER_ID</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> <td>customerOrder/wireless_orders/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/SHIP_METHOD</td> <td>customerOrder/wireless_orders/CUSTOMER_ORDER/SHIP_METHOD</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/TOTAL_ORDER_AMOUNT</td> <td>customerOrder/wireless_orders/CUSTOMER_ORDER/TOTAL_ORDER_AMOUNT</td> </tr> <tr> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/ORDER_DATE</td> <td>customerOrder/broadband_orders/CUSTOMER_ORDER/ORDER_DATE</td> </tr> <tr> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/ORDER_ID</td> <td>customerOrder/broadband_orders/CUSTOMER_ORDER/ORDER_ID</td> </tr> <tr> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> <td>customerOrder/broadband_orders/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> <tr> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/SHIP_METHOD</td> <td>customerOrder/broadband_orders/CUSTOMER_ORDER/SHIP_METHOD</td> </tr> <tr> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/TOTAL_ORDER_AMOUNT</td> <td>customerOrder/broadband_orders/CUSTOMER_ORDER/TOTAL_ORDER_AMOUNT</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/FIRST_NAME</td> <td>customerOrder/CUSTOMER/FIRST_NAME</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/LAST_NAME</td> <td>customerOrder/CUSTOMER/LAST_NAME</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID</td> <td>customerOrder/CUSTOMER/CUSTOMER_ID</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/STATE</td> <td>customerOrder/CUSTOMER/STATE</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/EMAIL_ADDRESS</td> <td>customerOrder/CUSTOMER/EMAIL_ADDRESS</td> </tr> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/TELEPHONE_NUMBER</td> <td>customerOrder/CUSTOMER/TELEPHONE_NUMBER</td> </tr> </tbody> </table>

Define Query Conditions

The purpose of the query is to retrieve BroadBand and Wireless transactions for a specified customer. The following sections describe how to define conditions for the project you are building:
- Create a Join Between Wireless : CUSTOMER and Wireless : CUSTOMER_ORDER
- Create a Join Between Wireless : CUSTOMER_ID and BroadBand : CUSTOMER_ID
- Create a Query Parameter

Create a Join Between Wireless : CUSTOMER and Wireless : CUSTOMER_ORDER

In the Wireless data source, drag and drop the element CUSTOMER : CUSTOMER_ID onto CUSTOMER_ORDER : CUSTOMER_ID to create a join between CUSTOMER and CUSTOMER_ORDER:

<table> <thead> <tr> <th>Join Element</th> <th>Join Element</th> </tr> </thead> <tbody> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID</td> <td>[myWireless-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> </tbody> </table>

The syntax of this join appears on the first row of the Conditions tab. Note: Whenever you create a query condition, the Conditions tab is automatically displayed in the lower part of the workspace (Figure 14).
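A join condition like the one above is simply an equality test between two elements: a CUSTOMER record pairs with a CUSTOMER_ORDER record only when both carry the same CUSTOMER_ID. The following is a rough illustration of that idea as a hypothetical, self-contained Java sketch over made-up in-memory records; it is not Liquid Data API code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in data: each order is a {CUSTOMER_ID, ORDER_ID} pair.
public class JoinSketch {

    // The condition CUSTOMER/CUSTOMER_ID eq CUSTOMER_ORDER/CUSTOMER_ID keeps
    // only the orders whose CUSTOMER_ID matches the given customer's ID.
    public static List<String[]> ordersFor(String customerId, List<String[]> orders) {
        List<String[]> matched = new ArrayList<>();
        for (String[] order : orders) {
            if (order[0].equals(customerId)) {
                matched.add(order);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String[]> orders = Arrays.asList(
            new String[] { "CUSTOMER_1", "ORDER_ID_1_0" },
            new String[] { "CUSTOMER_1", "ORDER_ID_1_1" },
            new String[] { "CUSTOMER_2", "ORDER_ID_2_0" });
        // Only CUSTOMER_1's two orders satisfy the join condition.
        System.out.println(ordersFor("CUSTOMER_1", orders).size()); // prints 2
    }
}
```

When the query runs, the Liquid Data server evaluates this same kind of equality condition across your registered data sources, so only matching customer and order records are combined in the result.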
Create a Join Between Wireless : CUSTOMER_ID and BroadBand : CUSTOMER_ID

Drag and drop the Wireless element CUSTOMER : CUSTOMER_ID onto the BroadBand data source element CUSTOMER_ORDER : CUSTOMER_ID:

<table> <thead> <tr> <th>Join Element</th> <th>Join Element</th> </tr> </thead> <tbody> <tr> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID</td> <td>[myBroadBand-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> </tbody> </table>

The syntax of this join is displayed on the next row on the Conditions tab.

Figure 14 Query Conditions After Selection of Two Joins

<table> <thead> <tr> <th>Enabled</th> <th>Condition</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID eq [myWireless-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> <tr> <td>2</td> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID eq [myBroadBand-LD-DS]/db/CUSTOMER_ORDER/CUSTOMER_ID</td> </tr> </tbody> </table>

Note: If you run the query at this point, it returns information on all customers in your customer database.

Create a Query Parameter

Although you could use a constant to “hard-code” your query for a particular customer, creating a query parameter is a more flexible approach. Using a query parameter, you can retrieve information on any customer in your data set.

1. On the Builder Toolbar (left navigation panel) click the Toolbox tab.
2. Select Query Parameters.
3. You need to name the parameter and specify a type. In this example, use myCustomer as the name and select string from the Type pulldown menu (see Figure 14).
4. Click Create.
5. Next you need to associate your new query parameter with a data element.
Do this by dragging the myCustomer query parameter to the CUSTOMER : CUSTOMER_ID element of the MyWireless-LD-DS database schema:

<table> <thead> <tr> <th>Join Element</th> <th>Join Element</th> </tr> </thead> <tbody> <tr> <td>[Query Parameter] myCustomer</td> <td>[myWireless-LD-DS]/db/CUSTOMER/CUSTOMER_ID</td> </tr> </tbody> </table>

You have now set up all the source conditions needed for your query.

Figure 16 Mapping a Parameter to a Source Schema Element

Figure 17 Complete Set of Conditions

Set the Target Namespace

Although not necessary in all cases, it is good practice to set a namespace for your target schema. See “Understanding XML Namespaces” in the Overview and Key Concepts chapter of Building Queries and Data Views. To set a target namespace for the customerOrderReport target schema follow these steps:

1. Click on the top node of the target schema (CustomerOrderReport).
2. Select Target Namespace from the Data View Builder Query menu (Query —> Target Namespace).

Figure 18 Specifying a Target Namespace

3. Enter the values shown in Table 4 for the sample target namespace prefix and URI (Figure 18).

Table 4 Namespace Prefix and URI Values

<table> <thead> <tr> <th>Field</th> <th>Defaults</th> </tr> </thead> <tbody> <tr> <td>Prefix</td> <td>custorder</td> </tr> <tr> <td>URI</td> <td>urn:schemas-bea-com:ld-custorder</td> </tr> </tbody> </table>

4. Click Ok. Your new target namespace setting will appear in the topmost node of your target schema (Figure 19).

Figure 19 Target Schema with New Namespace

5. Using the File —> Save Target Schema command, navigate to the Liquid Data Server repository directory and save your schema to the new filename: custOrdRpt.xsd

Step 6. View and Test Your Query

At this point you have assembled all the query elements.
Specifically, you have mapped data source elements to your target schema, you have created the necessary joins, and you have specified a query parameter functioning similarly to a SQL WHERE clause.

Entering Test Mode

When you enter Test mode, your query is automatically generated. Click the Test tab at the top of the workspace (Figure 20). After a few moments your query will automatically appear.

Figure 20 Supplying a Query Parameter

The Automatically Generated Query

The automatically generated XQuery is shown in Listing 5.

Listing 5 Generated customerOrderReport Query

```xquery
namespace custorder = "urn:schemas-bea-com:ld-custorder"
<custorder:CustomerOrderReport>
{
for $MyWireless_LD_DS.CUSTOMER_1 in document("MyWireless-LD-DS")/db/CUSTOMER
where ($#myCustomer of type xs:string eq $MyWireless_LD_DS.CUSTOMER_1/CUSTOMER_ID)
return
<customerOrder>
  <CUSTOMER>
    <FIRST_NAME>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/FIRST_NAME) }</FIRST_NAME>
    <LAST_NAME>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/LAST_NAME) }</LAST_NAME>
    <CUSTOMER_ID>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/CUSTOMER_ID) }</CUSTOMER_ID>
    <STATE>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/STATE) }</STATE>
    <EMAIL_ADDRESS>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/EMAIL_ADDRESS) }</EMAIL_ADDRESS>
    <TELEPHONE_NUMBER>{ xf:data($MyWireless_LD_DS.CUSTOMER_1/TELEPHONE_NUMBER) }</TELEPHONE_NUMBER>
  </CUSTOMER>
  <wireless_orders>
  {
  for $MyWireless_LD_DS.CUSTOMER_ORDER_2 in document("MyWireless-LD-DS")/db/CUSTOMER_ORDER
  where ($MyWireless_LD_DS.CUSTOMER_1/CUSTOMER_ID eq $MyWireless_LD_DS.CUSTOMER_ORDER_2/CUSTOMER_ID)
  return
  <CUSTOMER_ORDER>
    <ORDER_DATE>{ xf:data($MyWireless_LD_DS.CUSTOMER_ORDER_2/ORDER_DATE) }</ORDER_DATE>
    <ORDER_ID>{ xf:data($MyWireless_LD_DS.CUSTOMER_ORDER_2/ORDER_ID) }</ORDER_ID>
    <CUSTOMER_ID>{ xf:data($MyWireless_LD_DS.CUSTOMER_ORDER_2/CUSTOMER_ID) }</CUSTOMER_ID>
    <SHIP_METHOD>{ xf:data($MyWireless_LD_DS.CUSTOMER_ORDER_2/SHIP_METHOD) }</SHIP_METHOD>
    <TOTAL_ORDER_AMOUNT>{
    xf:data($MyWireless_LD_DS.CUSTOMER_ORDER_2/TOTAL_ORDER_AMOUNT) }</TOTAL_ORDER_AMOUNT>
  </CUSTOMER_ORDER>
  }
  </wireless_orders>
  <broadband_orders>
  {
  for $MyBroadBand_LD_DS.CUSTOMER_ORDER_3 in document("MyBroadBand-LD-DS")/db/CUSTOMER_ORDER
  where ($MyWireless_LD_DS.CUSTOMER_1/CUSTOMER_ID eq $MyBroadBand_LD_DS.CUSTOMER_ORDER_3/CUSTOMER_ID)
  return
  <CUSTOMER_ORDER>
    <ORDER_DATE>{ xf:data($MyBroadBand_LD_DS.CUSTOMER_ORDER_3/ORDER_DATE) }</ORDER_DATE>
    <ORDER_ID>{ xf:data($MyBroadBand_LD_DS.CUSTOMER_ORDER_3/ORDER_ID) }</ORDER_ID>
    <CUSTOMER_ID>{ xf:data($MyBroadBand_LD_DS.CUSTOMER_ORDER_3/CUSTOMER_ID) }</CUSTOMER_ID>
    <SHIP_METHOD>{ xf:data($MyBroadBand_LD_DS.CUSTOMER_ORDER_3/SHIP_METHOD) }</SHIP_METHOD>
    <TOTAL_ORDER_AMOUNT>{ xf:data($MyBroadBand_LD_DS.CUSTOMER_ORDER_3/TOTAL_ORDER_AMOUNT) }</TOTAL_ORDER_AMOUNT>
  </CUSTOMER_ORDER>
  }
  </broadband_orders>
</customerOrder>
}
</custorder:CustomerOrderReport>
```

Testing Your Query

Once generated, your query is ready to be tested. However, you need to supply the customer ID parameter you set up in “Create a Query Parameter” on page 19. Click in the Value field next to myCustomer and directly enter CUSTOMER_1 (see Figure 20). Note: Query parameters are case sensitive.

Running Your Query

When you are ready to run the query, click the Run query button on the toolbar in the upper left (Figure 21).

Figure 21 Run Query Button

The Liquid Data server processes the query and returns results. Right-click —> Expand on the top line of the generated results (customerOrder) to view your data in an outline-like arrangement.

Figure 22 Test Mode Displaying Query and Result in Tree Format

View as XML Checkbox

If you click the View as XML checkbox in the lower part of the Test view area of the Data View Builder, query results are displayed in XML (see Listing 6).
Listing 6 Query Result in XML ```xml <prefix1:CustomerOrderReport xmlns:prefix1="urn:schemas-bea-com:ld-custorder"> <customerOrder> <CUSTOMER> <FIRST_NAME>JOHN_1</FIRST_NAME> <LAST_NAME>KAY_1</LAST_NAME> <CUSTOMER_ID>CUSTOMER_1</CUSTOMER_ID> <STATE>TX</STATE> <EMAIL_ADDRESS>abc@abc.com</EMAIL_ADDRESS> <TELEPHONE_NUMBER>4081231234</TELEPHONE_NUMBER> </CUSTOMER> <wireless_orders> <CUSTOMER_ORDER> <ORDER_DATE>2002-03-06</ORDER_DATE> <ORDER_ID>ORDER_ID_1_0</ORDER_ID> <CUSTOMER_ID>CUSTOMER_1</CUSTOMER_ID> <SHIP_METHOD>AIR</SHIP_METHOD> <TOTAL_ORDER_AMOUNT>1000</TOTAL_ORDER_AMOUNT> </CUSTOMER_ORDER> <CUSTOMER_ORDER> <ORDER_DATE>2002-03-06</ORDER_DATE> <ORDER_ID>ORDER_ID_1_1</ORDER_ID> <CUSTOMER_ID>CUSTOMER_1</CUSTOMER_ID> <SHIP_METHOD>AIR</SHIP_METHOD> <TOTAL_ORDER_AMOUNT>2000</TOTAL_ORDER_AMOUNT> </CUSTOMER_ORDER> </wireless_orders> <broadband_orders> <CUSTOMER_ORDER> <ORDER_DATE>2002-04-09</ORDER_DATE> <ORDER_ID>ORDER_ID_1_0</ORDER_ID> <CUSTOMER_ID>CUSTOMER_1</CUSTOMER_ID> <SHIP_METHOD>AIR</SHIP_METHOD> <TOTAL_ORDER_AMOUNT>1000.00</TOTAL_ORDER_AMOUNT> </CUSTOMER_ORDER> <CUSTOMER_ORDER> <ORDER_DATE>2002-04-09</ORDER_DATE> <ORDER_ID>ORDER_ID_1_1</ORDER_ID> <CUSTOMER_ID>CUSTOMER_1</CUSTOMER_ID> <SHIP_METHOD>AIR</SHIP_METHOD> <TOTAL_ORDER_AMOUNT>1500.00</TOTAL_ORDER_AMOUNT> </CUSTOMER_ORDER> </broadband_orders> </customerOrder> </prefix1:CustomerOrderReport> ``` Summary You have successfully built and run a query using Liquid Data. Note: To switch back to Data View Builder Design mode, click the Design View tab on the toolbar. Step 7. Save Your Liquid Data Project Now that you have successfully built and run a query, you can save your Data View Builder project. The next time you need to run or modify your query or Data View, you can simply reopen the project. To save your project: 1. Choose the menu option File —> Save Project. 2. Navigate to the directory where you want to save your project. Enter `customerReport` in the File name field. 3. Click Save. 
The file is saved with a `.qpr` extension. See “Working With Projects” in the `Starting the Builder and Touring the GUI` chapter of `Building Queries and Data Views`.

Step 8. Deploy Your Query

To use the results of your query in an application such as WebLogic Workshop, you first need to associate a saved query with a saved target schema. This is known as deploying a query and provides the metadata necessary to create an XMLBean that provides data access services in Workshop. (See “Deploying a Query” in the `Testing Queries` chapter of `Building Queries and Data Views`.) Follow these steps to deploy your query:

1. In Test mode, select Deploy Query from the Query menu (Query —> Deploy Query). A dialog box appears where you can choose to save your current schema to the Liquid Data repository.
2. Choose Yes. When the repository file browser appears, you can use the supplied target schema name of custOrdRpt.xsd
3. Next, you will be asked if you want to save the current query. Again select Yes.
4. Save your query to the name custOrdRpt.
5. The Deploy Stored Query dialog box appears, populated with the name of your query and target schema (Figure 26).

Figure 26 Creating a Stored Query

6. Click Deploy. You should see a Deployment Successful message (Figure 27).

Figure 27 Deploying custOrdRpt Query with custOrdRpt Schema

**Step 9. Creating a Liquid Data Control in WebLogic Workshop**

Once you have deployed a query, you can quickly make it available to WebLogic Workshop. (For details see “Using WebLogic Workshop Controls to Develop Liquid Data Applications” in the Application Developer's Guide.)
In the following sections you will work through:
- Creating a Liquid Data Control
- Generating a Page Flow File
- Obtaining Preliminary Query Results
- Modifying Page Flow Source
- Adding Data to the Application
- Testing Your Results

Creating a Liquid Data Control

A Liquid Data control provides access from WebLogic Workshop to queries that have been deployed to a Liquid Data server.

1. Open WebLogic Workshop from the Windows Start menu: Start —> Programs —> BEA WebLogic Platform 8.1 —> WebLogic Workshop 8.1
2. Create a new WebLogic Workshop application (File —> New —> Application). Use custOrdRpt as the application name. Notice that the selected application type is Default Application. When you create a default application, a project of type Web Project is automatically built using the name of the application plus Web. In this case the project is custOrdRptWeb.
4. Right-click on the custOrdRpt folder and select New —> Java Control.
5. Select Liquid Data from the list of available Java control extensions. Enter `custOrdRpt` as the filename for the control (Figure 31). The Liquid Data control enables you to turn Liquid Data queries into XMLBeans which can be used by a WebLogic Workshop application.

Figure 30 Creating a New Java Control

Figure 31 Selecting and Naming a Liquid Data Control

6. Click Next. If your server is remote, you will need to enter remote connection information. However, the Liquid Data samples are most likely running on your local system; if so, select Local, the default option.

Figure 32 Creating a New Java Control Dialog

7. Click Create.
8. From the list of available queries select custOrdRpt.
Figure 33 Adding a Deployed Query to a Liquid Data Control

9. Click the Add button.
10. When the query moves to the right-hand column, click Finish. A graphical depiction of the newly created Liquid Data control is shown in Figure 34.

Figure 34 Design View of Liquid Data Control

The final section takes you through the steps required to build a web application that displays query results in automatically generated JavaServer Pages (JSPs).

Step 10. Creating a Liquid Data-Powered Web Application

Once a Liquid Data control is available, its query elements can be built into JSP pages through creation of a WebLogic Workshop application.

Generating a Page Flow File

WebLogic Workshop applications are driven by a single page flow file, identified by its .jpf extension. When you generate a page flow, WebLogic Workshop creates the page flow file and several other files. (For details see “Generating a Page Flow From a Control” in the chapter Using Workshop Controls to Develop Liquid Data Applications in the Application Developer’s Guide.) Perform the following steps to generate a page flow file (custOrdRpt.jpf) from a Liquid Data Java control.

2. Select Generate Page Flow (Figure 35).

**Figure 35 Generating a Page Flow Document From a Liquid Data Java Control**

3. In the Page Flow Wizard, enter `custOrdRpt` as the page flow name.
4. Click Next.
5. In the Page Flow Wizard - Select Actions dialog box, check the query methods you want available to your new application. In the case of `custOrdRpt`, there is only one method. Click the checkbox to the left of the method (Figure 36).
6. Select Create.
**Figure 36 Identifying the Liquid Data Methods for the Page Flow**

WebLogic Workshop generates:
- An application page flow file (`custOrdRptController.jpf`)
- A start page (`index.jsp`)
- A JSP file for the method(s) you specified through the Liquid Data control (`custOrdRpt.jsp`)

Figure 37 WebLogic Workshop View After a Page Flow is Generated

The application Flow View graphically depicts the relationship between the various pages of your application.

7. Delete the *success* link that connects the *begin* element to *index.jsp* by selecting the link (click on it) and then choosing Right-click —> Delete (Figure 38). The change is needed because *index.jsp* will not be the application's start-up page.

Figure 38 Removing a Page Flow Link

8. Create a new link from begin to custOrdRpt.jsp: click adjacent to begin; then, holding your mouse button down, draw a link to custOrdRpt.jsp (Figure 39). It will automatically be labeled success. Based on the XMLBean that was created from the custOrdRpt deployed query, a customer ID parameter is required by the web application. In WebLogic Workshop the new initial application page (custOrdRpt.jsp) requests a Customer ID. (See “Create a Query Parameter” on page 19.)

**Figure 39 Creating a New Page Flow Link**

Obtaining Preliminary Query Results

You can now test your application for data access.

1. From the WebLogic Workshop menu, choose Debug —> Start Without Debugging.
2. After some moments the custOrdRptController page appears. Enter a query parameter such as CUSTOMER_3 (Figure 40). **Note:** Case matters; customer_3 will not succeed.
3. Click Submit to test your query. After a few moments, raw results such as those shown in Figure 41 appear.

Modifying Page Flow Source

While the application's Java Page Flow (JPF) file is automatically generated, a few lines of Java code are needed to add XMLBean variables to your page flow.
(For details see the topics “Adding XMLBean Variables to the Page Flow” and “To Initialize the Variable in the Page Flow” in the chapter Using Workshop Controls to Develop Liquid Data Applications in the Application Developer's Guide.) Follow these steps to copy several lines of Java code into the correct section of custOrdRptController.jpf:

1. Open the custOrdRptController.jpf file in WebLogic Workshop.
2. Click the Source View tab in WebLogic Workshop.
3. Immediately after the following line (near the beginning of the file):

```java
private custOrdRpt myControl;
```

insert three public variable declarations:

```java
// Note: the XMLBean type names below are inferred from the custOrdRpt target
// schema; the names generated on your system may differ slightly. Check the
// generated XMLBean classes for the actual names.
public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder.WirelessOrders.CUSTOMERORDER[] Wireless;
public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder.BroadbandOrders.CUSTOMERORDER[] BroadBand;
public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder Customers;
```

These instructions create three variables. Two hold the CUSTOMERORDER[] arrays for the BroadBand and Wireless orders. The third, Customers, represents data associated with a particular customer.

4. A few lines down, locate the section of code that ties the query to the custOrdRpt form. An easy way to do this is to search. From the WebLogic Workshop menu select Edit —> Find (or Control-F). Search for a section of code beginning with the string:

```java
Forward custOrdRpt
```

After the line:

```java
getRequest().setAttribute( "results", var );
```

enter (or copy/paste) the following:

```java
Wireless = var.getCustomerOrderReport().getCustomerOrderArray(0).getWirelessOrders().getCUSTOMERORDERArray();
BroadBand = var.getCustomerOrderReport().getCustomerOrderArray(0).getBroadbandOrders().getCUSTOMERORDERArray();
Customers = var.getCustomerOrderReport().getCustomerOrderArray(0);
```

These instructions make the three public variables available to the application. The relevant code is shown in Figure 7 (newly entered lines are in boldface). package custOrdRpt; import com.bea.wlw.netui.pageflow.FormData; import com.bea.wlw.netui.pageflow.Forward; import com.bea.wlw.netui.pageflow.PageFlowController; import custOrdRpt.custOrdRpt; ...
```java
public class custOrdRptController extends PageFlowController
{
    /**
     * This is the control used to generate this pageflow
     * @common:control
     */
    private custOrdRpt myControl;

    // Public XMLBean result variables. The generated type names below are
    // inferred from the target schema and may differ in your generated classes.
    public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder.WirelessOrders.CUSTOMERORDER[] Wireless;
    public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder.BroadbandOrders.CUSTOMERORDER[] BroadBand;
    public schemasBeaComLdCustorder.CustomerOrderReportDocument.CustomerOrderReport.CustomerOrder Customers;

    // Uncomment this declaration to access Global.app.
    // protected global.Global globalApp;
    //
    // For an example of page flow exception handling see the example "catch" and "exception-handler"
    // annotations in {project}/WEB-INF/src/global/Global.app

    /**
     * This method represents the point of entry into the pageflow
     * @jpf:action
     */
    protected Forward begin()
    {
        return new Forward( "success" );
    }

    /**
     * Action encapsulating the control method :custOrdRpt
     * @jpf:action
     * @jpf:forward name="success" path="index.jsp"
     * @jpf:catch method="exceptionHandler" type="Exception"
     */
    public Forward custOrdRpt( CustOrdRptForm aForm )
        throws Exception
    {
        schemasBeaComLdCustorder.CustomerOrderReportDocument var =
            myControl.custOrdRpt( aForm.myCustomer );
        getRequest().setAttribute( "results", var );
        Wireless = var.getCustomerOrderReport().getCustomerOrderArray(0).getWirelessOrders().getCUSTOMERORDERArray();
        BroadBand = var.getCustomerOrderReport().getCustomerOrderArray(0).getBroadbandOrders().getCUSTOMERORDERArray();
        Customers = var.getCustomerOrderReport().getCustomerOrderArray(0);
        return new Forward( "success" );
    }

    /**
     * @jpf:action
     * @jpf:forward name="success" path="custOrdRpt.jsp"
     */
    public Forward custOrdRptLink()
        throws Exception
    {
        return new Forward( "success" );
    }

    /**
     * @jpf:exception-handler
     * @jpf:forward name="errorPage" path="/error.jsp"
     */
    protected Forward exceptionHandler( Exception ex, String actionName, String message, FormData form )
    {
        String displayMessage = "An exception occurred in the action " + actionName;
        getRequest().setAttribute( "errorMessage", displayMessage );
        return new Forward( "errorPage" );
    }

    /**
     * FormData class CustOrdRptForm
     * FormData get and set methods may be overwritten by the Form Bean editor.
     */
    public static class CustOrdRptForm extends FormData
    {
        private java.lang.String myCustomer;

        public void setMyCustomer( java.lang.String myCustomer )
        {
            this.myCustomer = myCustomer;
        }

        public java.lang.String getMyCustomer()
        {
            return myCustomer;
        }
    }
}
```

Adding Data to the Application

Now that variables are available to hold query results, the last step is to modify `index.jsp` to display your results. In the `custOrdRpt` folder double-click on `index.jsp` (Figure 42). Three things need to be added to `index.jsp`:
- Field labels.
- Data fields, to display information about the current customer.
- NetUI `repeater` fields, to contain data from the BroadBand and Wireless data sources.

The following sections walk you through the `index.jsp` editing process.

**Adding Field Labels**

First, several labels need to be added to `index.jsp`. Go to the Source View and open the `index.jsp` file. Near the end of that file, find and replace the long line beginning with:

```jsp
<%Object res = request.getAttribute ...
```

with the following:

```jsp
<b>CUSTOMER</b>
<br>
<b>BroadBand Orders</b>
<br>
<b>Wireless Orders</b>
<br>
```

Adding Data Fields

Next, an HTML table needs to be added to the page to hold customer information (in an RDBMS application, customer information would represent the *master* and customer orders the *detail*).

1. Return to Design View.
2. Using your mouse, click on CUSTOMER (Figure 43).

**Figure 43 Java Server Page (index.jsp) After Adding Field Labels**

3. From the WebLogic Workshop menu select Insert —> HTML —> Table.
4. When the Table Wizard appears, set the number of rows to 4 and the number of columns to 2.

**Figure 44 Creating an HTML Table in WebLogic Workshop**

5. Click Ok. Notice that the table is placed in the Results Area below the CUSTOMER label.
6. Double-click in the upper-left field of the generated grid (Figure 45). This opens the relevant section of index.jsp in Source View.
**Figure 45 Selecting a Cell in WebLogic Workshop Table Grid**

7. Replace the first table detail of each row with the following descriptions: First Name, Last Name, Customer ID, and Email Address. Replacement text is highlighted (Figure 48).

```
<table border="1">
<tr>
<td>First Name</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>Last Name</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>Customer ID</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>Email Address</td>
<td>&nbsp;</td>
</tr>
</table>
```

8. Switch back to Design View.
9. In the WebLogic Workshop Data Palette, expand PageFlow —> Properties —> Customers —> CUSTOMER to see the available Customer elements (Figure 46).

**Figure 46 Data Palette Page Flow Data Components**

10. Drag-and-drop data elements to their corresponding text labels (Figure 47).

Figure 47 Populating WebLogic Workshop HTML Table with Customer Data Elements

Figure 48 shows a fully configured customer information table.

Add Repeater Fields For Order Arrays

WebLogic Workshop repeater fields make it very easy to insert arrays of data such as the Wireless and BroadBand order information. (For details see the topic “To Add a Repeater to a JSP File” in the chapter Using Workshop Controls to Develop Liquid Data Applications in the Application Developer’s Guide.)

Using your cursor, highlight the return after BroadBand (Figure 49).

1. From the Data Palette, just above the Public Controls section, drag the BroadBand array symbol to the highlighted area (also Figure 49).

Figure 49 Populating a JSP With an Array Containing BroadBand Order Data Elements

2. Accept the defaults (meaning all the elements in the array) by clicking Next. No title is needed.

Figure 50 Selecting Fields for NetUI Repeater Element

3. Click Create to accept the default repeater data format of Table.

Figure 51 Setting Format for NetUI Repeater Element

4. Repeat Steps 1 and 2 for Wireless, dragging the Wireless array icon just to the right of the Wireless label.
As necessary, drag-and-drop text labels so they appear in the right location.

**Testing Your Results**

Congratulations. Your application is complete. To test it:

1. Run your application using the Start Without Debugging command.
2. When the initial screen appears, enter a valid customer ID in uppercase, such as CUSTOMER_3.

When finished, your results page should contain the same data as shown in Figure 52.

**Figure 52 Results Page After Query is Run**

**Summary**

This concludes the Liquid Data Quick Start tutorial. If you followed along with the instructions, you successfully:

- Identified disparate data sources
- Created a parameterized query
- Deployed your query to the Liquid Data server
- Created a Liquid Data control
- Used a query from the control in a web application

**Generated Files and Code**

All the files associated with the Liquid Data query and web application are generated automatically. Table 8 provides some details. Notice that of the nearly 500 lines of code required to produce this rather simple application, only 13 lines were created by hand.
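Most of those hand-coded lines are the `CustOrdRptForm` bean shown earlier. Stripped of the page-flow scaffolding, the pattern is an ordinary JavaBean; the sketch below shows it standalone (omitting the `FormData` superclass is an assumption made here so the example is self-contained):

```java
public class CustOrdRptFormSketch {
    /** Mirrors the generated form bean: one property holding the query parameter. */
    public static class CustOrdRptForm {
        private String myCustomer;

        public void setMyCustomer(String myCustomer) {
            this.myCustomer = myCustomer;
        }

        public String getMyCustomer() {
            return myCustomer;
        }
    }

    public static void main(String[] args) {
        CustOrdRptForm form = new CustOrdRptForm();
        form.setMyCustomer("CUSTOMER_3"); // the customer ID entered on the input page
        System.out.println(form.getMyCustomer());
    }
}
```

At runtime, the page flow populates this bean from the submitted form and passes the `myCustomer` value to the Liquid Data query as its parameter.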
<table>
  <thead>
    <tr>
      <th>Generated File</th>
      <th>Source / Purpose</th>
      <th>Total lines of code</th>
      <th>Hand coded</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>custOrdRpt.xq</td>
      <td>Data View Builder</td>
      <td>49</td>
      <td>0</td>
    </tr>
    <tr>
      <td>custOrdRpt.jcx</td>
      <td>Liquid Data control, includes target schema</td>
      <td>96</td>
      <td>0</td>
    </tr>
    <tr>
      <td>custOrdRptController.jspf</td>
      <td>WebLogic Workshop Page Flow Control File</td>
      <td>180</td>
      <td>6</td>
    </tr>
    <tr>
      <td>index.jsp</td>
      <td>Web application home page</td>
      <td>121</td>
      <td>7</td>
    </tr>
    <tr>
      <td>custOrdRpt.jsp</td>
      <td>Web application input parameter page</td>
      <td>23</td>
      <td>0</td>
    </tr>
    <tr>
      <td><strong>TOTAL</strong></td>
      <td></td>
      <td>469</td>
      <td>13</td>
    </tr>
  </tbody>
</table>

**Topics of Further Interest**

Now that you are somewhat familiar with the basics of creating Liquid Data queries using the Data View Builder, the following may be of increased interest:

- *Liquid Data by Example* contains numerous samples and examples, beginning with a comprehensive survey of the Liquid Data Retail Sample Application. That sample illustrates the data retrieval capabilities of Liquid Data, used with WebLogic Workshop to create a customer self-service web application.
- The *Liquid Data Application Developer's Guide* provides detailed instructions on using results from Liquid Data queries in WebLogic Workshop, through Enterprise Java Beans (EJBs), and in JSP clients.
- The *Liquid Data Concepts Guide* provides an end-to-end overview of the product, including a survey of the product's capabilities, architecture, and role in a WebLogic-centered application development environment.
- *Building Queries and Data Views* provides a detailed GUI overview and description of Data View Builder features.
- The *Liquid Data Administration Guide* provides information about how to configure the data source types supported by Liquid Data, how to configure security, and how to set up monitoring and reporting.

**Index**

**A**
- Administration Console: logging in, 4; starting, 3; URL, 3; where to find more information, 55

**C**
- conditions, creating, 18
- customer support contact information, viii

**D**
- Data View Builder: starting, 10; URL, 10
- demo on how to construct query, 12
- Design tab, 11
- documentation, where to find it, vii

**G**
- generated XQuery, viewing on the Test tab, 23

**L**
- logging in to the Administration Console, 4

**P**
- printing product documentation, vii

**R**
- related information, viii
- relational database, configuring as Liquid Data data source, 5
- running a query, 25

**S**
- Samples server, starting, 2
- source schemas: mapping elements to target schema, 14; opening/adding to a project, 12
- starting: Administration Console, 3; Data View Builder, 10; Samples server, 2
- support, technical, viii

**T**
- target schema: configuring, 14; mapping, 14
- testing a query, 23

**U**
- URL: Administration Console, 3; Data View Builder, 10

**X**
- XML file, configuring as Liquid Data data source, 8
- XML query result, viewing on Test tab, 27
- XQuery, viewing on Test tab, 23
An Experiment on the Effects of Using Color to Visualize Requirements Analysis Tasks

Yesugen Baatartogtokh, Irene Foster, Alicia M. Grubb
Department of Computer Science
Smith College, Northampton, MA, USA
amgrubb@smith.edu

**Recommended Citation**
Baatartogtokh, Yesugen; Foster, Irene; and Grubb, Alicia M., "An Experiment on the Effects of Using Color to Visualize Requirements Analysis Tasks" (2023). Computer Science: Faculty Publications, Smith College, Northampton, MA. https://scholarworks.smith.edu/csc_facpubs/360

Abstract—Recent approaches have investigated assisting users in making early trade-off decisions when the future evolution of project elements is uncertain. These approaches have demonstrated promise in their analytical capabilities; yet, stakeholders have expressed concerns about the readability of the models and resulting analysis, which builds upon Tropos. Tropos is based on formal semantics enabling automated analysis; however, this creates a problem of interpreting evidence pairs. The aim of our broader research project is to improve the process of model comprehension and decision making by improving how analysts interpret and make decisions. We extend and evaluate a prior approach, called EVO, which uses color to visualize evidence pairs. In this scientific evaluation paper, we explore the effectiveness and usability of EVO.
We conduct an experiment (n = 32) to measure any effect of using colors to represent evidence pairs. We find that with minimal training, untrained modelers were able to use the color visualization for decision making. The visualization significantly improves the speed of model comprehension and users found it helpful.

I. INTRODUCTION

Goal-oriented requirements engineering (GORE) aims to assist individuals to make decisions about their projects. To do so, analysts create models consisting of actors and intentions (e.g., goals, tasks), as well as connections between them. These models can then be evaluated for a given scenario by placing a label on each intention of interest to the user. In the domain of qualitative evaluations of goal models, there are multiple methods for evaluating intentions. For example, iStar and GRL use visual labels (e.g., checkmarks and Xs), while Tropos uses evidence pairs (e.g., (F, P)). In comparing these approaches, the visual labels in iStar are more understandable to end-users but lack formal semantics, while the evidence pairs in Tropos allow for automation but are hard for users to understand. This tension between model comprehension and automated analysis is further exacerbated by evaluating models over time [1], [2] and with families of models [3], where users evaluate collections of models. Given the potential for automating analysis of goal models [4] and connecting them with downstream activities [5], the broader aim of this research program is to improve the cognitive effectiveness [6] of Tropos evidence pairs, making them more accessible to end-users.

The comprehensibility of Tropos models has already been investigated in the literature. Hadar et al. compared Tropos and Use Case models and found that Tropos models seem to be more comprehensible with respect to some requirements analysis tasks, although Tropos models were found to be more time consuming [7].
In a replication of Hadar et al.'s work, Siqueira found no difference in model comprehensibility and effort between Tropos and Use Case models, when those models have equivalent complexity [8]. While an important foundation, this work is tangential to our investigation because we are interested in improving the comprehensibility of Tropos relative to itself, rather than comparing it to other approaches.

In prior work, Grubb and Chechik developed automated analysis techniques for Tropos models with evolutionary information [9]. Building on this framework and the BloomingLeaf tool, Varnum et al. proposed using colors to assist users in interpreting evidence pairs in Tropos, which they called EVO (Evaluation Visualization Overlay) [10]. Varnum et al. completed a preliminary evaluation with an example but did not validate this approach with users [10]. Prior work suggests that color can help individuals interpret certain graph types faster [11], but should be used as a secondary encoding [6].

Contributions. We investigate to what extent, if any, using EVO affects how individuals understand and make decisions about goal models with timing information, using Tropos evidence pairs. We report on an IRB-approved between-subjects experiment conducted with 32 undergraduate students. We aim to answer four research questions:

- RQ0: Do modelers across treatment groups perform similarly on basic goal modeling and simulation tasks?
- RQ1: To what extent are subjects able to learn EVO, and then use EVO to answer goal modeling questions?
- RQ2: How does EVO compare with the control in terms of time and subjects' perceptions?
- RQ3: How do subjects rate the study experience/instrument?

We found that with minimal prior training in goal modeling, subjects were able to learn and use the EVO extension to make decisions. We found no evidence that EVO altered the quality of understanding or decision making, either positively or negatively.
However, we found that EVO significantly decreased the time required to make decisions. Finally, the subjects responded positively to EVO and the study protocol.

Organization. The remainder of the paper is organized as follows. Sect. II reviews goal modeling and the EVO approach. Sect. III describes our study methodology. We report on the results of our study in Sect. IV, and discuss lessons learned and validity in Sect. V. Finally, we review related work in Sect. VI and conclude in Sect. VII.

II. BACKGROUND

In this section, we review the goal modeling notation and visualization overlay used in this study.

A. Goal Model Notation

We use the Employee model shown in Fig. 1 to illustrate our notation. A goal model consists of actors, intentions, and links. Intentions describe the intentionality of each actor and consist of four types: goals, soft goals, tasks, and resources. For example, Fig. 1 contains one actor, named Employee, and nine intentions that describe the Employee's motivations. Intentions can be decomposed or contribute to the fulfillment of one another via links, forming one or more graphs of nodes in the model.

Decomposition links decompose an intention into subsequent or child nodes. An intention with an AND-decomposition requires all of its children to be fulfilled, while an OR-decomposition requires only one to be fulfilled. In Fig. 1, the Employee's only goal is to Have Employment, which is OR-decomposed into two tasks, Work from Home and Work in Office.

Contribution links (e.g., +, -, ++S, -S) indicate that an intention has influence on another intention. For example, Work in Office (see Fig. 1) propagates all evidence to Make Work Connections via a ++ link, while the - link between Work in Office and Spend Time with Family negates and propagates partial evidence of fulfillment.
The fulfillment of an intention is evaluated qualitatively using an evidence pair (s, d), which separates evidence for and against the fulfillment of the intention. Both s and d take one of three values: F represents full evidence, P represents partial evidence, and ⊥ represents no evidence, where ⊥ ≤ P ≤ F. Thus, goals can have one of five initial values: Fully Satisfied (F, ⊥), Partially Satisfied (P, ⊥), Partially Denied (⊥, P), Fully Denied (⊥, F), and None (⊥, ⊥); as well as four conflicting values that may result from propagation: (F, F), (F, P), (P, F), and (P, P). For clarity, we list these evidence pairs in Fig. 2. In Fig. 1, the task Prepare and Pack Lunch is assigned the value Denied (⊥, F) because the actor Employee has not yet completed the task.

B. Simulating Models over Time

We use the Evolving Intentions framework [9] to simulate how a model's fulfillment changes over time. The framework allows users to specify one or more stepwise functions (called User-Defined (UD) functions) describing how the evidence pair assignment for an intention changes over time. Over any time interval, the valuation of an intention can Increase (I), Decrease (D), remain Constant (C), or be random or Stochastic (R). In Fig. 1, the resource Time remains Constant with the valuation of Satisfied (F, ⊥) over time. The MP label on Prepare and Pack Lunch indicates a Monotonic Positive function, meaning that the valuation will become more fulfilled until it is fully satisfied and then it will remain constant with that value.
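To make the evidence-pair semantics concrete, the sketch below models the ordering ⊥ ≤ P ≤ F together with a simplified reading of the ++ link (propagate all evidence unchanged) and the - link (negate and weaken to partial evidence). This is a minimal approximation for exposition, not the Tropos or BloomingLeaf implementation.

```java
public class EvidencePairSketch {
    /** Evidence values ordered NONE (⊥) <= PARTIAL (P) <= FULL (F). */
    enum Evidence { NONE, PARTIAL, FULL }

    /** An evidence pair (s, d): evidence for and against fulfillment. */
    static class Pair {
        final Evidence s, d;
        Pair(Evidence s, Evidence d) { this.s = s; this.d = d; }

        /** Conflicting when there is evidence both for and against. */
        boolean isConflicting() { return s != Evidence.NONE && d != Evidence.NONE; }

        @Override public String toString() { return "(" + s + ", " + d + ")"; }
    }

    /** ++ link: propagate all evidence unchanged. */
    static Pair plusPlus(Pair src) { return new Pair(src.s, src.d); }

    /** - link: negate (swap s and d) and weaken full evidence to partial. */
    static Pair minus(Pair src) { return new Pair(weaken(src.d), weaken(src.s)); }

    static Evidence weaken(Evidence e) { return e == Evidence.FULL ? Evidence.PARTIAL : e; }

    public static void main(String[] args) {
        Pair satisfied = new Pair(Evidence.FULL, Evidence.NONE); // (F, ⊥)
        // A - link turns (F, ⊥) into partial evidence against: (⊥, P).
        System.out.println(minus(satisfied));
        System.out.println(new Pair(Evidence.FULL, Evidence.FULL).isConflicting());
    }
}
```

Under this reading, Work in Office being Satisfied (F, ⊥) yields Partially Denied (⊥, P) at Spend Time with Family via the - link, matching the negate-and-weaken description above.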
Three other functions that appear in this paper are: Denied-Satisfied (DS), where the satisfaction evaluation remains Denied (⊥, F) until t and then remains Satisfied (F, ⊥); Stochastic-Constant (RC), where changes in satisfaction evaluation are stochastic or random until t and then the evaluation remains constant with a given evidence pair; and Constant-Stochastic (CR), where the satisfaction evaluation remains constant at a given evidence pair until t and then changes in evaluation are stochastic.

After a path has been simulated, all of the intentions in the model are assigned an evidence pair label for each time point. Intentions that are not assigned evolving functions receive their valuations via propagation. Thus, a contribution of the framework is to allow users to make trade-off decisions about the future states of the model by stepping through each time point in a simulation and reviewing the evidence pair assignments of each intention.

C. EVO: Evaluation Visualization Overlay

As briefly mentioned in Sect. I, Varnum et al. introduced the Evaluation Visualization Overlay (EVO) [10]. EVO was designed to assist users in understanding evidence pairs. Each evidence pair (s, d) label is assigned a color (see legend in Fig. 2), where blue denotes evidence for (i.e., the s value), red denotes evidence against (i.e., the d value), and purple denotes conflicting evidence. The more saturated (or darker) the color shade, the stronger the evidence (i.e., F is darker than P). Observe that (F, F) is a very dark shade of purple, whereas (P, P) is a lighter shade of purple. For (P, F) there is both blue and red present, making it purple, but because there is more evidence for denial, it is more red-purple, with the inverse being true for (F, P). During modeling activities, when EVO is enabled the color of each intention corresponds to any initial assignment, while unassigned intentions retain their original color (see legend in Fig. 1).
This provides an overall visualization of the model's initial state. For example, Fig. 4 gives the initial state of the Summer model (see Sect. III-B for details). In Fig. 4, Have Summer Activity is colored dark red because it has been assigned the (⊥, F) label.

The main contribution of EVO is to assist users in evaluating evidence pair assignments across a simulation path. Within the Evolving Intentions framework introduced above, it is difficult for a user to remember all of the different valuations of each intention at each time point, much less synthesize them all together to act upon the given information. EVO provides three modes to visualize simulations: State, Time, and Percent. To introduce these modes, we consider only the Spend Time with Family intention from Fig. 1. State mode shows the current time point of the model, with the background of each intention colored based on its assigned evidence pair. Fig. 3 shows the color and evidence pair assignments for Spend Time with Family at time points 0–4. Time mode shows the valuations over the entire path in one view. For example, in Fig. 3, each of the stripes on Spend Time with Family represents the colors of each state shown above. Finally, Percent mode colors by overall evaluation percentages: the background of each intention shows, for each evidence pair, the percentage of states in the simulation where the intention holds that assignment. The width of each colored stripe corresponds to the percentage of time points that it holds a specific evidence pair, ordered based on level of fulfillment.

III. METHODOLOGY

In this section, we describe our methodology for conducting this study, which was approved by our institutional review board (IRB). Our supplemental materials are available online¹.

A. Experiment Design

Our primary objective in designing this experiment was to measure the effects of EVO. The original EVO proposal was implemented as an extension to BloomingLeaf [12].
We did not intend to evaluate the usability of BloomingLeaf; instead, we wanted to test EVO in isolation without the confounding variables of tooling, making our study tool agnostic. Additionally, we wanted to collect timing information in an accurate way. Thus, we designed the study instrument to be completed via our institution's browser-based Qualtrics® XM platform. We used the BloomingLeaf git repository [12] only for the purpose of creating our study materials and models.

In designing this experiment, our main consideration was ensuring that we measured the appropriate elements, and controlled for the risks of variability between subjects' tasks, subjects' natural performance, and any learning, fatigue, or carryover effects (see Sect. V-C). We chose a nested 2x2 design [13], with random treatment group assignment. To measure the impacts of using EVO, we compared measurements of subjects analyzing a model with and without having access to EVO, using two different models. To mitigate any learning effects, we varied the EVO training order. We took measurements of subjects' correctness when answering questions, labeled as score, and how long it took subjects to answer these questions. Thus, our dependent variables were score and time.

Previous investigations have demonstrated that task equivocality is an important factor in analyzing model comprehensibility [8]. We designed our questions to be similar but not identical. To understand any effects that may result from model variation, we test two models in our design.

We explored conducting the study as either a between- or within-subjects comparison. Ideally, our study would be analyzed within-subjects, which would control for natural variations in individual performance, model variability, and EVO ordering. Yet, analyzing this design requires the use of ANOVA.
Instead, we planned our analysis to be performed between-subjects, but this has the downside of not being able to control for individual subject variability.

B. Materials: Models and Videos

In this study, we used four models: the Employment model (see Fig. 1), the Summer model (see Fig. 4), the Bike model (Fig. 5), and the Course model (not shown for space considerations; see online¹). We list these models and their associated metrics in Tbl. I. The Course model describes the process of a student (and their advisor) trying to decide whether the student should take a fun and interesting or practical and unexciting elective in the next semester. In Sect. II-A, we describe the Employment model (see Fig. 1) to introduce goal model syntax. The model describes an employee, who is debating between working from home or working in an office, with the top-level goal of Have Employment. In the Summer model (see Fig. 4), the actor Joy wants to have a summer activity, with choices between tasks Join Book Club, Join Community Center, and Join Soccer Team. These tasks are AND-decomposed into sets of tasks that must be satisfied. In the Bike model shown in Fig. 5, the City actor wants to construct bike lanes, with the top-level goal Have Bike Lanes, for which they must have satisfied both sub-goals Have Design Plans and Have Build Plans. These two goals are OR-decomposed into tasks they must choose from.

---

¹ See https://doi.org/10.35482/csc.002.2023 for supplement.

Subjects were tested on their ability to answer questions about the Bike and Summer models (see Tbl. IV for a list of questions).
We created both an EVO and control version of all models. These models as well as their simulations are available online¹. While the Bike model has more intentions and links, the evolving functions are simpler than the Summer model. Our study consisted of three training videos (transcripts available online¹): (i) Goal Models in Tropos (VidGM) reviews goal modeling and explains Tropos evidence pairs and links. (ii) Introduction to Simulation Over Time (VidSim) introduces function types and evolving intentions, describing what it means to simulate a model over time. (iii) EVO (VidEVO) introduces the EVO color scheme for evidence pairs and goes over its three possible modes: State, Time, and Percent. C. Procedure: Conducting the Experiment Tbl. II lists the steps in our protocol for each treatment group. Parts 0, 1, and 5 are common across all subjects. In Part 0, we obtained informed consent from all subjects and had them rate their previous experience with goal modeling. In this step, we also had them complete a short (seven question) color deficiency test to ensure subjects met the inclusion criteria (see Sect. III-D). In Part 1, subjects completed two training modules, one introducing goal modeling more generally using VidGM, and the other introducing the minimal required subset of the Evolving Intentions framework (using VidSim). We used the Course and Employment models in Part 1 and in the ‘Training: EVO’ module in Parts 2 and 3 (see Tbl. II). Specifically, the Course model was used as part of our training materials, including videos, to introduce new concepts. After each module, subjects were asked questions to test their understanding using the Employment model. These questions allowed us to establish a baseline for comparison of subjects’ performance on goal model tasks. In Part 5, we debriefed and remunerated subjects, having them reflect on the study. Parts 2–4 (see Tbl. II) varied based on the subjects’ randomly assigned treatment group. 
All subjects completed the 'Training: EVO' module and answered questions about the Bike and Summer models (see Tbl. IV) after examining each model. What varied is which model (i.e., Bike or Summer) they answered questions about using EVO and whether they answered questions about a model before or after completing the EVO training. This allowed us to control for both variations in the models and a learning effect.

D. Experimental Conditions and Subject Information

We conducted the experiment in early 2023. All subjects were required to be proficient in English, be enrolled at Smith College having previously passed 'Programming With Data Structures', and be known to not have a color vision deficiency (i.e., colorblindness), as well as apply to participate in the study. Subjects were excluded if they had a conflict of interest with our lab. We recruited subjects through a department mailing list and through flyers posted in the science buildings on campus; see the supplement¹ for details. Once subjects applied for the study, they were brought into the lab to complete the one-hour study in person on our lab machine in a soundproof room. Since the subjects were not required to have training in goal modeling, one author was on hand to answer any questions after each training module.

We recruited 32 undergraduate students to participate, eight per treatment group. All subjects achieved a perfect score on the color vision test. During Part 0 of our protocol (see Tbl. II), we asked subjects to rate their familiarity with written English, requirements engineering (RE), and three GORE languages (where 0 is no familiarity and 10 is complete familiarity). Tbl. III reports the median familiarity score for each treatment group. Subjects rated themselves highly with respect to English. One subject in each of XSm–EBk, ESm–XBk, and XBk–ESm rated their familiarity with English between six and nine, while all other subjects selected ten.
The median scores for RE and iStar were low but non-zero. It is likely that some of our participants completed our course in software engineering, and while RE coverage varies each semester, iStar has been covered recently. We did not expect subjects to have any familiarity with Tropos or GRL but included them for completeness. Subjects were randomly assigned to treatment groups before demographic information was collected, so we were unable to use this information in group assignments. IV. RESULTS In this section, we answer our research questions using data collected in our investigation. A. RQ0: Establishing a Baseline for Comparison We begin by answering RQ0: Do modelers across treatment groups perform similarly on basic goal modeling and simulation tasks? All data collected during Part 1 of our protocol (see Tbl. II) was used to establish a baseline both to compare between subjects and to evaluate to what extent subjects understood the training. First, subjects watched the VidGM video and answered eight questions about goal modeling (TNG), and then they watched VidSim and answered six questions (plus one qualitative question) about simulating models over time (TNS); see supplement¹ for questions. All answers were scored as correct or incorrect. Fig. 6 reports box plots for subjects’ training time, test time, and test scores (from left to right), for both the goal modeling and simulation training. Each box plot is sorted by treatment group and times are reported in seconds. For the goal model training (see first row in Fig. 6), most subjects spent 8–9.5 minutes on the initial training (i.e., rounded first to third quartile), which included a 7.5-minute video. Most subjects then took 3–5 minutes to answer the TNG questions, achieving scores between 6 and 8. For the simulation training (see second row), subjects completed the initial training (including a 5-minute video) in 5–6.5 minutes. They then answered the TNS questions in 5–6.5 minutes, achieving scores between 4 and 6.
From the box plots, we cannot observe any meaningful difference between treatment groups. Our null hypothesis was that the treatment groups performed equally well on the questions, both in terms of score and time. We failed to reject our null hypothesis ($p > 0.1$), meaning that we could not detect a difference between the treatment groups. Additionally, subjects were asked to document any questions they had after reviewing the training videos (and associated documents). For the goal modeling training (TNG), eighteen subjects left a substantive question. These questions were most commonly about the evidence pairs, differences in contribution link types, and specific choices made by the modeler of the example. There were two questions about the differences between the training materials and iStar. For the simulation training, fourteen subjects asked a question. The vast majority of these were about the choice and usage of evolving functions; specifically, how to explain the behavior of an intention without an assigned evolving function. Anecdotally, based on our experience teaching goal modeling, these questions are consistent with those asked in the classroom. Since subjects were not trained modelers, researchers answered subjects’ questions before proceeding to the next part of the study. We conclude that subjects performed similarly on basic goal modeling and simulation tasks. B. RQ1: Subjects’ Use of EVO Second, we consider RQ1: To what extent are subjects able to learn EVO, and then use EVO to answer goal modeling questions? Given our RQ0 results, we investigate this question between-subjects using a nested 2x2 design. In Parts 2–4 (see Tbl. II), each subject completed the EVO training module and answered questions about the Bike and Summer models (see Tbl. IV), one using the EVO feature and one without. Thus, we compare the EVO training module and the results of each model separately.
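The between-group comparisons in this and the following subsections rely on a rank-based nonparametric test. Assuming that test is the Kruskal–Wallis test (the description of a distribution-free, outlier-robust test matches it), a minimal stdlib-only sketch on synthetic scores is:

```python
import math
from itertools import chain

def chi2_sf(x, df):
    # Upper-tail chi-square probability; closed forms exist for df = 1, 2, 3.
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    if df == 3:
        return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
    raise ValueError("closed form implemented only for df in {1, 2, 3}")

def kruskal_wallis(*groups):
    """H statistic and approximate p-value (no tie correction)."""
    data = sorted(chain.from_iterable(groups))
    n = len(data)
    rank = {}
    i = 0
    while i < n:                 # average ranks over runs of tied values
        j = i
        while j < n and data[j] == data[i]:
            j += 1
        rank[data[i]] = (i + j + 1) / 2
        i = j
    h = 12 / (n * (n + 1)) * sum(
        len(g) * (sum(rank[v] for v in g) / len(g) - (n + 1) / 2) ** 2
        for g in groups)
    return h, chi2_sf(h, len(groups) - 1)

# Synthetic, clearly separated "scores" for four groups of eight subjects:
h, p = kruskal_wallis(list(range(1, 9)), list(range(11, 19)),
                      list(range(21, 29)), list(range(31, 39)))
print(f"H = {h:.2f}, p = {p:.2e}")  # p is far below 0.001 for these groups
```

With overlapping groups (as in the study's baseline data), the same test yields a large p-value and the null hypothesis of equal group performance is not rejected.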
We divide RQ1 into two sub-questions: (a) Is our training sufficient for learning how to use EVO? and (b) To what extent were subjects able to answer questions with and without EVO? TABLE IV: Summer and Bike Questions <table> <thead> <tr> <th>Page</th> <th>Num</th> <th>Summer Model</th> <th>Bike Model</th> </tr> </thead> <tbody> <tr> <td>P1</td> <td>Q1</td> <td>What is the initial satisfaction value of “Pass Tryouts”?</td> <td>What is the initial satisfaction value of “Prevent Dooring Incident”?</td> </tr> <tr> <td>P1</td> <td>Q2</td> <td>What is the initial satisfaction value of “Exercise”?</td> <td>What is the initial satisfaction value of “Bike Lane Usage”?</td> </tr> <tr> <td>P1</td> <td>Q3</td> <td>Is the initial state of the model more satisfied, denied, or conflicted?</td> <td>Is the initial state of the model more satisfied, denied, or conflicted?</td> </tr> <tr> <td>P2</td> <td>Q4</td> <td>For each of the elements listed below, how many times over the simulation does the element become Fully Satisfied? (a) Have Summer Activity, (b) Pass Tryouts, (c) Exercise</td> <td>For each of the elements listed below, how many times over the simulation does the element become Fully Satisfied? (a) Bike Lane Curbside, (b) Temporary Construction Plan, (c) Public Support</td> </tr> <tr> <td>P2</td> <td>Q5</td> <td>How does “Join Soccer Team” generally evolve over the simulation?</td> <td>How does “Public Support” generally evolve over the simulation?</td> </tr> <tr> <td>P2</td> <td>Q6</td> <td>For each of the following satisfaction values, at which time point in the simulation does the largest number of elements have the value? Note: In the event of a tie, choose the later time point (higher number). (a) Fully Satisfied, (b) Fully Denied, (c) Any Conflicted Value</td> <td>For each of the following satisfaction values, at which time point in the simulation does the largest number of elements have the value? Note: In the event of a tie, choose the later time point (higher number). (a) Fully Satisfied, (b) Fully Denied, (c) Any Conflicted Value</td> </tr> <tr> <td>P2</td> <td>Q7</td> <td>Which intentions are Partially Denied at Time Point 1?</td> <td>Which intentions are Partially Satisfied at Time Point 1?</td> </tr> <tr> <td>P3</td> <td>Q8</td> <td>Which intention would you choose to satisfy to make “Exercise” Fully Satisfied?</td> <td>Which intention would you choose to satisfy to make “Prevent Unloading in Bike Lane” Fully Satisfied?</td> </tr> <tr> <td>P4</td> <td>Q9</td> <td>On the previous page, we asked the question: “Which intention would you choose to satisfy to make “Exercise” Fully Satisfied”? You answered [insert Q8 choice]. Please explain your answer to this question.</td> <td>On the previous page, we asked the question: “Which intention would you choose to satisfy to make “Prevent Unloading in Bike Lane” Fully Satisfied”? You answered [insert Q8 choice]. Please explain your answer to this question.</td> </tr> <tr> <td>P4</td> <td>Q10</td> <td>How would assigning “Drive to and Play Soccer” the value Fully Satisfied influence the model?</td> <td>How would assigning “Parking Curbside” and “Temporary Construction Plan” the value Fully Satisfied influence the model?</td> </tr> <tr> <td>P5</td> <td>Q11</td> <td>Click here for a PDF to compare three different scenarios of the Summer model. Should you choose to join a book club, community garden, or soccer team?</td> <td>Click here for a PDF to compare different scenarios of the Bike Lanes model. How should you construct the bike lanes?</td> </tr> <tr> <td>P6</td> <td>Q12</td> <td>On the previous page, we asked you to compare three different scenarios of the Summer model and answer the question: “Should you choose to join a book club, community garden, or soccer team?” You answered [insert Q11 choice]. Please explain your answer to the previous question.</td> <td>On the previous page, we asked you to compare different scenarios of the Bike Lanes model and answer the question: “How should you construct the bike lanes?” You answered [insert Q11 choice]. Please explain your answer to the previous question.</td> </tr> </tbody> </table> (a) EVO Training. All subjects completed a common EVO training module consisting of six questions. We matched treatment groups EBk–XSm & ESm–XBk (i.e., EVO training in Part 2, see Tbl. II) and XSm–EBk & XBk–ESm (i.e., EVO training in Part 3) to understand if there were any effects of reviewing one of the experimental models (i.e., Bike or Summer) first. Tbl. V lists the score data for the EVO training. All subjects achieved a score of 5 or 6 (out of a possible 6), and the groups are not distinguishable. Fig. 7 shows the box plots for the training and test times for the EVO module. Subjects took between two and five and a half minutes to review the training materials and between one and four and a half minutes for the EVO questions. Our null hypothesis is that there is no significant variation between groups. We fail to reject this hypothesis (KWES, $p > 0.1$), unable to detect variations between groups. Again, subjects were asked to document any questions they had after reviewing the EVO training, with nine subjects asking a question. Questions focused on understanding the simulation results and the differences between the EVO modes. Two subjects asked about the order of the Percent (%) mode, which was further clarified. Thus, subjects learned and demonstrated proficiency in using EVO in under ten minutes. (b) Answering Questions with EVO. We now review subjects’ ability to answer the model questions listed in Tbl. IV. Q4 and Q6 were each scored out of 3, one for each sub-question.
Q9 and Q12 were excluded from scores as they were used to validate the answers of Q8 and Q11, respectively. Thus, each model was scored out of 14. Tbl. VI lists median scores for each treatment group. Scores ranged between eight and fourteen for the Bike model, with a median score of thirteen. Scores for the Summer model ranged between nine and fourteen, with a median score of twelve. EVO produced a slightly better median for the Bike model but also a slightly worse median for the Summer model. The questions answered best by subjects were Q1, Q3, and Q5 (see Tbl. IV), with only one incorrect answer per question across the Bike and Summer models combined. The worst performing question was Q6(b) for the Summer model and Q6(a) for the Bike model. The phrasing of Q6 can be improved (see Sect. V-A for a discussion). Given the score data in Tbl. VI, we did not expect to find variations between groups (i.e., our null hypothesis) and, in fact, did not find any statistical difference between treatment groups (KWES, $p > 0.1$) with respect to the subjects’ scores for the Bike and Summer model questions. We conclude that subjects were able to learn EVO, and then use EVO to answer goal modeling questions. C. RQ2: Comparing EVO with the Control Next, we consider RQ2: How does EVO compare with the control in terms of time and subjects’ perceptions? We again break this research question into two sub-questions: (a) Does EVO help subjects make decisions faster? and (b) How do subjects perceive EVO? TABLE VI: Median scores (out of fourteen) for Bike and Summer questions. Bold indicates subject group used EVO. <table> <thead> <tr> <th>Group</th> <th>Bike Median</th> <th>Summer Median</th> </tr> </thead> <tbody> <tr> <td>EBk-XSm</td> <td>13</td> <td>12.5</td> </tr> <tr> <td>XSm-EBk</td> <td>13.5</td> <td>13</td> </tr> <tr> <td>ESm-XBk</td> <td>12</td> <td>12</td> </tr> <tr> <td>XBk-ESm</td> <td>13</td> <td>11.5</td> </tr> </tbody> </table> Fig.
8: Timing Data (in seconds) for answering Bike and Summer questions (see Tbl. IV). (a) Bike and Summer Times. To measure subject completion times, we added their times from Pages 1, 2, 3, and 5 (see Tbl. IV). Pages 4 and 6 were excluded because they contained solely free-form answers where subjects’ time depended on the length of their answer. The times for both models are comparable, ranging from five to twenty minutes. Fig. 8 gives the box plot for each treatment group for the Bike and Summer model question times. In the Bike model (left side), EBk–XSm (red) and XSm–EBk (green) used EVO to answer the questions and have visibly lower times. Again, our null hypothesis is that there is no difference between treatment groups. Using the KWES test, we find the times for the Bike model to be significantly faster ($p < 0.01$). In the Summer model (right side), ESm–XBk (blue) and XBk–ESm (purple) used EVO to answer the questions and also have visibly lower times. Again using the KWES test, we find the times for the Summer model to be significantly faster ($p < 0.001$). Upon further inspection of Fig. 8, we observe a possible learning effect: the results are more pronounced when subjects used EVO after the control (i.e., XSm–EBk (green) for the Bike model and XBk–ESm (purple) for the Summer model). Yet, when we conduct a pair-wise comparison based on treatment group order and EVO, we do not find a significant difference with respect to order but we do find one with respect to using EVO; thus, we hypothesize that the interaction of subjects completing the control condition first and then using EVO may contribute to this additional benefit. Therefore, we found a significant effect between the treatment groups with respect to the time required to answer the Bike and Summer questions. (b) Qualitative Perspectives. Finally, we performed a qualitative analysis on the question, “Compare and contrast the colored views with the non-colored views, which do you prefer?
Why?” All subjects preferred the EVO view over the control. More than half said that EVO was faster and/or easier to use. Other comments include that EVO was more intuitive, better for comparing models, and improved subjects’ high-level understanding of the model. While no critiques of EVO were present in this question, we discuss subjects’ recommendations for improving EVO in Sect. IV-D. We conclude that subjects preferred using EVO over the control. Subjects’ completion times were faster with EVO. D. Improvements and Recommendations Finally, we address RQ3: How do subjects rate the study instruments and experience? To answer this question, we collected optional quantitative ratings after each module and qualitative reports at the end. For each of Parts 1–4 in Tbl. II (i.e., the initial training sequence, the EVO training, the Summer model, and the Bike model), subjects rated their experience completing each part. They were asked to rate their difficulty with the three aspects (where 0 was no difficulty and 10 was complete difficulty): (i) understanding the scenario description, (ii) understanding the model, and (iii) answering the questions. Tbl. VII gives the average difficulty rating for each aspect and each part. Subjects had the most difficulty during the initial training phase, which seems appropriate because subjects had very limited familiarity with RE and goal modeling (see Tbl. III, discussed in Sect. III-D). Subjects perceived the Bike scenario and questions as slightly more difficult than the Summer model but perceived the models similarly. The EVO training was rated as the least difficult part, with average scores of 2.3–2.6. While this provides additional data for our assertions in RQ1, comparing between the scores in Tbl. VII is confounded by the fact that the EVO training was the shortest module and built on the Part 1 training. Finally, we asked subjects for suggestions and additional comments.
Specifically, to gather suggestions, we asked the question: “What suggestions or changes would you recommend to the developers of this goal modeling language (and tool)?” Tbl. VIII lists the recommendations provided by subjects, organized into three categories: improvements that can be made to EVO, goal modeling, and our study instrumentation. Subjects made a variety of recommendations about improving the look and feel of EVO, from changing the colors of conflicting evidence pairs to adding ticks to show time points in the Time mode. We are aware of the accessibility issues associated with red-blue color vision deficiencies (see Sect. VII for details). Since this study was conducted in isolation from tooling and other approaches, many of the goal modeling recommendations have already been investigated by other approaches. For example, goal prioritization, XOR links, model-level metrics, and quantitative valuations have all been investigated by researchers [15], [16], [17], [18]. We found the recommendation about improving the visual aspects of the links of interest and may pursue this in future work. Finally, subjects recommended improvements to our study instrument. Subjects recommended clarifying the differences between link types, evolving function types, and the difference between the initial state and time point 0. Specifically, with respect to EVO, one subject thought more explanation was required to understand the difference between % and Time mode. Other comments included adding a progress bar and improving our study handouts and questions. Three subjects (excluded from Tbl. VIII) encouraged the developers to implement the EVO feature. Six subjects provided additional comments. Of these responses, three mentioned that the survey was long/hard, one said that they do not like goal modeling, one thought that $(F, F)$ is the color black, and the final comment explained an inconsistency in the subject’s answer to a previous question.
We conclude that subjects rated the study instruments and experience as suitable and not overly difficult; yet, roughly 10% reported that the study was long or hard. Subjects found the initial training most difficult and the EVO training easiest. V. Discussion Next, we describe our lessons learned, compare the Bike and Summer models, and discuss the validity of our experiment. A. Lessons Learned and Implications for Research Subject Background and Recruitment. We developed this study instrument over a six-month period. We first iterated the instrument with individuals in our lab, then completed a small pilot with four subjects. The purpose of the pilot was to evaluate the quality of our instrument and understand what timing data was generated from our Qualtrics® XM platform. The pilot helped us improve the quality of the data we collected. We added opportunities for subjects to take breaks and originally collected one timing value for Q1–Q12 in Tbl. IV. We discovered these values varied dramatically based on how much text subjects entered in the free-form questions. As listed in Tbl. IV, we separated these questions across six pages (see Page column) and added timing information to each page. It was extremely difficult to recruit subjects for a survey that took a full hour. Due to Smith College policies and U.S. tax legislation, we were not able to offer remuneration in an amount over $20 USD. We launched three separate iterations of the study. Our first attempt emailed researchers within the goal modeling community and targeted trained modelers. We received five responses and of these, only one completed the study instrument. Our second attempt was to recruit subjects within a large software engineering class with Tropos instruction at another institution, again receiving only one completed response. After two unsuccessful attempts, we pivoted to an in-person lab study. We updated our protocol to include additional training and recruited students as described in Sect.
III-D. There may be a cognitive difference between participating in a one-hour in-person lab session as opposed to completing a one-hour online survey, even when remuneration amounts are the same. We had sufficient volunteers for our in-person version and felt this was an important lesson learned. Improvements to the Study Instrument. We reviewed the questions and supplemental information from the study by Hadar et al. [7] and iteratively developed our study instrument. We encourage other researchers to use and adapt our survey instruments; thus, we report potential areas for improvement. For example, in question Q6 (for both the Bike and Summer models, see Tbl. IV), we asked “how many times over the simulation does the element become Fully Satisfied”, which would have been better rephrased as, “how many time point(s) over the simulation is the element Fully Satisfied”. It was sometimes difficult to achieve task equivalency. For example, the tasks in question Q8 (see Tbl. IV) are not exactly matched between models. The correct Q8 answer for the Bike model was none of the above because no intentions fulfill Prevent Unloading in Bike Lane. To satisfy Exercise in the Summer model requires either Water-Weed-Enjoy Garden or Drive to and Play Soccer, but we did not include Drive to and Play Soccer as an option, intending subjects to select Water-Weed-Enjoy Garden. Since the Bike model had a none of the above, we included the same for the Summer question, yet this resulted in subjects choosing it because they wanted to select Drive to and Play Soccer. <table> <thead> <tr> <th>TABLE VIII: Recommendations for Improvement</th> </tr> </thead> <tbody> <tr> <td><strong>EVO Improvements</strong></td> </tr> <tr> <td>- Add ticks or an outline to time mode. (x4)</td> </tr> <tr> <td>- Choose prettier colors (and better fonts). (x2)</td> </tr> <tr> <td>- Better contrast between text color and EVO color.
(x2)</td> </tr> <tr> <td>- Change conflict colors:</td> </tr> <tr> <td>- All conflicts the same color.</td> </tr> <tr> <td>- $(P, P)$ should be grey, reduce visual noise.</td> </tr> <tr> <td>- Use green/yellow for conflicting evidence pairs.</td> </tr> <tr> <td>- Left to right arrow on time mode.</td> </tr> <tr> <td>- Eliminate possible left-right bias in % mode.</td> </tr> <tr> <td>- Colors may not be accessible to all users. (x2)</td> </tr> <tr> <td><strong>Goal Modeling Improvements</strong></td> </tr> <tr> <td>- Add goal prioritization in models.</td> </tr> <tr> <td>- Organize models as decision tree.</td> </tr> <tr> <td>- Improve visualization of links (maybe with color).</td> </tr> <tr> <td>- Create model-level metrics (in a table).</td> </tr> <tr> <td>- Distinguish between OR and XOR links.</td> </tr> <tr> <td>- Make evolving functions more explicit.</td> </tr> <tr> <td>- Add more possible values for $(s, d)$.</td> </tr> <tr> <td><strong>Study Instrument Improvements</strong></td> </tr> <tr> <td>- Clarify difference between $+\text{S}$ and $++\text{S}$. (x2)</td> </tr> <tr> <td>- Better explain evolving functions.</td> </tr> <tr> <td>- Clarify difference between initial state and time point 0. (x2)</td> </tr> <tr> <td>- Clarify difference between % and Time mode.</td> </tr> <tr> <td>- Organize handout landscape with models left to right.</td> </tr> <tr> <td>- Text too crowded/overlap, make images simpler/larger. (x2)</td> </tr> <tr> <td>- Change “become Fully Satisfied” wording in Q6.</td> </tr> <tr> <td>- $(F, F)$ looks black, not dark purple.</td> </tr> <tr> <td>- Add progress bar to questionnaire.</td> </tr> </tbody> </table> In a future iteration of this instrument, we would change the selected intention for the Bike model and remove the none of the above option. In our analysis, we were unable to detect any differences between scores on the models with or without EVO. Future work is required to determine whether our study instrument is sufficiently discriminatory. One of the aspects we iterated on was the length and complexity of the questions we asked in this study. We opted for a balance in these factors to ensure that subjects would complete the study in one hour, which we agreed upon as a reasonable upper bound. **Statistical Methods.** Given our per-group sample size, any statistical test will have lower power to make conclusions (see Sect. V-C and online¹). In Sect. IV, we used the KWES test to evaluate if there are distinct groupings within our sample data [14]. The KWES test is valuable for small-sample data because it does not make assumptions about the distribution of the data and is not influenced by data points that vary greatly in magnitude, which is useful for time data. **B.
Comparing Bike and Summer Models** As introduced in Sect. III-A, we explored our research questions between-subjects. In Sect. IV, we found a statistically significant difference between using EVO and the control in the time it took subjects to answer questions about both the Bike and Summer models. Yet, in this test, we cannot directly compare the times associated with the Bike and Summer models or control for individual subject variability. We briefly explore variations of the time it took subjects to answer the test questions (i.e., our dependent variable). We compare test times given three factors (independent variables): (i) whether the subject used EVO, (ii) whether it was the first or second measurement for that subject, and (iii) whether the measurement was of the Bike or Summer model. In order to identify which factors are significant, we compared within subjects by fitting multiple linear mixed-effects models and then conducted a model comparison with repeated-measures data using a likelihood ratio test (i.e., ANOVA) [19]. We used a linear mixed-effects model to account for non-independence (i.e., there were two measurements for each subject). Comparing the full model to one with interactions between factors showed that the interaction terms in the model are not significant ($p > 0.05$). We found the EVO factor to be significant ($p < 0.001$), meaning that within subjects there was a difference in the time it took subjects to answer questions with EVO as opposed to without EVO. Whether subjects were given the control or the treatment first was also significant ($p < 0.001$), implying that there was a learning effect over time. Which model was measured was not significant ($p > 0.05$), meaning that there is no significant difference in the times for the Summer and Bike models.
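The nested-model comparison described above can be sketched as a likelihood-ratio test. The log-likelihood values below are invented placeholders (in practice they would come from fitted mixed-effects models, e.g., with and without an EVO term), and the closed-form chi-square tails cover only one to three degrees of freedom:

```python
import math

def chi2_sf(x, df):
    # Upper-tail chi-square probability; closed forms for df = 1, 2, 3.
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    if df == 3:
        return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
    raise ValueError("closed form implemented only for df in {1, 2, 3}")

def likelihood_ratio_test(loglik_reduced, loglik_full, df):
    """2*(ll_full - ll_reduced) is approximately chi-square(df) under the
    null that the dropped terms do not improve the fit."""
    stat = 2 * (loglik_full - loglik_reduced)
    return stat, chi2_sf(stat, df)

# Placeholder log-likelihoods (NOT fitted values from the study's data):
stat, p = likelihood_ratio_test(loglik_reduced=-210.4, loglik_full=-204.9, df=1)
print(f"LR stat = {stat:.2f}, p = {p:.4f}")  # stat = 11.00; significant at 0.05
```

A small p-value here would mean the dropped factor (e.g., EVO usage) significantly improves the model's fit, mirroring the significance results reported above.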
Since there is no significant difference between models and no interaction effect, we can analyze this as a two-way ANOVA where using EVO and the order of EVO presentation are the two factors. Using a statistical power test for repeated-measures ANOVA within-subjects with a medium effect size, we found that the minimum sample size using G*Power [20] for our experiment was 56. Thus, we have low statistical power. We did not find any difference between the Bike and Summer models and found the presence of a learning effect within subjects. **C. Threats to Validity** We discuss threats to validity using the categories in [13]. **Conclusion Validity.** Our main threat in this experiment is low sample size: having 32 subjects spanning four treatment groups is considered a small sample. Thus, we chose to conduct our main analysis between-subjects to mitigate this threat. We may have experienced a reliability-of-measures threat, as subjects asked questions about the wording of Q6 (see Sect. V-A). We wrote scripts to analyze our data wherever possible and automatically recorded page completion times to ensure reliable measurements. Qualitative data was randomized before review and categorization. Different authors conducted the in-person and data analysis components to reduce researcher bias. To mitigate variations in treatment implementation, we standardized the experimental setup by using our online platform, videos, and PDF handouts to ensure that the subjects had equivalent training materials (see Sect. III-B), and maintained our laboratory setup throughout the study period to ensure a consistent in-person experience. We do not believe there is a random heterogeneity of subjects risk, since our population was homogeneous, having similar knowledge, abilities, and previous experience with English, Tropos, and RE (see Tbl. III).
In a future study, we would collect data about subjects’ year in the undergraduate program (e.g., first-year, seniors) to further mitigate this risk. **Internal Validity.** We explicitly designed our study to control for a learning-effect or maturation risk (i.e., where one group learns a treatment faster than another). We gave subjects opportunities to take breaks if they were fatigued and shortened the instrument wherever possible. We controlled for an instrumentation effect in our 2x2 design; yet, the Bike model questions may have been slightly harder (see Sect. IV-B). With this design, there is still a risk of carryover effects [21]. Our voluntary study with cash remuneration may have experienced a selection effect. To our knowledge, no subjects used BloomingLeaf or EVO prior to the study. **Construct Validity.** We conducted multiple pilot mini-studies (not discussed in this paper) to ensure that our study instrument was measuring our intended constructs. In one such study, we found that our unit of time measurement was inaccurate because it included too many questions; hence, we divided the questions across multiple pages as listed in Tbl. IV and isolated the qualitative questions. We collected data in multiple forms (e.g., scores and times) and asked different types of questions to mitigate mono-method and mono-operation biases. As always, we have threats of hypothesis guessing and evaluation apprehension. Some subjects expressed nervousness, asking if they needed to review data structures or read about goal modeling before participating. Some students who took a software engineering course may have scored better overall; yet, our common training protocol may have limited this threat. **External Validity.** Our setting was not reflective of the use of EVO in the “real world”. We conducted the experiment one-on-one in our lab using a survey, instead of embedding EVO within a goal modeling tool (e.g., BloomingLeaf).
Due to constraints on participant time, we were unable to validate EVO on large models that are more reflective of “real world” scenarios. Our homogeneous population of undergraduate students means that we cannot generalize to the broader RE population, but given the limited prior knowledge of our subjects (see Tbl. III), these results may, in fact, generalize. Additional experiments with different populations, problem domains, and larger models for scalability are required. VI. RELATED WORK Recent work has critiqued the adaptability of GORE approaches [22]. In this paper, we address this gap by improving the interpretability of Tropos evidence pairs. As introduced in Sect. I, Hadar et al. [7] and Siqueira [8] studied the comprehensibility of Tropos models with respect to Use Case models. While it is difficult to compare our results with these studies because we only evaluate Tropos models, this work was influential in the design of our study and in highlighting the importance of controlling for the use of different models while investigating the performance of subjects on analysis tasks. Using color as a technique to improve visualizations of goal models has been a topic of recent interest within the community. Amyot et al. used colors to visualize analysis results in the jUCMNav tool for URN [17], while TimedGRL used color in heat maps to visualize evolving GRL models [1]. Varnum et al. proposed using colors to help stakeholders interpret the evidence pairs used in Tropos for intention evaluations [10]. At the same time, Oliveira and Leite proposed mapping the primary colors onto NFR soft goal labels and contribution links, allowing color values to be quantitatively calculated and propagated throughout the model [23]. Varnum et al. used a static set of colors, whereas Oliveira and Leite use a large range of colors calculated dynamically. In reviewing these approaches, we chose to first validate the coloring approach of Varnum et al.
because of its static nature, which made it easier to evaluate experimentally and understand whether color was an effective approach. Further research is required to validate the choice of colors in both approaches, and whether the dynamic nature of Oliveira and Leite’s approach causes an additional cognitive load that reduces the overall effectiveness. We built on the methodology of similar studies in RE for our between-subjects experiment and followed the guidance in [13] and [24]. Winkler et al. reported on a between-subjects 2x2 design similar to ours with sixteen subjects [25]. The authors assumed that the treatment group had increased precision and a reduction in time to complete the tasks due to working with direct output from the tool, whereas the control group completed the task manually. We attempted to control for differences in tool usage by providing both groups with direct output from BloomingLeaf. Ghazi et al. reported a study comparing two navigation techniques for requirements modeling tools [26]. They used time limits to motivate the participants to work as fast as they would on real tasks in industry, giving the subjects about five minutes to try out the tool. However, this may force subjects to work faster, which may degrade their results. To prevent this, we let the subjects take the time needed to review the training documents, since our population comprised new learners. Santos et al. presented a quasi-experiment to explore the interpretability of iStar models given different concrete syntax [27]. Subjects were tasked with identifying defects in a goal model, a task we did not include in our study as it may have been too difficult for new learners and increased their fatigue. VII. CONCLUSIONS AND FUTURE WORK In this paper, we explored how using EVO to visualize evidence pairs impacts an individual’s ability to reason with goal models that evolve over time.
To do so, we conducted an IRB-approved between-subjects experiment with 32 undergraduate students. We found that when given a consistent training protocol for goal modeling and simulation, each treatment group performed equally well on the initial training modules, establishing a baseline for comparison between treatment groups. Subjects were able to learn EVO in under ten minutes and use the extension to make decisions. From this experiment, we concluded that subjects were able to answer goal modeling comprehension questions with EVO faster than without EVO but we did not find a significant difference between the scores of subjects who answered questions with and without EVO. Thus, there was no evidence that EVO has an impact on an individual’s understanding of goal models. However, subjects had a positive response to EVO and all preferred the EVO view over the control, with most saying that EVO was faster or easier to use. Finally, our subjects, without prior training in GORE, were able to complete the instrument without much difficulty. By demonstrating the impacts of EVO, we increase the potential of automated analysis techniques in Tropos. We share our materials as part of our open-science package. Given the empirical evidence of the effectiveness of EVO presented in this paper, we encourage the original authors to continue their development of EVO within BloomingLeaf. Additionally, as mentioned in Sect. VI, the selected colors of blue, red, and purple should be validated. Our subjects proposed several alternatives for conflicting colors in Tbl. VIII. We are investigating these alternative color palettes, as well as palettes for colorblind users. In future work, we intend to replicate our study in order to establish external validity (see Sect. V-C), both with subjects in a different context and using EVO embedded within BloomingLeaf and other goal modeling tools. 
Additionally, future work includes conducting case studies of real groups in early-phase RE using EVO. Other future work includes extending and validating the EVO feature with other types of analysis. Acknowledgments. We thank our study participants. Thanks to Kaitlyn Cook for assisting in our statistical analysis. This material is based upon work supported by the National Science Foundation under Award No. 2104732. REFERENCES
EXPLOITING SHARED ONTOLOGY WITH DEFAULT INFORMATION FOR WEB AGENTS YINGLONG MA*, BEIHONG JIN†, AND MINGQUAN ZHOU‡ Abstract. When different agents communicate with each other, there needs to be some way to ensure that the meaning of what one agent embodies is accurately conveyed to another agent. It has been argued that ontologies play a key role in communication among different agents. However, in some situations, because there exist terminological heterogeneities and incompleteness of pieces of information among the ontologies used by different agents, communication among agents becomes very complex and difficult to handle. In this paper, we propose a solution to the problem for these situations. We use distributed description logic (DDL) to model the mappings between the ontologies used by different agents and further make a default extension to the DDL for default reasoning. Then, based on the default extension of the DDL model, a complete information query can be reduced to checking the default satisfiability of the complex concept corresponding to the query. Key words. Ontology, Description Logic, Multi-agent System, Satisfiability, Default reasoning. 1. Introduction. Agents often utilize the services of other agents to perform given tasks within multi-agent systems [1]. When different agents communicate with each other, there needs to be some way to ensure that the meaning of what one agent embodies is accurately conveyed to the other agent. Ontologies play a key role in communication among different agents because they provide and define a shared vocabulary about a definition of the world and the terms used in agent communication. In real-life scenarios, agents such as Web agents [2] need to interact in a much wider world. The future generation Web, called the Semantic Web [3], originates from the form of decentralized vocabularies - ontologies, which are central to the vision of the Semantic Web's multi-layer architecture [4].
In the context of future Semantic Web intelligence, there are terminological knowledge bases (ontologies), reasoning engines, and also standards that make reasoning with marked-up concepts on the Web possible. It now seems clear that the Semantic Web will not be realized by agreeing on a single global ontology, but rather by weaving together a large collection of partial ontologies that are distributed across the Web [5]. In this situation, the assumption that different agents completely share a vocabulary is unfeasible and even impossible. In fact, agents will often use private ontologies that define terms in different ways, making it impossible for the other agent to understand the contents of a message [6]. Uschold identifies some barriers to agent communication, which can be classified into language heterogeneity and terminological heterogeneity [7]. In this paper, we focus on terminological heterogeneity and do not consider the problem of language heterogeneity. To overcome these heterogeneity problems, there is a need to align the ontologies used by different agents; the most often discussed approaches are merging and mapping of ontologies [8, 9]. However, these efforts are not enough. For communication among agents with heterogeneous ontologies, some problems remain to be solved. In some situations, only incomplete information can be obtained. This happens sometimes because pieces of information are unavailable, and sometimes because of semantic heterogeneities (here, we focus on terminological heterogeneities) among ontologies from different sources. Another problem is that there always exist exceptional facts that conflict with commonsense information. For example, birds can commonly fly, and a penguin is a bird, but penguins cannot fly. In these situations, communication among agents becomes more complex and difficult to handle.
We must consider not only the alignment of the ontologies used by different agents, but also the implicit default information hidden among these ontologies. Reasoning over queries should then be based on both the explicitly represented ontologies and the implicit default information. This form of reasoning is called default reasoning, which is non-monotonic. Little attention, however, has been paid to the problem of endowing the logics above with default reasoning capabilities. For a long time, representation and reasoning in description logic (DL) [10] have been used in a wide range of applications, and DLs are usually given a formal, logic-based semantics. Another distinguishing feature is the emphasis on reasoning as a central service. Description logic is very useful for defining, integrating, and maintaining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. DLs are therefore ideal candidates for ontology languages [11]. DAML+OIL [12, 15] and OWL [13] are clear examples of ontology languages based on Description Logics. Recently, Borgida and Serafini proposed an extension of the formal framework of Description Logic to distributed knowledge models [14], called distributed description logic (DDL). A DDL consists of a set of terminological knowledge bases (ontologies) and a set of so-called bridge rules between concept definitions from different ontologies. Two kinds of bridge rules are considered in DDL. *Computer Sciences and Technology Department, North China Electric Power University, Beijing 102206, P. R. China, m.y.long@etcaix.ucas.ac.cn. †Technology Center of Software Engineering, Institute of Software, Chinese Academy of Sciences, P. O. Box 8718, Beijing 100080, P. R. China, jbh@etcaix.ucas.ac.cn. ‡School of Information Sciences, Beijing Normal University, Beijing 100875, P. R. China, mzhou@bnu.edu.cn.
Another important feature of DDL is the ability to transform a distributed knowledge base into a global one. In other words, existing description logic reasoners can be applied to derive new knowledge. We adopt the view of [6] that the mappings between ontologies will mostly be established by individual agents that use different available ontologies in order to process a given task. In our opinion, a solution is to model the mappings between the ontologies used by different agents using a DDL and further make a default extension to the DDL for default reasoning. Then, based on the default extension of the DDL model, a complete information query can be reduced to checking the default satisfiability of the complex concept corresponding to the query. This paper is organized as follows. Section 2 presents our motivation for making a default extension to DDL for communication among multiple agents. Section 3 introduces representation and reasoning related to ontologies; distributed description logic is introduced in particular. In Section 4, we provide a formal framework for the default extension to description logic. Default reasoning based on an EDDT is discussed in Section 5. Meanwhile, an algorithm is proposed for checking the default satisfiability of a given concept or a terminological subsumption assertion. Section 6 and Section 7 present related work and conclusions, respectively. 2. Motivation. In order to process a given task in multi-agent systems, it is important and essential for different agents to communicate with each other. However, there often exist terminological heterogeneities and incompleteness of pieces of information among the ontologies used by different agents, which prevent an agent from completely understanding the terms used by another agent. In these situations, it is difficult and even impossible to realize communication among agents. We propose a solution to this problem.
We model the knowledge representation of multiple agents using distributed description logic. The internal mappings between the ontologies used by different agents are defined using the so-called bridge rules of distributed description logic. Then, by making a default extension to the DDL model, we can explicitly express some of the default information hidden among these ontologies. Based on the extension to the DDL model, a query can be reduced to checking the default satisfiability of a concept or an assertion corresponding to the query. More precisely, an adapted algorithm is proposed for checking default satisfiability. Fig. 2.1. The situation of the communication problem between two agents. In order to state the problem to be resolved more clearly, we make some simplifying assumptions. We only consider communication between two agents whose ontologies are encoded in the same language. Then, we assume that the ontologies used by the two agents have sufficient overlap such that internal mappings between them can be found. The example shown in Figure 2.1 illustrates the situation described in this paper for our application. In multi-agent systems, ontologies are used as the explicit representation of a domain of interest. To process a given task, an agent may use multiple ontologies, which usually supplement each other and form a complete model. However, in this model, the default information among these ontologies is not considered. For example, we may establish the internal mapping specifying that BIRD is a subclass of NON_SPEAKING_ANIMAL. Through the agent using ontology 1, we pose the query BIRD ⊓ NON_SPEAKING_ANIMAL. The problem is that the agent using ontology 1 does not know the meaning of the term NON_SPEAKING_ANIMAL, which can only be understood by the agent using ontology 2. To get complete and correct query results, the two agents must coordinate with each other. Another problem is that the query results of the agent will include SPARROW and PARROT.
We find that these results are only partially correct, because parrots can speak like humans. The reason for the partially correct results is that we have not considered the default facts: in most cases, birds cannot speak; parrots belong to the class of birds, but they can speak. In our opinion, default information should be considered and added into the model with multiple ontologies, which then forms a sufficiently complete model. Reasoning support for ontology languages is then based on the model with default information. 3. Representation and Reasoning Related to Ontologies. A formal and well-founded ontology language is the basis for knowledge representation and reasoning about the ontologies involved. Description Logic is a formalism for knowledge representation and reasoning. Description logic is very useful for defining, integrating, and maintaining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. It is thus an ideal candidate for ontology languages. One of the important proposals that have been made for well-founded ontology languages for the Web is DAML+OIL. Recently, description logic has heavily influenced the development of Semantic Web languages. For example, the DAML+OIL ontology language is just an alternative syntax for a very expressive description logic [12]. So in the following sections, we use the syntax and semantics of the description logics involved instead of DAML+OIL. Description Logics are equipped with a formal, logic-based semantics. Another distinguishing feature is the emphasis on reasoning as a central service. 3.1. Description Logic. The basic notions in DL are concepts, which denote sets of individuals on a domain of individuals, and roles, which represent binary relations on the domain of individuals. A specific DL provides a specific set of constructors for building more complex concepts and roles.
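The bird/parrot scenario above is the classic default-reasoning pattern: a default attached to a class is overridden by a more specific exception. The following is a minimal sketch, assuming a toy taxonomy and an invented rule encoding of our own; it illustrates the motivation only and is not the DDL default extension developed later in the paper.

```python
# A minimal sketch of default reasoning with exceptions: a default rule
# applies to a class unless a more specific subclass overrides it.
# The taxonomy and the encoding of defaults are illustrative only.

subclass_of = {          # child -> parent in a toy taxonomy
    "SPARROW": "BIRD",
    "PARROT": "BIRD",
    "BIRD": "ANIMAL",
}

# Defaults attach a property value to a class; an entry for a more
# specific class overrides the inherited one (most specific wins).
defaults = {
    ("BIRD", "can_speak"): False,    # by default, birds cannot speak
    ("PARROT", "can_speak"): True,   # exception: parrots can speak
}

def ancestors(cls):
    """Yield cls and its superclasses, most specific first."""
    while cls is not None:
        yield cls
        cls = subclass_of.get(cls)

def holds(cls, prop):
    """Return the value of prop for cls under the defaults, or None."""
    for c in ancestors(cls):         # most specific default wins
        if (c, prop) in defaults:
            return defaults[(c, prop)]
    return None

# Query: which birds are non-speaking animals?
non_speaking_birds = [c for c in ("SPARROW", "PARROT")
                      if holds(c, "can_speak") is False]
print(non_speaking_birds)  # ['SPARROW'] -- PARROT is excluded as an exception
```

This reproduces the intended query result: SPARROW is returned, while PARROT is correctly excluded by the exception, which a model without default information cannot express.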
For example: - the symbol \( \top \) is a concept description denoting the top concept, while the symbol \( \bot \) stands for the inconsistent concept, called the bottom concept. - the symbol \( \sqcap \) denotes concept conjunction, e.g., the description Person\(\sqcap\)Male denotes the class of men. - the constructor \( \forall R.C \) denotes universal role quantification (also called value restriction), e.g., the description \( \forall\text{hasChild}.\text{Male} \) denotes the set of individuals whose children are all male. - the number restriction constructors \( (\geq n\ R.C) \) and \( (\leq n\ R.C) \), e.g., the description \( (\geq 1\ \text{hasChild}.\text{Doctor}) \) denotes the class of individuals who have at least one child who is a doctor. The various description logics differ from one another in the set of constructors they allow. Here, we show the syntax and semantics of \( \mathcal{ALCN} \) [16], which are listed in Figure 3.1. We can then make several kinds of assertions using these descriptions. There are two kinds of assertions: subsumption assertions of the form \( C \sqsubseteq D \) and assertions about individuals of the form \( C(a) \) or \( p(a,b) \), where \( C \) and \( D \) denote concepts, \( p \) denotes a role, and \( a \) and \( b \) are individuals. For example, the assertion \( \text{Parent} \sqsubseteq \text{Person} \) denotes the fact that the class of parents is subsumed by the class of persons. The description \( \text{Person}(a) \) denotes that the individual \( a \) is a person, while the description \( \text{hasChild}(a,b) \) denotes that \( a \) has a child \( b \). The collection of subsumption assertions is called a Tbox, which specifies the terminology used to describe some application domain. A Tbox can be regarded as a terminological knowledge base of the description logic.
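The semantics of these constructors can be illustrated by evaluating concept descriptions over a small finite interpretation. The sketch below computes extensions by enumeration (it is not a tableau-based satisfiability procedure); the domain, concept, and role extensions are invented for illustration.

```python
# Evaluate some ALCN concept constructors over a tiny finite interpretation
# (domain, concept extensions, and role extensions are invented).

domain = {"alice", "bob", "carol", "dave"}
concepts = {
    "Person": {"alice", "bob", "carol", "dave"},
    "Male": {"bob", "dave"},
    "Doctor": {"carol"},
}
roles = {
    "hasChild": {("alice", "carol"), ("alice", "dave"), ("bob", "dave")},
}

def successors(role, x):
    """All role-successors of individual x."""
    return {y for (s, y) in roles[role] if s == x}

def conj(c, d):                      # C ⊓ D
    return concepts[c] & concepts[d]

def value_restriction(role, c):      # ∀R.C (vacuously true with no successors)
    return {x for x in domain if successors(role, x) <= concepts[c]}

def at_least(n, role, c):            # (≥ n R.C)
    return {x for x in domain if len(successors(role, x) & concepts[c]) >= n}

# Person ⊓ Male: the class of men
print(sorted(conj("Person", "Male")))                 # ['bob', 'dave']
# (≥ 1 hasChild.Doctor): individuals with at least one child who is a doctor
print(sorted(at_least(1, "hasChild", "Doctor")))      # ['alice']
```

Note that the value restriction is vacuously satisfied by individuals with no role successors, matching the standard DL semantics of \( \forall R.C \).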
An interpretation for a DL is a pair \( \mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}}) \), where \( \Delta^{\mathcal{I}} \) is a domain of objects and \( \cdot^{\mathcal{I}} \) is the interpretation function. The interpretation function maps roles into subsets of \( \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}} \), concepts into subsets of \( \Delta^{\mathcal{I}} \), and individuals into elements of \( \Delta^{\mathcal{I}} \). Satisfaction and entailment for a DL Tbox \( T \) are defined by the following notation:
\[
\begin{align*}
\mathcal{I} \models C \sqsubseteq D &\iff C^{\mathcal{I}} \subseteq D^{\mathcal{I}} \\
\mathcal{I} \models T &\iff \text{for all } C \sqsubseteq D \text{ in } T,\ \mathcal{I} \models C \sqsubseteq D \\
\models C \sqsubseteq D &\iff \text{for all interpretations } \mathcal{I},\ \mathcal{I} \models C \sqsubseteq D \\
T \models C \sqsubseteq D &\iff \text{for all interpretations } \mathcal{I} \text{ such that } \mathcal{I} \models T,\ \mathcal{I} \models C \sqsubseteq D
\end{align*}
\]

Fig. 3.1. Syntax and semantics of ontology representation

3.2. Distributed Description Logic. A DDL is composed of a collection of "distributed" DLs, each of which represents a subsystem of the whole system. The DLs in a DDL are not completely independent of one another, as the same piece of knowledge might be presented from different points of view in different DLs. Each DL autonomously represents and reasons about a certain subset of the whole knowledge. Distributed description logic (DDL) can be used to model heterogeneous distributed systems by modeling relations between objects and relations between concepts contained in different heterogeneous ontologies. A DDL consists of a collection of DLs, written \( \{DL_i\}_{i \in I} \); each local DL in the DDL is distinguished by a different subscript.
The constraint relations between concepts of different DLs are described explicitly by so-called "bridge rules", while the constraints between the corresponding domains of different DLs are captured by so-called "semantic binary relations" (domain relations). In order to support directionality, the bridge rules from \(DL_i\) to \(DL_j\) are viewed as describing a "flow of information" from \(DL_i\) to \(DL_j\), from the point of view of \(DL_j\). In DDL, \( i : C \) denotes the concept \( C \) in \( DL_i \), and \( i : C \sqsubseteq D \) denotes the subsumption assertion \( C \sqsubseteq D \) in \( DL_i \). A bridge rule from \( i \) to \( j \) takes one of two forms: \( i : C \xrightarrow{\sqsubseteq} j : D \) and \( i : C \xrightarrow{\sqsupseteq} j : D \). The former is called an into-bridge rule, and the latter an onto-bridge rule. A distributed Tbox (DTB) is defined from the Tboxes of all the local DLs together with the bridge rules between them: \( DT = (\{T_i\}_{i \in I}, B) \), where \( T_i \) is the Tbox of \( DL_i \), and \( B = \{B_{ij}\}_{i \neq j \in I} \), where \( B_{ij} \) is the set of bridge rules from \( DL_i \) to \( DL_j \). A DTB can be regarded as a distributed terminological knowledge base of the distributed description logic. The semantics of distributed description logics is given by local interpretations of the individual DLs, whose domains are connected by binary relations \( r_{ij} \). A distributed interpretation \( \mathcal{J} = (\{\mathcal{I}_i\}_{i \in I}, r) \) of \( DT \) consists of interpretations \( \mathcal{I}_i \) of \( DL_i \) over domains \( \Delta_i \), and a function \( r \) associating to each pair \( i, j \in I \) a binary relation \( r_{ij} \subseteq \Delta_i \times \Delta_j \). For \( d \in \Delta_i \), \( r_{ij}(d) = \{d' \in \Delta_j \mid (d, d') \in r_{ij}\} \), and for any \( D \subseteq \Delta_i \), \( r_{ij}(D) = \bigcup_{d \in D} r_{ij}(d) \). Note that the domain relations \( r_{ij} \) are part of every distributed interpretation.
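Checking a single into-bridge rule against a distributed interpretation amounts to an image-and-subset test on \( r_{ij} \) (its satisfaction condition, \( r_{ij}(C^{\mathcal{I}_i}) \subseteq D^{\mathcal{I}_j} \), is made precise in the clauses that follow). The two local extensions and the domain relation below are invented for illustration:

```python
# C^{I1} in DL_1, D^{I2} in DL_2, and the domain relation r12 ⊆ Δ1 × Δ2.
C_ext_1 = {"tweety", "polly"}
D_ext_2 = {"animal1", "animal2", "animal3"}
r12 = {("tweety", "animal1"), ("polly", "animal2"), ("polly", "animal3")}

def image(r, objects):
    """r12(D) = union of r12(d) for d in D, as defined above."""
    return {y for (x, y) in r if x in objects}

def satisfies_into_bridge(r, c_ext, d_ext):
    """J satisfies the into-bridge rule 1:C -> 2:D iff r12(C^{I1}) ⊆ D^{I2}."""
    return image(r, c_ext) <= d_ext

print(satisfies_into_bridge(r12, C_ext_1, D_ext_2))  # True
```

The onto-bridge rule is the mirror-image test with \(\supseteq\) in place of \(\subseteq\).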
A distributed interpretation \( \mathcal{J} \) d-satisfies (written \( \models_d \)) the elements of a DTB \( DT = (\{T_i\}_{i \in I}, B) \) according to the following clauses, for every \( i, j \in I \):
\[
\begin{align*}
\mathcal{J} \models_d i : C \xrightarrow{\sqsubseteq} j : D &\ \text{ if }\ r_{ij}(C^{\mathcal{I}_i}) \subseteq D^{\mathcal{I}_j} \\
\mathcal{J} \models_d i : C \xrightarrow{\sqsupseteq} j : D &\ \text{ if }\ r_{ij}(C^{\mathcal{I}_i}) \supseteq D^{\mathcal{I}_j} \\
\mathcal{J} \models_d i : C \sqsubseteq D &\ \text{ if }\ \mathcal{I}_i \models C \sqsubseteq D \\
\mathcal{J} \models_d T_i &\ \text{ if }\ \mathcal{J} \models_d i : C \sqsubseteq D \text{ for all } C \sqsubseteq D \text{ in } T_i \\
\mathcal{J} \models_d DT &\ \text{ if }\ \mathcal{J} \models_d T_i \text{ for every } i \in I \text{ and } \mathcal{J} \models_d b \text{ for every } b \in \bigcup_{i \neq j} B_{ij} \\
DT \models_d i : C \sqsubseteq D &\ \text{ if for every distributed interpretation } \mathcal{J},\ \mathcal{J} \models_d DT \text{ implies } \mathcal{J} \models_d i : C \sqsubseteq D
\end{align*}
\]
4. Default Extension to DDL. DDL can be used to model knowledge representation in multi-agent systems, where ontologies are the explicit representation of the domains of interest. The mappings between the ontologies used by different agents are defined using the bridge rules of distributed description logic. As mentioned in Section 2, however, the DDL model is not sufficient for modeling communication among multiple agents with heterogeneous ontologies, because the default information among these ontologies is not considered. In this situation, queries over the multi-agent system may return only partially correct results. To construct a sufficiently complete model, default information should be considered and added into the DDL model with multiple ontologies. In the following, we discuss the default extension of DDL. Our default extension operates on a distributed terminological knowledge base, which originally embraces only strict information (i.e., the information expressed explicitly in the distributed terminological knowledge base).
Default information is used for obtaining complete and correct information from multiple distributed ontologies. We should therefore consider a way to express default information explicitly in a distributed terminological knowledge base, so that it can be used in reasoning over these distributed ontologies. To be able to include default information in a distributed knowledge base, we first introduce the notion of a default rule. **Definition 4.1.** A default rule is of the form \( P(x): J_1(x), J_2(x), \cdots, J_n(x)/C(x) \), where \( P, C \), and the \( J_i \) are concept names \((1 \leq i \leq n)\), and \( x \) is a variable. \( P(x) \) is called the prerequisite of the default, the \( J_i(x) \) are called the justifications of the default, and \( C(x) \) is called the consequent of the default. The meaning of the default rule \( P(x): J_1(x), J_2(x), \cdots, J_n(x)/C(x) \) can be expressed as follows: if an interpretation \( I \) satisfies \( P(x) \) and does not satisfy any \( J_i(x) \) \((1 \leq i \leq n)\), then \( I \) satisfies \( C(x) \); otherwise, if \( I \) satisfies some \( J_i(x) \) \((1 \leq i \leq n)\), then \( I \) satisfies \( \neg C(x) \). For example, to state that a person can speak unless s/he is mute (here named Dummy), we can use the default rule $$\text{Person}(x): \text{Dummy}(x)/\text{CanSpeak}(x).$$ If there is an individual named John in the domain of individuals, then the corresponding closed default rule is $$\text{Person}(\text{John}): \text{Dummy}(\text{John})/\text{CanSpeak}(\text{John}).$$ To deal with strict taxonomic information as well as default information in a distributed knowledge base, the definition of the distributed knowledge base should be extended to include a set of default rules. We call a distributed terminological knowledge base with explicit default information a default distributed terminological knowledge base, denoted DDT.
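The case analysis in Definition 4.1 can be sketched for closed defaults as follows (ground atoms are plain strings with a leading "-" marking negation; the encoding and function name are ours):

```python
def apply_default(facts, prereq, justifications, consequent):
    """Closed default P(a) : J1(a), ..., Jn(a) / C(a), per Definition 4.1.

    `facts` is a set of ground atoms taken to hold in the interpretation.
    Returns the atom the rule forces, or None if it is not applicable.
    """
    def neg(atom):
        return atom[1:] if atom.startswith("-") else "-" + atom

    if any(j in facts for j in justifications):
        return neg(consequent)  # an exception holds: the consequent is blocked
    if prereq in facts:
        return consequent       # no exception: conclude the consequent by default
    return None

# The paper's example: Person(John) : Dummy(John) / CanSpeak(John)
print(apply_default({"Person(John)"},
                    "Person(John)", ["Dummy(John)"], "CanSpeak(John)"))
# If Dummy(John) were also a fact, the rule would yield -CanSpeak(John) instead.
```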
**Definition 4.2.** A default distributed terminological knowledge base is a pair \( \text{DDT} = (\text{DT}, \text{D}) \), where \( \text{DT} \) is the DTB of the distributed description logic and \( \text{D} \) is a set of default rules. An example of a DDT is shown in Figure 4.1: its DT is based on two local terminological knowledge bases, named \( T_1 \) and \( T_2 \). Figure 4.2 provides a distributed interpretation of the DDT. The satisfaction problem for a DDT must be discussed before queries can be based on it; the satisfaction symbol is denoted \( \models_{dd} \). Satisfiability of the elements of a DDT means that they satisfy not only DT but also the set D of default rules, so we call satisfiability of elements in a DDT default satisfiability. Default satisfiability complements the satisfiability definition for a distributed terminological knowledge base with default rules. In queries based on a DDT, this definition is used to detect satisfiability of a concept or assertion.
**Definition 4.3.** A distributed interpretation \( \mathcal{J} \) dd-satisfies (written \( \models_{dd} \)) the elements of a \( \text{DDT} = (\text{DT}, \text{D}) \) according to the following clauses. For every default rule \( \delta = P(x) : J_1(x), J_2(x), \ldots, J_n(x) / C(x) \) in \( \text{D} \), and for every \( i, j \in I \):
\[
\begin{align*}
&\mathcal{J} \models_{dd} \delta, \text{ if } \mathcal{J} \models_d i : P \sqsubseteq C \text{ implies } \mathcal{J} \not\models_d i : J_k \sqsubseteq C \text{ for all } k\ (1 \leq k \leq n) \\
&\mathcal{J} \models_{dd} \delta, \text{ if, for } i \neq j,\ \mathcal{J} \models_d i : P \xrightarrow{\sqsubseteq} j : C \text{ implies } \mathcal{J} \not\models_d i : J_k \xrightarrow{\sqsubseteq} j : C \text{ for all } k\ (1 \leq k \leq n) \\
&\mathcal{J} \models_{dd} \text{DDT}, \text{ if } \mathcal{J} \models_d \text{DT} \text{ and } \mathcal{J} \models_{dd} \delta \text{ for every } \delta \in \text{D} \\
&\text{DDT} \models_{dd} i : C \sqsubseteq D, \text{ if for every distributed interpretation } \mathcal{J},\ \mathcal{J} \models_{dd} \text{DDT} \text{ implies } \mathcal{J} \models_d i : C \sqsubseteq D
\end{align*}
\]
In a distributed knowledge base, default information may be needed during reasoning, but a DDT by itself does not directly support reasoning with default information in distributed knowledge. Some additional information derived from the default rules should therefore be included explicitly in DT. A closed default rule of the form \( P(x) : J_1(x), J_2(x), \ldots, J_n(x) / C(x) \) can be divided into two parts: \( P(x) \rightarrow C(x) \) and \( J_i(x) \rightarrow \neg C(x) \), \( 1 \leq i \leq n \). We call the first part the fulfilled rule, and the second the exceptional rules.
A rule of the form \( A(x) \rightarrow B(x) \) means that for every (distributed) interpretation \( \mathcal{I} \), if \( x \in A^{\mathcal{I}} \) then \( x \in B^{\mathcal{I}} \), i.e., \( A \sqsubseteq B \), where \( A \) and \( B \) are concept names and \( x \) denotes an individual. **Definition 4.4.** An extended distributed knowledge base EDDT is constructed from a DDT \( = (\text{DT}, \text{D}) \) according to the following clauses. For every default rule \( \delta = P(x) : J_1(x), J_2(x), \ldots, J_n(x) / C(x) \) in \( \text{D} \): 1) Divide \( \delta \) into two parts, the fulfilled rule and the exceptional rules. The fulfilled rule holds in most cases, until exceptional facts appear; the exceptional rules describe the exceptional facts. 2) Add \( P \sqsubseteq C \) and \( J_i \sqsubseteq \neg C \) \((1 \leq i \leq n)\) into DT, the assertions corresponding to the fulfilled rule and the exceptional rules, respectively. 3) Set the priorities of the different rules, so that appropriate rules can be selected during reasoning: the assertions corresponding to exceptional rules have the highest priority, the original strict information has normal priority, and the assertions corresponding to fulfilled rules have the lowest priority. In the course of constructing an EDDT, default information is added into the distributed knowledge base so that it can be used during default reasoning. Exceptional information is assigned the highest priority to avoid conflicts with strict information, while fulfilled rules are used only when no other information applies, so their priority is the lowest. A simplified view of the EDDT based on the DDT and its interpretation (shown in Figures 4.1 and 4.2) can be found in Figure 4.3.
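Steps 1) and 2) of Definition 4.4 can be sketched for atomic concepts as follows; the tuple encoding and function name are ours, and the priorities of step 3) are represented simply by the grouping ER > SR > FR:

```python
def build_eddt(strict, defaults):
    """Sketch of Definition 4.4 for atomic concepts.

    strict   -- list of subsumptions (A, B) meaning A ⊑ B
    defaults -- list of (P, [J1, ..., Jn], C) for P(x):J1(x),...,Jn(x)/C(x)
    Returns assertions grouped by priority: exceptional > strict > fulfilled.
    """
    def neg(c):
        return c[1:] if c.startswith("-") else "-" + c

    exceptional = [(j, neg(c)) for (_, js, c) in defaults for j in js]  # J_i ⊑ ¬C
    fulfilled = [(p, c) for (p, _, c) in defaults]                      # P ⊑ C
    return {"ER": exceptional, "SR": list(strict), "FR": fulfilled}

# The running example: BIRD(x) : PARROT(x) / ¬SPEAKING_ANIMAL(x)
eddt = build_eddt(strict=[("PARROT", "BIRD"), ("SPARROW", "BIRD")],
                  defaults=[("BIRD", ["PARROT"], "-SPEAKING_ANIMAL")])
print(eddt["ER"])  # [('PARROT', 'SPEAKING_ANIMAL')]
print(eddt["FR"])  # [('BIRD', '-SPEAKING_ANIMAL')]
```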
The default rule \( \text{BIRD}(x) : \text{PARROT}(x)/\neg\text{SPEAKING\_ANIMAL}(x) \) is divided into one fulfilled rule and one exceptional rule: the fulfilled rule \( \text{BIRD} \sqsubseteq \neg\text{SPEAKING\_ANIMAL} \) and the exceptional rule \( \text{PARROT} \sqsubseteq \text{SPEAKING\_ANIMAL} \) have been added into the EDDT. In fact, an EDDT can be regarded as a collection of integrated ontologies with default information expressed explicitly, and default reasoning can be performed on it. In the following section, we focus on how default reasoning based on an EDDT is realized; an adapted algorithm is discussed for checking default satisfiability of complex concepts and subsumption assertions. 5. Reasoning with Default Information. Reasoning with default information provides agents using different ontologies with stronger query capability. In our opinion, a query based on a DDT boils down to checking default satisfiability of a complex concept corresponding to the query. In description logics, satisfiability of a complex concept can be decided in polynomial space by the Tableau algorithm for \(\mathcal{ALCN}\) [10, 16]. An important result for DDL is the ability to transform a distributed knowledge base into a global one, so existing description logic reasoners can be applied for deriving new knowledge. This allows us to transfer theoretical results and reasoning techniques from the extensive DL literature. Our reasoning approach with default information uses this result: the reasoning problem of the distributed terminological knowledge base of a DDL is transformed into the reasoning problem of the terminological knowledge base of the corresponding global DL. So, in our opinion, detecting default satisfiability in a DDL is just detecting default satisfiability in the global DL corresponding to the DDL.
A default extension of the Tableau algorithm for the \(\mathcal{ALCN}\) DL can be used for detecting default satisfiability of \(\mathcal{ALCN}\) concepts based on an EDDT. \textbf{Definition 5.1.} A constraint set \( S \) consists of constraints of the form \( C(x) \) and \( p(x, y) \), where \( C \) is a concept name and \( p \) a role name, and \( x \) and \( y \) are variables. An \( I \)-assignment maps each variable \( x \) to an element of \( \Delta^I \). The \( I \)-assignment satisfies \( C(x) \) if \( x^I \in C^I \), and satisfies \( p(x, y) \) if \( (x^I, y^I) \in p^I \); it satisfies \( S \) if it satisfies every constraint in \( S \). \( S \) is satisfiable if there exist an interpretation \( I \) and an \( I \)-assignment that satisfies all the constraints in \( S \) simultaneously. It is convenient to assume that all concept descriptions in an EDDT are in negation normal form (NNF). Using de Morgan's rules and the usual rules for quantifiers, any \(\mathcal{ALCN}\) concept description can be transformed into an equivalent description in NNF in linear time. For example, the subsumption assertion \( \text{SPARROW} \sqsubseteq \text{BIRD} \) can be transformed into the form \( \neg\text{SPARROW} \sqcup \text{BIRD} \). To check satisfiability of a concept \( C \), our extended algorithm starts with the constraint set \( S = \{C(x)\} \) and applies the transformation rules of the extended distributed knowledge base. The concept \( C \) is satisfiable iff the constraint set \( S \) is satisfiable: if, after applying the transformation rules, every branch contains an obvious conflict (clash), \( S \) is unsatisfiable, which means the concept \( C \) is unsatisfiable; otherwise, \( C \) is satisfiable. The transformation rules are derived from the concepts and assertions in the EDDT.
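The NNF transformation for the \(\mathcal{ALC}\) fragment (number restrictions omitted for brevity) can be sketched as follows; the tuple encoding is our own:

```python
def nnf(c):
    """Push negation inward using de Morgan's rules and quantifier duality.

    Concepts are encoded as: "A" (atomic), ("not", c), ("and", c, d),
    ("or", c, d), ("all", R, c), ("some", R, c).
    """
    if isinstance(c, str):
        return c
    op = c[0]
    if op != "not":
        if op in ("and", "or"):
            return (op, nnf(c[1]), nnf(c[2]))
        return (op, c[1], nnf(c[2]))           # "all"/"some": recurse on the filler
    inner = c[1]
    if isinstance(inner, str):
        return ("not", inner)                  # negation in front of an atom stays
    iop = inner[0]
    if iop == "not":
        return nnf(inner[1])                   # ¬¬C ≡ C
    if iop == "and":
        return ("or", nnf(("not", inner[1])), nnf(("not", inner[2])))
    if iop == "or":
        return ("and", nnf(("not", inner[1])), nnf(("not", inner[2])))
    if iop == "all":
        return ("some", inner[1], nnf(("not", inner[2])))  # ¬∀R.C ≡ ∃R.¬C
    if iop == "some":
        return ("all", inner[1], nnf(("not", inner[2])))   # ¬∃R.C ≡ ∀R.¬C

# ¬(SPARROW ⊓ ¬BIRD)  ⇒  ¬SPARROW ⊔ BIRD
print(nnf(("not", ("and", "SPARROW", ("not", "BIRD")))))
```

For full \(\mathcal{ALCN}\), the dualities \(\neg(\geq n\, R) \equiv (\leq n{-}1\, R)\) and \(\neg(\leq n\, R) \equiv (\geq n{+}1\, R)\) would be added in the same way.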
If the constraint set \( S \) before an action is satisfiable, \( S \) after the action must also be satisfiable. The transformation rules of the default extension of the satisfiability algorithm are shown in Figure 5.1. When the adapted algorithm is used for detecting default satisfiability of \(\mathcal{ALCN}\) concepts, every action must preserve satisfiability: if an action did not preserve satisfiability, we could not guarantee that, whenever the constraint set before the action is satisfiable, the set after the action is satisfiable as well. We must therefore prove that the actions of the extended algorithm preserve satisfiability. \textbf{Theorem 5.2.} The actions of the applied transformation rules preserve satisfiability. \textit{Proof.} Because a DDL can be regarded as a global DL, for simplicity we use an interpretation \( I \) of the global DL in place of a distributed interpretation \( \mathcal{J} \) of the DDL. Every step of the extended algorithm may apply the actions of several transformation rules, so we must prove that all of these actions preserve satisfiability. The actions in the second step are taken from the classical Tableau algorithm, so they are known to preserve satisfiability [10]. The remainder of the proof therefore considers only the actions in the first and third steps. 1) In the first step, the action condition is that, for some default rule \[ P(x) : J_1(x), J_2(x), \ldots, J_n(x) / C(x) \] in the set of default rules, some \( J_i(x) \) is contained in \( S \). If the constraint set \( S \) before the action is satisfiable, then there exists an interpretation \( I \) that satisfies all elements of \( S \). Because \( \{J_i(x)\} \subseteq S \), \( I \) satisfies \( J_i(x) \) \((1 \leq i \leq n)\). Furthermore, according to Definition 4.1, we know that \( I \) satisfies \( \neg C(x) \).
From the above, we know that \( I \) satisfies both \( \neg C(x) \) and \( S \), i.e., \( I \) satisfies \( S \cup \{\neg C(x)\} \), the set after the action.

Exceptional rules (used in Step 1):
Condition: for some default rule of the form \( P(x) : J_1(x), \ldots, J_n(x) / C(x) \), some \( J_i(x) \) \((1 \leq i \leq n)\) is contained in \( S \), but \( S \) does not contain \( \neg C(x) \). Action: \( S = S \cup \{\neg C(x)\} \)

Strict rules (used in Step 2):
\( \sqcap \)-rule. Condition: \( (C \sqcap D)(x) \in S \), but \( S \) does not contain both \( C(x) \) and \( D(x) \). Action: \( S = S \cup \{C(x), D(x)\} \)
\( \sqcup \)-rule. Condition: \( (C \sqcup D)(x) \in S \), but \( \{C(x), D(x)\} \cap S = \emptyset \). Action: \( S = S \cup \{C(x)\} \) or \( S = S \cup \{D(x)\} \)
\( \exists \)-rule. Condition: \( (\exists R.C)(x) \in S \), but there is no individual name \( y \) such that \( S \) contains both \( C(y) \) and \( R(x, y) \). Action: \( S = S \cup \{C(y), R(x, y)\} \), where \( y \) is a new individual name
\( \forall \)-rule. Condition: \( \{(\forall R.C)(x), R(x, y)\} \subseteq S \), but \( S \) does not contain \( C(y) \). Action: \( S = S \cup \{C(y)\} \)
\( \geq \)-rule. Condition: \( (\geq n\, R)(x) \in S \), and there are no \( n \) distinct individual names \( y_1, \ldots, y_n \) such that all \( R(x, y_i) \) \((1 \leq i \leq n)\) are in \( S \). Action: \( S = S \cup \{R(x, y_i) \mid 1 \leq i \leq n\} \cup \{y_i \neq y_j \mid 1 \leq i < j \leq n\} \), where \( y_1, \ldots, y_n \) are distinct individual names not occurring in \( S \)
\( \leq \)-rule. Condition: \( (\leq n\, R)(x) \in S \), \( S \) contains \( R(x, y_1), \ldots, R(x, y_{n+1}) \) for \( n + 1 \) distinct individual names, and \( y_i \neq y_j \) is not in \( S \) for some \( i \neq j \). Action: for some pair \( y_i, y_j \) with \( y_i \neq y_j \notin S \), replace \( y_j \) by \( y_i \) everywhere in \( S \)

Fulfilled rule (used in Step 3):
Condition: no other transformation rule is applicable, and for some default rule of the form \( P(x) : J_1(x), \ldots, J_n(x) / C(x) \), \( P(x) \in S \), but none of the \( J_i(x) \) \((1 \leq i \leq n)\) nor \( C(x) \) is contained in \( S \). Action: \( S = S \cup \{C(x)\} \)

Fig. 5.1. The adapted Tableau rules used for detecting default satisfiability of \(\mathcal{ALCN}\) concepts

2) In the third step, the action condition is that \( P(x) \in S \), \( S \) contains none of the \( J_i(x) \) \((1 \leq i \leq n)\), and no other transformation rule can be applied. If the constraint set \( S \) before the action is satisfiable, then there exists an interpretation \( I \) that satisfies all elements of \( S \). Because \( P(x) \in S \), \( I \) satisfies \( P(x) \). Furthermore, \( I \) does not satisfy any \( J_i(x) \) \((1 \leq i \leq n)\); otherwise an exceptional rule would be applicable. Since \( I \) satisfies \( P(x) \) and none of the justifications, Definition 4.1 gives that \( I \) satisfies \( C(x) \). Because \( I \) satisfies both \( S \) and \( C(x) \), \( I \) satisfies \( S \cup \{C(x)\} \). From the above proofs, we conclude that every action of the applied transformation rules in the extended algorithm preserves satisfiability. \( \square \)

As mentioned in Definition 4.4, an EDDT embraces three types of information, with corresponding transformation rules: strict information, fulfilled information, and exceptional information, each given a different level of priority. Here, we use the symbol SR to denote the set of strict facts in an EDDT, FR to denote the set of fulfilled information, and ER to denote the set of exceptional information.
Then, based on the EDDT shown in Figure 4.3, we get the descriptions of its sets of the different types of information in NNF, where
\[
\begin{align*}
SR &= \{\neg\text{PARROT} \sqcup \text{BIRD},\ \neg\text{SPARROW} \sqcup \text{BIRD},\ \neg\text{PARROT} \sqcup \text{FLYING\_ANIMAL},\ \neg\text{GOAT} \sqcup \neg\text{SPEAKING\_ANIMAL}\} \\
FR &= \{\neg\text{BIRD} \sqcup \neg\text{SPEAKING\_ANIMAL}\} \\
ER &= \{\neg\text{PARROT} \sqcup \text{SPEAKING\_ANIMAL}\}
\end{align*}
\]

Algorithm: checking default satisfiability of C based on an EDDT
Require: an EDDT which embraces SR, FR and ER, with the descriptions of SR, FR and ER in NNF.
1. \( S_0 = \{C(x)\} \), \( i = 1 \);
2. transform the concept description in \( S_0 \) into NNF, yielding \( S_i \);
3. for each \( r \in \text{ER} \) do // Step 1
4. if \( S_i \) meets the condition of \( r \)
5. apply \( r \) to \( S_i \), the result of the action being \( S_{i+1} \);
6. \( i = i + 1 \);
7. if there exist clashes in \( S_i \)
8. return "C is unsatisfiable";
9. end if
10. end if
11. end for
12. for each \( r \in \text{SR} \) do // Step 2
13. if \( S_i \) meets the condition of \( r \) and \( S_i \) isn't labeled "Clash"
14. apply \( r \) to \( S_i \), the result of the action being \( S_{i+1} \);
15. \( i = i + 1 \);
16. if there exist clashes in \( S_i \)
17. \( S_i \) is labeled "Clash";
18. end if
19. end if
20. end for
21. for each \( r \in \text{FR} \) do // Step 3
22. if \( S_i \) meets the condition of \( r \) and \( S_i \) isn't labeled "Clash"
23. apply \( r \) to \( S_i \), the result of the action being \( S_{i+1} \);
24. \( i = i + 1 \);
25. if there exist clashes in \( S_i \)
26. \( S_i \) is labeled "Clash";
27. end if
28. end if
29. end for
30. if the leaf nodes of all possible branches in the constructed tree-like model are labeled "Clash"
31. return "C is unsatisfiable";
32. else return "C is satisfiable";
33. end if

The subsumption assertions to be checked are first transformed into the negation of the assertion in NNF, according to the theorem [10]: \( A \sqsubseteq B \) holds iff \( A \sqcap \neg B \) is unsatisfiable, where \( A \) and \( B \) are concept descriptions. For example, the subsumption assertion \( \text{SPARROW} \sqsubseteq \neg\text{SPEAKING\_ANIMAL} \) is transformed into the concept description \( \text{SPARROW} \sqcap \text{SPEAKING\_ANIMAL} \). In the following, we describe the extension algorithm for checking default satisfiability of a given concept in detail. The default extension algorithm is divided into three steps. In the first step, we apply exceptional rules to the constraint set, because they have the highest priority. If an exceptional rule applies to the detected concept, the corresponding strict rules are not used; otherwise, if no exceptional rule applies, the strict rules are applied to the constraint set (Step 2). The reason for this ordering is to avoid conflicts with strict information; another reason is to save reasoning time. In Step 3, fulfilled rules are used only in the situation that no other rule can be applied. The default extension algorithm either stops because all branches fail with obvious conflicts, or it stops when no further rules can be applied. The following example, shown in Figure 5.2, demonstrates the algorithm with a tree-like diagram. We want to know whether the subsumption assertion \( \text{SPARROW} \sqsubseteq \neg\text{SPEAKING\_ANIMAL} \) holds in the EDDT shown in Figure 4.3; that is, we should check that the concept \( \text{SPARROW} \sqcap \text{SPEAKING\_ANIMAL} \) is unsatisfiable. The concept is first transformed into the constraint set \( S_0 \). Considering the default rule \( \text{BIRD}(x) : \text{PARROT}(x)/\neg\text{SPEAKING\_ANIMAL}(x) \), we know that \( \text{PARROT}(x) \) is not contained in \( S_0 \).
Then, in the first step, the exceptional rule \( \neg\text{PARROT} \sqcup \text{SPEAKING\_ANIMAL} \) cannot be applied to \( S_0 \). In the following steps, we apply strict rules, and the reasoning continues until every branch stops with an obvious conflict. Finally, the leaf node of every branch in the tree-like diagram is labeled with the "Clash" tag. So the concept \( \text{SPARROW} \sqcap \text{SPEAKING\_ANIMAL} \) is unsatisfiable; that is to say, the subsumption assertion \( \text{SPARROW} \sqsubseteq \neg\text{SPEAKING\_ANIMAL} \) holds.
\[
\begin{align*}
S_0 &= \{(\text{SPARROW} \sqcap \text{SPEAKING\_ANIMAL})(x)\} \\
S_1 &= S_0 \cup \{\text{SPARROW}(x),\ \text{SPEAKING\_ANIMAL}(x)\} \\
&\quad \text{applying } \neg\text{SPARROW} \sqcup \text{BIRD}: \\
S_2 &= S_1 \cup \{\neg\text{SPARROW}(x)\}\ (\text{Clash}) \quad \text{or} \quad S_2 = S_1 \cup \{\text{BIRD}(x)\} \\
&\quad \text{applying } \neg\text{BIRD} \sqcup \neg\text{SPEAKING\_ANIMAL}: \\
S_3 &= S_2 \cup \{\neg\text{BIRD}(x)\}\ (\text{Clash}) \quad \text{or} \quad S_3 = S_2 \cup \{\neg\text{SPEAKING\_ANIMAL}(x)\}\ (\text{Clash})
\end{align*}
\]
Fig. 5.2. Detecting default satisfiability of a complex concept

Please note that the extension algorithm can handle both general subsumption assertions and assertions about exceptional facts. In another example, shown in Figure 5.3, we want to check whether the subsumption assertion \( \text{PARROT} \sqsubseteq \text{SPEAKING\_ANIMAL} \) holds; that is, we check the default satisfiability of the concept \( \text{PARROT} \sqcap \neg\text{SPEAKING\_ANIMAL} \), which is transformed into a constraint set. In the first step, when the exceptional rule \( \neg\text{PARROT} \sqcup \text{SPEAKING\_ANIMAL} \) is applied to the constraint set, clashes occur on every branch. So the concept \( \text{PARROT} \sqcap \neg\text{SPEAKING\_ANIMAL} \) is unsatisfiable, which means the subsumption assertion \( \text{PARROT} \sqsubseteq \text{SPEAKING\_ANIMAL} \) holds, and the reasoning process stops without applying other transformation rules. This serves as an example of reasoning about an exceptional fact.
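The two runs above (Figures 5.2 and 5.3) can be reproduced by a small sketch of the three-step procedure. Everything here is a deliberate simplification of ours: concepts are atomic literals (a leading "-" marks negation), strict rules are binary NNF disjunctions, the single variable \( x \) is left implicit, and the default \( \text{BIRD}(x) : \text{PARROT}(x)/\neg\text{SPEAKING\_ANIMAL}(x) \) is stored as a (prerequisite, justifications, consequent) triple:

```python
SR = [("-PARROT", "BIRD"), ("-SPARROW", "BIRD"),
      ("-PARROT", "FLYING_ANIMAL"), ("-GOAT", "-SPEAKING_ANIMAL")]
DEFAULTS = [("BIRD", ["PARROT"], "-SPEAKING_ANIMAL")]

def neg(l):
    return l[1:] if l.startswith("-") else "-" + l

def clash(S):
    return any(neg(l) in S for l in S)

def strict_expand(S):
    """Step 2: apply the disjunctive strict rules exhaustively, branching;
    return the branches that remain open (clash-free)."""
    if clash(S):
        return []
    for a, b in SR:
        if a not in S and b not in S:
            return strict_expand(S | {a}) + strict_expand(S | {b})
    return [S]

def default_satisfiable(*literals):
    S = set(literals)
    # Step 1: exceptional rules (highest priority): some J_i present => add ¬C.
    for P, Js, C in DEFAULTS:
        if any(j in S for j in Js):
            S = S | {neg(C)}
    if clash(S):
        return False
    # Step 2: strict rules.
    branches = strict_expand(S)
    # Step 3: fulfilled rules (lowest priority): P present, no J_i => add C.
    open_branches = []
    for B in branches:
        for P, Js, C in DEFAULTS:
            if P in B and not any(j in B for j in Js) and C not in B:
                B = B | {C}
        if not clash(B):
            open_branches.append(B)
    return bool(open_branches)

print(default_satisfiable("SPARROW", "SPEAKING_ANIMAL"))   # False (Fig. 5.2)
print(default_satisfiable("PARROT", "-SPEAKING_ANIMAL"))   # False (Fig. 5.3)
```

Under the same defaults, `default_satisfiable("PARROT", "BIRD")` remains satisfiable, because the fulfilled rule is blocked when the justification PARROT is present; this is exactly the priority ordering of Definition 4.4.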
\[
\begin{align*}
S_0 &= \{(\text{PARROT} \sqcap \neg\text{SPEAKING\_ANIMAL})(x)\} \\
S_1 &= S_0 \cup \{\text{PARROT}(x),\ \neg\text{SPEAKING\_ANIMAL}(x)\} \\
&\quad \text{applying } \neg\text{PARROT} \sqcup \text{SPEAKING\_ANIMAL}: \\
S_2 &= S_1 \cup \{\neg\text{PARROT}(x)\}\ (\text{Clash}) \quad \text{or} \quad S_2 = S_1 \cup \{\text{SPEAKING\_ANIMAL}(x)\}\ (\text{Clash})
\end{align*}
\]
Fig. 5.3. An example of detecting an exceptional fact

In the following, we give a brief discussion of the complexity of the default satisfiability algorithm. Theorem 5.3. Default satisfiability of \(\mathcal{ALCN}\)-concept descriptions is PSPACE-complete. Proof. From [16], we know that satisfiability of \(\mathcal{ALCN}\)-concept descriptions is PSPACE-complete. As mentioned above, our default satisfiability algorithm for \(\mathcal{ALCN}\)-concept descriptions can be divided into three steps, and each step is essentially the satisfiability algorithm for \(\mathcal{ALCN}\); the sequence of the three steps is therefore also essentially the satisfiability algorithm for \(\mathcal{ALCN}\). So default satisfiability of \(\mathcal{ALCN}\)-concept descriptions is PSPACE-complete. \( \square \) 6. Related Work and Discussion. In the description logics community, a number of approaches to extending description logics with default reasoning have been proposed. Baader and Hollunder [17] investigated the problem of open defaults in detail and defined a preference relation; their approach is not restricted to simple normal defaults. Two kinds of default rules were introduced by Straccia [18]: the first is similar to the fulfilled rules in our approach, and the second allows for expressing default information about the fillers of roles. Lambrix [19] presented a default extension to description logics for use in an intelligent search engine, Dwebic. Besides the standard inferences, Lambrix added a new kind of inference to the description logic framework to decide whether an individual belongs to a concept in a knowledge base.
Calvanese [20] proposed a formal framework to specify the mapping between global and local ontologies. Maedche [21] also proposed a framework for managing and integrating multiple distributed ontologies. Stuckenschmidt [6] exploited partially shared ontologies in multi-agent communication using an approximation approach that rewrites concepts. However, default information was not considered in any of these frameworks and systems. An important feature distinguishing our formal framework from other work is that our default extension approach is based on DDL; to the best of our knowledge, little attention has been paid to default extensions of DDL for communication among agents. There is an alternative proposal for dealing with the problem of the example shown in Figure 2.1: if the term SPARROW, instead of BIRD, in ontology 1 is mapped to the term \texttt{NON\_SPEAKING\_ANIMAL} in ontology 2, and the term PARROT in ontology 1 is not mapped to \texttt{NON\_SPEAKING\_ANIMAL}, then there is no default information to be considered. It seems that we have avoided the problem of default information between the two ontologies by means of the inter-ontology mapping. In fact, however, this approach is laborious and unscalable: if many terms belonging to subclasses of BIRD are added into ontology 1, every one of these added terms has to be mapped to \texttt{NON\_SPEAKING\_ANIMAL} in ontology 2. In contrast to this alternative, our default extension of DDL treats the inter-ontology mapping effort and the scalability of the ontologies used by different agents as key features. Regarding the complexity of the proposed default satisfiability algorithm, note that it incurs no more complexity than the satisfiability algorithm for \(\mathcal{ALCN}\).
This means that we can perform reasoning with strict as well as default information within the same time and space complexity. Future work includes a flexible mechanism for parsing the messages exchanged among agents. ACLs are used to construct and parse the exchanged messages required by both participants; concepts defined in the DAML+OIL ontology language can then be readily combined with the mechanism, increasing the flexibility of messages, and hence the accessibility and interoperability of services within open environments. 7. Conclusion. In this paper, an approach is proposed that enables agents using different ontologies on the Web to exchange semantic information, relying solely on internally provided mappings between the ontologies. Because of the semantic heterogeneity among these ontologies, it is difficult for an agent to understand the terminology of another agent. To get complete and correct semantic information from the multiple ontologies used by different agents, default information among these ontologies should be considered. Our approach is based on a default extension to DDL. The distributed terminological knowledge base is originally used to represent strict information; to perform default reasoning based on DDL, strict as well as default information is taken into account. All of the default information is added into an extended default distributed terminological knowledge base (EDDT), which is constructed from a default distributed terminological knowledge base (DDT). The default Tableau algorithm operates on the EDDT, where different rules have different priorities: exceptional rules have the highest priority, and fulfilled rules the lowest. Reasoning with default information provides agents using different ontologies with stronger query capability; in our opinion, a query based on a DDT boils down to checking default satisfiability of the complex concept corresponding to the query.
Our approach enables agents using different ontologies on the Web to exchange semantic information, relying solely on internally provided mappings between the ontologies. So far, however, our approach is a basic mechanism for facilitating agent communication. To apply it in practice, there is still a lot of work to be done [23]. For example, more sophisticated agent communication protocols, similar to KQML [22] and FIPA [24], have to be developed so that agents can obtain complete and correct information. Using such communication protocols, concepts defined in the DAML+OIL ontology language can be readily combined with the mechanism, increasing the flexibility of messages, and hence the accessibility and interoperability of services within open environments. Acknowledgments. This research is partially supported by the National Grand Fundamental Research Program of China under Grants No. TG1999033805 and No. 2002CB312005, and the Chinese National "863" High-Tech Program under Grant No. 2001AA113010. Edited by: Shahram Rahimi, Raheel Ahmad Received: October 10, 2005 Accepted: March 19, 2006
Searching, Sorting, and Complexity Analysis

After completing this chapter, you will be able to:

- Measure the performance of an algorithm by obtaining running times and instruction counts with different data sets
- Analyze an algorithm's performance by determining its order of complexity, using big-O notation
- Distinguish the common orders of complexity and the algorithmic patterns that exhibit them
- Distinguish between the improvements obtained by tweaking an algorithm and reducing its order of complexity
- Write a simple linear search algorithm and a simple sort algorithm

Earlier in this book, you learned about several criteria for assessing the quality of an algorithm. The most essential criterion is correctness, but readability and ease of maintenance are also important. This chapter examines another important criterion of the quality of algorithms—run-time performance.

Algorithms describe processes that run on real computers with finite resources. Processes consume two resources: processing time and space, or memory. When run with the same problems or data sets, processes that consume less of these two resources are of higher quality than processes that consume more, and so are the corresponding algorithms. In this chapter, we introduce tools for complexity analysis—for assessing the run-time performance or efficiency of algorithms. We also apply these tools to search algorithms and sort algorithms.

11.1 Measuring the Efficiency of Algorithms

Some algorithms consume an amount of time or memory that is below a threshold of tolerance. For example, most users are happy with any algorithm that loads a file in less than one second. For such users, any algorithm that meets this requirement is as good as any other. Other algorithms take an amount of time that is totally impractical (say, thousands of years) with large data sets. We can't use these algorithms, and instead need to find others, if they exist, that perform better.
When choosing algorithms, we often have to settle for a space/time tradeoff. An algorithm can be designed to gain faster run times at the cost of using extra space (memory), or the other way around. Some users might be willing to pay for more memory to get a faster algorithm, whereas others would rather settle for a slower algorithm that economizes on memory. Memory is now quite inexpensive for desktop and laptop computers, but not yet for miniature devices. In any case, because efficiency is a desirable feature of algorithms, it is important to pay attention to the potential of some algorithms for poor performance. In this section, we consider several ways to measure the efficiency of algorithms.

11.1.1 Measuring the Run Time of an Algorithm

One way to measure the time cost of an algorithm is to use the computer's clock to obtain an actual run time. This process, called benchmarking or profiling, starts by determining the time for several different data sets of the same size and then calculates the average time. Next, similar data are gathered for larger and larger data sets. After several such tests, enough data are available to predict how the algorithm will behave for a data set of any size.

Consider a simple, if unrealistic, example. The following program implements an algorithm that counts from 1 to a given number. Thus, the problem size is the number. We start with the number 10,000,000, time the algorithm, and output the running time to the terminal window. We then double the size of this number and repeat this process. After five such increases, there is a set of results from which you can generalize. Here is the code for the tester program:

```python
"""
File: timing1.py
Prints the running times for problem sizes that double,
using a single loop.
"""

import time

problemSize = 10000000
print("%12s%16s" % ("Problem Size", "Seconds"))
for count in range(5):
    start = time.time()
    # The start of the algorithm
    work = 1
    for x in range(problemSize):
        work += 1
        work -= 1
    # The end of the algorithm
    elapsed = time.time() - start
    print("%12d%16.3f" % (problemSize, elapsed))
    problemSize *= 2
```

The tester program uses the `time()` function in the `time` module to track the running time. This function returns the number of seconds that have elapsed between the current time on the computer's clock and January 1, 1970 (also called "The Epoch"). Thus, the difference between the results of two calls of `time.time()` represents the elapsed time in seconds. Note also that the program does a constant amount of work, in the form of two extended assignment statements, on each pass through the loop. This work consumes enough time on each iteration so that the total running time is significant, but has no other impact on the results.

Figure 11.1 shows the output of the program. A quick glance at the results reveals that the running time more or less doubles when the size of the problem doubles. Thus, one might predict that the running time for a problem of size 32,000,000 would be approximately 124 seconds.

As another example, consider the following change in the tester program's algorithm:

```python
for j in range(problemSize):
    for k in range(problemSize):
        work += 1
        work -= 1
```

In this version, the extended assignments have been moved into a nested loop. This loop iterates through the size of the problem within another loop that also iterates through the size of the problem. This program was left running overnight. By morning it had processed only the first data set, 1,000,000. The program was then terminated and run again with a smaller problem size of 1000. Figure 11.2 shows the results.
[FIGURE 11.1] The output of the tester program

<table>
<thead>
<tr><th>Problem Size</th><th>Seconds</th></tr>
</thead>
<tbody>
<tr><td>1000</td><td>0.387</td></tr>
<tr><td>2000</td><td>1.581</td></tr>
<tr><td>4000</td><td>6.463</td></tr>
<tr><td>8000</td><td>25.702</td></tr>
<tr><td>16000</td><td>102.666</td></tr>
</tbody>
</table>

[FIGURE 11.2] The output of the second tester program with a nested loop and initial problem size of 1000

Note that when the problem size doubles, the number of seconds of running time more or less quadruples. At this rate, it would take 175 days to process the largest number in the previous data set! This method permits accurate predictions of the running times of many algorithms. However, there are two major problems with this technique:

1. Different hardware platforms have different processing speeds, so the running times of an algorithm differ from machine to machine. Also, the running time of a program varies with the type of operating system that lies between it and the hardware. Finally, different programming languages and compilers produce code whose performance varies. For example, an algorithm coded in C usually runs slightly faster than the same algorithm in Python byte code. Thus, predictions of performance generated from the results of timing on one hardware or software platform generally cannot be used to predict potential performance on other platforms.

2. It is impractical to determine the running time for some algorithms with very large data sets. For some algorithms, it doesn't matter how fast the compiled code or the hardware processor is. They are impractical to run with very large data sets on any computer.

Although timing algorithms may in some cases be a helpful form of testing, we also want an estimate of the efficiency of an algorithm that is independent of a particular hardware or software platform.
As you will learn in the next section, such an estimate tells us how well or how poorly the algorithm would perform on any platform.

### 11.1.2 Counting Instructions

Another technique used to estimate the efficiency of an algorithm is to count the instructions executed with different problem sizes. These counts provide a good predictor of the amount of abstract work performed by an algorithm, no matter what platform the algorithm runs on. Keep in mind, however, that when you count instructions, you are counting the instructions in the high-level code in which the algorithm is written, not instructions in the executable machine language program.

When analyzing an algorithm in this way, you distinguish between two classes of instructions:

1. Instructions that execute the same number of times regardless of the problem size
2. Instructions whose execution count varies with the problem size

For now, you ignore instructions in the first class, because they do not figure significantly in this kind of analysis. The instructions in the second class normally are found in loops or recursive functions. In the case of loops, you also zero in on instructions performed in any nested loops or, more simply, just the number of iterations that a nested loop performs. For example, let us rewrite the algorithm of the previous program to track and display the number of iterations the inner loop executes with the different data sets:

```python
"""
File: counting.py
Prints the number of iterations for problem sizes
that double, using a nested loop.
"""

problemSize = 1000
print("%12s%15s" % ("Problem Size", "Iterations"))
for count in range(5):
    number = 0
    # The start of the algorithm
    work = 1
    for j in range(problemSize):
        for k in range(problemSize):
            number += 1
            work += 1
            work -= 1
    # The end of the algorithm
    print("%12d%15d" % (problemSize, number))
    problemSize *= 2
```

As you can see from the results, the number of iterations is the square of the problem size (Figure 11.3).
<table>
<thead>
<tr><th>Problem Size</th><th>Iterations</th></tr>
</thead>
<tbody>
<tr><td>1000</td><td>1000000</td></tr>
<tr><td>2000</td><td>4000000</td></tr>
<tr><td>4000</td><td>16000000</td></tr>
<tr><td>8000</td><td>64000000</td></tr>
<tr><td>16000</td><td>256000000</td></tr>
</tbody>
</table>

[FIGURE 11.3] The output of a tester program that counts iterations

Here is a similar program that tracks the number of calls of a recursive Fibonacci function, introduced in Chapter 6, for several problem sizes. Note that the function now expects a second argument, which is a Counter object. Each time the function is called at the top level, a new Counter object is created and passed to it. On that call and each recursive call, the function's counter object is incremented.

```python
"""
File: countfib.py
Prints the number of calls of a recursive Fibonacci
function with problem sizes that double.
"""

class Counter(object):
    """Tracks a count."""

    def __init__(self):
        self._number = 0

    def increment(self):
        self._number += 1

    def __str__(self):
        return str(self._number)

def fib(n, counter):
    """Count the number of calls of the Fibonacci function."""
    counter.increment()
    if n < 3:
        return 1
    else:
        return fib(n - 1, counter) + fib(n - 2, counter)

problemSize = 2
print("%12s%15s" % ("Problem Size", "Calls"))
for count in range(5):
    counter = Counter()
    # The start of the algorithm
    fib(problemSize, counter)
    # The end of the algorithm
    print("%12d%15s" % (problemSize, counter))
    problemSize *= 2
```

The output of this program is shown in Figure 11.4.
<table>
<thead>
<tr><th>Problem Size</th><th>Calls</th></tr>
</thead>
<tbody>
<tr><td>2</td><td>1</td></tr>
<tr><td>4</td><td>5</td></tr>
<tr><td>8</td><td>41</td></tr>
<tr><td>16</td><td>1973</td></tr>
<tr><td>32</td><td>4356617</td></tr>
</tbody>
</table>

[FIGURE 11.4] The output of a tester program that runs the Fibonacci function

As the problem size doubles, the instruction count (number of recursive calls) grows slowly at first and then quite rapidly. At first, the instruction count is less than the square of the problem size, but the instruction count of 1973 is significantly larger than 256, the square of the problem size 16.

The problem with tracking counts in this way is that, with some algorithms, the computer still cannot run fast enough to show the counts for very large problem sizes. Counting instructions is the right idea, but we need to turn to logic and mathematical reasoning for a complete method of analysis. The only tools we need for this type of analysis are paper and pencil.

11.1.3 Measuring the Memory Used by an Algorithm

A complete analysis of the resources used by an algorithm includes the amount of memory required. Once again, we focus on rates of potential growth. Some algorithms require the same amount of memory to solve any problem. Other algorithms require more memory as the problem size gets larger.

11.1 Exercises

1. Write a tester program that counts and displays the number of iterations of the following loop:

```python
while problemSize > 0:
    problemSize = problemSize // 2
```

2. Run the program you created in Exercise 1 using problem sizes of 1000, 2000, 4000, 10,000, and 100,000. As the problem size doubles or increases by a factor of 10, what happens to the number of iterations?

3. The difference between the results of two calls of the `time` function is an elapsed time.
Because the operating system might use the CPU for part of this time, the elapsed time might not reflect the actual time that a Python code segment uses the CPU. Browse the Python documentation for an alternative way of recording the processing time and describe how this would be done.

11.2 Complexity Analysis

In this section, we develop a method of determining the efficiency of algorithms that allows us to rate them independently of platform-dependent timings or impractical instruction counts. This method, called complexity analysis, entails reading the algorithm and using pencil and paper to work out some simple algebra.

11.2.1 Orders of Complexity

Consider the two counting loops discussed earlier. The first loop executes $n$ times for a problem of size $n$. The second loop contains a nested loop that iterates $n^2$ times. The amount of work done by these two algorithms is similar for small values of $n$, but is very different for large values of $n$. Figure 11.5 and Table 11.1 illustrate this divergence. Note that when we say "work," we usually mean the number of iterations of the most deeply nested loop.

The performances of these algorithms differ by what we call an order of complexity. The performance of the first algorithm is linear in that its work grows in direct proportion to the size of the problem (problem size of 10, work of 10; problem size of 20, work of 20; and so on). The behavior of the second algorithm is quadratic in that its work grows as a function of the square of the problem size (problem size of 10, work of 100). As you can see from the graph and the table, algorithms with linear behavior do less work than algorithms with quadratic behavior for most problem sizes \( n \). In fact, as the problem size gets larger, the performance of an algorithm with the higher order of complexity becomes worse more quickly. Several other orders of complexity are commonly used in the analysis of algorithms.
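Before turning to those other orders, the linear/quadratic divergence just described can be tabulated directly. The following short script is a sketch in the same style as the chapter's tester programs (it is not part of the chapter's code):

```python
# Sketch (not from the text): print the amounts of work done by a
# linear algorithm (n) and a quadratic algorithm (n * n) as the
# problem size doubles.
problemSize = 10
print("%12s%12s%14s" % ("Problem Size", "Linear", "Quadratic"))
for count in range(5):
    print("%12d%12d%14d" % (problemSize, problemSize, problemSize ** 2))
    problemSize *= 2
```

By the time the problem size reaches 160, the quadratic algorithm already does 160 times as much work (25,600 iterations) as the linear one.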
An algorithm has constant performance if it requires the same number of operations for any problem size. List indexing is a good example of a constant-time algorithm. This is clearly the best kind of performance to have.

Another order of complexity that is better than linear but worse than constant is called logarithmic. The amount of work of a logarithmic algorithm is proportional to the \( \log_2 \) of the problem size. Thus, when the problem doubles in size, the amount of work increases by only 1.

The work of a **polynomial time algorithm** grows at a rate of $n^k$, where $k$ is a constant greater than 1. Examples are $n^2$, $n^3$, and $n^{10}$. Although $n^3$ is worse in some sense than $n^2$, they are both of the polynomial order and are better than the next higher order of complexity.

An order of complexity that is worse than polynomial is called **exponential**. An example rate of growth of this order is $2^n$. Exponential algorithms are impractical to run with large problem sizes. The most common orders of complexity used in the analysis of algorithms are summarized in Figure 11.6 and Table 11.2.

**Figure 11.6** A graph of some sample orders of complexity

<table>
<thead>
<tr><th>$n$</th><th>LOGARITHMIC $(\log_2 n)$</th><th>LINEAR $(n)$</th><th>QUADRATIC $(n^2)$</th><th>EXPONENTIAL $(2^n)$</th></tr>
</thead>
<tbody>
<tr><td>100</td><td>7</td><td>100</td><td>10,000</td><td>Off the charts</td></tr>
<tr><td>1000</td><td>10</td><td>1000</td><td>1,000,000</td><td>Off the charts</td></tr>
<tr><td>1,000,000</td><td>20</td><td>1,000,000</td><td>1,000,000,000,000</td><td>Really off the charts</td></tr>
</tbody>
</table>

**Table 11.2** Some sample orders of complexity

### 11.2.2 Big-O Notation

An algorithm rarely performs a number of operations exactly equal to $n$, $n^2$, or $k^n$.
An algorithm usually performs other work in the body of a loop, above the loop, and below the loop. For example, we might more precisely say that an algorithm performs $2n + 3$ or $2n^2$ operations. In the case of a nested loop, the inner loop accounts for most of this work.

The amount of work in an algorithm typically is the sum of several terms in a polynomial. Whenever the amount of work is expressed as a polynomial, we focus on one term as dominant. As \( n \) becomes large, the dominant term becomes so large that the amount of work represented by the other terms can be ignored. Thus, for example, in the polynomial \( \frac{1}{2} n^2 - \frac{1}{2} n \), we focus on the quadratic term, \( \frac{1}{2} n^2 \), in effect dropping the linear term, \( \frac{1}{2} n \), from consideration. We can also drop the coefficient \( \frac{1}{2} \) because the ratio between \( \frac{1}{2} n^2 \) and \( n^2 \) does not change as \( n \) grows. For example, if you double the problem size, the run times of algorithms that are \( \frac{1}{2} n^2 \) and \( n^2 \) both increase by a factor of 4. This type of analysis is sometimes called asymptotic analysis because the value of a polynomial asymptotically approaches or approximates the value of its largest term as \( n \) becomes very large.

One notation that computer scientists use to express the efficiency or computational complexity of an algorithm is called big-O notation. "O" stands for "on the order of," a reference to the order of complexity of the work of the algorithm. Thus, for example, the order of complexity of a linear-time algorithm is \( O(n) \). Big-O notation formalizes our discussion of orders of complexity.

### 11.2.3 The Role of the Constant of Proportionality

The constant of proportionality involves the terms and coefficients that are usually ignored during big-O analysis. However, when these items are large, they may have an impact on the algorithm, particularly for small and medium-sized data sets.
For example, no one can ignore the difference between \( n \) and \( n / 2 \), when \( n \) is \( 1,000,000 \). In the example algorithms discussed thus far, the instructions that execute within a loop are part of the constant of proportionality, as are the instructions that initialize the variables before the loops are entered. When analyzing an algorithm, one must be careful to determine that no instruction hides a loop that depends on a variable problem size. If that is the case, then the analysis must move down into the nested loop, as we saw in the last example.

Let's determine the constant of proportionality for the first algorithm discussed in this chapter. Here is the code:

```python
work = 1
for x in range(problemSize):
    work += 1
    work -= 1
```

Note that, aside from the loop itself, there are three lines of code, each of them assignment statements. Each of these three statements runs in constant time. Let's also assume that on each iteration, the overhead of managing the loop, which is hidden in the loop header, runs an instruction that requires constant time. Thus, the amount of abstract work performed by this algorithm is $3n + 1$. Although this number is greater than just $n$, the running times for the two amounts of work, $n$ and $3n + 1$, increase at the same rate.

### Exercises

1. Assume that each of the following expressions indicates the number of operations performed by an algorithm for a problem size of $n$. Point out the dominant term of each algorithm, and use big-O notation to classify it.
   a. $2^n - 4n^2 + 5n$
   b. $3n^2 + 6$
   c. $n^3 + n^2 - n$

2. For problem size $n$, algorithms A and B perform $n^2$ and $\frac{1}{2}n^2 + \frac{1}{2}n$ instructions, respectively. Which algorithm does more work? Are there particular problem sizes for which one algorithm performs significantly better than the other? Are there particular problem sizes for which both algorithms perform approximately the same amount of work?

3.
At what point does an $n^4$ algorithm begin to perform better than a $2^n$ algorithm?

11.3 Search Algorithms

We now present several algorithms that can be used for searching and sorting lists. We first discuss the design of an algorithm, then show its implementation as a Python function, and finally provide an analysis of the algorithm's computational complexity. To keep things simple, each function processes a list of integers. Lists of different sizes can be passed as parameters to the functions. The functions are defined in a single module that is used in the case study later in this chapter.

11.3.1 Search for a Minimum

Python's \texttt{min} function returns the minimum or smallest item in a list. To study the complexity of this algorithm, let's develop an alternative version that returns the \textit{position} of the minimum item. The algorithm assumes that the list is not empty and that the items are in arbitrary order. The algorithm begins by treating the first position as that of the minimum item. It then searches to the right for an item that is smaller and, if it is found, resets the position of the minimum item to the current position. When the algorithm reaches the end of the list, it returns the position of the minimum item. Here is the code for the algorithm, in function \texttt{ourMin}:

```python
def ourMin(lyst):
    """Returns the position of the minimum item."""
    minpos = 0
    current = 1
    while current < len(lyst):
        if lyst[current] < lyst[minpos]:
            minpos = current
        current += 1
    return minpos
```

As you can see, there are three instructions outside the loop that execute the same number of times regardless of the size of the list. Thus, we can discount them. Within the loop, we find three more instructions. Of these, the comparison in the \texttt{if} statement and the increment of \texttt{current} execute on each pass through the loop. There are no nested or hidden loops in these instructions.
This algorithm must visit every item in the list to guarantee that it has located the position of the minimum item. Thus, the algorithm must make \( n - 1 \) comparisons for a list of size \( n \). Therefore, the algorithm's complexity is \( O(n) \).

11.3.2 Linear Search of a List

Python's \texttt{in} operator is implemented as a method named \_\_contains\_\_ in the \texttt{list} class. This method searches for a particular item (called the target item) within a list of arbitrarily arranged items. In such a list, the only way to search for a target item is to begin with the item at the first position and compare it to the target. If the items are equal, the method returns \texttt{True}. Otherwise, the method moves on to the next position and compares items again. If the method arrives at the last position and still cannot find the target, it returns \texttt{False}. This kind of search is called a **sequential search** or a **linear search**. A more useful linear search function would return the index of a target if it's found, or -1 otherwise. Here is the Python code for a linear search function:

```python
def linearSearch(target, lyst):
    """Returns the position of the target item if found,
    or -1 otherwise."""
    position = 0
    while position < len(lyst):
        if target == lyst[position]:
            return position
        position += 1
    return -1
```

The analysis of a linear search is a bit different from the analysis of a search for a minimum, as we shall see in the next subsection.

### 11.3.3 Best-Case, Worst-Case, and Average-Case Performance

The performance of some algorithms depends on the placement of the data that are processed. The linear search algorithm does less work to find a target at the beginning of a list than at the end of the list. For such algorithms, one can determine the best-case performance, the worst-case performance, and the average performance. In general, we worry more about average and worst-case performances than about best-case performances.
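The placement-dependence just described can be made concrete by counting iterations. The following instrumented variant is a sketch for illustration only (it is not the chapter's function); it returns the number of loop iterations along with the position:

```python
def linearSearchCount(target, lyst):
    """Returns (position, iterations) for a linear search.
    Instrumented sketch, for illustration only."""
    position = 0
    iterations = 0
    while position < len(lyst):
        iterations += 1
        if target == lyst[position]:
            return (position, iterations)
        position += 1
    return (-1, iterations)

lyst = list(range(100))
print(linearSearchCount(0, lyst))    # best case: (0, 1)
print(linearSearchCount(99, lyst))   # worst case, found last: (99, 100)
print(linearSearchCount(100, lyst))  # worst case, absent: (-1, 100)
```

The best case takes a single iteration, whereas both worst cases take \( n \) iterations for a list of size \( n \).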
Our analysis of a linear search considers three cases:

1. In the worst case, the target item is at the end of the list or not in the list at all. Then the algorithm must visit every item and perform \( n \) iterations for a list of size \( n \). Thus, the worst-case complexity of a linear search is \( O(n) \).

2. In the best case, the algorithm finds the target at the first position, after making one iteration, for an \( O(1) \) complexity.

3. To determine the average case, you add the number of iterations required to find the target at each possible position and divide the sum by \( n \). Thus, the algorithm performs \( (n + (n - 1) + (n - 2) + \ldots + 1) / n \), or \( (n + 1) / 2 \) iterations. For very large \( n \), the constant factor of 2 is insignificant, so the average complexity is still \( O(n) \).

Clearly, the best-case performance of a linear search is rare when compared with the average and worst-case performances, which are essentially the same.

### 11.3.4 Binary Search of a List

A linear search is necessary for data that are not arranged in any particular order. When searching sorted data, you can use a binary search. To understand how a binary search works, think about what happens when you look up a person's number in a phone book. The data in a phone book are already sorted, so you don't do a linear search. Instead, you estimate the name's alphabetical position in the book, and open the book as close to that position as possible. After you open the book, you determine if the target name lies, alphabetically, on an earlier page or later page, and flip back or forward through the pages as necessary. You repeat this process until you find the name or conclude that it's not in the book.

Now let's consider an example of a binary search in Python. To begin, let's assume that the items in the list are sorted in ascending order (as they are in a phone book).
The search algorithm goes directly to the middle position in the list and compares the item at that position to the target. If there is a match, the algorithm returns the position. Otherwise, if the target is less than the current item, the algorithm searches the portion of the list before the middle position. If the target is greater than the current item, the algorithm searches the portion of the list after the middle position. The search process stops when the target is found or the current beginning position is greater than the current ending position. Here is the code for the binary search function:

```python
def binarySearch(target, lyst):
    left = 0
    right = len(lyst) - 1
    while left <= right:
        midpoint = (left + right) // 2
        if target == lyst[midpoint]:
            return midpoint
        elif target < lyst[midpoint]:
            right = midpoint - 1
        else:
            left = midpoint + 1
    return -1
```

There is just one loop with no nested or hidden loops. Once again, the worst case occurs when the target is not in the list. How many times does the loop run in the worst case? This is equal to the number of times the size of the list can be divided by 2 until the quotient is 1. For a list of size $n$, you essentially perform the reduction $n / 2 / 2 \ldots / 2$ until the result is 1. Let $k$ be the number of times we divide $n$ by 2. To solve for $k$, you have $n / 2^k = 1$, so $n = 2^k$ and $k = \log_2 n$. Thus, the worst-case complexity of binary search is $O(\log_2 n)$.

Figure 11.7 shows the portions of the list being searched in a binary search with a list of 9 items and a target item, 10, that is not in the list. The items compared to the target are shaded. Note that none of the items in the left half of the original list are visited.

**Figure 11.7** The items of a list visited during a binary search for 10

The binary search for the target item 10 requires four comparisons, whereas a linear search would have required 10 comparisons.
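The $\log_2 n$ bound can also be checked empirically. The following instrumented sketch (not the chapter's function) mirrors binarySearch but returns the number of loop iterations for a worst-case search:

```python
def binarySearchCount(target, lyst):
    """Returns the number of loop iterations performed by a
    binary search. Instrumented sketch, for illustration only."""
    left = 0
    right = len(lyst) - 1
    iterations = 0
    while left <= right:
        iterations += 1
        midpoint = (left + right) // 2
        if target == lyst[midpoint]:
            return iterations
        elif target < lyst[midpoint]:
            right = midpoint - 1
        else:
            left = midpoint + 1
    return iterations

# Worst case: search for a target larger than every item.
for size in (9, 1000, 1000000):
    lyst = list(range(size))
    print(size, binarySearchCount(size, lyst))  # 9 -> 4, 1000 -> 10, 1000000 -> 20
```

The counts 4, 10, and 20 equal $\lfloor \log_2 n \rfloor + 1$ in each case, and agree with the at-most figures quoted for lists of 9 and 1,000,000 items.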
This algorithm actually appears to perform better as the problem size gets larger. Our list of 9 items requires at most 4 comparisons, whereas a list of 1,000,000 items requires at most only 20 comparisons! Binary search is certainly more efficient than linear search. However, the kind of search algorithm we choose depends on the organization of the data in the list. There is some additional overall cost to a binary search, which has to do with keeping the list in sorted order. In a moment, we examine several strategies for sorting a list and analyze their complexity. But first, we provide a few words about comparing data items.

### 11.3.5 Comparing Data Items

Both the binary search and the search for the minimum assume that the items in the list are comparable with each other. In Python, this means that the items are of the same type and that they recognize the comparison operators ==, <, and >. Objects of several built-in Python types, such as numbers, strings, and lists, can be compared using these operators.

To allow algorithms to use the comparison operators ==, <, and > with a new class of objects, the programmer should define the __eq__, __lt__, and __gt__ methods in that class. The header of __lt__ is the following:

```python
def __lt__(self, other):
```

This method returns `True` if `self` is less than `other`, or `False` otherwise. The criteria for comparing the objects depend on their internal structure and on the manner in which they should be ordered. For example, the `SavingsAccount` objects discussed in Chapter 8 include three data fields, for a name, a PIN, and a balance.
If we assume that the accounts should be ordered alphabetically by name, then the following implementations of the __lt__ and __eq__ methods are called for:

```python
class SavingsAccount(object):
    """This class represents a savings account
    with the owner’s name, PIN, and balance."""

    def __init__(self, name, pin, balance = 0.0):
        self._name = name
        self._pin = pin
        self._balance = balance

    def __lt__(self, other):
        return self._name < other._name

    def __eq__(self, other):
        if self is other:
            return True
        if type(self) != type(other):
            return False
        return self._name == other._name

    # Other methods
```

Note that the __lt__ method calls the < operator with the _name fields of the two account objects. The names are strings, and the string type includes the __lt__ method as well. Python automatically runs the __lt__ method when the < operator is applied, in the same way as it runs the __str__ method when the `str` function is called. The __eq__ method likewise compares the _name fields, so two distinct accounts with the same name are considered equal; without it, == would compare object identities. Python also supports the > operator here by reversing the operands and running __lt__. The next session shows a test of comparisons with several account objects:

```python
>>> s1 = SavingsAccount("Ken", "1000", 0)
>>> s2 = SavingsAccount("Bill", "1001", 30)
>>> s1 < s2
False
>>> s2 < s1
True
>>> s1 > s2
True
>>> s2 > s1
False
>>> s2 == s1
False
>>> s3 = SavingsAccount("Ken", "1000", 0)
>>> s1 == s3
True
>>> s4 = s1
>>> s4 == s1
True
```

The accounts can now be placed in a list and sorted by name.

### 11.3 Exercises

1. Suppose that a list contains the values

   ```
   20 44 48 55 62 66 74 88 93 99
   ```

   at index positions 0 through 9. Trace the values of the variables `left`, `right`, and `midpoint` in a binary search of this list for the target value 90. Repeat for the target value 44.

2. The method we usually use to look up an entry in a phone book is not exactly the same as a binary search because, when using a phone book, we don’t always go to the midpoint of the sublist being searched. Instead, we estimate the position of the target based on the alphabetical position of the first letter of the person’s last name. For example, when we are looking up a number for “Smith,” we look toward the middle of the second half of the phone book first, instead of in the middle of the entire book.
Suggest a modification of the binary search algorithm that emulates this strategy for a list of names. Is its computational complexity any better than that of the standard binary search?

### 11.4 Sort Algorithms

Computer scientists have devised many ingenious strategies for sorting a list of items. We won’t consider all of them here. In this chapter, we examine some algorithms that are easy to write but are inefficient. Each of the Python sort functions that we develop here operates on a list of integers and uses a `swap` function to exchange the positions of two items in the list. Here is the code for that function:

```python
def swap(lyst, i, j):
    """Exchanges the items at positions i and j."""
    # You could say lyst[i], lyst[j] = lyst[j], lyst[i]
    # but the following code shows what is really going on
    temp = lyst[i]
    lyst[i] = lyst[j]
    lyst[j] = temp
```

### 11.4.1 Selection Sort

Perhaps the simplest strategy is to search the entire list for the position of the smallest item. If that position does not equal the first position, the algorithm swaps the items at those positions. It then returns to the second position and repeats this process, swapping the smallest item with the item at the second position, if necessary. When the algorithm reaches the last position in this overall process, the list is sorted. The algorithm is called *selection sort* because each pass through the main loop selects a single item to be moved. Table 11.3 shows the states of a list of five items after each search and swap pass of selection sort. The two items just swapped on each pass have asterisks next to them, and the sorted portion of the list is shaded.
Here is the Python function for a selection sort:

```python
def selectionSort(lyst):
    i = 0
    while i < len(lyst) - 1:          # Do n - 1 searches
        minIndex = i                  # for the smallest
        j = i + 1
        while j < len(lyst):          # Start a search
            if lyst[j] < lyst[minIndex]:
                minIndex = j
            j += 1
        if minIndex != i:             # Exchange if needed
            swap(lyst, minIndex, i)
        i += 1
```

This function includes a nested loop. For a list of size $n$, the outer loop executes $n - 1$ times. On the first pass through the outer loop, the inner loop executes $n - 1$ times. On the second pass through the outer loop, the inner loop executes $n - 2$ times. On the last pass through the outer loop, the inner loop executes once. Thus, the total number of comparisons for a list of size $n$ is the following:

$$(n - 1) + (n - 2) + \ldots + 1 =$$
$$n (n - 1) / 2 =$$
$$\frac{1}{2} n^2 - \frac{1}{2} n$$

For large $n$, you can pick the term with the largest degree and drop the coefficient, so selection sort is $O(n^2)$ in all cases. For large data sets, the cost of swapping items might also be significant. Because data items are swapped only in the outer loop, this additional cost for selection sort is linear in the worst and average cases.

### 11.4.2 Bubble Sort

Another sort algorithm that is relatively easy to conceive and code is called a bubble sort. Its strategy is to start at the beginning of the list and compare pairs of data items as it moves down to the end. Each time the items in the pair are out of order, the algorithm swaps them. This process has the effect of bubbling the largest items to the end of the list. The algorithm then repeats the process from the beginning of the list, this time stopping at the next-to-last item, and so on, shrinking the unsorted portion by one item on each round, until the list is sorted. Table 11.4 shows a trace of the bubbling process through a list of five items. This process makes four passes through a nested loop to bubble the largest item down to the end of the list.
Once again, the items just swapped are marked with asterisks, and the sorted portion is shaded. <table> <thead> <tr> <th>UNSORTED LIST</th> <th>AFTER 1st PASS</th> <th>AFTER 2nd PASS</th> <th>AFTER 3rd PASS</th> <th>AFTER 4th PASS</th> </tr> </thead> <tbody> <tr> <td>5</td> <td>4*</td> <td>4</td> <td>4</td> <td>4</td> </tr> <tr> <td>4</td> <td>5*</td> <td>2*</td> <td>2</td> <td>2</td> </tr> <tr> <td>2</td> <td>2</td> <td>5*</td> <td>1*</td> <td>1</td> </tr> <tr> <td>1</td> <td>1</td> <td>1</td> <td>5*</td> <td>3*</td> </tr> <tr> <td>3</td> <td>3</td> <td>3</td> <td>3</td> <td>5*</td> </tr> </tbody> </table> [Table 11.4] A trace of the data during a bubble sort Here is the Python function for a bubble sort: ```python def bubbleSort(lyst): n = len(lyst) while n > 1: i = 1 while i < n: if lyst[i] < lyst[i - 1]: # Exchange if needed swap(lyst, i, i - 1) i += 1 n -= 1 ``` As with the selection sort, a bubble sort has a nested loop. The sorted portion of the list now grows from the end of the list up to the beginning, but the performance of the bubble sort is quite similar to the behavior of selection sort: the inner loop executes \( \frac{1}{2} n^2 - \frac{1}{2} n \) times for a list of size \( n \). Thus, bubble sort is \( O(n^2) \). Like selection sort, bubble sort won’t perform any swaps if the list is already sorted. However, bubble sort’s worst-case behavior for exchanges is greater than linear. The proof of this is left as an exercise for you. You can make a minor adjustment to the bubble sort to improve its best-case performance to linear. If no swaps occur during a pass through the main loop, then the list is sorted. This can happen on any pass, and in the best case will happen on the first pass. You can track the presence of swapping with a Boolean flag and return from the function when the inner loop does not set this flag. 
Here is the modified bubble sort function:

```python
def bubbleSort2(lyst):
    n = len(lyst)
    while n > 1:
        swapped = False
        i = 1
        while i < n:
            if lyst[i] < lyst[i - 1]:    # Exchange if needed
                swap(lyst, i, i - 1)
                swapped = True
            i += 1
        if not swapped:
            return                       # Return if no swaps
        n -= 1
```

Note that this modification only improves best-case behavior. On the average, the behavior of bubble sort is still \( O(n^2) \).

### 11.4.3 Insertion Sort

Our modified bubble sort performs better than a selection sort for lists that are already sorted. But our modified bubble sort can still perform poorly if many items are out of order in the list. Another algorithm, called an *insertion sort*, attempts to exploit the partial ordering of the list in a different way. The strategy is as follows:

- On the \( i \)th pass through the list, where \( i \) ranges from 1 to \( n - 1 \), the \( i \)th item should be inserted into its proper place among the first \( i \) items in the list.
- After the \( i \)th pass, the first \( i \) items should be in sorted order.

This process is analogous to the way in which many people organize playing cards in their hands. That is, if you hold the first \( i - 1 \) cards in order, you pick the \( i \)th card and compare it to these cards until its proper spot is found. As with our other sort algorithms, insertion sort consists of two loops. The outer loop traverses the positions from 1 to \( n - 1 \). For each position \( i \) in this loop, you save the item and start the inner loop at position \( i - 1 \). For each position \( j \) in this loop, you move the item to position \( j + 1 \) until you find the insertion point for the saved \( i \)th item.
Here is the code for the `insertionSort` function:

```python
def insertionSort(lyst):
    i = 1
    while i < len(lyst):
        itemToInsert = lyst[i]
        j = i - 1
        while j >= 0:
            if itemToInsert < lyst[j]:
                lyst[j + 1] = lyst[j]
                j -= 1
            else:
                break
        lyst[j + 1] = itemToInsert
        i += 1
```

Table 11.5 shows the states of a list of five items after each pass through the outer loop of an insertion sort. The item to be inserted on the next pass is marked with an arrow; after it is inserted, this item is marked with an asterisk.

<table>
<thead>
<tr>
<th>UNSORTED LIST</th>
<th>AFTER 1st PASS</th>
<th>AFTER 2nd PASS</th>
<th>AFTER 3rd PASS</th>
<th>AFTER 4th PASS</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>2</td>
<td>1*</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>5 ←</td>
<td>5 (no insertion)</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>1←</td>
<td>5</td>
<td>4*</td>
<td>3*</td>
</tr>
<tr>
<td>4</td>
<td>4</td>
<td>4←</td>
<td>5</td>
<td>4</td>
</tr>
<tr>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3←</td>
<td>5</td>
</tr>
</tbody>
</table>

[TABLE 11.5] A trace of the data during an insertion sort

Once again, analysis focuses on the nested loop. The outer loop executes \( n - 1 \) times. In the worst case, when all of the data are out of order, the inner loop iterates once on the first pass through the outer loop, twice on the second pass, and so on, for a total of \( \frac{1}{2} n^2 - \frac{1}{2} n \) times. Thus, the worst-case behavior of insertion sort is \( O(n^2) \).

The more items in the list that are in order, the better insertion sort gets until, in the best case of a sorted list, the sort’s behavior is linear. In the average case, however, insertion sort is still quadratic.

### 11.4.4 Best-Case, Worst-Case, and Average-Case Performance Revisited

As mentioned earlier, for many algorithms, a single measure of complexity cannot be applied to all cases. Sometimes an algorithm’s behavior improves or gets worse when it encounters a particular arrangement of data.
For example, the bubble sort algorithm can terminate as soon as the list becomes sorted. If the input list is already sorted, the bubble sort requires approximately \( n \) comparisons. In many other cases, however, bubble sort requires approximately \( n^2 \) comparisons. Clearly, a more detailed analysis may be needed to make programmers aware of these special cases. As we discussed earlier, thorough analysis of an algorithm’s complexity divides its behavior into three types of cases:

1. **Best case**—Under what circumstances does an algorithm do the least amount of work? What is the algorithm’s complexity in this best case?
2. **Worst case**—Under what circumstances does an algorithm do the most amount of work? What is the algorithm’s complexity in this worst case?
3. **Average case**—Under what circumstances does an algorithm do a typical amount of work? What is the algorithm’s complexity in this typical case?

Let’s review three examples of this kind of analysis for a search for a minimum, linear search, and bubble sort.

Because the search for a minimum algorithm must visit each number in the list (unless the list is known to be sorted), the algorithm is always linear. Therefore, its best-case, worst-case, and average-case performances are \( O(n) \).

Linear search is a bit different. The algorithm stops and returns a result as soon as it finds the target item. Clearly, in the best case, the target element is in the first position. In the worst case, the target is in the last position. Therefore, the algorithm’s best-case performance is O(1), and its worst-case performance is O(n). To compute the average-case performance, we add up all of the comparisons that must be made to locate a target in each position and divide by \( n \). This is \( (1 + 2 + \ldots + n) / n \), or \( (n + 1) / 2 \). Therefore, by approximation, the average-case performance of linear search is also O(n).

The smarter version of bubble sort can terminate as soon as the list becomes sorted.
In the best case, this happens when the input list is already sorted. Therefore, bubble sort’s best-case performance is O(n). However, this case is rare (1 out of \( n! \) possible arrangements). In the worst case, even this version of bubble sort will have to bubble each item down to its proper position in the list. The algorithm’s worst-case performance is clearly O\( (n^2) \). Bubble sort’s average-case performance is closer to O\( (n^2) \) than to O(n), although the demonstration of this fact is a bit more involved than it is for linear search.

As we will see, there are algorithms whose best-case and average-case performances are similar, but whose performance can degrade to a worst case. Whether you are choosing an algorithm or developing a new one, it is important to be aware of these distinctions.

### 11.4 Exercises

1. Which configuration of data in a list causes the smallest number of exchanges in a selection sort? Which configuration of data causes the largest number of exchanges?
2. Explain the role that the number of data exchanges plays in the analysis of selection sort and bubble sort. What role, if any, does the size of the data objects play?
3. Explain why the modified bubble sort still exhibits O\( (n^2) \) behavior on the average.
4. Explain why insertion sort works well on partially sorted lists.

### 11.5 An Exponential Algorithm: Recursive Fibonacci

Earlier in this chapter, we ran the recursive Fibonacci function to obtain a count of the recursive calls with various problem sizes. You saw that the number of calls seemed to grow much faster than the square of the problem size. Here is the code for the function once again:

```python
def fib(n):
    """The recursive Fibonacci function."""
    if n < 3:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)
```

Another way to illustrate this rapid growth of work is to display a call tree for the function for a given problem size. Figure 11.8 shows the calls involved when we use the recursive function to compute the sixth Fibonacci number.
To keep the diagram reasonably compact, we write (6) instead of fib(6).

**[Figure 11.8]** A call tree for fib(6)

Note that fib(4) requires only 4 recursive calls, which seems linear, but fib(6) requires 2 calls of fib(4), among a total of 14 recursive calls. Indeed, it gets much worse as the problem size grows, with possibly many repetitions of the same subtrees in the call tree.

Exactly how bad is this behavior, then? If the call tree were fully balanced, with the bottom two levels of calls completely filled in, a call with an argument of 6 would generate \(2 + 4 + 8 + 16 = 30\) recursive calls. Note that the number of calls at each filled level is twice that of the level above it. Thus, a fully balanced call tree whose root argument is \(n\) generates \(2^{n-1} - 2\) recursive calls (for \(n = 6\), that is \(2^5 - 2 = 30\)). This is clearly the behavior of an exponential, \(O(k^n)\) algorithm. Although the bottom two levels of the call tree for recursive Fibonacci are not completely filled in, its call tree is close enough in shape to a fully balanced tree to rank recursive Fibonacci as an exponential algorithm. The constant \(k\) for recursive Fibonacci is approximately 1.63.

Exponential algorithms are generally impractical to run with any but very small problem sizes. Although recursive Fibonacci is elegant in its design, there is a less beautiful but much faster version that uses a loop to run in linear time (see the next section). Alternatively, recursive functions that are called repeatedly with the same arguments, such as the Fibonacci function, can be made more efficient by a technique called **memoization**. According to this technique, the program maintains a table of the values for each argument used with the function. Before the function recursively computes a value for a given argument, it checks the table to see if that argument already has a value. If so, that value is simply returned.
If not, the computation proceeds, and the argument and value are added to the table afterward.

Computer scientists devote much effort to the development of fast algorithms. As a rule, any reduction in the order of magnitude of complexity, say, from \(O(n^2)\) to \(O(n)\), is preferable to a “tweak” of code that reduces the constant of proportionality.

### 11.6 Converting Fibonacci to a Linear Algorithm

Although the recursive Fibonacci function reflects the simplicity and elegance of the recursive definition of the Fibonacci sequence, the run-time performance of this function is unacceptable. A different algorithm improves on this performance by several orders of magnitude and, in fact, reduces the complexity to linear time. In this section, we develop this alternative algorithm and assess its performance.

Recall that the first two numbers in the Fibonacci sequence are 1s, and each number after that is the sum of the previous two numbers. Thus, the new algorithm starts a loop if \(n\) is at least the third Fibonacci number. This number will be at least the sum of the first two (\(1 + 1 = 2\)). The loop computes this sum and then performs two replacements: the first number becomes the second one, and the second one becomes the sum just computed. The loop counts from 3 through \(n\). The sum at the end of the loop is the $n$th Fibonacci number. Here is the pseudocode for this algorithm:

```
Set sum to 1
Set first to 1
Set second to 1
Set count to 3
While count <= N
    Set sum to first + second
    Set first to second
    Set second to sum
    Increment count
```

The Python function `fib` now uses a loop. The function can be tested within the script used for the earlier version.
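As an aside, the memoization technique described at the start of this section can be sketched as follows. The name `fibMemo` and the dictionary parameter `table` are illustrative, not the book's:

```python
def fibMemo(n, table=None):
    """Recursive Fibonacci with memoization (illustrative sketch)."""
    if table is None:
        table = {}               # maps arguments to computed values
    if n < 3:
        return 1
    if n in table:               # value already computed? just return it
        return table[n]
    value = fibMemo(n - 1, table) + fibMemo(n - 2, table)
    table[n] = value             # record the value for later calls
    return value
```

Because each distinct argument is computed only once and then looked up, the number of recursive calls grows roughly linearly with \(n\) rather than exponentially.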
Here is the code for the function, followed by the output of the script:

```python
def fib(n, counter):
    """Count the number of iterations in the Fibonacci function."""
    sum = 1
    first = 1
    second = 1
    count = 3
    while count <= n:
        counter.increment()
        sum = first + second
        first = second
        second = sum
        count += 1
    return sum
```

```
Problem Size    Iterations
2               0
4               2
8               6
16              14
32              30
```

As you can see, the performance of the new version of the function has improved to linear. Removing recursion by converting a recursive algorithm to one based on a loop can often, but not always, reduce its run-time complexity.

### 11.7 Case Study: An Algorithm Profiler

Profiling is the process of measuring an algorithm’s performance by counting instructions and/or timing execution. In this case study, we develop a program to profile sort algorithms.

### 11.7.1 Request

Write a program that allows a programmer to profile different sort algorithms.

### 11.7.2 Analysis

The profiler should allow a programmer to run a sort algorithm on a list of numbers. The profiler can track the algorithm’s running time, the number of comparisons, and the number of exchanges. In addition, when the algorithm exchanges two values, the profiler can print a trace of the list. The programmer can provide her own list of numbers to the profiler or ask the profiler to generate a list of randomly ordered numbers of a given size. The programmer can also ask for a list of unique numbers or a list that contains duplicate values. For ease of use, the profiler allows the programmer to specify most of these features as options before the algorithm is run. The default behavior is to run the algorithm on a randomly ordered list of 10 unique numbers where the running time, comparisons, and exchanges are tracked. The profiler is an instance of the class `Profiler`.
The programmer profiles a sort function by running the profiler’s `test` method with the function as the first argument and any of the options mentioned earlier. The next session shows a test run of the profiler with the selection sort algorithm and the default options:

```python
>>> from profiler import Profiler
>>> from algorithms import selectionSort
>>> p = Profiler()
>>> p.test(selectionSort)    # Default behavior
Problem size: 10
Elapsed time: 0.0
Comparisons: 45
Exchanges: 7
```

The programmer configures a sort algorithm to be profiled as follows:

1. Define a sort function and include a second parameter, a `Profiler` object, in the sort function's header.
2. In the sort algorithm's code, run the methods `comparison()` and `exchange()` with the `Profiler` object where relevant, to count comparisons and exchanges.

The interface for the `Profiler` class is listed in Table 11.6.

### 11.7.3 Design

The programmer uses two modules:

1. **profiler**—This module defines the **Profiler** class.
2. **algorithms**—This module defines the sort functions, as configured for profiling.

The sort functions have the same design as those discussed earlier in this chapter, except that they receive a **Profiler** object as an additional parameter. The **Profiler** methods **comparison** and **exchange** are run with this object whenever a sort function performs a comparison or an exchange of data values, respectively. In fact, any list-processing algorithm can be added to this module and profiled just by including a **Profiler** parameter and running its two methods when comparisons and/or exchanges are made.

As shown in the earlier session, one imports the **Profiler** class and the **algorithms** module into a Python shell and performs the testing at the shell prompt. The profiler’s **test** method sets up the **Profiler** object, runs the function to be profiled, and prints the results. Here is a partial implementation of the `algorithms` module.
We omit most of the sort algorithms developed earlier in this chapter, but include one, `selectionSort`, to show how the statistics are updated.

```python
"""
File: algorithms.py
Algorithms configured for profiling.
"""

def selectionSort(lyst, profiler):
    i = 0
    while i < len(lyst) - 1:
        minIndex = i
        j = i + 1
        while j < len(lyst):
            profiler.comparison()          # Count
            if lyst[j] < lyst[minIndex]:
                minIndex = j
            j += 1
        if minIndex != i:
            swap(lyst, minIndex, i, profiler)
        i += 1

def swap(lyst, i, j, profiler):
    """Exchanges the elements at positions i and j."""
    profiler.exchange()                    # Count
    temp = lyst[i]
    lyst[i] = lyst[j]
    lyst[j] = temp

# Testing code can go here, optionally
```

The `Profiler` class includes the four methods listed in the interface as well as some helper methods for managing the clock. The listing for profiler.py begins with the following module docstring:

```python
"""
File: profiler.py

Defines a class for profiling sort algorithms.
A Profiler object tracks the list, the number of comparisons
and exchanges, and the running time. The Profiler can also
print a trace and can create a list of unique or duplicate numbers.
"""
```
Example use:

```python
from profiler import Profiler
from algorithms import selectionSort
p = Profiler()
p.test(selectionSort, size=15, comp=True, exch=True, trace=True)
```

Here is the code for the `Profiler` class:

```python
import time
import random

class Profiler(object):

    def test(self, function, lyst=None, size=10,
             unique=True, comp=True, exch=True, trace=False):
        """
        function: the algorithm being profiled
        target: the search target if profiling a search
        lyst: allows the caller to use her list
        size: the size of the list, 10 by default
        unique: if True, list contains unique integers
        comp: if True, count comparisons
        exch: if True, count exchanges
        trace: if True, print the list after each exchange
        """
        self._comp = comp
        self._exch = exch
        self._trace = trace
        if lyst != None:
            self._lyst = lyst
        elif unique:
            self._lyst = list(range(1, size + 1))
            random.shuffle(self._lyst)
        else:
            self._lyst = []
            for count in range(size):
                self._lyst.append(random.randint(1, size))
        self._exchCount = 0
        self._cmpCount = 0
        self._startClock()
        function(self._lyst, self)
        self._stopClock()
        print(self)

    def exchange(self):
        """Counts exchanges if on."""
        if self._exch:
            self._exchCount += 1
        if self._trace:
            print(self._lyst)

    def comparison(self):
        """Counts comparisons if on."""
        if self._comp:
            self._cmpCount += 1

    def _startClock(self):
        """Records the starting time."""
        self._start = time.time()

    def _stopClock(self):
        """Stops the clock and computes the elapsed time
        in seconds, to the nearest millisecond."""
        self._elapsedTime = round(time.time() - self._start, 3)

    def __str__(self):
        """Returns the results as a string."""
        result = "Problem size: "
        result += str(len(self._lyst)) + "\n"
        result += "Elapsed time: "
        result += str(self._elapsedTime) + "\n"
        if self._comp:
            result += "Comparisons: "
            result += str(self._cmpCount) + "\n"
        if self._exch:
            result += "Exchanges: "
            result += str(self._exchCount) + "\n"
        return result
```

### Summary

Different algorithms for solving the same problem can be ranked according to the time and memory resources that they require.
Generally, algorithms that require less running time and less memory are considered better than those that require more of these resources. However, there is often a tradeoff between the two types of resources. Running time can occasionally be improved at the cost of using more memory, or memory usage can be improved at the cost of slower running times.

The running time of an algorithm can be measured empirically using the computer’s clock. However, these times will vary with the hardware and software platforms used. Counting instructions provides another empirical measurement of the amount of work that an algorithm does. Instruction counts can show increases or decreases in the rate of growth of an algorithm’s work, independently of hardware and software platforms.

The rate of growth of an algorithm’s work can be expressed as a function of the size of its problem instances. Complexity analysis examines the algorithm’s code to derive these expressions. Such an expression enables the programmer to predict how well or poorly an algorithm will perform on any computer.

Big-O notation is a common way of expressing an algorithm’s run-time behavior. This notation uses the form $O(f(n))$, where $n$ is the size of the algorithm’s problem and $f(n)$ is a function expressing the amount of work done to solve it. Common expressions of run-time behavior are $O(\log_2 n)$ (logarithmic), $O(n)$ (linear), $O(n^2)$ (quadratic), and $O(k^n)$ (exponential).

An algorithm can have different best-case, worst-case, and average-case behaviors. For example, bubble sort and insertion sort are linear in the best case, but quadratic in the average and worst cases. In general, it is better to try to reduce the order of an algorithm’s complexity than it is to try to enhance performance by tweaking the code.

A binary search is substantially faster than a linear search. However, the data in the search space for a binary search must be in sorted order.
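To get a concrete feel for these common growth rates, a short script can tabulate them side by side. The helper `growthTable` below is illustrative, not from the chapter:

```python
import math

def growthTable(sizes):
    """Returns rows of (n, log2 n, n, n squared, 2 to the n)
    so the growth rates can be compared side by side."""
    rows = []
    for n in sizes:
        rows.append((n, round(math.log2(n), 1), n, n ** 2, 2 ** n))
    return rows

for row in growthTable([2, 8, 16, 32]):
    print(row)
```

Even at a modest problem size such as 32, the exponential column dwarfs the others, which previews the point about exponential algorithms made next.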
Exponential algorithms are primarily of theoretical interest and are impractical to run with large problem sizes.

**REVIEW QUESTIONS**

1. Timing an algorithm with different problem sizes
   a. can give you a general idea of the algorithm’s run-time behavior
   b. can give you an idea of the algorithm’s run-time behavior on a particular hardware platform and a particular software platform

2. Counting instructions
   a. provides the same data on different hardware and software platforms
   b. can demonstrate the impracticality of exponential algorithms with large problem sizes

3. The expressions $O(n)$, $O(n^2)$, and $O(k^n)$ are, respectively,
   a. exponential, linear, and quadratic
   b. linear, quadratic, and exponential
   c. logarithmic, linear, and quadratic

4. A binary search
   a. assumes that the data are arranged in no particular order
   b. assumes that the data are sorted

5. A selection sort makes at most
   a. $n^2$ exchanges of data items
   b. $n$ exchanges of data items

6. The best-case behavior of insertion sort and modified bubble sort is
   a. linear
   b. quadratic
   c. exponential

7. An example of an algorithm whose best-case, average-case, and worst-case behaviors are the same is
   a. linear search
   b. insertion sort
   c. selection sort

8. Generally speaking, it is better
   a. to tweak an algorithm to shave a few seconds of running time
   b. to choose an algorithm with the lowest order of computational complexity

9. The recursive Fibonacci function makes approximately
   a. \( n^2 \) recursive calls for problems of a large size \( n \)
   b. \( 2^n \) recursive calls for problems of a large size \( n \)

10. Each level in a completely filled binary call tree has
    a. twice as many calls as the level above it
    b. the same number of calls as the level above it

**PROJECTS**

1. A linear search of a sorted list can halt when the target is less than a given element in the list. Define a modified version of this algorithm, and state the computational complexity, using big-O notation, of its best-, worst-, and average-case performances.

2.
The list method `reverse` reverses the elements in the list. Define a function named `reverse` that reverses the elements in its list argument (without using the method `reverse`!). Try to make this function as efficient as possible, and state its computational complexity using big-O notation.

3. Python’s `pow` function returns the result of raising a number to a given power. Define a function `expo` that performs this task, and state its computational complexity using big-O notation. The first argument of this function is the number, and the second argument is the exponent (non-negative numbers only). You may use either a loop or a recursive function in your implementation.

4. An alternative strategy for the `expo` function uses the following recursive definition:

   ```
   expo(number, exponent)
       = 1, when exponent = 0
       = number * expo(number, exponent - 1), when exponent is odd
       = (expo(number, exponent // 2)) ** 2, when exponent is even
   ```

   Define a recursive function `expo` that uses this strategy, and state its computational complexity using big-O notation.

5. Python’s list method `sort` includes the keyword argument `reverse`, whose default value is `False`. The programmer can override this value to sort a list in descending order. Modify the `selectionSort` function discussed in this chapter so that it allows the programmer to supply this additional argument to redirect the sort.

6. Modify the recursive Fibonacci function to employ the memoization technique discussed in this chapter. The function should expect a dictionary as an additional argument. The top-level call of the function receives an empty dictionary. The function’s keys and values should be the arguments and values of the recursive calls. Also use the `Counter` object discussed in this chapter to count the number of recursive calls.

7. Profile the performance of the memoized version of the Fibonacci function defined in Project 6. The function should count the number of recursive calls.
State its computational complexity using big-O notation, and justify your answer.
8. The function `makeRandomList` creates and returns a list of numbers of a given size (its argument). The numbers in the list are unique and range from 1 through the size. They are placed in random order. Here is the code for the function:

```python
def makeRandomList(size):
    lyst = []
    for count in range(size):
        while True:
            number = random.randint(1, size)
            if number not in lyst:
                lyst.append(number)
                break
    return lyst
```

You may assume that `range`, `randint`, and `append` are constant-time functions. You may also assume that `random.randint` returns duplicate numbers more rarely as the range between its arguments increases. State the computational complexity of this function using big-O notation, and justify your answer.
9. As discussed in Chapter 6, a computer supports the calls of recursive functions using a structure called the call stack. Generally speaking, the computer reserves a constant amount of memory for each call of a function. Thus, the memory used by a recursive function can be subjected to complexity analysis. State the computational complexity of the memory used by the recursive factorial and Fibonacci functions, as defined in Chapter 6.
10. The function that draws c-curves, which was discussed in Chapter 7, has two recursive calls. Here is the code:

```python
def cCurve(t, x1, y1, x2, y2, level):
    def drawLine(x1, y1, x2, y2):
        """Draws a line segment between the endpoints."""
        t.up()
        t.goto(x1, y1)
        t.down()
        t.goto(x2, y2)

    if level == 0:
        drawLine(x1, y1, x2, y2)
    else:
        xm = (x1 + x2 + y1 - y2) // 2
        ym = (x2 + y1 + y2 - x1) // 2
        cCurve(t, x1, y1, xm, ym, level - 1)
        cCurve(t, xm, ym, x2, y2, level - 1)
```

You can assume that the function `drawLine` runs in constant time. State the computational complexity of the `cCurve` function, in terms of the level, using big-O notation. Also, draw a call tree for a call of this function with a level of 3.
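As a reference point for Projects 6 and 7, the memoization technique discussed in this chapter can be sketched as follows. This is an illustrative outline, not the book’s code; a plain dictionary named `counter` stands in for the chapter’s `Counter` object:

```python
def fib(n, memo, counter):
    """Memoized Fibonacci: each argument is computed at most once,
    so the number of recursive calls grows linearly rather than
    exponentially in n."""
    counter["calls"] = counter.get("calls", 0) + 1   # count every recursive call
    if n < 3:
        return 1
    if n not in memo:                 # compute only on a cache miss
        memo[n] = fib(n - 1, memo, counter) + fib(n - 2, memo, counter)
    return memo[n]

counter = {"calls": 0}
print(fib(10, {}, counter))    # 55
print(counter["calls"])        # 17 calls, versus 109 for the naive version
```

Each value from 3 through n is computed exactly once and each computation makes two calls, so the call count is 2n - 3: linear, where the naive version is exponential.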
Model Transformation Approach for a Goal Oriented Requirements Engineering based WebGRL to Design Models

Sangeeta Srivastava

Abstract — Web applications have become integral to our lives, and there is great emphasis on developing high-quality web applications that capture the stakeholders’ goals closely. Web engineers mostly focus on design aspects alone, overlooking the real goals and expectations of the users. Goal Oriented Requirements Engineering is a popular approach for information system development but has not been explored much for web applications. Goal-driven requirements analysis captures stakeholders’ goals very finely and enhances the requirements analysis in many ways: requirements can be clarified and conflicts between requirements detected early, and design alternatives can be evaluated and selected to suit the requirements. In this paper, we take a step from the requirements phase to the design phase. Adhering to web-based goal oriented requirements engineering in the first phase, we move to the A-OOH design models using a model transformation strategy to derive web-specific design models supported by a UML profile. This helps in seamlessly generating the web-specific design models, namely the content, navigation, presentation, business process, and adaptivity models. The model transformation approach aims at automatic transformation of the repeatedly refined and resolved alternatives, produced as output of the GOREWEB framework presented by us earlier, into the design models supported by a UML profile. This leads to a better design and a higher-quality product that captures the stakeholders’ goals very closely.

Index Terms – Goal Oriented Requirements Engineering, Model transformation, UML Profile, Web Engineering.

I. INTRODUCTION

Although web applications have mushroomed, they have still not received much attention from the requirements engineering community.
Unlike traditional information systems, where requirements analysis is given the utmost importance among all the phases, with web applications the focus is usually more on the presentation. Web applications involve multiple stakeholders, and the size and purpose of the applications are also varied [1]. Many approaches have been developed for Goal Oriented Requirements Engineering for generic systems [2], [3], [4]. However, the notations and models developed for generic applications do not address very important issues of web applications like navigation, adaptation, etc. Some work has been done by researchers [5], [6], [7], [8] on web engineering approaches taking goal driven analysis into account, but many concepts of goal driven analysis, like design rationale, conflict resolution, and goal prioritization, have been bypassed and not taken up in totality. For enhancing the requirements engineering activities involved in web application development, the GOREWEB (Goal Oriented Requirements Engineering for Web applications) framework offers goal oriented requirements analysis of web applications. The GOREWEB model extends the concepts of the User Requirements Notation (URN) [9], [10] for a comprehensive study of web application requirements. URN is currently the only standard that combines goals and scenarios in one notation. It is a combination of two notations: GRL (Goal Requirements Language) and UCM (Use Case Maps). The User Requirements Notation aims to capture the goals and decision rationale that finally shape a system, and to model dynamic systems where behavior may change at run time. GRL is the Goal Requirements Language, which focuses on goal analysis. It helps in defining the goals, including the non-functional requirements, evaluating them, resolving conflicts, etc. UCM stands for Use Case Maps, which are the visual notation for scenarios. The UCM notation employs scenario paths to illustrate causal relationships among responsibilities.
URN being the latest standardized notation for goal and scenario based requirements analysis, our work is based on this notation. The approach used by us starts from the requirements gathering phase, where, first, in our framework, we have proposed a goal oriented approach for web requirements engineering. The goal based analysis consists of goal based elicitation to construct the Level 0 diagram for goal based analysis, called the Base WebGRL diagram, by enhancing the GRL metamodel. If any of the goals or concerns need a walkthrough or scenario based analysis, WebUCM diagrams are constructed with the enhanced UCM notation. The process is supported with a process guidance checklist and validation of the WebGRL diagram with further elicitation. After validation, elicitation, and conflict resolution, the Base WebGRL diagram is refined into web specific WebGRL diagrams. We use this as an input to the model transformation approach used by us to transform to the design phase. Therefore, we start by capturing the goals and validating them, and after resolving conflicts and refining them we move to the next stage of web application development, i.e., the design phase. This results in healthier software development, in which we move seamlessly from the requirements phase to the design phase, most of the problems having already been tackled by detailed analysis and verification in the requirements phase.

Manuscript received January 15, 2014. Sangeeta Srivastava, Department of Computer Science, BCAS, Delhi University, Delhi, India. Retrieval Number: F1990013614/2014©BEIESP Published By: Blue Eyes Intelligence Engineering & Sciences Publication

The related research work in the area of web design is extensive; examples include the Object-Oriented Hypermedia Design Method (OOHDM) [11], [12] and its successor, the Semantic Hypermedia Design Method (SHDM) [13], [14], which permit the concise specification and implementation of web applications.
This is achieved based on various models describing the information (conceptual), navigation, and interface aspects of these applications, and the mapping of these models into running applications in various environments. Another hypermedia design approach is WebML (Web Modeling Language) [15], [16], a visual language for specifying the content structure of a web application and the organization and presentation of contents in one or more hypertexts. There is also the UWE approach [17], [18], an object oriented approach whose distinguishing feature is its Unified Modeling Language [19] compliance, since UWE is defined in the form of a UML profile and an extension of the UML meta-model. The fundamentals of this approach are a standard notation (UML through all the models), the precise definition of the method, and the specification of constraints (with the OCL language) to increase the precision of the models. There is also the Hera methodology [20], which is a model-driven methodology for designing and developing web applications. Hera includes a phase, within its presentation generation phase, in which web navigation is specified in connection with the data intensive nature of the web application. Before that, Hera’s integration and data retrieval phase considers how to select and obtain the data from the storage part of the application. This includes transforming the data from these sources into the format used in the application. The approaches listed above are either not well suited to web applications, as in OOHDM and SHDM, or, if they do treat the web application development process differently, they do not emphasize the requirements part. The greater emphasis in all these cases is on the design phase of web applications. However, the web design approach used by us is different from them, as we place a lot of emphasis on the requirements phase and only then move to the design phase, while keeping in view the different requirements of a web application.
The input used by us for the design phase has undergone detailed requirements analysis developed especially for web applications, with conflict resolution etc., the details of which are in [21]. In our approach we use the GOREWEB framework, which captures the goals and decision rationale that finally shape a system, and models dynamic systems where behavior may change at run time. The use of goal driven requirements analysis helps in capturing stakeholders’ goals very finely; it enhances the requirements analysis in many ways, as requirement clarification and the conflicts between requirements can be detected early, and design alternatives can be evaluated and selected to suit the requirements. Thereafter we use the transformation strategy to derive the design models, supported by a UML profile. The approaches listed above lack a seamless transition from the requirements phase to the design phase, as due emphasis has not been given to the requirements phase. Presently, the effort for requirements analysis in web engineering is rather focused on the system, and the needs of the users are figured out by the designer. This scenario leads to websites that do not assure real user requirements and goals, thus producing user disorientation and comprehension problems. Development and maintenance problems may also appear for designers, since costly, time-consuming, and rather unrealistic mechanisms must be developed to improve the already implemented website, thus increasing the initial project cost. The main benefit of our point of view is that the designer will be able to make decisions from the very beginning of the development phase. These decisions could affect the structure of the envisioned website in order to satisfy the needs, goals, interests, and preferences of each user or user type.
Also, we develop five design models from the Base WebGRL diagram in the requirements phase, which helps in giving a detailed picture, from different perspectives, of the web application in the design phase. Further, the transition of the design models to a UML compliant UML profile helps in platform independent development of the product. As a part of the GOREWEB framework we have enhanced the GRL metamodel to the WebGRL metamodel presented in the next section. Thereafter we explain the A-OOH approach used by us for the transformation of WebGRL models to design models. In section 4 we present the transformation rules to derive the A-OOH content model and its UML profile. Further, in section 5 we present the enhanced A-OOH navigation model and the transformation rules to derive the same. The enhanced navigation model is supported by a UML profile. In section 6 we present the case study of an online bookstore, and in the last section we present the conclusion of our work. II. WEBGRL METAMODEL The GRL metamodel is enhanced to represent web specific functional and non-functional requirements through a goal driven approach. The WebGRL metamodel shown in Figure 1 below consists of intentional elements and links. The intentional elements are Goal, Softgoal, Task, and Resource. The goals and softgoals have been enhanced from the GRL notation to suit the web specific needs. The notation has been enhanced to incorporate web specific functional and non-functional requirements; the tasks and resources are represented in a similar way as in GRL. The WebGRL notation also consists of links that connect two or more intentional elements. They are decomposition links, contribution links, dependency links, and means end links. The details of the WebGRL metamodel are explained in [21]. After the Base WebGRL diagram has been generated, for detailed analysis it is refined for each functional requirement category of NFR, i.e.
for Content, Navigation, Presentation, Business Process, and Adaptivity requirements, resulting in web specific GRL diagrams. In this paper we present an approach that transforms these web specific requirements, expressed using the WebGRL, into the web design phase using the A-OOH method to generate the five models:

- Domain Model
- Navigation Model
- Presentation Model
- Adaptation Model
- Business Process Model

A number of UML profiles and UML compliant approaches, like UWE, OOH, and OOHDM, exist for the transformation from the requirements phase to the design phase in web engineering. However, A-OOH is very close to our approach, so we have adopted it for the transformation to the design phase. The characteristics of the A-OOH model used for the transformation from the requirements phase, expressed in web specific GRL models, to the design models are explained below. III. THE A-OOH MODEL In this section, we present a proposal which provides a way of specifying requirements using WebGRL in the context of A-OOH (Adaptive Object Oriented Hypermedia method) [22]. A-OOH is the extension of the OO-H modeling method [23], which includes the definition of adaptation strategies. This approach has also been extended with UML profiles, so all the conceptual models are UML-compliant. A-OOH considers the following workflows: 1. Requirements: In this stage the requirements for each type of user are gathered, including the personalization (adaptation) requirements. 2. Analysis and Design: This stage includes all the activities related to the analysis and design of the software product: a. Domain Analysis: From the user requirements and the designer’s knowledge of the domain, the relevant concepts for the application are gathered. b. Domain Design: The domain analysis model has to be refined in consecutive iterations with new helper classes, attribute types, parameters in the methods… etc. c.
Navigation Design: The domain information is the main input for the navigation design activity, where the navigational paths are defined to fulfill the different functional requirements, along with the organization of that information in abstract pages. d. Presentation Design: Once the logical structure of the interface is defined, OO-H allows specifying the location, appearance, and additional graphical components for showing the information and navigation of each of the abstract pages. e. Adaptation Design: In parallel with the other sub-phases, an adaptation design phase is performed, which allows specifying the adaptation (or personalization) strategies to be performed. 3. Implementation: Implementation is the following workflow considered in A-OOH, where the final application is generated. 4. Test: The goal of this workflow is verifying that the implementation works as intended. Steps 1 and 2a are done using WebGRL, and the domain analysis results in the web specific GRL. With the help of A-OOH we map the refined analysis models to their respective design models. As the considered web engineering approach (A-OOH) is expressed as UML-compliant class diagrams, its authors have used the extension mechanisms of UML to (i) define a profile for using WebGRL within UML, and (ii) extend this profile in order to adapt A-OOH to web specific domain terminology. This approach is very close to the web specific diagrams generated in the analysis phase in our previous work, so we have adopted it in this paper, and enhanced as well as modified it, in order to later map the web specific WebGRL models to their respective design models with traceability. This may even lead to the enhancement and development of our own UML profile later.
The A-OOH approach is requirement based, whereas our work is goal oriented; therefore, in place of tasks we extend the goals as well as the softgoals to the stereotypes defined in the A-OOH approach, i.e., the navigation, presentation, adaptation, and business process stereotypes. The A-OOH model uses the adaptive OO-H approach to define the domain and the navigation model from the use case diagrams using domain analysis. We differ here: we gather the requirements and use a goal oriented approach to develop WebGRL diagrams for web applications, we do the requirements specification and analysis there, and we use A-OOH only for the design phase to generate the domain model, navigation model, and presentation model; later we enhance it to represent the business process and adaptation models, with a UML profile to support these. After analyzing and modeling the requirements of the website with the help of the WebGRL tool presented in the paper [24], we have a good design alternative, with conflicts resolved, represented by web specific GRL diagrams. Once the requirements have been defined using the WebGRL diagram, a transformation strategy can be used to derive the design models for the website. The transformation strategy uses a set of rules to transform these web specific diagrams into the Domain model (DM), in which the structure of the domain data is defined; a Navigation model, in which the structure and behavior of the navigation view over the domain data is defined; and finally a Presentation model, in which the layout of the generated hypermedia presentation is defined. To be able to model personalization at design time, two additional models are needed: an Adaptation model, in which personalization strategies and the structure of the information needed for personalization are described, and a Business Process model, in which the business processes of the web application are defined. These models are expressed using a UML compliant profile for the WebGRL.
Due to space constraints, in this work the focus is on the Domain and Navigation models. However, a skeleton of the Presentation, Adaptation, and Business Process models could also be generated from the requirements specification, which will be presented in later work through enhancements to the A-OOH approach. Before explaining each of the derivations, we briefly introduce the A-OOH DM and NM so the reader can easily follow their derivation. IV. DERIVING THE DOMAIN MODEL The A-OOH DM is expressed as a UML compliant class diagram. It encapsulates the structure and functionality required of the relevant concepts of the application and reflects the static part of the system. The main modeling elements of a class diagram are the classes (with their attributes and operations) and their relationships. The transformation rules defined to derive the Domain Model (DM) of A-OOH from the content WebGRL diagram of the GOREWEB framework are as follows: 1. Content2DomainClass- This transformation rule transforms all resources required to satisfy the content goals represented in the WebGRL diagram, deriving a domain class of the DM with the same name as the resource. In case of the decomposition of the goal into subgoals, the subgoals are used to derive domain classes for the resources required to satisfy them, and the main goal is used to derive an aggregate class or a generalization class, with the decomposition link as an association between them. 2. NavigationGoals&Tasks2Relationship- All navigation goals or tasks that represent a navigation pattern are used to derive associations in the DM. Preliminary relations between classes are derived from the relations among goals/tasks with attached resources by applying this rule. A navigational pattern consists of a task that requires navigation between resources, or is represented by a navigational goal, as shown in figure 2a. 3.
Task2Operation- This transformation rule detects a means end link to a task attached to a content goal. In this case each task is transformed into one operation of the corresponding domain class. 4. Content2Attribute- In this rule the content pattern attached to a content goal is transformed into attributes of the corresponding domain class. A content pattern represents a set of attributes, or the content expressed in the task to achieve the content goal, as shown in Fig 2b. 5. SoftGoals2Conditions- This transformation rule transforms all softgoals attached to content goals into conditions defined on one or more model elements; each condition stores the satisfaction level value obtained for that content goal. The Method:- All content goals in the WebGRL diagrams are represented as domain classes for the resource used by them to satisfy that goal. A goal always states a function which is to be carried out. That function may be a service task or a navigation task. If it leads to navigation to another content class, then it is a navigation task. However, if it requires performing an operation on the attributes within the domain class defined for that content goal, then it is a service task.
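The Content2DomainClass, Task2Operation, and Content2Attribute rules above can be sketched as a simple traversal over the content WebGRL elements. The class names and structure below are illustrative assumptions for this sketch, not part of the A-OOH tool chain:

```python
from dataclasses import dataclass, field

# Illustrative WebGRL elements (names are assumptions for this sketch).
@dataclass
class Resource:
    name: str

@dataclass
class Task:
    name: str

@dataclass
class ContentGoal:
    name: str
    resource: Resource                              # dependency link to a resource
    tasks: list = field(default_factory=list)       # means-end tasks
    contents: list = field(default_factory=list)    # content pattern (attribute names)

@dataclass
class DomainClass:
    name: str
    attributes: list = field(default_factory=list)
    operations: list = field(default_factory=list)

def content_to_domain_model(goals):
    """Apply Content2DomainClass, Task2Operation, and Content2Attribute:
    each resource becomes a domain class, means-end tasks become its
    operations, and the content pattern becomes its attributes."""
    model = {}
    for goal in goals:
        cls = model.setdefault(goal.resource.name, DomainClass(goal.resource.name))
        cls.operations.extend(t.name for t in goal.tasks)
        cls.attributes.extend(goal.contents)
    return model

# Usage: a hypothetical "Show book details" content goal over a Book resource.
goal = ContentGoal("Show book details", Resource("Book"),
                   tasks=[Task("getDetails")],
                   contents=["title", "author", "price"])
dm = content_to_domain_model([goal])
print(dm["Book"].attributes)   # ['title', 'author', 'price']
print(dm["Book"].operations)   # ['getDetails']
```

A full transformer would also walk the decomposition, contribution, and dependency links to emit the associations listed in Table I; this sketch covers only the class-level rules.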
Table I:- Transformation of the Content WebGRL diagram to the Domain Model <table> <thead> <tr> <th>Content WebGRL Model Element</th> <th>A-OOH Domain Model Element</th> </tr> </thead> <tbody> <tr> <td>Content Goal with dependency link to Resource</td> <td>Domain Class</td> </tr> <tr> <td>Content goal with AND decomposition link to subgoals</td> <td>Aggregation relationship between domain classes</td> </tr> <tr> <td>Content goal with OR decomposition link to subgoals</td> <td>Generalization relationship between domain classes</td> </tr> <tr> <td>Content Goal with contribution link</td> <td>Association between the domain classes</td> </tr> <tr> <td>Task with Means-End link to Resource</td> <td>Operation of that Domain Class</td> </tr> <tr> <td>Decomposition Link</td> <td>Association</td> </tr> <tr> <td>Dependency Link</td> <td>Dependency between Domain Class and Resource</td> </tr> <tr> <td>Contribution Link</td> <td>Directed relationship between source and target</td> </tr> <tr> <td>Navigation goals/tasks</td> <td>Relationship between domain classes</td> </tr> <tr> <td>Content pattern of the Content Goal</td> <td>Attribute of that domain class</td> </tr> <tr> <td>SoftGoals</td> <td>Conditions of that domain class</td> </tr> </tbody> </table> Table II:- UML Profile for the Content WebGRL diagram <table> <thead> <tr> <th>WebGRL Model Elements</th> <th>Content Model Stereotypes</th> <th>Stereotyped UML Metaclass</th> </tr> </thead> <tbody> <tr> <td>GRLspec</td> <td>Content Model</td> <td>Model</td> </tr> <tr> <td>GRLmodelElement</td> <td>Content Model Element</td> <td>NamedElement</td> </tr> <tr> <td>Actor</td> <td>Domain Class</td> <td>Class</td> </tr> <tr> <td>IntentionalElement Goal &amp; Resource</td> <td>Domain Class</td> <td>Class</td> </tr> <tr> <td>IntentionalElement Softgoal</td> <td>Condition</td> <td>Constraint</td> </tr> <tr> <td>IntentionalElement Task</td> <td>Domain Class Operation</td> <td>Operation</td> </tr> <tr> <td>ElementLink</td>
<td>Link</td> <td>Relationship</td> </tr> <tr> <td>Contribution</td> <td>Relationship</td> <td>DirectedRelationship</td> </tr> <tr> <td>ContributionType</td> <td>Contribution type attribute</td> <td>Enumeration</td> </tr> <tr> <td>Decomposition</td> <td>Association</td> <td>Association</td> </tr> <tr> <td>DecompositionType</td> <td>Decomposition Type</td> <td>Enumeration</td> </tr> <tr> <td>Dependency</td> <td>Association</td> <td>Association</td> </tr> </tbody> </table> The operation within the domain class is performed on content variables or patterns within the domain class itself. These variables or content patterns are represented as the attributes of the domain class, with the operations in the domain class defined on them. If the operation needed to satisfy this goal requires navigation to obtain information from another class, then a relationship is defined between the two domain classes. Softgoals are represented as conditions of that domain class, defined as member conditions on the model elements, i.e., the domain class, an association, or an attribute of the domain class, with satisfaction levels. V. DERIVING THE NAVIGATION MODEL A Navigational Model (NM) describes a navigation view on the data specified by the DM. In OO-H the NM is captured by one or more Navigation Access Diagrams (i.e., NADs). The designer should construct as many NADs as different views of the system are needed, and provide at least one different NAD for each identified (static) user role. The A-OOH navigational model has been enhanced to transform the WebGRL navigation diagram. The navigation model used by us is composed of navigational nodes and their relationships, indicating the navigation paths the user can follow in the final website (navigational links).
There are three types of nodes: (a) Navigational Classes (which are views of the domain classes); (b) Access Primitives, which can be Index, ShowAll, and Query, and which collaborate in the fulfillment of every navigation requirement of the user; and (c) Menus or Collections, which are possible hierarchical structures defined on Navigational Classes. The most common collection type is the menu, grouping navigational links. Navigational Links (NL) define the navigational paths that the user can follow through the system. A-OOH defines two main types of links: transversal links (which are defined between two navigational nodes) and service links, or the means end link (in this case navigation is performed to activate an operation which modifies the business logic and, moreover, implies navigation to a node showing information when the execution of the service is finished). We further enhance the navigation links to represent the contribution link and decomposition links, which will be represented as associations in the navigation model. Also, the navigation target with a service link is represented by access primitives, explained below. A. Navigation Access Diagram The NAD is composed of navigational nodes, which represent (restricted) views of the domain concepts, and their relationships, indicating the navigation paths the user can follow in the final website (navigational links). Each node has an associated (owner) root concept from the DM attached to it by the notation “Node:DM.RootConcept”. There are three different types of navigational nodes: **Navigational Classes (NC):** These are domain classes enriched with attributes and operations in which visibility has been constrained depending on the access permissions of the user and the navigational requirements. A navigational class is represented by a UML class with the stereotype <<Navigation Class>>, as shown in Figure 3. *Fig. 3 Navigation Class* 1.
**Access Primitives:** These are additional navigation nodes required to access navigation objects. The following access primitives are defined as UML stereotypes: index, guided tour, showall and query. The following modeling elements are used for describing indexes, guided tours and queries. Their stereotypes and associated icons are defined in [19]; some of the icons are from Isakowitz, Stohr and Balasubramanian [25].

*Fig. 4 Index Class and Shorthand for Index*

- Index: An index allows direct access to instances of a navigation class. This is modelled by a composite object which contains an arbitrary number of index items. Each index item is in turn an object which has a name that identifies the instance and owns a link to an instance of a navigation class. Any index is a member of some index class, which is stereotyped by <<index>> with a corresponding icon. An index class must be built to conform to the composition structure of classes shown in Figure 4. In the short form, the association between Index and Navigation Class is derived from the index composition and the association between Index Item and Navigation Class.
- Guided tour: A guided tour provides sequential access to instances of a navigation class. For classes which contain guided tour objects we use the stereotype <<guided tour>> and its corresponding icon, depicted in Figure 5. Any guided tour class must be built to conform to the composition structure of classes shown in Figure 5. Each Next Item must be connected to a navigation class. Guided tours may be controlled by the user or by the system. Figure 5 also shows the shorthand notation for a guided tour class.

*Fig. 5 Guided Tour Class and Shorthand for Guided Tour*

- Query: A query is modeled by a class which has a query string as an attribute. This string may be given, for instance, by an OCL select operation. For query classes we use the stereotype <<query>> and the icon depicted in Figure 6.
As shown in Figure 6, any query class is the source of two directed associations related by the constraint {xor}. In this way a query with several result objects is modelled to lead first to an index supporting the selection of a particular instance of a navigation class. The query results can alternatively be used as input for a guided tour.

*Fig. 6 Query Class and Shorthand for Query*

Figure 6 also shows the shorthand notation for a query class in combination with an index class or with a guided tour.

- Show All: A show all provides navigation without indexing and without internal navigation; all the objects are shown in the same abstract page. This is modeled by the stereotype <<showall>> and its corresponding icon, depicted in Figure 7.

*Fig. 7 Showall Class and Shorthand for Showall*

2. Menu: A menu is a composite object which contains a collection of navigation classes and navigation links, represented by a fixed number of menu items. Each menu item has a constant name and owns a link either to an instance of a navigational class or to an index, guided tour or query. Any menu is an instance of some menu class, which is stereotyped by «menu» with a corresponding icon as shown in Figure 8. A menu class must be built to conform to the composition structure of classes described earlier.

*Fig. 8 Menu Class and Shorthand for Menu*

Navigational Links (NL) define the navigational paths that the user can follow through the system. A-OOH defines three main types of links:

- T-Links (Transversal Links): They are defined between two navigational nodes (navigational classes, collections or access primitives). The navigation performed is done to show information through the user interface, without modifying the business logic. This type of link is represented by the stereotype <<TransversalLink>>.
If a traversal link is also a decomposition link, then as many traversal links between NCs are added as there are decompositions of the navigation goal into its navigation subgoals. If the traversal link is also a contribution link, then its contribution is stored as an attribute of the traversal link.

**Fig. 9 The Navigation Metamodel**

Table III: The Transformation of the Navigation WebGRL Diagram to the NM

<table> <thead> <tr> <th>Navigation WebGRL Element</th> <th>Navigation Model Element</th> </tr> </thead> <tbody> <tr> <td>Navigation Soft Goals</td> <td>Conditions of the Navigation Model element</td> </tr> <tr> <td>Navigation between Goals</td> <td>Navigation links/Traversal links</td> </tr> <tr> <td>Contribution Link</td> <td>Navigation links/Traversal links</td> </tr> <tr> <td>Decomposition Link</td> <td>Navigation link/Traversal link</td> </tr> <tr> <td>Tasks with Means End link</td> <td>Access primitives such as Index, Query, Showall and Guided Tour</td> </tr> </tbody> </table>

The transformation rules defined to derive the enhanced navigation model (ENM) of A-OOH from the navigation WebGRL diagram of the GOREWEB framework are as follows:

1. **Navigation Goals - Navigation Class:** By using this rule, a “home” navigational class is added to the model, which is a collection representing a menu grouping navigational links. From the “home” Navigation Class (NC) a transversal link is added to each of the generated NCs. From each navigational goal with an associated content goal or resource, a navigational class (NC) is derived. This navigation class must be derived from the domain classes represented in the domain model.
2. **Navigation Soft Goals - Navigation Conditions:** This rule transforms the navigational softgoals with their satisfaction-level values into conditions or constraints of the Navigation Model element.
3.
**Navigation2Traversal Link:** This rule checks for navigation between one or more goals; if it is detected, then a transversal link is added from the NC that represents the root navigational goal to each of the NCs representing the associated navigational goals. If the traversal link is a contribution link, the contribution value is stored as a link attribute. In the case of decomposition links, a traversal link is added between the supergoal and each of its subgoals.

4. **Task Access Primitive:** Tasks linked to navigation goals represent navigation between class objects, which provides access to the instances of the navigation class. Access primitives are additional navigation nodes required to access navigation objects. The following access primitives are defined as UML stereotypes: index, guided tour, query and showall, as explained above.

**The Method:** To derive the NM we take the navigation goals into account. All navigation goals in the WebGRL diagrams are represented as Navigation Classes for the resources used to satisfy them; these are derived from the domain classes of the Domain Model. When a navigation goal states a navigation task that leads to another navigation class, a traversal link is added between the two navigation classes, supported by access primitives depending on the requirement stated in the goal. Softgoals are represented as conditions of the navigation class, defined as member conditions on the model elements, i.e. the navigation class, an association, or conditions on an attribute of the navigation class, together with satisfaction levels.

**B. UML Profile For NAD**

The NAD concepts are specified by extending the UML concepts of class, association, constraint and tagged value. The purpose of defining a UML profile is to provide an easy mechanism for adapting the UML metamodel to elements that are specific to a particular domain, platform or method.
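Before turning to the profile details, the four transformation rules above can be illustrated with a small Python sketch. The `Goal` structure, the flat goal list, and the choice of an index as the default access primitive for tasks are illustrative assumptions, not part of the GOREWEB or A-OOH definitions:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    kind: str                          # "navigation", "softgoal" or "task" (assumed encoding)
    resource: str = ""                 # domain-model resource used to satisfy the goal, if any
    subgoals: list = field(default_factory=list)

def derive_nm(goals):
    """Sketch of the four ENM rules applied to a list of WebGRL goals."""
    classes = {"home": "menu"}         # Rule 1: a "home" collection (menu) is always added
    links, conditions = [], []
    for g in goals:
        if g.kind == "navigation" and g.resource:
            classes[g.name] = g.resource        # Rule 1: NC derived from the goal's resource
            links.append(("home", g.name))      # transversal link from the home menu
            for s in g.subgoals:
                if s.kind == "navigation":
                    links.append((g.name, s.name))   # Rule 3: decomposition -> traversal link
                elif s.kind == "task":
                    classes[s.name] = "index"        # Rule 4: access primitive (index assumed)
                    links.append((g.name, s.name))
        elif g.kind == "softgoal":
            conditions.append(g.name)           # Rule 2: softgoal -> navigation condition
    return classes, links, conditions

search = Goal("provide searchability", "navigation", "book",
              [Goal("search book by title", "task"),
               Goal("search book by author", "task")])
relevant = Goal("provide relevant links", "softgoal")
classes, links, conditions = derive_nm([search, relevant])
print(classes)
print(links)
print(conditions)
```

Running this on the two sample goals yields a home menu, one NC for the `book` resource with index primitives for the two search tasks, and the softgoal recorded as a condition.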
In this sense, the particular profile for the NAD consists of adapting the elements defined in the NAD to the UML metamodel.

Table IV: UML Profile for Navigation Model

<table> <thead> <tr> <th>WebGRL Model Elements</th> <th>Navigation Model Stereotype</th> <th>Stereotyped UML Metaclass</th> </tr> </thead> <tbody> <tr> <td>WebGRLspec</td> <td>Model</td> <td>Model</td> </tr> <tr> <td>Actor</td> <td>Navigation Class</td> <td>Class</td> </tr> <tr> <td>IntentionalElement</td> <td>Goal &amp; Resource</td> <td>Navigation Class</td> </tr> <tr> <td>IntentionalElement</td> <td>Softgoal</td> <td>Navigational Condition</td> </tr> <tr> <td>IntentionalElement</td> <td>Task</td> <td>Navigational Class</td> </tr> <tr> <td>ImportanceType</td> <td>Navigational Importance</td> <td>Enumeration</td> </tr> <tr> <td>ElementLink</td> <td>Link</td> <td>Association</td> </tr> <tr> <td>Contribution</td> <td>Navigational Link</td> <td>Association</td> </tr> <tr> <td>ContributionType</td> <td>Contribution type</td> <td>Enumeration</td> </tr> <tr> <td>Decomposition</td> <td>Navigational Link</td> <td>Association</td> </tr> <tr> <td>DecompositionType</td> <td>Decomposition type</td> <td>Enumeration</td> </tr> <tr> <td>Dependency</td> <td>Primitive 2Attribute</td> <td>Dependency</td> </tr> </tbody> </table>

The NAD Navigational Node is defined as an extension of the UML class concept, which has attributes and operations (also extensions of the UML concepts). Different stereotypes have been defined for representing the different types of Navigational Nodes (i.e. <<NavigationalClass>>, <<Menu>>, <<Index>>, <<Guided Tour>>, <<Query>> and <<Showall>>). The Navigational Class concept carries information about the name of the root concept in the Domain Model (stored in the tagged value rootConcept).

VI. CASE STUDY

In this section, we provide an example of our approach based on a company that sells books on-line.
In this case study, a company would like to manage book sales via an online bookstore, thus attracting as many clients as possible. There is also an administrator of the website to manage clients.

A. Case Study of Online Bookstore - Requirements Specification

Three actors are detected that depend on each other, namely “User”, “Administrator”, and “Online Bookstore”. The main goals of the online bookstore are to “sell books”, “provide info about books”, “facilitate payment” and “maintain customer details”, as shown in the base WebGRL diagram of Figure 11 below. To fulfill the “provide info about books” goal, the base WebGRL diagram shows its decomposition into four subgoals: “maintain reviews” (a business process goal), “provide search ability” (a navigation goal), “maintain subject list” (a navigation goal) and “present info systematically” (a presentation goal). The adaptation goal “provide personalized recommendations” is related to the content requirement on the resource “book” and the browsing history of the customer. The navigation goal to “provide searchability” is decomposed, via “enable browse of books”, into the subgoals “search book by title” and “search book by author”, which are also related to the content requirement on the resource “book”. In the same way, the goal to “provide a cart” is decomposed into tasks such as “add item” and “remove item”; these tasks are related to the content goal on the resource “cart”. Finally, the goal to “maintain customer details” leads to the subgoals “maintain personal details” and “maintain transaction and browsing history”. These goals are represented in the base WebGRL diagram, which is refined and validated by our tool to give the web-specific WebGRL diagrams.

Fig. 11 The Base WebGRL Diagram for Online Bookstore

From here we move to the refined web-specific WebGRL diagrams, namely the content and navigation WebGRL diagrams of Figures 12 and 14.
The main content goals in the content WebGRL diagram are to “provide info about books” and “maintain customer details”. The first content goal, “provide info about books”, is satisfied by the contribution of the task that displays book information such as the book cover, abstract, table of contents, etc., and by its decomposition into further content goals: “maintain author information” on the resource author, and, to “enable browse of books”, the content goal to “list categories”, which is satisfied by the resource category. Similarly, “maintain order details” requires the resource order. The goal to “maintain customer details” is further decomposed into the subgoals “maintain personal details” and “maintain transaction and browsing history”, which are fulfilled by the respective tasks attached to them, drawing on the resources customer, transaction, cart and book. Similarly, the softgoal “information and collection up to date” is stored as a condition of the domain class customer. The goal to “provide form for payment” requires the resource payment. We use this content WebGRL diagram and the transformation rules to derive the domain model shown in Figure 13.

Fig. 12 Content WebGRL Diagram

B. The Domain Model

The model transformation method applied to the content WebGRL diagram shown above is as follows. Nine domain classes are created by applying the Content2DomainClass transformation rule: one class is generated for each resource used to satisfy a content goal specified in the content WebGRL model. Moreover, we detect one generalised resource for transactions and books browsed, namely the “transaction browse” domain class, with associations between the transaction, cart, book and customer classes. Similarly, the browsing history adds an association between book and customer. Further, four tasks are added as operations to the classes customer, cart and book by executing the Task2Operation rule.

Figure 13: The Domain Model for Online Bookstore
The task of “maintain personal details”, which represents a content pattern, is used to store attributes: applying the Content2Attribute rule adds the attributes customer id, username, password, etc. to the domain class customer. Finally, we detect that the Provide Book Info requirement follows the navigational pattern; in this case the rule Navigation2Relationship adds associations among all the resources found in this pattern. Similarly, Softgoal2Condition represents the softgoal “information and collection up to date” as a condition of the domain class customer. The generated Domain Model is shown in Fig. 13.

In the refined navigation WebGRL diagram, the main navigation goal to “provide searchability” is decomposed into the subgoals “search book by title” and “search book by author”, which are also related to the content requirement on the resource “book”. In the same way, the navigation goal to “provide operation links on cart” is decomposed into three tasks: “add item”, “proceed for checkout” and “remove item from cart”. These tasks are related to the content goal on the resource “cart”. There is also a navigation link to “customer deals”. Finally, the goal “facilitate payment” is used to “place an order” for the sale of a book, “provide link for payment”, “provide link to shopping cart” and “provide form for buying books”.

Figure 14: Navigation WebGRL Diagram

C. The Navigation Model

The model transformation method applied to the navigation WebGRL diagram shown above is as follows. In Fig. 15 we can see the Navigation Model derived from the specified requirements. In the case of the Navigational Model, the rule Navigation2NavigationClass is performed, adding a home page with a collection of links (i.e. a menu). Afterwards, one NC is created for each navigational goal with an attached resource; in this case seven NCs are created from navigational goals.

Figure 15: The Navigation Model for Online Bookstore
From the menu, a transversal link to each of the created NCs is added. The rule NavigationSoftgoal2Navigation condition represents the navigation softgoal to “provide relevant links” as a constraint on the traversal links by checking the source and target navigation classes. The Navigation2Traversal rule adds traversal links to cart, order and payment to satisfy the goals of facilitate payment, place an order and provide shopping...

VII. CONCLUSION

A model transformation approach extending the GOREWEB framework from the requirements phase to the design phase has been presented in this paper. User goals can be both hard goals and soft goals; hence we need an approach that models the softgoals as well as the web-specific goals in the design phase. By applying the model transformation approach stated above, we capture the goals as well as the softgoals in the requirements phase and seamlessly transfer them to design models suited for web applications, along with a UML-compliant profile to support them.
The design models presented in this paper are the Domain Model and the Navigation Model. In future work we propose to enhance the A-OOH design model to incorporate the presentation, adaptation and business process models, with a UML profile to support them. This would reduce the probability of risks and improve the quality of the product while keeping the stakeholders’ goals in mind.

REFERENCES

4. A. Anto, “Goal identification and refinement in the specification of software-based information systems”, Dissertation, Georgia Institute of Technology, Atlanta, USA, 1997.
CBOR Encoding of Data Modeled with YANG
draft-ietf-core-yang-cbor-14

Abstract

This document defines encoding rules for serializing configuration data, state data, RPC input and RPC output, action input, action output, notifications and yang-data extension data defined within YANG modules using the Concise Binary Object Representation (CBOR, RFC 7049).

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on July 21, 2021.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Terminology and Notation
3. Properties of the CBOR Encoding
3.1. CBOR diagnostic notation
3.2. YANG Schema Item iDentifier
3.3.
Name
4. Encoding of YANG Schema Node Instances
4.1. The 'leaf'
4.1.1. Using SIDs in keys
4.1.2. Using names in keys
4.2. The 'container' and other nodes from the data tree
4.2.1. Using SIDs in keys
4.2.2. Using names in keys
4.3. The 'leaf-list'
4.3.1. Using SIDs in keys
4.3.2. Using names in keys
4.4. The 'list' and 'list' instance(s)
4.4.1. Using SIDs in keys
4.4.2. Using names in keys
4.5. The 'anydata'
4.5.1. Using SIDs in keys
4.5.2. Using names in keys
4.6. The 'anyxml'
4.6.1. Using SIDs in keys
4.6.2. Using names in keys
5. Encoding of 'yang-data' extension
5.1. Using SIDs in keys
5.2. Using names in keys
6. Representing YANG Data Types in CBOR
6.1. The unsigned integer Types
6.2. The integer Types
6.3. The 'decimal64' Type
6.4. The 'string' Type
6.5. The 'boolean' Type
6.6. The 'enumeration' Type
6.7. The 'bits' Type
6.8. The 'binary' Type
6.9. The 'leafref' Type

1.
Introduction The specification of the YANG 1.1 data modeling language [RFC7950] defines an XML encoding for data instances, i.e. contents of configuration datastores, state data, RPC inputs and outputs, action inputs and outputs, and event notifications. An additional set of encoding rules has been defined in [RFC7951] based on the JavaScript Object Notation (JSON) Data Interchange Format [RFC8259]. The aim of this document is to define a set of encoding rules for the Concise Binary Object Representation (CBOR) [RFC7049]. The resulting encoding is more compact compared to XML and JSON and more suitable for Constrained Nodes and/or Constrained Networks as defined by [RFC7228]. 2. Terminology and Notation The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. The following terms are defined in [RFC7950]: - action - anydata - anyxml The following terms are defined in [RFC8040]: - yang-data extension This specification also makes use of the following terminology: - child: A schema node defined as a child node of a container, a 3. Properties of the CBOR Encoding This document defines CBOR encoding rules for YANG data trees and their subtrees. A node from the data tree such as container, list instance, notification, RPC input, RPC output, action input and action output is serialized using a CBOR map in which each child schema node is encoded using a key and a value. This specification supports two types of CBOR keys; YANG Schema Item iDentifier (YANG SID) as defined in Section 3.2 and names as defined in Section 3.3. Each of these key types is encoded using a specific CBOR type which allows their interpretation during the deserialization process. 
Protocols or mechanisms implementing this specification can mandate the use of a specific key type. In order to minimize the size of the encoded data, the proposed mapping avoids any unnecessary meta-information beyond that natively supported by CBOR. For instance, CBOR tags are used solely in the case of SIDs not encoded as deltas, anyxml schema nodes, and the union datatype, to distinguish explicitly the use of different YANG datatypes encoded using the same CBOR major type. Unless specified otherwise by the protocol or mechanism implementing this specification, the indefinite-length encoding as defined in [RFC7049] section 2.2 SHALL be supported by CBOR decoders. Data nodes implemented using a CBOR array, map, byte string, or text string can be instantiated but empty; in this case, they are encoded with a length of zero. When schema nodes are serialized using the rules defined by this specification as part of an application payload, the payload SHOULD include information that allows each node to be identified in a stateless way, such as the SID number associated with the node, a SID delta from another SID in the application payload, the namespace-qualified name, or the instance-identifier. Examples in Section 4 include a root CBOR map with a single entry having a key set to either a namespace-qualified name or a SID. This root CBOR map is provided only as a typical usage example and is not part of the present encoding rules. Only the value within this CBOR map is compulsory.

### 3.1. CBOR diagnostic notation

Within this document, CBOR binary contents are represented using an equivalent textual form called CBOR diagnostic notation, as defined in [RFC7049] section 6. This notation is used strictly for documentation purposes and is never used in the data serialization. Table 1 below provides a summary of this notation.
<table> <thead> <tr> <th>CBOR content</th> <th>CBOR type</th> <th>Diagnostic notation</th> <th>Example</th> <th>CBOR encoding</th> </tr> </thead> <tbody> <tr> <td>Unsigned</td> <td>0</td> <td>Decimal digits</td> <td>123</td> <td>18 7B</td> </tr> <tr> <td>Negative</td> <td>1</td> <td>Decimal digits prefixed</td> <td>-123</td> <td>38 7A</td> </tr> </tbody> </table>

### Table 1: CBOR diagnostic notation summary

Note: CBOR binary contents shown in this specification are annotated with comments. These comments are delimited by slashes ("/") as defined in [RFC8610] Appendix G.6.

#### 3.2. YANG Schema Item iDentifier

Some of the items defined in YANG [RFC7950] require the use of a unique identifier. In both NETCONF [RFC6241] and RESTCONF [RFC8040], these identifiers are implemented using strings. To allow the implementation of data models defined in YANG in constrained devices and constrained networks, a more compact method to identify YANG items is required. This compact identifier, called the YANG Schema Item iDentifier, is an unsigned integer. The following items are identified using YANG SIDs (often shortened to SIDs):

- identities
- data nodes

To minimize their size, SIDs used as keys in inner CBOR maps are typically encoded using deltas. Conversion from SIDs to deltas and back to SIDs is a stateless process based solely on the data serialized or deserialized. These SIDs may also be encoded as absolute numbers when enclosed by CBOR tag 47. Mechanisms and processes used to assign SIDs to YANG items and to guarantee their uniqueness are outside the scope of the present specification. If SIDs are to be used, the present specification is used in conjunction with a specification defining this management. One example of such a specification is [I-D.ietf-core-sid].

### 3.3. Name

This specification also supports the encoding of YANG item identifiers as strings, similar to those used by the JSON Encoding of Data Modeled with YANG [RFC7951].
This approach can be used to avoid the management overhead associated with SID allocation. The main drawback is the significant increase in the size of the encoded data. YANG item identifiers implemented using names MUST be in one of the following forms:

- **simple** - the identifier of the YANG item (i.e. schema node or identity).
- **namespace qualified** - the identifier of the YANG item is prefixed with the name of the module in which this item is defined, separated by the colon character (":").

The name of a module determines the namespace of all YANG items defined in that module. If an item is defined in a submodule, then the namespace-qualified name uses the name of the main module to which the submodule belongs. The ABNF syntax [RFC5234] of a name is shown in Figure 1, where the production for "identifier" is defined in Section 14 of [RFC7950].

```
name = [identifier ":"] identifier
```

Figure 1: ABNF Production for a simple or namespace qualified name

A namespace-qualified name MUST be used for all members of a top-level CBOR map and also whenever the namespaces of a data node and its parent node differ. In all other cases, the simple form of the name SHOULD be used.

Definition example:

```
module example-foomod {
  container top {
    leaf foo {
      type uint8;
    }
  }
}

module example-barmod {
  import example-foomod {
    prefix "foomod";
  }
  augment "/foomod:top" {
    leaf bar {
      type boolean;
    }
  }
}
```

A valid CBOR encoding of the 'top' container is as follows. CBOR diagnostic notation:

```
{
  "example-foomod:top": {
    "foo": 54,
    "example-barmod:bar": true
  }
}
```

Both the 'top' container and the 'bar' leaf, defined in a different YANG module than its parent container, are encoded as namespace-qualified names. The 'foo' leaf, defined in the same YANG module as its parent container, is encoded as a simple name.

4.
Encoding of YANG Schema Node Instances

Schema node instances defined using the YANG modeling language are encoded using CBOR [RFC7049] based on the rules defined in this section. We assume that the reader is already familiar with both YANG [RFC7950] and CBOR [RFC7049].

4.1. The 'leaf'

A 'leaf' MUST be encoded according to its datatype using one of the encoding rules specified in Section 6.

The following examples show the encoding of a 'hostname' leaf using a SID or a name.

Definition example from [RFC7317]:

leaf hostname {
  type inet:domain-name;
}

4.1.1. Using SIDs in keys

CBOR diagnostic notation:

{
  1752 : "myhost.example.com"    / hostname (SID 1752) /
}

CBOR encoding:

A1                                         # map(1)
   19 06D8                                 # unsigned(1752)
   72                                      # text(18)
      6D79686F73742E6578616D706C652E636F6D # "myhost.example.com"

4.1.2. Using names in keys

CBOR diagnostic notation:

{
  "ietf-system:hostname" : "myhost.example.com"
}

CBOR encoding:

A1                                             # map(1)
   74                                          # text(20)
      696574662D73797374656D3A686F73746E616D65 # "ietf-system:hostname"
   72                                          # text(18)
      6D79686F73742E6578616D706C652E636F6D     # "myhost.example.com"

4.2. The 'container' and other nodes from the data tree

Containers, list instances, notification contents, rpc inputs, rpc outputs, action inputs and action outputs MUST be encoded using a CBOR map data item (major type 5). A map is comprised of pairs of data items, with each pair consisting of a key and a value. Each key within the CBOR map is set to a schema node identifier; each value is set to the value of this schema node instance according to the instance datatype.

This specification supports two types of CBOR keys: SIDs as defined in Section 3.2 and names as defined in Section 3.3.

The following examples show the encoding of a 'system-state' container instance using SIDs or names.

Definition example from [RFC7317]:

typedef date-and-time {
  type string {
    pattern '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[\+\-]\d{2}:\d{2})';
  }
}
container system-state {
  container clock {
    leaf current-datetime {
      type date-and-time;
    }
    leaf boot-datetime {
      type date-and-time;
    }
  }
}

4.2.1. Using SIDs in keys

In the context of containers and other nodes from the data tree, CBOR map keys within inner CBOR maps can be encoded using deltas or SIDs.
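The 'hostname' encodings in Section 4.1 can be reproduced mechanically. The following non-normative sketch hand-rolls just enough of a CBOR encoder (major types 0, 3 and 5 of [RFC7049]) to emit both the SID-keyed and the name-keyed forms; the 'ietf-system' module prefix is taken from [RFC7317]:

```python
def enc_head(major, arg):
    # Encode a CBOR initial byte plus its argument (RFC 7049 section 2).
    if arg < 24:
        return bytes([(major << 5) | arg])
    if arg < 0x100:
        return bytes([(major << 5) | 24, arg])
    if arg < 0x10000:
        return bytes([(major << 5) | 25]) + arg.to_bytes(2, "big")
    if arg < 0x100000000:
        return bytes([(major << 5) | 26]) + arg.to_bytes(4, "big")
    return bytes([(major << 5) | 27]) + arg.to_bytes(8, "big")

def enc_uint(n):      # major type 0 (unsigned integer)
    return enc_head(0, n)

def enc_text(s):      # major type 3 (text string)
    b = s.encode("utf-8")
    return enc_head(3, len(b)) + b

def enc_map(pairs):   # major type 5; keys and values already encoded
    out = enc_head(5, len(pairs))
    for k, v in pairs:
        out += k + v
    return out

# 'hostname' leaf keyed by its SID (1752) ...
sid_form = enc_map([(enc_uint(1752), enc_text("myhost.example.com"))])
# ... and keyed by its namespace qualified name
name_form = enc_map([(enc_text("ietf-system:hostname"),
                      enc_text("myhost.example.com"))])

print(sid_form.hex())   # a11906d8726d79686f73742e6578616d706c652e636f6d
print(name_form.hex())
```

The first hex string matches the bytes shown in Section 4.1.1 (A1 19 06D8 72 ...), illustrating that the SID-keyed form needs 3 bytes for the key where the name-keyed form needs 21.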
In the case of deltas, they MUST be encoded using a CBOR unsigned integer (major type 0) or a CBOR negative integer (major type 1), depending on the actual delta value. In the case of SIDs, they are encoded using the SID value enclosed by CBOR tag 47 as defined in Section 9.3.

Delta values are computed as follows:

- In the case of a 'container', deltas are equal to the SID of the current schema node minus the SID of the parent 'container'.
- In the case of a 'list', deltas are equal to the SID of the current schema node minus the SID of the parent 'list'.
- In the case of an 'rpc input' or 'rpc output', deltas are equal to the SID of the current schema node minus the SID of the 'rpc'.
- In the case of an 'action input' or 'action output', deltas are equal to the SID of the current schema node minus the SID of the 'action'.
- In the case of a 'notification content', deltas are equal to the SID of the current schema node minus the SID of the 'notification'.

CBOR diagnostic notation:

{
  1720 : {                              / system-state (SID 1720) /
    1 : {                               / clock (SID 1721) /
      2 : "2015-10-02T14:47:24Z-05:00", / current-datetime (SID 1723) /
      1 : "2015-09-15T09:12:58Z-05:00"  / boot-datetime (SID 1722) /
    }
  }
}

CBOR encoding:

A1                                      # map(1)
   19 06B8                              # unsigned(1720)
   A1                                   # map(1)
      01                                # unsigned(1)
      A2                                # map(2)
         02                             # unsigned(2)
         78 1A                          # text(26)
            323031352D31302D30325431343A34373A32345A2D30353A3030
         01                             # unsigned(1)
         78 1A                          # text(26)
            323031352D30392D31355430393A31323A35385A2D30353A3030

4.2.2. Using names in keys

CBOR map keys implemented using names MUST be encoded using a CBOR text string data item (major type 3). A namespace-qualified name MUST be used each time the namespace of a schema node and its parent differ. In all other cases, the simple form of the name MUST be used. Names and namespaces are defined in [RFC7951] section 4.

The following example shows the encoding of a 'system-state' container instance using names.
Definition example from [RFC7317]:

typedef date-and-time {
  type string {
    pattern '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[\+\-]\d{2}:\d{2})';
  }
}
container system-state {
  container clock {
    leaf current-datetime {
      type date-and-time;
    }
    leaf boot-datetime {
      type date-and-time;
    }
  }
}

Internet-Draft CBOR Encoding of Data Modeled with YANG January 2021

CBOR diagnostic notation:

{
  "ietf-system:system-state" : {
    "clock" : {
      "current-datetime" : "2015-10-02T14:47:24Z-05:00",
      "boot-datetime" : "2015-09-15T09:12:58Z-05:00"
    }
  }
}

CBOR encoding:

A1                                                 # map(1)
   78 18                                           # text(24)
      696574662D73797374656D3A73797374656D2D7374617465 # "ietf-system:system-state"
   A1                                              # map(1)
      65                                           # text(5)
         636C6F636B                                # "clock"
      A2                                           # map(2)
         70                                        # text(16)
            63757272656E742D6461746574696D65       # "current-datetime"
         78 1A                                     # text(26)
            323031352D31302D30325431343A34373A32345A2D30353A3030
         6D                                        # text(13)
            626F6F742D6461746574696D65             # "boot-datetime"
         78 1A                                     # text(26)
            323031352D30392D31355430393A31323A35385A2D30353A3030

4.3. The 'leaf-list'

A leaf-list MUST be encoded using a CBOR array data item (major type 4). Each entry of this array MUST be encoded according to its datatype using one of the encoding rules specified in Section 6.

The following example shows the encoding of the 'search' leaf-list instance containing two entries, "ietf.org" and "ieee.org".

Definition example from [RFC7317]:

typedef domain-name {
  type string {
    length "1..253";
    pattern '((([a-zA-Z0-9_]([a-zA-Z0-9\-_]){0,61})?[a-zA-Z0-9]\.)*([a-zA-Z0-9_]([a-zA-Z0-9\-_]){0,61})?[a-zA-Z0-9]\.?)|\.';
  }
}
leaf-list search {
  type domain-name;
}

4.3.1. Using SIDs in keys

CBOR diagnostic notation:

{
  1746 : [ "ietf.org", "ieee.org" ]    / search (SID 1746) /
}

CBOR encoding:

A1                                      # map(1)
   19 06D2                              # unsigned(1746)
   82                                   # array(2)
      68                                # text(8)
         696574662E6F7267               # "ietf.org"
      68                                # text(8)
         696565652E6F7267               # "ieee.org"

4.3.2. Using names in keys

CBOR diagnostic notation:

{
  "ietf-system:search" : [ "ietf.org", "ieee.org" ]
}

CBOR encoding:

A1                                         # map(1)
   72                                      # text(18)
      696574662D73797374656D3A736561726368 # "ietf-system:search"
   82                                      # array(2)
      68                                   # text(8)
         696574662E6F7267                  # "ietf.org"
      68                                   # text(8)
         696565652E6F7267                  # "ieee.org"

4.4. The 'list' and 'list' instance(s)

A list or a subset of a list MUST be encoded using a CBOR array data item (major type 4).
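Deserialization is equally mechanical and, as noted in Section 3.2, stateless. The following non-normative sketch decodes the SID-keyed 'search' leaf-list bytes from Section 4.3.1 with a minimal CBOR reader restricted to the major types used in that example:

```python
def decode(buf, pos=0):
    # Minimal CBOR reader for unsigned integers, text strings,
    # arrays and maps (RFC 7049 major types 0, 3, 4 and 5).
    ib = buf[pos]
    mt, ai = ib >> 5, ib & 0x1F
    pos += 1
    if ai < 24:
        val = ai
    elif ai == 24:
        val = buf[pos]; pos += 1
    elif ai == 25:
        val = int.from_bytes(buf[pos:pos + 2], "big"); pos += 2
    elif ai == 26:
        val = int.from_bytes(buf[pos:pos + 4], "big"); pos += 4
    else:
        raise ValueError("argument encoding not needed for this sketch")
    if mt == 0:                      # unsigned integer
        return val, pos
    if mt == 3:                      # text string of 'val' bytes
        return buf[pos:pos + val].decode("utf-8"), pos + val
    if mt == 4:                      # array of 'val' items
        items = []
        for _ in range(val):
            item, pos = decode(buf, pos)
            items.append(item)
        return items, pos
    if mt == 5:                      # map of 'val' key/value pairs
        m = {}
        for _ in range(val):
            k, pos = decode(buf, pos)
            v, pos = decode(buf, pos)
            m[k] = v
        return m, pos
    raise ValueError("unsupported major type %d" % mt)

# Bytes of the Section 4.3.1 example: { 1746 : ["ietf.org", "ieee.org"] }
wire = bytes.fromhex("a11906d28268696574662e6f726768696565652e6f7267")
value, _ = decode(wire)
print(value)   # {1746: ['ietf.org', 'ieee.org']}
```

No schema or state beyond the bytes themselves is needed to recover the structure; mapping the SID 1746 back to the 'search' leaf-list is a separate, schema-driven step.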
Each list instance within this CBOR array is encoded using a CBOR map data item (major type 5) based on the encoding rules of a collection as defined in Section 4.2. It is important to note that this encoding rule also applies to a single 'list' instance.

The following examples show the encoding of a 'server' list using SIDs or names.

Definition example from [RFC7317]:

4.4.1. Using SIDs in keys

The encoding rules of each 'list' instance are defined in Section 4.2.1. Deltas of list members are equal to the SID of the current schema node minus the SID of the 'list'.

CBOR diagnostic notation:

{
  1756 : [
    {
      3 : "NRC TIC server",
      5 : {
        1 : "tic.nrc.ca",
        2 : 123
      },
      1 : 0,
      2 : false,
      4 : true
    },
    {
      3 : "NRC TAC server",
      5 : {
        1 : "tac.nrc.ca"
      }
    }
  ]
}

CBOR encoding:

A1                                       # map(1)
   19 06DC                               # unsigned(1756)
   82                                    # array(2)
      A5                                 # map(5)
         03                              # unsigned(3)
         6E                              # text(14)
            4E52432054494320736572766572 # "NRC TIC server"
         05                              # unsigned(5)
         A2                              # map(2)
            01                           # unsigned(1)
            6A                           # text(10)
               7469632E6E72632E6361      # "tic.nrc.ca"
            02                           # unsigned(2)
            18 7B                        # unsigned(123)
         01                              # unsigned(1)
         00                              # unsigned(0)
         02                              # unsigned(2)
         F4                              # primitive(20)
         04                              # unsigned(4)
         F5                              # primitive(21)
      A2                                 # map(2)
         03                              # unsigned(3)
         6E                              # text(14)
            4E52432054414320736572766572 # "NRC TAC server"
         05                              # unsigned(5)
         A1                              # map(1)
            01                           # unsigned(1)
            6A                           # text(10)
               7461632E6E72632E6361      # "tac.nrc.ca"

4.4.2. Using names in keys

The encoding rules of each 'list' instance are defined in Section 4.2.2.

CBOR diagnostic notation:

```json
{
  "ietf-system:server" : [
    {
      "name" : "NRC TIC server",
      "udp" : {
        "address" : "tic.nrc.ca",
        "port" : 123
      },
      "association-type" : 0,
      "iburst" : false,
      "prefer" : true
    },
    {
      "name" : "NRC TAC server",
      "udp" : {
        "address" : "tac.nrc.ca"
      }
    }
  ]
}
```

CBOR encoding:

A1                                           # map(1)
   72                                        # text(18)
      696574662D73797374656D3A736572766572   # "ietf-system:server"
   82                                        # array(2)
      A5                                     # map(5)
         64                                  # text(4)
            6E616D65                         # "name"
         6E                                  # text(14)
            4E52432054494320736572766572     # "NRC TIC server"
         63                                  # text(3)
            756470                           # "udp"
         A2                                  # map(2)
            67                               # text(7)
               61646472657373                # "address"
            6A                               # text(10)
               7469632E6E72632E6361          # "tic.nrc.ca"
            64                               # text(4)
               706F7274                      # "port"
            18 7B                            # unsigned(123)
         70                                  # text(16)
            6173736F63696174696F6E2D74797065 # "association-type"
         00                                  # unsigned(0)
         66                                  # text(6)
            696275727374                     # "iburst"
         F4                                  # primitive(20)
         66                                  # text(6)
            707265666572                     # "prefer"
         F5                                  # primitive(21)
      A2                                     # map(2)
         64                                  # text(4)
            6E616D65                         # "name"
         6E                                  # text(14)
            4E52432054414320736572766572     # "NRC TAC server"
         63                                  # text(3)
            756470                           # "udp"
         A1                                  # map(1)
            67                               # text(7)
               61646472657373                # "address"
            6A                               # text(10)
               7461632E6E72632E6361          # "tac.nrc.ca"

4.5. The 'anydata'

An anydata serves as a container for an arbitrary set of schema nodes that otherwise appear as normal YANG-modeled data. An anydata instance is encoded using the same rules as a container, i.e., a CBOR map. The requirement that anydata content can be modeled by YANG implies the following:

- CBOR map keys of any inner schema nodes MUST be set to valid deltas or names.
- The CBOR array MUST contain either unique scalar values (as a leaf-list, see Section 4.3), or maps (as a list, see Section 4.4).
- CBOR map values MUST follow the encoding rules of one of the datatypes listed in Section 4.

The following example shows a possible use of an anydata. In this example, an anydata is used to define a schema node containing a notification event; this schema node can be part of a YANG list to create an event logger.

Definition example:

module event-log {
  ...
  anydata last-event;                 # SID 60123
}

This example also assumes the assistance of the following notification.

module example-port {
  ...
  notification example-port-fault {   # SID 60200
    leaf port-name {                  # SID 60201
      type string;
    }
    leaf port-fault {                 # SID 60202
      type string;
    }
  }
}

4.5.1. Using SIDs in keys

CBOR diagnostic notation:

```
{
  60123 : {            / last-event (SID 60123) /
    77 : {             / example-port-fault (SID 60200) /
      1 : "0/4/21",    / port-name (SID 60201) /
      2 : "Open pin 2" / port-fault (SID 60202) /
    }
  }
}
```

CBOR encoding:

```
A1                           # map(1)
   19 EADB                   # unsigned(60123)
   A1                        # map(1)
      18 4D                  # unsigned(77)
      A2                     # map(2)
         01                  # unsigned(1)
         66                  # text(6)
            302F342F3231     # "0/4/21"
         02                  # unsigned(2)
         6A                  # text(10)
            4F70656E2070696E2032 # "Open pin 2"
```

In some implementations, it might be simpler to use the absolute SID tag encoding for the anydata root element. The resulting encoding is as follows:

```
{
  60123 : {
    47(60200) : {
      1 : "0/4/21",
      2 : "Open pin 2"
    }
  }
}
```

4.5.2. Using names in keys

CBOR diagnostic notation:

```
{
  "event-log:last-event" : {
    "example-port:example-port-fault" : {
      "port-name" : "0/4/21",
      "port-fault" : "Open pin 2"
    }
  }
}
```

CBOR encoding:

```
A1                              # map(1)
   74                           # text(20)
      6576656E742D6C6F673A6C6173742D6576656E74 # "event-log:last-event"
   A1                           # map(1)
      78 1F                     # text(31)
         6578616D706C652D706F72743A6578616D706C652D706F72742D6661756C74
      A2                        # map(2)
         69                     # text(9)
            706F72742D6E616D65  # "port-name"
         66                     # text(6)
            302F342F3231        # "0/4/21"
         6A                     # text(10)
            706F72742D6661756C74 # "port-fault"
         6A                     # text(10)
            4F70656E2070696E2032 # "Open pin 2"
```

4.6. The 'anyxml'

An anyxml schema node is used to serialize an arbitrary CBOR content, i.e., its value can be any CBOR data item. An anyxml value MAY contain CBOR data items tagged with one of the tags listed in Section 9.3; these tags shall be supported.

The following example shows a valid CBOR encoded instance consisting of a CBOR array containing the CBOR simple values 'true', 'null' and 'true'.

Definition example from [RFC7951]:

```
module bar-module {
  ...
  anyxml bar;    # SID 60000
}
```

4.6.1. Using SIDs in keys

CBOR diagnostic notation:

```
{
  60000 : [true, null, true]    / bar (SID 60000) /
}
```

CBOR encoding:

```
A1                 # map(1)
   19 EA60         # unsigned(60000)
   83              # array(3)
      F5           # primitive(21)
      F6           # primitive(22)
      F5           # primitive(21)
```

4.6.2.
Using names in keys

CBOR diagnostic notation:

```
{
  "bar-module:bar" : [true, null, true]
}
```

CBOR encoding:

```
A1                                  # map(1)
   6E                               # text(14)
      6261722D6D6F64756C653A626172  # "bar-module:bar"
   83                               # array(3)
      F5                            # primitive(21)
      F6                            # primitive(22)
      F5                            # primitive(21)
```

5. Encoding of 'yang-data' extension

The yang-data extension [RFC8040] is used to define data structures in YANG that are not intended to be implemented as part of a datastore. The yang-data extension MUST be encoded using the encoding rules of nodes of data trees as defined in Section 4.2. Just like YANG containers, yang-data extensions can be encoded using either SIDs or names.

Definition example from [I-D.ietf-core-comi] Appendix A:

module ietf-coreconf {
  ...
  import ietf-restconf {
    prefix rc;
  }

  rc:yang-data yang-errors {
    container error {
      leaf error-tag {
        type identityref {
          base error-tag;
        }
      }
      leaf error-app-tag {
        type identityref {
          base error-app-tag;
        }
      }
      leaf error-data-node {
        type instance-identifier;
      }
      leaf error-message {
        type string;
      }
    }
  }
}

5.1. Using SIDs in keys

The yang-data extensions encoded using SIDs are carried in a CBOR map containing a single item pair. The key of this item is set to the SID assigned to the yang-data extension container; the value is set to the CBOR encoding of this container as defined in Section 4.2.

This example shows a serialization of the yang-errors yang-data extension as defined in [I-D.ietf-core-comi] using SIDs as defined in Section 3.2.
CBOR diagnostic notation:

```
{
  1024 : {
    4 : 1011,              / error-tag (SID 1028) /
                           /   = invalid-value (SID 1011) /
    1 : 1018,              / error-app-tag (SID 1025) /
                           /   = not-in-range (SID 1018) /
    2 : 1740,              / error-data-node (SID 1026) /
                           /   = timezone-utc-offset (SID 1740) /
    3 : "Maximum exceeded" / error-message (SID 1027) /
  }
}
```

CBOR encoding:

```
A1                         # map(1)
   19 0400                 # unsigned(1024)
   A4                      # map(4)
      04                   # unsigned(4)
      19 03F3              # unsigned(1011)
      01                   # unsigned(1)
      19 03FA              # unsigned(1018)
      02                   # unsigned(2)
      19 06CC              # unsigned(1740)
      03                   # unsigned(3)
      70                   # text(16)
         4D6178696D756D206578636565646564 # "Maximum exceeded"
```

5.2. Using names in keys

The yang-data extensions encoded using names are carried in a CBOR map containing a single item pair. The key of this item is set to the namespace qualified name of the yang-data extension container; the value is set to the CBOR encoding of this container as defined in Section 4.2.

This example shows a serialization of the yang-errors yang-data extension as defined in [I-D.ietf-core-comi] using names as defined in Section 3.3.

CBOR diagnostic notation:

```json
{
  "ietf-coreconf:error" : {
    "error-tag" : "invalid-value",
    "error-app-tag" : "not-in-range",
    "error-data-node" : "timezone-utc-offset",
    "error-message" : "Maximum exceeded"
  }
}
```

CBOR encoding:

```
A1                                       # map(1)
   73                                    # text(19)
      696574662D636F7265636F6E663A6572726F72 # "ietf-coreconf:error"
   A4                                    # map(4)
      69                                 # text(9)
         6572726F722D746167              # "error-tag"
      6D                                 # text(13)
         696E76616C69642D76616C7565      # "invalid-value"
      6D                                 # text(13)
         6572726F722D6170702D746167      # "error-app-tag"
```

6. Representing YANG Data Types in CBOR

The CBOR encoding of an instance of a leaf or leaf-list schema node depends on the built-in type of that schema node. The following subsections define the CBOR encoding of each built-in type supported by YANG, as listed in [RFC7950] section 4.2.4. Each subsection shows an example value assigned to a schema node instance of the discussed built-in type.

6.1.
The unsigned integer Types

Leafs of type uint8, uint16, uint32 and uint64 MUST be encoded using a CBOR unsigned integer data item (major type 0).

The following example shows the encoding of a 'mtu' leaf instance set to 1280 bytes.

Definition example from [RFC8344]:

leaf mtu {
  type uint16 {
    range "68..max";
  }
}

CBOR diagnostic notation: 1280

CBOR encoding: 19 0500

6.2. The integer Types

Leafs of type int8, int16, int32 and int64 MUST be encoded using either a CBOR unsigned integer (major type 0) or a CBOR negative integer (major type 1), depending on the actual value.

The following example shows the encoding of a 'timezone-utc-offset' leaf instance set to -300 minutes.

Definition example from [RFC7317]:

leaf timezone-utc-offset {
  type int16 {
    range "-1500 .. 1500";
  }
}

CBOR diagnostic notation: -300

CBOR encoding: 39 012B

6.3. The 'decimal64' Type

Leafs of type decimal64 MUST be encoded using a decimal fraction as defined in [RFC7049] section 2.4.3.

The following example shows the encoding of a 'my-decimal' leaf instance set to 2.57.

Definition example from [RFC7317]:

leaf my-decimal {
  type decimal64 {
    fraction-digits 2;
    range "1 .. 3.14 | 10 | 20..max";
  }
}

CBOR diagnostic notation: 4([-2, 257])

CBOR encoding: C4 82 21 19 0101

6.4. The 'string' Type

Leafs of type string MUST be encoded using a CBOR text string data item (major type 3).

The following example shows the encoding of a 'name' leaf instance set to "eth0".

Definition example from [RFC8343]:

leaf name {
  type string;
}

CBOR diagnostic notation: "eth0"

CBOR encoding: 64 65746830

6.5. The 'boolean' Type

Leafs of type boolean MUST be encoded using a CBOR simple value 'true' (major type 7, additional information 21) or 'false' (major type 7, additional information 20).

The following example shows the encoding of an 'enabled' leaf instance set to 'true'.

Definition example from [RFC7317]:

leaf enabled {
  type boolean;
}

CBOR diagnostic notation: true

CBOR encoding: F5

6.6.
The 'enumeration' Type

Leafs of type enumeration MUST be encoded using a CBOR unsigned integer (major type 0) or a CBOR negative integer (major type 1), depending on the actual value. Enumeration values are either explicitly assigned using the YANG statement 'value' or automatically assigned based on the algorithm defined in [RFC7950] section 9.6.4.2.

The following example shows the encoding of an 'oper-status' leaf instance set to 'testing'.

Definition example from [RFC7317]:

leaf oper-status {
  type enumeration {
    enum up { value 1; }
    enum down { value 2; }
    enum testing { value 3; }
    enum unknown { value 4; }
    enum dormant { value 5; }
    enum not-present { value 6; }
    enum lower-layer-down { value 7; }
  }
}

CBOR diagnostic notation: 3

CBOR encoding: 03

Values of 'enumeration' types defined in a 'union' type MUST be encoded using a CBOR text string data item (major type 3) and MUST contain one of the names assigned by 'enum' statements in YANG. The encoding MUST be enclosed by the enumeration CBOR tag as specified in Section 9.3.

Definition example from [RFC7950]:

type union {
  type int32;
  type enumeration {
    enum unbounded;
  }
}

CBOR diagnostic notation: 44("unbounded")

CBOR encoding: D8 2C 69 756E626F756E646564

6.7. The 'bits' Type

Bit positions are either explicitly assigned using the YANG statement 'position' or automatically assigned based on the algorithm defined in [RFC7950] section 9.7.4.2. Each element of type bits can therefore be seen as a set of bit positions (offsets from position 0), each of which has the value 1 if the bit is set and 0 if it is not.

Leafs of type bits MUST be encoded using either a CBOR array (major type 4) or a CBOR byte string (major type 2).
In case the CBOR array representation is used, each element is either a positive integer (major type 0, with the value 0 disallowed) that is used to calculate the offset of the next byte string, or a byte string (major type 2) that carries the information about which bits are set. The initial offset value is 0, and each unsigned integer increases the offset value of the next byte string by the integer value multiplied by 8. For example, if the bit offset is 0 and there is an integer with value 5, the first byte of the byte string that follows will represent bit positions 40 to 47, both ends included. If the byte string has a second byte, it will carry information about bits 48 to 55, and so on. Within each byte, bits are assigned from least to most significant. After the byte string, the offset is increased by the number of bytes in the byte string multiplied by 8. Bytes with no bits set at the end of the byte string are removed. An example follows.

The following example shows the encoding of an 'alarm-state' leaf instance with the 'critical', 'warning' and 'indeterminate' flags set.

typedef alarm-state {
  type bits {
    bit unknown;
    bit under-repair;
    bit critical;
    bit major;
    bit minor;
    bit warning {
      position 8;
    }
    bit indeterminate {
      position 128;
    }
  }
}
leaf alarm-state {
  type alarm-state;
}

CBOR diagnostic notation: [h'0401', 14, h'01']

CBOR encoding: 83 42 0401 0E 41 01

In a number of cases the array would only need to have one element: a byte string with a small number of bytes inside. In this case, the array is omitted and only the byte string that would have been inside it is encoded. To illustrate this, let us consider the same example YANG definition, but this time encoding only the 'under-repair' and 'critical' flags.
The result would be:

CBOR diagnostic notation: h'06'

CBOR encoding: 41 06

Elements in the array MUST be either byte strings or positive unsigned integers, where byte strings and integers MUST alternate, i.e., adjacent byte strings or adjacent integers are an error. An array with a single byte string MUST instead be encoded as just that byte string. An array with a single positive integer is an error.

Values of 'bits' types defined in a 'union' type MUST be encoded using a CBOR text string data item (major type 3) and MUST contain a space-separated sequence of names of the 'bit' values that are set. The encoding MUST be enclosed by the bits CBOR tag as specified in Section 9.3.

The following example shows the encoding of an 'alarm-state-2' leaf instance defined using a union type, with the 'under-repair' and 'critical' flags set.

Definition example:

```
leaf alarm-state-2 {
  type union {
    type alarm-state;
    type bits {
      bit extra-flag;
    }
  }
}
```

CBOR diagnostic notation: 43("under-repair critical")

CBOR encoding: D8 2B 75 756E6465722D72657061697220637269746963616C

6.8. The 'binary' Type

Leafs of type binary MUST be encoded using a CBOR byte string data item (major type 2).

The following example shows the encoding of an 'aes128-key' leaf instance set to 0x1f1ce6a3f42660d888d92a4d8030476e.

Definition example:

leaf aes128-key {
  type binary {
    length 16;
  }
}

CBOR diagnostic notation: h'1F1CE6A3F42660D888D92A4D8030476E'

CBOR encoding: 50 1F1CE6A3F42660D888D92A4D8030476E

6.9. The 'leafref' Type

Leafs of type leafref MUST be encoded using the rules of the schema node referenced by the 'path' YANG statement.

The following example shows the encoding of an 'interface-state-ref' leaf instance set to "eth1".
Definition example from [RFC8343]:

typedef interface-state-ref {
  type leafref {
    path "/interfaces-state/interface/name";
  }
}
container interfaces-state {
  list interface {
    key "name";
    leaf name {
      type string;
    }
    leaf-list higher-layer-if {
      type interface-state-ref;
    }
  }
}

CBOR diagnostic notation: "eth1"

CBOR encoding: 64 65746831

6.10. The 'identityref' Type

This specification supports two approaches for encoding identityref: a YANG Schema Item iDentifier as defined in Section 3.2, or a name as defined in [RFC7951] section 6.8.

6.10.1. SIDs as identityref

When schema nodes of type identityref are implemented using SIDs, they MUST be encoded using a CBOR unsigned integer data item (major type 0). (Note that no delta mechanism is employed for SIDs used as identityref.)

The following example shows the encoding of a 'type' leaf instance set to the value 'iana-if-type:ethernetCsmacd' (SID 1880).

Definition example from [RFC7317]:

identity interface-type {
}
identity iana-interface-type {
  base interface-type;
}
identity ethernetCsmacd {
  base iana-interface-type;
}
leaf type {
  type identityref {
    base interface-type;
  }
}

CBOR diagnostic notation: 1880

CBOR encoding: 19 0758

6.10.2. Name as identityref

Alternatively, an identityref MAY be encoded using a name as defined in Section 3.3. When names are used, identityref MUST be encoded using a CBOR text string data item (major type 3). If the identity is defined in a different module than the leaf node containing the identityref data node, the namespace qualified form MUST be used. Otherwise, both the simple and namespace qualified forms are permitted. Names and namespaces are defined in Section 3.3.

The following example shows the encoding of the identity 'iana-if-type:ethernetCsmacd' using its namespace qualified name. This example is described in Section 6.10.1.

CBOR diagnostic notation: "iana-if-type:ethernetCsmacd"

CBOR encoding: 78 1B 69616E612D69662D747970653A65746865726E657443736D616364

6.11.
The 'empty' Type

Leafs of type empty MUST be encoded using the CBOR null value (major type 7, additional information 22).

The following example shows the encoding of an 'is-router' leaf instance when present.

Definition example from [RFC8344]:

leaf is-router {
  type empty;
}

CBOR diagnostic notation: null

CBOR encoding: F6

6.12. The 'union' Type

Leafs of type union MUST be encoded using the rules associated with one of the types listed in the union. When used in a union, the following YANG datatypes are enclosed by a CBOR tag to avoid confusion between different YANG datatypes encoded using the same CBOR major type:

- bits
- enumeration
- identityref
- instance-identifier

See Section 9.3 for the assigned value of these CBOR tags. As mentioned in Section 6.6 and in Section 6.7, 'enumeration' and 'bits' values are encoded as CBOR text string data items (major type 3) when defined within a 'union' type.

The following example shows the encoding of an 'ip-address' leaf instance set to "2001:db8:a0b:12f0::1".

Definition example from [RFC7317]:

typedef ipv4-address {
  ...
}
typedef ipv6-address {
  type string {
    pattern '((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}'
          + '((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|'
          + '(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\.){3}'
          + '(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))'
          + '(%[\p{N}\p{L}]+)?';
  }
}
typedef ip-address {
  type union {
    type ipv4-address;
    type ipv6-address;
  }
}
leaf address {
  type inet:ip-address;
}

CBOR diagnostic notation: "2001:db8:a0b:12f0::1"

CBOR encoding: 74 323030313A6462383A6130623A313266303A3A31

6.13. The 'instance-identifier' Type

This specification supports two approaches for encoding an instance-identifier, one based on YANG Schema Item iDentifiers as defined in Section 3.2 and one based on names as defined in Section 3.3.

6.13.1. SIDs as instance-identifier

SIDs uniquely identify a schema node. In the case of a single instance schema node, i.e.
a schema node defined at the root of a YANG module or submodule or a schema node defined within a container, the SID is sufficient to identify this instance. In the case of a schema node that is a member of a YANG list, a SID is combined with the list key(s) to identify each instance within the YANG list(s).

Single instance schema nodes MUST be encoded using a CBOR unsigned integer data item (major type 0) set to the targeted schema node SID. Schema nodes that are members of a YANG list MUST be encoded using a CBOR array data item (major type 4) containing the following entries:

- The first entry MUST be encoded as a CBOR unsigned integer data item (major type 0) set to the targeted schema node SID.
- The following entries MUST contain the value of each key required to identify the instance of the targeted schema node. These keys MUST be ordered as defined in the 'key' YANG statement, starting from the top-level list and followed by each of the subordinate list(s).

Examples within this section assume the definition of a schema node of type 'instance-identifier':

Definition example from [RFC7950]:

```
container system {
  ...
  leaf reporting-entity {
    type instance-identifier;
  }
}
```

*First example:* The following example shows the encoding of the 'reporting-entity' value referencing data node instance "/system/contact" (SID 1741).

Definition example from [RFC7317]:

container system {
  leaf contact {
    type string;
  }
  leaf hostname {
    type inet:domain-name;
  }
}

CBOR diagnostic notation: 1741

CBOR encoding: 19 06CD

*Second example:* The following example shows the encoding of the 'reporting-entity' value referencing list instance "/system/authentication/user/authorized-key/key-data" (SID 1734) for user name "bob" and authorized-key "admin".
Definition example from [RFC7317]:

list user {
  key name;
  leaf name {
    type string;
  }
  leaf password {
    type ianach:crypt-hash;
  }
  list authorized-key {
    key name;
    leaf name {
      type string;
    }
    leaf key-data {
      type binary;
    }
  }
}

CBOR diagnostic notation: [1734, "bob", "admin"]

CBOR encoding:

83                 # array(3)
   19 06C6         # unsigned(1734)
   63              # text(3)
      626F62       # "bob"
   65              # text(5)
      61646D696E   # "admin"

6.13.2. Names as instance-identifier

An 'instance-identifier' value is encoded as a string that is analogous to the lexical representation in XML encoding; see Section 9.13.2 of [RFC7950]. However, the encoding of namespaces in instance-identifier values follows the rules stated in Section 3.3, namely:

- The leftmost (top-level) data node name is always in the namespace qualified form.
- Any subsequent data node name is in the namespace qualified form if the node is defined in a module other than that of its parent node; the simple form is used otherwise. This rule also holds for node names appearing in predicates.

For example,

/ietf-interfaces:interfaces/interface[name='eth0']/ietf-ip:ipv4/ip

is a valid instance-identifier value because the data nodes "interfaces", "interface", and "name" are defined in the module "ietf-interfaces", whereas "ipv4" and "ip" are defined in "ietf-ip".

The resulting XPath expression MUST be encoded using a CBOR text string data item (major type 3).

*First example:* This example is described in Section 6.13.1.

CBOR diagnostic notation: "/ietf-system:system/contact"

CBOR encoding:

78 1B 2F696574662D73797374656D3A73797374656D2F636F6E74616374

*Second example:* This example is described in Section 6.13.1.

CBOR diagnostic notation: "/ietf-system:system/authentication/user[name='bob']/authorized-key[name='admin']/key-data"

CBOR encoding:

78 59
2F696574662D73797374656D3A73797374656D2F61757468656E74696361
74696F6E2F757365725B6E616D653D27626F62275D2F617574686F72697A
65642D6B65795B6E616D653D2761646D696E275D2F6B65792D64617461

*Third example:* This example is described in Section 6.13.1.
CBOR diagnostic notation: "/ietf-system:system/authentication/user[name='jack']"

CBOR encoding:

78 34
2F696574662D73797374656D3A73797374656D2F61757468656E74696361
74696F6E2F757365725B6E616D653D276A61636B275D

7. Content-Types

The following Content-Type is defined:

application/yang-data+cbor; id=name:
  This Content-Type represents a CBOR YANG document containing one or multiple data node values. Each data node is identified by its associated namespace qualified name as defined in Section 3.3.

  FORMAT: CBOR map of name, instance-value

The message payload of Content-Type 'application/yang-data+cbor' is encoded using a CBOR map. Each entry within the CBOR map contains the data node identifier (i.e., its namespace qualified name) and the associated instance-value. Instance-values are encoded using the rules defined in Section 4.

8. Security Considerations

The security considerations of [RFC7049] and [RFC7950] apply.

This document defines an alternative encoding for data modeled in the YANG data modeling language. As such, this encoding does not contribute any new security issues in addition to those identified for the specific protocol or context for which it is used.

To minimize security risks, software on the receiving side SHOULD reject all messages that do not comply with the rules of this document and reply with an appropriate error message to the sender.

9. IANA Considerations

9.1. Media-Types Registry

This document adds the following Media-Type to the "Media Types" registry.

+----------------+----------------------------+-----------+
| Name           | Template                   | Reference |
+----------------+----------------------------+-----------+
| yang-data+cbor | application/yang-data+cbor | RFC XXXX  |
+----------------+----------------------------+-----------+

// RFC Ed.: replace RFC XXXX with this RFC number and remove this note.

9.2.
CoAP Content-Formats Registry

This document adds the following Content-Format to the "CoAP Content-Formats" sub-registry, within the "Constrained RESTful Environments (CoRE) Parameters" registry.

+---------------------------------+--------------+------+-----------+
| Media Type                      | Content      | ID   | Reference |
|                                 | Coding       |      |           |
+---------------------------------+--------------+------+-----------+
| application/yang-data+cbor;     | -            | TBD1 | RFC XXXX  |
| id=name                         |              |      |           |
+---------------------------------+--------------+------+-----------+

9.3. CBOR Tags Registry

This specification requires the assignment of CBOR tags for the following YANG datatypes. These tags are added to the CBOR Tags Registry as defined in section 7.2 of [RFC7049].

<table> <thead> <tr> <th>Tag</th> <th>Data Item</th> <th>Semantics</th> <th>Reference</th> </tr> </thead> <tbody> <tr> <td>43</td> <td>text string</td> <td>YANG bits datatype</td> <td>[this]</td> </tr> <tr> <td></td> <td></td> <td>; see Section 6.7</td> <td></td> </tr> <tr> <td>44</td> <td>text string</td> <td>YANG enumeration datatype</td> <td>[this]</td> </tr> <tr> <td></td> <td></td> <td>; see Section 6.6</td> <td></td> </tr> <tr> <td>45</td> <td>unsigned integer</td> <td>YANG identityref datatype</td> <td>[this]</td> </tr> <tr> <td></td> <td>or text string</td> <td>; see Section 6.10</td> <td></td> </tr> <tr> <td>46</td> <td>unsigned integer</td> <td>YANG instance-identifier datatype; see</td> <td>[this]</td> </tr> <tr> <td></td> <td>or text string</td> <td>Section 6.13</td> <td></td> </tr> <tr> <td></td> <td>or array</td> <td></td> <td></td> </tr> <tr> <td>47</td> <td>unsigned integer</td> <td>YANG Schema Item iDentifier</td> <td>[this]</td> </tr> <tr> <td></td> <td></td> <td>; see Section 3.2</td> <td></td> </tr> </tbody> </table>

10. Acknowledgments

This document has been largely inspired by the extensive work done by Andy Bierman and Peter van der Stok on [I-D.ietf-core-comi]. [RFC7951] has also been a critical input to this work. The authors would like to thank the authors and contributors to these two drafts.

11. References

11.1.
Normative References 11.2. Informative References Authors' Addresses Michel Veillette (editor) Trilliant Networks Inc. 610 Rue du Luxembourg Granby, Quebec J2J 2V2 Canada Email: michel.veillette@trilliantinc.com
[48318, 48318, 52], [48318, 48318, 53]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 48318, 0.02489]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
d46627c68f5236548727e8c98f16b2717d7af5a3
Introduction The STMCube™ initiative was originated by STMicroelectronics to ease developers' life by reducing development efforts, time and cost. STM32Cube covers the STM32 portfolio. STM32Cube Version 1.x includes: - The STM32CubeMX, a graphical software configuration tool that allows generating C initialization code using graphical wizards. - A comprehensive embedded software platform, delivered per series (such as STM32CubeL4 for STM32L4 Series) - The STM32CubeL4 HAL, an STM32 abstraction layer embedded software, ensuring maximized portability across the STM32 portfolio - A consistent set of middleware components such as RTOS, USB, STMTouch and FatFs - All embedded software utilities coming with a full set of examples The STM32CubeL4 discovery demonstration platform is built around the STM32Cube HAL, BSP and RTOS middleware components. With an LCD-glass display, a microphone, a joystick, an ST-LINK/V2 debugger/programmer and microcontrollers from the STM32L1 and STM32L4 Series, this discovery kit is ideal for evaluating STM32 ultra-low-power solutions and audio capabilities. The architecture was defined with the goal of making the STM32CubeL4 demonstration core an independent central component that can be used with several RTOS and third-party firmware libraries, through abstraction layers inserted between the demonstration core and the modules and libraries working around it. The STM32CubeL4 demonstration firmware supports STM32L476xx devices and runs on the 32L476GDISCOVERY discovery kit. Contents 1 STM32Cube overview ......................................................... 6 2 Getting started with demonstration ................................. 7 2.1 Hardware requirements ............................................... 7 2.2 Hardware configuration ............................................ 9 2.3 Power supply modes .................................................. 10 2.3.1 USB supply .....................................................
10 2.3.2 Battery supply ................................................ 10 3 Demonstration firmware package .................................... 11 3.1 Demonstration repository .......................................... 11 3.2 Demonstration architecture overview .............................. 12 3.2.1 Kernel core files ............................................. 13 3.3 STM32L476G discovery board BSP .................................. 14 4 Demonstration functional description ............................... 15 4.1 Overview .................................................................. 15 4.2 Main menu ................................................................ 15 4.3 Menu navigation ...................................................... 16 4.4 Modules .................................................................. 17 4.4.1 IDD ................................................................ 17 4.4.2 VDD ................................................................ 18 4.4.3 Audio record .................................................... 19 4.4.4 Audio player ..................................................... 20 4.4.5 Compass .......................................................... 21 4.4.6 Sound meter ...................................................... 21 4.4.7 Guitar tuner ...................................................... 22 4.4.8 Option ............................................................ 23 4.5 Audio player control ................................................... 23 5 Demonstration firmware settings ..................................... 25 5.1 Clock control ............................................................ 25 5.2 Peripherals ............................................................... 26 5.3 Interrupts / wakeup pins .................................................. 26 5.4 Low-power strategy ......................................................... 
27 5.5 FreeRTOS resources .......................................................... 27 5.5.1 Tasks ................................................................. 28 5.5.2 Message queues ....................................................... 28 5.5.3 Mutex ................................................................. 28 5.5.4 Heap ................................................................. 29 5.6 Programming firmware application ............................... 30 5.6.1 Using binary file ......................................................... 30 5.6.2 Using preconfigured projects ....................................... 30 6 Demonstration firmware footprints ................................. 31 7 Kernel description ............................................................. 32 7.1 Overview ................................................................. 32 7.2 Kernel initialization ..................................................... 32 7.3 Kernel processes and tasks ............................................ 32 7.4 Kernel event manager .................................................. 33 7.5 Module manager ........................................................ 33 7.6 Backup and settings configuration ................................. 35 7.7 Adding a new module .................................................. 35 8 Revision history .............................................................. 36 List of tables Table 1. Kernel core files ................................................................. 13 Table 2. Joystick key functions ......................................................... 16 Table 3. Frequencies per string ......................................................... 22 Table 4. Audio player control joystick key functions ......................... 23 Table 5. Clock configurations .......................................................... 25 Table 6. Oscillators and PLL description .......................................... 
25 Table 7. Used peripherals ................................................................. 26 Table 8. Demonstration firmware interrupts ...................................... 26 Table 9. Task description ................................................................. 28 Table 10. Message queues ............................................................... 28 Table 11. Heap usage ...................................................................... 29 Table 12. Modules footprint ............................................................. 31 Table 13. Document revision history ................................................ 36 List of figures Figure 1. STM32Cube block diagram ................................. 6 Figure 2. STM32L476G discovery board (top view) ......................... 8 Figure 3. STM32L476G discovery board (bottom view) ......................... 8 Figure 4. STM32L476G discovery kit jumper presentation ......................... 9 Figure 5. Folder structure .................................................. 11 Figure 6. Demonstration architecture overview ................................. 12 Figure 7. Discovery BSP structure ............................................ 14 Figure 8. Demonstration top menu ............................................. 15 Figure 9. IDD application menu structure display .............................. 17 Figure 10. IDD mode selection and result value .............................. 18 Figure 11. VDD application menu selection and result value ..................... 18 Figure 12. Audio record application menu selection ............................ 19 Figure 13. Audio recorder architecture ....................................... 19 Figure 14. Audio player application menu selection ............................ 20 Figure 15. Audio player architecture ......................................... 20 Figure 16. Compass application menu structure and display ................... 21 Figure 17. 
Sound meter application menu selection and display ............... 21 Figure 18. Guitar tuner application menu selection and display ............... 22 Figure 19. Option menu selection and display .................................. 23 Figure 20. Low-power scheme .................................................. 27 STM32Cube overview The STMCube™ initiative was originated by STMicroelectronics to ease developers' life by reducing development efforts, time and cost. STM32Cube covers the STM32 portfolio. STM32Cube Version 1.x includes: - The STM32CubeMX, a graphical software configuration tool that allows generating C initialization code using graphical wizards. - A comprehensive embedded software platform, delivered per series (such as STM32CubeL4 for STM32L4 Series) - The STM32CubeL4 HAL, an STM32 abstraction layer embedded software, ensuring maximized portability across the STM32 portfolio - A consistent set of middleware components such as RTOS, USB, TCP/IP, graphics - All embedded software utilities coming with a full set of examples. 2 Getting started with demonstration 2.1 Hardware requirements The hardware requirements to start the demonstration application are as follows: - STM32L476G discovery board (*Figure 2* and *Figure 3*) (refer to UM1879 for the discovery kit description) - One “USB type A to Mini-B” cable to power up the discovery board from the USB ST-LINK (USB connector CN1) and to run in USB-powered mode - One CR2032 battery to run in battery mode. The STM32L476G discovery kit helps to discover the ultra-low-power features and audio capabilities of the STM32L4 Series. It offers everything required for beginners and experienced users to get started quickly and develop applications easily. Based on an STM32L476VGT6 MCU, it includes an ST-LINK/V2-1 embedded debug tool interface, Idd current measurement, Quad-SPI Flash memory, an audio codec with a 3.5 mm connector, an LCD segment display (8x40), LEDs, a joystick and a USB mini-B connector. Figure 2.
STM32L476G discovery board (top view) Figure 3. STM32L476G discovery board (bottom view) 2.2 Hardware configuration The STM32Cube demonstration supports STM32L476xx devices and runs on the STM32L476G-DISCO demonstration board from STMicroelectronics. Several jumpers are used to configure the hardware mode of operation as shown in Figure 4. Figure 4. STM32L476G discovery kit jumper presentation 2.3 Power supply modes The demonstration firmware provides two modes of operation depending on the power supply mode. The richest mode is USB-powered operation, where all the applications can run at maximum clock speed, whereas the alternative mode runs from the battery. The battery mode demonstrates the low-power consumption of the MCU as well as of the hardware itself. The demonstration firmware therefore implements clock reduction mechanisms, automatically enters low-power modes in case of inactivity, and wakes up upon end-user interaction with the joystick. 2.3.1 USB supply The following jumper setup is mandatory for running the demonstration powered by the USB (ST-LINK): - Jumpers RST (JP3) / ST-LINK (CN3) present - Jumper JP5 on IDD - Jumper JP6 on 3V3 2.3.2 Battery supply The following jumper setup is mandatory for running the demonstration powered by the battery (see Figure 4): - Jumpers RST (JP3) / ST-LINK (CN3) removed - Jumper JP5 on IDD - Jumper JP6 on BATT Also ensure that a CR2032 battery is present at the rear of the board. 3 Demonstration firmware package 3.1 Demonstration repository The STM32CubeL4 demonstration firmware for the 32L476GDISCOVERY discovery kit is provided within the STM32CubeL4 firmware package as shown in Figure 5. The demonstration sources are located in the projects folder of the STM32Cube package for each supported board.
The sources are divided into five groups described as follows: - **Binary**: demonstration binary file in Hex format - **Config**: all middleware components and HAL configuration files - **Core**: contains the kernel files - **Modules**: contains the source files for the main application top level and the application modules. - **Project settings**: a folder per tool chain containing the project settings and the linker files. ### 3.2 Demonstration architecture overview The STM32CubeL4 demonstration firmware for the 32L476GDISCOVERY discovery kit is composed of a central kernel, based on a set of firmware and hardware services offered by the STM32Cube middleware and the discovery board drivers, and a set of modules mounted on the kernel and built in a modular architecture. Each module can be reused separately in a standalone application. The full set of modules is managed by the kernel, which provides access to all common resources and facilitates the addition of new modules as shown in Figure 6. Each module should provide the following functionalities and properties: 1. Display characteristics. 2. The method to start up the module. 3. The method to close down the module for low-power mode. 4. The module application core (main module process). 5. The specific configuration. 6. The error management. ![Figure 6. Demonstration architecture overview](image-url) ### 3.2.1 Kernel core files *Table 1* lists the kernel core files covering the startup, interrupt management and kernel core services.
<table> <thead> <tr> <th>Function</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>main.c</td> <td>Main program file</td> </tr> <tr> <td>stm32l4xx_it.c</td> <td>Interrupt handlers for the application</td> </tr> <tr> <td>k_menu.c</td> <td>Kernel menu and desktop manager</td> </tr> <tr> <td>k_module.c</td> <td>Modules manager</td> </tr> <tr> <td>k_startup.c</td> <td>Demonstration startup windowing process</td> </tr> <tr> <td>startup_stm32l476xx.s</td> <td>Startup file</td> </tr> </tbody> </table> 3.3 STM32L476G discovery board BSP The board drivers available within the `stm32l476g-discoveryXXX.c/.h` files (see Figure 7) implement the board capabilities and the bus link mechanism for the board components (LEDs, buttons, audio, compass, glass LCD, Quad-SPI Flash memory, etc.) **Figure 7. Discovery BSP structure** The components present on the STM32L476G discovery board are controlled by dedicated BSP drivers. These are: - The L3GD20 gyroscope in `stm32l476g_discovery_accelerometer.c/.h` - The CS43L22 audio codec in `stm32l476g_discovery_audio.c/.h` - The LSM303C e-Compass in `stm32l476g_discovery_compass.c/.h` - The LCD glass 8x40 in `stm32l476g_discovery_glass_lcd.c/.h` - The MFXSTM32L152 for Idd measurement in `stm32l476g_discovery_idd.c/.h` - The 16-Mbyte Micron N25Q128A13 Quad-SPI Flash memory in `stm32l476g_discovery_qspi.c/.h` 4 Demonstration functional description 4.1 Overview After powering the STM32L476G discovery board, the welcome message “STM32L476G-DISCOVERY DEMO” is displayed on the LCD and the first main menu application item is displayed. 4.2 Main menu Figure 8 shows the main menu application tree with navigation possibilities: ![Figure 8. Demonstration top menu](image-url) 4.3 Menu navigation The UP, DOWN, RIGHT and LEFT joystick directions allow navigating between items in the main menu and the submenus. To enter a submenu and launch *Exec function*, press the SEL pushbutton.
The SEL pushbutton designates the action of vertically pressing the top of the joystick as opposed to pressing horizontally UP, DOWN, RIGHT and LEFT. The basic joystick key functions are defined as follows: Table 2. Joystick key functions <table> <thead> <tr> <th>Joystick Key</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>DOWN</td> <td>Go to next menu/submenu item</td> </tr> <tr> <td>UP</td> <td>Go to previous menu/submenu item</td> </tr> <tr> <td>RIGHT / SEL</td> <td>Select the demonstration application/submenu item</td> </tr> <tr> <td>LEFT</td> <td>Stop and exit the demonstration application/submenu item</td> </tr> </tbody> </table> 4.4 Modules 4.4.1 IDD Overview The IDD module application measures and displays the MCU current consumption in real time depending on the selected power mode. The current is measured and calculated using a second microcontroller from the STM32L1 Series on the board. Figure 9. IDD application menu structure display Value is the Idd measurement result displayed for 2.5 seconds either in milliamperes (mA), microamperes (µA) or nanoamperes (nA). Features - Run mode at 24 MHz (voltage range 2), PLL Off, RTC/LSE Off, Flash ART On - Sleep mode at 24 MHz (voltage range 2), PLL Off, RTC/LSE Off, Flash ART On - Low-power run mode at 2 MHz, PLL Off, RTC/LSE Off, Flash ART On - Low-power sleep mode at 2 MHz, PLL Off, RTC/LSE Off, Flash ART On - Stop 2 mode, RTC/LSE Off, Flash ART Off - Standby mode, RTC/LSE Off, Flash ART Off, RAM retention Off - Shutdown mode, RTC/LSE Off, Flash ART Off **Functional description** The selection of an Idd measurement with the joystick RIGHT key executes the following sequence: - Clear the LCD - Put the HW components in low-power mode - Put the MCU in low-power mode - Wait for automatic wake-up through an external event on EXTI 13. This event is the end of the Idd measurement done on the STM32L1 MCU side. - Display the measured current value on the LCD The measured Idd value could be for example: ![Figure 10.
IDD mode selection and result value](image) **4.4.2 VDD** **Overview** The VDD module measures the supply voltage using a 12-bit ADC single conversion. ![Figure 11. VDD application menu selection and result value](image) 4.4.3 Audio record Overview The audio record module application (Record) demonstrates the audio recording capability of the STM32L476G discovery board using the MP34DT01 embedded digital microphone. ![Figure 12. Audio record application menu selection](image) **Start/stop Recording** Features - 48 kHz audio recording in .wav format. - Audio file stored in the Quad-SPI Flash memory (N25Q128A13) The LED LD5 blinks continuously during the recording. Press the joystick LEFT key to stop recording and exit from the application mode. Use the player application to play back the recorded audio samples. Architecture *Figure 13* shows the different audio recorder parts and their connections and interactions with the external components. ![Figure 13. Audio recorder architecture](image) 4.4.4 Audio player Overview The audio player module application demonstrates the audio capability of the STM32L476VGT6 MCU with earphones plugged into the 3.5 mm audio output jack mounted on the STM32L476G discovery board. Features - Audio is played in .wav format. - The audio file selection is done either from the internal Flash memory or Quad-SPI Flash memory: - Internal Flash memory: Download the file at address 0x08020000 with the ST-LINK Utility - Quad-SPI Flash memory: Audio source available after execution of the audio Record application - Volume up/down is controlled with the joystick UP/DOWN keys. - Playback may be paused/resumed with the joystick SEL key. Architecture *Figure 15* shows the different audio player parts and their connections and interactions with the external components. 4.4.5 Compass Overview The Compass module application demonstrates the integration of the LSM303C 3-axis electronic compass embedded on the STM32L476G discovery board.
Figure 16. Compass application menu structure and display Features - A real-time display, in degrees, of the direction versus the magnetic north - A calibration step on the X/Y/Z axes is required to get the motion data from the application. It consists in rotating the board by 360 degrees about all 3 axes. The calibration data are saved in the backup area. 4.4.6 Sound meter Overview The sound meter module demonstrates the perfect integration of a sound meter audio library. It uses the audio recording capability of the digital microphone embedded on the STM32L476G discovery board. Figure 17. Sound meter application menu selection and display Features - 16-bit PCM data input with a sample rate at 48 kHz (mono) - A-weighting pre-filter - Real-time display of the sound level every 250 ms with averaging done every 100 ms 4.4.7 Guitar tuner Overview The Guitar tuner module application demonstrates the perfect integration of an acoustic Guitar tuner audio library. It uses the audio recording capability of the digital microphone embedded on the STM32L476G discovery board. Status could be any of the following display outputs: - “ “: audio sample invalid.
Ensure that the STM32L476G discovery board is close to the guitar - “OK”: string in tune - “++”: string needs to be tightened - “+-”: string needs to be tightened but close to being in tune - “--”: string is too tight - “-“: string is too tight but close to being in tune Features - Standard tuning type of 6 acoustic guitar strings - 16-bit PCM data input with a sample rate at 8 kHz (mono) - Precision better than 0.5 Hz - Real-time display every 256 ms The standard tuning method is based on the following expected frequencies per string: <table> <thead> <tr> <th>String 1 (Hz) (the thickest string)</th> <th>String 2 (Hz)</th> <th>String 3 (Hz)</th> <th>String 4 (Hz)</th> <th>String 5 (Hz)</th> <th>String 6 (Hz) (the thinnest string)</th> </tr> </thead> <tbody> <tr> <td>E1 = 82.41</td> <td>A1 = 110</td> <td>D2 = 146.83</td> <td>G2 = 196</td> <td>B2 = 246.94</td> <td>E3 = 329.63</td> </tr> </tbody> </table> 4.4.8 Option Overview The option menu level provides the system control and information about the demonstration firmware. Features - Low-power mode selection - Stop2: the MCU automatically enters Stop2 mode after X seconds of inactivity - Disable: the MCU never enters low-power mode (except in the IDD demonstration) The disable mode is available only when running in USB-powered mode, not in battery mode, so as not to drain the battery.
- Demonstration version (about) - Display STM32L476G discovery firmware version and copyright 4.5 Audio player control Within the audio player application, the joystick key functions are enhanced as described in Table 4: <table> <thead> <tr> <th>Joystick key</th> <th>Function</th> <th>Brief description</th> </tr> </thead> <tbody> <tr> <td>RIGHT</td> <td>Play</td> <td>Changes the audio player state to “AUDIPLAYER_PLAY”.</td> </tr> <tr> <td></td> <td></td> <td>Reads the wave file from the storage Flash memory.</td> </tr> <tr> <td></td> <td></td> <td>Sets the frequency.</td> </tr> <tr> <td></td> <td></td> <td>Starts or resumes the audio task.</td> </tr> <tr> <td></td> <td></td> <td>Starts playing the audio stream from a data buffer using the &quot;BSP_AUDIO_OUT_Play&quot; function in the BSP audio driver.</td> </tr> <tr> <td>SEL</td> <td>Pause / Resume</td> <td>Suspends the audio task when playing and pauses the audio file stream.</td> </tr> <tr> <td></td> <td></td> <td>Resumes the audio task and the audio file stream when suspended.</td> </tr> <tr> <td>LEFT</td> <td>Stop &amp; Exit</td> <td>Closes the wave file from the storage Flash memory.</td> </tr> <tr> <td></td> <td></td> <td>Suspends the audio task.</td> </tr> <tr> <td></td> <td></td> <td>Stops audio playing.</td> </tr> <tr> <td></td> <td></td> <td>Changes the audio player state to &quot;AUDIPLAYER_STOP&quot;.</td> </tr> <tr> <td>UP</td> <td>Volume Up</td> <td>Increases the volume.</td> </tr> <tr> <td>DOWN</td> <td>Volume Down</td> <td>Decreases the volume.</td> </tr> </tbody> </table> 5 Demonstration firmware settings 5.1 Clock control The demonstration firmware takes advantage of the STM32L476VGT6 clock tree by dynamically selecting the clock configuration required by the running application/module when running in low-power mode. The following clock configurations are used in the demonstration firmware: <table> <thead> <tr> <th>Clock configuration</th> <th>Application/module</th> <th>Application/module</th> </tr> </thead> <tbody> <tr> <td>SYSCLK: 2 MHz MSI (LPRUN voltage range 2)</td> <td>USB-powered</td> <td>Demonstration startup</td> </tr> <tr> <td>SYSCLK: 16 MHz PLL from MSI 16 MHz (RUN voltage range 2)</td> <td>USB-powered</td> <td>Sound Meter</td> </tr> <tr> <td>SYSCLK: 80 MHz (PLL) from MSI 8 MHz (RUN voltage range 1)</td> <td>Demonstration startup</td> <td>-</td> </tr> </tbody> </table> The following oscillators and PLL are used in the demonstration firmware: <table> <thead> <tr> <th>Oscillators</th> <th>Application/module</th> <th>Application/module</th> </tr> </thead> <tbody> <tr> <td>MSI from 2 MHz to 16 MHz</td> <td>Demonstration startup</td> <td>Demonstration startup</td> </tr> <tr> <td>LSE</td> <td>LCD glass display</td> <td>LCD glass display</td> </tr> <tr> <td>LSI</td> <td>Internal Watchdog for automatic FW reset (error management)</td> <td>-</td> </tr> <tr> <td>PLL main</td> <td>Demonstration startup</td> <td>Sound meter</td> </tr>
<tr> <td>PLLSAI1</td> <td>All applications</td> <td>Compass</td> </tr> <tr> <td>PLLSAI1_VCO = 8 MHz * PLLSAI1N = 8 * 43 = 344 MHz</td> <td>Player</td> <td>Sound meter</td> </tr> <tr> <td>SAI_CK_x = PLLSAI1_VCO/PLLSAI1P = 344/7 = 49.142 MHz</td> <td>Record</td> <td>Guitar tuner</td> </tr> <tr> <td></td> <td></td> <td>Sound meter</td> </tr> </tbody> </table> 5.2 Peripherals The following peripherals are used in the demonstration firmware: <table> <thead> <tr> <th>Used peripherals</th> <th>Application/module</th> </tr> </thead> <tbody> <tr> <td>ADC</td> <td>VDD application</td> </tr> <tr> <td>CRC</td> <td>Audio applications</td> </tr> <tr> <td>DFSDM</td> <td>Audio record applications</td> </tr> <tr> <td>DMA</td> <td>Audio player and record applications</td> </tr> <tr> <td>EXTI</td> <td>Menu navigation + joystick + low-power mode + audio applications</td> </tr> <tr> <td>FLASH</td> <td>Storage in Audio player application</td> </tr> <tr> <td></td> <td>System settings</td> </tr> <tr> <td></td> <td>Low-power mode application</td> </tr> <tr> <td>GPIO</td> <td>All applications</td> </tr> <tr> <td>I2C</td> <td>Low-power mode application</td> </tr> <tr> <td>IWDG</td> <td>Internal Watchdog (FW reset in case of error)</td> </tr> <tr> <td>LCD</td> <td>LCD glass display</td> </tr> <tr> <td>PWR</td> <td>Low-power mode application</td> </tr> <tr> <td>RCC</td> <td>All applications</td> </tr> <tr> <td>QSPI</td> <td>Audio player and record applications</td> </tr> <tr> <td>SAI</td> <td>Audio applications</td> </tr> <tr> <td>SPI</td> <td>Compass application</td> </tr> </tbody> </table> 5.3 Interrupts / wakeup pins The following interrupts are used in the demonstration firmware: <table> <thead> <tr> <th>Interrupts</th> <th>Application/module</th> <th>Priority, SubPriority (highest=0, 0)</th> </tr> </thead> <tbody> <tr> <td>DMA1 Channel1</td> <td>Audio applications (SAI1)</td> <td>5, 0</td> </tr> <tr> <td>DMA1 Channel4</td> <td>Audio applications (DFSDM0)</td> <td>6, 0</td> </tr> <tr> <td>EXTI
Line 0</td> <td>Joystick SEL key</td> <td>15, 0</td> </tr> <tr> <td>EXTI Line 1</td> <td>Joystick LEFT key</td> <td>15, 0</td> </tr> <tr> <td>EXTI Line 2</td> <td>Joystick RIGHT key</td> <td>15, 0</td> </tr> <tr> <td>EXTI Line 3</td> <td>Joystick UP key</td> <td>15, 0</td> </tr> <tr> <td>EXTI Line 5</td> <td>Joystick DOWN key</td> <td>15, 0</td> </tr> </tbody> </table> Wakeup pin 2 is used to wake up the MCU from Standby and Shutdown modes. ### 5.4 Low-power strategy The STM32CubeL4 demonstration firmware is designed to highlight the low-power consumption capabilities of both the STM32L476VG MCU and the STM32L476G discovery board. Figure 20 illustrates the low-power strategy put in place in the STM32Cube demonstration firmware, based on the absence of a running application and on user-activity monitoring. #### Figure 20. Low-power scheme <table> <thead> <tr> <th>Interrupts</th> <th>Application/module</th> <th>Priority, SubPriority (highest=0, 0)</th> </tr> </thead> <tbody> <tr> <td>EXTI Line 13</td> <td>IDD application (Interrupt from MFXSTM32L152 MCU)</td> <td>15, 15</td> </tr> <tr> <td>SysTick</td> <td>Cortex-M4 System Timer for OS Tick</td> <td>15, 0</td> </tr> </tbody> </table> ### 5.5 FreeRTOS resources The STM32L476G demonstration firmware is designed on top of CMSIS-OS drivers based on FreeRTOS. The resources used in the firmware demonstration are listed hereafter. As a reminder, the FreeRTOS configuration is described in the `FreeRTOSConfig.h` file. 5.5.1 Tasks Table 9.
Task description

<table> <thead> <tr> <th>Task entry point</th> <th>Description</th> <th>Function (File)</th> <th>Stack size (words)</th> <th>Priority</th> </tr> </thead> <tbody> <tr> <td>StartThread</td> <td>Main application core</td> <td>Main.c</td> <td>2 * 128</td> <td>osPriorityNormal</td> </tr> <tr> <td>AudioPlayer_Thread</td> <td>Audio player application</td> <td>AudioPlayer_Init() (AudioPlayer.c)</td> <td>4 * 128</td> <td>osPriorityHigh</td> </tr> <tr> <td>AudioRecorder_Thread</td> <td>Audio recorder application</td> <td>AudioRecorder_Init() (AudioRecorder.c)</td> <td>4 * 128</td> <td>osPriorityHigh</td> </tr> <tr> <td>Idle task (FreeRTOS)</td> <td>Implements the FreeRTOS idle hook</td> <td>vApplicationIdleHook() (Main.c)</td> <td>128</td> <td>osPriorityHigh</td> </tr> </tbody> </table>

Stack size: 128 is the value defined for `configMINIMAL_STACK_SIZE` in `FreeRTOSConfig.h`.

5.5.2 Message queues

Table 10. Message queues

<table> <thead> <tr> <th>QueueId</th> <th>Description</th> <th>Function (File)</th> <th>Queue depth (word)</th> </tr> </thead> <tbody> <tr> <td>JoyEvent</td> <td>Queue to receive input key events</td> <td>kMenu_Init() (K_menu.c)</td> <td>1</td> </tr> <tr> <td>AudioEvent</td> <td>Audio player input event</td> <td>AudioPlayer_Init() (AudioPlayer.c)</td> <td>1</td> </tr> <tr> <td>osMsgId (part of the AudioRecorder_ContextTypeDef structure)</td> <td>Audio recorder input event</td> <td>AudioRecorder_Init() (AudioRecorder.c)</td> <td>1</td> </tr> </tbody> </table>

5.5.3 Mutex

A mutex, `DemoLowPowerMutex`, is defined to control the main application's entry into low-power mode: the board components, with their respective I/Os, are first put in low-power mode, and the MCU is then set in low-power mode. The `DemoLowPowerMutex` mutex is released upon wake-up from one of the EXTI lines associated with the joystick buttons.
5.5.4 Heap

The heap size is defined in `FreeRTOSConfig.h` as follows:

```c
#define configTOTAL_HEAP_SIZE ((size_t)(40 * 1024))
```

Heap usage in the demonstration firmware is dedicated to:
- OS resources (tasks, queues, mutexes, memory allocation)
- Application memory allocation requirements

<table> <thead> <tr> <th>Applications</th> <th>Description</th> <th>Function (File)</th> <th>Memory requirements (bytes)</th> </tr> </thead> <tbody> <tr> <td>Audio Record</td> <td>Record buffer</td> <td>AudioRecorder_Start() (Audiorecorder.c)</td> <td>1024</td> </tr> <tr> <td>Guitar tuner</td> <td>Record buffer</td> <td>GuitarTuner_Run() (Guitartuner.c)</td> <td>4096</td> </tr> <tr> <td>Guitar tuner library</td> <td>Processing buffer used by Guitar tuner library</td> <td>GuitarTuner_Init() (Guitartuner.c)</td> <td>&lt; 16 Kbytes</td> </tr> <tr> <td>Soundmeter</td> <td>Record buffer</td> <td>SoundMeter_Run() (Soundmeter.c)</td> <td>960</td> </tr> <tr> <td>Soundmeter library</td> <td>Processing buffer used by Soundmeter library</td> <td>SoundMeter_Init() (Soundmeter.c)</td> <td>&lt; 6 Kbytes</td> </tr> </tbody> </table>

The demonstration firmware implements a hook in `main.c` to catch failed memory allocations in the heap:

```c
/**
  * @brief  Application Malloc failure Hook
  * @param  None
  * @retval None
  */
void vApplicationMallocFailedHook(void)
{
  Error_Handler();
}
```

5.6 Programming firmware application

First of all, install the ST-LINK/V2.1 driver available on the ST website. There are two ways of programming the STM32L476G discovery board.

5.6.1 Using binary file

Upload the binary file STM32CubeDemo_STM32L476G-Discovery-VX.Y.Z.hex from the firmware package, available under Projects\STM32L476G-Discovery\Demonstrations\Binary, using your preferred in-system programming tool.
5.6.2 Using preconfigured projects

Choose one of the supported tool chains and follow the steps below:
- Open the application folder: Projects\STM32L476G-Discovery\Demonstrations\
- Choose the desired IDE project (EWARM for IAR, MDK-ARM for Keil)
- Double-click on the project file (for example Project.eww for EWARM)
- Rebuild all files: go to Project and select Rebuild all
- Load the project image: go to Project and select Debug
- Run the program: go to Debug and select Go

The demonstration software, as well as other software examples that allow the user to discover the STM32 microcontroller features, is available on the ST website at www.st.com/stm32l4-discovery.

6 Demonstration firmware footprints

This section provides the memory requirements for all the demonstration modules. The aim is to provide an estimate of the memory required when a module or feature is removed or added. The footprint information is provided for the following environment:
- Tool chain: IAR 7.40.1
- Optimization: high (size)
- Board: STM32L476G discovery

Table 12 shows the code memory, data memory and constant memory used for each application module file and related libraries.
<table> <thead> <tr> <th>Module</th> <th>Code [byte]</th> <th>Data [byte]</th> <th>Const [byte]</th> </tr> </thead> <tbody> <tr> <td>IDD</td> <td>832</td> <td>237</td> <td>264</td> </tr> <tr> <td>VDD</td> <td>324</td> <td>142</td> <td>68</td> </tr> <tr> <td>RECORD</td> <td>1112</td> <td>-</td> <td>68</td> </tr> <tr> <td>PLAYER</td> <td>1212</td> <td>-</td> <td>68</td> </tr> <tr> <td>COMPAS</td> <td>1052</td> <td>108</td> <td>124</td> </tr> <tr> <td>SOUND + SoundMeter library</td> <td>638</td> <td>3932</td> <td>508</td> </tr> <tr> <td>GUITAR + GuitarTuner library</td> <td>592</td> <td>12416</td> <td>236</td> </tr> <tr> <td>Main application (Top menu + Option submenu)</td> <td>274</td> <td>276</td> <td>376</td> </tr> </tbody> </table>

7 Kernel description

7.1 Overview

The role of the demonstration kernel is mainly to provide a generic platform that controls and monitors all the application processes with minimum memory consumption. The kernel provides a set of friendly services that simplify module implementation by allowing access to all the hardware and firmware resources through the following tasks and services:
- Hardware and module initialization: BSP initialization (LEDs, joystick keys, LCD, audio, Quad-SPI and compass)
- Main menu management
- Memory management
- Storage management (Quad-SPI Flash memory)
- System settings

7.2 Kernel initialization

The first task of the kernel is to initialize the hardware and firmware resources to make them available to its internal processes and to the modules around it. The kernel starts by initializing the HAL and the system clocks, and then the hardware resources needed by the middleware components:
- LEDs and buttons
- LCD
- Backup area
- External Quad-SPI Flash memory
- Audio codecs
- MEMS

Once the initialization phase is complete, the kernel adds and links the system and user modules to the demonstration core, waiting to execute a menu entry point.
7.3 Kernel processes and tasks

The kernel is composed of a main task managed by FreeRTOS through the CMSIS-OS wrapping layer:
- Start thread: this task initializes the OS resources required by the demonstration framework and then starts the demonstration.

7.4 Kernel event manager

The kernel provides services to manage events, mainly user key input (see the `k_menu.h` file):

```c
/**
  * @brief  Start task
  * @param  argument: pointer that is passed to the thread function as start argument
  * @retval None
  */
static void StartThread(void const * argument)
{
  osMutexDef_t mutex_lowpower;

  /* Create mutex to handle low power mode */
  DemoLowPowerMutex = osRecursiveMutexCreate(&mutex_lowpower);

  /* Start Demo */
  kDemo_Start();
}
```

kMenu_SendEvent() is called from the interrupt IRQ callback (HAL_GPIO_EXTI_Callback()) to send the event message to the kernel event mailbox. kMenu_GetEvent() is called either by the kernel event manager in kMenu_HandleSelection() or directly by the running module application expecting user input keys.

7.5 Module manager

The main demonstration menu is initialized and launched by the Start thread. The modules are managed by the kernel, which is responsible for initializing the modules, the hardware resources relative to the modules, and the system menu. Each module should provide the following functionalities and properties:
1. Graphical component structure
2. Method to start up the module
3. Method to manage low-power mode (optional)
4. The application task
5. The module background process (optional)
6. Specific configuration
7. Error management

The modules can be added at run time to the demonstration and can use the common kernel resources. A module is added to the demonstration with the `k_ModuleAdd()` function (see Section 7.7). A module is a set of functions and data structures, gathered in a structure that provides all the information and the pointers to the specific methods and functions needed by the kernel.
The kernel checks the integrity and validity of the module and inserts its structure into a module table. Each module is identified by a unique ID. When two modules have the same ID, the kernel rejects the second one. The module structure is defined as follows:

```c
typedef struct
{
  uint8_t kModuleId;
  KMODULE_RETURN (*kModulePreExec)(void);
  KMODULE_RETURN (*kModuleExec)(void);
  KMODULE_RETURN (*kModulePostExec)(void);
  KMODULE_RETURN (*kModuleResourceCheck)(void);
} K_ModuleItem_TypeDef;
```

- **kModuleId**: unique module identifier
- **kModulePreExec**: pre-execution function to allocate resources prior to the execution
- **kModuleExec**: execution function
- **kModulePostExec**: post-execution function to release resources
- **kModuleResourceCheck**: function to check the resource requirements after the module was added to the kernel

7.6 Backup and settings configuration

The STM32CubeL4 demonstration firmware saves the kernel and module settings in bank 2 of the internal Flash memory (address 0x08080000).

```c
typedef struct
{
  /* IDD application */
  IddBackupData_TypeDef idd;
  /* Compass application */
  CompassBackupData_TypeDef compass;
  /* Global settings */
  SettingsBackupData_TypeDef settings;
} DemoBackupData_TypeDef;
```

```c
#define DEMOBACKUP_AREA_ADDRESS 0x08080000 /* Backup data in first section of Bank2 */
```

The following APIs allow the application to save the settings to, and restore them from, the backup area:

```c
uint32_t SystemBackupRead(DemoBackupId, void *Data);
void SystemBackupWrite(DemoBackupId, void *Data);
```

7.7 Adding a new module

Once the module appearance and functionality have been defined and created based on the constraints described above, the module only remains to be added:
1. Define a unique ID for the new module in the `k_config.h` file.
2. Call the `k_ModuleAdd()` function in `k_ModuleInit()` from `KDemo_Initialization()` with the unique ID defined in step 1.
3.
Modify the main module in the `main_app.c` file to add the call to this new module in the `ConstMainMenutems[]` table.

```c
const char *ConstMainMenutems[] =
{
  "",        /* 0 */
  "IDD",     /* 1 */
  "VDD",     /* 2 */
  "RECORD",  /* 3 */
  "PLAYER",  /* 4 */
  "COMPASS", /* 5 */
  "SOUND",   /* 6 */
  "GUITAR",  /* 7 */
  "OPTION",  /* 8 */
  "",        /* 9 */
  "",        /* 10 */
  "",        /* 11 */
};
```

8 Revision history

Table 13. Document revision history

<table> <thead> <tr> <th>Date</th> <th>Revision</th> <th>Changes</th> </tr> </thead> <tbody> <tr> <td>17-Jul-2015</td> <td>1</td> <td>Initial release.</td> </tr> </tbody> </table>

IMPORTANT NOTICE – PLEASE READ CAREFULLY STMicroelectronics NV and its subsidiaries (“ST”) reserve the right to make changes, corrections, enhancements, modifications, and improvements to ST products and/or to this document at any time without notice. Purchasers should obtain the latest relevant information on ST products before placing orders. ST products are sold pursuant to ST’s terms and conditions of sale in place at the time of order acknowledgement. Purchasers are solely responsible for the choice, selection, and use of ST products and ST assumes no liability for application assistance or the design of Purchasers’ products. No license, express or implied, to any intellectual property right is granted by ST herein. Resale of ST products with provisions different from the information set forth herein shall void any warranty granted by ST for such product. ST and the ST logo are trademarks of ST. All other product or service names are the property of their respective owners. Information in this document supersedes and replaces information previously supplied in any prior versions of this document. © 2015 STMicroelectronics – All rights reserved
Open IPTV Forum Postal address Open IPTV Forum support office address 650 Route des Lucioles – Sophia Antipolis Valbonne – FRANCE Tel.: +33 4 92 94 43 83 Fax: +33 4 92 38 52 90 Internet http://www.oipf.tv Disclaimer The Open IPTV Forum accepts no liability whatsoever for any use of this document. This specification provides multiple options for some features. The Open IPTV Forum Profiles specification complements the Release 2 specifications by defining the Open IPTV Forum implementation and deployment profiles. Any implementation based on Open IPTV Forum specifications that does not follow the Profiles specification cannot claim Open IPTV Forum compliance. Copyright Notification No part may be reproduced except as authorized by written permission. Any form of reproduction and/or distribution of these works is prohibited. Copyright © 2011 Open IPTV Forum e.V.

Tables

Table 1: Component Element and Attributes
Table 2: Example Audio/Video Synchronization

Figures

Figure 1: Content Segmentation for HTTP Adaptive Streaming
Figure 2: Example of the MPD
Figure 3: MPD Schema
Figure 4: Component management example
Figure 5: Example tfad-box
Figure 6: Partial Representation MP4 Example
Figure 7: Partial Representation Retrieval

Foreword This Technical Specification (TS) has been produced by the Open IPTV Forum.
This specification provides multiple options for some features. The Open IPTV Forum Profiles specification will complement the Release 2 specifications by defining the Open IPTV Forum implementation and deployment profiles. Any implementation based on Open IPTV Forum specifications that does not follow the Profiles specification cannot claim Open IPTV Forum compliance. Introduction This specification defines the usage of and, where necessary, extensions to the technologies defined in [TS26234] and [TS26244] to enable HTTP based Adaptive Streaming for Release 2 Open IPTV Forum compliant services and devices. In the case of HTTP Adaptive Streaming, a service provides a Content item in multiple bitrates in a way that enables a terminal to adapt to (for example) variations in the available bandwidth by seamlessly switching from one version to another, at a higher or lower bitrate, while receiving and playing the Content. This is achieved by encoding a Content item in alternative Representations of different bitrates and segmenting these Representations into temporally aligned and independently encoded Segments. This results in a matrix of Segments as depicted in Figure 1. Figure 1: Content Segmentation for HTTP Adaptive Streaming The Segments are offered for HTTP download from a URL that is unique per Segment. After completion of the download (and playback) of a certain Segment of a certain Representation, a terminal may switch to an alternate Representation simply by downloading (and playing) the next Segment of a different Representation. This requires the terminal to have a description of the available Representations and Segments and the URLs from which to download the Segments. This description is provided as a separate resource: the Media Presentation Description (MPD). The MPD is described in section 3. The media data in a Segment is formatted in compliance with the media formats as defined in [OIPF_MEDIA2].
However, in the context of HTTP Adaptive Streaming, additional requirements are put on the usage of these formats, especially regarding the systems layers. This “profile” is specified in section 4. Similarly, the retrieval mechanisms of Segments are in compliance with section 5.3.2.2 of [OIPF_PROT2], with the usage in the context of HTTP Adaptive Streaming as defined in this document. A Representation may be made up of multiple components, for example audio, video and subtitle components. A partial Representation may only contain some of these components and a terminal may need to download (and play) multiple partial Representations to build up a complete Representation, with the appropriate components according to the preferences and wishes of the user. Appendix C has a more detailed description on the use of partial Representations. 1 References 1.1 Normative References 1.1.1 Standard References <table> <thead> <tr> <th>Reference</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>[RFC2119]</td> <td>IETF RFC 2119 (1997-03), “Key words for use in RFCs to Indicate Requirement Levels”.</td> </tr> <tr> <td>[TS26234]</td> <td>3GPP TS 26.234 V9.3.0 (2010-06), Transparent end-to-end Packet-switched Streaming Service (PSS) Protocols and codecs (Release 9)</td> </tr> <tr> <td>[TS26244]</td> <td>3GPP TS 26.244 V9.2.0 (2010-06), Transparent end-to-end packet switched streaming service (PSS), 3GPP file format (3GP) (Release 9)</td> </tr> </tbody> </table> 1.1.2 Open IPTV Forum References <table> <thead> <tr> <th>Reference</th> <th>Description</th> </tr> </thead> </table> 2 Conventions and Terminology 2.1 Conventions The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC2119]. All sections and appendixes, except “Introduction”, are normative, unless they are explicitly indicated to be informative. 
2.2 Terminology 2.2.1 Definitions In addition to the definitions provided in Volume 1, the following definitions are used in this Volume. <table> <thead> <tr> <th>Term</th> <th>Definition</th> </tr> </thead> <tbody> <tr> <td>Content</td> <td>An instance of audio, video, audio-video information, or data. (from Volume 1). A Content item may consist of several Components.</td> </tr> <tr> <td>Component</td> <td>An element of a Content item, for example an audio or subtitle stream in a particular language or a video stream from a particular camera view.</td> </tr> <tr> <td>Component Stream</td> <td>A bit stream that is the result of encoding a Component with a certain codec and certain codec parameters (e.g. bitrate, resolution).</td> </tr> <tr> <td>Content Resource</td> <td>A Content item that is provided in multiple Representations (e.g. multiple qualities, bitrates, camera views, etc.) to enable adaptive streaming of that Content item. Service Discovery procedures refer to a Content Resource. A Content Resource consists of one or more time-sequential Periods.</td> </tr> <tr> <td>Period</td> <td>A temporal section of a Content Resource.</td> </tr> <tr> <td>Representation</td> <td>A version of a Content Resource within a Period. Representations may differ in the included Components and the included Component Streams.</td> </tr> <tr> <td>Segment</td> <td>A temporal section of a Representation in a specific systems layer format (either MPEG-2TS or MP4), referred to via a unique URL.</td> </tr> </tbody> </table> 2.2.2 Abbreviations In addition to the Abbreviations provided in Volume 1, the following abbreviations are used in this Volume. 
<table> <thead> <tr> <th>Acronym</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>3GPP</td> <td>ETSI 3rd Generation Partnership Project</td> </tr> <tr> <td>AAC</td> <td>Advanced Audio Coding</td> </tr> <tr> <td>AAC LC</td> <td>AAC Low Complexity</td> </tr> <tr> <td>ATSC</td> <td>Advanced Television Systems Committee</td> </tr> <tr> <td>BBTS</td> <td>Broadband Transport Stream</td> </tr> <tr> <td>DCF</td> <td>DRM Content Format</td> </tr> <tr> <td>DRM</td> <td>Digital Rights Management</td> </tr> <tr> <td>DVB</td> <td>Digital Video Broadcasting</td> </tr> <tr> <td>DVB-SI</td> <td>DVB Service Information</td> </tr> <tr> <td>ECM</td> <td>Entitlement Control Message</td> </tr> <tr> <td>ETSI</td> <td>European Telecommunications Standards Institute</td> </tr> <tr> <td>GOP</td> <td>Group Of Pictures</td> </tr> <tr> <td>IPMP</td> <td>Intellectual Property Management and Protection</td> </tr> <tr> <td>IV</td> <td>Initialization Vector</td> </tr> <tr> <td>JPEG</td> <td>Joint Photographic Experts Group</td> </tr> <tr> <td>MP4</td> <td>MP4 File Format</td> </tr> <tr> <td>MPD</td> <td>Media Presentation Description</td> </tr> <tr> <td>MPEG</td> <td>Moving Pictures Expert Group</td> </tr> <tr> <td>nPVR</td> <td>Network Personal Video Recorder</td> </tr> <tr> <td>NTP</td> <td>Network Time Protocol</td> </tr> <tr> <td>OMA</td> <td>Open Mobile Alliance</td> </tr> <tr> <td>PAT</td> <td>Program Association Table</td> </tr> <tr> <td>PDCF</td> <td>Packetized DRM Content Format</td> </tr> <tr> <td>PF</td> <td>Protected Format</td> </tr> <tr> <td>PID</td> <td>Packet Identifier</td> </tr> <tr> <td>PMT</td> <td>Program Map Table</td> </tr> <tr> <td>RAP</td> <td>Random Access Point</td> </tr> </tbody> </table> 3 Media Presentation 3.1 Media Presentation Description The Media Presentation Description (MPD) SHALL be as specified in [TS26234] section 12.2, with the following extensions and additional requirements: - The MPD SHALL be an XML file that SHALL validate against the schema 
in Appendix A. Note that the XML schema in Appendix A imports the schema specified in [TS26234]. This means that an MPD that does not use any of the OIPF specific extensions will validate against both the schema defined in [TS26234] as well as Appendix A. - A `<Representation>` element may carry the `@group` attribute set to a non-zero value. In this case the attribute indicates that the `<Representation>` element is not necessarily a complete Representation, but consists of one or more individual Components (video, audio, subtitle, etc.) which may be downloaded and provided to the terminal in addition to content being downloaded from other `<Representation>` elements. In this case the `<Representation>` element SHALL contain one or more `<Component>` elements, as specified in section 3.1.1, one for each Component contained in the `<Representation>`. Note that it is the responsibility of the application on the terminal to select the desired Components and to initialize the terminal accordingly. Appendix C contains an informative description of how this can be done. The value of the `@group` attribute SHALL be the same for Representations that contain at least one Component in common. Two Representations with completely different Components (e.g. audio in two different languages) SHALL have different values for the `@group` attribute. An example instance of the OIPF compliant MPD with the constraints from section 3.2 is depicted in Figure 2. ### 3.1.1 Component Element <table> <thead> <tr> <th>Element/Attribute</th> <th>Description</th> <th>Optionality</th> </tr> </thead> <tbody> <tr> <td>Component</td> <td>This element contains a description of a Component.</td> <td></td> </tr> <tr> <td>@id</td> <td>Specifies the system-layer specific identifier of the elementary stream of this Component. The value SHALL be equal to the PID of the TS packets that carry the Component Stream of the Component, in case the system layer is MPEG-2 TS. 
The value SHALL be equal to the track ID of the track that carries the Component Stream of the Component, in case the system layer is MP4.</td> <td>O</td> </tr> <tr> <td>@type</td> <td>Specifies the Component type. Valid values include “Video”, “Audio” and “Subtitle” to specify the corresponding Component types defined in [OIPF_DAE2], section 7.14.5.1.1.</td> <td>M</td> </tr> <tr> <td>@lang</td> <td>Specifies an ISO 639 language code for audio and subtitle streams (see [OIPF_DAE2], section 7.14.5). Note that this attribute indicates the language of a specific Component, hence only a single language code is needed. This is different from the usage of the @lang attribute of the &lt;Representation&gt; element in the MPD, which may be used to indicate the list of languages used in the Representation.</td> <td>O</td> </tr> <tr> <td>@description</td> <td>The value of this attribute SHALL be a user readable description of the Component. This description may be used by the terminal in its user interface to allow a user to select the desired Components, e.g. select from different camera views in case of a video stream.</td> <td>O</td> </tr> <tr> <td>@audioChannels</td> <td>Specifies the number of audio channels for an audio stream (e.g. 2 for stereo, 5 for 5.1, 7 for 7.1 - see [OIPF_DAE2], section 7.14.5). This attribute SHALL only be present when the value of the @type attribute is &quot;Audio&quot;.</td> <td>O</td> </tr> <tr> <td>@impaired</td> <td>When set to “true”, specifies that the stream in this Component is an audio description for the visually impaired or subtitles for the hearing impaired. This attribute SHALL only be present when the value of the @type attribute is &quot;Audio&quot; or &quot;Subtitle&quot;.</td> <td>O</td> </tr> <tr> <td>@adMix</td> <td>When set to “true”, specifies that the Audio stream in this Component must be mixed with (one of) the main audio stream(s), for which this attribute is absent or set to “false”.
This attribute SHALL only be present when the value of the @type attribute is &quot;Audio&quot;.</td> <td>O</td> </tr> </tbody> </table> Table 1: Component Element and Attributes

```xml
<?xml version="1.0" encoding="utf-8"?>
<mpd xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns="urn:3gpp:ns:PSS:AdaptiveHTTPStreamingMPD:2009"
     xmlns:oipf="urn:oipf:iptv:has:2010"
     minBufferTime="PT10S"
     xsi:schemaLocation="urn:oipf:iptv:has:2010 OIPF-MPD-009.xsd">
  <period start="PT0S" segmentAlignmentFlag="true" bitStreamSwitchingFlag="true">
    <segmentInfoDefault sourceUrlTemplatePeriod="http://www.aService.com/aMovie/$representationID$/Seg$Index$.3gs"
                        duration="PT2S"/>
    <representation bandwidth="5000000" mimeType="video/mp4" startWithRap="true" group="1">
      <segmentInfo baseUrl="http://www.aService.com/aMovie/HQ/">
        <initialisationSegmentUrl sourceUrl="http://www.aService.com/aMovie/init.mp4"/>
        <url sourceUrl="Seg10.3gs"/>
        <url sourceUrl="Seg11.3gs"/>
        <url sourceUrl="Seg12.3gs"/>
        <!-- Media Segments in high quality available from
             http://www.aService.com/aMovie/HQ/SegXX.3gs -->
      </segmentInfo>
      <oipf:Components>
        <oipf:Component type="video" id="1" description="Video"/>
        <oipf:Component type="audio" id="2" lang="en" description="Audio-En"/>
      </oipf:Components>
    </representation>
    <representation bandwidth="2500000" mimeType="video/mp4" startWithRap="true" group="1">
      <segmentInfo>
        <initialisationSegmentUrl sourceUrl="http://www.aService.com/aMovie/init.mp4"/>
        <url sourceUrl="http://www.aService.com/aMovie/LQ/Seg10.3gs"/>
        <url sourceUrl="http://www.aService.com/aMovie/LQ/Seg11.3gs"/>
        <url sourceUrl="http://www.aService.com/aMovie/LQ/Seg12.3gs"/>
        <!-- Media Segments in low quality available from
             http://www.aService.com/aMovie/LQ/SegXX.3gs -->
      </segmentInfo>
      <oipf:Components>
        <oipf:Component type="video" id="1" description="Video"/>
        <oipf:Component type="audio" id="2" lang="en" description="Audio-En"/>
      </oipf:Components>
    </representation>
    <representation bandwidth="125000" mimeType="video/mp4" startWithRap="true" group="2">
      <segmentInfo>
        <initialisationSegmentUrl sourceUrl="http://www.aService.com/aMovie/init.mp4"/>
        <urlTemplate startIndex="10" endIndex="12" id="fr"/>
        <!-- Media Segments with French audio available from
             http://www.aService.com/aMovie/FR/SegXX.3gs -->
      </segmentInfo>
      <oipf:Components>
        <oipf:Component type="audio" id="3" lang="fr" description="Audio-Fr"/>
      </oipf:Components>
    </representation>
  </period>
</mpd>
```

Figure 2: Example of the MPD

3.2 Segmentation Constraints

The OITF SHALL support Segments as specified in [TS26234] with the following constraints:

- Each Segment SHALL start with a random access point (RAP) and the @startWithRAP attribute SHALL be present and set to ‘true’ in all <Representation> elements in the MPD.
- Byte Ranges SHALL NOT be used as a mechanism for identifying Segments. As a consequence the elements <InitialisationSegmentURL> and <Url> SHALL NOT include the optional attribute @range. Note that this does not preclude the use of HTTP requests with byte ranges to retrieve parts of a Segment.
- To enable seamless switching:
  - Different Component Streams of the same Component SHALL be encoded in the same media format but MAY differ in the profile of that format. Section 4 in this document references [OIPF_MEDIA2] for the media formats, which specifies (profiles of) media formats for various media types. So if, for example, a Representation contains a Component Stream of a certain video Component that is encoded using H.264/AVC with the HD profile, then all Representations that have a Component Stream of that Component must use H.264/AVC but may use different configurations of H.264/AVC within the HD or SD profile.
  - Segments of Representations with the same value for the @group attribute SHALL be time aligned. The attributes ‘segmentAlignmentFlag’ and ‘bitStreamSwitchingFlag’ SHALL be present and set to ‘true’ in all ‘Period’ elements in the MPD.
For the set of Representations that have the same value for the @group attribute, the signaled Segment durations SHALL: - either be equal for all Representations in the set, - or be equal for all Representations in the set without a <TrickMode> element and a multiple of this value for the Representations in the set with a <TrickMode> element. In this case there SHALL be at least one Representation in the set for which the <TrickMode> element is absent. Terminals are RECOMMENDED to select Representations with a <TrickMode> element in case of trickplay and to select Representations without a <TrickMode> element for play at normal speed. Terminals MAY select any Representation for both trickplay and normal play, regardless of the presence of the <TrickMode> element and differences in duration of the Segments in a group. This enables larger Segments for dedicated trick Representations, which MAY be composed of intra-frames only with a fixed interval, thereby avoiding an excessive number of Segment downloads per second during trick modes. NOTE: A non-time-aligned trick play Representation makes switching between it and the media Representation more difficult to achieve seamlessly, or less accurate for an OITF that does not perform extra seek processing. - If two <InitialisationSegmentURL> elements have the same value in the @sourceURL attribute, then the referenced init-data SHALL be the same. Consequently the terminal does not need to download the init-data twice. - All Representations assigned to a non-zero group SHALL carry an <InitialisationSegmentURL> element with the same value of the @sourceURL attribute. The referenced Initialisation Segment SHALL carry the metadata that describes the samples for all Representations assigned to a non-zero group. A client only needs to acquire this overall Initialisation Segment once.
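The Segment-duration rule for a group of Representations can be checked mechanically. The sketch below is illustrative only (the function and its input format are invented, not part of this specification): within one group, all Representations without a <TrickMode> element must share one Segment duration, and every <TrickMode> Representation must use an integer multiple of it.

```python
# Illustrative check of the Segment-duration constraint for one group of
# Representations; "has_trickmode" marks Representations carrying <TrickMode>.
def durations_valid(representations):
    """representations: list of (segment_duration_seconds, has_trickmode)."""
    normal = {d for d, trick in representations if not trick}
    trick = [d for d, trick in representations if trick]
    if len(normal) != 1:      # all normal Representations SHALL share one
        return False          # duration, and at least one SHALL be present
    base = normal.pop()
    # every trick-mode duration SHALL be a multiple of the normal duration
    return all(d % base == 0 for d in trick)

print(durations_valid([(2, False), (2, False), (10, True)]))  # True
print(durations_valid([(2, False), (3, True)]))               # False
```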
Note that if a service chooses to Segment a Content Resource in a way that does not meet these constraints, then the Content Resource might not be supported on all receivers. 3.3 Signaling of Content Protection in the MPD If Segments are protected, then the corresponding `<Representation>`-element in the MPD SHALL have a `<ContentProtection>` child element as specified in [TS26234]. The `@schemeIdUri`-attribute of the `<ContentProtection>`-element SHALL be set equal to the DRMSystemID as specified in [OIPF_META2]. For example, for Marlin, the DRMSystemID and `@schemeIdUri`-attribute value is “urn:dvb:casystemid:19188”. [TS26234] allows a `<SchemeInformation>`-element to be located in the `<ContentProtection>`-element; however, usage of this feature is not defined in this specification (i.e. if it is present, it may be ignored). 3.4 Media Presentation Description Updates Streaming of live Content SHALL be done following the rules described in [TS26234]: the MPD may be updated periodically at the interval described in the MPD, and successive versions of the MPD are guaranteed to be identical in the description of Segments that are already in the past. The synchronization of terminals and the live streaming server is addressed by external protocols such as NTP or equivalent. If a service provider offers nPVR functionality to support a timeshift service using network storage, the following applies: - When the Segments of the live Content are stored on the nPVR server, which would occur after the timeShiftBufferDepth has passed, the URLs indicating the Segments on the nPVR server SHOULD be provided to the OITF to enable it to access these Segments at their new location by the MPD update mechanism [TS26234]. - The updated MPD SHOULD contain the new URLs of the Segments on the nPVR server; these SHOULD have the same availabilityStartTime as in the original MPD.
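As a concrete illustration of the content-protection signalling in section 3.3, the snippet below builds a `<ContentProtection>` child for a `<Representation>` with Python's ElementTree. This is a sketch only: the element and attribute names follow the section above, the Marlin DRMSystemID value is the one quoted there, and the bandwidth/mimeType values are arbitrary examples.

```python
import xml.etree.ElementTree as ET

# Sketch: attach a <ContentProtection> child to a <Representation> element,
# using the Marlin DRMSystemID quoted in section 3.3 as @schemeIdUri.
representation = ET.Element(
    "Representation", {"bandwidth": "5000000", "mimeType": "video/mp4"})
ET.SubElement(representation, "ContentProtection",
              {"schemeIdUri": "urn:dvb:casystemid:19188"})
print(ET.tostring(representation, encoding="unicode"))
```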
4 Adaptive Media Formats The video, audio and subtitle formats used for HTTP Adaptive Streaming are the same as those defined in [OIPF_MEDIA2]. As in [OIPF_MEDIA2], at the systems layer, two formats for HTTP Adaptive Streaming are defined, namely MPEG-2 Transport Stream and MP4 File Format. 4.1 MPEG-2 Transport Stream Systems Layer If the Representation@mimeType attribute equals “video/mpeg” or “video/mp2t”, the media of a Representation is encapsulated in MPEG-2 Transport Stream packets and the carriage of A/V Content and related information SHALL be in compliance with the [OIPF_MEDIA2] requirements on usage of the MPEG2-TS systems layer format, with the exceptions and additional requirements listed in sections 4.1.1 through 4.1.5. 4.1.1 PID Allocation - Regardless of the allocation of Component Streams to Representations, - Component Streams of the same Component SHALL be carried in transport stream packets that have the same PID (in the transport stream packet header) and the same stream_id (in the PES packet header). - Component Streams of different Components SHALL be carried in transport stream packets that have different PIDs (in the transport stream packet header). - Some examples: ▪ "audio in Spanish" and "audio in English" have different PIDs ▪ "audio in English" and "audio description for impaired in English" have different PIDs ▪ "audio description in English at 64kbps" and "audio description in English at 128kbps" have the same PID ▪ "video angle 1 in H.264 at 720x576" and "video angle 1 in H.264 at 320x288" have the same PID. - When the Segments of a Representation contain MPEG-2 TS packets, the value of the id attribute in each Component element, if present, SHALL be the PID of the Transport Stream packets which carry the Component. 4.1.2 Program Specific Information - For all Representations, the PAT and PMT, either contained in the Initialisation Segments or in the media Segments, SHALL always contain the full list of all elementary streams.
This means that Representations with the @group attribute set to zero will have the same PAT/PMT as Representations with the @group attribute set to a non-zero value. It is the responsibility of the application to apply in the terminal the required PID filters for the Components which are effectively being retrieved through the HTTP adaptive protocol. - If the media Segments do not contain PAT and PMT tables, the Initialisation Segment SHALL be present and declared in the MPD, pointing to a resource containing transport stream packets with at least one PAT and one PMT. 4.1.3 Access Unit Signaling - The random_access_indicator and elementary_stream_priority_indicator are set as specified in sections 4.1.5 and 5.5.5 of [TS101154]. - It is RECOMMENDED that all transport stream packets where a video frame starts carry a non-empty AU_information data field as defined in annex D.2.2 of [TS101154]. - The inclusion of the above signaling SHALL be used in a consistent manner for all Components in all Segments for a Content item. 4.1.4 Media Packaging - A media Segment SHALL contain the concatenation of one or several contiguous PES packets which are split and encapsulated into TS packets. Media Segments SHALL contain only complete PES packets. - When packetizing video elementary streams, at most one frame SHALL be included in one PES packet. Larger frames may be fragmented into multiple PES packets. The PES packet where a frame starts SHALL always contain PTS/DTS fields in the PES header. - PTS and DTS values SHALL be time aligned across different Representations. - There may be a discontinuity of the continuity_counter in TS packets when changing from one Representation to another; the OITF SHALL expect such a discontinuity. 4.1.5 Content Protection [OIPF_MEDIA2] specifies two methods to protect (i.e. encrypt) MPEG-2 transport streams: BBTS and PF.
The following requirements apply if Segments are protected: - The Initialisation Segment and the Media Segments SHALL be formatted such that a file that consists of the Initialisation Segment and an arbitrary selection of Media Segments of the (set of partial) Representation(s), stored in order of their index in the MPD, is a BBTS compliant file, a PF compliant file, or both. This MAY be achieved by using the same Crypto-period boundaries and Control Words across different Representations. - The DRM related metadata (i.e. PMT containing CA descriptors, CAT, EMM streams or ECM streams) in relation to a certain elementary stream SHALL be delivered as part of either the Media Segments that carry the samples of the elementary stream or the Initialisation Segment. - The DRM related metadata (i.e. ECM stream) of a certain protection system (i.e. Conditional Access system in MPEG2-TS terminology) in relation to a certain elementary stream SHALL have the same PID in all Segments in which it is included. Example: the BBTS defined ECMs for the Spanish audio are always carried in PID 134, in all Representations where Spanish audio is present, at any bitrate. 4.2 MP4 File Format Systems Layer If the Representation@mimeType attribute equals “video/mp4”, then the carriage of A/V Content and related information (e.g. subtitles) SHALL be in compliance with the [OIPF_MEDIA2] requirements on usage of the MP4 systems layer format, with the following restrictions: - For every Representation, a [TS26234] Initialisation Segment SHALL be available. - For all Representations, a reference to the Initialisation Segment SHALL be present in an <InitialisationSegmentURL> element in the <Representation> element. - An Initialisation Segment SHALL be delivered with MIME type “video/mp4”. - Initialisation Segments SHALL be formatted as specified in [TS26234], section 12.4.2.2.
For every media stream of the (set of partial) Representation(s), the moov-box in the Initialisation Segments SHALL contain a trak-box describing the samples of the media streams in compliance with [ISOFF]. - Every Representation SHALL consist of Media Segments that are formatted as specified in [TS26234], section 12.4.2.3. - A Media Segment SHALL be delivered with MIME type “video/vnd.3gpp.Segment” as specified in [TS26244]. - To allow a terminal to seek to any Segment with a certain index and start playback with perfect audio/video synchronization, every traf-box of a track that contains audio SHOULD contain a [TS26244] tfad-box. The contents of the box SHALL be such that if the terminal starts the playback of the audio samples of a Segment as specified in the box, then the audio and video of the Segment are played in perfect sync. - The Initialisation Segment and the Media Segments SHALL be formatted such that a file that consists of the Initialisation Segment and an arbitrary selection of Media Segments of either any complete Representation (@group attribute equal to zero) or the set of partial Representations (@group attribute unequal to zero), stored in order of the sequence_number in their mfhd-box (i.e. increasing order and no duplicates), is an [ISOFF] compliant file. (Note that this statement assumes that [ISOFF] allows for ‘gaps’ in the sequence_numbers of consecutive moof-boxes; i.e. the difference in the sequence_number of consecutive moof-boxes may be larger than one). - Regardless of the allocation of Component Streams to Representations, - Component Streams of the same Component SHALL be carried in track fragments that have the same trackID (in the tfhd-box). - Component Streams of different Components SHALL be carried in track fragments that have different trackIDs (in the tfhd-box). - If the Segments are protected, the Initialisation Segment and Media Segments SHALL also meet the requirements as specified in section 4.1.5.
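The sequence_number ordering requirement above can be verified by walking the top-level boxes of a concatenated file. The following sketch is illustrative only: it handles just 32-bit box sizes, and the synthetic `make_moof` helper is invented here for demonstration; real moof boxes carry more children than an mfhd.

```python
import struct

def make_moof(seq):
    """Build a minimal synthetic moof box containing only an mfhd box."""
    # mfhd: size, type, version/flags (4 bytes), sequence_number (4 bytes)
    mfhd = struct.pack(">I4sII", 16, b"mfhd", 0, seq)
    return struct.pack(">I4s", 8 + len(mfhd), b"moof") + mfhd

def mfhd_sequence_numbers(data):
    """Yield the mfhd sequence_number of every top-level moof box in data."""
    offset = 0
    while offset + 8 <= len(data):
        size, btype = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            break  # malformed box; avoid looping forever
        if btype == b"moof":
            child = offset + 8
            while child + 8 <= offset + size:
                csize, ctype = struct.unpack(">I4s", data[child:child + 8])
                if ctype == b"mfhd":
                    yield struct.unpack(">I", data[child + 12:child + 16])[0]
                if csize < 8:
                    break
                child += csize
        offset += size

# Demo: three Media Segments with increasing (gapped) sequence_numbers.
segments = make_moof(1) + make_moof(3) + make_moof(7)
seqs = list(mfhd_sequence_numbers(segments))
print(seqs)                                        # [1, 3, 7]
print(all(a < b for a, b in zip(seqs, seqs[1:])))  # True: increasing, no duplicates
```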
An informative appendix on the use of the MP4 file format systems layer is provided in Appendix D. 4.2.1 Content Protection [OIPF_MEDIA2] specifies three methods to protect (i.e. encrypt) MP4-based file formats: DCF, PDCF and MIPMP. This specification does not specify how to apply the DCF file format in the context of adaptive streaming. The following requirements apply if Segments are protected: - Initialisation Segment and the Media Segments SHALL be formatted such that a file that consists of the Initialisation Segment and an arbitrary selection of Media Segments of either any complete Representation (@group attribute equal to zero) or the set of partial Representations (@group attribute unequal to zero), stored in order of the sequence_number in their mfhd-box (i.e. increasing order and no duplicates), is either a PDCF [OIPF_MEDIA2] compliant file or a MIPMP [OIPF_MEDIA2] compliant file. - The DRM related metadata SHALL be delivered as part of the Initialisation Segment: - With PDCF format, the DRM related metadata is located in the moov-box. In addition, some DRM related metadata could also be contained in a Mutable DRM Info box. If used, the Mutable DRM Info box SHALL be delivered as part of the Initialisation Segment and located after the moov-box. - With MIPMP format, the DRM related metadata is not located in the moov-box but referenced from the moov-box as a separate track carrying an Object Descriptor Stream. The samples of the IPMP Object Descriptor Stream SHALL be delivered as part of the Initialisation Segment in a dedicated mdat-box located after the moov-box. NOTE: MIPMP uses cipher block chaining mode, whereas PDCF allows cipher block chaining mode or counter mode. When cipher block chaining is used for encryption, Media Segments need to be encrypted independently of each other. 
Given that this specification requires a Segment to start with a RAP and given that both MIPMP and PDCF require each access unit to start with its own IV and be encrypted separately, no additional requirements are needed to achieve independent encryption of media Segments. The “Access Unit Format Header”, as defined for the PDCF format, allows the generation of samples that are identical to samples that comply with the MIPMP defined method of “Stream Encryption”. This means that a Service Provider may simultaneously address devices that support the MIPMP format and devices that support the PDCF format by providing different Initialisation Segments for the same Media Segments. The following additional constraints to the PDCF encryption method achieve this: - the PDCF “Encryption Method” is set to AES_128_CBC (cipher block chaining mode) - "PaddingScheme" is set to RFC_2630 (Padding according to RFC 2630) - “SelectiveEncryption” is not used. 5 Use Cases (Informative) 5.1 Live Streaming If the `@timeShiftBufferDepth` attribute is present in the MPD, it may be used by the terminal to know at any moment which Segments are effectively available for downloading with the current MPD. If this timeshift information is not present in the MPD, the terminal may assume that all Segments described in the MPD which are already in the past are available for downloading. When the content provider updates the MPD for live streaming, the new MPD should include all available Segments, including the Segments included in the previous MPD. If the sum of the `timeShiftBufferDepth` in the previous MPD and the Segment duration in the previous MPD is larger than `NOW-availabilityStartTime` in the current MPD, the MPD should include the combination of the media Segments for which the sum of the start time of the Media Segment and the Period start time falls in the interval `[NOW-timeShiftBufferDepth-duration; CheckTime]` of the current MPD and the previous MPD.
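The availability rules above can be expressed as a simple window computation. The helper below is an illustration only (the function and parameter names are invented): it returns the indices of the Segments a terminal may download at wall-clock time `now`, given the MPD's availability start, Segment duration, and optional `@timeShiftBufferDepth`.

```python
# Illustrative computation of the downloadable Segment window; all names
# here are invented, not part of the specification. Times are in seconds.
def available_segments(now, availability_start, segment_duration,
                       time_shift_buffer_depth=None):
    # last Segment that has been fully produced by the live encoder
    newest = int((now - availability_start) // segment_duration) - 1
    if newest < 0:
        return []
    if time_shift_buffer_depth is None:
        oldest = 0  # no @timeShiftBufferDepth: assume all past Segments remain
    else:
        oldest = max(0, int((now - availability_start - time_shift_buffer_depth)
                            // segment_duration))
    return list(range(oldest, newest + 1))

# 25 s into a live stream with 2 s Segments and a 10 s timeshift buffer:
print(available_segments(25, 0, 2, 10))  # [7, 8, 9, 10, 11]
```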
Periods may be used in the live streaming scenario to appropriately describe successive live events with different encoding or adaptive streaming properties. Timeshift is still possible across the boundaries of such events, provided that the timeshift window is large enough. 5.2 Trick Play Following the principles included in the 3GPP specification, the basic implementation of trick modes (fast forward, fast rewind, slow motion, slow rewind, pause and resume) is based on the processing of Segments by the terminal software: downloaded Segments may be provided to the decoder at a speed lower or higher than their nominal timeline (the internal timestamps) would mandate, thus producing the desired trick effect on the screen. Under these conditions the timestamps and the internal clock, if any, in the downloaded Segments do not correspond to the real time clock in the decoder, which needs to be set appropriately. Pausing a Media Presentation can be implemented by simply stopping the request of Media Segments or parts thereof. Resuming a Media Presentation can be implemented by sending requests for Media Segments, starting with the next fragment after the last requested fragment. Slow motion and slow rewind can be implemented by controlling the normal stream playout speed on the client side. The rest of this section addresses fast forward and fast rewind implementation. The playback of Segments in fast forward and fast rewind has an immediate effect on the bitrate that is effectively required in the network, because the Segments also need to be downloaded at a faster or slower rate than in normal play mode. The terminal should take this into account when doing the bitrate calculations for implementing the adaptive protocol.
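The bandwidth impact described above is easy to quantify. In this sketch (illustrative only; the functions are invented for this document), the required download rate scales linearly with the playback speed, and the Segment request rate is the speed divided by the Segment duration, which is why longer dedicated trick Segments reduce request load:

```python
# Illustrative trick-play resource estimates; not part of the specification.
def required_download_rate(media_bitrate_bps, playback_speed):
    """Sustained download rate needed so Segments arrive in time at this speed."""
    return media_bitrate_bps * abs(playback_speed)

def segment_requests_per_second(playback_speed, segment_duration_s):
    """HTTP Segment requests per wall-clock second at this playback speed."""
    return abs(playback_speed) / segment_duration_s

# 4x fast-forward on a 5 Mbit/s stream with 2 s Segments:
print(required_download_rate(5_000_000, 4))   # 20000000 (20 Mbit/s)
print(segment_requests_per_second(4, 2))      # 2.0 requests/s
# A dedicated trick Representation with 10 s Segments cuts the request rate:
print(segment_requests_per_second(4, 10))     # 0.4 requests/s
```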
Dedicated stream(s) may be used to implement efficient trick modes: it is recommended to produce the stream(s) with a lower frame rate, longer Segments or a lower resolution to ensure that the bitrate is kept at a reasonable level even when the Segment is downloaded at a faster rate. The dedicated stream is described as a Representation with a `<TrickMode>` element in the MPD. It is also recommended that if there are dedicated fast forward Representations, the normal Representations do not contain the `<TrickMode>` element in the MPD. A very low bitrate version of the video might be used to implement some trick speeds, even if that Representation was not created with trick modes in mind; note however that in this case it is possible that the terminal would inject a very high frame rate into the decoder (yet at an acceptable bitrate). For fast rewind trick modes the terminal downloads successive Segments in reverse order, and this also requires that the frames corresponding to the Segment are presented in reverse order with respect to the capturing/encoding order. The feasibility of this process depends on the capability of the decoder and also on the encoding properties of the stream (e.g. it may be easier to implement if the Segment has been encoded using only intra frames). In order to start trick mode, easily switch between trick and normal play mode at any time, and support reverse playing, the trick mode streams may be composed of intra frames only with a fixed interval. 5.3 MPEG-2 TS Seeking To determine the random access point in a media Segment, the client should download the Segment and search for RAPs one by one until the required RAP is found. The 'random_access_indicator' and 'elementary_stream_priority_indicator' in the adaptation field of the transport stream may be used for locating each RAP. Appendix A. MPD Schema (Normative) Figure 3: MPD Schema Appendix B.
HTTP Adaptive Streaming Initiation (Normative) B.1 Initiation from DAE Refer to [OIPF_DAE2] for the initiation of HTTP Adaptive Streaming from the DAE. B.2 Initiation from PAE Refer to [OIPF_PAE2] for the initiation of HTTP Adaptive Streaming from the PAE. Appendix C. Component Management (Informative) A <Representation> element with the @group attribute set to zero as defined in [TS26234] corresponds to a particular version of the full Content item with all its elements (video, audio, subtitles, etc.). If all Representations have the @group attribute set to zero, the different Representations listed in the MPD correspond to full, alternate versions that differ in one or more particular aspects (bitrate, language, spatial resolution, etc.). This means that the terminal needs at every moment to download and present Segments of only one Representation. While this provides a quite simple and straightforward model, it lacks flexibility in the following sense: if there are many alternatives for a particular Component (e.g. audio in different languages) and there are also a number of different bitrate alternatives, all combinations have to be available at the server and consequently some media data is redundantly stored. For example, if a service provides 2 audio languages and the video in 2 bitrate levels, then it would need to provide 4 different Representations; however, there will be groups of 2 Representations which share exactly the same bulky video (they only differ in audio). This causes a considerable waste of storage space in the server. Even if the server can be optimized in this respect (e.g. to build the Segments in real time from the elementary streams stored separately on its disks), this cannot be done in standard HTTP caches. In order to solve this problem, [TS26234] includes the concept of partial Representations in the MPD through the @group attribute.
When this attribute has a value different from 0, the Representation does not include all Components of the Content Resource, but only a subset of them (e.g. “audio in French”). An OIPF terminal needs to be able to identify the Representations that it requires, download their Segments independently and combine them for playback at the terminal side. In the case of the example service above, the server may serve 2 Representations with 2 different bitrate versions of a movie with English audio, and separately it can serve a Representation with just the French audio. This way, all combinations are possible (all bitrates at all languages) but with roughly half the required storage in the server and the HTTP caches compared to when all possible combinations are separately stored as complete Representations. Figure 4 depicts the grouping of Components and Component Streams into Representations for this example. In this example the Representations HQRep and LQRep would have the same non-zero value for the @group attribute; FrRep would have a different non-zero value. Additionally both HQRep and LQRep would carry <Component> elements that describe the Video and Audio-En Components; the FrRep would carry a <Component> element for the Audio-Fr Component. This Component-aware scenario relates to the process for selecting and presenting the desired set of Components. This process may also be applied for Content that is delivered through mechanisms other than the HTTP adaptive streaming protocol described in this document. In the context of OIPF (for example using the DAE “Extensions to video/broadcast for playback of selected Components”), this process may utilize information contained in the MPEG2-TS or MP4 metadata. Information contained in the Initialisation Segment may also be used in this process. The following is an example process for Component selection: 1. Retrieve the MPD.
If the MPD includes both partial and non-partial Representations, decide which of the two to play. It is RECOMMENDED to use the partial Representations. 2. In case of non-partial Representations: a) Based on metadata in the MPD (typically the @bandwidth attribute), select an initial Representation. b) If present, retrieve the Initialisation Segment of the Representation. c) Retrieve Media Segments of the chosen Representation. d) Find the elementary streams in the downloaded Initialisation Segment / Media Segments. Typically select one video and one audio stream. If there are options, select from those. e) Setup the “player” to play the selected Component Streams. Play them. f) While playing, allow the user to select other/additional Component Streams in the Initialisation Segment / Media Segments. g) To switch to a different bitrate, select an alternate non-partial Representation and continue from step 2b. 3. In case of partial Representations: a) Based on the metadata in the MPD (typically the @bandwidth attribute and the <Component> element) select the initial Representations. b) If present, retrieve the Initialisation Segment of the Period. c) Retrieve Media Segments of the chosen Representations. d) Based on the @ids of the <Component> elements, or using information from the Initialisation Segment, setup the “player” to play the selected Component Streams. Play them. e) While playing, allow the user to select other/additional Component Streams. If other/additional streams are selected, continue from step 3c. f) To switch to a different bitrate of one of the chosen partial Representations, select an alternate partial Representation with the same value for the @group attribute and continue from step 3c. Note that the Initialisation Segment will always contain the full description of all Component alternatives, so it is guaranteed that there are no identifier conflicts between them (e.g. two languages with the same MPEG-2 TS PID or MP4 trackID).
The parsing of this Initialisation Segment and the corresponding settings on the terminal to select the appropriate Components is a responsibility of the application (the media player). Appendix D. Usage of the MP4 File Format (Informative) D.1 Audio/Video Synchronization Unlike MPEG-2 TS, the MP4 system layer ([ISOFF]) does not define a system clock or global timestamps that link the various elementary streams to the system clock. Instead, every track has its own independent timeline, specified based on the durations of samples. The decoding time of a sample is calculated by summing up the durations of all samples since the start of the track. The composition time of a sample is either identical to the decoding time, or indicated by an offset to the decoding time. In the context of adaptive streaming (and especially in case of live streaming), a terminal may want to start playback at any point in the Content without having access to the durations of all samples since the start of the track. Audio/video synchronization would not be a problem if, at the start of each Segment, audio and video were always perfectly aligned. This however is not possible, because video frames and audio frames are typically unequal in duration. Consequently, a Segment that contains an integer number of audio and video frames will generally not have equal durations of audio and video data. Say that a movie consists of an audio and a video elementary stream, where the video is sampled at 25fps and the audio is sampled at 48 kHz and framed using 1024 audio samples per frame. This means that the duration of a video frame is 40 ms and the duration of an audio frame is 21.33 ms.
Say also that these elementary streams are delivered using this specification and the MP4 system layer, with the following parameters:

- timescale as specified in the `mvhd`-box: 25 (“ticks” per second)
- timescale as specified in the `mdhd`-box of the video track: 25
- timescale as specified in the `mdhd`-box of the audio track: 48000
- Segment duration as specified in the MPD: 2 seconds

For this case, Table 2 gives an overview of the allocation of audio and video frames to the first 12 Segments of this movie:

<table>
<thead>
<tr> <th>Segment index</th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> <th>11</th> <th>12</th> </tr>
</thead>
<tbody>
<tr> <td>Video start time (ticks)</td> <td>0</td> <td>50</td> <td>100</td> <td>150</td> <td>200</td> <td>250</td> <td>300</td> <td>350</td> <td>400</td> <td>450</td> <td>500</td> <td>550</td> <td>600</td> </tr>
<tr> <td>Audio start time (ticks)</td> <td>0.00</td> <td>50.13</td> <td>100.27</td> <td>149.87</td> <td>200.00</td> <td>250.13</td> <td>300.27</td> <td>349.87</td> <td>400.00</td> <td>450.13</td> <td>500.27</td> <td>549.87</td> <td>600.00</td> </tr>
<tr> <td>#Video frames</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> </tr>
<tr> <td>#Audio frames</td> <td>94</td> <td>94</td> <td>93</td> <td>94</td> <td>94</td> <td>94</td> <td>93</td> <td>94</td> <td>94</td> <td>94</td> <td>93</td> <td>94</td> <td>94</td> </tr>
<tr> <td>Video duration (ticks)</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> <td>50</td> </tr>
<tr> <td>Audio duration (ticks)</td> <td>50.13</td> <td>50.13</td> <td>49.60</td> <td>50.13</td> <td>50.13</td> <td>50.13</td> <td>49.60</td> <td>50.13</td> <td>50.13</td> <td>50.13</td> <td>49.60</td> <td>50.13</td> <td>50.13</td> </tr>
</tbody>
</table>

As can be seen in this example, audio and video are perfectly aligned in Segments 0, 4, 8 and 12. However, if a terminal seeks to, for example, Segment 5, then it would need to delay play-out of the audio by 0.13 ticks (approximately 5 ms) compared to the video to achieve perfect audio/video synchronization. To signal this to the terminal, [TS26244] specifies the **tfad**-box, which this specification recommends inserting into the audio track. Figure 5 depicts a close-up of the situation at the start of Segment 5 in the above example:

**Figure 5: Example tfad-box**

The **tfad**-box allows adding empty time into a track at the accuracy of the timescale of the **mvhd**-box, which in this example is 25, the same as the video track. The **tfad**-box also allows specifying that certain samples of a track be skipped, at the timescale of the track, which in this example is 48000. To achieve perfect audio/video sync in this example, Segment 5 may include a **tfad**-box in the audio track with the following contents:

- Entry 1 (“1:empty” in **Figure 5**):
  - **Segment_duration** = 1
  - **media_time** = -1 (i.e. “empty” time)
- Entry 2 (“2:start from here” in **Figure 5**):
  - **Segment_duration** = 99
  - **media_time** = 1664

A client that starts playing at Segment 5 may use this box to synchronize audio and video, which will result in the playing of the samples as depicted in the bottom half of **Figure 5**. A terminal that continues playing the Content from Segment 4, where it has already synchronized the audio and video tracks, should ignore the **tfad**-box and append the samples of Segment 5 back-to-back with the samples of Segment 4.

D.2 Partial Representations

Via partial Representations, this specification allows services to offer the various elementary streams of a presentation as separate downloads/streams (see Appendix C).
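The audio start times in Table 2 can be reproduced with a short script. The allocation rule used here (an audio frame belongs to the Segment containing the majority of its duration, with ties going to the earlier Segment) is an assumption consistent with the start times in the table, not a rule stated by the specification:

```python
# Sketch reproducing the audio timing of Table 2: video at 25 fps, audio at
# 48 kHz framed in 1024-sample frames, 2-second Segments, mvhd timescale 25.
from fractions import Fraction

TIMESCALE = 25                       # mvhd timescale, "ticks" per second
SEG_TICKS = 2 * TIMESCALE            # 2-second Segments = 50 ticks
AUDIO_FRAME = Fraction(1024, 48000)  # audio frame duration in seconds

def audio_start_ticks(segment_index):
    """Start time (in mvhd ticks) of the first audio frame of a Segment.

    Assumed rule: a frame belongs to the Segment containing the majority
    of its duration; a tie goes to the earlier Segment.
    """
    boundary = Fraction(segment_index * SEG_TICKS, TIMESCALE)  # in seconds
    n = 0
    # First frame whose midpoint lies strictly after the Segment boundary.
    while n * AUDIO_FRAME + AUDIO_FRAME / 2 <= boundary:
        n += 1
    return float(n * AUDIO_FRAME * TIMESCALE)

starts = [round(audio_start_ticks(i), 2) for i in range(13)]
# Audio re-aligns with video every 4 Segments (8 s = exactly 375 audio frames).
```

The 4-Segment cycle (94 + 94 + 93 + 94 = 375 audio frames per 8 seconds) is why perfect alignment recurs at Segments 0, 4, 8 and 12.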
In this case it is required that there be a single Initialisation Segment describing the samples in all Media Segments of all partial Representations, and that the concatenation of the Initialisation Segment and the Media Segments is an [ISOFF]-compliant file. This section illustrates how such a requirement can be met, by working out the example of Appendix C in combination with the MP4 system layer.

In this example a service offers a video in 2 bitrates and audio in 2 languages, English and French, where the French audio is offered for separate retrieval as a separate partial Representation (see Figure 4). Figure 6 depicts a potential allocation of movie and track fragments to Segments and Representations for the first few Segments of this example.

![Figure 6: Partial Representation MP4 Example](image)

In the above example, each Segment has a sequence number in the MPD (i.e. the Segment index value) and contains a single movie fragment with a sequence number in the `mfhd`-box. Segments of the Representations “HQRep” and “LQRep” contain samples of both the audio (English) and video tracks. Note that in this example the service is required to put each alternate video track on the same TrackID and to define a common Initialisation Segment for all partial Representations. Consequently, each Component Stream will have its own sample description (in the `trak`-box of track 1) in the common `moov`-box.

If a terminal chooses to retrieve the French audio in combination with the video, then it may retrieve the sequence of Segments as depicted in Figure 7.

![Figure 7: Partial Representation Retrieval](image)

When stored as depicted (Initialisation Segment first, Media Segments in increasing order of movie fragment sequence number), this is a valid [ISOFF] file that can be played on an existing MP4 player.
Note that the MPD could also include additional non-partial Representations that reference the same Media Segments as the HQRep and LQRep Representations in this example, and the same (or a different) Initialisation Segment. In this way the same service (and the same HTTP caches!) can be used for terminals that do not support partial Representations.
A METHOD OF PROCESSING VIDEO INTO AN ENCODED BITSTREAM

In a method of processing video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device, the processing of the video results in the bitstream (a) representing the video in a vector graphic format with quality labels which are device independent, and also (b) being decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

A method of processing video into an encoded bitstream

Technical Field

This invention relates to a method of processing video into an encoded bitstream. This may occur when processing pictures or video into instructions in a vector graphics format for use by a limited-resource display device.

Background Art

Systems for the manipulation and delivery of pictures or video in a scalable form allow the client for the material to request a quality setting that is appropriate to the task in hand, or to the capability of the delivery or decoding system. Then, by storing a representation at a particular quality in local memory, such systems allow the client to refine that representation over time in order to gain extra quality.

Conventionally, such systems take the following approach: an encoding of the media is obtained by applying an algorithm whose parameters (e.g. quantisation level) are set to some "coarse" level. The result is a bitstream which can be decoded and the media fully reconstructed, although at a reduced quality with respect to the original. Subsequent encodings of the input are then obtained with progressively "better quality" parameter settings, and these can be combined with the earlier encodings in order to obtain a reconstruction at any desired quality.
Such a system may include a method for processing the image data into a compressed and layered form where the layers provide a means of obtaining and decoding data over time to build up the quality of the image. An example is described in PCT/GB00/01614 to Telemedia Limited. Here the progressive nature of the wavelet encoding in scale-space is used, in conjunction with a ranking of wavelet coefficients in significance order, to obtain a bitstream that is scalable in many dimensions.

Such systems, however, make assumptions about the capabilities of the client device, in particular as regards the display hardware, where the ability to render multi-bit pixel values into a framestore at video update rates is usually necessary. At the extreme end of the mobile computing spectrum, however, multi-bit deep framestores may not be available, or if they are, the constraints of limited connection capacity, CPU, memory, and battery life make the rendering of even the lowest quality video a severe drain on resources.

In order to address this problem a method of adapting the data to the capability of the client device is required. This is a hard problem in the context of video, which is conventionally represented in a device-dependent, low-level way, as intensity values with a fixed number of bits sampled on a rectangular grid. Typically, in order to adapt to local constraints, such material would have to be completely decoded and then reprocessed into a more suitable form. A more flexible media format would describe the picture in a higher-level, more generic, and device-independent way, allowing efficient processing into any of a wide range of display formats.

In the field of computer graphics, vector formats are well known and have been in use since images first appeared on computer screens.
These formats typically represent the pictures as strokes, polygons, curves, filled areas, and so on, and as such make use of a higher-level and wider range of descriptive elements than is possible with the standard image pixel-format. An example of such a vector file format is Scalable Vector Graphics (SVG). If images can be processed into vector format while retaining (or even enhancing) the meaning or sense of the image, and instructions for drawing these vectors can be transmitted to the device rather than the pixel values (or transforms thereof), then the connection, CPU and rendering requirements can all potentially be dramatically reduced.

Summary of the Invention

In a first aspect, there is provided a method of processing video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the processing of the video results in the bitstream: (a) representing the video in a vector graphic format with quality labels which are device independent, and (b) being decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

The quality labels may enable scalable reconstruction of the video at the device and also at different devices with different display capabilities. The method is particularly useful in devices which are resource constrained, such as mobile telephones and handheld computers.

The following steps may occur as part of processing the video into a vector graphics format with quality labels:

(a) describing the video in terms of vector based graphics primitives;
(b) grouping these graphics primitives into features;
(c) assigning to the graphics primitives and/or to the features values of perceptual significance;
(d) deriving quality labels from these values of perceptual significance.
An image, represented in the conventional way as intensity samples on a rectangular grid, can be converted into a graphical form and represented as an encoding of a set of shapes. This encoding represents the image at a coarse scale but with edge information preserved. It also serves as a base level image from which further, higher quality, encodings are generated using one or more encoding methods. In one implementation, video is encoded using a hierarchy of video compression algorithms, where each algorithm is particularly suited to the generation of encoded video at a given quality level.

In a second aspect, there is a method of decoding video which has been processed into an encoded bitstream in which the encoded bitstream has been sent over a WAN to a device; wherein the decoding of the bitstream involves (i) extracting quality labels which are device independent and (ii) enabling the device to display a vector graphics based representation of the video at a quality determined by the quality labels, so that the quality of the video displayed on the device is determined by the resource constraints of the device.

In a third aspect, there is an apparatus for encoding video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the apparatus is capable of processing the video into the bitstream such that the bitstream: (a) represents the video in a vector graphic format with quality labels which are device independent, and (b) is decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.
In a fourth aspect, there is a device for decoding video which has been processed into an encoded bitstream in which the encoded bitstream has been sent over a WAN to the device; wherein the device is capable of decoding the bitstream by (i) extracting quality labels which are device independent and (ii) displaying a vector graphics based representation of the video at a quality determined by the quality labels, so that the quality of the video displayed on the device is determined by the resource constraints of the device.

In a fifth and final aspect, there is a video file bitstream which has been encoded by a process comprising the steps of processing an original video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the processing of the video results in the encoded bitstream: (a) representing the video in a vector graphic format with quality labels which are device independent, and (b) being decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

Briefly, an implementation of the invention works as follows. A grey-scale image is converted to a set of regions. In a preferred embodiment, the set of regions corresponds to a set of binary images such that each binary image represents the original image thresholded at a particular value. A number of quantisation levels max_levels is chosen and the histogram of the input image is equalised for that number of levels, i.e., each quantisation level is associated with an equal number of pixels. Threshold values t(1), t(2), ..., t(max_levels), where t is a value between the minimum and maximum value of the grey-scale, are derived from the equalisation step and used to quantise the image into max_levels binary images consisting of foreground regions (1) and background (0).
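The equalisation-based thresholding step can be sketched as follows. This is a pure-Python illustration of the idea (each level covering an approximately equal share of the sorted pixels), not the patent's MATLAB reference implementation, and the even-split indexing is an assumption:

```python
# Sketch: derive max_levels thresholds from a histogram equalisation of a
# grey-scale image, then threshold the image into binary level images.

def equalised_thresholds(pixels, max_levels):
    """Thresholds t(1)..t(max_levels) splitting the sorted pixels evenly."""
    ordered = sorted(pixels)
    n = len(ordered)
    return [ordered[min(k * n // max_levels, n - 1)]
            for k in range(1, max_levels + 1)]

def binarise(pixels, threshold):
    """Foreground (1) where the pixel reaches the threshold, else 0."""
    return [1 if p >= threshold else 0 for p in pixels]

pixels = [0, 32, 64, 96, 128, 160, 192, 224]
thresholds = equalised_thresholds(pixels, max_levels=4)
levels = [binarise(pixels, t) for t in thresholds]
```

Each successive threshold produces a smaller foreground, so the binary images nest: higher levels mark only the brightest regions.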
For each of the max_levels image levels the following steps are taken. The regions are grown in order to fill small holes and so eliminate some 'noise'. Then, to ensure that no 'gaps' open up in the regions during detection of their perimeters, any 8-fold connectivity of the background within a foreground region is removed, and 8-fold connected foreground regions are thickened to a minimum of 3-pixel width.

In another embodiment, the regions are found using a "Morphological Scale-Space Processor": a non-linear image processing technique that uses shape analysis and manipulation to process multidimensional signals such as images. The output from such a processor typically consists of a succession of images containing regions with increasingly larger-scale detail. These regions may represent recognisable features of the image at increasing scales and can conveniently be represented in a scale-space tree, in which nodes hold region information (position, shape, colour) at a given scale, and edges represent scale-space behaviour (how coarse-scale regions are formed from many fine-scale ones).

These regions may be processed into a description (the shape description) that describes the shape, colour, position, visual priority, and any other aspect of the regions, in a compact manner. This description is processed to provide feature information, where a feature is an observable characteristic of the image. This information may include any of the following: the sign of the intensity gradient of the feature (i.e., whether the contour represents the perimeter of a filled region or a hole), the average intensity of the feature, and the 'importance' of the feature, as represented by this contour.

In a preferred embodiment, the perimeters of the regions are found, unique labels are assigned to each contour, and each labelled contour is processed into a list of coordinates.
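The contour-labelling step above can be sketched with a simple flood fill over 8-connected perimeter pixels. The flood-fill strategy is one possible implementation, not necessarily the one used by the reference m-code:

```python
# Sketch: assign a unique label to each 8-connected set of perimeter pixels
# and collect its pixel coordinates.

def label_contours(perimeter):
    """perimeter: 2-D list of 0/1 pixels; returns {label: [(y, x), ...]}."""
    h, w = len(perimeter), len(perimeter[0])
    labels = [[0] * w for _ in range(h)]
    contours = {}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if perimeter[y][x] and not labels[y][x]:
                next_label += 1
                labels[y][x] = next_label
                stack, coords = [(y, x)], []
                while stack:  # iterative flood fill over the 8-neighbourhood
                    cy, cx = stack.pop()
                    coords.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and perimeter[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
                contours[next_label] = coords
    return contours

image = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
contours = label_contours(image)
```

Note that the diagonal pair in the lower right forms a single contour, because the labelling uses 8-connectivity.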
For each of the max_levels image levels, and for each contour within that level, it is established whether the contour represents a boundary or a hole, using a scan-line parity-check routine (Theo Pavlidis, "Algorithms for Graphics and Image Processing", Springer-Verlag, p. 174). Then a grey-scale intensity is estimated and assigned to this contour by averaging the grey-scale intensities around the contour.

Finally, the contours are grouped into features by sorting the contours into families of related contours, and each feature is assigned a perceptual significance computed from the intensity gradients of the feature. Also, each contour within the feature is individually assigned a perceptual significance computed from the intensity gradient in the locality of the contour. Quality labels are then derived from the values of perceptual significance for both the contours and features in order to enable determination of position in a quality hierarchy.

The contour coordinates may be sorted into pixel-adjacency order so that, in the fitting step, the correct curves are modelled. In the preferred embodiment of this aspect of the invention, the contour is split into a set of simplified curves that are single-valued functions of the independent variable \( x \), i.e., the curves do not double back on themselves, so a point with abscissa \( x \) is adjacent to a point with abscissa \( x+1 \).

Parametric curves may then be fitted to the contours. In a preferred embodiment, a piecewise cubic Bezier curve fitting algorithm is used, as described in: Andrew S. Glassner (ed.), Graphics Gems Volume 1, p. 612, "An Algorithm for Automatically Fitting Digitised Curves". The curves are priority-ordered to form a list of graphics instructions in a vector graphics format that allow a representation of the original image to be reconstructed at a client device.
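The parity check underlying the boundary/hole classification is the classic even-odd rule: cast a horizontal ray from a test point and count edge crossings; an odd count means the point lies inside the contour. A minimal sketch on a float polygon follows (Pavlidis' routine of course operates on labelled pixel contours, so this is only illustrative):

```python
# Even-odd (parity) point-in-contour test, the rule behind the scan-line
# parity check described above.

def point_in_contour(x, y, contour):
    """contour: list of (x, y) vertices in order; returns True if inside."""
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # Does this edge cross the horizontal ray going right from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A contour all of whose points lie inside another contour of the same level is then a candidate hole (or child) of that contour.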
For each level, starting with the lowest, and for each contour representing a filled region, the curve is written to file in SVG format. Then, for each level, starting with the highest, and for each contour representing a hole, the curve is written to file in SVG format. This procedure adapts the well-known "painters algorithm" in order to obtain the correct visual priority for the regions. The SVG client renders the regions in the order in which they are written in the file: by rendering regions of increasing intensity order "back-to-front" and then rendering regions of decreasing intensity order "front-to-back", the desired approximation to the input image is reconstructed.

The region description may be transmitted to a client which decodes and reconstructs the video frames to a "base" quality level. A second encoding algorithm is then employed to generate enhancement information that improves the quality of the reconstructed image. In a preferred embodiment, the segmented and vectorised image is reconstituted at the encoder at a resolution equivalent to the "root" quadrant of a quadtree decomposition. This is used as an approximation to, or predictor for, the true root data values. The encoder subtracts the predicted root quadrant from the true root quadrant, encodes the difference using an entropy encoding scheme, and transmits the result. The decoder performs the inverse function, adding the root difference to the reconstructed root, and using this as the start point in the inverse transform.

Brief Description of Figures

Note: in the figures, the language used in the code fragments is MATLAB m-code.

Figure 1 shows a code fragment for the 'makecontours' function.
Figure 2 shows a code fragment for the 'contourtype' function.
Figure 3 shows a code fragment for the 'contourcols' function.
Figure 4 shows a code fragment for the 'contourassoc' function.
Figure 5 shows a code fragment for the 'contourgrad' function.
Figure 6 shows a code fragment for the 'adjorder' function.
Figure 7 shows a code fragment for the 'writebezier' function.
Figure 8 shows a flow chart representing the process of grouping contours into features.
Figure 9 shows a flow chart representing the process of assigning values of perceptual significance to features and contours.
Figure 10 shows a flow chart representing the process of assigning quality labels to contours.
Figure 11 shows a diagram of the data structures used.
Figure 12 shows the original monochrome 'Saturn' image.
Figures 13 - 16 show the contours at levels 1 - 4, respectively.
Figure 17 shows the contours at all levels superimposed.
Figure 18 shows the rendered SVG image.
Figure 19 shows a scalable encoder.
Figure 20 shows a scalable decoder.

Best Mode for Carrying out the Invention

Key Concepts

Scalable Vector Graphics

An example of a scalable vector file format is Scalable Vector Graphics (SVG) (SVG 1.0 Specification, W3C Candidate Recommendation, 2 August 2000). SVG is a proposed standard format for vector graphics which is a namespace of XML and which is designed to work well across platforms, output resolutions, colour spaces, and a range of available bandwidths.

Wavelet Transform

The wavelet transform has only relatively recently matured as a tool for image analysis and compression. Reference may for example be made to Mallat, Stephane G., "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-692 (Jul 1989), in which the Fast Wavelet Transform (FWT) is described. The FWT generates a hierarchy of power-of-two images or subbands where at each step the spatial sampling frequency - the 'fineness' of detail which is represented - is reduced by a factor of two in x and y.
This procedure decorrelates the image samples, with the result that most of the energy is compacted into a small number of high-magnitude coefficients within a subband, the rest being mainly zero or low-value, offering considerable opportunity for compression. Each subband describes the image in terms of a particular combination of spatial/frequency components. At the base of the hierarchy is one subband - the root - which carries the average intensity information for the image, and is a low-pass filtered version of the input image. This subband can be used in scalable image transmission systems as a coarse-scale approximation to the input image, which, however, suffers from blurring and poor edge definition.

Scale-Space Filtering

The idea of scale-space was developed for use in computer vision investigations and is described in, for example, AP Witkin, "Scale space filtering - A new approach to multi-scale description", in Ullman, Richards (Eds.), Image Understanding, Ablex, Norwood, NJ, 79-95, 1984. In a multi-scale representation, structures at coarse scales represent simplifications of the corresponding structures at finer scales. A multi-scale representation of an image can be obtained by the wavelet transform, as described above, or by convolution using a Gaussian kernel. However, such linear filters result in a blurring of edges at coarse scales, as in the case of the wavelet root quadrant, as described above.

**Browse Quality**

In certain applications, the ability to quickly gain a sense of structure and movement outweighs the need to render a picture as accurately as possible. Such a situation occurs when a human user of a video delivery system wishes to find a particular event in a video sequence, for example during an editing session; here the priority is not to appreciate the image as an approximation to reality, but to find out what is happening in order to make a decision.
In such situations a stylised, simplified, or cartoon-like representation is as useful as, and arguably better than, an accurate one, as long as the higher-quality version is available when required.

**Segmentation**

In order to obtain a scale-space representation that simplifies or removes detail whilst preserving edge definition, a different approach must be taken to the problem of image simplification. Segmentation is the process of identifying and labelling regions that are "similar", according to some relation. A segmented image replaces smooth gradations in intensity with sharply defined areas of constant intensity but preserves perceptually significant features, and retains the essential structure of the image.

A simple and straightforward approach to doing this involves applying a series of thresholds to the image pixels to obtain constant intensity regions, and sorting these regions according to their scale (obtained by counting interior pixels, or by other geometrical methods which take account of the size and shape of the perimeter). These regions will typically correlate poorly with perceptually significant features in the original image, but can still represent the original in a stylised way.

To obtain a better correlation between image features and segmented regions, non-linear image processing techniques can be employed, as described in, for example, P. Salembier and J. Serra, "Flat zones filtering, connected operators and filters by reconstruction", IEEE Transactions on Image Processing, 3(8):1153-1160, August 1995, which describes a Morphological segmentation technique. Morphological segmentation is a shape-based image processing scheme that uses connected operators (operators that transform local neighbourhoods of pixels) to remove and merge regions such that intra-region similarity tends to increase and inter-region similarity tends to decrease.
This results in an image consisting of so-called "flat zones": regions with a particular colour and scale. Most importantly, the edges of these flat zones are well-defined and correspond to edges in the original image.

A specific embodiment of the invention will now be described by way of example.

**Conversion of input image to set of binary images representing regions**

Referring to the code fragment of figure 1, a number of quantisation levels $max\_levels$ is chosen and the histogram of the input image is equalised for that number of levels. The equalisation transform matrix is then used to derive a vector of threshold values and this vector is used to quantise the image into $max\_levels$ levels. The histogram of the resulting quantised image is flat (i.e. each quantisation level is associated with an equal number of pixels). Then, for each of the $max\_levels$ levels, the image is thresholded at level L to convert it to a binary image, consisting of foreground regions (1) and background (0).

**Conversion of binary images to coordinate lists representing contours**

Referring again to the code fragment of figure 1, for each of the $max\_levels$ binary images the following steps are taken. The regions are grown in order to fill small holes and so eliminate some 'noise'. The 'grow' operation involves setting a pixel to '1' if five or more pixels in the 3-by-3 neighbourhood are '1's; otherwise it is set to '0'. Then, to ensure that no gaps open up in the regions during subsequent processing, any 8-fold connectivity of the background is removed using a diagonal fill, and 8-fold connected foreground regions are widened to a minimum 3-pixel span using a thicken operation that adds pixels to the exterior of regions. The perimeters of the resulting regions are located and a new binary image created with pixels set to represent the perimeters. Each set of 8-connected pixels is then located and overwritten with a unique label.
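The 'grow' operation is simple to sketch directly from the description above. Treating pixels outside the image as background is an assumption the text does not spell out:

```python
# Sketch of the 'grow' operation: a pixel becomes 1 if five or more pixels
# in its 3-by-3 neighbourhood (itself included) are 1, otherwise 0.

def grow(image):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count set pixels in the in-bounds part of the neighbourhood;
            # out-of-bounds pixels are assumed to be background (0).
            count = sum(
                image[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            )
            out[y][x] = 1 if count >= 5 else 0
    return out

# A one-pixel hole inside a solid 3x3 block is filled by the operation
# (the corners erode because of the background-outside assumption):
block = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]
```

This shows the noise-removal effect the text describes: small interior holes are filled, while isolated pixels (fewer than five set neighbours) are eliminated.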
Then every connected set of pixels with a particular label is found and a list of pixel coordinates is built.

**Determination of contour colour and type**

Referring to the code fragment of figure 2, for each of the $max\_levels$ image levels, and for each contour within that level, it is established whether the contour represents a fill or a hole at this level using a scan-line parity-check routine (Theo Pavlidis, "Algorithms for Graphics and Image Processing", Springer-Verlag, p. 174). Then, referring to the code fragment of figure 3, a grey-scale intensity is estimated for each contour and assigned to it by averaging the grey-scale intensities around the contour.

**Feature extraction and quality labelling from contours**

The contours are grouped into features, where each feature is assigned a perceptual significance computed from the intensity gradients of the feature. Also, each contour within the feature is individually assigned a perceptual significance computed from the intensity gradient in the locality of the contour. This is done as follows.

Referring to the code fragment of figure 4 and the flow-chart of figure 8: starting with the highest-intensity fill-contour (rather than hole-contour), each contour at level L is associated with the contour at level L-1 that immediately encloses it, again using scan-line parity-checking. An association list is built that relates every contour to its 'parent' contour, so that groups of contours representing a feature can be identified. The feature is assigned an ID and a reference to the contour list is made in a feature table. The process is then repeated for hole-contours, starting with the one of lowest intensity.

Referring to the code fragment of figure 5 and the flow-chart of figure 9, perceptual significances are then assigned to features and contours in the following way.
Starting with the highest-intensity fill-contour of a feature, and at each of a fixed number of positions (termed the fall-line) around this contour, the intensity gradient is calculated by determining the distance to the parent contour. These gradients are median-filtered and averaged, and the value thus obtained - pscontour - gives a reasonable indication of the perceptual significance of the contour. The association list is used to descend through all the rest of the enclosing contours. Then the gradients down each of the fall-lines of all the contours for the feature are calculated, median-filtered and averaged, and the value thus obtained - psfeature - gives a reasonable indication of the perceptual significance of the feature as a whole.

The final step is to derive quality labels from the values of perceptual significance for the contours and features, in order to enable determination of position in a quality hierarchy. Referring to the flow-chart of figure 10, quality labels are initialised as the duple {Ql, Qg} (local and global quality) on each contour descriptor. The features are sorted with respect to psfeature. The first (most significant) feature is found and all of the contour descriptors in its list have their Ql set to 1; then the next most significant feature is found and its contour descriptors have their Ql set to 2, and so on. Thus, all the contours within a feature have the same value of Ql; contours belonging to different features have different values of Ql. As a second step, all the contours are sorted with respect to pscontour, and linearly increasing values of Qg, starting with 1, are written to their descriptors. Thus, every contour in the scene has a unique value of Qg.

Two orderings of the data are thus obtained using the quality labels: Ql ranks localised image features into significance order; Qg ranks contours into global significance order.
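The label-assignment step can be sketched as follows. This is an illustrative Python version; the dictionary records are simplified stand-ins for the contour descriptors of figure 10:

```python
def assign_quality_labels(features):
    """features: list of {'ps': psfeature, 'contours': [{'ps': pscontour}, ...]}.
    Writes Ql (shared by all contours of a feature) and Qg (unique per
    contour) into each contour record."""
    # Ql: rank features by psfeature, most significant first.
    for ql, feat in enumerate(sorted(features, key=lambda f: f['ps'],
                                     reverse=True), start=1):
        for c in feat['contours']:
            c['Ql'] = ql
    # Qg: rank every contour in the scene globally by pscontour.
    all_contours = [c for f in features for c in f['contours']]
    for qg, c in enumerate(sorted(all_contours, key=lambda c: c['ps'],
                                  reverse=True), start=1):
        c['Qg'] = qg
    return features

feats = [
    {'ps': 0.9, 'contours': [{'ps': 0.8}, {'ps': 0.3}]},
    {'ps': 0.5, 'contours': [{'ps': 0.6}]},
]
assign_quality_labels(feats)
```

The two sorts directly realise the two orderings described: Ql groups contours by feature significance, while Qg interleaves contours from different features by their individual significance.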
This allows a decoder to choose the manner in which a picture is reconstructed: whether to bias in favour of reconstructing individual local features with the best fidelity first, or of obtaining a global approximation to the entire scene first.

The diagram of figure 11 outlines the data structures used when assigning quality labels to contours. The feature indicated comprises three contours. Local and global gradients are computed using the eight fall-lines shown, and the values for psfeature, pscontour, Qg and Ql are written in the tables.

**Reordering and filtering of contours**

After the previous operations have been completed, the coordinates in each list are in scan-order, i.e. the order in which they were detected. In order for curve-fitting to work they need to be re-ordered such that each coordinate represents a pixel adjacent to its immediate 8-fold connected neighbour. Referring to the code fragment of figure 6, this is done as follows. The contour may be complicated, with many changes of direction, but it cannot cross itself or have multiple paths. The algorithm splits the contour into a list of simpler curves that are single-valued functions of the independent variable, i.e. that never change direction with respect to increasing scan number (or x-value). On these curves each value of the independent variable x maps to just one point, so points at x(n) and x(n+1) must be adjacent. The start and finish points of these curves are found; then, for each curve, these points are tested against all others to determine which curve connects to which other(s). Finally, the curves are traversed in connection order to generate the list of pixel coordinates in adjacency order. As part of the reordering process, runs of pixels on the same scan line are detected and replaced by a single point, to reduce the size of data handed on to the fitting process.
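A much-simplified version of this reordering can be sketched as a greedy chaining of 8-connected neighbours. This is an assumption-laden stand-in for illustration only, not the curve-splitting algorithm of figure 6:

```python
def reorder_adjacent(points):
    """Reorder scan-ordered contour pixels so that consecutive entries
    are 8-connected neighbours. Greedy sketch: assumes a simple open
    chain with no branches or crossings (as the text requires)."""
    def adjacent(a, b):
        # 8-connectivity: Chebyshev distance of exactly 1.
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) == 1

    remaining = list(points)
    # Start from an endpoint: a pixel with only one neighbour in the set.
    start = next(p for p in remaining
                 if sum(adjacent(p, q) for q in remaining if q != p) == 1)
    chain = [start]
    remaining.remove(start)
    while remaining:
        nxt = next(p for p in remaining if adjacent(p, chain[-1]))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

# Pixels listed in scan order; the true chain runs (0,0)-(1,1)-(2,1)-(3,2).
scan_order = [(0, 0), (2, 1), (1, 1), (3, 2)]
chain = reorder_adjacent(scan_order)
```

The figure 6 algorithm achieves the same adjacency ordering far more robustly, by splitting the contour into monotone curves and traversing them in connection order.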
**Bezier curve fitting**

The piecewise cubic Bezier curve fitting algorithm used in the preferred embodiment of the invention is described in Andrew S. Glassner (ed.), Graphics Gems, Volume 1, p. 612, "An Algorithm for Automatically Fitting Digitised Curves".

**Visual priority ordering**

Referring to the code fragment of **figure 7**: for each level, starting with the lowest, and for each contour representing a filled region, the curve is written to file in SVG format. Then, for each level, starting with the highest, and for each contour representing a hole, the curve is written to file in SVG format. This procedure adapts the well-known "painter's algorithm" in order to obtain the correct visual priority for the regions. The SVG client renders the regions in the order in which they are written in the file: by rendering regions of increasing intensity order "back-to-front", and then rendering regions of decreasing intensity order "front-to-back", the desired approximation to the input image is reconstructed.

**Scalable encoding using a vector graphics base level encoding**

Referring to the diagrams of a scalable encoder and decoder (**figures 15 and 16**): at the encoder the input image is segmented, shape-encoded, converted to vector graphics and transmitted as a low-bitrate base level image; it is also rendered at the wavelet root quadrant resolution and used as a predictor for the root quadrant data. The error in this prediction is entropy-encoded and transmitted together with the compressed wavelet detail coefficients. This compression may be based on the principle of spatially oriented trees, as described in PCT/GB00/01614 to Telemedia Limited. The decoder performs the inverse function: it renders the root image and presents this as a base level image; it also adds this image to the root difference to obtain the true root quadrant data, which is then used as the start point for the inverse wavelet transform.
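The visual-priority ordering can be sketched as follows. This is an illustrative generator of minimal SVG with contours simplified to polygons; the function names and the contour record layout are assumptions, not the SVGwritepath helpers of figure 7:

```python
def write_svg(contours, n_levels, width, height):
    """contours: list of {'level': int, 'fill': bool,
    'colour': grey 0-255, 'points': [(x, y), ...]}.
    Fills are written back-to-front (lowest level first), then holes
    front-to-back (highest level first), so the SVG client's in-order
    rendering reproduces the painter's-algorithm priority."""
    def path(c):
        d = 'M' + ' L'.join(f'{x},{y}' for x, y in c['points']) + ' Z'
        g = c['colour']
        return f'<path d="{d}" fill="rgb({g},{g},{g})"/>'

    body = []
    for level in range(1, n_levels + 1):          # fills, back-to-front
        body += [path(c) for c in contours if c['level'] == level and c['fill']]
    for level in range(n_levels, 0, -1):          # holes, front-to-back
        body += [path(c) for c in contours if c['level'] == level and not c['fill']]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + ''.join(body) + '</svg>')

contours = [
    {'level': 2, 'fill': True,  'colour': 200, 'points': [(2, 2), (6, 2), (6, 6), (2, 6)]},
    {'level': 1, 'fill': True,  'colour': 50,  'points': [(0, 0), (8, 0), (8, 8), (0, 8)]},
    {'level': 2, 'fill': False, 'colour': 50,  'points': [(3, 3), (5, 3), (5, 5), (3, 5)]},
]
svg = write_svg(contours, n_levels=2, width=8, height=8)
```

Because SVG renders elements in document order, the dark background is painted first, the bright region over it, and finally the hole punches back down to the background intensity.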
**Industrial Applicability**

As a simple example of the use of the invention, consider the situation in which it is desired that material residing on a picture repository be made available to a range of portable devices with displays of assorted spatial and grey-scale resolutions - possibly some with black-and-white output only. Using the methods of the current invention the material is processed into a single file in SVG format. The devices are loaded with SVG viewer software that allows reconstruction of the picture data irrespective of the capability of the individual client device.

**Claims**

1. A method of processing video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the processing of the video results in the bitstream: (a) representing the video in a vector graphic format with quality labels which are device independent, and (b) being decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

2. The method of Claim 1 in which the quality labels enable scalable reconstruction of the video at the device and also at different devices with different display capabilities.

3. The method of Claim 1 in which the following steps occur as part of processing the video into a vector graphics format with quality labels: (a) describing the video in terms of vector based graphics primitives; (b) grouping these graphics primitives into features; (c) assigning to the graphics primitives and/or to the features values of perceptual significance; (d) deriving quality labels from these values of perceptual significance.

4. The method of Claim 1 in which multiple processing steps are applied to the video, with each processing step producing an encoded bitstream with different quality characteristics.

5.
The method of Claim 3 in which the vector based graphics primitives are selected from the group comprising: (a) straight lines or (b) curves.

6. The method of Claim 3 in which the values of perceptual significance relate to one or more of the following: (a) individual local features; (b) a global approximation to an entire scene in the video.

7. The method of Claim 3 in which the values of perceptual significance relate to one or more of the following: (a) sharpness of an edge; (b) size of an edge; (c) type of shape; (d) colour consistency.

8. The method of Claim 1 in which the video is an image and/or an image sequence.

9. The method of Claim 3 where the video constitutes the base level in a scalable image delivery system, and where the features represented by graphics primitives in the video have a simplified or stylised appearance, and have well defined edges.

10. The method of Claim 9 where the image processing involves converting a grey-scale image into a set of binary images obtained by thresholding.

11. The method of Claim 9 where the processing involves converting a grey-scale image into a set of regions obtained using morphological processing.

12. The method of Claim 9 or 10, where the processing further involves the steps of region processing to eliminate detail, perimeter determination, and processing into a coordinate list.

13. The method of Claim 12 where the processing further involves the generation of perceptual significance information for both the graphics primitives and features, which is used to derive quality labels that enable determination of position in a quality hierarchy.

14. The method of Claim 13 where the processing further involves re-ordering of the list such that each coordinate represents a pixel adjacent to its immediate 8-fold connected neighbour.

15. The method of Claim 14 where the processing further involves fitting parametric curves to the contours.

16.
The method of Claim 15 where the processing further involves priority-ordering the contour curves representing filled regions front-to-back, and contour curves representing holes back-to-front, in order to form a list of graphics instructions in a vector graphics format that allow a representation of the original image to be reconstructed at a client device.

17. A method of decoding video which has been processed into an encoded bitstream in which the encoded bitstream has been sent over a WAN to a device; wherein the decoding of the bitstream involves (i) extracting quality labels which are device independent and (ii) enabling the device to display a vector graphics based representation of the video at a quality determined by the quality labels, so that the quality of the video displayed on the device is determined by the resource constraints of the device.

18. An apparatus for encoding video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the apparatus is capable of processing the video into the bitstream such that the bitstream: (a) represents the video in a vector graphic format with quality labels which are device independent, and (b) is decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

19. A device for decoding video which has been processed into an encoded bitstream in which the encoded bitstream has been sent over a WAN to the device; wherein the device is capable of decoding the bitstream by (i) extracting quality labels which are device independent and (ii) displaying a vector graphics based representation of the video at a quality determined by the quality labels, so that the quality of the video displayed on the device is determined by the resource constraints of the device.

20.
A video file bitstream which has been encoded by a process comprising the steps of processing an original video into an encoded bitstream in which the encoded bitstream is intended to be sent over a WAN to a device; wherein the processing of the video results in the encoded bitstream: (a) representing the video in a vector graphic format with quality labels which are device independent, and (b) being decodable at the device to display, at a quality determined by the resource constraints of the device, a vector graphics based representation of the video.

function makecontours(image, max_levels, max_labels, min_component_len, thresh)
% Equalise the histogram of the input image.
% Quantize the image at the predetermined thresholds to give binary image set.
% Process to remove spurious detail, and to ensure minimum region thickness.
% Find perimeter pixels and assign labels.
%
[img, eqtrans] = histeq(image, max_levels);
[nlevels, thresh] = hist(eqtrans, max_levels);
thresh = round(thresh*255);
idx_contour = 1;
for level = 1:max_levels,
    img = (image > thresh(level));
    img = bwmorph(img, 'majority', 16);   % grow regions
    img = bwmorph(img, 'diag');           % remove 8-connectivity of background
    img = bwmorph(img, 'thicken');        % widen thin foreground regions
    img = bwperim(img);                   % find perimeter pixels
    [im_labelled, nlabels] = bwlabel(img, 8);
    % Find coordinates of labelled components
    %
    ncomponents = min([nlabels, max_labels]);
    ncontours(level) = 0;
    for ic = 1:ncomponents,
        [lrow, lcol] = find(im_labelled==ic);
        points = [lcol, lrow];
        if (size(points, 1) > min_component_len)   % Discard small regions
            cntr_points(idx_contour) = {points};
            cntr_label(idx_contour) = (ic);
            idx_contour = idx_contour + 1;
            ncontours(level) = ncontours(level) + 1;
        end
    end
end

Figure 1

function contourtype(cntr_points, max_levels, ncontours)
% Determine if closed contour is boundary (1) or hole (0): adapted from scanline
% contour filling algorithm using parity check (Pavlidis p. 174).
% Find any point on this curve - result applies to curve as a whole;
% Choose the first point in the list as the candidate point (xtest, ytest);
% For every other curve that intersects y with x<xtest, determine edge parity;
% Handle tangent/multiple pixels properly using contour-line adjacency graph;
% If sum of parities is even then contour is boundary, otherwise hole.
%
contourbase = 1;
for level = 1:max_levels,
    for ic = 1:ncontours(level),
        contour = cntr_points{contourbase + ic - 1};
        paritytotal = 0;
        othercontours = find((1:ncontours(level))~=ic);
        for curv = othercontours,
            testcontour = cntr_points{contourbase + curv - 1};
            [parity, errorflag] = ip_parityFind(testcontour, contour);
            paritytotal = paritytotal + parity;
        end
        if (rem(paritytotal, 2)==0)
            cntr_type(contourbase + ic - 1) = 1;
        else
            cntr_type(contourbase + ic - 1) = 0;
        end
    end
    contourbase = contourbase + ncontours(level);
end

Figure 2

function contourcols(cntr_points, cntr_type, image)
for ic = 1:length(cntr_points),
    % Determine the contour colour by averaging the pixel intensities
    % around the contour.
    points = cntr_points{ic};
    siz = size(image);
    inds = sub2ind(siz, points(:,2), points(:,1));
    cntr_cols(ic) = round(mean(image(inds)));
end

Figure 3

function contourassoc(cntr_points, cntr_type, ncontours)
%
% For each contour at level L, find the contour at level L-1
% that immediately encloses it, using scan-line parity-checking.
% Build an association list that relates every contour to its 'parent'.
%
idx_contour = 1;
conts = idx_contour:(idx_contour + ncontours(1) - 1);    % contours at current level
conts = conts(find(cntr_type(conts)));                   % find fill-contours only
seq = 1;
for (ic = conts),
    cntr_assoc(ic) = 0;   % zero denotes lowest contour assoc
    seq = seq + 1;
end
idx_contour = idx_contour + ncontours(1);
for (il = 2:length(ncontours)),
    prevconts = conts;                                   % contours at level L-1
    conts = idx_contour:(idx_contour + ncontours(il) - 1);   % contours at level L
    conts = conts(find(cntr_type(conts)));               % find fill-contours only
    for (ip = prevconts),
        seq = 1;
        for (ic = conts),
            [parity, errorflag] = ip_parityFind(cntr_points{ip}, cntr_points{ic});
            if (rem(parity, 2) == 0)
                if (~isempty(cntr_assoc(ip)))
                    cntr_assoc(ic) = ip;
                    seq = seq + 1;
                end
            end
        end
    end
    idx_contour = idx_contour + ncontours(il);
end

Figure 4

function contourgrad(cntr_points, cntr_assoc, ncontours, ngradients)
% Use the association list to identify groups of contours that represent features.
%
% At each of a fixed number of positions around the max contour calculate the
% intensity gradient at a tangent to the contour by determining the distance
% to the parent contour.
% Descend through, and process, all the enclosing contours.
% Find the next-highest unprocessed intensity contour and repeat until all
% the contours have been processed.
% Associate the contour groups with feature IDs in the cntr_feature list.
%
cntr_grad = {};
featureID = 1;
conts = fliplr(find(cntr_assoc));   % contour list, starting with highest intensity
while (~isempty(conts))
    ic = conts(1);
    cntr = cntr_points{ic};
    stride = fix(length(cntr)/ngradients);   % Space between sample points
    gradindex = 1;
    for (grad = 1:ngradients)
        % Choose the next point around the contour
        %
        point = cntr(gradindex, :);
        thisc = ic;
        parentc = cntr_assoc(thisc);   % the parent contour id
        ig = 1;
        gradients = [];
        while (parentc~=0)
            % Starting at the current point, go down the intensity fall line
            % updating a vector of gradients as we go.
            %
            parentcntr = cntr_points{parentc};        % the parent contour points
            xdiffs = abs(parentcntr(:, 1)-point(1));
            ydiffs = abs(parentcntr(:, 2)-point(2));
            diffs = xdiffs + ydiffs;                  % Manhattan distance measure
            mindiffs = min(diffs);                    % shortest path to parent
            gradients(ig) = mindiffs;
            minidx = find(diffs==mindiffs);           % find intersection of fall line
            point = parentcntr(minidx(1), :);         % with parent contour
            if (~isempty(conts))
                conts = conts(find(conts~=thisc));    % remove current contour from list
            end
            thisc = parentc;                          % descend the fall line
            parentc = cntr_assoc(thisc);
            ig = ig + 1;
        end
        cntr_grad(featureID, grad) = {gradients};     % update feature list
        gradindex = gradindex + stride;
    end
    cntr_feature(featureID) = ic;   % 'maxima' contour for this feature
    featureID = featureID + 1;
end

function adorder(points)
%
% All input points must be 8-connected.
% There must be a single path (no stubs, intersections, etc).
%
error = 0;
npoints = size(points, 1);
firstscan = min(points(:,1));
lastscan = max(points(:,1));
scans = [firstscan:lastscan];
nscans = length(scans);
miny = min(points(:,2));
maxy = max(points(:,2));
linemapl = sparse(nscans, 1);
linemaph = sparse(nscans, 1);
nextcurve = 1;
% Construct a scanline-ordered cell array of intersections with the curve.
% We also remove multiple linear points on a scan.
%
for sc = 1:nscans,
    % Split up the contour into separate curves that are linearly
    % related to the scanline (i.e. one sample point per scan).
    % Build a linemap matrix (row=scanline, col=curve number) with entries
    % min and max row values per scanline per curve ID.
    %
    % Find start points of all runs of connected pixels; subtract the
    % list of row values from a shifted version of itself and look for
    % discontinuities (i.e. steps greater than one).
    %
    runs = points(find(points(:,1)==scans(sc)), 2);
    nsamples = length(runs);
    shruns = [runs(1); runs(1:nsamples-1)];
    runstarts = [1; find(abs(runs - shruns)>1)];   % start points of all runs
    runends = [runstarts(2:end); nsamples+1];      % end points of all runs
    lenruns = runends - runstarts;                 % lengths of all runs
    nruns = length(runstarts);                     % number of runs
    foundflag = 0;
    if (sc==1)
        % start off the linemap matrix
        p = 1;
        for j = 1:nruns,
            run = runs(p:p+lenruns(j)-1);
            linemapl(sc, j) = min(run);
            linemaph(sc, j) = max(run);
            p = p + lenruns(j);
            nextcurve = nextcurve + 1;
        end
    else
        % connect pixels to existing curves, else start new curves
        lastsegsl = linemapl(sc-1, :);
        lastsegsh = linemaph(sc-1, :);
        nzsegs = find(lastsegsl);
        p = 1;
        for j = 1:nruns,
            run = runs(p:p+lenruns(j)-1);
            bl = min(run);
            bh = max(run);
            for k = nzsegs,
                al = lastsegsl(k);
                ah = lastsegsh(k);
                % See if the new run connects to one of the existing curves.
                % If so, add to the existing curve, else create a new one.
                %
                if ( ((bl<=ah+1) & (bl>=al)) | ((bh>=al-1) & (bh<=ah)) )
                    linemapl(sc, k) = bl;
                    linemaph(sc, k) = bh;
                    foundflag = 1;
                    % Run has been matched with curve, so remove curve from list
                    % and stop looking.
                    nzsegs = nzsegs(find(nzsegs~=k));
                    break;
                end
            end
            if (~foundflag)
                % create a new curve
                linemapl(sc, nextcurve) = bl;
                linemaph(sc, nextcurve) = bh;
                nextcurve = nextcurve + 1;
            else
                foundflag = 0;
            end
            p = p + lenruns(j);
        end
    end
end
% Find the endpoints of the curves
%
for (idxcrv = 1:size(linemapl, 2)),
    curve = find(linemapl(:, idxcrv));
    startscan = curve(1);
    endscan = curve(length(curve));
    endpt(idxcrv, 1) = startscan;
    endpt(idxcrv, 2) = endscan;
    endpt(idxcrv, 3) = linemapl(startscan, idxcrv);
    endpt(idxcrv, 4) = linemaph(startscan, idxcrv);
    endpt(idxcrv, 5) = linemapl(endscan, idxcrv);
    endpt(idxcrv, 6) = linemaph(endscan, idxcrv);
end
% Find connected curves.
% Offsets of (+-1,+-1) & (+-1,0) give connectivity.
ncurves = size(endpt, 1);
curveref = zeros(ncurves, 2);   % list of connecting curve numbers
starts = endpt(:, 1);
ends = endpt(:, 2);
% deltapoints is a ncurves-by-4 matrix:
%   [startscans-1, endscans-1, startscans+1, endscans+1; ...]
% It lists all the points that result when the start/end scan
% numbers are displaced by +-1 pixel.
deltapoints = [endpt(:, 1:2)-1, endpt(:, 1:2)+1];
findstarts = [];
findends = [];
for (idxcrv = 1:ncurves),
    % Find curves that connect; first, find curves whose scanline start/end
    % points are within +-1 line of those of the current curve.
    %
    % The 'start' is defined as the terminal with the smaller
    % scan line value; the 'end' as that with the greater.
    %
    % a & b matrices have same dimensions as deltapoints with a '1' signifying
    % a match with the current start/end.
    % findstarts & findends list the curve numbers for the scanline matches
    % for the start and end of the current curve, respectively
    % (taking care to exclude the curve we are matching against).
    a = reshape(starts(idxcrv)==deltapoints(:), ncurves, 4);
    fs = find(a(:,1) | a(:,2) | a(:,3) | a(:,4));
    if (~isempty(fs))
        findstarts = fs(find(fs~=idxcrv));
    end
    b = reshape(ends(idxcrv)==deltapoints(:), ncurves, 4);
    fe = find(b(:,1) | b(:,2) | b(:,3) | b(:,4));
    if (~isempty(fe))
        findends = fe(find(fe~=idxcrv));
    end
    % Now match the scan intersection start/ends of the current curve
    % against those of the candidates found in the scanline match.
    % When a match is found enter its curve ID into the curveref table.
    currintersectstart = [(endpt(idxcrv, 3)-1):(endpt(idxcrv, 3)+1), (endpt(idxcrv, 4)-1):(endpt(idxcrv, 4)+1)];
    currintersectend = [(endpt(idxcrv, 5)-1):(endpt(idxcrv, 5)+1), (endpt(idxcrv, 6)-1):(endpt(idxcrv, 6)+1)];
    currdeltastart = [deltapoints(idxcrv, 1):deltapoints(idxcrv, 3)];
    currdeltaend = [deltapoints(idxcrv, 2):deltapoints(idxcrv, 4)];
    % Handle special case where curve is single run on scanline
    if ((all(currdeltastart==currdeltaend)) & (all(currintersectstart==currintersectend)))
        currintersectstart = endpt(idxcrv, 3)-1;
        currintersectend = endpt(idxcrv, 6)+1;
    end
    for (s = findstarts),
        if (any(ismember(currintersectstart, endpt(s, 3:4))) & any(ismember(currdeltastart, endpt(s, 1))))
            curveref(idxcrv, 1) = s;
            break;
        elseif (any(ismember(currintersectstart, endpt(s, 5:6))) & any(ismember(currdeltastart, endpt(s, 2))))
            curveref(idxcrv, 1) = s;
            break;
        end
    end
    for (e = findends),
        if (any(ismember(currintersectend, endpt(e, 3:4))) & any(ismember(currdeltaend, endpt(e, 1))))
            curveref(idxcrv, 2) = e;
            break;
        elseif (any(ismember(currintersectend, endpt(e, 5:6))) & any(ismember(currdeltaend, endpt(e, 2))))
            curveref(idxcrv, 2) = e;
            break;
        end
    end
end
% Now work out the order in which to traverse the input points to generate
% output points in the correct chain order.
% For an open contour there must be exactly two unset locations (0) in the curveref
% tables corresponding to first & last curves in the contour (i.e. no start/end point).
% Starting with the first curve, use the connecting curve numbers to traverse the
% curves in the correct direction and order, looking up start & end scan lines and
% building an indexing table as we go.
%
ncurves = size(curveref, 1);
linemapindex = zeros(ncurves, 4);
idxpts = 1;
curve = 0;
from = 0;
to = 0;
lastcurve = 0;
[curve, handed] = find(curveref==lastcurve);
% Handle the case of a closed contour - just choose the first in the list
%
if (isempty(curve))
    curve = 1;
    handed = 1;
end
curve = curve(1);
handed = handed(1);
if (handed==1)
    starthand = 1;
    endhand = 2;
else
    starthand = 2;
    endhand = 1;
end
for (idxcrv = 1:ncurves),
    if (curve==0)
        error = 1;
        break;
    end
    if (curveref(curve, handed)==lastcurve)
        from = starthand;
        to = endhand;
    else
        from = endhand;
        to = starthand;
    end
    linemapindex(idxpts, 1) = endpt(curve, from);
    linemapindex(idxpts, 2) = endpt(curve, to);
    linemapindex(idxpts, 3) = curve;
    linemapindex(idxpts, 4) = to - from;
    idxpts = idxpts + 1;
    lastcurve = curve;
    curve = curveref(lastcurve, to);
end
% Generate the list of reordered points.
%
if (error==0)
    ncurves = size(curveref, 1);
    newindex = 1;
    for (idxcrv = 1:ncurves),
        first = linemapindex(idxcrv, 1);
        last = linemapindex(idxcrv, 2);
        curve = linemapindex(idxcrv, 3);
        incr = linemapindex(idxcrv, 4);
        for idxpt = first:incr:last,
            pointl = linemapl(idxpt, curve);
            pointh = linemaph(idxpt, curve);
            point = pointl + round((pointh-pointl)/2);   % choose midpoint of run
            newpoints(newindex, :) = [idxpt, point];
            newindex = newindex + 1;
        end
    end
end

Figure 6

function writebezier(fid, cntr_points, cntr_level, cntr_type, cntr_cols, max_levels, max_fit, min_component_len, bezerror)
% Fit Bezier curves to each of the labelled components using
% adaptive piecewise cubic Bezier fitting algorithm (Graphics Gems p. 612).
% Write the SVG file.
% Paint filled regions back-to-front; then holes front-to-back.
%
% Paint filled regions back-to-front
%
for level = 1:max_levels,
    idx_contours = find((cntr_level == level) & (cntr_type == 1));
    if (~isempty(idx_contours))
        for ic = idx_contours,
            if (~isempty(cntr_points{ic}))
                points = (cntr_points{ic})';   % make into 2-by-n matrix
                % The points are still 8-connected - not necessary for curve fitting
                % (also, having too many points seems to break the program).
                % 'Thin' them out so as to have approx constant number for any size contour.
                %
                npoints = length(points);
                incr = 1 + npoints/max_fit;
                fitinds = round(1:incr:npoints);
                points = points(:, fitinds);
                npoints = length(points);   % recalculate length of points matrix
                if (npoints > min_component_len)
                    SVGgroupStart(fid, cntr_cols(ic), 1, cntr_cols(ic), level);
                    SVGpathStart(fid);   % Write SVG block start
                    % Piecewise cubic Bezier fitting algorithm.
                    %
                    [degree, bezsections] = FitCurves(points, npoints, bezerror);
                    ncurves = length(degree);
                    idxbez = 1;
                    nctrl = degree(1) + 1;
                    bezcurve = round(bezsections(:, idxbez:(idxbez+nctrl-1)));
                    SVGmoveto(fid, bezcurve(:, 1));   % Start new curve
                    % write the SVG data
                    %
                    for i = 1:ncurves,
                        nctrl = degree(i) + 1;
                        bezcurve = round(bezsections(:, idxbez:(idxbez+nctrl-1)));
                        SVGwritepath(fid, bezcurve, degree(i));
                        idxbez = idxbez + nctrl;
                    end
                    SVGpathEnd(fid);   % Write SVG block end
                    SVGgroupEnd(fid);
                end
            end
        end
    end
end
% Paint holes front-to-back
%
for level = max_levels:-1:1,
    idx_contours = find((cntr_level == level) & (cntr_type == 0));
    if (~isempty(idx_contours))
        for ic = idx_contours,
            points = (cntr_points{ic})';   % make into 2-by-n matrix
            % Thin
            %
            npoints = length(points);
            incr = 1 + npoints/max_fit;
            fitinds = round(1:incr:npoints);
            points = points(:, fitinds);
            npoints = length(points);   % recalculate length of points matrix
            if (npoints > min_component_len)
                SVGgroupStart(fid, cntr_cols(ic), 1, cntr_cols(ic), level);
                SVGpathStart(fid);
                [degree, bezsections] = FitCurves(points, npoints, bezerror);
                ncurves = length(degree);
                idxbez = 1;
                nctrl = degree(1) + 1;
                bezcurve = round(bezsections(:, idxbez:(idxbez+nctrl-1)));
                SVGmoveto(fid, bezcurve(:, 1));
                % write the SVG data
                %
                for i = 1:ncurves,
                    nctrl = degree(i) + 1;
                    bezcurve = round(bezsections(:, idxbez:(idxbez+nctrl-1)));
                    SVGwritepath(fid, bezcurve, degree(i));
                    idxbez = idxbez + nctrl;
                end
                SVGpathEnd(fid);
                SVGgroupEnd(fid);
            end
        end
    end
end

Figure 7

Figure 8: flow-chart for building the contour association lists (for each feature, starting from the highest-intensity unprocessed contour, each contour is linked into the feature's contour list via its enclosing (parent) contour, repeating while contours remain).

Figure 9: flow-chart for assigning perceptual significances (for each contour, local gradients are calculated at n perimeter points, median-filtered and averaged to give pscontour; feature gradients are similarly combined to give psfeature, both written into the contour descriptors).

Figure 10: flow-chart for assigning quality labels (contour labels initialised as {Ql, Qg} = {0, 0}; features sorted with respect to psfeature and Ql set per feature; contours sorted with respect to pscontour and Qg set per contour).

Figure 11: diagram of the data structures used when assigning quality labels to contours.
olmocr_science_pdfs
2024-12-09
2024-12-09
d79fea78793e9272c7ce309e46fb9cff822f36e4
Tutorial on the MomanLib

**Summary**

This AppNote presents a brief tutorial on how to set up your environment (e.g. Visual Studio). It continues by replicating the C# examples delivered with the Library. The advanced part, covering design techniques, is found in AppNote 179.

**Applies To**

All Motion Control applications in combination with a PC environment, MC V2.5 and MC V3.0.

**Table of Contents**

- Licensing
- Getting the Software
  - Visual Studio
  - Faulhaber MomanLib
- Example Program – Step by Step
  - Creating the Project
  - Creating the User Interface
  - Connecting the MomanLib to the Project
    - Why do we need to do this?
    - What is a Wrapper
    - Creating the wrapper
  - Providing an Interface to the User-Code
    - Why an Interface
    - What is left to do?
  - Adding Functionality to the User Interface
    - Synchronous Access
    - Asynchronous Access
  - Putting it together
    - FormMain.cs
    - MomanLibSample.cs
  - Testing
- Table of Figures
- Table of Source Code

**Licensing**

Visual C#, Visual Studio and MSDN are trademarks registered by Microsoft. There may be additional terms and/or conditions for licensed third-party software components.
For a full list see https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/en-us.aspx

Log4Net is software provided by the Apache Foundation. There may be additional terms and/or conditions for licensed third-party software components.

**Getting the Software**

**Visual Studio**

1. Go to https://visualstudio.microsoft.com/vs/older-downloads/ or search for "Visual Studio 2013 Express". If you feel comfortable using a later version, that is okay too.
2. Sign in with your credentials or create a new account.
   a. If your account was created recently and you are not seeing any downloads, you may want to join the "Visual Studio Dev Essentials" program, which is free (as of Sept 2018; should work later too), to see more downloads. Joining the program is possible under the tab "Subscriptions" (Figure 1: Downloads for Visual Studio, green box).
3. Select the downloads for Visual Studio 2013 (Figure 1: Downloads for Visual Studio).
4. Install Visual Studio (typically "Visual Studio Community 2013 with Update 5" would be a good choice; see Figure 1: Downloads for Visual Studio, red box).
5. TIP: If you want to follow the tutorial with the Visual Studio language set to English, you can use the following language pack (or search for "Visual Studio 2013 Language Pack" and select English): https://my.visualstudio.com/Downloads?q=Visual%20Studio%202013%20Language%20Pack
   Alternatively, select "Tools", then "Options"; select the list item "Environment", then "International Settings".

**Faulhaber MomanLib**

1. Download the Library: navigate through the FAULHABER Support-Page to "Drive electronics" and "Downloads".
2. Select the Win32 Programming Library.
3. Download it to your local project folder.
4. If you just want to try the Library, feel free to use the examples (C++, C#, Delphi and LabVIEW) provided under "/Examples/Source/"; for C#, open "DemoCSharp.csproj" after Visual Studio is installed.
**Example Program – Step by Step**

If you only want to use the source code, there is a complete version at the end of this chapter:

- Codelisting 11: Final source code of the FormMain.cs
- Codelisting 12: Final source code of the MomanLibSample.cs

**Creating the Project**

1. Select "New Project" (Figure 2: Visual Studio 2013 Project Overview).
2. Then create the project (Figure 3: Create a new Visual C#-Project).
3. This may be your standard view (Figure 4: The starting view of a default project).
4. Create the folder "lib" in the C# project folder and extract the MomanLib into the folder "MomanLib" there (Figure 5: Folder overview). Take note of the chosen path.
5. You are good to go!

**Creating the User Interface**

1. Rename the main form from "Form1.cs" to "FormMain.cs" (Figure 6: Renaming the default Form file; Figure 7: Renaming dialog). When asked whether you want to rename all references, press "Yes".
2. Resize the form and rename the form title (Figure 8: Resizing the Form and renaming it).
3. Add the buttons via "Toolbox" → "Button" and drag them onto the form (Figure 9: Creating the Buttons).
4. Add the TextBox and make it multiline, then fit it into the form with a nice border as you like (Figure 10: Creating the Textbox).
5. Add the StatusStrip by dragging the element onto the form (Figure 11: Adding the Status Strip).
6. Add the labels (Figure 12: Adding the Labels).

**Connecting the MomanLib to the Project**

**Why do we need to do this?**

To call native C++ code in C#, we need a wrapper ("interface"). This is achieved by creating a method in C# for each function you want to import.
A common practice is to name the wrapper methods exactly like the library functions, and to prefix the imported ones with one or two underscores. For some function calls you will need some advanced knowledge about marshalling and pointers, but with a little C++ knowledge you shouldn't have a problem. In case you don't understand what's going on, just copy the files from the example or follow the next chapter.

Figure 13: Overview of the C#-Architecture

**What is a Wrapper**

A wrapper is a software structure that mimics the behavior and the methods of another software layer. Imagine we had a function in C++ like the following (simplified version):

**Codelisting 1: Simplified export of an add method**

```c++
int c_method_add(int a, int b)
{
    return a + b;
}
```

This method gets translated and compiled into a DLL. The result is an exported function inside this DLL. Exported functions inside a DLL are visible to any program that understands the DLL file format. This describes the path between parts 1 and 2 in Figure 13: Overview of the C#-Architecture. What needs to be described now is part 3 of the figure (how C# accesses the DLL) and finally how the wrapper (part 4) fits into this picture. What a program sees when loading the DLL is shown in the following figure:
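The "marshalling" mentioned above mostly means converting unmanaged data into managed C# types, for example a C string returned as a raw pointer. A minimal, self-contained sketch of that conversion; the simulated buffer merely stands in for what a native call would return, and no FAULHABER API is involved here:

```csharp
using System;
using System.Runtime.InteropServices;

class MarshalSketch
{
    static void Main()
    {
        // Simulate an unmanaged ANSI buffer, as a native DLL function might return one.
        IntPtr unmanaged = Marshal.StringToHGlobalAnsi("MC5010");

        // Copy the unmanaged bytes into a managed string. This is the same call
        // the tutorial's GetStrObj wrapper uses later on.
        string managed = Marshal.PtrToStringAnsi(unmanaged);
        Console.WriteLine(managed);   // prints "MC5010"

        // Unmanaged memory is not garbage-collected, so free it explicitly.
        Marshal.FreeHGlobal(unmanaged);
    }
}
```

When a DLL itself owns the returned buffer (as with `mmProtGetStrObj` below), you only convert it and must not free it yourself.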
Figure 14: How the DLL-Export could look like

So the wrapper for the previous example would be:

```csharp
using System;
using System.Runtime.InteropServices;

class Wrapper
{
    static int add(int a, int b)
    {
        if (true /* import successful */)   // checking whether the import was successful
            return Import._add(a, b);
        else
            throw new Exception("Cannot import!");
    }

    class Import
    {
        public const string cDLLPath = "C:/path/to/dll.dll";

        [DllImport(cDLLPath, EntryPoint = "c_method_add", CallingConvention = CallingConvention.StdCall)]
        public extern static int _add(int a, int b);
    }
}
```

- A class named "Wrapper".
- "add" is the method we want to import.
- We check whether the import was successful:
  - If yes, we can use the function.
  - Else, we have to indicate that we have an error.
- A nested class named "Import".
- The DLL path; it has to be const.
- The important part: the import attribute with the options path, the function's real name in the DLL (entry point), and how the arguments are passed to "c_method_add" (calling convention).
- The return type, the arguments, and a method name.

With this technique you have the possibility to react to failure before calling, return your own debug values, exclude some values, and so on. The following chapter explains how to do this for the MomanLib.

**Creating the wrapper**

The following parts of the tutorial each start with a task that is either "Read", "Create", or "Copy".

1. "Create" a new file: MomanLibSample.cs. Because the MomanLib offers some more complex types than just integers, we need to declare some types beforehand, so that everything is ready to use when we add the DLL imports.
2. "Copy": choose to
   a. add the library types manually from the MomanCMD.h file located in C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Include (only use the rest if you want to use the Trace-Mode¹), or
   b. alternatively use Codelisting 2: Definition of the enums used in the library.
Copy it into the file created in step 1, inside the namespace but not inside the class.

Codelisting 2: Definition of the enums used in the library

```csharp
public delegate void tdmmProtDataCallback();
public delegate void tdmmProtTraceValuesCallback(int nodeNr, UInt32[] value, int timecode);

public enum eMomanprot
{
    eMomanprot_ok_bootup = 2,
    eMomanprot_ok_async = 1,
    eMomanprot_ok = 0,
    eMomanprot_error = -1,
    eMomanprot_error_timeout = -2,
    eMomanprot_error_cmd = -3,
    eMomanprot_error_emcy = -4,
    eMomanprot_error_param = -5,
    eMomanprot_error_accessdenied = -6,
    eMomanprot_error_init = -7,
    eMomanprot_noData = -8
}
```

A delegate is a type that describes what a method looks like in C#: its return type as well as its arguments. An enum is an enumeration of integers with names; one could say, a collection of named numbers. eMomanprot indicates the state of the interface initialization, the state while opening the communication, and the result when reading an answer from the device.

---

¹ The Trace-Mode is used for logging and recording data values of the controller (MC V3.0 only).

```csharp
public enum eMomancmd
{
    // Device Control:
    eMomancmd_shutdown = 16,
    eMomancmd_switchon = 17,
    eMomancmd_disable = 18,
    eMomancmd_quickstop = 19,
    eMomancmd_DiOp = 20,
    eMomancmd_EnOp = 21,
    eMomancmd_faultreset = 22,
    eMomancmd_MA = 23,
    eMomancmd_MR = 24,
    eMomancmd_HS = 25
}

public enum eDecoded
{
    eDecoded_none = 0,
    eDecoded_SOBJ = 1,       // SOBJ command decoded
    eDecoded_GOBJ = 2,       // GOBJ command decoded
    eDecoded_Bootup = 3,
    eDecoded_NMT = 4,
    eDecoded_NMTRequest = 5,
    eDecoded_Heartbeat = 6,
    eDecoded_Statusword = 7
}
```

3. Then "Create / Copy" a static class "MomanWrapperLib" inside the namespace (see Codelisting 3: Example for inserting the static class MomanWrapperLib).
4. "Read" and select the library you want to use (choose from the Protocol folder). Here USB is selected, although we don't want to use interface-specific functions.
Create a final string in the class with the path to the DLL as its value:

```csharp
public const string cProtDll = @"C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Bin\Protocol\USB\CO_USB.dll";
```

You may as well use a relative path, but it has to match your specific path.

**Codelisting 3: Example for inserting the static class MomanWrapperLib**

```csharp
using System;

namespace FaulhaberMomanLibExampleProgram
{
    // code from Codelisting 2 (delegates and enums) goes here, inside the namespace

    class MomanWrapperLib
    {
        public const string cProtDll = @"C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Bin\Protocol\USB\CO_USB.dll";
    }
}
```

5. "Read" and open MomanProt.h, because only the protocol wrapping is needed here (see the documentation for more). If you scroll down, you will see the function prototypes as in Codelisting 4: Excerpt of function prototypes.

**Codelisting 4: Excerpt of function Prototypes**

```c
/* Function prototypes */
MOMANPROT_API eMomanprot __stdcall mmProtInitInterface(char* InterfaceDll, tdmmProtDataCallback DataReceived, tdmmProtTraceValuesCallback TraceValuesReceived);
MOMANPROT_API void __stdcall mmProtCloseInterface(void);
MOMANPROT_API void __stdcall mmProtSetDataCallback(tdmmProtDataCallback DataReceived);
MOMANPROT_API void __stdcall mmProtSetTraceValuesCallback(tdmmProtTraceValuesCallback TraceValuesReceived);
MOMANPROT_API eMomanprot __stdcall mmProtOpenCom(int port, int channel, int baud);
MOMANPROT_API void __stdcall mmProtCloseCom(void);
MOMANPROT_API int __stdcall mmProtLoadCommandSet(int cmdType);
```

If you need more functions than those listed above, you will have to translate them on your own, or see the examples.

6. "Read". Now the DLL imports are needed to bind the C# methods to the functions provided in the DLL. To put it briefly, this is exactly what is described in the chapter "What is a Wrapper".
This is done with:

```csharp
[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
```

"Copy": for some of the DLL imports we need to declare the namespace usage:

```csharp
using System.Runtime.InteropServices;
```

You will need to add this line at the very beginning of the MomanLibSample.cs file, as seen in Codelisting 3: Example for inserting the static class MomanWrapperLib.

7. The final result should look like Figure 17: Example view of the WrapperLibrary; you can "Copy" the function declarations from Codelisting 5: Function declarations of the Wrapper library:

```csharp
[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eMomanprot mmProtInitInterface(string InterfaceDll, tdmmProtDataCallback DataReceived, tdmmProtTraceValuesCallback TraceValuesReceived);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern void mmProtCloseInterface();

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eMomanprot mmProtOpenCom(int port, int channel, int baud);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern void mmProtCloseCom();

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern bool mmProtSendCommand(int nodeNr, int index, int subIndex, int dataLen, int data);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eMomanprot mmProtReadAnswer(out IntPtr answData, out int nodeNr, out IntPtr cmdString, out IntPtr receiveTelegram);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eDecoded mmProtDecodeAnswStr([MarshalAs(UnmanagedType.LPStr)] string answStr, out Int64 value);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eMomanprot mmProtGetStrObj(int nodeNr, int index, int subIndex, out IntPtr value);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern eMomanprot mmProtSetObj(int nodeNr, int index, int subIndex, int value, int len, out uint abortCode);

[DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
public static extern string mmProtGetAbortMessage(uint abortCode);
```

Figure 17: Example view of the WrapperLibrary

Codelisting 5: Function declarations of the Wrapper library

**Providing an Interface to the User-Code**

**Why an Interface**

In every application you want to separate user code and general code from specific code. The advantage is easily explained: you may want to port the software to another platform, or exchange some software layers. Say you want a new graphical user interface, want to build your own library for the communication with controllers, or want to insert another library so that your application works with more products; then you need multiple layers. (For further reading, search online or in books for the keyword "multitier architecture".) For this Application Note, the architecture with examples from both #1 and #2 is recommended.

Figure 18: Architectural Overview and recommended Design

To be more specific, the heading containing "Interface" does not refer to the C# language construct `interface`, although it would be a professional step to translate the class definitions seen here into one.

**What is left to do?**

The following parts of the tutorial each start with a task that is "Read", "Create", or "Copy".

"Read": Technically, providing an interface would not be necessary. If you decide to use this in a bigger project, it is strongly recommended though.

1. Ask yourself what you want to achieve. Here, we want to:
   1. Get a text from the device.
   2. Enable it; to be more precise: operate the CiA 402 state machine.
   3. Start a positioning / motor movement.
   Not to mention the setup we need to do.
2. Then look at each sentence and find an action and an object:
   a. Get Text, and Device.
   b. Operate CiA402, and Device [it].
   c.
Start Movement, and Device. [The motor is replaced here by the device, because the device performs the movement for us.]
3. Create a class for the device and add the methods. A sample implementation can be found in the examples of the Library.

For the following we only need to "Copy" Codelisting 6: Example Code for the Interface to the Library Sample:

Codelisting 6: Example Code for the Interface to the Library Sample

```csharp
using System;
using System.Runtime.InteropServices;   // for Marshal

class MomanLibSample
{
    // Used interface DLL from the communication Library:
    const string cIntfDll = @"C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Bin\Interface\USB\MC3USB.dll";

    public MomanLibSample()
    {
        // check whether we can find the DLL
        if (!System.IO.File.Exists(MomanWrapperLib.cProtDll))
        {
            throw new DllNotFoundException();
        }
    }

    internal bool GetStrObj(int nodeNr, int index, int subIndex, out string value)
    {
        IntPtr answData;
        eMomanprot ret = MomanWrapperLib.mmProtGetStrObj(nodeNr, index, subIndex, out answData);
        if (ret == eMomanprot.eMomanprot_ok)
        {
            value = Marshal.PtrToStringAnsi(answData);
            return true;
        }
        else
        {
            throw new Exception("Error during communication.");
        }
    }
}
```

**Adding Functionality to the User Interface**

Usually one wants to log the actions and data that the program produces. You could use a commonly used library like log4net or similar. We will take the simpler route and create a logging method, which you can "copy".
Codelisting 7: Code snippet for logging in a Textbox

```csharp
delegate void LoggingMethod(string logMessage, DateTime date);

private void Log(string logMessage)
{
    LogData_Threadsafe(logMessage, DateTime.Now);
}

private void LogData_Threadsafe(string logMessage, DateTime date)
{
    if (textBox1.InvokeRequired)
    {
        IAsyncResult reference = textBox1.BeginInvoke(new LoggingMethod(LogData_Threadsafe), logMessage, date);
        textBox1.EndInvoke(reference);
    }
    else
    {
        // use ISO 8601 for the date
        string dateStr = date.ToString("yyyy-MM-dd'T'HH:mm:ss");
        textBox1.Text += string.Format("[{0}] {1}\r\n", dateStr, logMessage);
    }
}
```

We then quickly add an initialization method using the Visual Studio designer: "select" Properties → Events → Load, then "create" the method by double-clicking the dropdown, as seen in the red box of Figure 19: Adding the Load Event. You may "copy" the following Codelisting 8: Initialization of the Library.

**Codelisting 8: Initialization of the Library**

```csharp
MomanLibSample library;
System.Threading.Thread ReceiveThread;

private void FormMain_Load(object sender, EventArgs e)
{
    library = new MomanLibSample();
    try
    {
        if (library.Init(CBAsyncDataReceived) == false)
            throw new Exception("Init failed!");
        ReceiveThread = new System.Threading.Thread(EventReceiver);
        ReceiveThread.Start();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);   // we got an error
        this.Close();
    }
}
```

Figure 19: Adding the Load Event

Since the initialization can fail, we surround it with a try-catch block and show a message box when an error is thrown; logging to the textbox as before would not be helpful here.

Now we can "add" a function to the button click by either double-clicking the button or adding a method under Properties → Events → Click.
**Codelisting 9: Source code for the GetStrObj-Button**

```csharp
private void button1_Click(object sender, EventArgs e)
{
    string value = "";
    library.GetStrObj(1, 0x1008, 0x00, out value);
    Log("Sync read 0x1008.00");
    Log(string.Format("Sync received: {0}\n", value));
}
```

This results in something like Figure 21: The finished GUI (Graphical User Interface).

**Synchronous Access**

You can directly call the library methods described in the Library Reference. You can find a copy in C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Doc.

**Asynchronous Access**

If you want to use the asynchronous features of the Library, you can create a thread that receives a notification when data has arrived. It is strongly advised that the callback function called by the library does not process the data directly, because that would block the receiving part of the library. One possible solution is to use System.Threading.AutoResetEvent.

Codelisting 10: Example for asynchronous access handling

```csharp
private System.Threading.AutoResetEvent ReceiveEvent = new System.Threading.AutoResetEvent(false);

private void CBAsyncDataReceived()
{
    ReceiveEvent.Set();
}

private void EventReceiver()
{
    while (true)
    {
        ReceiveEvent.WaitOne();
        Invoke(new MethodInvoker(AsyncDataReceived));
        ReceiveEvent.Reset();
    }
}

private void AsyncDataReceived()
{
    Log("Async Data!");
    /* we may call ReadReceivedData here */
}
```

**Putting it together**

As this puzzling and copy-pasting of source code parts can get confusing, the two main parts are given here in full, as stated at the beginning of the chapter.
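Before moving on to the complete listings, the AutoResetEvent mechanism from Codelisting 10 can be tried in isolation. The console sketch below (all names are illustrative; no FAULHABER API or WinForms is involved) shows the signal-per-Set behavior: the simulated callback only signals, and a worker thread does the processing, exactly the division of labor the section recommends. A second event makes the handshake deterministic:

```csharp
using System;
using System.Threading;

class AsyncSketch
{
    static readonly AutoResetEvent DataReceived = new AutoResetEvent(false);
    static readonly AutoResetEvent Processed = new AutoResetEvent(false);

    static void Main()
    {
        int received = 0;

        var worker = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
            {
                DataReceived.WaitOne();   // blocks until Set(); resets itself afterwards
                received++;               // "process" the data outside the callback
                Processed.Set();          // tell the producer we are done
            }
        });
        worker.Start();

        // Simulate three "data received" callbacks from the library.
        for (int i = 0; i < 3; i++)
        {
            DataReceived.Set();
            Processed.WaitOne();          // wait until the worker handled it
        }

        worker.Join();
        Console.WriteLine(received);      // prints 3
    }
}
```

Note that an AutoResetEvent releases exactly one waiter per Set() and then resets itself, which is why the explicit Reset() in Codelisting 10 is not strictly required.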
The following files are presented:

<table>
<thead>
<tr><th>Filename</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>FormMain.cs</td><td>Contains the functionality behind the user interface.</td></tr>
<tr><td>MomanLibSample.cs</td><td>Contains the copied parts from Codelisting 2 (definition of the enums used in the library), the class <strong>MomanLibSample</strong> and the class <strong>MomanWrapperLib</strong>.</td></tr>
<tr><td>Program.cs</td><td>Contains the main function with <strong>Application.Run(new FormMain());</strong></td></tr>
</tbody>
</table>

Hint: you might have to adjust the namespaces.

**FormMain.cs**

Codelisting 11: Final source code of the FormMain.cs

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace FaulhaberMomanLibExampleProgram
{
    public partial class FormMain : Form
    {
        MomanLibSample library;
        System.Threading.Thread ReceiveThread;
        private System.Threading.AutoResetEvent ReceiveEvent = new System.Threading.AutoResetEvent(false);

        public FormMain()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            string value = "";
            library.GetStrObj(1, 0x1008, 0x00, out value);
            Log("Sync read 0x1008.00");
            Log(string.Format("Sync received: {0}", value));
        }

        delegate void LoggingMethod(string logMessage, DateTime date);

        private void Log(string logMessage)
        {
            LogData_Threadsafe(logMessage, DateTime.Now);
        }

        private void LogData_Threadsafe(string logMessage, DateTime date)
        {
            if (textBox1.InvokeRequired)
            {
                IAsyncResult reference = textBox1.BeginInvoke(new LoggingMethod(LogData_Threadsafe), logMessage, date);
                textBox1.EndInvoke(reference);
            }
            else
            {
                // use ISO 8601 for the date
                string dateStr = date.ToString("yyyy-MM-dd'T'HH:mm:ss");
                textBox1.Text += string.Format("[{0}] {1}\r\n", dateStr, logMessage);
            }
        }

        private void CBAsyncDataReceived()
        {
            ReceiveEvent.Set();
        }

        private void EventReceiver()
        {
            while (true)
            {
                ReceiveEvent.WaitOne();
                Invoke(new MethodInvoker(AsyncDataReceived));
                ReceiveEvent.Reset();
            }
        }

        private void AsyncDataReceived()
        {
            Log("Async Data!");
            // we may call ReadReceivedData here
        }

        private void FormMain_Load(object sender, EventArgs e)
        {
            library = new MomanLibSample();
            try
            {
                if (library.Init(CBAsyncDataReceived) == false)
                    throw new Exception("Init failed!");
                ReceiveThread = new System.Threading.Thread(EventReceiver);
                ReceiveThread.Start();
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);   // we got an error
                this.Close();
            }
        }

        private void FormMain_FormClosed(object sender, FormClosedEventArgs e)
        {
            ReceiveThread.Abort();
            ReceiveThread.Join();
        }
    }
}
```

**MomanLibSample.cs**

Codelisting 12: Final source code of the MomanLibSample.cs

```csharp
using System;
using System.Runtime.InteropServices;

namespace FaulhaberMomanLibExampleProgram
{
    public delegate void tdmmProtDataCallback();
    public delegate void tdmmProtTraceValuesCallback(int nodeNr, UInt32[] value, int timecode);

    public enum eMomanprot
    {
        eMomanprot_ok_bootup = 2,
        eMomanprot_ok_async = 1,
        eMomanprot_ok = 0,
        eMomanprot_error = -1,
        eMomanprot_error_timeout = -2,
        eMomanprot_error_cmd = -3,
        eMomanprot_error_emcy = -4,
        eMomanprot_error_param = -5,
        eMomanprot_error_accessdenied = -6,
        eMomanprot_error_init = -7,
        eMomanprot_noData = -8
    }

    public enum eMomancmd
    {
        // Device Control:
        eMomancmd_shutdown = 16,
        eMomancmd_switchon = 17,
        eMomancmd_disable = 18,
        eMomancmd_quickstop = 19,
        eMomancmd_DiOp = 20,
        eMomancmd_EnOp = 21,
        eMomancmd_faultreset = 22,
        eMomancmd_MA = 23,
        eMomancmd_MR = 24,
        eMomancmd_HS = 25
    }

    public enum eDecoded
    {
        eDecoded_none = 0,       /*!< none decoded */
        eDecoded_SOBJ = 1,       /*!< SOBJ command decoded */
        eDecoded_GOBJ = 2,       /*!< GOBJ command decoded */
        eDecoded_Bootup = 3,
        eDecoded_NMT = 4,
        eDecoded_NMTRequest = 5,
        eDecoded_Heartbeat = 6
    }

    class MomanWrapperLib
    {
        public const string cProtDll = @"C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Bin\Protocol\USB\CO_USB.dll";

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eMomanprot mmProtInitInterface(string InterfaceDll, tdmmProtDataCallback DataReceived, tdmmProtTraceValuesCallback TraceValuesReceived);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern void mmProtCloseInterface();

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eMomanprot mmProtOpenCom(int port, int channel, int baud);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern void mmProtCloseCom();

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eMomanprot mmProtSendCommand(int nodeNr, int index, int subIndex, int dataLen, int data);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern void mmProtReadAnswer(out IntPtr answData, out int nodeNr, out IntPtr cmdString, out IntPtr receiveTelegram);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eDecoded mmProtDecodeAnswStr([MarshalAs(UnmanagedType.LPStr)] string answStr, out Int64 value);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eMomanprot mmProtGetStrObj(int nodeNr, int index, int subIndex, out IntPtr value);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern eMomanprot mmProtSetObj(int nodeNr, int index, int subIndex, int value, int len, out uint abortCode);

        [DllImport(cProtDll, CallingConvention = CallingConvention.StdCall)]
        public static extern string mmProtGetAbortMessage(uint abortCode);
    }

    class MomanLibSample
    {
        // Used interface DLL from the communication Library:
        const string cIntfDll = @"C:\MyProject\FaulhaberMomanLibExampleProgram\lib\MomanLib\Lib\Bin\Interface\USB\MC3USB.dll";

        public MomanLibSample()
        {
            // check whether we can find the DLL
            if (!System.IO.File.Exists(MomanWrapperLib.cProtDll))
            {
                throw new DllNotFoundException();
            }
        }

        internal bool GetStrObj(int nodeNr, int index, int subIndex, out string value)
        {
            IntPtr ansData;
            eMomanprot ret = MomanWrapperLib.mmProtGetStrObj(nodeNr, index, subIndex, out ansData);
            if (ret == eMomanprot.eMomanprot_ok)
            {
                value = Marshal.PtrToStringAnsi(ansData);
                return true;
            }
            else
            {
                value = "<ERROR>";
                return false;
            }
        }

        internal bool Init(tdmmProtDataCallback SignalDataReceived)
        {
            if (MomanWrapperLib.mmProtInitInterface(cIntfDll, SignalDataReceived, null) != eMomanprot.eMomanprot_ok)
            {
                return false;
            }
            if (MomanWrapperLib.mmProtOpenCom(1, 0, 0) != eMomanprot.eMomanprot_ok)
            {
                return false;
            }
            return true;
        }
    }
}
```

**Testing**

The example above should produce a result closely comparable to the example project delivered with the Library. When using the form it should behave as described in the following list, provided some requirements are met:

- Your device has the right voltage connected to the right terminals (check this first).
- Your device has its communication port connected to your PC.
- Your device is discoverable in the latest version of the Motion Manager (here, Motion Manager 6.4 is used).
- Make sure that the node ID of the device you are using is the one you see in the Motion Manager, or in the example (field name: cNodeNr). If you created the example yourself, you may edit the node-number argument of the call.
- Make sure you have the right DLL paths for both the interface DLL and the protocol DLL.
library.GetStrObj(1, 0x1008, 0x00, out value); Referring to Figure 22: The Demo at use, with an MC V3.0 (MC5010 S CO) over USB with a configured and connected Motor - **When Starting:** - No Error should be noticeable, the Demo shows a string FAULHABER communication API loaded - If the Device is shown in Motion Manager, but it still shows then you need to close the Motion Manager - **When using the Interface Element** 1. **Pressing the Button 1:** - The Program Should output something like ``` [2018-10-05T12:00:00] Sync read 0x1008.00 [2018-10-05T12:00:00] Sync received: MC5010 ``` - If it does not, but something like ``` [2018-10-05T12:00:00] Sync read 0x1008.00 [2018-10-05T12:00:00] Sync received: <ERROR> ``` 2. **Pressing the Buttons 2, 3 or 4** - These Buttons should activate the State machine of the Controller, visible by the blinking speed of the „Status LED” (Refer to the Device Manual) as well as a text in the Communication log (7), one of ``` [2018-10-05T12:00:00] Send SHUTDOWN [2018-10-05T12:00:00] Send SWITCHON [2018-10-05T12:00:00] Send ENOP ``` 3. **Pressing the Button 5** - This should do nothing in this Demo, but the example should behave with a text like below as well as a relative position by 1000 Increments. 
```
[2018-10-05T12:00:00] Execute MOVE RELATIVE
```

---

![Figure 22: The Demo at use](image-url)

Table of Figures

<table> <thead> <tr> <th>Figure</th> <th>Description</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>FIGURE 1:</td> <td>DOWNLOADS FOR VISUAL STUDIO</td> <td>2</td> </tr> <tr> <td>FIGURE 2:</td> <td>VISUAL STUDIO 2013 PROJECT OVERVIEW</td> <td>3</td> </tr> <tr> <td>FIGURE 3:</td> <td>CREATE A NEW VISUAL C#-PROJECT</td> <td>4</td> </tr> <tr> <td>FIGURE 4:</td> <td>THE STARTING VIEW OF A DEFAULT PROJECT</td> <td>4</td> </tr> <tr> <td>FIGURE 5:</td> <td>FOLDER OVERVIEW</td> <td>5</td> </tr> <tr> <td>FIGURE 6:</td> <td>RENAMING THE DEFAULT FORM FILE</td> <td>5</td> </tr> <tr> <td>FIGURE 7:</td> <td>RENAMING DIALOG</td> <td>6</td> </tr> <tr> <td>FIGURE 8:</td> <td>RESIZING THE FORM AND RENAMING IT</td> <td>6</td> </tr> <tr> <td>FIGURE 9:</td> <td>CREATING THE BUTTONS</td> <td>7</td> </tr> <tr> <td>FIGURE 10:</td> <td>CREATING THE TEXTBOX</td> <td>7</td> </tr> <tr> <td>FIGURE 11:</td> <td>ADDING THE STATUS STRIP</td> <td>8</td> </tr> <tr> <td>FIGURE 12:</td> <td>ADDING THE LABELS</td> <td>8</td> </tr> <tr> <td>FIGURE 13:</td> <td>OVERVIEW OF THE C#-ARCHITECTURE</td> <td>9</td> </tr> <tr> <td>FIGURE 14:</td> <td>HOW THE DLL-EXPORT COULD LOOK LIKE</td> <td>9</td> </tr> <tr> <td>FIGURE 15:</td> <td>ADDING THE WRAPPER-FILES</td> <td>10</td> </tr> <tr> <td>FIGURE 16:</td> <td>DETAILED VIEW OF THE FILE CREATION</td> <td>11</td> </tr> <tr> <td>FIGURE 17:</td> <td>EXAMPLE VIEW OF THE WRAPPERLIBRARY</td> <td>14</td> </tr> <tr> <td>FIGURE 18:</td> <td>ARCHITECTURAL OVERVIEW AND RECOMMENDED DESIGN</td> <td>15</td> </tr> <tr> <td>FIGURE 19:</td> <td>ADDING THE LOAD EVENT</td> <td>18</td> </tr> <tr> <td>FIGURE 20:</td> <td>HOW TO ADD AN OnClick METHOD TO A BUTTON</td> <td>19</td> </tr> <tr> <td>FIGURE 21:</td> <td>THE FINISHED GUI (GRAPHICAL USER INTERFACE)</td> <td>20</td> </tr> <tr> <td>FIGURE 22:</td> <td>THE DEMO AT USE</td> <td>27</td> </tr> </tbody> </table> Table
of Source Code | CODELISTING 1: | SIMPLIFIED EXPORT OF AN ADD METHOD | 9 | | CODELISTING 2: | DEFINITION OF THE ENUMS USED IN THE LIBRARY | 11 | | CODELISTING 3: | EXAMPLE FOR INSERTING THE STATIC CLASS MomanWrapperLib | 12 | | CODELISTING 4: | EXCERPT OF FUNCTION PROTOTYPES | 13 | | CODELISTING 5: | FUNCTION DECLARATIONS OF THE WRAPPER LIBRARY | 14 | | CODELISTING 6: | EXAMPLE CODE FOR THE INTERFACE TO THE LIBRARY SAMPLE | 16 | | CODELISTING 7: | CODE SNIPPET FOR LOGGING IN A TEXTBOX | 17 | | CODELISTING 8: | INITIALIZATION OF THE LIBRARY | 18 | | CODELISTING 9: | SOURCE CODE FOR THE GetStrObj-Button | 19 | | CODELISTING 10: | EXAMPLE FOR ASYNCHRONOUS ACCESS HANDLING | 20 | | CODELISTING 11: | FINAL SOURCE CODE OF THE FormMain.cs | 21 | | CODELISTING 12: | FINAL SOURCE CODE OF THE MomanLibSample.cs | 24 | Rechtliche Hinweise Urheberrechte. Alle Rechte vorbehalten. Ohne vorherige ausdrückliche schriftliche Zustimmung der Dr. Fritz Faulhaber & Co. KG darf diese Application Note oder Teile dieser unabhängig von dem Zweck insbesondere nicht vervielfältigt, reproduziert, gespeichert (z.B. in einem Informationssystem) oder be- oder verarbeitet werden. Kein Vertragsbestandteil; Unverbindlichkeit der Application Note. Die Application Note ist nicht Vertragsbestandteil von Verträgen, die die Dr. Fritz Faulhaber GmbH & Co. KG abschließt, und der Inhalt der Application Note stellt auch keine Beschaffenheitsangabe für Vertragsprodukte dar, soweit in den jeweiligen Verträgen nicht ausdrücklich etwas anderes vereinbart ist. Die Application Note beschreibt unverbindlich ein mögliches Anwendungsbeispiel. Die Dr. Fritz Faulhaber GmbH & Co. 
KG übernimmt insbesondere keine Gewährleistung oder Garantie dafür und steht auch insbesondere nicht dafür ein, dass die in der Application Note illustrierten Abläufe und Funktionen stets wie beschrieben aus- und durchgeführt werden können und dass die in der Application Note beschriebenen Abläufe und Funktionen in anderen Zusammenhängen und Umgebungen ohne zusätzliche Tests oder Modifikationen mit demselben Ergebnis umgesetzt werden können. Der Kunde und ein sonstiger Anwender müssen sich jeweils im Einzelfall vor Vertragsabschluss informieren, ob die Abläufe und Funktionen in ihrem Bereich anwendbar und umsetzbar sind. Keine Haftung. Die Dr. Fritz Faulhaber GmbH & Co. KG weist darauf hin, dass aufgrund der Unverbindlichkeit der Application Note keine Haftung für Schäden übernommen wird, die auf die Application Note und deren Anwendung durch den Kunden oder sonstigen Anwender zurückgehen. Insbesondere können aus dieser Application Note und deren Anwendung keine Ansprüche aufgrund von Verletzungen von Schutzrechten Dritter, aufgrund von Mängeln oder sonstigen Problemen gegenüber der Dr. Fritz Faulhaber GmbH & Co. KG hergeleitet werden. Änderungen der Application Note. Änderungen der Application Note sind vorbehalten. Die jeweils aktuelle Version dieser Application Note erhalten Sie von Dr. Fritz Faulhaber GmbH & Co. KG unter der Telefonnummer +49 7031 638 688 oder per Mail von mcsupport@faulhaber.de. Legal notices Copyrights. All rights reserved. This Application Note and parts thereof may in particular not be copied, reproduced, saved (e.g. in an information system), altered or processed in any way irrespective of the purpose without the express prior written consent of Dr. Fritz Faulhaber & Co. KG. Industrial property rights. In publishing, handing over/dispatching or otherwise making available this Application Note Dr. Fritz Faulhaber & Co. 
KG does not expressly or implicitly grant any rights in industrial property rights nor does it transfer rights of use or other rights in such industrial property rights. This applies in particular to industrial property rights on which the applications and/or functions of this Application Note are directly or indirectly based or with which they are connected. No part of contract; non-binding character of the Application Note. The Application Note is not a constituent part of contracts concluded by Dr. Fritz Faulhaber & Co. KG and the content of the Application Note does not constitute any contractual quality statement for products, unless expressly set out otherwise in the respective contracts. The Application Note is a non-binding description of a possible application. In particular Dr. Fritz Faulhaber & Co. KG does not warrant or guarantee and also makes no representation that the processes and functions illustrated in the Application Note can always be executed and implemented as described and that they can be used in other contexts and environments with the same result without additional tests or modifications. The customer and any user must inform themselves in each case before concluding a contract concerning a product whether the processes and functions are applicable and can be implemented in their scope and environment. No liability. Owing to the non-binding character of the Application Note Dr. Fritz Faulhaber & Co. KG will not accept any liability for losses arising from its application by customers and other users. In particular, this Application Note and its use cannot give rise to any claims based on infringements of industrial property rights of third parties, due to defects or other problems as against Dr. Fritz Faulhaber GmbH & Co. KG. Amendments to the Application Note. Dr. Fritz Faulhaber & Co. KG reserves the right to amend Application Notes. The current version of this Application Note may be obtained from Dr. Fritz Faulhaber & Co. 
KG by calling +49 7031 638 688 or sending an e-mail to mcsupport@faulhaber.de.
Originally published in S. Dustdar, J. L. Fiadeiro, & A. P. Sheth (eds.). Available from: http://dx.doi.org/10.1007/11841760_4

Copyright © Springer-Verlag Berlin Heidelberg 2006. This is the author's version of the work, posted here with the permission of the publisher for your personal use. No further distribution is permitted. You may also be able to access the published version from your library. The definitive version is available at http://www.springerlink.com/.

Tracking over Collaborative Business Processes

Xiaohui Zhao and Chengfei Liu
Centre for Information Technology Research
Faculty of Information and Communication Technologies
Swinburne University of Technology
Melbourne, Victoria, Australia
{xzhao, cliu}@it.swin.edu.au

Abstract. Workflow monitoring is a routine function of a workflow management system for tracking the progress of running workflow instances. To keep participating organisations as autonomous entities in an inter-organisational business collaboration environment, however, it brings challenges in generating workflow tracking structures and manipulating instance correspondences between different participating organisations. Aiming to tackle these problems, this paper proposes a matrix-based framework on the basis of our relative workflow model. This framework enables a participating organisation to derive tracking structures over its relative workflows and the involved relevant workflows of its partner organisations, and to perform workflow tracking with the generated tracking structures.

1 Introduction

With the trend of booming global business collaborations, organisations are required to streamline their business processes into dynamic virtual organisations [1, 2]. A virtual organisation defines the trading community of a set of participating organisations for conducting collaborative business processes. Normally, the building blocks of a collaborative business process are the pre-existing business processes of participating organisations.
Therefore, it is fundamental that a collaborative business process defines how the business processes belonging to different organisations are linked together for cooperation [3, 4]. While this kind of cooperation is a prerequisite, organisations must act as autonomous entities during business collaboration. Besides, certain levels of privacy of participating organisations have to be guaranteed. Many existing inter-organisational workflow approaches streamline the related business processes of different organisations into a public-view workflow process [5-9]. This public view neutralises the diversity of the perception on collaborative business processes from different organisations, and fails to support business privacy sufficiently. We reckon that different organisations may see different pictures of a collaborative business process, and may need to know, and be only allowed to know, certain details of the collaboration with their partner organisations. To support this, we have proposed a new approach for collaborative business process modelling called the relative workflow model [10]. In this model, different visibility constraints, perceptions, and relative workflow processes can be defined for different participating organisations. Most traditional workflow monitoring approaches, such as the WfMC Monitor and Audit specification [11, 12], BEA WebLogic Integration [13], IBM WebSphere MQ Workflow [14], the agent-based workflow monitoring [15] and the customisable workflow monitoring [16], are mainly applicable either in an intra-organisational setting or in an environment where a public view of a collaborative business process is assumed without privacy concerns. To the best of our knowledge, there is little discussion on workflow monitoring in an inter-organisational environment with privacy concerns. This paper aims to fill this gap.
Based on the relative workflow model, the tracking structure for a relative workflow process is defined, and a matrix-based framework is proposed to enable a participating organisation to derive tracking structures over its relative workflow processes and the involved relevant workflow processes of its partner organisations, and to perform tracking based on the generated tracking structures. The remainder of this paper is organised as follows. Section 2 analyses the requirements of workflow tracking in a privacy-sensitive environment with a motivating example. In Section 3, we first review our relative workflow approach, then introduce some representation matrices; after that, we define the tracking structure of a relative workflow process and discuss the fundamental rules for workflow tracking. Based on these rules, several matrix operations are presented in Section 4 for tracking structure generation, together with the algorithms for generating tracking structures and performing tracking. Concluding remarks are given in Section 5.

2 Requirement Analysis with Motivating Example

Basically speaking, current public-view approaches all rely on a single workflow model to support inter-organisational business collaboration. This means that once the workflow model for a collaborative business process is defined, it will be open to all participating organisations. If we follow a public-view approach, a participating organisation may not be able to offer different visibilities to different organisations. As such, different partnerships between different collaborating organisations cannot be achieved. In our opinion, the visibility between participating organisations is inherently relative rather than absolute. Our relative workflow approach [10] was proposed based on this "relative perspective" philosophy.
This approach discards the public view on the inter-organisational workflow process, and allows different organisations to create different views, or relative workflow processes, upon the same collaborative business process. These multiple relative workflow processes enable participating organisations to behave as autonomous entities and enhance the flexibility and privacy control of business collaboration. At the same time, they bring challenges to inter-organisational workflow tracking.

Figure 1 illustrates a business collaboration scenario where a retailer collects orders from customers, and then purchases products from a manufacturer. The manufacturer may contact a shipper for booking product delivery while making goods with supplies from a supplier. In this scenario, a retailer may track the collaborative business process as follows: after placing an order with a manufacturer, the retailer may contact the manufacturer and enquire about the execution status of the production process by referring to, say, the order number. Furthermore, the retailer may also contact the shipper via the manufacturer and enquire about shipping information after the manufacturer organises product shipping for the retailer by a shipper. However, the retailer may not be allowed to enquire about the goods supply information, because that could be confidential information of the manufacturer and is hidden from the retailer. A manufacturer may track the same collaborative business process differently: besides the retailer and the shipper, the manufacturer can also track the supplier for goods supply information. From this scenario, we can see that (1) a participating organisation may require tracking other organisations for its involved part of a collaborative business process; and (2) each participating organisation may track the same collaborative business process differently.
The first point requires collaboration between participating organisations, which is fundamental to inter-organisational workflow tracking. The second point, however, requires that a participating organisation is treated as a fully autonomous entity and can provide different visibilities to different organisations. Obviously, the public-view approaches cannot meet the second requirement. Our relative workflow approach can meet both requirements, as we can see from the following sections.

3 Relative Workflows and Tracking Structures

3.1 Relative Workflow Model

In this section, we briefly review the relative workflow model. Figure 2 shows the relative workflow meta model, which has been proposed in [10]. In this model, an organisation, say $g_1$, is considered as an entity holding its own workflow processes, called local workflow processes. A local workflow process, $lp^1$, of organisation $g_1$ can be denoted as $g_1.lp^1$. As the owner, an organisation naturally has an absolute view of its local workflow processes. On the contrary, the host organisation (the owner organisation) may only allow a restricted view of its local workflow processes to its partner organisations due to privacy concerns. This restriction mechanism may hide some confidential workflow tasks and related links, or set some tasks to be only observable rather than interactable to some partner organisations, according to the partnership. The degree of task visibility is defined by visibility constraints, which currently contain three values, viz. "invisible", "trackable" and "contactable", as shown in Table 1.

**Table 1.
Visibility values**

<table> <thead> <tr> <th>Visibility value</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>Invisible</td> <td>A task is said invisible to an organisation, if it is hidden from the organisation.</td> </tr> <tr> <td>Trackable</td> <td>A task is said trackable to an organisation, if this organisation is allowed to trace the execution status of the task.</td> </tr> <tr> <td>Contactable</td> <td>A task is said contactable to an organisation, if the task is trackable to the organisation and the task is also allowed to send/receive messages to/from this organisation for the purpose of business interaction.</td> </tr> </tbody> </table>

Visibility constraints are used as a component in defining perceptions. A perception $p_{g_1}^{g_2.lp}$ defines how organisation $g_1$ sees $g_2$'s local workflow process $g_2.lp$. In the motivating example, the manufacturer may set up a set of visibility constraints, $\mathcal{V}C$, in its perception $p_{Retailer}^{Manufacturer:Production}$. These visibility constraints allow the retailer a partial view of the manufacturer's production process. This partial view is called a perceivable workflow process. The perceivable workflow process of $g_2$'s local workflow process $g_2.lp$ defined for organisation $g_1$ is denoted as $g_2.lp^{g_1}$. To represent the diverse partnerships, an organisation may generate a series of perceivable workflow processes of the same local workflow process for different partner organisations. The inter-organisational business interactions are characterised as directed inter process links, such as $l_{ab1}$ and $l_{bc2}$ in Figure 1. In our relative workflow meta model, these inter process links are defined as message descriptions before being linked, and messaging links after being linked, as shown in Figure 2.
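As an illustrative sketch (not part of the paper), the three visibility values of Table 1 and a perception's visibility-constraint set can be modelled directly; the task names below are hypothetical placeholders, not taken from the paper's example:

```python
from enum import Enum

# The three visibility values of Table 1.
class Visibility(Enum):
    INVISIBLE = 0    # task is hidden from the partner organisation
    TRACKABLE = 1    # partner may trace the task's execution status
    CONTACTABLE = 2  # trackable, and may also exchange business messages

# A perception's visibility-constraint set maps each task of a local
# workflow process to a visibility value (task names are invented here).
vc_production_for_retailer = {
    "collect order": Visibility.CONTACTABLE,
    "schedule production": Visibility.INVISIBLE,
    "assemble product": Visibility.TRACKABLE,
    "book shipping": Visibility.CONTACTABLE,
}

def visible_tasks(vc):
    """Tasks the partner can see, i.e. trackable or contactable ones."""
    return [t for t, v in vc.items() if v is not Visibility.INVISIBLE]

# The invisible task is excluded from the partner's partial view.
assert "schedule production" not in visible_tasks(vc_production_for_retailer)
```

Hiding the invisible tasks from such a set is exactly what produces the perceivable workflow process described above.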
Finally, a relative workflow process can be created by combining the messaging links which connect "contactable" tasks of neighbouring organisations. As shown in Figure 2, a relative workflow process consists of three parts, viz. local workflow processes, perceivable workflow processes and relevant messaging links. Such a relative workflow process represents the collaborative business process perceivable from an organisation. For example, we suppose that the involved organisations in the motivating example set up the following visibility constraints in the proper perceptions, together with the perception $p_{Retailer}^{Manufacturer:Production}$ given before.

\[
\begin{align*}
\mathcal{V}_\text{Retailer:ProductOrdering}^M & = \{(\text{"raise order"}, \text{Invisible}), (\text{"place order with manufacturer"}, \text{Contactable}), (\text{"invoice customer"}, \text{Contactable}), (\text{"pay invoice"}, \text{Contactable})\}; \\
\mathcal{V}_\text{Shipper:Shipping}^M & = \{(\text{"collect order"}, \text{Contactable}), (\text{"preparation"}, \text{Invisible}), (\text{"delivery"}, \text{Trackable}), (\text{"confirm delivery"}, \text{Contactable})\}; \\
\mathcal{V}_\text{Shipper:Shipping}^R & = \{(\text{"collect order"}, \text{Invisible}), (\text{"preparation"}, \text{Trackable}), (\text{"delivery"}, \text{Trackable}), (\text{"confirm delivery"}, \text{Trackable})\}; \\
\mathcal{V}_\text{Supplier:Supplying}^M & = \{(\text{"collect order"}, \text{Contactable}), (\text{"preparation"}, \text{Invisible}), (\text{"delivery"}, \text{Contactable})\}.
\end{align*}
\]

Since the retailer and the supplier have no partner relationship in the collaborative business process, they do not define perceptions for each other. According to these visibility constraints, the retailer and the manufacturer may generate the corresponding relative workflow processes, as shown in Figure 3 (a) and (b), respectively. The tasks with dashed circles denote the invisible tasks.
These two diagrams clearly illustrate that the relative workflow processes for the same collaborative business process may be different from different organisations' perspectives. This reflects the relativity characteristic of our relative workflow approach.

3.2 Representation Matrices

To accurately depict the proposed relative workflow model, we establish several matrices to formally represent the key concepts of the relative workflow model.

Self Adjacency Matrix

An n-task workflow process $p$ of organisation $g$ is represented by a special matrix, called the Self Adjacency Matrix (SAM), which is defined as

$$D^p_{SAM} = [d_{ij}], \text{ where } d_{ij} = \begin{cases} r, & \text{if there exists a link } r \text{ linking task } t_i \text{ and task } t_j, \text{ where } i < j; \\ 0, & \text{otherwise.} \end{cases}$$

Each non-zero element of an SAM denotes an intra process link between tasks, such as $r_{a1}$ and $r_{b2}$ in Figure 1. As a link connecting tasks $t_i$ and $t_j$ is put in $d_{ij}$, not $d_{ji}$, where $i < j$, $D^p_{SAM}$ is always an upper triangular matrix. For example, process $a$ in Figure 1 can be represented by the SAM $D^a_{SAM} = \begin{bmatrix} 0 & r_{a1} & 0 & 0 \\ 0 & 0 & r_{a2} & 0 \\ 0 & 0 & 0 & r_{a3} \\ 0 & 0 & 0 & 0 \end{bmatrix}$. The SAMs $D^b_{SAM}$ and $D^c_{SAM}$ of processes $b$ and $c$ are built in the same way from their intra process links. A self adjacency matrix can be used to represent not only a local workflow process but also a perceivable workflow process, a relative workflow process, or a tracking structure, which will be introduced later.

Transformation Matrix

When composing a local workflow process $p$ into a perceivable workflow process for organisation $g$, the composition is subject to the visibility constraints defined in the proper perceptions. The details of this composition can be found in [10].
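To make the SAM encoding concrete, here is a small Python sketch (an illustration, not from the paper) that builds the SAM of process $a$ from Figure 1 and checks the upper-triangular property required by the definition; the helper names are invented:

```python
# Sketch: encode the Self Adjacency Matrix (SAM) of process "a".
# A link name in cell (i, j) with i < j denotes an intra-process link
# between tasks t_i and t_j; 0 means no link.

def make_sam(n, links):
    """Build an n x n SAM from {(i, j): link_name} with i < j (0-based)."""
    sam = [[0] * n for _ in range(n)]
    for (i, j), name in links.items():
        assert i < j, "a SAM stores the link between t_i and t_j only at i < j"
        sam[i][j] = name
    return sam

def is_upper_triangular(m):
    """True if the diagonal and everything below it are zero."""
    return all(m[i][j] == 0 for i in range(len(m)) for j in range(i + 1))

# Process "a" has four tasks chained by links r_a1, r_a2, r_a3.
sam_a = make_sam(4, {(0, 1): "r_a1", (1, 2): "r_a2", (2, 3): "r_a3"})

assert is_upper_triangular(sam_a)
assert sam_a[0][1] == "r_a1"
```

The same constructor works unchanged for perceivable and relative workflow processes, since they are all represented by SAMs.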
In this paper, we formalise the composition process as an $n \times n$ triangular 0-1 matrix, called the Transformation Matrix (TM), which is defined as

$$T^g = [t_{ij}], \text{ where } t_{ij} = \begin{cases} 1, & \text{if task } t_j \text{ is composed into task } t_i \ (j \neq i), \text{ or not composed } (j = i); \\ 0, & \text{otherwise.} \end{cases}$$

This matrix can be directly derived from the visibility constraints defined in the corresponding perception, following the task composition algorithm discussed in [10]. Notice that each column has exactly one element with value "1", because each task can be composed only once or may not be composed at all. For example, the procedure of composing local workflow process $b$ into a perceivable workflow process for organisation $A$ can be described by $T_A^b = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$. This composing procedure is governed by the visibility constraints defined in perception $p_A^{B.b}$. Likewise, the transformation matrix $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ can be calculated for another perceivable workflow process in the example.

**Boundary Adjacency Matrix**

Finally, we also represent the relevant messaging links in a matrix. The messaging links between two workflow processes, $p_1$ and $p_2$, from the perspective of organisation $g$, can be represented by an $m \times n$ matrix called the boundary adjacency matrix (BAM), where $m$ is the number of tasks belonging to $p_1$, and $n$ is the number of tasks belonging to $p_2$.
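The column constraint on a transformation matrix, and the task composition it encodes, can likewise be sketched in Python (an illustration, not from the paper); the 5×5 matrix below is the example given above for composing local workflow process b, and the helper names are invented:

```python
# Sketch: validate a Transformation Matrix (TM) and read off the task
# composition it encodes. Per the definition, tm[i][j] == 1 means task t_j
# is composed into task t_i (or kept as-is when i == j); every column must
# contain exactly one 1, since each task is composed at most once.

def is_valid_tm(tm):
    """Check the exactly-one-1-per-column property of a TM."""
    n = len(tm)
    return all(sum(tm[i][j] for i in range(n)) == 1 for j in range(n))

def composition_map(tm):
    """Return {original task j: surviving task i it is composed into}."""
    n = len(tm)
    return {j: next(i for i in range(n) if tm[i][j] == 1) for j in range(n)}

# The 5x5 example TM from the text: task t_1 is hidden by composing it
# into t_0, while the remaining tasks are kept unchanged.
tm = [[1, 1, 0, 0, 0],
      [0, 0, 0, 0, 0],
      [0, 0, 1, 0, 0],
      [0, 0, 0, 1, 0],
      [0, 0, 0, 0, 1]]

assert is_valid_tm(tm)
assert composition_map(tm)[1] == 0  # t_1 disappears into t_0
```

A matrix violating the column constraint (a task composed twice, or dropped entirely) would be rejected by `is_valid_tm`.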
A BAM is defined as follows:
$$B_{g}^{p_1/p_2} = [b_{ij}], \text{ where } b_{ij} = \begin{cases} l, & \text{if there exists a messaging link } l \text{ connecting } p_1.t_i \text{ and } p_2.t_j; \\ 0, & \text{otherwise.} \end{cases}$$
For example, the interaction relationship between local workflow process $b$ and perceivable workflow process $c$ at the site of organisation $B$ can be represented by the BAM $B_{B}^{b/c}$; BAMs for the other interacting process pairs in Figure 1 can be built in the same way.

### 3.3 Tracking Structure

From the discussion in the motivating example section, we see that an organisation's tracking structure is its observable view upon the execution progress of one collaborative business process. Technically, a tracking structure is different from a relative workflow process: the latter is created by messaging links connecting to contactable tasks of neighbouring organisations, while the former may go beyond neighbouring organisations through trackable tasks. Unlike the "contactable" visibility value defined in Table 1, the "trackable" value is designed for tracking purposes and can be set on the tasks of the workflow processes belonging to non-neighbouring organisations. We define a tracking structure for each relative workflow process, and this tracking structure can be defined by including trackable tasks from its non-neighbouring organisations.

**Tracking Structure:** A tracking structure $ts$ for organisation $g$'s relative workflow process $rp$ consists of the following tasks and links.

- The tasks include: (i) the tasks of relative workflow process $rp$; (ii) the union of the task sets of the perceivable workflow processes that are reachable from $g$. These perceivable workflow processes may belong to $g$'s neighbouring and non-neighbouring organisations.
The reachability of a perceivable workflow process from an organisation is discussed in the next subsection.

- The links include: (i) the links of relative workflow process $rp$; (ii) the union of the link sets of the perceivable workflow processes that are reachable from $g$; (iii) the set of messaging links between perceivable workflow processes that are visible from $g$. The visibility of a messaging link from an organisation is discussed in the next subsection.

### 3.4 Rules

Following the definition of a tracking structure, we first need to define the visibility of a messaging link and the reachability of a perceivable workflow process from an organisation. Both depend on the visibility of tasks. For this purpose, we establish the following rules, which are used to generate a perceivable workflow process and to determine whether a perceivable workflow process is reachable via visible messaging links and can therefore be included in the tracking structure.

**Intra Process Visibility Rule:** If a task $t$ in organisation $g_1$'s local workflow process $g_1.lp$ is set invisible to organisation $g_2$, then $t$ is hidden by composing it into a visible (contactable or trackable) task of $g_1.lp$. The links connecting $t$ are changed accordingly. The composition procedure is discussed as the composition operation in the next section. After composition, $g_1.lp$ becomes a perceivable workflow process.

**Inter Process Visibility Rule:** A messaging link $l$ connecting two perceivable workflow processes is said to be visible to organisation $g$ if and only if both tasks connected by $l$ are visible to $g$.

**Expansion Rule:** Let $ts$ be the tracking structure for a relative workflow process of organisation $g$.
A perceivable workflow process outside $ts$ is said to be reachable, and can therefore be included into $ts$, if and only if it has at least one visible messaging link connecting a task inside $ts$.

Following the Intra Process Visibility Rule, the original link $r_{b_1}$ connecting tasks $b_1$ and $b_2$ of process $b$ in Figure 1 becomes invisible in its perceivable form for organisation $A$ in Figure 3 (c), because $b_2$ is invisible to organisation $A$. Correspondingly, links $r_{b_2}$ and $r_{b_3}$, which connect $b_2$ and $b_3$, and $b_2$ and $b_4$ in Figure 1 respectively, are now changed to connect $b_1$ and $b_3$, and $b_1$ and $b_4$, in Figure 3 (c). Following the Inter Process Visibility Rule, the messaging link connecting task $b_4$ and task $c_1$ is not visible, while the messaging link connecting task $b_3$ and task $c_2$ is visible in Figure 1. Following the Expansion Rule, the perceivable workflow process of process $c$ is reachable because of the existence of this visible messaging link. By applying all these rules, we can finally generate the tracking structure shown in Figure 3 (c) for $A$'s relative workflow process shown in Figure 3 (a).

### 4 Generating Tracking Structures

#### 4.1 Operations

According to the rules discussed in the last section, we define three matrix operations for tracking structure derivation.

**Operation 1. Composition Operation**

As defined in the TM for a local workflow process, each element with value "1" in a non-diagonal position $(i, j)$ stands for a procedure of composing the composed task $t_j$ into the composing task $t_i$. Under the restriction of the Intra Process Visibility Rule, the following sub rules may apply to this composition:

1. A link connecting $t_j$ and $t_k$ $(k \neq i)$ is changed to a link connecting $t_i$ and $t_k$;
2.
A link connecting $t_i$ and $t_k$ $(k \neq j)$ is unchanged;
3. A link connecting $t_i$ and $t_j$ is discarded.

The first sub rule requires an operation that can be applied to the SAM defined for the local workflow process. This operation first adds the elements in row $j$ to their corresponding elements in row $i$, and then sets all elements in row $j$ to zero. This can be achieved by applying a matrix multiplication to this TM and the SAM defined for the local workflow process. A function $f_{\text{reshape}}$ is used to reshape the result matrix into an upper-triangular form. For an input matrix $M_{n \times n}$, function $f_{\text{reshape}}$ is defined as
$$f_{\text{reshape}}(M_{n \times n}) = [m'_{ij}], \text{ where } m'_{ij} = \begin{cases} m_{ij} + m_{ji}, & \text{if } i < j; \\ 0, & \text{otherwise.} \end{cases}$$
The second sub rule identifies the case that needs no action. From the definition of a TM, we can see that the composing tasks of this case all have value "1" on the diagonal, which takes no effect in the matrix multiplication. Regarding the third sub rule, we need to check whether there exists a link connecting $t_i$ and $t_j$ that must be discarded. This can easily be achieved by checking whether there exists a row of the TM that has value "1" at both column $i$ and column $j$. We can represent the existence of such a link by a boolean expression, i.e. $f_{\text{row}}(i) = f_{\text{row}}(j)$, where $f_{\text{row}}(x)$ denotes a function that returns the row in which column $x$ has the value "1". Finally, these three sub rules can be merged together into an operation $\otimes$, which is defined as
$$T_{n \times n} \otimes D_{n \times n} = [e_{ij}], \text{ where } e_{ij} = [f_{\text{row}}(i) \neq f_{\text{row}}(j)] \cdot \sum_{k=1}^{n} t_{ik} d_{kj}.$$
Hence, organisation $g_1$ may apply a Composition Operation on a local workflow process $p$ to generate a perceivable workflow process for $g_2$.
This can be defined as
$$D^p_{g_2} = f_{\text{reshape}}(T^p_{g_2} \otimes D^p_{g_1})$$
Here $D^p_{g_1}$ and $T^p_{g_2}$ are the SAM of $g_1$'s local workflow process $p$ and the TM derived from the corresponding perception, respectively. By applying this composition operation, organisations $B$ and $C$ can generate perceivable workflow processes $b$ and $c$ for organisation $A$, in the form of $D^b_A = f_{\text{reshape}}(T^b_A \otimes D^b_B)$ and $D^c_A = f_{\text{reshape}}(T^c_A \otimes D^c_C)$, respectively.

**Operation 2. Connection Operation**

According to the Inter Process Visibility Rule, we need to identify the visible messaging links between perceivable workflow processes, in order to include perceivable workflow processes of non-neighbouring organisations in the tracking structure of an organisation. For this purpose, we identify the visible tasks by simply checking the elements valued "1" in the diagonal positions of the corresponding TM. We use a function $f_{\text{diag}}$ to diagonalise a TM $T$ into a diagonal matrix $T^\circ$. Function $f_{\text{diag}}$ is defined as follows:
$$f_{\text{diag}}(T_{n \times n}) = T^\circ_{n \times n}, \text{ where } t^\circ_{ij} = \begin{cases} 1, & \text{if } t_{ij} = 1 \text{ and } i = j; \\ 0, & \text{otherwise.} \end{cases}$$
The visible messaging links between two workflow processes, for example, $g_1$'s $p_1$ and $g_2$'s $p_2$, from the perspective of another organisation, say $g_3$, can be represented as the BAM $B^{g_1.p_1/g_2.p_2}_{g_3}$.
The Connection Operation connecting $g_1.p_1$ and $g_2.p_2$ for $g_3$ can be defined as
$$B^{g_1.p_1/g_2.p_2}_{g_3} = \left( f_{\text{diag}}(T^{g_2.p_2}_{g_2}) \cdot \left( f_{\text{diag}}(T^{g_1.p_1}_{g_1}) \cdot B^{g_1.p_1/g_2.p_2}_{g_3} \right)^T \right)^T$$
This connection operation first requires $g_1$ to diagonalise the TM $T^{g_1.p_1}_{g_1}$, and then perform a matrix multiplication on the diagonalised $T^{g_1.p_1}_{g_1}$ and the BAM $B^{g_1.p_1/g_2.p_2}_{g_3}$. $g_2$ subsequently uses the diagonalised matrix $T^{g_2.p_2}_{g_2}$ to multiply the result matrix from $g_1$. In the connection operation, proper transposition operations are needed to align the columns of the left-hand matrix with the rows of the right-hand matrix for matrix multiplication. Regarding the motivating example given in Section 2, organisations $B$ and $C$ can generate the matrix $B^{b/c}_{A}$ for organisation $A$, to provide the visible messaging links between $B$'s process $b$ and $C$'s process $c$ in $A$'s view:
$$B^{b/c}_{A} = \left( f_{\text{diag}}(T^{c}_{C}) \cdot \left( f_{\text{diag}}(T^{b}_{B}) \cdot B^{b/c}_{A} \right)^T \right)^T$$

**Operation 3. Extension Operation**

The Expansion Rule is used for extending the tracking structure to include perceivable workflow processes of both neighbouring and non-neighbouring organisations. Technically, an extension step can be represented as an Extension Operation. With a local workflow process $p_1$ in the tracking structure, organisation $g_1$ may apply the extension operation to include a local workflow process $p_2$ of organisation $g_2$ in the tracking structure.
This can be defined as
$$D^{g_1.p_1/g_2.p_2}_{g_1} = \begin{bmatrix} D^{g_1.p_1}_{g_1} & B^{g_1.p_1/g_2.p_2}_{g_1} \\ 0 & D^{g_2.p_2}_{g_1} \end{bmatrix}$$
For example, the tracking structure containing processes $a$ and $b$ from the view of organisation $A$ can be described by the composite SAM $D_{A}^{a/b} = \begin{bmatrix} D_{A}^{a} & B_{A}^{a/b} \\ 0 & D_{A}^{b} \end{bmatrix}$, which is obtainable through this extension operation.

#### 4.2 Generation Algorithm

The tracking structure generation can technically be considered as a process of appending a newly generated column each time a reachable workflow process is detected. This newly generated column consists of a new SAM and a series of BAMs. The new SAM describes the inner structure of the detected workflow process, while the BAMs describe the interaction relationships between the detected workflow process and the processes already included in the structure. As shown in Figure 4, at the starting point the tracking structure contains only $D^{g_1.p_1}_{g_1}$, which means that only $g_1.p_1$ is included. Afterwards, $g_1$ detects that perceivable workflow process $g_2.p_2$ is reachable from $g_1.p_1$, and then appends a column containing $B^{g_1.p_1/g_2.p_2}$ and $D^{g_2.p_2}$ to the tracking structure. Likewise, organisation $g_2$ may append a column containing $B^{g_1.p_1/g_3.p_3}$, $B^{g_2.p_2/g_3.p_3}$, and $D^{g_3.p_3}$, when $g_2$ detects that process $g_3.p_3$ is reachable from $g_1.p_1$ via $g_2.p_2$. This appending process continues until all reachable perceivable workflow processes are detected. Because the inter-process interaction relationships can only be identified by the organisation (the context organisation) that owns the "bridging" workflow processes by which the expansion proceeds, a propagation mechanism is adopted to spread this detection process over all involved organisations.
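The column-appending step performed by the Extension Operation can be sketched in Python; the function name and the list-of-lists matrix representation are illustrative assumptions, not taken from the paper's implementation:

```python
# Hypothetical sketch of the Extension Operation: place the SAM of the newly
# included process on the diagonal and the connecting BAM in the new column,
# keeping the composite matrix block upper-triangular.

def extend(sam_ts, bam, sam_new):
    """Build the composite SAM  [[sam_ts, bam], [0, sam_new]]."""
    m = len(sam_ts)   # tasks already in the tracking structure
    n = len(sam_new)  # tasks of the newly included process
    size = m + n
    out = [[0] * size for _ in range(size)]
    for i in range(m):
        for j in range(m):
            out[i][j] = sam_ts[i][j]           # existing structure
        for j in range(n):
            out[i][m + j] = bam[i][j]          # messaging links to the new process
    for i in range(n):
        for j in range(n):
            out[m + i][m + j] = sam_new[i][j]  # inner structure of the new process
    return out
```

Repeatedly applying `extend` as reachable processes are detected yields the growing block upper-triangular tracking structure described above.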
The context organisation for an appending step may change from time to time. Organisation $g_1$ is called the original context organisation of this tracking structure. We note that the process shown in Figure 4 starts from $g_1$'s local workflow process $g_1.p_1$ instead of $g_1$'s relative workflow process $g_1.rp$. Actually, $g_1.rp$ can be generated by the first step of the process, when $g_1$ is the context organisation. Algorithm 1 details the generation procedure.

In Algorithm 1, function $\text{relatedProc}(p)$ returns the set of local workflow processes and perceivable workflow processes that have direct interactions with process $p$. Function $\text{includedProc}(\text{trackStruc})$ returns all workflow processes included so far in tracking structure $\text{trackStruc}$, which initially contains an SAM defined on a local workflow process of the original context organisation. Function $\text{BAM}(p_1, p_2, g)$ returns the BAM between processes $p_1$ and $p_2$ from the view of organisation $g$, using the connection operation. Function $\text{SAM}(p, g)$ returns the SAM of process $p$ from the view of organisation $g$, using the composition operation. Function $\text{genOrg}(p)$ returns the organisation of process $p$.

Algorithm 1.
**genTrackStruc** - Tracking Structure Generation

Input:
- trackStruc - a tracking structure matrix
- cxtProc - a local workflow process of the context organisation
- origCxtOrg - the original context organisation that starts the generation

Output:
- trackStruc - the expanded tracking structure matrix

Step 1. Detect workflow processes

    detectedProcSet = relatedProc( cxtProc );
    includedProcSet = includedProc( trackStruc );
    detectedProcSet = detectedProcSet - includedProcSet;

Step 2. Expand the tracking structure

    appendedProcSet = ∅;
    for each process p_i ∈ detectedProcSet
        tempB = BAM( cxtProc, p_i, origCxtOrg );
        if tempB is a non-zero matrix then
            newColumn = ∅;
            for each process p_j ∈ includedProcSet
                B = BAM( p_j, p_i, origCxtOrg );
                append B to newColumn;
            end for
            /* generate the boundary adjacency matrices of the new column */
            D = SAM( p_i, origCxtOrg );
            /* generate the self adjacency matrix of the new column */
            append newColumn and D to trackStruc, using the extension operation;
            includedProcSet = includedProcSet ∪ { p_i };
            appendedProcSet = appendedProcSet ∪ { p_i };
        end if
    end for

Step 3. Propagate the detection process

    for each process p_i ∈ appendedProcSet
        targetOrg = genOrg( p_i );
        /* ask targetOrg to call genTrackStruc */
        trackStruc = targetOrg.genTrackStruc( trackStruc, p_i, origCxtOrg );
    end for

Step 4. Return the expanded tracking structure

    return trackStruc;

The tracking structure generation process starts from a local workflow process of the original context organisation, and then spreads to all reachable workflow processes of the involved organisations. When this generation process comes to an organisation, that organisation becomes the context organisation of the above algorithm. For example, if we start from the retailer's product ordering process, i.e., process $a$ in the motivating example, this algorithm first detects the workflow processes having direct interactions with process $a$. Then it checks, for each detected workflow process, whether it is reachable from organisation $A$, and if so, the detected process is included in the tracking structure. In this step, organisation $B$'s process $b$ is included, and the tracking structure is expanded to $D_{A}^{a/b} = \begin{bmatrix} D_{A}^{a} & B_{A}^{a/b} \\ 0 & D_{A}^{b} \end{bmatrix}$. After that, the generation process is propagated to $B$, and $B$ repeats the above steps to extend the tracking structure. At this stage, $B$ may find process $c$ and process $d$, while only process $c$ is included. This is because the retailer and the supplier do not set up perceptions for each other in this example, and hence no transformation matrix is defined for process $d$ for $A$. Therefore, the tracking structure is finally expanded to $D_{A}^{a/b/c} = \begin{bmatrix} D_{A}^{a} & B_{A}^{a/b} & B_{A}^{a/c} \\ 0 & D_{A}^{b} & B_{A}^{b/c} \\ 0 & 0 & D_{A}^{c} \end{bmatrix}$, which equals the diagram shown in Figure 3 (c). Here, $B_{A}^{a/c}$ is a zero matrix because there are no direct interactions between processes $a$ and $c$; the other sub-matrices can be found in the former part of this paper.
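To make the Composition Operation of Section 4.1 concrete, the following minimal Python sketch implements $f_{\text{row}}$, the $\otimes$ product, and $f_{\text{reshape}}$. It assumes numeric 0/1 adjacency entries (link labels reduced to 1); the function names are illustrative, not from the paper:

```python
def f_row(T, x):
    """Row index where column x of the TM holds the value 1."""
    return next(i for i, row in enumerate(T) if row[x] == 1)

def compose(T, D):
    """T (x) D followed by f_reshape: derive the SAM of the perceivable process."""
    n = len(D)
    # T (x) D: an ordinary matrix product, except that entries whose two
    # endpoint tasks are composed into the same task (f_row(i) == f_row(j))
    # are discarded, per the third sub rule.
    M = [[(f_row(T, i) != f_row(T, j)) * sum(T[i][k] * D[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    # f_reshape: fold the product back into upper-triangular form.
    return [[M[i][j] + M[j][i] if i < j else 0 for j in range(n)] for i in range(n)]
```

For a three-task chain $t_1 \to t_2 \to t_3$ where $t_2$ is composed into $t_1$, the result contains a single link from $t_1$ to $t_3$, matching the first sub rule's link redirection.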
### 4.3 Performing Workflow Tracking

In an inter-organisational workflow environment, there is another issue, i.e., how to keep the correct correspondence between collaborating local workflow instances. From the semantics of a collaborative business process, we can find the cardinality relationship between collaborating local processes; e.g., more than one instance of process $a$ may be associated with a single instance of process $b$ for the purpose of batch production. While this kind of cardinality relationship can be determined at build time, the correlation between the particular instances of these processes has to be determined at run time, when they "shake hands". To perform workflow tracking, we design a data structure, as shown in Figure 5, to keep the necessary information for tracking. This data structure consists of a series of lists, each of which represents the set of instances belonging to a specific local workflow process. Each element of a list has several units to record the workflow execution status. The links connecting elements represent the correspondence between instances of different workflow processes. The tracking process is similar to a graph traversal process, where the nodes represent the related workflow instances and the arcs represent their messaging links. Algorithm 2.
**trackProc** - Tracking Process

Input:
- trackStruc - the tracking structure to conduct the tracking
- origInstance - an instance of the original context organisation's initial local workflow process defined in trackStruc
- DS - the tracking data structure

Output:
- DS - the updated tracking data structure

Step 1. Initialisation

    trackInstanceSet = ∅;
    stack s = new stack();
    s.push( origInstance );

Step 2. Discover the participating workflow instances

    while s is not empty do
        cxtInstance = s.pop();
        foundInstanceSet = linkedInstances( cxtInstance, trackStruc ) - trackInstanceSet;
        for each i ∈ foundInstanceSet
            s.push( i );
        end for
        cxtProc = genProc( cxtInstance );
        BAMset = relatedBAMs( cxtProc, trackStruc );
        for each link l of each boundary adjacency matrix B ∈ BAMset
            /* discover workflow instances by following each visible messaging link */
            partnProc = partnerProc( B, cxtProc );
            partnOrg = genOrg( partnProc );
            if cxtInstance is newly fired then
                newInstanceSet = ∅;
                ask partnOrg to check any new participating instances of partnProc,
                    and set the instances to newInstanceSet;
                newInstanceSet = newInstanceSet - trackInstanceSet;
                /* filter the previously discovered instances */
                for each i ∈ newInstanceSet
                    addInstance( partnProc, i );
                    addLink( cxtInstance, i );
                    /* update the tracking data structure */
                    s.push( i );
                    /* and add the newly discovered instance to the stack */
                end for
            end if
        end for
        trackInstanceSet = trackInstanceSet ∪ { cxtInstance };
        /* the set of instances to track */
    end while

Step 3. Update the execution status of participating workflow instances

    for each instance i ∈ trackInstanceSet
        p = genProc( i );
        targetOrg = genOrg( p );
        enquire targetOrg for the execution status of i, and then update the status of i in DS;
    end for

Participating workflow instances are thus discovered and added to the set of instances to be tracked; in addition, new participating workflow instances are identified at the time when visible messaging links are fired. Details can be found in Algorithm 2. In this algorithm, function addInstance($p, i$) inserts instance $i$ into the list of process $p$ in the tracking data structure. Function addLink($i_1, i_2$) creates a link between instances $i_1$ and $i_2$ in the tracking data structure. Function linkedInstances($i$, trackStruc) returns the instances linked to instance $i$ in the tracking data structure, according to the tracking structure trackStruc. Function relatedBAMs($p$, trackStruc) returns the set of BAMs related to process $p$, defined in trackStruc. Function partnerProc($B, p$) returns the partner process of $p$ defined in BAM $B$. Function genOrg($p$) returns the organisation of process $p$. Function genProc($i$) returns the process of instance $i$.

This algorithm starts from a local workflow instance of the original context organisation. Following the corresponding tracking structure, the algorithm searches along visible messaging links and propagates the execution status queries to all reachable workflow instances, with the cooperation of the participating organisations.
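As a rough illustration of the tracking data structure of Figure 5, the following Python sketch keeps one instance list per process and a set of correspondence links between instances; all class and method names here are hypothetical, chosen to mirror the helper functions of Algorithm 2:

```python
# Hypothetical sketch of the Figure 5 tracking data structure: per-process
# instance lists plus links recording run-time instance correspondence.

class TrackingDS:
    def __init__(self):
        self.instances = {}  # process id -> list of [instance id, status]
        self.links = set()   # unordered pairs of corresponding instance ids

    def add_instance(self, proc, inst, status="running"):
        self.instances.setdefault(proc, []).append([inst, status])

    def add_link(self, i1, i2):
        self.links.add(frozenset((i1, i2)))

    def linked_instances(self, inst):
        """Instances corresponding to inst, found by following its links."""
        return [j for pair in self.links if inst in pair
                for j in pair if j != inst]

    def update_status(self, proc, inst, status):
        """Record the execution status reported by the owning organisation."""
        for entry in self.instances.get(proc, []):
            if entry[0] == inst:
                entry[1] = status
```

Step 2 of the algorithm would populate such a structure via `add_instance`/`add_link` as links fire, and Step 3 would call `update_status` with each owning organisation's reply.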
The corresponding tracking structure records the interaction relationships between the processes of these reachable workflow instances. When an inter-organisational interaction is fired, the algorithm checks whether any new workflow instances have joined the business collaboration. If so, the algorithm adds these workflow instances to the tracking data structure.

### 5 Conclusion

This paper contributes to the study of workflow tracking across organisational boundaries. Compared with other workflow tracking solutions, the approach proposed in this paper not only enables an organisation to track other organisations for its involved parts of collaborative business processes, but also allows different organisations to track the same collaborative business process differently. In this paper, we deployed a matrix-based framework which includes three representation matrices and three matrix operations. Algorithms using these matrices and operations for generating tracking structures and performing workflow tracking were developed. The framework allows an organisation to generate its own tracking structure based on its visibility of other organisations, and thus privacy can be protected. The framework also allows a tracking structure to be generated on the fly, thus enabling flexibility in workflow tracking. Based on its own tracking structure, an organisation can proactively trace the execution progress of its involved part of a collaborative business process.

### Acknowledgements

The work reported in this paper is partly supported by the Australian Research Council discovery project DP0557572.

### References
UMA Release Notes

Abstract

This document contains non-normative release notes produced by the User-Managed Access Work Group explaining how new versions of the UMA specifications differ from previous ones.

Status

This document includes release notes for all versions of UMA.

Editor

• Eve Maler

Intellectual Property Notice

The User-Managed Access Work Group operates under Kantara IPR Policy - Option Patent & Copyright: Reciprocal Royalty Free with Opt-Out to Reasonable And Non discriminatory (RAND) (HTML version) and the publication of this document is governed by the policies outlined in this option. The content of this document is copyright of Kantara Initiative. © 2018 Kantara Initiative

Table of Contents

• 1 Abstract
  • 1.1 Table of Contents
• 2 Introduction
• 3 From UMA1 to UMA V2.0
  • 3.1 Version Themes
  • 3.2 Specification Reorganization and Conformance Levels
  • 3.3 Terminology Changes
  • 3.4 API and Endpoint Changes
  • 3.5 Authorization Server Discovery Document and Metadata Changes
    • 3.5.1 Discovery Document and Metadata Simplification
    • 3.5.2 Definition of OAuth Dynamic Client Registration Metadata Field
    • 3.5.3 permissions Claim and Sub-Claims in Token Introspection Object Not Requested to Be IANA-Registered as JWT Claims
  • 3.6 Changes to AS-Client, RS-Client, and AS-Requesting Party Interfaces (Now UMA Grant Specification)
    • 3.6.1 Authorization Server Rotates Permission Ticket
    • 3.6.2 Token Endpoint Replaces RPT Endpoint; Client-Side Communications Defined as Extension Grant
    • 3.6.3 AAT Removed in Favor of PCT
    • 3.6.4 Deprecated Response-Body Permission Ticket Return Option By RS Removed
    • 3.6.5 Permission Ticket Return By AS With Redirect-User Hint No Longer Deprecated
    • 3.6.6 More Discretionary Permission Requests
    • 3.6.7 need_info Response Structure Flattened
    • 3.6.8 not_authorized Error Renamed to request_denied
      • 3.6.8.1 Added interval parameter to request_submitted Error
    • 3.6.9 New Refresh Token Clarity
    • 3.6.10 Authorization Assessment Gains Precision
    • 3.6.11 Permission Ticket Ecosystem Rationalized
    • 3.6.12 Only One Pushed Claim Token Now Allowed at a Time
    • 3.6.13 RPT Upgrading Logic Improved
    • 3.6.14 Token Revocation Clarifications
    • 3.6.15 Refresh Token Grant and Downscoping Logic Clarifications
  • 3.7 Changes to AS-Client/Protection API (Now Federated Authorization Specification)
    • 3.7.1 Resource Registration Endpoint
      • 3.7.1.1 Extraneous URL Parts Removed From Resource Registration API
      • 3.7.1.2 Scope Description Documents No Longer Expected to Resolve at Run Time When Scopes Are URLs
      • 3.7.1.3 Resource Descriptions Lose uri Parameter
      • 3.7.1.4 Resource and Scope Description Documents Gain Description Parameters
      • 3.7.1.5 scopes Parameter in Resource Description Document Renamed to resource_scopes
      • 3.7.1.6 New HTTP 400 and invalid_request Error
    • 3.7.2 Permission Endpoint
      • 3.7.2.1 Requesting Multiple Permissions and Permissions With Zero Scopes
    • 3.7.3 Token Introspection Endpoint
      • 3.7.3.1 scopes parameter renamed to resource_scopes in Introspection Response Object
      • 3.7.3.2 Options Not to Use Token Introspection Explicitly Allowed
      • 3.7.3.3 permissions Claim in Token Introspection Object Must Be Used
      • 3.7.3.4 permission Claim exp Sub-Claim's Meaning If Absent Removed
• 4 From V1.0 to V1.0.1
  • 4.1 Changes Affecting Authorization Server (+Client) Implementations
    • 4.1.1 AS Now Has Unique Redirect URI Endpoint for Claims Gathering (+Client)
    • 4.1.2 Permission Ticket Lifecycle Management (+Client)
    • 4.1.3 Requested Permission and Permission Ticket Matching
    • 4.1.4 Permission Ticket on Redirect Back to Client (+Client)
    • 4.1.5 PUT Means Complete Replacement

Introduction

This document contains non-normative release notes produced by the User-Managed Access Work Group explaining how new versions of the UMA specifications differ from previous ones. NOTE: Reading the release notes is not a substitute for reading the specifications carefully.
In each specification release, much work is typically done to improve clarity and applicability for implementers and others. See the UMA Implementer's Guide for additional commentary. The UMA specifications use Semantic Versioning: Given a version number MAJOR.MINOR.PATCH, increment the: 1. MAJOR version when you make incompatible API changes, 2. MINOR version when you add functionality in a backwards-compatible manner, and 3. PATCH version when you make backwards-compatible bug fixes. The following shorthand terms and abbreviations are used in this document (see also the terminology, including abbreviations, defined in the specifications): • AS: authorization server • RS: resource server • Core: UMA Core specification (applies to versions 1.0 and 1.0.1) • RSR: OAuth Resource Set Registration specification (applies to versions 1.0 and 1.0.1) • Grant: UMA Grant for OAuth Authorization (applies to version 2.0) • FedAuthz: Federated Authorization for UMA (applies to version 2.0) • I-D: IETF Internet-Draft specification • Sec: section Where a change relates to a GitHub issue, the linked issue number is provided. From UMA1 to UMA V2.0 The UMA V2.0 Recommendations are User-Managed Access (UMA) 2.0 Grant for OAuth 2.0 Authorization (known as "Grant") and Federated Authorization for User-Managed Access (UMA) 2.0 (known as "FedAuthz"). The official versions are downloadable from the Kantara Reports & Recommendations page; this document links to specific sections within the HTML versions. Differences and changes noted are between V2.0 and V1.0.n generally; differences between internal UMA2 draft revisions are not tracked here. (You may find it helpful to refer to the Disposition of Comments document, a record of specification changes during the Public Comment periods late in their final review cycle, and the GitHub repository where the specifications are managed.)
Where the distinction between V1.0 and V1.0.1 is important, it will be noted; otherwise the label "UMA1" is used. The following sequence diagrams may be of assistance as brief summaries of changes made: • Sequence diagram for Grant, highlighting key changes from UMA1 • Sequence diagram for FedAuthz, highlighting key changes from UMA1 Version Themes The major themes of this version, as determined by the Work Group's 2016 roadmap planning process, were (along with constantly improving security) to: - Increase OAuth 2.0 alignment - Improve Internet of Things readiness - Improve readiness for "wide ecosystems", where the requesting party and the resource owner's AS have no pre-established relationship Specification Reorganization and Conformance Levels The two specifications were divided differently until late April 2017. Core and RSR were recombined into Grant and FedAuthz, as follows: - All communications of the client and requesting party with the AS appear in Grant. This specification formally defines an extension OAuth grant. - All communications of the resource owner and resource server with the AS appear in FedAuthz. This includes: - Policy setting (outside the scope of UMA) - PAT definition and issuance - Protection API - Resource registration (previously, RSR specified only this endpoint/API and Core specified everything else) - The RS's permission requests at the AS - The RS's token introspection at the AS - The formal profiles for API extensibility URIs https://docs.kantarainitiative.org/uma/profiles/prot-ext-1.0, https://docs.kantarainitiative.org/uma/profiles/authz-ext-1.0, and https://docs.kantarainitiative.org/uma/profiles/rsrc-ext-1.0 were removed and replaced with recommendations (Grant Sec 4 and FedAuthz Sec 1.3) to define profiles as needed and to use uma_profiles_supported metadata (Grant Sec 2) to declare them. It is now optional to implement the features appearing in FedAuthz; thus, this specification effectively defines a conformance level. 
(Note: To receive the full benefits of "user-managed access", it is best to implement and use the features of both specifications.) Terminology Changes Note the following terminology changes made throughout the specifications. (256) See also Summary of API and Endpoint Changes below for naming changes made to some of the endpoints. <table> <thead> <tr> <th>UMA1</th> <th>UMA2</th> <th>Comments</th> </tr> </thead> <tbody> <tr> <td>configuration data</td> <td>metadata, discovery document</td> <td>For better clarity and OAuth alignment</td> </tr> <tr> <td>policies</td> <td>authorization grant rules, policy conditions</td> <td>For better consistency</td> </tr> <tr> <td>protection API token (PAT)</td> <td>protection API access token (PAT)</td> <td>For better clarity and OAuth alignment</td> </tr> <tr> <td>resource set, resource set registration</td> <td>resource, resource registration (protected while registered)</td> <td>For better clarity and OAuth alignment</td> </tr> <tr> <td>authorization API</td> <td>UMA grant (an extension OAuth grant)</td> <td>Result of redesign (see Token Endpoint Replaces RPT Endpoint; Client-Side Communications Defined as Extension Grant)</td> </tr> <tr> <td>authorization API token (AAT)</td> <td>goes away; a new related token is persisted claims token (PCT)</td> <td>Result of redesign (see AAT Removed in Favor of PCT)</td> </tr> <tr> <td>register a permission (for permission ticket)</td> <td>request (one or more) permission(s) (on behalf of a client)</td> <td>For better clarity</td> </tr> <tr> <td>trust elevation</td> <td>authorization process and authorization assessment</td> <td>Result of redesign (see Authorization Assessment Gains Precision)</td> </tr> <tr> <td>claims pushing + claims gathering = (n/a)</td> <td>claims pushing + claims gathering = claims collection</td> <td>For better consistency</td> </tr> <tr> <td>step-up authentication</td> <td>(n/a); just authorization process</td> <td>Result of redesign (see AAT Removed in Favor of 
PCT and Authorization Assessment Gains Precision)</td> </tr> <tr> <td>RPT as an UMA access token</td> <td>RPT as an OAuth access token</td> <td>Result of redesign (see Token Endpoint Replaces RPT Endpoint; Client-Side Communications Defined as Extension Grant)</td> </tr> </tbody> </table> API and Endpoint Changes These design changes include naming changes made to some of the endpoints. <table> <thead> <tr> <th>UMA1</th> <th>UMA2</th> <th>Comments</th> </tr> </thead> <tbody>
<tr> <td>discovery endpoint serving UMA1 metadata</td> <td>discovery endpoint serving UMA2 metadata</td> <td>The same authorization server can have two different discovery endpoints, one serving UMA1 metadata and one serving UMA2 metadata.</td> </tr>
<tr> <td>OAuth endpoints: authorization endpoint, token endpoint</td> <td>OAuth endpoints: authorization endpoint, token endpoint</td> <td>Previously, the token endpoint issued both PATs and AATs. Now the token endpoint issues PATs and RPTs; there are no AATs. (Note that the authorization endpoint is used for authenticating resource owners only, not requesting parties.)</td> </tr>
<tr> <td>Protection API: resource set registration endpoint/API, permission registration endpoint, token introspection endpoint</td> <td>Protection API (now OPTIONAL): resource registration endpoint/API, permission endpoint, token introspection endpoint</td> <td>In the case of the first two endpoints, there are both design (primarily syntax) and naming differences, which also affect their corresponding metadata in the authorization server discovery document.</td> </tr>
<tr> <td>Authorization API: RPT endpoint</td> <td>(n/a)</td> <td>In UMA2, there is no authorization API. The prior function of the RPT endpoint is served by the existing OAuth token endpoint.</td> </tr>
<tr> <td>Requesting party claims endpoint</td> <td>Claims interaction endpoint</td> <td>This is just a naming difference.</td> </tr>
</tbody> </table>
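As a non-normative sketch, an UMA2 authorization server discovery document advertising the endpoints named in the table might look like the following; the endpoint URLs are placeholders, and the field names follow Grant Sec 2 and FedAuthz Sec 2 (consult the specifications for the authoritative list):

```python
import json

# Hedged sketch of UMA2 AS discovery metadata; all URLs are placeholders.
as_metadata = {
    "issuer": "https://as.example.com",
    "authorization_endpoint": "https://as.example.com/authorize",  # resource owners only
    "token_endpoint": "https://as.example.com/token",              # issues PATs and RPTs
    "claims_interaction_endpoint": "https://as.example.com/rqp_claims",
    # Protection API endpoints (OPTIONAL, per FedAuthz):
    "permission_endpoint": "https://as.example.com/perm",
    "resource_registration_endpoint": "https://as.example.com/rreg",
    "introspection_endpoint": "https://as.example.com/introspect",
    "uma_profiles_supported": [],  # declared profiles, if any
}
print(json.dumps(as_metadata, indent=2))
```
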
Authorization Server Discovery Document and Metadata Changes Discovery Document and Metadata Simplification UMA1’s endpoint and feature discovery mechanism was defined in total by its Core specification. UMA2 makes use of the OAuth Authorization Server Discovery mechanism instead (still in Internet-Draft form at the time of UMA2 publication), eliminating metadata fields already defined by the OAuth discovery or OpenID Connect specification. The Grant (Sec 2) and FedAuthz (Sec 2) specifications each define only the metadata fields they require. (59, 157, 159, 305) Definition of OAuth Dynamic Client Registration Metadata Field The new metadata field claims_redirect_uris enables the client to pre-register claims redirection URIs. (Grant Sec 2, Sec 3.3.2, Sec 7.3) (337 sub-issues c and d) Permissions Claim and Sub-Claims in Token Introspection Object Not Requested to Be IANA-Registered as JWT Claims Previously, it was intended to make an IANA registration request of the claims inside the introspection object as independent JWT claims. This would enable them to be formally used in RPTs, such that an RS can validate the access token locally with these claims packed inside it. Because of potential security and privacy considerations, it was determined not to define this token format for now. (FedAuthz Sec 9) (334) Changes to AS-Client, RS-Client, and AS-Requesting Party Interfaces (Now UMA Grant Specification) Authorization Server Rotates Permission Ticket After the AS initially generates the permission ticket and the RS conveys it to the client, whenever the client subsequently approaches the AS token endpoint or redirects the requesting party to the AS claims gathering endpoint, the AS is required to rotate the value of the permission ticket every time it hands a permission ticket value back to the client (Grant Sec 3.3.3, Sec 3.3.6). 
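The rotation behavior can be sketched as follows; this is non-normative, and the TicketStore class and its method names are hypothetical:

```python
import secrets

# Non-normative sketch of ticket rotation: the AS invalidates the old ticket
# value and mints a fresh one each time it hands a ticket back to the client.
class TicketStore:
    def __init__(self):
        self._by_value = {}  # current ticket value -> requested permissions

    def issue(self, permissions):
        value = secrets.token_urlsafe(32)
        self._by_value[value] = permissions
        return value

    def rotate(self, old_value):
        """Invalidate old_value and return a fresh value for the same request."""
        permissions = self._by_value.pop(old_value)  # old value is now dead
        return self.issue(permissions)

store = TicketStore()
t1 = store.issue(["view"])  # e.g. minted for the RS's permission request
t2 = store.rotate(t1)       # e.g. handed back to the client with a need_info error
print(t1 != t2, t1 not in store._by_value)  # True True
```
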
This action obsoletes the need for the UMA Claims-Gathering Extension for Enhanced Security specification (see this explanation of that specification for more information). Token Endpoint Replaces RPT Endpoint; Client-Side Communications Defined as Extension Grant The specialized RPT endpoint was removed in favor of using the standard OAuth token endpoint (Grant Sec 3.3.1). A formal extension OAuth grant was defined (same section), working with regular OAuth capabilities and OAuth error codes to the extent possible (Sec 3.3.6). This enabled reuse of large portions of the threat model and the client type model, along with the ability for the client to request scopes and to authenticate using its own client credentials at the token endpoint (see the next section for additional discussion). (153, 165) **AAT Removed in Favor of PCT** An end-user requesting party no longer needs to mediate issuance of an AAT at the AS, and the client no longer needs to use an AAT in order to request a token; it simply uses its own client credentials at the OAuth token endpoint as in a normal grant (see Token Endpoint Replaces RPT Endpoint and Client-Side Communications Defined as Grant). Thus, the first time the requesting party needs to interact with the AS, if at all, is to provide claims interactively when redirected by the client as part of claims collection. This is in contrast to UMA1, where an end-user requesting party would have been expected to engage in an interactive OAuth flow to log in and then authorize AAT issuance at the AS’s authorization endpoint. In UMA1, the (required) AAT could have been used by the AS as a reminder of claims about the current requesting party. In UMA2, the (optional) PCT is available to serve in this capacity instead, without the OAuth mechanism being involved (Grant Sec 3.3.1). Note that UMA2 does not require the AS to involve the requesting party in an interactive flow authorizing PCT issuance (Grant Sec 3.3.3). 
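As a non-normative illustration of the redesign described above, a client's form-encoded request to the token endpoint under the UMA extension grant might look like the following sketch; the ticket, claim token, PCT value, and claim token format identifier are all placeholders:

```python
import base64
from urllib.parse import urlencode

# Hedged sketch of an UMA grant token request (Grant Sec 3.3.1); the client
# authenticates with its own credentials -- no AAT is involved.
claim_token = base64.urlsafe_b64encode(b'{"email":"[email protected]"}').decode().rstrip("=")
token_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
    "ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de",  # from the RS's response
    "claim_token": claim_token,                         # at most one per request
    "claim_token_format": "http://openid.net/specs/openid-connect-core-1_0.html#IDToken",
    "pct": "c2F2ZWRjbGFpbXM",                           # optional persisted claims token
}
body = urlencode(token_request)  # application/x-www-form-urlencoded
print(body)
```
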
(154, 264) **Deprecated Response-Body Permission Ticket Return Option By RS Removed** In UMA V1.0.1 the RS was able to return the initial permission ticket to the client in the response body for backwards compatibility with UMA V1.0, but this option was deprecated; now this option has been removed. (233) **Permission Ticket Return By AS With Redirect-User Hint No Longer Deprecated** In UMA V1.0.1 the AS was able to return the permission ticket to the client along with the redirect_user hint, but the client was not supposed to depend on ticket accuracy, and the supply of this ticket was deprecated. Now all permission tickets directly supplied by the AS are rotated and the value is safe for the client to depend on (Grant Sec 3.3.6). (233) **More Discretionary Permission Requests** The instruction for the RS to request permissions on the client's behalf (which can be a private interface or the standardized interface governed by FedAuthz) is now a recommendation ("SHOULD") that the requested permissions be reasonable for the client's resource request, rather than a requirement that they minimally suffice for it. The UMA Implementer's Guide has a section on Considerations Regarding Resource Server Permission Requests that explains how and why this level of discretion is more appropriate. **need_info Response Structure Flattened** The JSON nested object structure of the need_info error response from the AS has been flattened. Now it directly contains a permission ticket and either a required_claims or a redirect_user hint (or both) (Grant Sec 3.3.6). (237, 308) **not_authorized Error Renamed to request_denied** The UMA1 error not_authorized has been renamed to request_denied. Note that this error was re-added only in a later revision of UMA2. See the UMA Implementer's Guide section called Understanding Authorization Server Response Options From the Token Endpoint to understand AS error semantics.
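For illustration, a flattened need_info error body in the style described above might look like this non-normative sketch; the ticket value and the specific claim details shown are placeholders:

```python
import json

# Sketch of a flattened need_info error (in the style of Grant Sec 3.3.6):
# the rotated permission ticket and the hints sit at the top level rather
# than inside a nested error_details object. All values are placeholders.
need_info_response = {
    "error": "need_info",
    "ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de",  # freshly rotated ticket
    "required_claims": [
        {
            "claim_token_format": ["http://openid.net/specs/openid-connect-core-1_0.html#IDToken"],
            "claim_type": "email",
            "friendly_name": "email",
            "name": "email",
        }
    ],
    "redirect_user": True,  # hint that interactive claims gathering is available
}
print(json.dumps(need_info_response, indent=2))
```
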
(Grant Sec 3.3.6) (340) **Added interval parameter to request_submitted Error** An optional interval parameter was added to the request_submitted error to enable the AS to inform the client about appropriate polling intervals. (Grant Sec 3.3.6) (341) **New Refresh Token Clarity** It has been clarified that the AS can issue a refresh token and the client can use the refresh token grant to attempt to get a new RPT with it (Grant Sec 3.3.5, Sec 3.6). (238, 284) **Authorization Assessment Gains Precision** Inputs to authorization assessment and results calculation are more normative and precise. It is also now possible for permissions with zero scopes to be granted (Grant Sec 3.3.4). (266, 310, 317) **Permission Ticket Ecosystem Rationalized** The permission ticket generation ecosystem has been rationalized. In UMA2, a permission ticket is always generated, and the value rotated, in cases of a redirect back from the claims interaction endpoint and in cases of need_info and request_submitted errors from token endpoint requests, and never in cases of other errors. An authorization process is still ongoing while the authorization server is still generating permission tickets. (275, 279, 298) **Only One Pushed Claim Token Now Allowed at a Time** In UMA1, the mechanism for claim token pushing was a JSON-encoded request message sent to the RPT endpoint, optionally including a `claim_tokens` array each of whose objects had a `format` parameter and a `token` parameter. In UMA2 (Grant Sec 3.3.4), due to increased alignment with OAuth, this structure was flattened and the request message – now sent to the token endpoint as `application/x-www-form-urlencoded` format – contains each of the inner parameters only once. (If it is desired to send multiple claim tokens in a single request message, a compound claim token format could be defined.) **RPT Upgrading Logic Improved** UMA2 includes more comprehensive and normative logic around RPT upgrading (Grant Sec 3.3.5, Sec 3.3.5.1). 
(281) **Token Revocation Clarifications** UMA2 includes more comprehensive and normative text around token revocation, and defines a token type hint for PCTs (Grant Sec 3.7). (295) **Refresh Token Grant and Downscoping Logic Clarifications** UMA2 ensures that the logic of downscoping during token refreshing is properly defined given that UMA scopes are bound to resources, and clarifies that the AS does not perform authorization assessment in this context (Grant Sec 3.6). (306) **Changes to AS-RS Interface/Protection API (Now Federated Authorization Specification)** **Resource Registration Endpoint** **Extraneous URL Parts Removed From Resource Registration API** The API available at the resource registration endpoint required the path to contain the string `resource_set`. This string has been removed (FedAuthz Sec 3.2). (155) **Scope Description Documents No Longer Expected to Resolve at Run Time When Scopes Are URLs** The AS is no longer expected to resolve scope description documents at resource registration time or at any other point at run time (FedAuthz Sec 3.1.1). (289) **Resource Descriptions Lose uri Parameter** The `uri` parameter in the resource description was removed due to potential security and privacy concerns. (FedAuthz Sec 3.1) (270) **Resource and Scope Description Documents Gain Description Parameters** Resource description documents and scope description documents each now have a new parameter, `description`, for a human-readable string describing the resource or scope (respectively) at length. (271, 272) **scopes Parameter in Resource Description Document Renamed to resource_scopes** The `scopes` parameter in the resource description document has been renamed to `resource_scopes` (FedAuthz Sec 3.1). (318) **New HTTP 400 and invalid_request Error** For a typical variety of malformed-request errors, a response of an HTTP 400 (Bad Request) status code and an optional `invalid_request` error code is now defined.
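Taken together, the resource-description changes above (renamed `resource_scopes`, removed `uri`, new optional `description`) can be illustrated with this sketch; the values and the migrate() helper are hypothetical, not from the specifications:

```python
# Illustrative before/after for a registered resource description.
uma1_resource_set = {
    "name": "Photo Album",
    "icon_uri": "http://www.example.com/icons/flower.png",
    "scopes": ["view", "all"],                 # UMA1 parameter name
    "uri": "http://photoz.example.com/album",  # removed in UMA2
}

uma2_resource = {
    "name": "Photo Album",
    "description": "Collection of digital photographs",  # new optional parameter
    "icon_uri": "http://www.example.com/icons/flower.png",
    "resource_scopes": ["view", "all"],        # renamed from "scopes"
}

def migrate(old):
    """Hypothetical helper: drop "uri" and rename "scopes" to "resource_scopes"."""
    new = {k: v for k, v in old.items() if k not in ("scopes", "uri")}
    new["resource_scopes"] = old["scopes"]
    return new

print(migrate(uma1_resource_set)["resource_scopes"])  # ['view', 'all']
```
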
(FedAuthz Sec 3.2) (354-1) **Permission Endpoint** **Requesting Multiple Permissions and Permissions With Zero Scopes** It is now possible for the RS to request multiple permissions on the client's behalf, not just one; this enables the RS to request "packages" of multiple resources that are likely to need to be accessed together. It is also possible for the RS to supply zero scopes on a requested permission (FedAuthz Sec 4.1); this is because the client can request its own scopes directly from the AS (for more discussion see Token Endpoint Replaces RPT Endpoint; Client-Side Communications Defined as Extension Grant). (317) **Token Introspection Endpoint** **scopes parameter renamed to resource_scopes in Introspection Response Object** The `scopes` parameter in the token introspection response object has been renamed to `resource_scopes` (FedAuthz Sec 5.1.1). (158) **Options Not to Use Token Introspection Explicitly Allowed** In UMA2, the RPT is explicitly a type of OAuth access token, and it has been clarified that the token can be self-contained and validated locally by the RS, or introspected at the AS at run time, or its cached value used as appropriate (FedAuthz Sec 5). (261) **permissions Claim in Token Introspection Object Must Be Used** If token introspection is used (see Options Not to Use Token Introspection Explicitly Allowed), the introspection object can no longer be extended to replace the permissions claim with an entirely different structure. (322) **permission Claim exp Sub-Claim's Meaning If Absent Removed** The statement that a permission does not expire if its exp sub-claim is absent was removed for the multi-part rationale given in the linked issue. (337 sub-issue a) --- **From V1.0 to V1.0.1** The UMA V1.0 specifications (Core, RSR) were approved in March 2015. The UMA V1.0.1 specifications (Core, RSR) were approved in an All-Member Ballot to be Kantara Recommendations and were published in December 2015.
The following release notes are catalogued according to their impact on software implementations (where impact on client software in addition to authorization server or resource server software is denoted with (+Client) in the section title). Links to relevant GitHub issues and specific section numbers are provided where possible, enabling old-to-new text comparisons and tracking of discussions and rationales. The following themes animated the V1.0.1 release process: - Account for V1.0 lessons learned out of the gate - Achieve timeline predictability and minimization of disruption for V1.0 implementers - Achieve efficiency, speed, and accuracy in specification revisions - Achieve issue solution consistency with OAuth 2.0 and OpenID Connect where possible - Within the allotted time, prioritize first blocking and critical bug fixes, then low-impact specification and implementation changes Minor changes, such as changes that don't impact implementations or specification interpretations, are not discussed in this section. To see a full list of issues disposed of and specification commits related to V1.0.1, see the list of GitHub issues with the "V1.0.1" label and the commit histories for Core and RSR. **Changes Affecting Authorization Server (+Client) Implementations** Following are specification changes in V1.0.1 that affect authorization servers, and possibly clients that interact with them as well. **AS Now Has Unique Redirect URI Endpoint for Claims Gathering (+Client)** Previously, the client was instructed to present the ordinary OAuth redirect_uri endpoint to which the AS should redirect requesting parties back after claims gathering, but this was ambiguously specified and incorrect. Now the client has a unique endpoint, claims_redirect_uri, that it needs to register. (144) **Permission Ticket Lifecycle Management (+Client)** Previously, little guidance was offered on how to manage permission tickets. 
Now some implications are explored, particularly as they relate to client interaction. (172) (Core Sec 3.2.2) **Requested Permission and Permission Ticket Matching** Previously, the matching of the "extents of access" of the requested permission registered by the RS and the permission ticket issued by the AS was implicit. Now it is spelled out. (175) (Core Sec 3.2) **Permission Ticket on Redirect Back to Client (+Client)** Previously, the AS was required to repeat the client's permission ticket back to it in a ticket property when offering a redirect_user hint in error_details. Now this is optional and the client is encouraged to ignore the property's value, preparatory to removing the property entirely in a future UMA version. The reason is that the value can't be guaranteed good; repeating the value was in order to save the client work; and having the client check the value would ultimately have caused both sides work for no gain. (205) (Core Sec 3.5.4.2) **PUT Means Complete Replacement** Previously, the requirement for an Update method in resource set registration to completely replace the previous resource set description was implicit. Now it is spelled out. (177) (RSR Sec 2.2.3) **Default-Deny for Authorization Data Issuance** Previously, a naive implementation could have resulted in accidental default-permit authorization data issuance in some cases. Now a default-deny authorization assessment model has been made explicit, with an example given of how implementations could get into trouble. (194) (Core Sec 3.5.2) **base64url-Encoded Claims (+Client)** Previously, the wording about base64url-encoding pushed claims was ambiguous about whether double-encoding was necessary in the case of claim formats that were already base64url-encoded. Now it has been clarified that double-encoding should not be performed. 
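The clarified encoding rule can be sketched as follows; the helper function and its format check are hypothetical:

```python
import base64

# Sketch of the V1.0.1 clarification: base64url-encode a pushed claim token
# exactly once, and leave a format that is already base64url-encoded (for
# example, the segments of a compact-serialized JWT) untouched.
ALREADY_BASE64URL = {"jwt"}

def encode_claim_token(token, token_format):
    if token_format in ALREADY_BASE64URL:
        return token  # already encoded; double-encoding would be wrong
    return base64.urlsafe_b64encode(token.encode()).decode().rstrip("=")

jwt_like = "eyJhbGciOiJub25lIn0.eyJzdWIiOiJycXAifQ."
print(encode_claim_token(jwt_like, "jwt") == jwt_like)  # True
```
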
(206) (Core Sec 3.6.2) **Enhanced Security Considerations** Previously, the security considerations around accepting policy-setting context information from an incompletely trusted RS only covered "bad icon URIs". Now they cover all such policy-setting context information, following roughly the OAuth example. (151) (RSR Sec 4) Previously, the security considerations around client-pushed claims were explored only in a very cursory fashion in the body of the text. Now they are treated at length in a new subsection. (160) (Core Sec 7.4.1) **Enhanced Privacy Considerations** Previously, little was said about privacy implications of requesting party claims being transmitted to the AS. Now this section has been greatly expanded. (211) (Core Sec 8.2) **Changes Affecting Resource Server (+Client) Implementations** Following are specification changes in V1.0.1 that affect resource servers, and possibly clients that interact with them as well. **Caveat About Resource Server API Constraint** Previously, the specification was missing an important caveat: Based on a client's initial RPT-free resource request, the RS needs to know the correct AS, PAT, and resource set ID to include in its follow-on call to the permission request endpoint at the AS. Thus, the API of the RS needs to be structured so that it can derive this information from the client's request. Now this caveat appears in several locations. (161, 162, 225) **Adjustment of Other Resource Server API Constraints (+Client)** Previously, the specification wording was inconsistent and problematic regarding how the RS responds to a client request accompanied by no RPT or an RPT with insufficient authorization data (assuming permission request success).
Now the ability not to respond at all is more fully acknowledged; all responses intended to be interpreted in a UMA fashion are required to be accompanied by a WWW-Authenticate: UMA header; the permission ticket is required to be returned in a new ticket parameter in that header; complete freedom is given regarding the RS's choice of HTTP status code; and only in the case of a 403 choice is a ticket in a JSON-encoded body suggested, preparatory to removing the body option in a future UMA version. The rationale for this somewhat dramatic set of changes is that the original prescription to return HTTP status code 403 was incorrect; the specification gave too little guidance about responses other than 403 responses to be useful for client interoperability; and its requirement to return the permission ticket in a JSON-encoded body regardless of expected content type was an issue. (163, 164, 168) (Core Sec 3.3.1) **Solution for Permission Registration Failure (+Client)** Previously, the specification gave no guidance on how the RS should respond to the client in case of permission registration failure at the AS. Now, if the RS responds at all, it is required to substitute a Warning: 199 - "UMA Authorization Server Unreachable" header for WWW-Authenticate: UMA. (176) (Core Sec 3.3.2) **Authorization Server URI to Return to Client (+Client)** Previously, the value of the as_uri property that the RS returns to the client was described somewhat vaguely as the authorization server's URI. Now it has been clarified to be the issuer URI as it appears in the AS configuration data of the AS. (199) (Core Sec 3.3.1) **New Security Considerations** Previously, the security considerations around accepting policy-setting context information from an incompletely trusted AS were not covered. Now they cover the user_access_policy_uri property, which is the only policy-setting context information passed from AS to RS. 
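As an illustration of the response-header conventions described under Adjustment of Other Resource Server API Constraints and Solution for Permission Registration Failure above, an RS reply might carry headers like these; the values are placeholders:

```python
# Sketch of the V1.0.1 RS response-header conventions. On permission
# registration success, any status code may be used, but a UMA-interpretable
# response carries the ticket in a WWW-Authenticate: UMA header; on
# registration failure, a Warning header is substituted.
success_headers = {
    "WWW-Authenticate": (
        'UMA realm="example", '
        'as_uri="https://as.example.com", '
        'ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de"'
    ),
}
failure_headers = {
    "Warning": '199 - "UMA Authorization Server Unreachable"',
}
print(success_headers["WWW-Authenticate"])
print(failure_headers["Warning"])
```
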
(185) (RSR Sec 4) **Specification Reorganizations** The specifications, particularly Core Sec 3, were reorganized in the fashion of OpenID Connect, with the goal of giving a subsection to every request and response message. Other notable changes include: - Several "commentary" subsections were added, such as Core Sec 3.2.2 discussing permission ticket creation and management, and RSR Sec 2.1.2 discussing scope interpretation. - A new section, Core Sec 9.2, registers the permissions property in the new OAuth token introspection IANA registry (this is in addition to its registration in the JWT claims registry). - Core Sec 7.4.1 breaks out the new, more extensive security considerations discussion of pushed claims. - Core Sec 8 now has subsections to make privacy considerations easier to find and understand.
Core Specification Reorganization (Found in Core V1.0 → Find in Core draft V1.0.1):
- 1. Introduction; 1.1. Notational Conventions; 1.2. Terminology; 1.3. Achieving Distributed Access Control; 1.3.1. Protection API; 1.3.2. Authorization API; 1.3.3. Protected Resource Interface; 1.3.4. Time-to-Live Considerations; 1.4. Authorization Server Configuration Data
- 2. Protecting a Resource
- 3. Getting Authorization and Accessing a Resource; 3.1 Client Attempts Access to Protected Resource
- 3.1.1. Client Request to Resource Server With No RPT → 3.3 Resource Server Responds to Client; 3.3.1 Resource Server Response to Client on Permission Registration Success; 3.3.2 Resource Server Response to Client on Permission Registration Failure
- 3.1.2. Client Presents RPT → 3.2 Resource Server Registers Requested Permission With Authorization Server; 3.2.1 Resource Server Request to Permission Registration Endpoint; 3.2.2 Permission Ticket Creation and Management; 3.2.3 Authorization Server Response to Resource Server on Permission Registration Success; 3.2.4 Authorization Server Response to Resource Server on Permission Registration Failure
- 3.3. Resource Server Determines RPT's Status; 3.3.1. Token Introspection; 3.3.2. RPT Profile: Bearer → 3.4 Resource Server Determines RPT Status; 3.4.1 Token Introspection Process; 3.4.2 RPT Profile: Bearer
- 3.5 Client Seeks Authorization for Access; 3.5.1 Client Request to Authorization Server for Authorization Data; 3.5.2 Authorization Assessment Process; 3.5.3 Authorization Server Response to Client on Authorization Success; 3.5.4 Authorization Server Response to Client on Authorization Failure
- 3.4.1.1. Authentication Context Flows → 3.6 Client Responds to Authorization Server's Request for Additional Information; 3.6.1 Client Redirects Requesting Party to Authorization Server for Authentication
- 3.4.1.2. Claims-Gathering Flows → 3.6 Client Responds to Authorization Server's Request for Additional Information; 3.6.2 Client Pushes Claim Tokens to Authorization Server; 3.6.3 Client Redirects Requesting Party to Authorization Server for Claims-Gathering
- 4. Error Messages; 4.1 OAuth Error Responses; 4.2 UMA Error Responses
- 5. Profiles for API Extensibility; 5.1 Protection API Extensibility Profile; 5.2 Authorization API Extensibility Profile; 5.3 Resource Interface Extensibility Profile → 5. Profiles for API Extensibility; 5.1 Protection API Extensibility Profile; 5.2 Authorization API Extensibility Profile; 5.3 Resource Interface Extensibility Profile
- 6. Specifying Additional Profiles; 6.1 Specifying Profiles of UMA; 6.2 Specifying RPT Profiles; 6.3 Specifying Claim Token Format Profiles → 6. Specifying Additional Profiles; 6.1 Specifying Profiles of UMA; 6.2 Specifying RPT Profiles; 6.3 Specifying Claim Token Format Profiles
- 7. Compatibility Notes → n/a
- 8. Security Considerations → 7. Security Considerations
- 8.1. Redirection and Impersonation Threats → 7.1 Requesting Party Redirection and Impersonation Threats
- 8.2. Client Authentication → 7.2 Client Authentication
- 8.3. JSON Usage → 7.3 JSON Usage
- 8.4. Profiles, Binding Obligations, and Trust Establishment → 7.4 Profiles and Trust Establishment
- n/a → 7.4.1 Requirements for Trust When Clients Push Claim Tokens
- 9. Privacy Considerations → 8. Privacy Considerations; 8.1 Resource Set Information at the Authorization Server; 8.2 Requesting Party Information at the Authorization Server; 8.3 Profiles and Trust Establishment
- 10. IANA Considerations → 9. IANA Considerations
- 10.1. JSON Web Token Claims Registration; 10.1.1. Registry Contents → 9.1 JSON Web Token Claims Registration; 9.1.1 Registry Contents
- n/a → 9.2 OAuth Token Introspection Response Registration; 9.2.1 Registry Contents
- 10.2. Well-Known URI Registration; 10.2.1. Registry Contents → 9.3 Well-Known URI Registration; 9.3.1 Registry Contents
Pre-V1.0 Changes Following is a catalog of notable changes to the specifications in the pre-V1.0 timeframe. Core Changes Internet-Draft Rev 11 to Rev 12 From I-D rev 11 to rev 12: - Notable changes: - Enhanced the Security Considerations section. Internet-Draft Rev 10 to Rev 11 From I-D rev 10 to rev 11: - Breaking changes: - Section 3.4: not_authorized_permission error code: Changed to not_authorized. - RPT handling: Changed extensively to remove the RPT issuance endpoint and enable the authorization data request endpoint to do all RPT issuance duties. Permission ticket issuance is now handled on an "eager" basis, when a client either without an RPT or with an invalid or insufficient-authorization-data RPT approaches the RS seeking access.
This affects several sections:
    - Section 1.4: configuration data
    - Section 3: introduction
    - Sections 3.1.1 and 3.1.2: client approaching RS
    - Section 3.2: RS registering permission
    - Section 3.4: RPT issuance and authorization data addition
    - Section 5.2: Extensibility profile implications
  - Section 1.4:
    - Changed the claim_profiles_supported property in the configuration data to claim_token_profiles_supported.
    - Changed the user_endpoint property in the configuration data to authorization_endpoint, to match the final name in OAuth 2.0 (IETF RFC 6749).
    - Changed the authorization_request_endpoint property in the configuration data to rpt_endpoint, to distinguish it more fully from the OAuth endpoint and to shorten it.
    - (Also affects Section 5) Changed how uma_profiles_supported works, so that the API extensibility profiles don't have reserved keywords but rather use the regular URI mechanism for indicating profiles.
  - Section 3.3.2:
    - Names of several properties in the permissions structure for the RPT "Bearer" token introspection response have changed to align them with JWT claim names: expires_at to exp, issued_at to iat.
    - The JWT "scope" property at the top level is now disallowed in favor of "scopes" at the permissions level.
  - PAT and AAT OAuth scopes:
    - Renamed from URIs to simple strings: "uma_protection" and "uma_authorization"; the JSON scope description documents provided to enable the old URIs to resolve no longer have any relation to the UMA Core spec.
- Other changes of note:
  - Sections 3.1.1 and 3.1.2: Extraneous host_id removed from the example of the RS's response to the client.
  - Enabled explicit use of OAuth-based authentication protocols such as OpenID Connect for the OAuth protection driving PAT and AAT issuance.
  - Identifiers for spec-defined profiles now use https instead of http.
  - Migrated the claim profiling spec's requesting party claims endpoint configuration data to the core spec, and made it optional to supply.
  - Migrated the claim profiling spec's "need_claims" extension to the core spec, broadened it to "need_info", and gave it "error_details" hints in the core spec.
  - Section 3.1.1: The requirement for the RS to return 403 to a tokenless client has been softened to a SHOULD.
  - Section 3.3.2: The token introspection response has been aligned with the latest token introspection spec. nbf has been added at the permissions level, exp is now optional, and all permissions-level properties that duplicate JWT-level claims in intent now get overridden by any JWT-level claims present in the response. Finally, the "permissions" JWT claim has been registered with IANA.
  - Extensive new redirect-pattern claims-gathering support added.
  - Extensive new security and privacy considerations added.
- Section 1.4:
  - issuer property: Now required to match the actual published location of the configuration data.
  - Dynamic client configuration: When OIDC dynamic client configuration is used, this is now more intelligently handled through a reserved keyword "openid" that indicates that the OIDC configuration data should be consulted for the relevant endpoint.
  - pat_grant_types_supported and aat_grant_types_supported: Broadened to allow them to be strings even when not based on the OAuth grant type strings, similarly to token profiles.
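The rev 11 introspection-response changes above can be illustrated with a small sketch. The concrete timestamps, resource set ID, and the effective_expiry helper are hypothetical; only the property names active, exp, iat, nbf, scopes, and permissions come from the changes described above:

```python
# Hypothetical sketch of an RPT "Bearer" token introspection response after
# the rev 11 renames: expires_at -> exp, issued_at -> iat, "scopes" carried
# at the permissions level rather than a top-level "scope", and nbf added.
introspection_response = {
    "active": True,
    "exp": 1419356238,   # top-level JWT claim (now optional)
    "iat": 1419350238,
    "permissions": [
        {
            "resource_set_id": "112210f47de98100",   # hypothetical ID
            "scopes": ["view", "http://photoz.example.com/dev/actions/print"],
            "exp": 1419353238,   # permission-level expiry
            "nbf": 1419350238,   # not-before, added at the permissions level
        }
    ],
}

def effective_expiry(response, permission):
    """Per the alignment rule sketched above, a JWT-level claim overrides
    the duplicate permission-level property when both are present."""
    return response.get("exp", permission.get("exp"))

perm = introspection_response["permissions"][0]
print(effective_expiry(introspection_response, perm))  # top-level exp wins
```

Here the top-level exp (1419356238) takes precedence over the permission-level one, matching the "JWT-level claims override" rule.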
Internet-Draft Rev 08 to Rev 09

From I-D rev 08 to rev 09:
- Breaking changes:
  - (Technically breaking but not expected to have huge impact:) TLS/HTTPS is now mandatory for the AS to implement in its protection and authorization APIs.
- Other changes of note:
  - It is no longer required for the client to redirect a human requesting party to the AS for the claims-gathering process.
  - A new claims profiling framework (now in a separate spec) describes how to leverage one of several common patterns for claims-gathering: the client redirects the requesting party to the AS, or the client pushes claims to the AS.
  - A new framework for API extensibility, and a matching series of extensibility profiles, appears in the core spec. It enables tighter coupling between the AS and RS, AS and client, and RS and client, respectively, but only in a controlled manner, to foster greater interoperability in such circumstances.
  - The SHOULD for the usage of the SAML bearer token profile for PAT issuance is now just a MAY.
  - In Section 4.2, the example was corrected to remove a wayward "status": "error" property.
  - Clarified that no request message body is expected when the client uses the RPT endpoint at the AS.
  - Added a success example in Section 3.4.2 showing how authorization data is added and the RPT is simultaneously refreshed, a new capability.

Internet-Draft Rev 07 to Rev 08

From I-D rev 07 to rev 08:
- Breaking changes:
  - Section 1.3: TLS as defined and (mostly) required in OAuth (RFC 6749) is now a MUST in UMA for AS endpoints.

Internet-Draft Rev 06 to Rev 07

From I-D rev 06 to rev 07:
- Breaking changes:
  - Section 1.5: Some properties in the authorization server configuration data have been renamed, and others broken out into multiple properties with different names. The wording around reserved keywords vs. URIs as string values was also cleaned up.
    - oauth_token_profiles_supported: broken out into two, pat_profiles_supported and aat_profiles_supported.
    - uma_token_profiles_supported: renamed to rpt_profiles_supported.
  - Section 3.4.2: Error code names were cleaned up.
    - expired_requester_ticket: renamed to expired_ticket.
    - invalid_requester_ticket: renamed to invalid_ticket.
- Other changes of note:
  - Updated the token introspection spec citation and details.
  - Updated the OAuth threat model citation.
  - Enhanced the Security Considerations section.
  - Broadened the definition of successful access from 200 OK to any 2xx Success status.
  - Explained that the PAT implicitly gives the "subject" of a requested permission.
  - Fixed the resource_set_registration_endpoint keyword mention. (It was missing the last word.)

Internet-Draft Rev 05 to Rev 06

From I-D rev 05 to rev 06:
- Breaking changes:
  - Section 1.5: The authorization server configuration data now allows for providing a dynamic client registration endpoint (now defined by the official OAuth dynamic client registration spec), rather than just serving as a flag for whether the generic feature is supported. The name changed to dynamic_client_endpoint.
  - Sections 3.1.1 and 3.1.2: The am_uri header has been renamed to as_uri due to terminology changes (see below).
  - Section 3.1.2: The OAuth error "insufficient_scope" is now a central part of the authorization server's response to a client with a valid RPT and insufficient scope. This aligns UMA more closely with OAuth as a profile thereof (stay tuned for more possible tweaks in this general area, e.g. in WWW-Authenticate).
- Other changes of note:
  - Terminology has been changed wholesale from UMA-specific terms to OAuth-generic terms.
    - Authorization manager (AM) is now authorization server.
    - Host is now resource server.
    - Authorizing user is now resource owner.
    - Requester is now client.
  - Some additional terms and concepts have been tweaked, enhanced, and clarified.
    - Scope is now scope type (likely to change back due to feedback).
    - Authorization data is now defined as a generic category, of which permissions are an instance.
    - RPT now stands for requesting party token instead of requester permission token.
  - UMA is more explicitly defined as a profile of OAuth.
  - References have been added to the OAuth token introspection spec proposal, though it is not fully used yet (stay tuned for breaking changes here).
  - The resource set registration process (phase 1) has been moved to a separate modular spec that is designed to be usable by other OAuth-based technologies along with UMA.

RSR Changes

Internet-Draft Rev 04 to Rev 05

From I-D rev 04 to rev 05:
- Breaking changes:
  - Changed the PUT method for the purpose of resource set creation at the authorization server to POST. This had other rippling changes, such as removing the usage of If-Match, the precondition_failed error, ETag usage, and the privacy considerations warning about mapping real resource set names to obscured names that remove personally identifiable information.
  - Changed the name of the policy_uri property to user_access_policy_uri to differentiate it from the OAuth property of (formerly) the same name.
- Other changes of note:
  - Clarified that user_access_policy_uri is allowed on Create, Read, and Update, and also now allowed it on Delete and List too.
  - Enhanced the Security Considerations section.

Internet-Draft Rev 03 to Rev 04

From I-D rev 03 to rev 04:
- Breaking changes:
  - Removed the "status: xxx" property from all the AS responses in the RSR API.
- Other changes of note:
  - ("04" to "05") Added a new optional resource_uri parameter to the resource set description, to support resource discovery at an authorization server.
  - Scopes bound to resource set descriptions can now be strings rather than being required to be URIs that resolve to scope description documents.
  - The _rev property has been removed from the resource set registration API. It can be added back as an extension for those who want it.
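The rev 04-to-05 switch from PUT to POST for resource set creation, together with the rev 03-to-04 relaxation letting scopes be plain strings, can be sketched as follows. The endpoint URL, PAT value, and the build_create_request helper are hypothetical illustrations, not part of the spec:

```python
# Hypothetical sketch: building a resource set registration request after the
# rev 04 -> 05 change (creation via POST, no If-Match/ETag preconditions).
def build_create_request(endpoint, pat, name, scopes, icon_uri=None):
    """Return (method, url, headers, body) for creating a resource set
    description. Scopes may be plain strings or URIs (rev 03 -> 04 change)."""
    body = {"name": name, "scopes": list(scopes)}
    if icon_uri is not None:
        body["icon_uri"] = icon_uri
    headers = {
        "Authorization": f"Bearer {pat}",   # protection API token (PAT)
        "Content-Type": "application/json",
    }
    # POST to the registration endpoint itself; the server assigns the ID,
    # so the client no longer chooses it via PUT to a client-picked URL.
    return ("POST", endpoint, headers, body)

method, url, headers, body = build_create_request(
    "https://as.example.com/rsreg/resource_set",  # hypothetical endpoint
    "pat-123",                                     # hypothetical PAT
    "Photo Album",
    ["view", "all"],   # plain-string scopes, no longer required to be URIs
)
print(method, body["scopes"])
```

The shift to POST matches the usual REST convention that the server, not the client, mints identifiers for newly created resources.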
Claim Profiles Changes

Claim Profiles Rev 00

Claim Profiles 00:
- We decided not to progress this specification in its current form, so we will let it expire and will not reference it from Core.

Change History

<table>
<thead>
<tr> <th>Version</th> <th>Date</th> <th>Comment</th> </tr>
</thead>
<tbody>
<tr> <td>v. 38</td> <td>Jan 09, 2018 22:16</td> <td>Eve Maler: Added change history macro</td> </tr>
<tr> <td>v. 37</td> <td>Jan 09, 2018 22:15</td> <td>Eve Maler: Corrected header material; added links into specific Recommendation sections</td> </tr>
<tr> <td>v. 36</td> <td>Nov 16, 2017 19:26</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 35</td> <td>Nov 16, 2017 03:00</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 34</td> <td>Nov 01, 2017 00:04</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 33</td> <td>Oct 10, 2017 16:53</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 32</td> <td>Oct 10, 2017 15:55</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 31</td> <td>Sep 06, 2017 15:11</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 30</td> <td>Aug 08, 2017 17:53</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 29</td> <td>Aug 08, 2017 17:46</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 28</td> <td>Aug 08, 2017 17:20</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 27</td> <td>Jul 18, 2017 14:01</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 26</td> <td>Jul 18, 2017 13:21</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 25</td> <td>Jul 18, 2017 13:20</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 24</td> <td>Jul 12, 2017 10:30</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 23</td> <td>Jul 05, 2017 11:07</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 22</td> <td>Jul 05, 2017 10:42</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 21</td> <td>Jun 30, 2017 00:21</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 20</td> <td>Jun 27, 2017 17:51</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 19</td> <td>Jun 25, 2017 23:14</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 18</td> <td>May 14, 2017 06:49</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 17</td> <td>May 14, 2017 06:47</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 16</td> <td>May 05, 2016 15:10</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 15</td> <td>May 05, 2016 14:49</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 14</td> <td>Jan 25, 2016 10:57</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 13</td> <td>Sep 20, 2015 14:39</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 12</td> <td>Sep 20, 2015 14:27</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 11</td> <td>Sep 20, 2015 14:27</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 10</td> <td>Sep 20, 2015 13:51</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 9</td> <td>Sep 20, 2015 13:49</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 8</td> <td>Sep 20, 2015 13:45</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 7</td> <td>Sep 20, 2015 13:06</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 6</td> <td>Sep 18, 2015 10:28</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 5</td> <td>Sep 18, 2015 10:13</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 4</td> <td>Sep 15, 2015 21:30</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 3</td> <td>Sep 15, 2015 21:25</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 2</td> <td>Sep 15, 2015 19:38</td> <td>Eve Maler</td> </tr>
<tr> <td>v. 1</td> <td>Sep 15, 2015 14:58</td> <td>Eve Maler</td> </tr>
</tbody>
</table>
Abstract

Compositionality is of great practical importance when building systems from individual components. Unfortunately, leads-to properties are not, in general, compositional, and theorems describing the special cases where they are compositional are needed. In this paper, we develop a general theory of compositional leads-to properties, and use it to derive a composition theorem based on the notion of progress sets, where progress sets can be defined in various ways. Appropriate definitions of progress sets yield new results and generalized versions of known theorems.

1 Introduction

Although leads-to properties are not compositional in general, it is worthwhile to identify the special cases where they are. Composition theorems for leads-to properties have been proposed, for example, in [Rao92, Mi91a, Mi91b]. In this paper, we develop a general theory about the composition of leads-to properties, then specialize the results to give a composition theorem based on the notion of progress sets. A progress set for a program $F$ and target $q$ is a set of predicates satisfying certain properties. The theorem essentially states that if program $G$ satisfies a particular property for each predicate in the progress set, then any "leads-to $q$" property that holds for $F$ also holds in the parallel composition of $F$ and $G$. Several different composition theorems can be obtained by choosing particular ways of constructing progress sets.

First, we will introduce our program model, define the necessary background information, and develop a very general theorem for composing programs in a way that preserves leads-to properties. This theorem is then specialized to obtain the main result of the paper: a composition theorem based on progress sets. Finally, we explore different choices for the way a progress set is constructed and give several useful corollaries.
2 Preliminaries

2.1 Programs and Properties

A program \( F \) is a pair \((V_F, S_F)\) where \( V_F \) is a set of typed variables and \( S_F \) is a set of predicate transformers that includes the identity transformer and represents the weakest preconditions of a set of nonmiraculous, always terminating, and boundedly nondeterministic commands. Thus each \( s \in S_F \) is universally conjunctive, strict w.r.t. \textit{false}, and or-continuous. Since the identity transformer corresponds to the command \textit{skip}, all programs in our model allow stuttering steps. The state space of \( F \) is a Cartesian product with a coordinate for each variable of \( V_F \). If \( V_F \) is the empty set, the state space is a single state representing the empty Cartesian product. A computation of \( F \) is an initial state \( \sigma_0 \) and a sequence of pairs \((s_i, \sigma_i)\), \( i > 0 \), where \( s_i \in S_F \) is a command and \( \sigma_i \) is a program state, such that execution of command \( s_i \) can take the program from state \( \sigma_{i-1} \) to state \( \sigma_i \), for \( i > 0 \), and each command in the program appears infinitely often. This definition follows the one in \cite{CS95}, or may also be viewed as a generalized version of UNITY \cite{CM88} with no initially section. In contrast with UNITY, our model does not explicitly restrict the initial condition of a program. Thus we will not be able to use a rule such as the UNITY substitution axiom to eliminate from consideration states unreachable from a specified initial state. We will, however, take into account that after any point in the computation, not just the initial one, some states may no longer be reachable, or may not be reachable in some interval of interest.

Now we define several predicate transformers and program properties. The square brackets in the definitions are the everywhere operator from \cite{DS90}.
**The predicate transformer \( \text{awp}.F \) and properties \( \text{co} \) and \( \text{stable} \).** The predicate transformer \( \text{awp} \) \cite{CS95} is defined as
\[ [\text{awp}.F.q \equiv (\forall s : s \in S_F : s.q)] \tag{1} \]
Using \( \text{awp}.F \), we define a property \( \text{co} \) \cite{Mis94, CS95} that describes the next-state relation of the program \( F \).
\[ p \;\text{co}_F\; q = [p \Rightarrow \text{awp}.F.q] \tag{2} \]
Operationally, \( p \;\text{co}_F\; q \) means that if \( p \) holds at some point during a computation, then \( q \) holds and will still hold after executing any command of \( F \). The property \( \text{stable}.F.q \) is defined as
\[ \text{stable}.F.q = [q \Rightarrow \text{awp}.F.q], \tag{3} \]
which indicates that \( q \) will never be falsified by any command of \( F \).

**Refinement of awp.** For two programs \( F \) and \( F' \), we say that \( F \) is refined by \( F' \), denoted \( F \leq F' \), when the following formula holds.
\[ (\forall p :: [\text{awp}.F.p \Rightarrow \text{awp}.F'.p]) \tag{4} \]
For our purposes in this paper, we use the fact that if \( F \) is refined by \( F' \), every co property of \( F \) is also a co property of \( F' \).

**The predicate transformer \( \text{wens}.F.s \) and properties ensures and leads-to (\( \sim \)).** The weakest predicate that will hold until \( q \) does, and which will be taken to \( q \) by a single \( s \) step, is denoted \( \text{wens}.F.s.q \). It is defined as the weakest solution \( x \) of
\[ [x \equiv (s.q \land \text{awp}.F.(q \lor x)) \lor q]. \tag{5} \]
The predicate transformer \( \text{wens}.F.s \) is monotonic (i.e., \( [p \Rightarrow q] \Rightarrow [\text{wens}.F.s.p \Rightarrow \text{wens}.F.s.q] \)) and weakening (i.e., \( [q \Rightarrow \text{wens}.F.s.q] \)). From predicate calculus and the fixpoint induction rule,
\[ [p \land \neg q \Rightarrow (s.q \land \text{awp}.F.(q \lor p))] \Rightarrow [p \Rightarrow \text{wens}.F.s.q]. \tag{6}
\]
From \cite{CS95} we have a rule stating that if a predicate is stable, then its \( \text{wens} \) is also stable:
\[ \text{stable}.F.q \Rightarrow \text{stable}.F.(\text{wens}.F.s.q). \tag{7} \]
If \( [t \land \neg q \Rightarrow \text{awp}.F.t] \) holds, then, operationally, if \( t \land \neg q \) holds at some point, \( t \) will continue to hold while \( \neg q \) does, and it will also hold after the step that has established \( q \). In this case, the states that satisfy both \( t \) and \( \text{wens}.F.s.q \) are the same as those satisfying both \( t \) and \( \text{wens}.F.s.(t \land q) \):
\[ [t \land \neg q \Rightarrow \text{awp}.F.t] \Rightarrow [t \land \text{wens}.F.s.q \equiv t \land \text{wens}.F.s.(t \land q)]. \tag{8} \]
**Proof of (8):** The proof is by mutual induction.
\[
\begin{array}{cl}
 & \text{true} \\
\equiv & \{\ \text{with } [p \equiv \text{wens}.F.s.q], \text{ from the definition of wens (5)}\ \} \\
 & [p \equiv (s.q \land \text{awp}.F.(p \lor q)) \lor q] \\
\Rightarrow & \{\ \text{predicate calculus}\ \} \\
 & [p \land \neg q \Rightarrow s.q \land \text{awp}.F.(p \lor q)]
\end{array}
\]
(Footnote: This definition of refinement only considers safety properties. A more general notion, where progress properties are taken into account, is described in [San95].)

The ensures property \( p \text{ ensures } q \) in \( F \) \cite{CM88} can be defined as
\[ p \text{ ensures } q \equiv (\exists s : s \in S_F : [p \Rightarrow \text{wens}.F.s.q]) \tag{9} \]
Operationally, this means that if at some point \( p \land \neg q \) holds, then eventually \( q \) will hold, and furthermore, there is a single command that, when executed, will establish \( q \). From (8), we easily obtain
\[ [t \land \neg q \Rightarrow \text{awp}.F.t] \land (p \text{ ensures } q) \Rightarrow (t \land p \text{ ensures } t \land q).
\] (10)
From \cite{CM88}, \( p \sim_F q \), read "\( p \) leads-to \( q \) in \( F \)", is the unique strongest relation (on predicates) satisfying
\[ (p \text{ ensures } q) \Rightarrow p \sim_F q \tag{11} \]
\[ (p \text{ ensures } r) \land r \sim_F q \Rightarrow p \sim_F q \tag{12} \]
\[ (\forall w : w \in W : w \sim_F q) \Rightarrow (\exists w : w \in W : w) \sim_F q \tag{13} \]
where \( W \) is an arbitrary set of predicates on the state space of \( F \). A function from programs to predicate transformers, \( \text{wlt} \) (weakest leads-to), has been given in \cite{JKR89, Kna90}, where
\[ p \sim_F q = [p \Rightarrow \text{wlt}.F.q] \tag{14} \]
We will give a new formulation of \( \text{wlt} \) below.

2.2 The weakest ensures set of \( q \), \( E.F.q \)

Now we introduce a new concept. For program \( F \) and predicate \( q \), \( E.F.q \) is defined as the minimal set of predicates satisfying
\[ q \in E.F.q \tag{15} \]
\[ p \in E.F.q \Rightarrow \text{wens}.F.s.p \in E.F.q \tag{16} \]
\[ (\forall w : w \in W : w \in E.F.q) \Rightarrow (\exists w : w \in W : w) \in E.F.q \tag{17} \]
where \( W \) is an arbitrary set of predicates on the state space of \( F \). Some properties of \( E.F.q \) that will be used later are
\[ p \in E.F.q \Rightarrow [q \Rightarrow p] \tag{18} \]
and
\[ p \sim_F q = (\exists r : r \in E.F.q : [p \Rightarrow r]) \tag{19} \]
or, alternatively,
\[ \text{wlt}.F.q \equiv (\exists r : r \in E.F.q : r) \tag{20} \]

**Induction on the structure of \( E.F.q \).** A look at properties (15), (16), and (17) shows that we can prove that all elements of \( E.F.q \) have a certain property by showing that

1. the property holds for \( q \),
2. if the property holds for \( r \), then for all \( s \), the property holds for \( \text{wens}.F.s.r \),
3. if the property holds for all \( r_i \) with \( i \in I \), then the property holds for \( (\exists i : i \in I : r_i) \).

Several proofs in the sequel will use induction on the structure of \( E.F.q \).
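On a finite state space, the operators defined above can be executed directly, which makes a useful sanity check. The sketch below models predicates as frozensets of states and commands as weakest-precondition functions; the bounded, saturating-increment domain is an assumption made purely for illustration. wens is computed as the weakest (greatest) fixpoint of its defining equation (5), iterating downward from true, and wlt by closing q under wens and union, mirroring the construction of E.F.q:

```python
# Finite-state sketch (illustrative only): predicates are frozensets of
# states; a command is its weakest-precondition function on predicates.
STATES = frozenset(range(0, 21))   # assumed bounded domain for x

def wp_skip(q):
    return q                        # identity transformer (skip)

def wp_incr(q):
    # wp of a saturating x := min(x + 1, 20); saturation is an assumption
    # that keeps the bounded model total.
    return frozenset(x for x in STATES if min(x + 1, 20) in q)

F = [wp_skip, wp_incr]

def awp(program, q):
    """(1): awp.F.q is the conjunction of s.q over all commands s."""
    result = STATES
    for s in program:
        result &= s(q)
    return result

def stable(program, q):
    """(3): stable.F.q = [q => awp.F.q]."""
    return q <= awp(program, q)

def wens(program, s, q):
    """Weakest solution of [x == (s.q and awp.F.(q or x)) or q] (5),
    computed as a downward (greatest) fixpoint iteration from true."""
    x = STATES
    while True:
        new = (s(q) & awp(program, q | x)) | q
        if new == x:
            return x
        x = new

def wlt(program, q):
    """Close q under wens for every command and under union; on a finite
    space this reaches the union of E.F.q, i.e. the weakest leads-to
    predicate (20)."""
    r = q
    while True:
        new = r
        for s in program:
            new |= wens(program, s, r)
        if new == r:
            return r
        r = new

target = frozenset(x for x in STATES if x >= 15)
print(wens(F, wp_incr, target) == frozenset(range(14, 21)))  # q or s.q
print(wlt(F, target) == STATES)   # x := x + 1 eventually reaches x >= 15
```

In this model wens.F.s.q for the increment command comes out as q or s.q, matching the single-command observation used in the example of Section 3, and wlt.F.(x >= 15) is true everywhere.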
2.3 Parallel Composition

For two programs \( F = (V_F, S_F) \) and \( G = (V_G, S_G) \), their parallel composition, or union, denoted \( F \| G \), is defined as
\[ F \| G = (V_F \cup V_G, S_F \cup S_G). \tag{21} \]
Parallel composition is only defined when common elements of \( V_F \) and \( V_G \) have the same type; however, we will assume that \( F \| G \) is defined whenever we write it. Predicates on the state space of, say, \( F \) may also be viewed as predicates on the state space of \( F \| G \), since the union may only increase the number of variables. The following theorems follow easily from the definitions:
\[ [\text{awp}.(F \| G).q \equiv \text{awp}.F.q \land \text{awp}.G.q] \tag{22} \]
\[ [p \Rightarrow \text{wens}.F.s.q] \land [p \land \neg q \Rightarrow \text{awp}.G.(p \lor q)] \Rightarrow [p \Rightarrow \text{wens}.(F \| G).s.q] \tag{23} \]
\[ [\text{wens}.(F \| G).s.q \Rightarrow \text{wens}.F.s.q] \tag{24} \]

3 Composing leads-to properties

While co and ensures admit simple composition theorems [CM88], simple composition theorems do not hold, in general, for leads-to properties. We can, however, use the relationship between leads-to properties and the elements of $E.F.q$ to give a general composition theorem that provides a starting point for more useful theorems.

Below, we give a union theorem for $E.F.q$. Intuitively, the theorem says that if, for every predicate $r$ in $E.F.q$, the commands of $G$ cannot falsify $r$ while $\neg q$ holds, so that $r$ continues to hold until $q$ is established, then $r$ is also in $E.F\|G.q$. The theorem is actually more general, introducing a predicate $t$ satisfying $[t \land \neg q \Rightarrow awp.F.t]$. This allows us, in essence, to restrict attention to the parts of the state space satisfying $t$.
**Union Theorem for \( E.F.q \)**
\[ [\neg q \land t \Rightarrow \text{awp}.F.t] \tag{25} \]
\[ (\forall r : r \in E.F.q : [\neg q \land (r \land t) \Rightarrow \text{awp}.G.(r \land t)]) \tag{26} \]
\[ \Rightarrow \]
\[ (\forall r : r \in E.F.q : (\exists r' : r' \in E.(F \| G).q : [r \land t \equiv r' \land t])) \tag{27} \]

**Proof of Union Theorem for \( E.F.q \).** We show a stronger result:
\[ (\forall r : r \in E.F.q : (\exists r' : r' \in E.(F \| G).q : [r' \Rightarrow r] \land [r \land t \equiv r' \land t])) \]
The proof is by induction on the structure of the set \( E.F.q \).

Base: From (15), we have for \( r = q \), \( r' = q \).

Induction with (16): Let \( r = \text{wens}.F.s.x \) and \( r' = \text{wens}.(F \| G).s.x' \). The induction hypothesis is \( [x \land t \equiv x' \land t] \land [x' \Rightarrow x] \). First,

\( \text{wens}.F.s.x \land t \)
\( \equiv \quad \{ \text{(8), (25)} \} \)
\( \text{wens}.F.s.(x \land t) \land t \)
\( \Rightarrow \quad \{ \text{(23) with } p := \text{wens}.F.s.(x \land t) \land t \text{ and } q := x \land t, \text{ using that } p \land \neg(x \land t) \Rightarrow \text{awp}.G.(p \lor (x \land t)) \text{ follows from (26) and } [q \Rightarrow x] \} \)
\( \text{wens}.(F \| G).s.(x \land t) \)
\( \equiv \quad \{ \text{induction hypothesis, } [x \land t \equiv x' \land t] \} \)
\( \text{wens}.(F \| G).s.(x' \land t) \)
\( \Rightarrow \quad \{ \text{wens}.(F \| G).s \text{ monotonic} \} \)
\( \text{wens}.(F \| G).s.x' \)

and second,

\( \text{wens}.(F \| G).s.x' \)
\( \Rightarrow \quad \{ \text{wens}.(F \| G).s \text{ monotonic, induction hypothesis, } [x' \Rightarrow x] \} \)
\( \text{wens}.(F \| G).s.x \)
\( \Rightarrow \quad \{ \text{(24)} \} \)
\( \text{wens}.F.s.x \)

The first calculation gives \( [r \land t \Rightarrow r'] \) and the second gives \( [r' \Rightarrow r] \); together these yield \[ [r' \Rightarrow r] \text{ and } [r \land t \equiv r' \land t]. \]

Induction with (17): Follows from the predicate calculus.

As a simple consequence of the union theorem for \( E.F.q \) and (19), we obtain a union theorem for leads-to.
**Leads-to Union Theorem**
\[ [\neg q \land t \Rightarrow \text{awp}.F.t] \tag{28} \]
\[ (\forall r : r \in E.F.q : [\neg q \land (r \land t) \Rightarrow \text{awp}.G.(r \land t)]) \tag{29} \]
\[ \Rightarrow \]
\[ (\text{wlt}.F.q) \land t \leadsto_{F \| G} q \tag{30} \]

**Proof** From the union theorem for \( E.F.q \) and the fact that \( \text{wlt}.F.q \in E.F.q \), we have \( (\exists r' : r' \in E.(F \| G).q : [(\text{wlt}.F.q) \land t \equiv r' \land t]) \). Thus \( [(\text{wlt}.F.q) \land t \Rightarrow r'] \), which, together with (19), implies \( (\text{wlt}.F.q) \land t \leadsto_{F \| G} q \).

A corollary of the above is the following.

**Corollary to the leads-to union theorem**
\[ p \leadsto_F q \]
\[ [\neg q \land t \Rightarrow \text{awp}.F.t] \]
\[ (\forall r : r \in E.F.q : [\neg q \land (r \land t) \Rightarrow \text{awp}.G.(r \land t)]) \]
\[ \Rightarrow \]
\[ p \land t \leadsto_{F \| G} q \]

Note that if \( t \) is such that \( [p \Rightarrow t] \), then the conclusion is \( p \leadsto_{F \| G} q \).

**Example** In the next example we explore the composition of two simple single-statement programs. The variable \( x \) is an integer.
\[ F : x := x + 1 \]
\[ G : x := 2x \]
Let \( q = (x \geq k) \), for some \( k \). First, we determine \( E.F.q \). For a program with a single, always enabled command \( s \), we have \( \text{wens}.F.s.q = q \lor s.q \). Using now \( s.(x \geq i) = (x + 1 \geq i) = (x \geq i - 1) \), we get \[ E.F.q = \{ i : i \leq k : (x \geq i) \} \cup \{ \text{true} \} \] and \( \text{wlt}.F.(x \geq k) \equiv \text{true} \). The union theorem can be applied, provided \[ (\forall r : r \in E.F.q : [\neg(x \geq k) \land r \land t \Rightarrow \text{awp}.G.(r \land t)]) \tag{31} \] where \( \text{awp}.G.(r \land t) \) is obtained by substituting \( 2x \) for \( x \) in \( r \land t \). It is easy to see that (31) does not hold for any \( k \) if \( t \) is true (take \( r = (x \geq i) \) with \( i \) negative and \( x \) negative); however, it does hold for all \( k \) if \( t \) is \( (x \geq 0) \). In addition, this choice of \( t \) satisfies (28).
Thus we can conclude from the analysis that \[ x \geq 0 \leadsto_{F \| G} x \geq k \] The example indicates the importance of being able to restrict the state space under consideration. The condition on \( t \) essentially says that once \( t \) holds in \( F \| G \), it continues to hold at least until \( q \), the target, does. We use this to weaken the requirements on \( G \), at the price of having \( t \) on the left side of the leads-to properties of the composed program. In most cases, one is concerned that a particular progress property of \( F \), say \( (x \geq 3) \leadsto (x \geq 10) \), is preserved in a composition, and the corollary is applicable. In this case \( (x \geq 3) \land (x \geq 0) \) is just \( (x \geq 3) \). Even though the restriction of the state space to states satisfying \( (x \geq 0) \) has not impacted the conclusion, it was still needed to apply the theorem.

**A generalization of Misra's Fixed Point Union Theorem** Directly applying the union theorem for leads-to is usually not practical. However, we show below how it can be specialized to yield Misra's fixed point union theorem. A predicate \( q \) is a fixed point of a program if the state no longer changes once \( q \) holds. Formally, \[ q \text{ is a fixed point of } G \equiv (\forall p :: \text{stable}.G.(p \land q)) \] The following theorem is a simple corollary of the leads-to union theorem, since, from the definition of fixed point, (29) holds for all predicates.

**Generalized fixed point union theorem**
\[ p \leadsto_F q \]
\[ [\neg q \land t \Rightarrow \text{awp}.F.t] \]
\[ \neg q \land t \text{ is a fixed point of } G \]
\[ \Rightarrow \]
\[ p \land t \leadsto_{F \| G} q \tag{32} \]

Misra's version [Mi91a] is obtained from the above with \( [t \equiv \text{true}] \) and the observation that in this case \( [\neg q \land t \Rightarrow \text{awp}.F.t] \) holds trivially.
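The example above can be checked mechanically on a finite approximation of the integers. The sketch below (our own encoding) computes wens as a greatest fixpoint and wlt.F.q as the disjunction of all of \( E.F.q \), cf. (20); the saturation of \( x \) to \([-32, 31]\) is an assumption introduced only to keep the state space finite.

```python
# Finite-state check of the example F: x := x+1, G: x := 2x, with x
# saturated to [-32, 31] (our own assumption, to make the state space
# finite). awp, wens and wlt are encoded as set-valued fixpoints.

STATES = range(-32, 32)

def clamp(v):
    return max(-32, min(31, v))

def awp(prog, q):
    # awp.F.q: states from which every command of F steps into q.
    return {s for s in STATES if all(c(s) in q for c in prog)}

def wens(prog, cmd, q):
    # wens.F.s.q: greatest fixpoint of  x -> (s.q ∧ awp.F.(q ∨ x)) ∨ q,
    # computed by downward iteration from true (the full state set).
    x = set(STATES)
    while True:
        safe = awp(prog, q | x)
        nx = {s for s in STATES if s in q or (cmd(s) in q and s in safe)}
        if nx == x:
            return x
        x = nx

def wlt(prog, q):
    # wlt.F.q as the disjunction of all of E.F.q, cf. (20): close q under
    # wens.F.s for every command s until nothing new is added.
    w = set(q)
    while True:
        nw = set(w)
        for c in prog:
            nw |= wens(prog, c, w)
        if nw == w:
            return w
        w = nw

F = [lambda v: clamp(v + 1)]   # F: x := x + 1
G = [lambda v: clamp(2 * v)]   # G: x := 2x

k = 10
q = {s for s in STATES if s >= k}

assert wlt(F, q) == set(STATES)                        # wlt.F.(x >= k) = true
assert wlt(F + G, q) == {s for s in STATES if s >= 0}  # exactly t = (x >= 0)
```

The two assertions mirror the analysis: in \( F \) alone every state leads to \( x \geq k \), while in \( F \| G \) exactly the states satisfying \( t = (x \geq 0) \) do, since from a negative \( x \) the doubling command can forever undo the increments.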
4 A composition theorem using progress sets

In this section, we give conditions under which a set \( C.F.q \) of predicates is guaranteed to contain \( E.F.q \). The idea is that such a set of predicates is easier to describe than \( E.F.q \). First, we require that \( C \) is closed under arbitrary conjunction and disjunction. For arbitrary \( R \):
\[ (\forall r : r \in R : r \in C) \Rightarrow (\forall r : r \in R : r) \in C \tag{33} \]
\[ (\forall r : r \in R : r \in C) \Rightarrow (\exists r : r \in R : r) \in C \tag{34} \]
Since \( R \) may be empty, the above formulae imply that \( \text{true} \in C \) and \( \text{false} \in C \).

**The predicate transformer \( \text{cl}.C \)** Given the set \( C \), we define a predicate transformer \( \text{cl}.C \), where \( \text{cl}.C.r \) is the strongest predicate in \( C \) weaker than \( r \):
\[ \text{cl}.C.r \equiv (\forall x : x \in C \land [r \Rightarrow x] : x) \tag{35} \]
The postulated properties of \( C \) are sufficient to guarantee the existence of \( \text{cl}.C.r \). In addition, \( \text{cl}.C \) is monotonic, weakening, and universally disjunctive. Also,
\[ [\text{cl}.C.r \equiv r] = r \in C \tag{36} \]

**Progress sets** For a program \( F \) and predicates \( t \) and \( q \), we say that \( C \) is a progress set for the triple \( (F, t, q) \) if
\[ C \text{ is closed under arbitrary conjunction and disjunction, (33, 34)} \tag{37} \]
\[ q \in C \tag{38} \]
\[ t \in C \land \text{stable}.F.t \tag{39} \]
\[ (\forall m, s : [q \Rightarrow m], s \in S_F : (t \land m) \in C \Rightarrow (t \land (q \lor s.m)) \in C) \tag{40} \]
The following lemma says that if \( C \) is a progress set for \( (F, t, q) \), then for all predicates \( r \in E.F.q \), \( (r \land t) \in C \). The lemma will allow us to reformulate the leads-to union theorem in terms of a progress set instead of \( E.F.q \).
**Lemma**
\[ C \text{ is closed under arbitrary conjunction and disjunction} \]
\[ q \in C \]
\[ t \in C \land \text{stable}.F.t \]
\[ (\forall m, s : [q \Rightarrow m], s \in S_F : (t \land m) \in C \Rightarrow (t \land (q \lor s.m)) \in C) \]
\[ \Rightarrow \]
\[ (\forall r : r \in E.F.q : (r \land t) \in C) \]

The proof is by induction on the structure of \( E.F.q \). The base case follows from the hypotheses (i.e., \( q \in C \), \( t \in C \)) and the fact that \( C \) is closed under conjunction (33). The induction step with disjunction follows from the fact that \( C \) is closed under disjunction (34). The remaining induction step requires us to show that:
\[ (t \land m) \in C \Rightarrow (t \land \text{wens}.F.s.m) \in C \tag{41} \]
Let \( r = \text{wens}.F.s.m \).

true
\( = \quad \{ \text{definition of wens}.F.s, \text{(5)} \} \)
\( [r \Rightarrow (s.m \land \text{awp}.F.(m \lor r)) \lor m] \)
\( \Rightarrow \quad \{ \text{predicate calculus} \} \)
\( [t \land r \Rightarrow t \land ((s.m \land \text{awp}.F.(m \lor r)) \lor m)] \)
\( = \quad \{ \text{stable}.F.t, \text{ thus } [t \Rightarrow \text{awp}.F.t] \} \)
\( [t \land r \Rightarrow t \land ((s.m \land \text{awp}.F.(m \lor r) \land \text{awp}.F.t) \lor m)] \)
\( = \quad \{ \text{awp}.F \text{ conjunctive} \} \)
\( [t \land r \Rightarrow t \land ((s.m \land \text{awp}.F.((m \lor r) \land t)) \lor m)] \)
\( \Rightarrow \quad \{ \text{awp}.F \text{ monotonic, } [(m \lor r) \land t \Rightarrow m \lor (r \land t)], \text{ weaken right side} \} \)
\( [t \land r \Rightarrow t \land ((s.m \land \text{awp}.F.(m \lor (r \land t))) \lor m)] \)
\( \Rightarrow \quad \{ \text{cl}.C \text{ weakening, awp}.F \text{ monotonic, weaken right side} \} \)
\( [t \land r \Rightarrow t \land ((s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m)] \)
\( \Rightarrow \quad \{ \text{cl}.C \text{ monotonic} \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow \text{cl}.C.(t \land ((s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m))] \)
\( = \quad \{ (t \land ((s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m)) \in C \text{, see below, and (36)} \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow t \land ((s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m)] \)
\( \Rightarrow \quad \{ \text{weaken right side} \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow (s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m] \)
\( \Rightarrow \quad \{ \text{cl}.C.(t \land r) \text{ is thus a solution of the fixpoint equation (6) defining wens}.F.s.m \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow \text{wens}.F.s.m] \)
\( = \quad \{ \text{definition of } r \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow r] \)
\( \Rightarrow \quad \{ t \in C, \text{ thus } [\text{cl}.C.t \equiv t]; \text{ cl}.C \text{ monotonic, thus } [\text{cl}.C.(t \land r) \Rightarrow t] \} \)
\( [\text{cl}.C.(t \land r) \Rightarrow t \land r] \)
\( = \quad \{ \text{cl}.C \text{ weakening, thus } [t \land r \Rightarrow \text{cl}.C.(t \land r)] \} \)
\( [\text{cl}.C.(t \land r) \equiv t \land r] \)
\( = \quad \{ \text{definition of } r, \text{ property (36) of cl}.C \} \)
\( (t \land \text{wens}.F.s.m) \in C \)

Now we show \( (t \land ((s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))) \lor m)) \in C \), which was assumed above. By the induction hypothesis, \( (t \land m) \in C \).
true
\( = \quad \{ \text{induction hypothesis} \} \)
\( (t \land m) \in C \)
\( \Rightarrow \quad \{ C \text{ closed under disjunction, } \text{cl}.C.(r \land t) \in C \} \)
\( ((t \land m) \lor \text{cl}.C.(r \land t)) \in C \)
\( = \quad \{ \text{predicate calculus, using } [\text{cl}.C.(r \land t) \Rightarrow t] \} \)
\( (t \land (m \lor \text{cl}.C.(r \land t))) \in C \)
\( \Rightarrow \quad \{ \text{hypothesis (40), with } m := m \lor \text{cl}.C.(r \land t) \} \)
\( (\forall s : s \in S_F : (t \land (q \lor s.(m \lor \text{cl}.C.(r \land t)))) \in C) \)
\( \Rightarrow \quad \{ \text{conjunction over } s \in S_F, \ C \text{ closed under conjunction} \} \)
\( (t \land (q \lor \text{awp}.F.(m \lor \text{cl}.C.(r \land t)))) \in C \)
\( \Rightarrow \quad \{ \text{induction hypothesis and hypothesis of lemma give } (t \land (q \lor s.m)) \in C; \ C \text{ closed under conjunction} \} \)
\( (t \land (q \lor (s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))))) \in C \)
\( \Rightarrow \quad \{ (t \land m) \in C, \ C \text{ closed under disjunction, } [q \Rightarrow m] \} \)
\( (t \land (m \lor (s.m \land \text{awp}.F.(m \lor \text{cl}.C.(r \land t))))) \in C \)

**A monotonicity property of progress sets** From the conjunction and disjunction properties of a progress set \( C \) for \( (F, t, q) \), one derives the following monotonicity property:
\[ C \text{ is a progress set for } (F, t, q) \]
\[ [q \Rightarrow q'] \land q' \in C \]
\[ \Rightarrow \]
\[ C \text{ is a progress set for } (F, t, q') \]

Now, we have the final theorem of this section, and a main result of the paper. It follows immediately from the lemma and the leads-to union theorem.

**Progress set union theorem** Let \( C \) be a progress set for \( (F, t, q) \).
\[ (\forall c : c \in C : c \land \neg q \ \text{co}_G\ c) \tag{42} \]
\[ [q \Rightarrow q'] \land q' \in C \]
\[ \Rightarrow \]
\[ \text{wlt}.F.q' \land t \leadsto_{F \| G} q' \]

5 Some progress sets

In this section, we give several examples of how progress sets can be defined.
**The set of all predicates** The set of all predicates on the state space of \( F \) is a progress set for \( (F, \text{true}, q) \). Then (42) is equivalent to \( \neg q \) being a fixed point of \( G \), so that we get yet another proof of Misra's fixed point union theorem. The set of all predicates is primarily of interest because it demonstrates the existence of a progress set for every program and predicate.

**The set of all stable predicates** The set of all stable predicates of any program on the appropriate state space is closed under arbitrary conjunction (33) and disjunction (34) and is therefore a candidate for progress sets. Rao [Rao92] gave two union theorems for leads-to based on the notions of decoupling and weak decoupling, in terms of stability. His results are

**Rao's Decoupling Theorem**
\[ p \leadsto_F q \]
\[ \text{stable}.G.q \]
\[ F \ \text{dec}_{\text{safe}} \ G \]
\[ \Rightarrow \]
\[ p \leadsto_{F \| G} q \]
where \( F \ \text{dec}_{\text{safe}} \ G \equiv (\forall s, r : s \in S_F \land \text{stable}.G.r : \text{stable}.G.(s.r)) \), and

**Rao's Weak Decoupling Theorem**
\[ p \leadsto_F q \]
\[ \text{stable}.(F \| G).q \]
\[ F \ \text{wdec}_{\text{safe}} \ G \]
\[ \Rightarrow \]
\[ p \leadsto_{F \| G} q \]
where \( F \ \text{wdec}_{\text{safe}} \ G \equiv F \ \text{dec}_{\text{safe}} \ (F \| G) \).

Both of these theorems are simple corollaries of the progress set union theorem. For the decoupling theorem, let \( C.F.q = \{ r \mid \text{stable}.G.r \} \). From \( F \ \text{dec}_{\text{safe}} \ G \) and \( \text{stable}.G.q \), \( C.F.q \) is a progress set for \( (F, \text{true}, q) \). Since \( \text{stable}.G.r \Rightarrow r \land \neg q \ \text{co}_G\ r \), the theorem follows. For the weak decoupling theorem, let \( C.F.q = \{ r \mid \text{stable}.(F \| G).r \} \). From \( F \ \text{wdec}_{\text{safe}} \ G \) and \( \text{stable}.(F \| G).q \), \( C.F.q \) is a progress set for \( (F, \text{true}, q) \). Since \( \text{stable}.(F \| G).r \Rightarrow r \land \neg q \ \text{co}_G\ r \), the theorem follows.
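On a small finite state space, the claims that the stable predicates of a program are closed under conjunction and disjunction, and that each of them satisfies (42), can be checked by brute-force enumeration of all predicates (subsets). The 6-state space and the program G below are toy assumptions of this sketch:

```python
from itertools import combinations

# Enumerate every predicate over a 6-state space and keep the ones that
# are stable in a toy program G (our own assumption: x := x + 2 mod 6).

STATES = tuple(range(6))
G = [lambda x: (x + 2) % 6]

def stable(prog, r):
    # stable.G.r: every command of G maps every state of r back into r.
    return all(c(s) in r for s in r for c in prog)

preds = [frozenset(c) for n in range(len(STATES) + 1)
         for c in combinations(STATES, n)]
stables = [r for r in preds if stable(G, r)]

# Closed under (finite) conjunction and disjunction, cf. (33) and (34):
assert all((a & b) in stables and (a | b) in stables
           for a in stables for b in stables)

# Each stable predicate satisfies (42) for any q: steps taken from
# r ∧ ¬q stay in r, here checked for q = {0}.
q = frozenset({0})
assert all(c(s) in r for r in stables for c in G for s in r - q)
```

For this G the stable predicates are exactly the unions of the two orbits \(\{0,2,4\}\) and \(\{1,3,5\}\), which makes the closure properties easy to see by hand as well.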
Rao used these results to explore notions of commutativity that allow compositional progress results, rather than advocating their direct use in programming. Indeed, direct use would seem to be counterproductive, since the theorems themselves are rather non-compositional, requiring detailed knowledge of both \( F \) and \( G \) in order to determine whether they are decoupled or weakly decoupled. On the other hand, our more general theorem can be used in a similar, but more "compositional" way, where \( F \) and \( G \) are decoupled via a third program \( G' \). Instead of checking whether two programs \( F \) and \( G \) are decoupled or weakly decoupled, given \( F \), we choose a program \( G' \) so that the set of stable predicates of \( G' \) is a progress set for \( F \). Ideally, \( G' \) is chosen so that its set of stable predicates has a simple structure and is easily described. Then, to compose a program \( G \) with \( F \), it is only necessary to check that \( G \) satisfies (42) for the stable predicates of \( G' \). It suffices that \( G \) is a refinement of \( G' \): then the stable predicates of \( G' \) are stable in \( G \), and we have the following corollary of the progress set union theorem.

**Decoupling via \( G' \) union theorem** Let the set of all stable predicates of \( G' \) be a progress set for \( (F, \text{true}, q) \).
\[ G' \leq G \]
\[ \Rightarrow \]
\[ \text{wlt}.F.q \leadsto_{F \| G} q \]

It is worth noting that when the set of stable predicates of \( G \) is taken as a progress set \( C \), then \( \text{cl}.C.r \) is the strongest stable predicate weaker than \( r \) in \( G \), or \( \text{sst}.G.r \) [San91]. The set of states of \( G \) that are reachable from \( r \) is given by \( \text{sst}.G.r \).

**Additional program properties that generate potential progress sets** In the previous paragraphs, we discussed using the set of all stable predicates of some program \( G \) as a potential progress set for \( (F, t, q) \).
Below, we give three more program properties that are slightly weaker than stable, such that all predicates satisfying the property for some program \( G \) are closed under conjunction and disjunction, and are therefore potential progress sets for \( (F, t, q) \). (We still need to check the remaining conditions on progress sets.) Like the set of all stable predicates, all of the predicates so obtained satisfy (42) for \( G \).

**Stable.\(G\)-not-leaving-\(q\)**
\[ \{ r \mid (r \land \neg q \ \text{co}_G\ r) \land (r \land q \ \text{co}_G\ r \lor \neg q) \} \tag{48} \]
In this case, \( \text{cl}.C.r \) is the set of states that are reachable from \( r \) along a sequence of states satisfying the requirement that once \( q \) holds for some state in the sequence, it holds for all later states in the sequence. For two states \( \sigma \) and \( \gamma \) connected by such a sequence we write \( \sigma \ \text{Reach}.G.\text{nl}.q\ \gamma \).

**Stable.\(G\)-not-leaving-\(q\)-all-directions**
\[ \{ r \mid r \land (\neg q \lor \text{awp}.G.q) \ \text{co}_G\ r \} \tag{49} \]
Here, \( \text{cl}.C.r \) is the set of states that are reachable from \( r \) via a sequence of states where, once \( q \) holds, the sequence cannot be extended without maintaining \( q \).

**Stable.\(G\)-outside-\(q\)**
\[ \{ r \mid r \land \neg q \ \text{co}_G\ r \} \tag{50} \]
In this latter case, \( \text{cl}.C.r \) is the set of states that are reachable from \( r \) via a sequence of states where \( q \) holds for at most the final state in the sequence.

Also, note that for each of these choices of \( C \), \( q \in C \) holds trivially, thus providing alternatives to the set of all stable predicates, which requires that \( q \) be stable in \( G \). For later use we also note that a predicate \( p \) with \( [q \Rightarrow p] \) satisfies (48) and (49) if it satisfies (50).

**Progress sets from a relation** Let \( R \) be a reflexive, transitive relation on the state space, and let \( \sigma \) and \( \gamma \) denote representative states.
Then the set of predicates \( r \) such that
\[ (\forall \sigma, \gamma : r.\sigma \land \sigma R \gamma \Rightarrow r.\gamma) \tag{51} \]
is closed under arbitrary conjunction and disjunction.\(^2\)

### 5.1 Composition theorems based on monotonicity and commutativity

The previous section listed several ways that sets of predicates closed under arbitrary conjunction and disjunction can be generated. In this section, we give two theorems that are helpful in showing (40), repeated here for convenience, under the assumption of (33), (34), (38), and (39).
\[ (\forall m, s : [q \Rightarrow m], s \in S_F : (t \land m) \in C \Rightarrow (t \land (q \lor s.m)) \in C) \tag{52} \]

\(^2\)If we let \( R \) be defined by \( \sigma R \gamma = (\text{cl}.C.\{\sigma\}).\gamma \), where \( \{\sigma\} \) is the point predicate that holds exactly at \( \sigma \), then the set of predicates generated by \( R \) is just \( C \).

**Commutativity of \( \text{cl}.C \) and command** Assuming (33), (34), (38), and (39), the following implies (40):
\[ (\forall s : s \in S_F : (\forall m : [q \Rightarrow m] : [\text{cl}.C.(t \land s.m) \Rightarrow t \land (q \lor s.(\text{cl}.C.(t \land m)))])) \tag{53} \]

Proof:

\( [q \Rightarrow m] \land (t \land m) \in C \)
\( \Rightarrow \quad \{ \text{(53), and } [\text{cl}.C.(t \land m) \equiv t \land m] \text{ by (36)} \} \)
\( [\text{cl}.C.(t \land s.m) \Rightarrow t \land (q \lor s.(t \land m))] \)
\( \Rightarrow \quad \{ \text{monotonicity of } s \} \)
\( [\text{cl}.C.(t \land s.m) \Rightarrow t \land (q \lor s.m)] \)
\( \Rightarrow \quad \{ \text{disjunction of left and right sides with } t \land q, \text{ which equals cl}.C.(t \land q) \} \)
\( [\text{cl}.C.(t \land q) \lor \text{cl}.C.(t \land s.m) \Rightarrow t \land (q \lor s.m)] \)
\( \Rightarrow \quad \{ \text{disjunctivity of cl}.C \} \)
\( [\text{cl}.C.(t \land (q \lor s.m)) \Rightarrow t \land (q \lor s.m)] \)
\( \Rightarrow \quad \{ \text{cl}.C \text{ weakening, (36)} \} \)
\( (t \land (q \lor s.m)) \in C \)

In the next theorem, we require that the command corresponding to each \( s \) is deterministic and given by a functional state transformer \( f_s \).
**Commutativity of functions and a relation** Let \( R \) be defined by \( \sigma R \gamma = (\text{cl}.C.\{\sigma\}).\gamma \). Then the condition below implies (53).
\[ (\forall s : s \in S_F : (\forall \sigma, \gamma : \sigma, \gamma \text{ in the state space of } F \| G : \sigma R \gamma \land t.\sigma \Rightarrow q.\sigma \lor q.\gamma \lor (f_s.\sigma\ R\ f_s.\gamma))) \tag{54} \]

Proof:

\( [q \Rightarrow m] \land (\text{cl}.C.(t \land s.m)).\gamma \)
\( \Rightarrow \quad \{ \text{definition of } R \} \)
\( (\exists \sigma : \sigma R \gamma : (t \land s.m).\sigma) \)
\( \Rightarrow \quad \{ \text{rewriting with } f_s, \text{ using stable}.F.t; \text{ let } \sigma \text{ be a witness} \} \)
\( (t \land m).(f_s.\sigma) \land \sigma R \gamma \)
\( \Rightarrow \quad \{ \text{(54), i.e., } q.\sigma \lor q.\gamma \lor (f_s.\sigma\ R\ f_s.\gamma); \text{ definition of } R, \text{ cl}.C \text{ monotonic} \} \)
\( q.\sigma \lor q.\gamma \lor (\text{cl}.C.(t \land m)).(f_s.\gamma) \)
\( \Rightarrow \quad \{ q \in C \text{ and } \sigma R \gamma, \text{ thus } q.\sigma \Rightarrow q.\gamma \} \)
\( q.\gamma \lor (\text{cl}.C.(t \land m)).(f_s.\gamma) \)
\( \Rightarrow \quad \{ \text{rewriting with } s \text{ instead of } f_s \} \)
\( q.\gamma \lor (s.(\text{cl}.C.(t \land m))).\gamma \)
\( \Rightarrow \quad \{ [t \land s.m \Rightarrow t], \ t \in C, \text{ cl}.C \text{ monotonic, thus } (\text{cl}.C.(t \land s.m)).\gamma \Rightarrow t.\gamma \} \)
\( (t \land (q \lor s.(\text{cl}.C.(t \land m)))).\gamma \)

5.2 Monotonicity

This section gives a composition theorem based on monotonicity with respect to a partial order \( \leq \). We use the following definitions. A predicate \( q \) is called monotonic with respect to \( \leq \) if
\[ (\forall \sigma, \gamma : q.\sigma \land (\sigma \leq \gamma) \Rightarrow q.\gamma). \tag{55} \]
A function \( f \) is called monotonic with respect to \( \leq \) if
\[ (\forall \sigma, \gamma : (\sigma \leq \gamma) \Rightarrow f.\sigma \leq f.\gamma). \tag{56} \]
A function \( f \) is called non-decreasing with respect to \( \leq \) if
\[ (\forall \sigma : \sigma \leq f.\sigma).
\tag{57} \]
If we take as \( C \) the set of all predicates that are monotonic with respect to \( \leq \), then
\[ (\forall s : s \in S_F : f_s \text{ is monotonic with respect to } \leq) \]
implies (54), and
\[ (\forall s : s \in S_G : g_s \text{ is non-decreasing with respect to } \leq) \]
implies that all predicates in \( C \) are stable in \( G \) and therefore satisfy (42), so that we have the following corollary. For a partial order \( \leq \):
\[ q \text{ is monotonic with respect to } \leq \]
\[ (\forall s : s \in S_F : f_s \text{ is monotonic with respect to } \leq) \]
\[ (\forall s : s \in S_G : g_s \text{ is non-decreasing with respect to } \leq) \]
\[ \Rightarrow \]
\[ \text{wlt}.F.q \leadsto_{F \| G} q \]

This result can be applied in many programming situations. One example is PCN (see, for example, [FOT]), where processes communicate via so-called definitional variables. A definitional variable is initially undefined and may be assigned a value at most once. We can express this as monotonicity with respect to the partial order given by \( (\sigma \leq \gamma) \equiv (\sigma \text{ undefined} \lor \sigma = \gamma) \). Another example is processes that communicate by message passing, where the partial order is given by the lengths of the message sequences that have been sent along the communication channels.

6 Generalized commutativity conditions

The importance of commutativity in program composition has been known for some time. Both Lipton [Lip75] and Misra [Mi91b] have proposed relevant conditions, and their relationship has been explored by Rao [Rao92]. Here, we give generalized definitions of both Lipton and Misra commutativity and prove a composition theorem using these results. The advantage of our results is that they apply when \( q \) in \( p \leadsto q \) is not stable.
The commutativity conditions are conditions on functions; we therefore assume that all commands are deterministic and are expressed by functional state transformers \( f_s \) for program \( F \) and \( g_t \) for program \( G \). In addition, we assume that for each function there is a guard predicate, written \( b_s \) for \( f_s \), and that if the guard predicate is false, then the state is unchanged, i.e.,
\[ [\neg b_s \Rightarrow (f_s = \text{id})] \]
where \( \text{id} \) is the identity function. If \( b_s.\sigma \), then we say that \( f_s \) is enabled at \( \sigma \) and write this as \( [\sigma]f_s \). Now assume that we have a second function \( g_t \) with guard \( c_t \). The composition of functions \( f_s g_t \) is evaluated from the right, i.e., \( f_s g_t.\sigma = f_s.(g_t.\sigma) \), and we define enabledness for the composition as
\[ [\sigma]f_s g_t \equiv c_t.\sigma \land b_s.(g_t.\sigma). \tag{58} \]

**Left Lipton commutativity outside \( q \)** Lipton proposed a commutativity condition where left commutativity can be briefly stated as: if, for all states \( \sigma \), \( [\sigma]fg \), then \( [\sigma]gf \) and \( fg.\sigma = gf.\sigma \); i.e., if \( fg \) is enabled, then so is \( gf \), and both give the same result. Here, we give a definition that applies to states where some predicate \( q \) does not hold. For two guarded functions \( f \) and \( g \), left Lipton commutativity outside \( q \) is given by:
\[ (f \ \text{co}_\ell.q\ g) \equiv (\forall \sigma : \neg q.\sigma \land [\sigma]fg \land \neg q.(g.\sigma) : [\sigma]gf \land fg.\sigma = gf.\sigma \land (q.(f.\sigma) \Rightarrow q.(fg.\sigma))) \tag{59} \]
Operationally, left Lipton commutativity outside \( q \) can be described as follows: if \( \sigma \) is outside \( q \) (i.e., \( \neg q.\sigma \)), \( fg \) is enabled, and \( g.\sigma \) is outside \( q \), then \( gf \) is enabled and \( fg.\sigma = gf.\sigma \), and if the result \( gf.\sigma \) is outside \( q \), then \( f.\sigma \) is outside \( q \).
Note that if \( q = \text{false} \), we get the original definition given in [Lip75, Rao92]. Left Lipton commutativity is extended to programs by requiring all pairs of functions from the programs to commute. For two programs \( F \) and \( G \):
\[ (F \ \text{co}_\ell.q\ G) \equiv (\forall f_s, g_t : s \in S \land t \in T : f_s \ \text{co}_\ell.q\ g_t) \tag{60} \]
where \( S \) and \( T \) index the commands of \( F \) and \( G \), respectively.

**Misra commutativity outside \( q \)** Misra defined a slightly different commutativity condition. Two functions \( f \) and \( g \) Misra commute if, at points where both are enabled, both compositions are enabled and give the same result. As above, we give a modified condition that applies to states where a predicate \( q \) does not hold. For two guarded functions \( f \) and \( g \):
\[ (f \ \text{co}_m.q\ g) \equiv (\forall \sigma : \neg q.\sigma \land [\sigma]f \land [\sigma]g : [\sigma]fg \land [\sigma]gf \land fg.\sigma = gf.\sigma \land (q.(f.\sigma) \lor q.(g.\sigma) \Rightarrow q.(fg.\sigma) \lor (q.(f.\sigma) \land q.(g.\sigma)))) \tag{61} \]
Operationally, this can be described as follows: if \( \sigma \) is outside \( q \) and both \( f \) and \( g \) are enabled, then \( fg \) and \( gf \) are enabled and \( fg.\sigma = gf.\sigma \); and if the result \( fg.\sigma \) is outside \( q \) and one of \( f.\sigma \), \( g.\sigma \) is outside \( q \), then both are. If \( q = \text{false} \), then we get the original definition of [Mi91b]. Misra commutativity is extended to programs in the obvious way. For two programs \( F \) and \( G \):
\[ (F \ \text{co}_m.q\ G) \equiv (\forall f_s, g_t : s \in S \land t \in T : f_s \ \text{co}_m.q\ g_t) \tag{62} \]

Now we show the main result of this section: if we have left Lipton and Misra commutativity outside \( q \), then the set of predicates that are stable.\(G\)-not-leaving-\(q\) (48) is a progress set for \( (F, \text{true}, q) \).
**Commutativity outside \( q \) and stable.\(G\)-not-leaving-\(q\)**
\[ (F \ \text{co}_\ell.q\ G) \land (F \ \text{co}_m.q\ G) \]
\[ \Rightarrow \]
the set of all predicates that are stable.\(G\)-not-leaving-\(q\) is a progress set for \( (F, \text{true}, q) \).

**Proof** Since \( q \) is stable.\(G\)-not-leaving-\(q\), it is sufficient to show that (40) holds, or the stronger (54) with the relation \( \text{Reach}.G.\text{nl}.q \) associated with (48), i.e.,
\[ (\forall s : s \in S : \sigma \ \text{Reach}.G.\text{nl}.q\ \gamma \Rightarrow q.\sigma \lor q.\gamma \lor (f_s.\sigma \ \text{Reach}.G.\text{nl}.q\ f_s.\gamma)) \]

(1) Case \( q.\gamma \): the disjunct \( q.\gamma \) of the consequent holds trivially.

(2) Case \( \neg q.\gamma \):

(2a) Unfolding the definition of \( \text{Reach}.G.\text{nl}.q \):

\( (\sigma \ \text{Reach}.G.\text{nl}.q\ \gamma) \land (\sigma \neq \gamma) \land \neg q.\gamma \)
\( \Rightarrow \quad \{ \text{definition of Reach}.G.\text{nl}.q, \text{(48)} \} \)
\( (\exists \sigma = \sigma_0, \sigma_1, \ldots, \sigma_n = \gamma : (\forall i : 0 \leq i < n : (\exists t : t \in T : \sigma_{i+1} = g_t.\sigma_i)) \land (\forall i : 0 \leq i < n : q.\sigma_i \Rightarrow q.\sigma_{i+1})) \)
\( \Rightarrow \quad \{ \text{omitting all } \sigma_i \text{ with } \sigma_{i+1} = \sigma_i \} \)
\( (\exists \sigma = \sigma_0, \sigma_1, \ldots, \sigma_n = \gamma : (\forall i : 0 \leq i < n : (\exists t : t \in T : [\sigma_i]g_t \land \sigma_{i+1} = g_t.\sigma_i)) \land (\forall i : 0 \leq i < n : q.\sigma_i \Rightarrow q.\sigma_{i+1})) \)
\( \Rightarrow \quad \{ \neg q.\gamma, \text{ so by the sequence condition } \neg q.\sigma_i \text{ for all } i \} \)
\( (\exists \sigma = \sigma_0, \sigma_1, \ldots, \sigma_n = \gamma : (\forall i : 0 \leq i < n : (\exists t : t \in T : [\sigma_i]g_t \land \sigma_{i+1} = g_t.\sigma_i)) \land (\forall i : 0 \leq i \leq n : \neg q.\sigma_i)) \)

(2b) For \( s \in S \), construct a corresponding sequence for \( f_s \):
\[ \sigma'_i = f_s.\sigma_i \text{ if } f_s \text{ is enabled at } \sigma_i, \quad \sigma'_i = \sigma_i \text{ otherwise} \quad (0 \leq i \leq n) \]
Note that \( \sigma'_0 = f_s.\sigma \) and \( \sigma'_n = f_s.\gamma \), since \( [\neg b_s \Rightarrow (f_s = \text{id})] \).

(2c) It is now sufficient to prove:
\[ (\forall i : 0 \leq i < n : (\exists t : t \in T : \sigma'_{i+1} = g_t.\sigma'_i) \land (q.\sigma'_i \Rightarrow q.\sigma'_{i+1})) \]
We distinguish three cases; in each, \( g_t \) is the command with \( \sigma_{i+1} = g_t.\sigma_i \) from (2a).

(2c1) \( \sigma'_{i+1} \neq \sigma_{i+1} \):

\( \sigma'_{i+1} \neq \sigma_{i+1} \)
\( \Rightarrow \quad \{ f_s \text{ must be enabled at } \sigma_{i+1} \} \)
\( [\sigma_{i+1}]f_s \land \sigma'_{i+1} = f_s.\sigma_{i+1} \)
\( \Rightarrow \quad \{ \text{by (2a), } \neg q.\sigma_i \land [\sigma_i]g_t \land \neg q.\sigma_{i+1} \land \sigma_{i+1} = g_t.\sigma_i; \text{ hence } [\sigma_i]f_s g_t \land \neg q.(g_t.\sigma_i) \} \)
\( \neg q.\sigma_i \land [\sigma_i]f_s g_t \land \neg q.(g_t.\sigma_i) \land \sigma'_{i+1} = f_s.g_t.\sigma_i \)
\( \Rightarrow \quad \{ F \ \text{co}_\ell.q\ G \} \)
\( [\sigma_i]g_t f_s \land f_s.g_t.\sigma_i = g_t.f_s.\sigma_i \land (q.(f_s.\sigma_i) \Rightarrow q.(g_t.f_s.\sigma_i)) \)
\( \Rightarrow \quad \{ [\sigma_i]g_t f_s \text{ implies } [\sigma_i]f_s, \text{ hence } \sigma'_i = f_s.\sigma_i \} \)
\( \sigma'_{i+1} = g_t.\sigma'_i \land (q.\sigma'_i \Rightarrow q.\sigma'_{i+1}) \)

(2c2) \( \sigma'_{i+1} = \sigma_{i+1} \land \sigma'_i \neq \sigma_i \):

\( \sigma'_i \neq \sigma_i \)
\( \Rightarrow \quad \{ f_s \text{ must be enabled at } \sigma_i; \text{ by (2a), } g_t \text{ is enabled at } \sigma_i, \ \neg q.\sigma_i \text{ and } \neg q.(g_t.\sigma_i) \} \)
\( \neg q.\sigma_i \land [\sigma_i]f_s \land [\sigma_i]g_t \land \sigma'_i = f_s.\sigma_i \)
\( \Rightarrow \quad \{ F \ \text{co}_m.q\ G \} \)
\( [\sigma_i]f_s g_t \land f_s.g_t.\sigma_i = g_t.f_s.\sigma_i \land (q.(f_s.\sigma_i) \Rightarrow q.(f_s.g_t.\sigma_i) \lor (q.(f_s.\sigma_i) \land q.(g_t.\sigma_i))) \)
\( \Rightarrow \quad \{ [\sigma_i]f_s g_t, \text{ so } \sigma'_{i+1} = f_s.\sigma_{i+1} = f_s.g_t.\sigma_i; \ \neg q.(g_t.\sigma_i) \text{ eliminates the last disjunct} \} \)
\( \sigma'_{i+1} = g_t.\sigma'_i \land (q.\sigma'_i \Rightarrow q.\sigma'_{i+1}) \)

(2c3) \( \sigma'_{i+1} = \sigma_{i+1} \land \sigma'_i = \sigma_i \): By (2a), \( \sigma'_{i+1} = \sigma_{i+1} = g_t.\sigma_i = g_t.\sigma'_i \), and \( q.\sigma'_i \Rightarrow q.\sigma'_{i+1} \) holds since \( \neg q.\sigma_i \).

Now we can state the theorem on commutativity, which follows from the progress set union theorem.

**Commutativity outside \( q \) theorem**
\[ p \leadsto_F q \]
\[ (F \ \text{co}_\ell.q\ G) \land (F \ \text{co}_m.q\ G) \]
\[ \Rightarrow \]
\[ p \leadsto_{F \| G} q \tag{63} \]

The next example applies the theorem to a simple handshaking protocol. The results of Rao are not applicable here since the target predicate is not stable.

**Example: Consumer and producer** Variable \( x \) is a local variable of \( F \), \( y \) of \( G \). Both programs share a one-element buffer \( b \).

**F : Consumer**
\[ \text{consume} : x, b := b, \bot \ \text{ if } b \neq \bot \]
additional assignments that do not modify \( b \) or any variable of \( G \)

**G : Producer**
\[ \text{produce} : b := y \ \text{ if } b = \bot \]
additional assignments that do not modify \( b \) or any variable of \( F \)

Now \( b = k \leadsto_F b = \bot \). We want to apply the theorem to show \( b = k \leadsto_{F \| G} b = \bot \). With \( q = (b = \bot) \), the conditions of the theorem (63) hold:

1. \( b = k \leadsto_F b = \bot \), as noted above.
2. \( (F \ \text{co}_\ell.q\ G) \land (F \ \text{co}_m.q\ G) \): Since there is no interaction between \( F \) and \( G \) except via \( b \), and the value of \( q \) is changed only by consume and produce, we need only look at pairs of functions involving consume and produce. The antecedents \( \neg q.\sigma \land [\sigma]fg \land \neg q.(g.\sigma) \) and \( \neg q.\sigma \land [\sigma]f \land [\sigma]g \) never hold if \( g = \text{produce} \), since produce is enabled only where \( q \) holds, so that only the case with \( f = \text{consume} \) and \( g \neq \text{produce} \) remains. The commutativity in this case follows because there is no interaction between consume and \( g \), and because \( q \) holds after every application of consume, so the conditions on \( q \) in (59) and (61) are satisfied.

### 7 Conclusions

Starting from "first principles", we gave a general composition theorem for leads-to, then specialized it to a theorem based on the notion of a progress set.
Progress sets proved to be an extremely useful device: by choosing different definitions of progress sets, we were able to obtain several different theorems for composing programs without invalidating leads-to properties.
Chapter 13 discusses interrogation of data, which can happen at varied levels and at many moments during analysis. Already in Chapter 6 we referred to text search tools, where the content is explored. Interrogation can also happen in terms of coding work you have previously achieved: discover relationships between codes which co-occur in some way in the data, or compare them across subsets of data (indicated by the application of variables or attributes to data). Types of queries vary from simple to complicated tasks; in some software, summarized, charted information is available, with the results already computed in the background. See all coloured illustrations (from the book) of software tasks and functions, numbered in chapter order.

Sections included in the chapter:

- Incremental and iterative nature of queries
- Creating signposts for further queries
- Identify patterns and relationships
- Qualitative cross tabulations
- Quality control, improving interpretive process
- Tables and matrices
- Charts and graphs

**Exercises: interrogating the dataset**

HyperRESEARCH's reporting features take your current Case Filters and Code Filters into account when generating reports. Set up your Case and Code filters and your code sorting preferences prior to running any report if you wish to interrogate a subset of your cases and codes.

**Filtering Cases**

HyperRESEARCH allows you to work with subsets of your cases by using case filters. When you filter cases, the subset of cases you've chosen is shown, and other cases are temporarily hidden. When browsing through your cases or creating a report or frequency report, HyperRESEARCH shows only those cases that are currently filtered. To remove the filter and show all cases, choose Cases ➤ Filter Cases ➤ All Cases.

Tip: You can choose a case filter either from Cases ➤ Filter Cases or from the Filter Cases popup menu near the top of the study window.

**All cases**

The case filter default is All Cases.
HyperRESEARCH will show all your cases unless instructed otherwise. To show all cases again after having filtered a subset of cases, choose Cases ➤ Filter Cases ➤ All Cases.

Please contact info@qdaservices.com if you intend to use these materials for teaching purposes. Visit the companion website at https://study.sagepub.com/using-software-in-qualitative-research

**Filtering cases by name**

By choosing Cases ➤ Filter Cases ➤ By Name, you can choose specific cases manually. HyperRESEARCH displays your case list. Click a case name to select it. To select more than one case, click the first case, then Control-click (Windows) or Command-click (Mac) to select additional cases.

**Filtering cases by criteria**

By choosing Cases ➤ Filter Cases ➤ By Criteria, you can filter cases based on the presence or absence of codes you specify, and on specific code relationships. By choosing items from the Build Criteria popup menu, you can filter cases based on whether they include one or more codes, or whether they don't include a code (using the NOT operator). You can also choose a function that specifies a relationship between two codes.

![Case Filtering Criteria](image)

See the Expressions and Filtering Criteria topic for more information on specifying criteria for case filtering.

**Filtering only the current case**

To hide all cases except for the one currently displayed in the study window, choose Cases ➤ Current Case Only.

**Undoing all filtering and making all cases visible again**

To show all the hidden cases, choose Cases ➤ Filter Cases ➤ All Cases.

**Filtering and Sorting Code References**

HyperRESEARCH allows you to work with subsets of your code references by filtering certain codes and temporarily hiding all others. When browsing through your code references, scanning the Codes in Context in a source window, or generating a report or frequency report, HyperRESEARCH shows only the currently filtered code references.
To remove the filter and show all code references, choose Codes ➤ Filter Codes ➤ All Codes.

Tip: You can choose a code filter either from Codes ➤ Filter Codes or from the Filter Codes popup menu near the bottom of the study window.

Code filtering can be a powerful tool in analyzing your data, especially when used in combination with the case filtering tools. The analysis and reporting tools work with only those codes and cases currently filtered, making it easy to concentrate on certain portions of your data.

**All codes**

The code filter default is All Codes. HyperRESEARCH shows all your code references, regardless of which code you used for them, unless instructed otherwise. To show all code references again after having filtered a subset of codes, choose Codes ➤ Filter Codes ➤ All Codes.

**Filtering codes by name**

By choosing Codes ➤ Filter Codes ➤ By Name, you can choose specific code names manually. HyperRESEARCH displays a list of all codes in your study. Click a code name to select it. To select more than one code, click the first code, then Control-click (Windows) or Command-click (Mac) to select additional codes. HyperRESEARCH will display all code references that are assigned to any of the specified codes, and hide the rest of the code references. You can also select a group instead of a single code name. Selecting the group filters all the codes in the group, so this is a useful shortcut when you want to analyze the codes in one or more groups.

**Filtering codes by criteria**

By choosing Codes ➤ Filter Codes ➤ By Criteria, you can filter code references based on which codes they use. You can also choose a function that specifies a relationship between two codes.
See the Expressions and Filtering Criteria topic below for more information on specifying criteria for code filtering.

**Using the Code Map to filter codes**

If the Code Map window is open and one or more codes are marked, you can choose Codes ➤ Filter Codes ➤ By Map to filter only the codes that have been marked with yellow highlighting. All code references that use any of the highlighted codes will be filtered. (The Code Map window must be open to use this option.) For more information about using the Code Map and marking codes, see the Mapping Code Relations and Code Map Window topics.

**Filtering codes by source type**

You may filter code references based on the type of source material they've been assigned to. Code reference types include Text, Image, Movie or Audio, and Theme. The Text, Image, and Movie or Audio types refer to the type of source file. The Theme code type is a special type, not referring to any specific source material. The Theory Builder can add theme codes to the study. See the Testing Theories and Adding Theme Codes topics for more information about theme codes.

**Hiding all code references**

You can easily hide all code references by choosing Codes ➤ Filter Codes ➤ Unselect All. Although all your code references are of course saved (and easily retrieved by changing the code filter), they'll be hidden and out of the way while you apply new codes to the case.

**Hiding selected code references**

You can hide one or more code references by highlighting the code reference in the study window, then choosing Edit ➤ Hide Highlighted. The selected code references are hidden from view. To hide all code references in the current case that are not selected, select the code references you want to keep in view, then choose Edit ➤ Hide Others.
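The show/hide semantics described above can be pictured as simple set operations over code-reference records. This sketch is illustrative only: the record layout and the code names are invented for the example, not HyperRESEARCH's actual data format.

```python
# Sketch of filter semantics over code references.
# Record fields and code names are illustrative assumptions.
code_refs = [
    {'case': 'Joe L', 'code': 'combine', 'type': 'Text'},
    {'case': 'Joe L', 'code': 'reflect', 'type': 'Movie or Audio'},
    {'case': 'Ann P', 'code': 'combine', 'type': 'Image'},
    {'case': 'Ann P', 'code': 'theme1',  'type': 'Theme'},
]

def filter_by_name(refs, names):
    """Like Filter Codes ➤ By Name: keep refs whose code is in the chosen set."""
    return [r for r in refs if r['code'] in names]

def filter_by_source_type(refs, kind):
    """Filter by source type: Text, Image, Movie or Audio, or Theme."""
    return [r for r in refs if r['type'] == kind]

visible = filter_by_name(code_refs, {'combine'})
assert len(visible) == 2      # both 'combine' references stay visible

visible = filter_by_source_type(code_refs, 'Theme')
assert len(visible) == 1      # only the Theory Builder's theme code
```

The hidden references are not deleted; "removing the filter" simply means going back to the full `code_refs` list, which is the behaviour of Codes ➤ Filter Codes ➤ All Codes.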
**Undoing all filtering and making all codes visible again**

To show all the hidden code references, choose Codes ➤ Filter Codes ➤ All Codes.

**Sorting code references**

Normally, the list of code references in the study window appears in the order you coded them: when you code a new selection of source material, its code reference is added to the bottom of the list. You can sort the list of code references to make it easier to scan, or to make it simpler to select multiple code references by grouping them together. (For example, if you want to delete all the code references to a particular source, it will be easier to select them all at once if you first sort by source name.) Sorting also affects the order in which code references appear in reports. You can sort by any combination of the source's name, its type (text, picture or media), the code you used, or the position of the coded selection in the source. (If you have filtered cases, only the code references in the filtered cases are sorted. Any other cases are unchanged.) To sort code references, first choose Codes ➤ Sort. The Sort Codes window opens.

Click one or more of the buttons on the left. Each sort criterion you click appears under "Codes Sorted By".

**Sorting by code**

To put code references in order by the code name you used for each, click the Code Name button. "Code Name" appears in the Codes Sorted By box on the right. Then click Sort to re-order the code references in all currently filtered cases.

**Sorting by source file name**

To put code references in order by the name of the source file, click Source Name, then click Sort. (Remember that in HyperRESEARCH, you can use multiple sources in the same case, so the code references in a case might be from several different sources.)
**Sorting by type**

The type of a code reference is the type of source it refers to:

- Text (a plain text, RTF, Word, or HyperTRANSCRIBE file)
- Image (an image file)
- Movie (an audio or video file)
- Theme (a code added by the Theory Builder, not linked to a source)

To sort code references by their type, click Code Type, then click Sort.

**Sorting by the coded selection's position**

To put code references in the same order in which they appear in the source, click Code Position, then click Sort.

**Sorting with multiple sort criteria**

You might want to sort by several things at once. For example, if you've used several sources in one case, you might want to sort them by source name, and by position within each source. To sort by multiple criteria, click the buttons in the Sort Codes window in the order you want. (For example, if you want to sort by type, and within each type by code name, first click Code Type, then click Code Name.) The criteria appear in order in the Codes Sorted By box on the right.

**Analyzing Code Frequencies**

Checking the frequencies with which you've used your codes can help in analyzing both your data and the progress of your coding tasks. HyperRESEARCH also performs statistical analysis of the frequency with which codes are used across the cases in your study, allowing you to see which codes are broadly used and which are concentrated in certain cases.

**Creating a frequency report**

To create a report of the code frequencies in your study, choose Tools ➤ Frequency Report. In the Frequency Report window, you can select which cross-case statistical analysis options to include in the frequency report. You can also choose to display a bar graph that shows graphically how often each code has been used.
The frequency report shows all the codes in the Code Book, along with the total number of times each code has been used in your study, and whatever additional statistics you have chosen.

**Filtering and the frequency report**

If you have applied a code or case filter to work with a subset of codes and cases, only those codes and cases are shown in the frequency report. If you want to ignore the filter while creating the frequency report, choose the All Codes and Cases option under Codes and Cases to Include. (For more about code and case filtering, see the topics **Filtering and Sorting Code References** and **Filtering Cases**.)

**Statistical analysis of code usage across cases**

The frequency report lists all the codes in the Code Book, along with the total number of times each code has been used in your study. You can also optionally include statistical information about how the code is distributed across cases. These statistics are based on how often the code appears in each case in your study: the minimum, maximum, mean, and standard deviation of the code's frequency of use. Each option is displayed in a column of the frequency report. You can use all, none, or any combination of options. You can click the header of any column to sort by that column, so, for example, you can sort the list of codes by the minimum number of times each code is used.

**Minimum**

The smallest number of times this code has been used in any of your cases. If there are any cases in your study where the code is not used at all, the minimum is zero.

**Maximum**

The largest number of times this code has been used in any of your cases.

**Mean**

The average number (arithmetic mean) of the code's use across all cases in your study.

**Standard Deviation**

The standard deviation of the distribution of this code across the cases in your study.
The larger the standard deviation, the more variation there is in use of the code. For example, if a code is used the same number of times in each case, the standard deviation of its frequency is zero.

**Code frequency bar graph**

If you check the Bar Graph box in the Frequency Report window, a graphical representation of the Total column is included in the frequency report. Each row includes a horizontal bar whose length is proportional to the total number of times the code has been used in your study: the longer the bar, the more often the code is used. This is the same information as in the Total column, but it's presented in visual format to make it easier to scan.

**Exporting a frequency report or code matrix**

Once you've created the frequency report, you can print it or export it as a text file.

**Exporting the frequency report**

To export the report, click Export, then click Export as Shown and choose a name and location for the exported file. The file is exported as plain text (with a '.txt' extension) in tab-delimited format, and can be opened in any word processor or text editor, or in a spreadsheet program. When you export a frequency report, the bar graph is not included, but all other columns displayed are included.

**Exporting a code matrix**

You can also export a code matrix, which can be used with spreadsheet software. This matrix includes a row for each case in your study, and a column for each code. (The first row contains the names of all the codes, and the first column contains the case names.) Your matrix might look like this when you open it in a spreadsheet program:

Each cell of the grid contains information about the use of that column's code in that row's case. In the example above, the highlighted cell shows that the code "combine" was used 6 times in the case named "Joe L". To export a matrix of your code data, click Export, then click Export Matrix. HyperRESEARCH displays the Export Data window. First, you'll define the format of the exported file.
1. In the Export Options tab, make sure the Delimited File option is chosen.
2. If you want to export a frequency count, as in the example above, choose the Frequency Count option. If you only want to export whether each code was used at all in a given case, choose the Boolean Only option. (In this case, each cell of the grid contains 1 or 0 (zero), depending on whether the code was used in that case or not.)
3. Choose which character to use as the delimiter between columns: Comma or Tab. (If you're not sure which to use, try Tab.)

Next, you can rearrange the order of codes. This step is optional, but it can be useful if you want to export only certain codes, or if you want the columns to be in some order other than alphabetical order. You rearrange codes in the Code Order tab:

4. Click the Code Order tab.
5. The list of codes is shown in the order that the codes will be exported. To move a code, click the code to select it, then click Move Up or Move Down until the code is in the desired position. Repeat this step for each code until the entire list is in the desired order.

Finally, you'll export your file:

6. Click Export and specify a name and location for the exported file.

Your file is now ready to open in a spreadsheet or database program. (See also The Frequency Report Window information included in the exercises for Chapter 8.)

**Boolean Expressions and Filtering Criteria**

If you choose to filter cases or codes by criteria, or if you use the Theory Builder, HyperRESEARCH will ask you to build an expression that defines criteria to use in filtering your cases or codes. When filtering cases by criteria, or creating a theory rule set, the expression you build may be a combination of codes and code relationship functions, all linked by Boolean operators (AND and OR).
You may also test for the absence of codes by inserting the NOT operator before a code name or code relationship function in the expression.

Note: When filtering codes, the Boolean operators are not available. This is because when filtering codes, you are looking at each individual code reference and seeing whether it fits your criteria, rather than examining all the code references in a case.

**Building expressions to filter cases by criteria**

To specify the criteria for filtering cases, first choose Cases ➤ Filter Cases ➤ By Criteria. In the Case Filtering Criteria window, you'll use the Build Criteria popup menu.

**Building the expression**

When you first begin building your expression, the Build Criteria menu includes NOT, Function, and Code:

Choose NOT if you want to test cases for the absence of a code or code relationship function. (You'll specify the code or function next.)

Choose Function to have HyperRESEARCH test cases for the presence of certain code relationships based on their proximity to one another. In the Select a Function dialog box, choose the function you want to test for (Equals, Excludes, FollowedBy, Includes, Overlaps, or PrecededBy). Then choose the first code for the function, followed by the second code for the function. (See the Code Relationship Functions topic for more information about these functions.) If you first choose NOT before choosing Function, HyperRESEARCH will look for cases that do not have code references that match the specified code relationship.

Choose Code to specify a code name HyperRESEARCH should look for. If you first choose NOT followed by Code, HyperRESEARCH will look for cases that have not been coded with the specified code.

Once you've chosen a code or function, the Build Criteria menu changes to contain the Boolean operators AND and OR. Use these to define more complex relationships between multiple codes and code relationships for HyperRESEARCH to test your cases against.
Continue selecting functions and codes, and relating them with the AND or OR operators, until your expression is complete. When you click Select, HyperRESEARCH filters all the cases for which the expression is true. Cases for which the expression is false will be excluded from the current case filter, and temporarily hidden from view. (You can show them again by choosing Cases ➤ Filter Cases ➤ All Cases.)

**Adding parentheses**

Make sure to clarify any ambiguous portions of your expression by putting parentheses around terms you wish evaluated together (sub-expressions). See the Boolean Logic topic for more information about how expressions are evaluated. To place parentheses in your expression, you first enter the entire expression without parentheses, and then click the start and end of the parenthesized part of the expression. HyperRESEARCH automatically detects that the expression may be ambiguous, and asks whether you want to place parentheses around the codes or functions you've selected. For example, suppose you want to create the following expression:

IF chocolate AND (ice cream OR pudding)

To enter such an expression, follow these steps:

1. Enter the expression IF chocolate AND ice cream OR pudding without the parentheses, following the process described earlier in this topic.
2. Click "ice cream", the first code that you want to put inside parentheses.
3. Click "pudding", the last code that you want to put inside parentheses. The sub-expression "ice cream OR pudding" is selected.
4. Answer "Yes" in the dialog box.
A pair of parentheses is inserted surrounding the part of the expression you selected. You can also nest parentheses, using the same technique: click the first code, function, or already-parenthesized expression that you want to enclose, then click the last.

Tip: If your expression requires nested parentheses, work from the inside to the outside: first place the innermost set of parentheses, then the next, and so on until you reach the outermost pair of parentheses.

**Re-filtering an already-filtered set of cases**

You might want to filter a set of cases twice. For example, you might use Cases ➤ Filter Cases ➤ By Name to work with only the cases for your female subjects, and then later filter again by criteria in order to work with only the subset of subjects that are not only female, but have a particular code relationship. If your cases have already been filtered, and you want to use the Filter By Criteria dialog box to work with a subset of just the filtered cases, use the Filtered option under Cases to Test. If you use the Filtered option, only the cases that are currently filtered and also meet the new criteria you entered are shown. (In our example, cases with male subjects are not shown even if they fulfill the code relationship you specify, because those cases weren't already filtered.) To ignore any current filters and apply your new filter to all the cases in the study, select the All Cases option under Cases to Test.

**Building expressions to filter codes by criteria**

When filtering codes by criteria, the expression you build will simply be a series of code names and/or code relationship functions. HyperRESEARCH will select every code reference that matches any of the code names or any of the relationship functions you've specified. (See the Code Relationship Functions topic for more information about these functions.)
You can choose to filter the code references in the current case (choose the Current option under Cases to Test), or the code references in all filtered cases (choose Filtered).

Tip: If you just want to filter codes by name, and you don't need to use the code relationship functions, choose Codes ➤ Filter Codes ➤ By Name instead of Codes ➤ Filter Codes ➤ By Criteria. The By Criteria option will let you filter codes by name, but using By Name is simpler.

**Building expressions in the Theory Builder**

The Build Expression popup menu in the Theory Builder works the same way as the Build Criteria menu in the Case Filtering Criteria dialog box (see "Building Expressions to Filter Cases by Criteria" above). The only difference is how the expression is applied to your data. When evaluating expressions in a theory, any case for which the expression is true will be subject to the actions specified in the THEN section. Cases for which the expression evaluates to false will not have the associated actions performed. See Tutorial Four: Testing Theories, installed in the Documentation folder in the HyperRESEARCH folder, for more information.

**Boolean Logic**

HyperRESEARCH uses Boolean logic and terms – NOT, AND, and OR – to define filtering criteria for cases, and to construct rules for testing theories with the Theory Builder. You probably remember Boolean logic from high school algebra, but here's a quick refresher course.

**Operators**

Boolean logic describes the relationship of two or more terms to one another. HyperRESEARCH uses three Boolean operators – AND, OR, and NOT – to show the relationships between two or more codes or code relationship functions. Let's see what effect each of these three operators has on two codes or code relationship functions joined by the operator.

AND: The operator AND links two codes in the same way the word "and" does in an English sentence. For example, if someone said to you, "Buy cream and milk at the store," you would buy both items, not just one.
The Boolean operator AND indicates that both codes or functions must be present in order for the expression to be true.

OR: The operator OR links two codes in the same way the conjunction "or" does in an English sentence. For example, if someone said to you, "Buy cream or milk at the store," you would buy one of the two items, but not necessarily both. The Boolean operator OR indicates that one or both codes or functions must be present in order for the expression to be found true.

NOT: The operator NOT indicates that HyperRESEARCH should look for the absence of the following code. It works in the same way the words "not" or "don't" do in an English sentence. For example, if someone said to you, "Don't buy milk at the store," you wouldn't buy milk. The Boolean operator NOT indicates that a code or function must be absent in order for the expression to be true.

Order of precedence

When HyperRESEARCH encounters an expression with more than one Boolean operator (for example, "thisThing AND thatThing OR theOtherThing"), operators with the highest precedence are evaluated before those with lower precedence. When there are two operators of the same precedence, the expression is evaluated from left to right. HyperRESEARCH operators have this order of precedence:

1. NOT
2. AND
3. OR

For example, let's look at how HyperRESEARCH evaluates this expression:

true AND true OR false

First it evaluates the AND to arrive at:

true OR false

...which evaluates to true.

Parenthetical expressions

For complex expressions, you can override HyperRESEARCH's order of precedence by placing parentheses around expressions. Things are pretty simple when you're just dealing with two codes at a time.

Please contact info@qdaservices.com if you intend to use these materials for teaching purposes. Visit the companion website at https://study.sagepub.com/using-software-in-qualitative-research
But when an expression attempts to define a relationship between several codes and/or code relationship functions, it may be ambiguous. Just as punctuation helps us keep complicated English sentences from being ambiguous, so parenthetical expressions help us keep HyperRESEARCH expressions from being ambiguous.

For example, if someone said to you, "Buy butter and milk or eggs at the store", you might not understand immediately what that person meant. The "or" between the milk and eggs makes the meaning somewhat ambiguous. Do they mean that the eggs and milk are interchangeable, that they don't mind which you buy as long as you buy butter and one of the other two? Or do they want eggs if you can't get both butter and milk? This same type of problem can crop up quite easily in a HyperRESEARCH expression, so HyperRESEARCH also uses parentheses to eliminate these types of ambiguities in expressions or selection criteria.

To continue with the shopping example, if the shopping list were expressed as:

buy butter AND (milk OR eggs)

you would understand immediately that you should get butter and at least one of the other two items. And if the shopping list were expressed as:

buy (butter AND milk) OR eggs

you would know that you should get either both the first two items, or the third. HyperRESEARCH expressions work just the same way. For example, the expression:

(code1 OR code2 OR code3) AND code4

would be found true if code4 and any of the three codes in parentheses occurred in the same case. If you don't include parentheses in a Boolean expression, HyperRESEARCH interprets your statement according to the order of precedence described in the previous section.
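Python's `not`, `and`, and `or` happen to follow the same precedence order (NOT, then AND, then OR), so the effect of adding or omitting parentheses can be checked directly (a sketch, not HyperRESEARCH syntax):

```python
# NOT binds tightest, then AND, then OR -- the same order HyperRESEARCH uses.
code1, code2, code3, code4 = True, False, False, False

# Without parentheses, AND is evaluated before OR...
bare = code1 or code2 or code3 and code4
# ...so it means the same as this explicitly parenthesized form:
grouped_and = code1 or code2 or (code3 and code4)
assert bare == grouped_and

# Grouping the ORs instead can change the result:
grouped_or = (code1 or code2 or code3) and code4

print(bare, grouped_or)  # -> True False
```

With these values, the implicit grouping is true (code1 alone satisfies the OR chain), while the explicitly OR-grouped form is false because code4 is absent.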
When HyperRESEARCH encounters a Boolean expression without parentheses, it performs all the AND statements first, then performs all the OR statements. For instance, if you create the Boolean expression

code1 OR code2 OR code3 AND code4

leaving off all the parentheses, HyperRESEARCH deals with the last two statements, code3 AND code4, first, then goes back and tries to figure out what to do with all the OR statements. It interprets it as though you had parenthesized the expression like this:

code1 OR code2 OR (code3 AND code4)

Code Relationship (Proximity) Functions

When filtering codes or cases by criteria, or testing theories, one of your options is to have HyperRESEARCH look for code references based on their relationship to other code references. You do this by including a function in your filtering criteria or expression. The code relationship functions are Equals, Excludes, FollowedBy, Includes, Overlaps, and PrecededBy.

How code relationship functions work

Each function compares the positions of each code reference of one code against the positions of each code reference of a second code. (A code reference's position is the location of the coded segment in the source.) A code relationship function is defined as follows:

Function(Code 1, Code 2)

For example, the function that describes the codes "wants kids" and "combining work and family" applied to exactly the same segment of source material is:

Equals("wants kids","combining work and family")

The functions

The **Equals** function looks for code references for the two codes that exactly match each other. (Code references include the source type, source name, and position of the coded material within the source. All these must match for the code references to be considered equal.)

The **Excludes** function looks for code references for *Code 1* that do not overlap any code references for *Code 2* in any way – with not even one character (for text sources), pixel (images), or frame (video) in common.
The **FollowedBy** function looks for code references where the beginning of *Code 2* comes after the end of *Code 1*:

• For text sources, there must be at least one character between the end of *Code 1* and the start of *Code 2*.
• For image sources, the rectangle for *Code 2* must be either below or to the right of the rectangle for *Code 1*.
• For audio or video sources, *Code 2* must start at least one second after the end of *Code 1*.

The **Includes** function looks for code references where *Code 1* completely encompasses the material coded with *Code 2*. The source material for the *Code 2* reference must be entirely contained within the source material for the *Code 1* reference. (The **Includes** function will also look for exact matches, as with the **Equals** function.)

The **Overlaps** function looks for code references for *Code 1* that overlap or intersect the code references for *Code 2* in any way. The shared source material may be as little as a single character, a single pixel on an image, or a single fraction of a second of audio or video. If any portion of the source material has been coded with both *Code 1* and *Code 2*, the **Overlaps** function will select those code references.

The **PrecededBy** function looks for code references where the end of *Code 1* comes before the start of *Code 2*.

**Note:** Except for the **Excludes** function, when filtering code references by criteria, HyperRESEARCH will look for all references for both codes (whichever codes are specified as *Code 1* and *Code 2*) that match the specified function.

Filtering cases by criteria using functions

A single match in a case is enough to filter that case. When filtering cases by criteria, any cases for which one or more *Code 1* references and one or more *Code 2* references match the specified function will be selected.
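For text sources, several of the per-pair tests above can be sketched as follows. This is a hypothetical model that treats each code reference as a half-open character range `(start, end)`; the real implementation also compares source type and source name, and PrecededBy is omitted here:

```python
# Hypothetical sketch of the relationship tests for text sources.
# Each code reference is modeled as a half-open character range (start, end).

def equals(a, b):
    return a == b

def overlaps(a, b):
    # Any shared character counts as an overlap.
    return a[0] < b[1] and b[0] < a[1]

def excludes(a, b):
    # Not even one character in common.
    return not overlaps(a, b)

def includes(a, b):
    # Code 1 (a) entirely contains Code 2 (b); exact matches also count.
    return a[0] <= b[0] and b[1] <= a[1]

def followed_by(a, b):
    # Code 2 starts after the end of Code 1, with at least one
    # character in between.
    return b[0] > a[1]

# Filtering cases by criteria is a single-match test: a case passes if ANY
# Code 1 reference satisfies the chosen function against the Code 2 references.
code1_refs = [(0, 5), (20, 25)]
code2_refs = [(3, 8)]
case_matches_excludes = any(
    all(excludes(r1, r2) for r2 in code2_refs) for r1 in code1_refs
)
print(case_matches_excludes)  # True: (20, 25) shares nothing with (3, 8)
```

The final check mirrors the single-match rule: the case matches Excludes even though its other Code 1 reference, (0, 5), does overlap Code 2.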
For example, if you filter cases based on the **Excludes** function, the case will be filtered even if there's only one *Code 1* reference that does not overlap any part of the source for *Code 2*, and even if the case contains other references to these two codes that do overlap. To filter cases where all the references to *Code 1* exclude any reference to *Code 2*, you would have HyperRESEARCH test for NOT Overlaps(*Code 1*, *Code 2*) – in other words, cases where there are no overlapping code references between *Code 1* and *Code 2*. To filter cases where every reference to *Code 1* overlaps at least one reference to *Code 2*, you would have HyperRESEARCH test for NOT Excludes(*Code 1*, *Code 2*) – in other words, cases where there are no *Code 1* references that don't overlap *Code 2*.

Adding new code relationship functions

Researchware releases additional code relationship functions from time to time, which you can download and add to your copy of HyperRESEARCH. In the Program tab of the Options/Preferences window, you can use the Add button to add code relationship functions to the standard set provided with HyperRESEARCH (Equals, Excludes, FollowedBy, Includes, Overlaps, and PrecededBy). For more information, see the "Programs" section in the Options/Preferences Window topic.

Selecting codes with the code map

HyperRESEARCH includes a Code Map feature. One of the most powerful ways to use a code map is to use it to select a subset of codes to work with. (For more information about selecting subsets of codes, see the Filtering and Sorting Code References topic.) Start by marking the codes you want to work with. To mark a code, click the Mark tool, then click each code you want to select. Marked codes are highlighted in a bright yellow color so they're easy to see. To unmark a code, just click it again with the Mark tool.
Once you have marked the codes you want, click and hold down the mouse on the Mark tool to display the Mark menu. Then choose Apply Marked Set to Study Window from the Mark menu. (This action is equivalent to choosing Codes ➤ Filter Codes ➤ By Map.) The code references that correspond to the marked codes are displayed in the study window, and all other code references are temporarily hidden.

Testing Theories

The HyperRESEARCH Theory Builder lets you create a model of the relationships in your data, and test the validity of that model by checking how codes are related in your coded data for each of your cases. Using the Theory Builder can help determine whether or not the data supports any assumptions and inferences you may have concerning your study. For an introduction to the Theory Builder as used in an example study, see Tutorial 7: The Theory Builder.

Before working with the Theory Builder, you must develop a theory about your data. Once you have formulated a theory, you must express it in terms HyperRESEARCH can understand. You can then work out the best way to define each rule in the theory in terms of codes. Each rule in a HyperRESEARCH theory consists of two parts: one or more antecedents (IF statements, assumptions to check for) and one or more consequents (THEN statements, actions to take if the IF statement is true). Both antecedents and consequents can be expressed in terms of codes and code functions. The antecedents are codes that define your assumptions. The consequents are the consequences that result when an assumption is borne out by the data.

HyperRESEARCH treats a set of antecedents and consequents as a rule, and checks a rule's validity against the available data (that is, your coded source material) whenever you test the theory. If a rule's antecedents prove true, HyperRESEARCH can then use that rule's consequents, or actions, to support further rules. It does this by temporarily adding or removing specified codes from the case being tested.
A code added as a result of a rule's consequents is a Theme, and may be added permanently to the case if you check the Add Themes to Cases box. Theme codes are based on the presence or absence of given codes rather than any statements inherent in the source material. (For more information about theme codes, see Adding Theme Codes.)

The Theory Builder Window

To open the Theory Builder window, choose Theory ➤ New Theory. The Theory Builder window consists of two main sections: the Theory Rule List in the top half, and the Rule Editor in the bottom half. The triangle labeled Show Rule Editor hides and shows the Rule Editor. The Theory Rule List simply lists the rules you've defined in the Rule Editor. These rules translate your theory into terms that HyperRESEARCH can understand.

The Theory Rule List

The Theory Rule List displays the rules you've defined in the Rule Editor. When you have finished building and adding new rules, click Export or Display to test the theory. If you click Display, the results are shown in a window. If you click Export, the results are saved in a text file whose name you enter. Click Cancel to close the Theory Builder window.

The Rule Editor

The Rule Editor has two main parts: the IF section, where you create the expression to test, and the THEN section, where you specify what to do if the IF expression is true. When you have finished specifying both the IF and THEN sections, click the OK button at the upper right of the Rule Editor section to add the new rule to the Theory Rule List in the top of the window.

Rule Editor: The IF section

In the IF section, you create a statement about codes in your study, using the Build Expression popup menu. The content of the Build Expression menu varies depending on the context. When you begin, the menu contains the items Function, Code, and NOT. After you add a code or function, the menu changes to contain AND and OR.
The Build Expression menu has the following menu items:

**Function**: Chooses one of the functions Equals, Excludes, FollowedBy, Includes, Overlaps, or PrecededBy. (For information about these functions, see the Code Relationship Functions topic.)

**Code**: Selects a code from the Code Book.

**NOT**: Adds NOT to the rule. The Boolean operator NOT looks for cases where the statement following it is not true. For example, if you choose NOT, then choose a code, the rule specifies all the cases where that code is not used.

**AND**: Adds AND to the rule. The Boolean operator AND looks for cases where the statement before and the statement after AND are both true. For example, if you choose a code, then AND, then another code, the rule specifies all the cases where both codes are used.

**OR**: Adds OR to the rule. The Boolean operator OR looks for cases where the statement before OR, the statement after it, or both are true. For example, if you choose a code, then OR, then another code, the rule specifies all the cases where one or both codes are used.

The expression you are building appears in the box below the Build Expression menu. (For more information about building an IF statement, see the Expressions and Filtering Criteria topic.) To remove the entire expression and start over, click Clear IF.

Rule Editor: The THEN section

In the THEN section, you specify what to do if the statements in the IF section are found to be true, using the Actions menu:

**Add Goal**: A goal is a final endpoint in a set of rules. If the statement in the IF section is found to be true for a case and a goal is added, HyperRESEARCH considers the theory to have been proven for that case. (Most theories have only one goal, although it is possible to include multiple goals in a single theory.) Therefore, you usually use Add Goal in the final rule of your theory.

**Add Code**: Temporarily adds a code you specify to any case where the IF statement is found to be true.
You can use this temporary code in other, later rules in the same theory. Adding a temporary code is a handy way of marking a case for later use, and it also lets you specify a code as functionally equivalent to a statement being true: if the code is in a case, you know that the statement is true for that case. If the Add Themes to Cases box at the bottom of the Theory Rule List section is checked, the code is permanently added to the case and can be seen in the study window. Otherwise, the code is only temporary, and is not seen in the study window.

**Remove Code**: Temporarily removes a code you specify, for any case where the IF statement is found to be true. Temporarily removing a code is a handy way of hiding the code from other, later rules in the same theory. This command affects only the rules in the current theory. It does not change the Code Book and does not remove any code references from your study. Unlike the Add Code command, the Remove Code command cannot be made permanent by checking a box. It is always temporary.

Adding Theme Codes

The Theory Builder temporarily adds theme codes to a case. Normally, these additional codes exist only during the test itself; they are not permanently added to the case. However, by checking the Add Themes to Cases box in the Theory Builder window, you can add these codes to the case as codes of type "Theme". Themes are not derived directly from your data in the coding process. Rather, they're based on the presence or absence of certain codes in the case. They don't point to any underlying source material (of type "Text", "Image", or "Movie"). They represent themes you've inferred from existing coding. A code of type "Theme", unlike other codes, does not have any source material associated with it. It is part of the case, but has no source file or source reference.
For example, in the QDA Software study (found in the Documentation folder installed with HyperRESEARCH), the following inference may be made in the Rule Editor:

IF computer more efficient AND stays close to data AND NOT distant from data
THEN NOT FRANKENSTEINS MONSTER

That is: if during the coding process you found source material that warranted the codes computer more efficient and stays close to data, but you didn't find any source material that supported the distant from data code, then it's logical to infer that the respondent doesn't consider qualitative data analysis software to be a Frankenstein's Monster. If you place this rule in the Theory Builder, and check the Add Themes to Cases box, then the code NOT FRANKENSTEINS MONSTER is added to each case that meets the criteria.

You can use any code as a theme code, even codes you've already used in your study for coding. If you want your theme codes to be distinct from your normal codes (to be applied only as themes, rather than assigned directly to the data), you may prefer to name them in all upper case (e.g. NOT FRANKENSTEINS MONSTER). This will help them stand out in the Code Book, the study window, and in reports. These theme codes may be manipulated and analyzed in the same ways as regular codes. They appear in the Code Book and can be duplicated, renamed, deleted, and so on.

Also See in HyperRESEARCH Help

These topics in the HyperRESEARCH Help and the User Guide may be helpful in completing the exercises for this chapter:

- Tutorials: Tutorial 7: The Theory Builder
- Analysis
- Reporting
- Windows: Study window
- Tools: Report Builder, Frequency Report tool, Theory Builder

To find a topic, choose Help > HyperRESEARCH Help and look through the list on the left side of the Help window.
Ann Dupuis 2014
API Parameter Recommendation Based on Documentation Analysis

by

Yuan Xi

A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Computer Science

Waterloo, Ontario, Canada, 2019

© Yuan Xi 2019

Author's Declaration

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Abstract

Application Programming Interfaces (APIs) are widely used in today's software development, as they provide an easy and safe way to build more powerful applications with less code. However, learning how to use an API function correctly can sometimes be difficult. Software developers may spend a lot of time learning a new library before they become productive. When an unfamiliar API is to be used, they usually have to chase down documentation and code samples to figure out how to use the API correctly. This thesis proposes a new approach based on documentation analysis, helping developers learn to use APIs by recommending likely parameter candidates. Our approach analyzes the documentation, extracts possible candidates from the code context, and offers them as parameter suggestions. To test the effectiveness of our approach, we process the documentation of 5 popular JavaScript libraries, and evaluate the approach on the top 1,000 JavaScript projects from GitHub. We used 1,681 instances of API function calls for testing in total. On average, over 60% of the time the correct parameter is in the suggestion set generated by our approach.

Acknowledgments

Firstly, I want to thank my supervisors, Professor Lin Tan, Professor Michael Godfrey, and Professor Meiyappan Nagappan, for their patience and help with my research work.
They work hard to provide a good research environment for every team member and give valuable suggestions when we meet problems. I sincerely appreciate the opportunity to work with them. I want to thank my parents and my girlfriend, who always love and support me. Without their support, I could not have finished my study successfully. Thanks to all the friends I met in Waterloo. I enjoyed my graduate study time with all of them. This thesis is dedicated to the ones I love and the ones who love me.

Table of Contents

List of Tables vii
List of Figures viii
1 Introduction 1
1.1 Research Contributions 3
1.2 Thesis Organization 3
2 Background 4
2.1 Statically Typed Languages and Dynamically Typed Languages 4
2.1.1 Statically Typed Languages 4
2.1.2 Dynamically Typed Languages 5
2.2 API Documentation 6
2.3 Lodash: A Modern JavaScript Utility Library 6
2.4 API Parameter Suggestion 6
3 Related Work 10
3.1 API Usage Patterns 10
3.2 Software Text Analysis 11
3.3 Code Completion System 11
3.4 API Usage Recommendation 12
3.5 API Parameter Recommendation 12

List of Tables

1.1 Statistics on API Function Declarations and Invocations 2
2.1 Statistics on API Function Declarations and Invocations 7
5.1 JavaScript Libraries Used as API Documentation Data Set 27
5.2 API Functions Found in the GitHub Projects
33
6.1 Parameter Suggestion Accuracy for Lodash 35
6.2 Parameter Suggestion Accuracy for Vue.js 35
6.3 Parameter Suggestion Accuracy for AngularJS 36
6.4 Parameter Suggestion Accuracy for Async 36
6.5 Parameter Suggestion Accuracy for Zlib 37
6.6 Overall Parameter Suggestion Accuracy for Target Libraries 37
6.7 Reasons for Fail Cases 38
6.8 Number of Parameter Suggestions Statistics for Lodash 38
6.9 Number of Parameter Suggestions Statistics for Vue.js 39
6.10 Number of Parameter Suggestions Statistics for AngularJS 39
6.11 Number of Parameter Suggestions Statistics for Async 40
6.12 Number of Parameter Suggestions Statistics for Zlib
41 List of Figures <table> <thead> <tr> <th>Figure</th> <th>Description</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>2.1</td> <td>Lodash Online Documentation for API Function “filter”</td> <td>7</td> </tr> <tr> <td>4.1</td> <td>Workflow of API Parameter Recommendation</td> <td>14</td> </tr> <tr> <td>4.2</td> <td>Preprocess: Step 1</td> <td>16</td> </tr> <tr> <td>4.3</td> <td>Preprocess: Step 2</td> <td>17</td> </tr> <tr> <td>4.4</td> <td>Lodash Documentation Example</td> <td>18</td> </tr> <tr> <td>4.5</td> <td>Vue.js Documentation Example</td> <td>19</td> </tr> <tr> <td>4.6</td> <td>Number of Arguments for Function-type Parameter in Lodash Documentation</td> <td>20</td> </tr> <tr> <td>4.7</td> <td>Number of Arguments for Function-type Parameter in AngularJS Documentation</td> <td>21</td> </tr> <tr> <td>4.8</td> <td>Analyse the Project Source Code</td> <td>22</td> </tr> <tr> <td>4.9</td> <td>Approach for Non-function-type Parameter</td> <td>23</td> </tr> <tr> <td>4.10</td> <td>Approach for Function-type Parameter</td> <td>24</td> </tr> <tr> <td>6.1</td> <td>Distribution of Numbers of Suggestions</td> <td>41</td> </tr> </tbody> </table>

Chapter 1 Introduction

To improve software productivity, today's programs use Application Programming Interfaces (APIs) extensively, because APIs enable code reuse and free project developers from trivial and repetitive work. Thanks to the success of software communities and platforms such as GitHub, developers today can find a large number of libraries that provide various APIs with a wealth of functionality and suit different development environments. There are many advantages for software developers in using existing APIs from well-known libraries and frameworks rather than writing their own code with the same functionality.
Using APIs can not only reduce repetitive work, but also make programs more robust, since API developers are always trying their best to improve the quality and reliability of their APIs. Although the utilization of APIs can lessen the workload of developers and provide various functionalities, it may still introduce new problems; for example, a library upgrade may cause backward compatibility issues [38], and API dependency chains can bring many setup problems to developers before they can actually use the APIs [21]. One main challenge is that correctly and efficiently using APIs from unfamiliar libraries and frameworks is nontrivial. Because there are numerous APIs providing different functionalities, it is very common for developers to encounter unfamiliar APIs in their work. Working with complex APIs in an unfamiliar library presents many barriers: programmers have to choose not only the right method to call, but also the correct parameters for a method call in an API usage. When programmers want to use a new API function, they often need to carefully read the documentation and inspect code examples to learn the actual meaning of each parameter, and search the entire context code to see which variables fit the arguments.

Table 1.1: Statistics on API Function Declarations and Invocations <table> <thead> <tr> <th>Project</th> <th>Parameterized Declaration</th> <th>Non-parameterized Declaration</th> <th>Parameterized Invocation</th> <th>Non-parameterized Invocation</th> </tr> </thead> <tbody> <tr> <td>Eclipse 3.6.2</td> <td>64%</td> <td>36%</td> <td>57%</td> <td>43%</td> </tr> <tr> <td>Tomcat 7.0</td> <td>49%</td> <td>51%</td> <td>60%</td> <td>40%</td> </tr> <tr> <td>JBoss 5.0</td> <td>58%</td> <td>42%</td> <td>60%</td> <td>40%</td> </tr> <tr> <td>average</td> <td>57%</td> <td>43%</td> <td>59%</td> <td>41%</td> </tr> </tbody> </table>

In order to help programmers use unfamiliar APIs better, a number of techniques have been proposed [44], [11], [45].
Among these, Buse et al. [11] and Zhong et al. [45] investigate finding examples or usage patterns to guide developers to correctly use the APIs. Another kind of approach is a code completion system or API suggestion system, which promptly provides developers with programming suggestions, such as which API functions to call and which expressions to use as parameters. Most existing API suggestion techniques focus on telling developers which API method is the right one to call [42], [19]. However, previous research has also emphasized the importance of helping developers choose the right parameters for an API method call [10], [27]. Zhang et al. [44] investigate the statistics of API function declarations and invocations in the code of Eclipse 3.6.2 [1], Tomcat 7.0 [5], and JBoss 5.0 [3]. Table 1.1 displays their findings. According to Table 1.1, for both API declarations and API invocations, API functions with one or more parameters are more common than those without any parameters. Thus, helping developers choose the right API functions is not enough, and recommending the right parameters is another non-trivial problem. Nowadays, libraries usually provide documentation to help programmers learn how to use them, and this documentation is usually highly reliable because it is provided by the developers of the libraries, who fully understand how to use the APIs correctly. Previous work shows that API documentation has a significant impact on API usability [14]. When having to use an unfamiliar library, the first thing most programmers do is search for the corresponding documentation and read it. In this thesis, we propose and evaluate an approach that lets computers learn from the documentation and suggest the correct arguments for a certain API call. It helps developers save time and avoid bugs when they need to use unfamiliar APIs.
1.1 Research Contributions

In this thesis, I make the following contributions: • We collect two data sets: one data set of JavaScript API documentation and one of real-world JavaScript projects. The API documentation data set consists of five popular JavaScript third-party libraries, and the project data set consists of the top 1,000 JavaScript projects ranked by star rating. These two data sets can be used for future research on API suggestions and documentation analysis. • We propose an approach to generate parameter suggestions for API function calls based on API documentation analysis. The approach does not require a large number of code examples, and can be generalized to other programming languages. • We conduct experiments evaluating the performance of our API parameter suggestion approach.

1.2 Thesis Organization

The rest of the thesis is organized as follows. Chapter 2 and Chapter 3 discuss the background and related work, respectively. Chapter 4 describes our approach for choosing the right parameters for an API method. Chapter 5 shows the setup of the experiments to evaluate the proposed approach. We show and analyze the experimental results in Chapter 6. In Chapter 7, we present the threats to validity of our work. Chapter 8 summarizes our work and sketches possible future research in this area.

Chapter 2 Background

This chapter provides the background on statically typed and dynamically typed languages, API documentation, the API library used in this research, and an overview of the research problem.

2.1 Statically Typed Languages and Dynamically Typed Languages

2.1.1 Statically Typed Languages

A language is statically typed if the type of a variable is known at compile time and will not change after being declared.
For some languages this means that you as the programmer must specify what type each variable is (e.g., Java, C); other languages offer some form of type inference, the capability of the type system to deduce the type of a variable (e.g., OCaml, Haskell, Scala, Kotlin). More formally, a programming language is considered statically typed if it performs type checking at compile time. This process verifies the type safety of a program based on an analysis of the program's source code. If a program passes a static type checker, then the program is guaranteed to satisfy some set of type safety properties for all possible inputs [43]. The main advantage here is that many kinds of checking can be done by the compiler, and therefore a lot of trivial bugs can be caught at a very early stage [6]. However, many languages with static type checking provide a way to bypass the type checker. Some languages allow programmers to choose between static and dynamic type safety. For example, C# distinguishes between statically typed and dynamically typed variables. Uses of the former are checked statically, whereas uses of the latter are checked dynamically. Other languages allow writing code that is not type-safe; for example, in C, programmers can freely cast a value between any two types that have the same size, effectively subverting the type concept. Statically typed languages emerged earlier than dynamically typed languages. The first statically typed language was Fortran [8], developed by IBM in 1957, which is also known as the first high-level programming language.

2.1.2 Dynamically Typed Languages

A language is dynamically typed if types are associated with run-time values rather than with named variables, fields, etc. This means that the type of a variable may change after it is declared.
For example, JavaScript is a dynamically typed language: you can assign a numeric value to a variable when it is declared, and assign a string value to it later. Typically, you as a programmer do not have to specify types every time. Some famous dynamically typed languages are JavaScript, Perl, Ruby, and Python. A dynamically typed language performs type checking at runtime. Implementations of dynamically type-checked languages generally associate each runtime object with a type tag (i.e., a reference to a type) containing its type information [43]. Most type-safe languages include some form of dynamic type checking, even if they also have a static type checker. This is because many useful features or properties are difficult or impossible to verify statically. Most scripting languages have this feature, as there is no compiler to do static type checking anyway, but you may find yourself searching for a bug that is due to the interpreter misinterpreting the type of a variable. Luckily, scripts tend to be small, so bugs do not have many places to hide. By definition, dynamic type checking may cause a program to fail at runtime. In some programming languages, it is possible to anticipate and recover from these failures. In others, type-checking errors are considered fatal. Most dynamically typed languages do allow you to provide type information, but do not require it. One language that is currently being developed, Rascal, takes a hybrid approach, allowing dynamic typing within functions but enforcing static typing for function signatures. The first dynamically typed language is Lisp, which was invented by John McCarthy in 1958 [20]. Lisp is the second-oldest high-level programming language still in widespread use today; only Fortran is older, by one year (1957).
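To make the contrast with static typing concrete, the following JavaScript snippet (runnable in any modern JavaScript engine) rebinds a single variable to values of different types; a static type checker would reject the second assignment:

```javascript
// Dynamic typing in JavaScript: a type is a property of the value a
// variable currently holds, not of the variable itself.
let value = 42;            // starts out holding a number
console.log(typeof value); // "number"

value = "forty-two";       // later rebound to a string: perfectly legal
console.log(typeof value); // "string"
```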
2.2 API Documentation

Since APIs are designed to be consumed, it is important to make sure that the clients, or consumers, are able to quickly learn an API and understand what they can do with it. Most consumers will never read the source code of the API libraries they use; it is too tedious for them to work out the implementation details. Instead, they want to understand how to use the API quickly and efficiently, which is where API documentation comes into the picture. API documentation is a technical content deliverable, containing instructions about how to effectively use and integrate with an API. It is a concise reference manual containing all the information required to work with the API, with details about the functions, classes, return types, arguments, and more, supported by tutorials and examples. Figure 2.1 gives an example of online API documentation.

2.3 Lodash: A Modern JavaScript Utility Library

Lodash is a JavaScript library that provides utility functions for common programming tasks. It evolved from Underscore.js and is now maintained by the original contributors to Underscore.js. Lodash is one of the most popular JavaScript libraries today; it is downloaded around 15,000,000 times per week on the NPM platform. Table 2.1 shows some basic information about Lodash.

2.4 API Parameter Suggestion

This section illustrates the API parameter suggestion problem with a use case example. Imagine that a developer, Pat, is writing a JavaScript project and learning to use the JavaScript library Lodash. Pat is currently working on a new feature for their website that will display the usernames of a subset of users. Pat currently has an array of users, where each user is an object with attributes “name” and “gender”.
Table 2.1: Basic Information about Lodash <table> <thead> <tr> <th>Name</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Original author</td> <td>John-David Dalton</td> </tr> <tr> <td>Initial release</td> <td>April 23, 2012</td> </tr> <tr> <td>Current version</td> <td>4.17.10</td> </tr> <tr> <td>Written in</td> <td>JavaScript</td> </tr> <tr> <td>License</td> <td>MIT</td> </tr> <tr> <td>Website</td> <td>lodash.com</td> </tr> <tr> <td>Lines of code</td> <td>17,105</td> </tr> <tr> <td>Number of API elements</td> <td>287</td> </tr> </tbody> </table>

```javascript
var users = [
  { 'name': 'Jack', 'gender': 'Male' },
  { 'name': 'Susan', 'gender': 'Female' },
  { 'name': 'Bob', 'gender': 'Male' },
  { 'name': 'Alex', 'gender': 'Non-binary' }
];
```

Now Pat wants to keep only the male users from the list, and Pat knows that the API function `filter` from the Lodash library will be helpful, so Pat starts writing:

```javascript
var male_users = filter(
```

However, Pat forgets what the parameters of the function `filter` are. Therefore, Pat opens a web browser and begins to search for the correct way of filtering data using Lodash. Pat searches for "Lodash" in Google, opens the home page of lodash.com, locates the page containing the API documentation, and finds hundreds of Lodash API functions. It takes a few minutes for Pat to locate the documentation for the API method `filter` (shown in Figure 2.1). Pat learns from the documentation that the filter method takes two parameters: a collection and a predicate. The collection can be either an array or an object, and the predicate is a function used to filter the collection. Having found the needed information, Pat returns to editing the JavaScript file, where Pat completes the method call. Consider how the above use case would be different if Pat had a plugin that automatically reads the Lodash documentation and provides intelligent parameter suggestions: 1. Pat writes `var male_users = filter(` 2.
The variable `users` pops up as a suggestion for the first parameter 3. Pat chooses `users` and presses ENTER 4. The function `isMale(user)` pops up as a suggestion for the second parameter 5. Pat chooses `isMale(user)` and presses ENTER With only 5 steps, an unfamiliar API function call is completed, and Pat never loses focus on the current task. The following chapters introduce related work and show how our documentation-based API parameter suggestions work.

Chapter 3 Related Work

### 3.1 API Usage Patterns

API usage patterns are patterns that guide consumers in how to use APIs to achieve their goals. They show the sequences of API function calls for developers. For example, if a developer wants to write something into a file, an API usage pattern will tell him/her which API function should be called first to open the file, then which API function should be called to write the text content, and finally which function should be invoked to close the file. In 2005, Holmes et al. [18] proposed Strathcona, which helps developers by automatically locating source code examples that are relevant to their work. In 2009, Zhong et al. [45] proposed an approach called MAPO that mines API usage patterns and uses the mined patterns as an index for recommending associated code snippets to aid programming. Buse et al. [11] tried to build API usage patterns by synthesizing code examples from documentation and online question-and-answer forums like StackOverflow. Saied et al. [31] proposed a technique for mining multi-level API usage patterns to exhibit the co-usage relationships between methods of the API of interest across interfering usage scenarios. In 2013, Montandon et al. [22] described a platform that instruments the standard Java-based API documentation format with concrete examples of usage and can also be applied to Android APIs. Nguyen et al. [25] proposed GrouMiner, a graph-based approach that represents usage patterns as graphs for mining. Wang et al.
[39] presented UP-Miner to mine succinct and high-coverage patterns from source code by clustering API call sequences. In 2014, a new method called Baker was proposed to link up-to-date source code examples to API documentation [33]. In 2016, Fowkes et al. [16] designed a parameter-free probabilistic API mining algorithm, PAM, that uses a novel probabilistic model based on a set of API patterns and a set of probabilities to infer the most interesting API patterns.

### 3.2 Software Text Analysis

Documentation contains a lot of useful information. Forward et al. [15] pointed out in their research that documentation content is relevant and important. They also found that good software documentation technologies should be more aware of professionals' requirements, as opposed to blindly enforcing documentation formats or tools. Many useful techniques have been proposed to analyse software documentation. Rubio-González et al. [30] used static program analysis to examine mismatches between documented and actual error codes. Many techniques have been proposed to automatically analyze comments and detect inconsistencies between comments and source code [35], [36], [37]. Dalip et al. [13] proposed an approach that automatically estimates the quality of the documents in a digital library. DMOSS [12] is a toolkit that can systematically assess the quality of non-source-code text found in software packages. Schiller et al. [32] investigated the code contracts of 90 C# open-source projects. Wong et al. [40] designed an approach called DASE to improve the performance of symbolic execution for automatic test generation and bug detection. DASE can automatically extract input constraints from the documents of a software project, and use these constraints to identify core execution paths in the program. Blasi et al. [9] present an approach, Jdoctor, that translates Javadoc comments into executable procedure specifications written as Java expressions. Yang et al.
[41] designed D2Spec for extracting web API specifications from documentation pages based on machine learning.

### 3.3 Code Completion System

Enhancing current completion systems to work more effectively with large APIs has been investigated in previous studies [24], [10]. These studies made use of databases of API usage recommendations, type hierarchies, context filtering, and the functional roles of API methods to improve the performance of API method call completion. Calcite is an Eclipse plugin that helps developers instantiate classes by adding Java API suggestions to the code completion menu [23]. Omar et al. [26] designed Graphite, an active code completion tool allowing Java library developers to introduce interactive and highly specialized code generation interfaces directly into the editor. Robbes et al. [29] proposed an approach to improve code completion with program history. Ginzberg et al. [17] proposed an automatic code completion approach based on an LSTM model. Asaduzzaman et al. [7] proposed a context-sensitive code completion technique that uses, in addition to the aforementioned information, the context of the target method call. Recently, Pythia, a novel AI-assisted code completion system developed by Microsoft, has become available as part of the IntelliCode extension in the Visual Studio Code IDE [34].

### 3.4 API Usage Recommendation

Most of the current API recommendation techniques focus on API usage recommendation, which tells consumers which API method is the right one to call [28], [42], [19]. These techniques explore the relationships among the API functions and the context of real-world source code that invokes the APIs, and make use of these relations to give recommendations. Rahman et al. [28] proposed RACK, which recommends a list of relevant APIs for a natural language code search query by exploiting keyword-API associations from the crowdsourced knowledge of StackOverflow.
LibraryGuru [42] recommends suitable Android APIs for given functionality descriptions. Ma et al. [19] proposed an approach, ServRel, to recommend relevant Web APIs for a target Web API based on the proposed service cooperative network.

### 3.5 API Parameter Recommendation

Another kind of technique helps developers choose the right parameters for an API method call. Previous research shows that, compared with a tool suggesting the right API method to call, software developers much prefer a tool that can recommend the right parameters [10], [27]. However, there are only a few studies on this topic. Zhang et al. [44] proposed an approach, Precise, for recommending API parameters by mining existing code bases; 53% of its recommendations are exactly the same as the actual parameters. However, Precise only works for Java projects and requires existing code examples that use the APIs. As discussed in Section 2.1, statically typed languages like Java and dynamically typed languages like JavaScript are different. Precise makes use of the property of statically typed languages that the type of a variable is fixed and will not change after being declared. After getting the abstract syntax tree (AST) of the program, Precise can easily determine the type of each API parameter and each variable. However, for dynamically typed languages, type information is not contained in the AST, so Precise cannot work for dynamically typed languages. In addition, Precise is based on machine learning technologies, which need many usage examples as training data. It would be difficult to extend Precise to API libraries that are less popular or do not have enough code examples.

Chapter 4 Approaches

In this chapter, we describe our approach for giving suggestions for API parameters.

## 4.1 Overview

Figure 4.1: Workflow of API Parameter Recommendation

Figure 4.1 shows the workflow of the entire process.
Our approach can be divided into three parts: preprocessing, analysing the project source code, and generating parameter candidates. First, we collect and preprocess the library documents to build an API database that stores the essential information about each API function, such as the method name, the number of arguments, etc. Once the API database is set up, we need to analyse the source code for each parameter suggestion query. In this part, abstract syntax tree (AST) data is generated for each source code file. The final step is to use the API data and the AST data to generate parameter candidates for the suggestion query. This part involves two different approaches depending on whether the parameter in the query is non-function-type or function-type. Section 4.2 describes the preprocessing part. Section 4.3 shows the details of how to analyse the source code of each project that uses the APIs. Section 4.4 and Section 4.5 explain the candidate generation approaches for non-function-type and function-type parameters, respectively.

## 4.2 Preprocessing

Preprocessing is the first part of our approach. Figure 4.2 and Figure 4.3 demonstrate the workflow of the preprocessing step. From the library documentation, we can get a lot of information about an API element, including its arguments, what it returns, a description of the API, and some example code telling developers how to use the API in practice. This information is always provided, and we can use a web crawler to collect it from the online documents. We note that this collection step only needs to be performed once per library: if we build this information into a database, the documentation data can be reused for each API parameter suggestion query. Figure 4.2 shows how we build the database and what is contained in it. We study the documents of many famous JavaScript libraries in our research. Although these documents are in different formats, most of them are well-structured.
Some examples are shown in Figure 4.4 and Figure 4.5. Thus, although we have to write a different web crawler for each library, the task is relatively straightforward. As shown in Figure 4.2, for each API element, we only store four basic kinds of information in our database: arguments, description, return, and examples. Most documents, such as those in Figure 4.4 and Figure 4.5, provide this information, so this step can be applied to most popular libraries today. Once we have set up the database, we go to step 2, which is shown in Figure 4.3. Since we store argument information in our database, we can easily get the name and the type of each argument. In JavaScript, there are 6 basic types: Numeric, String, Boolean, Object, Array, and Function. Our approach deals with Function-type arguments differently from the other types, so for each argument of each API function, we have to decide whether it is function-type or non-function-type (Numeric, String, Boolean, Object, or Array) according to the documentation. If the target argument is a non-function-type argument, we use the approach for non-function-type parameters in Section 4.4 to give the suggestions. If the target argument is a function-type argument, we first take the name of the target argument and the description part of the API, and analyze the description to see whether it says anything about how many parameters this function (the function-type argument) should have. We studied many popular JavaScript libraries and found that most of the documents provide this kind of information, although the phrasing may differ. Figure 4.6 shows the related description in the Lodash documentation, and Figure 4.7 shows the related description in the AngularJS documentation. So for different libraries, we have to build a different analyzer to determine how many parameters the function-type argument should have.
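As an illustration, such an analyzer for Lodash-style phrasing (e.g., "The customizer is invoked with five arguments") might be sketched as follows. The function name and the word-to-number table here are hypothetical, not taken from the actual implementation, and other libraries would need their own patterns:

```javascript
// Map spelled-out number words (as used in Lodash descriptions) to integers.
const WORD_TO_NUMBER = {
  one: 1, two: 2, three: 3, four: 4, five: 5,
  six: 6, seven: 7, eight: 8, nine: 9, ten: 10,
};

// Extract the number of arguments of a function-type parameter from an API
// description, or return null if no usable phrasing is found.
function extractParamCount(description) {
  const match = description.match(/invoked with (\w+) arguments?/i);
  if (!match) return null;
  const word = match[1].toLowerCase();
  // Accept either a spelled-out number word or a plain digit.
  return WORD_TO_NUMBER[word] ?? (parseInt(word, 10) || null);
}
```

For the `assignInWith` description discussed below, `extractParamCount` would match "invoked with five arguments" and return 5.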
Once we get this number of parameters, \( n \), we use the approach for function-type parameters in Section 4.5 to give the suggestions. We use the API documentation in Figure 4.6 as an example to illustrate the preprocessing step. Using a web crawler, we can easily collect the arguments, description, return, and example of the API function `assignInWith`. This function has three arguments: `object`, `sources`, and `customizer`. The types of the first two arguments are both Object, and the third argument is a Function. So the third argument is a function-type parameter, and the first two arguments are non-function-type. We check all the sentences in the description part, and the last two sentences both mention `customizer`. From the last sentence, using simple string matching, we learn that `customizer` is a function that should have five arguments, so its number of parameters, \( n \), is 5. All of this information about `assignInWith` is stored in the API database.

## 4.3 Analysing the Project Source Code

After setting up the database for the library, we use the information to give the API parameter suggestions. Given a project that invokes an API, we have to analyze its source code. Figure 4.8 shows the approach for analysing the source code of a project that uses the APIs.

```javascript
_.compact([0, 1, false, 2, '', 3]);
// => [1, 2, 3]
```

Figure 4.4: Lodash Documentation Example

# Vue.directive(id, [definition]) - Arguments: - {string} id - {Function | Object} [definition] - Usage: Register or retrieve a global directive.
```javascript
// register
Vue.directive('my-directive', {
  bind: function () {},
  inserted: function () {},
  update: function () {},
  componentUpdated: function () {},
  unbind: function () {}
})

// register (function directive)
Vue.directive('my-directive', function () {
  // this will be called as `bind` and `update`
})

// getter, return the directive definition if registered
var myDirective = Vue.directive('my-directive')
```

Figure 4.5: Vue.js Documentation Example

For our approach, we only take the source code in the same file as the API call. We first need to use a JavaScript parser to parse the source code and get its abstract syntax tree (AST). Once we have the AST, we use an AST parser to collect some important information, including all the APIs that are imported in the file and, for each basic type (Function, Numeric, Boolean, String, Object, and Array), the set of variables that represent values of that type.

Figure 4.6: Number of Arguments for Function-type Parameter in Lodash Documentation

We can use this information, together with the information from the database in Section 4.2, to give the API parameter suggestions.

### 4.4 Approach for Non-function-type Parameter

Figure 4.9 demonstrates the approach for suggesting candidates for a non-function-type parameter. We can easily look up the type of the parameter we want to recommend in the database from Section 4.2; we call this type $X$, which is one of Numeric, String, Boolean, Object, or Array. Because we classify the variables according to their types in Section 4.3, we can pick out the set of variables that represent type $X$, and these variables can be part of the candidate set. We also need to check all the imported APIs from Section 4.2. We look up their return types in the database.
If their return values are of type $X$, then these API calls can also be part of the candidate set. For each of the imported APIs, we calculate its similarity to the target API. If we find an API similar to the target API, we can use the actual value of the similar argument as a suggestion for the target parameter. For example, suppose we want to recommend a value for the first parameter of an API function, `pullAllWith`, and we check all the imported APIs and find one API function called `pullAllBy`. We look them up in the database and find that not only are their function names similar, but their first arguments also have the same name and the same type, so we consider these two APIs similar, and we can use the actual value of the first parameter of `pullAllBy` as a suggestion for the first argument of `pullAllWith`. In this way, similar arguments from similar API functions can be used as part of the candidate set. In addition, we use constants of type $X$ as our default options so that when our approach cannot find any variables that satisfy the requirements, we can still give some suggestions. We use the following code snippet as an example to illustrate how the approach works.

```javascript
var arr = [1];
var other = concat(arr, 2, [3], [[4]], '');
var num = 5;
compact( ? );
```

We need to generate the suggestion candidates for the first parameter of `compact`, whose documentation can be found in Figure 4.4. From Figure 4.4, we know that the type of the first parameter is Array. After source code analysis, we know that `arr` is an Array and `num` is a Number. We also need to check the other imported API function, `concat`. It is also a Lodash API function, and it returns an Array. Therefore, it is easy to infer that the type of `other` is Array. We should suggest an array constant as our default option as well, and we usually choose an empty array.
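The candidate-generation steps described above can be sketched as follows; all names here are illustrative, and the real implementation works over the AST data and the API database rather than over plain lists:

```javascript
// Default constant suggestions, one per non-function type.
const DEFAULT_CONSTANTS = {
  Array: '[]', Object: '{}', String: "''", Numeric: '0', Boolean: 'false',
};

// Given the target type X, the in-scope variables grouped by inferred type,
// and the documented return types of the imported APIs, build the candidate
// set for a non-function-type parameter.
function suggestCandidates(targetType, varsByType, importedApiReturnTypes) {
  const candidates = [];
  // 1. In-scope variables whose inferred type matches the target type.
  for (const name of varsByType[targetType] || []) candidates.push(name);
  // 2. Imported API functions whose documented return type matches.
  for (const [api, returnType] of Object.entries(importedApiReturnTypes)) {
    if (returnType === targetType) candidates.push(api + '(...)');
  }
  // 3. A constant of the target type as a default fallback.
  candidates.push(DEFAULT_CONSTANTS[targetType]);
  return candidates;
}
```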
So for this query, we will generate a candidate set that consists of `arr`, `other`, and an empty array.

### 4.5 Approach for Function-type Parameter

Figure 4.10 demonstrates the approach for suggesting candidates for a function-type parameter. This approach is similar to the one for non-function-type parameters. Because it only deals with function-type parameters, we only need to pick the set of functions from the AST data in Section 4.3. As described in Section 4.2, we know the number of parameters, \( n \), of the function-type parameter. So we check all the variables that represent functions to see whether any of them are functions declared with \( n \) parameters. Those that are should be part of the candidate set. Similarly, we also need to check all the imported APIs from Section 4.2. We look up their argument information in the database. If they have \( n \) parameters, then these API calls can also be part of the candidate set. For each of the imported APIs, we also calculate its similarity to the target API, and similar arguments from similar API functions can be added to the candidate set. The way to find similar arguments is the same as described in Section 4.4, so we do not repeat it here. Since the target is a function-type parameter, we use anonymous functions as our default options here. We use the following code snippet as an example to illustrate how the approach works.

```javascript
function mutate(o1, o2, mutation, cond) {
  if (cond) {
    return assignWith(o1, o2, mutation);
  } else {
    return assignInWith(o2, o1, ?);
  }
}
```

We need to generate the suggestion candidates for the third parameter of `assignInWith`, whose documentation can be found in Figure 4.6. From Figure 4.6, we know that it is a function-type parameter, and it requires five parameters. After source code analysis, we can find two function-type variables, `mutate` and `assignWith`, but neither of them has five parameters.
Then we check the other imported API functions and find `assignWith`. It takes three arguments and returns an Object. Up to this point we have found no suggestion candidates. However, when we calculate the similarity between `assignWith` and `assignInWith`, we find that they are similar APIs and that their third arguments are also similar. So we put the third argument of `assignWith`, `mutation`, into the candidate set. In addition, we should suggest an anonymous function as our default option. So for this query, we will generate a candidate set that consists of `mutation` and an anonymous function.

Chapter 5 Experimental Setup

In this chapter, we explain how we conduct experiments to evaluate the performance of our approach.

5.1 Research Questions

The objective of our experiments is to investigate how useful our approach is for generating parameter suggestions for API users. We therefore investigate two research questions and design experiments that aim to answer them:

- RQ1: How accurate are the parameter suggestions of the approach?
- RQ2: How many suggestions does the approach generate for input source code files?

The motivation for the first research question is to assess the accuracy of the parameter suggestion approach. It helps us know whether our approach can generate good suggestions for use cases in real projects. The motivation for the second research question is to evaluate the efficiency of our parameter suggestion approach. Usually our approach gives more than one suggested variable for a parameter query. However, if our approach gives too many suggestions, it will be difficult for program developers to pick the right variable. Therefore, we want to know whether the number of suggestions that our approach generates is practical for most real-world situations.
5.2 Data Set Collection

To evaluate the accuracy and efficiency of our API parameter suggestion approach, we attempt to predict the parameters of different API functions from different libraries, using JavaScript source code from real-world open source projects.

5.2.1 API Documentation Data Set

The first step of the experiment is to collect the documentation information of five different third-party JavaScript libraries. The libraries are selected based on community popularity and frequency of use. The list of selected libraries is displayed in Table 5.1. For each library, the following information from each API method's documentation is collected and stored in a JSON file:

- Function name
- Function description
- Return type
- Return description
- Argument types
- Argument names
- Argument descriptions
- Number of parameters for any function-type parameters
- Method signature
- Code examples

<table>
<thead>
<tr>
<th>Library</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lodash</td>
<td>4.17.10</td>
</tr>
<tr>
<td>Vue.js</td>
<td>2.5</td>
</tr>
<tr>
<td>AngularJS</td>
<td>1.7.8</td>
</tr>
<tr>
<td>Async</td>
<td>2.6.2</td>
</tr>
<tr>
<td>Zlib</td>
<td>1.0.5</td>
</tr>
</tbody>
</table>

Table 5.1: JavaScript Libraries Used as API Documentation Data Set

There are two ways to collect this information. For some libraries, a web crawler can be used to extract the necessary information from their online documentation, provided the documentation is well-structured. The web crawlers are written in Python and based on the Scrapy framework [4]. Because each library's online documentation is formatted differently, we have to write a different web crawler for each library. The other way to collect the documentation information is to copy and paste it manually. This is used when either the library contains only a few methods or the online documentation is not well-structured.
For libraries with few API functions, writing a web crawler would be more time-consuming than collecting the information manually. For badly structured online documentation, it is hard to write a web crawler that works perfectly, so manually collecting the information is easier and more reliable. In an ideal world, this information might be easily extractable from a website that uses tags or makes the API details easy to access in some other way.

Here is an example of the JSON file¹ for the API function `assignInWith`, whose documentation is shown in Figure 4.6:

```json
{
  "return": {
    "type": "Object",
    "description": "Returns object."
  },
  "description": "This method is like _.assignIn except that it accepts customizer which is invoked to produce the assigned values. If customizer returns undefined, assignment is handled by the method instead. The customizer is invoked with five arguments: (objValue, srcValue, key, object, source). Note: This method mutates object.",
  "arguments": [
    {
      "type": "Object",
      "name": "object",
      "description": "The destination object."
    },
    {
      "type": "Object",
      "name": "sources",
      "description": "The source objects."
    },
    {
      "type": "Function",
      "name": "customizer",
      "description": "The function to customize assigned values."
    }
  ],
  "signature": "_.assignInWith(object, sources, [customizer])",
  "example": "function customizer(objValue, srcValue) { return _.isUndefined(objValue) ? srcValue : objValue; } var defaults = _.partialRight(_.assignInWith, customizer); defaults({'a': 1}, {'b': 2}, {'a': 3}); // => {'a': 1, 'b': 2}",
  "name": "assignInWith"
}
```

¹The raw example file is UTF-8 encoded and not human-readable; it is formatted here for readability.

5.2.2 JavaScript Project Data Set

Once the documentation has been summarized for each API method, we need a large repository of JavaScript files to test against. Therefore, we clone the top 1,000 JavaScript projects from GitHub, ordered by their star ratings. In total, these 1,000 projects contain over 200,000 JavaScript files.
Because not all of these JavaScript files include the target API libraries from the previous data set, we filter out the files that do not use any target APIs. We use a keyword search to locate every file that uses at least one of the target libraries; this can be done with a UNIX `grep` command that locates all files containing import statements for the target libraries. In the end, we found 1,023 files that make at least one call to a function contained in any of the target library APIs.

### 5.3 Generating Abstract Syntax Tree

According to our approach introduced in Section 4.3 (referring to Figure 4.8), all the JavaScript source code files from Section 5.2.2 must be converted into abstract syntax trees (ASTs). We implement our JavaScript parser based on the Esprima framework [2] to generate the AST for each source code file and store it as a JSON file. Consider the following example JavaScript source file:

```javascript
function addOne(a) {
  return a + 1;
}
var b = addOne(3);
```

Here is the JSON file of the AST generated for the example above:

```json
{
  "type": "Program",
  "body": [
    {
      "type": "FunctionDeclaration",
      "id": {
        "type": "Identifier",
        "name": "addOne"
      },
      "params": [
        {
          "type": "Identifier",
          "name": "a"
        }
      ],
      "body": {
        "type": "BlockStatement",
        "body": [
          {
            "type": "ReturnStatement",
            "argument": {
              "type": "BinaryExpression",
              "operator": "+",
              "left": {
                "type": "Identifier",
                "name": "a"
              },
              "right": {
                "type": "Literal",
                "value": 1,
                "raw": "1"
              }
            }
          }
        ]
      },
      "generator": false,
      "expression": false,
      "async": false
    },
    {
      "type": "VariableDeclaration",
      "declarations": [
        {
          "type": "VariableDeclarator",
          "id": {
            "type": "Identifier",
            "name": "b"
          },
          "init": {
            "type": "CallExpression",
            "callee": {
              "type": "Identifier",
              "name": "addOne"
            },
            "arguments": [
              {
                "type": "Literal",
                "value": 3,
                "raw": "3"
              }
            ]
          }
        }
      ],
      "kind": "var"
    }
  ]
}
```

After generating ASTs for the source code files, we note that only a fraction of each library's API functions are discovered in the GitHub projects we collected.
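As a minimal illustration of how such an AST JSON can then be consumed, the sketch below walks a hand-written AST fragment (a subset of the output above) and collects the declared variables. It is illustrative only, not the analyzer used in the thesis:

```javascript
// Minimal sketch: walking an Esprima-style AST to list declared variables.
// The AST literal is hand-written for illustration (a subset of real output).
const ast = {
  type: 'Program',
  body: [
    {
      type: 'VariableDeclaration',
      kind: 'var',
      declarations: [
        {
          type: 'VariableDeclarator',
          id: { type: 'Identifier', name: 'b' },
          init: {
            type: 'CallExpression',
            callee: { type: 'Identifier', name: 'addOne' },
            arguments: [{ type: 'Literal', value: 3 }],
          },
        },
      ],
    },
  ],
};

// Collect every declared variable name with the node type of its initializer.
function collectDeclarations(node, out = []) {
  if (node === null || typeof node !== 'object') return out;
  if (node.type === 'VariableDeclarator') {
    out.push({ name: node.id.name, initType: node.init ? node.init.type : null });
  }
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (Array.isArray(child)) child.forEach(c => collectDeclarations(c, out));
    else collectDeclarations(child, out);
  }
  return out;
}

console.log(collectDeclarations(ast)); // [ { name: 'b', initType: 'CallExpression' } ]
```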
The number of unique methods and the number of method instances found for each library are shown in Table 5.2. In total, 43 unique functions from the 5 libraries are discovered.

Table 5.2: API Functions Found in the GitHub Projects

<table>
<thead>
<tr>
<th>Library</th>
<th>Total number of functions in library</th>
<th>Unique functions found</th>
<th>Function instances found</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lodash</td>
<td>574</td>
<td>10</td>
<td>875</td>
</tr>
<tr>
<td>Vue.js</td>
<td>10</td>
<td>10</td>
<td>561</td>
</tr>
<tr>
<td>AngularJS</td>
<td>9</td>
<td>9</td>
<td>200</td>
</tr>
<tr>
<td>Async</td>
<td>78</td>
<td>11</td>
<td>39</td>
</tr>
<tr>
<td>Zlib</td>
<td>7</td>
<td>3</td>
<td>6</td>
</tr>
</tbody>
</table>

5.3.1 API Parameter Suggestion

We implement an analyzer based on our approach introduced in Section 4.4 and Section 4.5, which takes the AST file and the API documentation information as inputs and outputs a CSV file containing the parameter prediction results. Each row in the output file contains a set of suggestions for a single argument: the target API function, the target file, the argument number, the actual parameter value, the number of suggestions, and a list of suggestions. Then we analyze the CSV file containing the predictions to answer our research questions:

- **RQ1**: How accurate are the parameter suggestions of the approach?
- **RQ2**: How many suggestions does the approach generate for input source code files?

In RQ1 we compute the accuracy of our prediction results. A given parameter prediction is considered successful if the actual value from the source code file is present in the list of suggestions; otherwise it is considered failed. To compute the accuracy for each API parameter prediction, we divide the number of successes by the total number of API function calls (the sum of successes and failures for each API function parameter).
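The accuracy computation just described can be sketched directly. The prediction rows below are made-up examples in the shape of the CSV rows described above:

```javascript
// Sketch of the accuracy computation over prediction rows.
// Rows mirror the CSV described above; the values are made-up examples.
const predictions = [
  { actual: 'arr',  suggestions: ['arr', 'other', '[]'] },  // success
  { actual: 'opts', suggestions: ['config', '{}'] },        // failure
  { actual: 'cb',   suggestions: ['cb', 'done'] },          // success
];

// A prediction succeeds when the actual value appears in the suggestion list.
function accuracy(rows) {
  const successes = rows.filter(r => r.suggestions.includes(r.actual)).length;
  return successes / rows.length;
}

console.log(accuracy(predictions)); // 2 successes out of 3 rows => 0.666...
```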
In RQ2 we count the number of suggestions for each API parameter prediction query. For each API parameter, we calculate the median and maximum number of suggestions.

Chapter 6 Experiment Results

Based on our experimental results, we aim to answer the following research questions:

• RQ1: How accurate are the parameter suggestions of the approach?
• RQ2: How many suggestions does the approach generate for input source code files?

6.1 RQ1: How accurate are the parameter suggestions of the approach?

To calculate the accuracy of our API parameter suggestion approach, the actual parameter values used in the source code are compared to the list of suggestions. For each parameter, if the list of suggestions contains the actual value, we consider the prediction successful; otherwise it fails. To compute the overall accuracy for each library, we divide the number of successes by the total number of API function calls for the library. The detailed results for each target library are shown in Table 6.1, Table 6.2, Table 6.3, Table 6.4, and Table 6.5, respectively. The overall results are shown in Table 6.6. The overall suggestion accuracy of our approach for the target libraries is 64.6%, and our approach achieves an accuracy over 50% for 4 out of 5 libraries. For the most successful library, Async, the accuracy is 81.3%.
However, when we look at the detailed results for each target library, we see many predictions with 0.0% accuracy, meaning that our approach cannot give a correct suggestion for these parameters even once. This unexpectedly large variance in accuracy across parameters needs further investigation.

Table 6.1: Parameter Suggestion Accuracy for Lodash

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Suggestion Accuracy (%)</th>
<th>Number of API calls</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>filter</td>
<td>46.7</td>
<td>93.3</td>
</tr>
<tr>
<td>reduce</td>
<td>35.4</td>
<td>95.1</td>
</tr>
<tr>
<td>some</td>
<td>52.2</td>
<td>78.9</td>
</tr>
<tr>
<td>times</td>
<td>80.7</td>
<td>100.0</td>
</tr>
<tr>
<td>transform</td>
<td>80.0</td>
<td>100.0</td>
</tr>
<tr>
<td>partition</td>
<td>16.7</td>
<td>100.0</td>
</tr>
<tr>
<td>find</td>
<td>45.3</td>
<td>91.5</td>
</tr>
<tr>
<td>map</td>
<td>35.0</td>
<td>83.8</td>
</tr>
<tr>
<td>remove</td>
<td>45.5</td>
<td>63.6</td>
</tr>
<tr>
<td>every</td>
<td>67.7</td>
<td>70.6</td>
</tr>
</tbody>
</table>

Table 6.2: Parameter Suggestion Accuracy for Vue.js

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Suggestion Accuracy (%)</th>
<th>Number of API calls</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>extend</td>
<td>95.2</td>
<td>-</td>
</tr>
<tr>
<td>filter</td>
<td>66.7</td>
<td>0.0</td>
</tr>
<tr>
<td>component</td>
<td>90.4</td>
<td>32.5</td>
</tr>
<tr>
<td>mixin</td>
<td>76.9</td>
<td>-</td>
</tr>
<tr>
<td>directive</td>
<td>100.0</td>
<td>66.7</td>
</tr>
<tr>
<td>compile</td>
<td>100.0</td>
<td>-</td>
</tr>
<tr>
<td>use</td>
<td>4.6</td>
<td>-</td>
</tr>
<tr>
<td>delete</td>
<td>60.7</td>
<td>85.7</td>
</tr>
<tr>
<td>set</td>
<td>47.3</td>
<td>81.8</td>
</tr>
<tr>
<td>nextTick</td>
<td>77.8</td>
<td>72.2</td>
</tr>
</tbody>
</table>
Table 6.3: Parameter Suggestion Accuracy for AngularJS <table> <thead> <tr> <th>Function Name</th> <th>Suggestion Accuracy (%)</th> <th>Number of API calls</th> </tr> </thead> <tbody> <tr> <td></td> <td>1st Parameter</td> <td>2nd Parameter</td> </tr> <tr> <td>toJson</td> <td>12.5</td> <td>100.0</td> </tr> <tr> <td>forEach</td> <td>66.7</td> <td>100.0</td> </tr> <tr> <td>module</td> <td>98.1</td> <td>100.0</td> </tr> <tr> <td>equals</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <td>isDefined</td> <td>0.0</td> <td>-</td> </tr> <tr> <td>isString</td> <td>0.0</td> <td>-</td> </tr> <tr> <td>isUndefined</td> <td>0.0</td> <td>-</td> </tr> <tr> <td>copy</td> <td>31.0</td> <td>66.7</td> </tr> <tr> <td>element</td> <td>43.4</td> <td>-</td> </tr> </tbody> </table> Table 6.4: Parameter Suggestion Accuracy for Async <table> <thead> <tr> <th>Function Name</th> <th>Suggestion Accuracy (%)</th> <th>Number of API calls</th> </tr> </thead> <tbody> <tr> <td></td> <td>1st Parameter</td> <td>2nd Parameter</td> </tr> <tr> <td>times</td> <td>100.0</td> <td>100.0</td> </tr> <tr> <td>eachSeries</td> <td>66.7</td> <td>100.0</td> </tr> <tr> <td>parallel</td> <td>100.0</td> <td>100.0</td> </tr> <tr> <td>series</td> <td>66.7</td> <td>100.0</td> </tr> <tr> <td>waterfall</td> <td>100.0</td> <td>50.0</td> </tr> <tr> <td>eachLimit</td> <td>40.0</td> <td>66.7</td> </tr> <tr> <td>tryEach</td> <td>100.0</td> <td>100.0</td> </tr> <tr> <td>map</td> <td>50.0</td> <td>100.0</td> </tr> <tr> <td>each</td> <td>0.0</td> <td>100.0</td> </tr> <tr> <td>timesLimit</td> <td>0.0</td> <td>100.0</td> </tr> <tr> <td>whilst</td> <td>100.0</td> <td>100.0</td> </tr> </tbody> </table> To find out why the accuracy for different parameters varies so widely, we manually check all the cases where the accuracy is 0.0%. 
After manual analysis, we find that the reasons for these failures can be classified into 4 types:

1. The parameter is imported from another file or library.
2. The parameter is contained within a dictionary.
3. The parameter is a property of an object.
4. It is difficult to infer the type of the right parameter from the given code context.

Table 6.5: Parameter Suggestion Accuracy for Zlib

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Suggestion Accuracy (%)</th>
<th>Number of API calls</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>deflate</td>
<td>100.0</td>
<td>0.0</td>
</tr>
<tr>
<td>gzip</td>
<td>66.7</td>
<td>0.0</td>
</tr>
<tr>
<td>deflateRaw</td>
<td>100.0</td>
<td>0.0</td>
</tr>
</tbody>
</table>

Table 6.6: Overall Parameter Suggestion Accuracy for Target Libraries

<table>
<thead>
<tr>
<th>Library</th>
<th>Overall Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lodash</td>
<td>67.0</td>
</tr>
<tr>
<td>Vue.js</td>
<td>57.7</td>
</tr>
<tr>
<td>AngularJS</td>
<td>64.3</td>
</tr>
<tr>
<td>Async</td>
<td>81.3</td>
</tr>
<tr>
<td>Zlib</td>
<td>41.7</td>
</tr>
<tr>
<td>Overall</td>
<td>64.6</td>
</tr>
</tbody>
</table>

The statistics of the reasons are displayed in Table 6.7. The major reasons are that the parameter is an object property and that it is difficult to infer the type of the right parameter. Solving these problems requires better type analysis techniques for JavaScript source code, which takes significant effort, so we leave it as possible future work.

Our documentation-based API parameter suggestion approach can give good suggestions in most cases. On average, the probability that the correct parameter is in our generated suggestion list is 64.6%.
Table 6.7: Reasons for Fail Cases

<table>
<thead>
<tr>
<th>Reason</th>
<th>Count</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Imported</td>
<td>6</td>
<td>18.2%</td>
</tr>
<tr>
<td>Dictionary member</td>
<td>1</td>
<td>3.0%</td>
</tr>
<tr>
<td>Object property</td>
<td>13</td>
<td>39.4%</td>
</tr>
<tr>
<td>Type not detected</td>
<td>13</td>
<td>39.4%</td>
</tr>
</tbody>
</table>

6.2 RQ2: How many suggestions does the approach generate for input source code files?

To answer RQ2, we compute the median and maximum number of suggestions for each API function parameter. The median reflects the efficiency of our approach in the average case, and the maximum shows the worst case. These results for the different target libraries are displayed in Table 6.8, Table 6.9, Table 6.10, Table 6.11, and Table 6.12, respectively.

Table 6.8: Number of Parameter Suggestions Statistics for Lodash

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Median of Suggestions</th>
<th>Maximum of Suggestions</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>filter</td>
<td>9</td>
<td>5</td>
</tr>
<tr>
<td>reduce</td>
<td>8</td>
<td>1</td>
</tr>
<tr>
<td>some</td>
<td>7</td>
<td>5</td>
</tr>
<tr>
<td>times</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>transform</td>
<td>7</td>
<td>1</td>
</tr>
<tr>
<td>partition</td>
<td>7</td>
<td>5</td>
</tr>
<tr>
<td>find</td>
<td>5</td>
<td>4</td>
</tr>
<tr>
<td>map</td>
<td>9</td>
<td>4</td>
</tr>
<tr>
<td>remove</td>
<td>6</td>
<td>3</td>
</tr>
<tr>
<td>every</td>
<td>13</td>
<td>8</td>
</tr>
</tbody>
</table>

Table 6.9: Number of Parameter Suggestions Statistics for Vue.js

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Median of Suggestions</th>
<th>Maximum of Suggestions</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Para</td>
<td>2nd Para</td>
</tr>
<tr>
<td>extend</td>
<td>2</td>
<td>-</td>
</tr>
<tr>
<td>filter</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>component</td>
<td>1</td>
<td>10</td>
</tr>
<tr>
<td>mixin</td>
<td>1</td>
<td>-</td>
</tr>
<tr>
<td>directive</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>compile</td>
<td>1</td>
<td>-</td>
</tr>
<tr>
<td>use</td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>delete</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>set</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>nextTick</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>

Table 6.10: Number of Parameter Suggestions Statistics for AngularJS

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Median of Suggestions</th>
<th>Maximum of Suggestions</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>toJson</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>forEach</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>module</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>equals</td>
<td>16</td>
<td>17</td>
</tr>
<tr>
<td>isDefined</td>
<td>13</td>
<td>-</td>
</tr>
<tr>
<td>isString</td>
<td>3</td>
<td>-</td>
</tr>
<tr>
<td>isUndefined</td>
<td>3</td>
<td>-</td>
</tr>
<tr>
<td>copy</td>
<td>13</td>
<td>29</td>
</tr>
<tr>
<td>element</td>
<td>2</td>
<td>-</td>
</tr>
</tbody>
</table>

From the detailed tables, we find that the median number of suggestions is usually smaller than or equal to 10 (an acceptable number of suggestions), which means our approach does not give too many suggestions overall. However, the maximum number of suggestions for a parameter can be much larger.
It can even be as high as 57 in the worst case.

Table 6.11: Number of Parameter Suggestions Statistics for Async

| Function Name | Median of Suggestions | | | | Maximum of Suggestions | | | |
|---------------|-----------------------|---|---|---|-----------------------|---|---|---|
| | 1st Para | 2nd Para | 3rd Para | 4th Para | 1st Para | 2nd Para | 3rd Para | 4th Para |
| times | 4 | 14 | 14 | - | 4 | 14 | 14 | - |
| eachSeries | 3 | 13 | 13 | - | 4 | 13 | 13 | - |
| parallel | 3 | 10 | - | - | 8 | 18 | - | - |
| series | 10 | 21 | - | - | 18 | 30 | - | - |
| waterfall | 4 | 19 | - | - | 4 | 19 | - | - |
| eachLimit | 5 | 2 | 20 | 20 | 20 | 7 | 30 | 30 |
| tryEach | 1 | 11 | - | - | 1 | 11 | - | - |
| map | 10 | 11 | 11 | - | 11 | 11 | 11 | - |
| each | 1 | 4 | 4 | - | 1 | 4 | 4 | - |
| timesLimit | 4 | 5 | 29 | 29 | 4 | 5 | 29 | 29 |
| whilst | 8 | 8 | 8 | - | 8 | 8 | 8 | - |

So for such cases with too many suggestions, our approach still needs to be refined. We also draw a bar chart (Figure 6.1) to display the distribution of the numbers of suggestions. In Figure 6.1, the blue bars show the distribution for only the successful cases from Section 6.1, and the red bars show the distribution for all cases. The numbers on the horizontal axis represent the number of suggestions; we aggregate numbers greater than or equal to 15 together as ">= 15". The height of each bar represents how often a given number of suggestions occurs. For example, the first blue bar in Figure 6.1 means that 21.4% of parameter predictions give only 1 suggestion when considering only successful cases. Comparing the blue and red bars, we do not see any significant difference. Over 60% of predictions suggest at most 5 variables, and over 80% suggest at most 10 variables. Both distributions show that our approach is much more likely to give a smaller number of suggestions, which is more efficient and more helpful for developers.
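The per-parameter statistics used to answer RQ2 reduce to a median and a maximum over the suggestion-list sizes observed for that parameter. A minimal sketch with made-up counts:

```javascript
// Sketch: median and maximum of suggestion-list sizes for one API parameter.
// The counts are made-up illustrative values.
const suggestionCounts = [2, 9, 5, 7, 3];

// Standard median: middle element of the sorted list, or the mean of the
// two middle elements when the list has even length.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

console.log(median(suggestionCounts));      // 5
console.log(Math.max(...suggestionCounts)); // 9
```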
Therefore, in general our approach is efficient and usually does not generate an excessive number of suggestions.

Our documentation-based API parameter suggestion approach usually does not generate too many suggestions. On average, over 60% of predictions suggest at most 5 variables, and over 80% suggest at most 10 variables.

Table 6.12: Number of Parameter Suggestions Statistics for Zlib

<table>
<thead>
<tr>
<th>Function Name</th>
<th>Median of Suggestions</th>
<th>Maximum of Suggestions</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>1st Parameter</td>
<td>2nd Parameter</td>
</tr>
<tr>
<td>deflate</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>gzip</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>deflateRaw</td>
<td>11</td>
<td>6</td>
</tr>
</tbody>
</table>

Figure 6.1: Distribution of Numbers of Suggestions

Chapter 7 Threats to Validity

7.1 Data Set Selection

In the data collection process, we collect the documentation information for 5 third-party JavaScript libraries and 1,000 JavaScript projects from GitHub. For this research, we focused on a single programming language: JavaScript. While our approach can be generalized to other languages, since it does not require any JavaScript-specific properties, we suspect that our experimental results might be quite different for other programming languages. Since our approach is based on documentation analysis, the quality of API documentation is an important factor that impacts our experimental results. Because we want our results to be representative, we choose our target libraries based on their community popularity and frequency of use. However, this still introduces some bias, since most well-known and popular third-party libraries are also well-documented. This bias is hard to avoid; all we can do is strike a balance. We believe our sample of libraries is representative of most JavaScript APIs, since we chose them based on their popularity.
However, there are far more JavaScript APIs in the real world, so further research is required to get a more complete picture of typical API usage. Similarly, although we tried to avoid bias when collecting testing projects by picking the top 1,000 JavaScript projects on GitHub according to their star ratings, there are many more projects in the real world, and not all of them are on GitHub. We also notice that not all of the API functions in the five libraries are covered by our testing projects. Therefore, further research on more real-world projects could be conducted to obtain more generalizable results.

7.2 API Documentation Information Collection

Threats may also come from our data mining and analysis techniques. To collect documentation information for each library, we used a combination of web crawler scripts and manual analysis. Any error in copying the documentation information would influence the accuracy of our results, so we examined the JSON files of the manually created documentation information to make sure no such errors exist.

Chapter 8 Conclusion and Future Works

8.1 Conclusion

APIs are highly beneficial to modern software development, as they free software developers from writing repetitive code and provide more robust implementations. They improve the efficiency of software development. However, learning how to use an API effectively can be slow and tedious. When an unfamiliar API library is to be used for the first time, the developer must typically pause their current task to chase down documentation and code samples that describe the input, output, and intended use of the desired API methods. Since developers use APIs so frequently in their daily work, the time they spend searching for API information and learning has a considerable impact on their productivity. To address this usability barrier caused by unfamiliar libraries, this thesis proposes an approach to predict the parameters when calling an API function.
Our approach preprocesses API documentation, helps developers extract possible parameter candidates from the code context, and presents them as suggestions. Unlike an existing approach [44], ours is based on API documentation analysis, so it does not require many example use cases to learn from. Moreover, [44] can only be used for Java projects, while our approach can be used for both statically-typed and dynamically-typed languages. To evaluate the accuracy of our approach, we analyzed over 200,000 files from the top 1,000 JavaScript projects on GitHub. We parsed every file and predicted the parameters of every function belonging to any of 5 popular JavaScript libraries; 1,681 instances of target functions were discovered. We compare each prediction to the parameter that the developer actually used in the source code file to compute the accuracy. On average, the probability that the correct parameter is in our generated suggestion list is 64.6%.

8.2 Future Works

For future work, we plan to explore the following aspects:

Applying to other programming languages: Although we test our approach on JavaScript libraries and projects, it can be applied to other programming languages as well, because it does not require any language-specific properties. We believe that this approach can also perform well on other languages, but future experiments are needed to validate this.

API documentation quality: The quality of API documentation can influence documentation-based API suggestion approaches; on the other hand, these documentation-based approaches can help examine and improve the quality of API documentation.

Bibliography
[41973, 45152, null], [45152, 48396, null], [48396, 52016, null], [52016, 52807, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2217, true], [2217, 5517, null], [5517, 8381, null], [8381, 11586, null], [11586, 14266, null], [14266, 15992, null], [15992, 18938, null], [18938, 22281, null], [22281, 25015, null], [25015, 27382, null], [27382, 29682, null], [29682, 32662, null], [32662, 35645, null], [35645, 38791, null], [38791, 41973, null], [41973, 45152, null], [45152, 48396, null], [48396, 52016, null], [52016, 52807, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 52807, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 52807, null]], "pdf_page_numbers": [[0, 2217, 1], [2217, 5517, 2], [5517, 8381, 3], [8381, 11586, 4], [11586, 14266, 5], [14266, 15992, 6], [15992, 18938, 7], [18938, 22281, 8], [22281, 25015, 9], [25015, 27382, 10], [27382, 29682, 11], [29682, 32662, 12], [32662, 35645, 13], [35645, 38791, 14], [38791, 41973, 15], [41973, 45152, 16], [45152, 48396, 17], [48396, 52016, 18], [52016, 52807, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 52807, 0.20207]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
e4941ad0e6734c18a6e6d4787986a3bc16122267
Automatic Verification of Functions with Accumulating Parameters

Published in: Journal of Functional Programming (Document Version: Publisher's PDF, also known as Version of Record)

ANDREW IRELAND
Department of Computing & Electrical Engineering, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, Scotland (e-mail: h.ireland@hw.ac.uk)

ALAN BUNDY
Department of Artificial Intelligence, University of Edinburgh, 80 South Bridge, Edinburgh EH1 1HN, Scotland (e-mail: bundy@ed.ac.uk)

Abstract

Proof by mathematical induction plays a crucial role in reasoning about functional programs. A generalization step often holds the key to discovering an inductive proof. We present a generalization technique which is particularly applicable when reasoning about functional programs involving accumulating parameters. We provide empirical evidence for the success of our technique and show how it is contributing to the ongoing development of a parallelizing compiler for Standard ML.

1 Introduction and motivations

Functional programs, by their very nature, are highly amenable to formal methods of reasoning.
This has been exploited within the formal verification community, where the majority of theorem proving based tools have a strong functional bias (Boyer and Moore, 1979; Boyer and Moore, 1988; Bundy et al., 1990; Owre et al., 1992; Kapur and Zhang, 1995; ORA, 1996; Hutter and Sengler, 1996; Kaufmann and Moore, 1997). Proof by mathematical induction plays a crucial role in reasoning about recursively defined functions. The generalization of an inductive conjecture often holds the key to discovering a proof. We present an automatic generalization technique which is particularly applicable when reasoning about functional programs involving accumulating parameters. We are partly motivated by a research project\(^1\) in which a parallelizing compiler for Standard ML (SML) is being developed. This project builds directly upon previous work on the development of parallel systems from functional prototypes (Michaelson and Scaife, 1995). Transformation rules for SML will play an important part within the compilation process. It is a goal of this project to support the formal verification of these transformation rules by embedding a theorem proving capability within the compiler. We see the work presented here as providing the basis for achieving this goal.

---
\(^1\) EPSRC grant GR/L42889: Parallelising compilation of Standard ML through prototype instrumentation and transformation.

The paper is structured as follows. In section 2, background material on the problem and our approach is presented. An analysis of our prototype technique, which we call the 'basic critic', is given in section 3. This analysis provides the motivation for our extended technique, which is documented in sections 4, 5 and 6. The implementation and testing of the extended technique are discussed in section 7, where particular attention is given to a verification obligation generated by the parallelizing compiler project mentioned above.
Related and future work are outlined in sections 8 and 9, respectively. Finally, we draw our conclusions in section 10.

2 Background

2.1 Accumulator Generalization

The introduction of accumulator parameters is a well documented (Henderson, 1980; Bird and Wadler, 1988; Turner, 1991; Bird, 1998) technique for deriving efficient functional programs. To illustrate the basic idea we use list reversal, a standard textbook example (Henderson, 1980). Consider the following naive definition of list reversal:

\[
\begin{align*}
reverse(nil) &= nil \\
reverse(X :: Y) &= app(reverse(Y), X :: nil)
\end{align*}
\]

where \(::\) and \(app\) denote list construction and concatenation respectively. An equivalent, but more efficient, version is derived by introducing an additional 'accumulator' parameter, i.e.

\[
\begin{align*}
rev(nil, Z) &= Z \\
rev(X :: Y, Z) &= rev(Y, X :: Z)
\end{align*}
\]

The resulting function \(rev\) is tail-recursive. By exploiting the direct correspondence between tail-recursion and iteration, further efficiency gains can be achieved by purely mechanical means.

2.1.1 The verification problem

The correctness of the transformation given above is of obvious concern. Establishing the formal correctness, however, is not a purely mechanical process. It requires us to prove an inductive conjecture of the form:

\[ \forall t : \text{list}(A), reverse(t) = rev(t, nil) \] (1)

In this paper, we are concerned with proving such inductive conjectures automatically. A naive attempt at proving (1) by structural induction on the list \(t\) fails. The failure occurs in the step case, where we have a proof obligation of the form:

\[ reverse(t) = rev(t, nil) \vdash app(reverse(t), h :: nil) = rev(t, h :: nil) \]

Note that the conclusion fails to match the hypothesis because it contains mismatching term structures, i.e. \(app(\ldots, h :: nil)\) on the left-hand side and \(h :: \ldots\)
on the right-hand side. The problem is that the induction hypothesis is not strong enough, i.e. it only tells us about the behaviour of \(rev\) when its accumulator parameter is set to \(nil\). The failed proof attempt can be overcome by generalizing the conjecture. The generalization involves the introduction of a new universally quantified variable into the conjecture, i.e.

\[ \forall t : \text{list}(A), \forall l : \text{list}(A), app(reverse(t), l) = rev(t, l) \] (2)

We refer to this as accumulator generalization. The generalized conjecture provides a stronger induction hypothesis which enables the step case proof to succeed. The need for generalization represents a major obstacle to the automatic verification of functional programs. A generalization step is underpinned by the cut-rule of inference. In a goal-directed framework, therefore, a generalization introduces an infinite branching point into the search space. It is known (Kreisel, 1965) that the cut-elimination theorem does not hold for inductive theories. Consequently, heuristics for controlling generalization play an important role in the automation of inductive proof.

2.1.2 Our approach

Returning to the list reversal example, the accumulator parameter provides a strong hint as to where the new universal variable should occur within the generalized conjecture. However, even with this elementary example, additional guidance is required if the process is to be fully automated. For instance, how is the introduction of the \(app(\ldots, l)\) term structure on the left-hand side of (2) motivated? We address this question through the use of a meta-level reasoning technique. Our starting point is a meta-level description of the common structure which characterizes an inductive proof. When a proof attempt fails, this description can then be used to bridge the gap between the failure and a subsequent successful proof.
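Although no substitute for the inductive proof, the definitions and the two conjectures above can be exercised on ground instances. The following is a minimal Python sketch (the function names are ours, chosen for illustration, not the paper's):

```python
def reverse_naive(xs):
    # reverse(nil) = nil ; reverse(X :: Y) = app(reverse(Y), X :: nil)
    if not xs:
        return []
    return reverse_naive(xs[1:]) + [xs[0]]

def rev(xs, acc):
    # rev(nil, Z) = Z ; rev(X :: Y, Z) = rev(Y, X :: Z)   (tail-recursive)
    if not xs:
        return acc
    return rev(xs[1:], [xs[0]] + acc)

# Conjecture (1): reverse(t) = rev(t, nil)
for t in ([], [1], [1, 2, 3], list(range(10))):
    assert reverse_naive(t) == rev(t, [])

# Generalized conjecture (2): app(reverse(t), l) = rev(t, l)
for t in ([], [1, 2, 3]):
    for l in ([], [9, 8]):
        assert reverse_naive(t) + l == rev(t, l)
```

Note that such ground testing can refute a candidate generalization but never establish it; that remains the job of the inductive proof.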
We argue that having such a description provides a handle on the infinite search space generated by the generalization problem. Our approach relies upon the richness of the background theory, i.e. the lemmata which are available to the theorem prover. However, as will be shown in section 7, this is not as restrictive as it might first appear.

2.2 Proof methods and critics

We build upon the notion of a proof plan (Bundy, 1988) and tactic-based theorem proving (Gordon et al., 1979). While a tactic encodes the low-level structure of a family of proofs, a proof plan expresses the high-level structure. In terms of automated deduction, a proof plan guides the search for a proof. That is, given a collection of general purpose tactics, the associated proof plan can be used automatically to tailor a special purpose tactic to prove a particular conjecture. The basic building blocks of proof plans are methods. Using a meta-logic, methods express the preconditions for tactic application. The benefits of proof plans can be seen when a proof attempt goes wrong. Experienced users of theorem provers, such as NQTHM, are used to intervening when they observe the failure of a proof attempt. Such interventions typically result in the user generalizing their conjecture or supplying additional lemmata to the prover. Through the notion of a proof critic (Ireland, 1992), we have attempted to automate this process. Critics provide the proof planning framework with an exception handling mechanism which enables the partial success of a proof plan to be exploited in the search for a proof. The mechanism works by allowing proof patches to be associated with different patterns of precondition failure. We previously reported (Ireland and Bundy, 1996) various ways of patching inductive proofs based upon the partial success of the ripple method described below.
2.3 Method for guiding inductive proof

In the context of mathematical induction, the ripple method plays a pivotal role in guiding the search for a proof. The ripple method controls the selective application of rewrite rules in order to prove step case goals. Schematically, a step case goal can be represented as follows:

\[ \cdots \forall b'. P[a, b'] \cdots \vdash P[c_1(a), b] \]

where \( c_1(a) \) denotes the induction term. To achieve a step case goal the conclusion must be rewritten so as to allow the hypothesis to be applied:

\[ \cdots \forall b'. P[a, b'] \cdots \vdash c_2(P[a, c_3(b)]) \]

Note that, to apply the induction hypothesis, we must first instantiate \( b' \) to be \( c_3(b) \), which gives rise to a goal of the form:

\[ \cdots P[a, c_3(b)] \cdots \vdash c_2(P[a, c_3(b)]) \]

The need to instantiate an inductive hypothesis in this way is commonplace in inductive proof, and plays a crucial role in our technique. We return to this point at the end of this section. Syntactically, an induction hypothesis and conclusion are very similar. More formally, the hypothesis can be expressed as an embedding within the conclusion (Smaill and Green, 1996). Restricting the rewriting of the conclusion so as to preserve this embedding maximizes the chances of applying an induction hypothesis. This is the basic idea behind the ripple method. The application of the ripple method, or rippling, makes use of meta-level annotations called wave-fronts to distinguish the term structures which cause the mismatch between the hypothesis and conclusion. Conversely, any term structure within the conclusion which corresponds to the hypothesis is called the skeleton. In general, embedded within each wave-front will be parts of the skeleton term structure; these are known as wave-holes. We use a box and an underline to represent wave-fronts and wave-holes respectively, e.g.
an annotated version of the goal given above takes the form:

\[ \cdots \forall b'. P[a, b'] \cdots \vdash P[\boxed{c_1(\underline{a})}^{\uparrow}, [b]] \]

We refer to a wave-front and its associated wave-hole, e.g. \(\boxed{c_1(\underline{a})}^{\uparrow}\), as a wave-term. The arrows are used to indicate the direction in which wave-fronts can be moved through the term structure. A term structure with the annotations removed is called the erasure. In order to distinguish terms within the conclusion which can be matched by universal variables in the hypothesis we use annotations called sinks, i.e. \([\ldots]\). As will be explained below, sinks play an important role in identifying the need for accumulator generalization. A successful application of the ripple method can be characterized as follows:

\[ \cdots \forall b'. P[a, b'] \cdots \vdash \boxed{c_2(\underline{P[a, [c_3(b)]]})}^{\uparrow} \]

Note that the term \(c_3(b)\), i.e. the instantiation for \(b'\), occurs within a sink, so the wave-front annotation is no longer required. Rippling restricts rewriting to a syntactic class of rules called wave-rules. Wave-rules make progress towards eliminating wave-fronts while preserving skeleton term structure. A wave-rule which achieves the ripple given above takes the form:\(^2\)

\[ P[\boxed{c_1(\underline{X})}^{\uparrow}, Y] \Rightarrow \boxed{c_2(\underline{P[X, c_3(Y)]})}^{\uparrow} \] (3)

Wave-rules are derived automatically from definitions and logical properties like substitution, associativity and distributivity, etc. All wave-rules are available during the process of planning a proof. In general, a successful ripple will require multiple wave-rule applications. There are three basic patterns of rippling, which are summarized schematically in figure 1. The preconditions for applying wave-rules are given in figure 2. We draw the reader's attention to precondition 4, and in particular the notion of sinkable wave-fronts. It is the failure of this precondition within the context of a syntactically applicable wave-rule which provides the trigger for our proof patching technique.
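The idea that rewriting must preserve the skeleton can be illustrated with a toy check. The following Python sketch is our own simplified stand-in for the embeddings of Smaill and Green (1996): terms are encoded as nested tuples, and the check tests whether a skeleton survives inside a rewritten conclusion by skipping over the extra wave-front structure:

```python
# Terms as nested tuples: ("f", arg1, ..., argn) for compound terms,
# plain strings for constants and variables.
def skeleton_embeds(s, t):
    """True if skeleton s embeds in term t, skipping wave-front structure in t."""
    if s == t:
        return True
    if isinstance(t, tuple):
        # Heads and arities match: embed argument-wise.
        if (isinstance(s, tuple) and s[0] == t[0] and len(s) == len(t)
                and all(skeleton_embeds(a, b) for a, b in zip(s[1:], t[1:]))):
            return True
        # Otherwise skip t's head (a wave-front) and dive into one argument.
        return any(skeleton_embeds(s, arg) for arg in t[1:])
    return False

# Skeleton reverse(t) = rev(t, l) embeds in the rippled conclusion
# app(reverse(t), h::nil) = rev(t, h::l): the app(...) and ::(...)
# wave-fronts are skipped while the skeleton is preserved.
skel = ("=", ("reverse", "t"), ("rev", "t", "l"))
concl = ("=", ("app", ("reverse", "t"), ("::", "h", "nil")),
         ("rev", "t", ("::", "h", "l")))
assert skeleton_embeds(skel, concl)
```

A rewrite step that destroyed part of the skeleton (say, replacing the first argument of `rev` with `nil`) would make the check fail, which is precisely the kind of step rippling forbids.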
For a complete description of rippling and the generation of wave-rules see Bundy et al. (1993) and Basin and Walsh (1996). To illustrate one of the basic patterns of rippling, an inductive proof of conjecture (2) is presented. Structural induction on the list \(t\) gives rise to a trivial base case. We focus here on the step case, where the induction hypothesis takes the form:

\[ \forall l' : \text{list}(A).\ app(reverse(t), l') = rev(t, l') \] (4)

and the annotated conclusion takes the form:

\[ app(reverse(\boxed{h :: \underline{t}}^{\uparrow}), [l]) = rev(\boxed{h :: \underline{t}}^{\uparrow}, [l]) \] (5)

---
\(^2\) We use \(\Rightarrow\) to denote rewrite rules and \(\rightarrow\) to denote logical implication.

[Figure 1 (schematic omitted) depicts the three basic patterns: rippling-out, rippling-sideways and rippling-in.] An outward ripple involves the movement of wave-fronts into less nested term tree positions. A sideways ripple moves wave-fronts between distinct branches in the term tree, while an inward ripple moves wave-fronts into more nested term tree positions. In general, a wave-rule may combine all three forms.

Fig. 1. The three basic rippling patterns.

The proof of the step case requires the definitions of reverse, rev and app, as well as the associativity of app.
These definitions give rise to 49 wave-rules, which include:

\[ reverse(\boxed{X :: \underline{Y}}^{\uparrow}) \Rightarrow \boxed{app(\underline{reverse(Y)}, X :: nil)}^{\uparrow} \] (6)

\[ rev(\boxed{X :: \underline{Y}}^{\uparrow}, Z) \Rightarrow rev(Y, \boxed{X :: \underline{Z}}^{\downarrow}) \] (7)

\[ app(\boxed{app(\underline{X}, Y)}^{\uparrow}, Z) \Rightarrow app(X, \boxed{app(Y, \underline{Z})}^{\downarrow}) \] (8)

Wave-rule (6) applies on the left-hand side of (5) to give:

\[ app(\boxed{app(\underline{reverse(t)}, h :: nil)}^{\uparrow}, [l]) = rev(\boxed{h :: \underline{t}}^{\uparrow}, [l]) \] (9)

Applying wave-rule (7) on the right-hand side of (9) gives:

\[ app(\boxed{app(\underline{reverse(t)}, h :: nil)}^{\uparrow}, [l]) = rev(t, [h :: l]) \]

Finally, wave-rule (8) applies on the left-hand side, giving:

\[ app(reverse(t), [app(h :: nil, l)]) = rev(t, [h :: l]) \]

Note that the term structure delimited by the sink annotation on the left-hand side simplifies to give:

\[ app(reverse(t), [h :: l]) = rev(t, [h :: l]) \] (10)

A match between (10) and (4) is achieved by instantiating \(l'\) to be \(h :: l\). This completes the step case proof.

Input sequent:

\[ H \vdash G[f_1(\boxed{c_1(\ldots)}^{\uparrow}, f_2(\ldots), f_3(\boxed{c_2(\ldots)}^{\uparrow}))] \]

Method preconditions:

1. there exists a subterm \( T \) of \( G \) which contains wave-front(s), e.g. \( f_1(\boxed{c_1(\ldots)}^{\uparrow}, f_2(\ldots), f_3(\boxed{c_2(\ldots)}^{\uparrow})) \)

2. there exists a wave-rule which matches \( T \), e.g. \( C \to f_1(\boxed{c_1(\underline{X})}^{\uparrow}, Y, Z) \Rightarrow \boxed{c_5(\underline{f_1(X, \boxed{c_3(\underline{Y})}^{\downarrow}, \boxed{c_4(\underline{Z})}^{\downarrow})})}^{\uparrow} \)

3. the wave-rule condition follows from the context, e.g. \( H \vdash C \)

4. resulting inward directed wave-fronts are potentially removable, e.g. \( \boxed{c_3(\underline{f_3(\ldots)})}^{\downarrow} \) (sinkable) or \( \boxed{c_4(\underline{f_3(\boxed{c_2(\ldots)}^{\uparrow})})}^{\downarrow} \) (cancellable)

Note that a wave-front is sinkable if it is inward directed and one or more of its wave-holes contains a sink.
A wave-front is cancellable if it is inward directed and one or more of its wave-holes contains an outward directed wave-front.

Output sequent:

\[ H \vdash G[\boxed{c_5(\underline{f_3(\ldots)})}^{\downarrow}\ (\text{sinkable}),\ \boxed{c_4(\underline{f_3(\boxed{c_2(\ldots)}^{\uparrow})})}^{\downarrow}\ (\text{cancellable})] \]

Note that, for a wave-rule to be applicable, both object-level and meta-level term structures must match.

Fig. 2. Preconditions for applying wave-rules.

2.4 A critic for discovering generalizations

In terms of the preconditions for applying wave-rules, the need for an accumulator generalization can be explained by the failure of precondition 4, i.e. a missing sink (see figure 2). Schematically, this failure pattern can be characterized as follows:

\[ \cdots P[a, d] \cdots \vdash P[\boxed{c_1(\underline{a})}^{\uparrow}, d] \]

where \( d \) denotes a term which does not contain any sinks. We call the occurrence of \( d \) a blockage term because it blocks the sideways ripple, in this case the application of wave-rule (3). The identification of a blockage term triggers the generalization critic. The associated proof patch introduces schematic terms into the goal to partially specify the occurrences of a sink variable. In the schematic example presented above, this leads to a patched goal of the form:

\[ \cdots \forall l'. P[a, \mathcal{M}(l')] \cdots \vdash P[\boxed{c_1(\underline{a})}^{\uparrow}, \mathcal{M}([l])] \]

where \(\mathcal{M}\) denotes a second-order meta-variable. Note that wave-rule (3) is now applicable, giving rise to a refined goal of the form:

\[ \cdots \forall l'. P[a, \mathcal{M}(l')] \cdots \vdash \boxed{c_2(\underline{P[a, c_3(\mathcal{M}([l]))]})}^{\uparrow} \]

The expectation is that an inward ripple will determine the identity of \(\mathcal{M}\).
Relating this proof patch to the list reversal example, an inductive proof of conjecture (1) gives rise to the following failure pattern:

\[ \cdots reverse(t) = rev(t, nil) \cdots \vdash \boxed{app(\underline{reverse(t)}, h :: nil)}^{\uparrow} = rev(\boxed{h :: \underline{t}}^{\uparrow}, nil) \] (11)

Note that the occurrence of nil on the right-hand side is a blockage term because it prevents the application of wave-rule (7). The patched goal takes the form:

\[ \cdots \forall l' : \text{list}(A).\ \mathcal{M}_2(reverse(t), l') = rev(t, \mathcal{M}_1(l')) \cdots \vdash \mathcal{M}_2(reverse(\boxed{h :: \underline{t}}^{\uparrow}), [l]) = rev(\boxed{h :: \underline{t}}^{\uparrow}, \mathcal{M}_1([l])) \] (12)

Using wave-rule (6), the goal becomes:

\[ \cdots \forall l' : \text{list}(A).\ \mathcal{M}_2(reverse(t), l') = rev(t, \mathcal{M}_1(l')) \cdots \vdash \mathcal{M}_2(\boxed{app(\underline{reverse(t)}, h :: nil)}^{\uparrow}, [l]) = rev(\boxed{h :: \underline{t}}^{\uparrow}, \mathcal{M}_1([l])) \]

Wave-rule (7) is now applicable and gives rise to a goal of the form:

\[ \cdots \forall l' : \text{list}(A).\ \mathcal{M}_2(reverse(t), l') = rev(t, \mathcal{M}_1(l')) \cdots \vdash \mathcal{M}_2(\boxed{app(\underline{reverse(t)}, h :: nil)}^{\uparrow}, [l]) = rev(t, \boxed{h :: \underline{\mathcal{M}_1([l])}}^{\downarrow}) \]

Our approach to the problem of constraining the instantiation of schematic terms will be detailed in section 5. We will refer to the above generalization as the basic critic.

3 Limitations of the basic critic

The basic critic described in section 2.4 has proved very successful (Ireland and Bundy, 1996). Through our empirical testing, however, a number of limitations have been observed:

1. Certain classes of example require the introduction of multiple sink variables. The basic critic only deals with single sink variables.

2. The basic critic was designed in the context of equational proofs. A sink variable is assumed to occur on both sides of an equation.
On the side opposite to the blockage term it is assumed that, in the resulting generalized term structure, the sink (auxiliary variable) will occur as an argument of the outermost functor.

3. Sink term occurrences which are motivated by blockage terms are more constrained than those which are not. This is not exploited by the basic critic during the search for a generalization.

From these observations a number of natural extensions to the basic critic emerged. These extensions are described in the following sections.

4 Specifying sink terms

To exploit the distinction between different sink term occurrences hinted at above, we extend the meta-level annotations to include the notions of primary and secondary wave-fronts. A wave-front which provides the basis for a sideways ripple, but which is not applicable because of the presence of a blockage term, is designated to be primary. All other wave-fronts are designated to be secondary. To illustrate, consider the following schematic conclusion:

\[ g(f(\boxed{c_1(\underline{a}, b)}^{\uparrow}, d), \boxed{c_1(\underline{a}, b)}^{\uparrow}) \] (13)

and the following wave-rules:

\[ f(\boxed{c_1(\underline{X}, Y)}^{\uparrow}, Z) \Rightarrow f(X, \boxed{c_2(\underline{Z}, Y)}^{\downarrow}) \] (14)

\[ g(X, \boxed{c_1(\underline{Y}, Z)}^{\uparrow}) \Rightarrow \boxed{c_3(\underline{g(X, Y)}, Z)}^{\uparrow} \] (15)

Assuming that the occurrence of \(d\) in (13) denotes a blockage term, then wave-rule (14) is not applicable. Wave-rule (15) is applicable and enables an outwards ripple, i.e.

\[ \boxed{c_3(\underline{g(f(\boxed{c_1(\underline{a}, b)}^{\uparrow}, d), a)}, b)}^{\uparrow} \]

Using subscripts\(^3\) to denote primary and secondary wave-fronts, the analysis presented above gives rise to the following classification of the wave-fronts appearing in (13):

\[ g(f(\boxed{c_1(\underline{a}, b)}^{\uparrow_1}, d), \boxed{c_1(\underline{a}, b)}^{\uparrow_2}) \] (16)

Note that the rippling of the secondary wave-fronts is undone. This increases the number of generalizations which may be subsequently discovered.
Relating the notion of primary and secondary wave-fronts to blocked goal (11) gives rise to:

\[ reverse(\boxed{h :: \underline{t}}^{\uparrow_2}) = rev(\boxed{h :: \underline{t}}^{\uparrow_1}, nil) \]

---
\(^3\) Note that wave-rules must also take account of the extension to the wave-front annotations.

4.1 Primary sink terms

For each primary wave-front an associated sink term is introduced. We refer to these as primary sink terms. The position of a primary sink term corresponds to the position of the blockage term within the conclusion. The structure of a primary sink term is a function of the blockage term and is computed as follows:

\[ \text{pri}(X) = \begin{cases} \mathcal{M}_i([l_i]) & \text{if } X \text{ is a constant} \\ \mathcal{M}_i(X, [l_i]) & \text{if } X \text{ is a wave-front} \\ F(\text{pri}(Y_1), \ldots, \text{pri}(Y_n)) & \text{otherwise, where } X \equiv F(Y_1, \ldots, Y_n) \end{cases} \]

Note that \(\mathcal{M}_i\) denotes a higher-order meta-variable while \(l_i\) denotes a new object-level variable. In general, distinct primary sink terms may or may not need to share the same object-level variable. This represents a choice point in the construction of primary sink terms. Assuming \(d\) denotes a constant, then \(\text{pri}(d)\) evaluates to \(\mathcal{M}_1([l_1])\). Substituting this sink term for \(d\) in (16) gives a schematic conclusion of the form:

\[ g(f(\boxed{c_1(\underline{a}, b)}^{\uparrow_1}, \mathcal{M}_1([l_1])), \boxed{c_1(\underline{a}, b)}^{\uparrow_2}) \] (17)

Relating the general notion of primary sink terms to the specific list reversal example gives:

\[ reverse(\boxed{h :: \underline{t}}^{\uparrow_2}) = rev(\boxed{h :: \underline{t}}^{\uparrow_1}, \mathcal{M}_1([l_1])) \] (18)

4.2 Secondary sink terms

For each secondary wave-front we eagerly attempt to apply a sideways ripple by introducing occurrences of the variables associated with the primary sink terms. These occurrences are specified again using schematic term structures and are called secondary sink terms. The construction of secondary sink terms is as follows.
For each subterm, \(X\), of the conclusion which contains a secondary wave-front, we compute a secondary sink term as follows:

\[ \text{sec}(X) = \mathcal{M}_i(X, [l_1], \ldots, [l_m]) \]

where \(l_1, \ldots, l_m\) denote the vector of variables generated by the construction of the primary sink terms. To illustrate, consider again the schematic conclusion (17). Taking \(X\) to be \(\boxed{c_1(\underline{a}, b)}^{\uparrow_2}\), then the process of introducing secondary sink terms gives rise to a new schematic conclusion of the form:

\[ g(f(\boxed{c_1(\underline{a}, b)}^{\uparrow_1}, \mathcal{M}_1([l_1])), \mathcal{M}_2(\boxed{c_1(\underline{a}, b)}^{\uparrow_2}, [l_1])) \]

Note that the selection of \(X\) represents a choice point in the construction of secondary sink terms. In the case of (17), another alternative instantiation for \(X\) exists, i.e. \(g(\ldots, \boxed{c_1(\underline{a}, b)}^{\uparrow_2})\). Again relating the general notion to the specific list reversal example gives rise to two alternative patches of the form:

\[ reverse(\mathcal{M}_2(\boxed{h :: \underline{t}}^{\uparrow_2}, [l_1])) = rev(\boxed{h :: \underline{t}}^{\uparrow_1}, \mathcal{M}_1([l_1])) \]

\[ \mathcal{M}_2(reverse(\boxed{h :: \underline{t}}^{\uparrow_2}), [l_1]) = rev(\boxed{h :: \underline{t}}^{\uparrow_1}, \mathcal{M}_1([l_1])) \] (19)

Note that the second of these corresponds to the patched goal (12).

5 Instantiating sink terms

The process of instantiating the sink terms introduced by the generalization critic is guided by the application of wave-rules. In general, the application of wave-rules in the presence of schematic term structure requires higher-order unification. Our implementation therefore exploits a higher-order unification procedure (see section 7). In this application, however, we only require second-order unification. The application of wave-rules in the presence of second-order meta-variables within the goal-term requires narrowing, i.e. rewriting where free variables in the redex can be instantiated through unification with wave-rules.
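The construction pri of section 4.1 can be sketched operationally. In this Python sketch the term encoding, the class names, and the choice of an always-fresh sink variable per primary sink term are our own assumptions (the paper notes that sharing sink variables is itself a choice point):

```python
class Meta:
    """Allocates fresh meta-variable / sink-variable name pairs M_i, l_i."""
    counter = 0
    def __init__(self):
        Meta.counter += 1
        self.name = f"M{Meta.counter}"
        self.sink = f"l{Meta.counter}"

class Wave:
    """Marks a wave-front in the term tree."""
    def __init__(self, term):
        self.term = term

def pri(x):
    # pri(X) = M_i(l_i)                 if X is a constant
    #        = M_i(X, l_i)              if X is a wave-front
    #        = F(pri(Y1), ..., pri(Yn)) otherwise, where X = F(Y1, ..., Yn)
    if isinstance(x, str):                      # constant (or variable)
        m = Meta()
        return (m.name, m.sink)
    if isinstance(x, Wave):                     # wave-front
        m = Meta()
        return (m.name, x, m.sink)
    f, *args = x                                # compound term F(Y1, ..., Yn)
    return (f,) + tuple(pri(y) for y in args)

# The blockage term d (a constant) becomes the schematic sink term M1(l1):
Meta.counter = 0
assert pri("d") == ("M1", "l1")
```

Applied to a compound blockage term, the recursion pushes schematic sink terms down to each leaf, e.g. `pri(("g", "d", "e"))` yields `("g", ("M1", "l1"), ("M2", "l2"))`.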
Below we describe how the meta-level annotations can be used to constrain the unification process, and discuss the benefits of this approach.

5.1 Constraining second-order unification

Our procedure for constraining the application of rewrite rules, within the context of skeleton term structure which contains second-order meta-variables, involves three steps. The applicability of a wave-rule of the form \( L \Rightarrow R \) to a wave-term \( W \) is computed as follows:

1. For each wave-front within \( L \) there exists a wave-front within \( W \) which unifies, giving a substitution \( \theta_1 \).

2. The erasures of \( L' \) and \( W' \) unify, giving a substitution \( \theta_2 \), where \( L' = L \cdot \theta_1 \) and \( W' = W \cdot \theta_1 \).

3. For each sink term \( T \) of the form \( \mathcal{M}_j([l_1], \ldots, [l_n]) \) within \( L' \cdot \theta_2 \) there exists a substitution \( \theta_3 \) such that \( (T \cdot \theta_3) = [l_k] \) \((1 \leq k \leq n)\).

If successful, then \( W \) is replaced by \( ((R \cdot \theta_1) \cdot \theta_2) \cdot \theta_3 \). Note that in the unification of wave-fronts both object-level and meta-level term structure must match, e.g. the wave-fronts \( \boxed{c(\underline{X}, Y)}^{\uparrow_N} \) and \( \boxed{c(\underline{f(a)}, g(b, c))}^{\uparrow_2} \) match, giving rise to the substitution \( \{ X \mapsto f(a), Y \mapsto g(b, c), N \mapsto 2 \} \). The constraints of rippling significantly reduce the number of unifiers which are considered, as will be shown in section 5.3. Our procedure does not, however, eliminate choice completely. In particular, the application of the procedure may give rise to choice with respect to the selection of wave-fronts (step 1) and sinks (step 3). We use an iterative deepening search strategy to enable alternative branches within the search space to be explored. Second-order unification will, in general, lead to a non-terminating sequence of inward directed wave-fronts. For this reason, projections are used to eagerly terminate inward ripples.
A projection is applied whenever an inward directed wave-front occurs as the immediate super-term of a sink term. The strategy of eager instantiation of meta-variables may of course give rise to an over-generalization, i.e. a non-theorem. A counter-example checker is used to filter candidate instantiations of the schematic conjecture. The checker evaluates ground instances of the conjecture, typically corresponding to base cases. On detecting a non-theorem the planner backtracks and explores alternative branches within the search space. A complementary instantiation strategy, appropriate when meta-variables occur outside the scope of our technique, is discussed in section 9.

5.2 List reversal revisited

Returning to the list reversal example, consider again patch (19). In the context of the induction hypothesis \[ \forall l' : \text{list}(A). \; \mathcal{M}_2(\text{reverse}(t), l') = \text{rev}(t, \mathcal{M}_1(l')) \] rippling by wave-rules (6) and (7) gives a goal of the form: \[ \mathcal{M}_2(\text{app}(\text{reverse}(t), h :: \text{nil}), [l]) = \text{rev}(t, h :: \mathcal{M}_1([l])) \] Now consider the wave-term on the left-hand side. Using the annotated unification procedure, wave-rule (8) now applies to give: \[ \forall l' : \text{list}(A). \; \text{app}(\text{reverse}(t), \mathcal{M}_3(t, l')) = \text{rev}(t, \mathcal{M}_1(l')) \] Note that \( \mathcal{M}_2 \) is instantiated to be \( \lambda x.\lambda y.\text{app}(x, \mathcal{M}_3(x, y)) \). By the process of eager instantiation \( \mathcal{M}_1 \) becomes \( \lambda x.x \) and \( \mathcal{M}_3 \) becomes \( \lambda x.\lambda y.y \), giving: \[ \forall l' : \text{list}(A). \; \text{app}(\text{reverse}(t), l') = \text{rev}(t, l') \] Simplifying the sink on the left-hand side and instantiating \( l' \) to be \( h :: l \) enables the application of the induction hypothesis. Note that the resulting generalization corresponds to conjecture (2).
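The counter-example checker described above can be sketched concretely: evaluate ground instances of a candidate generalization and reject any candidate that fails. This is an illustrative Python model (lists for Tamil-free object-level lists, the definitional equations of `app`, `reverse` and `rev` transcribed directly), not the planner's checker.

```python
# Sketch of the counter-example filter: evaluate ground (base-case style)
# instances of a candidate equation and reject it if any instance is false.

def app(x, y):
    return x + y

def reverse(x):
    return x[::-1]

def rev(x, acc):
    # tail-recursive reverse with an accumulating parameter
    return acc if not x else rev(x[1:], [x[0]] + acc)

def survives(candidate, cases):
    """True iff the candidate equation holds on every ground test case."""
    return all(candidate(t, l) for t, l in cases)

cases = [([], []), ([1], [2]), ([1, 2], [3]), ([1, 2, 3], [4, 5])]
good = lambda t, l: app(reverse(t), l) == rev(t, l)       # the found generalization
bad  = lambda t, l: reverse(app(t, l)) == rev(t, l)       # an over-generalization
print(survives(good, cases))   # True
print(survives(bad, cases))    # False
```

The sound candidate, corresponding to the generalization found above, passes every ground case, while the over-generalized variant is detected and would trigger backtracking.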
5.3 Benefits of meta-level guidance

Using the list reversal example we now consider the benefits of using meta-level annotations to constrain the unification process. We compare the branching rates when applying annotated and unannotated rewrite rules. As mentioned in section 2.3, the list reversal example gives rise to 49 wave-rules. In the case of goal-term (20) the annotated unification procedure eliminates all but the following 4 wave-rules: \[ \text{app}(\text{app}(X, Y)^N, Z) \Rightarrow \text{app}(X, \text{app}(Y, Z)^N) \quad (21) \] \[ \text{app}(X, \text{app}(Y, Z)^N) \Rightarrow \text{app}(\text{app}(X, Y), Z)^N \] \[ X :: \text{app}(Y, Z)^N \Rightarrow \text{app}(X :: Y, Z)^N \] \[ \text{app}(\text{reverse}(Y), X :: \text{nil})^N \Rightarrow \text{reverse}(X :: Y)^N \] Note that only the first three of these will actually apply, since the fourth is ruled out by precondition 4 of the ripple method, i.e. sink-ability. The 3 remaining applicable wave-rules should then be compared with the results of unannotated unification, which again gives rise to 18 applicable rewrite rules.

While the annotations reduce the number of wave-rules considered for unification, they also constrain the number of unifiers. To illustrate, consider goal-term (20) and the left-hand side of wave-rule (21). Unification without the constraints of annotations generates two possible unifiers, i.e. \(\lambda x.\lambda y.\text{app}(x, \text{app}(y, z))\) and \(\lambda x.\lambda y.\text{app}(h :: t, \text{app}(x, y))\). Note that the first is based upon projection while the second uses imitation. The imitation, however, violates the key property of rippling, i.e. skeleton preservation (see section 2.3), so is rejected by the annotated unification procedure.

6 Organizing the search space

In controlling the search for a generalization we place a number of constraints on the proof planning process:

- Planning in the context of schematic term structures requires a bounded search strategy.
We use an iterative deepening strategy to explore the space of alternative ripple proofs.

- Backtracking over the construction of sink terms deals with the choice point issues raised in section 4.
- Since primary sink terms are more constrained than secondary sink terms, priority is given to the rippling of primary wave-fronts.

7 Implementation and testing

The extensions to the basic critic described above directly address the limitations highlighted in section 3:

1. The linkage of blockage terms with the introduction of primary sink terms within the schematic conjecture addresses the issue of multiple sink variables.
2. The issue of positioning auxiliary sink variables is dealt with by the ability to revise the construction of secondary sink terms.
3. By extending the meta-logic to include the notions of primary and secondary wave-fronts we are able to exploit the observation that certain sink terms are more constrained than others during the search for generalizations.

Our extended critic has been implemented and integrated within the CLAM proof planner (Bundy et al., 1990). The implementation makes use of the higher-order features of \( \lambda \)-Prolog (Miller and Nadathur, 1988). The results presented in Ireland and Bundy (1996) for the basic critic were replicated by the extended critic. The extended critic, however, discovered generalizations which the basic critic missed. Moreover, a number of new examples were generalized by the extended critic for which the application of the basic critic resulted in failure. Our results are documented in the tables given in Appendix B. The example conjectures for which the extended critic improves upon the performance of the basic critic are presented in Table 1. All the examples require accumulator generalization and therefore cannot be proved automatically by other inductive theorem provers such as NQTHM (Boyer and Moore, 1979; Boyer and Moore, 1988).
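The bounded search strategy of section 6 can be sketched as iterative deepening over rewrite steps, so that shallower ripple proofs are found before deeper ones. The toy rewrite system below is an assumption for illustration; this is not the planner's search code.

```python
# Iterative deepening over an abstract rewrite search: depth-limited DFS,
# repeated with increasing depth bounds.

def depth_limited(state, goal, succ, limit):
    """Return a rewrite path from state to goal of length <= limit, or None."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in succ(state):
        path = depth_limited(nxt, goal, succ, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal, succ, max_depth=10):
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, succ, limit)
        if path is not None:
            return path
    return None

# Toy string-rewriting system: rules ab -> ba and b -> c.
rules = [('ab', 'ba'), ('b', 'c')]
def succ(s):
    return [s[:i] + rhs + s[i + len(lhs):]
            for lhs, rhs in rules
            for i in range(len(s)) if s.startswith(lhs, i)]

print(iterative_deepening('ab', 'ca', succ))
```

Because each depth bound is exhausted before the next is tried, the strategy returns a shortest rewrite derivation and cannot be trapped by a non-terminating branch, which is the property the bounded search relies on.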
The correspondence between conjectures and generalized conjectures is recorded in Table 2. The time taken to discover each generalization using the extended critic is also given in Table 2. The lemmata used in motivating the generalizations are presented in Table 3, while the actual generalized conjectures are given in Table 4. All the generalized conjectures are computed automatically.

Our technique relies upon the existence of appropriate lemmata. However, as can be seen from Table 3, the lemmata are relatively general purpose, i.e. properties such as associativity and distributivity. Moreover, we have previously shown how our approach to failure analysis has enabled us to automatically generate such lemmata (Ireland and Bundy, 1996). This gives the opportunity for lemmata discovered during one part of a proof effort to be used to motivate a generalization within another.

To place our contribution within the wider context of functional programming, we focus upon conjecture C10 (see Table 1), which arose within the parallelising compiler project mentioned in section 1. C10 is the proof obligation generated by the verification of an SML transformation rule which specifies an equivalence between a single and a distributed application of the \textit{map} function, i.e. \[ \forall t : \text{list}(A). \forall f : A \rightarrow B. \forall n : \mathbb{N}. \; \text{map}(f, t) = \text{reduce}(\lambda x.\lambda y.\text{app}(x, y), \text{map}(\lambda x.\text{map}(f, x), \text{split}_1(1, n, \text{nil}, t))) \quad (22) \] Such equivalences enable the correspondence which exists between higher-order functions and generic parallel constructs to be exploited during the parallelisation of SML code. The definitions associated with (22) are included as rewrite rules within Appendix A, while the corresponding SML code is given in figure 3. An inductive proof of (22) requires an accumulator generalization.
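The equivalence (22) can be made concrete by transcribing the Appendix A definitions into executable form. The following is an illustrative Python transcription (lists model object-level lists; `split1` threads the counter and accumulator exactly as `split`\(_1\) does), not the SML code of figure 3.

```python
# Executable model of the definitions behind equivalence (22).

from functools import reduce as fold

def app(x, y):
    return x + y

def split1(v, w, x, ys):
    """split_1: chunk ys into sublists of length w, threading counter v and
    accumulator x, per the rewrite rules in Appendix A."""
    if not ys:
        return [x]
    y, z = ys[0], ys[1:]
    if v > w:
        return [x] + split1(2, w, [y], z)
    return split1(v + 1, w, x + [y], z)

def split(n, ys):
    return split1(1, n, [], ys)

def mapf(f, xs):
    return [f(x) for x in xs]

def reduce_app(xss):
    # Appendix A's reduce is a right fold; a left fold agrees here because
    # app is associative with unit nil.
    return fold(app, xss, [])

t, f, n = [1, 2, 3, 4, 5], (lambda x: x * 10), 2
lhs = mapf(f, t)                                          # single map
rhs = reduce_app(mapf(lambda x: mapf(f, x), split(n, t)))  # distributed map
print(lhs == rhs)
```

Running this on sample data shows the single and distributed applications of `map` agreeing, which is exactly the equivalence the transformation rule asserts and the proof obligation (22) formalizes.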
Our extended critic generates a schematic conjecture of the form: \[ \forall t : \text{list}(A). \forall f : A \rightarrow B. \forall n : \mathbb{N}. \forall l_1 : \mathbb{N}. \forall l_2 : \text{list}(A). \] \[ \text{map}(f, \mathcal{M}_3(t, l_1, l_2)) = \] \[ \text{reduce}(\lambda x. \lambda y. \text{app}(x, y), \text{map}(\lambda x. \text{map}(f, x), \text{split}_1(\mathcal{M}_1(l_1), n, \mathcal{M}_2(l_2), t))) \] The subsequent proof planning instantiates this schematic conjecture, giving rise to a generalized conjecture of the form: \[ \forall t : \text{list}(A). \forall f : A \rightarrow B. \forall n : \mathbb{N}. \forall l_1 : \mathbb{N}. \forall l_2 : \text{list}(A). \] \[ \text{map}(f, \text{app}(l_2, t)) = \] \[ \text{reduce}(\lambda x. \lambda y. \text{app}(x, y), \text{map}(\lambda x. \text{map}(f, x), \text{split}_1(l_1, n, l_2, t))) \quad (23) \] Note that the generalization involves the introduction of two new universally quantified variables \( l_1 \) and \( l_2 \). To summarize, the ripple method in conjunction with the extended critic is able to automatically generate and verify (23) by analysing the failure to prove (22) directly.

8 Related work

In his thesis, Aubin (1976) presents a technique for discovering accumulator generalizations based upon the failure of an unfolding strategy. Essentially, he used the mismatch between the conclusion and hypothesis to suggest the introduction of what we call primary sinks. With regard to secondary sinks, Aubin appeals to a notion of an equation being 'balanced', i.e. a sink should occur on both sides of an equality.

Hesketh, in her thesis (Hesketh, 1991), tackled the problem of accumulator generalization in the context of proof planning and rippling. Her approach, however, did not deal with multiple sinks. By introducing the primary and secondary classification of wave-fronts we believe that our approach provides greater control in the search for generalizations. This becomes crucial as the complexity\(^4\) of examples increases.
In addition, we use sink annotations explicitly in selecting potential projections for higher-order meta-variables. Hesketh's work, however, was much broader than ours in that she unified a number of different kinds of generalization. Moreover, she was also able to synthesize tail-recursive functions given equivalent naive recursive definitions (Hesketh et al., 1992).

An alternative to our approach of annotated unification is presented in (Hutter and Kohlhase, 1997), where essentially the structure preservation constraints of rippling are embedded within the unification algorithm. This approach, however, has not been applied to the problem of generalization, so a direct comparison is not possible.

9 Future work

A limitation of the technique as implemented is that it only deals with wave-fronts which contain single wave-holes. This restricts us to proofs which involve a single induction hypothesis. In principle, we see no reason why this restriction should not be removed in the future. One of the goals of the parallelising SML compiler project is the automatic synthesis of missing transformation rules. We see the work presented here as a starting point for this synthesis task.

Our technique is not restricted to reasoning about functional programs. For instance, we believe that it subsumes the procedure described by Pierre (1995) for generalizing hardware specifications. In addition, by exploiting the close relationship which exists between induction and iteration, we have shown (Ireland and Stark, 1997) how our generalization critic can play a role in the automatic discovery of tail invariants (Kaldewaij, 1990). We plan to investigate these connections further.

The critic mechanism was motivated by a desire to build an automatic theorem prover which was more robust than conventional provers. We believe, however, that the critic mechanism also provides a basis for developing effective user interaction.
An interactive version of the critic mechanism has been implemented (Ireland et al., 1997) which invites a user to complete the instantiation of meta-variables. This represents ongoing work which, as observed in section 5.1, complements the generalization technique presented here.

10 Conclusion

The search for inductive proofs cannot avoid the problem of generalization. In this paper we describe extensions to a proof critic for automatically generalizing inductive conjectures. The ideas presented here build upon a technique for patching proofs reported in Ireland and Bundy (1996). These extensions have significantly improved the performance of the technique while preserving the spirit of the original proof patch. Our implementation of the extended critic has been tested on the verification of functional programs with some promising results. More generally, we believe that our technique has wider application in terms of both software and hardware verification.

\(^4\) That is, as the number of definitions and lemmata available to the prover increases.

Acknowledgements

The research reported in this paper was supported by EPSRC grants GR/J80702, GR/L11724, GR/L42889 as well as ARC grant 438. We would like to thank Greg Michaelson for providing example conjectures and Lincoln Wallen for drawing our attention to the connection between tail invariants and our generalization critic. Thanks also go to David Basin, Alan Smaill, Maria McCann, Julian Richardson, Toby Walsh, Richard Boulton, Dieter Hutter and anonymous CADE-13 and JFP referees for their constructive feedback on this paper. An earlier shorter version of this paper appeared in the proceedings of CADE-13.
Appendix A: Definitional rewrite rules \[ \begin{align*} \text{reverse}(\text{nil}) & \Rightarrow \text{nil} \\ \text{reverse}(X :: Y) & \Rightarrow \text{app}(\text{reverse}(Y), X :: \text{nil}) \\ \text{rev}(\text{nil}, Z) & \Rightarrow Z \\ \text{rev}(X :: Y, Z) & \Rightarrow \text{rev}(Y, X :: Z) \\ \text{atend}(X, \text{nil}) & \Rightarrow X :: \text{nil} \\ \text{atend}(X, Y :: Z) & \Rightarrow Y :: \text{atend}(X, Z) \\ \text{map}(X, \text{nil}) & \Rightarrow \text{nil} \\ \text{map}(X, Y :: Z) & \Rightarrow X(Y) :: \text{map}(X, Z) \\ \text{reduce}(X, \text{nil}) & \Rightarrow \text{nil} \\ \text{reduce}(X, Y :: Z) & \Rightarrow X(Y, \text{reduce}(X, Z)) \\ \text{foldr}(W, X, \text{nil}) & \Rightarrow X \\ \text{foldr}(W, X, Y :: Z) & \Rightarrow W(Y, \text{foldr}(W, X, Z)) \\ \text{filter}(X, \text{nil}) & \Rightarrow \text{nil} \\ X(Y) \Rightarrow \text{filter}(X, Y :: Z) & \Rightarrow Y :: \text{filter}(X, Z) \\ \neg X(Y) \Rightarrow \text{filter}(X, Y :: Z) & \Rightarrow \text{filter}(X, Z) \\ \text{sum}(\text{nil}) & \Rightarrow 0 \\ \text{sum}(X :: Y) & \Rightarrow \text{sum}(Y) + X \\ \text{prod}(\text{nil}) & \Rightarrow 1 \\ \text{prod}(X :: Y) & \Rightarrow \text{prod}(Y) \ast X \\ \text{tsum}(\text{nil}, Z) & \Rightarrow Z \\ \text{tsum}(X :: Y, Z) & \Rightarrow \text{tsum}(Y, Z + X) \end{align*} \] \footnote{We assume standard recursive definitions for \textit{even} and \textit{odd} as well as for list concatenation (\textit{app}), deletion (\textit{del}) and membership (\textit{mem}).} \[ \text{tprod}(\text{nil}, Z) \Rightarrow Z \] \[ \text{tprod}(X :: Y, Z) \Rightarrow \text{tprod}(Y, Z \cdot X) \] \[ \text{sp}(\text{nil}, Y, Z) \Rightarrow \langle Y, Z \rangle \] \[ \text{sp}(W :: X, Y, Z) \Rightarrow \text{sp}(X, W + Y, W \cdot Z) \] \[ \text{sp}_2(\text{nil}, Y, Z) \Rightarrow \langle Y, Z \rangle \] \[ \text{sp}_2(W :: X, Y, Z) \Rightarrow \text{sp}_2(X, Y + W, Z \cdot W) \] \[ \text{evenel}(\text{nil}) \Rightarrow \text{nil} \] \[
\text{odd}(X) \Rightarrow \text{evenel}(X :: Y) \Rightarrow \text{evenel}(Y) \] \[ \text{even}(X) \Rightarrow \text{evenel}(X :: Y) \Rightarrow X :: \text{evenel}(Y) \] \[ \text{oddel}(\text{nil}) \Rightarrow \text{nil} \] \[ \text{odd}(X) \Rightarrow \text{oddel}(X :: Y) \Rightarrow X :: \text{oddel}(Y) \] \[ \text{even}(X) \Rightarrow \text{oddel}(X :: Y) \Rightarrow \text{oddel}(Y) \] \[ \text{perm}(\text{nil}, \text{nil}) \Rightarrow \text{true} \] \[ \text{perm}(\text{nil}, X :: Y) \Rightarrow \text{false} \] \[ \text{partition}(\text{nil}, Y, Z) \Rightarrow \text{app}(Y, Z) \] \[ \text{even}(W) \Rightarrow \text{partition}(W :: X, Y, Z) \Rightarrow \text{partition}(X, \text{atend}(W, Y), Z) \] \[ \text{odd}(W) \Rightarrow \text{partition}(W :: X, Y, Z) \Rightarrow \text{partition}(X, Y, \text{atend}(W, Z)) \] \[ \text{split}_1(V, W, X, \text{nil}) \Rightarrow X :: \text{nil} \] \[ V > W \Rightarrow \text{split}_1(V, W, X, Y :: Z) \Rightarrow X :: \text{split}_1(2, W, Y :: \text{nil}, Z) \] \[ V \leq W \Rightarrow \text{split}_1(V, W, X, Y :: Z) \Rightarrow \text{split}_1(V + 1, W, \text{atend}(Y, X), Z) \] \[ \text{split}(X, Y) \Rightarrow \text{split}_1(1, X, \text{nil}, Y) \] **Appendix B: Experimental results** Table 1. Example conjectures <table> <thead> <tr> <th>No</th> <th>Conjecture</th> </tr> </thead> <tbody> <tr> <td>C1</td> <td>reverse(X) = rev(X, nil)</td> </tr> <tr> <td>C2</td> <td>rev(rev(X, nil), nil) = reverse(reverse(X))</td> </tr> <tr> <td>C3</td> <td>perm(reverse(X), rev(X, nil))</td> </tr> <tr> <td>C4</td> <td>rev(reverse(X, nil)) = reverse(reduce(\lambda x. \lambda y.z, atend(X, y), X))</td> </tr> <tr> <td>C5</td> <td>app(evenel(X), oddel(X)) = partition(X, nil, nil)</td> </tr> <tr> <td>C6</td> <td>app(filter(\lambda x. even(x), X), filter(\lambda x.
odd(x), X)) = partition(X, nil, nil)</td> </tr> <tr> <td>C7</td> <td>sp(X, 0, 1) = \langle \text{sum}(X), \text{prod}(X) \rangle</td> </tr> <tr> <td>C8</td> <td>\langle \text{tsum}(X, 0), \text{tprod}(X, 1) \rangle = \langle \text{foldr}(\lambda x. \lambda y.(x + y), 0, X), \text{foldr}(\lambda x. \lambda y.(x \cdot y), 1, X) \rangle</td> </tr> <tr> <td>C9</td> <td>sp_2(X, 0, 1) = \langle \text{foldr}(\lambda x. \lambda y.(x + y), 0, X), \text{foldr}(\lambda x. \lambda y.(x \cdot y), 1, X) \rangle</td> </tr> <tr> <td>C10</td> <td>map(F, X) = reduce(\lambda x. \lambda y.app(x, y), map(\lambda x.map(F, x), split_1(1, N, \text{nil}, X)))</td> </tr> </tbody> </table> Table 2. Performance of the extended generalization critic <table> <thead> <tr> <th>No</th> <th>Generalizations (Timings)</th> </tr> </thead> <tbody> <tr> <td>C1</td> <td>G1 (7.9) G2 (7.7)</td> </tr> <tr> <td>C2</td> <td>G3 (25.0) G4 (17.1) G5 (105.8) G6 (15.8) G7 (14.3) G8 (16.5)</td> </tr> <tr> <td></td> <td>G9 (11.4) G10 (15.3)</td> </tr> <tr> <td>C3</td> <td>G11 (8.7) G12 (7.4) G13 (7.6)</td> </tr> <tr> <td>C4</td> <td>G14 (10.2)</td> </tr> <tr> <td>C5</td> <td>G15 (108.1)</td> </tr> <tr> <td>C6</td> <td>G16 (95.4)</td> </tr> <tr> <td>C7</td> <td>G17 (24.7)</td> </tr> <tr> <td>C8</td> <td>G18 (42.9)</td> </tr> <tr> <td>C9</td> <td>G19 (30.1)</td> </tr> <tr> <td>C10</td> <td>G20 (68.3)</td> </tr> </tbody> </table> The timings are given in CPU seconds and were obtained using a SICStus Prolog implementation of CLAM running on a Sun UltraSPARC. The figures represent the time taken to compute the alternative instantiations of each conjecture schema. Note that in the case of C1 and C2 the basic critic does not discover G2, G8 and G9, while it fails completely on conjectures C3 through to C10. However, the extended critic succeeds on all the conjectures given in Table 1. Table 3.
Lemmata used to motivate generalizations <table> <thead> <tr> <th>No</th> <th>Lemma</th> </tr> </thead> <tbody> <tr> <td>L1</td> <td>(\text{app(app(X, Y), Z)} = \text{app(X, app(Y, Z))})</td> </tr> <tr> <td>L2</td> <td>(\text{app(app(X, Y :: nil), Z)} = \text{app(X, Y :: Z)})</td> </tr> <tr> <td>L3</td> <td>(\text{reverse(app(X, Y :: nil)) = Y :: reverse(X)})</td> </tr> <tr> <td>L4</td> <td>(\text{app(X, Y :: Z)} = \text{app(atend(Y, X), Z)})</td> </tr> <tr> <td>L5</td> <td>(X + (Y + Z) = (X + Y) + Z)</td> </tr> <tr> <td>L6</td> <td>(X * (Y * Z) = (X * Y) * Z)</td> </tr> <tr> <td>L7</td> <td>(\text{map(W, app(X, Y :: Z)) = app(map(W, X), map(W, app(Y :: nil, Z)))})</td> </tr> </tbody> </table> Table 4. Generalized conjectures <table> <thead> <tr> <th>No</th> <th>Generalization</th> <th>Lemmata</th> </tr> </thead> <tbody> <tr> <td>G1</td> <td>( \text{app}(\text{reverse}(X), Y) = \text{rev}(X, Y) )</td> <td>L1</td> </tr> <tr> <td>G2</td> <td>( \text{reverse}(\text{rev}(Y, X)) = \text{rev}(X, Y) )</td> <td></td> </tr> <tr> <td>G3</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{app}(\text{reverse}(Y), \text{reverse}(\text{reverse}(X))) )</td> <td>L2&amp;L3</td> </tr> <tr> <td>G4</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(Y, \text{reverse}(\text{reverse}(X))) )</td> <td>L3</td> </tr> <tr> <td>G5</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(\text{rev}(\text{reverse}(Y)), \text{reverse}(\text{reverse}(X))) )</td> <td>L2&amp;L3</td> </tr> <tr> <td>G6</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(\text{rev}(\text{reverse}(Y)), \text{reverse}(\text{reverse}(X))) )</td> <td>L3</td> </tr> <tr> <td>G7</td> <td>( \text{rev}(\text{rev}(X, \text{reverse}(\text{reverse}(Y))), \text{nil}) = \text{rev}(Y, \text{reverse}(\text{reverse}(X))) )</td> <td>L3</td> </tr> <tr> <td>G8</td> <td>( \text{rev}(\text{rev}(X, \text{reverse}(Y)), \text{nil}) = \text{rev}(\text{rev}(Y, \text{reverse}(\text{reverse}(X)))) )</td> 
<td>L3</td> </tr> <tr> <td>G9</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(\text{rev}(\text{app}(\text{reverse}(X), Y)), \text{rev}(X, Y)) )</td> <td>L1</td> </tr> <tr> <td>G10</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(\text{rev}(\text{rev}(Y, X)), \text{rev}(X, Y)) )</td> <td></td> </tr> <tr> <td>G11</td> <td>( \text{perm}(\text{rev}(\text{rev}(X, Y)), \text{rev}(\text{rev}(X, Y))) )</td> <td>L1</td> </tr> <tr> <td>G12</td> <td>( \text{perm}(\text{rev}(\text{rev}(X, Y)), \text{rev}(\text{rev}(X, Y))) )</td> <td>L4</td> </tr> <tr> <td>G13</td> <td>( \text{rev}(\text{rev}(X, Y), \text{nil}) = \text{rev}(\text{rev}(\text{app}(\text{reverse}(X), Y)), \text{rev}(X, Y)) )</td> <td>L1</td> </tr> <tr> <td>G14</td> <td>( \text{app}(\text{app}(\text{reduce}(\lambda x.\lambda y.\text{atend}(x, y), X), Y)) )</td> <td>L4</td> </tr> <tr> <td>G15</td> <td>( \text{app}(\text{app}(\text{even}(X)), \text{app}(\text{app}(\text{odd}(X)), \text{partition}(X, Y, Z))) ) = ( \text{partition}(X, Y, Z) )</td> <td>L4</td> </tr> <tr> <td>G16</td> <td>( \text{app}(\text{app}(\text{filter}(\lambda x.\text{even}(x), X)), \text{app}(\text{app}(\text{odd}(X), \text{partition}(X, Y, Z))) ) = ( \text{partition}(X, Y, Z) )</td> <td>L4</td> </tr> <tr> <td>G17</td> <td>( \text{sp}(X, Y, Z) = \langle \text{sum}(X) + Y, \text{prod}(X) \times Z \rangle )</td> <td>L5&amp;L6</td> </tr> <tr> <td>G18</td> <td>( \langle \text{sum}(X, Y), \text{tprod}(X, Z) \rangle = \langle Y + \text{foldr}(\lambda x.\lambda y.(x + y), 0, X), Z \times \text{foldr}(\lambda x.\lambda y.(x \times y), 1, X) \rangle )</td> <td>L5&amp;L6</td> </tr> <tr> <td>G19</td> <td>( \text{sp}_2(X, Y, Z) = \langle Y + \text{foldr}(\lambda x.\lambda y.(x + y), 0, X), Z \times \text{foldr}(\lambda x.\lambda y.(x \times y), 1, X) \rangle )</td> <td>L5&amp;L6</td> </tr> <tr> <td>G20</td> <td>( \text{map}(F, \text{app}(Y, X)) = \text{reduce}(\lambda x.\lambda y.\text{app}(x, y), \text{map}(\lambda 
x.\text{map}(F, x), \text{split}_1(Z, W, Y, X))) )</td> <td>L4&amp;L7</td> </tr> </tbody> </table> The lemmata used to suggest generalizations are indicated in the third column. No entry appears if the generalization was discovered using purely definitional rewrite rules.
6. A Transfer Module in Tamil – English MT

In the present chapter, we attempt to design a Transfer module to translate Tamil inflectional categories and their properties into their English equivalences. Based on the preceding two chapters (ch.4 and ch.5), the necessary Transfer Rules are proposed for incorporation in this model. The computational program has been developed on the .NET platform; the programming language is C#. The proposed module would help language technologists involved in the development of a Tamil – English Machine Translation system. The proposed module comprises many sub-modules, which are explained here in detail. At the end of the chapter, a sample program is explained with the necessary screen shots to demonstrate the importance of the study of the inflectional categories and their properties in both Tamil and English.

6.1. Structure of Transfer Module

The automatic translation of Tamil inflected wordforms in a sentence into their English equivalences involves many processes. They are:

1. Tokenization
2. Morphological Parsing
3. POS Tagging
4. Clitics Filtration
5. Derivation of Stem
6. Transfer of Tamil wordforms into English Equivalences

During the above processes, at some points, some decisions may have to be made. They are:

1. After Morphological Parsing and POS Tagging, it has to be decided whether the output is a variable lexeme or not. For example, some determiners such as anta, oru are not variable lexemes. Since the present Transfer module is concerned only with variable lexemes, this decision-making is quite important at this stage.

2. After this, it has to be decided whether the output is an inflectional wordform or not. The variable output may be a derived one such as alakāṇa 'beautiful'. Such derivations, having no inflectional suffixes, should be discarded at this stage.

3. The above inflected wordform may have some clitics.
For example, in paṭittāṇā 'did he study', the final morphemic segment is a clitic -ā. Since the aim of the Transfer module in the present work is to translate the inflectional properties only, any clitic that comes across should be dropped. So, it has to be decided at this stage whether the above output has any clitic.

4. After the process of dropping of clitics above, some of the output may be derived stems. For example, in *alakāṇavaṇai* 'handsome person (acc.)', though the final and its preceding morphemic segments are inflectional morphemes, the stem with which they are attached is a derived adjective (*aḷaku+āṇa*). Since the meaning of the derived stem may not be available in the Lexicon, the derivational stem should be parsed to get the meanings of the root and stem and the meaning of the total derived stem.

All the above processes and decision-making are explained in the following flow chart:

Figure 6.1 Structure of Transfer Module in Tamil – English MT

6.1.1. Input

The input to the process may be any sentence type – Simple, Compound or Complex – provided it is a grammatical one.
That is, the sentence may be a single clause or more than one clause, but it should be grammatically correct. The input should, however, be a single sentence. e.g.

1. *nāṉ anta māṇavaraip pārttēṉ.* 'I saw that student.'
2. *nāṉ anta māṇavaraip pārkkavum pēcavum ceytēṉ.* 'I saw the student and spoke to him.'
3. *nāṉ anta māṇavaraip pārttu, pēciṉēṉ.* 'Having seen that student, I spoke to him.'

(1) is a simple sentence; (2) a compound sentence; (3) a complex sentence. Moreover, at this stage, the given sentence should have been normalized for the subsequent tokenizing process. For example, in the following input, *nāṉ avaraik kinarriliruntu tūkki viṭṭēṉ*, there is a space between the fourth and fifth words. Because of this, the program would consider these words as two separate main verbs – *tūkki* 'having lifted' and *viṭṭēṉ* 'I gave up'. So, the meaning of the sentence would be 'Having lifted him from the well, I gave up'. On the other hand, if there is no space between the above words, it will be *tūkkiviṭṭēṉ*. The first part of this wordform, *tūkki*, would be considered as the verbal participle form of the main verb *tūkku*, and the second part, *viṭṭēṉ*, as the aspectual auxiliary verb. The resultant meaning of this wordform would be 'I had lifted him from the well'.

There are problems with clitics, postpositions and adverbial particles also.

1. *uṇkaḷukku varuvatu tāṇē varum.*
2. *uṇkaḷukku varuvatuttaṇē varum.*

In example (1), where there is a space between the second and third words, the third word *tāṇē* means 'automatically' (adverb) or 'itself' (adverb). In (2), since there is no space between the second and third words, the whole unit stands as a single wordform and its meaning is 'which might come'. Thus, the meanings of the sentences are as follows:

1. *uṇkaḷukku varuvatu tāṇē varum.* 'The one which would come to you would come automatically.' / 'The one which would come to you itself will come.'
2.
*uṅkaḷukku varuvatuttāṉē varum.* ‘That which could come to you only will come, isn’t it?’ In the following example, 1. *avar enatu kaiyaip paṟṟip pēciṉār* ‘Holding my hand, he talked.’ 2. *avar enatu kaiyaippaṟṟip pēciṉār* ‘He talked about my hand.’ The meaning difference is due to whether there is a space between *kaiyai* and *paṟṟi* or not. If there is a space, *paṟṟi* is the verbal participle form of the verb *paṟṟu* ‘to hold’; when there is no space, it is the accusative postposition (*aippaṟṟi*) attached to the noun *kai* ‘hand’. Another example: 1. *uṅkaḷukkup piṭitta vaṇṇam* 2. *uṅkaḷukkup piṭittavaṇṇam* In (1), the second word is the relative participle form of *piṭi* ‘like’ and the third word means ‘colour’. So the meaning of the whole phrase is ‘the colour you like’. In (2), there is no space as in the first phrase. The second word consists of the relative participle form of *piṭi* ‘like’ followed by the adverbial particle *vaṇṇam* ‘as per’. So the meaning of this phrase is ‘as you like’. Here, it is to be mentioned that Sandhi (morphophonemics) is also to be considered in this normalization process. Depending upon the sandhi, the meaning changes, as in the following examples: 1. *avaṉ kattikoṇṭu veṭṭiṉāṉ* ‘He cut with the knife.’ 2. *avaṉ kattikkoṇṭu veṭṭiṉāṉ* ‘He was shouting while he cut.’ In (1), the second word consists of *katti* ‘knife (noun)’ and *koṇṭu* ‘with (postposition)’. The meaning is ‘with the knife’. In (2), between *katti* and *koṇṭu*, there is an internal sandhi *k*. Because of this, here, *koṇṭu* stands for the progressive aspectual marker, and the stem *katti* is the verbal participle form of *kattu* ‘shout’. The above examples clearly establish that the normalization of the input data is essential before it is sent for any analysis by the computer. Otherwise, it would lead to many ambiguities which could not be solved automatically by the computer. In the present program, it is assumed that the input data are normalized. 6.1.2.
**Tokenizer** Here, a *token* stands for an individual wordform. The task of the tokenizer is to identify the tokens (wordforms) available in the sentence. After receiving the input, the tokenizer would analyse it and provide the individual tokens. Here, the space between the words is the only criterion for this process. If there is any punctuation marker, it would be discarded during this process. The output of this tokenizing process is the input to the next process – Morphological Parsing. 6.1.3. **Morphological Parser** The task of this Parser is to segment the individual morphemes of every word, if it contains more than one morpheme. As Tamil is an agglutinative language, all the grammatical affixes attached to a Tamil root/stem are suffixes. The only exception is the demonstrative bound forms *a-*, *i-*, *e-* found in words like *ikkuḻantai* ‘this child’, *akkuḻantai* ‘that child’, *ekkuḻantai* ‘which child’. The present parser can identify these prefix-type demonstrative morphemes. There are two ways to segment Tamil words: one is from left to right; the other is from right to left. If the first is adopted, then the root/stem at the beginning would be segmented first, and then the other suffixes would be identified and segmented one by one. For example, if the input to the parser is the word *paṭittāṉ* ‘studied (he)’, the root *paṭi* would be identified first, then the past tense suffix *tt*, and finally the PNG suffix *āṉ*, and the output would be three parsed morphemes – *paṭi*, *tt* and *āṉ*. If the parsing starts from the right, that is, from the end of the word, the root/stem would be identified only after getting the suffixes. That is, with the above example, first the PNG suffix, then the past tense suffix and finally the root morpheme would be identified. However, the final output would be the same – *paṭi*, *tt* and *āṉ*.
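The segmentation just described can be sketched in a few lines of Python. This is only an illustration of the strategy, not the system built in this work; the ASCII romanization (`patittaan` for *paṭittāṉ*) and the tiny root/suffix tables are assumptions made for the example.

```python
# Toy root and suffix tables in ASCII romanization (hypothetical entries).
ROOTS = {"pati": "Verb"}
SUFFIXES = {"tt": "Past tense", "aan": "PNG (3sg.masc.)"}

def parse_left_to_right(word):
    """Identify the root first (longest matching prefix in the lexicon),
    then peel suffixes off the remainder one by one."""
    for i in range(len(word), 0, -1):          # try the longest prefix first
        root, rest = word[:i], word[i:]
        if root in ROOTS:
            segments = [(root, ROOTS[root])]
            while rest:
                for j in range(len(rest), 0, -1):
                    if rest[:j] in SUFFIXES:
                        segments.append((rest[:j], SUFFIXES[rest[:j]]))
                        rest = rest[j:]
                        break
                else:
                    return None                # remainder not parsable
            return segments
    return None

print(parse_left_to_right("patittaan"))
```

Note that once *paṭi* is found, the remainder is matched only against the suffix table, never again against the root database – which is the efficiency argument made below for preferring the left-to-right approach.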
**Right to Left Parsing**: In the parsing process from right to left, the whole word would first be checked against the root word database – the ‘Lexicon’ – to see whether it is a root word or not. If it is not found in the database, then the process would start to segment the final suffix. In the present example, the PNG suffix *āṉ* would be parsed out. Then, the remaining segment *paṭitt-* has to be checked against the database again to see whether it is a root or not. Since it is not found there, the parsing process would identify and segment the past tense suffix *-tt-*. After its segmentation, once again the remaining *paṭi* has to be checked against the database to decide whether it is a root or not. In the present example, it would be found in the database, and the parsing process would come to an end. **Left to Right Parsing**: If the process starts from left to right, first the whole input would be checked against the database to decide whether it is a root or not. If it is not found in the database, then the parser, with the help of its algorithm, would attempt to identify the root. Once the root is identified, it would not be necessary, during the further segmentation of the other suffixes, to check the remaining segment again and again against the root word database. With the present example, once the segment *paṭi* is identified as the root, it would not be necessary to check the remaining segment *-ttāṉ*, or, after segmenting the tense suffix *-tt-*, the final remaining segment *āṉ*, against the root word database. So, in the present work, the ‘left to right’ approach is adopted. That is, first the root would be identified and then the other suffixes would be parsed. The Morphological Parser consists of the following components: 1. Root word Database 2. Suffix Database 3. Morphotactics of Tamil wordforms 4. Tamil Sandhi (both internal and external Sandhi) ### 6.1.3.1.
Root Word Database structure: Tamil root words (lexicons) are stored with the necessary details along with their English meanings. The root words are classified into five major categories: 1. Noun 2. Verb 3. Adjective 4. Adverb 5. *Itaiccol* (‘middle words’) For verbs, their tense conjugation details are given in the database. Based on the tense suffixes, Tamil verbs have been classified into various groups by grammarians and linguists, with variations among their classifications. The present work follows the conjugation system of the Tamil Lexicon (University of Madras, 1982), in which there are thirteen classes; the irregular verbs are accommodated in the 13th class. 6.1.3.2. Suffix Database structure All the inflectional suffixes identified in Chapter 4 of the present work are put in the database. In addition, for the purpose of parsing, the derivational suffixes are also included here. They are classified into Noun suffixes, Verb suffixes, Adjectival suffixes and Adverbial suffixes. *Itaiccol* won’t take any suffix. **Noun suffixes:** 1. Number (Plural) suffix 2. Fillers 3. Case 4. Postpositions 5. Verbalizers 6. Adjectival suffixes 7. Adverbial suffixes 8. Relative Participle of some of the postpositions 9. NG (Number – Gender) suffixes The first four types of suffixes above are inflectional suffixes and have already been dealt with in detail in Chapter 4. **Verbalizers:** The input to the Parser may include wordforms consisting of a noun plus a verbalizer. e.g. *talaivarāṉār* ‘became a leader’ In this example, the morphological parser would identify the segment *talaivar* as a root noun with the help of the root word database. However, this root noun is here followed by a verbalizer suffix *ā* (it is not the interrogative clitic *ā*). After taking this suffix, the root noun *talaivar* becomes a verb – *talaivarā*. After this, the derived segment *talaivarā* would undergo verb inflection.
In this example, this segment takes the past tense suffix *ṉ* along with one more suffix – the PNG suffix *ār*. Because of this phenomenon, all the verbalizer suffixes (*ā*, *āku*, *ākku*) which are added to nouns are identified and kept in this database. **Adjectival suffixes:** Some wordforms such as *aḻakāṉavaṉai* ‘the handsome person (acc.)’ are noun wordforms. However, in the above example, the noun stem *aḻakāṉavaṉ* is not available in the root noun database, because it is a derived noun, derived from the noun root (*aḻaku*) + adjectival suffix (*āṉa*) + NG suffix (*aṉ*). Only after getting this derived noun wordform is the accusative case suffix added, and the final wordform *aḻakāṉavaṉai* arrived at. In the parsing process of the above example, since the adjectival suffixes and NG suffixes are involved, these suffixes should be available in the noun suffix database. **Adverbial suffixes:** In Tamil, along with the pure adverb wordforms, there are many derived adverb wordforms such as *aḻakākattāṉ*, *vēkamākavā*. Though adverbs are variable lexemes (by taking clitics), they do not undergo any inflectional process. Here, it is to be mentioned that clitics are not inflectional suffixes. The above-mentioned derived adverbs are derived from nouns by taking the adverbial suffix *-āka*. 1. *aḻakākattāṉ = aḻaku (Noun) + āka (Adv. suffix) + (t) + tāṉ (Clitic)* 2. *vēkamākavā = vēkam (Noun) + āka (Adv. suffix) + (v) + ā (Clitic)* Unless the above wordforms are parsed and tagged as adverbs, it could not be determined later whether they are inflected wordforms or not. This knowledge is essential for the present work, since the aim here is to translate the inflectional properties only. Hence the need for including adverbial suffixes along with other suffixes in the noun suffix database. **Relative Participle form of postpositions** In fact, postpositions generally do not undergo inflection. However, some Tamil postpositions take relative participle forms.
There may be some historical reasons behind this which are yet to be analysed, but this is beyond the aim of the present work. e.g. *avaṉaippaṟṟi* ‘about him’ = *avaṉ* ‘he’ + *aippaṟṟi* ‘about’ The above wordform could be followed by some verbs. e.g. *avaṉaippaṟṟip pēciṉēṉ* ‘I talked about him’. The above postposition-inflected noun wordform is further inflected for the relative participle suffix *-a*, and the resultant form is: *avaṉaippaṟṟiya* = *avaṉ* + *aippaṟṟi* + (y) + *a* ‘about him’ Since the above form is a relative participle, it could be followed by noun phrases only.
*avaṉaippaṟṟiya ceyti* ‘the news about him’ So, it is necessary to have the knowledge of the relative participle suffix in noun wordform parsing. NG suffixes: These are already explained in Chapter 4. Verb suffixes: 1. Tense suffixes (including negative) 2. Voice suffixes 3. Aspectual suffixes 4. Modal suffixes 5. Verbal participle suffixes 6. Adjectival participle suffixes 7. Adverbial participle suffixes 8. Relative Participle suffixes 9. PNG suffixes 10. Verbal noun suffixes 11. Participial noun suffixes All the above suffixes have already been dealt with in detail in Chapter 4. 6.1.3.3. Morphotactics Noun wordform <table> <thead> <tr> <th>Noun + (Plural) + (Filler) + (Case) + (Postposition) + (RP) + (NG) + (Clitics)</th> </tr> </thead> </table> e.g. - Noun - *paiyan* ‘boy’ - Noun + Plural - *paiyankal* (*paiyan + kal*) ‘boys’ - Noun + Case - *paiyanai* (*paiyan + ai*) ‘boy (acc.)’ - Noun + Plural + Case - *paiyankalai* (*paiyan + kal + ai*) ‘boys (acc.)’ - Noun + Case + Postposition - *paiyanaipparri* (*paiyan + ai + (p) + parri*) ‘about the boy’ - Noun + Plural + Case + Postposition - *paiyankalaipparri* (*paiyan + kal + ai + (p) + parri*) ‘about the boys’ - Noun + Postposition - *paiyanmitu* (*paiyan + mitu*) ‘on/above/over the boy’ - Noun + Plural + Postposition - *paiyankalmitu* (*paiyan + kal + mitu*) ‘on/above/over the boys’ - Noun + Case + Postposition + RP - *paiyanaipparriya* (*paiyan + ai + (p) + parri + (y) + a*) ‘about the boy’ - Noun + Plural + Case + Postposition + RP - *paiyankalaipparriya* (*paiyan + kal + ai + (p) + parri + (y) + a*) ‘about the boys’ - Noun + Case + Postposition + RP + NG - *paiyanaipparriyatu* (*paiyan + ai + (p) + parri + (y) + a + (a)tu*) ‘(It is) about the boy.’ - Noun + Plural + Case + Postposition + RP + NG - *paiyankalaipparriyatu* (*paiyan + kal + ai + (p) + parri + (y) + a + (a)tu*) ‘(It is) about the boys.’ - Noun + Filler + Case - *marattai* (*maram + attu + ai*) ‘tree (acc.)’ - Noun + Filler + Case + Postposition - *marattaipparri*
(*maram + attu + ai + (p) + parri*) ‘about the tree’ - Noun + Filler + Case + Postposition + RP - *marattaipparriya* (*maram + attu + ai + (p) + parri + (y) + a*) ‘about the tree’ - Noun + Filler + Case + Postposition + RP + NG - *marattaipparriyatu* (*maram + attu + ai + (p) + parri + (y) + a + (a)tu*) ‘(It is) about the tree’ **Noun + (Plural) + Verbalizer** e.g. - Noun + Verbalizer - *talaivarāṉār* (*talaivar + ā + ṉ + ār*) ‘became a leader – he’ - Noun + Plural + Verbalizer - *talaivarkaḷāṉārkaḷ* (*talaivar + kaḷ + ā + ṉ + ārkaḷ*) ‘became leaders – they’ **Noun + Adjectival suffix + (NG) + ...** e.g. - Noun + Adjectival suffix - *paṇpuḷḷa* (*paṇpu + uḷḷa*) ‘virtuous’ - Noun + Adjectival suffix + NG - *paṇpuḷḷavar* (*paṇpu + uḷḷa + var*) ‘virtuous person’ **Noun + Adverbial suffix + (Clitics)** e.g. - Noun + Adverbial suffix - *aḻakāka* (*aḻaku + āka*) ‘beautifully’ - Noun + Adverbial suffix + Clitics - *aḻakākattāṉ* (*aḻaku + āka + (t) + tāṉ*) ‘beautifully only’ **Verb Wordform** <table> <thead> <tr> <th>Verb + VP-Simple + ASP + (Tense + {PNG / RP + (NG)}) / (VP) + (Clitics)</th> </tr> </thead> </table> e.g. - Verb + Tense + PNG - *paṭittāṉ* (*paṭi + tt + āṉ*) ‘read/studied – he’ - Verb + VP-Simple + ASP - *paṭittuviṭu* (*paṭi + ttu + viṭu*) ‘have read’ - Verb + VP-Simple + ASP + Tense + PNG - *paṭittuviṭṭāṉ* (*paṭi + ttu + viṭu + ṭ + āṉ*) ‘has read – he’ - Verb + VP-Simple + ASP + Tense + RP - *paṭittuviṭṭa* (*paṭi + ttu + viṭu + ṭ + a*) ‘who has read’ - Verb + VP-Simple + ASP + Tense + RP + NG - *paṭittuviṭṭavaṉ* (*paṭi + ttu + viṭu + ṭ + a + vaṉ*) ‘one who has read’ - Verb + VP-Simple + ASP + VP-inf - *paṭittuviṭa* (*paṭi + ttu + viṭu + a*) ‘have to read’ - Verb + VP-Simple + ASP + VP-Simple - *paṭittuviṭṭu* (*paṭi + ttu + viṭu + ṭ + tu*) ‘having read’ **Verb + (Filler) + VP-inf + (Voice) + (Modal) + (Tense + PNG) + (Clitics)** e.g. - Verb + Filler + VP-inf + Voice - *paṭikkavai* (*paṭi + kk + a + vai*) ‘make one study/read’ - Verb + Filler + VP-inf + Modal - *paṭikkalām* (*paṭi + kk + a + lām*) ‘may read’ - Verb + Filler + VP-inf + Voice + Tense + PNG - *paṭikkappaṭṭatu* (*paṭi + kk + a + (p) + paṭu + ṭṭ + atu*) ‘was read – it’ - Verb + Filler + VP-inf + Modal + Tense + PNG - *paṭikkamuṭintatu* (*paṭi + kk + a + muṭi + nt + atu*) ‘could read – it’ Verb + (Filler) + Tense / Negative + RP + (NG) / (Adverbial Particle) - Verb + Tense + RP - *paṭitta* (*paṭi + tt + a*) ‘who read/which was read’ - Verb + Filler + Negative + RP - *paṭikkāta* (*paṭi + kk + āt + a*) ‘who did not read/which was not read’ - Verb + Tense + RP + NG - *paṭittavaṉ* (*paṭi + tt + a + vaṉ*) ‘one who studied/read’ - Verb + Filler + Negative + RP + NG - *paṭikkātavaṉ* (*paṭi + kk + āt + a + vaṉ*) ‘one who did not study/read’ - Verb + Tense + RP + Adverbial Particle - *paṭittapōtu* (*paṭi + tt + a + pōtu*) ‘when/while studying/reading’ - Verb + Filler + Negative + RP + Adverbial Particle - *paṭikkātavarai* (*paṭi + kk + āt + a + varai*) ‘till one does not read’ Verb + (Filler)/Tense + VP + (Adjectival particle) + Clitics - Verb + (Filler) + VP-inf - *paṭikka* (*paṭi + kk + a*) ‘to read’ - Verb + VP-Simple - *paṭittu* (*paṭi + ttu*) ‘having read’ - Verb + (Filler) + VP + Adjectival particle - *paṭikkattakka* (*paṭi + kk + a + (t) + takka*) ‘readable/suitable for reading’ Verb + (Tense) + Verbal Noun suffix - Verb + Verbal Noun suffix - *paṭittal* (*paṭi + ttal*) ‘reading’ - Verb + Tense + Verbal Noun suffix - *paṭittamai* (*paṭi + tt + amai*) ‘reading’ **Adjective Wordform** <table> <thead> <tr> <th>Adjective + NG</th> </tr> </thead> </table> e.g. - Adjective + NG - *nallavaṉ* (*nalla + vaṉ*) ‘good – he’ **6.1.3.4. Sandhi rules involved in morphological parsing** During the parsing of Tamil wordforms, the sandhi phenomena (both internal and external) have to be considered. Between root word and suffix, or between suffix and suffix, the sandhi rules make an impact on the structure of the wordforms. Due to sandhi, some phonemes may be added, deleted or changed. For example, in the wordform *paṭittuppārttāṉ* ‘He tried reading’, there is a phoneme *p* between *paṭittu* (the verbal participle form of *paṭi*) and *pārttāṉ* (the aspectual marker). This is due to a sandhi rule: when a verbal participle ends in *-u* and the following root or suffix begins with a stop such as *p*, there is a phonemic increment of that stop. This is an addition of a phoneme. During parsing of the above word, this phonemic increment should be identified and ignored; that is, only *paṭittu* and *pārttāṉ* should be considered for parsing purposes. Here, the problem is that just by seeing two *p*s in a wordform, it cannot be concluded that one *p* is an increment due to sandhi, because some wordforms may have two *p*s as part of the morpheme itself. Like the addition or insertion of phonemes, there are also deletions and changes of phonemes in the sandhi process. e.g. *maram* ‘tree’ + *vēr* ‘root’ = *maravēr* ‘root of the tree’ – deletion of *m* in the first word. *maram* ‘tree’ + *kaḷ* (plural suffix) = *maraṅkaḷ* ‘trees’ – change of the phoneme *m* into *ṅ*. Before parsing the wordform *maravēr*, the sandhi deletion should be identified and *mara* should be reconstructed into *maram*. Likewise, before parsing the wordform *maraṅkaḷ*, *maraṅ* should be reconstructed into *maram*. With some words, both deletion and addition of phonemes might have occurred due to sandhi. e.g.
*maram* ‘tree’ + *kiḷai* ‘branch’ = *marakkiḷai* ‘the branch of the tree’ Here, first the phoneme *m* of *maram* is deleted under one sandhi rule and then the phoneme *k* is added under another sandhi rule. So, before parsing this wordform, these sandhi changes should be identified to get the original root *maram*. Likewise, if the first word or suffix ends in a vowel and the following word or suffix begins with a vowel, then there will be an addition of the phoneme *y* or *v*. This is due to the rules of the syllabic structure. e.g. *teruvā* ‘is it a street?’ = *teru* ‘street’ + *v* + *ā* (interrogative clitic) *ilaiyā* ‘is it a leaf?’ = *ilai* ‘leaf’ + *y* + *ā* (interrogative clitic) Before parsing the above wordforms, these increments of *y* and *v* should be identified and the wordforms should be reconstructed as: *teru* + *ā* and *ilai* + *ā*. Sometimes, there may be some ambiguities. e.g. *ivvilai* This could be parsed in two ways: 1. *iv* ‘this’ + *vilai* ‘price’ 2. *iv* ‘this’ + *ilai* ‘leaf’ In (1), the second *v* is part of the word *vilai*. In (2), the second *v* is not part of the word, but only a glide that occurred due to a sandhi rule. In Tamil, with some wordforms, at the juncture of the root word and the following suffix, or between suffix and suffix, there may be some ‘fillers’ (‘empty morphs’). e.g. *maram* ‘tree’ + *ai* (accusative case suffix) = *marattai* ‘tree (acc.)’ Here, between *maram* and *ai*, there is an empty morph *attu* (after the deletion of *am* from *maram*). The sandhi rule is: the preceding word should be a noun ending in *am* and the following suffix should be a case suffix. If the following suffix is not a case suffix, then the empty morph *attu* won’t occur.
*maram* ‘tree’ + *ā* (interrogative clitic) = *maramā* ‘is it a tree?’ All the above sandhi processes are called internal sandhi. There are also external sandhi processes in Tamil. In external sandhi, not only phonology but also morphology, syntax and even semantics are involved in Tamil. e.g. *avaṉ ennaip pārttup pēciṉāṉ* ‘He saw me and talked.’ Here, the second word *ennai* ‘me’ is added with a phoneme *p* at the end. The sandhi rule behind this is: if a word is inflected for the accusative case and the following word begins with a stop phoneme, then the corresponding stop phoneme is added to that word – here, *ennai* becomes *ennaip*. And in the third word, *pārttup*, the final phoneme *p* is also the result of a sandhi process. The rule is: if a word is a verbal participle of the *ceytu* pattern ending in two stop phonemes and the following word begins with a stop phoneme, then a stop phoneme is added to the preceding word – here, it is *p*. Before parsing the above wordforms, the phonemes added because of the sandhi processes in the second and third words should be identified and deleted. In some sandhi processes, syntax is also involved. 1. *avaṉ vēlai pārttāṉ* ‘He worked.’ 2. *avaṉ vēlaip pārttāṉ* ‘He saw the spear.’ In (1), the relation between the second word *vēlai* and the third word *pārttāṉ* is a non-casal one, so there is no sandhi addition in the second word. In (2), the relation between the second word *vēlaip* and the third word *pārttāṉ* is a casal one; that is, the segment *ai* is the accusative case suffix. So, there is a sandhi addition *p* in the second word. The above sandhi process helps to treat the wordform *vēlai* in (1) as a root word, whereas the wordform *vēlaip* in (2) should be parsed into *vēl* + *ai*; during parsing, the sandhi increment *p* should be deleted.
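The sandhi repairs described in this section can be sketched as candidate-generating rewrite rules applied before lexicon lookup. The rule set below is a deliberately tiny, hypothetical sample in ASCII romanization (`ng` for *ṅ*); a real system needs many more rules plus morphological and syntactic context checks, as the examples above make clear.

```python
def sandhi_candidates(segment):
    """Generate possible pre-sandhi forms of a segment, each of which would
    then be checked against the root-word database (a toy rule subset)."""
    candidates = {segment}
    if segment.endswith("ng"):        # marang -> maram (m changed to velar nasal)
        candidates.add(segment[:-2] + "m")
    if segment.endswith("a"):         # mara -> maram (final m deleted by sandhi)
        candidates.add(segment + "m")
    if segment.endswith(("y", "v")):  # glide inserted before a vowel-initial suffix
        candidates.add(segment[:-1])
    return candidates

print(sandhi_candidates("marang"))
```

Because each rule only *proposes* a reconstruction, the ambiguity noted above (*ivvilai* as *iv + vilai* or *iv + ilai*) falls out naturally: both candidates survive until the lexicon check.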
Thus, before parsing a wordform, if there is any sandhi change – addition, deletion or change of phonemes – or glide insertion or addition of some empty morph, it should be identified and the root wordform should be reconstructed for the further parsing process. 6.1.3.5. Issues in Parsing In parsing Tamil wordforms, there are some issues leading to ambiguities. A single wordform may have more than one parser output. For example, in parsing the wordform *paṭittavarai*, there would be two outputs: 1. *paṭittavar* ‘the learned person’ + *ai* (accusative case suffix) 2. *paṭitta* – the RP form of *paṭi* ‘study’ + *varai* ‘upto’ In (1), the inner parsing of *paṭittavar* is: *paṭi* ‘study’ + *tt* (Past tense) + *a* (Relative Participle marker) + *var* (NG). So, the meaning of *paṭittavarai* is ‘the learned person (acc.)’. In (2), the inner parsing of *paṭitta* is: *paṭi* ‘study’ + *tt* (Past tense) + *a* (Relative Participle marker). So, the meaning of *paṭittavarai* is ‘upto the studies done’. The aim of the present morphological parser is to find out all the possibilities of inflection found in a wordform. If a wordform gives rise to more than one output, then the present parser should exhaust all these possibilities. In the above example, the root word *paṭi* has undergone two kinds of inflection. So, the parser should not stop its parsing process once it gets one structure; it should find out all the possible morphological structures found in the given wordform. However, a morphological parser developed for spellchecking purposes could stop once it gets one possible structure. But the parser developed for the present work – to translate the inflectional properties found in wordforms – should be different from the spellchecker parser.
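The requirement that the parser exhaust every analysis can be sketched by collecting all segmentations instead of returning the first one found. The entries below are hypothetical ASCII stand-ins for the *paṭittavarai* example; a spellchecker-style parser would return after the first hit, whereas this one keeps going.

```python
# Hypothetical analyses keyed by surface segment (ASCII romanization).
STEMS = {"patittavar": "learned person", "patitta": "RP of 'study'"}
TAILS = {"ai": "accusative case", "varai": "postposition 'upto'"}

def all_parses(word):
    """Return every stem+tail split, mirroring the rule that the parser
    must not stop at the first structure it finds."""
    parses = []
    for i in range(1, len(word)):
        stem, tail = word[:i], word[i:]
        if stem in STEMS and tail in TAILS:
            parses.append((stem, STEMS[stem], tail, TAILS[tail]))
    return parses

for p in all_parses("patittavarai"):
    print(p)
```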
Some more examples with more than one morphological structure: **kaṭalai** - *kaṭal* ‘sea’ + *ai* (acc. suffix) - *kaṭalai* ‘nut’ **paṭikkavā** - *paṭi* ‘study’ + *kk* (filler) + *a* (inf. suffix) + *vā* (modal) - *paṭi* ‘study’ + *kk* (filler) + *a* (infinitive suffix) + *v* (glide) + *ā* (clitic) **neytāṉ** - *ney* ‘ghee’ Noun + *tāṉ* (clitic) - *ney* ‘weave’ Verb + *t* (Past Tense) + *āṉ* (PNG) 6.1.4. Wordclass Tagger The Parser output would be sent to another sub-module for wordclass tagging. Based on the root word and suffixes, it would decide the tag of the wordforms. Mostly, the final suffix of a wordform plays an important role in tagging. The tagging of the wordform is important to decide the inflectional categories to which a particular suffix may belong. For example, the output of the parsing of the wordform *ōṭā* would be: 1. *ōṭu* ‘run’ + *ā* – negative relative participle suffix 2. *ōṭu* ‘tile’ + *ā* – interrogative clitic In (1), *ōṭu* is a verb root. Hence it could be followed by the negative relative participle suffix *ā*; as a verb root, it cannot be followed by the interrogative clitic. And the wordclass tag of this wordform is Negative Relative Participle. In (2), *ōṭu* is a noun root. Hence it could be followed by the interrogative clitic *ā*; as a noun root, it cannot be followed by the negative relative participle. And the wordclass tag of this wordform is Noun Interrogative. Since the present work is concerned with the inflectional properties or suffixes, though there are two parser outputs (Negative Relative Participle and Interrogative noun), only the first output would be considered for further processing. Moreover, to decide the status and meaning of the individual suffixes inside a wordform, the wordclass category of the whole wordform is very much needed. And since the aim of the present work is to translate or find equivalences for the Tamil inflectional suffixes, the whole-word category has an important role. 6.1.5.
Variable vs. Invariable wordform Based on the input provided by both the Parser and the Wordclass Tagger, the next sub-module would decide whether the input wordform is a variable one or not. If it is an invariable one, it would be discarded from further analysis. Only the variable wordforms would be considered for the present work. For example, the following wordforms are invariable: - *mika* ‘much’ - *mikavum* ‘much’ - *oru* ‘one’ - *ayyō* ‘Alas’ All the above belong to *itaiccol* (middle words). They won’t undergo any inflection process. 6.1.6. Inflection The output of the previous sub-module – the filtered wordforms which are variable – is the input to this module. This sub-module would analyse whether the input wordforms are inflected or non-inflected. For example, the following wordforms, though variable, are uninflected: 1. *vēkamāka* ‘fast’ 2. *metuvākattāṉ* ‘slowly’ 3. *aḻakāṉa* ‘beautiful’ The root of (1) is a noun and it is attached with the adverb suffix *āka*. The root of (2) is an adverb and it takes the emphatic clitic *tāṉ*. The root of (3) is a noun and the attached suffix is the adjective suffix *āṉa*. In all the above, the roots are variable, since their wordforms are changed because of the attachment of suffixes. But these suffixes are not inflectional suffixes; they are only derivational suffixes and clitics. On the other hand, the following wordforms are variable and inflected for various inflectional properties: 1. *paṭittāṉ* ‘(he) studied’ = *paṭi + tt + āṉ* 2. *avaṉukku* ‘to him’ = *avaṉ + ukku* 3. *vantu* ‘having come’ = *vā + ntu* 4. *koṭutta* ‘one who gave’ = *koṭu + tt + a* The root of (1) is a verb and it is inflected for past tense and PNG. The root of (2) is a pronoun and it is inflected for dative case. The root of (3) is a verb and it is inflected for the verbal participle. The root of (4) is a verb and it is inflected for past tense and relative participle. 6.1.7. Inflected wordforms – with or without clitics The output from the above sub-module is the input to this sub-module.
That is, the variable as well as inflected wordforms are the input to this sub-module. Now, this sub-module would analyse these wordforms to find out whether there are any clitics. If a wordform has clitics, it would be sent for filtering. The following wordforms have variable as well as inflected lexemes, but there are some clitics in the wordforms: 1. *avarkaḷukkuttāṉ* ‘for them only’ = *avar + kaḷ + ukku + tāṉ* 2. *paṭittāṉā* ‘did he study?’ = *paṭi + tt + āṉ + ā* 3. *paṭikkattāṉ* ‘to study only’ = *paṭi + kk + a + tāṉ* 4. *avaṉukkumaṭṭum* ‘for him only’ = *avaṉ + ukku + maṭṭum* In (1), in addition to the two inflectional suffixes, there is one clitic *tāṉ*. In (2), in addition to the two inflectional suffixes, there is one clitic *ā*. In (3), in addition to the two inflectional suffixes, there is one clitic *tāṉ*. In (4), in addition to the inflectional suffix, there is one clitic *maṭṭum*. The clitic-attached wordforms would be sent for filtration, and the output without any clitic would be sent to the next sub-module for further processing. 6.1.8. Pure stem? The input to this sub-module would be wordforms having both basic lexicons and derived ones as stems for further inflection. Basic lexicons here mean the root words available in the Lexicon. The derived ones mean stems having a root lexicon with some derivational suffixes. Since these derived stems are not available in the Lexicon, they would be sent for further processing to get the meaning; only after that would they be sent to the final Transfer module. If a wordform has only a basic lexicon which is available in the Lexicon and is inflected for some inflectional properties, it would be sent directly to the Transfer module. e.g. 1. *talaivarākkiṉār* ‘made (him) a leader’ 2. *pālkkāraṉai* ‘milkman (acc.)’ 3. *aḻakāṉavaḷ* ‘beautiful woman’ 4. *paṇpuḷḷavarai* ‘virtuous person (acc.)’ In (1), the root noun *talaivar* is verbalized by the addition of the verbalizer *ākku*. Then the verb *talaivarākku* is inflected for tense *iṉ* and PNG *ār*. The verb *talaivarākku* is not a basic lexicon available in the Lexicon; it is a derived verb from the noun *talaivar* ‘leader’.
So, to get the meaning of this derived verb, it should be processed first; only after that can it be sent to the final Transfer module. In (2), the root word is *pāl* ‘milk’. It is added with the derivational suffix *kāraṉ* to get the derived word *pālkkāraṉ*. Since this word is not a basic lexicon, it may not be available in the Lexicon. To get the meaning of this derived word, it should be sent for further processing. Here, *ai* is the inflectional suffix. In (3), the root noun is *aḻaku* ‘beauty’. It is added with the adjectival suffix *āṉa* to get the derived adjective *aḻakāṉa* ‘beautiful’. Then it is inflected for NG *vaḷ* and the resultant wordform is *aḻakāṉavaḷ*. In (4), the root *paṇpu* ‘virtue’ is added with the derivational suffix *uḷḷa* to get the derived word *paṇpuḷḷa*. It is further inflected for NG *var* and accusative case *ai*. 6.1.9. **Transfer Module**: This is the core module in the present work. **Transfer Rule structure** All the inflectional suffixes found in the input would be converted into abstract inflectional categories with the respective inflectional properties. For example: *kaḷ* Plural → Plural *tt* Past Tense → Past Tense *iru* Perfect Aspectual → Perfect Aspectual However, the root/stem would be retained with its category. e.g. *paiyaṉ* Noun *paṭi* Verb *aḻakāṉa* Adjective **Input**: The input to this module is the inflected wordforms. The stems of these wordforms may be either pure roots (‘lexicons’) or derived ones. And these wordforms would be free from any clitic, since any clitics have already been filtered out by an earlier process. The stems are inflected for some inflectional properties. The input for this module would be: **Pure root/derived stem + inflectional suffixes** This module would process the input as follows: it would get the English lexicon/root word for the Tamil stem, if it is a root lexicon, from the Tamil – English Lexicon.
If the stem is a derived one, it would get the derived meaning from the previous sub-module and find the equivalent root word in the Lexicon. Example 1: Same number of inflectional suffixes and same morphotactics: \textit{paiyaṉkaḷ} **Input:** paiyaṉ + kaḷ (plural suffix) paiyaṉ is available in the Tamil – English Lexicon. kaḷ, the plural suffix in Tamil, has the equivalent ‘s’ in English. **Input:** paiyaṉ + Plural The order of the noun root plus plural suffix in English is the same as in Tamil. So, the output would be: ‘student’ – ‘s’ **Final output:** ‘students’ Example 2: Difference in the number of suffixes but same morphotactics. e.g. \textit{paṭittāṉ} **Input:** paṭi (Verb) + tt (past tense) + āṉ (3rd person singular masculine suffix) Here, paṭi is the root lexicon. When it is looked up in the Lexicon, there are two categories: one is the noun paṭi; the other is the verb paṭi. However, since the root paṭi in the above example is followed by a tense suffix, it should be a verb, not a noun. Now, even after deciding that the root is the verb paṭi, there is a problem: there are two paṭi verbs in the Lexicon. \textit{paṭi} (1) ‘be covered with’, ‘settle’ \textit{paṭi} (2) ‘read’, ‘study’ However, the first \textit{paṭi} belongs to the fourth group in the conjugation types: \textit{kiṟ – nt – v}. The second \textit{paṭi} belongs to the eleventh group in the conjugation types: \textit{kkiṟ – tt – pp}. Since the past tense suffix in the input is tt, it could be decided that the verb \textit{paṭi} in the above example should be the second one. Now, the Tamil \textit{paṭi} in the example would be replaced by the English verb. \textit{paṭi} (Verb) + \textit{tt} (past tense) + āṉ (3rd person singular masculine suffix) ‘study’ (Verb) + past tense + 3rd person singular masculine suffix. The next step is to find the past tense inflectional suffix for the verb ‘study’. It is ‘ed’. ‘study’ (Verb) + ‘ed’ + 3rd person singular masculine suffix.
Since in the past tense there is no PNG marker in English, that slot would not be filled. Anyhow, it could be filled by ‘(he)’ to indicate the person, number and gender of the subject. ‘study’ (Verb) + ‘ed’ + ‘(he)’ According to the spelling rules of English, the combination of the verb ‘study’ and the past tense ‘ed’ gives the word ‘studied’. \textbf{Output:} ‘studied (he)’ Example 3: Difference in the number of suffixes as well as in morphotactics: \textit{paṭittirukkiṟāṉ} Input: paṭi (V) + tt (VP-Simple) + iru (Perfect) + kkiṟ (Present) + āṉ (3rd PNG) In the above input, there is one verb root plus four inflectional suffixes. As in the previous example, here also the English equivalent of the verb root paṭi is ‘study’. ‘study’ + tt (VP-Simple suffix) + Perfect marker + Present tense suffix + 3rd PNG ‘study’ + Past Participle + ‘have’ + Present tense + 3rd PNG ‘studied’ + ‘has’ + 3rd PNG Now, according to the word order of English, the aspectual word should precede the main verb. So, in the present example, the word order would be changed: ‘has’ + ‘studied’ + 3rd PNG Output: ‘has studied-he’ In the present example, the present tense property has merged with the aspectual marker ‘have’, and the resultant form is ‘has’. And since the aspectual ‘has’ occurs here, the main verb ‘study’ has changed into its past participle form ‘studied’.
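The suffix-to-category mapping and tense merging in Examples 2 and 3 can be sketched in Python. The tables and the helper names below are simplified illustrative assumptions for the single verb 'study' (paṭi), not the thesis's actual data structures.

```python
# Illustrative sketch of the transfer steps: map Tamil suffixes to
# abstract categories, realize them in English, and let a perfect
# auxiliary absorb the tense property.

SUFFIX_TO_CATEGORY = {
    "tt": "PAST",        # past tense / VP-simple marker (simplified)
    "kkir": "PRESENT",   # present tense marker
    "iru": "PERFECT",    # perfect aspectual
    "aan": "3SM",        # 3rd person singular masculine PNG
}

def past_form(verb):
    # naive '-ed' rule with the y -> i spelling change ('study' -> 'studied')
    return verb[:-1] + "ied" if verb.endswith("y") else verb + "ed"

def transfer_verb(root_en, suffixes):
    cats = [SUFFIX_TO_CATEGORY[s] for s in suffixes]
    if "PERFECT" in cats:
        # 'have' merges with tense: present perfect (3rd sg) -> 'has'
        aux = "has" if "PRESENT" in cats else "had"
        return aux + " " + past_form(root_en) + "-(he)"
    if "PAST" in cats:
        return past_form(root_en) + " (he)"
    return root_en

print(transfer_verb("study", ["tt", "aan"]))                 # studied (he)
print(transfer_verb("study", ["tt", "iru", "kkir", "aan"]))  # has studied-(he)
```

The point of the sketch is that the same abstract categories drive both the affix realization ('ed') and the auxiliary merge ('has'), mirroring the two worked examples above.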
Example 4: Modal Inflection: \textit{varavēṇṭum} \textbf{Input:} vā (Verb) + a (infinitive marker) + vēṇṭum (obligatory modal) \begin{itemize} \item ‘come’ (Verb) + a (infinitive marker) + obligatory modal \item ‘come’ (Verb) + a (infinitive marker) + ‘should’ \item ‘come’ (Verb) + ‘should’ \end{itemize} Now, according to the word order rule in English, the auxiliary modal verb should precede the main verb. Also, the main verb should be in its base form. In Tamil, according to the morphotactics, the main verb inflected for infinitive precedes the modal. \begin{itemize} \item ‘should’ + ‘come’ \end{itemize} \textbf{Output:} ‘should come’ Here, it is to be mentioned that most of the modals are not inflected for tense and PNG. Example 5: \textit{kollappaṭṭāṉ} \textbf{Input:} kol (main verb) + a (infinitive suffix) + paṭu (passive marker) + ṭṭ (past tense) + āṉ (PNG) ‘kill’ (main verb) + a (infinitive suffix) + auxiliary ‘be’ … past participle + past tense + āṉ (PNG) ‘kill’ (main verb) + a (infinitive suffix) + was … ed + āṉ (PNG) ‘killed’ (main verb) + was + (‘he’) According to the word order rule, the auxiliary ‘was’ should precede the main verb. ‘was’ + ‘killed’ + (‘he’) Output: ‘was killed-he’ Example 6: \textit{paṭittirukkavēṇṭum} Input: paṭi (main verb) – ttu (VP-Simple suffix) – iru (Perfect aspectual) – (kk)a (infinitive marker) – vēṇṭum (obligatory modal) ‘study’ (main verb) – ttu (VP-Simple suffix) – have – (kk)a (infinitive marker) – should ‘study’ (main verb) – en (past participle marker) – have – should ‘studied’ – have – should According to the English word order, the modal should precede the aspectual and the aspectual should precede the main verb.
should – have – studied Output: ‘should have studied’ Example 7: \textit{paṭikkappaṭṭirukkavēṇṭum} **Input:** paṭi (main verb) – (kk)a (infinitive marker) – paṭu (passive marker) – ṭṭ (past tense suffix) – iru (Perfect aspectual) – (kk)a (infinitive marker) – vēṇṭum (modal) ‘study’ (main verb) – (kk)a (infinitive marker) – be … en – have … en – (kk)a (infinitive marker) – should According to the word order rules in English, the modal should precede the aspectual; the aspectual should precede the voice; the voice should precede the main verb. should – have – been – studied **Output:** ‘should have been studied’ Example 8: \textit{paṭittu} **Input:** paṭi (Main Verb) – ttu (VP-Simple marker) ‘study’ (main verb) – having … ed When a VP-Simple marker is followed by an aspectual within a wordform, it should be discarded during transfer to English, because the occurrence of the verbal participle form is dictated by the syntactic context, that is, by the following ‘Aspectual’ marker. The verbal participle marker does not contribute its inflectional property to the main verb there. But if it is the final element of the wordform, it should be retained because of its contribution of inflectional meaning to the verb, and translated as ‘having’. According to the word order rules in English, the auxiliary should precede the main verb. having – studied Output: ‘having studied’ Example 9: \textit{paṭikka} Input: paṭi (Main Verb) – (kk)a (infinitive marker) When the infinitive marker ‘a’ is followed by a modal auxiliary within a wordform, it should be discarded during translation into English, because the occurrence of the infinitive form is dictated by the syntactic context, that is, by the following ‘Modal’ marker. The infinitive marker does not contribute its inflectional property to the main verb there. But when it is the final element in a wordform, it should be retained because of its contribution of inflectional meaning to the verb. It should be translated as the English infinitive-indicating word ‘to’.
‘s’study’ – to Output: to study Example 10: paṭittāl Input: paṭi (Main Verb) - ttu (VP Simple marker) – āl (conditional marker) ‘s’study’ (main verb) - … ed (past participle marker) – if (conditional word) If - study - ed Output: If studied Example 11: \( paṭittāl \) **Input:** \( paṭi \) (main verb) + \( ttu \) (VP simple marker) + \( āl \) (cond. marker) ‘study’ (Main Verb) + \( ttu \) (VP simple marker) + \( āl \) (conditional marker) ‘study’ (Main Verb) + \( ttu \) (VP simple marker) + ‘if’ (conditional word) Here, the translation \( ttu \) VP Simple marker depends upon the time or tense of the main clause. e.g. \( avaṉ\ paṭittāl, ṇānum\ paṭippēn \) ‘If he studies, I will study.’ \( avaṉ\ paṭittāl, ṇānum\ paṭikkirēn \) ‘If he studies, I study.’ So, the translation should be ‘studies’ and ‘studied’ ‘if studies’ **Output:** If studies The relevant flow chart, flow diagram and some screen shots are provided here. 1. Tamil Noun Morphotactics 2. Tamil Verb Morphotactics 3. Flow Diagram of Transfer Module 4. Program Sample Outputs Thus, the Transfer module could translate any Tamil inflected word into English word (e.g. *paiyānkal* – ‘boys’) or word with grammatical or functional word (e.g. *avaṇukku* – ‘to him’). Once the Tamil input is given to this module, it would do everything (Tokenization, Morphological Parsing, POS Tagging etc.) to arrive at the final English translation output. If the input has any clitic or derivational suffixes, this module could handle them effectively to get the final inflected form to be translated to English.
A Proposal for the Cooperation of Solvers in Constraint Functional Logic Programming S. Estévez-Martín\textsuperscript{a,1} A. J. Fernández\textsuperscript{b,2} T. Hortalá-González\textsuperscript{a,1} M. Rodríguez-Artalejo\textsuperscript{a,1} F. Sáenz-Pérez\textsuperscript{a,1} R. del Vado-Vírseda\textsuperscript{a,1} \textsuperscript{a} Departamento de Sistemas Informáticos y Programación Universidad Complutense de Madrid Madrid, Spain \textsuperscript{b} Departamento de Lenguajes y Ciencias de la Computación Universidad de Málaga Málaga, Spain Abstract This paper presents a proposal for the cooperation of solvers in constraint functional logic programming, a quite expressive programming paradigm which combines functional, logic and constraint programming using constraint lazy narrowing as its goal solving mechanism. Cooperation of solvers for different constraint domains can improve the efficiency of implementations, since solvers can take advantage of other solvers' deductions. We restrict our attention to the cooperation of three solvers, dealing with syntactic equality and disequality constraints, real arithmetic constraints, and finite domain (\textit{FD}) constraints, respectively. As the cooperation mechanism, we consider propagating to the real solver the constraints which have been submitted to the \textit{FD} solver (and vice versa), imposing special communication constraints to ensure that both solvers will allow the same integer values for all the variables involved in the cooperation. Keywords: Cooperating Solvers, Constraints, Functional Logic Programming, Lazy Narrowing, Implementation. 1 Introduction Cooperation of different solvers for Constraint Programming (shortly \textit{CP}) has been widely investigated in recent years [4], aiming at the solution of hybrid problems that cannot be handled by a single solver and also at improvements of efficiency, among other things.
On the other hand, the Functional and Logic Programming styles (\textit{FP} and \textit{LP}, resp.) support a clean declarative semantics as well as powerful program construction facilities. The CLP scheme for Constraint Logic Programming, started by a seminal paper by Jaffar and Lassez, provides a combination of CP and LP which has proved very practical for CP applications [7]. Adding a FP dimension to CLP has led to various proposals of CFLP schemes for Constraint Functional Logic Programming, developed since 1991 and aiming at a very expressive combination of CP, higher-order lazy FP and LP. Both CLP and CFLP are schemes that can be instantiated by means of different constraint domains and solvers. This paper presents a proposal for solver cooperation in CFLP, more precisely in an instance of the CFLP scheme as presented in [9,2,8] which is implemented in the TOY language and system [1]. The solvers whose cooperation is supported are: a solver for the Herbrand domain $\mathcal{H}$ supporting syntactic equality and disequality constraints; a solver for the domain $\mathcal{FD}$, which supports finite domain constraints over the set of integer numbers $\mathbb{Z}$; and a solver for the domain $\mathcal{R}$, which supports arithmetic constraints over the set of real numbers $\mathbb{R}$. This particular combination has been chosen because of the usefulness of $\mathcal{H}$ constraints for dealing with structured data and the important role of hybrid $\mathcal{FD}$ and $\mathcal{R}$ constraints in many practical CP applications [4]. TOY has been implemented on top of SICStus Prolog [15], using the $\mathcal{FD}$ and $\mathcal{R}$ solvers provided by SICStus along with Prolog code for the $\mathcal{H}$ solver. CFLP goal solving takes care of evaluating calls to program defined functions by means of lazy narrowing, and decomposing hybrid constraints by introducing new local variables. 
Eventually, pure $\mathcal{FD}$ and $\mathcal{R}$ constraints arise, which must be submitted to the respective solvers. Our proposal for solver cooperation is based on the communication between the $\mathcal{FD}$ and $\mathcal{R}$ solvers by means of special communication constraints called bridges. A bridge $u$ #== $v$ constrains $u::\text{int}$ and $v::\text{real}$ to take the same integer value. Our system keeps bridges in a special store and uses them for two purposes, namely binding and propagation. Binding simply instantiates a variable occurring at one end of a bridge whenever the other end of the bridge becomes a numeric value. Propagation is a more complex operation which takes place whenever a pure constraint is submitted to the $\mathcal{FD}$ or $\mathcal{R}$ solver. At that moment, propagation rules relying on the available bridges are used for building a mate constraint which is submitted to the mate solver (think of $\mathcal{R}$ as the mate of $\mathcal{FD}$ and vice versa). Propagation enables each of the two solvers to take advantage of the computations performed by the other. In order to maximize the opportunities for propagation, the CFLP goal solving procedure has been enhanced with operations to create bridges whenever possible, according to certain rules. Obviously, independent computing of solvers remains possible. The rest of the paper is organized as follows. Section 2 recalls the essentials of CFLP programming and presents a CFLP program which solves a generic problem illustrating $\mathcal{H} + \mathcal{FD} + \mathcal{R}$ cooperation. Section 3 presents a formal description of cooperative goal solving by means of constraint lazy narrowing enhanced with rules for creation of bridges and propagation of mate constraints.
Section 4 presents some details of our current implementation of cooperative goal solving in the TOY system, as well as performance results based on the program from Section 2, showing that propagation of mate constraints via bridges leads to significant speedups of execution time. Section 5 includes a summary of conclusions, a brief discussion of related work and some hints to planned future work. 2 CFLP Programming In this section, we recall the essentials of the CFLP scheme \cite{9,2,8} for lazy Constraint Functional Logic Programming, which serves as a logical and semantic framework for our proposal of cooperation of solvers. 2.1 The Constraint Domains $\mathcal{H}$, $\mathcal{FD}$ and $\mathcal{R}$ We assume a universal signature $\Sigma = \langle DC, FS \rangle$, where $DC = \bigcup_{n \in \mathbb{N}} DC^n$ and $FS = \bigcup_{n \in \mathbb{N}} FS^n$ are countably infinite and mutually disjoint sets of constructor symbols and function symbols, indexed by arities. Functions are further classified into domain dependent primitive functions $PF^n \subseteq FS^n$ and user defined functions $DF^n = FS^n \setminus PF^n$ for each $n \in \mathbb{N}$. We consider a special symbol $\bot$, intended to denote an undefined value, and we assume the Boolean constants $true, false \in DC^0$. We also consider a countably infinite set $Var$ of variables and a set $U$ of primitive elements (as e.g. the set $\mathbb{Z}$ of integer numbers or the set $\mathbb{R}$ of real numbers). An expression $e \in Exp$ has the syntax $e ::= u \mid X \mid h \mid (e\ \overline{e_m})$, where $u \in U$, $X \in Var$ and $h \in DC \cup FS$; $(\overline{e_m})$ abbreviates $e_1 \ldots e_m$. The following classification of expressions is useful: $(X\ \overline{e_m})$, with $X \in Var$ and $m \geq 0$, is called a flexible expression, while $u \in U$ and $(h\ \overline{e_m})$ with $h \in DC \cup FS$ are called rigid expressions.
Moreover, a rigid expression $(h\ \overline{e_m})$ is called active iff $h \in FS^n$ and $m \geq n$, and passive otherwise. Another important subclass of expressions is the set of patterns $t \in Pat$, whose syntax is defined as $t ::= u \mid X \mid (c\ \overline{t_m}) \mid (f\ \overline{t_m})$, where $u \in U$, $X \in Var$, $c \in DC^n$ with $m \leq n$, and $f \in FS^n$ with $m < n$. We also consider substitutions as mappings $\sigma, \theta$ from variables to patterns, and by convention, we write $e \sigma$ instead of $\sigma(e)$ for any $e \in Exp$, and $\sigma \theta$ for the composition of $\sigma$ and $\theta$. A constraint domain provides a set of specific primitive elements $U$, along with certain primitive functions $p \in PF^n$ operating upon them. Atomic constraints over a given constraint domain $D$ can have the form $\lozenge$ (denoting a constraint trivially true), $\blacksquare$ (denoting a constraint trivially false) or $p\ \overline{e_m} \rightarrow!\ t$ with $\overline{e_m} \in Exp$ and $t \in Pat$. Atomic primitive constraints have the form $\lozenge$, $\blacksquare$ or $p\ \overline{e_m} \rightarrow!\ t$ with $\overline{e_m}, t \in Pat$. This paper deals with three constraint domains: - $\mathcal{H}$, the so-called Herbrand domain, which supports syntactic equality and disequality constraints over an empty set of primitive elements. - $\mathcal{FD}$, which supports finite domain constraints over $\mathbb{Z}$. - $\mathcal{R}$, which supports arithmetic constraints over $\mathbb{R}$. Table 1 summarizes the primitive functions available for these domains, and the way they are used for building atomic primitive constraints in practice. We also assume constraint solvers Solver$^H$, Solver$^{FD}$ and Solver$^R$ associated to these domains.
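The classification just defined (flexible vs. rigid, active vs. passive, and patterns) can be illustrated with a small Python sketch. The term representation as a `(head, args)` pair and the example arities for `DC` and `FS` are assumptions for illustration only.

```python
# Sketch of the expression/pattern classification over (head, args) terms.

VAR = {"X", "Y"}
DC = {"true": 0, "s": 1}   # constructor symbols with arities
FS = {"f": 2}              # function symbols with arities

def classify(head, args):
    if head in VAR:
        return "flexible"               # (X e1..em), m >= 0
    if head in FS and len(args) >= FS[head]:
        return "rigid-active"           # (f e1..em) with m >= arity(f)
    return "rigid-passive"

def is_pattern(head, args):
    # u | X | (c t1..tm) with m <= arity(c) | (f t1..tm) with m < arity(f)
    # (simplification: the arguments are assumed to be patterns already)
    if head in VAR:
        return not args
    if head in DC:
        return len(args) <= DC[head]
    if head in FS:
        return len(args) < FS[head]
    return False

print(classify("X", ["a"]))       # flexible
print(classify("f", ["a", "b"]))  # rigid-active: a saturated call to f
print(is_pattern("f", ["a"]))     # True: a partial application of f
```

The interesting case is the last one: a partial application of a function symbol counts as a pattern, which is what lets higher-order values such as `(triangle p h)` below be passed around as data.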
In addition to the constraints just described, we also use a special kind of communication constraints built from a new primitive function `equiv :: int → real → bool` such that `(equiv n x)` returns `true` if `x` has an integer value equivalent to `n`, and `false` otherwise. Constraints of the form `equiv e1 e2 →! true` will be called bridges and abbreviated as `e1 #== e2` in the sequel. We introduce a constraint domain `M` which operates with bridges. The cooperation of a `FD` solver and a `R` solver via communication bridges can lead to great reductions of the `FD` search space, manifesting as significant speedups of the execution time, as we will see in Section 4. <table> <thead> <tr> <th><code>D</code></th> <th>Primitive Functions</th> <th>Abbreviates</th> </tr> </thead> <tbody> <tr> <td><code>H</code></td> <td><code>seq :: A → A → bool</code></td> <td><code>e1 == e2 =def seq e1 e2 →! true</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 / = e2 =def seq e1 e2 →! false</code></td> </tr> <tr> <td><code>FD</code></td> <td><code>iseq :: int → int → bool</code></td> <td><code>e1 # = e2 =def iseq e1 e2 →! true</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 #\ = e2 =def iseq e1 e2 →! false</code></td> </tr> <tr> <td></td> <td><code>ileq :: int → int → bool</code></td> <td><code>e1 # &lt; e2 =def e2 ileq e1 →! false</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 # &lt;= e2 =def e1 ileq e2 →! true</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 # &gt; e2 =def e1 ileq e2 →! false</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 # &gt;= e2 =def e2 ileq e1 →! true</code></td> </tr> <tr> <td></td> <td><code>#+,#-,#*,#/ :: int→int→int</code>, <code>domain :: [int]→int→int→bool</code>, <code>belongs :: int → [int] → bool</code>, <code>labeling :: [labelType]→[int]→bool</code></td> <td></td> </tr> <tr> <td><code>R</code></td> <td><code>rleq :: real → real → bool</code></td> <td><code>e1 &lt; e2 =def e2 rleq e1 →! 
false</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 &lt;= e2 =def e1 rleq e2 →! true</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 &gt; e2 =def e1 rleq e2 →! false</code></td> </tr> <tr> <td></td> <td></td> <td><code>e1 &gt;= e2 =def e2 rleq e1 →! true</code></td> </tr> <tr> <td></td> <td><code>+,-,*,/ :: real → real → real</code></td> <td></td> </tr> <tr> <td><code>M</code></td> <td><code>equiv :: int → real → bool</code></td> <td><code>e1 #== e2 =def equiv e1 e2 →! true</code></td> </tr> </tbody> </table> Table 1 The Constraint Domains `H`, `FD`, `R` and `M` ### 2.2 Structure of Program Rules Programs are sets of constrained program rules of the form `f t_n = r ⇐ C`, where `f ∈ DF^n`, `t_n` is a linear sequence of patterns, `r` is an expression and `C` is a finite conjunction `δ_1, . . . , δ_m` of atomic constraints `δ_i` for each `1 ≤ i ≤ m`, possibly including occurrences of defined function symbols. Predicates can be modelled as defined functions returning Boolean values, and clauses `p t_n : − C` abbreviate rules `p t_n = true ⇐ C`. In practice, `TOY` and similar constraint functional logic languages require program rules to be well-typed in a polymorphic type system. As a running example for the rest of the paper, we consider a generic program written in TOY which solves the problem of searching for a 2D point lying in the intersection of a discrete grid and a continuous region. Both grids and regions are represented as Boolean functions. They can be passed as parameters because our programming framework supports higher-order programming features.
% Discrete versus continuous points: \[ \text{type } \text{dPoint} = (\text{int}, \text{int}) \quad \text{type } \text{cPoint} = (\text{real}, \text{real}) \] % Sets and membership: \[ \text{type } \text{setOf A} = \text{A} \rightarrow \text{bool} \\ \text{isIn} :: \text{setOf A} \rightarrow \text{A} \rightarrow \text{bool} \\ \text{isIn } \text{Set } \text{Element} = \text{Set Element} \] % Grids and regions as sets of points: \[ \text{type } \text{grid} = \text{setOf dPoint} \quad \text{type } \text{region} = \text{setOf cPoint} \] % Predicate for computing intersections of regions and grids: \[ \text{bothIn} :: \text{region} \rightarrow \text{grid} \rightarrow \text{dPoint} \rightarrow \text{bool} \\ \text{bothIn Region Grid } (X, Y) :- X \#== RX, Y \#== RY, \\ \text{isIn Region } (RX, RY), \text{isIn Grid } (X, Y), \text{labeling } [ ] [X,Y] \] We will try the bothIn predicate for various square grids and triangular regions of parametrically given sizes, defined as follows: % Square grid: \[ \text{square} :: \text{int} \rightarrow \text{grid} \\ \text{square } N\ (X, Y) :- \text{domain } [X, Y]\ 0\ N \] % Triangular region: \[ \text{triangle} :: \text{cPoint} \rightarrow \text{real} \rightarrow \text{region} \\ \text{triangle } (RX0, RY0)\ H\ (RX, RY) :- \\ \text{RY} \geq RY0 - H, \text{RY} - \text{RX} \leq RY0 - RX0, \text{RY} + \text{RX} \leq RY0 + RX0 \] We build an isosceles triangle from a given upper vertex \((RX0, RY0)\) and a given height \(H\). The three vertices are \((RX0, RY0), (RX0-H, RY0-H), (RX0+H, RY0-H)\), and the region inside the triangle is enclosed by the lines \(r1 : RY = RY0 - H, r2 : RY - RX = RY0 - RX0\) and \(r3 : RY + RX = RY0 + RX0\) and characterized by the conjunction of the three linear inequalities: \(C1 : RY \geq RY0 - H, C2 : RY - RX \leq RY0 - RX0\) and \(C3 : RY + RX \leq RY0 + RX0\). This explains the real arithmetic constraints in the triangle predicate.
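As a sanity check of the declarative reading of square and triangle, one can enumerate the integer grid points satisfying the three linear inequalities in plain Python. This is an illustration outside TOY, with no constraint solving; function names mirror the TOY predicates but are otherwise assumptions.

```python
# Brute-force enumeration mirroring bothIn: a grid point (X, Y) is a
# solution iff it lies inside the square and satisfies C1-C3.

def triangle(rx0, ry0, h):
    return lambda rx, ry: (ry >= ry0 - h and
                           ry - rx <= ry0 - rx0 and
                           ry + rx <= ry0 + rx0)

def square(n):
    return lambda x, y: 0 <= x <= n and 0 <= y <= n

def both_in(region, grid, n):
    return [(x, y) for x in range(n + 1) for y in range(n + 1)
            if grid(x, y) and region(float(x), float(y))]

n, d = 4, 2   # a 5x5 grid with middle point (2, 2)
print(both_in(triangle(d + 1/2, d + 1, 1/2), square(n), n))  # -> []
print(both_in(triangle(d, d + 1/2, 1), square(n), n))        # -> [(2, 2)]
print(both_in(triangle(d, d + 1/2, 2), square(n), n))        # four points
```

The three sample configurations show how slightly different triangles over the same grid yield zero, one, or several intersection points.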
As an example of goal solving for this program, we fix two integer values \(d\) and \(n\) such that \((d, d)\) is the middle point of the grid (square n), where \((n + 1)^2\) is the total number of discrete points within the square grid. For instance, we could choose \( n = 4 \) and \( d = 2 \). We consider three goals computing the intersection of this fixed square grid with three different triangular regions: - **Goal 1:** bothIn (triangle \( (d + 1/2, \, d+1) \) \( 1/2 \)) (square \( n \)) \((X,Y)\). This goal fails. - **Goal 2:** bothIn (triangle \( (d, \, d+1/2) \) \( 1 \)) (square \( n \)) \((X,Y)\). This goal computes one solution for \((X,Y)\), corresponding to the point \((d,d)\). - **Goal 3:** bothIn (triangle \( (d, \, d+1/2) \) \( 2 \)) (square \( n \)) \((X,Y)\). This goal computes four solutions for \((X,Y)\), corresponding to the points \((d,d)\), \((d-1,d-1)\), \((d,d-1)\) and \((d+1,d-1)\). ### 3 Cooperative Goal Solving Extending the operational semantics given in [9,2] for lazy constraint functional logic programming, we design in this section a goal solving calculus based on constraint lazy narrowing and solver cooperation mechanisms. #### 3.1 Structure of the Goals We consider goals of the general form \( G \equiv \exists \overline{U}. P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \) in order to represent a generic state of the computation with cooperation of solvers over \( \mathcal{H} \), \( \mathcal{FD} \) and \( \mathcal{R} \). The symbol \( \sqcap \) is interpreted as conjunction. - \( \overline{U} \) is a finite set of local variables in the computation. - \( P \) is a conjunction of so-called productions of the form \( e_1 \rightarrow t_1, \ldots, e_n \rightarrow t_n \), where \( e_i \in \text{Exp} \) and \( t_i \in \text{Pat} \) for all \( 1 \leq i \leq n \). The set of produced variables of \( G \) is defined as the set \( \text{pvar}(P) \) of variables occurring in \( t_1 \ldots t_n \). 
- \( C \) is a finite conjunction of constraints to be solved, possibly including occurrences of defined function symbols. - \( M \) is the so-called communication store between \( \mathcal{FD} \) and \( \mathcal{R} \), with primitive bridge constraints involving only variables and integer or real values. - \( H \) is the so-called Herbrand store, with strict equality/disequality primitive constraints and an answer substitution with variable bindings. - $F$ is the so-called finite domain store, with finite domain primitive constraints and an answer substitution with integer variable bindings. - $R$ is the so-called real arithmetic store, with primitive real arithmetic constraints and an answer substitution with real variable bindings. We work with admissible goals $G$ satisfying the goal invariants given in [9] and such that no variable has more than one bridge in $M$. We also write $\square$ to denote an inconsistent goal. Moreover, we say that a variable $X$ is a demanded variable in a goal $G$ if $X$ occurs in any of the constraint stores of $G$ (i.e., $M$, $H$, $F$ or $R$), and $\mu(X) \neq \bot$ holds for every solution $\mu$ of the corresponding constraint store. For example, $X$ is a demanded variable for the finite domain constraint $X$ #>= 3, but not a demanded variable for the strict disequality constraint $s(X)$ /= $0$, where $s$ and $0$ are constructor symbols.
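The communication store $M$ just described holds bridges pairing an $\mathcal{FD}$ variable with an $\mathcal{R}$ variable. A minimal Python sketch of the binding operation over such a store follows; the class and method names, and the flat value map, are assumptions for illustration, not TOY's implementation.

```python
# Sketch of binding via bridges: when one end of a bridge X #== RX gets a
# numeric value, the other end is instantiated with the same integer.

class BridgeStore:
    def __init__(self):
        self.bridges = []   # (fd_var, r_var) pairs, at most one per variable
        self.values = {}    # variable name -> numeric value

    def add_bridge(self, fd_var, r_var):
        self.bridges.append((fd_var, r_var))

    def bind(self, var, value):
        self.values[var] = value
        for fd, r in self.bridges:
            if var == fd:                   # int end bound: copy to real end
                self.values.setdefault(r, float(value))
            elif var == r:                  # real end bound: must be integral
                if value != int(value):
                    raise ValueError("bridge demands an integer value")
                self.values.setdefault(fd, int(value))

store = BridgeStore()
store.add_bridge("X", "RX")   # the bridge X #== RX
store.bind("RX", 3.0)         # binding the real end...
print(store.values["X"])      # ...instantiates the FD end: 3
```

Note the failure branch: binding the real end of a bridge to a non-integral value is inconsistent, which is exactly how bridges prune the search space.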
In the sequel, we use the following notations in order to indicate the transformation of a goal by applying a substitution \(\sigma\) and also adding \(\sigma\) to the corresponding stores:

- \( (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{M,H,F,R}\,\sigma =_{def} P\sigma \sqcap C\sigma \sqcap M@\sigma \sqcap H@\sigma \sqcap F@\sigma \sqcap R@\sigma \)
- \( (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{H,F,R}\,\sigma =_{def} P\sigma \sqcap C\sigma \sqcap M\sigma \sqcap H@\sigma \sqcap F@\sigma \sqcap R@\sigma \)
- \( (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{R}\,\sigma =_{def} P\sigma \sqcap C\sigma \sqcap M\sigma \sqcap H\sigma \sqcap F\sigma \sqcap R@\sigma \)

where \( (\Pi \sqcap \theta)@\sigma =_{def} \Pi\sigma \sqcap \theta\sigma \), and \( \Pi \sqcap \theta \) stands for one of the stores \( H \), \( F \) or \( R \).

#### 3.2 Cooperative Goal Solving by means of Constrained Lazy Narrowing

The Constrained Lazy Narrowing Calculus \( CLNC(\mathcal{D}) \) is presented in [9] as a suitable computation mechanism for solving goals for \( CFLP(\mathcal{D}) \) over a single constraint domain \( \mathcal{D} \) (e.g., \( \mathcal{H} \), \( \mathcal{FD} \) or \( \mathcal{R} \)) and a single constraint solver over the domain \( \mathcal{D} \). Now, in order to provide a formal foundation for our proposal for the cooperation of solvers over the constraint domains \( \mathcal{H} \), \( \mathcal{FD} \) and \( \mathcal{R} \), preserving the good properties obtained in the \( CFLP(\mathcal{D}) \) framework, we have to reformulate the goal transformation rules of the calculus \( CLNC(\mathcal{D}) \) to deal with the class of goals defined above. We distinguish two kinds of rules: rules for constrained lazy narrowing with sharing by means of productions (these rules are easily adapted from [9]; see Table 2), and new rules for cooperative constraint solving over the constraint stores.
#### 3.3 Rules for Cooperative Goal Solving

The following three rules describe the process of lazy flattening of non-primitive arguments of constraints in \( C \) by means of new productions, the creation of new bridge constraints stored in \( M \) with the aim of enabling propagations, and the actual propagation of mate constraints (recall the introduction) via bridges, taking place simultaneously with the submission of primitive constraints to the \( \mathcal{FD} \) and \( \mathcal{R} \) stores.

**FC Flatten Constraint**

$$\exists \overline{U}.\ P \sqcap p\,\overline{e}_n \rightarrow!\ t,\ C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{FC}\ \exists \overline{V}_m, \overline{U}.\ \overline{a}_m \rightarrow \overline{V}_m,\ P \sqcap p\,\overline{t}_n \rightarrow!\ t,\ C \sqcap M \sqcap H \sqcap F \sqcap R$$

if some \( e_i \notin Pat \), where \( \overline{a}_m \) are those \( e_i \) which are not patterns, \( \overline{V}_m \) are new variables, and \( p\,\overline{t}_n \) is obtained from \( p\,\overline{e}_n \) by replacing each \( e_i \) which is not a pattern by \( V_i \).

**DC Decomposition**

$$\exists \overline{U}.\ h\,\overline{e}_m \rightarrow h\,\overline{t}_m,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{DC}\ \exists \overline{U}.\ \overline{e}_m \rightarrow \overline{t}_m,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R$$

**CF Conflict Failure**

$$\exists \overline{U}.\ e \rightarrow t,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{CF}\ \square$$

if \( e \) is rigid and passive, \( t \notin Var \), and \( e \) and \( t \) have conflicting roots.

**SP Simple Production**

$$\exists \overline{U}.\ s \rightarrow t,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{SP}\ \exists \overline{U}'.\ (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{H}\,\sigma$$

if \( s \equiv X \in Var \), \( t \notin Var \), \( \sigma = \{X \mapsto t\} \), or \( s \in Pat \), \( t \equiv X \in Var \), \( \sigma = \{X \mapsto s\} \); \( \overline{U}' \equiv \overline{U} \setminus \{X\} \).

**IM Imitation**

$$\exists X, \overline{U}.\ h\,\overline{e}_m \rightarrow X,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{IM}\ \exists \overline{X}_m, \overline{U}.\ (\overline{e}_m \rightarrow \overline{X}_m,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)\sigma$$

if \( h\,\overline{e}_m \notin Pat \) is passive, \( X \) is a demanded variable, \( \sigma = \{X \mapsto h\,\overline{X}_m\} \), and \( \overline{X}_m \) are new variables.

**EL Elimination**

$$\exists X, \overline{U}.\ e \rightarrow X,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{EL}\ \exists \overline{U}.\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R$$

if \( X \) does not occur in the rest of the goal.

**DF Defined Function**

$$\exists \overline{U}.\ f\,\overline{e}_n\,\overline{a}_k \rightarrow t,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{DF}\ \exists \overline{Y}, X, \overline{U}.\ \overline{e}_n \rightarrow \overline{t}_n,\ r \rightarrow X,\ X\,\overline{a}_k \rightarrow t,\ P \sqcap C', C \sqcap M \sqcap H \sqcap F \sqcap R$$

if \( f \in DF^{n} \) \((k > 0)\), \( t \notin Var \) or \( t \) is a demanded variable, and \( R: f\,\overline{t}_n \rightarrow r \Leftarrow C' \) is a fresh variant of a rule in \( \mathcal{P} \), with \( \overline{Y} = var(R) \) and \( \overline{Y} \), \( X \) new variables.

**PC Place Constraint**

$$\exists \overline{U}.\ p\,\overline{e}_n \rightarrow t,\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{PC}\ \exists \overline{U}.\ P \sqcap p\,\overline{e}_n \rightarrow!\ t,\ C \sqcap M \sqcap H \sqcap F \sqcap R$$

if \( p \in PF^{n} \), and \( t \notin Var \) or \( t \) is a demanded variable.

Table 2: Rules for Constrained Lazy Narrowing
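The flattening performed by rule FC (together with PC, which turns the resulting productions back into constraints) can be sketched in a few lines. This is hypothetical Python, not part of any actual system; the representation of expressions as nested tuples and all names are ours:

```python
# Hypothetical sketch of FC-style flattening: non-pattern arguments of a
# primitive constraint are replaced by fresh variables, and a production
# e_i -> V_i is emitted for each replaced argument.
import itertools

_fresh = itertools.count(1)

def is_pattern(e):
    # Toy criterion: variables (strings) and numbers are patterns;
    # tuples model applications p(e1, ..., en), which are not patterns here.
    return not isinstance(e, tuple)

def flatten(constraint):
    """Return (flattened constraint, list of productions e -> V)."""
    op, *args = constraint
    productions, new_args = [], []
    for e in args:
        if is_pattern(e):
            new_args.append(e)
        else:
            v = f"_V{next(_fresh)}"
            productions.append((e, v))   # production e -> V
            new_args.append(v)
    return (op, *new_args), productions

# (RX + 2*RY) * RZ <= 3.5, written in prefix form:
c = ("<=", ("*", ("+", "RX", ("*", 2, "RY")), "RZ"), 3.5)
flat, prods = flatten(c)
print(flat)    # ('<=', '_V1', 3.5)
print(prods)   # one production: the non-pattern argument -> _V1
```

Iterating `flatten` over the emitted productions (as PC and FC do in alternation) eventually leaves only primitive constraints over patterns, which is the form required by the SB and SC rules below.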
**SB Set Bridges**

$$\exists \overline{U}.\ P \sqcap p\,\overline{t}_n \rightarrow!\ t,\ C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{SB}\ \exists \overline{V}, \overline{U}.\ P \sqcap p\,\overline{t}_n \rightarrow!\ t,\ C \sqcap M', M \sqcap H \sqcap F \sqcap R$$

if \( \pi = p\,\overline{t}_n \rightarrow!\ t \) is a primitive constraint, and either (i) \( \pi \) is a \( \mathcal{FD} \) constraint and \( M' = bridges^{\mathcal{FD} \rightarrow \mathcal{R}}(\pi, M) \neq \emptyset \), or else (ii) \( \pi \) is a \( \mathcal{R} \) constraint and \( M' = bridges^{\mathcal{R} \rightarrow \mathcal{FD}}(\pi, M) \neq \emptyset \). In both cases, \( \overline{V} = var(M') \setminus var(M) \) are the new variables occurring in the new bridge constraints created by the \( bridges \) operations described in Tables 3 and 4.

**SC Submit Constraints**

$$\exists \overline{U}.\ P \sqcap p\,\overline{t}_n \rightarrow!\ t,\ C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{SC}\ \exists \overline{U}.\ P \sqcap C \sqcap M' \sqcap H' \sqcap F' \sqcap R'$$

if SB cannot be used to set new bridges, and one of the following cases applies:

(i) If \( p\,\overline{t}_n \rightarrow!\ t \) is a bridge \( u \#== u' \), then \( M' = (u \#== u', M) \), \( H' = H \), \( F' = F \) and \( R' = R \).

(ii) If \( p\,\overline{t}_n \rightarrow!\ t \) is a primitive Herbrand constraint \( seq\ t_1\ t_2 \rightarrow!\ t \), then \( M' = M \), \( H' = (seq\ t_1\ t_2 \rightarrow!\ t,\ H) \), \( F' = F \) and \( R' = R \).

(iii) If \( p\,\overline{t}_n \rightarrow!\ t \) is a primitive \( \mathcal{FD} \) constraint \( \pi \), then \( M' = M \), \( H' = H \), \( F' = (\pi, F) \) and \( R' = (R'', R) \), where \( R'' = \text{propagations}^{\mathcal{FD} \rightarrow \mathcal{R}}(\pi, M) \).

(iv) If \( p \overline{t_n} \rightarrow!
t \) is a primitive \( \mathcal{R} \) constraint \( \pi \), then \( M' = M \), \( H' = H \), \( F' = (F'', F) \) and \( R' = (\pi, R) \), where \( F'' = \text{propagations}^{\mathcal{R} \rightarrow \mathcal{FD}}(\pi, M) \).

The \textit{propagations} operations given in Tables 3 and 4 take care of the construction of mate constraints via the bridges in \( M \) for propagation between the \( \mathcal{FD} \) and \( \mathcal{R} \) stores.

<table>
<thead>
<tr> <th>\( \pi \)</th> <th>\( bridges^{\mathcal{FD} \rightarrow \mathcal{R}}(\pi, M) \)</th> <th>\( propagations^{\mathcal{FD} \rightarrow \mathcal{R}}(\pi, M) \)</th> </tr>
</thead>
<tbody>
<tr> <td>domain \( [X_1, \ldots, X_n]\ a\ b \)</td> <td>\( \{\,X_i \#== RX_i \mid 1 \leq i \leq n \), \( X_i \) has no bridge in \( M \), \( RX_i \) new\(\,\} \)</td> <td>\( \{\,a \leq RX_i,\ RX_i \leq b \mid 1 \leq i \leq n,\ (X_i \#== RX_i) \in M \,\} \)</td> </tr>
<tr> <td>belongs \( X\ [a_1, \ldots, a_n] \)</td> <td>\( \{\,X \#== RX \mid X \) has no bridge in \( M \), \( RX \) new\(\,\} \)</td> <td>\( \{\,\min(a_1, \ldots, a_n) \leq RX,\ RX \leq \max(a_1, \ldots, a_n) \mid (X \#== RX) \in M \,\} \)</td> </tr>
<tr> <td>\( t_1 \#< t_2 \) (analogously \( \#<=, \#>, \#>=, \#==, \#/= \))</td> <td>\( \{\,X_i \#== RX_i \mid 1 \leq i \leq 2 \), \( t_i \) is a variable \( X_i \) with no bridge in \( M \), \( RX_i \) new\(\,\} \)</td> <td>\( \{\,t_1^{\mathcal{R}} < t_2^{\mathcal{R}} \mid \) for \( 1 \leq i \leq 2 \), either \( t_i \) is an integer constant \( n \) and \( t_i^{\mathcal{R}} \) is \( n \), or else \( t_i \) is a variable \( X_i \), \( (X_i \#== RX_i) \in M \), and \( t_i^{\mathcal{R}} \) is \( RX_i \,\} \)</td> </tr>
<tr> <td>\( t_1 \#+ t_2 \rightarrow!\ t_3 \) (analogously \( \#-, \#* \))</td> <td>\( \{\,X_i \#== RX_i \mid 1 \leq i \leq 3 \), \( t_i \) is a variable \( X_i \) with no bridge in \( M \), \( RX_i \) new\(\,\} \)</td> <td>\( \{\,t_1^{\mathcal{R}} + t_2^{\mathcal{R}} \rightarrow!\ t_3^{\mathcal{R}} \mid 1 \leq i \leq 3 \), \( t_i^{\mathcal{R}} \) determined as in the previous case\(\,\} \)</td> </tr>
</tbody>
</table>

Table 3: Bridge Constraints and Propagations from \( \mathcal{FD} \) to \( \mathcal{R} \)

#### 3.4 Rules for Constraint Solving

The last four rules describe the process of constraint solving by means of the application of a constraint solver over the corresponding stores (\( M \), \( H \), \( F \) or \( R \)). We note that, in order to respect the admissibility conditions of goals and perform an adequate lazy evaluation, we must protect all the produced variables \( \chi = p\text{var}(P) \) occurring in the stores from eventual bindings caused by the solvers (see [9] for more details). We use the following notations:

- \( H \vdash_{Solver^{\mathcal{H}},\, \chi} \exists \overline{Y}. H' \) indicates one of the alternatives computed by the solver.
- \( H \vdash_{Solver^{\mathcal{H}},\, \chi} \downarrow \) indicates failure of the solver (i.e., \( H \) is unsatisfiable).

Similar notations are used to indicate the behavior of the \( \mathcal{FD} \) and \( \mathcal{R} \) solvers. The simple behavior of the \( \mathcal{M} \) solver is shown explicitly.

**MS \( \mathcal{M} \)-Solver**

- \( \exists \overline{U}.\ P \sqcap C \sqcap X \#== u', M \sqcap H \sqcap F \sqcap R \ \vdash_{MS_1}\ \exists \overline{U}'. \)
\( (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{F}\,\sigma \)

  if \( X \notin p\text{var}(P) \), \( u' \in \mathbb{R} \), \( \sigma = \{X \mapsto u\} \) with \( u \in \mathbb{Z} \) such that \( \text{equiv}\ u\ u' \) holds, and \( \overline{U}' = \overline{U} \) if \( X \notin \overline{U} \), \( \overline{U}' = \overline{U} \setminus \{X\} \) otherwise.

<table>
<thead>
<tr> <th>\( \pi \)</th> <th>\( bridges^{\mathcal{R} \rightarrow \mathcal{FD}}(\pi, M) \)</th> <th>\( propagations^{\mathcal{R} \rightarrow \mathcal{FD}}(\pi, M) \)</th> </tr>
</thead>
<tbody>
<tr> <td>\( RX < RY \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,X \#< Y \mid (X \#== RX), (Y \#== RY) \in M \,\} \)</td> </tr>
<tr> <td>\( RX < a \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,X \#< \lceil a \rceil \mid (X \#== RX) \in M \,\} \)</td> </tr>
<tr> <td>\( a < RY \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,\lfloor a \rfloor \#< Y \mid (Y \#== RY) \in M \,\} \)</td> </tr>
<tr> <td>\( RX \leq RY \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,X \#<= Y \mid (X \#== RX), (Y \#== RY) \in M \,\} \)</td> </tr>
<tr> <td>\( RX \leq a \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,X \#<= \lfloor a \rfloor \mid (X \#== RX) \in M \,\} \)</td> </tr>
<tr> <td>\( a \leq RY \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,\lceil a \rceil \#<= Y \mid (Y \#== RY) \in M \,\} \)</td> </tr>
<tr> <td>\( t_1 == t_2 \)</td> <td>\( \{\,X \#== RX \mid \) either \( t_1 \) is an integer constant and \( t_2 \) is a variable \( RX \) with no bridge in \( M \) (or vice versa), \( X \) new\(\,\} \)</td> <td>\( \{\,t_1^{\mathcal{FD}} \#== t_2^{\mathcal{FD}} \mid \) each \( t_i \) is either an integer constant (then \( t_i^{\mathcal{FD}} \) is that constant) or a variable \( RX_i \) with \( (X_i \#== RX_i) \in M \) (then \( t_i^{\mathcal{FD}} \) is \( X_i \))\(\,\} \)</td> </tr>
<tr> <td>\( t_1 + t_2 \rightarrow!\ t_3 \) (analogously for \( -, * \))</td> <td>\( \{\,X \#== RX \mid t_3 \) is a variable \( RX \) with no bridge in \( M \), \( X \) new, provided that for \( 1 \leq i \leq 2 \) each \( t_i \) is either an integer constant or a variable \( RX_i \) with bridge \( (X_i \#== RX_i) \in M \,\} \)</td> <td>\( \{\,t_1^{\mathcal{FD}} \#+ t_2^{\mathcal{FD}} \rightarrow!\ t_3^{\mathcal{FD}} \mid t_i^{\mathcal{FD}} \) determined as in the previous case\(\,\} \)</td> </tr>
<tr> <td>\( t_1 / t_2 \rightarrow!\ t_3 \)</td> <td>\( \emptyset \) (no bridges are created)</td> <td>\( \{\,t_2^{\mathcal{FD}} \#* t_3^{\mathcal{FD}} \rightarrow!\ t_1^{\mathcal{FD}} \mid t_i^{\mathcal{FD}} \) determined as in the previous cases\(\,\} \)</td> </tr>
</tbody>
</table>

Table 4: Bridge Constraints and Propagations from \( \mathcal{R} \) to \( \mathcal{FD} \)

- \( \exists \overline{U}.\ P \sqcap C \sqcap u \#== RX, M \sqcap H \sqcap F \sqcap R \ \vdash_{MS_2}\ \exists \overline{U}'.\ (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R)@_{R}\,\sigma \)

  if \( RX \notin p\text{var}(P) \), \( u \in \mathbb{Z} \), \( \sigma = \{RX \mapsto u'\} \) with \( u' \in \mathbb{R} \) such that \( \text{equiv}\ u\ u' \) holds, and \( \overline{U}' = \overline{U} \) if \( RX \notin \overline{U} \), \( \overline{U}' = \overline{U} \setminus \{RX\} \) otherwise.

- \( \exists \overline{U}.\ P \sqcap C \sqcap u \#== u', M \sqcap H \sqcap F \sqcap R \ \vdash_{MS_3}\ \exists \overline{U}.\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \)

  if \( u \in \mathbb{Z} \), \( u' \in \mathbb{R} \) and \( \text{equiv}\ u\ u' = true \).

- \( \exists \overline{U}.\ P \sqcap C \sqcap u \#== u', M \sqcap H \sqcap F \sqcap R \ \vdash_{MS_4}\ \square \)

  if \( u \in \mathbb{Z} \), \( u' \in \mathbb{R} \) and \( \text{equiv}\ u\ u' = false \).

**HS \( \mathcal{H} \)-Solver**

\( \exists \overline{U}.\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{HS}\ \exists \overline{Y}, \overline{U}.\ (P \sqcap C \sqcap M \sqcap H' \sqcap F \sqcap R)\sigma' \)

if \( \chi = p\text{var}(P) \cap \text{var}(H) \) and \( H \vdash_{Solver^{\mathcal{H}},\, \chi} \exists \overline{Y}. H' \) with \( H' = \Pi' \sqcap \sigma' \).

**FS \( \mathcal{FD} \)-Solver**

\( \exists \overline{U}.\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{FS}\ \exists \overline{Y}, \overline{U}.\ (P \sqcap C \sqcap M \sqcap H \sqcap F' \sqcap R)\sigma' \)

if \( \chi = p\text{var}(P) \cap \text{var}(F) \) and \( F \vdash_{Solver^{\mathcal{FD}},\, \chi} \exists \overline{Y}. F' \) with \( F' = \Pi' \sqcap \sigma' \).

**RS \( \mathcal{R} \)-Solver**

\( \exists \overline{U}.\ P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{RS}\ \exists \overline{Y}, \overline{U}.\ (P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R')\sigma' \)

if \( \chi = p\text{var}(P) \cap \text{var}(R) \) and \( R \vdash_{Solver^{\mathcal{R}},\, \chi} \exists \overline{Y}. R' \) with \( R' = \Pi' \sqcap \sigma' \).

**SF Solving Failure**

\( \exists \overline{U}. \)
\( P \sqcap C \sqcap M \sqcap H \sqcap F \sqcap R \ \vdash_{SF}\ \square \)

if \( \chi = p\text{var}(P) \cap \text{var}(K) \) and \( K \vdash_{Solver^{\mathcal{D}},\, \chi} \downarrow \), where \( \mathcal{D} \) is the domain \( \mathcal{H} \), \( \mathcal{FD} \) or \( \mathcal{R} \) and \( K \) is the corresponding constraint store (i.e., \( H \), \( F \) or \( R \)).

The following example illustrates the process of flattening and propagation, starting with the real arithmetic constraint \( (RX + 2 \times RY) \times RZ \leq 3.5 \) and bridges for \( RX \), \( RY \) and \( RZ \). We use Tables 3 and 4 in order to build new bridges and propagations in the transformation process. In these tables, no bridges are created for \( / \), because integer division cannot be propagated to real division. The notations \( \lceil a \rceil \) (resp. \( \lfloor a \rfloor \)) stand for the least integer upper bound (resp. the greatest integer lower bound) of \( a \in \mathbb{R} \). Constraints \( t_1 > t_2 \) (resp. \( t_1 \geq t_2 \)) not occurring in Table 4 are treated as \( t_2 < t_1 \) (resp. \( t_2 \leq t_1 \)). At each goal transformation step, we indicate the applied rule:

\( \sqcap\ (RX + 2 \times RY) \times RZ \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{FC} \)

\( \exists RA.\ (RX + 2 \times RY) \times RZ \rightarrow RA\ \sqcap\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{PC} \)

\( \exists RA.\ \sqcap\ (RX + 2 \times RY) \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{FC} \)

\( \exists RB, RA.\ RX + 2 \times RY \rightarrow RB\ \sqcap\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{PC} \)

\( \exists RB, RA.\ \sqcap\ RX + 2 \times RY \rightarrow!\ RB,\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{FC} \)

\( \exists RC, RB, RA.\ 2 \times RY \rightarrow RC\ \sqcap\ RX + RC \rightarrow!\ RB,\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{PC} \)

\( \exists RC, RB, RA.\ \sqcap\ 2 \times RY \rightarrow!\ RC,\ RX + RC \rightarrow!\ RB,\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{SB} \cdots \vdash_{SB} \)

\( \exists C, B, A, RC, RB, RA.\ \sqcap\ 2 \times RY \rightarrow!\ RC,\ RX + RC \rightarrow!\ RB,\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5\ \sqcap\ C \#== RC,\ B \#== RB,\ A \#== RA,\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap \sqcap \ \vdash_{SC} \cdots \vdash_{SC} \)

\( \exists C, B, A, RC, RB, RA.\ \sqcap \sqcap\ C \#== RC,\ B \#== RB,\ A \#== RA,\ X \#== RX,\ Y \#== RY,\ Z \#== RZ\ \sqcap \sqcap\ 2\ \#{*}\ Y \rightarrow!\ C,\ X\ \#{+}\ C \rightarrow!\ B,\ B\ \#{*}\ Z \rightarrow!\ A,\ A\ \#{<=}\ 3\ \sqcap\ 2 \times RY \rightarrow!\ RC,\ RX + RC \rightarrow!\ RB,\ RB \times RZ \rightarrow!\ RA,\ RA \leq 3.5 \)

### 4 Implementation and Performance Results

In this section, we present some hints about the implementation of the formal setting presented above, and we test its performance, showing the improvements caused by propagation w.r.t. a restricted use of bridges for binding alone.

#### 4.1 Implementation

Our implementation has been developed by adding a store \( M \) for bridges, as well as code for implementing bindings and propagations, on top of the existing \( \mathcal{TOY} \) system [1]. \( \mathcal{TOY} \) already has three solvers for the constraint domains \( \mathcal{H} \), \( \mathcal{FD} \) and \( \mathcal{R} \), each of them with its corresponding store. Each predefined function is implemented as a SICStus Prolog predicate that has arguments for: the function arguments (as many as its arity), the function result, and the Herbrand constraint store. The next example is a simplified code excerpt that shows how the binding mechanism for bridges is implemented. Actually, this is the only code needed for obtaining the performance results shown in Subsection 4.2 for computations without propagation.

```prolog
(1) #==(L, R, true, Cin, ['#=='(HL,HR)|Cout]) :-
(2)     hnf(L, HL, Cin, Cout1), hnf(R, HR, Cout1, Cout),
(3)     freeze(HL, HR is float(HL)), freeze(HR, HL is integer(HR)).
```

This predefined constraint demands its two arguments (L and R) to be in head normal form (hnf). Therefore, code line (2) implements the application of the rules FC and PC (with true as t and equiv as p).
Next, line (3) implements the application of the transformation rule MS. The predicate freeze suspends the evaluation of its second argument until the first one becomes ground. What we need to reflect in this constraint is to equate two arguments (variables or constants) of different types, i.e., real and integer, so that type casting is needed (the float and integer operations). The binding and matching inherent in MS are accomplished by unification. Finally, the transformation rule SC (case (i)) is implemented by adding the flattened bridge to the communication store. The last two arguments of predicate #== stand for the input and output stores.

For the sake of rapid prototyping, the current implementation mixes Herbrand and communication constraints in one single store, although they should be separated for better performance. In addition, we always add constraints to the communication store (irrespective of groundness) and never drop them; again, this is to be enhanced in a final release.

Implementing propagation requires a modification of the existing code for predefined constraints. For example, the code excerpt below shows the implementation of the relational constraint #>.

```prolog
(1) #>(L, R, Out, Cin, Cout) :-
(2)     hnf(L, HL, Cin, Cout1), hnf(R, HR, Cout1, Cout2),
(3)     searchVarsR(HL, Cout2, Cout3, HLR), searchVarsR(HR, Cout3, Cout, HRR),
(4)     ((Out=true, HL#>HR, {HLR>HRR}) ; (Out=false, HL#<=HR, {HLR=<HRR})).
```

Here, line (2) implements the FC and PC goal transformation rules; line (3) implements the rule SB by adding the new needed bridges to the mixed H + M store; and line (4) implements propagation (case (iii) of rule SC), sending both the FD constraint and its mate in R to the corresponding solvers. Note that, because we allow reification, in particular for relational constraints, the complementary cases (true and false results) correspond to complementary constraints.
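Abstracting from the Prolog details above, the construction of a mate constraint for a relational FD constraint via bridges (the third row of Table 3, used in case (iii) of rule SC) can be pictured with a short sketch. This is hypothetical Python, not the actual SICStus implementation; all names are ours:

```python
# Hypothetical sketch of propagations^{FD->R} for a relational FD
# constraint t1 #< t2 (Table 3): each argument is translated through a
# bridge in M (for variables) or kept as a number (for integer
# constants), and the resulting mate constraint goes to the R store.

def to_real_term(t, bridges):
    """Translate an FD argument to its R counterpart, or None if the
    argument is a variable with no bridge in M."""
    if isinstance(t, int):
        return float(t)
    return bridges.get(t)          # R variable mated to the FD variable

def propagate_lt(t1, t2, bridges):
    """Mate of t1 #< t2 in R, if every argument is translatable."""
    r1, r2 = to_real_term(t1, bridges), to_real_term(t2, bridges)
    if r1 is None or r2 is None:
        return []                  # no propagation possible
    return [("<", r1, r2)]

bridges = {"X": "RX", "Y": "RY"}   # bridges X #== RX, Y #== RY
print(propagate_lt("X", 3, bridges))   # [('<', 'RX', 3.0)]
print(propagate_lt("X", "Z", bridges)) # [] -- Z has no bridge
```

The all-or-nothing check mirrors the side condition of Table 3: a mate constraint is only built when every argument is either an integer constant or a variable that already has a bridge in M.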
#### 4.2 Performance Results

Table 5 compares the timing results for executing the goals given in Section 2 for the running example (see Subsection 2.2). The first column indicates the goal; the second and third ones indicate the parameters $d$ and $n$ determining the middle point and the size of the square grid, respectively. The remaining columns show running times (in milliseconds) in the form $t_B/t_{BP}$, where $t_B$ stands for the system using bridges for binding alone and $t_{BP}$ for the system using bridges also for propagation. The value '0' stands for execution times too small to be measured, which are displayed as '0' by the system. The column headed with a number $i$ refers to the $i$-th solution found, and the last nonempty entry of each row corresponds to the time needed to determine that there are no more solutions. In this simple example, we see that the finite domain search space has been hugely cut by the propagations from $\mathcal{R}$ to $\mathcal{FD}$: finite domain solvers alone are not powerful enough to cut the search space as efficiently as simplex methods do for linear real constraints.
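The search-space cut reported here comes from projecting real arithmetic bounds onto finite domains: by Table 4, a constraint such as $RX \leq a$ propagates through the bridge $X \#== RX$ to $X \#<= \lfloor a \rfloor$. A hypothetical Python sketch of this bound translation (not part of the actual system; names are ours):

```python
import math

# Hypothetical sketch of the R->FD bound propagations of Table 4:
#   RX <  a   propagates to  X #<  ceil(a)
#   a  <  RY  propagates to  floor(a) #< Y
#   RX <= a   propagates to  X #<= floor(a)
#   a  <= RY  propagates to  ceil(a) #<= Y
# Variables are strings; real constants are floats.

def fd_bound(pi):
    """Translate a primitive R bound on a bridged variable into the
    corresponding FD bound, or None if the form is not covered."""
    op, lhs, rhs = pi
    if isinstance(rhs, float):                     # RX op a
        if op == "<":
            return ("#<", lhs, math.ceil(rhs))
        if op == "<=":
            return ("#<=", lhs, math.floor(rhs))
    if isinstance(lhs, float):                     # a op RY
        if op == "<":
            return ("#<", math.floor(lhs), rhs)
        if op == "<=":
            return ("#<=", math.ceil(lhs), rhs)
    return None

print(fd_bound(("<=", "X", 3.5)))   # ('#<=', 'X', 3)
print(fd_bound(("<", 3.5, "Y")))    # ('#<', 3, 'Y')
```

Even a single such projected bound can prune a huge fraction of an integer labeling space, which is consistent with the dramatic $t_B$ versus $t_{BP}$ gap in Table 5.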
<table> <thead> <tr> <th>Goal</th> <th>$d$</th> <th>$n$</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>20000</td> <td>40000</td> <td>1828/0</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td></td> <td>200000</td> <td>400000</td> <td>17900/0</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>2</td> <td>20000</td> <td>40000</td> <td>1125/0</td> <td>2172/0</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td></td> <td>200000</td> <td>400000</td> <td>111201/0</td> <td>215156/0</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>3</td> <td>20000</td> <td>40000</td> <td>1125/0</td> <td>1485/0</td> <td>0/0</td> <td>1500/0</td> <td>2203/0</td> </tr> <tr> <td></td> <td>200000</td> <td>400000</td> <td>111329/0</td> <td>147406/0</td> <td>0/0</td> <td>147453/0</td> <td>216156/0</td> </tr> </tbody> </table>

Table 5: Performance Results

### 5 Conclusions

We have presented a proposal for the cooperation of solvers for the three domains $\mathcal{H}$, $\mathcal{FD}$ and $\mathcal{R}$ in Constraint Functional Logic Programming, based on the propagation of mate constraints between the $\mathcal{FD}$ and $\mathcal{R}$ solvers. Our presentation includes both a formal description of cooperative goal solving as an enrichment of existing goal solving calculi [9,2] and a discussion of an effective implementation as an extension of an existing constraint functional logic system, which was already shown to have reasonable performance [3]. We have obtained encouraging performance results, shown by goal solving examples where the propagation of mate constraints dramatically cuts the search space, thus leading to significant speedups in execution time.
Besides the benefits of improving efficiency in a sequential environment, the cooperation of solvers even opens the possibility of exploiting emerging technologies such as parallel architectures and grid computing for the parallel execution of different solvers on different processing elements (platforms, processors or cores).

As mentioned in the introduction, the cooperation of constraint solvers has been extensively investigated in recent years [4]. Let us mention at this point just a restricted selection of related work. In his PhD thesis [12], Eric Monfroy proposed BALI (Binding Architecture for Solver Integration; see also [13,14]), providing a number of cooperation primitives which can be used to combine various solvers according to different strategies. Monfroy's approach assumes that all the solvers work over a common store, while our present proposal requires communication among different stores. Mircea Marin [10] developed a CFLP scheme that combines Monfroy's approach to solver cooperation with a higher-order lazy narrowing calculus somewhat similar to [9] and to the goal solving calculus we have presented in Section 3. In contrast to our proposal, Marin's approach allows for higher-order unification, which leads both to greater expressivity and to less efficient implementations. Moreover, the instance of CFLP implemented by Marin and others [11] is quite different from our work, since it deals with the combination of four solvers over a constraint domain for algebraic symbolic computation. More recently, Petra Hofstedt [6,5] proposed a general approach for the combination of various constraint systems and declarative languages into an integrated system of cooperating solvers.
In Hofstedt's proposal, the goal solving procedure of a declarative language is itself viewed as a solver, and cooperation of solvers is achieved by two mechanisms: constraint propagation, which submits a constraint belonging to some domain \( D \) to \( D \)'s constraint store, say \( S_D \); and projection of constraint stores, which consults the contents of a given store \( S_D \) and deduces constraints for another domain. Propagation, as used in this paper, is more akin to Hofstedt's projection, while Hofstedt's propagation corresponds to our goal solving rules for placing constraints in stores and invoking constraint solvers. Hofstedt's ideas have been implemented in a meta-solver system called META-S, but we are not aware of any performance results.

These and other related works encourage us to continue our investigation, aiming at finding and implementing more elaborate means of communication among solvers, as well as evaluating their performance experimentally. Future planned work also includes modelling the declarative semantics of cooperation. The implementation described in this paper will soon be available (see http://toy.sourceforge.net).

References
This chapter provides an introduction to the tools SAP offers to help provision data for SAP HANA. It begins with a look into what types of tools you have to choose from; then, it dives a little deeper into what sets each tool apart.

The Authors: Megan Cundiff, Vernon Gomes, Russell Lamb, Don Loden, Vinay Suneja. Data Provisioning for SAP HANA. 352 pages, 2018, $79.95. ISBN 978-1-4932-1671-0. www.sap-press.com/4588

# Chapter 1 Introduction

When it comes to data provisioning, most companies have to work with the data and the tools they have. We hope this book will help you make the right choices as you navigate provisioning data to SAP HANA.

If you deal with data, whether large or small, you'll probably ask yourself at some point, "How can I get this file/table/extract/feed into SAP HANA?" If you haven't heard this question a hundred times already, you will soon. Project managers schedule meetings on this question; analysts ping every IT contact they know searching for a quick answer. When asking an SAP HANA consultant, the answers might border on endless. The alphabet soup of solutions and tool names can be confusing even to seasoned SAP users. Whether you're an IT executive or a developer, your customers are probably asking this question, and your goal should be to provide a simple answer, which will require at least a cursory understanding of the available tools, an inventory of the tools currently available to you, and a methodology for determining the best solution for your users' circumstances. This book aims to strengthen you in all three areas, so that you can quickly and confidently leverage SAP HANA's in-memory computing to support your organization.

First, let's look into what types of tools we have to choose from; then, we'll dive a little deeper into what sets each tool apart.

## 1.1 What Are the Tools for Provisioning Data?

The hardest part is usually getting started.
We'll cover six tools in depth in this book, but we can group them into three categories to help you quickly decide where to focus your efforts: ETL (extract, transform, and load); cleansing; and replication. Let's briefly define each category and see how the six tools fall into each category; then, we can dive a little deeper into what separates these tools from others in the market.

Often, the meticulous grouping of functionalities into acronyms, meant to make things clear and concise, can have the opposite effect. Suddenly, rather than saying, "You can use SAP HANA's built-in ETL tool," you might end up saying, "You can use SDI via SDA and a Data Provisioning Agent server." Despite meaning the same thing, the latter statement can easily result in hours of researching and making lists of pros and cons. But, ultimately, each tool has its place, and in this section, we'll clarify the overarching use case for each.

First, SAP HANA smart data integration (SDI) is a tool primarily focused on getting your SAP HANA system up and running as quickly as possible, since it is bundled natively with the platform. Next, SAP Data Services is designed to create a common language across your organization, which may or may not include SAP HANA, and facilitate data movements. Third, SAP Agile Data Preparation peeks behind the curtain a bit to allow business users to build their own joins and lookups on source data. Finally, SAP Landscape Transformation Replication Server (SAP LT Replication Server) is a tool that you can use to quickly put SAP HANA to work and start querying massive amounts of SAP data.

Separating the tools into these broader categories hopefully points to a larger theme in this book, which is that no one tool can do it all, all the time. More often than not, a combination of these tools is required to support a large organization with data spread out across multiple SAP and non-SAP systems.
We'll look at each tool independently to understand its strengths and weaknesses and its place in the IT landscape. If you already know which tools you plan to use, skip to the specific chapter for the nuts and bolts of utilizing the tool in your provisioning strategy.

### 1.1.1 Extract, Transform, and Load

ETL products enable you to manipulate your data before loading the data into SAP HANA. By offering standardization and reproducible data enhancements, ETL tools can greatly improve analyst productivity by removing repetitive tasks from the daily workload. If a user mentions they need to download or export the data into Excel so that the data can be "massaged" or "cleaned up" before uploading, an ETL tool can be inserted into the process to automate those tasks, thus allowing your analysts to focus on analysis.

When provisioning SAP HANA, if one of your users says, "I have a file," the first question you should ask is "How do you get this file?" The answer will help you decide between the two provisioning tools found in this group, as follows:

- SAP Data Services
- SAP HANA smart data integration (SDI)

### SAP Data Services

SAP Data Services is a one-stop ETL shop for SAP data integration. Other ETL tools exist, of course, such as Informatica, SSIS, and open-source options such as Pentaho, but for multisystem integration in a mixed landscape that includes any amount of SAP software, SAP Data Services is the ETL tool of choice because of its ability to natively access SAP programs and its change data capture options. However, using SAP is not a prerequisite for using SAP Data Services. SAP Data Services' primary function is to provide a layer across all data storage devices in your organization, both on-premise and in the cloud.
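Before we dig into SAP Data Services specifics, the ETL pattern itself can be made concrete with a minimal, tool-agnostic sketch in Python. The CSV layout, column names, and target table below are invented purely for illustration: the function extracts rows from CSV input, transforms a column into a standard form, and loads the result into a database table.

```python
import csv
import io
import sqlite3

def etl(csv_text, conn):
    """Minimal ETL sketch: extract rows from CSV text, standardize
    a column, and load the result into a target table."""
    # Extract: read raw records from the source.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Transform: trim whitespace and upper-case the customer name so
    # that downstream joins match reliably; cast sales to a number.
    for row in rows:
        row["CUSTOMER"] = row["CUSTOMER"].strip().upper()
        row["SALES"] = float(row["SALES"])
    # Load: write the cleaned records to the target table.
    conn.execute("CREATE TABLE IF NOT EXISTS sales (CUSTOMER TEXT, SALES REAL)")
    conn.executemany("INSERT INTO sales VALUES (:CUSTOMER, :SALES)", rows)
    return rows

conn = sqlite3.connect(":memory:")
loaded = etl("CUSTOMER,SALES\n abc co. ,1000\nXYZ Inc.,500\n", conn)
print(loaded[0]["CUSTOMER"])  # ABC CO.
```

Every ETL tool discussed in this chapter automates some richer version of these three steps; the differences lie in connectivity, scale, and how the transformations are expressed.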
SAP Data Services includes eight customized ODBC adapters and can utilize JDBC connections, parse Hadoop file stores, import web services for software-as-a-service (SaaS) integrations, open FTP and SFTP file locations, connect to Samba and Windows shares, and, in a pinch, even leverage Windows and Unix shell commands and custom Python scripts. In terms of data storage, SAP Data Services levels the playing field by providing a single syntax to interface with all these storage options. Let's look at a few examples to expand on this topic from a developer's point of view.

### The Tool of Many Names

SAP Data Services is also commonly called the "Data Integrator (DI)" or the "SAP BusinessObjects Data Integrator (BODI)"; both names refer to the same tool, minus the data quality transforms used for data cleansing. This licensing difference is often overlooked by developers, who may simply refer to the tool as SAP Data Services.

For anyone who has worked with any type of data, SQL (Structured Query Language) is not a new term. But, too often, many forget that not all SQL is created equal. Every database has its own unique features and solutions for certain tasks and, thus, also unique syntax requirements. Let's say, for example, we'd like to see the top 10 customers by total sales and the relevant vice president at each client company. Let's assume we have this data stored in a single table, structured like the records shown in Table 1.1. The records in this table might exist in any database as exact duplicates, but the way in which the database is asked for records can change drastically from system to system.
<table> <thead> <tr> <th>VP First Name</th> <th>VP Last Name</th> <th>Customer</th> <th>Sales</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>Doe</td> <td>ABC Co.</td> <td>1,000</td> </tr> <tr> <td>Jane</td> <td>Doe</td> <td>XYC Inc.</td> <td>500</td> </tr> </tbody> </table>

Table 1.1 Customers with Sales Information

Now, let's look at some different SQL syntaxes, depending on the database that stores this table. For a table in Oracle, a developer would need to write a query that looks something like Listing 1.1. Oracle utilizes a double pipe (||) to concatenate strings and includes a useful rownum pseudocolumn for limiting result sets. Because rownum is assigned before the order by clause is applied, the sorted query must be wrapped in a subquery before filtering:

```sql
select *
from (select VP_FIRST_NAME || ' ' || VP_LAST_NAME as VP_NAME,
             CUSTOMER, sum(SALES) as TOTAL_SALES
      from table1
      group by VP_FIRST_NAME, VP_LAST_NAME, CUSTOMER
      order by sum(SALES) desc)
where rownum <= 10
```

Listing 1.1 Oracle Syntax

For a table in Microsoft SQL Server, a developer would need to write a query that looks something like Listing 1.2. Microsoft SQL Server doesn't have a rownum object that can be referenced; instead, the keyword top will select the top n number of records. Microsoft SQL Server also uses plus signs (+) for concatenation.

```sql
select top 10
       VP_FIRST_NAME + ' ' + VP_LAST_NAME as VP_NAME,
       CUSTOMER, sum(SALES) as TOTAL_SALES
from table1
group by VP_FIRST_NAME, VP_LAST_NAME, CUSTOMER
order by sum(SALES) desc
```

Listing 1.2 Microsoft SQL Server Syntax

For a table in PostgreSQL, you would write a query like the one in Listing 1.3. PostgreSQL, like Oracle, uses double pipes to tie strings together; however, unlike both Oracle and Microsoft SQL Server, you'll use a different keyword, limit, to restrict the result set to the top 10.

```sql
select VP_FIRST_NAME || ' ' || VP_LAST_NAME as VP_NAME,
       CUSTOMER, sum(SALES) as TOTAL_SALES
from table1
group by VP_FIRST_NAME, VP_LAST_NAME, CUSTOMER
order by sum(SALES) desc
limit 10
```

Listing 1.3 PostgreSQL Syntax

Even within the same database brand, differences among versions can result in syntactical changes, and, over time, new releases introduce better ways to execute the same logic.
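This is exactly the heavy lifting an abstraction layer like SAP Data Services performs: it maintains one logical query definition and renders the dialect each database expects. A rough sketch of that idea in Python (simplified to just the differences shown in the listings above; this is an illustration of the concept, not how SAP Data Services is actually implemented):

```python
def top_customers_sql(dialect, n=10):
    """Render the same logical 'top-n customers by sales' query in
    three SQL dialects, mimicking what an ETL layer does internally."""
    # Concatenation operator differs by dialect.
    concat = {"oracle": "||", "postgres": "||", "mssql": "+"}[dialect]
    name = f"VP_FIRST_NAME {concat} ' ' {concat} VP_LAST_NAME"
    body = (f"SELECT {name} AS VP_NAME, CUSTOMER, SUM(SALES) AS TOTAL_SALES "
            f"FROM table1 GROUP BY VP_FIRST_NAME, VP_LAST_NAME, CUSTOMER "
            f"ORDER BY SUM(SALES) DESC")
    if dialect == "mssql":
        # SQL Server limits rows with TOP n in the select list.
        return body.replace("SELECT", f"SELECT TOP {n}", 1)
    if dialect == "postgres":
        # PostgreSQL limits rows with a trailing LIMIT clause.
        return f"{body} LIMIT {n}"
    # Oracle: ROWNUM is assigned before ORDER BY, so wrap in a subquery.
    return f"SELECT * FROM ({body}) WHERE ROWNUM <= {n}"

print(top_customers_sql("postgres"))
```

A real tool also tracks per-version quirks, quoting rules, and data type mappings, which is why hand-maintaining such a layer rarely pays off.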
SAP Data Services enables ETL developers to ignore these differences in code, often without having to write any code at all. The SAP Data Services user interface is primarily drag-and-drop. Rather than writing SELECT statements (although the option is available), you can import the table metadata and map columns from the source table to the target table by dragging and dropping. Queries are no longer lines of code but boxes that house all the individual configuration panels, dropdown menus, and function calls that make up a query. Once the configuration is satisfactory, the SAP Data Services application server executes the code by translating the configuration into the necessary SQL syntax required by both the source and target databases. An example of an SAP Data Services job is shown in Figure 1.1.

Figure 1.1 An Example SAP Data Services Job

For example, a common data transformation involves the location of a substring within a string. In SAP Data Services, similar to other programming languages, this transformation is known as an Index() function. Let's say we have, as shown in Table 1.2, an example dataset that includes product codes and descriptions that no longer meet the business definition; thus, data manipulation is required.

<table> <thead> <tr> <th>PRODUCT_CODE_LONG</th> <th>PRODUCT_NAME</th> </tr> </thead> <tbody> <tr> <td>AB-123</td> <td>Cotton Swabs 500 Ct</td> </tr> <tr> <td>KP-345</td> <td>Cotton Swabs 1000 Ct</td> </tr> </tbody> </table>

Table 1.2 Example Dataset

Perhaps a business requirement is to remove the text before the dash in a product code before sending the data to another system. A common solution for this in SAP Data Services is to leverage the index function to locate the dash and a substring (substr) to keep everything that follows it. The SAP Data Services code would look as follows:

```sql
substr(PRODUCT_CODE_LONG, index(PRODUCT_CODE_LONG, '-', 1) + 1, length(PRODUCT_CODE_LONG))
```

Regardless of the source database, this line of code will not require alternate syntax.
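For comparison, the same transformation written directly in Python makes the logic explicit: find the position of the first dash, then keep everything after it.

```python
def strip_code_prefix(product_code_long):
    """Return the product code with everything up to and including
    the first dash removed, e.g. 'AB-123' -> '123'.
    Note: raises ValueError if the input contains no dash."""
    dash = product_code_long.index("-")   # position of the first dash
    return product_code_long[dash + 1:]   # keep the remainder

print(strip_code_prefix("AB-123"))   # 123
print(strip_code_prefix("KP-345"))   # 345
```

The point of the ETL layer is that you write this logic once, in one syntax, and let the tool push it down to whatever system holds the data.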
With SAP Data Services, you don't need to know that Oracle's equivalent of the Index() function is called Instr() or that, to trim off the left side of a string in Microsoft SQL Server, the function Right() is required. Let's not forget that this data might not be in a database at all! Instead, the data could be in an Excel file or even stored within a third-party cloud solution such as Salesforce.com. Regardless, SAP Data Services will determine the proper syntax required for the transformation logic.

If your organization needs to cast a wide net to unify numerous databases and perform complex data transformations, SAP Data Services is likely to be the preferred option. But what if your scope isn't that wide? Other ETL tools are available to you, including one already built into the SAP HANA platform itself: SDI. However, to work with data not already inside SAP HANA, we'll need to look at another component first, SAP HANA smart data access (SDA). While not specifically an ETL tool, we'll discuss SDA because of its importance when leveraging SDI.

### SAP HANA Smart Data Access

SDA is another piece of the SAP HANA platform. You might notice that this tool is not specific to data provisioning. SDA provides a window into another database, thus allowing you to view and query data without having to copy that data over to SAP HANA. The data never leaves its source system and is never written to the SAP HANA hard disk when leveraging SDA. However, you can see the data directly within your SAP HANA development environment under the **Provisioning** folder, as shown in Figure 1.2, which allows you to create remote sources and import virtual tables.

![Figure 1.2 SDA from the Provisioning Folder in SAP HANA Studio](image)

However, as you can probably guess, SDA's virtual tables can be leveraged by SDI as source tables to facilitate an SAP HANA-based ETL solution, with, of course, some limitations.
At the time of this writing, SDA in SAP HANA 2.0 includes the following 17 ODBC connections out of the box:

- ASE
- TERADATA
- IQ
- SAP HANA
- HADOOP
- GENERIC ODBC
- ORACLE
- MSSQL
- NETEZZA

![Figure 1.3 SDA Virtual Tables](image)

SDA also includes four destinations so you can leverage external procedure calls on your data when SAP HANA is not appropriate, for example, when using the open-source machine learning library TensorFlow or an rServe server. The four destinations are as follows:

- HADOOP
- SPARK SQL
- rSERVE
- GRPC

As long as these built-in ODBC connections meet your requirements, SDA might be all you need. SDI can simply refer to the virtual tables exposed by these SDA adapters as source tables, execute the SQL required, and then write the results to disk in SAP HANA. But, if you have source systems not accessible via the adapters listed above, one additional piece of software can be leveraged to extend beyond SDA's predelivered ODBC adapters—SDI.

### SAP HANA Smart Data Integration

Also an ETL tool, SDI offers much the same core functionality as SAP Data Services. SDI can leverage all the ODBC connections mentioned previously, plus an additional 20 Java adapters developed by SAP and distributed via the Data Provisioning Agent. Additionally, if these prebuilt solutions still don't meet your needs, you can extend SDI's integration further by writing your own Java adapter utilizing the SAP HANA Adapter software development kit (SDK).

One key difference between SDI and SAP Data Services is that, if you already have SAP HANA, you already have SDI. As a core component of the SAP HANA platform, every version of SAP HANA from SPS 09 on has SDI built in and ready to deploy. If additional adapters are required, for example, for reading from a flat file or for connecting to a web service, you'll need to complete an extra step first: You'll need to deploy the Data Provisioning Agent, shown in Figure 1.4.
The SAP HANA Data Provisioning Agent Configuration screen allows you to deploy 20 additional Java adapters to supplement the adapters already provided by SDA.

**Why a separate piece of software?** For SAP, this segregation of duties isolates the database from the data transfer mechanism and ensures that the processing power required by and promised to the SAP HANA system remains unaffected. Thus, SAP recommends utilizing a second server or a virtual machine (either Linux or Windows) to run the Data Provisioning Agent, from which your Data Provisioning Agent adapters will be deployed. Luckily, this free and lightweight piece of software can even be run locally on a typical developer's laptop for testing purposes.

Another significant difference between the two tools is that, with the changes that have come with SAP HANA extended application services, advanced model (SAP HANA XSA) in SAP HANA 2.0, SDI development can be done completely in a web browser via the SAP Web IDE, as shown in Figure 1.5 (two tables being joined, with no output created yet). This web-based feature can greatly simplify processes and reduce the effort required for developer onboarding. Simply grant developers the appropriate role while creating their user and provide the link. There's no need to install versioned client tools, or even SAP HANA Studio or Eclipse, the original development IDEs for SDI. SDI flowgraphs can be built using the SAP Web IDE, an SAP HANA XSA application accessible via a web browser.

Finally, the largest difference between the two tools involves their overall purposes. SDI's purpose is to provision SAP HANA. Though packed with data federation options and extensibility via the SDK, SDI's primary function is to load data into SAP HANA, not into other systems. While loading data into SAP HANA is probably your immediate goal, keep in mind your organization's long-term goals.
If loading an array of multiple databases other than SAP HANA is not a concern at the moment, SDI might be the perfect fit. SDI is a feature-rich ETL solution capable of meeting many, if not all, of your SAP HANA provisioning requirements. In Chapter 2, we'll cover how to get started developing SDI flowgraphs, how to set up the Data Provisioning Agent (as well as deploying its most common adapters), and how to leverage them in an SDI-based ETL solution. But, what if the data to be pulled into your SAP HANA environment isn't quite up to par? As an aside, this book will also cover in depth a few specific transformations within SDI that fall under their own acronym: SAP HANA smart data quality (SDQ).

### 1.1.2 Cleansing

While similar to ETL (and, in the case of SAP Data Services, bundled into the ETL tool), cleansing requires a different type of logic, something smarter. Where ETL tools will leverage joins by matching two keys exactly, cleansing leverages fuzzy joins and looks for likely matches with some degree of confidence. The goal of a cleansing tool is to find out whether a given piece of data captures the intent of the user who entered it.

If you've ever been unlucky enough to have to join two datasets by something as fluid as company names (or worse, address lines), then you've experienced the challenges that come with programmatic cleansing. Take, for example, the records shown in Table 1.3. The number of ways different users might input the same address is staggering, and to a database, these variations are all equal in validity.

<table> <thead> <tr> <th>Source System Name</th> <th>Address Line</th> </tr> </thead> <tbody> <tr> <td>Cloud CRM</td> <td>293 1st Avenue</td> </tr> <tr> <td>On Prem ERP</td> <td>293 First Ave.</td> </tr> </tbody> </table>

Table 1.3 Possible Data Inputs

To an analyst, these two addresses are clearly the same, but not so to a database.
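To see why cleansing needs "smarter" logic than exact-key joins, consider a minimal fuzzy comparison built with Python's standard library (purely an illustration of the concept, not how SDQ or SAP Data Services implement matching). After light normalization, the two variants from Table 1.3 score as a confident match even though they fail an exact comparison:

```python
from difflib import SequenceMatcher

# A tiny, hand-made normalization dictionary. Real cleansing engines
# rely on rich, locale-aware reference data (the "directories"
# discussed later in this chapter); this is only an illustration.
SYNONYMS = {"first": "1st", "avenue": "ave"}

def normalize(address):
    """Lowercase, strip trailing periods, and map known synonyms."""
    words = [w.lower().rstrip(".") for w in address.split()]
    return " ".join(SYNONYMS.get(w, w) for w in words)

def similarity(a, b):
    """Similarity ratio (0.0 to 1.0) between two normalized addresses."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

a, b = "293 1st Avenue", "293 First Ave."
print(a == b)            # False: an exact-match join fails
print(similarity(a, b))  # 1.0: a likely match after normalization
```

A production matching engine extends this idea with weighted components (street, city, postal code), configurable thresholds, and survivorship rules for merging the matched records.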
To avoid having to sift through millions of records, hunting for duplicates and valid links, you can leverage one of the tools in this category to ensure you're making efficient use of your limited SAP HANA storage:

- SAP HANA smart data quality (SDQ)
- SAP Agile Data Preparation
- SAP Data Quality Management, microservices for location data

### SAP HANA Smart Data Quality

As a component of SDI, SDQ can be utilized to cleanse data already stored in SAP HANA, either in batch jobs during extractions from other systems or in real time as data becomes available to the SAP HANA system. SDQ is ultimately a subset of functions available to the SDI developer that can be included in flowgraphs, similar to the data quality transforms found in SAP Data Services, which are only available with the appropriate license.

While not as diverse as the data quality capabilities in SAP Data Services, SDQ is well suited for parsing and standardizing free-form text, without the need for an additional server, application, or license. However, you'll need to take additional costs into account when cleansing address data is required. An annual subscription fee is required to access the most up-to-date address information across all SAP address cleansing solutions, including SAP Data Services. These address information files, referred to as directories, are required for the different address cleansing engines to perform their logic. Once purchased, simply add the directories to the correct server location to enable validating and improving address data coming into your SAP HANA system.

Though SDQ is only a subset of SDI, due to the numerous configurations required, we'll explore it extensively to ensure you get the most out of your decision to utilize SDI as your SAP HANA provisioning tool. However, SDQ is not the only method that an organization can use to enhance data quality in their SAP HANA systems.
### SAP Agile Data Preparation

SAP Agile Data Preparation, shown in Figure 1.6, is the most business analyst-friendly provisioning method discussed in this book. If you're familiar with the self-service business intelligence trend popularized by tools such as SAP BusinessObjects Web Intelligence and SAP Lumira, SAP Agile Data Preparation extends the reach of that trend deeper into backend systems by offering business users an easy-to-understand web interface to connect data sources, whether a remote database or a local file, and perform common database tasks such as joins, formulas, and even cleansing. SAP Agile Data Preparation is, like SDI, an SAP HANA XSA application accessible with a web browser.

SAP Agile Data Preparation itself is ultimately an SAP HANA XSA application that, similar to SAP Data Services, translates a user's configurations, transformations, and cleansing rules into backend SQL commands. However, these commands are not limited to a user's session in any way. Rather than obscuring a user's "development" behind the finished product, the process itself is exportable. Once a user has built a process, it can be saved and shared to improve reusability and standardization. Exporting an SAP Agile Data Preparation job reveals the underlying commands generated, which are in fact SDI flowgraphs. Thus, these flowgraphs can be sent to IT as a prototype, enabling IT to better understand what the business needs really are and to improve the development process.

SAP Agile Data Preparation, while an extension of the SAP HANA platform, does not, however, actually require your own SAP HANA instance: SAP also offers a SaaS SAP Agile Data Preparation solution via SAP Cloud Platform. We'll cover how to set up both on-premise and cloud SAP Agile Data Preparation in depth in Chapter 4.
### SAP Data Quality Management, Microservices for Location Data

In addition to SAP Agile Data Preparation, SDQ, and the data quality transforms found in SAP Data Services, we'll be covering one final data quality product: SAP Data Quality Management, microservices for location data. Microservices are much like they sound, micro. Microservices are application programming interface (API) endpoints that do one thing and one thing only. This granularity allows developers to plug in services as needed and allows the owners of the service to easily manage and debug them. SAP announced its foray into the microservices realm by pulling out the most complicated pieces of the ETL process: address cleansing and geocoding. Through a cloud service, you can visit the microservices web page to view usage, billing, and connection information (see Figure 1.7). However, in order to actually leverage the service, you need to integrate programmatically through SAP Data Services or another application backend.

Figure 1.6 SAP Agile Data Preparation User Interface

Figure 1.7 SAP Cloud Platform Cockpit Microservices Page

As we'll see in Chapter 3 and Chapter 5, the directory-based cleansing processes offer numerous options and require annual updates. If these setup costs, both in time and money, seem prohibitive, the microservices route might be a better choice instead. We'll walk you through the simple process of setting up your microservices account, as well as some common use cases, and describe how to integrate SAP Data Quality Management microservices into common applications.

### 1.1.3 Replication

The final category of data provisioning tools is also the simplest. Replication is the purest form of data transference: Table A in System 1 should match Table A in System 2. Complexity comes into play during execution. How often is System1/TableA updated? How often should System2/TableA be refreshed? Should System1 push the data to System2, or should System2 pull the data from System1?
How will you detect changes in System1? These questions can be answered by a replication tool. SAP Data Services is not included in the following list, even though replication can be achieved via its real-time jobs, because these other tools require much less development to implement:

- SAP Landscape Transformation Replication Server
- SAP HANA smart data integration (SDI)

With this grouping in mind, you should have a clear understanding of where to direct your attention given a particular use case and the tools available to you. Use Table 1.4 to quickly determine the right tools, based on the type of provisioning and business need, for either batch (B) (i.e., periodic) processing or real-time (RT) (i.e., immediate) processing. Please note that SDQ is a component of SDI; thus, technically, SDI performs cleansing functions as well.

<table> <thead> <tr> <th>Tool</th> <th>Manipulate</th> <th>Copy</th> <th>Cleanse</th> </tr> </thead> <tbody> <tr> <td>SAP Data Services</td> <td>B/RT</td> <td>B</td> <td>B/RT</td> </tr> <tr> <td>SAP HANA smart data integration (SDI)</td> <td>B/RT</td> <td>B/RT</td> <td>B/RT</td> </tr> <tr> <td>SAP HANA smart data quality</td> <td>B/RT</td> <td></td> <td>B/RT</td> </tr> <tr> <td>SAP Data Quality Management</td> <td></td> <td></td> <td>B</td> </tr> <tr> <td>SAP Agile Data Preparation</td> <td>B</td> <td></td> <td>B</td> </tr> <tr> <td>SAP Landscape Transformation Replication Server</td> <td></td> <td>RT</td> <td></td> </tr> </tbody> </table>

Table 1.4 Tools for Batch and Real-Time Capabilities

SAP Landscape Transformation runs on the SAP NetWeaver stack. Trigger-based replication has been a staple of many database architectures for years; however, just as SQL has its own flavors, replication too can vary by database brand and version, in this case, the database on which your SAP application, such as SAP ERP or SAP Business Warehouse, is installed.
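Trigger-based replication of the kind SAP LT Replication Server performs can be demonstrated in miniature with any trigger-capable database. The sketch below uses SQLite from Python purely to illustrate the mechanism (it says nothing about SAP LT internals): a trigger records every insert in a logging table, and a replication step drains that log into the target.

```python
import sqlite3

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# Source table plus a logging table populated by a trigger --
# the essence of trigger-based change data capture.
source.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE orders_log (id INTEGER, amount REAL);
CREATE TRIGGER orders_cdc AFTER INSERT ON orders
BEGIN
    INSERT INTO orders_log VALUES (NEW.id, NEW.amount);
END;
""")
target.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Normal application writes; the trigger logs them as a side effect.
source.execute("INSERT INTO orders VALUES (1, 99.5)")
source.execute("INSERT INTO orders VALUES (2, 12.0)")

def replicate():
    """Drain the change log into the target, then clear it."""
    changes = source.execute("SELECT id, amount FROM orders_log").fetchall()
    target.executemany("INSERT INTO orders VALUES (?, ?)", changes)
    source.execute("DELETE FROM orders_log")
    return len(changes)

print(replicate())  # 2
```

Scaling this idea up (updates and deletes, ordering guarantees, recovery after failure) is exactly the hard part that dedicated replication products solve for you.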
SAP LT Replication Server fills the gap nicely at the application level, much like SAP Data Services, but with a core focus on real-time replication rather than ETL. SAP LT Replication Server provides a cockpit view for setting up tables to be initialized, replicated, and reloaded, as shown in Figure 1.8. Generally, once set up, you shouldn't need to revisit the cockpit outside of occasional maintenance or troubleshooting.

Figure 1.8 SAP LT Replication Server Cockpit View

While it does offer some transformation capabilities, all of which we'll cover in this book, SAP LT Replication Server shines in its ability to simplify the replication of SAP data into a target enterprise data warehouse (EDW). Later in this book, we'll dive into what capabilities exist, how we can leverage these capabilities to generate real-time views of our data, and when best to leverage SAP LT Replication Server in your provisioning strategy.

## 1.2 How Are These Tools Used Together?

Now that we've touched on each tool individually, you should understand why using all of these tools to their fullest extent within a single organization is rather unlikely. In fact, with so many overlapping functionalities, more likely only two or three of these tools will be heavily utilized in a production scenario. While some common pairings appear again and again, ultimately every environment will require a different combination of tools.

One of the most challenging decisions for anyone new to the SAP EIM space is deciphering when to utilize one or more of the ETL tools described in this book. While these tools overlap in many ways, each of them excels in one or more areas that the others aren't designed to support. Over the years, the authors have come to rely upon the following three criteria in order to arrive at the appropriate mix for a given environment:

- **Scope:** How many unique data storage solutions are within the scope of your provisioning strategy?
- **Quality:** How much transformation, cleansing, and manipulation is required before the data becomes meaningful/useful?
- **Latency:** How quickly must the target system (SAP HANA in the case of this book) be updated relative to the data being written to the source system?

Simply asking these three questions often requires booking a conference room for a week. As depicted in Figure 1.9, none of these questions is meant to build on the others, and not all of them will hold equal weight in the final tool mix your organization decides on.

![Figure 1.9 Latency, Quality, and Scope](image)

The following three matrices, Table 1.5, Table 1.6, and Table 1.7, can help you narrow down the optimal tool mix for your situation. For example, let's assume that, after reviewing our scope, quality, and latency requirements, we determine that we wish to utilize SAP HANA as our EDW, with no separate staging or archival system. We acknowledge that, after reviewing the sources of our data, some manipulation will be required to unify the systems, but not much, and that our users are comfortable with nightly data refreshes. As a result, we see that SDI and SAP Data Services support all three requirements, with SAP Data Services offering more capability when it comes to data quality and manipulation. If we are not confident in our quality assessment, we might lean more towards SAP Data Services; however, in this scenario, we are at least certain that neither SAP LT Replication Server nor SAP Agile Data Preparation will meet our needs.

That said, by far the most common scenario we've seen is leveraging SAP LT Replication Server and SAP Data Services to provide near real-time reporting outside of SAP ERP. This scenario is probably prevalent because of the popularity of the SAP HANA sidecar architecture, which enables SAP customers to query massive volumes of SAP ERP transactional data directly, without having to reinstall and migrate their SAP ERP environment.
Instead, SAP LT Replication Server (or sometimes SAP Data Services batch jobs) can replicate the data to SAP HANA tables. However, often, customers still need to use "helper tables," tables that provide flags and other user information, to get the most out of their transactional data. Thus, SAP Data Services provides batch processing to generate keys, perform lookups, and fill in other gaps that neither SAP LT Replication Server nor SAP HANA views could effectively resolve.

<table> <thead> <tr> <th>Capability</th> <th>SAP Data Services</th> </tr> </thead> <tbody> <tr> <td>Batch Processing</td> <td>Great</td> </tr> <tr> <td>Real-Time Processing</td> <td>Great</td> </tr> <tr> <td>Real-Time Replication</td> <td></td> </tr> </tbody> </table>

Table 1.7 Utilize This Table to Determine Which Tools Best Support Your Latency Requirements

Of course, nothing prevents you from leveraging SDI to do the same thing as SAP Data Services in some scenarios. Further, due to its integration capabilities, if you're using SAP Agile Data Preparation, you'll probably want to leverage the export-to-flowgraph functionality for developing reusable and standardized logic. Ultimately, the architect, along with system administrators and business users, must decipher which tools should be utilized for which purposes.

**Example**

Let's look at a hypothetical use case where every tool plays a role within an imaginary enterprise information management team at a large international organization, MaxWidgets, Inc. MaxWidgets is a large organization that has grown via several international acquisitions. As a result, numerous ERP and EDW systems are spread throughout the world, the largest of which are in Beijing, Ireland, and Memphis, TN. The executive team is struggling to get a clear picture of total sales by region because each region has its own method of collecting sales data.
Some data is easy and comes in via the online store, but many customers visit local branches and make purchases through in-person sales representatives, who, unfortunately, aren't patient with the CRM tool. The deliveries, especially in Beijing, are often managed by individual reps and rarely tie back to the billing address on the order. While the Memphis and Ireland sales data is pretty consistent, these branches have far more sales and generate several times the number of records per day, compared to the Beijing branch.

Now, let's say that leadership has decided to move all sales data into SAP HANA; however, not all of the data is created equal. We already know the address data in Beijing has tons of duplicates and errors, as the sales reps key in only the bare minimum into the CRM to complete the opportunity entry; Memphis is running a legacy SAP ERP system on old hardware; and Ireland has a homegrown BI application that only publishes on-demand reports that are essentially stored procedures that call back to JavaServer Pages (JSPs).

Digging into the Ireland BI application, you realize that a massive ETL effort is required to recreate the stored procedure and JSP logic. You decide to put all of your SAP Data Services resources on the task, and slowly but surely, you begin extracting the Ireland data straight into your SAP HANA tables. However, you can't afford to wait on an available SAP Data Services resource to begin work on the Beijing and Memphis data, so you turn to your SAP HANA team for assistance. They propose pulling the Beijing data via SDI; however, they recommend cleaning up the data in transit. Not much more transformation is required outside of the cleansing, and you don't own Beijing address directories, so you decide to keep the SDI layer simple for now and instead use the SAP Data Quality Management microservice for Beijing.
In this way, if you decide to convert the Beijing sales data to an SAP Data Services job, switching over will be easier. With Beijing and Ireland out of the way, you turn your sights to the legacy SAP system in Memphis. They've been talking about upgrading the system for years but haven't gotten around to it. You know what tables you need, but nightly batches would strain the old servers, so you decide to leverage SAP LT Replication Server and replicate each record in real time as it comes in. SAP Basis gets you up and running, but then you realize something is off about the customer master—it seems old. Turns out the business has been maintaining the customer master outside of SAP through a combination of Excel files and Microsoft SQL Server databases that reference SAP document numbers. After all, the old system has been "about to go away" for years. Rather than trying to piece these files together with the few SAP Data Services developers you have available, you decide to use SAP Agile Data Preparation and allow the business to continue to map sales headers to their SQL database. This slight change to their current process should still reduce the number of Excel files floating around, and that's something everyone can get on board with.

## 1.3 Summary

In this chapter, we focused on the high-level strengths of each tool, providing a pretty thorough inventory of the provisioning options available for SAP HANA from SAP. In the next few chapters, we'll take a close look at each of these applications, describe how to get started working with them, and discuss some common pitfalls you may encounter along the way. First, let's focus on SDI, including how to get it up and running and how to get started provisioning SAP HANA.

# Contents

Preface
13 1 Introduction .................................................................................................................. 17 1.1 What Are the Tools for Provisioning Data? ............................................................. 17 1.1.1 Extract, Transform, and Load ........................................................................... 18 1.1.2 Cleansing ................................................................................................................ 26 1.1.3 Replication .............................................................................................................. 30 1.2 How Are These Tools Used Together? ........................................................................ 31 1.3 Summary ..................................................................................................................... 36 2 SAP HANA Smart Data Integration ........................................................................... 37 2.1 What Is SAP HANA Smart Data Integration? .......................................................... 37 2.2 Use Cases for SAP HANA Smart Data Integration ...................................................... 38 2.3 Installation and Configuration ................................................................................... 39 2.3.1 Data Provisioning Server .................................................................................... 40 2.3.2 Data Provisioning Delivery Unit ..................................................................... 41 2.3.3 Data Provisioning Agent .................................................................................... 44 2.4 Using SAP HANA Smart Data Integration ................................................................. 48 2.4.1 SAP HANA Web-Based Development Workbench .................................... 48 2.4.2 Creating Flowgraphs ........................................................................................... 
50 2.4.3 Configuring the Data Provisioning Agent for Flat File Access ............... 54 2.4.4 Reading Flat Files ............................................................................................... 57 2.4.5 Building Blocks ................................................................................................... 67 2.4.6 Real-Time Flowgraphs ....................................................................................... 78 2.4.7 Monitoring ........................................................................................................... 83 2.5 Summary ..................................................................................................................... 89 ## 3 SAP HANA Smart Data Quality ### 3.1 What Is SAP HANA Smart Data Quality? ....................................................... 91 ### 3.2 How Do SAP HANA Smart Data Integration and SAP HANA Smart Data Quality Work Together? .......................................................... 92 ### 3.3 Installation and Configuration ................................................................... 93 3.3.1 Enabling the Script Server ................................................................. 93 3.3.2 Downloading and Deploying SAP Smart Data Quality Directories .... 95 3.3.3 Creating Authorized Users for SAP Smart Data Quality .................. 101 ### 3.4 Using SAP HANA Smart Data Quality ...................................................... 103 3.4.1 Identifying Cleansing Options ......................................................... 103 3.4.2 Identifying Matching Options ......................................................... 110 3.4.3 Identifying Geocode Solution Options .......................................... 117 3.4.4 The Script Server ............................................................................. 121 ### 3.5 Summary ................................................................................................... 
122 ## 4 SAP Agile Data Preparation ### 4.1 What Is SAP Agile Data Preparation? ...................................................... 123 ### 4.2 SAP Agile Data Preparation and SAP HANA .......................................... 124 ### 4.3 SAP Agile Data Preparation: On-Premise versus Cloud ....................... 124 ### 4.4 Installation and Configuration .................................................................. 126 4.4.1 Downloading the Files ....................................................................... 126 4.4.2 Importing the Delivery Units ............................................................. 132 4.4.3 Adding Data Domain Tiles ................................................................. 138 4.4.4 Security Management ....................................................................... 139 ### 4.5 Using SAP Agile Data Preparation ............................................................ 140 4.5.1 Creating a Project and Loading Data ................................................. 140 4.5.2 Navigating the Side Panel ................................................................. 145 4.5.3 Reviewing Data Quality Statistics .................................................... 147 4.5.4 Actioning Data .................................................................................. 149 4.5.5 Cleansing and De-duplicating Data .................................................. 156 ## 5 SAP Data Services ### 5.1 What Is SAP Data Services? ..................................................................... 167 ### 5.2 Installation and Configuration .................................................................. 168 5.2.1 Install Information Platform Services .............................................. 172 5.2.2 Install SAP Data Services ................................................................. 194 ### 5.3 Using SAP Data Services ......................................................................... 
202 5.3.1 Batch Data Loading .......................................................................... 202 5.3.2 Best Practices .................................................................................... 211 ### 5.4 Summary .................................................................................................. 217 ## 6 SAP Landscape Transformation Replication Server ### 6.1 What Is the SAP Landscape Transformation Replication Server? .......... 219 ### 6.2 Installation and Configuration .................................................................. 222 6.2.1 ABAP Source System ...................................................................... 223 6.2.2 Separate Server with an ABAP Source System ................................ 224 6.2.3 Separate Server with a Non-ABAP Source System .............................. 224 ### 6.3 Using the SAP LT Replication Server ....................................................... 225 6.3.1 Configuring and Managing the Replication Process ......................... 230 6.3.2 Creating a Configuration ................................................................... 232 6.3.4 Initial versus Ongoing Data Replication ....................................................... 234 6.3.5 Transformation Capabilities ....................................................................... 236 6.4 Summary .................................................................................................... 238 7 SAP Data Quality Management, Microservices for Location Data ................. 241 7.1 What Is SAP Data Quality Management, Microservices for Location Data? ........ 241 7.2 Invoking Microservices for Location Data .................................................. 243 7.2.1 Address Cleansing and Geocoding ......................................................... 243 7.2.2 Reverse Geocoding .............................................................................. 
249 7.2.3 Information Codes and Messages ......................................................... 251 7.3 Installation and Configuration .................................................................... 252 7.3.1 Getting Started ..................................................................................... 252 7.3.2 Supported Integrations ....................................................................... 253 7.3.3 Authentication .................................................................................... 256 7.3.4 Configuration Editor .......................................................................... 257 7.4 Using Prebuilt Functions ............................................................................. 258 7.5 Summary .................................................................................................. 259 8 SAP HANA Data in the Cloud ......................................................................... 261 8.1 Cloud Considerations .................................................................................. 261 8.2 SAP Cloud Platform .................................................................................... 265 8.2.1 SAP Cloud Connector ........................................................................ 265 8.2.2 Architecture ....................................................................................... 267 8.2.3 Integration ......................................................................................... 268 8.3 Amazon Web Services ............................................................................... 270 8.4 Microsoft Azure ......................................................................................... 275 8.5 Summary .................................................................................................. 279 9 Data Provisioning Case Studies ..................................................................... 
281 9.1 Data Preparation for an Omnichannel Initiative ............................................ 281 9.1.1 Company Background .................................................................... 282 9.1.2 Solution .......................................................................................... 284 9.2 Supply Chain Analytics for Reducing Cost of Goods Sold ......................... 303 9.2.1 Company Background .................................................................... 304 9.2.2 Solution .......................................................................................... 307 9.3 Profile and Transform Customer Data .......................................................... 323 9.3.1 Company Background .................................................................... 323 9.3.2 Solution .......................................................................................... 324 9.4 Cleaning and De-duplicating a Mailing List ................................................. 332 9.4.1 Company Background .................................................................... 332 9.4.2 Solution .......................................................................................... 333 9.5 Summary .................................................................................................. 343 The Authors ................................................................................................. 345 Index ............................................................................................................ 347 Index _SYS_REPO ........................................ 66-67, 81, 102 A ABAP source system ......................................... 233 Access plans ...................................................... 236 Adapters ............................................................. 47, 57, 60, 83, 315 Address cleansing .............................................. 
243 Address directories ............................................... 93 Address formats ................................................. 245 Address validation ............................................. 258 Addresses ................................................................ 27, 161 AFL ................................................................. 78 Agent Monitor ..................................................... 43, 83–84 Agents ................................................................. 46, 83 Aggregating data ................................................... 152 Aggregation nodes .................................................. 72–74 Amazon Web Services (AWS) ......................... 125, 270–271 vs Microsoft Azure .............................................. 276 API Management Console .................................. 268–269 API requests .......................................................... 243 request properties ............................................... 244 response properties ........................................... 247 Application Designer .......................................... 265 Application function libraries ............................ 121 Application function modeler ............................. 91 Application programming interface (API) ....... 29 Association Editor ............................................... 301 Associative match ............................................. 299–303 Attribute change package .................................. 254 Authentication .................................................... 256 client certificate .................................................. 254 Authorizations ................................................... 232, 234 B Batch ................................................................. 30, 34, 78, 81, 87, 202 Batch data loading ............................................. 
202, 211 Batch jobs ........................................................... 172, 193, 204 Bill of material (BOM) ......................................... 506 Blueprint packages ............................................ 256 Break group key ................................................... 296, 299 Business configuration sets .................................. 254 Business intelligence (BI) ..................................... 35 C Calculation views .............................................. 165 Case studies ....................................................... 281 customer data ..................................................... 323 mailing list ........................................................ 332 omnichannel retail ............................................. 282 supply chain analytics ......................................... 303 Case transforms ................................................. 188, 216 configuration ...................................................... 189 Catalog .................................................................. 23, 38, 49, 61 Central Management Console (CMC) .............. 142 Central Management Server (CMS) ................ 198 Change data capture (CDC) .............................. 78, 81, 205 Checkpoint recovery ........................................... 176–177 Cleanse transform ............................................. 93, 95, 103, 105, 110, 117 Cleansing ......................................................... 26–28, 156, 160, 282, 285–286, dictionaries ....................................................... 161 options ............................................................... 103 Clients .................................................................. 310 Cloud ................................................................. 19, 29, 46 Cloud deployments ............................................ 262 Cloud migration ................................................... 
263 Cloud providers ................................... 262 Cluster tables ...................................... 235 Configuration and Monitoring Dashboard .......... 226 Configuration Editor .......................... 252, 257 Consolidated customer ............................ 284 Consumption-based pricing model ................. 242 Containerization ................................... 263 Content Management Server (CMS) .............. 124 Credentials mode ................................... 58 cron ............................................... 86 CSV ................................................ 55, 165, 333 Customer relationship management (CRM) .... 35 D daemon.ini ......................................... 40, 94 Data cleanse ....................................... 92 Data compression ................................... 207 Data enrichment .................................... 146 Data federation .............................. 23, 37 Data flows ........................................ 183–184, 188, 212 Data Integrator .................................. 19 Data manipulation ........................... 149 Data mart ..................................... 208, 210 Data Migration Server (DMIS) add-on .... 222 Data modeling .................................. 122 Data provisioning ............................ 312 Data Provisioning Agent 18, 24–25, 40–41, 43–44, 47, 54–59, 81, 83–84, 125, 127 Data provisioning server ................... 40 address cleansing ..................... 243 assessment .............................. 148 genealogy .................................. 248 reversion-genealogy .................... 249 statistics .................................... 147 Data sink ...................................... 79, 121 Data Source Browser ................. 143 Data sources ................................ 113, 142 Data structures ............................. 209 Data warehouse ............................ 215, 308, 318, 321 Database connection ...................... 221 Database management system (DBMS) .... 263 Database triggers ......................... 219 Dataflows ................................... 288, 292–293 Datastores .............................. 168, 255–256, 287 configuration properties .......... 169 connection parameters .............. 168 example .................................... 170 Date generation .......................... 78 DB2 system .................................. 314 De-duplicating .......................... 156–157, 342–343 Delivery units ...................... 40–41, 43, 125, 136 import ....................................... 132 installer .................................. 135 Dimension .................................. 77 Dimension tables .......................... 179 Direct Connect ............................ 271 Download Manager ...................... 45 Dq_reference_data_path .............. 100 E Eclipse ......................................... 26 Editor ....................................... 52, 60–61 Elastic Compute Cloud (EC2) ............ 270 Endpoint ..................................... 265 Enterprise data warehouse (EDW) ....... 31, 35, 62, 332–333 Enterprise information management portfolio ........................................ 91, 242 Enterprise Semantic Services .............. 127 ETL .................................. 17–20, 23–24, 26, 29, 35, 37–38, 50, 67, 82, 89, 91, 122, 208 business rule enforcement stage .......... 216 driver stage .................................. 212 lookup stage .................................. 214 processing stage ........................... 213 Event-based rules ......................... 238 Excel .......................................... 36 Expression Editor .........................
71 F Fact tables .................................. 179 Field validations .......................... 192 Field-based rules ......................... 238 File adapter ................................ 54, 56, 58–60, 334 Filter transform .......................... 93 Filters ....................................... 67–68, 70–72, 74, 77, 341–342 node ........................................... 79 Flat files ..................................... 57, 60, 314, 325, 343 dictionaries ............................... 154 FTP ........................................... 19 Fully qualified domain name .......... 231 Fuzzy joins .................................. 26 Fuzzy logic .................................... 110 Fuzzy match .................................. 159 G Geocode ...................................... 244, 321–323 Geocode transform ..................... 95, 103, 117, 119–120 Git ........................................... 55 GUID ........................................ 232 H Hadoop ...................................... 24 Harmonize values .......................... 151 hdbflowgraph ............................. 52 hdbserver ................................... 40 HDFS ........................................ 58 Hybrid solution ........................... 263 I Import ........................................ 43 Index server .................................. 37 Information codes and messages .......... 251 Information Platform Services (IPS) ... 194, 197 Information Platform Services server .... 263 Initial Load .................................. 234 Input type .................................... 79 IT landscape .................................. 91 J Java ............................................. 24 JIT Data Preview ....................... 68, 335, 340–341 Job server engine .......................... 215 Jobs ........................................... 172, 178, 293 Joins ......................................... 75–78, 300, 322, 330 node ........................................... 77, 79 JSON ........................................... 244 K Kerberos .................................... 59 L Latency ...................................... 220 Launchpad ........................................ 44 Linux .......................................... 25, 45 Logging tables .............................. 326, 328–329 Lookups ...................................... 215, 331–332 Ltrim (left trim) ............................ 22 M Mapping .................................... 120, 289, 291, 327–328 Mass transfer .................................. 308 Match policy ................................... 115, 157 Match rule ....................................... 114 Match settings ................................ 115 Match transform ........................... 110, 112, 114 Multitenant database container (MDC) .......... 40 N Netezza ......................................... 287 Nodes .......................................... 67–68, 72 Notifications .................................. 87–88 O OAuth client .................................. 256 OData .......................................... 268 ODBC ........................................ 19, 23–24, 41, 57, 315 OLTP ........................................... 308 Ongoing replication ........................ 236 On-premise ...................................... 19, 29, 261 Oracle ........................................... 20, 314 Output types .................................. 79 P Parallel workloads .......................... 175 Performance options ...................... 238 Personal security environment .......... 254 Pivot ........................................... 78 Schemas 59, 61, 66, 81, 288, 310, 313–314, 318, 320, 323 Script server ............................................... 93–94, 121 SDQ_USER .......................................................... 102 Security ................................................
49, 55, 81, 139, 305 role ......................................................... 49 Sender queue ............................................... 235 Series execution ........................................ 178 Server Intelligence Agent (SIA) ...................... 198 SFTP ...................................................... 19, 130 Sharing data ............................................... 163 Sidebar .......................................................... 34 Single-use script object .................................. 182 SMTP .......................................................... 88 SOAP .......................................................... 41, 57 Social media ............................................... 282 Software development kit (SDK) .................... 24, 26 Software-as-a-service (SaaS) ......................... 29, 254 SQL ...................................................... 19–20, 31, 40–41, 60–61, 68–69, 74, 81–82, 85, 336 SQL Console ............................................. 104, 110, 130 SSO .......................................................... 59 Staging ...................................................... 75, 205, 207, 288, 334 Stateless application constructs ..................... 192 Storage .......................................................... 65 Suggestion lists .......................................... 247 Support Package Manager ................................ 254 Survival rules ............................................... 115, 158 SYSTEM user ............................................... 100 Table settings .............................................. 238 Tables ....................................................... 143, 227 Target tables ............................................... 121 Task Monitor ............................................... 43, 83–85 Technical user ............................................. 58 Template tables ........................................... 
61–66, 109, 116, 119, 121 Tenants ..................................................... 40 Tensorflow ................................................... 24, 38 Traces .......................................................... 49 Transaction ................................................. LTR ......................................................... 226 LTBC ...................................................... 227, 235, 309 LTRO ...................................................... 228 LTRS ...................................................... 227 Transactional data ........................................ 34 Transformation capabilities ........................... 236 Trigger-based replication ................................ 219 Triggers ..................................................... 81 Truncate .................................................... 62 table .......................................................... 66 Try and catch block ...................................... 173 U unpivot ...................................................... 78 Upsert ...................................................... 66 URL .......................................................... 46, 49–50 User roles ................................................... 226, 233 V Validation transforms .................................... 190 classification ............................................. 191 Virtual private cloud (VPC) ......................... 270 Virtual private network (VPN) ...................... 262 Virtual tables ............................................ 23–24, 57, 59–62, 64, 78, 81, 318 W Web service ............................................... 57 Weighted scoring ...................................... 293 WHERE clause ......................................... 185 Windows ................................................... 25, 45 Work process ............................................ 232 Workflows .................................................. 
174, 177, 182, 288 failure ...................................................... 171 parallel execution ...................................... 175 series execution ........................................ 178 Worksheets .............................................. 310, 325, 327, 329, 331 Workspace ............................................... 53, 68, 72 X XML web services ........................................ 173

Megan Cundiff is a data and analytics consultant at Protiviti, where she works with clients from all industries to understand complex business challenges and implement end-to-end business intelligence solutions.

Vernon Gomes is a former IT industry systems administrator turned BI consultant. He is currently a senior consultant at Protiviti for data and analytics and is using his IT experience to assist clients in developing BI and cloud solutions.

Russell Lamb is a manager at Protiviti who has spent the last several years empowering organizations to use SAP HANA by enhancing their enterprise data warehouses, analyzing unwieldy SAP ERP tables, cleansing and storing SaaS-sourced CRM data, and extending their landscape into the cloud.

Don Loden is a managing director of data and analytics at Protiviti, with full lifecycle data warehouse and information governance experience in multiple industries. He is an SAP Certified Application Associate for SAP Data Services, and the author of three books and twelve articles on data management topics.

Vinay Suneja is a manager at Protiviti with more than five years of experience in implementing analytic solutions for clients in the retail, utilities, public sector, and banking industries. He is proficient with SAP BusinessObjects BI/SAP BW as well as big data technologies including SAP Lumira, SAP HANA, and Hadoop.

We hope you have enjoyed this reading sample. You may recommend or pass it on to others, but only in its entirety, including all pages.
This reading sample and all its parts are protected by copyright law. All usage and exploitation rights are reserved by the author and the publisher.
Starting and Stopping MySQL

Abstract

This is the Starting and Stopping MySQL extract from the MySQL 5.7 Reference Manual. For legal information, see the Legal Notices. For help with using MySQL, please visit the MySQL Forums, where you can discuss your issues with other MySQL users.

Document generated on: 2022-09-01 (revision: 74046)

# Table of Contents

Preface and Legal Notices
1 Installing MySQL on Unix/Linux Using Generic Binaries
2 Starting the Server for the First Time on Windows
3 The Server Shutdown Process
4 Server and Server-Startup Programs
    4.1 mysqld — The MySQL Server
    4.2 mysqld_safe — MySQL Server Startup Script
    4.3 mysql.server — MySQL Server Startup Script
    4.4 mysqld_multi — Manage Multiple MySQL Servers

Preface and Legal Notices

This is the Starting and Stopping MySQL extract from the MySQL 5.7 Reference Manual.

**Licensing information—MySQL 5.7.** This product may include third-party software, used under license. If you are using a *Commercial* release of MySQL 5.7, see the MySQL 5.7 Commercial Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Commercial release.
If you are using a *Community* release of MySQL 5.7, see the MySQL 5.7 Community Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Community release.

**Licensing information—MySQL NDB Cluster 7.5.** This product may include third-party software, used under license. If you are using a *Commercial* release of NDB Cluster 7.5, see the MySQL NDB Cluster 7.5 Commercial Release License Information User Manual for licensing information relating to third-party software that may be included in this Commercial release. If you are using a *Community* release of NDB Cluster 7.5, see the MySQL NDB Cluster 7.5 Community Release License Information User Manual for licensing information relating to third-party software that may be included in this Community release.

**Licensing information—MySQL NDB Cluster 7.6.** If you are using a *Commercial* release of MySQL NDB Cluster 7.6, see the MySQL NDB Cluster 7.6 Commercial Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Commercial release. If you are using a *Community* release of MySQL NDB Cluster 7.6, see the MySQL NDB Cluster 7.6 Community Release License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Community release.

**Legal Notices**

Copyright © 1997, 2022, Oracle and/or its affiliates.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract. The terms governing the U.S. Government's use of Oracle cloud services are defined by the applicable contract for such services. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury.
If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc, and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.

This documentation is NOT distributed under a GPL license. Use of this documentation is subject to the following terms:

You may create a printed copy of this documentation solely for your own personal use. Conversion to other formats is allowed as long as the actual content is not altered or edited in any way.
You shall not publish or distribute this documentation in any form or on any media, except if you distribute the documentation in a manner similar to how Oracle disseminates it (that is, electronically for download on a Web site with the software) or on a CD-ROM or similar medium, provided however that the documentation is disseminated together with the software on the same medium. Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Oracle. Oracle and/or its affiliates reserve any and all rights to this documentation not expressly granted above.

Documentation Accessibility

For information about Oracle’s commitment to accessibility, visit the Oracle Accessibility Program website at https://www.oracle.com/corporate/accessibility/.

Access to Oracle Support for Accessibility

Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit https://www.oracle.com/corporate/accessibility/learning-support.html#support-tab.

Chapter 1 Installing MySQL on Unix/Linux Using Generic Binaries

Oracle provides a set of binary distributions of MySQL. These include generic binary distributions in the form of compressed tar files (files with a .tar.gz extension) for a number of platforms, and binaries in platform-specific package formats for selected platforms.

This section covers the installation of MySQL from a compressed tar file binary distribution on Unix/Linux platforms. For Linux-generic binary distribution installation instructions with a focus on MySQL security features, refer to the Secure Deployment Guide.

For other platform-specific binary package formats, see the other platform-specific sections in this manual. For example, for Windows distributions, see Installing MySQL on Microsoft Windows.
See How to Get MySQL for information on how to obtain MySQL in different distribution formats.

MySQL compressed tar file binary distributions have names of the form mysql-VERSION-OS.tar.gz, where VERSION is a number (for example, 5.7.39), and OS indicates the type of operating system for which the distribution is intended (for example, pc-linux-i686 or winx64).

Warnings

- If you have previously installed MySQL using your operating system native package management system, such as Yum or APT, you may experience problems installing using a native binary. Make sure your previous MySQL installation has been removed entirely (using your package management system), and that any additional files, such as old versions of your data files, have also been removed. You should also check for configuration files such as /etc/my.cnf or the /etc/mysql directory and delete them. For information about replacing third-party packages with official MySQL packages, see the related APT guide or Yum guide.

- MySQL has a dependency on the libaio library. Data directory initialization and subsequent server startup steps fail if this library is not installed locally. If necessary, install it using the appropriate package manager. For example, on Yum-based systems:

  ```
  $> yum search libaio  # search for info
  $> yum install libaio # install library
  ```

  Or, on APT-based systems:

  ```
  $> apt-cache search libaio # search for info
  $> apt-get install libaio1 # install library
  ```

- For MySQL 5.7.19 and later: Support for Non-Uniform Memory Access (NUMA) has been added to the generic Linux build, which now has a dependency on the libnuma library; if the library has not been installed on your system, use your system's package manager to search for and install it (see the preceding item for some sample commands).

- SLES 11: As of MySQL 5.7.19, the Linux Generic tarball package format is EL6 instead of EL5. As a side effect, the MySQL client bin/mysql needs libtinfo.so.5.
A workaround is to create a symlink, such as `ln -s libncurses.so.5.6 /lib64/libtinfo.so.5` on 64-bit systems or `ln -s libncurses.so.5.6 /lib/libtinfo.so.5` on 32-bit systems. To install a compressed tar file binary distribution, unpack it at the installation location you choose (typically `/usr/local/mysql`). This creates the directories shown in the following table. ### Table 1.1 MySQL Installation Layout for Generic Unix/Linux Binary Package <table> <thead> <tr> <th>Directory</th> <th>Contents of Directory</th> </tr> </thead> <tbody> <tr> <td>bin</td> <td>mysql server, client and utility programs</td> </tr> <tr> <td>docs</td> <td>MySQL manual in Info format</td> </tr> <tr> <td>man</td> <td>Unix manual pages</td> </tr> <tr> <td>include</td> <td>Include (header) files</td> </tr> <tr> <td>lib</td> <td>Libraries</td> </tr> <tr> <td>share</td> <td>Error messages, dictionary, and SQL for database installation</td> </tr> <tr> <td>support-files</td> <td>Miscellaneous support files</td> </tr> </tbody> </table> Debug versions of the `mysqld` binary are available as `mysqld-debug`. To compile your own debug version of MySQL from a source distribution, use the appropriate configuration options to enable debugging support. See [Installing MySQL from Source](#). To install and use a MySQL binary distribution, the command sequence looks like this: ``` $> groupadd mysql $> useradd -r -g mysql -s /bin/false mysql $> cd /usr/local $> tar zxfv /path/to/mysql-VERSION-OS.tar.gz $> ln -s full-path-to-mysql-VERSION-OS mysql $> cd mysql $> mkdir mysql-files $> chown mysql:mysql mysql-files $> chmod 750 mysql-files $> bin/mysqld --initialize --user=mysql $> bin/mysql_ssl_rsa_setup $> bin/mysqld_safe --user=mysql & # Next command is optional $> cp support-files/mysql.server /etc/init.d/mysql.server ``` This procedure assumes that you have root (administrator) access to your system. Alternatively, you can prefix each command using the `sudo` (Linux) or `pfexec` (Solaris) command. 
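The mysql-VERSION-OS.tar.gz naming convention described earlier can be taken apart with plain POSIX parameter expansion, which is handy in installation scripts. A minimal sketch, using a made-up file name (not a real download):

```shell
# split a distribution file name into its VERSION and OS parts
# (the file name below is illustrative only)
f=mysql-5.7.39-linux-glibc2.12-x86_64.tar.gz
base=${f%.tar.gz}        # drop the .tar.gz suffix
rest=${base#mysql-}      # drop the mysql- prefix
version=${rest%%-*}      # everything before the first remaining dash
os=${rest#*-}            # everything after it
echo "version=$version os=$os"
```

This relies only on standard shell parameter expansion, so it works the same in sh, bash, and dash.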
The `mysql-files` directory provides a convenient location to use as the value for the `secure_file_priv` system variable, which limits import and export operations to a specific directory. See [Server System Variables](#).

A more detailed version of the preceding description for installing a binary distribution follows.

### Create a mysql User and Group

If your system does not already have a user and group to use for running `mysqld`, you may need to create them. The following commands add the `mysql` group and the `mysql` user. You might want to call the user and group something else instead of `mysql`. If so, substitute the appropriate name in the following instructions. The syntax for `useradd` and `groupadd` may differ slightly on different versions of Unix/Linux, or they may have different names such as `adduser` and `addgroup`.

```bash
$> groupadd mysql
$> useradd -r -g mysql -s /bin/false mysql
```

**Note**

Because the user is required only for ownership purposes, not login purposes, the `useradd` command uses the `-r` and `-s /bin/false` options to create a user that does not have login permissions to your server host. Omit these options if your `useradd` does not support them.

### Obtain and Unpack the Distribution

Pick the directory under which you want to unpack the distribution and change location into it. The example here unpacks the distribution under /usr/local. The instructions, therefore, assume that you have permission to create files and directories in /usr/local. If that directory is protected, you must perform the installation as root.

```
$> cd /usr/local
```

Obtain a distribution file using the instructions in How to Get MySQL. For a given release, binary distributions for all platforms are built from the same MySQL source distribution.

Unpack the distribution, which creates the installation directory.
tar can uncompress and unpack the distribution if it has z option support:

```
$> tar zxfv /path/to/mysql-VERSION-OS.tar.gz
```

The tar command creates a directory named mysql-VERSION-OS.

To install MySQL from a compressed tar file binary distribution, your system must have GNU gunzip to uncompress the distribution and a reasonable tar to unpack it. If your tar program supports the z option, it can both uncompress and unpack the file. GNU tar is known to work. The standard tar provided with some operating systems is not able to unpack the long file names in the MySQL distribution. You should download and install GNU tar, or if available, use a preinstalled version of GNU tar. Usually this is available as gnutar, gtar, or as tar within a GNU or Free Software directory, such as /usr/sfw/bin or /usr/local/bin. GNU tar is available from http://www.gnu.org/software/tar/.

If your tar does not have z option support, use gunzip to uncompress the distribution and tar to unpack it. Replace the preceding tar command with the following alternative command to uncompress and extract the distribution:

```
$> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
```

Next, create a symbolic link to the installation directory created by tar:

```
$> ln -s full-path-to-mysql-VERSION-OS mysql
```

The ln command makes a symbolic link to the installation directory. This enables you to refer to it more easily as /usr/local/mysql.

To avoid having to type the path name of client programs whenever you work with MySQL, you can add the /usr/local/mysql/bin directory to your PATH variable:

```
$> export PATH=$PATH:/usr/local/mysql/bin
```

Perform Postinstallation Setup

The remainder of the installation process involves setting distribution ownership and access permissions, initializing the data directory, starting the MySQL server, and setting up the configuration file. For instructions, see Postinstallation Setup and Testing.
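The PATH export shown above appends a duplicate entry every time it is run (for example, from a shell profile that is sourced repeatedly). A small guard keeps the append idempotent; this is a sketch using the standard case-pattern idiom, not something the manual itself prescribes:

```shell
# append /usr/local/mysql/bin to PATH only if it is not already present
MYSQL_BIN=/usr/local/mysql/bin
case ":$PATH:" in
  *":$MYSQL_BIN:"*) ;;            # already on PATH, do nothing
  *) PATH=$PATH:$MYSQL_BIN ;;     # append once
esac
export PATH
```

Wrapping PATH in colons on both sides makes the pattern match exact path components rather than substrings of longer paths.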
Chapter 2 Starting the Server for the First Time on Windows

This section gives a general overview of starting the MySQL server. The following sections provide more specific information for starting the MySQL server from the command line or as a Windows service.

The information here applies primarily if you installed MySQL using the noinstall version, or if you wish to configure and test MySQL manually rather than with the GUI tools.

The examples in these sections assume that MySQL is installed under the default location of `C:\Program Files\MySQL\MySQL Server 5.7`. Adjust the path names shown in the examples if you have MySQL installed in a different location.

Clients have two options. They can use TCP/IP, or they can use a named pipe if the server supports named-pipe connections. MySQL for Windows also supports shared-memory connections if the server is started with the `shared_memory` system variable enabled. Clients can connect through shared memory by using the `--protocol=MEMORY` option.

For information about which server binary to run, see Selecting a MySQL Server Type.

Testing is best done from a command prompt in a console window (or “DOS window”). In this way you can have the server display status messages in the window where they are easy to see. If something is wrong with your configuration, these messages make it easier for you to identify and fix any problems.

**Note**

The database must be initialized before MySQL can be started. For additional information about the initialization process, see Initializing the Data Directory.

To start the server, enter this command:

```
C:\> "C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqld" --console
```

For a server that includes InnoDB support, you should see messages similar to the following as it starts (the path names and sizes may differ):

```
InnoDB: The first specified datafile c:\ibdata\ibdata1 did not exist:
InnoDB: a new database to be created!
InnoDB: Setting file c:\ibdata\ibdata1 size to 209715200
InnoDB: Database physically writes the file full: wait...
InnoDB: Log file c:\iblogs\ib_logfile0 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile0 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile1 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile1 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile2 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile2 size to 31457280
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: creating foreign key constraint system tables
InnoDB: foreign key constraint system tables created
011024 10:58:25 InnoDB: Started
```

When the server finishes its startup sequence, you should see something like this, which indicates that the server is ready to service client connections:

```
mysqld: ready for connections
Version: '5.7.39'  socket: ''  port: 3306
```

The server continues to write to the console any further diagnostic output it produces. You can open a new console window in which to run client programs.

If you omit the `--console` option, the server writes diagnostic output to the error log in the data directory (`C:\Program Files\MySQL\MySQL Server 5.7\data` by default). The error log is the file with the `.err` extension, and its name can be set using the `--log-error` option.

**Note**

The initial root account in the MySQL grant tables has no password. After starting the server, you should set up a password for it using the instructions in [Securing the Initial MySQL Account](#).

Chapter 3 The Server Shutdown Process

The server shutdown process takes place as follows:

1. The shutdown process is initiated.

   This can be initiated in several ways. For example, a user with the `SHUTDOWN` privilege can execute a `mysqladmin shutdown` command. `mysqladmin` can be used on any platform supported by MySQL.
Other operating system-specific shutdown initiation methods are possible as well: The server shuts down on Unix when it receives a `SIGTERM` signal. A server running as a service on Windows shuts down when the services manager tells it to.

2. The server creates a shutdown thread if necessary.

   Depending on how shutdown was initiated, the server might create a thread to handle the shutdown process. If shutdown was requested by a client, a shutdown thread is created. If shutdown is the result of receiving a `SIGTERM` signal, the signal thread might handle shutdown itself, or it might create a separate thread to do so. If the server tries to create a shutdown thread and cannot (for example, if memory is exhausted), it issues a diagnostic message that appears in the error log:

   ```
   Error: Can't create thread to kill server
   ```

3. The server stops accepting new connections.

   To prevent new activity from being initiated during shutdown, the server stops accepting new client connections by closing the handlers for the network interfaces to which it normally listens for connections: the TCP/IP port, the Unix socket file, the Windows named pipe, and shared memory on Windows.

4. The server terminates current activity.

   For each thread associated with a client connection, the server breaks the connection to the client and marks the thread as killed. Threads die when they notice that they are so marked. Threads for idle connections die quickly. Threads that currently are processing statements check their state periodically and take longer to die. For additional information about thread termination, see `KILL Statement`, in particular for the instructions about killed `REPAIR TABLE` or `OPTIMIZE TABLE` operations on `MyISAM` tables.

   For threads that have an open transaction, the transaction is rolled back.
If a thread is updating a nontransactional table, an operation such as a multiple-row `UPDATE` or `INSERT` may leave the table partially updated because the operation can terminate before completion.

   If the server is a source replication server, it treats threads associated with currently connected replicas like other client threads. That is, each one is marked as killed and exits when it next checks its state.

   If the server is a replica, it stops the I/O and SQL threads, if they are active, before marking client threads as killed. The SQL thread is permitted to finish its current statement (to avoid causing replication problems), and then stops. If the SQL thread is in the middle of a transaction at this point, the server waits until the current replication event group (if any) has finished executing, or until the user issues a `KILL QUERY` or `KILL CONNECTION` statement. See also `STOP SLAVE Statement`.

   Since nontransactional statements cannot be rolled back, in order to guarantee crash-safe replication, only transactional tables should be used. To guarantee crash safety on the replica, you must run the replica with `--relay-log-recovery` enabled. See also Relay Log and Replication Metadata Repositories.

5. The server shuts down or closes storage engines.

   At this stage, the server flushes the table cache and closes all open tables. Each storage engine performs any actions necessary for tables that it manages. InnoDB flushes its buffer pool to disk (unless `innodb_fast_shutdown` is 2), writes the current LSN to the tablespace, and terminates its own internal threads. MyISAM flushes any pending index writes for a table.

6. The server exits.

   To provide information to management processes, the server returns one of the exit codes described in the following list. The phrase in parentheses indicates the action taken by systemd in response to the code, for platforms on which systemd is used to manage the server.
- 0 = successful termination (no restart done)
- 1 = unsuccessful termination (no restart done)
- 2 = unsuccessful termination (restart done)

Chapter 4 Server and Server-Startup Programs

Table of Contents

4.1 mysqld — The MySQL Server
4.2 mysqld_safe — MySQL Server Startup Script
4.3 mysql.server — MySQL Server Startup Script
4.4 mysqld_multi — Manage Multiple MySQL Servers

This section describes mysqld, the MySQL server, and several programs that are used to start the server.

4.1 mysqld — The MySQL Server

mysqld, also known as MySQL Server, is a single multithreaded program that does most of the work in a MySQL installation. It does not spawn additional processes. MySQL Server manages access to the MySQL data directory that contains databases and tables. The data directory is also the default location for other information such as log files and status files.

Note

Some installation packages contain a debugging version of the server named mysqld-debug. Invoke this version instead of mysqld for debugging support, memory allocation checking, and trace file support (see Creating Trace Files).

When MySQL server starts, it listens for network connections from client programs and manages access to databases on behalf of those clients.

The mysqld program has many options that can be specified at startup. For a complete list of options, run this command:

```
mysqld --verbose --help
```

MySQL Server also has a set of system variables that affect its operation as it runs. System variables can be set at server startup, and many of them can be changed at runtime to effect dynamic server reconfiguration.

MySQL Server also has a set of status variables that provide information about its operation. You can monitor these status variables to access runtime performance characteristics.
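Restart-capable wrappers around mysqld key on the exit codes listed in Chapter 3. A minimal sketch of such a supervision loop, with an invented stub function standing in for mysqld (the stub fails with code 2 once, then exits cleanly):

```shell
# stub standing in for mysqld: returns 2 on the first run, 0 afterward
state=$(mktemp)
fake_mysqld() {
  if [ ! -s "$state" ]; then echo ran > "$state"; return 2; fi
  return 0
}

# restart only on exit code 2, mirroring the systemd behavior noted above
while :; do
  fake_mysqld
  code=$?
  case $code in
    0) echo "clean exit, not restarting"; break ;;
    1) echo "failed, not restarting";     break ;;
    2) echo "failed, restarting" ;;
  esac
done
rm -f "$state"
```

A real supervisor would also rate-limit restarts; this sketch only shows the exit-code dispatch.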
For a full description of MySQL Server command options, system variables, and status variables, see The MySQL Server. For information about installing MySQL and setting up the initial configuration, see Installing and Upgrading MySQL.

4.2 mysqld_safe — MySQL Server Startup Script

mysqld_safe is the recommended way to start a mysqld server on Unix. mysqld_safe adds some safety features such as restarting the server when an error occurs and logging runtime information to an error log. A description of error logging is given later in this section.

Note

For some Linux platforms, MySQL installation from RPM or Debian packages includes systemd support for managing MySQL server startup and shutdown. On these platforms, mysqld_safe is not installed because it is unnecessary. For more information, see Managing MySQL Server with systemd.

One implication of the non-use of `mysqld_safe` on platforms that use systemd for server management is that use of `[mysqld_safe]` or `[safe_mysqld]` sections in option files is not supported and might lead to unexpected behavior.

`mysqld_safe` tries to start an executable named `mysqld`. To override the default behavior and explicitly specify the name of the server you want to run, specify a `--mysqld` or `--mysqld-version` option to `mysqld_safe`. You can also use `--ledir` to indicate the directory where `mysqld_safe` should look for the server.

Many of the options to `mysqld_safe` are the same as the options to `mysqld`. See Server Command Options.

Options unknown to `mysqld_safe` are passed to `mysqld` if they are specified on the command line, but ignored if they are specified in the `[mysqld_safe]` group of an option file. See Using Option Files.

`mysqld_safe` reads all options from the `[mysqld]`, `[server]`, and `[mysqld_safe]` sections in option files.
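A rough illustration of that section-scoped reading, pulling settings out of one named section of a throwaway option file with sed (a sketch of the idea only, not mysqld_safe's actual parser):

```shell
# build a throwaway option file with two sections
cnf=$(mktemp)
printf '%s\n' '[mysqld]' 'log-error=error.log' '[mysqld_safe]' 'nice=-5' > "$cnf"

# print only the settings inside the [mysqld] section:
# range runs from the [mysqld] header to the next section header,
# and the bracketed header lines themselves are deleted
sed -n '/^\[mysqld\]$/,/^\[/{/^\[/d;p}' "$cnf"

rm -f "$cnf"
```

The output here is just `log-error=error.log`; the `nice=-5` line is skipped because it belongs to a different section.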
For example, if you specify a `[mysqld]` section like this, `mysqld_safe` finds and uses the `--log-error` option:

```
[mysqld]
log-error=error.log
```

For backward compatibility, `mysqld_safe` also reads `[safe_mysqld]` sections, but to be current you should rename such sections to `[mysqld_safe]`.

`mysqld_safe` accepts options on the command line and in option files, as described in the following table. For information about option files used by MySQL programs, see Using Option Files.

### Table 4.1 mysqld_safe Options

<table>
<thead>
<tr> <th>Option Name</th> <th>Description</th> <th>Introduced</th> <th>Deprecated</th> </tr>
</thead>
<tbody>
<tr> <td><code>--basedir</code></td> <td>Path to MySQL installation directory</td> <td></td> <td></td> </tr>
<tr> <td><code>--core-file-size</code></td> <td>Size of core file that mysqld should be able to create</td> <td></td> <td></td> </tr>
<tr> <td><code>--datadir</code></td> <td>Path to data directory</td> <td></td> <td></td> </tr>
<tr> <td><code>--defaults-extra-file</code></td> <td>Read named option file in addition to usual option files</td> <td></td> <td></td> </tr>
<tr> <td><code>--defaults-file</code></td> <td>Read only named option file</td> <td></td> <td></td> </tr>
<tr> <td><code>--help</code></td> <td>Display help message and exit</td> <td></td> <td></td> </tr>
<tr> <td><code>--ledir</code></td> <td>Path to directory where server is located</td> <td></td> <td></td> </tr>
<tr> <td><code>--log-error</code></td> <td>Write error log to named file</td> <td></td> <td></td> </tr>
<tr> <td><code>--malloc-lib</code></td> <td>Alternative malloc library to use for mysqld</td> <td></td> <td></td> </tr>
<tr> <td><code>--mysqld</code></td> <td>Name of server program to start (in ledir directory)</td> <td></td> <td></td> </tr>
<tr> <td><code>--mysqld-safe-log-timestamps</code></td> <td>Timestamp format for logging</td> <td>5.7.11</td> <td></td> </tr>
<tr> <td><code>--mysqld-version</code></td> <td>Suffix for server program name</td> <td></td> <td></td> </tr>
<tr> <td><code>--nice</code></td> <td>Use nice program to set server scheduling priority</td> <td></td> <td></td> </tr>
<tr> <td><code>--no-defaults</code></td> <td>Read no option files</td> <td></td> <td></td> </tr>
<tr> <td><code>--open-files-limit</code></td> <td>Number of files that mysqld should be able to open</td> <td></td> <td></td> </tr>
<tr> <td><code>--pid-file</code></td> <td>Path name of server process ID file</td> <td></td> <td></td> </tr>
<tr> <td><code>--plugin-dir</code></td> <td>Directory where plugins are installed</td> <td></td> <td></td> </tr>
<tr> <td><code>--port</code></td> <td>Port number on which to listen for TCP/IP connections</td> <td></td> <td></td> </tr>
<tr> <td><code>--skip-kill-mysqld</code></td> <td>Do not try to kill stray mysqld processes</td> <td></td> <td></td> </tr>
<tr> <td><code>--skip-syslog</code></td> <td>Do not write error messages to syslog; use error log file</td> <td></td> <td>Yes</td> </tr>
<tr> <td><code>--socket</code></td> <td>Socket file on which to listen for Unix socket connections</td> <td></td> <td></td> </tr>
<tr> <td><code>--syslog</code></td> <td>Write error messages to syslog</td> <td></td> <td>Yes</td> </tr>
<tr> <td><code>--syslog-tag</code></td> <td>Tag suffix for messages written to syslog</td> <td></td> <td>Yes</td> </tr>
<tr> <td><code>--timezone</code></td> <td>Set TZ time zone environment variable to named value</td> <td></td> <td></td> </tr>
<tr> <td><code>--user</code></td> <td>Run mysqld as user having name user_name or numeric user ID user_id</td> <td></td> <td></td> </tr>
</tbody>
</table>

- **--help**

  Display a help message and exit.

- **--basedir=dir_name**

  The path to the MySQL installation directory.

• **--core-file-size=size**

  The size of the core file that `mysqld` should be able to create. The option value is passed to `ulimit -c`.
- **--datadir=dir_name**

  The path to the data directory.

- **--defaults-extra-file=file_name**

  Read this option file in addition to the usual option files. If the file does not exist or is otherwise inaccessible, the server exits with an error. If `file_name` is not an absolute path name, it is interpreted relative to the current directory. This must be the first option on the command line if it is used. For additional information about this and other option-file options, see Command-Line Options that Affect Option-File Handling.

- **--defaults-file=file_name**

  Use only the given option file. If the file does not exist or is otherwise inaccessible, the server exits with an error. If `file_name` is not an absolute path name, it is interpreted relative to the current directory. This must be the first option on the command line if it is used. For additional information about this and other option-file options, see Command-Line Options that Affect Option-File Handling.

- **--ledir=dir_name**

  If `mysqld_safe` cannot find the server, use this option to indicate the path name to the directory where the server is located. As of MySQL 5.7.17, this option is accepted only on the command line, not in option files. On platforms that use systemd, the value can be specified in the value of `MYSQLD_OPTS`. See Managing MySQL Server with systemd.

- **--log-error=file_name**

  Write the error log to the given file. See The Error Log.

- **--mysqld-safe-log-timestamps**

  This option controls the format for timestamps in log output produced by `mysqld_safe`. The following list describes the permitted values. For any other value, `mysqld_safe` logs a warning and uses UTC format.

  - **UTC, utc**

    ISO 8601 UTC format (same as `--log_timestamps=UTC` for the server). This is the default.

  - **SYSTEM, system**

    ISO 8601 local time format (same as `--log_timestamps=SYSTEM` for the server).

  - **HYPHEN, hyphen**

    `YY-MM-DD h:mm:ss` format, as in `mysqld_safe` for MySQL 5.6.
  - **LEGACY, legacy**

    `YYMMDD hh:mm:ss` format, as in `mysqld_safe` prior to MySQL 5.6.

  This option was added in MySQL 5.7.11.

- **--malloc-lib=[lib_name]**

  The name of the library to use for memory allocation instead of the system `malloc()` library. As of MySQL 5.7.15, the option value must be one of the directories `/usr/lib`, `/usr/lib64`, `/usr/lib/i386-linux-gnu`, or `/usr/lib/x86_64-linux-gnu`. Prior to MySQL 5.7.15, any library can be used by specifying its path name, but there is a shortcut form to enable use of the `tcmalloc` library that is shipped with binary MySQL distributions for Linux in MySQL 5.7. It is possible for the shortcut form not to work under certain configurations, in which case you should specify a path name instead.

  **Note**

  As of MySQL 5.7.13, MySQL distributions no longer include a `tcmalloc` library.

  The `--malloc-lib` option works by modifying the `LD_PRELOAD` environment value to affect dynamic linking to enable the loader to find the memory-allocation library when `mysqld` runs:

  - If the option is not given, or is given without a value (`--malloc-lib=`), `LD_PRELOAD` is not modified and no attempt is made to use `tcmalloc`.

  - Prior to MySQL 5.7.31, if the option is given as `--malloc-lib=tcmalloc`, `mysqld_safe` looks for a `tcmalloc` library in `/usr/lib` and then in the MySQL `pkglibdir` location (for example, `/usr/local/mysql/lib` or whatever is appropriate). If `tcmalloc` is found, its path name is added to the beginning of the `LD_PRELOAD` value for `mysqld`. If `tcmalloc` is not found, `mysqld_safe` aborts with an error. As of MySQL 5.7.31, `tcmalloc` is not a permitted value for the `--malloc-lib` option.

  - If the option is given as `--malloc-lib=/path/to/some/library`, that full path is added to the beginning of the `LD_PRELOAD` value. If the full path points to a nonexistent or unreadable file, `mysqld_safe` aborts with an error.
- For cases where `mysqld_safe` adds a path name to `LD_PRELOAD`, it adds the path to the beginning of any existing value the variable already has.

**Note**

On systems that manage the server using systemd, `mysqld_safe` is not available. Instead, specify the allocation library by setting `LD_PRELOAD` in `/etc/sysconfig/mysql`.

Linux users can use the `libtcmalloc_minimal.so` included in binary packages by adding these lines to the `my.cnf` file:

```
[mysqld_safe]
malloc-lib=tcmalloc
```

Those lines also suffice for users on any platform who have installed a `tcmalloc` package in `/usr/lib`. To use a specific `tcmalloc` library, specify its full path name. Example:

```
[mysqld_safe]
malloc-lib=/opt/lib/libtcmalloc_minimal.so
```

- **--mysqld=prog_name**

  The name of the server program (in the `ledir` directory) that you want to start. This option is needed if you use the MySQL binary distribution but have the data directory outside of the binary distribution. If `mysqld_safe` cannot find the server, use the `--ledir` option to indicate the path name to the directory where the server is located. As of MySQL 5.7.15, this option is accepted only on the command line, not in option files. On platforms that use systemd, the value can be specified in the value of `MYSQLD_OPTS`. See Managing MySQL Server with systemd.

- **--mysqld-version=suffix**

  This option is similar to the `--mysqld` option, but you specify only the suffix for the server program name. The base name is assumed to be `mysqld`. For example, if you use `--mysqld-version=debug`, `mysqld_safe` starts the `mysqld-debug` program in the `ledir` directory. If the argument to `--mysqld-version` is empty, `mysqld_safe` uses `mysqld` in the `ledir` directory. As of MySQL 5.7.15, this option is accepted only on the command line, not in option files. On platforms that use systemd, the value can be specified in the value of `MYSQLD_OPTS`. See Managing MySQL Server with systemd.
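The `LD_PRELOAD` prepending described under `--malloc-lib` (the chosen library path is added to the beginning of any value the variable already has) can be sketched as follows. This is a hypothetical illustration, not the actual `mysqld_safe` code; the library path is an example, and the space separator is one of the forms the dynamic loader accepts.

```shell
#!/bin/sh
# Sketch: prepend a malloc library to LD_PRELOAD while preserving
# any entries the variable already contains (assumed path below).
malloc_lib="/opt/lib/libtcmalloc_minimal.so"   # hypothetical library path

if [ -n "$LD_PRELOAD" ]; then
  LD_PRELOAD="$malloc_lib $LD_PRELOAD"   # prepend; keep existing entries
else
  LD_PRELOAD="$malloc_lib"
fi
export LD_PRELOAD
echo "$LD_PRELOAD"
```

Prepending (rather than appending) matters because the loader resolves symbols such as `malloc()` from the first matching preloaded object.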
- **--nice=priority**

  Use the `nice` program to set the server's scheduling priority to the given value.

- **--no-defaults**

  Do not read any option files. If program startup fails due to reading unknown options from an option file, `--no-defaults` can be used to prevent them from being read. This must be the first option on the command line if it is used. For additional information about this and other option-file options, see Command-Line Options that Affect Option-File Handling.

- **--open-files-limit=count**

  The number of files that `mysqld` should be able to open. The option value is passed to `ulimit -n`.

  **Note**

  You must start `mysqld_safe` as `root` for this to function properly.

- **--pid-file=file_name**

  The path name that `mysqld` should use for its process ID file. From MySQL 5.7.2 to 5.7.17, `mysqld_safe` has its own process ID file, which is always named `mysqld_safe.pid` and located in the MySQL data directory.

- **--plugin-dir=dir_name**

  The path name of the plugin directory.

- **--port=port_num**

  The port number that the server should use when listening for TCP/IP connections. The port number must be 1024 or higher unless the server is started by the `root` operating system user.

- **--skip-kill-mysqld**

  Do not try to kill stray `mysqld` processes at startup. This option works only on Linux.

- **--socket=path**

  The Unix socket file that the server should use when listening for local connections.

- **--syslog, --skip-syslog**

  `--syslog` causes error messages to be sent to `syslog` on systems that support the `logger` program. `--skip-syslog` suppresses the use of `syslog`; messages are written to an error log file. When `syslog` is used for error logging, the `daemon.err` facility/severity is used for all log messages. Using these options to control `mysqld` logging is deprecated as of MySQL 5.7.5. Use the server `log_syslog` system variable instead.
  To control the facility, use the server `log_syslog_facility` system variable. See Error Logging to the System Log.

- **--syslog-tag=tag**

  For logging to `syslog`, messages from `mysqld_safe` and `mysqld` are written with identifiers of `mysqld_safe` and `mysqld`, respectively. To specify a suffix for the identifiers, use `--syslog-tag=tag`, which modifies the identifiers to be `mysqld_safe-tag` and `mysqld-tag`. Using this option to control `mysqld` logging is deprecated as of MySQL 5.7.5. Use the server `log_syslog_tag` system variable instead. See Error Logging to the System Log.

- **--timezone=timezone**

  Set the `TZ` time zone environment variable to the given option value. Consult your operating system documentation for legal time zone specification formats.

- **--user={user_name|user_id}**

  Run the `mysqld` server as the user having the name `user_name` or the numeric user ID `user_id`. ("User" in this context refers to a system login account, not a MySQL user listed in the grant tables.)

If you execute `mysqld_safe` with the `--defaults-file` or `--defaults-extra-file` option to name an option file, the option must be the first one given on the command line or the option file is not used. For example, this command does not use the named option file:

```
mysqld_safe --port=port_num --defaults-file=file_name
```

Instead, use the following command:

```
mysqld_safe --defaults-file=file_name --port=port_num
```

The `mysqld_safe` script is written so that it normally can start a server that was installed from either a source or a binary distribution of MySQL, even though these types of distributions typically install the server in slightly different locations. (See Installation Layouts.) `mysqld_safe` expects one of the following conditions to be true:

- The server and databases can be found relative to the working directory (the directory from which `mysqld_safe` is invoked).
  For binary distributions, `mysqld_safe` looks under its working directory for `bin` and `data` directories. For source distributions, it looks for `libexec` and `var` directories. This condition should be met if you execute `mysqld_safe` from your MySQL installation directory (for example, `/usr/local/mysql` for a binary distribution).

- If the server and databases cannot be found relative to the working directory, `mysqld_safe` attempts to locate them by absolute path names. Typical locations are `/usr/local/libexec` and `/usr/local/var`. The actual locations are determined from the values configured into the distribution at the time it was built. They should be correct if MySQL is installed in the location specified at configuration time.

Because `mysqld_safe` tries to find the server and databases relative to its own working directory, you can install a binary distribution of MySQL anywhere, as long as you run `mysqld_safe` from the MySQL installation directory:

```bash
cd mysql_installation_directory
bin/mysqld_safe &
```

If `mysqld_safe` fails, even when invoked from the MySQL installation directory, specify the `--ledir` and `--datadir` options to indicate the directories in which the server and databases are located on your system.

`mysqld_safe` tries to use the `sleep` and `date` system utilities to determine how many times per second it has attempted to start. If these utilities are present and the attempted starts per second is greater than 5, `mysqld_safe` waits 1 full second before starting again. This is intended to prevent excessive CPU usage in the event of repeated failures. (Bug #11761530, Bug #54035)

When you use `mysqld_safe` to start `mysqld`, `mysqld_safe` arranges for error (and notice) messages from itself and from `mysqld` to go to the same destination. There are several `mysqld_safe` options for controlling the destination of these messages:

- **--log-error=file_name**: Write error messages to the named error file.
- **--syslog**: Write error messages to `syslog` on systems that support the `logger` program.

- **--skip-syslog**: Do not write error messages to `syslog`. Messages are written to the default error log file (`host_name.err` in the data directory), or to a named file if the `--log-error` option is given.

If none of these options is given, the default is `--skip-syslog`.

When `mysqld_safe` writes a message, notices go to the logging destination (`syslog` or the error log file) and `stdout`. Errors go to the logging destination and `stderr`.

**Note**

Controlling `mysqld` logging from `mysqld_safe` is deprecated as of MySQL 5.7.5. Use the server's native `syslog` support instead. For more information, see Error Logging to the System Log.

### 4.3 mysql.server — MySQL Server Startup Script

MySQL distributions on Unix and Unix-like systems include a script named `mysql.server`, which starts the MySQL server using `mysqld_safe`. It can be used on systems such as Linux and Solaris that use System V-style run directories to start and stop system services. It is also used by the macOS Startup Item for MySQL.

`mysql.server` is the script name as used within the MySQL source tree. The installed name might be different (for example, `mysqld` or `mysql`). In the following discussion, adjust the name `mysql.server` as appropriate for your system.

**Note**

For some Linux platforms, MySQL installation from RPM or Debian packages includes systemd support for managing MySQL server startup and shutdown. On these platforms, `mysql.server` and `mysqld_safe` are not installed because they are unnecessary. For more information, see Managing MySQL Server with systemd.

To start or stop the server manually using the `mysql.server` script, invoke it from the command line with `start` or `stop` arguments:

```
mysql.server start
mysql.server stop
```

`mysql.server` changes location to the MySQL installation directory, then invokes `mysqld_safe`.
To run the server as some specific user, add an appropriate `user` option to the `[mysqld]` group of the global `/etc/my.cnf` option file, as shown later in this section.

(It is possible that you must edit `mysql.server` if you've installed a binary distribution of MySQL in a nonstandard location. Modify it to change location into the proper directory before it runs `mysqld_safe`. If you do this, your modified version of `mysql.server` may be overwritten if you upgrade MySQL in the future; make a copy of your edited version that you can reinstall.)

`mysql.server stop` stops the server by sending a signal to it. You can also stop the server manually by executing `mysqladmin shutdown`.

To start and stop MySQL automatically on your server, you must add start and stop commands to the appropriate places in your `/etc/rc*` files:

- If you use the Linux server RPM package (`MySQL-server-VERSION.rpm`), or a native Linux package installation, the `mysql.server` script may be installed in the `/etc/init.d` directory with the name `mysqld` or `mysql`. See Installing MySQL on Linux Using RPM Packages from Oracle, for more information on the Linux RPM packages.

- If you install MySQL from a source distribution or using a binary distribution format that does not install `mysql.server` automatically, you can install the script manually. It can be found in the `support-files` directory under the MySQL installation directory or in a MySQL source tree. Copy the script to the `/etc/init.d` directory with the name `mysql` and make it executable:

  ```
  cp mysql.server /etc/init.d/mysql
  chmod +x /etc/init.d/mysql
  ```

After installing the script, the commands needed to activate it to run at system startup depend on your operating system.
On Linux, you can use `chkconfig`:

```
chkconfig --add mysql
```

On some Linux systems, the following command also seems to be necessary to fully enable the `mysql` script:

```
chkconfig --level 345 mysql on
```

- On FreeBSD, startup scripts generally should go in `/usr/local/etc/rc.d/`. Install the `mysql.server` script as `/usr/local/etc/rc.d/mysql.server.sh` to enable automatic startup. The `rc(8)` manual page states that scripts in this directory are executed only if their base name matches the `*.sh` shell file name pattern. Any other files or directories present within the directory are silently ignored.

- As an alternative to the preceding setup, some operating systems also use `/etc/rc.local` or `/etc/init.d/boot.local` to start additional services on startup. To start up MySQL using this method, append a command like the one following to the appropriate startup file:

  ```
  /bin/sh -c 'cd /usr/local/mysql; ./bin/mysqld_safe --user=mysql &'
  ```

- For other systems, consult your operating system documentation to see how to install startup scripts.

`mysql.server` reads options from the `[mysql.server]` and `[mysqld]` sections of option files. For backward compatibility, it also reads `[mysql_server]` sections, but to be current you should rename such sections to `[mysql.server]`.

You can add options for `mysql.server` in a global `/etc/my.cnf` file. A typical `my.cnf` file might look like this:

```
[mysqld]
datadir=/usr/local/mysql/var
socket=/var/tmp/mysql.sock
port=3306
user=mysql

[mysql.server]
basedir=/usr/local/mysql
```

The `mysql.server` script supports the options shown in the following table. If specified, they must be placed in an option file, not on the command line. `mysql.server` supports only `start` and `stop` as command-line arguments.
### Table 4.2 mysql.server Option-File Options

<table>
<thead>
<tr> <th>Option Name</th> <th>Description</th> <th>Type</th> </tr>
</thead>
<tbody>
<tr> <td>basedir</td> <td>Path to MySQL installation directory</td> <td>Directory name</td> </tr>
<tr> <td>datadir</td> <td>Path to MySQL data directory</td> <td>Directory name</td> </tr>
<tr> <td>pid-file</td> <td>File in which server should write its process ID</td> <td>File name</td> </tr>
<tr> <td>service-startup-timeout</td> <td>How long to wait for server startup</td> <td>Integer</td> </tr>
</tbody>
</table>

- **basedir=dir_name**

  The path to the MySQL installation directory.

- **datadir=dir_name**

  The path to the MySQL data directory.

- **pid-file=file_name**

  The path name of the file in which the server should write its process ID. The server creates the file in the data directory unless an absolute path name is given to specify a different directory. If this option is not given, `mysql.server` uses a default value of `host_name.pid`. The PID file value passed to `mysqld_safe` overrides any value specified in the `[mysqld_safe]` option file group. Because `mysql.server` reads the `[mysqld]` option file group but not the `[mysqld_safe]` group, you can ensure that `mysqld_safe` gets the same value when invoked from `mysql.server` as when invoked manually by putting the same `pid-file` setting in both the `[mysqld_safe]` and `[mysqld]` groups.

- **service-startup-timeout=seconds**

  How long in seconds to wait for confirmation of server startup. If the server does not start within this time, `mysql.server` exits with an error. The default value is 900. A value of 0 means not to wait at all for startup. Negative values mean to wait forever (no timeout).

### 4.4 mysqld_multi — Manage Multiple MySQL Servers

`mysqld_multi` is designed to manage several `mysqld` processes that listen for connections on different Unix socket files and TCP/IP ports. It can start or stop servers, or report their current status.
**Note**

For some Linux platforms, MySQL installation from RPM or Debian packages includes systemd support for managing MySQL server startup and shutdown. On these platforms, `mysqld_multi` is not installed because it is unnecessary. For information about using systemd to handle multiple MySQL instances, see Managing MySQL Server with systemd.

`mysqld_multi` searches for groups named `[mysqldN]` in `my.cnf` (or in the file named by the `--defaults-file` option). `N` can be any positive integer. This number is referred to in the following discussion as the option group number, or GNR. Group numbers distinguish option groups from one another and are used as arguments to `mysqld_multi` to specify which servers you want to start, stop, or obtain a status report for.

Options listed in these groups are the same that you would use in the `[mysqld]` group used for starting `mysqld`. (See, for example, Starting and Stopping MySQL Automatically.) However, when using multiple servers, it is necessary that each one use its own value for options such as the Unix socket file and TCP/IP port number. For more information on which options must be unique per server in a multiple-server environment, see Running Multiple MySQL Instances on One Machine.

To invoke `mysqld_multi`, use the following syntax:

```
mysqld_multi [options] {start|stop|reload|report} [GNR[,GNR] ...]
```

`start`, `stop`, `reload` (stop and restart), and `report` indicate which operation to perform. You can perform the designated operation for a single server or multiple servers, depending on the GNR list that follows the option name. If there is no list, `mysqld_multi` performs the operation for all servers in the option file.

Each GNR value represents an option group number or range of group numbers. The value should be the number at the end of the group name in the option file. For example, the GNR for a group named `[mysqld17]` is 17. To specify a range of numbers, separate the first and last numbers by a dash.
The GNR value 10-13 represents groups `[mysqld10]` through `[mysqld13]`. Multiple groups or group ranges can be specified on the command line, separated by commas. There must be no whitespace characters (spaces or tabs) in the GNR list; anything after a whitespace character is ignored.

This command starts a single server using option group `[mysqld17]`:

```
mysqld_multi start 17
```

This command stops several servers, using option groups `[mysqld8]` and `[mysqld10]` through `[mysqld13]`:

```
mysqld_multi stop 8,10-13
```

For an example of how you might set up an option file, use this command:

```
mysqld_multi --example
```

`mysqld_multi` searches for option files as follows:

- With `--no-defaults`, no option files are read.

- With `--defaults-file=file_name`, only the named file is read.

- Otherwise, option files in the standard list of locations are read, including any file named by the `--defaults-extra-file=file_name` option, if one is given. (If the option is given multiple times, the last value is used.)

For additional information about these and other option-file options, see Command-Line Options that Affect Option-File Handling.

Option files read are searched for `[mysqld_multi]` and `[mysqldN]` option groups. The `[mysqld_multi]` group can be used for options to `mysqld_multi` itself. `[mysqldN]` groups can be used for options passed to specific `mysqld` instances. The `[mysqld]` or `[mysqld_safe]` groups can be used for common options read by all instances of `mysqld` or `mysqld_safe`. You can specify a `--defaults-file=file_name` option to use a different configuration file for that instance, in which case the `[mysqld]` or `[mysqld_safe]` groups from that file are used for that instance.

`mysqld_multi` supports the following options.

- **--help**

  Display a help message and exit.

- **--example**

  Display a sample option file.

- **--log=file_name**

  Specify the name of the log file. If the file exists, log output is appended to it.
- **--mysqladmin=prog_name**

  The `mysqladmin` binary to be used to stop servers.

- **--mysqld=prog_name**

  The `mysqld` binary to be used. Note that you can specify `mysqld_safe` as the value for this option also. If you use `mysqld_safe` to start the server, you can include the `mysqld` or `ledir` options in the corresponding `[mysqldN]` option group. These options indicate the name of the server that `mysqld_safe` should start and the path name of the directory where the server is located. (See the descriptions for these options in Section 4.2, "`mysqld_safe` — MySQL Server Startup Script".) Example:

  ```
  [mysqld38]
  mysqld = mysqld-debug
  ledir = /opt/local/mysql/libexec
  ```

- **--no-log**

  Print log information to `stdout` rather than to the log file. By default, output goes to the log file.

- **--password=password**

  The password of the MySQL account to use when invoking `mysqladmin`. Note that the password value is not optional for this option, unlike for other MySQL programs.

- **--silent**

  Silent mode; disable warnings.

- **--tcp-ip**

  Connect to each MySQL server through the TCP/IP port instead of the Unix socket file. (If a socket file is missing, the server might still be running, but accessible only through the TCP/IP port.) By default, connections are made using the Unix socket file. This option affects `stop` and `report` operations.

- **--user=user_name**

  The user name of the MySQL account to use when invoking `mysqladmin`.

- **--verbose**

  Be more verbose.

- **--version**

  Display version information and exit.

Some notes about `mysqld_multi`:

- **Most important**: Before using `mysqld_multi` be sure that you understand the meanings of the options that are passed to the `mysqld` servers and why you would want to have separate `mysqld` processes. Beware of the dangers of using multiple `mysqld` servers with the same data directory. Use separate data directories, unless you *know* what you are doing.
  Starting multiple servers with the same data directory does *not* give you extra performance in a threaded system. See Running Multiple MySQL Instances on One Machine.

  **Important**

  Make sure that the data directory for each server is fully accessible to the Unix account that the specific `mysqld` process is started as. *Do not* use the Unix `root` account for this, unless you *know* what you are doing. See How to Run MySQL as a Normal User.

- Make sure that the MySQL account used for stopping the `mysqld` servers (with the `mysqladmin` program) has the same user name and password for each server. Also, make sure that the account has the `SHUTDOWN` privilege. If the servers that you want to manage have different user names or passwords for the administrative accounts, you might want to create an account on each server that has the same user name and password. For example, you might set up a common `multi_admin` account by executing the following commands for each server:

  ```bash
  $> mysql -u root -S /tmp/mysql.sock -p
  Enter password:
  mysql> CREATE USER 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
  mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost';
  ```

  See Access Control and Account Management. You have to do this for each `mysqld` server. Change the connection parameters appropriately when connecting to each one. Note that the host name part of the account name must permit you to connect as `multi_admin` from the host where you want to run `mysqld_multi`.

- The Unix socket file and the TCP/IP port number must be different for every `mysqld`. (Alternatively, if the host has multiple network addresses, you can set the `bind_address` system variable to cause different servers to listen to different interfaces.)

- The `--pid-file` option is very important if you are using `mysqld_safe` to start `mysqld` (for example, `--mysqld=mysqld_safe`). Every `mysqld` should have its own process ID file.
  The advantage of using `mysqld_safe` instead of `mysqld` is that `mysqld_safe` monitors its `mysqld` process and restarts it if the process terminates due to a signal sent using `kill -9` or for other reasons, such as a segmentation fault.

- You might want to use the `--user` option for `mysqld`, but to do this you need to run the `mysqld_multi` script as the Unix superuser (`root`). Having the option in the option file doesn't matter; you just get a warning if you are not the superuser and the `mysqld` processes are started under your own Unix account.

The following example shows how you might set up an option file for use with `mysqld_multi`. The order in which the `mysqld` programs are started or stopped depends on the order in which they appear in the option file. Group numbers need not form an unbroken sequence. The first and fifth `[mysqldN]` groups were intentionally omitted from the example to illustrate that you can have "gaps" in the option file. This gives you more flexibility.

```
# This is an example of a my.cnf file for mysqld_multi.
# Usually this file is located in home dir ~/.my.cnf or /etc/my.cnf

[mysqld_multi]
mysqld     = /usr/local/mysql/bin/mysqld_safe
mysqladmin = /usr/local/mysql/bin/mysqladmin
user       = multi_admin
password   = my_password

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /usr/local/mysql/data2/hostname.pid2
datadir    = /usr/local/mysql/data2
language   = /usr/local/mysql/share/mysql/english
user       = unix_user1

[mysqld3]
mysqld     = /path/to/mysqld_safe
ledir      = /path/to/mysqld-binary/
mysqladmin = /path/to/mysqladmin
socket     = /tmp/mysql.sock3
port       = 3308
pid-file   = /usr/local/mysql/data3/hostname.pid3
datadir    = /usr/local/mysql/data3
language   = /usr/local/mysql/share/mysql/swedish
user       = unix_user2

[mysqld4]
socket     = /tmp/mysql.sock4
port       = 3309
pid-file   = /usr/local/mysql/data4/hostname.pid4
datadir    = /usr/local/mysql/data4
language   = /usr/local/mysql/share/mysql/estonia
user       = unix_user3

[mysqld6]
socket     = /tmp/mysql.sock6
port       = 3311
pid-file   = /usr/local/mysql/data6/hostname.pid6
datadir    = /usr/local/mysql/data6
language   = /usr/local/mysql/share/mysql/japanese
user       = unix_user4
```

See Using Option Files.
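As an aside, the GNR list syntax that `mysqld_multi` accepts on its command line (single group numbers and dash-separated ranges, joined by commas, e.g. `8,10-13`) can be illustrated with a small shell routine. This is a hypothetical helper named `expand_gnr`, a sketch of the syntax described above, not `mysqld_multi`'s own parsing code:

```shell
#!/bin/sh
# Hypothetical helper: expand a mysqld_multi GNR list such as "8,10-13"
# into the individual group numbers it denotes.
expand_gnr() {
  out=""
  save_ifs=$IFS
  IFS=','                        # split the argument on commas
  for part in $1; do
    case "$part" in
      *-*)                       # dash-separated range, e.g. 10-13
        lo=${part%-*}
        hi=${part#*-}
        out="$out $(seq "$lo" "$hi" | tr '\n' ' ')" ;;
      *)                         # single group number
        out="$out $part" ;;
    esac
  done
  IFS=$save_ifs
  echo $out                      # unquoted: squeeze runs of spaces
}

expand_gnr 8,10-13    # prints: 8 10 11 12 13
```

So `mysqld_multi stop 8,10-13` acts on groups `[mysqld8]` and `[mysqld10]` through `[mysqld13]`, matching the expansion above.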
# XEP-0332: HTTP over XMPP transport

Peter Waher
mailto:peterwaher@hotmail.com
xmpp:peter.waher@jabber.org
http://www.linkedin.com/in/peterwaher

2020-03-31
Version 0.5.1

<table> <thead> <tr> <th>Status</th> <th>Type</th> <th>Short Name</th> </tr> </thead> <tbody> <tr> <td>Deferred</td> <td>Standards Track</td> <td>NOT_YET_ASSIGNED</td> </tr> </tbody> </table>

This specification defines how XMPP can be used to transport HTTP communication over peer-to-peer networks.

## Legal

### Copyright

This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).

### Permissions

Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.

### Warranty

**NOTE WELL:** This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
### Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.

### Conformance

This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).

## 1 Introduction

Many documents have been written on how to transport XMPP datagrams using HTTP. The motivation behind such solutions has often been to be able to use XMPP in scripting languages such as JavaScript running in web browsers. But up to this point, very little has been written about the reverse: how to transport HTTP methods and HTTP responses over an XMPP-based peer-to-peer network. Here, the motivation is as follows: There are multitudes of applications and APIs that use HTTP over TCP as their basic communication transport protocol. As these move closer and closer to the users, problems arise when the users want to protect their data and services using firewalls. Even though there are methods today to open up firewalls, manually or automatically, to permit communication with such devices and applications, doing so opens up the application for everybody.
This raises the need for more advanced security measures, which are sometimes difficult to implement using HTTP. The XMPP protocol, however, does not have the same problems as HTTP in these regards. It is a peer-to-peer protocol that naturally allows communication with applications and devices behind firewalls. It also includes advanced user authentication and authorization, which makes it easier to prevent unauthorized access to private content. Furthermore, with the advent of semantic web technologies and their use in web 3.0 and Internet of Things applications, such applications move even more rapidly into the private spheres of the users, where security and privacy are of paramount importance, making it necessary to use more secure transport protocols than HTTP over TCP.

There are many different types of HTTP-based communication that one would like to be able to transport over XMPP. A non-exhaustive list includes:

- Web Content like pages, images, files, etc.
- Web Forms.
- Web Services (SOAP, REST, etc.)
- Semantic Web Resources (RDF, Turtle, etc.)
- Federated SPARQL queries (SQL-type query language for the semantic web, or web 3.0)
- Streamed multi-media content in UPnP and DLNA networks.

Instead of trying to figure out all possible things transportable over HTTP and make them transportable over XMPP, this document ignores the type of content transported, and instead focuses on encoding and decoding the original HTTP requests and responses, building an HTTP tunnel over an existing XMPP connection. It would enable existing applications to work seamlessly over XMPP if browsers and web services supported this extension (like displaying your home control application on your phone when you are at work), without the need to update the myriad of existing applications. It would also permit federated SPARQL queries in personal networks with the added benefit of being able to control who can talk to who (or
what can talk to what) through established friendship relationships.

Previous extensions have handled different aspects of XMPP working together with HTTP:

- **Verifying HTTP Requests via XMPP (XEP-0070)**: This specification handles client authentication of resources, where there are three parties: HTTP Client <-> HTTP Server/XMPP Client <-> XMPP Server. Here, HTTP Client authentication to resources on the HTTP Server is made by a third party, an XMPP Server.
- **SOAP over XMPP (XEP-0072)**: This specification handles execution of SOAP-based web services specifically. It has some benefits with regard to Web Service calls over XMPP, but covers only one of the many types of HTTP-based communication one would desire to transport over XMPP.
- **BOSH (XEP-0124)**: This specification handles XMPP-based communication over HTTP sessions (BOSH), allowing, for instance, XMPP communication in JavaScript using the XML HTTP Request object. This is in some ways the reverse of what this document proposes to do.
- **Stanza Headers and Internet Metadata (XEP-0131)**: While not directly related to HTTP, it is used to transport headers in the form of collections of key-value pairs, exactly as is done in HTTP. The format for encoding headers into XML defined by that XEP will be re-used in this XEP.
- **XMPP URI Query Components (XEP-0147)**: This informational specification proposes ways to define XMPP actions using URIs. The xmpp URI scheme is formally defined in [RFC 5122](http://tools.ietf.org/html/rfc5122). This document will propose a different URI scheme for HTTP-based resources over an XMPP transport: httpx.

## 2 Requirements

This document presupposes that the server already has a web server (HTTP Server) implementation, and that it hosts content through it; content which can be both dynamic (i.e. generated) or static (e.g. files) in nature, and which it wants to publish to XMPP clients as well as HTTP clients.
It also presupposes that the client is aware of HTTP semantics and MIME encoding.

## 3 Glossary

The following table lists common terms and corresponding descriptions.

**HTTP Client** An HTTP Client is the initiator of an HTTP Request.

**HTTP Method** HTTP Methods are: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE and PATCH. The HTTP Method CONNECT is not supported by this specification.

**HTTP Request** An HTTP Request consists of an HTTP Method, version information, headers and an optional body.

**HTTP Resource** A resource on an HTTP Server identified by a path. Each path begins with a separator character (/).

**HTTP Response** An HTTP Response consists of a status code, an optional status message, headers and an optional body.

**HTTP Server** An HTTP Server responds to HTTP Client requests.

**Web Server** Used synonymously with HTTP Server.

## 4 Use Cases

All HTTP communication is done using the Request/Response paradigm. Each HTTP Request is made by sending an iq-stanza containing a req element to the server. Each iq-stanza sent is of type set. When the server responds, it does so by sending an iq-stanza response (type result) back to the client containing a resp element. Since responses are asynchronous, and since multiple requests may be active at the same time, responses may be returned in a different order than that in which the original requests were made.

Requests or responses containing data must also consider how this data should be encoded within the XML telegram. Normally in HTTP, content and headers are separated by a blank line, and the transfer of the content is made in the same stream. Specific HTTP headers are used to define how the content is transferred and encoded within the stream (Content-Type, Content-Length, Content-Encoding, Content-Transfer-Encoding). This approach is not possible if the response is to be embedded in an XML telegram, since it can interfere with the encoding of the encompassing XML.
To solve this, this document specifies additional data transfer mechanisms that are compatible with the XMPP protocol. The normal HTTP-based content transfer headers will still be transported, but do not affect the content encoding used in the XMPP transport. The following content encoding methods are available:

**text** Normal text content. The text is encoded as text within XML, using the same encoding used by the XML stream. The XML special characters (`<`, `>`, and `&`) are escaped using the normal `&lt;`, `&gt;`, and `&amp;` character escape sequences.

**xml** XML content embedded in the XML telegram. Note however, that any processing instructions or XML version statements must be avoided, since they may cause the XML stream to become invalid XML. If this is a problem, normal text encoding can be used as an alternative. The advantage of xml over the text or base64 encodings appears when it is used in conjunction with EXI compression (XEP-0322: Efficient XML Interchange (EXI) Format <https://xmpp.org/extensions/xep-0322.html>). EXI compression has the ability to compress XML efficiently. Text will not be compressed, unless it already exists in internal string tables. Base-64 encoded data will be compressed so that the 33% size gain induced by the encoding is recaptured.

**base64** Base-64 encoded binary content. Can be used to easily embed binary content in the telegram.

**chunkedBase64** Chunked Base-64 encoded binary content. The content is not embedded in the telegram. Instead it is sent in chunks, using separate chunk messages to the client. Chunked transport can be used by the server when it doesn’t know the size of the final result. Streaming content, i.e. content of infinite length, must use the ibb or jingle transport types to transfer content. If the content consists of a file, sipub should be used. Chunked encoding is well suited for dynamic responses of moderate sizes, for instance API method responses.
When the response starts to be generated, the server does not know what the final size will be, but it will most probably be “manageable”. Using the chunked transfer mechanism enables the server to start sending the content, minimizing the need for buffers, and at the same time minimizing the number of messages that need to be sent, increasing throughput. The client can limit the maximum chunk size to be used by the server, using the maxChunkSize attribute in the request. The chunk size can be set to a value between 256 and 65536. If not provided in the request, the server chooses an appropriate value. Note that chunks can be sent containing a smaller number of bytes than the maximum chunk size provided in the request.

**sipub** The sender might deem the content to be too large for sending embedded in the XMPP telegram. To circumvent this, the sender publishes the content as a file using Publishing Stream Initiation Requests (XEP-0137: Publishing Stream Initiation Requests <https://xmpp.org/extensions/xep-0137.html>), instead of embedding the content directly. This might be the case, for instance, when a client requests a video resource without using a ranged request. This transfer mechanism is of course the logical choice if the content is already stored in a file on the server, and the size of the file is sufficiently large to merit the overhead of sipub. Smaller files can simply be returned using the text, xml or base64 mechanisms. The client can disable the use of sipub by the server, by including a sipub='false' attribute in the request. sipub is enabled by default. On constrained devices with limited support for different XEPs, this can be a way to avoid the use of technologies not supported by the client.

**ibb** This option may be used to encode indefinite streams, like live audio or video streams (HLS, SHOUTcast, Motion JPEG web cams, etc.).
It uses In-Band Bytestreams (XEP-0047: In-Band Bytestreams <https://xmpp.org/extensions/xep-0047.html>) to send the content over an in-band bytestream to the client. This option is not available in requests, only in responses. Streams must not use any of the above mechanisms; only the ibb and jingle mechanisms can be used. If the content represents multimedia, jingle is preferable, especially if different encodings are available. The client can disable the use of ibb by the server, by including an ibb='false' attribute in the request. ibb is enabled by default. On constrained devices with limited support for different XEPs, this can be a way to avoid the use of technologies not supported by the client.

**jingle** For demanding multi-media streams, alternative methods that transport the stream outside the XMPP stream may be required. Even though the ibb method may be sufficient to stream a low-resolution web cam in the home, or to listen to a microphone or a radio station, it is probably badly suited for high-resolution video streams with multiple video angles and audio channels. If such content is accessed and streamed, the server can negotiate a different way to stream the content using Jingle (XEP-0166: Jingle <https://xmpp.org/extensions/xep-0166.html>). The client can disable the use of jingle by the server, by including a jingle='false' attribute in the request. jingle is enabled by default. On constrained devices with limited support for different XEPs, this can be a way to avoid the use of technologies not supported by the client.

**Note:** Content encoded using the **chunkedBase64** encoding method can be terminated, either by the receptor going off-line, or by sending a **close** command to the sender. The transfer methods sipub, ibb and jingle have their own mechanisms for aborting content transfer.

### 4.1 HTTP Methods

The following use cases show how different HTTP methods may work when transported over XMPP.
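As a non-normative illustration of the stanza construction used in the listings that follow, the sketch below builds the req payload with its SHIM headers using only Python's standard library. The helper name `build_req` is our own; a real client would hand the resulting element to its XMPP library, which supplies the enclosing iq stanza of type set.

```python
import xml.etree.ElementTree as ET

HTTP_NS = 'urn:xmpp:http'
SHIM_NS = 'http://jabber.org/protocol/shim'

def build_req(method, resource, headers, version='1.1'):
    # Build the <req/> child of the iq stanza; the surrounding <iq type='set'/>
    # wrapper would be added by the XMPP library actually sending the stanza.
    req = ET.Element('{%s}req' % HTTP_NS,
                     {'method': method, 'resource': resource, 'version': version})
    shim = ET.SubElement(req, '{%s}headers' % SHIM_NS)
    for name, value in headers.items():
        # Each HTTP header becomes a SHIM <header name='...'>value</header>.
        header = ET.SubElement(shim, '{%s}header' % SHIM_NS, {'name': name})
        header.text = value
    return req

req = build_req('GET', '/index.html', {'Host': 'example.org'})
print(ET.tostring(req, encoding='unicode'))
```

Because the headers are plain key-value pairs, the same helper serves every HTTP method shown below; only the method and resource attributes change.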
To facilitate readability, these examples show simple text or xml results.

#### 4.1.1 OPTIONS

This section shows an example of an OPTIONS method call. OPTIONS is described in §9.2 of RFC 2616.

Listing 1: OPTIONS

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='1'>
  <req xmlns='urn:xmpp:http' method='OPTIONS' resource='*' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
    </headers>
  </req>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='1'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 13:52:10 GMT-4</header>
      <header name='Allow'>OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE</header>
      <header name='Content-Length'>0</header>
    </headers>
  </resp>
</iq>
```

#### 4.1.2 GET

This section shows an example of a GET method call. GET is described in §9.3 of RFC 2616.

Listing 2: GET

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='2'>
  <req xmlns='urn:xmpp:http' method='GET' resource='rdf/xep' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
```

**Note:** The XMPP/HTTP bridge at the server only transmits headers literally as they are reported, as if normal HTTP over TCP were used. In the HTTP over XMPP case, connections are not handled in the same way, and so the "Connection: Close" header has no meaning here. For more information about connection handling in the HTTP over XMPP case, see the section on **Connection Handling**.

#### 4.1.3 HEAD

This section shows an example of a HEAD method call. HEAD is described in §9.4 of RFC 2616.

Listing 3: HEAD

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='3'>
```

#### 4.1.4 POST

This section shows an example of a POST method call.
POST is described in §9.5 of RFC 2616.

Listing 4: POST

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='4'>
  <req xmlns='urn:xmpp:http' method='POST' resource='/sparql/?default-graph-uri=http%3A%2F%2Fexample.org%2Frdf/xep' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
      <header name='User-agent'>Clayster HTTP/XMPP Client</header>
      <header name='Content-Type'>application/sparql-query</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <text>PREFIX dc: &lt;http://purl.org/dc/elements/1.1/&gt;
BASE &lt;http://example.org/&gt;
SELECT ?title ?creator ?publisher
OPTIONAL { ?x dc:creator ?creator } .
}</text>
    </data>
  </req>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='4'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 17:09:34 GMT-4</header>
      <header name='Server'>Clayster</header>
      <header name='Content-Type'>application/sparql-results+xml</header>
      <header name='Content-Length'></header>
    </headers>
    <data>
      <xml>
        <sparql xmlns="http://www.w3.org/2005/sparql-results#">
          <head>
            <variable name="title"/>
            <variable name="creator"/>
          </head>
          <results>
            <result>
              <binding name="title">
                <literal>HTTP over XMPP</literal>
              </binding>
              <binding name="creator">
                <uri>http://example.org/PeterWaher</uri>
              </binding>
            </result>
          </results>
        </sparql>
      </xml>
    </data>
  </resp>
</iq>
```

**Note:** If using xml encoding of data, care has to be taken to avoid including the version and encoding information (`<?xml version="1.0"?>`) at the top of the document; otherwise the resulting XML will be invalid.
Care must also be taken to make sure that the generated XML is not invalid XMPP, even though it might be valid XML. This could happen, for instance, if the XML contains illegal elements from the jabber:client namespace. If in doubt, use another encoding mechanism.

#### 4.1.5 PUT

This section shows an example of a PUT method call. PUT is described in §9.6 of RFC 2616.

Listing 5: PUT

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='5'>
  <req xmlns='urn:xmpp:http' method='PUT' resource='/index.html' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
      <header name='Content-Type'>text/html</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <text>&lt;html&gt;&lt;header/&gt;&lt;body&gt;&lt;p&gt;Beautiful home page.&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</text>
    </data>
  </req>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='5'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='204' statusMessage='No Content'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 17:40:41 GMT-4</header>
      <header name='Content-Length'>0</header>
    </headers>
  </resp>
</iq>
```

#### 4.1.6 DELETE

This section shows an example of a DELETE method call. DELETE is described in §9.7 of RFC 2616.
Listing 6: DELETE

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='6'>
  <req xmlns='urn:xmpp:http' method='DELETE' resource='/index.html' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
    </headers>
  </req>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='6'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='403' statusMessage='Forbidden'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 17:46:07 GMT-4</header>
      <header name='Content-Type'>text/plain</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <text>You're not allowed to change the home page!</text>
    </data>
  </resp>
</iq>
```

#### 4.1.7 TRACE

This section shows an example of a TRACE method call. TRACE is described in §9.8 of RFC 2616.

Listing 7: TRACE

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='7'>
```

**Note:** The TRACE method makes the server return the request it received from the client. Here, however, it is assumed that the request is made over HTTP/TCP, not HTTP/XMPP. Therefore, in this example, the XMPP layer has transformed the HTTP/XMPP request into an HTTP/TCP-looking request, which is returned as the response to the TRACE method call. RFC 2616 is silent on the actual format of the TRACE response (MIME type message/http), and TRACE is only used (if not disabled for security reasons) for debugging connections and routing via proxies. Therefore, a response returning the original XMPP request should also be accepted by the caller.

#### 4.1.8 PATCH

This section shows an example of a PATCH method call. PATCH is described in RFC 5789.
Listing 8: PATCH

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='8'>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='8'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='204' statusMessage='No Content'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Content-Location'>/file.txt</header>
      <header name='ETag'>e0023aa4e</header>
    </headers>
  </resp>
</iq>
```

### 4.2 Encoding formats

In the following sub-sections, the different data encoding formats are discussed, each with corresponding examples to illustrate how they work. The interesting part of these examples is the data element and its contents.

#### 4.2.1 text

The text encoding is a simple way to return text responses (i.e. any MIME type starting with text/). Since the text is embedded into XML, the characters <, > and & need to be escaped to &lt;, &gt; and &amp; respectively. The following example shows how a TURTLE response, which is text-based, is returned using the text encoding:

Listing 9: text

```xml
<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='2'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 16:39:54 GMT-4</header>
      <header name='Server'>Clayster</header>
      <header name='Content-Type'>text/turtle</header>
      <header name='Content-Length'>...</header>
      <header name='Connection'>Close</header>
    </headers>
    <data>
      <text>@prefix dc: &lt;http://purl.org/dc/elements/1.1/&gt;.
@base &lt;http://example.org/&gt;.
&lt;xep&gt; dc:title "HTTP over XMPP";
dc:creator &lt;PeterWaher&gt;;
dc:publisher &lt;XSF&gt;.
</text>
    </data>
  </resp>
</iq>
```

#### 4.2.2 xml

XML is a convenient way to return XML embedded in the XMPP response. This can be suitable for XML-based MIME types, like `text/xml`, `application/soap+xml` or `application/sparql-results+xml`.
Care has to be taken, however, since not all XML constructs can be embedded as content of an XML element without invalidating it, like the XML version and encoding declaration (`<?xml version="1.0"?>`, for example). Care must also be taken to make sure that the generated XML is not invalid XMPP, even though it might be valid XML. This could happen, for instance, if the XML contains illegal elements from the `jabber:client` namespace. If unsure how to handle XML responses using the `xml` encoding type, you can equally well use the `text` type, but escape the XML special characters `<`, `>`, and `&`, or use another encoding, like `base64`. The advantage of `xml` over the `text` or `base64` encodings appears when it is used in conjunction with EXI compression. EXI compression has the ability to compress XML efficiently. Text will not be compressed, unless it already exists in internal string tables. Base-64 encoded data will be compressed so that the 33% size gain induced by the encoding is recaptured.

Listing 10: xml

```xml
<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser'>
```

#### 4.2.3 base64

Base-64 encoding is a simple way to encode content so that it is easily embedded into XML. Apart from the advantage of being easy to encode, it has the disadvantage of increasing the size of the content by 33%, since it requires 4 bytes to encode 3 bytes of data. Care has to be taken not to send overly large items using this encoding.

**Note:** The actual size of the content being sent does not necessarily need to increase if this encoding method is used. If EXI compression is used at the same time, and it uses schema-aware compression, it will actually understand that the character set used to encode the data only uses 6 bits of information per character, and thus compresses the data back to its original size.
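The 33% figure follows directly from the 4-bytes-per-3-bytes expansion, and is easy to verify with a short, non-normative Python check:

```python
import base64

# 3072 bytes of arbitrary binary content (divisible by 3, so no padding).
payload = bytes(range(256)) * 12
encoded = base64.b64encode(payload)

# 4 output bytes per 3 input bytes: one third larger before any compression.
print(len(payload), len(encoded))  # → 3072 4096
```

For content sizes not divisible by 3, the `=` padding adds at most two further bytes, so the one-third overhead holds in practice for anything but trivially small payloads.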
The following example shows how an image is returned using the **base64** encoding:

#### 4.2.4 chunkedBase64

In HTTP, Chunked Transfer Encoding is used when the sender does not know the size of the content being sent, and, to avoid having its buffers overflow, sends the content in chunks with a definite size. A similar method exists in the HTTP over XMPP transport: chunkedBase64 allows the sender to transmit the content in chunks. Every chunk is base-64 encoded. The stream of chunks is identified by a streamId parameter, since chunks from different responses may be transmitted at the same time. Another difference between normal chunked transport and the chunkedBase64 encoding is that the size of chunks does not have to be predetermined. Chunks are naturally delimited and embedded in the XML stanza. The last chunk in a response must have the last attribute set to true.

**Note:** Chunked encoding assumes the content to be finite. If content is infinite (for instance, live streaming), the ibb or jingle transfer encodings must be used instead. If the sender is unsure whether the content is finite or infinite, ibb or jingle must be used.

**Note 2:** If the web server sends chunked data to the client, it uses the HTTP header Transfer-Encoding: chunked, and then sends the data in chunks with chunk sizes inserted, so the receiving end can decode the incoming data. Note that this data will be included in the data sent in the XMPP chunks defined by this document. In this case, data will be chunked twice: first by the web server, and then by the HTTP over XMPP transport layer. When received by the client, it is first reassembled by the HTTP over XMPP layer on the client, and then by the HTTP client, which will read the original chunk size elements inserted into the content. More information about HTTP chunking can be found in RFC 2616 §3.6.1.

**Note 3:** In order to work over XMPP servers that do not maintain message order, a nr attribute is available on the chunk element.
The first chunk reports a nr of zero. Each successive chunk reports a nr that is incremented by one. In this way, the receiver can make sure to order incoming chunks correctly.

#### 4.2.5 sipub

Often the content being sent can be represented by a file, virtual or real, especially if it is not dynamically generated. In these instances, since the content can be potentially huge, a File Stream Initiation is returned instead of embedding the contents in the response, as defined in XEP-0137: Publishing Stream Initiation Requests. This is done using the `sipub` element.

Listing 13: `sipub`

```xml
<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='11'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 03 May 2013 16:39:54 GMT-4</header>
      <header name='Server'>Clayster</header>
      <header name='Content-Type'>image/png</header>
      <header name='Content-Length'>221203</header>
    </headers>
    <data>
      <sipub xmlns='http://jabber.org/protocol/sipub' from='httpserver@example.org' id='file-0001' mime-type='image/png' profile='http://jabber.org/protocol/si/profile/file-transfer'>
        <file xmlns='http://jabber.org/protocol/si/profile/file-transfer' name='Kermit.png' size='221203' date='2013-03-06T16:47Z'/>
      </sipub>
    </data>
  </resp>
</iq>
```

#### 4.2.6 ibb

Some web servers provide streaming content, i.e. content where packets are sent in a timely fashion. Examples are video and audio streams like HLS (HTTP Live Streaming), SHOUTcast, Icecast, Motion JPEG, etc. In all these examples, content is infinite, and cannot be sent "all as quickly as possible". Instead, content is sent according to some kind of bitrate or frame rate. Such content must use the ibb transfer mechanism (or the jingle transfer mechanism).
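Before turning to the stream details, the nr-based reordering described for chunkedBase64 in Section 4.2.4 above can be sketched as follows. This is hypothetical receiver-side logic, not part of the protocol; the tuple representation of the chunk elements is our own:

```python
import base64

def reassemble(chunks):
    # chunks: list of (nr, last, base64_text) tuples as carried by the chunk
    # elements; nr starts at zero and increments by one for each chunk sent.
    ordered = sorted(chunks, key=lambda chunk: chunk[0])
    # Sanity checks: the nr sequence must be gap-free, and the
    # highest-numbered chunk must carry last='true'.
    assert [nr for nr, _, _ in ordered] == list(range(len(ordered)))
    assert ordered[-1][1]
    return b''.join(base64.b64decode(text) for _, _, text in ordered)

# Chunks as they might arrive over a server that does not preserve order:
received = [
    (1, False, base64.b64encode(b' over ').decode('ascii')),
    (2, True,  base64.b64encode(b'XMPP').decode('ascii')),
    (0, False, base64.b64encode(b'HTTP').decode('ascii')),
]
print(reassemble(received))  # → b'HTTP over XMPP'
```

A production receiver would of course deliver chunks incrementally as the next expected nr arrives, rather than buffering the whole stream; the sketch only shows the ordering invariant.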
The ibb transfer mechanism uses In-Band Bytestreams to transfer data from the server to the client. It starts by sending an a ibb element containing a sid attribute identifying the stream. Then the server sends an ibb:open IQ-stanza to the client according to XEP-0047. The client can choose to reject, negotiate or accept the request whereby the transfer is begun. When the client is satisfied and wants to close the stream, it does so, also according to XEP-0047. The sid value returned in the HTTP response is the same sid value that is later used by the IBB messages that follow. In this way, the client can relate the HTTP request and response, with the corresponding data transferred separately. Listing 14: ibb ```xml <iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='12'> <req xmlns='urn:xmpp:http' method='GET' resource='/webcam1.jpg' version='1.1'> <headers xmlns='http://jabber.org/protocol/shim'> <header name='Host'>example.org</header> </headers> </req> </iq> <iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='12'> <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'> <headers xmlns='http://jabber.org/protocol/shim'> <header name='Date'>Fri, 04 May 2013 15:05:32 GMT -4</header> <header name='Server'>Clayster</header> <header name='Content-Type'>multipart/x-mixed-replace; boundary=__2347927492837489237492837</header> </headers> <data> <ibb sid='Stream0002'/> </data> </resp> </iq> <iq type='set' from='httpserver@example.org' to='httpclient@example.org/browser' id='13'> <open xmlns='http://jabber.org/protocol/ibb' block-size='32768' sid='Stream0002' stanza='message'/> </iq> ``` 4 USE CASES 4.2.7 jingle For demanding multi-media streams alternative methods to transport streaming rather than embedded into the XMPP stream may be required. 
Even though the ibb method may be sufficient to stream a low-resolution web cam in the home, or to listen to a microphone or a radio station, it is probably badly suited for high-resolution video streams with multiple video angles and audio channels. If such content is accessed and streamed, the server can negotiate a different way to stream the content using Jingle (XEP-0166). First, the IBB stream from the previous example is delivered and closed:

```xml
<iq type='result' from='httpserver@example.org' to='httpclient@example.org' id='14'/>

<message from='httpserver@example.org' to='httpclient@example.org/browser'>
  <data xmlns='http://jabber.org/protocol/ibb' sid='Stream0002' seq='0'>...</data>
</message>

<iq type='set' from='httpclient@example.org' to='httpserver@example.org' id='14'>
  <close xmlns='http://jabber.org/protocol/ibb' sid='Stream0002'/>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='14'/>
```

A response negotiating a Jingle session instead embeds a jingle element in the data:

Listing 15: jingle

```xml
<headers>
</headers>
<data>
  <jingle xmlns='urn:xmpp:jingle:1'
          action='session-initiate'
          initiator='romeo@montague.lit/orchard'
          sid='a73sjvk1a37jfe4'>
    <content creator='initiator' name='voice'>
      <description xmlns='urn:xmpp:jingle:apps:rtp:1' media='audio'>
        <payload-type id='96' name='speex' clockrate='16000'/>
        <payload-type id='97' name='speex' clockrate='8000'/>
        <payload-type id='18' name='G729'/>
        <payload-type id='0' name='PCMU'/>
        <payload-type id='103' name='L16' clockrate='16000' channels='2'/>
        <payload-type id='98' name='x-ISAC' clockrate='8000'/>
      </description>
      <transport xmlns='urn:xmpp:jingle:transports:ice-udp:1'
                 pwd='asRel8Rgdpd777uzjYhagZg'
                 ufrag='8hhY'>
        <candidate component='1' foundation='1' generation='0' id='el0747fg11'
                   ip='10.0.1.1' network='1' port='8998' priority='2130706431'
                   protocol='udp' type='host'/>
        <candidate component='1' foundation='2' generation='0' id='ys32b30v3r'
                   ip='192.0.2.3' network='1' port='45664' priority='1694498815'
                   protocol='udp' rel-addr='10.0.1.1' rel-port='8998' type='srflx'/>
      </transport>
    </content>
  </jingle>
</data>
```
Note: Example taken from XEP-0166: Jingle.

Note 2: Using Jingle in this way makes it possible for an intelligent server to return multiple streams the client can choose from, something that is not done in normal HTTP over TCP. The first candidate should, however, correspond to the same stream that would have been returned if the request had been made using normal HTTP over TCP.

4.3 Applications

The following section lists use cases based on the type of application. It is used to illustrate what types of applications would benefit from implementing this extension.

4.3.1 Browsers

HTTP began as a protocol for presenting text in browsers, so browsers are a natural place to start listing use cases for this extension. In general, content is identified using URLs: the user enters a URL into the address field of the browser, and the corresponding content is displayed in the display area. The content itself will probably contain links to other content, each such item identified by an absolute or relative URL. The syntax and format of Uniform Resource Locators (URLs) or Uniform Resource Identifiers (URIs) is defined in RFC 3986. The basic format is defined as follows:

Listing 16: URL syntax

```
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]

hier-part = "//" authority path-abempty / path-absolute / path-rootless / path-empty
```

RFC 2616 furthermore defines the format of URLs using the http URI scheme, as follows:

Listing 17: HTTP (over TCP) URL syntax

```
http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
```

RFC 2818 goes on to define the https scheme for HTTP transport over SSL/TLS over TCP, using the same format as for HTTP URLs, except that the https scheme informs the client that HTTP over SSL/TLS is to be used. In a similar way, this document proposes a new URI scheme: `httpx`, based on the HTTP URL scheme, except that `httpx` URLs imply the use of HTTP over XMPP instead of HTTP over TCP.
URLs using the `httpx` URL scheme have the following format:

Listing 18: HTTP over XMPP URL syntax

```
httpx_URL = "httpx:" "//" resourceless_jid [ abs_path [ "?" query ]]
```

Here, the host and port parts of normal HTTP URLs have been replaced by the resource-less JID of the HTTP server, i.e. only the user name, the @ character and the domain. The / separator between the resource-less JID and the following abs_path is part of abs_path.

Listing 19: Examples of URLs with the httpx scheme

```
httpx://httpServer@example.org/index.html
httpx://httpServer@example.org/images/image1.png
httpx://httpServer@example.org/api?p1=a&p2=b
```

By creating a new scheme for HTTP over XMPP transport, and implementing support for it in web browsers, XML HTTP request objects and web servers, web applications previously requiring web hosting on the Internet can instead be seamlessly hosted privately and securely behind firewalls, by simply switching from the http URL scheme to the httpx URL scheme in the calling application. All relative URLs within the application, including URLs sent to the XHR object (Ajax), will automatically be directed to use the HTTP over XMPP transport instead.

It is beyond the scope of this specification to define how browsers handle their own XMPP account(s) and roster. This section only makes a suggestion to show how this can be handled. It is assumed in this discussion that the browser has a working XMPP connection with a server, and has its own JID. For simplicity, we will assume the browser has only one connection; extension to multiple connections is straightforward. When resolving a URL using the httpx scheme, the browser needs to extract the JID of the server hosting the resource. If that JID is already in the roster, the request can proceed as usual. If not in the roster, the browser needs to send a friendship request.
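Extracting the JID that the browser must look up in its roster can be sketched in a few lines. This is an illustrative helper only; the function name is hypothetical and the sketch assumes the resource-less JID occupies the authority component, as in the httpx URL syntax above:

```python
from urllib.parse import urlsplit

def parse_httpx_url(url):
    """Split an httpx URL into (jid, abs_path, query).

    Illustrative only: assumes the resource-less JID sits where
    host[:port] would sit in an http URL, e.g.
    httpx://httpServer@example.org/index.html
    """
    parts = urlsplit(url)
    if parts.scheme != 'httpx':
        raise ValueError('not an httpx URL')
    # urlsplit places 'user@domain' in netloc; keep it whole as the JID.
    jid = parts.netloc
    abs_path = parts.path or '/'
    return jid, abs_path, parts.query

assert parse_httpx_url('httpx://httpServer@example.org/index.html') == \
    ('httpServer@example.org', '/index.html', '')
assert parse_httpx_url('httpx://httpServer@example.org/api?p1=a&p2=b') == \
    ('httpServer@example.org', '/api', 'p1=a&p2=b')
```

The resulting jid is what the browser checks against its roster before issuing the request or sending a friendship request.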
A non-exhaustive list of states could be made:

- No response: This could be presented as a connection to the content server being made.
- Request rejected: This could be handled in the same way as the HTTP Forbidden error.
- Request accepted: Connection made; proceed with fetching content.
- Timeout: If no response to the friendship request has been returned, the browser can choose to time out.

Since XMPP works both ways, the browser can receive friendship requests from the outside world. Any such requests should be displayed to the end user, if there is one, or rejected. For more information, see Roster Handling in web clients and Roster Handling in web servers.

Today, most people who want to host their own web applications (HTML/HTTP based applications) need to host them on a server publicly available on the Internet. However, many applications of a private nature, like a family blog or a home automation system, are not suited for public hosting, since it puts all private data at risk of being compromised, or lets access to home security functions (like home webcams) fall into the hands of people you do not want to have it. To solve this, one can host the application on a server at home, perhaps a small, cheap plug computer consuming as little as 1 or 2 watts of electricity, using a web server supporting this extension. If the following design rules are followed, the application should be viewable in any browser also supporting this extension, as long as friendship exists between the browser and the web server:

- Only relative URLs are used within references (images, audio, video, links, objects, etc.). If absolute URLs are used (including scheme), the browser might get the first page correctly, but will be unable to get the content with the absolute URL, unless the URL has the same scheme as the principal page.
- URLs to web forms must also be relative, for the same reason.
- Any URLs sent to the XML HTTP Request (XHR) object directed to APIs or resources hosted by the same application must also be relative, for the same reasons as above. The XHR object supports relative URLs.

If the above rules are met, which they should be under normal conditions, typing the httpx URL into the browser (for instance when you are at the office) should display the application (hosted, for example, at home behind a firewall) in the same way as when you use http (or https) with direct access to the server (for instance when you are home), as long as friendship exists between the browser JID and the server JID.

4.3.2 Web Services

Many applications use a Service Oriented Architecture (SOA) and use web services to communicate between clients and servers. These web services are mostly based on HTTP over TCP, even though there are bindings that are not. The most common APIs today (REST) are, however, all based on HTTP over TCP. Being based on HTTP over TCP requires the web server hosting the web services to be either public or directly accessible by the client. But as the services move closer to end users (for instance a thermostat publishing a REST API for control in your home), problems arise when you try to access the web service outside of the private network in which the API is available. As explained previously, the use of HTTP over XMPP solves this.
The following example shows a simple SOAP method call:

Listing 20: SOAP method call

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.com' id='15'>
  <req xmlns='urn:xmpp:http' method='POST' resource='/Math' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>www.example.com</header>
      <header name='Content-Type'>application/soap+xml; charset=utf-8</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <xml>
        <soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
                       soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
          <soap:Body xmlns:m="http://www.example.org/math">
            <m:AddNumbers>
              <m:N1>10</m:N1>
              <m:N2>20</m:N2>
            </m:AddNumbers>
          </soap:Body>
        </soap:Envelope>
      </xml>
    </data>
  </req>
</iq>

<iq type='result' from='httpserver@example.com' to='httpclient@example.org/browser' id='15'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Content-Type'>application/soap+xml; charset=utf-8</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <xml>
        <soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
                       soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
          <soap:Body xmlns:m="http://www.example.org/math">
            <m:AddNumbersResponse>
              <m:Sum>30</m:Sum>
            </m:AddNumbersResponse>
          </soap:Body>
        </soap:Envelope>
      </xml>
    </data>
  </resp>
</iq>
```

Note: Other components of SOAP, such as WSDL and disco documents, are just examples of content handled by simple GET requests.

This section shows an example of a REST method call. REST method calls are just simple GET, POST, PUT or DELETE HTTP calls with dynamically generated content.
Listing 21: REST

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='16'>
  <req xmlns='urn:xmpp:http' method='GET' resource='/api/multiplicationtable?m=5' version='1.1'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
    </headers>
  </req>
</iq>

<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='16'>
  <resp xmlns='urn:xmpp:http' version='1.1' statusCode='200' statusMessage='OK'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Date'>Fri, 05 May 2013 15:01:53 GMT -4</header>
      <header name='Server'>Clayster</header>
      <header name='Content-Type'>text/xml</header>
      <header name='Content-Length'>...</header>
    </headers>
  </resp>
</iq>
```

4.3.3 Semantic Web & IoT

The Semantic Web was originally developed as a way to link data between servers on the Web, and to understand it. However, with the advent of technologies such as SPARQL, the Semantic Web has become a way to unify APIs into a universal form of distributed API to all types of data. It also allows for a standardized way to perform grid computing, in the sense that queries can be federated and executed in a distributed fashion ("in the grid"). For these reasons, among others, semantic web technologies have been moving closer to the Internet of Things, and also into the private spheres of their end users. Since semantic web technologies are based on HTTP, they also suffer from the shortcomings of HTTP over TCP when it comes to firewalls and user authentication and authorization. Allowing HTTP transport over XMPP greatly improves the reach of semantic technologies beyond "the Internet", while at the same time improving security and controllability of the information.
As the semantic web moves closer to the Internet of Things and the world of XMPP, it can benefit from work done in relation to the Internet of Things, such as Internet of Things - Provisioning (XEP-0324), which would give automatic control of who (or what) can communicate with whom (or what).

Turtle is a simple way to represent semantic data. The following example shows Turtle-encoded semantic data being returned to the client as a response to a request.

Listing 22: Turtle

```xml
<iq type='result' from='httpserver@example.org'
</iq>
```

Turtle: Terse RDF Triple Language <http://www.w3.org/TR/turtle/>

RDF is another way to represent semantic data, better suited than Turtle for M2M communication. Related technologies, such as the micro format RDFa, allow for embedding RDF into HTML pages or XML documents. The following example shows RDF-encoded semantic data being returned to the client as a response to a request.

Listing 23: RDF

RDFa: RDF through attributes <http://www.w3.org/TR/rdfa-syntax/>

This section shows an example of a SPARQL query executed as a POST call.

Listing 24: SPARQL

```xml
<iq type='set' from='httpclient@example.org/browser' to='httpserver@example.org' id='4'>
  <req xmlns='urn:xmpp:http' method='POST' version='1.1'
       resource='/sparql/?default-graph-uri=http%3A%2F%2Fanother.example%2Fcalendar.rdf'>
    <headers xmlns='http://jabber.org/protocol/shim'>
      <header name='Host'>example.org</header>
      <header name='User-agent'>Clayster HTTP/XMPP Client</header>
      <header name='Content-Type'>application/sparql-query</header>
      <header name='Content-Length'>...</header>
    </headers>
    <data>
      <text>@prefix dc: &lt;http://purl.org/dc/elements/1.1/&gt;.
&lt;xep&gt; dc:title "HTTP over XMPP";
  dc:creator &lt;PeterWaher&gt;;
  dc:publisher &lt;XSF&gt;.
</text>
    </data>
  </req>
</iq>

<iq type='result'...
```

4.3.4 Streaming

There are many types of streams and streaming protocols. Several of these are based on HTTP or variants simulating HTTP.
Examples of such HTTP-based or pseudo-HTTP-based streaming protocols include HLS, used for multi-media streaming; SHOUTcast, used for internet radio; and Motion JPEG, a common format for web cameras. Common to all streaming data is that it is indefinite, but at the same time rate-limited, depending on quality, etc. Because of this, the web server is required to use the ibb encoding or the jingle encoding to transport the content to the client.

Motion JPEG <http://en.wikipedia.org/wiki/Motion_JPEG>

5 Determining Support

If an entity supports the protocol specified herein, it MUST advertise that fact by returning a feature of "urn:xmpp:http" in response to Service Discovery (XEP-0030) information requests.

Listing 25: Service discovery information request

```xml
<iq type='get' from='httpclient@example.org/browser' to='httpserver@example.org' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```

Listing 26: Service discovery information response

```xml
<iq type='result' from='httpserver@example.org' to='httpclient@example.org/browser' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    ...
    <feature var='urn:xmpp:http'/>
    ...
  </query>
</iq>
```

In order for an application to determine whether an entity supports this protocol, where possible it SHOULD use the dynamic, presence-based profile of service discovery defined in Entity Capabilities (XEP-0115). However, if an application has not received entity capabilities information from an entity, it SHOULD use explicit service discovery instead.

6 Implementation Notes

6.1 Connection handling

HTTP over TCP includes headers for connection handling. The basic sequence for an HTTP request might be:

• Client connects to server
• Client sends request
• Client receives response
• Client closes connection

However, in the HTTP over XMPP case, there are no connections between the client and the server.
Both clients and servers have active connections to the XMPP server, but these remain unchanged during the sequence of requests. Therefore, both clients and servers should ignore any HTTP over TCP connection settings, since they have no meaning in the HTTP over XMPP case. However, the corresponding headers should always be transported as-is, to maintain the information.

6.2 HTTP Headers

HTTP headers are serialized to and from XML using Stanza Headers and Internet Metadata (XEP-0131). However, this does not mean that the SHIM feature needs to be published by the client, as defined in §3 of XEP-0131, since the headers will be embedded into the HTTP elements. Also, if there are any conflicts in how to store header values, when it comes to data types, etc., the original format as used by the original HTTP request must be used, and not the format defined in Header Definitions or A Note About date-Related Headers in XEP-0131. The HTTP over XMPP tunnel is just a tunnel of HTTP over XMPP; it does not know the semantic meaning of the headers used in the transport. It does not know whether additional headers added by the myriad of custom applications using HTTP are actually HTTP-compliant. It just acts as a transport, returning the same kind of response (being deterministic) as if the original request had been made through other means, for example over TCP. It does not add, remove or change the semantic meaning of keys and values, nor change the format of the corresponding values. Such changes would create uncountable problems, very difficult to detect and solve in the general case. This specification differs from XEP-0131 in that here the headers are consumed by web servers and web clients (the XMPP client being only a "dumb" gateway), while in XEP-0131 the headers are consumed by the XMPP clients themselves, which know XML and XML formats.

6.3 Stanza Sizes

Some XMPP servers may limit stanza sizes for various reasons.
While this may work well for certain applications, like instant messaging and chat, implementors of HTTP over XMPP need to be aware that some servers have such stanza size restrictions. Therefore, an implementation should include configurable size limits, so chunking can be used instead of sending large stanzas. Another limit could govern when streaming should be used instead of chunking. This latter limit should be applied in particular to audio and video content. The implementor should also consider sending large content in the form of files using file transfer, and large multi-media content using Jingle.

Note: According to RFC 6120, there is a smallest allowed maximum stanza size that all XMPP servers must support. According to §13.12.4 of that document, this limit is set to 10000 bytes, including all characters from the opening < character to the closing > character.

6.4 Bandwidth Limitations

Some XMPP servers may also have bandwidth restrictions enforced, to limit the possibility of Denial of Service attacks or similar flooding of messages. Implementors of the HTTP over XMPP extension must know, however, that bandwidth limitations suitable for instant messaging and chat may be completely different from those of normal web applications. In chatting, a 1000 bytes/s limit is in most cases sufficient, while the same limit for even a modest web application will make the application completely impossible to use.

7 Security Considerations

It is beyond the scope of this document to define how HTTP clients or HTTP servers handle rosters internally. The following sections list suggestions on how these can be handled by different parties.

7.1 Roster handling in browsers

Since browsers are operated by end users, any friendship request received from the outside should either be shown to the user (if the browser also maintains an IM client) or automatically rejected.
On the other hand, when the browser wants to access a URL using the httpx scheme, an automatic friendship request to the corresponding JID should be made, if it is not already in the roster. It is assumed that entering the URL, or using the URL of an application already displayed, implies giving permission to add that JID as a friend to the roster of the browser.

7.2 Roster handling in web servers

A web server should have different security settings available. The following subsections list possible settings for different scenarios. Note that these settings only reflect roster handling and cannot be set per resource. However, the server can maintain a set of JIDs with different settings and restrict access to parts of the content hosted by the server per JID.

7.2.1 Public Server

A public server should accept requests from anybody (reachable from the current JID). All friendship requests should be automatically accepted. To avoid bloating the roster, friendship requests could be automatically unsubscribed once the HTTP session has ended.

7.2.2 Manual Server

All new friendship requests are shown (or queued) to an administrator for manual acceptance or rejection. Once accepted, the client can access the corresponding content. During the wait (which can be substantial), the client should display a message that the friendship request has been sent and a response is pending. Automatic unsubscription of friendships should only be done on a much longer inactivity timeframe than the normal session timeout interval.

7.2.3 Private Server

All new friendship requests are automatically rejected. Only already accepted friendships are allowed to make HTTP requests to the server.

7.2.4 Provisioned Server

All new friendship requests are delegated to a trusted third party, according to XEP-0324: Internet of Things - Provisioning. Friendship acceptance or rejection is then performed according to the response from the provisioning server(s).
Automatic friendship unsubscription can be used to avoid bloating the roster. However, the time interval for unsubscribing inactive users should be longer than the normal session timeout period, to avoid spamming any provisioning servers each time a client requests friendship.

8 IANA Considerations

The httpx URL scheme, as described above, must be registered as a provisional URI scheme according to BCP 35. The registration procedure is specified in BCP 35, section 7.

8.1 URI Scheme Registration Template

Following is a URI Scheme Registration Template, as per BCP 35, section 7.4.

**URI scheme name** httpx

**Status** provisional

**URI scheme syntax** The syntax used for the httpx scheme reuses the URI scheme syntax for the http scheme, as defined in RFC 2616:

```
http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
```

Instead of using host and port to define where the HTTP server resides, the httpx scheme uses a resource-less XMPP JID to define where the HTTP server resides, implying the use of HTTP over XMPP as defined in this document instead of HTTP over TCP:

```
httpx_URL = "httpx:" "//" resourceless_jid [ abs_path [ "?" query ]]
```

Here, the host and port parts of normal HTTP URLs have been replaced by the resource-less JID of the HTTP server, i.e. only the user name, the @ character and the domain. The / separator between the resource-less JID and the following abs_path is part of abs_path.

```
httpx://httpServer@example.org/index.html
httpx://httpServer@example.org/images/image1.png
```

**URI scheme semantics** By creating a new scheme for HTTP over XMPP transport, and implementing support for it in web browsers, XML HTTP request objects and web servers, web applications previously requiring web hosting on the Internet can instead be seamlessly hosted privately and securely behind firewalls, by simply switching from the http URL scheme to the httpx URL scheme in the calling application.
All relative URLs within the application, including URLs sent to the XHR object (Ajax), will automatically be directed to use the HTTP over XMPP transport instead.

**Encoding considerations** Encoding is fully described in the Encoding section of the HTTP over XMPP XEP document.

**Applications/protocols that use this URI scheme name** This URI scheme is to be used in the same environments and by the same types of applications as the http URI scheme.

**Interoperability considerations** Interoperability considerations are described in the Implementation Notes section of the HTTP over XMPP XEP document.

**Security considerations** Security considerations are described in the Security Considerations section of the HTTP over XMPP XEP document.

**Contact** For further information, please contact Peter Waher:

Email: peter.waher@example.org
JabberID: peter.waher@jabber.org
URI: http://www.linkedin.com/in/peterwaher

**Author/Change controller** For concerns regarding changes to the provisional registration, please contact Peter Waher:

Email: peter.waher@example.org
JabberID: peter.waher@jabber.org
URI: http://www.linkedin.com/in/peterwaher

**References** Following is a short list of referenced documents:

RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1
HTTP over XMPP XEP: XMPP Extension Protocol: HTTP over XMPP

9 XMPP Registrar Considerations

The protocol schema needs to be added to the list of XMPP protocol schemas.
10 XML Schema

```xml
<?xml version='1.0' encoding='UTF-8'?>
<xs:schema
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    targetNamespace='urn:xmpp:http'
    xmlns='urn:xmpp:http'
    xmlns:shim='http://jabber.org/protocol/shim'
    xmlns:sipub='http://jabber.org/protocol/sipub'
    xmlns:ibb='http://jabber.org/protocol/ibb'
    xmlns:jingle='urn:xmpp:jingle:1'
    elementFormDefault='qualified'>

  <xs:import namespace='http://jabber.org/protocol/shim'/>
  <xs:import namespace='http://jabber.org/protocol/sipub'/>
  <xs:import namespace='http://jabber.org/protocol/ibb'/>
  <xs:import namespace='urn:xmpp:jingle:1'/>

  <xs:element name='req'>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='shim:headers' minOccurs='0' maxOccurs='1'/>
        <xs:element name='data' type='Data' minOccurs='0' maxOccurs='1'/>
      </xs:sequence>
      <xs:attribute name='method' type='Method' use='required'/>
      <xs:attribute name='resource' type='xs:string' use='required'/>
      <xs:attribute name='version' type='Version' use='required'/>
      <xs:attribute name='maxChunkSize' type='MaxChunkSize' use='optional'/>
      <xs:attribute name='sipub' type='xs:boolean' use='optional' default='true'/>
      <xs:attribute name='ibb' type='xs:boolean' use='optional' default='true'/>
      <xs:attribute name='jingle' type='xs:boolean' use='optional' default='true'/>
    </xs:complexType>
  </xs:element>

  <xs:simpleType name='MaxChunkSize'>
    <xs:restriction base='xs:int'>
      <xs:minInclusive value='256'/>
      <xs:maxInclusive value='65536'/>
    </xs:restriction>
  </xs:simpleType>

  <xs:element name='resp'>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='shim:headers' minOccurs='0' maxOccurs='1'/>
        <xs:element name='data' type='Data' minOccurs='0' maxOccurs='1'/>
      </xs:sequence>
      <xs:attribute name='version' type='Version' use='required'/>
      <xs:attribute name='statusCode' type='xs:positiveInteger' use='required'/>
      <xs:attribute name='statusMessage' type='xs:string' use='optional'/>
    </xs:complexType>
  </xs:element>

  <xs:complexType name='Data'>
    <xs:choice minOccurs='1' maxOccurs='1'>
      <xs:element name='text' type='xs:string'>
        <xs:annotation>
          <xs:documentation>Used for text responses that are not XML.</xs:documentation>
        </xs:annotation>
      </xs:element>
      <xs:element name='xml'>
        <xs:annotation>
          <xs:documentation>Specifically used for XML-formatted responses.</xs:documentation>
        </xs:annotation>
      </xs:element>
      <xs:element name='base64'>
        <xs:annotation>
          <xs:documentation>Short binary responses, base-64 encoded.</xs:documentation>
        </xs:annotation>
      </xs:element>
      <xs:element name='chunkedBase64'>
        <xs:complexType>
          <xs:attribute name='streamId' type='xs:string' use='required'/>
        </xs:complexType>
      </xs:element>
      <xs:element ref='sipub:sipub'>
        <xs:annotation>
          <xs:documentation>Content available through file transfer.</xs:documentation>
        </xs:annotation>
      </xs:element>
      <xs:element name='ibb'>
        <xs:annotation>
          <xs:documentation>Content returned through an in-band bytestream.</xs:documentation>
        </xs:annotation>
        <xs:complexType>
          <xs:attribute name='sid' type='xs:string' use='required'/>
        </xs:complexType>
      </xs:element>
      <xs:element ref='jingle:jingle'>
        <xs:annotation>
          <xs:documentation>Multi-media content returned through jingle.</xs:documentation>
        </xs:annotation>
      </xs:element>
    </xs:choice>
  </xs:complexType>

  <xs:element name='chunk'>
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base='xs:base64Binary'>
          <xs:attribute name='streamId' type='xs:string' use='required'/>
          <xs:attribute name='nr' type='xs:nonNegativeInteger' use='required'/>
          <xs:attribute name='last' type='xs:boolean' use='optional' default='false'/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>

  <xs:element name='close'>
    <xs:complexType>
      <xs:attribute name='streamId' type='xs:string' use='required'/>
    </xs:complexType>
  </xs:element>

  <xs:simpleType name='Method'>
    <xs:restriction base='xs:string'>
      <xs:enumeration value='OPTIONS'/>
      <xs:enumeration value='GET'/>
      <xs:enumeration value='HEAD'/>
      <xs:enumeration value='POST'/>
      <xs:enumeration value='PUT'/>
      <xs:enumeration value='DELETE'/>
      <xs:enumeration value='TRACE'/>
      <xs:enumeration value='PATCH'/>
    </xs:restriction>
  </xs:simpleType>

  <xs:simpleType name='Version'>
    <xs:restriction base='xs:string'>
      <xs:pattern value='\d[.]\d'/>
    </xs:restriction>
  </xs:simpleType>

</xs:schema>
```

11 Acknowledgements

Thanks to Peter Saint-Andre, Karin Forsell, Matthew A. Miller, Kevin Smith and Ralph Meijer for all valuable feedback.
The Graphplan Planner Searching the Planning Graph Literature Neoclassical Planning - concerned with restricted state-transition systems - representation is usually restricted to propositional STRIPS - neoclassical vs. classical planning - classical planning: search space consists of nodes containing partial plans - neoclassical planning: nodes can be seen as sets of partial plans - resulted in significant speed-up and revival of planning research Overview - The Propositional Representation - The Planning-Graph Structure - The Graphplan Algorithm Classical Representations - **propositional representation** - world state is set of propositions - action consists of precondition propositions, propositions to be added and removed - **STRIPS representation** - like propositional representation, but first-order literals instead of propositions - **state-variable representation** - state is tuple of state variables \( \{x_1, \ldots, x_n\} \) - action is partial function over states Propositional Planning Domains - Let \( L = \{p_1, \ldots, p_n\} \) be a finite set of proposition symbols. A propositional planning domain on \( L \) is a restricted state-transition system \( \Sigma = (S, A, \gamma) \) such that: - \( S \subseteq 2^L \), i.e. each state \( s \) is a subset of \( L \) - \( A \subseteq 2^L \times 2^L \times 2^L \), i.e. 
each action \( a \) is a triple \((\text{precond}(a), \text{effects}^-(a), \text{effects}^+(a))\) where \( \text{effects}^-(a) \) and \( \text{effects}^+(a) \) must be disjoint - \( \gamma : S \times A \rightarrow 2^L \) where - \( \gamma(s, a) = (s - \text{effects}^-(a)) \cup \text{effects}^+(a) \) if \( \text{precond}(a) \subseteq s \) - \( \gamma(s, a) = \text{undefined} \) otherwise - \( S \) is closed under \( \gamma \) DWR Example: State Space DWR Example: Propositional States - \( L = \{ \text{onpallet}, \text{onrobot}, \text{holding}, \text{at1}, \text{at2} \} \) - \( S = \{ s_0, \ldots, s_5 \} \) - \( s_0 = \{ \text{onpallet}, \text{at2} \} \) - \( s_1 = \{ \text{holding}, \text{at2} \} \) - \( s_2 = \{ \text{onpallet}, \text{at1} \} \) - \( s_3 = \{ \text{holding}, \text{at1} \} \) - \( s_4 = \{ \text{onrobot}, \text{at1} \} \) - \( s_5 = \{ \text{onrobot}, \text{at2} \} \) ### DWR Example: Propositional Actions <table> <thead> <tr> <th>$a$</th> <th>precond($a$)</th> <th>effects$^-(a)$</th> <th>effects$^+(a)$</th> </tr> </thead> <tbody> <tr> <td>take</td> <td>{onpallet}</td> <td>{onpallet}</td> <td>{holding}</td> </tr> <tr> <td>put</td> <td>{holding}</td> <td>{holding}</td> <td>{onpallet}</td> </tr> <tr> <td>load</td> <td>{holding,at1}</td> <td>{holding}</td> <td>{onrobot}</td> </tr> <tr> <td>unload</td> <td>{onrobot,at1}</td> <td>{onrobot}</td> <td>{holding}</td> </tr> <tr> <td>move1</td> <td>{at2}</td> <td>{at2}</td> <td>{at1}</td> </tr> <tr> <td>move2</td> <td>{at1}</td> <td>{at1}</td> <td>{at2}</td> </tr> </tbody> </table> ### DWR Example: Propositional State Transitions <table> <thead> <tr> <th></th> <th>$s_0$</th> <th>$s_1$</th> <th>$s_2$</th> <th>$s_3$</th> <th>$s_4$</th> <th>$s_5$</th> </tr> </thead> <tbody> <tr> <td>take</td> <td>$s_1$</td> <td></td> <td>$s_3$</td> <td></td> <td></td> <td></td> </tr> <tr> <td>put</td> <td></td> <td>$s_0$</td> <td></td> <td>$s_2$</td> <td></td> <td></td> </tr> <tr>
<td>load</td> <td></td> <td></td> <td></td> <td>$s_4$</td> <td></td> <td></td> </tr> <tr> <td>unload</td> <td></td> <td></td> <td></td> <td></td> <td>$s_3$</td> <td></td> </tr> <tr> <td>move1</td> <td>$s_2$</td> <td>$s_3$</td> <td></td> <td></td> <td></td> <td>$s_4$</td> </tr> <tr> <td>move2</td> <td></td> <td></td> <td>$s_0$</td> <td>$s_1$</td> <td>$s_5$</td> <td></td> </tr> </tbody> </table> Propositional Planning Problems • A propositional planning problem is a triple \( P=(\Sigma,s_i,g) \) where: - \( \Sigma=(S,A,\gamma) \) is a propositional planning domain on \( L=\{p_1,\ldots,p_n\} \) - \( s_i \in S \) is the initial state - \( g \subseteq L \) is a set of goal propositions that define the set of goal states \( S_g=\{s \in S \mid g \subseteq s\} \) DWR Example: Propositional Planning Problem • \( \Sigma \): propositional planning domain for DWR domain • \( s_i \): any state - example: initial state = \( s_0 \in S \) • \( g \): any subset of \( L \) - example: \( g=\{\text{onrobot,at2}\} \), i.e. \( S_g=\{s_5\} \) Classical Plans - A plan is any sequence of actions \( \pi = \langle a_1, \ldots, a_k \rangle \), where \( k \geq 0 \). - The length of plan \( \pi \) is \( |\pi| = k \), the number of actions. - If \( \pi_1 = \langle a_1, \ldots, a_k \rangle \) and \( \pi_2 = \langle a'_1, \ldots, a'_j \rangle \) are plans, then their concatenation is the plan \( \pi_1 \cdot \pi_2 = \langle a_1, \ldots, a_k, a'_1, \ldots, a'_j \rangle \). - The extended state transition function for plans is defined as follows: - \( \gamma(s, \pi) = s \) if \( k = 0 \) (\( \pi \) is empty) - \( \gamma(s, \pi) = \gamma(\gamma(s, a_1), \langle a_2, \ldots, a_k \rangle) \) if \( k > 0 \) and \( a_1 \) applicable in \( s \) - \( \gamma(s, \pi) = \text{undefined} \) otherwise Classical Solutions - Let \( \mathcal{P} = (\Sigma, s_i, g) \) be a propositional planning problem. A plan \( \pi \) is a solution for \( \mathcal{P} \) if \( g \subseteq \gamma(s_i, \pi) \).
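The transition function \( \gamma \) is directly executable; a minimal Python sketch (the encoding and names are mine, not from the slides) of the one-robot DWR actions:

```python
# Propositional DWR domain from the tables above.
# An action is a triple (precond, effects_minus, effects_plus) of frozensets.
def action(pre, minus, plus):
    return (frozenset(pre), frozenset(minus), frozenset(plus))

actions = {
    "take":   action({"onpallet"},       {"onpallet"}, {"holding"}),
    "put":    action({"holding"},        {"holding"},  {"onpallet"}),
    "load":   action({"holding", "at1"}, {"holding"},  {"onrobot"}),
    "unload": action({"onrobot", "at1"}, {"onrobot"},  {"holding"}),
    "move1":  action({"at2"},            {"at2"},      {"at1"}),
    "move2":  action({"at1"},            {"at1"},      {"at2"}),
}

def gamma(s, a):
    """Transition function: (s - effects^-) ∪ effects^+, only if precond ⊆ s."""
    pre, minus, plus = a
    if not pre <= s:
        return None  # γ(s, a) undefined
    return (s - minus) | plus

s0 = frozenset({"onpallet", "at2"})
print(sorted(gamma(s0, actions["take"])))   # ['at2', 'holding'], i.e. s1
print(gamma(s0, actions["load"]))           # None: load not applicable in s0
```

Evaluating `gamma` for each state/action pair reproduces the state-transition table entry by entry.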
- A solution \( \pi \) is redundant if a proper subsequence of \( \pi \) is also a solution for \( \mathcal{P} \). - \( \pi \) is minimal if no other solution for \( \mathcal{P} \) contains fewer actions than \( \pi \). ### DWR Example: Plans and Solutions

| plan $\pi$ | $|\pi|$ | $\gamma(s_0,\pi)$ | sol. | red. | min. |
|---|---|---|---|---|---|
| $\langle \rangle$ | 0 | $s_0$ | no | - | - |
| $\langle \text{move2}, \text{move2} \rangle$ | 2 | undef. | no | - | - |
| $\langle \text{take}, \text{move1} \rangle$ | 2 | $s_3$ | no | - | - |
| $\langle \text{take}, \text{move1}, \text{put}, \text{move2}, \text{take}, \text{move1}, \text{load}, \text{move2} \rangle$ | 8 | $s_5$ | yes | yes | no |
| $\langle \text{take}, \text{move1}, \text{load}, \text{move2} \rangle$ | 4 | $s_5$ | yes | no | yes |
| $\langle \text{move1}, \text{take}, \text{load}, \text{move2} \rangle$ | 4 | $s_5$ | yes | no | yes |

### Reachable Successor States - The successor function $\Gamma : 2^S \rightarrow 2^S$ and its iterates $\Gamma^m$ for a propositional domain $\Sigma = (S, A, \gamma)$ are defined as: - $\Gamma(s) = \{ \gamma(s, a) \mid a \in A \text{ and } a \text{ applicable in } s \}$ for $s \in S$ - $\Gamma(\{s_1, \ldots, s_n\}) = \bigcup_{k \in \{1, \ldots, n\}} \Gamma(s_k)$ - $\Gamma^0(\{s_1, \ldots, s_p\}) = \{s_1, \ldots, s_p\}$ - $\Gamma^m(\{s_1, \ldots, s_p\}) = \Gamma(\Gamma^{m-1}(\{s_1, \ldots, s_p\}))$ - The transitive closure of $\Gamma$ defines the set of all reachable states: - $\Gamma^{>}(s) = \bigcup_{k \in \{0, 1, \ldots\}} \Gamma^k(\{s\})$ for $s \in S$ Relevant Actions and Regression Sets • Let $\mathcal{P} = (\Sigma, s, g)$ be a propositional planning problem. An action $a \in A$ is relevant for $g$ if • $g \cap \text{effects}^+(a) \neq \emptyset$ and • $g \cap \text{effects}^-(a) = \emptyset$.
• The regression set of $g$ for a relevant action $a \in A$ is: • $\gamma^{-1}(g, a) = (g - \text{effects}^+(a)) \cup \text{precond}(a)$ • note: $\gamma(s, a) \in S_g$ iff $\gamma^{-1}(g, a) \subseteq s$ Regression Function • The regression function $\Gamma^{-1}$ and its iterates $\Gamma^{-m}$ for a propositional domain $\Sigma = (S, A, \gamma)$ on $L$ are defined as: • $\Gamma^{-1}(g) = \{ \gamma^{-1}(g, a) \mid a \in A \text{ is relevant for } g \}$ for $g \in 2^L$ • $\Gamma^{-0}(\{g_1, \ldots, g_n\}) = \{ g_1, \ldots, g_n \}$ • $\Gamma^{-1}(\{g_1, \ldots, g_n\}) = \bigcup_{k \in \{1, \ldots, n\}} \Gamma^{-1}(g_k)$ • $\Gamma^{-m}(\{g_1, \ldots, g_n\}) = \Gamma^{-1}(\Gamma^{-(m-1)}(\{g_1, \ldots, g_n\}))$ • The transitive closure of $\Gamma^{-1}$ defines the set of all regression sets: • $\Gamma^{<}(g) = \bigcup_{k \in \{0, 1, \ldots\}} \Gamma^{-k}(\{g\})$ for $g \in 2^L$ Statement of a Propositional Planning Problem - A statement of a propositional planning problem is a triple $P = (A, s_i, g)$ where: - $A$ is a set of actions in an appropriate propositional planning domain $\Sigma = (S, A, \gamma)$ on $L$ - $s_i$ is the initial state in an appropriate propositional planning problem $P = (\Sigma, s_i, g)$ - $g$ is a set of goal propositions in the same propositional planning problem $P$ Example: Ambiguity in Statement of a Planning Problem - statement: $P = (\{a_1\}, s_i, g)$ where $a_1 = (\{p_1\}, \{p_1\}, \{p_2\})$, $s_i = \{p_1\}$, and $g = \{p_2\}$ - $\Sigma_1 =$ - $\{\{p_1\}, \{p_2\}\}$ - $\{a_1\}$ - $(\{p_1\}, a_1) \rightarrow \{p_2\}$ on $L_1 = \{p_1, p_2\}$ - $\Sigma_2 =$ - $\{\{p_1\}, \{p_2\}, \{p_1, p_3\}, \{p_2, p_3\}\}$ - $\{a_1\}$ - $(\{p_1\}, a_1) \rightarrow \{p_2\}$, $(\{p_1, p_3\}, a_1) \rightarrow \{p_2, p_3\}$ on $L_2 = \{p_1, p_2, p_3\}$ Statement Ambiguity - **Proposition**: Let $\mathcal{P}_1$ and $\mathcal{P}_2$ be two propositional planning problems that have the same statement.
Then both $\mathcal{P}_1$ and $\mathcal{P}_2$ have - the same set of reachable states $\Gamma^{>}(\{s_i\})$ and - the same set of solutions. Properties of the Propositional Representation - **Expressiveness**: For every propositional planning domain there is a corresponding state-transition system, but what about vice versa? - **Conciseness**: the propositional action representation is concise because it does not mention what does not change - **Consistency**: not every assignment of truth values to propositions must correspond to a state in the underlying state-transition system Grounding a STRIPS Planning Problem - Let $P=(O,s_i,g)$ be the statement of a STRIPS planning problem and $C$ the set of all the constant symbols that are mentioned in $s_i$. Let $\text{ground}(O)$ be the set of all possible instantiations of operators in $O$ with constant symbols from $C$ consistently replacing variables in preconditions and effects. - Then $P'=(\text{ground}(O),s_i,g)$ is a statement of a STRIPS planning problem and $P'$ has the same solutions as $P$. Translation: Propositional Representation to Ground STRIPS - Let $P=(A,s_i,g)$ be a statement of a propositional planning problem. In the actions $A$: - replace every action $(\text{precond}(a), \text{effects}^-(a), \text{effects}^+(a))$ with an operator $o$ with - some unique name($o$), - $\text{precond}(o) = \text{precond}(a)$, and - $\text{effects}(o) = \text{effects}^+(a) \cup \{\neg p \mid p \in \text{effects}^-(a)\}$. Translation: Ground STRIPS to Propositional Representation - Let \( P = (O, s, g) \) be a ground statement of a classical planning problem. - In the operators \( O \), in the initial state \( s \), and in the goal \( g \) replace every atom \( P(v_1, \ldots, v_n) \) with a propositional atom \( P_{v_1, \ldots, v_n} \). - In every operator \( o \): for all \( \neg p \) in \( \text{precond}(o) \), replace \( \neg p \) with \( p' \).
- if \( p \) in \( \text{effects}(o) \), add \( \neg p' \) to \( \text{effects}(o) \). - if \( \neg p \) in \( \text{effects}(o) \), add \( p' \) to \( \text{effects}(o) \). - In the goal replace \( \neg p \) with \( p' \). - For every operator \( o \) create an action \( (\text{precond}(o), \text{effects}^{-}(a), \text{effects}^{+}(a)) \). Overview - The Propositional Representation - The Planning-Graph Structure - The Graphplan Algorithm Example: Simplified DWR Problem - robots can load and unload autonomously - locations may contain unlimited number of robots and containers - problem: swap locations of containers Simplified DWR Problem: STRIPS Actions - move($r,l,l'$) - precond: at($r,l$), adjacent($l,l'$) - effects: at($r,l'$), ¬at($r,l$) - load($c,r,l$) - precond: at($r,l$), in($c,l$), unloaded($r$) - effects: loaded($r,c$), ¬in($c,l$), ¬unloaded($r$) - unload($c,r,l$) - precond: at($r,l$), loaded($r,c$) - effects: unloaded($r$), in($c,l$), ¬loaded($r,c$) **Simplified DWR Problem: State Proposition Symbols** - **robots:** - \( r1 \) and \( r2 \): \( \text{at(robr,loc1)} \) and \( \text{at(robr,loc2)} \) - \( q1 \) and \( q2 \): \( \text{at(robq,loc1)} \) and \( \text{at(robq,loc2)} \) - \( ur \) and \( uq \): \( \text{unloaded(robr)} \) and \( \text{unloaded(robq)} \) - **containers:** - \( a1, a2, ar, \) and \( aq \): \( \text{in(conta,loc1)} \), \( \text{in(conta,loc2)} \), \( \text{loaded(conta,robr)} \), and \( \text{loaded(conta,robq)} \) - \( b1, b2, br, \) and \( bq \): \( \text{in(contb,loc1)} \), \( \text{in(contb,loc2)} \), \( \text{loaded(contb,robr)} \), and \( \text{loaded(contb,robq)} \) - **initial state:** \( \{r1, q2, a1, b2, ur, uq\} \) **Simplified DWR Problem: Action Symbols** - **move actions:** - \( Mr12 \): \( \text{move(robr,loc1,loc2)} \), \( Mr21 \): \( \text{move(robr,loc2,loc1)} \), \( Mq12 \): \( \text{move(robq,loc1,loc2)} \), \( Mq21 \): \( \text{move(robq,loc2,loc1)} \) - **load actions:** - \( Lar1 \): \( 
\text{load(conta,robr,loc1)} \); \( Lar2, Laq1, Laq2, Lbr1, Lbr2, Lbq1, \) and \( Lbq2 \) correspondingly - **unload actions:** - \( Uar1 \): \( \text{unload(conta,robr,loc1)} \); \( Uar2, Uaq1, Uaq2, Ubr1, Ubr2, Ubq1, \) and \( Ubq2 \) correspondingly Solution Existence • **Proposition**: A propositional planning problem \( P = (\Sigma, s_i, g) \) has a solution iff \( S_g \cap \Gamma^{>}(\{s_i\}) \neq \emptyset \). • **Proposition**: A propositional planning problem \( P = (\Sigma, s_i, g) \) has a solution iff \( \exists s \in \Gamma^{<}(\{g\}) : s \subseteq s_i \). Reachability Tree • tree structure, where: • root is initial state \( s_i \) • children of node \( s \) are \( \Gamma(\{s\}) \) • arcs are labelled with actions • all nodes in the reachability tree are \( \Gamma^{>}(\{s_i\}) \) • all nodes to depth \( d \) are \( \Gamma^d(\{s_i\}) \) • solves problems with up to \( d \) actions in the solution • problem: \( O(k^d) \) nodes; \( k = \) applicable actions per state DWR Example: Reachability Tree Planning Graph: Nodes - layered directed graph $G=(N,E)$: - $N = P_0 \cup A_1 \cup P_1 \cup A_2 \cup P_2 \cup \ldots$ - state proposition layers: $P_0, P_1, \ldots$ - action layers: $A_1, A_2, \ldots$ - first proposition layer $P_0$: - propositions in initial state $s_i$: $P_0 = s_i$ - action layer $A_j$: - all actions $a$ where: $\text{precond}(a) \subseteq P_{j-1}$ - proposition layer $P_j$: - all propositions $p$ where: $p \in P_{j-1}$ or $\exists a \in A_j: p \in \text{effects}^+(a)$ Planning Graph: Arcs - from proposition $p \in P_{j-1}$ to action $a \in A_j$: - if: $p \in \text{precond}(a)$ - from action $a \in A_j$ to proposition $p \in P_j$: - positive arc if: $p \in \text{effects}^+(a)$ - negative arc if: $p \in \text{effects}^-(a)$ - no arcs between other layers Reachability in the Planning Graph - reachability analysis: - if a goal $g$ is reachable from initial state $s_i$ - then there will be a proposition layer $P_g$ in the planning graph such that $g \subseteq P_g$ -
necessary condition, but not sufficient - low complexity: - planning graph is of polynomial size and - can be computed in polynomial time Independent Actions: Examples - Mr12 and Lar1: - cannot occur together - Mr12 deletes precondition $r_1$ of Lar1 - Mr12 and Mr21: - cannot occur together - Mr12 deletes positive effect $r_1$ of Mr21 - Mr12 and Mq21: - may occur in same action layer Independent Actions - Two actions $a_1$ and $a_2$ are independent iff: - $\text{effects}^{-}(a_1) \cap (\text{precond}(a_2) \cup \text{effects}^{+}(a_2)) = \emptyset$ - $\text{effects}^{-}(a_2) \cap (\text{precond}(a_1) \cup \text{effects}^{+}(a_1)) = \emptyset$. - A set of actions $\pi$ is independent iff every pair of actions $a_1, a_2 \in \pi$ is independent. Pseudo Code: independent function independent($a_1, a_2$) for all $p \in \text{effects}^{-}(a_1)$ if $p \in \text{precond}(a_2)$ or $p \in \text{effects}^{+}(a_2)$ then return false for all $p \in \text{effects}^{-}(a_2)$ if $p \in \text{precond}(a_1)$ or $p \in \text{effects}^{+}(a_1)$ then return false return true Applying Independent Actions - A set $\pi$ of independent actions is *applicable* to a state $s$ iff $\bigcup_{a \in \pi} \text{precond}(a) \subseteq s$. - The result of applying the set $\pi$ in $s$ is defined as: $$\gamma(s, \pi) = (s - \text{effects}^{-}(\pi)) \cup \text{effects}^{+}(\pi),$$ where: - $\text{precond}(\pi) = \bigcup_{a \in \pi} \text{precond}(a)$, - $\text{effects}^{+}(\pi) = \bigcup_{a \in \pi} \text{effects}^{+}(a)$, and - $\text{effects}^{-}(\pi) = \bigcup_{a \in \pi} \text{effects}^{-}(a)$. 
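The independence test and set application translate directly into code; a small sketch under an illustrative encoding (an action is a `(precond, effects_minus, effects_plus)` triple of frozensets; the proposition names follow the slide abbreviations):

```python
# Sketch of the independence test and joint application of action sets.
def independent(a1, a2):
    p1, m1, e1 = a1
    p2, m2, e2 = a2
    # neither action deletes a precondition or positive effect of the other
    return not (m1 & (p2 | e2)) and not (m2 & (p1 | e1))

def apply_set(s, pi):
    """Apply a set of pairwise independent actions, if jointly applicable."""
    assert all(independent(a, b) for a in pi for b in pi if a is not b)
    pre = set().union(*(a[0] for a in pi))
    if not pre <= s:
        return None  # undefined
    minus = set().union(*(a[1] for a in pi))
    plus = set().union(*(a[2] for a in pi))
    return (s - minus) | plus

# Mr12 deletes r1, a precondition of Lar1, so the two are dependent;
# Mr12 and Mq21 touch disjoint propositions and are independent.
Mr12 = (frozenset({"r1"}), frozenset({"r1"}), frozenset({"r2"}))
Mq21 = (frozenset({"q2"}), frozenset({"q2"}), frozenset({"q1"}))
Lar1 = (frozenset({"a1", "r1", "ur"}), frozenset({"a1", "ur"}), frozenset({"ar"}))
print(independent(Mr12, Lar1))   # False
print(independent(Mr12, Mq21))   # True
print(apply_set(frozenset({"r1", "q2"}), {Mr12, Mq21}))  # both robots move
```

Since `{Mr12, Mq21}` is independent, applying it as a set gives the same state as applying the two moves in either order.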
Execution Order of Independent Actions - **Proposition**: If a set $\pi$ of independent actions is applicable in state $s$ then, for any permutation $\langle a_1, \ldots, a_k \rangle$ of the elements of $\pi$: - the sequence $\langle a_1, \ldots, a_k \rangle$ is applicable to $s$, and - the state resulting from the application of $\pi$ to $s$ is the same as from the application of $\langle a_1, \ldots, a_k \rangle$, i.e.: $$\gamma(s, \pi) = \gamma(s, \langle a_1, \ldots, a_k \rangle).$$ Layered Plans - Let $P = (A, s, g)$ be a statement of a propositional planning problem and $G = (N, E)$, $N = P_0 \cup A_1 \cup P_1 \cup A_2 \cup P_2 \cup \ldots$, the corresponding planning graph. - A layered plan over $G$ is a sequence of sets of actions: $\Pi = \langle \pi_1, \ldots, \pi_k \rangle$ where: - $\pi_i \subseteq A_i \subseteq A$, - $\pi_i$ is applicable in state $P_{i-1}$, and - the actions in $\pi_i$ are independent. Layered Solution Plan - A layered plan $\Pi = \langle \pi_1, \ldots, \pi_k \rangle$ is a solution to a planning problem $P = (A, s, g)$ iff: - $\pi_1$ is applicable in $s$, - for $j \in \{2, \ldots, k\}$, $\pi_j$ is applicable in state $\gamma(\ldots \gamma(\gamma(s, \pi_1), \pi_2), \ldots, \pi_{j-1})$, and - $g \subseteq \gamma(\ldots \gamma(\gamma(s, \pi_1), \pi_2), \ldots, \pi_k)$. Execution Order in Layered Solution Plans - **Proposition:** If \( \Pi = \langle \pi_1, \ldots, \pi_k \rangle \) is a solution to a planning problem \( P=(A,s_i,g) \), then: - a sequence of actions corresponding to any permutation of the elements of \( \pi_1 \), - followed by a sequence of actions corresponding to any permutation of the elements of \( \pi_2 \), - … - followed by a sequence of actions corresponding to any permutation of the elements of \( \pi_k \) is a path from \( s_i \) to a goal state.
Problem: Dependent Propositions: Example - \( r2 \) and \( ar \): - \( r2 \): positive effect of Mr12 - \( ar \): positive effect of Lar1 - but: Mr12 and Lar1 not independent - hence: \( r2 \) and \( ar \) incompatible in \( P_1 \) - \( r1 \) and \( r2 \): - positive and negative effects of same action: Mr12 - hence: \( r1 \) and \( r2 \) incompatible in \( P_1 \) No-Operation Actions - No-Op for proposition \( p \): - name: \( Ap \) - precondition: \( p \) - effect: \( p \) - \( r1 \) and \( r2 \): - \( r1 \): positive effect of \( Ar1 \) - \( r2 \): positive effect of \( Mr12 \) - but: \( Ar1 \) and \( Mr12 \) not independent - hence: \( r1 \) and \( r2 \) incompatible in \( P_1 \) - only one incompatibility test Mutex Propositions - Two propositions \( p \) and \( q \) in proposition layer \( P_j \) are mutex (mutually exclusive) if: - every action in the preceding action layer \( A_j \) that has \( p \) as a positive effect (incl. no-op actions) is mutex with every action in \( A_j \) that has \( q \) as a positive effect, and - there is no single action in \( A_j \) that has both, \( p \) and \( q \), as positive effects. - notation: \( \mu P_j = \{ (p,q) | p,q \in P_j \text{ are mutex} \} \) **Pseudo Code: mutex for Propositions** ```plaintext function mutex(p1, p2, μA_j) for all a_1 ∈ p1.producers() for all a_2 ∈ p2.producers() if (a_1, a_2) ∉ μA_j then return false end if end for end for return true end function ``` **Mutex Actions: Example** - r1 and r2 are mutex in $P_1$ - r1 is precondition for Lar1 in $A_2$ - r2 is precondition for Mr21 in $A_2$ - hence: Lar1 and Mr21 are mutex in $A_2$ Mutex Actions - Two actions $a_1$ and $a_2$ in action layer $A_j$ are mutex if: - $a_1$ and $a_2$ are dependent, or - a precondition of $a_1$ is mutex with a precondition of $a_2$. 
- notation: $\mu A_j = \{ (a_1, a_2) \mid a_1, a_2 \in A_j \text{ are mutex} \}$ Pseudo Code: mutex for Actions

```pseudocode
function mutex(a_1, a_2, μP)
  if ¬independent(a_1, a_2) then return true
  for all p_1 ∈ precond(a_1)
    for all p_2 ∈ precond(a_2)
      if (p_1, p_2) ∈ μP then return true
  return false
```

Decreasing Mutex Relations - **Proposition**: If \( p,q \in P_{j-1} \) and \((p,q) \notin \mu P_{j-1}\) then \((p,q) \notin \mu P_j\). - **Proof**: - if \( p,q \in P_{j-1} \) then \( Ap,Aq \in A_j \) - if \((p,q) \notin \mu P_{j-1}\) then \((Ap,Aq) \notin \mu A_j\) - since \( Ap,Aq \in A_j \) and \((Ap,Aq) \notin \mu A_j\), \((p,q) \notin \mu P_j\) must hold - **Proposition**: If \( a_1,a_2 \in A_{j-1} \) and \((a_1,a_2) \notin \mu A_{j-1}\) then \((a_1,a_2) \notin \mu A_j\). - **Proof**: - if \( a_1,a_2 \in A_{j-1} \) and \((a_1,a_2) \notin \mu A_{j-1}\) then - \( a_1 \) and \( a_2 \) are independent and - their preconditions in \( P_{j-1} \) are not mutex - both properties remain true for \( P_j \) - hence: \( a_1,a_2 \in A_j \) and \((a_1,a_2) \notin \mu A_j\) Removing Impossible Actions - Actions with mutex preconditions \( p \) and \( q \) are impossible - example: preconditions \( r_2 \) and \( ar \) of \( Uar2 \) in \( A_2 \) are mutex - can be removed from the graph - example: remove \( Uar2 \) from \( A_2 \) Reachability in Planning Graphs - **Proposition**: Let $P = (A, s_i, g)$ be a propositional planning problem and $G = (N, E)$, $N = P_0 \cup A_1 \cup P_1 \cup A_2 \cup P_2 \cup \ldots$, the corresponding planning graph. If - $g$ is reachable from $s_i$ then - there is a proposition layer $P_g$ such that - $g \subseteq P_g$ and - $\neg \exists g_1, g_2 \in g : (g_1, g_2) \in \mu P_g$.
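Layer expansion with mutex bookkeeping can be sketched as follows. This is a simplified illustration, not the full data structure of the slides: an action is a `(name, precond, effects_minus, effects_plus)` tuple, and no-op actions are added automatically.

```python
from itertools import combinations

# Sketch of one planning-graph expansion step with mutex propagation.
def noop(p):
    return ("noop-" + p, frozenset({p}), frozenset(), frozenset({p}))

def independent(a1, a2):
    return (not (a1[2] & (a2[1] | a2[3])) and
            not (a2[2] & (a1[1] | a1[3])))

def expand(P, muP, actions):
    """Returns (A_k, muA_k, P_k, muP_k) from layer (P, muP)."""
    # actions (plus no-ops) whose preconditions are present and non-mutex
    acts = [a for a in list(actions) + [noop(p) for p in P]
            if a[1] <= P and not any(frozenset({p, q}) in muP
                                     for p, q in combinations(a[1], 2))]
    muA = set()
    for a1, a2 in combinations(acts, 2):
        if (not independent(a1, a2) or
                any(frozenset({p, q}) in muP
                    for p in a1[1] for q in a2[1] if p != q)):
            muA.add(frozenset({a1[0], a2[0]}))
    Pk = frozenset(p for a in acts for p in a[3])
    producers = {p: [a[0] for a in acts if p in a[3]] for p in Pk}
    # p,q mutex iff every pair of producers is mutex (shared producers fail this)
    muPk = {frozenset({p, q}) for p, q in combinations(Pk, 2)
            if all(frozenset({x, y}) in muA
                   for x in producers[p] for y in producers[q])}
    return acts, muA, Pk, muPk

# Two actions from the simplified DWR example: Mr12 deletes r1, which Lar1 needs.
Mr12 = ("Mr12", frozenset({"r1"}), frozenset({"r1"}), frozenset({"r2"}))
Lar1 = ("Lar1", frozenset({"a1", "r1", "ur"}), frozenset({"a1", "ur"}),
        frozenset({"ar"}))
P0 = frozenset({"r1", "a1", "ur"})
A1, muA1, P1, muP1 = expand(P0, set(), [Mr12, Lar1])
print(frozenset({"r2", "ar"}) in muP1)   # True: their only producers are mutex
print(frozenset({"r1", "r2"}) in muP1)   # True: Mr12 conflicts with noop-r1
```

This reproduces the incompatibilities argued on the slides: r2/ar and r1/r2 come out mutex in the first proposition layer, and the no-op trick means only the one action-mutex test is needed.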
Overview - The Propositional Representation - The Planning-Graph Structure - The Graphplan Algorithm The Graphplan Algorithm: Basic Idea - expand the planning graph, one action layer and one proposition layer at a time - from the first graph for which $P_g$ is the last proposition layer such that - $g \subseteq P_g$ and - $\neg \exists g_1, g_2 \in g: (g_1, g_2) \in \mu P_g$ - search backwards from the last (proposition) layer for a solution Planning Graph Data Structure - $k$-th planning graph $G_k$: - nodes $N$: - array of proposition layers $P_0 \ldots P_k$ - proposition layer $j$: set of proposition symbols - array of action layers $A_1 \ldots A_k$ - action layer $j$: set of action symbols - edges $E$: - precondition links: $pre_j \subseteq P_{j-1} \times A_j$, $j \in \{1 \ldots k\}$ - positive effect links: $e^+_j \subseteq A_j \times P_j$, $j \in \{1 \ldots k\}$ - negative effect links: $e^-_j \subseteq A_j \times P_j$, $j \in \{1 \ldots k\}$ - action mutex links: $\mu A_j \subseteq A_j \times A_j$, $j \in \{1 \ldots k\}$ - proposition mutex links: $\mu P_j \subseteq P_j \times P_j$, $j \in \{1 \ldots k\}$ Pseudo Code: expand

```plaintext
function expand(G_{k-1})
  A_k ← { a ∈ A | precond(a) ⊆ P_{k-1} and
          {(p_1,p_2) | p_1,p_2 ∈ precond(a)} ∩ μP_{k-1} = {} }
  μA_k ← {(a_1,a_2) | a_1,a_2 ∈ A_k, a_1 ≠ a_2, and mutex(a_1,a_2,μP_{k-1})}
  P_k ← { p | ∃a ∈ A_k : p ∈ effects⁺(a) }
  μP_k ← {(p_1,p_2) | p_1,p_2 ∈ P_k, p_1 ≠ p_2, and mutex(p_1,p_2,μA_k)}
  for all a ∈ A_k
    pre_k ← pre_k ∪ ({p | p ∈ P_{k-1} and p ∈ precond(a)} × {a})
    e⁺_k ← e⁺_k ∪ ({a} × {p | p ∈ P_k and p ∈ effects⁺(a)})
    e⁻_k ← e⁻_k ∪ ({a} × {p | p ∈ P_k and p ∈ effects⁻(a)})
```

Planning Graph Complexity - **Proposition:**
The size of a planning graph up to level $k$, and the time required to expand it to that level, are polynomial in the size of the planning problem.
- **Proof:**
  - problem size: $n$ propositions and $m$ actions
  - $|P_j| \leq n$ and $|A_j| \leq n + m$ (incl. no-op actions)
  - the algorithms for generating each layer and all link types are polynomial in the size of the layer

Fixed-Point Levels

- A fixed-point level in a planning graph $G$ is a level $\kappa$ such that for all $i > \kappa$, level $i$ of $G$ is identical to level $\kappa$, i.e. $P_i = P_\kappa$, $\mu P_i = \mu P_\kappa$, $A_i = A_\kappa$, and $\mu A_i = \mu A_\kappa$.
- **Proposition**: Every planning graph $G$ has a fixed-point level $\kappa$, which is the smallest $k$ such that $|P_k| = |P_{k+1}|$ and $|\mu P_k| = |\mu P_{k+1}|$.
- **Proof**:
  - $P_i$ grows monotonically and $\mu P_i$ shrinks monotonically
  - $A_i$ and $P_i$ depend only on $P_{i-1}$ and $\mu P_{i-1}$

Searching the Planning Graph

- **general idea**:
  - search backwards from the last proposition layer $P_k$ in the current graph
  - let $g$ be the set of goal propositions that need to be achieved at a given proposition layer $P_j$ (initially the last layer)
  - find a set of actions $\pi_j \subseteq A_j$ such that these actions are not mutex and together achieve $g$
  - take the union of the preconditions of $\pi_j$ as the new goal set to be achieved in proposition layer $P_{j-1}$

Planning Graph Search Example

Planning Graph as AND/OR-Graph

- OR-nodes:
  - nodes in proposition layers
  - links to the actions that support the propositions
- AND-nodes:
  - nodes in action layers
  - $k$-connectors link an action to all of its preconditions
- search:
  - $AO^*$ is not the best algorithm here because it does not exploit the layered structure

Repeated Sub-Goals

The nogood Table

- *nogood* table (denoted \( \nabla \)) for a planning graph up to layer \( k \):
  - array of \( k \) sets of sets of goal propositions
  - inner set: one combination of propositions that cannot be achieved
  - outer
set: all combinations that cannot be achieved (at that layer)
- before searching for set \( g \) in \( P_j \):
  - check whether \( g \in \nabla(j) \)
- when the search for set \( g \) in \( P_j \) has failed:
  - add \( g \) to \( \nabla(j) \)

Pseudo Code: extract

```pseudocode
function extract(G, g, i)
  if i = 0 then return ⟨⟩
  if g ∈ ∇(i) then return failure
  Π ← gpSearch(G, g, ∅, i)
  if Π ≠ failure then return Π
  ∇(i) ← ∇(i) ∪ {g}
  return failure
```

Pseudo Code: gpSearch

```pseudocode
function gpSearch(G, g, π, i)
  if g = ∅ then
    Π ← extract(G, ⋃_{a∈π} precond(a), i-1)
    if Π = failure then return failure
    return Π ⊙ ⟨π⟩
  p ← g.selectOne()
  resolvers ← {a ∈ A_i | p ∈ effects⁺(a) and ¬∃a′ ∈ π: (a, a′) ∈ μA_i}
  if resolvers = ∅ then return failure
  a ← resolvers.chooseOne()
  return gpSearch(G, g − effects⁺(a), π ∪ {a}, i)
```

Pseudo Code: graphplan

```pseudocode
function graphplan(A, s, g)
  i ← 0; ∇ ← ∅; P_0 ← s; G ← ⟨P_0⟩
  while (g ⊄ P_i or g² ∩ μP_i ≠ ∅) and ¬fixedPoint(G) do
    i ← i + 1; expand(G)
  if g ⊄ P_i or g² ∩ μP_i ≠ ∅ then return failure
  η ← fixedPoint(G) ? |∇(κ)| : 0
  Π ← extract(G, g, i)
  while Π = failure do
    i ← i + 1; expand(G)
    Π ← extract(G, g, i)
    if Π = failure and fixedPoint(G) then
      if η = |∇(κ)| then return failure
      η ← |∇(κ)|
  return Π
```

Graphplan Properties

- **Proposition**: The Graphplan algorithm is sound and complete, and it always terminates.
  - It returns failure iff the given planning problem has no solution;
  - otherwise, it returns a layered plan Π that is a solution to the given planning problem.
- Graphplan is orders of magnitude faster than the previous techniques!

Overview

- The Propositional Representation
- The Planning-Graph Structure
- The Graphplan Algorithm
- Planning-Graph Heuristics

Forward State-Space Search

- idea: apply standard search algorithms (breadth-first, depth-first, A*, etc.)
to the planning problem:

- search space is a subset of the state space
  - nodes correspond to world states
  - arcs correspond to state transitions
  - a path in the search space corresponds to a plan

DWR Example State

goal: (and (in ca p2) (in cb q2) (in cc p2) (in cd q2) (in ce q2) (in cf q2))

Heuristics

- estimate the distance to the nearest goal state
- $h_1$: number of unachieved goals (not admissible)
- $h_2$: number of unachieved goals / max. number of positive effects per operator (admissible)
- example state (prev. slide):
  - actual goal distance: 35 actions
  - $h_1(s) = 6$
  - $h_2(s) = 6 / 4$

Finding Better Heuristics

- solve a "relaxed" problem and use its solution as a heuristic
- planning heuristic:
  - planning problem: \( P = (O, s_i, g) \)
  - for \( p \in g \): \( \text{min-layer}(p) = \) index of the first proposition layer in the planning graph that contains \( p \)
  - admissible heuristic: \( \max_{p \in g} \text{min-layer}(p) \)
  - not admissible: \( \sum_{p \in g} \text{min-layer}(p) \)
  - no need to compute mutex relations
  - no need to re-compute the planning graph for ground backward search

The FF Planner (Basics)

- heuristic
  - based on a planning graph without negative effects
  - backward search possible in polynomial time
- search strategy
  - enforced hill-climbing: commit to the first state with a better f-value

Overview

- The Propositional Representation
- The Planning-Graph Structure
- The Graphplan Algorithm
- Planning-Graph Heuristics
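The min-layer heuristics above can be sketched on the relaxed planning graph (negative effects ignored and, as noted, no mutex relations computed); the dict-of-pairs action encoding is an assumption for illustration:

```python
def relaxed_layers(props0, actions, max_layers=100):
    """Proposition layers P_0, P_1, ... of the relaxed planning graph.
    actions: dict name -> (precond, eff_plus), both frozensets."""
    layers = [frozenset(props0)]
    for _ in range(max_layers):
        P = layers[-1]
        # add positive effects of every applicable action; never delete
        P_next = P.union(*(plus for pre, plus in actions.values() if pre <= P))
        if P_next == P:              # fixed point: no new propositions
            break
        layers.append(P_next)
    return layers

def min_layer(layers, p):
    """Index of the first proposition layer containing p (None if unreachable)."""
    for i, P in enumerate(layers):
        if p in P:
            return i
    return None

def h_max(layers, g):                # admissible
    return max(min_layer(layers, p) for p in g)

def h_sum(layers, g):                # more informative, but not admissible
    return sum(min_layer(layers, p) for p in g)
```

For a toy domain with $P_0 = \{r\}$ and actions $r \to p$ and $p \to q$, the layers are $\{r\}, \{r,p\}, \{r,p,q\}$, so $h_{\max}(\{p,q\}) = 2$ and $h_{\text{sum}}(\{p,q\}) = 3$.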
Epistemic protocols for distributed gossiping
Apt, K.R.; Grossi, D.; van der Hoek, W.
DOI: 10.4204/EPTCS.215.5
Publication date: 2016
Document Version: Final published version
Published in: Electronic Proceedings in Theoretical Computer Science
License: CC BY

Epistemic Protocols for Distributed Gossiping

Krzysztof R. Apt, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands, k.r.apt@cwi.nl
Davide Grossi, University of Liverpool, Liverpool, UK, d.grossi@liv.ac.uk
Wiebe van der Hoek, University of Liverpool, Liverpool, UK, wiebe@liv.ac.uk

Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets. We consider distributed gossip protocols which are expressed by means of epistemic logic. We provide an operational semantics of such protocols and set up an appropriate framework to argue about their correctness. Then we analyze specific protocols for complete graphs and for directed rings.

1 Introduction

In the gossip problem ([18, 4], see also [10] for an overview) a number $n$ of agents, each one knowing a piece of information (a secret) unknown to the others, communicate by one-to-one interactions (e.g., telephone calls). The result of each call is that the two agents involved in it learn all secrets the other agent knows at the time of the call. The problem consists in finding a sequence of calls which disseminates all the secrets among the agents in the group. It sparked a large literature in the 70s and 80s [18, 4, 9, 5, 17], typically focusing on establishing, in this and other variants of the problem, the minimum number of calls needed to achieve dissemination of all the secrets. This number has been proven to be $2n - 4$, where $n$, the number of agents, is at least 4. The above literature assumes a centralized perspective on the gossip problem: a planner schedules the agents' calls.
In this paper we pursue a line of research first put forth in [3] by developing a decentralized theory of the gossip problem, where agents perform calls not according to a centralized schedule, but following individual epistemic protocols they run in a distributed fashion. These protocols tell the agents which calls to execute depending on what they know, or do not know, about the information state of the agents in the group. We call the resulting distributed programs (epistemic) gossip protocols.

Contribution of the paper and outline

The paper introduces a formal framework for specifying epistemic gossip protocols and for studying their computations in terms of correctness, termination, and fair termination (Section 2). It then defines and studies two natural protocols in which the interactions are unconstrained (Section 3) and four example gossip protocols in which agents are positioned on a directed ring and calls can happen only between neighbours (Section 4). Proofs are collected in the appendix. From a methodological point of view, the paper integrates concepts and techniques from the distributed computing literature (see, e.g., [1, Chapter 11]) and the epistemic logic literature [8, 15], in the tradition of [16, 14, 7].

2 Gossip protocols

We first introduce the syntax and semantics of gossip protocols.

2.1 Syntax

We loosely use the syntax of the language CSP (Communicating Sequential Processes) of [11], which extends the guarded command language of [6] by disjoint parallel composition and commands for synchronous communication. CSP was realized in the distributed programming language OCCAM (see INMOS [12]). The main difference is that we use epistemic formulas as guards and, as communication primitives, calls that do not require synchronization. Also, the syntax of our distributed programs is very limited. In order to define gossip protocols we introduce in turn calls and epistemic guards.
Throughout the paper we assume a fixed finite set $A$ of at least three agents. We assume that each agent holds exactly one secret and that there exists a bijection between the set of agents and the set of secrets. We denote by $P$ the set of all secrets (for propositions). Furthermore, it is assumed that each secret carries information identifying the agent to whom that secret belongs. 2.1.1 Calls Each call concerns two agents, the caller ($a$ below) and the agent called ($b$). We distinguish three modes of communication of a call: - **push-pull**, written as $ab$ or $(a, b)$. During this call the caller and the called agent learn each other’s secrets, - **push**, written as $a \triangleright b$. After this call the called agent learns all the secrets held by the caller, - **pull**, written as $a \triangleleft b$. After this call the caller learns all the secrets held by the called agent. Variables for calls are denoted by $c, d$. Abusing notation we write $a \in c$ to denote that agent $a$ is one of the two agents involved in the call $c$ (e.g., for $c := ab$ we have $a \in c$ and $b \in c$). Calls in which agent $a$ is involved are denoted by $c^a$. 2.1.2 Epistemic guards Epistemic guards are defined as formulas in a simple modal language with the following grammar: $$ \phi ::= F_{a}p \mid \neg \phi \mid \phi \land \phi \mid K_{a} \phi, $$ where $p \in P$ and $a \in A$. Each secret is viewed as a distinct symbol. We denote the secret of agent $a$ by $A$, the secret of agent $b$ by $B$ and so on. We denote the set of so defined formulas by $\mathcal{L}$ and we refer to its members as epistemic formulas or epistemic guards. We read $F_{a}p$ as ‘agent $a$ is familiar with the secret $p$’ (or ‘$p$ belongs to the set of secrets $a$ knows about’) and $K_{a} \phi$ as ‘agent $a$ knows that formula $\phi$ is true’. 
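The $K$-free fragment of this guard language can be evaluated directly on an assignment of secret sets to agents (a gossip situation, in the semantics developed below). The nested-tuple formula encoding is an assumption for illustration; the $K_a$ clause is omitted because it quantifies over indistinguishable call sequences in the gossip model, not over a single situation:

```python
def holds(s, f):
    """Evaluate a K-free guard on a gossip situation.
    s: dict agent -> set of secrets the agent is familiar with;
    f: ('F', a, p), ('not', f1), or ('and', f1, f2)."""
    tag = f[0]
    if tag == 'F':                        # F_a p: a is familiar with secret p
        _, a, p = f
        return p in s[a]
    if tag == 'not':
        return not holds(s, f[1])
    if tag == 'and':
        return holds(s, f[1]) and holds(s, f[2])
    raise ValueError('unknown connective: %r' % (tag,))
```

For instance, with $a$ familiar with $A$ and $B$, and $b$ familiar only with $B$, the guard $F_a A \land \neg F_b A$ evaluates to true.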
So this language is an epistemic language where atoms consist of 'knowing whether' statements about propositional atoms, if we view secrets as Boolean variables. Atomic expressions in $\mathcal{L}$ concern only who knows what secrets. As a consequence the language cannot formally express the truth of a secret $p$. This level of abstraction suffices for the purposes of the current paper. However, expressions $F_{a}p$ could be given a more explicit epistemic reading in terms of 'knowing whether'. That is, '$a$ is familiar with $p$' can be interpreted (on a suitable Kripke model) as '$a$ knows whether the secret $p$ is true or not'. This link is established in [3].

2.1.3 Gossip protocols

Before specifying what a program for agent $a$ is, let us first define the language $L_a$ with the following grammar:
$$ \psi ::= K_a \phi \mid \neg \psi \mid \psi \land \psi, $$
where $\phi \in \mathcal{L}$. By a component program, in short a program, for an agent $a$ we mean a statement of the form
$$ *[\, []_{j=1}^{m} \; \psi_j \rightarrow c_j \,], $$
where $m > 0$, each $\psi_j \in L_a$, and $a$ is the caller in each $c_j$. Given an epistemic formula $\psi \in L_a$ and a call $c$, we call the construct $\psi \rightarrow c$ a rule and refer in this context to $\psi$ as a guard. Here $[]$ separates the alternative rules $\psi_1 \rightarrow c_1, \ldots, \psi_m \rightarrow c_m$ of the program, and we abbreviate a set of rules $\psi_1 \rightarrow c, \ldots, \psi_k \rightarrow c$ with the same call to the single rule $\bigvee_{i=1}^{k} \psi_i \rightarrow c$. Intuitively, $*$ denotes a repeated execution of the rules, one at a time, where each time a rule is selected whose guard is true. Finally, by a distributed epistemic gossip protocol, in short a gossip protocol, we mean a parallel composition of component programs, one for each agent.
In order not to complicate matters we assume that each gossip protocol uses only one mode of communication. Of special interest for this paper are gossip protocols that are symmetric. By this we mean that the protocol is a composition of component programs that are identical modulo the names of the agents. Formally, consider a statement $\pi(x)$, where $x$ is a variable ranging over the set $A$ of agents, such that for each agent $a \in A$, $\pi(a)$ is a component program for agent $a$. Then the parallel composition of the programs $\pi(a)$, where $a \in A$, is called a symmetric gossip protocol. Gossip protocols are syntactically extremely simple. Therefore it would seem that little can be expressed using them. However, this is not the case. In Sections 3 and 4 we consider gossip protocols that can exhibit complex behaviour.

2.2 Semantics

We now move on to provide a formal semantics of epistemic guards, and then describe the computations of gossip protocols.

2.2.1 Gossip situations and calls

A gossip situation is a sequence $s = (Q_a)_{a \in A}$, where $Q_a \subseteq P$ for each agent $a$. Intuitively, $Q_a$ is the set of secrets $a$ is familiar with in situation $s$. The initial gossip situation is the one in which each $Q_a$ equals $\{A\}$ and is denoted by root. The set of all gossip situations is denoted by $S$. We say that an agent $a$ is an expert in a gossip situation $s$ if he is familiar in $s$ with all the secrets, i.e., if $Q_a = P$. The initial gossip situation reflects the fact that initially each agent is familiar only with his own secret, although it is not assumed that this is common knowledge among the agents. In fact, the introduced language gives us no means to express the concept of common knowledge.

¹ Alternatively, $L_a$ could be defined as the fragment of $\mathcal{L}$ consisting of the formulae of the form $K_a \psi$. In the logic S5 it is easy to prove that each $\psi \in L_a$ is logically equivalent to such a formula $K_a \phi$.
We will use the following concise notation for gossip situations. Sets of secrets are written down as lists; e.g., the set \( \{A, B, C\} \) is written as \( ABC \). Gossip situations are written down as lists of lists of secrets separated by dots; e.g., if there are three agents, \( \text{root} = A.B.C \), and the situation \( (\{A, B\}, \{A, B\}, \{C\}) \) is written as \( AB.AB.C \). Each call transforms the current gossip situation by modifying the set of secrets the agents involved in the call are familiar with. More precisely, the application of a call to a situation is defined as follows.

**Definition 2.1 (Effects of calls)** A call is a function \( c : S \rightarrow S \), defined as follows for \( s := (Q_a)_{a \in A} \):

- \( c = ab \): \( c(s) = (Q'_d)_{d \in A} \), where \( Q'_a = Q'_b = Q_a \cup Q_b \) and \( Q'_d = Q_d \) for \( d \neq a, b \);
- \( c = a \triangleright b \): \( c(s) = (Q'_d)_{d \in A} \), where \( Q'_b = Q_a \cup Q_b \), \( Q'_a = Q_a \), and \( Q'_d = Q_d \) for \( d \neq a, b \);
- \( c = a \triangleleft b \): \( c(s) = (Q'_d)_{d \in A} \), where \( Q'_a = Q_a \cup Q_b \), \( Q'_b = Q_b \), and \( Q'_d = Q_d \) for \( d \neq a, b \).

The definition formalizes the modes of communication we introduced earlier. Depending on the mode, secrets are either shared between caller and callee \((ab)\), pushed from the caller to the callee \((a \triangleright b)\), or retrieved by the caller from the callee \((a \triangleleft b)\).

2.2.2 Call sequences

A **call sequence** is a (possibly infinite) sequence of calls, in symbols \((c_1, c_2, \ldots, c_n, \ldots)\), all of the same communication mode. The empty sequence is denoted by \(\varepsilon\). We use \(\mathbf{c}\) to denote a call sequence and \(C\) to denote the set of all call sequences. The set of all finite call sequences is denoted by \(C^{<\omega}\).
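Definition 2.1 translates directly into code; the dictionary representation of situations and the (mode, caller, callee) call encoding are assumptions for illustration:

```python
def apply_call(s, call):
    """Apply one call to a gossip situation.
    s: dict agent -> frozenset of secrets; call: (mode, caller, callee)."""
    mode, a, b = call
    s = dict(s)                     # situations are treated as immutable; copy
    union = s[a] | s[b]
    if mode == 'push-pull':         # ab: caller and callee learn each other's secrets
        s[a] = s[b] = union
    elif mode == 'push':            # a > b: only the callee learns
        s[b] = union
    elif mode == 'pull':            # a < b: only the caller learns
        s[a] = union
    else:
        raise ValueError(mode)
    return s
```

For example, applying the push-pull sequence $(ab, ca, ab)$ to $\text{root} = A.B.C$ yields $ABC.ABC.ABC$, i.e., every agent becomes an expert.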
Given a finite call sequence \(\mathbf{c}\) and a call \(c\), we denote by \(c.\mathbf{c}\) the prepending of \(\mathbf{c}\) with \(c\), and by \(\mathbf{c}.c\) the postpending of \(\mathbf{c}\) with \(c\). The result of applying a call sequence to a situation \(s\) is defined by induction using Definition 2.1 as follows:

- **[Base]** \(\varepsilon(s) := s\).
- **[Step]** \((c.\mathbf{c})(s) := \mathbf{c}(c(s))\).

**Example 2.2** Let the set of agents be \(\{a, b, c\}\). Then
$$ A.B.C \xrightarrow{\ ab\ } AB.AB.C \xrightarrow{\ ca\ } ABC.AB.ABC \xrightarrow{\ ab\ } ABC.ABC.ABC $$
shows the successive gossip situations obtained from the initial situation \(A.B.C\) by applying the calls of the sequence \((ab, ca, ab)\): first \(ab\), then \(ca\), and finally \(ab\).

By applying an infinite call sequence \(\mathbf{c} = (c_1, c_2, \ldots, c_n, \ldots)\) to a gossip situation \(s\) one therefore obtains an infinite sequence \(\mathbf{c}^0(s), \mathbf{c}^1(s), \ldots, \mathbf{c}^n(s), \ldots\) of gossip situations, where each \(\mathbf{c}^k\) is the prefix \(c_1, c_2, \ldots, c_k\). A call sequence \(\mathbf{c}\) is said to **converge** if for every input gossip situation \(s\) the generated sequence of gossip situations reaches a limit, that is, there exists \(n < \omega\) such that for all \(m \geq n\), \(\mathbf{c}^m(s) = \mathbf{c}^n(s)\). Since the set of secrets is finite and calls never make agents forget secrets they are familiar with, it is easy to see the following.

**Fact 2.3** All infinite call sequences converge.

However, as we shall see, this does not imply that all gossip protocols terminate. In the remainder of the paper, unless stated otherwise, we will assume the push-pull mode of communication. The reader can easily adapt our presentation to the other modes.

2.2.3 Gossip models

The set $S$ of all gossip situations is the set of all possible combinations of secret distributions among the agents.
As calls progress in sequence from the initial situation, agents may be uncertain about which one of such secret distributions is the actual one. This uncertainty is precisely the object of the epistemic language for guards we introduced earlier.

**Definition 2.4** A gossip model (for a given set $A$) is a tuple $\mathcal{M} = (\mathcal{C}^{<\omega}, \{\sim_a\}_{a \in A})$, where each $\sim_a \subseteq \mathcal{C}^{<\omega} \times \mathcal{C}^{<\omega}$ is the smallest relation satisfying the following inductive conditions (assume the mode of communication is push-pull):

- **[Base]** $\varepsilon \sim_a \varepsilon$.
- **[Step]** Suppose $\mathbf{c} \sim_a \mathbf{d}$.
  - (i) If $a \notin c$, then $\mathbf{c}.c \sim_a \mathbf{d}$ and $\mathbf{c} \sim_a \mathbf{d}.c$.
  - (ii) If there exist $b \in A$ and $c, d \in \{ab, ba\}$ such that $\mathbf{c}.c(\text{root})_a = \mathbf{d}.d(\text{root})_a$, then $\mathbf{c}.c \sim_a \mathbf{d}.d$.

A gossip model with a designated finite call sequence is called a pointed gossip model. For the push, respectively pull, mode of communication, clause (ii) needs to be modified by requiring that for some $b \in A$, $c = d = a \triangleright b$, respectively $c = d = a \triangleleft b$. For instance, by (i) we have $(ab, bc) \sim_a (ab, bd)$. But we do not have $(bc, ab) \sim_a (bd, ab)$, since $(bc, ab)(\text{root})_a = ABC \neq ABD = (bd, ab)(\text{root})_a$.

Let us flesh out the intuitions behind the above definition. Gossip models are needed in order to interpret the epistemic guards of gossip protocols. Since such guards are relevant only after finite sequences of calls, the domain of a gossip model is taken to consist only of finite sequences. Intuitively, those are the finite sequences that can be generated by a gossip protocol. Let us turn now to the $\sim_a$ relation. This is defined with the following intuitions in mind. First of all, no agent can distinguish the empty call sequence from itself: this is the base of the induction.
Next, if two call sequences are indistinguishable for $a$, then the same is the case if (i) we extend one of these sequences by a call in which $a$ is not involved, or if (ii) we extend each of these sequences by a call of $a$ with the same agent (agent $a$ may be the caller or the callee), provided $a$ is familiar with exactly the same secrets after each of the new sequences has taken place; this is the induction step.

The above intuitions are based on the following assumptions on the form of communication we presuppose:

(i) At the initial situation, as communication starts, each agent knows only her own secret but considers it possible that the others may be familiar with all other secrets. In other words, there is no such thing as common knowledge of the fact that 'everybody knows exactly her own secret'.

(ii) In general, each agent always considers it possible that call sequences (of any length) take place that do not involve her.

These assumptions are weaker than the ones analyzed in [3]. We state without proof the following simple fact.

**Fact 2.5** (i) Each $\sim_a$ is an equivalence relation; (ii) For all $\mathbf{c}, \mathbf{d} \in \mathcal{C}^{<\omega}$, if $\mathbf{c} \sim_a \mathbf{d}$, then $\mathbf{c}(\text{root})_a = \mathbf{d}(\text{root})_a$, but not vice versa.

This prompts us to note also that according to Definition 2.4 sequences which make $a$ learn the same set of secrets may well be distinguishable for $a$, such as, for instance, $ab, bc, ab$ and $ab, bc, ac$. In the first one $a$ comes to know that $b$ knows $a$ is familiar with all secrets, while in the second one, she comes to know that $c$ knows $a$ is familiar with all secrets. Relation $\sim_a$ is so defined as to capture this sort of 'higher-order' knowledge.

---

2 Notice that the definition requires a designated initial situation, which we assume to be root.

2.2.4 Truth conditions for epistemic guards

Everything is now in place to define the truth of the considered formulas.
**Definition 2.6** Let \( (\mathcal{M}, \mathbf{c}) \) be a pointed gossip model with \( \mathcal{M} = (\mathcal{C}^{<\omega}, \{\sim_a\}_{a \in A}) \) and \( \mathbf{c} \in \mathcal{C}^{<\omega} \). We define the satisfaction relation \( \models \) inductively as follows (clauses for Boolean connectives are omitted):

\[
(\mathcal{M}, \mathbf{c}) \models F_a p \iff p \in \mathbf{c}(\text{root})_a,
\]
\[
(\mathcal{M}, \mathbf{c}) \models K_a \phi \iff \text{for all } \mathbf{d} \text{ such that } \mathbf{c} \sim_a \mathbf{d}, \ (\mathcal{M}, \mathbf{d}) \models \phi.
\]

So formula \( F_a p \) is true (in a pointed gossip model) whenever secret \( p \) belongs to the set of secrets agent \( a \) is familiar with in the situation generated by the designated call sequence \( \mathbf{c} \) applied to the initial situation root. The knowledge operator is interpreted as customary in epistemic logic, using the equivalence relations \( \sim_a \).

2.2.5 Computations

Assume a gossip protocol \( P \) that is a parallel composition of the component programs \( *[\, []_{j=1}^{m_a}\ \psi^a_j \rightarrow c^a_j \,] \), one for each agent \( a \in A \). Given the gossip model \( \mathcal{M} = (\mathcal{C}^{<\omega}, \{\sim_a\}_{a \in A}) \) we define the computation tree \( \mathcal{C}^P \subseteq \mathcal{C}^{<\omega} \) of \( P \) as the smallest set of sequences satisfying the following inductive conditions:

[Base] \( \varepsilon \in \mathcal{C}^P \);

[Step] If \( \mathbf{c} \in \mathcal{C}^P \) and \( (\mathcal{M}, \mathbf{c}) \models \psi^a_j \), then \( \mathbf{c}.c^a_j \in \mathcal{C}^P \). In this case we say that a transition has taken place between \( \mathbf{c} \) and \( \mathbf{c}.c^a_j \), in symbols, \( \mathbf{c} \rightarrow \mathbf{c}.c^a_j \).

So \( \mathcal{C}^P \) is a (possibly infinite) set of finite call sequences that is iteratively obtained by performing a 'legal' call (according to protocol \( P \)) from a 'legal' (according to protocol \( P \)) call sequence.
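For guards that do not contain the epistemic operator, truth of a guard depends only on the current gossip situation, and the computation tree can then be generated directly. The depth-bounded sketch below is ours, not the paper's; the non-epistemic guard used for illustration anticipates the LNS protocol of Section 3.

```python
# A sketch (ours) of the computation tree C^P for the special case where
# every guard depends only on the current gossip situation (no epistemic
# operators), so satisfaction can be evaluated on c(root) alone.

from itertools import permutations

def initial(agents):
    return {a: {a.upper()} for a in agents}

def apply_call(s, call):                       # push-pull: both components merge
    i, j = call
    t = {a: set(v) for a, v in s.items()}
    t[i] |= s[j]
    t[j] |= s[i]
    return t

def situation(calls, agents):
    s = initial(agents)
    for c in calls:                            # [Base]/[Step] of Definition 2.1
        s = apply_call(s, c)
    return s

def computation_tree(agents, guard, depth):
    """All call sequences of length <= depth generated by the protocol
    whose component program for agent i is *[ []_j guard(i, j, s) -> (i, j) ]."""
    tree = [()]                                # [Base]: the empty sequence
    frontier = [()]
    for _ in range(depth):
        nxt = []
        for seq in frontier:
            s = situation(seq, agents)
            for i, j in permutations(agents, 2):
                if guard(i, j, s):             # [Step]: transition seq -> seq.(i, j)
                    nxt.append(seq + ((i, j),))
        tree.extend(nxt)
        frontier = nxt
    return tree

# LNS-style guard: i calls j if i is not familiar with j's secret.
lns = lambda i, j, s: j.upper() not in s[i]
tree = computation_tree("abc", lns, depth=3)
# leaves = sequences where the exit condition (all guards false) holds
leaves = [t for t in tree
          if not any(lns(i, j, situation(t, "abc"))
                     for i, j in permutations("abc", 2))]
```

For three agents every leaf reachable within depth 3 makes all agents experts, in line with the correctness notion introduced in Section 2.3.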
A path in the computation tree of \( P \) is a (possibly infinite) sequence of elements of \( \mathcal{C}^P \), denoted by \( \xi = (\mathbf{c}_0, \mathbf{c}_1, \ldots, \mathbf{c}_n, \ldots) \), where \( \mathbf{c}_0 = \varepsilon \) and each \( \mathbf{c}_{i+1} = \mathbf{c}_i.c \) for some call \( c \) and \( i \geq 0 \). A computation of \( P \) is a maximal rooted path in the computation tree of \( P \).³

The above definition implies that a call sequence \( \mathbf{c} \) is a leaf of the computation tree if and only if

\[
(\mathcal{M}, \mathbf{c}) \models \bigwedge_{a \in A} \bigwedge_{j=1}^{m_a} \neg \psi^a_j.
\]

We call the formula

\[
\bigwedge_{a \in A} \bigwedge_{j=1}^{m_a} \neg \psi^a_j
\]

the exit condition of the gossip protocol \( P \). Obviously computation trees can be infinite, though they are always finitely branching.

Further, note that this semantics for gossip protocols abstracts away from some implementation details of the calls. More specifically, we assume that the caller always succeeds in his call and does not need to synchronize with the called agent. In reality, the called agent might be busy, being engaged in another call. To take care of this one could modify each call by replacing it by a 'call protocol' that implements the actual call using some lower level primitives. We do not elaborate further on this topic.

Let us fix some more terminology. For \( \mathbf{c} \in \mathcal{C}^P \), an agent \( a \) is enabled in \( \mathbf{c} \) if \( (\mathcal{M}, \mathbf{c}) \models \bigvee_{j=1}^{m_a} \psi^a_j \) and is disabled otherwise. So an agent is enabled if it can perform a call. An agent \( a \) is selected in \( \mathbf{c} \) if it is the caller in the call that for some \( \mathbf{c}' \) determines the transition \( \mathbf{c} \rightarrow \mathbf{c}' \) in the considered computation.

---

³ Note that while the sequences that are elements of the computation tree of a protocol are always finite (although possibly infinite in number), computations can be infinite sequences (of finite call sequences).
Finally, a computation \( \xi \) is called a fair computation if it is finite or each agent that is enabled in infinitely many sequences in \( \xi \) is selected in infinitely many sequences in \( \xi \). We note in passing that various alternative definitions of fairness are possible; we just focus on one of them. An interested reader may consult [2], where several fairness definitions (for instance one focusing on actions and not on agents) for distributed programs were considered and compared.

We conclude this section by observing the following. Our definition of the computation tree for protocol \( P \) presupposes that the guards \( \psi^a_j \) are interpreted over the gossip model \( \mathcal{M} = (\mathcal{C}^{<\omega}, \{\sim_a\}_{a \in A}) \). This means that when evaluating guards, agents consider as possible call sequences that cannot be generated by \( P \). In other words, agents do not know the protocol. To model common knowledge of the considered protocol in the gossip model one should take as the domain of the gossip model \( \mathcal{M} \) the underlying computation tree. However, the computation tree is defined by means of the underlying gossip model. To handle such a circularity an appropriate fixpoint definition is needed. We leave this topic for future work.

### 2.3 Correctness

We are interested in proving the correctness of gossip protocols. Assume a gossip protocol \( P \) that is a parallel composition of the component programs \( *[\, []_{j=1}^{m_a}\ \psi^a_j \rightarrow c^a_j \,] \), one for each agent \( a \in A \). We say that \( P \) is partially correct, in short correct, if for all call sequences \( \mathbf{c} \) that are leaves of the computation tree of \( P \) and each agent \( a \)

\[
(\mathcal{M}, \mathbf{c}) \models \bigwedge_{b \in A} F_a B,
\]

i.e., if for all call sequences \( \mathbf{c} \) that are leaves of the computation tree of \( P \), each agent is an expert in the gossip situation \( \mathbf{c}(\text{root}) \).
We say furthermore that \( P \) terminates if all its computations are finite and that \( P \) fairly terminates if all its fair computations are finite. In the next section we provide examples showing that partial correctness and termination of the considered protocols can depend on the assumed mode of communication and on the number of agents.

In what follows we study various gossip protocols and their correctness. We begin with the following obvious observation.

**Fact 2.7** For each protocol \( P \) and each communication mode \( x \) the following implication (\( \Rightarrow \)) holds, where \( T_P(x) \) stands for its termination and \( FT_P(x) \) for its fair termination in the communication mode \( x \):

\[
T_P(x) \Rightarrow FT_P(x).
\]

Protocol R3 given in Section 4 shows that none of these implications can be reversed. Moreover, it is not the case either that for each protocol \( P \):

\[
T_P(\triangleright) \Rightarrow T_P(\text{push-pull}),
\]
\[
T_P(\triangleleft) \Rightarrow T_P(\text{push-pull}).
\]

**Example 2.8** Let \( A = \{a, b, c\} \) and define the following expression:

\[
\mathcal{A} \subset \mathcal{C} := \bigwedge_{I \in \{A, B, C\}} (F_a I \rightarrow F_c I) \land \bigvee_{I \in \{A, B, C\}} (F_c I \land \neg F_a I).
\]

Expression \( \mathcal{B} \subset \mathcal{C} \) is defined analogously. Recall that we denote by \( I \) the secret of agent \( i \). Intuitively, \( \mathcal{A} \subset \mathcal{C} \) means that agent \( c \) is familiar with all the secrets that agent \( a \) is familiar with, but not vice versa. So \( c \) is familiar with a strict superset of the secrets \( a \) is familiar with. Further, let \( \text{Exp}_j \) stand for \( \bigwedge_{I \in \{A, B, C\}} F_j I \).
Consider now the following component programs:

- for agent \( a \): \( *[\neg K_a(\mathcal{A} \subset \mathcal{C}) \land \neg K_a\text{Exp}_a \rightarrow a \triangleright c] \),
- for agent \( b \): \( *[\neg K_b(\mathcal{B} \subset \mathcal{C}) \land \neg K_b\text{Exp}_b \rightarrow b \triangleright c] \),
- for agent \( c \): \( *[\neg K_c\text{Exp}_a \land K_c\text{Exp}_c \rightarrow c \triangleright a \;\;[]\;\; \neg K_c\text{Exp}_b \land K_c\text{Exp}_c \rightarrow c \triangleright b] \).

This protocol is correct. Indeed, initially no agent is an expert, hence both guards of \( c \) are false. On the other hand, we have \( (\mathcal{M}, \varepsilon) \models \neg (\mathcal{A} \subset \mathcal{C}) \) and \( (\mathcal{M}, \varepsilon) \models \neg (\mathcal{B} \subset \mathcal{C}) \), so both \( (\mathcal{M}, \varepsilon) \models \neg K_a(\mathcal{A} \subset \mathcal{C}) \) and \( (\mathcal{M}, \varepsilon) \models \neg K_b(\mathcal{B} \subset \mathcal{C}) \). Consequently, initially both \( a \) and \( b \) are enabled.

If the first call is granted to \( a \), this agent will call \( c \), yielding the gossip situation \( A.B.AC \). Now the guard of \( a \) is false (since \( a \) is still familiar only with his own secret \( A \), while \( c \) is familiar with at least \( A \) and \( C \), and \( a \) knows this). The guard of \( c \) is still false. So now only \( b \) is enabled. After \( b \)'s call to \( c \) this yields the gossip situation \( A.B.ABC \). At this stage, only agent \( c \) is enabled and after he calls both \( a \) and \( b \) all guards become false.

Moreover, this protocol terminates. Indeed, the only computations are the ones in which first the calls \( a \triangleright c \) and \( b \triangleright c \) take place, in any order, followed by the calls \( c \triangleright a \) and \( c \triangleright b \), also performed in any order. However, if we use the push-pull communication mode instead of push, then the situation changes.
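The run just described can be checked mechanically at the level of gossip situations; the sketch below is ours and models only the push-mode call semantics, not the epistemic guards.

```python
# A check (ours, not from the paper) of the gossip situations arising in
# the run of Example 2.8 discussed above, under the push mode.

def initial(agents):
    return {a: {a.upper()} for a in agents}

def push(s, caller, callee):
    """Push call: only the callee learns the caller's secrets."""
    t = {a: set(v) for a, v in s.items()}
    t[callee] |= s[caller]
    return t

def show(s):
    return ".".join("".join(sorted(s[a])) for a in sorted(s))

s = initial("abc")
s = push(s, "a", "c")                 # a |> c
assert show(s) == "A.B.AC"
s = push(s, "b", "c")                 # b |> c
assert show(s) == "A.B.ABC"
s = push(s, "c", "a")                 # c |> a
s = push(s, "c", "b")                 # c |> b
assert show(s) == "ABC.ABC.ABC"       # all agents are experts
```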
Indeed, after an arbitrary number of calls \( (a, c) \) the formula \( \neg (\mathcal{A} \subset \mathcal{C}) \) is still true and hence \( \neg K_a(\mathcal{A} \subset \mathcal{C}) \) is true, as well. Consequently, this call can be indefinitely repeated, so the protocol does not terminate. □

3 Two symmetric protocols

In this section we consider protocols for the case when the agents form a complete graph. We study two protocols. We present them first for the communication mode push-pull. (Partial) correctness of the considered protocols does not depend on the assumed mode of communication.

**Learn new secrets protocol (LNS)** Consider the following program for agent \( i \):

\[
*[\, []_{j \in A}\ \neg F_i J \rightarrow (i, j) \,]
\]

Informally, agent \( i \) calls agent \( j \) if \( i \) is not familiar with \( j \)'s secret. Note that the guards of this protocol do not use the epistemic operator \( K_i \), but they are equivalent to ones that do, as \( \neg F_i J \) is equivalent to \( K_i \neg F_i J \).

This protocol was introduced in [3] and studied with respect to the push-pull mode, assuming asynchronous communication. As noted there, this protocol is clearly correct. Also, it always terminates since after each call \( (i, j) \) the size of \( \{(i, j) \in A \times A \mid \neg F_i J\} \) decreases. The same argument shows termination if the communication mode is pull.

However, if the communication mode is push, the protocol may fail to terminate, even fairly. To see it fix an agent \( a \) and consider a sequence of calls in which each agent calls \( a \). At the end of this sequence \( a \) becomes an expert but nobody else is familiar with his secret. So any extension of this sequence is an infinite computation.

Let us consider now the possible call sequences generated by the computations of this protocol. Assume that there are \( n \geq 4 \) agents. By the result mentioned in the introduction, in each terminating computation at least \( 2n - 4 \) calls are made.
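Because the LNS guards are not epistemic, the protocol can be simulated directly on gossip situations. The sketch below (our code; secrets of agents 1..n are modeled simply by the numbers 1..n) fires an arbitrary enabled call until none is left, checking that the measure \(|\{(i, j) : \neg F_i J\}|\) strictly decreases at every call, and then verifies the bounds discussed in the text.

```python
# A simulation sketch (ours) of the LNS protocol under push-pull.

from itertools import permutations

def initial(n):
    # agent i's secret is modeled by the number i
    return {i: {i} for i in range(1, n + 1)}

def measure(s):
    # |{(i, j) : ~F_i J}|, the termination measure from the text
    return sum(1 for i, j in permutations(s, 2) if j not in s[i])

def run_lns(n):
    s, calls = initial(n), []
    while True:
        enabled = [(i, j) for i, j in permutations(s, 2) if j not in s[i]]
        if not enabled:
            return s, calls                    # a leaf of the computation tree
        i, j = enabled[0]                      # any scheduler would do
        before = measure(s)
        merged = s[i] | s[j]                   # push-pull: both learn everything
        s[i], s[j] = merged, set(merged)
        calls.append((i, j))
        assert measure(s) < before             # the measure strictly decreases

s, calls = run_lns(6)
assert all(s[i] == set(range(1, 7)) for i in s)          # everyone is an expert
assert len({frozenset(c) for c in calls}) == len(calls)  # one call per pair, hence
assert len(calls) <= 6 * 5 // 2                          # at most n(n-1)/2 calls

# the 2n-4 call schedule displayed as (1) below, instantiated for n = 6
# with a=1, b=2, c=3, d=4, i_1=5, i_2=6:
seq = [(1, 5), (1, 6), (1, 2), (3, 4), (1, 3), (2, 4), (5, 2), (6, 2)]
s = initial(6)
for i, j in seq:
    assert j not in s[i]                       # the LNS guard holds: legal call
    merged = s[i] | s[j]
    s[i], s[j] = merged, set(merged)
assert all(s[i] == set(range(1, 7)) for i in s)  # 2n-4 = 8 calls suffice
```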
The LNS protocol can generate such shortest sequences (among others). Indeed, let \( A = \{a, b, c, d, i_1, \ldots, i_{n-4}\} \) be the set of agents. Then the following sequence of \( 2n - 4 \) calls

\[
(a, i_1), (a, i_2), \ldots, (a, i_{n-4}), (a, b), (c, d), (a, c), (b, d), (i_1, b), (i_2, b), \ldots, (i_{n-4}, b)
\] (1)

corresponds to a terminating computation.

The guards used in this protocol entail that after a call \( (i, j) \) neither the call \( (j, i) \) nor another call \( (i, j) \) can take place, that is, between each pair of agents at most one call can take place. Consequently, the longest possible sequence contains at most \( n(n-1)/2 \) calls. Such a worst case can be generated by means of the following sequence of calls:

\[
[2], [3], [4], \ldots, [n],
\]

where for a natural number \( k \), \([k]\) stands for the sequence \( (1, k), (2, k), \ldots, (k-1, k) \).\(^4\)

**Hear my secret protocol (HMS)** Next, we consider a protocol with the following program for agent \( i \):

\[
*[\, []_{j \in A}\ \neg K_i F_j I \rightarrow (i, j) \,]
\]

Informally, agent \( i \) calls agent \( j \) if he (agent \( i \)) does not know whether \( j \) is familiar with his secret. To prove correctness of this protocol it suffices to note that its exit condition

\[
\bigwedge_{i, j \in A} K_i F_j I
\]

implies \( \bigwedge_{i, j \in A} F_j I \). To prove termination it suffices to note that after each call \( (i, j) \) the size of the set \( \{(i, j) \mid \neg K_i F_j I\} \) decreases.

If the communication mode is push, then the termination argument remains valid, since after the call \( i \triangleright j \) agent \( j \) still learns all the secrets agent \( i \) is familiar with. However, if the communication mode is pull, then the protocol may fail to terminate, even fairly. To see it fix an agent \( j \) and consider the calls \( i \triangleleft j \), where \( i \) ranges over \( A \setminus \{j\} \), arbitrarily ordered. Denote this sequence by \( \mathbf{c} \).
Consider now an infinite sequence of calls resulting from repeating \( \mathbf{c} \) indefinitely. It is straightforward to check that such a sequence corresponds to a possible computation. Indeed, in this sequence agent \( j \) never calls and hence never learns any new secret. So for each \( i \neq j \) the formula \( \neg K_i F_j I \) remains true and hence each agent \( i \neq j \) remains enabled. Moreover, after the calls from \( \mathbf{c} \) took place agent \( j \) is not anymore enabled. Hence the resulting infinite computation is fair.

\[
^4 \text{Other longest sequences are obviously possible, for instance: } 12, 13, \ldots, 1n, 23, 24, \ldots, 2n, 34, 35, \ldots, 3n, \ldots, (n-1)n.
\]

4 Protocols over directed rings

In this section we consider the case when the \( n \) agents, where \( n \geq 3 \), are arranged in a directed ring. For convenience we take the set of agents to be \( \{1, 2, \ldots, n\} \). For \( i \in \{1, \ldots, n\} \), let \( i \oplus 1 \) and \( i \ominus 1 \) denote respectively the successor and predecessor of agent \( i \). That is, for \( i \in \{1, \ldots, n-1\} \), \( i \oplus 1 = i + 1 \), \( n \oplus 1 = 1 \), for \( i \in \{2, \ldots, n\} \), \( i \ominus 1 = i - 1 \), and \( 1 \ominus 1 = n \). For \( k > 1 \) we define \( i \oplus k \) and \( i \ominus k \) by induction in the expected way. Again, when reasoning about the protocols we denote the secret of agent \( i \in \{1, \ldots, n\} \) by \( I \).

We consider four different protocols and study them with respect to their correctness and (fair) termination. In this setup, a call sequence over a directed ring is a (possibly infinite) sequence of calls, all being of the same communication mode, and all involving an agent \( i \) and his successor \( i \oplus 1 \). As before, we use \( \mathbf{c} \) to denote such a call sequence and \( \mathcal{C}_{\text{DR}} \) to denote the set of all call sequences over a directed ring. In this section, unless stated otherwise, by a call sequence we mean a sequence over a directed ring.
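The successor and predecessor operations just introduced are ordinary modular arithmetic on the 1-based labels \(\{1, \ldots, n\}\); a two-function sketch (ours):

```python
# Ring arithmetic i (+) k and i (-) k on the agents 1..n, as defined above.

def succ(i, n, k=1):
    """i (+) k on the directed ring of agents 1..n."""
    return (i - 1 + k) % n + 1

def pred(i, n, k=1):
    """i (-) k on the directed ring of agents 1..n."""
    return (i - 1 - k) % n + 1
```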
The set of all such finite call sequences is denoted \( \mathcal{C}_{\text{DR}}^{<\omega} \). A gossip model for a directed ring is a tuple \( \mathcal{M}_{\text{DR}} = (\mathcal{C}_{\text{DR}}^{<\omega}, \{\sim_a\}_{a \in A}) \), where each \( \sim_a \subseteq \mathcal{C}_{\text{DR}}^{<\omega} \times \mathcal{C}_{\text{DR}}^{<\omega} \) is defined as in Definition 2.4. The truth definition is as before, and the notion of the computation tree \( \mathcal{C}_{\text{DR}}^{P} \subseteq \mathcal{C}_{\text{DR}}^{<\omega} \) of a ring protocol \( P \) is analogous to the notion defined before. When presenting the protocols we use the fact that \( F_i J \) is equivalent to \( K_i F_i J \).

**Ring protocol R1** Consider first a gossip protocol with the following program for agent \( i \):

\[
*[\, \bigvee_{j=1}^{n} (F_i J \land K_i \neg F_{i \oplus 1} J) \rightarrow i \mathbin{\triangle} i \oplus 1 \,],
\]

where \( \triangle \) denotes the mode of communication, so \( \triangleright \), \( \triangleleft \) or push-pull. Informally, agent \( i \) calls his successor, agent \( i \oplus 1 \), if \( i \) is familiar with some secret and he knows that his successor is not familiar with it.

**Proposition 4.1** Let \( \triangle = \triangleright \). Protocol R1 terminates and is correct.

Termination and correctness do not both hold for the other communication modes. Consider first the pull communication mode, i.e., \( \triangle = \triangleleft \). Then the protocol does not always terminate. Indeed, each call \( i \triangleleft i \oplus 1 \) can be repeated. Next, consider the push-pull communication mode. We show that then the protocol is not correct. Indeed, take

\[
\mathbf{c} = (1, 2), (2, 3), \ldots, (n-1, n).
\]

We claim that after the sequence of calls \( \mathbf{c} \) the exit condition of the protocol is true. To this end we consider each agent in turn. After \( \mathbf{c} \) each agent \( i \), where \( i \neq n \), is familiar with the secrets of the agents \( 1, 2, \ldots, i+1 \). Moreover, because of the call \( (i, i+1) \) agent \( i \) knows that agent \( i+1 \) is familiar with these secrets.
So the exit condition of agent \( i \) is true. To deal with agent \( n \), note that \( \mathbf{c} \sim_n \mathbf{c}, (n-2, n-1), (n-3, n-2), \ldots, (2, 3), (1, 2) \). After the latter call sequence agent 1 becomes an expert. So after \( \mathbf{c} \) agent \( n \) cannot know that agent 1 is not familiar with some secret. Consequently, after \( \mathbf{c} \) the exit condition of agent \( n \) is true, as well. However, after \( \mathbf{c} \) agent 1 is not an expert, so the protocol is indeed not correct.

In what follows we initially present the protocols assuming the push-pull mode of communication.

**Ring protocol R2** Consider now a gossip protocol with the following program for agent \( i \):

\[
*[\, \neg K_i F_{i \oplus 1}\, I \ominus 1 \rightarrow (i, i \oplus 1) \,],
\]

where (recall) \( I \ominus 1 \) denotes the secret of agent \( i \ominus 1 \). Informally, agent \( i \) calls his successor, agent \( i \oplus 1 \), if \( i \) does not know that his successor is familiar with the secret of \( i \)'s predecessor, i.e., agent \( i \ominus 1 \).

**Proposition 4.2** If \( |A| \in \{3, 4\} \) then protocol R2 is correct.

However, this protocol is not correct for five or more agents. To see it consider the sequence of calls

\[
(1, 2), (2, 3), \ldots, (n-1, n), (n, 1), (1, 2),
\]

where \( n \geq 5 \). After it the exit condition of the protocol is true. However, agent 3 is not familiar with the secret of agent 5.

Note that the same argument shows that the protocol in which we use \( \neg K_i F_{i \oplus 1} I \lor \neg K_i F_{i \oplus 1}\, I \ominus 1 \) instead of \( \neg K_i F_{i \oplus 1}\, I \ominus 1 \) is incorrect, as well. Moreover, this protocol does not always terminate. Indeed, one possible computation consists of an agent \( i \) repeatedly calling his successor \( i \oplus 1 \).

**Ring protocol R3** Next, consider the following modification of protocol R2 in which we use the following program for agent \( i \):

$$ \ast \left[ \left( \neg \bigwedge_{j=1}^{n} F_i J \right) \lor \neg K_i F_{i \oplus 1}\, I \ominus 1 \rightarrow (i, i \oplus 1) \right]. 
$$

Informally, agent $i$ calls his successor, agent $i \oplus 1$, if $i$ is not familiar with all the secrets or $i$ does not know that his successor is familiar with the secret of his predecessor, agent $i \ominus 1$.

This gossip protocol is obviously correct thanks to the fact that $\bigwedge_{i=1}^{n} \bigwedge_{j=1}^{n} F_i J$ is part of the exit condition. However, it does not always terminate, for the same reason as the previous one. On the other hand, the following holds.

**Proposition 4.3** Protocol R3 fairly terminates.

The same conclusions concerning non-termination and fair termination can be drawn for the push and the pull modes of communication. Indeed, for push it suffices to consider the sequence of calls $i \triangleright i \oplus 1, i \oplus 1 \triangleright i \oplus 2, \ldots, i \ominus 1 \triangleright i$ after which agent $i \ominus 1$ becomes disabled, and for pull the sequence of calls $i \triangleleft i \oplus 1, i \ominus 1 \triangleleft i, \ldots, i \ominus 2 \triangleleft i \ominus 3$ after which agent $i \ominus 2$ becomes disabled.

**Ring protocol R4** Finally, we consider a protocol that is both correct and terminates for the push-pull mode. Consider the following program for agent $i$:

$$
\ast \left[ \bigvee_{j=1}^{n} (F_i J \land \neg K_i F_{i \oplus 1} J) \rightarrow (i, i \oplus 1) \right].
$$

Informally, agent $i$ calls his successor, agent $i \oplus 1$, if $i$ is familiar with some secret and he does not know whether his successor is familiar with it. Note the similarity with protocol R1.

**Proposition 4.4** Protocol R4 terminates and is correct.
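The concrete call sequences used above to analyze protocols R1 and R2 are easy to check mechanically at the level of the push-pull call semantics; the epistemic parts of the arguments are not modeled. A sketch, ours:

```python
# Checking the counterexample sequences for R1 and R2 (our code):
# push-pull semantics only; secret of agent i is modeled by the number i.

def run(n, seq):
    s = {i: {i} for i in range(1, n + 1)}
    for i, j in seq:
        merged = s[i] | s[j]                 # push-pull: both learn everything
        s[i], s[j] = merged, set(merged)
    return s

# R1 under push-pull: after c = (1,2), (2,3), ..., (n-1,n) each agent i < n
# is familiar with the secrets of agents 1, ..., i+1, agent n is an expert,
# and agent 1 is not an expert.
n = 5
s = run(n, [(i, i + 1) for i in range(1, n)])
assert all(s[i] == set(range(1, i + 2)) for i in range(1, n))
assert s[n] == set(range(1, n + 1))
assert s[1] != set(range(1, n + 1))          # agent 1 is not an expert

# R2 under push-pull: after (1,2), (2,3), ..., (n-1,n), (n,1), (1,2) with
# n = 5 agent 3 is still not familiar with the secret of agent 5.
seq = [(i, i + 1) for i in range(1, n)] + [(n, 1), (1, 2)]
s = run(n, seq)
assert 5 not in s[3]
```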
Epistemic Protocols for Distributed Gossip

<table>
<thead>
<tr>
<th>Protocol</th>
<th>T for push-pull</th>
<th>FT for push-pull</th>
<th>T for ⊳</th>
<th>FT for ⊳</th>
<th>T for ⊲</th>
<th>FT for ⊲</th>
</tr>
</thead>
<tbody>
<tr>
<td>LNS</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>no</td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<td>HMS</td>
<td>yes</td>
<td>yes</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>no</td>
</tr>
<tr>
<td>R3</td>
<td>no</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
</tr>
<tr>
<td>R4</td>
<td>yes</td>
<td>yes</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
</tr>
</tbody>
</table>

Table 1: Summary of termination (T) and fair termination (FT) results.

If the communication mode is push, then the termination argument remains valid, since after the call $i \triangleright i \oplus 1$ agent $i \oplus 1$ still learns all the secrets that agent $i$ is familiar with and hence the above set $\{(i, j) \mid \neg K_i F_{i \oplus 1} J\}$ decreases. If the communication mode is pull, then the protocol may fail to terminate, because after the first call $i \triangleleft i \oplus 1$ agent $i \oplus 1$ does not learn the secret of agent $i$ and consequently the call can be repeated. However, the situation changes when fairness is assumed.

**Proposition 4.5** For the pull communication mode protocol R4 fairly terminates.

Table 1 summarizes the termination properties of the protocols considered in the paper.

5 Conclusions

The aim of this paper was to introduce distributed gossip protocols, to set up a formal framework to reason about them, and to illustrate it by means of an analysis of selected protocols. Our results open up several avenues for further research.

First, our correctness arguments were given in plain English with occasional references to epistemic tautologies, such as $K_i \phi \rightarrow \phi$, but it should be possible to formalize them in a customized epistemic logic.
Such a logic should have a protocol-independent component that would consist of the customary S5 axioms and a protocol-dependent component that would provide axioms that depend on the mode of communication and the protocol in question. An example of such an axiom is the formula $K_i F_{i \oplus 1}\, I \ominus 1 \rightarrow F_i\, I \oplus 1$ that we used when reasoning about protocol R2. To prove the validity of the latter axioms one would need to develop a proof system that allows us to compute the effect of the calls, much like the computation of the strongest postconditions in Hoare logics. Once such a logic is provided, the next step will be to study formally its properties, including decidability. Then we could clarify whether the provided correctness proofs could be carried out automatically.

Second, generalizing further the ideas we introduced by considering directed rings, gossip protocols could be studied at the interface with network theory (see [13] for a textbook presentation). Calls can be assumed to be constrained by a network, much like in the literature on 'centralized' gossip (cf. [10]), or even have probabilistic effects (i.e., secrets are passed with given probabilities). More complex properties of gossip protocols could then be studied involving higher-order knowledge or forms of group knowledge among neighbors (e.g., "it is common knowledge among $a$ and her neighbors that they are all experts"), or their stochastic behavior (e.g., "at some point in the future all agents are experts with probability $p$").

Third, it will be interesting to analyze the protocols for the types of calls considered in [3]. They presuppose some form of knowledge that a call took place (for instance that given a call between $a$ and $b$ each agent $c \neq a, b$ noted the call but did not learn its content). Another option is to consider multicasting (calling several agents at the same time). Finally, many assumptions of the current setup could be lifted.
Different initial and final situations could be considered, for instance common knowledge of protocols could be assumed, or common knowledge of the familiarity of all agents with all the secrets upon termination could be required. Finally, to make the protocols more efficient, passing of tokens could be allowed instead of just the transmission of secrets by means of calls.

Acknowledgments

We would like to thank Hans van Ditmarsch and the referees for helpful comments and Rahim Ramezanian for useful comments about Example 2.8. This work resulted from a research visit by Krzysztof Apt to Davide Grossi and Wiebe van der Hoek, sponsored by the 2014 Visiting Fellowship Scheme of the Department of Computer Science of the University of Liverpool. The first author is also a Visiting Professor at the University of Warsaw. He was partially supported by the NCN grant nr 2014/13/B/ST6/01807.

References

Proof (of Proposition 4.1) Termination. Given a call sequence \( \mathbf{c} \) define the set

\[
\text{Inf}(\mathbf{c}) := \{(i, j) \mid i, j \in \{1, \ldots, n\} \text{ and } (\mathcal{M}_{\text{DR}}, \mathbf{c}) \models F_i J\}.
\]

After each enabled call \( i \triangleright i \oplus 1 \) in \( \mathbf{c} \), the set \( \text{Inf}(\mathbf{c}) \) increases, which ensures termination since each set \( \text{Inf}(\mathbf{c}) \) has at most \( n^2 \) elements.

Correctness. Consider a leaf of the computation tree. Then the exit condition

\[
\bigwedge_{i=1}^{n} \bigwedge_{j=1}^{n} (\neg F_i J \lor \neg K_i \neg F_{i \oplus 1} J)
\]

is true. We proceed by induction to show that then each \( F_i J \) is true, where \( i, j \in \{1, \ldots, n\} \), and where the pairs \( (i, j) \) are ordered as follows:

\[
(1, 1), (2, 1), \ldots, (n, 1), (2, 2), (3, 2), \ldots, (1, 2), \ldots, (n, n), (1, n), \ldots, (n-1, n).
\]

So the \( i \)th row lists the pairs \( (j, i) \) with \( j \in \{1, \ldots, n\} \) ranging clockwise, starting at \( i \). Take a pair \( (i, j) \). If \( i = j \), then \( F_i J \) is true, since each agent is always familiar with his own secret.
If \( i \neq j \), then consider the pair that precedes it in the above ordering. It is then of the form \( (i_1, j) \), where \( i = i_1 \oplus 1 \). By the induction hypothesis \( F_{i_1} J \) is true, so by the exit condition \( \neg K_{i_1} \neg F_i J \) is true. Suppose now towards a contradiction that \( \neg F_i J \) is true. Note that \( i = i_1 \oplus 1 \neq j \), so the only way for agent \( i \) to become familiar with \( J \) is by means of a call from \( i_1 \). Hence, by virtue of the considered communication mode and Definition 2.4, agent \( i_1 \) knows that \( \neg F_i J \) is true, that is, \( K_{i_1} \neg F_i J \) is true. This yields a contradiction. Hence \( F_i J \) is true.

So we showed, as desired, that \( \bigwedge_{i=1}^{n} \bigwedge_{j=1}^{n} F_i J \) is true in the considered leaf. \( \square \)

Proof (of Proposition 4.2) To start with, \( \bigwedge_{i=1}^{n} F_i I \) is true in every node of the computation tree. Suppose the exit condition \( \bigwedge_{i=1}^{n} K_i F_{i \oplus 1}\, I \ominus 1 \) is true at a node of the computation tree (in short, true). It implies that \( \bigwedge_{i=1}^{n} F_{i \oplus 1}\, I \ominus 1 \) is true. Fix \( i \in \{1, \ldots, n\} \). By the above \( F_i\, I \ominus 2 \) is true. Further, the implication \( K_i F_{i \oplus 1}\, I \ominus 1 \rightarrow F_i\, I \ominus 1 \) is true in every node of the computation tree (remember, the agents are positioned on a directed ring). If \( n = 3 \), this proves that \( \bigwedge_{j=1}^{n} F_i J \) is true. If \( n = 4 \), we note that \( K_i F_{i \oplus 1}\, I \ominus 1 \) implies that agent \( i \oplus 1 \) learned \( I \ominus 1 \) through a call with agent \( i \) and hence the implication \( K_i F_{i \oplus 1}\, I \ominus 1 \rightarrow F_i\, I \oplus 1 \) is true in every node of the computation tree, as well (remember that the mode is push-pull). We conclude that \( \bigwedge_{j=1}^{n} F_i J \) is true.
\( \square \)

Proof (of Proposition 4.3) First, note that the following three statements are equivalent for each node \( \mathbf{c} \) of an arbitrary computation \( \xi \) and each agent \( i \):

- \( i \) is disabled at \( \mathbf{c} \),
- \( (\mathcal{M}_{\text{DR}}, \mathbf{c}) \models (\bigwedge_{j=1}^{n} F_i J) \land K_i F_{i \oplus 1}\, I \ominus 1 \),
- a sequence of calls \( (i \oplus 2, i \oplus 3), (i \oplus 3, i \oplus 4), \ldots, (i, i \oplus 1) \) (possibly interspersed with other calls) has taken place in \( \xi \) before \( \mathbf{c} \).

Suppose now towards a contradiction that an infinite fair computation \( \xi \) exists. We proceed by case distinction.

**Case 1** Some agent becomes disabled in \( \xi \).

We claim that if an agent \( i \) becomes disabled in \( \xi \), then also agent \( i \oplus 1 \) becomes disabled in \( \xi \). Indeed, otherwise by fairness at some point in \( \xi \) after which \( i \) becomes disabled, agent \( i \oplus 1 \) calls his successor, \( i \oplus 2 \), and by the above sequence of equivalences in turn becomes disabled. We conclude by induction that at some point in \( \xi \) all agents become disabled and hence \( \xi \) terminates, which yields a contradiction.

**Case 2** No agent becomes disabled in \( \xi \).

By fairness each agent calls in \( \xi \) infinitely often his successor. So for every agent \( i \) there exists in \( \xi \) the sequence of calls \( (i \oplus 2, i \oplus 3), (i \oplus 3, i \oplus 4), \ldots, (i, i \oplus 1) \) (possibly interspersed with other calls). By the above sequence of equivalences after this sequence of calls agent \( i \) becomes disabled, which yields a contradiction. \( \square \)

Proof (of Proposition 4.4) Termination. It suffices to note that after each call \( (i, i \oplus 1) \) the size of the set

\[
\{(i, j) \in A \times A \mid \neg K_i F_{i \oplus 1} J\}
\]

decreases.

Correctness. Consider a leaf of the computation tree.
Then the exit condition
\[ \bigwedge_{i=1}^{n} \bigwedge_{j=1}^{n} (\neg F_i J \lor K_i F_{i \oplus 1} J) \]
is true. As in the case of protocol R1, we prove that it implies that each \( F_i J \) is true by induction on the pairs \( (i, j) \), where \( i, j \in \{1, \ldots, n\} \), ordered as follows:
\[ (1, 1), (2, 1), \ldots, (n, 1), (2, 2), (3, 2), \ldots, (1, 2), \ldots, (n, n), (1, n), \ldots, (n - 1, n). \]
Take a pair \( (i, j) \). If \( i = j \), then \( F_i J \) is true by assumption. If \( i \neq j \), then consider the pair that precedes it in the above ordering, so \( (i_1, j) \), where \( i = i_1 \oplus 1 \). By the induction hypothesis \( F_{i_1} J \) is true, so by the exit condition \( K_{i_1} F_i J \) is true and hence \( F_i J \) is true. \( \square \)

Proof (of Proposition 4.5) Consider the following sequence of statements:

- (i) \( i \) is disabled at \( c \),
- (ii) \( (\mathcal{M}_{DR}, c) \models \bigwedge_{j=1}^{n} (F_i J \rightarrow K_i F_{i \oplus 1} J) \),
- (iii) \( (\mathcal{M}_{DR}, c) \models K_i F_{i \oplus 1} I \),
- (iv) a sequence of calls \( (i \ominus 1, i), (i \ominus 2, i \ominus 1), \ldots, (i, i \oplus 1) \) (possibly interspersed with other calls) has taken place in \( \xi \) before \( c \).

It is easy to verify that these statements are logically related in the following way:
\[ (i) \iff (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (ii) \]
for each node \( c \) of an arbitrary computation \( \xi \) and each agent \( i \). They are therefore equivalent. Suppose now towards a contradiction that an infinite fair computation \( \xi \) exists. As in the proof of Proposition 4.3 we proceed by case distinction.

**Case 1** Some agent becomes disabled in \( \xi \). We claim that if an agent \( i \) becomes disabled in \( \xi \), then \( i \ominus 1 \) also becomes disabled in \( \xi \).
Indeed, otherwise by fairness, at some point in \( \xi \) after \( i \) becomes disabled, agent \( i \ominus 1 \) calls his successor \( i \), and by the above sequence of equivalences in turn becomes disabled. We conclude by induction that at some point in \( \xi \) all agents become disabled, and hence \( \xi \) terminates, which yields a contradiction.

**Case 2** No agent becomes disabled in \( \xi \). By fairness each agent calls his successor infinitely often in \( \xi \). So for every agent \( i \) there exists in \( \xi \) a sequence of calls \( (i \ominus 1, i), (i \ominus 2, i \ominus 1), \ldots, (i, i \oplus 1) \) (possibly interspersed with other calls). By the above sequence of equivalences, after this sequence of calls agent \( i \) becomes disabled, which yields a contradiction. \( \square \)
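Though not part of the paper's formal development, the ring dynamics used in these proofs are easy to check by simulation. The sketch below (our own illustration) models push-pull calls along a directed ring, where a call \( (i, i \oplus 1) \) leaves both participants with the union of their secrets; it tracks only familiarity with secrets, not the epistemic conditions \( K_i \). For six agents, one round of successor calls leaves some agent unfamiliar with some secret, while two rounds suffice.

```python
# Sketch (not from the paper): push-pull gossip on a directed ring.
# Agents 0..n-1; agent i may only call its successor (i+1) mod n.
# In a push-pull call both agents end up with the union of their secrets.

def call(secrets, i, j):
    merged = secrets[i] | secrets[j]
    secrets[i] = set(merged)
    secrets[j] = set(merged)

def round_robin(n, rounds):
    secrets = [{i} for i in range(n)]
    for _ in range(rounds):
        for i in range(n):
            call(secrets, i, (i + 1) % n)  # the call (i, i \oplus 1)
    return secrets

n = 6
after_one = round_robin(n, 1)
after_two = round_robin(n, 2)
print(all(s == set(range(n)) for s in after_one))  # False
print(all(s == set(range(n)) for s in after_two))  # True
```

One round is not enough because a secret pushed forward late in the round never reaches the agents who already made their call; a second round of successor calls closes the gap.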
Uncertainty, Risk, and Information Value in Software Requirements and Architecture

Emmanuel Letier, David Stefan, Earl T. Barr
Department of Computer Science, University College London, London, United Kingdom
{e.letier, d.stefan, e.barr}@ucl.ac.uk

ABSTRACT

Uncertainty complicates early requirements and architecture decisions and may expose a software project to significant risk. Yet software architects lack support for evaluating uncertainty, its impact on risk, and the value of reducing uncertainty before making critical decisions. We propose to apply decision analysis and multi-objective optimisation techniques to provide such support. We present a systematic method allowing software architects to describe uncertainty about the impact of alternatives on stakeholders’ goals; to calculate the consequences of uncertainty through Monte-Carlo simulation; to shortlist candidate architectures based on expected costs, benefits and risks; and to assess the value of obtaining additional information before deciding. We demonstrate our method on the design of a system for coordinating emergency response teams. Our approach highlights the need for requirements engineering and software cost estimation methods to disclose uncertainty instead of hiding it.

Categories and Subject Descriptors

D.2.11 [Software Engineering]: Software Architectures

General Terms

Design, Economics, Theory

Keywords

Software engineering decision analysis

1. INTRODUCTION

Uncertainty is inevitable in software engineering. It is particularly present in the early stages of software development when an organisation needs to make strategic decisions about which IT projects to fund, or when software architects need to make decisions about the overall organisation of a software system. In general, these decisions aim at maximising the benefits that the software system will bring to its stakeholders, subject to cost and time constraints.
Uncertainty includes uncertainty about stakeholders’ goals and their priorities, about the impact of alternatives on these goals, about the feasibility, cost, and duration of implementing the alternatives, about future changes in stakeholders’ goals, business context and technological environments, and finally uncertainty about whether the right questions about decisions are even being asked and all their options identified. In a decision problem, uncertainty is a lack of complete knowledge about the actual consequences of alternatives. For example, software architects may be uncertain about the cost and performance impact of a proposed software architecture. Given their current knowledge, they might estimate the cost to be between £1m to £3m and the achievable response time to be between 1 and 10 seconds. A risk exists when the possible consequences of a decision include undesirable outcomes, like loss or disaster [40]. Continuing the example, selecting the proposed architecture might carry the risks of the development costs exceeding £2m and the response time not achieving the minimally acceptable target of 2 seconds. In software architecture decisions, the risks include selecting an architecture that is too expensive to develop, operate, and maintain, that is delivered too late and, most importantly, that fails to deliver the expected benefits to its stakeholders. Numerous studies have shown that these risks are severely underestimated [29]. This is not surprising: uncertainty and risks are rarely considered explicitly in software engineering decisions and the software engineering literature offers no principled approaches to deal with them. In this paper, we focus on early requirements and architecture decisions, i.e. decisions about the functionality the software should provide, the quality requirements it should satisfy, its organisation into components and connectors, and its deployment topology. 
We assume stakeholders’ goals and the alternatives have been identified using appropriate requirements engineering and software architecture methods [45, 47, 63, 65]. Our objective is to support reasoning about uncertainty concerning the impact of alternatives on stakeholders’ goals. Previous work dealing with uncertainty in early requirements and architecture decisions [21, 42, 49, 62] suffers from important limitations: it uses unreliable methods for eliciting uncertainties (some approaches confuse group consensus with certainty); it tends to evaluate alternatives against vague, unfalsifiable criteria; it provides no information about the risks that accompany uncertainty; and it provides no support for assessing to what extent obtaining additional information before making a decision could reduce these risks. We address these limitations by adapting concepts and techniques from statistical decision analysis to the problems of early requirements and architecture design decisions. Decision analysis is a discipline aimed at supporting complex decisions under uncertainty with systematic methods and mathematical tools for understanding, formalising, analysing, and providing insights about the decision problem [38]. Decision analysis is used notably in the health care domain to inform decisions about the cost-effectiveness of new medical treatments based on the results of clinical trials [7]. There are exceptional uses of these methods in the context of IT investment decisions [14, 41], but despite their relevance to early requirements engineering and architecture decisions, they have been largely ignored by the software engineering community.
Our approach to early requirements and architecture decisions consists in formalising the decision problem in terms of domain-specific measurable goals, eliciting and representing uncertainties as probability distributions, simulating the impact of alternatives on goals through Monte-Carlo (MC) simulations, and shortlisting a set of alternatives using Pareto-based multi-objective optimisation techniques. We introduce the software engineering community to the value of information, a powerful notion from decision analysis that allows a decision maker faced with uncertainty to measure those uncertainties and determine which would be most profitably reduced. The paper's main contribution is a systematic method for applying statistical decision analysis techniques to early requirements and architecture decision problems (Section 3). By developing this method, we were also led to make the following contributions:

1. We define novel decision risk metrics tailored for requirements and architecture decision problems (Section 3.3).
2. We extend the concept of value of information, traditionally defined in terms of the impact of additional information on expected outcomes only, by considering how additional information reduces risk (Section 2.3).
3. We introduce the concept of Pareto-optimal strip, a generalisation of a Pareto-optimal front, designed to resist the modelling and measurement errors present in multi-objective decision problems under uncertainty (Section 3.5).

We have developed a tool supporting our approach and have applied it to data from a real system from the literature [21]. Our tool and all models discussed in this paper are available at www.cs.ucl.ac.uk/staff/e.letier/sdda.

2. COST-BENEFIT ANALYSIS UNDER UNCERTAINTY

Before considering early requirements and architecture decision problems, we first consider the simpler problem of selecting one alternative among a set of alternatives based on their costs and benefits.
Such problems assume that a model exists to calculate the costs and benefits of all alternatives in a common unit, which is usually monetary (e.g. Pound, Euro, Dollar, Yen or Rupee) [7]:

Definition. A cost-benefit decision model comprises a set $A$ of alternatives, a set $\Omega$ of model parameters, and two functions, $\text{cost}(a, \omega)$ and $\text{benefit}(a, \omega)$, that return the cost and benefit of alternative $a$ given the parameter values $\omega$. The net benefit of an alternative is then $NB(a, \omega) = \text{benefit}(a, \omega) - \text{cost}(a, \omega)$. To simplify the notation, we sometimes leave the model parameters implicit and write $NB(a)$ for $NB(a, \omega)$, and similarly $\text{benefit}(a)$ and $\text{cost}(a)$.

Example. An engineering firm is considering replacing an ageing Computer-Aided Design (CAD) application with a new system. The set of alternatives is $A = \{\text{legacy}, \text{new}\}$. The CAD application helps the firm design complicated engineering artefacts (e.g. turbines, aircraft engines, etc.) that it sells to clients. The benefit associated with each alternative $a \in A$ is a function of several variables, such as the market size and the market share that each alternative might help achieve, which is itself a function of the features of each CAD. Likewise, the cost associated with each alternative is a function of several parameters such as the development, maintenance and operational costs. The cost and benefit functions would typically also include concerns related to incremental benefit delivery, cash flow, and discount factors [14, 16]. The model parameters are the variables in these equations, i.e. those that are not further defined in terms of other variables. To keep our illustrative example simple, we hide the details of the cost and benefit functions and discuss decisions based on the results of these functions only.
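Although the paper keeps the cost and benefit functions abstract, the shape of such a model is easy to sketch in code. All parameter names and numbers below are invented for illustration; they are not the paper's CAD model.

```python
# Minimal cost-benefit decision model (illustrative numbers, not the paper's).
# omega is a dict of model parameters; the alternatives are 'legacy' and 'new'.

def benefit(a, omega):
    if a == "new":
        return omega["market_size"] * omega["share_new"]
    return omega["market_size"] * omega["share_legacy"]

def cost(a, omega):
    return omega["dev_cost_new"] if a == "new" else 0.0

def net_benefit(a, omega):
    return benefit(a, omega) - cost(a, omega)

omega = {"market_size": 10.0, "share_new": 0.5,
         "share_legacy": 0.1, "dev_cost_new": 3.0}  # all figures in £m
print(net_benefit("new", omega))     # 10*0.5 - 3 = 2.0
print(net_benefit("legacy", omega))  # 10*0.1 = 1.0
```

In a statistical CBA the entries of `omega` become random variables rather than point estimates, which is what the next section develops.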
2.1 Computing Expected Net Benefit and Risk

Traditional Cost-Benefit Analysis (CBA) computes the net benefit of each alternative using point estimates (exact numbers instead of ranges) for each of the model's parameters. Such approaches therefore ignore the often large uncertainty about parameter values: uncertainty about cost and benefit exists but is hidden. In a statistical CBA, uncertainty about the model parameters is modelled explicitly as probability distributions and used to compute the probability distribution of net benefit for each alternative. Simple, effective methods exist for eliciting the model parameters' probability distributions from decision makers [52]. These methods have sound mathematical foundations and are based on significant empirical studies of how uncertainty can be reliably elicited from humans. We will not be concerned with these methods in this paper beyond noting that they can and should be used to elicit reliable probability distributions from domain experts and decision makers. Once the model parameters' probability distributions have been estimated, one needs to compute the probability distributions for the cost, benefit and net benefit of each alternative. It is generally not possible to compute these probability distributions analytically because the model equations and parameters' probability distributions can be arbitrarily complicated. Monte-Carlo (MC) simulations can, however, compute good approximations. The underlying principle is to sample a large number of simulation scenarios, generated by model parameter values drawn from their probability distributions, and to compute the net benefit of each alternative in each scenario. The result of a MC simulation of a cost-benefit decision model is an $M \times N$ matrix $NB$ where $M$ is the number of simulated scenarios and $N$ is the number of alternatives in $A$. The element $NB[i, j]$ denotes the net benefit of alternative $j$ in the $i^{th}$ scenario.
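The sampling loop and the resulting \(NB\) matrix can be sketched with the standard library alone. The truncated-normal inputs below are our assumptions, chosen to roughly match the means and 90% intervals of Figure 1a (standard deviations back-calculated as range/3.29); the code also computes the expected net benefit, loss probability, and probable loss magnitude statistics that the paper defines next.

```python
import random

# Sketch of a statistical CBA by Monte-Carlo simulation (our illustration,
# not the paper's exact model). Figures in £m.

def truncated_normal(rng, mean, sd):
    return max(rng.gauss(mean, sd), 0.0)  # normal truncated at zero

def simulate(m=100_000, seed=1):
    """Return the M x N net-benefit matrix; columns: [new, legacy]."""
    rng = random.Random(seed)
    nb = []
    for _ in range(m):
        nb_new = (truncated_normal(rng, 5.0, 2.43)     # benefit(new)
                  - truncated_normal(rng, 3.0, 1.22))  # cost(new)
        nb_legacy = truncated_normal(rng, 1.0, 0.06)   # benefit(legacy) - 0
        nb.append([nb_new, nb_legacy])
    return nb

def column(nb, j):
    return [row[j] for row in nb]

def enb(xs):  # expected net benefit
    return sum(xs) / len(xs)

def lp(xs):   # loss probability P(NB < 0)
    return sum(x < 0 for x in xs) / len(xs)

def plm(xs):  # probable loss magnitude E[NB | NB < 0]
    losses = [x for x in xs if x < 0]
    return sum(losses) / len(losses) if losses else 0.0

nb = simulate()
new = column(nb, 0)
# ENB(new) should be close to benefit - cost = £2m; LP(new) is substantial.
print(round(enb(new), 1), round(lp(new), 2), round(plm(new), 2))
```

With these assumed distributions the new system shows both a high expected net benefit and a sizeable loss probability, matching the qualitative picture the paper draws from Figure 1.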
From the result of a MC simulation, one can, for each alternative, estimate measures of interest to decision makers such as the expected net benefit (ENB), loss probability (LP), and probable loss magnitude (PLM), defined as follows:
\[ \text{ENB}(a) = E[NB(a)] \]
\[ LP(a) = P(NB(a) < 0) \]
\[ \text{PLM}(a) = E[NB(a) \mid NB(a) < 0] \]
where $E[X]$ denotes the expectation of a random variable $X$.

Example. Figure 1 shows the results of a statistical CBA for our illustrative example. We assume cost and benefit have normal distributions truncated at zero. Figure 1a shows the mean and 90% confidence interval of these distributions. A 90% confidence interval means that decision makers believe there is a 90% chance that the actual costs and benefits will fall within these ranges. Figure 1b shows the resulting expected net benefit, loss probability, and probable loss magnitude of each alternative. It shows that developing the new CAD has a high expected net benefit but also high risks in terms of loss probability and probable loss magnitude. In a traditional CBA, these risks would not have been quantified and would, most likely, have been underestimated if not entirely ignored.

2.2 The Expected Value of Information

If, before making a decision, decision makers could pay someone to obtain additional information that reduces uncertainty about the costs and benefits of alternatives, how much would that information be worth to them? It is possible to answer this question by computing the expected value of information [37]. Intuitively, information that reduces uncertainty may lead decision makers to select an alternative with higher expected net benefit than the alternative they would have selected without the additional information. The expected value of information is the expected gain in net benefit between the alternatives selected with and without the additional information.
The expected value of information for the different model variables tells decision makers to focus on reducing uncertainty about information with high expected value and to avoid wasting effort reducing uncertainty about information with low expected value (or at least not to pay more for information than its expected value). Computing the expected value of information can yield surprising results. Hubbard reports that he has applied information value theory to 20 IT project business cases (each having between 40 and 80 variables) and observed the following pattern: (1) the majority of variables had an information value of zero; (2) the variables that had high information value were routinely those that the client never measured; (3) the variables that clients used to spend the most time measuring were usually those with a very low (even zero) information value [41]. The contrast between the second and third observations constitutes what Hubbard has called the IT measurement inversion paradox [39]. He cites the large effort spent by one of his clients on function point analysis [4] — a popular software development productivity and cost estimation method — as an example of measurement with very low information value, because its cost estimates were no more accurate or precise than the project managers' initial estimates. The expected value of information is defined with respect to an outcome to be maximised and assumes a default decision strategy of maximising the expected outcome. In this section, the outcome to be maximised is the net benefit NB, but the definition applies to any other outcome, e.g. maximising software reliability. Information is valued in the same units as the outcome that it measures. This makes measuring information value with respect to net benefit particularly attractive, as it assigns a financial value to information.
The expected value of total perfect information (EVPTI) is the expected gain in net benefit from using perfect information about all model parameters:
$$\text{EVPTI} = E_{\Omega}\left[\max_{a \in A} \text{NB}(a, \Omega)\right] - \max_{a \in A} E_{\Omega}[\text{NB}(a, \Omega)].$$
In this definition, the second term denotes the highest expected net benefit given the current uncertainty about the model parameters $\Omega$, and the first term the expectation, over all possible values $\omega$ of the parameters $\Omega$, of the highest net benefit when the parameter values are $\omega$ (in other words, the expected net benefit from obtaining perfect information). Observe how the two terms invert the application of the expectation and maximisation operators. It can be shown that EVPTI is always positive or zero. The EVPTI can be estimated from the output $\hat{\text{NB}}$ of a MC simulation as the mean of the per-scenario maxima minus the best per-alternative mean:
$$\widehat{\text{EVPTI}} = \frac{1}{M}\sum_{i=1}^{M} \max_{j \in [1,N]} \hat{\text{NB}}[i, j] \;-\; \max_{j \in [1,N]} \frac{1}{M}\sum_{i=1}^{M} \hat{\text{NB}}[i, j].$$
As an illustration, Figure 2 shows how EVPTI is computed from a small MC simulation with 5 scenarios (the actual MC simulation used to produce the results in Figure 1 consists of $10^5$ scenarios).

<table>
<thead>
<tr> <th></th> <th>Mean</th> <th>90% CI</th> </tr>
</thead>
<tbody>
<tr> <td>benefit(new)</td> <td>£5m</td> <td>£1m–£9m</td> </tr>
<tr> <td>cost(new)</td> <td>£3m</td> <td>£1m–£5m</td> </tr>
<tr> <td>benefit(legacy)</td> <td>£1m</td> <td>£0.9m–£1.1m</td> </tr>
<tr> <td>cost(legacy)</td> <td>0</td> <td>0</td> </tr>
</tbody>
</table>

(a) Mean and 90% Confidence Intervals (CI).

Figure 1: Statistical Cost-Benefit Analysis for deciding whether to replace a legacy application by a new system. Panel (a) gives the input distributions; panel (b) reports the Expected Net Benefit (ENB), Loss Probability (LP), and Probable Loss Magnitude (PLM) of each alternative (ENB is £2m for the new system and £1m for the legacy one); panel (c) reports the information value analysis, in which the expected value of total perfect information (EVPTI) is £0.64m.
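This estimator is straightforward to implement over a simulated net-benefit matrix. A small self-contained sketch (the two-scenario matrix is invented for illustration):

```python
# EVPTI estimated from an M x N net-benefit matrix (rows: scenarios,
# columns: alternatives): mean of the row maxima minus the best column mean.

def evpti(nb_matrix):
    m = len(nb_matrix)
    n = len(nb_matrix[0])
    mean_of_max = sum(max(row) for row in nb_matrix) / m
    best_expected = max(sum(row[j] for row in nb_matrix) / m
                        for j in range(n))
    return mean_of_max - best_expected

# Two alternatives, two equally likely scenarios: with perfect information
# we always pick the scenario winner (3 or 2); without, the best column
# mean is only 1.5.
nb = [[3.0, 1.0],
      [0.0, 2.0]]
print(evpti(nb))  # (3+2)/2 - max(1.5, 1.5) = 1.0
```

When one alternative is best in every scenario, the row maxima coincide with that alternative's column and the estimate is zero, reflecting that perfect information cannot change the decision.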
The definition extends to perfect information about a subset of the parameters. Let $\Theta \subseteq \Omega$. The expected value of partial perfect information (EVPPI) about $\Theta$ is the expected gain in net benefit from using perfect information about $\Theta$ only:
$$\text{EVPPI}(\Theta) = E_{\Theta}\left[\max_{a \in A} E_{\Omega \mid \Theta}[\text{NB}(a, \Omega)]\right] - \max_{a \in A} E_{\Omega}[\text{NB}(a, \Omega)].$$
Figure 1c reports the information value analysis for our example; the expected value of total perfect information (EVPTI) is £0.64m.

Note that EVPTI and EVPPI compute the expected value of information about some parameters before the values of these parameters are revealed. Once the actual values are revealed, they may increase or decrease expected net benefit. The EVPTI and EVPPI merely compute how much the expected net benefit will change on average. It is these averages that are always positive or zero. The revelation of new information can also both increase or decrease uncertainty about the parameters' true values. When this happens, an increase of uncertainty is most likely caused by a failure to mitigate overconfidence biases during the elicitation of probability distributions. For example, decision makers with overconfidence biases will express 90% confidence intervals that are narrower than their true uncertainty.
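EVPPI must itself be estimated numerically. A common one-level approximation (our sketch, not necessarily the paper's exact algorithm, though in the same spirit as its segment-based computation) sorts the scenarios by the simulated value of the parameter of interest, bins them, and picks the best alternative per bin:

```python
import random

# Binned one-level EVPPI sketch: within each bin of theta values the
# conditional expectation E[NB | theta] is treated as constant.

def evppi_binned(theta, nb_matrix, bins=10):
    m, n = len(nb_matrix), len(nb_matrix[0])
    order = sorted(range(m), key=lambda i: theta[i])
    size = m // bins
    gain = 0.0
    for b in range(bins):
        seg = order[b * size:(b + 1) * size] if b < bins - 1 else order[b * size:]
        # best alternative given (approximate) knowledge of theta
        gain += max(sum(nb_matrix[i][j] for i in seg) for j in range(n))
    best_overall = max(sum(nb_matrix[i][j] for i in range(m)) for j in range(n))
    return (gain - best_overall) / m

# Synthetic example where theta fully determines the winner, so EVPPI is large:
rng = random.Random(0)
theta = [rng.random() for _ in range(10_000)]
nb = [[t, 1.0 - t] for t in theta]
print(round(evppi_binned(theta, nb, bins=50), 2))  # close to E[max(t, 1-t)] - 0.5 = 0.25
```

The bin count trades bias against variance; more sophisticated EVPPI estimators exist in the health-economics literature, but the binning idea is enough to convey how a segment-wise best choice is compared with the single best overall choice.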
Overconfidence bias is a serious problem because the computations of expected net benefit, risk, and information value all assume the initial probability distributions are accurate. This observation reinforces the importance of using appropriate uncertainty elicitation techniques designed to reduce the effects of overconfidence and other biases [52].

Other approaches to assessing the importance of uncertain parameters exist. Possibly the most common in software engineering consists in measuring, for each model parameter taken individually, the change of \( NB \) caused by varying that parameter over its plausible range. Such one-parameter-at-a-time sensitivity measures, however, do not tell decision makers how information would change the decision or its risks. To reason about risk, consider a generic risk measure \( Risk(a) = P(F(a, \Omega)) \), where \( F(a, \omega) \) is a boolean function that is true when alternative \( a \) fails when the parameter values are \( \omega \). For example, for \( LP(a) \), \( F(a, \omega) \) is \( NB(a, \omega) < 0 \). Our definition can easily be extended to risk measures, such as \( PLM \), defined over real-valued rather than boolean \( F \) functions. Let \( a^* \) be an alternative that maximises expected \( NB \). If there is more than one alternative with equal highest expected \( NB \), \( a^* \) is one with minimal risk.
Let \( a^*(\omega) \) and \( a^*(\theta) \) be alternatives that maximise expected \( NB \) when \( \Omega = \omega \) or \( \Theta = \theta \), respectively. The expected impact of total (respectively, partial) perfect information on \( Risk \) is the expected difference between \( Risk(a^*(\Omega)) \) (respectively, \( Risk(a^*(\Theta)) \)) and \( Risk(a^*) \):
\[ \Delta_{TPI}(Risk) = E[Risk(a^*(\Omega))] - Risk(a^*) \]
\[ \Delta_{PPI}(\Theta)(Risk) = E[Risk(a^*(\Theta))] - Risk(a^*). \]
The \( \Delta_{TPI}(Risk) \) can be estimated from the matrices \( \hat{NB} \) and \( \hat{F} \) generated during the Monte-Carlo simulation:
\[ \widehat{\Delta}_{TPI}(Risk) = \frac{1}{M}\sum_{i=1}^{M} \hat{F}\left[i, \operatorname{which.max}_{j \in [1,N]} \hat{NB}[i, j]\right] - \frac{1}{M}\sum_{i=1}^{M} \hat{F}[i, a^*], \]
where \( \operatorname{which.max}_{j \in [1,N]} \hat{NB}[i, j] \) denotes the column index of the alternative with highest net benefit in row \( i \). To compute \( \Delta_{PPI}(\Theta)(Risk) \), we have extended the algorithm for computing \( EVPPI \) from the Monte-Carlo simulation data \( (\hat{\Theta}, \hat{NB}, \hat{F}) \). Our extension applies the same principle as the one used to compute \( \Delta_{TPI}(Risk) \) to compute the \( \Delta \) in \( Risk \) in each segment of \( \Theta \) values, and then returns the weighted average of those \( \Delta \) over all segments.

**Example** Figure 1c shows the expected value of information in our illustrative example. The EVPTI is £0.64m, 32% of the expected net benefit. Measuring EVPPI shows that reducing uncertainty about the new application's benefits has high value and reduces most of the risks, whereas reducing uncertainty about its cost has almost no value and little impact in reducing loss probability.

3. SOFTWARE DESIGN DECISIONS UNDER UNCERTAINTY

Software design decisions are usually more complex than the simple cost-benefit decision problems of the previous section.
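The \( \Delta_{TPI}(Risk) \) estimator described above can be sketched directly from the two simulation matrices. The tiny two-scenario example is invented for illustration:

```python
# Sketch of the Delta_TPI(Risk) estimate from MC matrices NB and F,
# where F[i][j] is True when alternative j fails in scenario i.

def delta_tpi_risk(nb, f):
    m, n = len(nb), len(nb[0])
    # a*: alternative with highest expected net benefit under current uncertainty
    a_star = max(range(n), key=lambda j: sum(nb[i][j] for i in range(m)))
    risk_a_star = sum(f[i][a_star] for i in range(m)) / m
    # with perfect information we pick the best alternative in each scenario
    risk_pi = sum(f[i][max(range(n), key=lambda j: nb[i][j])]
                  for i in range(m)) / m
    return risk_pi - risk_a_star

nb = [[3.0, 1.0],
      [-1.0, 0.5]]
f = [[False, False],
     [True, False]]   # alternative 0 fails (NB < 0) in scenario 2
print(delta_tpi_risk(nb, f))  # -0.5: perfect information avoids the failing choice
```

Here alternative 0 has the highest expected net benefit yet fails in half of the scenarios; a scenario-wise (perfectly informed) choice never fails, so the expected impact of total perfect information on risk is negative, i.e. risk-reducing.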
Complexity arises in the solution space, in the objective space, and in the models that relate the two. In the solution space, instead of involving the selection of one alternative from a set, software design decisions typically involve a multitude of interrelated design decisions concerning choices among alternative architectural styles, design patterns, technologies, and responsibility assignments [63, 65]. This leads to an exponential increase in the number of candidate solutions; for example, if the problem involves 10 design decisions with 3 options each, the number of candidate architectures is \( 3^{10} \) (59,049). The solution space for software design decisions is therefore several orders of magnitude larger than the solution spaces of other domains applying decision analysis techniques — for example, in healthcare economics the solution space rarely exceeds 5 different treatment options [7]. In the objective space, software design decisions typically involve multiple goals that are generally conflicting, hard to define precisely, and not easily comparable (unlike cost and benefit, they have different units of measure). Examples of goals include concerns related to security, performance, reliability, usability, and the improved business outcomes generated by the software. Clarifying these goals and understanding their trade-offs is a significant part of supporting software design decisions. The goals in healthcare decision problems are at least as complex as software design decision goals. There has, however, been a much greater effort at defining these goals and their trade-offs than for software engineering problems. This has resulted in measures such as the quality-adjusted life year used to compare alternative treatment options [7]. The models relating the design decision options to stakeholders’ goals are often hard to build and validate, and include a very large number of parameters.
They are typically composed of models of the software system (to evaluate the impact of software design decisions on software qualities such as its performance and reliability) and models of the application domain (to evaluate the impact of software and system design decisions on stakeholders' goals). To deal with this complexity, we propose the following process:

1. Defining the architecture decision model
2. Defining a cost-benefit decision model
3. Defining the decision risks
4. Eliciting parameter values
5. Shortlisting candidate architectures
6. Identifying closed and open design decisions
7. Computing expected information value

Steps 1 and 2 correspond to standard model elaboration activities performed notably in the ATAM [45] and CBAM [42, 49] approaches. Steps 3 and 4 are specific to architecture decisions under uncertainty.

<table>
<thead>
<tr> <th>Scenario</th> <th>\( \hat{NB}(\text{new}) \)</th> <th>\( \hat{NB}(\text{legacy}) \)</th> <th>Max</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>£1.53m</td> <td>£0.03m</td> <td>£1.53m</td> </tr>
<tr> <td>2</td> <td>£0.13m</td> <td>£0.06m</td> <td>£0.13m</td> </tr>
<tr> <td>3</td> <td>£4.05m</td> <td>£1.00m</td> <td>£4.05m</td> </tr>
<tr> <td>4</td> <td>£6.13m</td> <td>£1.06m</td> <td>£6.13m</td> </tr>
<tr> <td>5</td> <td>-£1.39m</td> <td>£1.07m</td> <td>£1.07m</td> </tr>
<tr> <td>Mean</td> <td>£2.05m</td> <td>£1.02m</td> <td>£2.71m</td> </tr>
</tbody>
</table>

**Figure 2**: Illustration of a MC simulation and computation of \( EVPTI \). The second and third columns show the \( NB \) for the new and legacy applications in 5 random scenarios. Over these five scenarios, the new application has the highest expected net benefit (£2.05m). The fourth column shows the maximal possible net benefit in each scenario and its mean value over all five scenarios (£2.71m). We thus have \( EVPTI = £2.71m - £2.05m = £0.66m \).
Step 5 extends Pareto-based multi-objective optimisation techniques to decisions under uncertainty. Step 6 identifies closed and open design decisions from this shortlist. Step 7 computes expected information values. At the end of these steps, if some model parameters or variables have high expected information value, software architects may choose to elicit further information and refine corresponding parts of their models to improve their decisions and reduce their risks. In practice, some of these steps may be intertwined. For example, the elaboration of the architecture decision model and the cost-benefit model in steps 1 and 2 is likely to be interleaved rather than performed sequentially [51].

**SAS Case Study.** We apply our method on a case study of software architecture decisions presented at ICSE 2013 [21]. The software to be designed is a Situational Awareness System (SAS) whose purpose is to support the deployment of personnel in emergency response scenarios such as natural disasters or large-scale riots. SAS applications would run on Android devices carried by emergency crews and would allow them to share and obtain an assessment of the situation in real time (e.g., interactive overlay on maps), and to coordinate with one another (e.g., send reports, chat, and share video streams). A team of academics and engineers from a government agency previously identified a set of design decisions, options and goals to be achieved by this system (see Figure 3). They also defined models for computing the impact of options on the goals and documented uncertainty about model parameters using three-point estimates, a method commonly used by engineers and project managers that consists in eliciting a pessimistic, most likely, and optimistic value for each model parameter. They then applied a fuzzy-logic-based approach, called GuideArch, to support design decisions under uncertainty [21].
To facilitate comparison between the approaches, we will apply our method on the same model and data as those used by the GuideArch method [20].

### 3.1 Defining the Architecture Decision Model

The first step consists in identifying the decisions to be taken together with their options, defining the goals against which to evaluate the decisions, and developing a decision model relating alternative options to the goals [35, 45]. The result is a multi-objective architecture decision model (MOADM).

**Definition.** A multi-objective architecture decision model is a tuple $(D, C, Ω, G, v)$, where

- $D$ is a set of design decisions where each decision $d \in D$ has several options $O_d$; a candidate architecture is a function $a : D \rightarrow \cup_{d \in D} O_d$ that maps each decision $d$ to a single option in $O_d$; the set of all candidate architectures is noted $A$;
- $C$ is a set of predicates capturing dependency constraints between design decisions such as prerequisite, mutual exclusion, and mutual inclusion relations [59, 69];
- $Ω$ is a set of model parameters;
- $G$ is a set of optimisation goals, partitioned into goals to be maximised ($G_+$) and goals to be minimised ($G_-$);
- $v$ is a goal-attainment function such that $v(g, a, \omega)$ denotes the attainment of goal $g$ by architecture $a$ given parameter values $\omega$.

In the SAS model, goal attainment is additive:

\[ v(g, a, \omega) = \sum_{d \in D} contrib(g, a(d)) \]

where $contrib(g, o)$ are model parameters denoting the contribution of option \( o \) to goal \( g \). For example, \( \text{contrib}(\text{BatteryUsage}, \text{GPS}) \) denotes the contribution of GPS to battery usage. Since the model has 25 options and 7 goals, we have \( 25 \times 7 \) (175) parameters.

**Figure 3:** Overview of the SAS Case Study [21].

Like all models, this model is imperfect. For example, evaluating the response time of an architecture by summing up the response times of its individual components is a basic performance model that will only give a rough approximation of an architecture's response time. Evaluating the reliability of an architecture by summing the reliability of its components is most likely an inaccurate measure of the true reliability.
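The enumeration of candidate architectures and the additive goal-attainment model can be sketched in a few lines of Python. The two decisions, their options, and the `contrib` values below are invented for illustration, not taken from the full 25-option SAS model.

```python
import itertools

# Two hypothetical design decisions with their options.
decisions = {
    "LocationFinding": ["GPS", "Radio"],
    "Connectivity": ["3G", "WiFi"],
}

# contrib[(goal, option)]: contribution of an option to a goal
# (one sampled value per parameter; invented numbers).
contrib = {
    ("BatteryUsage", "GPS"): 10.0, ("BatteryUsage", "Radio"): 4.0,
    ("BatteryUsage", "3G"): 6.0, ("BatteryUsage", "WiFi"): 3.0,
}

def v(goal, architecture):
    # Additive goal-attainment model: sum the contributions of the
    # options selected by the candidate architecture.
    return sum(contrib[(goal, option)] for option in architecture.values())

# A candidate architecture maps each decision to one of its options; the
# candidate space is the cross product of the option sets.
architectures = [dict(zip(decisions, combo))
                 for combo in itertools.product(*decisions.values())]
```

The candidate space grows multiplicatively: two binary decisions already give four architectures, and ten three-way decisions give \(3^{10}\).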
Another significant problem with this model is that the goals have no clear definition. For example, what is meant by reliability and battery usage? \( \ldots \) In order to separate issues concerning the validity of the SAS decision model from discussions concerning the benefits of alternative decision support methods, we temporarily assume this MOADM to be valid. We revisit this assumption after having compared the two decision methods on the same model. ### 3.2 Defining the Cost-Benefit Model Multi-objective decision problems increase in difficulty as the number of objectives increases [34]. Since a MOADM could have a large number of optimisation goals, one way to simplify the problem is to convert the MOADM into a simpler cost-benefit decision model [42, 49]. The cost-benefit model allows software architects to relate design decisions and levels of goal satisfaction to financial goals of direct interest to the project clients and stakeholders. The set of alternatives of the cost-benefit decision model is the set of candidate architectures in \( A \) satisfying the constraints in \( C \). Software architects, in collaboration with project stakeholders, define the cost and benefit functions. The parameters of the cost-benefit decision model include the parameters \( \Omega \) of the architecture decision model plus additional parameters involved in the definition of the cost and benefit functions. The cost function would typically include software development, deployment, operation and maintenance costs but possibly also other costs incurred in the application domain such as salary, material, legal, environmental, and reputation costs. The benefit function would model estimated financial values associated with achieved levels of goal attainment. A problem with many cost-benefit models is that they exclude from their equations costs and benefits that are perceived to be too hard to quantify and measure. 
For example, they omit the costs and benefits related to security, usability, company reputation, etc. To be useful, cost-benefit models should include the hard-to-measure factors that are important to the decision so that their uncertainty can be assessed and analysed instead of being ignored. Systematic methods for transforming vague qualitative goals into meaningful measurable objectives exist and have been used successfully in many industrial projects [1, 31, 41]. Many other projects however ignore these methods. A popular alternative is to compute for each alternative a utility score defined as the weighted sum of the stakeholders' preferences for each goal:

\[ U(a, \omega) = \sum_{g \in G} w(g) \times \text{Pref}_g(v(g, a, \omega)) \]

where the goal weights \( w(g) \) and preference functions \( \text{Pref}_g(x) \) are elicited from stakeholders using appropriate techniques [61]. The goal preference values \( \text{Pref}_g(x) \) are real numbers in \([0, 1]\) denoting the level of preference stakeholders associate with a value \( x \) for goal \( g \). A preference of 1 denotes the highest possible stakeholders' satisfaction; a preference of 0 denotes the worst. For example, if \( g \) is the response time of a web application, a preference of 1 may be given to an average response time of 1 second or less and of 0 to an average response time of 10 seconds or above. Preference functions are often constructed as linear or s-shaped functions between the goal attainments corresponding to the lowest and highest preference [56]. This approach, or a close variant, is found in many requirements engineering methods [3, 5, 24, 32, 64]. An advantage of defining utility as a weighted sum of goal preferences is that it is extremely easy to apply. Its biggest inconvenience, however, is that the utility scores correspond to no physical characteristics in the application domain, making them hard to interpret and impossible to validate empirically.
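The weighted-sum utility and the response-time example above can be sketched as follows; the weights, goals, and preference thresholds are illustrative, not elicited values.

```python
def linear_pref(best, worst):
    # Linear preference function: 1 at the `best` attainment, 0 at `worst`,
    # clamped to [0, 1] outside that range.
    def pref(x):
        t = (x - worst) / (best - worst)
        return min(1.0, max(0.0, t))
    return pref

# Response time in seconds: 1s or less -> preference 1, 10s or more -> 0.
pref_response = linear_pref(best=1.0, worst=10.0)

weights = {"ResponseTime": 0.7, "BatteryUsage": 0.3}  # illustrative w(g)
prefs = {"ResponseTime": pref_response,
         "BatteryUsage": linear_pref(best=0.0, worst=100.0)}

def utility(attainment):
    # U(a, omega) = sum over goals of w(g) * Pref_g(v(g, a, omega)).
    return sum(weights[g] * prefs[g](attainment[g]) for g in weights)

u = utility({"ResponseTime": 2.5, "BatteryUsage": 40.0})
```

Note that `u` is a pure score: nothing in the application domain corresponds to it, which is exactly the falsifiability problem discussed next.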
In other words, the utility functions are not falsifiable [55]. In contrast, in other domains, e.g. in healthcare economics, utility functions are not restricted to weighted sums and they denote domain-specific measures — such as the quality-adjusted life year — making it possible to refute and improve them based on empirical evidence [7]. When a utility function exists, whether the utility is falsifiable or not, it is possible to convert a utility score into financial units using a *willingness-to-pay ratio* \( K \) such that the benefit of an alternative is the product of its utility and \( K \) [7]: \( \text{benefit}(a, \omega) = K \times U(a, \omega) \). This approach allows us to apply our statistical cost-benefit analysis method on any requirements and architecture models developed using a utility-based approach.

**SAS Case Study.** The GuideArch approach assigns to each architecture \( a \) a score \( \text{GuideArch}(a, \omega) \) that is the probability that \( a \) successfully meets the requirements of the project. To facilitate exposition and relation to other work, we convert the GuideArch score to a utility score. The GuideArch model views its preference functions as normalisation functions expressing the percentage of goal attainment relative to the highest attainment achievable within the model. The SAS model utility score mixes both cost and benefit factors. For our experiment, we have thus assumed this utility score corresponds to the net benefit of our cost-benefit model, i.e. \( \text{NB}(a, \omega) = U(a, \omega) \), without distinguishing the cost and benefit parts of the utility function.

### 3.3 Defining Design Decision Risks

Software design decisions should take into consideration the risks associated with each candidate architecture. In a cost-benefit model, these risks can be measured using the loss probability and probable loss magnitude introduced in Section 2.
Decision makers can introduce additional risk measures related to net benefits, for example measuring the probability that the net benefit or return-on-investment (i.e. the ratio between net benefit and cost) is below some threshold. In addition to risk measures related to net benefits, software architects may be interested in risks relative to the goals of the multi-objective architecture decision model:

**Goal Failure Risks.** The risk for an architecture \( a \) to fail to satisfy a goal \( g \), noted \( \text{GRisk}(g, a) \), is the probability that \( a \) fails to achieve some minimum level of goal attainment:

\[ \text{GRisk}(g, a) = P(v(g, a, \omega) < \text{must}(g)) \]
where \( must(g) \) is the level of goal attainment below which stakeholders would consider the goal to be unrealised. This definition assumes \( g \) is to be maximised; a symmetric definition can be given for goals to be minimised. Eliciting the \( must(g) \) values is part of many requirements engineering methods [31, 47, 56].

**Project Failure Risk.** The risk for an architecture \( a \) to fail the whole project, noted \( PRisk(a) \), is defined as the risk of failing to satisfy at least one of its goals. If the goals are statistically independent, we have

\[ PRisk(a) = 1 - \prod_{g \in G} (1 - GRisk(g, a)). \]

The project failure risk is defined with respect to the goals defined in the multi-objective architecture decision model. These goals may include concerns related to development costs and schedule.

**SAS Case Study.** The original SAS model has no definition of risk and does not specify \( must \) values for any of its goals. We thus decided to define the \( must(g) \) values relative to the goal attainment levels of a baseline architecture whose goal attainments would be equal to those of the existing system. The new system has to be at least as good as the current system on all goals, otherwise the project would be a failure. We selected as baseline the architecture with the lowest expected net benefit from among the top 5%.
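Both risk measures can be estimated from the same MC samples used for net benefits. The sketch below propagates hypothetical triangular distributions (the probabilistic reading of the three-point estimates used throughout the paper) for one architecture; the goal names, distributions, and `must` thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# MC samples of v(g, a, omega) for one candidate architecture; triangular
# (pessimistic, most likely, optimistic) distributions, invented numbers.
# Ramp-up time is a goal to be minimised, so it is negated here to keep a
# uniform "larger is better" orientation.
samples = {
    "Reliability": rng.triangular(0.90, 0.97, 0.99, size=n),
    "RampUpTime": -rng.triangular(2.0, 4.0, 8.0, size=n),
}
must = {"Reliability": 0.95, "RampUpTime": -6.0}

# GRisk(g, a): probability of falling below the must(g) threshold.
grisk = {g: float(np.mean(samples[g] < must[g])) for g in must}

# PRisk(a): probability of failing at least one goal, assuming the goals
# are statistically independent.
prisk = 1.0 - np.prod([1.0 - r for r in grisk.values()])
```

By construction, `prisk` is never smaller than the largest individual goal failure risk.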
### 3.4 Eliciting Parameter Values

The following step consists in eliciting probability distributions (or single values in case parameters are known with certainty) for all parameters in the architecture and cost-benefit decision models. As mentioned in Section 2, simple, reliable methods exist for performing this elicitation [52].

**SAS Case Study.** The SAS design team elicited uncertainty for all 175 model parameters through a three-point estimation method that consists in eliciting for each parameter its most likely, lowest and highest values. They interpreted these three-point estimates as triangular fuzzy value functions, which are equivalent to triangular probability distributions. They also elicited point-based values for each of the 7 goal-weight parameters (unlike our approach, GuideArch does not allow these weights to be uncertain).

### 3.5 Shortlisting Candidate Architectures

The next step consists in shortlisting candidate architectures to be presented to software architects for the final decision and for computing expected information value. For this step, software architects have to decide what shortlisting criteria to use. The default is to shortlist candidate architectures that maximise expected net benefit and minimise project failure risk. Software architects may, however, select other risk-related criteria, such as the probabilities that the project costs and schedule exceed some thresholds, or that the loss probability or probable loss magnitude do so. Software architects may select any number of criteria. However, keeping the number of criteria below 3 facilitates the generation and visualisation of the shortlist. Architects may also specify, for each criterion, a resolution margin to resist spurious differentiation when comparing alternatives.
For example, setting the resolution margin for financial objectives, such as expected net benefit, to £10,000 causes the shortlisting process to ignore differences of less than £10,000 when comparing candidate architectures' net benefits. These margins make our shortlisting process robust against statistical error in the MC simulation and modelling error due to simplifications in the model equations. Shortlisting candidate architectures based on strict Pareto-optimality without our resolution margin can cause a priori rejection of a candidate architecture due to insignificant differences in objective attainment. Our tool then computes the shortlist as the set of Pareto-optimal candidate architectures for the chosen criteria and resolution margins. More precisely, a candidate architecture \( a \) is shortlisted if there is no other candidate architecture \( a' \) that outperforms \( a \) by the resolution margins on all criteria. If the MOADM includes a non-empty set \( C \) of dependency constraints between design decisions, any architecture that violates these constraints is automatically excluded. Our shortlisting approach is an extension of the standard notion of Pareto-optimality [34] used to deal with optimisation problems involving uncertainty. In the objective space, the outcomes of the candidate architectures for each criterion form a Pareto-optimal strip, or a Pareto-optimal front with margins. Our implementation identifies the Pareto-optimal alternatives through an exhaustive exploration of the design space. It first computes the \( NB \) matrix for the full design space using MC simulation, then uses a classic algorithm for extracting Pareto-optimal sets [46] that we have extended to deal with resolution margins. Our implementation is in R, an interpreted programming language for statistical computing.
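The margin-based dominance test itself fits in a few lines. The paper's implementation is in R; this Python sketch uses invented (expected net benefit, risk) pairs, with risk negated so that both criteria are maximised.

```python
def outperforms(x, y, margins):
    # x outperforms y when it beats y by at least the resolution margin
    # on every criterion (criteria oriented so that larger is better).
    return all(xi >= yi + m for xi, yi, m in zip(x, y, margins))

def shortlist(candidates, margins):
    # Pareto-optimal strip: keep every candidate that no other candidate
    # outperforms by the margins on all criteria.
    return [c for c in candidates
            if not any(outperforms(d, c, margins)
                       for d in candidates if d is not c)]

# (expected net benefit in £m, negated project failure risk); invented values.
cands = [(2.05, -0.09), (2.04, -0.02), (1.20, -0.01), (1.19, -0.50)]
kept = shortlist(cands, margins=(0.1, 0.01))
```

Here `kept` retains the first three candidates, which only differ within the margins on at least one criterion, while the fourth is outperformed by the margins on both criteria and is excluded.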
For the SAS model, on a standard laptop, the MC simulations of all 6912 alternatives take around 5 minutes (for a MC simulation with \( 10^4 \) scenarios) and the identification of the Pareto-optimal strip less than a second. Other industrial architecture decision problems have a design space whose size is similar to or smaller than that of the SAS [9, 43, 44]. For example, the application of CBAM to the NASA Earth Observation Core System (ECS) [43] involves 10 binary decisions (thus 1024 alternative architectures against 6912 for the SAS). Our exhaustive search approach is thus likely to be applicable to most architecture decision problems. The scalability bottleneck of our approach is more likely to be related to the elaboration of the decision models (steps 1 and 2) and the number of parameters to be elicited from stakeholders (step 4) than to the automated shortlisting step. If, however, a need to increase the performance and scalability of our shortlisting technique appears, one could port our implementation to a faster compiled programming language and use evolutionary algorithms commonly used in search-based software engineering [33], such as NSGA2 [15], to deal with much larger design spaces (but at the cost of losing the guarantee of finding the true Pareto-optimal strip).

**SAS Case Study.** Figure 4 shows the Pareto-optimal strip for the SAS candidate architectures evaluated with respect to expected net benefit and project failure risk. The resolution margins for the two criteria are set at 0.1 and 1%, respectively. The red crosses show the 9 architectures shortlisted by our approach, the blue squares the top 10 architectures of the GuideArch approach, and the grey circles all other candidate architectures.
In our shortlist, 5 out of 9 candidate architectures are in the Pareto strip but not on the Pareto front; they would have groundlessly been excluded from the shortlist if we had followed the traditional approach of retaining solutions in the Pareto-optimal front only. We observe important differences between our shortlist and the top 10 architectures GuideArch identifies: our shortlist identifies candidate architectures with slightly higher expected net benefit and much lower project risk than GuideArch's top 10 architectures. We explain the difference between the two shortlists as follows. GuideArch did not consider project failure risk as we defined it in Section 3.3 (or any other risk) in their architecture evaluations. It is therefore not surprising that its top 10 architectures perform poorly with respect to this criterion. Instead of evaluating candidate architectures against their expected net benefit (or equivalently their utility score) and some measure of risk, GuideArch ranks candidate architectures according to a single criterion corresponding to an uncertainty-adjusted score defined as the weighted sum of an architecture's pessimistic, most likely, and optimistic scores.

### 3.6 Open and Closed Design Decisions

Shortlisting a set of candidate architectures may close a set of design decisions. A design decision is closed if all shortlisted architectures agree on the option to be selected for this decision; a design decision is open if the shortlisted architectures contain alternative options for that decision. Presenting the open and closed design decisions gives decision makers a useful view of the shortlisted architectures. If the shortlist is large, it can also be organised into clusters based on design-decision similarities [66].

**SAS Case Study.** Figure 5 shows the open and closed design decisions in our shortlisted candidate architectures.

### 3.7 Computing Information Value

The last step consists in computing the expected value of perfect information and its impact on risks.
The expected value of total perfect information and its impact on risk, $EVTPI$ and $ERITPI$, give upper bounds on the value that additional information could bring to the decision. If $EVTPI$ is small and the impact on risk low, there is little value in reducing model parameter uncertainty. The expected value of partial perfect information about a single model parameter $\Theta$ and its impact on risk, $EVPPI(\Theta)$ and $ERIPPI(\Theta)$, help software architects to distinguish model parameters with high and low expected information value. We also found it useful to measure the expected value of partial perfect information about the level of attainment of each goal and its impact on risk, $EVPPI(g, a)$ and $ERIPPI(g, a)$. This gives software architects a means of separating high and low information value at the level of goals instead of individual parameters, which can be too numerous (the SAS model has 175 parameters) and fine-grained. To ease the computation of expected information values, we limit the alternatives to those in the shortlist. In our case study, this reduces the $NB$ matrix from which $EVTPI$ and $EVPPI$ are computed from a size of 6912 by $10^4$ (the number of alternatives by the number of simulation scenarios) to a size of 9 by $10^4$. One should be careful, when interpreting $EVTPI$ and $EVPPI$ values, to remember that their accuracy is conditional on the validity of the decision model. They only measure the value of reducing uncertainty about model parameters, not about the model equations. We come back to this issue below.

**SAS Case Study.** Using the shortlisted architectures identified in Section 3.5 and the $NB$ matrix for those architectures, we compute that $EVTPI$ is 0.05, which represents only 0.25% of the highest expected net benefit. $ERITPI$ is 9%, which is the full project failure risk of the highest benefit architecture in our shortlist. This means that the impact of perfect information is to reduce project failure risk to zero.
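The expected value of partial perfect information (EVPPI) for a single parameter can be estimated from the same NB matrix by conditioning on the parameter's value, here with a simple equal-frequency binning scheme. The two-alternative model below is entirely invented: one alternative's net benefit depends on an uncertain parameter `theta`, the other is a safe constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# One uncertain parameter and an invented two-alternative NB model: the
# first alternative is sensitive to theta, the second is a safe constant.
theta = rng.triangular(0.0, 0.5, 1.0, size=n)
noise = rng.normal(0.0, 0.2, size=n)
nb = np.column_stack([2.0 * theta + noise, np.full(n, 1.0)])

best_enb = nb.mean(axis=0).max()
evtpi = nb.max(axis=1).mean() - best_enb

# EVPPI(theta): if theta were known, we would pick, for each value of
# theta, the alternative with the best conditional expected NB.
# Approximate the conditioning with 20 equal-frequency bins of theta.
order = np.argsort(theta)
bins = np.array_split(order, 20)
cond_best = [nb[idx].mean(axis=0).max() for idx in bins]
bin_weights = [len(idx) / n for idx in bins]
evppi = float(np.dot(cond_best, bin_weights)) - best_enb
```

By construction EVPPI never exceeds EVTPI (up to estimation noise), so a parameter whose EVPPI is a small fraction of an already small EVTPI is safe to leave uncertain.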
Figure 6 shows all non-zero $EVPPI$ for all goals and architectures. Since these $EVPPI$ are small, the table shows the ratio of $EVPPI$ to $EVTPI$ instead of absolute values. The ramp-up time and battery usage of 4 of the 9 shortlisted architectures are shown to have, in relative terms, much higher information value than other goals and architectures. However, in absolute terms, these values remain low. In order to experiment with the use of $EVTPI$ and $EVPPI$, we have artificially increased uncertainty in the SAS model and observed the impact on $EVTPI$ and $EVPPI$. We have, for example, given uncertainty to the goal weights in the definition of the utility function. We have assumed that the SAS design team is likely to have overestimated the goal weights and have therefore replaced their constant values by a triangular distribution of parameters $(0, w(g), w(g))$, where $w(g)$ is the initial goal weight estimated by the SAS design team. This distribution results in a linearly decreasing probability density function from \( w(g) \) to 0. We observed that this uncertainty roughly doubled $EVTPI$. However, in all our experiments, $EVTPI$ remains small.

**Figure 5**: Open and closed design decisions in the shortlisted architectures.

<table>
<thead>
<tr> <th>Open Decisions</th> <th>Options</th> </tr>
</thead>
<tbody>
<tr> <td>File Sharing</td> <td>OpenIntents</td> </tr>
<tr> <td>Chat</td> <td>XMPP (Openfire)</td> </tr>
<tr> <td>Connectivity</td> <td>3G on Nexus 1</td> </tr>
<tr> <td>Architectural Pattern</td> <td>Façade</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>Closed Decisions</th> <th>Option</th> </tr>
</thead>
<tbody>
<tr> <td>Location Finding</td> <td>Radio</td> </tr>
<tr> <td>Hardware Platform</td> <td>Nexus 1</td> </tr>
<tr> <td>Report Syncing</td> <td>Explicit</td> </tr>
<tr> <td>Map Access</td> <td>Preloaded (ESRI)</td> </tr>
<tr> <td>Database</td> <td>MySQL</td> </tr>
<tr> <td>Data Exchange Format</td> <td>Unformatted Data</td> </tr>
</tbody>
</table>
This is mostly due to the small differences in net benefit that exist among the shortlisted architectures even when most of the model parameters' uncertainties are increased. If we had confidence in the validity of the model utility function, this result would mean that, for this particular decision problem, there is no value in reducing uncertainty before deciding among the shortlisted architectures. However, we have identified important limitations in the SAS MOADM and utility models that severely question their validity, the most important problem being that these models are not falsifiable, making it impossible to validate and improve them based on empirical evidence. The project client should thus be sceptical of the choice of architecture, risk assessment, and information value generated using these models whatever decision support method is used. In order to deal with such difficulties, it would be desirable to be able to explicitly describe and reason not only about parameter uncertainty, but also about model uncertainty (also called structural uncertainty) [17]. Requirements and architecture decision problems would especially benefit from this capability. It would enable an incremental approach where software architects could start from an inexpensive, coarse-grained decision model with large uncertainty, then use expected information value about model uncertainty to decide whether and where to reduce uncertainty by refining parts of the model. They could for example start with a coarse-grained software performance model similar to the one used in the SAS case study, estimate their uncertainty about the model error (the deviation between its predicted performance and the software's actual performance) and compute the expected value of perfect information about this error to decide whether to refine this model into a more fine-grained performance model.
We have started exploring how to extend our method to deal with model uncertainty by converting it to parameter uncertainty, but the approach is still tentative and our method does not currently support this.

## 4. EVALUATION AND FUTURE WORK

Evaluating decision support methods is hard. Often, authors argue that their method is systematic, liked by its users, triggers useful discussions and generates insights into the decision problem [21, 34, 49]. None of these claims, however, considers whether a decision method is correct and produces better outcomes than another method, or even than no method at all (i.e., decisions based on intuition alone). The popular AHP method [57], for example, is criticised by decision experts for its mathematical flaws [8, 10, 53] and lack of evidence that it leads to better decisions than intuition alone [40]. Evaluation of software engineering decision methods should go beyond vague claims of usefulness. In this section, we propose to evaluate software engineering decision support methods according to their correctness, performance and scalability, applicability, and cost-effectiveness. Inspired by an adaptation of Maslow's pyramid of human needs to software quality [2], we visualise these criteria in a pyramid where the lower-level criteria are necessary foundations for higher-level ones. We discuss the extent to which we can claim our method meets these criteria and outline a roadmap of future research to extend our evaluation and improve our method against those criteria.

**1. Correctness.** The first level is to establish what correctness properties can be claimed of the method. One must distinguish correctness of the decision method from correctness of the decision models to which the method is applied.
Our method is correct in the sense that it produces correct estimations of the candidate architectures' expected net benefits, risks, and expected information value, assuming validity of the decision models and accuracy of the parameters' probability distributions. Not all decision methods can make this correctness claim. For example, GuideArch computes for each architecture a score that, unlike our expected net benefit and risk, makes no falsifiable predictions about the architecture and has therefore no notion of correctness. The lack of validity and falsifiability of the decision model we used in the SAS case study is an important weakness of our evaluation. All models are wrong, but some are useful [13]. Unfortunately, today no scientific method exists to help software architects evaluate how useful a model actually is to inform decisions. As mentioned in the closing of the previous section, we intend to address this shortcoming by extending our approach to deal with model uncertainty so as to be able to estimate modelling errors and their impact on decisions, and to support an incremental modelling process guided by information value analysis. Our method assumes it is possible to elicit accurate probability distributions for all model parameters. Such elicitation can be hampered by important cognitive biases. For example, software estimations have been shown to be affected by anchoring [6]. Significant research in uncertainty elicitation has shown it is possible to counter the effects of such biases using appropriate methods [52]. However, these methods have to our knowledge not yet been applied in a software engineering context and further evaluation is thus required in this area.

**2. Performance and scalability.** With the SAS case study, we have shown our method is fast enough to analyse a real software design decision problem whose size and complexity are similar to those of other published industrial architecture decision problems [9, 43, 44].
The manual steps of elaborating the decision models and eliciting all parameters' probability distributions will most likely be the first scalability bottleneck of applying our method to more complex problems. If our automated shortlisting step becomes a bottleneck, its performance and scalability can be improved notably by using evolutionary search-based algorithms to reduce the number of candidate architectures to evaluate. In the near future, we intend to conduct a systematic scalability analysis [19] of the whole approach on real case studies before attempting to improve its performance.

**3. Applicability.** The next evaluation criterion is to show the method is applicable by its intended users (not just the method designers) in actual software engineering projects. We distinguish technical applicability, the extent to which the method is understandable and applicable by software architects in an ideal (fictitious) project where actors do not intentionally or unintentionally game the decision making process, from contextual applicability, the extent to which the method is applicable in the context of real projects where the project governance, incentives, and political relations might affect the decision making process and reporting of uncertainty. At the moment, we see no critical threats to the technical applicability of our method. Our method takes as input decision models that correspond to those already produced by other requirements engineering and architecture methods [22, 45, 47, 49]. The only other required inputs are probability distributions modelling the decision makers' uncertainty about the model parameters. As mentioned earlier, simple, reliable methods exist to elicit such probability distributions [52]. Our analysis outputs need to be easily interpretable by decision makers.[^3] Although the concepts of risk, Pareto-optimality and information value can be misunderstood, we see no insurmountable obstacle here. Even if the method is technically applicable, the political context and governance structure of a project may create obstacles to the accurate reporting of uncertainty and analysis of risks [40, 50]. Important research in this area will be needed to identify incentives and governance structures that are favourable to sound decision making under uncertainty.

[^3]: Our approach actually computes these quantities using MC simulation, which introduces bounded and measurable simulation errors [48, 54]. In our case study, with simulations of \(10^5\) scenarios, these errors are negligible, particularly when compared to the much wider modelling and parameter uncertainty.

**4. Cost-effectiveness.** The next evaluation stage is to demonstrate the cost-effectiveness of decision analysis methods in requirements and architecture decisions. A method can be applicable without being cost-effective. Showing cost-effectiveness of a decision method dealing with uncertainty is hard. One must distinguish a good decision from a good outcome. A good decision may, by the effect of chance, lead to a bad outcome; vice versa, a bad decision may, by the effect of chance, lead to a good outcome. However, when analysed over many decisions, a good decision support method should on average lead to better outcomes, which for software engineering projects means higher business benefits from IT projects and fewer costly project failures. We believe that by setting expected benefits and risks as explicit decision criteria and by using falsifiable models that can be incrementally improved from empirical evidence, our method has a better chance of achieving these goals than other methods relying on unfalsifiable models and utility scores not clearly related to benefits and risks.

## 5. RELATED WORK

Most requirements and architecture decision methods ignore uncertainty and rely on point-based estimates of their models' parameters [5, 22, 24, 32, 47, 64, 68].
By simply replacing point-based estimates with probability distributions, our method can be directly applied to any previous decision model, because the MC simulations at the heart of the method merely consist of evaluating the point-based models on many different possible parameter values. Our method builds on previous methods for dealing with uncertainty in software architecture decisions, notably CBAM [42, 49] and GuideArch [21]. The first two steps of our method are equivalent to CBAM's model elaboration steps. Our method differs from CBAM in that it relies on sound, reliable techniques for eliciting probability distributions; it includes an explicit definition of the risks against which alternatives are evaluated; it shortlists candidate architectures based on multiple objectives (e.g. ENB and risk) instead of assuming a single ranking criterion; and it measures expected information value, whereas CBAM uses deterministic sensitivity analysis, whose limitations were described in Section 2.2. Elaborating on the first point, CBAM infers probability distributions from divergences between stakeholders' single-point estimates; this confuses consensus about the most likely value with uncertainty about the possible range of values. Our method differs from GuideArch in the following ways. In steps 1 and 2, our method allows decision makers to elaborate problem-specific decision models, whereas GuideArch relies on fixed equations for computing a score for each candidate architecture. The GuideArch equations are not falsifiable and therefore not amenable to empirical validation. Likewise, unlike step 3 of our method, GuideArch does not allow decision makers to define domain-specific measures of risk. In step 4, we model uncertainty about parameter values as probability distributions, for which sound uncertainty elicitation techniques exist [52], whereas GuideArch uses fuzzy logic values that cannot be empirically validated and calibrated.
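The MC evaluation described above can be sketched in a few lines of code. The net-benefit model and the parameter distributions below are hypothetical placeholders, not the models or figures used in the paper:

```java
import java.util.Random;

public class McDecisionSketch {
    // Hypothetical point-based decision model: net benefit = benefit - cost.
    static double netBenefit(double benefit, double cost) {
        return benefit - cost;
    }

    // Evaluate the point-based model on n sampled parameter values and return
    // { expected net benefit, risk }, where risk = probability of a negative outcome.
    static double[] simulate(int n, long seed) {
        Random rng = new Random(seed);
        double sum = 0;
        int losses = 0;
        for (int i = 0; i < n; i++) {
            // Hypothetical elicited distributions:
            // benefit ~ N(100, 20^2), cost ~ N(80, 15^2).
            double benefit = 100 + 20 * rng.nextGaussian();
            double cost = 80 + 15 * rng.nextGaussian();
            double nb = netBenefit(benefit, cost);
            sum += nb;
            if (nb < 0) losses++;
        }
        return new double[] { sum / n, (double) losses / n };
    }

    public static void main(String[] args) {
        double[] r = simulate(100_000, 42);
        System.out.printf("ENB = %.1f, risk = %.3f%n", r[0], r[1]);
    }
}
```

Replacing the body of `netBenefit` with any existing point-based decision model is all that is needed to reuse that model under uncertainty, which is the point made in the text.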
In step 5, we allow decision makers to shortlist candidate architectures based on expected net benefit and risk, whereas GuideArch ranks architectures using a single risk-adjusted score whose interpretation is problematic. Finally, GuideArch has no support for assessing the value of information. Because GuideArch does not require the elaboration of problem-specific models, it may be simpler to apply than CBAM and our approach; however, the lack of validity of the fixed equations used to score alternatives should raise concerns about the validity of the rankings it produces. Our decision support method deals with design-time knowledge uncertainty and should not be confused with the large body of software engineering research dealing with run-time physical uncertainty (e.g. [30, 35, 36, 47]). Philosophers and statisticians call these epistemic and aleatory uncertainty, respectively [52]. A probabilistic transition system may, for example, describe variations in the response time of a web service as an exponential distribution with mean $\lambda$; this models a run-time physical uncertainty. Such a probabilistic model could be part of a decision model in which the mean $\lambda$ is an uncertain model parameter; the decision makers' uncertainty about $\lambda$ is then a knowledge uncertainty. Other software engineering research streams are concerned with uncertainty during the elaboration of partial models [23] and with uncertainty in requirements definitions for adaptive systems [67]. These are different concerns and meanings of uncertainty than those studied in this paper. Graphical decision-theoretic models [18] and Bayesian networks [28] provide general tools for decision making under uncertainty. They have supported software decisions regarding development resources, costs, and safety risks [26, 27], but not requirements and architecture decisions.
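The notion of information value discussed in this paper can be illustrated with a small Monte Carlo sketch of the expected value of perfect information (EVPI): the difference between the expected payoff when the architect chooses after learning an uncertain parameter and when choosing before. The two payoff functions and the demand distribution below are hypothetical, invented purely for illustration:

```java
import java.util.Random;

public class EvpiSketch {
    // Payoffs of two hypothetical architectures as functions of an uncertain
    // demand parameter d: A is safe, B pays off only under high demand.
    static double payoffA(double d) { return 50; }
    static double payoffB(double d) { return 120 * d - 40; }

    // EVPI = E[best payoff choosing after learning d]
    //      - best of the two expected payoffs when choosing before.
    static double evpi(int n, long seed) {
        Random rng = new Random(seed);
        double sumA = 0, sumB = 0, sumBest = 0;
        for (int i = 0; i < n; i++) {
            double d = rng.nextDouble();      // demand ~ Uniform(0, 1)
            double a = payoffA(d), b = payoffB(d);
            sumA += a;
            sumB += b;
            sumBest += Math.max(a, b);        // choose with perfect information
        }
        double priorBest = Math.max(sumA, sumB) / n; // choose under uncertainty
        return sumBest / n - priorBest;
    }

    public static void main(String[] args) {
        System.out.printf("EVPI = %.2f%n", evpi(1_000_000, 7));
    }
}
```

A positive EVPI quantifies the most one should pay (e.g. for prototyping or measurement) to resolve the uncertainty about `d` before committing to an architecture.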
We did not use these tools to support our method because they deal with discrete variables only; their use would have required transforming our continuous variables, such as cost, benefit, and goal attainment levels, into discrete variables. Boehm's seminal book on software engineering economics devotes a chapter to statistical decision theory and the value of information [11]. The chapter illustrates expected information value on a simple example of deciding between two alternative development strategies. To our knowledge, this is the only reference to information value in the software engineering literature, including in Boehm's subsequent work; the concept thus appears to have been forgotten by our community. Software cost estimation methods [4, 12, 26, 60] could be used to provide inputs to our decision method. Many already rely on statistical and Bayesian methods to produce cost estimates; they could easily generate cost estimates in the form of probability distributions instead of point-based estimates. 6. CONCLUSION Requirements and architecture decisions are essentially decisions under uncertainty. We have argued that modelling uncertainty and mathematically analysing its consequences leads to better decisions than either hiding uncertainty behind point-based estimates or treating uncertainty qualitatively as an inherently uncontrollable aspect of software development. We believe that statistical decision analysis provides the right set of tools to manage uncertainty in complex requirements and architecture decisions. These tools may also be useful in other areas of software engineering, e.g. testing, where critical decisions must be made by analysing risks arising from incomplete knowledge. In future work, we intend to validate and refine our method on a series of industrial case studies and to address the problem of reasoning about model uncertainty. 7. REFERENCES
Android App Testing: A Model for Generating Automated Lifecycle Tests

'This thesis was submitted in partial fulfillment of the requirements for the Master's Degree in software engineering from the Faculty of Graduate Studies, at Birzeit University, Palestine'

By: Malik Motan (Student Number: 1165317)
Supervised By: Dr. Samer Zein

This thesis was prepared under the supervision of Dr. Samer Zein and has been approved by all members of the examination committee:
Dr. Samer Zein, Birzeit University (Chairman of the committee)
Dr. Ahmad Tamrawi, Birzeit University (Member)
Dr. Abdel Salam Sayyad, Birzeit University (Member)
Date of Defense: August 29th, 2020

Abstract

Android is currently the dominant OS in the market. An immense number of Android apps is deployed to the Google Play store every year. Android apps are no longer merely focused on entertainment or socialization. In fact, the literature shows that apps specializing in critical domains such as health, education, and even the military are growing in number. This puts more pressure on app developers to produce quality apps. Research shows that current Android app testing approaches rely heavily on manual testing. Research in automatic test generation for Android apps focuses mostly on automated GUI testing, with some approaches introducing model-based testing for test case inputs. However, no studies focus on generating lifecycle tests automatically, especially for testing lifecycle method conformance. In this research, we present a model-based approach and tool to conduct assertion-based lifecycle tests automatically for Android activity lifecycle callback methods. Our objective is to build a framework to generate such a model. Finally, we evaluated our proposed framework in two ways: first through a group case study, and then using 10 real-world open-source Android applications.
The results of our evaluation are promising and show that our proposed framework is useful for detecting errors.
Table of Contents

List of Figures
List of Tables
List of Acronyms
Chapter 1: Introduction
  1.1 Introduction and Motivation
  1.2 Research Objectives and Problem Statement
  1.3 Structure of This Report
Chapter 2: Background
  2.1 Android Activity Lifecycle Model
Chapter 3: Literature Review
  3.1 Introduction
  3.2 Literature Review Methodology
  3.3 Mobile Application Testing
    3.3.1 Automated Mobile App Test Input Generation
  3.4 Generic Model Based Testing
  3.5 Specialized Model Based Testing
    3.5.1 Custom-tuned GUI Test Generation
  3.6 Android Activity Lifecycle Conformance Testing
  3.7 Summary
Chapter 4: Methodology
  4.1 The Model

List of Figures

Figure 1: Android Activity Lifecycle Methods
Figure 2: Model Phases
Figure 3: Abstract Syntax Tree
Figure 4: Activity lifecycle methods graph
Figure 5: Sample tool Analysis Results
Figure 6: Sample TODO comment injected above the code acquiring the camera using the camera manager
Figure 7: Sample TODO comment injected above the code acquiring the camera using a camera object instance
Figure 8: Multiple TODOs for multiple resources for the lifecycle method
Figure 9: Execution Results of the Automated Testing Application for the FooCam App
Figure 10: Execution Results of the Automated Testing Application for the FooCam App

List of Tables

Table 1: The open source apps used for evaluation
Table 2: Execution time and lines of code in the main activity for each app

List of Acronyms

IEEE Xplore: Institute of Electrical and Electronics Engineers Xplore
ACM: Association for Computing Machinery
AST: Automation of Software Test
ICITCS: International Conference on IT Convergence and Security
ICSRS: International Conference on System Reliability and Science
ICT4M: International Conference on Information and Communication Technology for The Muslim World
GUI: Graphical User Interface
API: Application Programming Interface
ESG: Event Sequence Graph
RNN: Recurrent Neural Network
AST: Abstract Syntax Tree
AUT: Application Under Test

Chapter 1 Introduction This chapter introduces the research aim, objectives, and motivation. 1.1 Introduction and Motivation Nowadays, rapidly evolving mobile apps are revolutionizing the way people live and interact in all aspects of modern life. Mobile apps are ubiquitous and cover a wide spectrum of domains. Technology startups are reshaping the way people live by presenting creative and highly intelligent mobile apps. In fact, these apps are no longer merely targeting entertainment and socialization; they are now being applied in more critical domains such as health, education, and even the military [1], [2], to mention just a few. Furthermore, e-payment and m-government apps have been among the most critical types of apps in recent years. The need for high-quality mobile apps has never been higher. Hence, in order to make sure a mobile app functions correctly and its data integrity is preserved and not lost in the operating system, the app needs to conform to a standard application lifecycle model.
A mobile app lifecycle model normally describes the state of the app's process (e.g., running, paused, or stopped) as well as the transitions from one state to another. In general, a mobile app conforms to the application lifecycle when it is implemented correctly by interacting with and transitioning properly between the application lifecycle states [3]. In order to make sure an app conforms to the application lifecycle, proper testing and validation need to be conducted. Testing mobile apps is anything but straightforward, due to the diverse nature of mobile apps and platforms. Such diversity stems from several factors, including the wide range of current mobile screen sizes, from small smartphones all the way to tablets; the variety of input mechanisms such as keyboard, gesture, touch, and even voice; and differences in storage capacity, memory, bandwidth, and processor. Consequently, it becomes difficult to test a mobile app and ensure it functions efficiently in all circumstances. Several techniques are used to test mobile applications, and many researchers have explored applying them [4, 5, 6, 7, 8]. These techniques cover performance testing, unit testing, usability testing, and functional user interface testing. However, few studies focus on test case generation for the application lifecycle; hence the focus of this thesis research. We chose Android as our target platform for this research for several reasons. First and foremost, the Android platform has the largest market share of mobile devices, at 87% as of 2019 [9]. Also, Android is an open-source platform, which allows for more research freedom when exploring the platform. 1.2 Research Objectives and Problem Statement Android app testing is an active area of research.
Even though several studies have explored generating test cases for Android applications in various areas, mainly the user interface, research on generating test cases to assess the quality of Android lifecycle code remains virtually non-existent. Testing the quality of the app lifecycle code is a major concern, especially when producing high-end Android apps. Objectives. In this research, we aim to:
1. Investigate how to generate a model representing the lifecycle callback methods from the perspective of system resources.
2. Design a framework to extract such a model using static code analysis techniques.
3. Extend the framework to generate and execute automated tests from the extracted model.
1.3 Structure of This Report This report starts with a quick introduction to the research motivation, aim, and objectives. Chapter 2 then covers the background, explaining what the Android activity lifecycle model is and how it is used in the Android app development process. Chapter 3 presents the literature review: an overview of the current state of the art of Android application testing and automated test generation approaches, with a focus on model-based test generation research for Android applications, and finally highlights the research gap we are trying to fill. Finally, Chapter 4 briefly introduces our research methodology and the initially suggested model of lifecycle test generation for the Android activity lifecycle. We conclude this report with our references. Chapter 2 Background 2.1 Android Activity Lifecycle Model Android applications go through a series of state changes as the user navigates through them. An application state can be running, paused, or even closed. The process of going through these states, and the possibility or impossibility of shifting from one state to another, constitutes the application lifecycle.
Unlike traditional desktop applications, where the operating system takes care of all app states and the changes among them, the Android operating system cannot do the same. The scarcity of mobile device resources makes it infeasible for the Android OS to store the states and state changes of all applications. This puts the burden of handling and managing the application's data and state on the application itself whenever it is paused, swapped out by another application, or even shut down by the operating system. In other words, ensuring optimal resource consumption, preventing data loss, and reacting to application lifecycle state changes are the complete responsibility of the Android application [10]. An Android activity is a single action the user is able to perform. The Android activity class creates a window to hold the UI of the app screen so the user can interact with the activity. As the user interacts with it, the activity instance undergoes state changes; these changes represent the activity lifecycle. In order to be aware of and react to state changes, the activity class has callback methods, also known as activity lifecycle methods. These lifecycle methods indicate what the Android operating system is doing with the activity window, i.e., the current application screen: whether the system is creating, resuming, or stopping an activity, or even killing the activity's process. Developers can specify exactly what should happen whenever the user enters or leaves the activity inside any activity lifecycle callback method. For example, if the user is streaming a video within a video streaming application and decides to switch to another application, it is the video streaming app's responsibility to pause or kill the streaming process and cut off the network connection.
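As a minimal illustration of this responsibility, the plain-Java sketch below ties a resource's lifetime to pause/resume callbacks. The class and resource are hypothetical stand-ins (not actual Android framework code), but the callback names mirror the Android lifecycle methods discussed next:

```java
public class StreamingComponent {
    // Hypothetical stand-in for a system resource such as a camera
    // or a network video stream.
    private boolean streamActive = false;

    // Mirrors Android's onResume(): (re)acquire the resource when the
    // screen comes to the foreground.
    public void onResume() {
        if (!streamActive) {
            streamActive = true; // e.g. open the connection, start decoding
        }
    }

    // Mirrors Android's onPause(): release the resource as soon as the
    // screen leaves the foreground, so no bandwidth or battery is wasted.
    public void onPause() {
        if (streamActive) {
            streamActive = false; // e.g. close the connection, stop decoding
        }
    }

    public boolean isStreaming() {
        return streamActive;
    }
}
```

An app that forgets the `onPause` half of this pairing is exactly the kind of lifecycle conformance defect the testing framework in this thesis aims to detect.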
Hence, a callback lifecycle method allows the Android app developer to state what should happen upon app state changes. It is very important to perform the right tasks in reaction to application state changes: using the lifecycle methods correctly can make the difference between a highly performant app and a buggy app that keeps crashing. The Android operating system offers six activity lifecycle methods: onCreate(), onStart(), onResume(), onPause(), onStop(), and finally onDestroy(). The first method, onCreate, is where the main startup logic of the application is coded. This callback method is triggered upon entering the Created state of the activity, and its logic runs only once per activity lifecycle. The second method is onStart, which shows the activity to the user and prepares it for interaction; it is where the developer places the code to maintain the app's GUI. This method is triggered upon entering the Started state. The onResume method is triggered once the app enters the Resumed state; at this point the user can start interacting with the application. Once the app's activity is no longer in the foreground, whether it is being destroyed or merely paused, the application enters the Paused state, which triggers the onPause lifecycle method. In this method, the developer codes what should happen while the application is briefly interrupted, for example by a phone call, by a multi-window mode in which the focus is no longer on the current app, or by a dialog that has opened on top of the current activity and makes it not fully visible. When the current app's activity is no longer visible and is completely covered by something else, or the app itself is in the background and another application is now active, the activity enters the *Stopped* state. This state triggers the *onStop* lifecycle method.
Inside this method, the developer can code logic such as saving information to the database. The last lifecycle method is *onDestroy*, which is triggered right before the activity is destroyed, either because the activity is finished or because the Android system is killing it for some reason. In this method, the developer needs to clean up after the app's components and activities and release all system resources [11]. The following diagram depicts the activity lifecycle methods and the transitions between them.

**Figure 1**: Android Activity Lifecycle Methods. **Source**: [11]

Chapter 3 Literature Review 3.1 Introduction In our search for automated model-based test case generation for mobile applications, we came across three main categories of papers. First, we look at the current state of the art of mobile application testing and the available automated testing tools. Second, we look at the generic model-based testing category, which comprises papers that investigate generating test cases from predefined generic models; in the same area, we discuss the advantages of model-based testing for mobile applications and comparisons of model-based testing with other automatic test case generation approaches. The third category of papers delves deeply into model-based testing for mobile applications and custom-tunes the steps taken to produce a model for automatically generating test cases, the actual generation of the test cases, the execution of the generated test cases, and finally the evaluation of these executed tests. In this chapter, we go over these papers, and discuss and compare the problems each paper addresses, the methodology it follows, and the results it reports. 3.2 Literature Review Methodology Google Scholar is the search engine we used to find papers.
We filtered through multiple academic journals, conferences, and digital libraries, including Springer, IEEE Xplore, ACM, ResearchGate, AST, ICITCS, ICSRS, and ICT4M. We also used Google's Android developer guide documentation and some other online resources. The main keywords and search strings used to find papers in these resources include: mobile testing, Android testing, lifecycle testing, automation tool, application lifecycle, Android activity, automated testing, automatic test case generation, model-based testing, Android GUI model, GUI testing, generation of test cases from a model, and automatic lifecycle test generation. In order to limit researcher bias, the inclusion criteria for selecting papers related to the research topic are as follows. To be included in the literature review, a paper has to:
1. Have been published within the past five years;
2. Be at least five pages long;
3. Preferably be an empirical study.
3.3 Mobile Application Testing Zein et al. [4] conducted an exploratory multiple case study to understand the testing methods developers use in real-world mobile applications and the obstacles those developers encounter while testing. This empirical study involved four mobile app development companies. The authors concluded that in all studied cases, neither developers nor testing engineers had the expertise or full knowledge of testing methods or tools to create or test mobile apps that comply with Android lifecycle properties or models. In addition, the study found that testing engineers did not possess the knowledge to perform cross-application communication (integration) testing. Moreover, the study concluded that no official and systematic testing approaches exist to help test critical apps. In fact, most industry testing of mobile apps relies on manual black-box testing of the GUI.
Also, automated testing tools are rarely used to test application lifecycle conformance. In short, mobile app developers' main focus is to quickly produce responsive apps with fancy GUIs. Muccini et al. [1] present a generic overview of the state of the art of mobile application testing and its future research directions. The authors examine how mobile applications differ from traditional ones and how this affects the types of testing methods needed. They also delve into the challenges and directions research is heading towards for mobile app testing. Finally, the authors discuss the role of automation in the mobile app testing process. They describe automating mobile app testing as crucial for two main reasons. First, automation reduces the cost of testing while guaranteeing the quality of the apps under test. Second, it enables layer testing: testing the interoperability of applications through the operating system, among apps, and against device hardware such as sensors. Reported bugs indicate that problems arise from interactions between applications and operating systems, and not only within the apps. The same paper explains that cost-effective testing for mobile apps can be achieved by means of cloud-based testing and outsourcing. The authors also indicate that the industry may be heading towards testing-as-a-service offerings, which would yield affordable testing solutions for mobile apps. 3.3.1 Automated Mobile App Test Input Generation Linares-Vasquez et al. [13] highlight in a survey the state of the art of test automation for mobile applications. The authors cover the services, tools, and frameworks that mobile app developers can use in app testing, as well as some of the drawbacks of the available testing methods. The paper introduces GUI automation APIs and frameworks as one of the existing methods for testing mobile apps.
These frameworks are often used to get an overview of the structure of the GUI components. Such tools are usually used by developers and testers to write automation scripts that are run by record-and-replay tools. Record-and-replay tools record testing behaviour and then generate test scripts that can be run later and even modified to cover different test cases. However, these tools suffer from a trade-off between the accuracy and the timing of recorded and replayed events. Another testing method is automated test input generation, one of the most active research areas in automated mobile app testing. Research in this area presents several approaches, including random testing, systematic testing, search-based testing, combinatorial testing, and finally model-based testing, which is our focus in this research. Error and bug reporting tools are another commonly used type of testing method. Such tools usually come in two forms: regular issue trackers for bugs, and crash and resource consumption monitors. They help provide textual descriptions of bugs as well as snapshots of errors and resource consumption. Mobile testing services are yet another approach, which relies on outsourcing the testing of a mobile app to a group of testing experts and/or non-experts outside the development company, allowing for lower testing costs. Tools for device streaming are also used to facilitate the testing of mobile apps: developers can mirror a mobile device to a personal computer screen or even connect remotely to that device when needed. Despite the fact that all of these automated testing methods and techniques exist, manual testing is still used more than automated testing for mobile apps.
The authors explain that manual testing is normally preferred due to several factors, including the lack of needed functionality in many of these tools, the personal testing preferences of the testers, or organizational limitations in some cases. Delving deeper into the available automatic test input generation tools, Choudhary et al. conduct an empirical study [15] comparing the existing automatic test input generation tools for Android. The authors evaluate how effective these tools are in terms of code coverage, fault detection capability, compatibility with Android platforms, and ease of use. When it comes to code coverage, the authors state that several strategies are used: systematic, random, and model-based test input generation. The research states that it is not clear that any of those strategies is better than the others in practice. However, the authors explain that the most important factor when automatically generating test inputs is time: all tools should focus on how much code coverage they can achieve within a limited amount of time, which helps measure their effectiveness. Monkey [16], a random-based test input generation tool, presents the best results in terms of code coverage. The second metric the paper compares tools against is ease of use. This means that the tool needs to work right away and out of the box, without much configuration involved. The authors found Dynodroid [17] and Monkey to be the easiest to use, while ACTEve [18], A³E [19] and GUIRipper [20] need a lot of configuration. Compatibility with Android platforms is another metric used to compare automation tools: the test input generation tool needs to run on different Android devices with variable hardware characteristics and Android versions. Monkey, GUIRipper, and ACTEve are compatible with all Android versions and devices.
The last metric measured in this research is fault detection capability. The authors measured this metric by counting how many bugs a tool can detect within one hour of running per app. The Monkey tool was able to find the highest number of bugs within the set time limit. Consequently, the research shows that Monkey performs best according to the four benchmarks specified in this research.

3.4 Generic Model-Based Testing

De Cleva Farto and Endo [21] conduct an empirical study to measure the effectiveness of using model-based testing to generate test cases automatically for Android applications. They aim at checking whether current model-based testing can be used to test functional requirements of mobile applications. Besides, they try to identify the results and issues of using model-based testing on mobile applications. Finally, they try to measure the effectiveness of the generated test cases on Android applications. The authors use an event sequence graph (ESG) as the modelling method and develop test cases using the Robotium framework [22]. The research finds that model-based testing is a valid and recommended approach for automatically generating test cases for Android applications. The authors found model-based testing efficient in terms of generating test cases automatically, detecting faults in the system under test, test case quality, reduced cost and time of testing, and finally the maturity and evolution of the testing model. Challenges of model-based testing are also presented: the study shows that modelling itself is a difficult task, that making the tests concrete for mobile applications is hard in general, and that performing the tests requires experience with certain tools. However, the study concludes that model-based testing using an event sequence graph is an effective and systematic method for testing Android applications.
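As a minimal illustration of the ESG idea above (our own sketch, not the authors' implementation; the event names and graph are invented), every path through an event sequence graph from its start marker to its end marker can serve as one abstract test case, which a framework such as Robotium would then make concrete and execute:

```java
import java.util.*;

/**
 * Minimal sketch of ESG-driven test case generation: each path from the
 * pseudo start event "[" to the pseudo end event "]" is one abstract
 * test case (a sequence of GUI events to be executed).
 */
public class EsgPathGenerator {

    // Adjacency list: event -> events that may follow it.
    private final Map<String, List<String>> edges = new LinkedHashMap<>();

    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    /** Enumerates all simple paths from "[" to "]" via depth-first search. */
    public List<List<String>> testCases() {
        List<List<String>> result = new ArrayList<>();
        dfs("[", new ArrayDeque<>(), result);
        return result;
    }

    private void dfs(String event, Deque<String> path, List<List<String>> out) {
        path.addLast(event);
        if (event.equals("]")) {
            out.add(new ArrayList<>(path));
        } else {
            for (String next : edges.getOrDefault(event, List.of())) {
                if (!path.contains(next)) {   // keep paths simple (no revisits)
                    dfs(next, path, out);
                }
            }
        }
        path.removeLast();
    }

    public static void main(String[] args) {
        EsgPathGenerator esg = new EsgPathGenerator();
        // A toy login screen with an optional "forgot password" branch.
        esg.addEdge("[", "enterUser");
        esg.addEdge("enterUser", "enterPass");
        esg.addEdge("enterPass", "tapLogin");
        esg.addEdge("enterPass", "tapForgot");
        esg.addEdge("tapLogin", "]");
        esg.addEdge("tapForgot", "]");
        for (List<String> tc : esg.testCases()) {
            System.out.println(String.join(" -> ", tc));
        }
    }
}
```

Real ESG-based generation additionally aims at criteria such as covering every edge, but the path enumeration above captures the core of how a model yields executable event sequences.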
Saad and Abu Bakar [23] discuss selecting the proper testing tool for the mobile platform of choice, depending on the research requirements. They introduce a variety of automated testing tools for the Android, iOS, Blackberry, Symbian, Windows Phone and Windows Mobile platforms. The authors focus on the verification methods needed to ensure the mobile app works as expected; in other words, they assert the functionality of the mobile app with generic black-box tools. Their criteria for choosing the right testing tool include how well the tool handles different web browsers and emulators, support for different operating systems, the types of GUI testing offered, interruption testing abilities, test reporting capabilities, test workflows, and pricing. The chosen tool of the research is Micro Focus Silk Mobile, a one-time-payment tool. The authors mainly chose this tool for the support it offers on all platforms and for providing high-quality test flows. On the other hand, Frajtak et al. [24] introduce the challenge of using machine-aided exploratory testing rather than manual exploratory testing to generate test cases. The proposed approach is suggested for cases where a model of the system under test is not available; it uses a reverse engineering method to recreate that model. The research conducts a case study with two groups of testers, where one uses manual exploratory testing and the other uses machine-aided exploratory testing. The authors propose a testing framework to help with the exploration of the system under test. The research finds that exploratory testing aided by the proposed framework achieves better results in terms of documenting the testing process. The documentation mostly covers the steps followed to perform a test case, as well as generic documentation of the explored areas of the system under test.
The authors measure the efficiency of this approach by comparing it with manual exploratory testing. The results of this comparison show that machine-aided exploratory testing saves 23.54% more time than manual exploratory testing. Another commonly studied approach for test case generation relies on equivalence classes: Subramanian et al. [25] discuss equivalence class partitioning as an approach to generate test cases for Android application GUIs. This is a manual approach based on specifying the functionality and the GUI specifications. The proposed approach depends on the equivalence class coverage method, which produces test cases for the GUI immediately and fits early stages of the app development lifecycle. The approach adapts well to changes in the application, since it performs systematic exploration of the test cases. Besides, the proposed testing approach can help with app maintenance, since unnecessary testing errors can be filtered out.

3.5 Specialized Model-Based Testing

Amalfitano et al. [26] introduce the problem of automating the generation of GUI tests. They present MobiGUITAR, a tool for automated GUI testing of Android apps. MobiGUITAR is a run-time tool for observing, extracting and abstracting the GUI state of an Android app. The tool is based on an abstraction model with test coverage criteria used to generate unit test cases automatically, and relies on a reverse-engineered model of the mobile app. In an empirical study, the authors apply the tool to four open-source Android projects to generate and run over seven thousand test cases, finding 10 new bugs. MobiGUITAR consists of three steps. First, it traverses the mobile app GUI in order to create a state-machine model (graph) of the GUI to be used for test case generation (also known as GUI ripping). Second, MobiGUITAR generates test cases for the resulting GUI sequences of events.
These test cases are based on the rule that pairs of adjacent events (edges on the graph) are grouped together, to reduce the huge number of event sequences for the app. Finally, the execution step outputs the generated test cases in JUnit format. This helps detect app crash bugs at run-time, such as IllegalArgumentException. The authors conclude that the tool they created helps generate test cases that in turn help find severe bugs in the applications under test. Moreover, the authors show that using model-based testing along with model learning offers improved fault detection for testing Android applications. In another study, Espada et al. [27] execute a model-based testing approach to explore the different possible user interaction flows in mobile applications. The study is conducted using a tool to explore the model generated by a custom finite state machine aimed at detecting all potential user interactions. The authors built a tool that uses a model to generate test cases; the generated test cases consider both user interactions with the applications and the applications' interactions with each other. Then, to analyze the expected behavior, the authors use the SPIN tool to analyze a specially designed state machine and obtain all available user interactions corresponding to the generated test cases. Finally, the model-generated test cases are run on an Android device to mimic user behavior. Compared to the approach followed in MobiGUITAR [26], this research did not need to clean up the generated test cases and remove the infeasible ones, because it separated test case generation from the testing process. Also, the states resulting from the custom state machine are designed to be limited and compact.
Besides, MobiGUITAR is applicable to one application at a time, while this research allows testing multiple applications interacting through Android intents. The research concludes by describing how the results generated in this case study constitute realistic test cases, rather than randomly generated test cases that require data cleaning. The authors are yet to verify the efficiency of this tool with a runtime verification mechanism to test the effectiveness and performance of the proposed approach. At the same time, Salihu et al. [28] present a tool called AMOGA. This approach is proposed to overcome the issue of automatically generating a comprehensive model of the dynamic GUIs of mobile applications. The approach combines static and dynamic methods of model generation: AMOGA has a static analyzer and a dynamic crawler. The static analyzer extracts the event sequences that the mobile app supports; the dynamic crawler then crawls through these generated events and builds up the model of the mobile application. AMOGA is applied to 15 Android applications in an experiment. The results show that AMOGA efficiently generates a thorough model with high code coverage of the system under test. Besides, mutation testing is applied to measure AMOGA's ability to detect faults in the mobile app. The proposed tool achieved a good mutation score, which shows that AMOGA can reveal several bugs in the mobile app under test. In another study, Liu et al. [29] introduce the challenge of automatically generating relevant input text for mobile applications. They use deep learning to automatically generate text input for 50 iOS mobile apps. The deep learning techniques are tuned to work both on apps the model was trained on and on unseen apps. The authors claim, to the best of their knowledge and at the time of their research, to be the first to use deep learning to generate text inputs automatically for mobile apps.
In short, their deep learning approach consists of two phases. The first is the training phase, in which the automated tool is trained by learning manual test inputs and associating these inputs with the relevant testing context. The second phase is prediction: the automated tool predicts the input text depending on the context. After that, the research evaluates the proposed approach in terms of effectiveness against other automatic input generation approaches for mobile apps. This evaluation compares a Recurrent Neural Network (RNN) model they build with a random input generation approach in terms of performance and effectiveness, in the scope of multiple case studies. These case studies involve 50 iOS apps, among which the authors test the Firefox and GitHub iOS apps. Finally, the research compares the RNN model with the Word2Vec algorithm in terms of performance and effectiveness. Eventually, the authors find that the deep learning approach for automatic input generation produces text input that is relevant to the program context. The evaluation of the 50 iOS apps using the RNN model shows that the proposed model provides efficient and effective results for generating inputs. Gudmundsson et al. [30] test the effectiveness of model-based testing on the QuizUp Android application, which represents the largest quizzing app on the market, with millions of users around the globe. The study shows that a model-based method relying on a simple finite state machine can effectively and efficiently be used to test such huge Android apps. After applying model-based testing techniques to this app, the authors were able to detect major defects, which were then fixed and deployed to the QuizUp Android application. In another study, Panizo et al. [31] introduce a model-based testing framework to automatically test user interactions with the mobile app.
The authors extend the TRIANGLE [6] tool for automatic model-based testing, which relies on a model checking mechanism. The study uses the extended tool for testing the ExoPlayer Android application, a video streaming app that uses several streaming protocols, under various network scenarios. The major feature of the proposed testbed extension is that it emulates realistic networking scenarios covering several network and radio configurations. Integrating this feature into model-based testing resulted in better test coverage of user flows, the ability to extend the model with additional user flow conditions without changing the model itself but rather by defining new rules for it, and the simplicity of defining testing criteria within the tool in plain language accessible to average developers. At the same time, Frajták, Bures and Jelinek [32] present a hybrid testing methodology that combines the model-based testing approach with manual exploratory testing. The authors present this combination to try to address the issue of verifying and documenting the resulting test cases when the model for test case generation is incomplete or inconsistent and its results need to be re-evaluated or its testing effectiveness measured. The model for test case generation is dynamically created and updated in the exploratory stage. Each step of the test case generation in the exploratory model is tracked via JavaScript tracking code injected in the web pages that serve as the channels of communication for the test case input data. The study conducted an experiment with two groups, where one used manual exploratory testing and the other used model-based testing.
The research found that manual exploratory testing had the advantage in some of the sub-tasks: providing documentation of the testing flow, re-evaluating the testing scenarios, and documenting the explored and unexplored areas of the system under test. This research found this to be an advantage of manual exploratory testing that model-based testing does not have.

---
1 https://www.triangle-project.eu

In the study conducted by Zhang, Wu, and Rountev [33], the authors look into testing Android applications to explore and find leaks and defects in resource usage. They used a static analysis model to define the regular GUI flows that have a normal effect on Android app resources. After that, they introduced a test input generation algorithm to generate these normal flows, and then categorized the flows into two categories following the resource leak patterns of the Android applications under test. The authors then compared these automatic test input generation algorithms with non-automated algorithms they had presented in previous work. The result of this study indicates that it is indeed possible to automatically generate effective and generic test cases for detecting leaks in Android app resources using the proposed methodology.

3.5.1 Custom-tuned GUI Test Generation

Baek and Bae [34] introduce automated GUI testing using a model for test case generation as well as debugging. The authors follow a systematic approach in an empirical study to understand the effect of multilevel GUI comparison criteria on the effectiveness of testing. They introduce the multilevel GUI comparison criteria (GUICC) framework as a GUI model generation methodology, focused on the way the GUI model is generated for Android applications, and conduct an empirical study to test the effectiveness of GUICC for Android GUI testing models.
They tested the framework in terms of research questions about the effectiveness of the generated GUI graph, the code coverage that graph offered, and finally the error detection capability of the GUICC framework. As a result, the authors found that multilevel GUI testing was more effective than activity-based (single-level) GUI modelling and testing. Besides, they found that state explosion, an issue in single-level GUI testing, can be significantly reduced with multilevel GUI comparison criteria. The authors also discuss the issue of implementing automated GUI testing approaches and comparing their outcomes, presenting a generic testing algorithm in the context of a conceptual framework. The aim of this framework is to serve as a basis for building methods to automatically test GUIs and compare the outcomes. The framework is based on a generic and configurable algorithm that can be tuned to produce various random testing and active learning testing methods by modifying the algorithm's six input parameters. In a different study, Amalfitano et al. [35] applied such a specially designed conceptual framework to an Android application. They defined and applied nine testing approaches by configuring AndroidRipper [3] in nine different ways. After that, they compared the results of each of these approaches. The goal of this comparison was to understand how changing the algorithm's six inputs affected the adequacy of testing, the cost, and the resulting GUI tree complexity for each approach. Six of the nine testing approaches use active learning strategies, where model learning and GUI testing are combined to learn the model of the GUI and generate events/test cases based on the inferred model.
The research concluded that the descriptive testing strategy, where the analysis of the GUI is described in terms of relevant component subsets, deeply affects the performance of the testing approach as well as the complexity of the testing model, whereas the scheduling strategy, which provides the schedule for firing the next events, did not. In fact, the models built using both the active learning and non-active-learning methods are biased when built using the scheduling testing strategy.

3.6 Android Activity Lifecycle Conformance Testing

Zein et al. [39] introduce an automated testing approach using static analysis to help junior developers keep track of used system resources during the Android application lifecycle. The differences between my research and Dr. Samer's are the following:

- Dr. Samer's work is based purely on static code analysis, while mine is model-based. I build a model, and the output of every phase is clarified in the methodology chapter.
- My model-based tool checks activity files, fragments and AppCompatActivity files, while Dr. Samer's only checks activity files.
- My model-based solution approach (tool) generates text files for each activity, fragment and AppCompatActivity file; these text files represent graphs of the lifecycle methods of that activity, fragment or AppCompatActivity. These graph representations contain the transitions from one lifecycle method to the next and the resources acquired/released inside each lifecycle method. Dr. Samer's static code analysis has no such phases or lifecycle method representations for checking resource acquisition/release in the next phases.
- I conduct a user acceptance experiment to verify the benefits and usability of my model-based solution approach, while Dr.
Samer does not.
- My model-based solution approach injects TODO comment recommendations into the Android apps' source code, to help the developer locate the resources that fail and where to release them; Dr. Samer's tool does not offer such functionality.

3.7 Summary

In conclusion, we discussed work related to automatic testing of Android applications. As shown in the previous sections, research in this field often focuses on depicting the state of the art of mobile application testing and the automated testing tools available on the market. Other studies focus on generic model-based testing, i.e., generating test cases using predefined generic models. In the same way, some papers discuss the advantages of using the model-based testing approach for mobile apps, or compare model-based testing with other automatic test case generation approaches. Another category of papers delves deeply into the model-based testing approach for mobile applications and custom-tunes the steps it takes to produce a model for automatically generating test cases, the actual generation of the test cases, the execution of the generated test cases, and finally the evaluation of these executed test cases. We elaborated on these approaches in the sections above, in terms of the problems they tackle, the methodologies used to tackle them, and finally the results each paper discovered. However, no research focuses on automatically generating lifecycle tests for Android activity lifecycle callback methods. This is the gap where we shed light and conduct our research.

Chapter 4
Methodology

The methodology we propose is to build a model of the Android lifecycle callback methods, then use this model to generate test cases, and finally execute these test cases and analyze the results. The lifecycle model is the base of our test case generation process.
The following figure depicts our model and its four phases.

4.1 The Model

The following diagram summarizes the main phases of the proposed model.

![Diagram of Model Phases]

**Figure 2:** Model Phases

This model-based solution approach (tool) is implemented in Java 8. The implemented tool follows the phases and the input and output flow described in the next sections for each phase of the model. The model-based proof-of-concept tool we present checks Android app activity files, fragments, and AppCompatActivity files, which are used for backward compatibility features.

4.1.1 The Parser

This is where the Android source code is first consumed. This phase takes the Android app source code and parses each of the activity files in this app using static code analysis. The parser parses one activity at a time, until all activities are parsed, and only parses the Android lifecycle methods within each activity. The output of this phase is an Abstract Syntax Tree (AST) model. The parser used in this phase is JavaParser [37]. This publicly available library lets us interact with the parsed source code through a Java object representation called an Abstract Syntax Tree (AST). This data structure makes it convenient to navigate and traverse the parsed code. The example below demonstrates the Abstract Syntax Tree representation of Java code that prints the time [38]:

```java
package com.github.javaparser;

import java.time.LocalDateTime;

public class TimePrinter {
    public static void main(String[] args) {
        System.out.print(LocalDateTime.now());
    }
}
```

The time printer code above is parsed and represented in the high-level Abstract Syntax Tree shown in the following figure.
![Abstract Syntax Tree](image)

**Figure 3**: Abstract Syntax Tree [38]

The Abstract Syntax Tree model resulting from parsing the Android source code in this phase is then consumed by the Lifecycle Method Analyzer in the next phase.

4.1.2 Lifecycle Method Analyzer

This phase of the model consumes the Abstract Syntax Tree resulting from the previous phase and uses it to build a graph for each activity of the Android app. Each graph represents the activity lifecycle methods and the transitions between them. Each node in the graph contains a lifecycle method of that activity and the resources it uses (acquires/releases), and each edge represents a transition between lifecycle methods. For example, for the Camera resource: where it was acquired (onResume) and where it was released (onPause). The figure below shows a sample graph with two lifecycle methods and the resources acquired and released in each of them.

![Activity lifecycle methods graph](image)

Figure 4: Activity lifecycle methods graph

This phase of the model produces a directed graph for each activity file in the Android app and the lifecycle methods inside that activity file, similar to the graph in Figure 4. To help visualize the results of this phase, the graph for each activity is printed to a separate text file inside a folder named 'log', in the following format:

```
onResume()          ---> onPause()          ---> onResume()
- Camera acquired        - Camera released       - Camera acquired
- Location acquired                              - Location acquired
```

This activity lifecycle graph is then consumed by the assertion algorithm in the next phase of the model to determine the success or failure for each of the acquired resources.

4.1.3 Resource Usage Assertion Model

In this phase, we build lifecycle test assertions to make sure each resource, e.g. the camera, was acquired and released in the relevant lifecycle methods.
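The core of such an assertion can be sketched as follows (a simplified illustration; the class and field names here are our own, not the tool's exact implementation): walk the per-activity lifecycle graph, collect the resources acquired and released in each node, and flag any resource that is acquired but never released.

```java
import java.util.*;

/**
 * Simplified sketch of the assertion phase: each lifecycle method node
 * records which resources it acquires and releases, and a resource fails
 * the lifecycle test when it is acquired somewhere but never released.
 */
public class LifecycleAssertion {

    /** One node of the per-activity graph: a lifecycle method and its resource usage. */
    public static class MethodNode {
        public final String name;                        // e.g. "onResume"
        public final Set<String> acquired = new HashSet<>();
        public final Set<String> released = new HashSet<>();
        public MethodNode(String name) { this.name = name; }
    }

    /** Returns the resources that were acquired but never released. */
    public static Set<String> failingResources(List<MethodNode> activityGraph) {
        Set<String> acquired = new HashSet<>();
        Set<String> released = new HashSet<>();
        for (MethodNode node : activityGraph) {
            acquired.addAll(node.acquired);
            released.addAll(node.released);
        }
        acquired.removeAll(released);   // whatever remains has leaked
        return acquired;
    }

    public static void main(String[] args) {
        MethodNode onResume = new MethodNode("onResume");
        onResume.acquired.add("Camera");
        onResume.acquired.add("Location");
        MethodNode onPause = new MethodNode("onPause");
        onPause.released.add("Camera");                 // Location is never released
        System.out.println("Failing resources: "
                + failingResources(List.of(onResume, onPause)));
    }
}
```

In the sketch above, the Location resource would be reported as failing, since it is acquired in onResume but released in no lifecycle method.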
These assertion tests take into consideration three resources and a set of predefined model rules for each resource to use for assertion. In the model-based tool, we implement assertions for three system resources: the camera, the location, and the external drive. We chose these three resources because they are the most commonly used system resources in Android apps. The first resource is the camera. This resource is checked by searching all lifecycle methods for the camera object instance, which acquires the camera. This object is usually initialized inside the onResume lifecycle method; however, the model checks for instances of the camera object in all lifecycle methods, in case the camera was acquired in any of them. The camera in Android (version 11) can be acquired in one of two ways:

1. Directly using a camera object instance. This camera object instance can be initialized in the lifecycle method itself, or inside another regular method which is then invoked inside the lifecycle method; call it `cameraInstance`, for example. Afterwards, this camera object instance is used to acquire the camera resource using `cameraInstance.open()`. Normally the camera is released inside the onPause method, but we check the other lifecycle methods as well. We check whether the camera resource was released by looking for `cameraInstance.release()`.
2. Using the Android camera manager, which uses the camera via system services. We check for an instance of the camera manager object, called `cameraManager` for example. In order to check whether the camera manager instance is released, we look for `cameraManager.close()`, which should be in the onPause lifecycle method.

The second resource we check is the location. We check for location acquisition in the lifecycle methods (usually in the onResume method) by looking for an instance of `FusedLocationProviderClient` and whether it invokes the `getFusedLocationProviderClient` method.
In order to check whether this resource is released, we look for `removeLocationUpdates` inside the onPause lifecycle method. The third resource we check in this model is the external storage (drive). We check for reading from or writing to external storage using either the Android `ParcelFileDescriptor` or `InputStream`/`OutputStream`, depending on whether the media content is best represented as a file descriptor or a file stream. The model-based solution approach (tool) we build has a property file used to define the resources and the rules for each one of them. The property file, which is managed by the developer, consists of the following attributes and methods as inputs to the tool:

- A name for the resource being defined. For example, we alias the camera resource caught using the camera object instance 'Camera', and the camera resource caught using the Android camera manager (via system service) 'Camera2', in order to clearly show which resource failed and how it was acquired.
- The way the resource is caught. For example, we define `cameraInstanceVariable.open()` as the way the 'Camera' resource is caught using the camera object instance, while we define `getSystemService(Context.CAMERA_SERVICE)` as the way the camera is caught using the camera manager.
- The lifecycle methods where the resource can be acquired. For example, we define `onResume` as the method where 'Camera' and 'Camera2' can be acquired.
- The lifecycle methods where the resource can be released. For example, we define `onPause` as the method where the camera resource can be released, in both camera definition methods.

4.1.4 Resource Usage Report

The final phase of this model is to present the assertion results for the checked resources. The model checks for resource acquisition/release using the assertion rules for each of the three resources described in the previous section. If a resource is caught but not released, the test fails.
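For illustration, the rule definitions described in the previous section could be written down in a property file along these lines (the key names and layout here are illustrative, reconstructed from the description above rather than copied from the tool):

```properties
# For each resource: the alias shown in reports, the acquisition pattern,
# and the lifecycle methods where acquisition and release are expected.
resource.1.name      = Camera
resource.1.acquire   = cameraInstanceVariable.open()
resource.1.acquireIn = onResume
resource.1.releaseIn = onPause

resource.2.name      = Camera2
resource.2.acquire   = getSystemService(Context.CAMERA_SERVICE)
resource.2.acquireIn = onResume
resource.2.releaseIn = onPause
```

Keeping these rules in a property file lets the developer add or adjust resources without changing the tool's code.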
The model-based tool we present provides details about each resource: in which activity it was caught (acquired), and whether it was released (pass) or not (fail). Figure 5 below shows a screenshot of the results window of the tool after analyzing a sample Android app, showing the parsing and checking results for 7 activities and their associated resources. Note the results colored red (1, 4 and 5): these are failing resources, because the related resources were acquired but not released. The tool shows the failing resources and a recommendation to release each failing resource in the appropriate lifecycle method.

1. AndroidCameraApi.java: Camera Failed.
**You must release Camera by using mCameraDevice.close() method in the onPause() activity lifecycle method.**
2. AndroidCameraApi.java: Camera2 passed.
3. AndroidCameraApi.java: External_Drive passed.
4. MakePhotoNotReleaseActivity.java: Camera Failed.
**You must release Camera by using camera.release() method in the onPause() activity lifecycle method.**
5. MakePhotoNotReleaseActivity.java: Camera2 Failed.
**You must release Camera by using manager.close() method in the onPause() activity lifecycle method.**
6. Camera2Take.java: Camera2 Failed.
**You must release Camera by using manager.close() method in the onPause() activity lifecycle method.**
7. Camera2Take.java: External_Drive passed.
8. Camera2Take.java: Location passed.
Completed.

Figure 5: Sample tool analysis results

Besides, the model-based tool inserts TODO comments into the Android app's source code, in the activity file, to help the developer locate the resource that failed the test. The comments are inserted for resources that fail, and two TODOs are inserted for each failed resource:

1. One comment is inserted above the line of the resource instance that acquired the system resource. The screenshots below show the TODOs for the camera resource acquired once using a camera object instance and once using the camera manager, where neither was released.
2.
A second comment is inserted above the activity lifecycle method where the resource was acquired. The figure below shows the TODOs injected above a lifecycle method in which the camera resource was acquired in 2 different ways (a hypothetical scenario), to show how the tool treats multiple resources acquired in the same lifecycle method and not released. When there are multiple failures, these TODOs summarize them for the developer, saving the time otherwise spent inspecting each resource acquisition call inside the lifecycle method to find which resources are failing.

Figure 8: Multiple *TODOs* for multiple resources for the lifecycle method.

These *TODO* comments help the developer locate the resources that are acquired and not released, so the developer can release these resources and avoid bugs.

Chapter 5 Evaluation

We evaluate the results of our research using two approaches. First, we conduct a user evaluation test, an approach used before by Barnett et al. [36]. Second, we seed bugs into 10 real open source Android applications and then run our tool on these applications; this method was applied by Zein et al. [39]. The user evaluation test helps measure user acceptance of the developed tool and whether it can actually make the development process faster and less buggy. The GitHub repository of the project contains the entire automated testing application as well as an executable JAR file for the tool.

5.1 User Evaluation Experiment

We asked 6 Android developers to fill out a demographic survey prior to starting the experiment. This helped identify the experience level of the developers undergoing the experiment. After that, the participants watched a short learning video on how to use the automated testing application (our model-based tool) on a provided sample application.
The learning video reduces bias and lets us run the experiment for multiple users at once, instead of walking each participant through individually. The experiment setup was as follows:

1. Fill out a pre-experiment demographic survey (found in the appendix).
2. Watch a learning video on how to use the automated testing application.
3. Complete the programming task described below.
4. Finally, fill out the evaluation survey.

---
2 https://github.com/MalikMotan/AutomatedAndroidAppTester

5.1.1 Programming Task

Once you have filled out the initial survey and watched the video on how to use the automated testing application, your task is to run it on the provided sample Android application, using the following simple steps:

1. Download the automated testing application (JAR file)\(^3\).
2. Download (or fork) the sample Android application\(^4\).
3. Run the automated testing application and select the sample Android application as the input.
4. Check the successful/failed resources on the testing application's GUI.
5. Go to the Android app's source code and release the non-released (failed) resources in each of the corresponding activity files mentioned in the application's GUI.
6. Run the automated testing application again on the Android app.
7. Verify that all resources now pass the tests.

Once the developer has completed the programming task, he/she is asked to fill out the post-experiment survey to measure user acceptance. In the next chapter, we introduce the results and discussion.
The full experiment surveys, survey results, sample app, learning video, and executable JAR file are all in one public GitHub repository\(^5\).

\(^3\) [https://github.com/MalikMotan/ExperimentAndEvaluation/blob/master/Android-Parser.jar](https://github.com/MalikMotan/ExperimentAndEvaluation/blob/master/Android-Parser.jar)
\(^4\) [https://github.com/MalikMotan/ExperimentAndEvaluation/tree/master/SampleAndroidApp](https://github.com/MalikMotan/ExperimentAndEvaluation/tree/master/SampleAndroidApp)
\(^5\) [https://github.com/MalikMotan/ExperimentAndEvaluation](https://github.com/MalikMotan/ExperimentAndEvaluation)

5.2 Open Source Apps

We evaluate our automated testing application against 10 open source real-world Android applications. These 10 applications come from different domains and have different sizes and complexities. We seed bugs into these applications and check the ability of our tool to detect them. Table 1 below shows a summary of these applications.

<table>
<thead>
<tr><th>App Name</th><th>App Type</th><th>Description</th><th>Lines of code in main activity</th></tr>
</thead>
<tbody>
<tr><td>FooCam</td><td>Multimedia</td><td>Take several consecutive shots with different exposure settings</td><td>297</td></tr>
<tr><td>AntennaPod</td><td>Multimedia</td><td>Flexible and easy to use podcast manager</td><td>514</td></tr>
<tr><td>CoCoin</td><td>Financial</td><td>Multiview accounting and financial management</td><td>770</td></tr>
<tr><td>LeafPic</td><td>Multimedia</td><td>Material-designed fluid image gallery</td><td>633</td></tr>
<tr><td>Keepass2Android</td><td>Utility</td><td>Password manager to store and retrieve passwords</td><td>287</td></tr>
<tr><td>Camera2Basic</td><td>Multimedia</td><td>Camera tutorial that is used to learn how to use the camera</td><td>1036</td></tr>
<tr><td>Location Samples</td><td>Navigation</td><td>Location samples library for best practices of utilizing location</td><td>241</td></tr>
<tr><td>OpenCamera</td><td>Multimedia</td><td>Multifunctional and rich camera app</td><td>3038</td></tr>
<tr><td>Telegram</td><td>Communication</td><td>Messaging app with high speed and security for exchanged messages</td><td>4193</td></tr>
<tr><td>WordPress</td><td>Productivity</td><td>Content management system for blogs and personal websites</td><td>1546</td></tr>
</tbody>
</table>

Table 1: The open source apps used for evaluation

The evaluation process for the 10 applications aimed at evaluating the ability of our tool (the automated testing application) to detect correct and incorrect release of acquired Android system resources. The evaluation process consists of three phases.

5.2.1 Phase 1

We first manually check the source code of all applications under test (AUT), to make sure these applications do not already have lifecycle method errors. We check the imports related to system resources to see which resources are being used. After that, we check the acquisition and release of each system resource to verify it is invoked in the right lifecycle methods.

5.2.2 Phase 2

This phase aimed at evaluating incorrect releases of system resources. In this phase we modified the source code of each application under test in order to inject bugs into its main activity, then checked whether our automated testing application can detect these injected bugs and display the related errors in its GUI. Our automated testing application tests for three resources: the camera, the GPS (location), and the external storage (drive). Our testing process uses scenarios that test resources incrementally. These scenarios are:

a. First scenario: we inject the error of incorrect release of the camera resource, then run our tool to detect this error.
b.
Second scenario: in addition to the camera error, we inject a second error by incorrectly releasing the GPS (location) resource, then run our tool to detect both errors (camera and GPS resources).
c. Third scenario: we inject a third error for the external storage resource, then run our tool to detect all three errors together (camera, GPS, and external drive).

In this phase, we manually modify the source code of the Android apps in order to inject bugs into them: we make sure each resource is acquired but not released. The incorrect release of system resources is done in one of two ways, which are common mistakes of junior developers when building Android apps. These two ways are:

1. Deleting the release method for each of the three Android resources. For the camera resource, we delete the cameraInstance.release() invocation for the camera acquired via the camera instance, or delete the cameraManager.close() invocation for the camera acquired via the camera manager. For the location (GPS), we delete the removeLocationUpdates call. For the external drive resource, we delete the release (close) of the ParcelFileDescriptor or InputStream/OutputStream, depending on whether the media content is best represented as a file descriptor or a file stream.
2. Inserting the release method for a resource into the wrong lifecycle method, for example releasing one or more of the resources inside the onStop lifecycle method.

This phase also includes inserting the *TODO* recommendation comments where each failed resource is acquired. These *TODO*s help identify, in the source code of the Android application, where each resource was captured but not released.
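As an illustration of this comment-insertion step, the sketch below (plain Java; the class name and exact TODO wording are ours, not the tool's real implementation) inserts a TODO line, with matching indentation, above the first source line containing a failing resource's acquisition call:

```java
import java.util.*;

// Illustrative sketch (not the tool's exact code): insert a TODO comment
// above the first source line that contains the acquisition call of a
// failing resource, preserving that line's indentation.
class TodoInjector {
    static List<String> inject(List<String> sourceLines, String acquireCall, String todo) {
        List<String> out = new ArrayList<>();
        boolean done = false;
        for (String line : sourceLines) {
            if (!done && line.contains(acquireCall)) {
                // Copy the leading whitespace so the comment lines up.
                String indent = line.substring(0, line.indexOf(line.trim()));
                out.add(indent + "// TODO: " + todo);
                done = true;
            }
            out.add(line);
        }
        return out;
    }
}
```

Preserving the indentation keeps the injected comment aligned with the flagged statement, so it shows up cleanly in the IDE's TODO view.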
The tool then inserts a comment above the line of code that acquires any of the three resources (camera, GPS, and external drive), recommending that the resource be released using the appropriate release method and in the appropriate lifecycle method, as shown in Figures 6, 7, and 8 in the previous chapter.

5.2.3 Phase 3

The purpose of this phase was to evaluate whether the automated testing application can correctly identify the appropriate release of system resources in the right lifecycle method. In this phase, for each of the three resources, we checked whether our automated testing application can detect that the resource was acquired and released in the right lifecycle methods. In order to evaluate the execution performance of our automated testing application [40], we measured the time (in milliseconds) it took to analyze, parse, and check the resources for each of our 10 applications under test.

Chapter 6 Results and Discussion

6.1 User Evaluation Experiment

6.1.1 User Evaluation Experiment Setup

Six participants successfully completed the automated testing application's evaluation experiment. All of the participants are software developers, with Android development experience ranging from one to five years. Two of the participants were female and four were male. The vast majority of these developers have built one to five Android applications; only one has built 10 or more. Five of the developers hold bachelor's degrees, and one holds a master's degree. Almost none of the participants had ever used an automated testing application for their Android applications. It took the participants about 15 minutes each to test the application and fill out the surveys. The figure below shows the responses to each of the demographic questions.
6.1.2 User Evaluation Experiment Results

In the post-experiment survey, which measures user acceptance, the vast majority of participant responses were positive. Around 90% of participants either agreed or strongly agreed with 14 of the 15 Likert-scale questions. These strong results confirm that the proposed model and model-based tool are suitable for use by professional Android application developers. About 85% of participants agreed or strongly agreed that the automated testing application is fairly easy to use and easy to learn. The same percentage strongly agreed that it was easy to find the TODO comments inside the Android app's source code where resources were acquired but not released. This helped those developers perform the programming task easily, fix the failing resources, and run the automated testing application again to verify that the resources were released properly with no failures. The participants also found it easy to remember how to re-run the automated testing application the second time, without making any errors. Questions 3, 5, 6, 7, 11, 14, and 15 got 100% of participants to strongly agree on the following, respectively:

- It was easy to find the log text files that represent the lifecycle methods model for each activity.
- It was easy to find the resources that were caught (acquired) but not released in any activity lifecycle method.
- The GUI of the automated testing application provided useful information about the resources the Android app was using.
- The GUI of the automated testing application helped identify the resources which were acquired but not released.
- The participants are satisfied in general with using the automated testing application to test their Android apps for resource failures, and would recommend it to fellow Android developers.
Furthermore, 85% of participants agreed or strongly agreed that the automated testing application makes the development and testing process of Android apps faster and more productive. When the participants were asked, in the open-ended question, what they liked most about the automated testing application, two major points were raised:

- The reporting of success and failure of acquired resources.
- The TODO comments in the Android app's source code that help release the failing resources in the relevant lifecycle method.

Finally, participants listed some features they would like to see added to the automated testing application:

- Checking all Android system resources, in addition to the 3 resources the automated testing application currently checks.
- Releasing the failed resources automatically.

6.2 Open Source Applications Evaluation Results

The results of the phase 2 evaluation of the 10 open source apps, where we check whether our tool can detect incorrect release of system resources, were promising. The automated testing application successfully detected all incorrect releases of the Android system resources, in all 3 scenarios. Figure 9 shows a screenshot of the execution results of the automated testing application for the AntennaPod Android app. The results show that the external drive resource in one of the major activities failed the test, since the app did not release the external drive in the right lifecycle method (the onPause method).

Figure 9: Execution Results of the Automated Testing Application for the AntennaPod App

The phase 3 results, where we tested whether our tool can identify correct release of resources, were promising too. The automated testing application successfully detected the correct releasing of the Android system resources in all 3 scenarios.
Figure 10 shows a screenshot of the execution results of the automated testing application for the FooCam Android app. The results show that the camera and external drive resources in the main activity passed the test, because both were acquired in the onResume and released in the onPause lifecycle methods correctly.

For the performance evaluation of our automated testing application, we measured the execution time in milliseconds for each of the applications. Table 2 below shows the execution time and the lines of code in the main activity for each of the 10 apps we tested. The average execution time for the first 8 applications is 780 milliseconds. This shows that our tool can analyze, parse, and check resource acquisition and release for each of the 3 resources, for relatively large mobile apps, in a short time. The last two applications, Telegram and WordPress, are huge enterprise applications with hundreds of activities, and our tool takes about 14.5 s on average to analyze them.
<table>
<thead>
<tr><th>App Name</th><th>Execution Time</th><th>Lines of code in main activity</th></tr>
</thead>
<tbody>
<tr><td>FooCam</td><td>0.2s</td><td>297</td></tr>
<tr><td>AntennaPod</td><td>1.5s</td><td>514</td></tr>
<tr><td>CoCoin</td><td>1.2s</td><td>770</td></tr>
<tr><td>LeafPic</td><td>1.6s</td><td>633</td></tr>
<tr><td>Keepass2Android</td><td>1.1s</td><td>287</td></tr>
<tr><td>Camera2Basic</td><td>0.4s</td><td>1036</td></tr>
<tr><td>Location Samples</td><td>0.3s</td><td>241</td></tr>
<tr><td>OpenCamera</td><td>1.5s</td><td>3038</td></tr>
<tr><td>Telegram</td><td>15s</td><td>4193</td></tr>
<tr><td>WordPress</td><td>14s</td><td>1546</td></tr>
</tbody>
</table>

Table 2: Execution time and Lines of code in the main activity for each app

Chapter 7 Conclusion

Mobile application testing is of paramount importance. The overwhelming majority of professional Android application developers rely heavily on manual testing of their Android applications, so the need for an automated testing solution has never been higher. In this research, we propose a model-based approach for testing Android applications automatically. The model focuses on the Android activity lifecycle callback methods, and is implemented in a proof-of-concept Java tool that consumes a full Android application and generates a model of each activity, fragment, and AppCompatActivity file in the form of a graph data structure. This graph contains the lifecycle methods and the resources acquired and released as nodes, and the transitions between these lifecycle methods as edges. The graph is then used to check the acquisition/release of three Android system resources: the camera, the location, and the external drive (for read and write operations). The model has three simple outputs:

1.
Text files that contain the lifecycle methods, their acquired and released resources, and the transitions between these lifecycle methods, mimicking a graph data structure.
2. An application GUI status report, which includes each resource (one of the 3 tested resources), its parent activity, fragment, or AppCompatActivity file name, and the status of that resource (pass/fail).
3. TODO comment recommendations injected into the Android application's source code, to help the developer locate the resources that failed (acquired but not released) and the lifecycle methods in which to release them.

In the end, we conduct an experiment on 10 open source Android applications. The results of this experiment are promising. On average, for 8 of the 10 applications we tested, it takes our tool only around 780 milliseconds to parse and analyze an application and produce results, indicating good execution performance. Finally, we conduct a user acceptance test with 6 Android developer participants; the results of this test indicate that the automated testing application we propose is useful.

7.1 Threats to Validity

Even though we did our best to reduce threats to validity for both evaluation methods, we acknowledge the following threats.

7.1.1 User Evaluation Experiment

Regarding internal validity, our experiment's 6 participants were all software developers from the same software development company (Harri LLC). The affiliation between these developers may have influenced and biased their responses. Moreover, some of the participants knew the researcher in person, which might have affected their responses to the post-experiment survey. When designing this user evaluation experiment, we had novice mobile developers in mind as the target of this research. Although the majority of our participants had less than one year of experience (66.7%), some had around 5 years of experience.
This could have biased the questionnaire responses. That said, novice developers can definitely benefit from this automated testing application. Regarding external threats to validity, the user evaluation experiment involved only 6 participants; while this number was sufficient for our purpose of measuring user acceptance, it may not be suitable for statistical analysis. Evaluating our automated testing application with a wider range of participants and on a variety of mobile applications would give us more confidence in our tool. Moreover, providing users with more applications to test, from different domains, would help expose gaps in our model and improve on it. We also question our construct validity. The experiment asked the developers to modify existing code to make sure the non-released resources are released in the right lifecycle methods. However, the experiment did not ask the developers to extend the features of the application, to mimic actual software development scenarios, nor to test the acquisition and release of a resource the developer himself/herself implemented. Thus, not all steps of application development and testing were evaluated by our experiment. Future work on our model-based testing application may include extending the sample application's features, and acquiring and releasing system resources within them, to make sure every part of the process is captured and evaluated.

### 7.1.2 Open Source Applications Evaluation

Our automated testing application was tested on 10 real-world open source applications from multiple domains, such as multimedia, communication, and navigation. However, in order to have more confidence in the testing results, we need to test our automated testing application on more open source applications.
Future work on this automated testing application may also cover applications on other platforms, such as iOS applications.

7.2 Future Work

Our automated testing application checks for acquired but non-released resources, then inserts comments in the source code recommending how and where to release the failed resources. Future work may include enhancing our model to automatically generate and insert the code that releases the failed resources; this is a feature that several developers in the survey asked for. Future work may also include generalizing this model to cover mobile applications on other platforms, such as iOS, and integrating resource checks for the rest of the Android system resources, such as the sensors, Bluetooth, and cellular data. Having all these resources integrated into the model, with a set of rules for each, would help generalize our automated testing application.

References

[37] [https://javaparser.org/](https://javaparser.org/)

Appendix A: Survey Questions

Pre-experiment survey

1. How many years of experience do you have in Android app development?
   - 1 or less
   - 1 - 3 years
   - 3 - 5 years
   - 5 - 7 years
   - More than 7
2. Gender?
   - Male
   - Female
3. How many Android apps have you built/helped build?
   - 1 - 5
   - 6 - 10
   - more than 10
4. Have you used automated Android app testing tools before?
   - yes
   - no
5. What level of education do you have?
   - High School
   - Diploma
   - Bachelor's Degree
   - Master's Degree
   - PhD

Post Experiment Survey

Answers to questions 1 through 15 use the following Likert scale: 1 = Strongly Disagree to 5 = Strongly Agree. Questions 16 and 17 are open-ended questions.

1.
It was fairly easy to use the automated testing application
2. It was easy to learn how to use the automated testing application
3. It was easy to find the log files for each activity
4. It was easy to find the @TODO comments in the activity files for failing resources
5. It was easy to find the resources that were caught but not released in any activity lifecycle method
6. The results screen in the automated testing application GUI provided useful information about the Android app resources
7. The information in the automated testing application GUI helped me identify resources that are caught but not released
8. It was easy to release the non-released (failed) resources and run the automated testing application again to verify the resource is released (passed)
9. It was easy to remember how to use the automated testing application again after releasing the resources in the Android app
10. It was easy to avoid making errors or mistakes while using the automated testing application
11. The automated testing application makes it easy to detect resources caught but not released
12. The automated testing application makes Android app development and testing go faster
13. The automated testing application makes it more productive to develop and test Android apps
14. You are satisfied with using the automated testing application
15. You would recommend this automated testing application to a fellow Android developer
16. What did you like the most about the automated testing application?
17. What features would you like to be added to the automated testing application?
Abstract—Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context. Keywords—taint analysis, symbolic execution, dynamic analysis I. INTRODUCTION Dynamic analysis — the ability to monitor code as it executes — has become a fundamental tool in computer security research. Dynamic analysis is attractive because it allows us to reason about actual executions, and thus can perform precise security analysis based upon run-time information. Further, dynamic analysis is simple: we need only consider facts about a single execution at a time. Two of the most commonly employed dynamic analysis techniques in security research are dynamic taint analysis and forward symbolic execution. Dynamic taint analysis runs a program and observes which computations are affected by predefined taint sources such as user input. Forward symbolic execution automatically builds a logical formula describing a program execution path, which reduces the problem of reasoning about the execution to the domain of logic. The two analyses can be used in conjunction to build formulas representing only the parts of an execution that depend upon tainted values.
The number of security applications utilizing these two techniques is enormous. Example security research areas employing either dynamic taint analysis, forward symbolic execution, or a mix of the two, are: 1) **Unknown Vulnerability Detection.** Dynamic taint analysis can look for misuses of user input during an execution. For example, dynamic taint analysis can be used to prevent code injection attacks by monitoring whether user input is executed [22–24, 49, 58]. 2) **Automatic Input Filter Generation.** Forward symbolic execution can be used to automatically generate input filters that detect and remove exploits from the input stream [13, 21, 22]. Filters generated in response to actual executions are attractive because they provide strong accuracy guarantees [13]. 3) **Malware Analysis.** Taint analysis and forward symbolic execution are used to analyze how information flows through a malware binary [5, 6, 64], explore trigger-based behavior [11, 44], and detect emulators [57]. 4) **Test Case Generation.** Taint analysis and forward symbolic execution are used to automatically generate inputs to test programs [16, 18, 35, 56], and can generate inputs that cause two implementations of the same protocol to behave differently [9, 16]. Given the large number and variety of application domains, one would imagine that implementing dynamic taint analysis and forward symbolic execution would be a textbook problem. Unfortunately, this is not the case. Previous work has focused on how these techniques can be applied to solve security problems, but has left exact algorithms, implementation choices, and pitfalls out of scope. As a result, researchers seeking to use these techniques often rediscover the same limitations, implementation tricks, and trade-offs. The goals and contributions of this paper are two-fold. First, we formalize dynamic taint analysis and forward symbolic execution as found in the security domain.
Our formalization rests on the intuition that run-time analyses can precisely and naturally be described in terms of the formal run-time semantics of the language. This formalization provides a concise and precise way to define each analysis, and suggests a straightforward implementation. We then show how our formalization can be used to tease out and describe common implementation details, caveats, and choices as found in various security applications. II. FIRST STEPS: A GENERAL LANGUAGE A. Overview A precise definition of dynamic taint analysis or forward symbolic execution must target a specific language. For the purposes of this paper, we use SIMPIL: a Simple Intermediate Language. The grammar of SIMPIL is presented in Table I. Although the language is simple, it is powerful enough to express typical languages as varied as Java [30] and assembly code [7]. Indeed, the language is representative of internal representations used by compilers for a variety of programming languages [1]. A program in our language consists of a sequence of numbered statements. Statements in our language consist of assignments, assertions, jumps, and conditional jumps. Expressions in SIMPIL are side-effect free (i.e., they do not change the program state). We use \(\circ\) as a placeholder for typical binary operators; it can be instantiated with operators such as addition, subtraction, etc. Similarly, \(\bullet\) represents unary operators such as logical negation. The expression get_input(src) returns input from source src. We use a dot (\(\cdot\)) to denote an argument that is ignored, e.g., we will write get_input(\(\cdot\)) when the exact input source is not relevant. For simplicity, we consider only expressions (constants, variables, etc.) that evaluate to 32-bit integer values; extending the language and rules to additional types is straightforward.
For the sake of simplicity, we omit the type-checking semantics of our language and assume things are well-typed in the obvious way, e.g., that binary operands are integers or variables, not memories, and so on. B. Operational Semantics The operational semantics of a language specify unambiguously how to execute a program written in that language. Because dynamic program analyses are defined in terms of actual program executions, operational semantics also provide a natural way to define a dynamic analysis. However, before we can specify program analyses, we must first define the base operational semantics. The complete operational semantics for SIMPIL are shown in Figure 1. Each statement rule is of the form: \[ \frac{\textit{computation}}{\langle \text{current state}, \textit{stmt} \rangle \rightsquigarrow \langle \text{end state}, \textit{stmt}' \rangle} \] Rules are read bottom to top, left to right. Given a statement, we pattern-match the statement to find the applicable rule, e.g., given the statement \(x := e\), we match to the ASSIGN rule. We then apply the computation given in the top of the rule, and if successful, transition to the end state. If no rule matches (or the computation in the premise fails), then the machine halts abnormally. For instance, jumping to an address not in the domain of \(\Sigma\) would cause abnormal termination. The execution context is described by five parameters: the list of program statements (\(\Sigma\)), the current memory state (\(\mu\)), the current value for variables (\(\Delta\)), the program counter (\(pc\)), and the current statement (\(\iota\)). The \(\Sigma\), \(\mu\), and \(\Delta\) contexts are maps, e.g., \(\Delta[x]\) denotes the current value of variable \(x\). We denote updating a context variable \(x\) with value \(v\) as \(x \leftarrow v\), e.g., \(\Delta[x \leftarrow 10]\) denotes setting the value of variable \(x\) to the value 10 in context \(\Delta\).
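To make this machinery concrete, the following is a minimal Python sketch (ours, not from the paper) of the \(\Sigma\) and \(\Delta\) contexts and the CONST, INPUT, BINOP, and ASSIGN rules. The tuple encoding of expressions and the stubbed input source are illustrative assumptions.

```python
# Minimal sketch of SIMPIL-style evaluation. Expressions are nested tuples;
# get_input is a stubbed input source. All values are masked to 32 bits.

def eval_exp(exp, delta, get_input):
    """Evaluate expression exp to a value (mu, Delta |- e evaluates to v)."""
    kind = exp[0]
    if kind == "const":                            # CONST rule
        return exp[1] & 0xFFFFFFFF
    if kind == "var":                              # VAR rule: look up Delta[x]
        return delta[exp[1]]
    if kind == "input":                            # INPUT rule
        return get_input() & 0xFFFFFFFF
    if kind == "binop":                            # BINOP: evaluate subexpressions
        _, op, e1, e2 = exp
        v1 = eval_exp(e1, delta, get_input)
        v2 = eval_exp(e2, delta, get_input)
        return op(v1, v2) & 0xFFFFFFFF
    raise ValueError(f"no rule matches {kind!r}")  # abnormal termination

def step(sigma, delta, pc, get_input):
    """Execute one statement; return the new (Delta, pc)."""
    stmt = sigma[pc]                               # iota = Sigma[pc]
    if stmt[0] == "assign":                        # ASSIGN rule
        _, x, e = stmt
        v = eval_exp(e, delta, get_input)
        return {**delta, x: v}, pc + 1             # Delta' = Delta[x <- v]
    if stmt[0] == "goto":                          # GOTO rule
        return delta, eval_exp(stmt[1], delta, get_input)
    raise ValueError(f"no rule matches {stmt[0]!r}")

# The statement "x := 2 * get_input(.)" with input 20 evaluates to 40:
sigma = {1: ("assign", "x",
             ("binop", lambda a, b: a * b, ("const", 2), ("input",)))}
delta, pc = step(sigma, {}, 1, get_input=lambda: 20)
print(delta["x"], pc)  # x holds 40, next pc is 2
```

As in the formal rules, an unmatched statement or expression halts the machine abnormally, modeled here by raising an exception.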
A summary of the five meta-syntactic variables is shown in Figure 2. In our evaluation rules, the program context \(\Sigma\) does not change between transitions. The implication is that our operational semantics do not allow programs with dynamically generated code. However, adding support for dynamically generated code is straightforward. We discuss how SIMPIL can be augmented to support dynamically generated code and other higher-level language features in Section II-C. The evaluation rules for expressions use a similar notation. We denote by \(\mu, \Delta \vdash e \Downarrow v\) evaluating an expression \(e\) to a value \(v\) in the current state given by \(\mu\) and \(\Delta\). The expression \(e\) is evaluated by matching \(e\) to an expression evaluation rule and performing the attached computation. <table> <thead> <tr> <th>Context</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>(\Sigma)</td> <td>Maps a statement number to a statement</td> </tr> <tr> <td>(\mu)</td> <td>Maps a memory address to the current value at that address</td> </tr> <tr> <td>(\Delta)</td> <td>Maps a variable name to its value</td> </tr> <tr> <td>(pc)</td> <td>The program counter</td> </tr> <tr> <td>(\iota)</td> <td>The next instruction</td> </tr> </tbody> </table> Figure 2: The meta-syntactic variables used in the execution context. Most of the evaluation rules break the expression down into simpler expressions, evaluate the subexpressions, and then combine the resulting evaluations. **Example 1.** Consider evaluating the following program: 1. \( x := 2 \times \text{get\_input}(\cdot) \) The evaluation for this program is shown in Figure 3 for the input of 20. Notice that since the `ASSIGN` rule requires the expression \( e \) in `var := e` to be evaluated, we had to recurse to other rules (`BINOP`, `INPUT`, `CONST`) to evaluate the expression \( 2 \times \text{get\_input}(\cdot) \) to the value 40. **C. 
Language Discussion** We have designed our language to demonstrate the critical aspects of dynamic taint analysis and forward symbolic execution. We do not include some high-level language constructs such as functions or scopes for simplicity and space reasons. This omission does not fundamentally limit the capability of our language or our results. Adding such constructs is straightforward. For example, two approaches are: 1) Compile missing high-level language constructs down to our language. For instance, functions, buffers, and user-level abstractions can be compiled down to `SIMPIL` statements, much as a compiler lowers them to assembly-level instructions. Tools such as BAP [7] and BitBlaze [7] already use a variant of `SIMPIL` to perform analyses. BAP is freely available at \( \text{http://bap.ece.cmu.edu} \). **Example 2.** Function calls in high-level code can be compiled down to `SIMPIL` by storing the return address and transferring control flow. The following code calls and returns from the function at line 9. ```plaintext /* Caller function */ esp := esp + 4 store(esp, 6) /* push return address: statement 6 */ goto 9 /* jump to the callee at statement 9 */ /* The call will return here (statement 6) */ halt /* Callee function (statement 9) */ ... goto load(esp) /* return to the stored address */ ``` We assume this choice throughout the paper since previous dynamic analysis work has already demonstrated that such languages can be used to reason about programs written in any language. 2) Add higher-level constructs to `SIMPIL`. For instance, it might be useful for our language to provide direct support for functions or dynamically generated code. This could slightly enhance our analyses (e.g., allowing us to reason about function arguments), while requiring only small changes to our semantics and analyses. Figure 4 presents the `CALL` and `RET` rules that need to be added to the semantics of `SIMPIL` to provide support for call-by-value function calls.
Note that several new contexts were introduced to support functions, including a stack context (\( \lambda \)) to store return addresses, a scope context (\( \zeta \)) to store function-local variable contexts, and a map from function names to addresses (\( \phi \)). In a similar manner we can enhance `SIMPIL` to support other higher-level constructs, such as dynamically generated code.
\[ \dfrac{\dfrac{}{\mu, \Delta \vdash 2 \Downarrow 2}\;\text{CONST} \qquad \dfrac{20 \text{ is input}}{\mu, \Delta \vdash \text{get\_input}(\cdot) \Downarrow 20}\;\text{INPUT} \qquad v' = 2 * 20}{\mu, \Delta \vdash 2 * \text{get\_input}(\cdot) \Downarrow 40}\;\text{BINOP} \] \[ \dfrac{\mu, \Delta \vdash 2 * \text{get\_input}(\cdot) \Downarrow 40 \qquad \Delta' = \Delta[x \leftarrow 40] \qquad \iota = \Sigma[pc+1]}{\langle \Sigma, \mu, \Delta, pc,\; x := 2 * \text{get\_input}(\cdot) \rangle \rightsquigarrow \langle \Sigma, \mu, \Delta', pc+1,\; \iota \rangle}\;\text{ASSIGN} \] Figure 3: Evaluation of the program in Example 1. III. DYNAMIC TAINT ANALYSIS The purpose of dynamic taint analysis is to track information flow between sources and sinks. Any program value whose computation depends on data derived from a tainted source is considered tainted (denoted T). Any other value is considered untainted (denoted F). A taint policy \( P \) determines exactly how taint flows as a program executes, what sorts of operations introduce new taint, and what checks are performed on tainted values. While the specifics of the taint policy may differ depending upon the taint analysis application, e.g., taint tracking policies for unpacking malware may be different than attack detection, the fundamental concepts stay the same. Two types of errors can occur in dynamic taint analysis. First, dynamic taint analysis can mark a value as tainted when it is not derived from a tainted source. We say that such a value is overtainted. For example, in an attack detection application overtainting will typically result in reporting an attack when no attack occurred. Second, dynamic taint analysis can miss the information flow from a source to a sink, which we call undertainting. In the attack detection scenario, undertainting means the system missed a real attack. A dynamic taint analysis system is precise if no undertainting or overtainting occurs.
In this section we first describe how dynamic taint analysis is implemented by monitoring the execution of a program. We then describe various taint analysis policies and trade-offs. Finally, we describe important issues and caveats that often result in dynamic taint analysis systems that overtaint, undertaint, or both. A. Dynamic Taint Analysis Semantics Since dynamic taint analysis is performed on code at runtime, it is natural to express dynamic taint analysis in terms of the operational semantics of the language. Taint policy actions, whether it be taint propagation, introduction, or checking, are added to the operational semantics rules. To keep track of the taint status of each program value, we redefine values in our language to be tuples of the form \( \langle v, \tau \rangle \), where \( v \) is a value in the initial language, and \( \tau \) is the taint status of \( v \). A summary of the necessary changes to \( \text{SIMPIL} \) is provided in Table II. <table> <thead> <tr> <th>Addition</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>taint \( t \) ::= \( T \mid F \)</td> <td>Taint status: tainted (\(T\)) or untainted (\(F\))</td> </tr> <tr> <td>value ::= \( \langle v, t \rangle \)</td> <td>Values carry their taint status</td> </tr> <tr> <td>\( \tau_\Delta \)</td> <td>Maps variables to taint status</td> </tr> <tr> <td>\( \tau_\mu \)</td> <td>Maps addresses to taint status</td> </tr> </tbody> </table> Table II: Additional changes to \( \text{SIMPIL} \) to enable dynamic taint analysis. Figure 5 shows how a taint analysis policy \( P \) is added to \( \text{SIMPIL} \). The semantics show where the taint policy is used; the semantics are independent of the policy itself. In order to support taint policies, the semantics introduce two new contexts: \( \tau_\Delta \) and \( \tau_\mu \). \( \tau_\Delta \) keeps track of the taint status of scalar variables. \( \tau_\mu \) keeps track of the taint status of memory cells. \( \tau_\Delta \) and \( \tau_\mu \) are initialized so that all values are marked untainted. Together, \( \tau_\Delta \) and \( \tau_\mu \) keep the taint status for all variables and memory cells, and are used to derive the taint status for all values during execution. B.
Dynamic Taint Policies A taint policy specifies three properties: how new taint is introduced to a program, how taint propagates as instructions execute, and how taint is checked during execution. **Taint Introduction.** Taint introduction rules specify how taint is introduced into a system. The typical convention is to initialize all variables, memory cells, etc. as untainted. In \( \text{SIMPIL} \), we only have a single source of user input: the get_input() call. In a real implementation, get_input() represents values returned from a system call, return values from a library call, etc. A taint policy will also typically distinguish between different input sources. For example, an internet-facing network input source may always introduce taint, while a file descriptor that reads from a trusted configuration file may not [7, 49, 64]. Further, specific taint sources can be tracked independently, e.g., \( \tau_\Delta \) can map not just the bit indicating taint status, but also the source. **Taint Propagation.** Taint propagation rules specify the taint status for data derived from tainted or untainted operands. Since taint is a bit, propositional logic is usually used to express the propagation policy, e.g., \( t_1 \lor t_2 \) indicates the result is tainted if \( t_1 \) is tainted or \( t_2 \) is tainted. Table III: A typical tainted jump target policy for detecting attacks. A dot (\(\cdot\)) denotes an argument that is ignored. A tainted status is converted to a boolean value in the natural way, e.g., \(T\) maps to true, and \(F\) maps to false.
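The introduction and propagation rules above can be sketched in a few lines of Python. The policy-function names (`p_input`, `p_const`, `p_binop`, `p_assign`) are illustrative, not from the paper; values are modeled as \(\langle v, t\rangle\) pairs with a boolean taint bit.

```python
# Taint policy sketch: taint is a boolean, and propagation is propositional
# logic over the operand taint bits (t1 or t2 for binary operations).

def p_input():
    """Taint introduction: values from get_input are tainted."""
    return True

def p_const():
    """Constants are untainted."""
    return False

def p_binop(t1, t2):
    """Propagation: the result is tainted if either operand is tainted."""
    return t1 or t2

def p_assign(t):
    """Assignment propagates the taint of the right-hand side."""
    return t

# A value derived from input stays tainted through computation:
x = (20, p_input())                              # x := get_input(.)
two = (2, p_const())
prod = (two[0] * x[0], p_binop(two[1], x[1]))    # 2 * get_input(.)
y = (5 + prod[0], p_assign(p_binop(p_const(), prod[1])))  # y := 5 + x
print(prod, y)  # (40, True) (45, True): both carry the taint bit
```

In a fuller implementation, the taint bit could be replaced by a source identifier to track distinct input sources independently, as the text suggests.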
**Taint Checking.** Taint status values are often used to determine the runtime behavior of a program, e.g., an attack detector may halt execution if a jump target address is tainted. In SIMPIL, we perform checking by adding the policy to the premise of the operational semantics. For instance, the T-GOTO rule uses the \(P_{\text{gotocheck}}(t)\) policy. \(P_{\text{gotocheck}}(t)\) returns \(T\) if it is safe to perform a jump operation when the target address has taint value \(t\), and returns \(F\) otherwise. If \(F\) is returned, the premise for the rule is not met and the machine terminates abnormally (signifying an exception). **C. A Typical Taint Policy** A prototypical application of dynamic taint analysis is attack detection. Table III shows a typical attack detection policy which we call the **tainted jump policy**. In order to be concrete when discussing the challenges and opportunities in taint analysis, we often contrast implementation choices with respect to this policy. We stress that although the policy is designed to detect attacks, other applications of taint analysis are typically very similar. The goal of the tainted jump policy is to protect a potentially vulnerable program from control flow hijacking attacks. The main idea behind the policy is that an input-derived value should never overwrite a control-flow value such as a return address or function pointer. A control flow exploit, however, will overwrite jump targets (e.g., return addresses) with input-derived values. The tainted jump policy ensures safety against such attacks by making sure tainted jump targets are never used. The policy introduces taint into the system by marking all values returned by \(\text{get_input}()\) as tainted. Taint is then propagated through the program in a straightforward manner, e.g., the result of a binary operation is tainted if either operand is tainted, an assigned variable is tainted if the right-hand side value is tainted, and so on.
**Example 3.** Table IV shows the taint calculations at each step of the execution for the following program: \[ \begin{align*} 1 & \quad x := 2 \ast \text{get_input}() \\ 2 & \quad y := 5 + x \\ 3 & \quad \text{goto } y \end{align*} \] On line 1, the executing program receives input, assumed to be 20, and multiplies by 2. Since all input is marked as tainted, \(2 \ast \text{get_input}()\) is also tainted via T-BINOP, and \(x\) is marked in \(\tau_\Delta\) as tainted via T-ASSIGN. On line 2, \(x\) (tainted) is added to the constant 5 (untainted). Since one operand is tainted, \(y\) is marked as tainted in \(\tau_\Delta\). On line 3, the program jumps to \(y\). Since \(y\) is tainted, the T-GOTO premise for \(P\) is not satisfied, and the machine halts abnormally. **Different Policies for Different Applications.** Different applications of taint analysis can use different policy decisions. As we will see in the next section, the typical taint policy described in Table III is not appropriate for all application domains, since it does not consider whether memory addresses are tainted. Thus, it may miss some attacks. We discuss alternatives to this policy in the next section. **D. Dynamic Taint Analysis Challenges and Opportunities** There are several challenges to using dynamic taint analysis correctly, including: - **Tainted Addresses.** Distinguishing between memory addresses and cells is not always appropriate. - **Undertainting.** Dynamic taint analysis does not properly handle some types of information flow. - **Overtainting.** Deciding when to introduce taint is often easier than deciding when to remove taint. - **Time of Detection vs. Time of Attack.** When used for attack detection, dynamic taint analysis may raise an alert too late. Table V summarizes the alternate policies proposed for addressing some of these challenges in particular scenarios.
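The walk-through in Example 3 can be reproduced with a small Python sketch of the tainted jump policy: taint is introduced at `get_input`, propagated through assignments, and \(P_{\text{gotocheck}}(t) \equiv \neg t\) is checked before an indirect jump. The program encoding and helper names are our illustrative assumptions.

```python
# Sketch of Example 3 under the tainted jump policy. Each assignment carries
# an evaluation function plus a taint-propagation function over tau_Delta.

class TaintError(Exception):
    """Raised when a policy check fails (abnormal termination)."""

def run(program, input_value):
    env = {}      # Delta: variable -> value
    tau = {}      # tau_Delta: variable -> taint bit
    pc = 1
    while pc in program:
        stmt = program[pc]
        if stmt[0] == "assign":                 # T-ASSIGN / T-BINOP
            _, x, compute, taint_of = stmt
            env[x] = compute(env, input_value)
            tau[x] = taint_of(tau)              # propagate operand taint
            pc += 1
        elif stmt[0] == "goto":                 # T-GOTO with P_gotocheck
            _, target = stmt
            if tau[target]:                     # P_gotocheck(T) = F: halt
                raise TaintError(f"tainted jump target {target}")
            pc = env[target]
    return env, tau

program = {
    1: ("assign", "x", lambda e, inp: 2 * inp, lambda t: True),       # input tainted
    2: ("assign", "y", lambda e, inp: 5 + e["x"], lambda t: t["x"]),  # 5 untainted
    3: ("goto", "y"),
}
try:
    run(program, 20)
except TaintError as err:
    print("halted abnormally:", err)
```

With input 20, `x` becomes 40 (tainted) and `y` becomes 45 (tainted), so the jump on line 3 fails the check, matching the abnormal halt described above.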
In the remainder of the section we discuss the advantages and disadvantages of these policy choices, and detail common implementation details and pitfalls. **Tainted Addresses.** Memory operations involve two values: the address of the memory cell being referenced, and the value stored in that cell. The tainted jump policy in Table III tracks the taint status of addresses and memory cells separately. This policy is akin to the idea that the taint status of a pointer (in this case, an address) and the object pointed to (in this case, the memory cell) are independent [31]. **Example 4.** Given the tainted jump policy, consider the following program: 1. \( x := \text{get_input}(\cdot) \) 2. \( y := \text{load}(z + x) \) 3. \( \text{goto } y \) The user provides input to the program that is used as a table index. The result of the table lookup is then used as the target address for a jump. Assuming addresses are of some fixed width (say, 32 bits), the attacker can pick an appropriate value of \(x\) to address any memory cell she wishes. As a result, the attacker can jump to any value in memory that is untainted. In many programs this would allow the user to violate the intended control flow of the program, thus creating a security violation. The tainted jump policy applied to the above program still allows an attacker to jump to untainted, yet attacker-determined, locations. This is an example of undertainting by the policy, which means that the tainted jump policy may miss an attack. One possible fix is to use the tainted addresses policy shown in Table V. Using this policy, a memory cell is tainted if either the memory cell value or the memory address is tainted. TaintCheck [49], a dynamic taint analysis engine for binary code, offers such an option. The tainted address policy may also have issues. For example, the tcpdump program has legitimate code similar to the program above. In tcpdump, a network packet is first read in. The first byte of the packet is used as an index into a function pointer table to print the packet type, e.g., if byte 0 of the packet is 4, the IPv4 printer is selected and then called. In the above code, \(z\) represents the base address of the function call table, and \(x\) is the first byte of the packet. Thus, the tainted address modification would cause every non-trivial run of tcpdump to raise a taint error. Other code constructs, such as switch statements, can cause similar table lookup problems. The tainted address policy may find additional taint flows, but may also overtaint. On the other hand, the tainted jump policy can lead to undertaint.
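The contrast between the two memory policies can be seen on the load from the table-lookup program, where the address is input-derived (tainted) but the table entry itself is untainted. This is a sketch with illustrative helper names, not the paper's implementation.

```python
# The two P_mem variants from Table V applied to y := load(z + x), where the
# address z + x is tainted (x came from input) and the loaded cell is not.

def p_mem_tainted_value(t_addr, t_val):
    """Tainted jump policy: address taint is ignored on a load."""
    return t_val

def p_mem_tainted_addresses(t_addr, t_val):
    """Tainted addresses policy: address or value taint propagates."""
    return t_addr or t_val

t_addr, t_val = True, False   # address derived from input, cell untainted

# Under the first policy y is untainted (possible undertaint: the attack
# in Example 4 goes unnoticed); under the second it is tainted (possible
# overtaint: legitimate table lookups, as in tcpdump, raise errors).
print(p_mem_tainted_value(t_addr, t_val),
      p_mem_tainted_addresses(t_addr, t_val))
```

The two outputs, `False` and `True`, are exactly the false-negative versus false-positive trade-off discussed in the text.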
In security applications, such as attack detection, this dichotomy means that the attack detector either misses some exploits (i.e., false negatives) or reports safe executions as bad (i.e., false positives). **Control-flow taint.** Dynamic taint analysis tracks data flow taint. However, information flow can also occur through control dependencies. <table> <thead> <tr> <th>Policy</th> <th>Substitution</th> </tr> </thead> <tbody> <tr> <td>Tainted Value</td> <td>\(P_{\text{mem}}(t_a, t_v) \equiv t_v\)</td> </tr> <tr> <td>Tainted Addresses</td> <td>\(P_{\text{mem}}(t_a, t_v) \equiv t_a \lor t_v\)</td> </tr> <tr> <td>Control Dependent</td> <td>Not possible</td> </tr> <tr> <td>Tainted Overflow</td> <td>\(P_{\text{binop}}(t_1, t_2, v_1, v_2, \theta_b) \equiv (t_1 \lor t_2) \Rightarrow \neg \text{overflows}(v_1 \lor v_2)\)</td> </tr> </tbody> </table> Table V: Alternate taint analysis policy choices. Informally, a statement \(s_2\) is control-dependent on statement \(s_1\) if \(s_1\) controls whether or not \(s_2\) will execute. A more precise definition of control-dependency that uses post-dominators can be found in [29]. In SIMPIL, only indirect and conditional jumps can cause control dependencies.
If you do not compute control dependencies, you cannot determine control-flow based taint, and the overall analysis may undertaint. Unfortunately, pure dynamic taint analysis cannot compute control dependencies, and thus cannot accurately determine control-flow-based taint.
The reason is simple: reasoning about control dependencies requires reasoning about multiple paths, and dynamic analysis executes on a single path at a time. In the above example, any single execution will not be able to tell that the value of $y$ is control-dependent and $z$ is not. There are several possible approaches to detecting control-dependent taint: 1) Supplement dynamic analysis with static analysis. Static analysis can compute control dependencies, and thus can be used to compute control-dependent taint [1, 20, 52]. Static analysis can be applied over the entire program, or over a collection of dynamic analysis runs. 2) Use heuristics, making an application-specific choice whether to overtaint or undertaint depending upon the scenario [20, 49, 63]. **Sanitization.** Dynamic taint analysis as described only adds taint; it never removes it. This leads to the problem of taint propagation: as the program executes, more and more values become tainted, often with less and less taint precision. A significant challenge in taint analysis is to identify when taint can be removed from a value. We call this the taint sanitization problem. One common example where we wish to sanitize is when the program computes constant functions. A typical example in x86 code is $b = a \oplus a$. Since $b$ will always equal zero, the value of $b$ does not depend upon $a$. x86 programs often use this construct to zero out registers. A default taint analysis policy, however, will identify $b$ as tainted whenever $a$ is tainted. Some taint analysis engines check for well-known constant functions, e.g., TEMU [7] and TaintCheck [49] can recognize the above xor case. The output of a constant function is completely independent of user input. However, some functions allow users to affect their output without allowing them to choose an arbitrary output value. 
For example, it is computationally hard to find inputs that will cause a cryptographically secure hash function to output an arbitrary value. Thus, in some application domains, we can treat the output of functions like cryptographic hash functions as untainted. Newsome et al. have explored how to automatically recognize such cases by quantifying how much control users can exert on a function’s output [48]. Finally, there may be application-dependent sanitization. For example, an attack detector may want to untaint values if the program logic performs sanitization itself. For instance, if the application logic checks that an index to an array is within the array size, the result of the table lookup could be considered untainted. **Time of Detection vs Time of Attack.** Dynamic taint analysis can be used to flag an alert when tainted values are used in an unsafe way. However, there is no guarantee that program integrity has not been violated before this point. One example of this problem is the time of detection/time of attack gap that occurs when taint analysis is used for attack detection. Consider a typical return address overwrite exploit. In such attacks, the user can provide an exploit that overwrites a function return address with the address of attacker-supplied shellcode. The tainted jump policy will catch such attacks because the return address will become tainted during the overwrite. The tainted jump policy is frequently used to detect such attacks against potentially unknown vulnerabilities [20–22, 49, 63]. Note, however, that the tainted jump policy does not raise an error when the return address is first overwritten; only when it is later used as a jump target. Thus, the exploit will not be reported until the function returns. Arbitrary effects could happen between the time when the return address is first overwritten and when the attack is detected, e.g., any calls made by the vulnerable function will still be made before an alarm is raised.
If these calls have side effects, e.g., file manipulation or networking functions, the effects can persist even after the program is aborted. The problem is that dynamic taint analysis alone keeps track of too little information. In a return overwrite attack, the abstract machine would need to keep track of where return addresses are and verify that they are not overwritten. In binary code settings, this is difficult. <table> <tbody> <tr> <td>value $v$</td> <td>::=</td> <td>32-bit unsigned integer $\mid$ exp</td> </tr> <tr> <td>$\Pi$</td> <td>::=</td> <td>the current constraints on symbolic variables due to path choices</td> </tr> </tbody> </table> Table VI: Changes to SIMPIL to allow forward symbolic execution. Another example of the time of detection/time of attack gap is detecting integer overflow attacks. Taint analysis alone does not check for overflow: it just marks which values are derived from tainted sources. An attack detector would need to add additional logic beyond taint analysis to find such problems. For example, the tainted integer overflow policy shown in Table V is the composition of a taint analysis check and an integer overflow policy. Current taint-based attack detectors [7, 20, 49, 63] typically exhibit time of detection to time of attack gaps. BitBlaze [7] provides a set of tools for performing post hoc analysis of instruction traces produced with its taint infrastructure. Post hoc trace analysis, however, negates some advantages of having a purely dynamic analysis environment. **IV. FORWARD SYMBOLIC EXECUTION** Forward symbolic execution allows us to reason about the behavior of a program on many different inputs at one time by building a logical formula that represents a program execution. Thus, reasoning about the behavior of the program can be reduced to the domain of logic. **A. Applications and Advantages** **Multiple inputs.** One of the advantages of forward symbolic execution is that it can be used to reason about more than one input at once.
For instance, consider the program in Example 6 — only one out of $2^{32}$ possible inputs will cause the program to take the true branch. Forward symbolic execution can reason about the program by considering two different input classes — inputs that take the true branch, and those that take the false branch. **Example 6.** Consider the following program: ```plaintext
1 x := 2*get_input(·) + 1
2 if x-5 == 14 then goto 3 else goto 4
3 // catastrophic failure
4 // normal behavior
``` Only one input will trigger the failure. **B. Semantics of Forward Symbolic Execution** The primary difference between forward symbolic execution and regular execution is that when get_input(·) is evaluated symbolically, it returns a symbol instead of a concrete value. When a new symbol is first returned, there are no constraints on its value; it represents any possible value. As a result, expressions involving symbols cannot be fully evaluated to a concrete value (e.g., \( s + 5 \) cannot be reduced further). Thus, our language must be modified, allowing a value to be a partially evaluated symbolic expression. The changes to SIMPIL to allow forward symbolic execution are shown in Table VI. Branches constrain the values of symbolic variables to the set of values that would execute the path. The updated rules for branch statements are given as S-TCOND and S-FCOND in Figure 6. For example, if the execution of the program follows the true branch of \( \text{“if } x > 2 \text{ then goto } e_1 \text{ else goto } e_2 \text{”} \), then \( x \) must contain a value greater than 2. If execution instead takes the false branch, then \( x \) must contain a value that is not greater than 2. Similarly, after an assertion statement, the values of symbols must be constrained such that they satisfy the asserted expression. We represent these constraints on symbol assignments in our operational semantics with the path predicate \( \Pi \).
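As a sketch of this process, the following hypothetical Python models the two path predicates of Example 6 as ordinary predicates over the fresh symbol $s$, and "solves" the true-branch predicate by brute force over a small domain (a real engine would instead hand the formula to an SMT solver):

```python
# Hypothetical sketch of forward symbolic execution for Example 6.
# The path predicate for each branch is modeled as a Python predicate.

def path_predicates():
    # x := 2*get_input(.) + 1, then branch on x - 5 == 14
    x = lambda s: 2 * s + 1
    true_branch = lambda s: x(s) - 5 == 14   # predicate for the failing path
    false_branch = lambda s: x(s) - 5 != 14  # predicate for the normal path
    return true_branch, false_branch

true_pi, false_pi = path_predicates()
# "Solve" the true-branch predicate by brute force over a small domain.
failing_inputs = [s for s in range(1024) if true_pi(s)]
print(failing_inputs)  # [9]: the single input reaching the failure
```

Every input other than 9 satisfies the false-branch predicate, which is how one formula per path summarizes an entire input class.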
We show how \( \Pi \) is updated by the language constructs in Figure 6. At every symbolic execution step, \( \Pi \) contains the constraints on the symbolic variables.

**C. Forward Symbolic Execution Example**

The symbolic execution of Example 6 is shown in Table VII. On Line 1, get_input(·) evaluates to a fresh symbol \( s \), which initially represents any possible user input. \( s \) is doubled and then assigned to \( x \). This is reflected in the updated \( \Delta \). When forward symbolic execution reaches a branch, as in Line 2, it must choose which path to take. The strategy used for choosing paths can significantly impact the quality of the analysis; we discuss this later in this section. Table VII shows the program contexts after symbolic execution takes both paths (denoted by the use of the S-TCOND and S-FCOND rules). Notice that the path predicate \( \Pi \) depends on the path taken through the program.

**D. Forward Symbolic Execution Challenges and Opportunities**

Creating a forward symbolic execution engine is conceptually a very simple process: take the operational semantics of the language and change the definition of a value to include symbolic expressions. However, by examining our formal definition of this intuition, we can find several instances where our analysis breaks down. For instance:

- **Symbolic Memory.** What should we do when the analysis uses the \( \mu \) context — whose index must be a non-negative integer — with a symbolic index?
- **System Calls.** How should our analysis deal with external interfaces such as system calls?
- **Path Selection.** Each conditional represents a branch in the program execution space. How should we decide which branches to take?

We address these issues and more below.

**Symbolic Memory Addresses.** The `LOAD` and `STORE` rules evaluate the expression representing the memory address to a value, and then get or set the corresponding value at that address in the memory context \( \mu \).
When executing concretely, that value will be an integer that references a particular memory cell. When executing symbolically, however, we must decide what to do when a memory reference is an expression instead of a concrete number. The **symbolic memory address problem** arises whenever an address referenced in a load or store operation is an expression derived from user input instead of a concrete value. When we load from a symbolic expression, a sound strategy is to consider it a load from any possible satisfying assignment for the expression. Similarly, a store to a symbolic address could overwrite any value for a satisfying assignment to the expression. Symbolic addresses are common in real programs, e.g., in the form of table lookups dependent on user input. Symbolic memory addresses can lead to aliasing issues even along a single execution path. A potential address alias occurs when two memory operations refer to the same address.

**Example 7.** Consider the following program:

```plaintext
1 store(addr1, v)
2 z = load(addr2)
```

If \( addr1 = addr2 \), then \( addr1 \) and \( addr2 \) are aliased and the value loaded will be the value \( v \). If \( addr1 \neq addr2 \), then \( v \) will not be loaded. In the worst case, \( addr1 \) and \( addr2 \) are expressions that are sometimes aliased and sometimes not. There are several approaches to dealing with symbolic references:

- One approach is to make unsound assumptions for removing symbolic addresses from programs.
For example, Vine [7] can optionally rewrite all memory addresses as scalars based on name, e.g., Example 7 would be rewritten as:

```plaintext
1 mem_addr1 = v
2 z = mem_addr2
```

The appropriateness of such unsound assumptions varies depending on the overall application domain.

- Let subsequent analysis steps deal with them. For example, many application domains pass the generated formulas to an SMT solver [4, 32]. In such domains we can let the SMT solver reason about all possible aliasing relationships. In order to logically encode symbolic addresses, we must explicitly name each memory update. Example 7 can be encoded as: \[ mem_1 = (mem_0 \text{ with } mem_0[addr_1] = v) \land z = mem_1[addr_2] \] The above formula should be read as: \(mem_1\) is the same as \(mem_0\) except at index \(addr_1\), where the value is \(v\). Subsequent reads are performed on \(mem_1\).

- Perform alias analysis. One could try to reason about whether two references point to the same address by performing alias analysis. Alias analysis, however, is a static or offline analysis. In many application domains, such as recent work in automated test-case generation [8, 16–18, 28, 33, 34, 56], fuzzing [35], and malware analysis [10, 44], part of the allure of forward symbolic execution is that it can be done at run-time. In such scenarios, adding a static analysis component may be unattractive.

Unfortunately, most previous work does not specifically address the problem of symbolic addresses. KLEE and its predecessors [16, 18] mix alias analysis with letting the SMT solver reason about aliasing. DART [35] and CUTE [56] only handle formulas that are linear constraints and therefore cannot handle general symbolic references. However, when a symbolic memory access is a linear address, they can solve the system of linear equations to see if they may be aliased.
To the best of our knowledge, previous work in malware analysis has not addressed the issue. Thus, malware authors could intentionally create malware that includes symbolic memory operations to thwart analysis.

**Path Selection.** When forward symbolic execution encounters a branch, it must decide which branch to follow first. We call this the path selection problem. We can think of a forward symbolic execution of an entire program as a tree in which every node represents a particular instance of the abstract machine (e.g., \(\Pi, \Sigma, \mu, \Delta, pc, \iota\)). The analysis begins with only a root node in the tree. However, every time the analysis must fork, such as when a conditional jump is encountered, it adds all possible forked states as children of the current node. We can further explore any leaf node in the tree that has not terminated. Thus, forward symbolic execution needs a strategy for choosing which state to explore next. This choice is important, because loops with symbolic conditions may never terminate. If an analysis tries to explore such a loop in a naive manner, it might never explore other branches in the state tree. Loops can cause trees of infinite depth; the handling of loops is therefore an integral component of the path-selection strategy. For example, suppose \(n\) is input in: \[ \text{while } (3^n + 4^n = 5^n) \{ \ n++; \ldots \} \] Exploring all paths in this program is infeasible. Although we know mathematically there is no satisfying answer to the branch guard other than \(n = 2\), the forward symbolic execution algorithm does not. The formula for one loop iteration will include the branch guard \(3^n + 4^n = 5^n\), the second iteration will have the branch guard \(3^{n+1} + 4^{n+1} = 5^{n+1}\), and so on. Typically, forward symbolic execution will provide an upper bound on loop iterations to consider in order to keep it from getting “stuck” in such potentially infinite or long-running loops.
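One way to realize such a bound is a depth-limited worklist exploration of the state tree. The following hypothetical sketch abstracts machine states as strings of branch choices; `step` stands in for forking an interpreter at a branch:

```python
# Hypothetical sketch: worklist-based exploration of a branching state space
# with a configurable depth bound, the usual safeguard against getting stuck
# in loops whose conditions are symbolic.

def explore(step, initial, max_depth):
    # step(state) returns the list of successor states (forks at branches).
    worklist, explored = [(initial, 0)], []
    while worklist:
        state, depth = worklist.pop()        # pop() gives depth-first order
        explored.append(state)
        if depth >= max_depth:
            continue                          # bound reached: prune deeper paths
        for succ in step(state):
            worklist.append((succ, depth + 1))
    return explored

# A toy state space: every state forks into two successors forever,
# like a loop with a symbolic condition.
states = explore(lambda s: [s + "T", s + "F"], "", max_depth=3)
print(len(states))  # 15 states: a complete binary tree of depth 3
```

Swapping `worklist.pop()` for `worklist.pop(0)` turns the depth-first strategy into breadth-first, which is one knob the strategies below tune.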
Approaches to the path selection problem include:

1) **Depth-First Search.** DFS employs the standard depth-first search algorithm on the state tree. The primary disadvantage of DFS is that it can get stuck in non-terminating loops with symbolic conditions if no maximum depth is specified. If this happens, then no other branches will be explored and code coverage will be low. KLEE [16] and EXE [18] can implement a DFS search with a configurable maximum depth for cyclic paths to prevent infinite loops.

2) **Concolic Testing.** Concolic testing [28, 36, 56] uses concrete execution to produce a trace of a program execution. Forward symbolic execution then follows the same path as the concrete execution. The analysis can optionally generate concrete inputs that will force the execution down another path by choosing a conditional and negating the constraints corresponding to that conditional statement. Since forward symbolic execution can be orders of magnitude slower than concrete execution, one variant of concolic testing uses a single symbolic execution to generate many concrete testing inputs. This search strategy is called generational search [36].

3) **Random Paths.** A random path strategy is also implemented by KLEE [16], where the forward symbolic execution engine selects states by randomly traversing the state tree from the root until it reaches a leaf node. The random path strategy gives a higher weight to shallow states. This prevents executions from getting stuck in loops with symbolic conditions.

4) **Heuristics.** Additional heuristics can help select states that are likely to reach uncovered code. Sample heuristics include the distance from the current point of execution to an uncovered instruction, and how recently the state reached uncovered code in the past.

**Symbolic Jumps.** The premise of the GOTO rule requires the address expression to evaluate to a concrete value, similar to the LOAD and STORE rules.
However, during forward symbolic execution the jump target may be an expression instead of a concrete location. We call this the symbolic jump problem. One common cause of symbolic jumps is jump tables, which are commonly used to implement switch statements. A significant amount of previous work in forward symbolic execution does not directly address the symbolic jump problem [8, 16–18, 28, 35, 36, 56]. In some domains, such as automated test-case generation, leaving symbolic jumps out-of-scope simply means a lower success rate. In other domains, such as malware analysis, widespread use of symbolic jumps would pose a challenge to current automated malware reverse engineering [10, 11, 44]. Three standard ways to handle symbolic jumps are:

1) Use concrete and symbolic (concolic) analysis [56] to run the program and observe an indirect jump target. Once the jump target is taken in the concrete execution, we can perform symbolic execution of the concrete path. One drawback is that it becomes more difficult to explore the full state space of the program because we only explore known jump targets. Thus, code coverage can suffer.

2) Use an SMT solver. When we reach a symbolic jump to $e$ with path predicate $P$, we can ask the SMT solver for a satisfying answer to $P$. A satisfying answer includes an assignment of concrete values to the variables in $e$, and evaluating $e$ under that assignment yields a concrete jump target. To enumerate further targets, we extend the query to exclude values already seen: if the first discovered target is $n$, we query for $P \land (e \neq n)$. Although querying an SMT solver is a perfectly valid solution, it may not be as efficient as other options that take advantage of program structure, such as static analysis.

3) Use static analysis. Static analysis can reason about the entire program to locate possible jump targets. In practice, source-level indirect jump analyses typically take the form of pointer analyses.
Binary-level static jump analyses reason about what values may be referenced in jump target expressions [2]. For example, function pointer tables are typically implemented as a table of possible jump targets.

**Example 8.** Consider the following program:

```
1 bytes := get_input(·)
2 p := load(functable + bytes)
3 goto p
```

Since `functable` is statically known, and the size of the table is fixed, a static analysis can determine that the range of targets is $load(functable + x)$ for $\{x \mid 0 \leq x \leq k\}$, where $k$ is the size of the table.

**Handling System and Library Calls.** In concrete execution, system calls introduce input values to a program. Our language models such calls as `get_input(·)`. We refer to calls that are used as input sources as system-level calls. For example, in a C program, system-level calls may correspond to calling library functions such as `read`. In a binary program, system-level calls may correspond to issuing an interrupt. Some system-level calls introduce fresh symbolic variables. However, they can also have additional side effects. For example, `read` returns fresh symbolic input and updates an internal pointer to the current read file position. A subsequent call to `read` should not return the same input. One approach to handling system-level calls is to create summaries of their side effects [12, 16, 18]. The summaries are models that describe the side effects that occur whenever the respective code is called concretely. The advantage of summaries is that they can abstract only those details necessary for the application domain at hand. However, they typically need to be generated manually. A different approach when using concolic execution [56] is to reuse values returned from system calls on previous concrete executions during symbolic execution. For example, if during a concrete execution `sys_call()` returns 10, we use 10 during forward symbolic execution of the corresponding `sys_call()`.
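A sketch of this record-and-replay idea in hypothetical Python (real concolic engines key the log by call site and execution index, not just call order):

```python
# Hypothetical sketch of the concolic approach to system calls: record
# return values during a concrete run, then replay them during symbolic
# execution instead of modeling the call's semantics.

class SyscallLog:
    def __init__(self):
        self.recorded = []
        self.replay_pos = 0

    def record(self, value):
        # Concrete run: remember what the environment returned.
        self.recorded.append(value)
        return value

    def replay(self):
        # Symbolic run: reuse the recorded concrete value for this call.
        value = self.recorded[self.replay_pos]
        self.replay_pos += 1
        return value

log = SyscallLog()
log.record(10)   # concrete execution: sys_call() returned 10
log.record(42)   # a later call returned 42
print(log.replay(), log.replay())  # 10 42: same values, in call order
```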
The central advantages of a concolic-based approach are that it is simple, easy to implement, and sidesteps the problem of reasoning about how a program interacts with its environment. Any analysis that uses concrete values will not, by definition, provide a complete analysis with respect to system calls. In addition, the analysis may not be sound, as some calls do not always return the same result even when given the same input. For example, `gettimeofday()` returns a different time for each call. **Performance.** A straightforward implementation of forward symbolic execution will lead to a) a running time exponential in the number of program branches, b) an exponential number of formulas, and c) an exponentially-sized formula per branch. The running time is exponential in the number of branches because a new interpreter is forked off at each branch point. The exponential number of formulas directly follows, as there is a separate formula at each branch point. **Example 9.** Consider the following program: ```plaintext
1 x := get_input();
2 x := x + x
3 x := x + x
4 x := x + x
5 if e1 then S1 else S2
6 if e2 then S3 else S4
7 if e3 then S5 else S6
8 assert(x < 10);
``` The $S_i$ are statements executed in the branches. There are 8 paths through this program, so there will be 8 runs of the interpreter and 8 path predicates. The formula even for a single program path may be exponential in size due to substitution. During both concrete and symbolic evaluation of an expression $e$, we substitute all variables in $e$ with their value. However, unlike concrete evaluation, the result of evaluating $e$ is not of constant size. Example 9 demonstrates the problem with $x$. If during forward symbolic execution get_input() returns $s$, after executing the three assignments $\Delta$ will map $x \rightarrow s + s + s + s + s + s + s + s$. In practice, we can mitigate these problems in a number of ways: - Use more and faster hardware.
Exploring multiple paths and solving formulas for each path is inherently parallelizable. - Exponential blowup due to substitution can be handled by giving each variable assignment a unique name, and then using the name instead of performing substitution. For example, the assignments to $x$ can be written as: $$x_1 = x_0 + x_0 \land x_2 = x_1 + x_1 \land x_3 = x_2 + x_2$$ - Identify redundancies between formulas and make them more compact. In the above example, the path predicates for all formulas will include the first four statements. Bouncer [21] uses heuristics to identify commonalities in the formulas during signature generation. Godefroid et al. [36] perform post hoc optimizations of formulas to reduce their size. - Identify independent subformulas. Cadar et al. identify logically independent subformulas, and query each subformula separately in EXE and KLEE [16, 18]. They also implement caching on the SMT solver such that if the same formula is queried multiple times, they can use the cached value instead of solving it again. For example, all path predicates for Example 9 contain as a prefix the assignments to $x$. If these assignments are independent of other parts of the path predicate, KLEE’s cache will solve the subformula once, and then reuse the returned value on the remaining paths. Cadar et al. found caching instrumental in scaling forward symbolic execution [18]. - One alternative to forward symbolic execution is to use the weakest precondition [26] to calculate the formula. Formulas generated with weakest preconditions require only $O(n^2)$ time and will be at most $O(n^2)$ in size, for a program of size $n$ [14, 30, 42]. Unlike forward symbolic execution, weakest preconditions normally process statements from last to first. Thus, weakest preconditions are implemented as a static analysis.
However, a recent algorithm for efficiently computing the weakest precondition in any direction can be used as a replacement for applications that build formulas using symbolic execution [40]. The program must be converted to dynamic single assignment form before using this new algorithm.

**Mixed Execution.** Depending on the application domain and the type of program, it may be appropriate to limit symbolic input to only certain forms of input. For instance, in automated test generation of a network daemon, it may not make sense to consider the server configuration file symbolically — in many cases, a potential attacker will not have access to this file. Instead, it is more important to handle network packets symbolically, since these are the primary interface of the program. Allowing some inputs to be concrete and others symbolic is called mixed execution. Our language can be extended to allow mixed execution by concretizing the argument of the get_input(·) expression, e.g., get_input(file), get_input(network), etc. Besides appropriately limiting the scope of the analysis, mixed execution enables calculations involving concrete values to be done on the processor. This allows portions of the program that do not rely on user input to potentially run at the speed of concrete execution.

**V. RELATED WORK**

**A. Formalization and Systematization**

The use of operational semantics to define dynamic security mechanisms is not new [37, 45]. Other formal mechanisms for defining such policies exist as well [54]. Despite these tools, prior work has largely avoided formalizing dynamic taint analysis and forward symbolic execution. Some analysis descriptions define a programming language similar to ours, but only informally discuss the semantics of the analyses [28, 35, 63]. Such informal descriptions of semantics can lead to ambiguities in subtle corner cases.

B.
Applications. In the remainder of this section, we discuss applications of dynamic taint analysis and forward symbolic execution. Due to the scope of related work, we cite the most representative work.

**Automatic Test-case Generation.** Forward symbolic execution has been used extensively to achieve high code coverage in automatic test-case generation [16–18, 28, 35, 36, 56]. Many of these tools also automatically find well-defined bugs, such as assertion errors, divisions by zero, NULL pointer dereferences, etc.

**Automatic Filter Generation.** Intrusion prevention/detection systems use input filters to block inputs that trigger known bugs and vulnerabilities. Recent work has shown that forward symbolic execution path predicates can serve as accurate input filters for such systems [12–14, 21, 22, 43, 46, 47].

**Automatic Network Protocol Understanding.** Dynamic taint analysis has been used to automatically understand the behavior of network protocols [15, 62] when given an implementation of the protocol.

**Malware Analysis.** Automatic reverse-engineering techniques for malware have used forward symbolic execution [10, 11, 44] and dynamic taint analysis [5, 6, 27, 57, 64] to analyze malware behavior. Taint analysis has been used to track when code unpacking is used in malware [64].

**Web Applications.** Many analyses of Web applications utilize dynamic taint analysis to detect common attacks such as SQL injection [3, 38, 39, 50, 55, 61] and cross-site scripting attacks [53, 55, 60]. Some researchers have also combined dynamic taint analysis with static analysis to find bugs in Web applications [3, 61]. Sekar [55] introduced taint inference, a technique that applies syntax- and taint-aware policies to block injection attacks.

**Taint Performance & Frameworks.** The ever-growing need for more efficient dynamic taint analyses was initially met by binary instrumentation frameworks [20, 51].
Due to the high overhead of binary instrumentation techniques, more efficient compiler-based [41, 63] and hardware-based [24, 25, 58, 59] approaches were later proposed. Recent results show that a dynamic software-based approach, augmented by static analysis, introduces minimal overhead and thus can be practical [19].

**Extensions to Taint Analysis.** Our rules assume data is either tainted or not. Newsome et al. have proposed a generalization of taint analysis that quantifies the influence that an input has on a particular program statement based on channel capacity [48].

**VI. CONCLUSION**

Dynamic program analyses have become increasingly popular in security. The two most common, dynamic taint analysis and forward symbolic execution, are used in a variety of application domains. However, despite their widespread usage, there has been little effort to formally define these analyses and summarize the critical issues that arise when implementing them in a security context. In this paper, we introduced a language for demonstrating the critical aspects of dynamic taint analysis and forward symbolic execution. We defined the operational semantics for our language, and leveraged these semantics to formally define dynamic taint analysis and forward symbolic execution. We used our formalisms to highlight challenges, techniques, and tradeoffs when using these techniques in a security setting.

**VII. ACKNOWLEDGEMENTS**

We would like to thank Dawn Song and the BitBlaze team for their useful ideas and advice on dynamic taint analysis and forward symbolic execution. We would also like to thank our shepherd Andrei Sabelfeld, JongHyup Lee, Ivan Jager, and our anonymous reviewers for their useful comments and suggestions. This work is supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office. The views expressed herein are those of the authors and do not necessarily represent the views of our sponsors.
On Analysis of Boundness Property for ECATNets by Using Rewriting Logic

Noura Boudiaf, and Allaoua Chaoui

Abstract—To analyze the behavior of Petri nets, the accessibility graph and Model Checking are widely used. However, if the analyzed Petri net is unbounded, then the accessibility graph becomes infinite and Model Checking cannot be used, even for small Petri nets. ECATNets [2] are a category of algebraic Petri nets. The main feature of ECATNets is their sound and complete semantics based on rewriting logic [8] and its language Maude [9]. ECATNet analysis may be done by using the techniques of accessibility analysis and Model Checking defined in Maude. But these two techniques supported by Maude do not work with infinite-state systems. As a category of Petri nets, ECATNets can be unbounded and thus infinite-state systems. In order to know whether we can apply the accessibility analysis and Model Checking of Maude to an ECATNet, we propose in this paper an algorithm that detects whether the ECATNet is bounded or not. Moreover, we propose a rewriting logic based tool implementing this algorithm. We show that the development of this tool using the Maude system is facilitated thanks to the reflectivity of rewriting logic. Indeed, the self-interpretation of this logic allows us both to model an ECATNet and to act on it.

Keywords—ECATNets, Rewriting Logic, Maude, Finite-state Systems, Infinite-state Systems, Boundedness Property Checking.

I. INTRODUCTION

The development of provably error-free concurrent systems is still a challenge of system engineering. Modeling and analysis of concurrent systems by means of Petri nets is one of the well-known approaches using formal methods. Two well-known analysis techniques for Petri nets are dynamic analysis and Model Checking. These two methods are largely used in the verification of different categories of Petri nets.
However, if the analyzed Petri net is unbounded, then the reachability graph becomes infinite and Model Checking cannot be used, even for small Petri nets. ECATNets [2] are a category of algebraic Petri nets (APNs) based on a safe combination of algebraic abstract data types and high-level Petri nets. The semantics of ECATNets is defined in terms of rewriting logic [8], allowing us to build models by formal reasoning. Like Petri nets, ECATNets provide a quickly understood formalism due to their simple construction and graphical depiction. Moreover, ECATNets have a strong theory and development tools based on a powerful logic with a sound and complete semantics. The integration of ECATNets in rewriting logic is very promising in terms of the specification and verification of their properties. Rewriting logic provides ECATNets with a simple, intuitive, and practical textual version for analyzing systems, without losing the formal semantics. ECATNet analysis may be done by using the techniques of accessibility analysis and Model Checking defined in Maude [3]. However, these two techniques supported by Maude do not work with infinite-state systems. As a category of Petri nets, ECATNets can be unbounded and thus infinite-state systems. Consequently, studying the boundedness property is important in order to decide the applicability of accessibility analysis and Model Checking to ECATNets. The study of the boundedness property for different categories of Petri nets is a known problem in the literature. Such studies aim in general to construct a coverability graph for Petri nets.
Among these: the Pr/T-net reachability analysis tool PROD [12] implements several methods for efficient reachability analysis; PAPETRI [1] constructs reachability and coverability graphs for place/transition nets, colored nets, and algebraic Petri nets; CPN/AMI [5], DESIGN/CPN [10], and INA (Integrated Net Analyzer) [11] compute the coverability graph (for place/transition nets (P/T) and colored Petri nets (CPN) with time and priorities) using the algorithm of Karp and Miller. In the case where the net is bounded, the coverability graph corresponds to the usual reachability graph. In [4], Finkel considers reachability graphs and coverability graphs as special cases of a more general structure, the so-called ω-state graphs; among all these state graphs, there exists a unique one which is minimal with respect to the number of nodes. In our case, we restrict ourselves to studying the boundedness property for ECATNets. Our objective in this paper is not the construction of the coverability graph; we only decide, through a proposed algorithm, whether an ECATNet is bounded or not. In this paper, we propose an algorithm and its rewriting logic based tool to check the boundedness property for ECATNets. First, we study unbounded places in ECATNets by giving some propositions and their proofs. Then, we extract conditions for the unboundedness of ECATNets. The algorithm computes the (finite) accessibility graph and checks at the same time the conditions for unbounded places in an ECATNet. If one of the unboundedness conditions is true, the algorithm stops computing and returns that the ECATNet is not bounded. Otherwise, if the algorithm finishes the construction of the accessibility graph, then it returns that the ECATNet is bounded. After that, we present a tool based on Maude that implements this algorithm. Such a tool allows us to know the applicability of the accessibility analysis and the Model Checking of Maude to ECATNets.
The development of this tool is not very complicated thanks to the reflectivity of the Maude language. Indeed, the self-interpretation of this logic allows us both to model an ECATNet and to act on it. The remainder of this paper is organized as follows: Section 2 is a general presentation of ECATNets and their description in rewriting logic. Some properties of ECATNets, including a study of the unbounded-places case, are presented in Section 3. Section 4 contains our proposed algorithm, which detects whether an ECATNet is bounded or not. In Section 5, we briefly introduce the concept of meta-computation in Maude. In Section 6, we describe our algorithm's implementation in the Maude system. In Section 7, we give an example of an ECATNet and its description in the Maude meta-level representation. Section 8 contains the application of the tool to the example. Finally, Section 9 concludes the paper.

II. ECATNETS

ECATNets [2] are a kind of net/data model combining the strengths of Petri nets with those of abstract data types. Places are marked with multi-sets of algebraic terms. The input arcs of each transition t, i.e. (p, t), are labeled by two inscriptions IC(p, t) (Input Conditions) and DT(p, t) (Destroyed Tokens); the output arcs of each transition t, i.e. (t, p'), are labeled by CT(t, p') (Created Tokens); and finally each transition t is labeled by TC(t) (Transition Conditions) (see Fig. 1). IC(p, t) specifies the enabling condition of the transition t, DT(p, t) specifies the tokens (a multi-set) which have to be removed from p when t is fired, and CT(t, p') specifies the tokens which have to be added to p' when t is fired. Finally, TC(t) represents a boolean term which specifies an additional enabling condition for the transition t. The current ECATNet state is given by the union of terms having the form (p, M(p)). As an example, the distributed state s of a net having one place p is given by the following multi-set: s = (p, a ⊕ b ⊕ c).
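The (p, M(p)) representation above lends itself to a small executable sketch. Below, multi-sets of algebraic terms are modeled with Python's `collections.Counter`; the helper names are ours, not the paper's:

```python
from collections import Counter

# A marking maps each place to a multi-set of algebraic terms,
# represented here as a Counter (term -> multiplicity).
def make_marking(**places):
    return {p: Counter(ts) for p, ts in places.items()}

# Multi-set inclusion m1 <= m2, the basic test behind IC(p, t) checks.
def mset_included(m1, m2):
    return all(m2[t] >= n for t, n in m1.items())

# The distributed state s = (p, a + b + c) from the text:
s = make_marking(p=["a", "b", "c"])
assert mset_included(Counter(["a", "b"]), s["p"])      # a + b occurs in M(p)
assert not mset_included(Counter(["a", "a"]), s["p"])  # a + a does not
```

Counter's built-in operators (`+` for union, `-` for subtraction, `&` for intersection) line up with the multi-set operations used in the proofs of Section III.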
Fig. 1. A generic ECATNet

A transition t is enabled when various conditions are simultaneously true. The first condition is that IC(p, t) is satisfied for each input place p. The second condition is that TC(t) is true. Finally, the addition of CT(t, p') to each output place p' must not result in p' exceeding its capacity when this capacity is finite. When t is fired, DT(p, t) is removed (positive case) from the input place p and simultaneously CT(t, p') is added to the output place p'. Let's note that in the non-positive case, we remove the elements common to DT(p, t) and M(p). Transition firing and its conditions are formally expressed by rewrite rules. A rewrite rule is a structure of the form "t : u → v if boolexp", where u and v are respectively the left- and right-hand sides of the rule, t is the transition associated with this rule, and boolexp is a boolean term. Precisely, u and v are multi-sets of pairs of the form (p, [m]_⊕), where p is a place of the net, [m]_⊕ a multi-set of algebraic terms, and ⊕ the multi-set union on these terms, the terms being considered as singletons. The multi-set union on the pairs (p, [m]_⊕) will be denoted by ⊗. [x]_⊕ denotes the equivalence class of x w.r.t. the ACI (Associativity, Commutativity, Identity = φ_M) axioms for ⊕. An ECATNet state is itself represented by a multi-set of such pairs, in which a place p appears whenever its marking is not empty. Now, we recall the forms of the rewrite rules (i.e., the meta-rules) to associate with the transitions of a given ECATNet.

IC(p, t) is of the form [m]

Case 1. IC(p, t) = DT(p, t)

The form of the rule is then given by:

t : (p, [IC(p, t)]_⊕) → (p', [CT(t, p')]_⊕)

where t is the involved transition, p its input place, and p' its output place.

Case 2. IC(p, t) ⊆ DT(p, t)

This situation corresponds to checking that IC(p, t) is included in M(p) and, in the positive case, removing DT(p, t) from M(p).
In the case where DT(p, t) is not included in M(p), we have to remove the elements which are common to these two multi-sets. The form of the rule is given by:

t : (p, [IC(p, t)]_⊕) ⊗ (p, [DT(p, t)]_⊕ ∩ [M(p)]_⊕) → (p, [IC(p, t)]_⊕) ⊗ (p', [CT(t, p')]_⊕)

Case 3. IC(p, t) ∩ DT(p, t) ≠ φ_M

This situation corresponds to the most general case. It may however be solved in an elegant way by remarking that it can be brought back to the two cases already treated. This is achieved by replacing the transition falling into this case by two transitions which, when fired concurrently, give the same global effect as our transition. In reality, this replacement shows how ECATNets allow specifying a given situation at two levels of abstraction. The forms of the axioms associated with the extensions are, w.r.t. the explanation already given, evident and thus not commented.

IC(p, t) is of the form ~[m]

The form of the rule is given by:

t : (p, [DT(p, t)]_⊕ ∩ [M(p)]_⊕) → (p', [CT(t, p')]_⊕) if ([IC(p, t)]_⊕ ∩ [M(p)]_⊕) = φ_M

IC(p, t) = empty

The form of the rule is given by:

t : (p, [DT(p, t)]_⊕ ∩ [M(p)]_⊕) → (p', [CT(t, p')]_⊕) if [M(p)]_⊕ → φ_M

When the place capacity C(p') is finite, the conditional part of the rewrite rule will include the following component:

([CT(t, p')]_⊕ ⊕ [M(p')]_⊕) ∩ [C(p')]_⊕ → [CT(t, p')]_⊕ ⊕ [M(p')]_⊕   (Cap)

In the case where there is a transition condition TC(t), the conditional part of our rewrite rule must contain the component TC(t) → [true].

III. STUDY OF UNBOUNDED PLACES

The development of an algorithm that detects cases of unbounded places in an ECATNet is a delicate problem. A place is unbounded when the number of algebraic terms in this place increases infinitely. The study of the unbounded-place case comes back to studying the monotony property.
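Monotony is a statement about how firing transforms markings, so it helps to have the Case 2 firing discipline (check IC(p, t) ⊆ M(p), remove DT(p, t) ∩ M(p), add CT(t, p')) in executable form. This is a hedged sketch with Counters standing in for multi-sets; the function name is ours, not the paper's:

```python
from collections import Counter

def mset_included(m1, m2):
    return all(m2[t] >= n for t, n in m1.items())

def fire_case2(marking, p, p2, IC, DT, CT):
    """Fire a transition t with input place p and output place p2.
    Enabled iff IC is included in M(p); removes DT /\\ M(p), adds CT."""
    if not mset_included(IC, marking[p]):
        return None                      # transition not enabled
    removed = DT & marking[p]            # elements common to DT and M(p)
    new = {q: Counter(m) for q, m in marking.items()}
    new[p] = new[p] - removed
    new[p2] = new[p2] + CT
    return new

m = {"p": Counter(["a", "a", "b"]), "q": Counter()}
m2 = fire_case2(m, "p", "q",
                IC=Counter(["a"]), DT=Counter(["a", "c"]), CT=Counter(["d"]))
assert m2["p"] == Counter(["a", "b"])    # only the common 'a' was removed
assert m2["q"] == Counter(["d"])
```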
In an ECATNet, this property depends strongly on the assignments of the algebraic term variables that label the arcs joining places and transitions. We separate three cases of ECATNets: simple ECATNets (without transition conditions and with places of infinite capacity), ECATNets with transition conditions and places of infinite capacity, and ECATNets with places of finite capacity. In the first two cases the monotony is preserved; with finite place capacities, the monotony may be respected or not. We focus in our study on the case where IC(p, t) is of the form [m], and we exclude the two other cases (IC(p, t) of the form ~[m], and IC(p, t) = empty).

A. ECATNet's Places with Infinite Capacity

We give in the following some propositions and their proofs. These propositions concern the first category (simple ECATNets, without transition conditions TC(t)).

Proposition 1. Let M, M' be two markings and S a sequence of transitions. If M →S M_1 and M ⊆ M', then S is also enabled at M': M' →S M_1'.

Proof 1. We proceed by recurrence. For S = t (one transition), if t is enabled at M, then we have:

∀p ∈ P : IC(p, t) ⊆ M(p) and DT(p, t) ⊆ M(p)

∀p ∈ P : if M(p) ⊆ M'(p), then IC(p, t) ⊆ M'(p) and DT(p, t) ⊆ M'(p)

Consequently, t is enabled at M'. Let's assume that this property is verified for S = t_1 t_2 ... t_k, and let us prove that it holds for S = t_1 t_2 ... t_k t_{k+1}. We have:

M →t_1 M_1 →t_2 M_2 →t_3 ... →t_k M_k

By supposition, M ⊆ M', so M' →t_1 M_1' →t_2 M_2' →t_3 ... →t_k M_k'. Now, is t_{k+1} enabled at M_k'?
We have M →t_1 M_1 →t_2 M_2 → ... →t_k M_k. ∀p ∈ P:

M_1(p) = (M(p) \ DT(p, t_1)) ⊕ CT(p, t_1)   (1)

if IC(p, t_1) ⊆ M(p) and DT(p, t_1) ⊆ M(p), where \ and ⊕ are the subtraction and the union of multi-sets. While DT(p, t_1) ⊆ M(p), we can write, without risk in multi-sets:

M_2(p) = (M_1(p) \ DT(p, t_2)) ⊕ CT(p, t_2)   (2)

and so on until:

M_k(p) = (M_{k-1}(p) \ DT(p, t_k)) ⊕ CT(p, t_k)   (3)

Unrolling this recurrence, we can write:

M_k(p) = (M(p) ⊕ (⊕_{i=1..k} CT(p, t_i))) \ (⊕_{i=1..k} DT(p, t_i))   (4)

Moreover, we have also:

M_k'(p) = (M'(p) ⊕ (⊕_{i=1..k} CT(p, t_i))) \ (⊕_{i=1..k} DT(p, t_i))   (5)

If M(p) ⊆ M'(p), then:

M(p) ⊕ (⊕_{i=1..k} CT(p, t_i)) ⊆ M'(p) ⊕ (⊕_{i=1..k} CT(p, t_i))   (6)

and then:

(M(p) ⊕ (⊕_{i=1..k} CT(p, t_i))) \ (⊕_{i=1..k} DT(p, t_i)) ⊆ (M'(p) ⊕ (⊕_{i=1..k} CT(p, t_i))) \ (⊕_{i=1..k} DT(p, t_i))   (7)

because:

⊕_{i=1..k} DT(p, t_i) ⊆ M(p) ⊕ (⊕_{i=1..k} CT(p, t_i))   (8)

That is to say:

M_k(p) ⊆ M_k'(p)   (9)

so if t_{k+1} is enabled at M_k, then it is enabled at M_k'. Hence:

if M →S M_k and M ⊆ M', then M' →S M_k'   (10)

It means that the property of monotony is verified.

Proposition 2. If M →S M_1 and M ⊆ M_1, then every place p such that M(p) ⊂ M_1(p) is an unbounded place.

Proof 2.
We have in this case:

M →S M_1 →S M_2 →S ... →S M_k

where k tends toward infinity, with M ⊆ M_1 ⊆ M_2 ⊆ ... ⊆ M_{k-1} ⊆ M_k. For p ∈ P, if M(p) ⊂ M_1(p), then there exists m(p) ≠ φ_M such that:

M_1(p) = m(p) ⊕ M(p)   (11)

where m(p) is a non-empty multi-set. On the other hand, denoting by t_1, ..., t_n the transitions of S:

M_2(p) = (M_1(p) ⊕ (⊕_{i=1..n} CT(p, t_i))) \ (⊕_{i=1..n} DT(p, t_i))   (12)

consequently:

M_2(p) = (m(p) ⊕ (M(p) ⊕ (⊕_{i=1..n} CT(p, t_i)))) \ (⊕_{i=1..n} DT(p, t_i))   (13)

without risk, we write:

M_2(p) = m(p) ⊕ ((M(p) ⊕ (⊕_{i=1..n} CT(p, t_i))) \ (⊕_{i=1..n} DT(p, t_i)))   (14)

this means that:

M_2(p) = m(p) ⊕ M_1(p)   (15)

or:

M_2(p) = m(p) ⊕ m(p) ⊕ M(p)   (16)

we have:

m(p) ⊆ m(p) ⊕ m(p)   (17)

and then:

M(p) ⊆ M_1(p) ⊆ M_2(p)   (18)

by recurrence, we will have:

M(p) ⊆ M_1(p) ⊆ M_2(p) ⊆ ... ⊆ M_k(p)   (19)

M_k(p) = m(p) ⊕ ... ⊕ m(p) ⊕ M(p)  (m(p) repeated k times)   (20)

That is to say, if k tends toward infinity, then the number of algebraic terms in the place p increases toward infinity.

Open Science Index, Mathematical and Computational Sciences Vol:1, No:8, 2007 waset.org/Publication/10019

Interpretation 2. For one transition t, we have:

∀p ∈ °t : M'(p) = M(p) \ DT(p, t), if IC(p, t) ⊆ M(p)   (21)

∀p ∈ t° : M'(p) = M(p) ⊕ CT(t, p)   (22)

For p ∈ t°, M(p) ⊆ M'(p) always holds, since:

M'(p) = M(p) ⊕ CT(t, p)   (23)

For p ∈ °t, M(p) ⊆ M'(p) requires M(p) ⊆ M(p) \ DT(p, t). This is possible only in two cases:
- DT(p, t) = φ_M : we enable the transition without withdrawing tokens;
- the input place is also an output place, with DT(p, t) ⊆ CT(t, p) : we add more algebraic terms than we have just withdrawn.

C. Presence of Transition Conditions

The presence of a condition for a transition does not pose any problem with regard to the preservation of monotony. A transition condition is true if the values which make it true are inside the input places of the transition. By increasing the multi-sets of terms in these places, these values still exist and the condition remains true. That is to say, if a transition is enabled from a marking M, it is also enabled from any M' such that M ⊆ M'.

D. ECATNet's Places with Finite Capacity

We distinguish two cases. We discuss them in the following propositions:

Proposition 3. If M_1 →S M_2 and M_1 ⊆ M_2 and, for every finite place p, M_1(p) = M_2(p), then every infinite place p' such that M_1(p') ⊂ M_2(p') is an unbounded place.

Proof 3. For simplicity, we only take into consideration S = t (one transition). Let's consider that P = IP ∪ FP (IP: places with infinite capacities, FP: places with finite capacities). If t is enabled at M_1, then we have:

∀p ∈ P : IC(p, t) ⊆ M_1(p) and DT(p, t) ⊆ M_1(p)

∀p ∈ P : if M_1(p) ⊆ M_2(p), then IC(p, t) ⊆ M_2(p) and DT(p, t) ⊆ M_2(p)

On the other hand, ∀p ∈ FP : M_1(p) = M_2(p), since t doesn't change the marking in finite places: either t deletes tokens from a finite place and puts the same tokens back in this place, or t is simply independent of this place. Then t is also enabled at M_2:

M_2(p) = (M_1(p) \ DT(p, t)) ⊕ CT(p, t)

Because M_1(p) ⊆ (M_1(p) \ DT(p, t)) ⊕ CT(p, t), DT(p, t) is a positive multi-set, and in this case we can write:

M_2(p) = M_1(p) ⊕ (CT(p, t) \ DT(p, t))

and we conclude that ∀p ∈ FP : CT(p, t) \ DT(p, t) = φ_M. We can continue in this way and we get:

M_3(p) = M_2(p) ⊕ (CT(p, t) \ DT(p, t))

M_3(p) = (M_1(p) ⊕ (CT(p, t) \ DT(p, t))) ⊕ (CT(p, t) \ DT(p, t))

M_3(p) = M_1(p) ⊕ (CT(p, t) \ DT(p, t))  (repeated 2 times)

t is always enabled at M_3. By recurrence we get:

M_k(p) = M_1(p) ⊕ (CT(p, t) \ DT(p, t))  (repeated k times)

We put CT(p, t) \ DT(p, t) = m(p), and:

M_k(p) = M_1(p) ⊕ m(p)  (repeated k times)

for a place p ∈ IP with M_1(p) ⊂ M_2(p) and m(p) ≠ φ_M. So, if k goes toward infinity, then the number of algebraic terms in the place p increases toward infinity.

Proposition 4. Suppose M →S M' with M ⊆ M', but the first transition of S is not enabled at M', and there exists S' such that M' →S' M'', where S' stops as soon as S becomes enabled again. If we have the following case:

M →S M' →S' M'' →S M'''

with M'''(p) ⊆ M'(p) for each place p with bounded capacity that caused S to be disabled at M', then every infinite place p' such that M'(p') ⊂ M'''(p') is an unbounded place.

Interpretation 4. The only reason that makes the first transition of S disabled is the overtaking of certain place capacities. S' lowers the number of algebraic terms in the places suffering from overflow. Thanks to it, S becomes enabled again, and we get M'''. We then get infinity in certain places with infinite capacities.

IV. AN ALGORITHM TO DETECT UNBOUNDED PLACES FOR ECATNETS

In this section, we present our algorithm for checking whether a given ECATNet is bounded or not. The basic idea of the algorithm consists in computing the (finite) accessibility graph of the ECATNet and checking at the same time the conditions of unbounded places. If one of the unboundedness conditions is true, the algorithm stops computing and returns that the ECATNet is not bounded. Otherwise, if the algorithm finishes the accessibility graph, then it returns that the ECATNet is bounded.
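The basic idea can be sketched for the simplified setting where all places have infinite capacity, so that detecting an unbounded place reduces to "some successor strictly covers a marking on its path from m0" (by Proposition 2, such a place is unbounded). The helper names mirror SubMarking/StSubMarking, but the Counter-based implementation is our own assumption, not the paper's Maude code:

```python
from collections import Counter

def included(m1, m2):                 # SubMarking(m1, m2)
    return all(m2.get(p, Counter())[t] >= n
               for p, ms in m1.items() for t, n in ms.items())

def strictly_included(m1, m2):        # StSubMarking(m1, m2)
    return included(m1, m2) and not included(m2, m1)

def freeze(m):                        # hashable snapshot of a marking
    return tuple(sorted((p, tuple(sorted(ms.items()))) for p, ms in m.items()))

def boundedness(m0, fire_all):
    """fire_all(m) yields the successor markings of m. Returns 'UnBounded'
    as soon as a successor strictly covers an ancestor on its path; returns
    'Bounded' if the (finite) accessibility graph is exhausted."""
    frontier = [(m0, [m0])]           # (marking, path from m0 to it)
    seen = {freeze(m0)}
    while frontier:
        m, path = frontier.pop()
        for succ in fire_all(m):
            if any(strictly_included(anc, succ) for anc in path):
                return "UnBounded"
            if freeze(succ) not in seen:
                seen.add(freeze(succ))
                frontier.append((succ, path + [succ]))
    return "Bounded"

# A net whose only transition keeps adding a token 'a' to p is unbounded:
grow = lambda m: [{"p": m["p"] + Counter(["a"])}]
assert boundedness({"p": Counter()}, grow) == "UnBounded"
# A net with no enabled transition is trivially bounded:
assert boundedness({"p": Counter(["a"])}, lambda m: []) == "Bounded"
```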
First, we define some functions that will be used in the framework of our algorithm:
- SubMarking(m, m'): returns true if m is included in m'.
- StSubMarking(m, m'): returns true if m is a sub-marking of m' and m ≠ m' (m is strictly a sub-marking of m').
- ReachableMarking(m, l): returns the marking that results from firing the rewriting rule l at m. If ReachableMarking(m, l) = φ, then the rule l is not enabled at m.
- GetSubMarkingConcerningPlace(m, p): gives the sub-marking of m concerning the place p.

In the following algorithm, we consider (C1) and (C2) to be the conditions for the presence of unbounded places in an ECATNet: (C1) is the condition following the if, and (C2) is the condition following the or.

Algorithm: Boundedness Property Decision for ECATNets

Input: N: ECATNet without inhibitor arcs; P: set of places of N, P = FP ∪ IP; FP: set of finite places; IP: set of infinite places; FP ∩ IP = φ; L: set of transitions (rewriting rules) of N; m0: the initial marking.
Output: decision whether the ECATNet is bounded or not.
Method:
var Decision : (Bounded, UnBounded) := Bounded;
1. The root is labeled by the initial marking m0.
2. A marking m doesn't have a successor if and only if:
- for each rewriting rule l, ReachableMarking(m, l) = φ; or
- there exists, on the path from m0 to m, another marking m' = m.
3.
If the two conditions are not verified, let m'' be the marking such that m →l m''; then for (every rule l where ReachableMarking(m, l) ≠ φ) & (Decision = Bounded) do
if (∃ m' : a marking on the path from m0 to m
& m' is a sub-marking of ReachableMarking(m, l)
& ∀p ∈ FP : GetSubMarkingConcerningPlace(ReachableMarking(m, l), p) = GetSubMarkingConcerningPlace(m', p))   (C1)
or (∃ m' : a marking on the path from m0 to m
& m' is strictly a sub-marking of ReachableMarking(m, l)
& ∃ l1, l2 ∈ L, l1 ≠ l2, on the path from m' to m, such that ∃ m1, m2, two markings on the path from m' to m, with m' →l1 m1, m1 →l2 m2, and m2 →l2 ReachableMarking(m, l)
& SubMarking(m', m1) = true
& ReachableMarking(m1, l1) = φ
& ∀p ∈ FP : GetSubMarkingConcerningPlace(m2, p) is a sub-marking of GetSubMarkingConcerningPlace(m', p))   (C2)
then Decision := UnBounded;
else m'' := ReachableMarking(m, l);
return (Decision);

V. META-LEVEL COMPUTATION IN MAUDE

Maude provides a platform that eases the implementation of an ECATNets tool. Meta-level description is one of the services provided by Maude: it permits describing a module at the meta-level, so that this module becomes an input to another module. We will use the meta-level representation in Maude to describe an ECATNet and act on it. The syntax of the meta-level representation differs from the ordinary representation in Maude. A term and a module at the meta-level are called meta-term and meta-module, respectively. A meta-term is a term of a generic sort called Term; a meta-module is a term of a generic sort called Module.
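In miniature, the idea of a module that is itself a term another program can compute with looks like this; the sketch below is our own illustration in Python (plain data standing in for a meta-module), not Maude's META-LEVEL API:

```python
from collections import Counter

# Reflection in miniature: the net itself is plain data (a "meta-module"),
# and a generic interpreter executes any such description.
net = {
    "rules": [("t1", {"a": 1}, {"b": 1})],   # (label, lhs, rhs) multi-sets
    "initial": {"a": 2},
}

def step(module, marking, label):
    """Interpret one rule of the meta-described module at a marking.
    Returns the rewritten marking, or None on failure."""
    for lab, lhs, rhs in module["rules"]:
        if lab == label and all(marking[t] >= n for t, n in lhs.items()):
            return marking - Counter(lhs) + Counter(rhs)
    return None

m = step(net, Counter(net["initial"]), "t1")
assert m == Counter({"a": 1, "b": 1})
```

The result-or-failure contract of `step` is the same shape as that of Maude's metaApply descent function described next.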
To manipulate a module in its meta-level representation, Maude provides a module called META-LEVEL. This module encapsulates some services called descent functions. A descent function performs reduction and rewriting of a meta-term, according to the equations and rules in the corresponding meta-module, at the meta-level.

Function metaApply. metaApply applies a rule of a system module to a term:

sorts ResultTriple ResultTriple? .
subsort ResultTriple < ResultTriple? .
op {_,_,_} : Term Type Substitution -> ResultTriple [ctor] .
op failure : -> ResultTriple? [ctor] .
op metaApply : Module Term Qid Substitution Nat -> ResultTriple? .

The first four parameters are the meta-level representations of a module, a term in this module, a rule label in the module, and a set of assignments (possibly empty) defining a partial substitution for the variables of this rule. The last parameter is a natural number: given N as fifth parameter, metaApply returns the (N+1)th solution of the rule application. In our application, we don't need any substitution, so we take the empty substitution none and 0 as the last parameter. For more details about the two last parameters, see [3]. This function returns a triple formed by the result term, the type of this term, and the substitution; otherwise, metaApply returns failure.

Function getTerm. We apply the getTerm function to the result of metaApply to extract only the resulting term:

op getTerm : ResultTriple -> Term .

VI. IMPLEMENTATION OF THE ALGORITHM IN MAUDE

In the framework of this work, we used as platform Maude in its version 2.0.1 under Windows XP. The development of this application is not very complicated thanks to the meta-computation concept in Maude. Many details are handled by the ECATNets description in Maude. For instance, a transition can have a condition that will be integrated in the rule.
The call of the metaApply function allows evaluating the condition and frees our application from dealing with this detail. Let's consider the textual version of ECATNets in Maude. First, we present a generic module that describes basic operations for ECATNets:

fmod GENERIC-ECATNET is
  sorts Place Marking GenericTerm .
  op mt : -> Marking .
  op < _ ; _ > : Place GenericTerm -> Marking .
endfm

As illustrated in this code, mt is the empty marking implementing φ_M. Respecting some syntactical constraints of the Maude language, we define the operation < _ ; _ >, which permits the construction of an elementary marking. The two underscores indicate the positions of the operation's parameters: the first parameter is a place, and the second one is an algebraic term (marking) in this place.

A. Reachability Graph Representation

A triple < T ; L ; T' > means that the rewriting of T (T for Term) by the rule L gives T'. This is equivalent to saying that firing the transition represented by the rule L at the marking represented by T gives the marking represented by T'. The accessibility graph will be represented by a list of triples of this kind.

B. Application's Functions

Many functions are developed in the functional programming paradigm to implement the above algorithm. In this section, we describe in detail how we realized some of these functions, and we also present some basic sorts. We define a sort BoundnessData containing two constants, Bounded and UnBounded. An element of the sort Decision is a couple of an element of ListOfTriple (the accessibility graph under construction) and an element of the sort BoundnessData. The operation 1st, applied to a pair of sort Decision, returns the first element of this pair, which is of sort ListOfTriple.
sorts BoundnessData Decision .
ops Bounded UnBounded : -> BoundnessData .
op _;_ : ListOfTriple BoundnessData -> Decision .
op 1st : Decision -> ListOfTriple .
eq 1st((LT ; Bd)) = LT .
op 2nd : Decision -> BoundnessData .
eq 2nd((LT ; Bd)) = Bd .

The function ReachableMarking(M, T, L) returns a successor marking of the whole term T obtained by applying L; otherwise, the function returns the empty marking mt. M is a module representing the ECATNet system:

op ReachableMarking : Module Term Qid -> Term .
eq ReachableMarking(M, T, L) =
  if metaApply(M, T, L, none, 0) == failure
  then mt
  else getTerm(metaApply(M, T, L, none, 0)) fi .

The function AccessibleMark(M, T, L), with T a term whose successor is computed by firing rule L, returns, in the success case, a triple of the form < T ; L ; RT > such that RT is the successor marking of T after firing L:

op AccessibleMark : Module Term Qid -> Triple .
eq AccessibleMark(M, T, L) = MediumAccessibleMark(M, T, GetSubTerms(T), L) .

This function calls the function MediumAccessibleMark(M, T, GetSubTerms(T), L), where GetSubTerms(T) returns a stack of all sub-terms of T. This is necessary because metaApply(M, T, L, none, 0) gives a result term if and only if the whole term T matches exactly the left-hand side of the rule L; when T is a strict super-term of that left-hand side, metaApply gives no result term. In this case, we proceed to the decomposition of the term by extracting all its sub-term components. Then, we apply the function ReachableMarking(M, top(S), L) to each sub-term top(S) for the rule L. The sub-term component that is a left-hand side of this rule is subtracted from its super-term.
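The decomposition trick (when the whole term does not match a rule's left-hand side exactly, find a matching sub-term, subtract it from the super-term, and add the rule's result back) can be sketched as follows; `reachable_marking` here is our simplified Counter stand-in, not the Maude descent function:

```python
from collections import Counter

def reachable_marking(lhs, rhs, term):
    """Succeeds only if the whole term equals the rule's left-hand side,
    mirroring metaApply's exact-match behaviour."""
    return rhs if term == lhs else None

def apply_to_subterm(lhs, rhs, term):
    """If lhs is a sub-multi-set of term, rewrite that component:
    result = (term \\ lhs) + rhs, as MediumAccessibleMark does."""
    if reachable_marking(lhs, rhs, term) is not None:
        return rhs                       # exact match of the whole term
    if all(term[t] >= n for t, n in lhs.items()):
        return term - lhs + rhs          # subtract the sub-term, add the result
    return None                          # rule not applicable

lhs, rhs = Counter(["a"]), Counter(["b"])
assert apply_to_subterm(lhs, rhs, Counter(["a"])) == Counter(["b"])
assert apply_to_subterm(lhs, rhs, Counter(["a", "c"])) == Counter(["b", "c"])
assert apply_to_subterm(lhs, rhs, Counter(["c"])) is None
```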
The result \( RT \) of the subtraction is added to the result term of ReachableMarking\((M, top(S), L)\):

```maude
op MediumAccessibleMark : Module Term Stack Qid -> Triple .
eq MediumAccessibleMark(M, T, S, L) =
  if S == emptystack then errorltt
  else if ReachableMarking(M, top(S), L) == mt
    then MediumAccessibleMark(M, T, pop(S), L)
    else if StackCompareEqTerms(T, top(S)) == true
      then if CapacityCheckingInPlaces(ReachableMarking(M, T, L)) == true
        then < T ; L ; ReachableMarking(M, T, L) >
        else MediumAccessibleMark(M, T, pop(S), L) fi
      else if CapacityCheckingInPlaces(TermAddition(TermSubstractionExt(T, top(S)),
                ReachableMarking(M, top(S), L))) == true
        then < T ; L ; TermAddition(TermSubstractionExt(T, top(S)),
                ReachableMarking(M, top(S), L)) >
        else MediumAccessibleMark(M, T, pop(S), L)
      fi fi fi fi .
```

International Scholarly and Scientific Research & Innovation 1(8) 2007 345

The main function of our application is Decidability-Detection\((M, T0)\). This function is the interface with the user:

```maude
op Decidability-Detection : Module Term -> BoundnessData .
eq Decidability-Detection(M, T0) =
  2nd(AllAccessibleMark(M, T0, T0, emptystack, emptystack, GetRulesName(M),
        T0, empty, IPs, FP, GetRulesName(M))) .
```

Decidability-Detection\((M, T0)\) calls and initializes the parameters of the function AllAccessibleMark\((M, T0, S, S1, S', LS, APath, LT, IP, FP, LS1)\). Note that FP is the set of places with finite capacity and S is a stack containing the marking terms to be treated. When we obtain the successor markings of top(S), we first put them in S1. If S becomes empty, we move on to deal with the markings in S1, so we put the content of S1 into S. S' is a stack containing the markings already dealt with; this is important in order to avoid recomputing the successor markings of those already treated. LS is a list initially containing all labels of the module's rules, and LS1 is a list that always contains all labels of the module's rules (GetRulesName(M)). APath is a stack of term lists, where each term list is a path: the first term of this list (path) is the initial marking and the last one is a marking waiting for its accessible markings to be computed. LT contains the coverability graph created so far; in the success branch, AllAccessibleMark extends the path stack and this graph with the newly created triple:

```maude
... PathSCreation(top(S), GetRTerms(AccessibleMark(M, top(S), head(LS))), APath),
    ConcatListOfTriple(AccessibleMark(M, top(S), head(LS)), LT), IP, FP, LS1 ...
```

We compute the successor markings of top(S) with each rule head(LS) in LS. Each time LS becomes empty, we refill it from LS1 to continue the accessible-marking computation for another marking. If S and S1 are both empty, there is no marking left whose successors must be computed; this marks the end of the computation.
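By way of illustration, a much-simplified Python analogue of this exploration is sketched below for an ordinary place/transition net: markings are Counters over places, the worklist plays the role of the stacks S and S1, the visited set plays the role of S', and a successor that strictly covers one of its ancestors on the current path signals unboundedness (the intuition behind the conditions checked by the algorithm). All names here are illustrative and are not part of the Maude application.

```python
from collections import Counter

def successors(marking, rules):
    """Yield (label, next_marking) for each enabled rule.
    A rule is a (label, consumed, produced) triple of Counters."""
    for label, consumed, produced in rules:
        if all(marking[p] >= n for p, n in consumed.items()):
            nxt = Counter(marking)
            nxt.subtract(consumed)   # remove consumed tokens
            nxt += produced          # add produced tokens (drops zero entries)
            yield label, nxt

def covers_strictly(m, ancestor):
    """True if m >= ancestor component-wise and m != ancestor."""
    return all(m[p] >= n for p, n in ancestor.items()) and m != ancestor

def boundness_check(initial, rules, limit=10000):
    """Return ('UnBounded', graph) or ('Bounded', graph), where graph is
    a list of triples (marking, label, successor marking)."""
    graph = []
    visited = set()
    worklist = [(initial, [initial])]   # (marking, path from the initial marking)
    while worklist and len(graph) < limit:
        marking, path = worklist.pop()
        key = frozenset(marking.items())
        if key in visited:              # already treated (role of S')
            continue
        visited.add(key)
        for label, nxt in successors(marking, rules):
            graph.append((dict(marking), label, dict(nxt)))
            # a strictly covering successor on its own path => unbounded place
            if any(covers_strictly(nxt, anc) for anc in path):
                return 'UnBounded', graph
            worklist.append((nxt, path + [nxt]))
    return 'Bounded', graph

# A net with a transition that only produces tokens is unbounded:
rules = [('E', Counter(), Counter({'Ta': 1}))]
verdict, g = boundness_check(Counter({'Ta': 1}), rules)
```

Running the sketch on the token-producing rule 'E immediately detects a marking that strictly covers its ancestor, mirroring the behavior expected of the tool on an unbounded net.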
If S is empty but S1 is not, we put the contents of S1 into S; this permits computing the accessible markings from those in S1. It is then also necessary to refill LS from LS1. In the case where S is not empty (i.e., we have markings whose successors must be computed), we need to verify some conditions before computing the successors of top(S). First, we check that LS is not empty and that top(S) does not occur in S'. Indeed, if LS is empty, this means that we have already computed all accessible markings of top(S), so we move it to S'; and if top(S) occurs in S', then top(S) has already been treated. If these conditions hold, we can proceed to compute the accessible markings of top(S) with the rule head(LS). For that, we call AccessibleMark(M, top(S), head(LS)). This function determines whether the rule head(LS) is enabled at the marking top(S). The term errorltt indicates that head(LS) is not enabled at top(S); in this case, we discard this rule and continue to see whether other rules (tail(LS)) are enabled at top(S). If head(LS) is enabled at top(S), we check whether the conditions for unbounded places hold. This is expressed by the condition \((T' =/= \text{nil})\) and \(\text{ConstantMarksInFPlaces}(M, T', \text{RT}, \text{head}(LS), \text{FP}) == \text{true}\). The first condition \((T' =/= \text{nil})\) implements condition \((\text{C1})\) of the algorithm and the second one implements \((\text{C2})\). If both conditions are true, we stop computing the reachability graph and return UnBounded. Note that 3rd(AccessibleMark(M, top(S), head(LS))) returns the third element of the triple (the result term of firing head(LS) at top(S)). TrListToStack(AccessibleMark(M, top(S), head(LS), IP), S1) puts in S1 the third element of the triple AccessibleMark(M, top(S), head(LS), IP), and DeleteDupInStackExt eliminates any duplication in the resulting stack.
PathSCreation(top(S), GetRTerms(AccessibleMark(M, top(S), head(LS), IP)), APath) puts the newly created accessible marking in its appropriate place in APath. Finally, ConcatListOfTriple(AccessibleMark(M, top(S), head(LS), IP), LT) adds the newly created triple AccessibleMark(M, top(S), head(LS), IP) to the existing coverability graph (LT).

VII. EXAMPLE

The subject of this section is the application of the proposed tool to a simple industrial case. This example is presented in [6] and described using the ECATNets formalism in [7]; we take this description with some modifications. The example presents an infinite-state real system.

A. Example Presentation

The example is about a production cell that manufactures forged metal pieces with the help of a press. This cell is composed of a table A that feeds the cell with raw pieces, a handling robot, a press, and a table B that stores the forged pieces. The robot includes two arms, disposed at right angles in the same horizontal plane, attached to the same rotation axis and without any possibility of vertical movement. Figure 2 represents the spatial disposition of the elements of the cell. The robot can seize a raw piece from table A and put it down in the press with arm 1. It can also seize a forged piece from the press and put it down on the storage table B with arm 2. In short, the robot can perform two rotation movements. The first allows it to pass from its initial position to its secondary one; this movement permits the robot to deposit a raw piece in the press and possibly a forged piece on the storage table B. The second allows it to pass from its secondary position back to its initial position and to continue the rotation cycle.

B. ECATNets Model of the Example

Figure 3 represents the ECATNets model of the production cell. The symbol \( \phi \) is used to denote the empty multi-set in arc inscriptions.
Please note that \( r \) denotes 'raw' and \( f \) denotes 'forged'. If the inscriptions IC(p, t) and DT(p, t) are equal, then we only present IC(p, t) on the arc (p, t). The rewriting rules of the system are presented in the following section directly in Maude.

ECATNets Places.
Ta : table A ; set, possibly empty, of raw pieces.
Tb : table B ; set, possibly empty, of forged pieces.
Ar1 : arm 1 of the robot ; at most one raw piece.
Ar2 : arm 2 of the robot ; at most one forged piece.
Pr : press ; at most one raw piece or one forged piece.
PosI : initial spatial position of the robot ; it is marked "ok" if it is the current position of the robot.
PosS : secondary spatial position of the robot ; it is marked "ok" if it is the current position of the robot.
EA : this place is added for testing whether the two arms of the robot are empty.

ECATNets Transitions.
T1 : taking of a raw piece by arm 1 of the robot.
T2 : taking of a forged piece by arm 2 of the robot.
D1 : deposit of a raw piece in the press.
D2 : deposit of a forged piece on table B.
TS1, TS2 : rotation of the robot from its initial position towards its secondary position.
T1 : rotation of the robot from its secondary position towards its initial position.
F : forging of the raw piece introduced in the press.
E : deposit of a raw piece on table A.
R : removal of forged pieces from table B.

C. Meta-level Representation of the Example in Maude

The user is not obliged to write the ECATNet in meta-representation. It can be written in the common mode and then transformed into its meta-representation with Maude's upModule function. Passing in the other direction is also possible, thanks to the function downModule. For clarity, we preferred to give the module describing the previous ECATNet directly in its meta-representation.
In the module META-LEVEL-ROBOT-ECATNET-SYSTEM, the META-ROBOT module is defined as a constant of type Module and its contents are described by means of an equation. 'GENERIC-ECATNET is the meta-level description of the module GENERIC-ECATNET presented previously. For simplicity, we only present some of the rewriting rules describing the robot behavior.

```maude
op META-ROBOT : -> Module .
eq META-ROBOT =
  (mod 'META-ROBOT is
    protecting 'GENERIC-ECATNET .
    protecting 'INT .
    sorts 'Cointype ; 'RPosType ; 'EmptyArmType .
    subsort 'Cointype < 'GenericTerm .
    subsort 'RPosType < 'GenericTerm .
    subsort 'EmptyArmType < 'GenericTerm .
    subsort 'Place < 'Marking .
    op 'ok : nil -> 'RPosType [ctor] .
    op 'Ear1 : nil -> 'EmptyArmType [ctor] .
    op 'Ear2 : nil -> 'EmptyArmType [ctor] .
    op 'forge : nil -> 'Cointype [ctor] .
    op 'Ta : nil -> 'Place [ctor] .
    op 'Tb : nil -> 'Place [ctor] .
    op 'Pr : nil -> 'Place [ctor] .
    op 'Ar1 : nil -> 'Place [ctor] .
    op 'PosS : nil -> 'Place [ctor] .
    op 'Ea : nil -> 'Place [ctor] .
    none
    none  *** no membership axioms, no equations
    rl 'm:Marking
       => '._['<_;_>['Ta.Place, 'raw.Cointype], 'm:Marking] [label('E)] .
    rl '._['<_;_>['PosS.Place, 'ok.RPosType], '<_;_>['Ar2.Place, 'forge.Cointype]]
       => '._['<_;_>['Ea.Place, 'Ear2.EmptyArmType], '<_;_>['PosS.Place, 'ok.RPosType]] [label('D2)] .
    ...
  endm) .
```

VIII. APPLICATION OF THE TOOL ON THE EXAMPLE

To apply the tool on the example, we call the main function of the application: Decidability-Detection(META-ROBOT, '._['<_;_>['PosS.Place, 'ok.RPosType], '<_;_>['Ar2.Place, 'forge.Cointype], '<_;_>['Ea.Place, 'Ear2.EmptyArmType]]). The example is an unbounded ECATNet: the transition 'E is clearly always enabled, so 'Ta is an unbounded place. Consequently, the application of the tool on this example returns UnBounded as result (figure 4).

Fig. 4 Application of the boundness property checker on the ECATNet example

IX. CONCLUSION

In this paper, we proposed an algorithm and its Maude-based tool to check the boundness property for ECATNets.
Such an algorithm is motivated by the fact that analysis techniques like the reachability analysis and Model Checking of Maude cannot deal with infinite-state models, including unbounded ECATNets. The tool informs us whether an ECATNet is bounded or not; from this, we can deduce whether the accessibility analysis and the Model Checking of Maude can be applied to it. The development of this tool is not very complicated, thanks to the reflectivity of the Maude language and the integration of the ECATNets formalism in this language.

REFERENCES
3LEE: A 3-Layer Effort Estimator for Software Projects

Amin Moradbeiky 1, Vahid Khatibi Bardsiri 2,†, and Mehdi Jafari 3
1,2 Department of Computer Engineering, Faculty of Science, Kerman Branch, Islamic Azad University, Kerman, Iran
3 Department of Electrical Engineering, Faculty of Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran

Managing software projects, due to their intangible nature, is full of challenges when predicting the effort needed for development. Accordingly, many studies have attempted to devise models to estimate the effort necessary to develop software. According to the literature, the accuracy of estimator models or methods can be improved by the correct application of data filtering or feature weighting techniques. Numerous models have also been proposed based on machine learning methods for data modeling. This study proposes a new model consisting of data filtering and feature weighting techniques that improve the estimation accuracy in the final step of data modeling. The proposed model consists of three layers. Tools and techniques in the first and second layers select the most effective features and weight them with the help of the Lightning Search Algorithm (LSA). By combining LSA and an artificial neural network in the third layer, an estimator model is developed from the first and second layers, significantly improving the final estimation accuracy. The upper layers of this model filter and analyze the data of the lower layers; this arrangement significantly increases the accuracy of the final estimation. Three datasets of real projects were used to evaluate the accuracy of the proposed model, and the results were compared, based on performance criteria, with those obtained from different methods, indicating that the proposed model effectively improves the estimation accuracy.
Article Info — Keywords: Development Effort Estimation, Lightning Search Algorithm, Neural Networks, Software Project. Article History: Received 2021-08-17; Accepted 2021-12-18.

I. INTRODUCTION

Software effort estimation is directly related to the development and success of a software project; consequently, this estimation is considered a major challenge for researchers and practitioners in the software industry. Estimation methods are divided into algorithmic and non-algorithmic categories [1]: the first is based on mathematical methods and the second on heuristic and metaheuristic methods. In both categories, a model is usually devised from a number of tools and algorithms proposed to estimate the software development effort. In each of these models, tools and algorithms are combined in a unique way to provide a precise estimate of the software development effort; some of these tools or algorithms are applied as intermediate tools to increase the accuracy of the method. The following studies support this claim for increasing the accuracy of neural networks:

- Neural networks are one of the most commonly adopted methods in AI; Elman's neural network is applied in [2] for software development effort estimation.
- Rankovic et al. [3] proposed four new models based on artificial neural networks and utilized five datasets to test them.
- Kumar et al. [4] applied deep learning with neural networks to software effort estimation.
- The dilation-erosion-linear perceptron, introduced in 2012, has been applied in many articles for prediction, but it is not sufficient when the input/output relation is complex. Araujo et al. [5] optimized the structure of this perceptron, using gradient descent in the learning process, and used it in software effort estimation.
- A combination of the satin bowerbird optimization algorithm (SBO) and the neuro-fuzzy system (ANFIS) is applied to increase the accuracy of software error prediction [6].
A number of researchers seek to increase the accuracy of the Analogy-Based Estimation (ABE) method through different tools, or use ABE as a tool to increase the accuracy of other tools:

- The ABE method has been commonly used for software effort estimation by researchers. The differential evolution (DE) algorithm is applied in the similarity function to weight the features, named Differential Evolution in Analogy-Based Estimation (DABE) [7], to improve the efficiency of this method.
- There exists no exact definition of project similarity. A similarity region is identified by [8] for feature selection in similar projects through the Case-Based Reasoning (CBR) concept.
- One of the algorithms combined with the ABE method is the genetic algorithm [9].
- The application of the particle swarm optimization (PSO) algorithm to increase ABE precision [10] and a hybrid model of PSO and the simulated annealing algorithm to improve ABE performance [11] have been proposed.

Some studies combine fuzzy-logic-based tools and techniques with other methods to improve performance and accuracy:

- The estimation model (EM) proposed by [12] divides the projects into categories with similar distribution parameters, then adopts the fuzzy method for estimation and applies the firefly algorithm for selection in the rule-based system.
- Parameters affecting the estimation are proposed by [13], where an attempt is made to increase precision through the fuzzy method.
- A combination of the algorithmic and non-algorithmic methods COCOMO and neuro-fuzzy is applied in [14], where the accuracy of the estimation is increased by sending the outputs of the neuro-fuzzy system to COCOMO II.
- Idri et al. assessed the effect of missing data (MD) techniques on ABE and fuzzy analogy [15].
- Usually, fuzzy logic is applied to the error prediction problem because it can operate with incomplete data; the main problem is the great volume of rules, which slows down the decision-making process. An attempt is made by [16] to reduce this volume by applying fuzzy controllers instead of fuzzy logic.
- Karimi and Gandomani used a combination of the differential evolution algorithm and a fuzzy neural network for software development effort estimation modeling [17].
- Chhabra and Singh optimized the design of a fuzzy model for software effort estimation using the particle swarm optimization algorithm [18].

Some researchers only use tools based on algorithmic methods:

- COCOMO, proposed by [19]; COCOMO II, proposed by [20]; SLIM, proposed by [21]; Function Point Analysis, proposed by [22]; and the Dotti model, proposed by [23].
- Regression-based methods such as linear regression methods [24, 25], non-linear regression methods [25], and tree regression methods [26, 27].

Artificial intelligence algorithms can improve the efficiency of formulated methods by searching for the appropriate configuration of these methods [28]. This approach has been followed in several articles:

- Updated K-modes clustering algorithms are applied in effort prediction. In the model proposed by [29], a Bayesian belief network is constructed from the COCOMO model, where the intervals are fuzzy numbers; then, the PSO algorithm and the genetic algorithm (GA) are combined to improve software effort estimation.
- Machine learning algorithms are commonly applied to the estimation problem. Two different types of Support Vector Machine (SVM) are applied by [30] to predict effort and compared with other methods such as neural networks and decision trees. Various feature selection methods have also been used for performance optimization of machine-learning-based methods [31].
- Meta-heuristic algorithms are commonly applied by researchers in many cases.
A hybrid meta-heuristic algorithm, consisting of the Cuckoo Optimization Algorithm (COA), Harmony Search [32], and the DE algorithm, is applied to optimize the COCOMO parameters [33] and to improve software effort estimation. Some studies emphasize identifying key project features and their relationship with the software development effort; there has been an emphasis on identifying interrelated features influencing the software development effort [34]. Features influencing effort have been identified by a neural network [35]. The PSO algorithm [36] and the Bayesian technique [37] have also been used to identify features influencing the software development effort.

A novel model is proposed in this study by analyzing the models previously presented in the literature. Data preparation tools have been proposed in some studies to improve the estimation accuracy. Some studies have emphasized that different project features affect the software development effort to different degrees, and attempts have been made to propose models that estimate the effort precisely by considering the project features and the effectiveness of each feature; this effectiveness has been defined as a coefficient in the literature. Data modeling by machine learning methods has also been performed in some studies. Accordingly, in the proposed model, various techniques and tools used separately in the literature for improving the estimation accuracy were combined into a model with separate layers. Each layer in this model increases the accuracy of the next layer; simply put, the output of each layer is the input to the next layer, improving the final performance of the proposed model.

Section II discusses the ABE method used for estimating the software development effort. Section III introduces the criteria for calculating the accuracy of the proposed model. Section IV introduces the proposed model. Section V discusses a cross-validation method for evaluating the stability of the proposed model's results.
Section VI introduces the three datasets of real projects used for testing the proposed model. Section VII introduces the techniques compared with the proposed model. Section VIII presents the test results of the proposed model. The model results are analyzed in Section IX.

### II. ANALOGY-BASED ESTIMATION METHOD (ABE)

Estimation methods are of two kinds: algorithmic and non-algorithmic. Because the former are not appropriate for the dynamic environment of software projects, the latter are applied in this context, ABE being one of the most applicable methods. The ABE method is adopted to estimate the unspecified value of a single feature (i.e., effort or cost) of a project. The steps of this method are described in the following sub-sections.

#### A. Similarity Function

The similarity of projects is determined through this function by studying features with known values. For this purpose, the Euclidean (Eq. 1) and Manhattan (Eq. 2) similarity measures are applied. The project features include both numerical and non-numerical groups. For numerical features, both methods compute the distance between the feature values of the projects; for non-numerical features, the level of difference is set at 0 or 1.
These methods differ in how the distance over numerical features is computed; the small constant \( \delta = 0.0001 \) avoids division by zero. The Euclidean similarity is

$$sim(p, p') = \frac{1}{\sqrt{\sum_{i=1}^{n} Dis(f_i, f_i') + \delta}} \quad (1)$$

$$Dis(f_i, f_i') = \begin{cases} (f_i - f_i')^2 & \text{if } f_i \text{ and } f_i' \text{ are numerical or ordinal} \\ 0 & \text{if } f_i \text{ and } f_i' \text{ are nominal and } f_i = f_i' \\ 1 & \text{if } f_i \text{ and } f_i' \text{ are nominal and } f_i \neq f_i' \end{cases}$$

and the Manhattan similarity is

$$sim(p, p') = \frac{1}{\sum_{i=1}^{n} Dis(f_i, f_i') + \delta} \quad (2)$$

$$Dis(f_i, f_i') = \begin{cases} |f_i - f_i'| & \text{if } f_i \text{ and } f_i' \text{ are numerical or ordinal} \\ 0 & \text{if } f_i \text{ and } f_i' \text{ are nominal and } f_i = f_i' \\ 1 & \text{if } f_i \text{ and } f_i' \text{ are nominal and } f_i \neq f_i' \end{cases}$$

#### B. Solution Function

This function estimates the effort of a project from the efforts of the \( K \) most similar projects.

<table>
<thead>
<tr> <th>Study</th> <th>Year</th> <th>Dataset</th> <th>Evaluation Method</th> <th>Method</th> <th>Ref No.</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>2019</td> <td>21 Project(1 Dataset)</td> <td>MMRE, Pred, MSE</td> <td>ANN</td> <td>[2]</td> </tr>
<tr> <td>2</td> <td>2021</td> <td>COCOMO, NASA, Kemerer</td> <td>MAE, Pred, MMRE</td> <td>ANN</td> <td>[3]</td> </tr>
<tr> <td>3</td> <td>2017</td> <td>ISBSG, Albrecht, Kemerer</td> <td>MMRE, Pred</td> <td>ANFIS</td> <td>[6]</td> </tr>
<tr> <td>4</td> <td>2007</td> <td>CF, DPS</td> <td>MMRE, Pred, MdMRE</td> <td>ABE</td> <td>[9]</td> </tr>
<tr> <td>5</td> <td>2012</td> <td>CF, DPS, ISBSG</td> <td>MMRE, Pred</td> <td>PSO, ABE</td> <td>[10]</td> </tr>
<tr> <td>6</td> <td>2019</td> <td>Desharnais, COCOMO</td> <td>MMRE, Pred</td> <td>Firefly algorithm</td> <td>[12]</td> </tr>
<tr> <td>7</td> <td>2019</td> <td>4 Project(1 Dataset)</td> <td>MMRE, VAF</td> <td>Fuzzy</td> <td>[13]</td> </tr>
<tr> <td>8</td> <td>2018</td> <td>COCOMO</td> <td>MMRE</td> <td>Neuro fuzzy</td> <td>[14]</td> </tr>
<tr>
<td>9</td> <td>2021</td> <td>Kemerer, Albrecht</td> <td>MMRE, Pred</td> <td>ANFIS</td> <td>[17]</td> </tr>
<tr> <td>10</td> <td>2020</td> <td>COCOMO</td> <td>MMRE, Pred</td> <td>PSO, Fuzzy</td> <td>[18]</td> </tr>
<tr> <td>11</td> <td>2016</td> <td>COCOMO</td> <td>MMRE</td> <td>Bayesian network</td> <td>[29]</td> </tr>
<tr> <td>12</td> <td>2018</td> <td>ISBSG</td> <td>MAE</td> <td>SVR</td> <td>[30]</td> </tr>
<tr> <td>13</td> <td>2017</td> <td>COCOMO</td> <td>MMRE</td> <td>Cuckoo Search</td> <td>[32]</td> </tr>
<tr> <td>14</td> <td>2018</td> <td>COCOMO</td> <td>MMRE, Pred, MAE</td> <td>DE</td> <td>[33]</td> </tr>
<tr> <td>15</td> <td>2020</td> <td>ISBSG, Desharnais</td> <td>MMRE, Pred</td> <td>ACO, ABE</td> <td>[38]</td> </tr>
</tbody> </table>

\[ C_p = \sum_{k=1}^{K} \frac{\text{Sim}(p, p_k)}{\sum_{i=1}^{K} \text{Sim}(p, p_i)} \, C_{p_k} \] (3)

Where \( p \) is the project whose effort is to be estimated, \( p_k \) is the \( k \)th of the \( K \) most similar projects, and \( C_{p_k} \) is the known value of the feature being estimated in the \( k \)th most similar project. C. The best value of \( K \) The value of \( K \) strongly affects estimation accuracy, and an appropriate value depends largely on the projects under study. If the projects differ considerably from one another, a large \( K \) reduces accuracy, because dissimilar projects then influence the final estimate. If the projects are very close to one another, a small \( K \) excludes similar projects whose presence would improve accuracy by reducing the noise during estimation. Consequently, no constant value of \( K \) is suitable, and \( K \) is better determined dynamically.
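As a concrete illustration (in Python; the paper's own experiments used MATLAB), the similarity and solution functions of Eqs. (1)-(3) can be sketched as follows. The `nominal_mask` argument, which marks which features are nominal, is an assumption introduced here for clarity.

```python
import math

DELTA = 0.0001  # small constant from Eqs. (1)-(2); keeps the denominator non-zero

def feature_distance(f, g, nominal):
    """Dis(f_i, f_i'): squared difference for numerical/ordinal features, 0/1 for nominal."""
    if nominal:
        return 0.0 if f == g else 1.0
    return (f - g) ** 2

def euclidean_sim(p, q, nominal_mask):
    """Eq. (1): Euclidean similarity between two projects (feature vectors)."""
    d = sum(feature_distance(a, b, m) for a, b, m in zip(p, q, nominal_mask))
    return 1.0 / math.sqrt(d + DELTA)

def manhattan_sim(p, q, nominal_mask):
    """Eq. (2): Manhattan similarity between two projects."""
    d = sum(feature_distance(a, b, m) for a, b, m in zip(p, q, nominal_mask))
    return 1.0 / (d + DELTA)

def abe_estimate(target, projects, efforts, nominal_mask, k=3, sim=euclidean_sim):
    """Eq. (3): similarity-weighted mean effort of the K most similar projects."""
    scored = sorted(((sim(target, p, nominal_mask), e)
                     for p, e in zip(projects, efforts)), reverse=True)[:k]
    total = sum(s for s, _ in scored)
    return sum((s / total) * e for s, e in scored)
```

With `k=1` this degenerates to copying the effort of the single most similar project; larger `k` blends the efforts of several analogues, weighted by similarity.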
III. EQUATIONS FOR ESTIMATION ERROR CALCULATION In this section, the equations used to evaluate the accuracy of the proposed model and to compare it with other methods are introduced. These equations are commonly used for accuracy evaluation by researchers in the field, and their results are displayed as diagrams for easier evaluation and comparison. The equations, presented in Eqs. (4)-(8), are the relative error (RE), magnitude of relative error (MRE), median magnitude of relative error (MdMRE), prediction percentage (PRED), and mean absolute error (MAE): \[ RE = \frac{\text{Estimate} - \text{Actual}}{\text{Actual}} \] (4) \[ MRE = \frac{|\text{Estimate} - \text{Actual}|}{\text{Actual}} \] (5) \[ MdMRE = \text{Median}(\text{MRE}) \] (6) \[ PRED(X) = \frac{A}{N} \] (7) \[ MAE = \frac{1}{N} \sum_{i=1}^{N} |\text{Estimate}_i - \text{Actual}_i| \] (8) In Eq. (7), \( A \) is the number of projects whose MRE is at most \( X \) and \( N \) is the total number of projects. IV. THE PROPOSED MODEL This paper presents a new model called the 3-Layer Effort Estimator (3LEE). 3LEE is composed of two sections: training and test. The training section consists of three layers, each responsible for refining the data and enhancing the precision of the estimation. The layer 1 model is shown in Fig. (1): the best features are selected by combining feature selection with the ABE method over several iterations. At each iteration, a subset of features is selected and the MdMRE error is calculated for that subset. The iterations continue until all candidate feature subsets have been examined. The result is the set of features with the strongest effect on software development effort estimation, which is passed as input to the next layer. Fig. 1. The flowchart of the training section model, layer 1 The layer 2 model is shown in Fig. (2); it undergoes training with the selected features as its input.
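The error criteria of Eqs. (4)-(8) introduced in Section III are straightforward to compute; a minimal Python sketch:

```python
import statistics

def re(actual, estimate):
    """Eq. (4): relative error (signed)."""
    return (estimate - actual) / actual

def mre(actual, estimate):
    """Eq. (5): magnitude of relative error."""
    return abs(estimate - actual) / actual

def mdmre(actuals, estimates):
    """Eq. (6): median MRE over all projects."""
    return statistics.median(mre(a, e) for a, e in zip(actuals, estimates))

def pred(actuals, estimates, x=0.25):
    """Eq. (7): fraction of projects whose MRE does not exceed x."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= x)
    return hits / len(actuals)

def mae(actuals, estimates):
    """Eq. (8): mean absolute error over all projects."""
    return sum(abs(e - a) for a, e in zip(actuals, estimates)) / len(actuals)
```

Lower MdMRE and MAE are better, while higher PRED is better, which is why the paper reports the two families of tables separately.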
This model is iterated many times by the LSA algorithm; at each iteration, the LSA algorithm suggests a configuration for ABE. The ABE method processes the projects and estimates them using the configuration suggested by the LSA algorithm. This process runs until the estimation error reaches a specific threshold or the iterations end. The result of this layer is the best configuration found for ABE, which is passed as input to layer 3. The second layer of the proposed model is thus a hybrid of the ABE method and the LSA algorithm. The ABE method searches for the projects most similar to the target project, estimating software development effort from feature matches, and uses the LSA algorithm to increase estimation accuracy: the LSA algorithm tries to propose the most appropriate configuration for the ABE method, helping it provide a more accurate estimate. The configuration proposed by the LSA algorithm differs according to a project's conditions and features. In addition, the first layer raises the second layer's accuracy by refining the data before it reaches the second layer; simply put, higher-quality data enters the second layer thanks to the first. The estimate obtained from the second layer is not the final estimate. In the third layer, a model is built on top of the first- and second-layer estimators, based on their inputs and outputs; this layer improves the final estimation accuracy. The layer 3 model is shown in Fig. (3); it predicts the expected estimation error from a project's data. To predict this error, an Artificial Neural Network (ANN) is applied, which receives as input the ABE configuration obtained in layer 2 and the best features obtained in layer 1. This layer's model is iterated by the LSA algorithm, where at each iteration the LSA algorithm proposes values for the biases (b) and weights (w) of the ANN.
Here, the ANN predicts the estimation error for each project while ABE estimates the effort. The resulting values are combined in Eq. (9) for the final estimate: \[ E_{\text{Final}}[i] = |E_{\text{ABE}}[i] - (\text{Error}[i] \times \text{Th})| \] (9) Where \(i\) indexes the projects, \(E_{\text{ABE}}\) is the estimate from the ABE method, Error is the error predicted by the ANN, and the coefficient \(Th\) controls how strongly the predicted error adjusts the estimate. The result of this equation is the final estimate. The third layer plays a very important role in the proposed model: it tries to provide a more accurate estimate of the project by building a model that receives the project characteristics and the estimated effort and produces the final estimate through Eq. (9). In other words, this layer tries to obtain a more accurate view of the project status, and the layers below it further strengthen its accuracy by refining the data they provide. After the final estimate for each project is computed, the result is evaluated with Eqs. (5)-(8); the estimation error is thus calculated for the settings suggested by the LSA algorithm. The obtained estimation error is returned to the LSA algorithm as feedback, and this process continues until the estimation error reaches a specific threshold or the LSA iterations end. This layer provides the best settings for predicting the estimation error through the ANN; these settings reduce the final estimation error. --- **Fig. 2.** The flowchart of the training section model, layer 2 The test section flowchart is shown in Fig. (4): the set of test projects is estimated using the ABE configuration proposed in layer 2, the features selected in layer 1, and the estimation error predicted by layer 3. The MdMRE and PRED values resulting from this stage are taken as the estimation errors. **V.
CROSS VALIDATION METHOD** Based on the proposed model, projects must be divided into training and test groups. How the projects are arranged during this division affects the accuracy of the proposed model [39]. To improve the stability of models, different cross-validation methods, such as 3-fold and 10-fold, can be used; each provides a specific arrangement of the projects. According to [39], leave-one-out (LOO) is the best method for evaluation, as its accuracy is independent of the arrangement of projects. In this paper, the LOO method is adopted. VI. INTRODUCING DATASETS In the testing stage of the proposed model, datasets of real projects are used. These datasets have been applied by many researchers; their statistics are tabulated in Table II. The Desharnais dataset consists of 81 real software projects collected in Canadian software houses. Its projects are described by 11 features: one dependent feature named 'Cost' and ten independent features named 'TeamExp', 'ManagerExp', 'YearEnd', 'Duration', 'Transactions', 'Entities', 'AdjFP', 'AdjFactor', 'RawFP', and 'Dev.Env'. Only 77 projects of this dataset are used in the tests, because the other 4 projects contain defective data. The Maxwell dataset contains data on 62 real software projects, with one dependent feature called 'effort' and 25 independent features indexed from 1 to 25.
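The leave-one-out evaluation described in Section V can be sketched generically; `estimator` is a placeholder for any estimation method (e.g., ABE) and is an assumption of this sketch, not an interface defined by the paper:

```python
def leave_one_out(projects, efforts, estimator):
    """LOO: each project is estimated exactly once, using all other projects
    as the training set, so the result does not depend on any ordering."""
    estimates = []
    for i, p in enumerate(projects):
        train_p = projects[:i] + projects[i + 1:]  # everything except project i
        train_e = efforts[:i] + efforts[i + 1:]
        estimates.append(estimator(p, train_p, train_e))
    return estimates
```

Because every project serves as the test case exactly once, the resulting MdMRE and PRED values are reproducible regardless of how the dataset happens to be ordered.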
VII. TECHNIQUES The proposed model is compared with the following methods to evaluate its accuracy: - Ordinary Least Squares (OLS): a method based on regression and the best-fitting regression line. - Robust Regression (ROR): ROR uses regression for estimation and applies weighting to increase estimation accuracy on unusual data [40]. - Multivariate Adaptive Regression Splines (MARS): a non-linear, non-parametric regression method with attractive properties such as ease of interpretation, the ability to model complex non-linear relationships, and fast output [41]. - Classification and Regression Tree (CART): one of the commonly used methods for data classification; CART builds a decision tree to classify the data [42]. - M5: the M5 method builds a tree-structured model for data estimation and computes a separate linear regression for each leaf of the tree [43]. - Multi-Layer Perceptron (MLP): neural networks are a non-linear modeling technique; the MLP-based network, applied by many researchers, consists of neurons organized in an input layer, one or more hidden layers, and an output layer [44]. - Case-Based Reasoning (CBR): the CBR operator searches for the samples most similar to the sample to be estimated, and the similarity of samples is calculated through this method.
In this method, \(K\) determines the number of most similar samples used for estimation [45]. VIII. TESTING DATASETS The objective of testing the model is to evaluate its precision. The tests are run on the introduced datasets, and the results are displayed and analyzed per dataset. The precision of the model in these tests is calculated and displayed using the criteria and equations introduced in Section III. MATLAB has been used for modeling, and the tests of the proposed model and all compared models are performed on a computer with a 4th-generation i5 CPU and 4 GB of RAM. In the configuration of the LSA algorithm, the initial population size and the maximum number of iterations are set to 50 and 150, respectively. The neural network adopted in this study is a feedforward network whose settings are suggested by the LSA algorithm: the configuration proposed by the LSA algorithm determines the bias vector, the weights, and the best number of hidden layers for the neural network. These settings were selected based on the results of multiple experiments. A. Desharnais dataset test The Desharnais dataset, whose specifications are presented in Section VI, is tested first. The MRE values obtained from running the proposed model are shown in Fig. (5). The MdMRE value for this test is 0.22 and the PRED value is 0.51. Fig. 5. MRE error frequency distribution with the 3LEE model for the Desharnais dataset The frequency distribution of the MRE error is graphed in Fig. (5), which shows the percentage of projects at each MRE value: the horizontal axis indicates the MRE value and the vertical axis the percentage of projects with that MRE value. As observed in Fig. (5), a high percentage of the errors fall below 0.5.
A steeper slope of this diagram in one area signifies a higher percentage of the error distribution within that range. As the graph moves toward larger errors, its slope decreases and eventually reaches zero, indicating that few projects have low estimation accuracy. B. COCOMO dataset test The specifications of this dataset are presented in Section VI. The MRE values obtained from running the proposed model are shown in Fig. (6). The MdMRE value for this test is 0.53 and the PRED value is 0.19. Table III. Comparison of the MdMRE values of the methods on the three datasets
<table> <thead> <tr> <th>Method</th> <th>desharnais</th> <th>maxwell</th> <th>COCOMO</th> </tr> </thead> <tbody>
<tr> <td>CART</td> <td>0.35</td> <td>0.45</td> <td>0.77</td> </tr>
<tr> <td>CBR K=1</td> <td>0.45</td> <td>0.59</td> <td>0.85</td> </tr>
<tr> <td>CBR K=2</td> <td>0.42</td> <td>0.55</td> <td>0.76</td> </tr>
<tr> <td>CBR K=3</td> <td>0.42</td> <td>0.44</td> <td>0.78</td> </tr>
<tr> <td>CBR K=4</td> <td>0.38</td> <td>0.52</td> <td>0.78</td> </tr>
<tr> <td>LSSVM</td> <td>0.41</td> <td>0.45</td> <td>1.33</td> </tr>
<tr> <td>M5</td> <td>0.39</td> <td>0.49</td> <td>0.71</td> </tr>
<tr> <td>MARS</td> <td>0.57</td> <td>0.48</td> <td>3.70</td> </tr>
<tr> <td>MLP</td> <td>0.54</td> <td>0.56</td> <td>0.87</td> </tr>
<tr> <td>OLS</td> <td>0.53</td> <td>0.48</td> <td>4.06</td> </tr>
<tr> <td>ROR</td> <td>0.49</td> <td>0.59</td> <td>0.98</td> </tr>
<tr> <td>PSO+ABE [10]</td> <td>0.40</td> <td>0.47</td> <td>0.75</td> </tr>
<tr> <td>ACO+ABE [38]</td> <td>0.36</td> <td>0.48</td> <td>0.75</td> </tr>
<tr> <td>RF [46]</td> <td>0.39</td> <td>0.32</td> <td>1.86</td> </tr>
<tr> <td>3LEE</td> <td>0.22</td> <td>0.24</td> <td>0.53</td> </tr>
</tbody> </table>
The frequency distribution of the MRE error for the COCOMO dataset is shown in Fig. (6), where the horizontal axis represents the MRE value and the vertical axis the percentage of projects with that MRE value. As observed here, a high percentage of the errors fall below 1.
This is also evident from Fig. (6): the steeper areas of the diagram contain small errors, and as the diagram moves toward larger errors its slope decreases and eventually reaches zero, indicating that few projects have low estimation accuracy. C. Maxwell dataset test The next test is run on the Maxwell dataset, whose specifications are given in Section VI. The MRE values obtained from running the proposed model are shown in Fig. (7). The MdMRE value for this test is 0.24 and the PRED value is 0.5. Table IV. Comparison of the PRED values of the methods on the three datasets
<table> <thead> <tr> <th>Method</th> <th>desharnais</th> <th>maxwell</th> <th>COCOMO</th> </tr> </thead> <tbody>
<tr> <td>CART</td> <td>0.27</td> <td>0.32</td> <td>0.12</td> </tr>
<tr> <td>CBR K=1</td> <td>0.25</td> <td>0.25</td> <td>0.07</td> </tr>
<tr> <td>CBR K=2</td> <td>0.29</td> <td>0.22</td> <td>0.17</td> </tr>
<tr> <td>CBR K=3</td> <td>0.29</td> <td>0.29</td> <td>0.07</td> </tr>
<tr> <td>CBR K=4</td> <td>0.31</td> <td>0.24</td> <td>0.06</td> </tr>
<tr> <td>LSSVM</td> <td>0.24</td> <td>0.29</td> <td>0.09</td> </tr>
<tr> <td>M5</td> <td>0.29</td> <td>0.22</td> <td>0.17</td> </tr>
<tr> <td>MARS</td> <td>0.23</td> <td>0.29</td> <td>0.07</td> </tr>
<tr> <td>MLP</td> <td>0.24</td> <td>0.20</td> <td>0.19</td> </tr>
<tr> <td>OLS</td> <td>0.27</td> <td>0.24</td> <td>0.12</td> </tr>
<tr> <td>ROR</td> <td>0.36</td> <td>0.29</td> <td>0.19</td> </tr>
<tr> <td>PSO+ABE [10]</td> <td>0.40</td> <td>0.29</td> <td>0.09</td> </tr>
<tr> <td>ACO+ABE [38]</td> <td>0.36</td> <td>0.32</td> <td>0.09</td> </tr>
<tr> <td>RF [46]</td> <td>0.36</td> <td>0.40</td> <td>0.12</td> </tr>
<tr> <td>3LEE</td> <td>0.51</td> <td>0.5</td> <td>0.19</td> </tr>
</tbody> </table>
The frequency distribution of the MRE error for the Maxwell dataset is drawn in Fig. (7), where the horizontal axis represents the MRE value and the vertical axis the percentage of projects with that MRE value.
As observed here, about 70% of the projects are estimated with an error of less than 0.5, and large errors have a small distribution. IX. ANALYSIS AND COMPARISON OF THE RESULTS To compare the accuracy of the proposed model with other methods, numerous tests are performed under the same test conditions as the proposed model. The results of these comparison tests are shown in Tables III and IV, where they can be compared directly. The obtained results indicate the high precision of the 3LEE model. Moreover, the PRED value of the 3LEE model reveals a high rate of precise estimations. This precision stems from separating the refining filters into their own layers, which in turn increases the quality of the data. In much estimation research, data refining methods are merged into the estimation process itself; combining refinement and estimation causes many problems for the model and reduces the precision of the results. In our early tests, refinement and estimation were likewise run simultaneously, yielding results that were hardly precise; once the stages were defined and implemented in separate layers, the precision of the results improved drastically. Another reason for the high precision is the prediction of the estimation error used in Eq. (9). This mechanism is very effective in normalizing and reducing the MRE of projects with a high estimation error: applying Eq. (9) in layer 3 leads to a considerable decline in the upper limit of the MRE, bringing it into an acceptable range. In testing the 3LEE model, identifying the proper ordering of the layers is one of the most important items examined. The proposed ordering was obtained by running different tests with the layers rearranged, and the results of these tests confirm the chosen sequence.
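The layer-3 correction of Eq. (9) reduces to a one-liner; in this sketch, `predicted_errors` stands in for the ANN output and `th` for the paper's Th coefficient (both names are assumptions of this illustration):

```python
def final_estimate(abe_estimates, predicted_errors, th):
    """Eq. (9): adjust each ABE estimate by the predicted error, scaled by th.
    The absolute value keeps the corrected estimate non-negative."""
    return [abs(e - err * th) for e, err in zip(abe_estimates, predicted_errors)]
```

For example, an ABE estimate of 100 with a predicted error of 20 and th = 0.5 is corrected to 90; with th = 0 the ABE estimate passes through unchanged.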
The MdMRE and PRED criteria of the proposed model are compared with the other methods in Figs. (8), (9), and (10), respectively. The results of this comparison for the COCOMO dataset are shown in Fig. (8): the MdMRE value of the proposed model is lower than that of all other methods, and its PRED value is greater than its MdMRE value, reflecting the high accuracy of its estimates. The results for the Maxwell dataset are shown in Fig. (10), where again the PRED value of the proposed model is greater than its MdMRE value, reflecting the high accuracy of its estimates; Fig. (10) also shows the difference in accuracy between this model and the other methods based on the PRED and MdMRE criteria. Another comparison is made using the MAE benchmark to better assess the accuracy of the proposed model. This criterion represents the mean estimation error over the projects; as observed in Fig. (11), the 3LEE model is more accurate than all its counterparts, and according to the MAE criterion its accuracy is about 70% higher. The other methods, even when close to this model on one dataset, are unable to repeat their estimation accuracy on the other datasets, which reflects the ability of this model to adapt to project conditions. To assess the overall performance of 3LEE, the Wilcoxon statistical test is executed to confirm the superiority of the model. The Wilcoxon test quantifies the difference between two data samples through the P-value parameter: two samples are considered statistically different when the P-value is less than 0.05. In this article, the P-value of each method is computed against the 3LEE method; the P-value of each assessed method in comparison to the 3LEE model is tabulated in Table V.
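For illustration, the paired Wilcoxon signed-rank comparison described above can be sketched in a self-contained form. This sketch uses the normal approximation without tie or zero corrections, so its p-values for small samples are only approximate; in practice a statistics library (e.g., SciPy's `scipy.stats.wilcoxon`) is preferable:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test on paired samples x and y
    (normal approximation; assumes at least one non-zero difference).
    Returns (W+, p-value), where W+ is the rank sum of positive differences."""
    d = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(d)
    # rank the absolute differences, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal approximation
    return w_plus, p
```

A p-value below 0.05 indicates that the two paired error samples differ significantly, which is the decision rule applied to Table V.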
The P-value of the Wilcoxon test for all methods on all datasets is less than 0.05, and the results of this test confirm the statistical significance of the model's improvement. TABLE V P-values obtained from the Wilcoxon test
<table> <thead> <tr> <th>method</th> <th>desharnais</th> <th>maxwell</th> <th>COCOMO</th> </tr> </thead> <tbody>
<tr> <td>CART</td> <td>0.0381</td> <td>0.0347</td> <td>0.0356</td> </tr>
<tr> <td>CBR K=1</td> <td>0.0215</td> <td>0.0012</td> <td>0.0015</td> </tr>
<tr> <td>CBR K=2</td> <td>0.0461</td> <td>0.0119</td> <td>0.0473</td> </tr>
<tr> <td>CBR K=3</td> <td>0.0283</td> <td>0.0303</td> <td>0.0125</td> </tr>
<tr> <td>CBR K=4</td> <td>0.0493</td> <td>0.0071</td> <td>0.0088</td> </tr>
<tr> <td>LS-SVM</td> <td>0.0434</td> <td>0.0138</td> <td>0.00076</td> </tr>
<tr> <td>M5</td> <td>0.026</td> <td>0.01</td> <td>0.0338</td> </tr>
<tr> <td>MARS</td> <td>0.0039</td> <td>4.80E-05</td> <td>4.38E-08</td> </tr>
<tr> <td>MLP</td> <td>0.00078</td> <td>0.0037</td> <td>0.0342</td> </tr>
<tr> <td>OLS</td> <td>0.0017</td> <td>0.0219</td> <td>3.51E-04</td> </tr>
<tr> <td>ROR</td> <td>0.0313</td> <td>0.00964</td> <td>0.0166</td> </tr>
<tr> <td>PSO+ABE [10]</td> <td>0.0391</td> <td>0.0383</td> <td>0.0368</td> </tr>
<tr> <td>ACO+ABE [38]</td> <td>0.041</td> <td>0.0389</td> <td>0.037</td> </tr>
<tr> <td>RF [46]</td> <td>0.048</td> <td>0.05</td> <td>2.3853e-005</td> </tr>
</tbody> </table>
Fig. 8. Comparing methods on the COCOMO dataset Fig. 9. Comparing methods on the Desharnais dataset X. CONCLUSION According to the literature, data processing methods, the identification of effective features, the identification of the types of relationships between project features and software project effort, and data modeling all increase estimation accuracy. Moreover, the correct application of heuristic algorithms for configuring methods and tools plays a key role in increasing efficiency.
A novelty of this study is the presentation of sub-models with the above objectives: identifying features and their effective relationships, precisely configuring data modeling techniques, and estimating with the LSA algorithm based on feature similarity. The other novelty is a model consisting of three layers, in which these sub-models are organized to improve their accuracy. The first layer of the proposed model acts on project features. In the second layer, the ABE method, an estimation method based on feature similarity, is configured using the LSA algorithm; the accuracy of the second layer is increased by using the first layer's analysis of the project features. Combining the neural network and LSA, an estimator model over the first and second layers is developed in the third layer, based on their outputs and inputs, to increase the final estimation accuracy. Testing each layer in isolation increased the estimation accuracy only slightly, but properly organizing all the layers together increased the final estimation accuracy significantly. Using the heuristic algorithm in this model improved the flexibility of the layers and their consistency with project conditions. The model was tested and its precise results were presented. The precision of the results suggests that many models presented by researchers so far could become more precise if redesigned along the lines presented here. In addition, a new method is applied to increase the accuracy of the estimation model: besides the data modeling that estimates the effort, a separate model is built to predict the model's error. This error modeling takes place in layer 3, and its result determines the final estimate; separate error modeling contributes to reducing the estimation error. References
Exhaustive Exploration of the Failure-oblivious Computing Search Space Thomas Durieux∗, Youssef Hamadi†, Zhongxing Yu∗, Benoit Baudry‡, Martin Monperrus‡ ∗University of Lille & Inria, France †Ecole Polytechnique, France ‡Royal Institute of Technology, Stockholm Abstract—High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there are very few studies that help understand why failure-oblivious techniques work. In order for failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. I. INTRODUCTION Dependable computing is founded on four main engineering activities [1]: fault prevention; fault removal, i.e. testing; fault forecasting; and fault tolerance. The latter, fault tolerance, aims at handling faults triggered in production. For instance, it is common to observe error pages on the Internet while ordering a laser pointer, registering to a conference, or installing a new blogging platform. Fault tolerance aims at preventing system crashes on occurrence of such errors. As such, it is complementary to testing, as it takes care of the faults that have not been discovered at testing time. Failure-oblivious computing is one of the approaches for fault tolerance [2–5].
To overcome software failures at runtime, failure-oblivious computing techniques modify the execution so that failures become less critical: instead of crashing the whole system, only the current task fails and the system remains available. For example, Rinard et al. [2] have proposed a failure-oblivious model consisting of skipping erroneous writes happening out of an array’s bounds. Another example is probabilistic memory safety in DieHard [5], where the execution modifications are controlled addition of blank memory padding. In this work, we address the following main limiting assumption of existing works on failure-oblivious computing: previous work in this area assumes that in response to a given failure, there exists one and only one execution modification to be applied. In practice, however, there can possibly exist multiple failure-oblivious decisions to respond to a failure. As an example, consider the null pointer exceptions in a high-level language such as Java. To overcome the same null pointer exception, one can imagine two different solutions: 1) one can skip the execution of the statement where a null value is about to be used, or 2) one can directly return to the caller method. We scientifically investigate this assumption in this paper. More specifically, we set up a conceptual failure-oblivious computing framework in which this assumption does not hold, i.e., several execution modifications exist within this framework to respond to a failure. Once we accept that there are multiple alternative execution modifications to the same failure, this naturally leads to the novel and promising concept of “search space of failure-oblivious computing”. The basic search space for a single failure is simply composed of all possible execution modifications. However, now consider the presence of multiple failures occurring in a row in a single execution, then the search space becomes the cartesian product of all possible decisions over each failure point. 
To exhaustively explore this search space, we propose an algorithm called FO-EXPLORE. We then set up an experiment to systematically study the failure-oblivious search space of 16 field failures of large-scale open-source Java software. We run those field failures in a virtual endless loop that simulates the same failure happening again and again. Each execution offers us a possibility to explore a new patch in the search space, where a patch is composed of a sequence of execution modifications. The outcome of the experiment is the exact topology, for a given failure-oblivious model, of the failure-oblivious computing search space of the considered failures. The experimental results show that the failure-oblivious search space is in general large and not easy to explore: this experiment represents more than 10 days of computation in a distributed scientific grid. For all the 16 considered field failures, we identify a total of 8460 different failure-oblivious execution modifications, which are composed of 1 to 8 execution modifications taken in a row. As an example, let us consider a field failure of Apache Commons Collections, an open-source Java project. This field failure, reported as issue #360, is a null pointer exception. For this failure, our proposed exploration algorithm FO-EXPLORE finds that there exist 45 different failure-oblivious execution modifications. To sum up, the contributions of this paper are: • The characterization of the concept of “search space of failure-oblivious computing”. • An algorithm to exhaustively explore this novel search space. • An implementation of the algorithm for exhaustively exploring the failure-oblivious computing search space for null pointer dereferences in Java. • The systematic empirical study of the failure-oblivious computing search space for 16 real null dereference failures, reported on a public issue tracker, on large and widely used Java libraries. The remainder of this paper is organized as follows.
Section II motivates the study of the failure-oblivious computing search space. Section III details our exploration algorithm. Section IV details the evaluation on 16 field null dereferences. Section V details the threats to validity of the contribution. Section VI presents the related work and Section VII concludes. This paper is a complete rewrite of an arXiv version [6]. II. MOTIVATING EXAMPLE Let us consider the example in Listing 1, which is an excerpt of server code that retrieves the last connection date of a user and prints the result to an HTML page. Method getLastConnectionDate first gets the user session, and then retrieves the last connection date from the session object. This code snippet can trigger two failures that can crash the request: 1) if the session does not exist, getUserSession returns null and a null pointer exception is thrown at line 3 (NPE1); and 2) for the first connection, getLastConnection returns null, and another null pointer exception can be thrown at line 6 (NPE2). Now let us consider the possible execution modifications to overcome the failures. To overcome NPE1 at line 3 in Listing 1, a failure-oblivious system could modify the execution state and flow in three ways: 1) it can create a new session object on the fly; 2) it can return an arbitrary Date object, such as the current date; or 3) it can return null. As the example suggests, there are multiple possible failure-oblivious strategies for the same failure. However, note that not all such modifications are equivalent. For instance, if modification #3 is applied, it triggers another failure, NPE2, whereas solutions #1 and #2 do not further break the system state. This indicates that not all state modifications are equivalent, some being invalid. In this paper, we define a conceptual framework and an exploration algorithm to reason about the presence of multiple competing execution modifications in response to a failure.
Listing 1: Code Excerpt with Two Potential Null Dereference Failures ``` 1 Date getLastConnectionDate() { 2   Session session = getUserSession(); 3   return session.getLastConnection(); // NPE1 4 } 5 ... 6 HTML.write(getLastConnectionDate().toString()); // NPE2 ``` III. EXPLORING THE SEARCH SPACE OF FAILURE-OBLIVIOUS COMPUTING We have shown that there is an implicit search space in failure-oblivious computing. In this section, we formally define this search space and devise an algorithm to explore it exhaustively. A. Basic Definitions Failure-oblivious computing [2] is the art of changing a program’s state or flow during execution such that a crashing failure does not happen anymore and the program is able to continue its execution; it is sometimes referred to as runtime repair [7], state repair [8] or self-healing software [9]. As the terminology of failure-oblivious computing can hardly be considered stable, we first define its core terms and concepts. Definition: A failure point is the location in the code where a failure is triggered. In the simplest case, a failure point is a statement at a given line. Definition: A failure-oblivious model defines a type of failure and the corresponding possible manners to overcome the failure. For instance, a failure-oblivious model that considers out-of-bounds writes in arrays can skip the erroneous write upon failure. Definition: A failure-oblivious strategy defines how a failure is handled. The traditional failure-oblivious literature assumes that there is one single way to be oblivious to a failure. However, there are types of failures for which there exist multiple failure-oblivious strategies. For example, upon an out-of-bounds read in an integer array, one can return either a constant or a value present elsewhere in the array. This makes two different strategies.
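To illustrate, the two read strategies can be sketched as follows (a minimal illustration of the idea; the method names are ours, not from any existing failure-oblivious implementation):

```java
// Sketch of a failure-oblivious model for out-of-bounds reads on int arrays,
// with the two strategies described in the text.
public class OobModel {

    // Strategy A: return a fixed constant instead of failing.
    static int readOrConstant(int[] a, int i, int constant) {
        return (i >= 0 && i < a.length) ? a[i] : constant;
    }

    // Strategy B: return a value present elsewhere in the array (here: the first).
    static int readOrReuse(int[] a, int i) {
        return (i >= 0 && i < a.length) ? a[i] : a[0];
    }

    public static void main(String[] args) {
        int[] a = {7, 8, 9};
        System.out.println(readOrConstant(a, 5, 0)); // out of bounds -> 0
        System.out.println(readOrReuse(a, 5));       // out of bounds -> 7
    }
}
```

Note that strategy B is context-dependent: any value already present in the array could be reused instead of the first one.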
Definition: A context-dependent failure-oblivious strategy is a strategy that can be instantiated in multiple ways depending on the execution context. For example, upon an out-of-bounds read in an integer array, one can return the first, the second, . . . , up to the n-th value present in the array, meaning that there are n ways to instantiate this strategy. Definition: A failure-oblivious decision is the application of a specific failure-oblivious strategy to handle a failure at a specific failure point. A failure-oblivious decision is an execution modification. B. The Failure-oblivious Computing Search Space Taking a failure-oblivious decision means exploring a new program state after the first failure. The execution proceeding from this new program state can result in a new failure, for which a failure-oblivious decision has to be taken as well. Definition: A failure-oblivious decision sequence is a sequence of failure-oblivious decisions that are taken in a row during one single execution because of cascading failures. In this paper, an execution consisting of a single failure-oblivious decision is called a unary decision sequence, and a composite (or n-ary) decision sequence contains at least 2 decisions. We use Listing 1 to give an example of an n-ary failure-oblivious decision sequence. At line 3, the execution of the method can be stopped by returning null to overcome the null pointer exception when session is null; later on in the execution, a new Date can be crafted to handle the second null pointer exception at line 6. In certain cases, a single decision in isolation may not be enough to overcome the failure; only the sequence as a whole is a solution. The notion of context-aware decision and decision sequence naturally defines a search space. Definition: The failure-oblivious computing search space of a failure-triggering input is defined by all possible decision sequences that can happen after the first failure. C.
FO-EXPLORE: An Algorithm to Explore the Failure-oblivious Computing Search Space After defining the failure-oblivious computing search space, we now present an algorithm to exhaustively explore it. The algorithm is named FO-EXPLORE and is shown in Algorithm 1. 1) Algorithm Input-Output: FO-EXPLORE requires four inputs, which we explain in detail below. Program P: a program into which failure-oblivious support will be injected. The program is automatically transformed so as to support failure-oblivious execution. The transformation also adds interception hooks to steer and monitor failure-oblivious decisions. Failure Triggering Input I: an input triggering a runtime failure. We assume that we have at least one program input that enables us to automatically reproduce the failure as many times as we want. An input can be a set of values given to a function. In an object-oriented program, an input is much more than values only: it is a set of created objects interacting through a sequence of method calls. Failure-oblivious Model R: a model listing the possible modifications of the program state or execution flow to handle the failure, as defined above. Validity Oracle O: an oracle specifying the viability of the computation if failure-oblivious computing happens. A validity oracle is a predicate on the program state at the end of program execution on the failure-triggering input. The goal of the validity oracle is to validate or invalidate the failure-oblivious decision sequence that has happened during execution. In failure-oblivious computing, the traditional validity oracle is the absence of crashing errors, but more advanced ones can be defined (e.g., using additional assertions). For instance, consider again the example in Listing 1, where the validity oracle is the absence of an exception that results in HTTP code 500. Returning null is a failed unary decision sequence because the request crashes with NPE2.
On the contrary, returning a fresh date object enables the request to succeed and the HTML to be generated, making it a valid unary decision sequence. Taking the above described four inputs, FO-EXPLORE outputs a set of failure-oblivious decision sequences along with their validity. 2) Algorithm Workflow: To explore the search space, the basic idea behind FO-EXPLORE is to make a different decision according to the failure-oblivious model under consideration each time a failure point is detected, and then collect all decision sequences. We now give a detailed description of the workflow. To explore all the failure-oblivious decision sequences (line 3), FO-EXPLORE executes the input that triggers the runtime failure as many times as required (line 5). Each time a failure is detected at a failure point \( f \) according to the failure-oblivious model (line 6), FO-EXPLORE checks whether it has already chosen a decision for the failure point \( f \) (line 7). If no decision has been taken for the current failure point \( f \) before, FO-EXPLORE computes all the possible decisions for the current failure point (line 9). Then FO-EXPLORE selects an unexplored or not completely explored decision \( d \) from the computed available decisions (line 11). More specifically, FO-EXPLORE can face two different selection cases: Case 1: The failure point has never been detected before, which means that the failure has never happened at this location before. In this case, FO-EXPLORE selects the first decision in the set of possible decisions. Case 2: The failure has already been detected before at this failure point in the program. In this case, FO-EXPLORE has two possibilities: explore a new failure-oblivious decision, or use an already used decision that triggers a new failure point which is not exhaustively explored.
Once a decision has been selected, FO-EXPLORE applies the decision and resumes the execution of the program (lines 13-14); note that FO-EXPLORE uses the original NPEFix algorithm to compute the search space of a single failure point. At the end of the execution, FO-EXPLORE uses the validity oracle to determine the validity of the execution and finally stores the result (line 16). 3) Working Example: We now illustrate the workflow of FO-EXPLORE using the example in Listing 1. Figure 1 presents the actual decision tree of this example. Recall that when FO-EXPLORE executes the request for the failing input, a null pointer exception called NPE1 is produced at line 3. This is shown as a circle “NPE1” in Figure 1. For this first failure point (NPE1), FO-EXPLORE explores three different decisions to handle the null pointer exception (the three arrows coming out of NPE1 in Figure 1): 1) it uses return null to exit the method, 2) it uses return new Date() to exit the method with an arbitrary date, and 3) it uses session = new Session() to initialize the null variable with a new instance and proceeds with the execution of the same method. In cases 2) and 3), the execution does not produce other exceptions. Now consider that FO-EXPLORE selects the first decision (return null) and resumes the execution. This decision later produces a second null pointer exception (NPE2 in Figure 1) at line 6 in Listing 1. At this failure point, FO-EXPLORE explores two decisions: 1) it uses return to exit the execution of the method, and 2) it uses new Date() to replace the null expression getLastConnectionDate() with a new instance of Date. The latter execution modification succeeds while the former produces a failure. The former fails because, if FO-EXPLORE selects the return strategy at the failure point of NPE2, no response is sent to the client and a timeout is produced on the client side.
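The exploration of this working example can be sketched as follows (a simplified re-implementation for illustration only, not the actual FO-EXPLORE code; the decision names are ours):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of FO-EXPLORE on the working example of Listing 1:
// NPE1 offers three decisions; choosing "returnNull" later triggers NPE2,
// which offers two decisions. The oracle rejects sequences ending with
// "returnCaller" (the client-side timeout case described in the text).
public class FoExploreSketch {

    // Decisions available at the next failure point reached under `prefix`,
    // or null if the execution terminates without a further failure.
    static List<String> nextFailurePoint(List<String> prefix) {
        if (prefix.isEmpty()) return List.of("newSession", "newDate", "returnNull");
        if (prefix.equals(List.of("returnNull"))) return List.of("returnCaller", "newDate");
        return null;
    }

    // Validity oracle: a sequence is valid unless it ends with "returnCaller".
    static boolean oracleAccepts(List<String> seq) {
        return !seq.get(seq.size() - 1).equals("returnCaller");
    }

    // Depth-first exhaustive exploration of all decision sequences.
    static void explore(List<String> prefix, List<String> valid) {
        List<String> decisions = nextFailurePoint(prefix);
        if (decisions == null) {            // execution ended: apply the oracle
            if (oracleAccepts(prefix)) valid.add(String.join("+", prefix));
            return;
        }
        for (String d : decisions) {        // try every possible decision
            List<String> extended = new ArrayList<>(prefix);
            extended.add(d);
            explore(extended, valid);
        }
    }

    static List<String> run() {
        List<String> valid = new ArrayList<>();
        explore(new ArrayList<>(), valid);
        return valid;
    }

    public static void main(String[] args) {
        System.out.println(run()); // three valid sequences out of four explored
    }
}
```

Mirroring the decision tree of Figure 1, this sketch explores four sequences, of which three pass the oracle.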
At the end of the exploration, FO-EXPLORE eventually discovers four different decision sequences (the four different paths from Execution Start to Execution End in Figure 1), and three of them produce an acceptable output. D. Usefulness of Exploring the Search Space Characterizing the search space of failure-oblivious computing can provide sound scientific foundations for future work on failure-oblivious computing. Indeed, there is very little work that studies to what extent and why failure-oblivious computing succeeds. By clearly defining and exploring the search space, we obtain comprehensive data about this unresearched phenomenon. The empirical results presented in Section IV are a first step in this direction. In addition, identifying multiple acceptable failure-oblivious decision sequences opens a radically new perspective on failure-oblivious computing: there may be certain decision sequences that are better than others. In other words, there are cases where one adds an additional criterion on top of the validity oracle, and this criterion is used to select the best failure-oblivious decision sequence. For instance, the best decision sequence may be the one that runs the fastest. In the presence of multiple acceptable decision sequences, one needs to characterize and explore the search space first in order to select the best failure-oblivious decision sequence. IV. EMPIRICAL EVALUATION The goal of the empirical evaluation presented in this section is to study the failure-oblivious computing search space of real failures occurring in large-scale open-source software. This study is built on three research questions about the topology of the search space. RQ1. [Multiplicity] Do multiple failure-oblivious decision sequences exist for a given failure-triggering input? How large is the corresponding search space? We want to understand whether it is possible to apply different failure-oblivious strategies to handle one specific failure.
If so, how many different decision sequences can be taken? RQ2. [Fertility] What is the proportion of valid failure-oblivious decision sequences? In the context of failure-oblivious computing, there may exist different failure-oblivious decision sequences that are all valid, i.e., they all fix the runtime failure under consideration. To us, the proportion of valid decision sequences can be considered as the “fertility” of the search space. When the goal is to find at least one valid decision sequence, it is much easier to do so if there are many points in the search space that are valid. On the contrary, if there is only one valid decision sequence in the search space, it requires in the worst case visiting the complete search space before finding the only valid decision sequence. The fertility of the search space is the opposite of what is called “hardness” or “constrainedness” in combinatorial optimization. RQ3. [Disparity] To what extent does the search space contain composite decision sequences? Our protocol identifies a set of diverse failure-oblivious decision sequences, and some of them require several decisions in a row. We will observe whether there exist such composite failure-oblivious decision sequences in our benchmark. Table I: In our experiments, we consider a failure-oblivious model for null dereference exceptions in Java. The model consists of 6 possible strategies.
<table> <thead> <tr> <th>Strategy</th> <th>Id</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>reuse</td> <td>S1</td> <td>injection of an existing compatible object</td> </tr> <tr> <td>creation</td> <td>S2</td> <td>injection of a new object</td> </tr> <tr> <td>line</td> <td>S3</td> <td>skip statement</td> </tr> <tr> <td>void</td> <td>S4a</td> <td>return a null or 0 to caller</td> </tr> <tr> <td>creation</td> <td>S4b</td> <td>return a new object to caller</td> </tr> <tr> <td>reuse</td> <td>S4c</td> <td>return an existing compatible object to caller</td> </tr> </tbody> </table> A. Considered Failure-Oblivious Model We have implemented FO-EXPLORE for null dereferences in Java (aka null pointer exceptions) with a failure-oblivious model derived from NPEFix [10]. In Java, all object variable dereferences are potential failure points (field accesses, method calls on local variables, method parameters, and implicit casts). A decision has to be taken when the variable is null, which means that failure points are activated if and only if a null is going to be dereferenced. For each decision point, NPEFix defines six failure-oblivious strategies, grouped into two categories as shown in Table I. The first category consists of replacing the null value by an alternative valid non-null object of a compatible type. This category is composed of two sub-categories: 1) when a variable is null, one can reuse an object from another variable in the scope instead, strategies of this kind are called reuse-based strategies; 2) when a variable is null, one can also create a new object on the fly, strategies of this kind are called creation-based strategies. Note that the number of possible decisions for reuse and creation based strategies is parametrized by the number of variables and the number of constructors available, which means that for a single decision point, there are often dozens of different available decisions. 
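The first category can be sketched as follows (an illustrative simplification; the real NPEFix implementation works by code transformation, and the names below are ours):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Sketch of how reuse-based and creation-based decisions are enumerated at a
// decision point where a Date-typed variable turns out to be null.
public class NullReplacementSketch {

    // Reuse-based (S1): collect in-scope values of a compatible type.
    static <T> List<T> reuseCandidates(List<Object> scope, Class<T> type) {
        List<T> candidates = new ArrayList<>();
        for (Object o : scope) {
            if (type.isInstance(o)) candidates.add(type.cast(o));
        }
        return candidates;
    }

    public static void main(String[] args) {
        // Variables currently in scope at the decision point.
        List<Object> scope = List.of(new Date(0), "label", 42);
        // One reuse decision (S1) per compatible in-scope value, plus one
        // creation decision (S2) per available constructor, e.g. new Date().
        System.out.println(reuseCandidates(scope, Date.class).size() + " reuse candidate(s)");
    }
}
```

This illustrates why the number of available decisions at a single point grows with the number of in-scope variables and constructors.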
The second category is based on skipping the execution of the code affected by the null dereference: one can either skip the line that uses the null dereference or skip the rest of the method. When skipping the rest of a method which returns a value, one can also either reuse an existing object or create one on the fly. For more details on the strategies and the implementation, we refer the readers to the NPEFix paper [10]. B. Benchmark We need real and reproducible production failures to conduct the evaluation. To achieve this, we reuse the benchmark of NPEFix [10], consisting of 16 field failures coming from Apache projects. Table II presents the core statistics of our benchmark. The first column contains the bug id. The second column contains the SVN revision in the global Apache SVN. The third column contains the number of lines of code. The fourth column contains the total number of method calls before the null pointer exception is triggered. For example, issue Collections-360, fixed at revision 1076034, is an application of 21650 lines of Java code. Let us dwell on the last column, the number of executed method calls before the dereference happens. It gives an intuition of the complexity of the setup required to reproduce the field failure. As shown in Table II, there are between 2 and 342 methods (application methods, not counting JDK and API methods) called for the reproduced field failures under consideration, with an average of 75.56. This indicates that the failures in our benchmark are not simple tests with a trivial call to a method with null parameters. C. Experimental Protocol The experiment is based on the exhaustive exploration of the search space of failure-oblivious decision sequences, as defined by our failure-oblivious model for null pointer exceptions described in Section IV-A. It is done on the benchmark of failures presented in Section IV-B.
1) Exhaustive exploration: We apply algorithm FO-EXPLORE to build the complete decision tree of all failures in our benchmark. Recall that the exploration of the failure-oblivious search space is done as follows: 1) we instrument each buggy program of our benchmark with our failure-oblivious model; 2) we execute each instrumented program with the test case that encodes the field failure; 3) we collect all decisions taken at runtime; and finally 4) we execute the validity oracle at the end of the test case execution. The time required to perform such an experiment is approximately the size of the search space multiplied by the time for reproducing the failure. Note that the alternative computation that comes after the first failure-oblivious decision at the first failure point is added on top of this. The raw data of this evaluation is publicly available on GitHub.² We answer all research questions based on this data. 2) Validity of Failure-Oblivious Decision Sequences: For a given decision sequence taken in response to a failure, we assess its validity according to the oracle. In our experiments, the validity oracle is directly extracted from the test case reproducing the field failure: a decision sequence is considered valid if no null pointer exception and no other exceptions are thrown. A decision sequence is considered invalid if the original null pointer exception is thrown (meaning that there is no possible decision at the failure point), or if another exception is thrown and not caught. When the test case contains domain-specific assertions beyond whether exceptions occur, we keep them, and a decision sequence is considered valid only if all assertions pass after the application of the execution modifications. This is the case for 14/16 failures. D. Results to Research Questions 1) RQ1. [Multiplicity] Do multiple failure-oblivious decision sequences exist for a given failure-triggering input?
How large is the corresponding search space?: We analyze the data obtained with the experiment described in Section IV-C1, consisting of exhaustively exploring the search space of execution modifications for 16 null dereferences. Table III shows the core metrics we are interested in. Table III reads as follows. Each line corresponds to a failure of our benchmark. Each column gives the value of a metric of interest. The first column contains the name of each bug. The second column contains the number of encountered decision points for this failure. The third column contains the number of possible failure-oblivious decision sequences for this failure. The fourth column contains the number of valid execution modifications (valid decision sequences for which the oracle has stated that the decision sequence has worked). The fifth column contains the percentage of valid decision sequences. The sixth column contains the minimum/median/maximum number of decisions taken for valid decision sequences. For example, the first line of Table III details the result for bug Collections-360. To overcome this failure at runtime, there are two possible decision points, which, when they are systematically unfolded, correspond to 45 possible decision sequences, 16 of which are valid according to the oracle. The size of the valid decision sequences is always equal to 2, which means that two decisions must be taken in a row to handle the failure. Our experiment is the first to show that there exist multiple alternative decisions to overcome a failure at runtime, as shown by the number of explored decision sequences. This number is exactly the size of our search space since we conduct an exhaustive exploration. In this experiment, it ranges from 4 decision sequences (for Math-305 and PDFBox-2965) to 576 for Math-988A and 51785 for Math-1117. Overall, we notice a great variance in the size of the search space.
We also see in Table III that there is a correlation between the number of activated decision points for a given failure and the number of possible decision sequences. For instance, for Felix-4960, there is only one activated decision point (at the failure point where the null pointer exception is about to happen), and 10 possible decisions can be taken at this point. On the contrary, Math-1117 has the biggest number of activated decision points (21) and also the biggest number of decision sequences (51785). This correlation is expected and explained analytically as follows. Once a first decision is made at the failure point (where the null dereference is about to happen), many alternative execution paths are uncovered. Then, a combinatorial explosion of stacked decisions happens. If we assume that there are 5 alternative decisions at the first decision point and that each of them in turn triggers another decision point with 10 alternative decisions, this directly results in $5 \times 10 = 50$ possible decision sequences. Now, if we assume $n$ decision points with $m$ possible decisions on average, this results in $m^n$ decision sequences, which is a combinatorial explosion. In general, the size of the search space depends on: 1) the overall number of decision points activated for a given failure, 2) the number of possible decisions at each decision point, and 3) the correlation between different decision points, that is, the extent to which one decision taken at a decision point influences the number of possible subsequent decisions to be taken. For failures with a large number of explored decision sequences, it means that failure-oblivious computing unfolds a large number of diverse program states and their corresponding subsequent executions. Answer to RQ1. We have performed an exhaustive exploration to draw a precise picture of the failure-oblivious computing search space.
The result clearly shows that there exist multiple alternative failure-oblivious decision sequences to handle null dereferences. In our experiment, there are 11/16 failures for which we observe more than 10 possible decision sequences (column “Nb possible decision sequence”) for the same failure according to our execution modification model, with a maximum value of 51785 (for Math-1117). 2) RQ2. [Fertility] What is the proportion of valid failure-oblivious decision sequences?: We now have a clear picture of the size of the failure-oblivious search space, and are interested in further knowing whether there exist multiple valid decision sequences in that space. To do so, we still consider the exhaustive study protocol described in Section IV-C1, whose results are given in Table III. We concentrate especially on the fourth column, which shows the number of valid decision sequences. We compare it against the column representing the size of the search space, i.e., the total number of possible decision sequences. For instance, for Collections-360, the search space contains 45 possible decision sequences, among which 16 are valid according to the oracle (the absence of a null pointer exception, and the two assertions at the end of the test case reproducing the failure pass), i.e., a proportion of $16/45 \approx 36\%$ of the decision sequences in the search space are valid. We notice several interesting extreme cases in Table III. First, there are two failures – Lang-587 and PDFBox-2995 – for which only 1 valid decision sequence exists. In addition, there is one failure for which all decision sequences remove the failure: Math-1115, for which all 5 possible decision sequences are valid. Let us dwell on this proportion of valid decision sequences in the search space. This proportion depends on the strength of the considered oracle.
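Such an oracle can be sketched as a predicate over a replay of the failure-reproducing test (a simplification of the protocol above; the real experiments execute instrumented JUnit tests rather than a Runnable):

```java
// Sketch of the validity oracle: a decision sequence is valid iff replaying
// the failure-reproducing test throws nothing, i.e. no null pointer exception,
// no other uncaught exception, and no failed assertion (AssertionError).
public class ValidityOracleSketch {

    static boolean isValid(Runnable testReproducingFailure) {
        try {
            testReproducingFailure.run();  // runs the test, incl. its assertions
            return true;                   // nothing escaped: sequence is valid
        } catch (Throwable t) {
            return false;                  // NPE, other exception, or AssertionError
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid(() -> {}));                                    // passing test: valid
        System.out.println(isValid(() -> { throw new NullPointerException(); })); // invalid
    }
}
```

Catching Throwable is what makes the oracle subsume both the default oracle (no exceptions) and the assertion-based one, since failed JUnit-style assertions surface as AssertionError.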
In failure-oblivious computing, the oracle that is classically considered is the absence of runtime exceptions: we call this oracle the default oracle. In this experiment, we have an oracle that subsumes this default oracle, as we also use the assertions at the end of the test that reproduces the failure. We have manually inspected the tests and found that not all tests have equally strong assertions, which partly explains the variations in fertility we observe. **Answer to RQ2.** In our benchmark, the proportion of valid decision sequences varies from 0 to 100% (from 0/14 to 5/5 valid execution modifications). This great variation is due to the strength of the considered oracle and the complexity of the code at the failure point. 3) RQ3. [Disparity] To what extent does the search space contain composite decision sequences?: We have shown in RQ2 that there are multiple valid execution modifications. Now, we study their complexity as measured by the number of decisions involved in the execution modification. To do so, we again study the results of the exhaustive study protocol described in Section IV-C1, given in Table III. We especially concentrate on the column showing the minimum, median and maximum size among the set of valid decision sequences (recall that the size is the number of decisions in the decision sequence). For instance, for PDFBox-2812, the minimal size in number of decisions among all valid failure-oblivious decision sequences is 1, the median size is 6 and the maximal size is 7. We have the following findings from this data. First, for 5/16 failures of our benchmark, we see that there exist failure-oblivious decision sequences composed of more than one decision. Since our failure-oblivious model is specific to null pointer exceptions, it means that there exist failures for which the null dereference problem is not solved by the first decision, and another null dereference happens later.
Among the 5 failures, the failure-oblivious decision sequences are all of size 2 for two failures (Collections-360, Lang-703). For the remaining 3 failures (Math-988A, PDFBox-2812, Math-1117), there are failure-oblivious decision sequences of different sizes. For instance, for Math-988A, there exist failure-oblivious decision sequences of 1, 2 and 3 decisions. Second, for 10/16 failures of our benchmark, the failure-oblivious decision sequences are always composed of a single decision. This is strongly correlated with the size of the search space (third column, number of decision sequences), indicating that the test case reproducing the production failure sets up a program state that is amenable to failure-oblivious computing. Finally, for one failure (Math-369), although several decisions are taken, none of them is valid. All decision sequences are invalidated by the oracle (the assertions of the test case reproducing the field failure). An interesting question is whether one needs to apply different strategies in the same execution to get an acceptable result. Figure 2 presents the number of different repair strategies that are used in valid decision sequences. We see that 76% of valid decision sequences mix at least two different strategies; this clearly hints that mixing different strategies is important for effective failure-oblivious computing.

Figure 2: The number of valid decision sequences that use between 1 and 5 different repair strategies. We exclude Math-1117 for better visibility.

**Example of Math-988A**: We now illustrate the disparity of the number of decision sequences for the bug Math-988A. The initial null pointer exception is triggered in the return statement of method `toSubSpace`, which returns an object of type Vector1D (see Listing 3). The null pointer exception is triggered when a Vector2D parameter is null and methods `getX` and `getY` are called on it. The Vector2D parameter is computed at line 117 in Listing 2 and then passed as an argument to the method `toSubSpace` at line 123 in Listing 2. If two lines do not have an intersection, the geometrical computation of their intersection at line 117 of Listing 2 is null and will then cause the null pointer exception. There are three failure points for this bug as shown in Table III, which means that there are potentially between 1 and 3 null pointer exceptions happening during the execution of the failure input, depending on the selected decisions. The source code of these failure points is shown in Listing 3, and the different decision sequences are illustrated in the decision tree in Figure 3. In this figure, each node represents a failure point, each arrow represents a decision, and each path between the Execution Start and the Execution End represents a decision sequence.

Listing 2: The human patch for Math-988A.

```java
// compute the intersection on infinite line
Vector2D v2D = line1.intersection(line2);
if (v2D == null) {
    return null;
}
// check location of point with respect to first sub-line
Location loc1 = getRemainingRegion().checkPoint(line1.toSubSpace(v2D));
```

Listing 3: The decision points of Math-988A.

```java
Vector1D toSubSpace(Vector2D point) {
    // failure point: NPE when point is null
    return new Vector1D(cos * point.getX() + sin * point.getY());
}
```

Listing 4: The patch equivalent to a valid failure-oblivious decision taken for Math-988A.

```java
Vector1D toSubSpace(Vector2D point) {
    if (point == null) {
        return Vector1D.NaN;
    }
    return new Vector1D(cos * point.getX() + sin * point.getY());
}
```

Figure 3: Excerpt of the decision tree of Math-988A. One path in this tree is a "decision sequence". This figure clearly shows the presence of paths with multiple steps, i.e., the presence of compound decision sequences.
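A decision tree like the one in Figure 3 can be explored exhaustively by enumerating all paths from Execution Start to Execution End; each path is one decision sequence. The following sketch uses a made-up toy tree, not the actual Math-988A tree.

```python
# Hypothetical sketch: enumerate all decision sequences as root-to-end
# paths of a decision tree like the one in Figure 3. The node and
# strategy names are illustrative, not taken from the actual tree.

def decision_sequences(tree, node="start"):
    """Yield every path from `node` to a leaf as a list of decisions."""
    children = tree.get(node, [])
    if not children:                      # execution end reached
        yield []
        return
    for decision, child in children:
        for suffix in decision_sequences(tree, child):
            yield [decision] + suffix

# Toy tree: the first failure point offers two strategies; one of them
# leads to a second failure point with two further strategies.
tree = {
    "start": [("return-NaN", "end"), ("return-null", "fp2")],
    "fp2":   [("new-instance", "end"), ("skip-line", "end")],
}
sequences = list(decision_sequences(tree, "start"))
# sequences == [["return-NaN"], ["return-null", "new-instance"],
#               ["return-null", "skip-line"]]
```

Each yielded list is one decision sequence, and its length is the "size" reported in Table III; an oracle would then label each sequence valid or invalid.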
For Math-988A, there are different kinds of failure-oblivious decisions: 1) initialize the null parameter with a new instance (1 decision point), 2) use a new and disposable instance of Vector2D at both places where the null parameter is used (2 decision points), 3) return null either at the first NPE location or at the second one, triggering another decision in the caller (between 2 and 3 decision points), and 4) return a new instance of Vector1D (1 decision point). For Math-988A, the reproducing test contains JUnit assertions checking the expected correct behavior, which is to return null when no intersection exists. In case the failure-oblivious decisions pass those assertions, it means that failure-oblivious computing achieves full correctness, and these decisions are illustrated with "OK" in Figure 3. Otherwise, if the failure-oblivious decisions fail on the assertions or trigger a new exception, the corresponding decision sequence is invalid.

**Answer to RQ3.** For 5/16 failures of our benchmark, the search space contains composite failure-oblivious decision sequences that have more than one decision. For 3/16 failures, the possible failure-oblivious decision sequences have disparate sizes, and our protocol enables us to identify all valid failure-oblivious decision sequences.

V. Threats to validity

We now discuss the threats to validity of our experiment. First, let us discuss internal validity. Our experiment is of computational nature, and consequently, a bug in our code may threaten the validity of our results. However, since all our experiment code is publicly available for the sake of open science [1], future researchers will be able to identify these potential bugs. Second, a threat to the external validity relates to the number of failures considered. Our experiment has considered 16 failures from 6 different software projects. Recall that reproducing field failures is a very costly task and, consequently, there is a research trade-off between cost and external validity. However, our experiment considers as many or more failures than the related work on failure-oblivious computing. For external validity, it may also be asked whether our results are specific to Java and whether the failure-oblivious computing search space has the same structure in other programming languages and runtime environments. In our opinion, this is an interesting threat to external validity that calls for more research in this area.

VI. Related work

Long and Rinard [11] study the search space of patch generation systems. In our work, we consider failure-oblivious decision sequences, which are fundamentally different: while a code patch is a permanent modification to the behavior, a failure-oblivious decision sequence only impacts one single execution, with no effect or regression on subsequent executions, even if they execute the same statement. Interestingly, in both cases, contrary to the initial intuition of the research community, there is a multiplicity of possible patches. Long and Rinard's paper is the first one to study this for static patches; our paper is possibly the first one to comprehensively show that this phenomenon exists for failure-oblivious decision sequences. There are several automatic recovery techniques. One of the earliest techniques is Ammann and Knight's "data diversity" [12], which aims at enabling the computation of a program in the presence of failures. The idea of data diversity is that, when a failure occurs, the input data is changed so that the new input resulting from the change does not result in the failure. The assumption is that the output based on this artificial input, through an inverse transformation, remains acceptable in the domain under consideration. The input transformations can be seen as a kind of failure-oblivious model. As such, our protocol could be used to reason on the search space of data diversity. Demsky et al. [13] present a language for the specification of data structure invariants. The invariant specification is used to verify and repair the consistency of data structure instances at runtime.
In their work, Demsky et al. do not study the associated search space. Rinard et al. [2] present a technique to avoid illegal memory accesses by adding additional code around each memory operation during the compilation process. For example, the additional code verifies at runtime that the program only uses the allocated memory. If the memory access is outside the allocated memory, the access is ignored instead of crashing with a segmentation fault. We apply different decisions to handle a given failure (and not a single behavior hard-coded in the injected code), and we use an oracle to reason about the viability of the decision. Perkins et al. [14] propose ClearView, a system for automatically handling errors in production. The system monitors the execution at the level of low-level registers to learn invariants. Those invariants are then monitored, and if a violation of an invariant is detected, ClearView forces the restoration of the invariant. From an engineering perspective, the difference is that we reason on decision sequences, while ClearView analyzes each decision in isolation. From a scientific perspective, our work finely characterizes the search space and the outcomes of failure-oblivious computing based on execution modification. Rx [15] is a runtime repair system based on changing the environment upon failures. Rx employs checkpoint-and-rollback for re-executing the buggy code when failures happen. The differences are as follows: 1) Rx does not change the execution itself but the environment; 2) the search space of Rx is smaller (a set of predefined strategies); 3) Rx's experiment does not include a systematic exploration of the search space. Kling et al. [16] propose Bolt, a system to detect and escape infinite and long-running loops. On user demand, Bolt is attached to a running application and tries different strategies to escape the infinite loop. If a strategy fails, Bolt uses rollback to restore the state of the application and then tries the next strategy.
Bolt does not reason about decision sequences as we do in this paper. Long et al. [4] introduce the idea of "recovery shepherding" in a system called RCV. Upon certain errors (null dereferences and divisions by zero), recovery shepherding consists in returning a manufactured value, as in failure-oblivious computing. The key idea of recovery shepherding is to track the manufactured values so as to see 1) whether they are passed to system calls or files and 2) whether they disappear. The key difference with our work lies in the reasoning about the effect of the combinations (by storing and keeping information about the actual valid decision sequences). Jula et al. [17] present a system to defend against deadlocks at runtime. The system first detects synchronization patterns of deadlocks, and when a pattern is detected, the system avoids re-occurrences of the deadlock with additional locks. The pattern detection is related to our detector of instances of the fault model under consideration. However, Jula et al. do not explore and compare alternative locking strategies. We note that our protocol may be plugged on top of their system to explore the search space of locking sequences. Hosek and Cadar [18] switch between application versions when a bug is detected. This technique can handle failures because some bugs disappear while others appear between versions. Our protocol could also be used to systematically explore the sequences of runtime jumps across versions. Assure [19] is a self-healing system based on checkpointing and error virtualization. Error virtualization consists of handling an unknown and unrecoverable error with error-handling code that is already present in the system yet designed for handling other errors. While Assure does self-healing by opportunistic reuse of already-present recovery code, our failure-oblivious model handles failures by modifying the state or flow. Carzaniga et al.
[20] repair web applications at runtime with a set of manually written, API-specific alternative rules. This set can be seen as a hardcoded set of failure-oblivious decision sequences. On the contrary, we do not require a list of alternatives but instead rely on an abstract failure-oblivious model that is automatically instantiated at runtime. Berger and Zorn [5] show that it is possible to effectively tolerate memory errors and provide probabilistic memory safety by randomizing the memory allocation and providing memory replication. Exterminator [21] provides more sophisticated fault tolerance than [5] by performing fault localization before applying memory padding. The work by Qin et al. [22] exploits a specific hardware feature called ECC memory for detecting illegal memory accesses at runtime. The idea of the paper is to use the consistency checks of ECC memory to detect illegal memory accesses (for instance due to a buffer overflow). Both techniques are semantically equivalent in the normal case. In contrast, we have reasoned about the search space of execution modifications that are not semantically equivalent, where one taken decision can impact the rest of the computation. Dobolyi and Weimer [3] present a technique to tolerate null dereferences. Using code transformation, they introduce hooks to a recovery framework. This framework is responsible for forward recovery in the form of creating a default object of an appropriate type or skipping instructions. Kent [23] proposes alternatives to null pointer exceptions. He proposes to skip the failing line or to exit the method with a return when a null pointer exception is detected. In those two contributions, there is no reasoning on the search space of failure-oblivious computing, as done in this work. Jeffrey et al. [24] present a technique to assist developers in locating the root cause of memory errors. In this work, Jeffrey et al.
suppress the execution of the statement that produces the failure and repeat this procedure until the execution of the program does not fail. The last suppressed statement should, according to Jeffrey et al., be close to the root cause of the memory error. This approach uses the failure-oblivious strategy to continue the execution of the program in order to gain knowledge; in this case, the goal is to identify the root cause of the memory error. The main difference is that Jeffrey et al. do not use the failure-oblivious technique to fix the application but to gather knowledge during the execution of the program. We focus on the failure-oblivious computing search space to understand the failure-oblivious behavior, and we also consider different failure-oblivious strategies to handle failures. VII. Conclusion In this paper, we characterized the key concept of the search space of failure-oblivious computing. In order to understand the behavior of failure-oblivious computing, we proposed an algorithm to exhaustively explore this search space. We performed an empirical study on 16 real Java bugs to draw a precise picture of the nature of the failure-oblivious search space on real bugs. We found that there are several possible execution modifications to handle a failure, and that several execution modifications have to be taken in a row to handle some specific failures. Now that we have a better understanding of the size of the failure-oblivious search space, we plan to develop techniques to select better state modifications in the future, so that the capability of failure-oblivious systems in production environments can be improved. References
Mining XML Functional Dependencies through Formal Concept Analysis

Viorica Varga

October 16, 2012

Outline
- Definitions for XML Functional Dependencies
- Introduction to FCA
- FCA tool to detect XML FDs
- Finding XML keys
- Detecting XML data redundancy
- Conclusions and Future Work

XML Design

XML data design: choose an appropriate XML schema, which usually comes in the form of a DTD (Document Type Definition) or XML Schema. Functional dependencies (FDs) are a key factor in XML design. The objective of normalization is to eliminate redundancies from an XML document and to eliminate or reduce potential update anomalies.
Schema definition

**Definition** (Schema) A schema is defined as a set \( S = (E, T, r) \), where:
- \( E \) is a finite set of element labels;
- \( T \) is a finite set of element types, and each \( e \in E \) is associated with a \( \tau \in T \), written as \( (e : \tau) \), where \( \tau \) has the following form: \[ \tau ::= \text{str} | \text{int} | \text{float} | \text{SetOf} \tau | \text{Rcd} [e_1 : \tau_1, \ldots, e_n : \tau_n] ; \]
- \( r \in E \) is the label of the root element, whose associated element type cannot be \( \text{SetOf} \tau \).
- Types \( \text{str} \), \( \text{int} \) and \( \text{float} \) are the system-defined simple types, and \( \text{Rcd} \) indicates complex schema elements.
- Keyword \( \text{SetOf} \) is used to indicate set schema elements.
- Attributes and elements are treated in the same way, with a reserved "@" symbol before attributes.

Figure: CustOrder XML tree

Customer's Orders Example Scheme

0 CustOrder: Rcd 1 Customers: SetOf Rcd 2 CustomerID: str 3 CompanyName: str 4 Address: str 5 City: str 6 PostalCode: str 7 Country: str 8 Phone: str 9 Orders: SetOf Rcd 10 OrderID: int 11 CustomerID: str 12 OrderDate: str 13 OrderDetails: SetOf Rcd 14 OrderID: int 15 ProductID: int 16 UnitPrice: float 17 Quantity: float 18 ProductName: str 19 CategoryID: int

A schema element $e_k$ can be identified through a path expression, $\text{path}(e_k) = /e_1/e_2/.../e_k$, where $e_1 = r$, and $e_i$ is associated with type $\tau_i ::= \text{Rcd} [..., e_{i+1} : \tau_{i+1},...]$ for all $i \in [1, k - 1]$. A path is *repeatable* if $e_k$ is a set element. We adopt XPath steps "." (self) and ".." (parent).
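Under this formalism, the repeatable paths are exactly the paths ending at a SetOf element. A minimal sketch, with our own encoding of the Rcd and SetOf type constructors:

```python
# Illustrative sketch of the schema formalism above: 'str'/'int'/'float'
# are simple types, ('SetOf', t) a set element, ('Rcd', {child: t}) a
# record. The walk lists every path and flags repeatable ones.
# The encoding is ours, not a standard representation.

def paths(name, typ, prefix=""):
    """Yield (path, repeatable) for element `name` of type `typ`."""
    path = prefix + "/" + name
    if isinstance(typ, tuple) and typ[0] == "SetOf":
        yield path, True                      # set element: repeatable
        inner = typ[1]
        if isinstance(inner, tuple) and inner[0] == "Rcd":
            for child, ct in inner[1].items():
                yield from paths(child, ct, path)
    elif isinstance(typ, tuple) and typ[0] == "Rcd":
        yield path, False
        for child, ct in typ[1].items():
            yield from paths(child, ct, path)
    else:
        yield path, False                     # simple type

# Fragment of the warehouse schema used later in these slides.
warehouse = ("Rcd", {"state": ("SetOf", ("Rcd", {
    "name": "str",
    "store": ("SetOf", ("Rcd", {
        "contact": ("Rcd", {"name": "str", "address": "str"}),
        "book": ("SetOf", ("Rcd", {"ISBN": "str", "title": "str"})),
    })),
}))})
repeatable = {p for p, r in paths("warehouse", warehouse) if r}
# /warehouse/state, /warehouse/state/store and
# /warehouse/state/store/book are the repeatable paths
```

The repeatable paths computed here are the candidate pivot paths for the tuple classes introduced below.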
**Definition (Data tree)** An XML database is defined to be a rooted labeled tree $T = \langle N, P, V, n_r \rangle$, where: - $N$ is a set of labeled data nodes, each $n \in N$ has a label $e$ and a node key that uniquely identifies it in $T$; - $n_r \in N$ is the root node; - $P$ is a set of parent-child edges, there is exactly one $p = (n', n)$ in $P$ for each $n \in N$ (except $n_r$), where $n' \in N, n \neq n'$, $n'$ is called the parent node, $n$ is called the child node; - $V$ is a set of value assignments, there is exactly one $v = (n, s)$ in $V$ for each leaf node $n \in N$, where $s$ is a value of simple type. Descendant, repeatable element definition - We assign a node key, referred to as @key, to each data node in the data tree in a pre-order traversal. - A data element $n_k$ is a descendant of another data element $n_1$ if there exists a series of data elements $n_i$, such that $(n_i, n_{i+1}) \in P$ for all $i \in [1, k - 1]$. - Data element $n_k$ can be addressed using a path expression, $\text{path}(n_k) = /e_1/ \ldots /e_k$, where $e_i$ is the label of $n_i$ for each $i \in [1, k]$, $n_1 = n_r$, and $(n_i, n_{i+1}) \in P$ for all $i \in [1, k - 1]$. - A data element $n_k$ is called repeatable if $e_k$ corresponds to a set element in the schema. - Element $n_k$ is called a direct descendant of element $n_a$, if $n_k$ is a descendant of $n_a$, $\text{path}(n_k) = \ldots /e_a/e_1/ \ldots /e_{k-1}/e_k$, and $e_i$ is not a set element for any $i \in [1, k - 1]$. 
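The @key assignment in a pre-order traversal can be sketched as follows; the node encoding (a dict of label and children) is our own simplification of the data tree above.

```python
# Minimal sketch: assign a node key (@key) to every node of a data
# tree in a pre-order traversal, as in the definition above.

def assign_keys(node, counter=None):
    """node = {'label': ..., 'children': [...]}; adds node['@key']."""
    if counter is None:
        counter = [0]
    node["@key"] = counter[0]   # visit the node before its children
    counter[0] += 1
    for child in node.get("children", []):
        assign_keys(child, counter)
    return node

tree = {"label": "warehouse", "children": [
    {"label": "state", "children": [{"label": "name", "children": []}]},
    {"label": "state", "children": []},
]}
assign_keys(tree)
# warehouse -> 0, first state -> 1, its name -> 2, second state -> 3
```

Each node thus gets a key that uniquely identifies it in the tree, which is what the XML key definition later relies on via the reserved ./@key path.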
Warehouse Example

Warehouse Example Scheme

0 warehouse : $Rcd$ 1 state : $SetOf$ $Rcd$ 2 name : $str$ 3 store : $SetOf$ $Rcd$ 4 contact : $Rcd$ 5 name : $str$ 6 address : $str$ 7 book : $SetOf$ $Rcd$ 8 ISBN : $str$ 9 author : $SetOf$ $str$ 10 title : $str$ 11 price : $str$

Element-value equality

**Definition** (Element-value equality) Two data elements $n_1$ of $T_1 = \langle N_1, \mathcal{P}_1, \mathcal{V}_1, n_{r1} \rangle$ and $n_2$ of $T_2 = \langle N_2, \mathcal{P}_2, \mathcal{V}_2, n_{r2} \rangle$ are element-value equal (written as $n_1 =_{ev} n_2$) if and only if:
- $n_1$ and $n_2$ both exist and have the same label;
- There exists a set $M$, such that for every pair $(n'_1, n'_2) \in M$, $n'_1 =_{ev} n'_2$, where $n'_1$, $n'_2$ are children elements of $n_1$, $n_2$, respectively. Every child element of $n_1$ or $n_2$ appears in exactly one pair in $M$.
- $(n_1, s) \in \mathcal{V}_1$ if and only if $(n_2, s) \in \mathcal{V}_2$, where $s$ is a simple value.

**Example** Data elements 30 and 50 are element-value equal if and only if the subtrees rooted at those two elements are identical when the order among sibling elements is ignored.

Path-value equality

**Definition** (Path-value equality) Two data element paths $p_1$ on $T_1 = \langle N_1, \mathcal{P}_1, \mathcal{V}_1, n_{r1} \rangle$ and $p_2$ on $T_2 = \langle N_2, \mathcal{P}_2, \mathcal{V}_2, n_{r2} \rangle$ are **path-value equal** (written as $T_1.p_1 =_{pv} T_2.p_2$) if and only if there is a set $M'$ of matching pairs where
- For each pair $m' = (n_1, n_2)$ in $M'$, $n_1 \in N_1$, $n_2 \in N_2$, \( \text{path}(n_1) = p_1 \), \( \text{path}(n_2) = p_2 \), and $n_1 =_{ev} n_2$;
- All data elements with path $p_1$ in $T_1$ and path $p_2$ in $T_2$ participate in $M'$, and each such data element participates in only one such pair.

Value equality between two paths is complicated by the fact that a single path can match multiple data elements in the data tree.
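Element-value equality compares subtrees while ignoring sibling order. A hedged sketch that canonicalizes each subtree before comparing (the node encoding is ours, not the slides'):

```python
# Sketch of element-value equality (=_ev): two nodes are equal iff they
# carry the same label and value and their children can be matched
# pairwise, ignoring sibling order. We compare order-insensitive
# canonical forms instead of searching for the matching set M directly.

def canon(node):
    """Order-insensitive canonical form of a subtree."""
    kids = tuple(sorted((canon(c) for c in node.get("children", [])),
                        key=repr))
    return (node["label"], node.get("value"), kids)

def ev_equal(n1, n2):
    return canon(n1) == canon(n2)

book1 = {"label": "book", "children": [
    {"label": "ISBN", "value": "00..638", "children": []},
    {"label": "author", "value": "Ramakrishnan", "children": []},
    {"label": "author", "value": "Gehrke", "children": []},
]}
book2 = {"label": "book", "children": [
    {"label": "author", "value": "Gehrke", "children": []},
    {"label": "ISBN", "value": "00..638", "children": []},
    {"label": "author", "value": "Ramakrishnan", "children": []},
]}
# ev_equal(book1, book2) is True despite the different sibling order
```

This mirrors the example above: two book subtrees are element-value equal exactly when they are identical up to sibling order.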
This definition considers two paths value-equal if each node pointed to by one path has a corresponding node pointed to by the other path, where the two nodes are element-value equal.

Generalized tree tuple

**Definition** A generalized tree tuple of data tree $T = \langle N, \mathcal{P}, \mathcal{V}, n_r \rangle$, with regard to a particular data element $n_p$ (called pivot node), is a tree $t_{n_p}^T = \langle N^t, \mathcal{P}^t, \mathcal{V}^t, n_r \rangle$, where:
- $N^t \subseteq N$ is the set of nodes, $n_p \in N^t$;
- $\mathcal{P}^t \subseteq \mathcal{P}$ is the set of parent-child edges;
- $\mathcal{V}^t \subseteq \mathcal{V}$ is the set of value assignments;
- $n_r$ is the same root node in both $t_{n_p}^T$ and $T$;
- $n \in N^t$ if and only if:
  - $n$ is a descendant or ancestor of $n_p$ in $T$, or
  - $n$ is a non-repeatable direct descendant of an ancestor of $n_p$ in $T$;
- $(n_1, n_2) \in \mathcal{P}^t$ if and only if $n_1 \in N^t$, $n_2 \in N^t, (n_1, n_2) \in \mathcal{P}$;
- $(n, s) \in \mathcal{V}^t$ if and only if $n \in N^t$, $(n, s) \in \mathcal{V}$.

Tuple class
- A generalized tree tuple is a data tree projected from the original data tree.
- It has an extra parameter called a pivot node. In contrast with the tree tuple defined in Arenas and Libkin's article, which separates sibling nodes with the same path at all hierarchy levels, the generalized tree tuple separates sibling nodes with the same path only above the pivot node.
- Based on the pivot node, generalized tree tuples can be categorized into tuple classes:

**Definition** (Tuple class) A tuple class $C_p^T$ of the data tree $T$ is the set of all generalized tree tuples $t_n^T$, where $\text{path}(n) = p$. Path $p$ is called the **pivot path**.

Figure: Example tree tuple

XML Functional Dependency

**Definition (XML FD)** An XML FD is a triple $\langle C_p, LHS, RHS \rangle$, written as $LHS \rightarrow RHS$ w.r.t.
$C_p$, where $C_p$ denotes a tuple class, $LHS$ is a set of paths $(P_{li}, i = [1, n])$ relative to $p$, and $RHS$ is a single path $(P_r)$ relative to $p$. An XML FD holds on a data tree $T$ (or $T$ satisfies an XML FD) if and only if for any two generalized tree tuples $t_1, t_2 \in C_p$:
- $\exists i \in [1, n], t_1.P_{li} = \perp$ or $t_2.P_{li} = \perp$, or
- If $\forall i \in [1, n], t_1.P_{li} =_{pv} t_2.P_{li}$, then $t_1.P_r \neq \perp, t_2.P_r \neq \perp, t_1.P_r =_{pv} t_2.P_r$.

A null value, $\perp$, results from a path that matches no node in the tuple, and $=_{pv}$ is the path-value equality defined previously.

XML Functional Dependency Example

**Example** (XML FD) In our running example, whenever two products agree on ProductID values, they have the same ProductName. This can be formulated as follows: \[ .\text{/ProductID} \rightarrow .\text{/ProductName} \text{ w.r.t. } C_{OrderDetails} \] Another example is: \[ .\text{/ProductID} \rightarrow .\text{/CategoryID} \text{ w.r.t. } C_{OrderDetails} \]

**Example** (XML FD) In the warehouse tree: \[ .\text{/ISBN} \rightarrow .\text{/title} \text{ w.r.t. } C_{book} \] \[ ..\text{/contact/name}, .\text{/ISBN} \rightarrow .\text{/price} \text{ w.r.t. } C_{book} \] \[ .\text{/ISBN} \rightarrow .\text{/author} \text{ w.r.t. } C_{book} \] \[ .\text{/author}, .\text{/title} \rightarrow .\text{/ISBN} \text{ w.r.t. } C_{book} \]

Trivial XML FD

**Definition** (Trivial XML FD) An XML FD $\langle C_p, LHS, RHS \rangle$ is trivial if:
1. $RHS \in LHS$, or
2. For any generalized tree tuple in $C_p$, there is at least one path in LHS that matches no data element.

The second condition can arise because of the existence of Choice elements.

**Example** If Contact is a Choice element instead of Rcd, i.e. it can have either name or address as its child, but not both, then the XML FD $\langle C_{store}, \{./contact/name, ./contact/address\}, ./@key \rangle$ is trivial, because no $C_{store}$ tuple will have both LHS nodes.
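The FD satisfaction condition above can be checked over a flattened tuple class by grouping tuples on their LHS values; tuples with a null LHS path are skipped, per the first condition. A simplified sketch (set-valued paths and full path-value equality are reduced to plain values, and the data is made up to mirror the warehouse example):

```python
# Sketch of the XML FD check: each generalized tree tuple is reduced
# to a dict from relative path to value, with None standing for the
# null value ⊥. Paths and values below are illustrative.

def fd_holds(tuples, lhs, rhs):
    """Does LHS -> RHS hold w.r.t. the tuple class `tuples`?"""
    seen = {}
    for t in tuples:
        key = tuple(t.get(p) for p in lhs)
        if any(v is None for v in key):
            continue                 # a null LHS path: pair is ignored
        r = t.get(rhs)
        if r is None:
            return False             # RHS must be non-null
        if key in seen:
            if seen[key] != r:
                return False         # two tuples disagree on RHS
        else:
            seen[key] = r
    return True

books = [  # C_book tuples; ../contact/name reaches above the pivot
    {"./ISBN": "00..269", "./title": "DBMS",
     "../contact/name": "Borders", "./price": "126.99"},
    {"./ISBN": "00..638", "./title": "DBMS",
     "../contact/name": "Borders", "./price": "79.90"},
    {"./ISBN": "00..638", "./title": "DBMS",
     "../contact/name": "WHSmith", "./price": "59.90"},
]
# ./ISBN -> ./title holds; ./ISBN -> ./price does not (79.90 vs 59.90);
# ../contact/name, ./ISBN -> ./price holds
```

Checking an XML key is the special case where RHS is ./@key, so every tuple must have a distinct LHS value.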
**XML key**

**Definition** (XML key) An XML key of a data tree $T$ is a pair $\langle C_p, LHS \rangle$, where $T$ satisfies the XML FD $\langle C_p, LHS, ./@key \rangle$.

**Example** We have the XML FD $\langle C_{Orders}, ./OrderID, ./@key \rangle$, which implies that $\langle C_{Orders}, ./OrderID \rangle$ is an XML key.

**Example** $\langle C_{State}, ./name \rangle$ and $\langle C_{Store}, \{./contact/name, ./contact/address\} \rangle$ are XML keys.

Structurally redundant XML FDs

Theorem
- Let $FD = \langle C_p, LHS, RHS \rangle$;
- if none of the paths in $LHS$ and $RHS$ specifies a data element that is a descendant of the pivot node in the tuple,
- then $FD$ holds on a data tree $T$
- if and only if $FD' = \langle C_{p'}, LHS', RHS' \rangle$ holds on $T$, where
- $C_{p'}$ is the lowest-repeatable-ancestor tuple class of $C_p$,
- and the paths in $LHS'$ and $RHS'$ are equivalent to paths in $LHS$ and $RHS$ (i.e. they correspond to the same absolute paths).

**Example** ../ISBN → ../title w.r.t. $C_{author}$ is structurally redundant with ./ISBN → ./title w.r.t. $C_{book}$.

Interesting XML FD

Tuple classes with repeatable pivot paths are called essential tuple classes.

**Definition** (Interesting XML FD) An XML FD $\langle C_p, LHS, RHS \rangle$ is interesting if it satisfies the following conditions:
- $RHS \not\in LHS$;
- $C_p$ is an essential tuple class;
- $RHS$ matches descendant(s) of the pivot node.

An interesting XML FD is a non-trivial XML FD with an essential tuple class and is not structurally redundant to any other XML FD.

XML data redundancy

**Definition** (XML data redundancy) A data tree $T$ contains a redundancy if and only if $T$ satisfies an interesting XML FD $\langle C_p, LHS, RHS \rangle$, but does not satisfy the XML key $\langle C_p, LHS \rangle$.

Intuitively:
- if $\langle C_p, LHS \rangle$ is not a key for $T$, then there exist two distinct tuples in $C_p$ that share the same LHS.
- $T$ satisfies $\langle C_p, LHS, RHS \rangle$, so RHS of these two tuples must be value equal - so: data is redundantly stored Definition (GTT-XNF) An XML schema $S$ is in GTT-XNF given the set of all satisfied interesting XML FDs if and only if for each such XML FD $⟨C_p, LHS, RHS⟩$, $⟨C_p, LHS⟩$ is an XML key. *Intuitively:* GTT-XNF disallows any satisfied interesting XML FD that indicates data redundancies. **Rule 1 (Reflexivity)** $LHS \rightarrow P_1$ w.r.t. $C_p$ is satisfied if $P_1 \subseteq LHS$. **Rule 2 (Augmentation)** $LHS \rightarrow P_1$ w.r.t. $C_p \Rightarrow \{LHS, P_2\} \rightarrow P_1$ w.r.t. $C_p$. **Rule 3 (Transitivity)** $LHS \rightarrow P_1$ w.r.t. $C_p \land \ldots \land LHS \rightarrow P_n$ w.r.t. $C_p \land \{P_1, \ldots, P_n\} \rightarrow P$ w.r.t. $C_p \Rightarrow LHS \rightarrow P$ w.r.t. $C_p$. ### XML Data Flat Representation <table> <thead> <tr> <th>warehouse</th> <th>state</th> <th>name</th> <th>store</th> <th>contact</th> <th>contact/name</th> <th>contact/address</th> <th>book</th> <th>ISBN</th> <th>author</th> <th>title</th> <th>price</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> <td>WA</td> <td>12</td> <td>13</td> <td>Borders</td> <td>Seattle</td> <td>20</td> <td>00...269</td> <td>Post</td> <td>DBMS</td> <td>126.99</td> </tr> <tr> <td>1</td> <td>10</td> <td>WA</td> <td>12</td> <td>13</td> <td>Borders</td> <td>Seattle</td> <td>30</td> <td>00...638</td> <td>Rama...</td> <td>DBMS</td> <td>79.90</td> </tr> <tr> <td>1</td> <td>10</td> <td>WA</td> <td>12</td> <td>13</td> <td>Borders</td> <td>Seattle</td> <td>30</td> <td>00...638</td> <td>Gehrke</td> <td>DBMS</td> <td>79.90</td> </tr> </tbody> </table> **Figure**: One relation for the whole XML data ## XML Data Hierarchical Representation <table> <thead> <tr> <th>$R_{\text{root}}$</th> <th>@key</th> <th>parent</th> </tr> </thead> <tbody> <tr> <td></td> <td>1</td> <td>↓</td> </tr> </tbody> </table> <table> <thead> <tr> <th>$R_{\text{state}}$</th> <th>@key</th> 
<th>parent</th> <th>name</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td>10</td> <td>1</td> <td>WA</td> </tr>
<tr> <td></td> <td>40</td> <td>1</td> <td>KY</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>$R_{\text{store}}$</th> <th>@key</th> <th>parent</th> <th>contact</th> <th>contact/name</th> <th>contact/addr.</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td>12</td> <td>10</td> <td>13</td> <td>Borders</td> <td>Seattle</td> </tr>
<tr> <td></td> <td>42</td> <td>40</td> <td>43</td> <td>Borders</td> <td>Lexington</td> </tr>
<tr> <td></td> <td>72</td> <td>40</td> <td>73</td> <td>WHSmith</td> <td>Lexington</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>$R_{\text{book}}$</th> <th>@key</th> <th>parent</th> <th>ISBN</th> <th>title</th> <th>price</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td>20</td> <td>12</td> <td>00..269</td> <td>DBMS</td> <td>126.99</td> </tr>
<tr> <td></td> <td>30</td> <td>12</td> <td>00..638</td> <td>DBMS</td> <td>79.90</td> </tr>
<tr> <td></td> <td>50</td> <td>42</td> <td>00..638</td> <td>DBMS</td> <td>79.90</td> </tr>
<tr> <td></td> <td>80</td> <td>72</td> <td>00..638</td> <td>DBMS</td> <td></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>$R_{\text{author}}$</th> <th>@key</th> <th>parent</th> <th>author</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td>22</td> <td>20</td> <td>Post</td> </tr>
<tr> <td></td> <td>32</td> <td>30</td> <td>Ramakrishnan</td> </tr>
<tr> <td></td> <td>33</td> <td>30</td> <td>Gehrke</td> </tr>
<tr> <td></td> <td>52</td> <td>50</td> <td>Ramakrishnan</td> </tr>
<tr> <td></td> <td>53</td> <td>50</td> <td>Gehrke</td> </tr>
<tr> <td></td> <td>82</td> <td>80</td> <td>Ramakrishnan</td> </tr>
<tr> <td></td> <td>83</td> <td>80</td> <td>Gehrke</td> </tr>
</tbody>
</table>

**Figure:** Set of relations

Intra-relation FDs/Keys, Inter-relation FDs/Keys

Example (XML FD) In the warehouse tree:
- ./ISBN → ./title w.r.t. \(C_{book}\) (intra-relation FD)
- ../contact/name, ./ISBN → ./price w.r.t. \(C_{book}\) (inter-relation FD)
- ./ISBN →
./author w.r.t. \(C_{book}\) (inter-relation FD)
- ./author, ./title → ./ISBN w.r.t. \(C_{book}\) (inter-relation FD)

Eliminating redundancy-indicating FDs
- if $\langle C_p, LHS \rangle$ is not a key for $T$, and
- $T$ satisfies $\langle C_p, LHS, RHS \rangle$, then the RHS is redundantly stored;
- to eliminate such an FD, the schema element corresponding to RHS is moved to a new schema location, so that those data elements are no longer redundantly stored.
- Let $\Sigma$ be the set of redundancy-indicating FDs.

**Example**
$./ISBN \rightarrow ./title$ w.r.t. $C_{book}$;
$\{../../name, ../contact/name, ./ISBN\} \rightarrow ./price$ w.r.t. $C_{book}$.
Assumption: $\{../name, ./contact/name\}$ is a key for $C_{store}$.

Local/global XML FD

**Definition** (Local/global XML FD) An XML FD \( \langle C_p, LHS, RHS \rangle \) is local if there exists \( LHS' \subset LHS \) such that \( \langle C_{p'}, LHS' \rangle \) is an XML key, where \( C_{p'} \) is an ancestor tuple class of \( C_p \) (i.e., \( p' \) is a prefix of \( p \)). Otherwise, the FD is *global*.

**Example** \( ./ISBN \rightarrow ./title \) w.r.t. \( C_{book} \) is global, because no subset of its LHS is a key for any tuple class above \( C_{book} \).
**Means**: for any two books, regardless of whether they are under the same store or state, if they have the same ISBN, then they have the same title.

**Example** \( \{../../name, ../contact/name, ./ISBN\} \rightarrow ./price \) w.r.t. \( C_{book} \) is local, because \( \{../../name, ../contact/name\} \) is a key for \( C_{store} \).
**Means**: the state name and the store's contact name uniquely identify each store; any two books with the same ISBN have the same price, as long as they are under the same store.

Eliminate global FD Procedure 1
- Let $F = \{P_1, \ldots, P_n\} \rightarrow P_r$ w.r.t.
$C_p$ be a redundancy-indicating global FD on schema $S_{\text{root}}$;
- $\{e_i \mid i \in [1, n]\}$ and $\{\tau_i \mid i \in [1, n]\}$ be the sets of schema element labels and types, respectively, associated with each $P_i$;
- $e_r$ and $\tau_r$ be the schema element label and type, respectively, associated with $P_r$;
- $\tau_{\text{parent}}$ be the schema element type of the parent element of $P_r$;
- $\tau_{\text{root}} = \text{Rcd}[e'_1 : \tau'_1, \ldots, e'_m : \tau'_m]$ be the element type of the root element.

Eliminating redundancy:
- Create a new schema element with label $e_{\text{new}}$ and type $\tau_{\text{new}} = \text{SetOfRcd}[e_1 : \tau_1, \ldots, e_n : \tau_n, e_r : \tau_r]$;
- Set $\tau_{\text{root}} = \text{Rcd}[e'_1 : \tau'_1, \ldots, e'_m : \tau'_m, e_{\text{new}} : \tau_{\text{new}}]$;
- Remove $(e_r : \tau_r)$ from $\tau_{\text{parent}}$.

Eliminate global FD Example
\[ ./ISBN \rightarrow ./title \text{ w.r.t. } C_{book} \]

Schema after eliminating the global FD:

```plaintext
0  warehouse: Rcd
1    state: SetOf Rcd
2      name: str
3      store: SetOf Rcd
4        contact: Rcd
5          name: str
6          address: str
7        book: SetOf Rcd
8          ISBN: str
9          author: SetOf str
10         price: str
11   new-book: SetOf Rcd
12     ISBN: str
13     title: str
```

Adjusting FDs
- remove $F$ from $\Sigma$;
- the semantics of $F$ is captured by $\{P_1, \ldots, P_n\} \rightarrow P_r$ w.r.t. $C_{new}$; it is not redundancy-indicating, so it does not need to be added to $\Sigma$;
- remove all FDs from $\Sigma$ that are affected by the move of $P_r$.

Example $\{./author, ./title\} \rightarrow ./ISBN$ w.r.t. $C_{book}$ is removed, because it is no longer valid. This is safe because ISBN is no longer redundantly stored.

Eliminate local FD Procedure 2
- Let $F = \{P_1, \ldots, P_{k-1}, P_k, \ldots, P_n\} \rightarrow P_r$ w.r.t.
$C_p$ be a redundancy-indicating local FD on schema $S_{root}$;
- $\{P_1, \ldots, P_{k-1}\}$ is the key for $C'_p$;
- $C'_p$ is an ancestor tuple class of $C_p$, and there is no other subset $L$ of $\{P_i \mid i \in [1, n]\}$ such that $L$ is a key for $C''_p$, where $C''_p$ is an ancestor of $C_p$ and a descendant of $C'_p$ (i.e., $C'_p$ is the lowest tuple class that can be identified);
- $\{e_i \mid i \in [k, n]\}$ and $\{\tau_i \mid i \in [k, n]\}$ be the sets of schema element labels and types, respectively, associated with each $P_i$;
- $e_r$ and $\tau_r$ be the schema element label and type, respectively, associated with $P_r$;
- $\tau_{parent}$ be the schema element type of the parent element of $P_r$;
- $\tau_{p'} = \text{Rcd}[e'_1 : \tau'_1, \ldots, e'_m : \tau'_m]$ be the element type of the schema element corresponding to the pivot path of $C_{p'}$.

Eliminate redundancy
- Create a new schema element with label $e_{new}$ and type $\tau_{new} = \text{SetOfRcd}[e_k : \tau_k, \ldots, e_n : \tau_n, e_r : \tau_r]$;
- Set $\tau_{p'} = \text{Rcd}[e'_1 : \tau'_1, \ldots, e'_m : \tau'_m, e_{new} : \tau_{new}]$;
- Remove $(e_r : \tau_r)$ from $\tau_{parent}$.

Explanation
- to eliminate a local FD such as $\{../../name, ../contact/name, ./ISBN\} \rightarrow ./price$ w.r.t. $C_{book}$:
- create a new schema element containing the subset of its LHS (ISBN) that is not part of the key for the ancestor tuple class ($C_{store}$), together with the RHS element (price);
- put this new element under the schema element corresponding to the pivot path of the ancestor tuple class ($/warehouse/state/store$);
- the RHS element is removed from its original position.

Eliminate redundancy cont.
- by creating the new schema element under the non-root ancestor, fewer elements need to be copied under the new schema;
- after the modification of the schema, remove any FD that is affected by the move of $P_r$.

Schema after eliminating the local FD:

```plaintext
0  warehouse: Rcd
1    state: SetOf Rcd
2      name: str
3      store: SetOf Rcd
4        contact: Rcd
5          name: str
6          address: str
7        book: SetOf Rcd
8          ISBN: str
9          author: SetOf str
10         title: str
11       new-book: SetOf Rcd
12         ISBN: str
13         price: str
```

Special case for Procedure 2
- if the entire LHS of the FD is a key for some ancestor tuple class.

**Example** In the DBLP schema, the *year* of an *article* is determined by the identity (@key) of the *issue* containing the *article*:
- instead of creating a new schema element containing the single element *year*,
- we move *year* under *issue*.

Normalization algorithm

**Algorithm SchemaNormalization:**
**Input:** Schema $S$, a set $\Sigma$ of redundancy-indicating FDs, and a set $\Upsilon$ of XML keys (used to decide whether an FD is local or global).
1. Group the FDs in $\Sigma$ by tuple class $C_p$ and $LHS$, ordering them by decreasing depth of $C_p$ first (lowest first) and by increasing number of paths in $LHS$ second (fewest first);
2. while $\Sigma$ is not empty:
3. let $\mathcal{F}$ be the first set of FDs in $\Sigma$ with the same $LHS$ and $C_p$;
4. let $F$ be the first FD in $\mathcal{F}$;
5. if $F$ is local: modify $S$ by applying Procedure 2;
6. else modify $S$ by applying Procedure 1; // $F$ is global
7. foreach additional $F' \in \mathcal{F}$:
8. modify $S$ in the same way by applying Procedure 1 or 2, but reusing the new schema element already created when dealing with $F$;
9. remove all FDs in $\mathcal{F}$ from $\Sigma$;
10. foreach $F \in \Sigma$:
11. if $F$ is no longer valid: remove $F$ from $\Sigma$;
12. if $F$ is now structurally redundant:
13.
convert $F$ into its equivalent $F'$ that is not structurally redundant and add $F'$ to $\Sigma$ (see Theorem 2);
**Output:** Schema $S$, the modified redundancy-free schema.

Normalization algorithm explanation
- FDs are grouped according to their LHS.

**Example**
- \( ./ISBN \rightarrow ./title \) w.r.t. \( C_{book} \)
- \( ./ISBN \rightarrow ./author \) w.r.t. \( C_{book} \)
- if they were dealt with separately, two new schema elements would be created.
- FDs are processed in order of increasing number of paths in their LHS, to reduce the storage cost.

**Example** \( \{./title, ./author\} \rightarrow ./ISBN \) w.r.t. \( C_{book} \)
- If this FD is processed first, then the elements title and author will remain under book, not ISBN.

Normalization algorithm explanation cont.
- FDs are processed according to the hierarchy depth of their tuple class in a bottom-up fashion (lowest first);
- this is because, while processing the FDs of a lower tuple class, redundancy-indicating FDs for a higher tuple class may be created;
- the algorithm terminates because each application of Procedure 1 or 2 either removes one redundancy-indicating FD or converts one redundancy-indicating FD into another one whose tuple class is higher in the hierarchy.

GTT-XNF schema of the warehouse XML data
- Eliminating first:
  ./ISBN → ./title w.r.t. $C_{book}$
  ./ISBN → ./author w.r.t. $C_{book}$
- this FD is then no longer redundancy-indicating:
  \{ ../../name, ../contact/name, ./ISBN \} → ./price w.r.t. $C_{book}$

```plaintext
0  warehouse: Rcd
1    state: SetOf Rcd
2      name: str
3      store: SetOf Rcd
4        contact: Rcd
5          name: str
6          address: str
7        book: SetOf Rcd
8          ISBN: str
9          price: str
10   new-book: SetOf Rcd
11     ISBN: str
12     title: str
13     author: SetOf str
```

Introduction to FCA
- From a philosophical point of view, a concept is a unit of thought consisting of two parts:
- the extension, which comprises the objects;
- the intension, consisting of all attributes valid for the objects of the context.
- Formal Concept Analysis (FCA), introduced by Wille, gives a
mathematical formalization of the concept notion.
- A detailed mathematical foundation of FCA can be found in the literature.
- Formal Concept Analysis is applied in many different realms, such as psychology, sociology, computer science, biology, medicine and linguistics.
- FCA is a useful tool to explore the conceptual knowledge contained in a database by analyzing the formal conceptual structure of the data.

Introduction to FCA
- FCA studies how objects can be hierarchically grouped together according to their common attributes. In FCA the data is represented by a cross table, called a formal context.
- A formal context is a triple \((G, M, I)\):
- \(G\) is a finite set of objects;
- \(M\) is a finite set of attributes;
- the relation \(I \subseteq G \times M\) is a binary relation between objects and attributes.
- Each pair \((g, m) \in I\) denotes the fact that the object \(g \in G\) is related to the attribute \(m \in M\).

Introduction to FCA

For a set $A \subseteq G$ of objects we define
$$A' := \{ m \in M \mid gIm \text{ for all } g \in A\}$$
the set of all attributes common to the objects in $A$. Dually, for a set $B \subseteq M$ of attributes we define
$$B' := \{ g \in G \mid gIm \text{ for all } m \in B\}$$
the set of all objects which have all attributes in $B$.

A formal concept of the context $\mathbb{K} := (G, M, I)$ is a pair $(A, B)$ with $A \subseteq G$, $B \subseteq M$, $A' = B$, and $B' = A$. We call $A$ the extent and $B$ the intent of the concept $(A, B)$. The set of all concepts of the context $(G, M, I)$ is denoted by $\mathcal{B}(G, M, I)$.

Example formal context
- The following cross table describes, for some hotels, the facilities they offer.
- In this case the objects are: Oasis, Royal, Amelia, California, Grand, Samira;
- and the attributes are: Internet, Sauna, Jacuzzi, ATM, Babysitting.
- $\{\text{California, Grand}\}' = \{\text{Sauna, Jacuzzi}\}$.
<table>
<thead>
<tr> <th></th> <th>Internet</th> <th>Sauna</th> <th>Jacuzzi</th> <th>ATM</th> <th>Babysitting</th> </tr>
</thead>
<tbody>
<tr> <td>Oasis</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr>
<tr> <td>Royal</td> <td>X</td> <td>X</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Amelia</td> <td>X</td> <td></td> <td></td> <td></td> <td>X</td> </tr>
<tr> <td>California</td> <td>X</td> <td>X</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Grand</td> <td>X</td> <td></td> <td></td> <td></td> <td>X</td> </tr>
<tr> <td>Samira</td> <td></td> <td></td> <td></td> <td></td> <td>X</td> </tr>
</tbody>
</table>

**Table:** Formal context of the Hotel facilities example

Conceptual lattice of Hotel facilities

FCA tool to detect XML FDs
- we elaborate an FCA-based tool that identifies functional dependencies in XML documents.
- to achieve this, as a first step, we have to construct the formal context of functional dependencies for XML data.
- we have to identify the objects and attributes of this context in the case of XML data.
- the tuple-based XML FD notion presented above suggests a natural technique for XFD discovery:
- XML data can be converted into a fully unnested relation (a single relational table), to which existing FD discovery algorithms can be applied directly.
- given an XML document, which contains its schema at the beginning, we create generalized tree tuples from it.

Construct Formal Context of XML FDs
- each tree tuple in a tuple class has the same structure, so it has the same number of elements.
- we use the flat representation, which converts the generalized tree tuples into a flat table;
- each row in the table corresponds to a tree tuple in the XML tree;
- in the flat table we insert both non-leaf and leaf-level elements (or attributes) from the tree;
- for non-leaf-level nodes, the associated keys are used as values;
- we include non-leaf-level nodes with their associated key values in order to detect XML keys.

Flat table for tuple class $C_{Orders}$

Example Let us construct the flat table for tuple class $C_{Orders}$. There are two non-leaf nodes:
- Orders, which appears as Orders@key;
- OrderDetails, which appears as OrderDetails@key.

Formal Context for class $C_{Orders}$
- **Context's attributes:** $PathEnd/ElementName$
- for non-leaf-level nodes, the attribute name is constructed as $<ElementName> + "@key"$ and its value is the associated key value;
- for leaf-level nodes, the element names of the leaves are used.
- **Context's objects:** the objects are the tree tuple pairs, i.e., the pairs of rows of the flat table. The key values associated with non-leaf elements and the leaf elements' values are used in these tuple pairs.
- **Context's incidence relation:** the mapping between objects and attributes is defined by a binary relation; this incidence relation of the context records which attributes of a tuple pair have the same value.

Beginning of the Formal Context of functional dependencies for tuple class \( C_{Orders} \)
- the analyzed XML document may have a large number of tree tuples;
- we therefore filter the tuple pairs and leave out those pairs that share no common attributes, by an operation called "clarifying the context", which does not alter the conceptual hierarchy.

Concept Lattice of functional dependencies' Formal Context for tuple class $C_{\text{Orders}}$
- we run the Concept Explorer (ConExp) engine to generate the concepts and create the concept lattice.
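The context construction described above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual tool: tree tuples are assumed to be given as path-to-value dictionaries, and the data is made up.

```python
# Sketch: formal context of FDs for one tuple class. Objects are pairs of
# distinct tree tuples; attributes are the paths of the flat table; a pair
# is incident to a path when both tuples agree on that path's value.
from itertools import combinations

def build_context(tuples, paths):
    """Return {pair_of_indices: set_of_agreeing_paths}."""
    context = {}
    for (i, t1), (j, t2) in combinations(enumerate(tuples), 2):
        agree = {p for p in paths
                 if t1.get(p) is not None and t1.get(p) == t2.get(p)}
        if agree:  # "clarifying": drop pairs with no common attributes
            context[(i, j)] = agree
    return context

# Illustrative C_Orders tuples (one row per unnested OrderDetail):
orders = [
    {"Orders@key": 1, "OrderID": "10248", "CustomerID": "VINET",
     "OrderDetails/ProductID": "11"},
    {"Orders@key": 1, "OrderID": "10248", "CustomerID": "VINET",
     "OrderDetails/ProductID": "42"},
    {"Orders@key": 2, "OrderID": "10249", "CustomerID": "TOMSP",
     "OrderDetails/ProductID": "14"},
]
ctx = build_context(
    orders, ["Orders@key", "OrderID", "CustomerID", "OrderDetails/ProductID"])

# Only the pair (0, 1) survives clarification; it agrees on the first
# three paths but not on the ProductID.
assert ctx == {(0, 1): {"Orders@key", "OrderID", "CustomerID"}}
```

Each surviving object's intent is exactly the agree set of the tuple pair, which is what the concept lattice is later built from.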
Processing the Output of FCA
- a concept lattice consists of the set of concepts of a formal context and the subconcept-superconcept relation between the concepts;
- every circle represents a formal concept;
- each concept is a pair of a set of objects and a set of common attributes, but only the attributes are listed;
- an edge connects two concepts if one implies the other directly;
- each link connecting two concepts represents the transitive subconcept-superconcept relation between them;
- the top concept has all formal objects in its extent;
- the bottom concept has all formal attributes in its intent.

The relationship between FDs in databases and implications in FCA

An FD $X \rightarrow Y$ holds in a relation $r$ over $R$ iff the implication $X \rightarrow Y$ holds in the context $(G, R, I)$, where $G = \{(t_1, t_2) \mid t_1, t_2 \in r, t_1 \neq t_2\}$ and $\forall A \in R,$ $(t_1, t_2) I A \iff t_1[A] = t_2[A].$
- the objects of the context are pairs of tuples, and each object's intent is the agree set of that pair;
- the implications in this lattice correspond to functional dependencies in XML.

Example
$\langle C_{Orders}, ./OrderID, ./CustomerID \rangle$
$\langle C_{Orders}, ./Orders@key, ./CustomerID \rangle$
$\langle C_{Orders}, ./OrderDetail/OrderID, ./CustomerID \rangle$

Reading the Concept Lattice
- in the lattice we list only the attributes; these are what is relevant for our analysis;
- let there be a concept labeled by $A, B$ and a second concept labeled by $C$, where $A, B$ and $C$ are FCA attributes;
- let the concept labeled by $A, B$ be a subconcept of the concept labeled by $C$;
- the tuple pairs of the concept labeled by $A, B$ have the same values for attributes $A$ and $B$, and also for attribute $C$;
- the tuple pairs of the concept labeled by $C$ do not have the same values for attribute $A$ or $B$, but have the same value for attribute $C$;
- the tuple pairs of every subconcept of the concept labeled by $A, B$ have the same values for attributes $A$ and $B$;
- the labeling of the lattice is simplified by listing each attribute only once, at the highest level.

Reading the Concept Lattice
- we analyze attributes $A$ and $B$:
- if only $A \rightarrow B$ held, then $A$'s concept would be a subconcept of $B$'s;
- if only $B \rightarrow A$ held, then $B$'s concept would be a subconcept of $A$'s;
- here we have both $A \rightarrow B$ and $B \rightarrow A$, which is why they appear side by side in the lattice.
- So attributes within one concept imply each other.

Example We have the following XML FDs:
\[ \langle C_{Orders}, ./OrderID, ./OrderDetails/OrderID \rangle \\ \langle C_{Orders}, ./OrderID, ./Orders@key \rangle \\ \langle C_{Orders}, ./Orders@key, ./OrderID \rangle \\ \langle C_{Orders}, ./Orders@key, ./OrderDetails/OrderID \rangle \\ \langle C_{Orders}, ./OrderDetails/OrderID, ./Orders@key \rangle \\ \langle C_{Orders}, ./OrderDetails/OrderID, ./OrderID \rangle \]

The functional dependencies found by the software FCAMineXFD

**Figure**: Functional dependencies in tuple class $C_{Orders}$

The concept lattice for the whole XML document

Data Analysis
- we can see the hierarchy of the analyzed data:
- the node labeled *Customers/Country* is on a higher level than the node labeled *Customers/City*;
- the Customers node, with all its attributes, is a subconcept of the node labeled *Customers/City*;
- in our XML data every customer has a different name, address and phone number, so these attributes appear in a single concept node and **imply each other**;
- the Orders node in the XML is a child of Customers; in the lattice, the node labeled with the key of the Orders node is a subconcept of the Customers node, so the hierarchy is visible;
- these are 1:n relationships: from Country to City, from City to Customers, from Customers to Orders.
- information about products is on the other side of the lattice; Products are in an n:m relationship with Customers, linked in this case by the OrderDetail node.
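The FD-implication correspondence stated above can be checked directly on the pair context without building the full lattice. A minimal sketch with made-up data (the attribute names are illustrative, not the tool's API):

```python
# Sketch: an implication X -> Y holds in the pair context iff every
# tuple pair whose intent (agree set) contains all of X also contains Y.
from itertools import combinations

def implication_holds(context, X, Y):
    """context: {pair: intent as a set of attributes}; X, Y: sets."""
    return all(Y <= intent for intent in context.values() if X <= intent)

rows = [
    {"OrderID": "10248", "CustomerID": "VINET", "ProductID": "11"},
    {"OrderID": "10248", "CustomerID": "VINET", "ProductID": "42"},
    {"OrderID": "10249", "CustomerID": "TOMSP", "ProductID": "11"},
]
attrs = ["OrderID", "CustomerID", "ProductID"]
context = {
    (i, j): {a for a in attrs if r1[a] == r2[a]}
    for (i, r1), (j, r2) in combinations(enumerate(rows), 2)
}

# OrderID -> CustomerID: the only pair agreeing on OrderID (rows 0, 1)
# also agrees on CustomerID, so the FD holds.
assert implication_holds(context, {"OrderID"}, {"CustomerID"})
# ProductID -> OrderID fails: rows 0 and 2 share a ProductID but belong
# to different orders.
assert not implication_holds(context, {"ProductID"}, {"OrderID"})
```

In lattice terms, the same check amounts to verifying that the attribute concept of Y lies above (is a superconcept of) the meet of the attribute concepts of X.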
FDs for the whole XML document

Finding XML keys

FDs whose RHS is ./@key can be used to detect keys in XML. In tuple class $C_{Orders}$ we have the XML FDs:
- $\langle C_{Orders}, ./OrderID, ./@key \rangle$, which implies that $\langle C_{Orders}, ./OrderID \rangle$ is an XML key;
- $\langle C_{Orders}, ./OrderDetails/OrderID, ./@key \rangle$, so $\langle C_{Orders}, ./OrderDetails/OrderID \rangle$ is an XML key too.

In tuple class $C_{Customers}$ the software found the XML FD:
- $\langle C_{Customers}, ./CustomerID, ./@key \rangle$, which implies that $\langle C_{Customers}, ./CustomerID \rangle$ is an XML key.
- other detected XML keys are:
- $\langle C_{Customers}, ./Orders/CustomerID \rangle$;
- $\langle C_{Customers}, ./CompanyName \rangle$;
- $\langle C_{Customers}, ./Address \rangle$;
- $\langle C_{Customers}, ./Phone \rangle$.

Detecting XML data redundancy
- given the set of functional dependencies for the XML data in a tuple class, we can detect the interesting functional dependencies;
- in the essential tuple class \( C_{Orders} \) an interesting FD is \[ \langle C_{Orders}, ./OrderDetails/ProductID, ./OrderDetails/ProductName \rangle \]
- but \[ \langle C_{Orders}, ./OrderDetails/ProductID \rangle \] is not an XML key;
- so this FD indicates a data redundancy;
- the same reasoning applies to the XML FD \[ \langle C_{Orders}, ./OrderDetails/ProductName, ./OrderDetails/ProductID \rangle \];
- the other XML FDs have as LHS a key for tuple class \( C_{Orders} \).

Conclusions
- This paper introduces an approach for mining functional dependencies in XML documents based on FCA.
- Based on the flat representation of XML, we constructed the concept lattice.
- We analyzed the resulting concepts, which allowed us to discover a number of interesting dependencies.
- Our framework offers a graphical visualization for dependency exploration.

Future Work
- given the set of dependencies discovered by our tool:
- propose a normalization algorithm for converting any XML schema into a correct one.
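The key- and redundancy-detection rules used above can be sketched as a small post-processing step over the discovered FDs of one tuple class. This is a simplified, hypothetical sketch: it treats an LHS as a key only when that exact LHS determines ./@key, ignoring subset reasoning, and the FD list is illustrative.

```python
# Sketch: from discovered FDs (lhs frozenset of paths, rhs path),
# derive XML keys and flag redundancy-indicating FDs.
def analyze(fds):
    # An LHS determining ./@key is an XML key for the tuple class.
    keys = {lhs for lhs, rhs in fds if rhs == "./@key"}
    # Any other FD whose LHS is not a key indicates stored redundancy.
    redundant = [(lhs, rhs) for lhs, rhs in fds
                 if rhs != "./@key" and lhs not in keys]
    return keys, redundant

fds = [
    (frozenset({"./OrderID"}), "./@key"),
    (frozenset({"./OrderID"}), "./CustomerID"),
    (frozenset({"./OrderDetails/ProductID"}), "./OrderDetails/ProductName"),
]
keys, redundant = analyze(fds)

assert frozenset({"./OrderID"}) in keys
# ProductID -> ProductName with a non-key LHS flags a redundancy:
assert redundant == [(frozenset({"./OrderDetails/ProductID"}),
                      "./OrderDetails/ProductName")]
```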
Software Verification and Validation Laboratory
TR-SnT-2014-2, February 4, 2014

**OCLR: a More Expressive, Pattern-based Temporal Extension of OCL**

Wei Dou, Domenico Bianculli, and Lionel Briand
SnT Centre, Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg, Luxembourg
{wei.dou,domenico.bianculli,lionel.briand}@uni.lu

**Abstract.** Modern enterprise information systems often require their functional and non-functional (e.g., Quality of Service) requirements to be specified using expressions that contain temporal constraints. Specification approaches based on temporal logics demand a certain knowledge of mathematical logic, which is difficult to find among practitioners; moreover, tool support for temporal logics is limited. On the other hand, a standard language such as the Object Constraint Language (OCL), which benefits from the availability of several industrial-strength tools, does not support temporal expressions. In this paper we propose OCLR, an extension of OCL with support for temporal constraints based on well-known property specification patterns. With respect to previous extensions, we add support for referring to a specific occurrence of an event as well as for indicating a time distance between events and/or scope boundaries. The proposed extension defines a new syntax, very close to natural language, paving the way for a rapid adoption by practitioners. We show the application of the language in a case study in the domain of eGovernment, developed in collaboration with a public service partner.

1 Introduction

Complex software systems, such as modern enterprise information systems, call for the definition of requirements specifications that include both functional and non-functional aspects (such as QoS, Quality of Service).
In both cases, the specifications might characterize (quantitative) aspects of the system that involve temporal constraints. Examples of these constraints are bounds on the sequence and/or number of occurrences of system events, possibly combined with constraints on the temporal distance of events. These types of specifications have been catalogued in various collections of property specification patterns, to help analysts and developers express typical, recurrent properties of a system in a generalized yet structured and precise form. The majority of property specification patterns have emerged in the context of concurrent, real-time critical systems [6, 12, 9], though there have been recent proposals of specification patterns for specific domains, like service-based applications [1]. In all cases, the patterns have been formalized in terms of some temporal logic, either a classic one like LTL or CTL, or a more specialized one like SOLOIST [2]. One problem in using a specification language based on a temporal logic is that it requires a strong theoretical background, which is rarely found among practitioners. Moreover, tool support for the verification of properties expressed in temporal logic is prototypical and limited, at least when considered in the context of applying this kind of formal method at a scalable, industrial-grade level. One of the specification languages that has found significant consensus and adoption in industry is the Object Constraint Language (OCL) [10], used to specify constraints on models, and now a standard in the context of model-driven engineering practice. However, OCL does not support the specification of temporal requirements. There have been several research proposals to extend OCL with temporal constructs.
Nevertheless, in the scope of a collaboration with a public service partner active in the domain of eGovernment, we found that the available temporal extensions of OCL do not meet the expressiveness requirements determined in our field study, which was based on realistic specifications extracted from a collection of eGovernment business process descriptions. In this paper we propose a new language, called OCLR, to fill the expressiveness gap that we found in the field. OCLR is an extension of OCL that supports temporal constraints based on some of the well-known property specification patterns. More specifically, we advance the state of the art by introducing support for referring to a specific occurrence of an event in scope boundaries, as well as for indicating a time distance between events and/or scope boundaries. Our language extends OCL in a minimal fashion while maximizing the expressiveness of temporal properties; moreover, its syntax is very close to natural language, to encourage practitioners to use it. To show the feasibility of using OCLR in realistic scenarios, we include a case study in the context of an eGovernment application developed by our public service partner. In the future, our intent is to adopt OCLR in the context of a larger project on model-driven run-time verification of business processes. Since in this project we plan to leverage existing industrial-strength OCL tools, such as constraint verification engines, we decided to minimize, by design, the differences between the models underlying OCLR and OCL. We believe that making OCLR a minimal extension of OCL will make the translation of OCLR expressions into regular OCL ones much easier than performing the same translation starting from expressions written in a language much more distant from OCL, such as a temporal logic. The rest of this paper is structured as follows. In Sect. 2 we discuss the motivations for which, and the context in which, this work has been developed.
Section 3 introduces OCLR, its syntax, and its (informal) semantics. In Sect. 4 we show the application of OCLR in a case study in the domain of eGovernment. We survey related work in Sect. 5. Section 6 concludes the paper, providing directions for future work.

---
1 In fact, OCLR stands for "OCL for Run-time verification".
2 The translation from OCLR to OCL is out of the scope of this paper.
3 The complete definition of the formal semantics of OCLR is available in the appendix.

2 Motivations

This work has been developed as part of an ongoing collaboration with CTIE (Centre des technologies de l'information de l'Etat), the Luxembourg national center for information technology. The main role of CTIE is to lead the development of electronic government (eGovernment) initiatives within Luxembourg, with the ultimate goal of delivering digital public services to citizens and enterprises, as well as improving the processes followed by the public administration. The business processes designed for public administrations are usually highly complex and require the interaction of different stakeholders. In particular, they act as the "glue" that orchestrates different information systems, possibly provided by many different organizations, in an effort to foster cooperation among administrations. Given the complexity and the many interactions foreseen for eGovernment business processes, designing effective and efficient processes to drive e-service delivery is one of the most challenging tasks for public administrations. For these reasons, their development is gradually moving towards model-driven techniques. This is the case for CTIE, which has developed in-house a model-driven methodology for designing eGovernment business processes. Usually these processes are designed as compositions of services provided by different organizations, administrations, or third-party suppliers.
A service integrator has to monitor the execution of the third-party services it uses to check whether they fulfill their obligations (both in terms of functional and non-functional properties), so that the business process itself can meet its requirements. Furthermore, it is also important to verify at run time whether the business process execution complies with the constraints specified during the modeling phase, to detect when a failure occurs and to possibly determine corrective actions. In this context, we are involved in a project on model-driven, run-time verification of (eGovernment) business processes. One of the first steps of this project consisted in identifying the type of constraints to check at run time. We analyzed several applications developed by CTIE and scrutinized the requirements specifications associated with all use cases and business process descriptions. We were able to recast the majority of specifications written in natural language using the system of property specification patterns (and scopes) proposed by Dwyer et al. [6]. However, in some cases the original definitions proposed in [6] had to be extended to match the system specifications. For example, the definitions of property specification scopes, used to refer to the extent of a program execution over which a pattern must hold, had to be extended to support references to a specific occurrence of an event (not only the first one as in [6]), as in the requirement “event A shall occur before the second occurrence of event X”. Another variant of this type of scope boundary that we found is the one with requirements on the distance between events, such as “event A shall occur five time units before the second occurrence of event X”. In some cases, the requirements specifications had to be expressed in terms of some real-time specification patterns [12,9], which quantitatively define distance among events and durations of events. 
Based on the results of this phase, we turned to the definition of a high-level specification language for expressing this type of constraints. The intrinsically temporal nature of the requirements specifications we found, which include real-time constraints, might suggest building on some temporal logic. However, specification languages based on temporal logic require a degree of mathematical knowledge that is uncommon among practitioners such as business analysts or software engineers. Moreover, the array of tools available for the verification of temporal logic is limited, especially if one considers the additional requirement of applying them in realistic industrial contexts. Given these limitations, and given the model-driven engineering practice already in place at our public service partner, we decided to define our specification language as an extension of OCL. In this way, we can build on a language that is standardized, is known among practitioners, and has a wide set of well-established, industrial-strength tools, such as constraint verification engines.

3 OCLR

The design of OCLR is based on Dwyer et al.'s property specification pattern system [6]. This system defines five scopes (globally, before, after, between-and, and after-until) and eight patterns (universality, absence, existence, bounded existence, precedence, response, precedence chain, and response chain). In the definition of OCLR we decided to support all these scopes and patterns, with the following extensions: – The possibility, in the definition of a scope boundary, to refer to a specific occurrence of an event, as in "before the second occurrence of event X...". – The possibility to indicate a time distance with respect to a scope boundary, as in "at least (at most) two time units before the n-th occurrence of event X...".
– Support for expressing the time distance between event occurrences, to express properties like a bounded response, such as "event B should occur in response to event A within 2 time units". These design choices have been motivated by the type of properties that we found while analyzing the requirements specifications of our public service partner, as well as by the lack of support for them in the current temporal extensions of OCL (see Sect. 5). OCLR has been inspired by the design of Temporal OCL [11], another pattern-based temporal extension of OCL. As we will discuss in more detail in Sect. 5, Temporal OCL lacks the language features described above. Nevertheless, we borrow from it the notion of event, i.e., a predicate that specifies a set of instants within the time line; the specific types of events supported in the language are described in the following subsection.

3.1 Syntax

The syntax of OCLR (also inspired by that of Temporal OCL [11]) is shown in Fig. 1: non-terminals are enclosed in angle brackets, terminals are enclosed in single quotes, and underlined italic words are non-terminals defined in the OCL grammar [10]. An ⟨OCLRBlock⟩ comprises a set of conjoined ⟨TemporalClause⟩s, beginning with the keyword ‘temporal’. Each temporal clause contains a temporal expression that consists of a ⟨Scope⟩ and a ⟨Pattern⟩; the scope specifies the time slot(s) during which the property described by the pattern is checked.
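As a reading aid for the grammar (this model is our own illustration, not part of any OCLR tooling; all class and field names are ours), a temporal clause of this shape can be pictured as a small abstract syntax tree, sketched here in Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Boundary:
    """A scope boundary: the m-th occurrence of an event,
    optionally shifted by a time distance in time units (tu)."""
    event: str                          # name of a SimpleEvent
    occurrence: int = 1                 # 'after 2 X' -> occurrence = 2
    distance_op: Optional[str] = None   # 'at least' | 'at most' | 'exactly'
    distance_tu: int = 0

@dataclass
class Scope:
    kind: str  # 'globally' | 'before' | 'after' | 'between-and' | 'after-until'
    first: Optional[Boundary] = None
    second: Optional[Boundary] = None

@dataclass
class Pattern:
    kind: str  # 'always' | 'eventually' | 'never' | 'preceding' | 'responding'
    events: tuple = ()

@dataclass
class TemporalClause:
    scope: Scope
    pattern: Pattern

# The expression "after 2 X exactly 8 tu eventually C" then becomes:
clause = TemporalClause(
    scope=Scope('after', Boundary('X', occurrence=2,
                                  distance_op='exactly', distance_tu=8)),
    pattern=Pattern('eventually', ('C',)),
)
```

The point of the sketch is only to show how a clause decomposes into a scope (with its optionally shifted boundaries) and a pattern, mirroring the grammar that follows.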
⟨OCLRBlock⟩ ::= ‘temporal’ ⟨TemporalClause⟩+
⟨TemporalClause⟩ ::= [⟨simpleNameCS⟩ ‘:’] [⟨Quantif⟩] ⟨TemporalExp⟩
⟨Quantif⟩ ::= ‘let’ ⟨VariableDeclarationCS⟩ ‘in’
⟨TemporalExp⟩ ::= ⟨Scope⟩ ⟨Pattern⟩
⟨Scope⟩ ::= ‘globally’
        | ‘before’ ⟨Boundary1⟩
        | ‘after’ ⟨Boundary1⟩
        | ‘between’ ⟨Boundary2⟩ ‘and’ ⟨Boundary2⟩
        | ‘after’ ⟨Boundary2⟩ ‘until’ ⟨Boundary2⟩
⟨Pattern⟩ ::= ‘always’ ⟨Event⟩
        | ‘eventually’ ⟨RepeatableEventExp⟩
        | ‘never’ [‘exactly’ ⟨IntegerLiteralExpCS⟩] ⟨Event⟩
        | ⟨EventChainExp⟩ ‘preceding’ [⟨TimeDistanceExp⟩] ⟨EventChainExp⟩
        | ⟨EventChainExp⟩ ‘responding’ [⟨TimeDistanceExp⟩] ⟨EventChainExp⟩
⟨Boundary1⟩ ::= [⟨IntegerLiteralExpCS⟩] ⟨SimpleEvent⟩ [⟨TimeDistanceExp⟩]
⟨Boundary2⟩ ::= [⟨IntegerLiteralExpCS⟩] ⟨SimpleEvent⟩ [‘at least’ ⟨IntegerLiteralExpCS⟩ ‘tu’]
⟨EventChainExp⟩ ::= ⟨Event⟩ (‘,’ [‘#’ ⟨TimeDistanceExp⟩] ⟨Event⟩)*
⟨TimeDistanceExp⟩ ::= ⟨ComparingOp⟩ ⟨IntegerLiteralExpCS⟩ ‘tu’
⟨RepeatableEventExp⟩ ::= [[⟨ComparingOp⟩] ⟨IntegerLiteralExpCS⟩] ⟨Event⟩
⟨ComparingOp⟩ ::= ‘at least’ | ‘at most’ | ‘exactly’
⟨Event⟩ ::= (⟨SimpleEvent⟩ | ⟨ComplexEvent⟩) [‘|’ ⟨Event⟩]
⟨ComplexEvent⟩ ::= ‘isCalled’ ‘(’ ‘anyOp’ [‘,’ ‘pre:’ ⟨OCLExpressionCS⟩] [‘,’ ‘post:’ ⟨OCLExpressionCS⟩] ‘)’ [‘\’ ⟨Event⟩]
⟨SimpleEvent⟩ ::= ⟨SimpleCallEvent⟩ | ⟨SimpleChangeEvent⟩
⟨SimpleChangeEvent⟩ ::= ‘becomesTrue’ ‘(’ ⟨OCLExpressionCS⟩ ‘)’
⟨SimpleCallEvent⟩ ::= ‘isCalled’ ‘(’ ⟨OperationCallExpCS⟩ [‘,’ ‘pre:’ ⟨OCLExpressionCS⟩] [‘,’ ‘post:’ ⟨OCLExpressionCS⟩] ‘)’

Fig. 1. Grammar of OCLR

The definitions of the ⟨Event⟩s that can be used in a temporal expression are adapted from [11]. The keyword isCalled represents a call event, which corresponds to a call to an operation. Under the hypothesis of atomicity of operations, we merge into a single call event the events corresponding to the call, the start, and the end of an operation.
A call event has three parameters: the called operation; the precondition (optional, in the form of an OCL expression), which acts as a guard over the system pre-state and the operation parameters of the actual call execution; and the postcondition (optional, in the form of an OCL expression), which acts as a guard over the system post-state and the return value of the call invocation. Notice that a call event is raised only if the operation is invoked *and* both the precondition and the postcondition are satisfied. The keyword anyOp is used if no operation is specified; in this case the call event becomes a state change event, from the state determined by the precondition to the state determined by the postcondition. The keyword becomesTrue denotes a state change event parameterized with the OCL expression provided as parameter: it corresponds to the state in which the input expression becomes true (which implies that in the previous state it evaluated to false). We also support the disjunction (‘|’) and exclusion (‘\’) operations on events.

3.2 OCLR at Work

We now present some examples of properties that can be expressed with OCLR, in order to provide the reader with a high-level, intuitive understanding of the language. We consider the history trace shown in Fig. 2 and for each property indicate whether or not it is violated by the trace. First, we define the properties in English:

1. "Event C will happen 8 time units after the second occurrence of event X." (satisfied)
2. "Event A should happen within 30 time units after the first occurrence of event X." (satisfied)
3. "Event C will eventually happen after at least 3 time units since the first occurrence of event X; and it must happen before event Y if the latter happens." (violated)
4.
"After the second occurrence of event X, event C will eventually happen exactly twice." (satisfied)
5. "Event C should happen at least once between every first occurrence of event X and the next event X; the time interval between event X and the first occurrence of event C should be at least 5 time units." (violated)

[Fig. 2. Sample event traces]

6. "Event B must happen at least 3 time units before the first occurrence of event Y." (satisfied)
7. "Before the first occurrence of event Y, once event X occurs, event A will happen followed by event B; the time interval between X and A is at least 3 time units." (satisfied)

The corresponding OCLR expressions are shown below:

1. temporal: after 2 X exactly 8 tu eventually C
2. temporal: after X at most 30 tu eventually A
3. temporal: after 1 X at least 3 tu until Y eventually C
4. temporal: after 2 X eventually exactly 2 C
5. temporal: between X at least 5 tu and Y eventually at least 1 C
6. temporal: before Y at least 3 tu eventually B
7. temporal: before Y A, B responding at least 3 tu X

3.3 Informal Semantics

In this section we present the informal semantics of the scopes and the patterns supported in OCLR expressions; they correspond to the non-terminals ⟨Scope⟩ and ⟨Pattern⟩, respectively. The full definition of the formal semantics is available in the appendix.

**Scopes.** For the description of scopes, we refer to the trace of events depicted in Fig. 3. We use symbols X and Y as shorthands for events that can be derived from the non-terminal ⟨SimpleEvent⟩.

[Fig. 3. A sample trace for the description of scopes]

**Before.** This scope identifies a portion of a trace up to a certain boundary.
The general template for this scope in OCLR is "before [m] X [⟨ComparingOp⟩ n tu]", where elements between brackets are optional, 'm' and 'n' are integers derived from the non-terminal ⟨IntegerLiteralExpCS⟩, and 'tu' stands for "time unit(s)". This template can be expanded in four forms: 1) "before X", 2) "before X ⟨ComparingOp⟩ n tu", 3) "before m X", 4) "before m X ⟨ComparingOp⟩ n tu". The first two forms are convenient shorthands for the third and fourth ones, respectively, with m = 1. The form "before m X" selects the portion of the trace up to the m-th occurrence of event X; see, for example, the top row in Fig. 4, where the interval from the origin of the trace up to the third occurrence of X is highlighted with a thick line. The form "before m X ⟨ComparingOp⟩ n tu" has three variants, depending on the possible expansions of the non-terminal ⟨ComparingOp⟩:

– "before m X at least n tu" identifies the scope from the origin of the trace up to n time units before the m-th occurrence of X;
– "before m X at most n tu" identifies the scope starting at n time units before the m-th occurrence of X and bounded to the right by the m-th occurrence of X;
– "before m X exactly n tu" pinpoints the time instant at n time units before the m-th occurrence of X.

Examples of these three variants are shown with thick segments in Fig. 4, with m = 3 and n = 2.

**After.** This scope identifies a portion of a trace starting from a certain boundary. It has a dual semantics with respect to the before scope. We provide an intuition of its semantics using Fig. 5, where the possible variants of this scope are represented as thick segments.

**Between-And.** This scope identifies portion(s) of a trace delimited by two boundaries.
The general template for this scope in OCLR is "between [m1] X [at least n1 tu] and [m2] Y [at least n2 tu]", where elements between brackets are optional, 'm1', 'm2', 'n1', 'n2' are integers derived from the non-terminal ⟨IntegerLiteralExpCS⟩, and 'tu' stands for "time unit(s)". This template can be expanded in four forms:

1. "between m1 X [at least n1 tu] and m2 Y [at least n2 tu]";
2. "between X [at least n1 tu] and m2 Y [at least n2 tu]";
3. "between m1 X [at least n1 tu] and Y [at least n2 tu]";
4. "between X [at least n1 tu] and Y [at least n2 tu]".

The first form is the most general: it selects the single segment of the trace delimited by the m1-th occurrence of event X and the m2-th occurrence of event Y happening after the m1-th occurrence of X. The second and third forms are shorthands for the first one, with m1 = 1 and m2 = 1, respectively. The fourth form is the closest to the original definition in [6], since it selects all the segments in the trace delimited by the boundaries. In this regard, notice the difference with respect to the expression "between 1 X and 1 Y", which selects the segment delimited by the first occurrence of X and the first occurrence of Y after X. In all forms it is possible to use the expression "at least n tu" when defining boundaries, with the same meaning described for the before scope. Four examples of the Between-and scope are shown in Fig. 6.

**After-Until.** This scope is similar to Between-and, with the difference that each identified segment extends to the right in case the event defined by the second boundary does not occur; this peculiarity can be noticed in the first two rows of Fig. 7, when compared with those in Fig. 6.

**Globally.** This scope corresponds to the entire trace shown in Fig. 3. Note that all scopes but those using the ‘exactly’ keyword do not include the events occurring at the boundaries of the scope itself.

**Patterns.** OCLR supports the eight patterns defined in [6].
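Before detailing the individual patterns, the scope semantics described above can be made concrete with a minimal sketch (ours, not part of any OCLR tool; integer timestamps and half-open intervals are our assumptions) of how the variants of the before scope select a time interval from a trace of (timestamp, event) pairs:

```python
from typing import Optional

def occurrence_time(trace, event: str, m: int) -> Optional[int]:
    """Timestamp of the m-th occurrence (1-based) of `event`, or None."""
    count = 0
    for t, e in trace:
        if e == event:
            count += 1
            if count == m:
                return t
    return None

def before_scope(trace, event: str, m: int = 1,
                 op: Optional[str] = None, n: int = 0):
    """Half-open interval [lo, hi) selected by 'before m event [op n tu]',
    with op one of 'at least', 'at most', 'exactly' (None for no distance)."""
    t = occurrence_time(trace, event, m)
    if t is None:
        return None                # the boundary event never occurs m times
    if op is None:
        return (0, t)              # 'before m X': origin up to the m-th X
    if op == "at least":
        return (0, t - n)          # up to n tu before the m-th X
    if op == "at most":
        return (t - n, t)          # the n tu leading up to the m-th X
    if op == "exactly":
        return (t - n, t - n + 1)  # the single instant n tu before the m-th X
    raise ValueError(f"unknown comparing operator: {op}")
```

For instance, over the trace [(2, "X"), (9, "X"), (14, "X")], "before 3 X at most 2 tu" selects the interval [12, 14).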
**Universality.** It states that a certain event should always happen within the given scope.

**Existence.** It indicates that the given scope contains some occurrence(s) of a certain event. This pattern comes in four forms:

- "eventually A" means that the event A happens at least once;
- "eventually at least m A" means that A happens at least m times;
- "eventually at most m A" means that A happens at most m times;
- "eventually exactly m A" means that A happens exactly m times.

The last three forms are variants of the bounded existence pattern, a subclass of the existence one.

**Absence.** It states that a certain event never occurs in the given scope. It is also possible to specify that a specific number of occurrences of the same event should not happen, as in "never exactly 2 X", which says that X should never occur exactly twice.

**Precedence.** This pattern (also available in the variant called precedence chain) indicates the precondition relationship between a pair of events (respectively, the two blocks of a chain) in which the occurrence of the second event (respectively, block) depends on the occurrence of the first event (respectively, block).

[Fig. 6 example scopes: between X and Y; between X and Y at least 2 tu; between 1 X at least 2 tu and 2 Y; between 2 X at least 2 tu and 1 Y at least 2 tu]

Based on this original definition, we added support for timing information to enable expressing the time distance between two adjacent events. The semantics can be explained using the following example and the event trace in Fig. 8; the expression "A preceding at most 10 tu B, # at least 5 tu C" indicates that the event A is the precondition of the block "B followed by C", that the time distance between A and B is at most 10 time units, and that the time distance (expressed using the # operator) between events B and C is at least 5 time units.
Here, A (at the left of ‘preceding’) represents the first block of the chain, while the expression "B, # at least 5 tu C" represents the second block (at the right of ‘preceding’).

**Response.** This pattern (also available in the variant called response chain) specifies the cause-effect relationship between a pair of events (respectively, the two blocks of a chain) in which the occurrence of the first event (respectively, first block) leads to the occurrence of the second event (respectively, second block). The property "C, D responding at most 10 tu A, # at least 5 tu B" specifies that two successive events A and B stimulate the sequential occurrence of C and D, that the time interval between A and B should be at least 5 time units, and that the time interval between B (second element of the first block) and C (first element of the second block) should be at most 10 time units. This property is violated by the example in Fig. 8, because the time distance between A and B is only 4 time units.

[Fig. 8. Example trace for illustrating the precedence and response patterns]

4 Applying OCLR in an eGovernment Scenario

In this section we present a case study where we show the use of OCLR in the context of an eGovernment application developed by our public service partner. We illustrate some properties (selected from the 47 we analyzed) of a business process model related to three use cases. The goal is to investigate whether OCLR can precisely capture all the temporal and timed properties of a real eGovernment system. The case study description has been sanitized, both to avoid disclosing confidential information and to keep the model at the minimum level of detail required to illustrate and express the properties. The scenario describes the Identity Card Management (ICM) business process, which is in charge of issuing and managing the ID cards of the diplomatic personnel of the country.
A sanitized version of the conceptual model corresponding to this scenario is shown in Fig. 9. The ICM business process deals with card requests, the production of the cards, and the return of the cards once expired. The ICM process also keeps track of the state of a card (CardState), which can be, for example, InCirculation or Expired. A card Request can be in different states, such as Approved, Denied, and InProgress. Once a request for a card is submitted to the ICM system, it is evaluated and then either approved or denied. After the approval, the ICM system asks the production system to issue a physical card. The card is then delivered to the applicant. The ICM also deals with events such as the damage, loss, or expiration of cards.

Sample properties. We now list the requirements specifications associated with three use cases of the ICM system, and show how the corresponding properties can be expressed in OCLR.

Card Request. The following requirements are associated with the use case related to the card request:

R1 Once a card request is approved, the applicant is notified within three days; this notification has to occur before the production of the card is started.
R2 The applicant has to show up within five days from the notification to get her personal data collected.

[Fig. 9. Conceptual model of the ICM process]

R3 If the applicant does not show up within five days after the second notification, the request will be denied and the applicant notified about the refusal.

Property R1 is expressed in lines 2–5. The before scope is delimited by the event that corresponds to a change in the state of the card (c.state = CardState::InProduction). The response pattern is bounded (time units are expressed in seconds) and requires the notification to the applicant (notifyApproved) to happen in response to a change in the state of the request (r.state = RequestCard::Approved).
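The bounded response pattern used in R1 can be illustrated with a naive finite-trace checker (an illustrative sketch under our own simplifications, not CTIE's implementation; in particular, we let a single effect discharge all pending causes, and event names are placeholders):

```python
def check_bounded_response(trace, cause, effect, max_tu):
    """True iff every `cause` occurrence in the (timestamp, event) trace is
    followed by an `effect` occurrence within `max_tu` time units."""
    pending = []  # timestamps of causes not yet answered by an effect
    for t, event in trace:
        if event == cause:
            pending.append(t)
        elif event == effect:
            # one effect discharges all pending causes, provided each of
            # them lies within the allowed time distance
            if any(t - tc > max_tu for tc in pending):
                return False
            pending.clear()
    return not pending  # causes left unanswered violate the property
```

For an R1-style check one would map the cause to the request-approval state change and the effect to the notification, with max_tu set to three days in seconds.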
Property R2 (lines 6–8) combines an after scope with an existence pattern. A similar structure is used in R3 (lines 9–11), where the after scope uses the second occurrence of notifyApproved as the boundary.

**Card Loss.** The following requirements are associated with the use case related to the loss of a card:

L1 If a card is reported as lost to the ICM and has not been found yet, a temporary card will be sent to the card holder within 1 day.
L2 If the card has not been found yet, a new card will be delivered to the holder within five days after the report of the loss.
L3 After the card loss is reported, if the card is found, within at most three days the delivery of the new card will be canceled and a notification to return the temporary card will be sent.

Both properties L1 and L2 use an after scope combined with an existence pattern. Notice that in both cases the additional condition "card not found yet" is expressed as a precondition of the operation that is the argument of isCalled in the existence pattern (deliverTempCard and deliverNewCard). Property L3 combines an after scope with a precedence chain pattern, where the first block corresponds to finding the card (isFound) and the second block is the chain of cancelCardDelivery and notifyReturnCard.

Card Expiration. The following requirements are associated with the use case related to the expiration of a card:

E1 Once a card expires, the holder is notified to return the card at most twice.
E2 After five days from the second notification to the holder about the expiration of the card, if the card has not been returned yet, the police is notified.
E3 Once a card is returned, the holder will receive a confirmation within one day.
```
context ICM

temporal E1: let c:Card in
  after becomesTrue(c.state = CardState::Expired)
  until becomesTrue(c.state = CardState::Returned)
  eventually at most 2 isCalled(notifyReturnCard(c.cardHolder),
                                pre: c.state <> CardState::Returned)

temporal E2: let c:Card in
  after 2 isCalled(notifyReturnCard(c.cardHolder),
                   pre: c.state <> CardState::Returned)
    exactly 5*24*3600 tu
  eventually isCalled(notifyPolice(c.cardHolder),
                      pre: c.state <> CardState::Returned)

temporal E3: let c:Card in
  globally isCalled(notifyCardReturned(c.cardHolder),
                    pre: c.state = CardState::Returned)
    responding at most 24*3600 tu
    becomesTrue(c.state = CardState::Returned)
```

Property E1 uses an after-until scope, delimited by the events corresponding to the expiration of the card (c.state = CardState::Expired) and the return of the card (c.state = CardState::Returned). A bounded existence pattern is used to specify the maximum number of notifications (notifyReturnCard) that can happen. In property E2 we use an after scope combined with the keyword ‘exactly’ to pinpoint the exact time instant in which the police is notified (notifyPolice). Property E3 states an invariant of the system (using the globally scope) through a response pattern correlating the return of the card (c.state = CardState::Returned) to the notification to the holder (notifyCardReturned).

5 Related Work

There have been several proposals for extending OCL with support for temporal constraints. In the rest of this section we summarize them and discuss their differences and limitations with respect to OCLR. One of the first proposals is OCL/RT [4], which extends OCL with the notion of timestamped events (based on the original UML abstract meta-class Event) and two temporal modalities, "always" and "sometimes".
Events are associated with instances of classifiers and, by means of a special satisfaction operator, it is possible to evaluate an expression at the time instant when a certain event occurred. The OCL/RT extension allows for expressing real-time deadline and timeout constraints, but requires reasoning explicitly at the lowest level of abstraction, in terms of time instants. Cabot et al. [3] extend UML to use UML/OCL as a temporal conceptual modeling language, introducing the concepts of durability and frequency for the definition of temporal features of UML classifiers and associations. They define temporal operations in OCL through which it is possible to refer to any past state of the system. These operations are mapped into standard OCL by relying on the mapping of the temporally-extended conceptual schema onto a conventional UML one, which explicitly instantiates the concepts of time interval and instant. However, the temporal operations are geared toward expressing temporal integrity constraints on the model, rather than temporal properties correlating events of the system. The majority of the proposals for temporal extensions of OCL extend the language with temporal operators/modalities borrowed from standard temporal logic, such as "always", "until", "eventually", "next". A preliminary work in this direction appeared in [5]. Lavazza et al. [13] define the Object Temporal Logic (OTL), which allows users to write temporal constraints on Real-time UML (UML-RT) models. In particular, it supports the concepts of Time, Duration, and Interval to specify the time distance between events. Nevertheless, the language is modeled after the TRIO temporal logic [14], and properties are written at a low level of abstraction. Ziemann and Gogolla [17] propose TOCL, an extension of OCL with elements of a linear temporal logic, to specify constraints on the temporal evolution of the system states.
Being based on linear temporal logic, TOCL does not support real-time constraints. The work of Flake and Mueller [7] goes in a similar direction, proposing an extension of OCL that allows for the specification of past- and future-oriented time-bounded constraints. They do not support event-based specifications; moreover, the proposed mapping into Clocked LTL does not allow relying on standard OCL tools. Kuester-Filipe and Anderson propose a liveness template for future-oriented time-bounded constraints, such as those that can be captured with a response or existence pattern. This template is defined in terms of the real-time temporal logic of knowledge, interpreted over timed automata, to allow for formal reasoning. The expressiveness of this extension is very limited, since it supports only one template. Soden and Eichler [16] propose Linear Temporal OCL (LT-OCL) for languages defined over MOF meta-models in conjunction with operational semantics. LT-OCL contains the standard modalities of Linear Temporal Logic. The interpretation of LT-OCL formulae is defined in the context of a MOF meta-model and its dynamic behavior specified by action semantics using the M3Actions framework.

Wei Dou, Domenico Bianculli, and Lionel Briand

The approaches that are most similar to OCLR are those that extend OCL with support for Dwyer et al.'s property specification patterns [6]. Flake and Mueller [8] propose a state-oriented temporal extension of OCL for user-defined classes that have an associated Statechart. The pattern-based temporal expressions refer to configurations of Statecharts. With respect to OCLR, they do not support the specification in terms of events. Moreover, the expressions corresponding to the patterns are not first-class entities of the language; hence, they are more verbose and less close to natural language.
Robinson [15] presents a temporal extension of OCL called OCL$^{TM}$, developed in the context of a framework for monitoring requirements expressed using a goal model. The temporal extension of OCL includes all the operators corresponding to standard LTL modalities, as well as support for Dwyer et al.'s patterns and for timeouts in patterns. In this regard, it is very close to the expressiveness of OCLR, though it supports neither the reference to a specific occurrence of an event in scope boundaries nor the association of time shifts with boundaries (as OCLR does with the keywords 'at least', 'at most', 'exactly'). Kanso and Taha [11] introduce Temporal OCL, a pattern-based temporal extension of OCL. As discussed in Sect. 3, OCLR borrows some language entities from Temporal OCL. Although the support for temporal patterns is very similar between the two languages, Temporal OCL does not allow references to specific event occurrences in scope boundaries, and it lacks support for timing information, such as the distance between events and the distance from a scope boundary.

6 Conclusion and Future Work

A broad class of requirements for modern complex software systems involves temporal constraints, possibly enriched with timing information. Current approaches for specifying requirements either lack the expressiveness required for this new class of properties (as in the case of OCL) or require mathematical expertise (e.g., temporal logic). In this paper we presented OCLR, a novel temporal extension of OCL based on common property specification patterns, extended with support for referring to a specific occurrence of an event in scope boundaries, and for specifying the distance between events and/or boundaries of the scope of a pattern. We presented the semantics of the language and its application to a case study in the domain of eGovernment.
This work has been developed as part of a broader collaboration with our public service partner CTIE, the Luxembourg state center for information technology, in the context of a project on model-based run-time verification of eGovernment business processes. We are currently working on defining the mapping between OCLR and OCL, in order to take advantage of the industrial-strength tools available to check OCL constraints. Our next steps will focus on defining a model-based run-time verification technique for properties written in OCLR, and integrating it into the business process run-time platform of our partner. We also plan to conduct an empirical study to assess the improvements provided by OCLR when adopted as a specification language in the development life cycle of our partner, and also to improve the language, integrating feedback from practitioners and adding support for other specification patterns [1].

OCLR: a More Expressive, Pattern-based Temporal Extension of OCL

Acknowledgments. This work has been supported by the National Research Fund, Luxembourg (FNR/P10/03). We would like to thank the members of the Prometa team at CTIE, in particular Lionel Antunes, Ludwig Balmer, Henri Meyer, and Manuel Rouard, for their help with the analysis of the case study.

References

A Formal Semantics

This section presents the formal semantics of OCLR, using the concept of temporal linear traces.

A.1 Trace

**Definition 1 (Alphabet of atomic events).** Let $\mathcal{O}$ be the set of all operations and $\mathcal{E}$ be the set of all OCL expressions of an object model $\mathcal{M}$. The alphabet $\Sigma$ of atomic events is defined by the set $\mathcal{O} \times \mathcal{E} \times \mathcal{E}$. An atomic event $e \in \Sigma$ then takes the form $e = (op, pre, post)$. It stands for a call of the operation $op$ in a context where $pre$ is the pre-condition satisfied in the pre-state and $post$ is the post-condition satisfied in the post-state.
**Definition 2 (Poset of atomic events).** Let $\prec$ be the strict partial ordering on the alphabet of atomic events $\Sigma$. The poset of atomic events is denoted as $(\Sigma, \prec)$, in which, for any $e_1 \neq e_2$, $e_1 \prec e_2$ indicates that $e_1$ occurs before $e_2$. The corresponding non-strict partial ordering of the atomic events is denoted as $\preceq$. **Definition 3 (Time distance).** Let $(\Sigma, \prec)$ be the poset of atomic events. The timestamp of an atomic event is its absolute occurrence time, counted in time units ($tu$); it is defined by the function $\tau: \Sigma \rightarrow \mathbb{N}^+$. The time distance between two different atomic events is the difference between their timestamps, i.e., $d(e_i, e_j) = \tau(e_j) - \tau(e_i)$, $e_i \prec e_j$. **Definition 4 (Trace).** Let $(\Sigma, \prec)$ be the poset of atomic events. A trace $\lambda$ is a finite timeline comprising a sequence of atomic events, denoted as $(e_0, \ldots, e_{n-1})$, in which $e_0$ is its starting event and $n$ is its length. The universal set of sub-traces is denoted as $\Lambda$.
Given an $n$-length trace $\lambda$,

- the atomic event at index $i$ is denoted as $\lambda(i)$;
- the time distance between $\lambda(i)$ and $\lambda(j)$ is denoted as $d(i, j)$ ($0 \leq i \leq j \leq n - 1$);
- the sub-trace from $\lambda(i)$ to $\lambda(j)$ is denoted as $\lambda(i : j)$ ($0 \leq i \leq j \leq n - 1$);
- a forward shift of an atomic event by $t$ time units is denoted as $\lambda(i > t)$;
- a backward shift of an atomic event by $t$ time units is denoted as $\lambda(i < t)$;
- a time-constrained sub-trace of $\lambda(i : j)$, $0 \leq i \leq j \leq n - 1$ (i.e., with the left boundary shifted forward by time distance $t_1$ and the right boundary shifted backward by time distance $t_2$) is denoted as $\lambda(i > t_1, j < t_2)$.

A.2 Event

**AnyCallEvent** Let $\Sigma$ be the alphabet of atomic events, $\mathcal{O}$ be the set of all operations, and $\mathcal{E}$ be the set of all OCL expressions. An **AnyCallEvent** identifies any operation call satisfying a pair of pre- and post-conditions; it is defined by:

$$\text{AnyCallEvent} = \text{isCalled}(\text{anyOp}, \text{pre}, \text{post}) = \{(o,p,q) \in \Sigma \mid o \in \mathcal{O}, p \in \mathcal{E}, q \in \mathcal{E} \text{ and } p \Rightarrow \text{pre}, q \Rightarrow \text{post}\}$$

**SimpleEvent** Let $\Sigma$ be the alphabet of atomic events, $\mathcal{O}$ be the set of all operations, and $\mathcal{E}$ be the set of all OCL expressions. A **SimpleEvent** can be either a **SimpleCallEvent** or a **SimpleChangeEvent**, defined below.
$$\text{SimpleCallEvent} = \text{isCalled}(\text{op}, \text{pre}, \text{post}) = \{(o,p,q) \in \Sigma \mid o \in \mathcal{O}, p \in \mathcal{E}, q \in \mathcal{E} \text{ and } o = \text{op}, p \Rightarrow \text{pre}, q \Rightarrow \text{post}\}$$

$$\text{SimpleChangeEvent} = \text{becomesTrue}(P) \equiv \text{isCalled}(\text{anyOp}, \neg P, P)$$

Hence a **SimpleChangeEvent** is defined by a special **AnyCallEvent** in which an OCL boolean expression $P$ becomes true after some operation call that is not specified ($\text{anyOp}$). Two binary operators are defined on events. **Disjunction:** $E_1 | E_2$ means $E_1$ occurs or $E_2$ occurs. **Exclusion:** $E_1 \setminus E_2$ means $E_1$ occurs and $E_2$ does not occur. **ComplexEvent** A **ComplexEvent** is defined by:

$$\text{ComplexEvent} = \text{AnyCallEvent} \text{ or } \text{AnyCallEvent} \setminus \text{Event}$$

The **negation** operator can be expressed as a **ComplexEvent**. **Negation:** $\neg E \equiv \text{isCalled}(\text{anyOp}, \text{true}, \text{true}) \setminus E$ **Event** An event $E$ can be specified by a **SimpleEvent** or a **ComplexEvent**, or either of them in disjunction with another event. It is defined by:

$$\text{Event} = \text{SimpleEvent} \text{ or } \text{ComplexEvent} \text{ or } \text{SimpleEvent} \mid \text{Event} \text{ or } \text{ComplexEvent} \mid \text{Event}$$

**EventChain** An **EventChain** is a chain of **Events** occurring in sequence, with optional quantification of the time distance between each pair of adjacent elements; it can degenerate to a single **Event**. An $m$-length **EventChain** ($m > 1$) is denoted as $E_1, t_1, E_2, \ldots, t_{m-1}, E_m$, where $t_i$ ($1 \leq i \leq m-1$) represents the quantification of the time distance between $E_i$ and $E_{i+1}$, if available; it has the form $t_i = \#\bowtie_i \delta_i\ tu$, with $\delta_i \in \mathbb{N}^+$ and $\bowtie_i \in \{\text{at least}, \text{at most}, \text{exactly}\}$.
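To make the ordering and time-distance quantification concrete, the following Python sketch (illustrative only; OCLR itself has no Python implementation, and events are reduced here to plain names with timestamps) checks whether a chain of events occurs in order in a trace, subject to optional distance constraints between adjacent pairs:

```python
# Sketch of EventChain matching over a timestamped trace.
# trace: list of (event_name, timestamp) pairs, ordered by timestamp.
# events: the chain E_1, ..., E_m as a list of names.
# constraints: list of m-1 optional (op, delta) pairs, op in
#   {"at least", "at most", "exactly"}; None means no constraint.

def dist_ok(op, d, delta):
    # Map the time-distance keywords to comparisons on the distance d.
    if op == "at least":
        return d >= delta
    if op == "at most":
        return d <= delta
    return d == delta  # "exactly"

def match(trace, events, constraints):
    """True if `events` occur in order in `trace`, with each adjacent
    pair satisfying its optional time-distance constraint."""
    def search(start, idx, prev_ts):
        if idx == len(events):
            return True
        for i in range(start, len(trace)):
            name, ts = trace[i]
            if name != events[idx]:
                continue
            c = constraints[idx - 1] if idx > 0 else None
            if c is not None and not dist_ok(c[0], ts - prev_ts, c[1]):
                continue  # try a later occurrence of this event
            if search(i + 1, idx + 1, ts):
                return True
        return False
    return search(0, 0, None)

trace = [("open", 0), ("write", 3), ("close", 10)]
print(match(trace, ["open", "close"], [("at least", 5)]))  # True: d = 10 >= 5
print(match(trace, ["open", "close"], [("at most", 5)]))   # False: d = 10 > 5
```

The exhaustive backtracking search mirrors the existential quantification over the indices $i_1 < i_2 < \cdots < i_m$ in the matching function defined next.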
**EventChain matching function** Let $\lambda$ be an $n$-length trace, $E$ be a single-element EventChain, and $E_1, t_1, E_2, \ldots, t_{m-1}, E_m$ be an $m$-length EventChain ($m > 1$). The matching function $\text{match}$ is defined for checking the occurrence of an EventChain over the trace:

$$\text{match}(\lambda, E) \equiv \exists i, 0 \leq i \leq n-1, \lambda(i) \in E$$

$$\text{match}(\lambda, E_1, t_1, E_2, \ldots, t_{m-1}, E_m) \equiv \exists i_1, i_2, \ldots, i_m, 0 \leq i_1 < i_2 < \cdots < i_m \leq n-1, \lambda(i_1) \in E_1, \lambda(i_2) \in E_2, \ldots, \lambda(i_m) \in E_m$$

$$\text{and } \forall j, 1 \leq j \leq m-1 \text{ with } t_j \neq \infty, \begin{cases} d(i_j, i_{j+1}) \geq \delta_j, & \text{if } \bowtie_j = \text{at least}; \\ d(i_j, i_{j+1}) \leq \delta_j, & \text{if } \bowtie_j = \text{at most}; \\ d(i_j, i_{j+1}) = \delta_j, & \text{if } \bowtie_j = \text{exactly}; \end{cases}$$

where $t_j \in \{\infty\} \cup \{\#\bowtie_j \delta_j\ tu \mid \bowtie_j \in \{\text{at least}, \text{at most}, \text{exactly}\}, \delta_j \in \mathbb{N}^+\}$ and $t_j = \infty$ denotes an unspecified time distance. Moreover, two functions $\text{first}(\textit{EventChain})$ and $\text{last}(\textit{EventChain})$ are defined to get the first and the last event of an EventChain, respectively.

### A.3 Temporal Expressions

The semantics of temporal expressions considers the time distance, the events, and the trace. We assume the following types and ranges for the variables used in the semantics definitions below.

- $E, E_1, E_2$ (for scopes): SimpleEvent
- $E$ (for patterns): Event
- $EC_1, EC_2$ (for patterns): EventChain
- $a, c : \{i \mid i \in \mathbb{N}^+ \text{ and } 0 \leq i \leq n-1\} \cup \{\infty\}$, where $\infty$ denotes an unspecified occurrence index
- $b, d : \mathbb{N}^+$
- $\alpha, \beta, \gamma, \theta, \delta, \eta, i, k, k_1, k_2, m, x, y : \{i \mid i \in \mathbb{N}^+ \text{ and } 0 \leq i \leq n-1\}$
- $\bowtie : \{\text{at least}, \text{at most}, \text{exactly}\}$
- $\Delta : \{\leq, <, >, \geq, =, \neq\}$
- $|\lambda(i) \in E| = \begin{cases} 0, & \lambda(i) \notin E; \\ 1, & \lambda(i) \in E. \end{cases}$

**Scopes** Let $\mathbb{S}$ be the set of scopes defined in the grammar. A scope $s \in \mathbb{S}$ selects a set of sub-traces of an $n$-length trace $\lambda \in \Lambda$, defined by the function $\phi_{[s]}(\lambda) : \Lambda \rightarrow 2^{\Lambda}$ as follows:

- $\phi_{[\text{globally}]}(\lambda) = \{\lambda\}$
- $\phi_{[\text{before } a\ E]}(\lambda) = \{\lambda(0 : \gamma) \mid \lambda(\gamma) \in E \text{ and } \sum_{k=0}^{\gamma-1} |\lambda(k) \in E| = m\}$
- $\phi_{[\text{before } a\ E \bowtie b\ tu]}(\lambda) = \{\lambda(\alpha : \beta) \mid \exists \theta, \lambda(\theta) \in E \text{ and } \sum_{k=0}^{\theta-1} |\lambda(k) \in E| = m\}$

where

$$m = \begin{cases} 0, & \text{if } a = \infty; \\ 0, & \text{if } a = 1; \\ a - 1, & \text{if } a > 1; \end{cases} \qquad (\alpha, \beta) = \begin{cases} (0,\ \theta < b), & \text{if } \bowtie = \text{at least}; \\ (\theta < b,\ \theta), & \text{if } \bowtie = \text{at most}; \\ (\theta < b,\ \theta < b), & \text{if } \bowtie = \text{exactly}. \end{cases}$$

- $\phi_{[\text{after } a\ E]}(\lambda) = \{\lambda(\gamma : n-1) \mid \lambda(\gamma) \in E \text{ and } \sum_{k=0}^{\gamma-1} |\lambda(k) \in E| = m\}$
- $\phi_{[\text{after } a\ E \bowtie b\ tu]}(\lambda) = \{\lambda(\alpha : \beta) \mid \exists \theta, \lambda(\theta) \in E \text{ and } \sum_{k=0}^{\theta-1} |\lambda(k) \in E| = m\}$

where

$$m = \begin{cases} 0, & \text{if } a = \infty; \\ 0, & \text{if } a = 1; \\ a - 1, & \text{if } a > 1; \end{cases} \qquad (\alpha, \beta) = \begin{cases} (\theta > b,\ n-1), & \text{if } \bowtie = \text{at least}; \\ (\theta,\ \theta > b), & \text{if } \bowtie = \text{at most}; \\ (\theta > b,\ \theta > b), & \text{if } \bowtie = \text{exactly}. \end{cases}$$

$$\begin{align*} \phi_{[\text{between } E_1 \text{ and } E_2]}(\lambda) &= \{ \lambda(\alpha : \beta) \mid \lambda(\alpha) \in E_1, \lambda(\beta) \in E_2 \\ &\quad \text{and } \forall k, 0 \leq \alpha < k < \beta \leq n - 1, \lambda(k) \notin E_2 \\ &\quad \text{and if } \exists i < \alpha, \lambda(i) \in E_1, \text{ then } \exists j, i < j < \alpha, \lambda(j) \in E_2 \} \\ \phi_{[\text{between } E_1 \text{ and } E_2 \text{ at least } d\ tu]}(\lambda) &= \{ \lambda(\alpha : \beta) \mid \lambda(\alpha) \in E_1, \beta = \delta < d, \lambda(\delta) \in E_2 \\ &\quad \text{and } \forall k, 0 \leq \alpha < k < \delta \leq n - 1, \lambda(k) \notin E_2 \\ &\quad \text{and if } \exists i < \alpha, \lambda(i) \in E_1, \text{ then } \exists j, i < j < \alpha, \lambda(j) \in E_2 \} \\ \phi_{[\text{between } E_1 \text{ at least } b\ tu \text{ and } E_2]}(\lambda) &= \{ \lambda(\alpha : \beta) \mid \alpha = \theta > b, \lambda(\theta) \in E_1, \lambda(\beta) \in E_2 \\ &\quad \text{and } \forall k, 0 \leq \theta < k < \beta \leq n - 1, \lambda(k) \notin E_2 \\ &\quad \text{and if } \exists i < \theta, \lambda(i) \in E_1, \text{ then } \exists j, i < j < \theta, \lambda(j) \in E_2 \} \\ \phi_{[\text{between } E_1 \text{ at least } b\ tu \text{ and } E_2 \text{ at least } d\ tu]}(\lambda) &= \{ \lambda(\alpha : \beta) \mid \alpha = \theta > b, \lambda(\theta) \in E_1, \beta = \delta < d, \lambda(\delta) \in E_2 \\ &\quad \text{and } \forall k, 0 \leq \theta < k < \delta \leq n - 1, \lambda(k) \notin E_2 \\ &\quad \text{and if } \exists i < \theta, \lambda(i) \in E_1, \text{ then } \exists j, i < j < \theta, \lambda(j) \in E_2 \} \end{align*}$$

$$\phi_{[\text{between } a\ E_1 \text{ and } c\ E_2]}(\lambda) = \left\{ \lambda(\alpha : \beta) \mid \lambda(\alpha) \in E_1, \sum_{k_1=0}^{\alpha-1} |\lambda(k_1) \in E_1| = x \text{ and } \lambda(\beta) \in E_2, \sum_{k_2=\alpha+1}^{\beta-1} |\lambda(k_2) \in E_2| = y \right\}$$

$$\phi_{[\text{between } a\ E_1 \text{ and } c\ E_2 \text{ at least } d\ tu]}(\lambda) = \left\{ \lambda(\alpha : \beta) \mid \lambda(\alpha) \in E_1, \sum_{k_1=0}^{\alpha-1} |\lambda(k_1) \in E_1| = x \text{ and } \beta = \delta < d, \lambda(\delta) \in E_2, \sum_{k_2=\alpha+1}^{\delta-1} |\lambda(k_2) \in E_2| = y \right\}$$

$$\phi_{[\text{between } a\ E_1 \text{ at least } b\ tu \text{ and } c\ E_2]}(\lambda) = \left\{ \lambda(\alpha : \beta) \mid \alpha = \theta > b, \lambda(\theta) \in E_1, \sum_{k_1=0}^{\theta-1} |\lambda(k_1) \in E_1| = x \text{ and } \lambda(\beta) \in E_2, \sum_{k_2=\theta+1}^{\beta-1} |\lambda(k_2) \in E_2| = y \right\}$$

where

$$x = \begin{cases} 0, & \text{if } a = \infty; \\ 0, & \text{if } a = 1; \\ a - 1, & \text{if } a > 1; \end{cases} \qquad y = \begin{cases} 0, & \text{if } c = \infty; \\ 0, & \text{if } c = 1; \\ c - 1, & \text{if } c > 1. \end{cases}$$

**Patterns** Let $\mathbb{P}$ be the set of patterns defined in the grammar. The semantics of a pattern $p \in \mathbb{P}$ is given by the function $\varphi_{[p]}(\lambda) : \Lambda \rightarrow \{true, false\}$, defined for every $\lambda \in \Lambda$ as follows:

$$\varphi_{[\text{eventually } E]}(\lambda) \iff \exists i \geq 0, \lambda(i) \in E$$

$$\varphi_{[\text{eventually } \bowtie m\ E]}(\lambda) \iff \sum_{i=0}^{n-1} |\lambda(i) \in E|\ \Delta\ m, \quad \text{where } \Delta = \begin{cases} \geq, & \text{if } \bowtie = \text{at least}; \\ \leq, & \text{if } \bowtie = \text{at most}; \\ =, & \text{if } \bowtie = \text{exactly}. \end{cases}$$

$$\varphi_{[\text{never } E]}(\lambda) \iff \forall i \geq 0, \lambda(i) \notin E$$

$$\varphi_{[\text{always } E]}(\lambda) \iff \forall i \geq 0, \lambda(i) \in E$$

$$\varphi_{[\text{never exactly } m\ E]}(\lambda) \iff \sum_{i=0}^{n-1} |\lambda(i) \in E| \neq m$$

$$\varphi_{[EC_1 \text{ preceding } EC_2]}(\lambda) \iff \forall\, \text{match}(\lambda, EC_2) \Rightarrow \text{match}(\lambda, EC_1) \text{ and } \text{last}(EC_1) \prec \text{first}(EC_2)$$

$$\varphi_{[EC_1 \text{ responding } \bowtie b\ tu\ EC_2]}(\lambda) \iff \forall\, \text{match}(\lambda, EC_2) \Rightarrow \text{match}(\lambda, EC_1) \text{ and } \text{last}(EC_2) \prec \text{first}(EC_1) \text{ and } d(\text{last}(EC_2), \text{first}(EC_1))\ \Delta\ b$$

where $\Delta$ is mapped from $\bowtie$ as above.

**Temporal Expression** The semantics of a temporal expression $(\textit{pattern}, \textit{scope}) \in \mathbb{P} \times \mathbb{S}$ over a trace $\lambda \in \Lambda$ is defined by:

$$\lambda \models (\textit{pattern}, \textit{scope}) \iff \forall \lambda' \in \phi_{[\textit{scope}]}(\lambda),\ \varphi_{[\textit{pattern}]}(\lambda')$$

```
X A B Y Y X X C C Y X
```

Fig. 10. An event trace

Consider an OCLR constraint like "eventually $A$ at least 2 time units before $Y$". The scope "before $Y$" selects the sub-trace of the trace in Fig. 10 that ends at the first occurrence of $Y$. The temporal pattern "eventually $A$", with the time shift "at least 2 time units", is evaluated over this sub-trace using the semantics described for *eventually*. Because the time distance between $A$ and $Y$ is 10 time units, which is more than 2, the constraint is satisfied.
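The evaluation of this example can be mechanized as a small Python sketch (illustrative only; function names and timestamps are ours, chosen so that the distance between A and the first Y is 10 tu, as stated in the text):

```python
# Evaluate "eventually A at least 2 time units before Y" over a
# timestamped trace, following the scope-then-pattern semantics above.
# Trace: (event_name, timestamp) pairs; timestamps in time units (tu).

def before_scope(trace, boundary):
    """Sub-trace from the start up to the first occurrence of `boundary`."""
    for i, (name, _) in enumerate(trace):
        if name == boundary:
            return trace[:i + 1]
    return None  # boundary never occurs: the scope selects no sub-trace

def eventually_at_least_before(subtrace, event, delta):
    """True if `event` occurs at least `delta` tu before the scope's end."""
    end_ts = subtrace[-1][1]
    return any(name == event and end_ts - ts >= delta
               for name, ts in subtrace[:-1])

# The trace of Fig. 10, with illustrative timestamps giving d(A, Y) = 10 tu.
trace = [("X", 0), ("A", 1), ("B", 5), ("Y", 11), ("Y", 12),
         ("X", 14), ("X", 15), ("C", 16), ("C", 18), ("Y", 20), ("X", 21)]

scope = before_scope(trace, "Y")
print(eventually_at_least_before(scope, "A", 2))  # True: d(A, Y) = 10 >= 2
```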
INTRODUCTION

This document describes proposed changes to the syserr mechanism. The reader is assumed to have a basic understanding of how the current syserr mechanism functions. Relevant information can be found in the following documents:

MTB-016
MTB-071
MTB-103

The most significant change involves the tabularization of all syserr messages. Calls to syserr will specify a code that identifies an entry in a table of syserr messages. This table will be used in a manner similar to the current error_table_. The table of syserr messages will contain all of the information needed to process, write, and log the syserr message. It will also contain information that can be used to sort and interpret syserr messages that have been saved in the syserr_log. This table will be structured in a way that minimizes the amount of wired main memory used.

The tabularization of syserr messages will result in greatly increased administrative control over the use of the syserr mechanism. It will also provide a useful source of documentation for all syserr messages. A new source-level language will be used to specify the syserr messages as a series of source statements. A new translator will be developed to process these source statements and produce the actual syserr table.

This document also describes two new capabilities that will be added to the syserr mechanism. One involves the passing of binary data to syserr. This binary data will be put into the syserr_log along with the syserr message. The binary data may then be formatted by programs that process messages from the syserr_log. A second capability involves passing an error_table_code to syserr. The error_table_ message referenced by this error_table_code will be appended to the syserr message.

The implementation of these changes involves adding three new entry points to syserr. In order to give the reader an overview of the new syserr capabilities, the calling sequences of these new entry points are described below.
```
call syserr$message (syserr_table_code, arg1, ..., argn);
call syserr$binary (syserr_table_code, data_ptr, data_size, arg1, ..., argn);
call syserr$error_code (syserr_table_code, error_table_code, arg1, ..., argn);
```

This document discusses these new syserr capabilities in detail. There are sections dealing with each of the following subjects:

1. Tabularization of syserr messages.
2. Binary data.
3. Error_table_ messages.
4. User ring processing of syserr messages.
5. An implementation plan for these changes.

THE TABULARIZATION OF SYSERR MESSAGES

Problems with the Current Syserr Mechanism

The present calling sequence to syserr involves passing as arguments a syserr action code, a formline_ control string, and arguments used by formline_ to expand this control string. This calling sequence has major disadvantages in terms of storage usage and administrative control. These disadvantages are discussed below.

1. The calls to syserr generate too many words of object code. Many of the calls to syserr are made from wired programs and thus add to the total wired storage needed by the system. Even syserr calls in paged programs may add to system overhead. The presence of syserr calls in a program may result in adding an extra page to the program or splitting frequently used code over two pages.

2. The calling sequence to syserr is self-contained. All of the information needed to process a syserr message is passed in the call to syserr. This may seem to be an advantage in that it allows syserr messages to be easily changed. This ease of modification is, however, a major disadvantage of the current syserr mechanism. Syserr messages are added, deleted, and modified so frequently, and so unnoticed, that it is virtually impossible to know what syserr calls are in the system at any given time. Only by making a pass over the source of the entire core can we now generate a list of all the installed calls to syserr. Even doing this, however, gives us little information about the real meaning and purpose of each call.
The syserr mechanism is a critical system function. It is used to crash the system, communicate with the operator, and log system information. From an administrative, marketing, and operations point of view, it is unacceptable to have so little control over the use of this critical system function. It is unacceptable that we have no means of generating complete and up-to-date documentation about each syserr message.

3. Although the current syserr interface makes it easy to modify a call to syserr, it makes it difficult to change the types of information passed to syserr. Any additional data that might be useful to have associated with a syserr message would have to be passed in the call itself. We would have to modify the syserr calling sequence, or add a new syserr entry point, or squeeze the additional information into the current calling sequence. MTB-071 described a cumbersome attempt to squeeze a sorting code into the action code argument. A better method of specifying a sorting code is presented in this document.

The Syserr Tables

The proposed new calling sequences to syserr are designed to solve the problems discussed above. They involve passing a syserr_table_code as an argument to syserr. This code is much like an error_table_code. It should be declared by the calling program as follows:

```
dcl syserr_table_$nnnnnn fixed bin(35) external;
```

The entry name "nnnnnn" is a unique name that identifies this syserr message. For example, the syserr message generated by the iom_manager, "iom_manager: bad devx X supplied.", might have the name syserr_table_$bad_devx. Each call to syserr must pass the syserr_table_code that corresponds to the desired syserr message. The same syserr_table_code may be referenced by several different calls to syserr. The syserr_table_codes will correspond to entries in a source segment named syserr_table_st. It is similar in function to the source segment, error_table_et.
A special compiler, syserr_table_compiler (stc), will be developed to process the syserr_table_st source segment. The source language used to define a syserr message will be described in one of the later sections. The syserr_table_st source segment will be translated by syserr_table_compiler into two ALM source programs. These two ALM programs must then be assembled into object segments. The two object segments generated will be called syserr_table_ and syserr_info_. The reason that we need two segments will be explained below. An important point to note about these two segments is that both are generated from the single syserr_table_st source segment. The syserr_table_ segment contains references to the syserr_info_ segment. Thus the installed version of both of these segments must have been generated from the same source segment. To ensure this, we will require that the installed version of both of these segments be generated by the same invocation of syserr_table_compiler. In order to do this, a unique ID will be placed in each of these two segments. At system initialization time, syserr_log_init will check that the variables syserr_table_$uid and syserr_info_$uid are equal. If they are not equal, then system initialization will be aborted. In order to facilitate changing the format of a syserr_table_ or syserr_info_ entry, a version number will be associated with each of these segments. The variables syserr_table_$version_num and syserr_info_$version_num will contain the version number that specifies the format of the entries in their respective segments. These two version numbers do not have to be equal. They will be generated by syserr_table_compiler. The syserr_table_codes that will be used in calls to syserr correspond to entries in syserr_table_. Each entry in syserr_table_ will contain all of the information needed by syserr_real to process a syserr call.
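The initialization-time consistency check described above can be pictured with a short sketch (hypothetical Python, not the actual ring-zero PL/I; the parameter names mirror syserr_table_$uid and syserr_info_$uid):

```python
# Sketch of the check performed by syserr_log_init: both segments must
# carry the UID stamped by the same invocation of syserr_table_compiler,
# otherwise system initialization is aborted.
def check_syserr_segments(syserr_table_uid, syserr_info_uid):
    """Raise if the two segments were not generated together."""
    if syserr_table_uid != syserr_info_uid:
        raise RuntimeError("syserr_table_/syserr_info_ UID mismatch; "
                           "aborting initialization")
    return True

print(check_syserr_segments(0o123456, 0o123456))  # True
```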
Since syserr_real must not take a page fault when called by a program that is wired, we must guarantee that syserr_table_ entries referenced by wired programs will themselves be wired. In order to do this, syserr_table_compiler will group all of the entries referenced by wired programs at the top of syserr_table_. During system initialization, enough pages at the top of this segment to cover all of these entries will be permanently wired. In order to minimize the amount of wired storage needed by syserr_table_, the information kept in each syserr_table_ entry will be only that information absolutely necessary to syserr_real. Below is a description of a syserr_table_ entry.

```
dcl 1 ste based (ste_ptr) aligned,
      2 code,                                     /* 1. */
       (3 pad bit(18),                            /* 2. */
        3 table_offset bit(18),                   /* 3. */
      2 info_offset bit(18),                      /* 4. */
      2 action_code fixed bin(8),                 /* 5. */
      2 cstring_len fixed bin(8)) unaligned,      /* 6. */
      2 cstring char(0 refer(ste.cstring_len));   /* 7. */
```

1. code - This word is the value referenced by a syserr_table_code variable such as syserr_table_$bad_devx.

2. pad - This half of a syserr_table_code word is reserved for future use. In the initial implementation it will be set to zero.

3. table_offset - This field contains the offset of this entry in syserr_table_, i.e., its own word offset.

4. info_offset - This field contains the offset of the corresponding entry in syserr_info_. There is a one-to-one correspondence between the entries in syserr_table_ and the entries in syserr_info_. This field provides the connection between corresponding entries in these two tables.

5. action_code - This is the syserr action code for this message. (For a list of the valid syserr action codes, see the section on the syserr_table_ source language.)

6. cstring_len - This field specifies the length of the formline_ control string for this syserr message.

7. cstring - This field is the formline_ control string for this syserr message.
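The table lookup implied by this layout can be modeled in a few lines of Python (hypothetical; the real table is a wired ALM segment, and the offsets, the message text, and the `resolve` helper here are illustrative only):

```python
# Hypothetical model of a syserr_table_ entry and its lookup.  Fields
# follow the PL/I declaration above; a syserr_table_code variable
# (e.g. syserr_table_$bad_devx) simply holds the entry's offset.
from dataclasses import dataclass

@dataclass
class SyserrTableEntry:
    table_offset: int   # this entry's own word offset in syserr_table_
    info_offset: int    # offset of the matching entry in syserr_info_
    action_code: int    # syserr action code for this message
    cstring: str        # formline_ control string

SYSERR_TABLE = {
    100: SyserrTableEntry(100, 40, 3, "iom_manager: bad devx ^d supplied."),
}
syserr_table_bad_devx = 100  # plays the role of syserr_table_$bad_devx

def resolve(code):
    """Fetch the entry named by a syserr_table_code."""
    entry = SYSERR_TABLE[code]
    assert entry.table_offset == code  # an entry records its own offset
    return entry

print(resolve(syserr_table_bad_devx).action_code)  # 3
```

The self-referential table_offset field gives syserr a cheap sanity check that a caller's code actually points at the start of an entry.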
The syserr_info_ table contains information about syserr messages that is not needed by syserr_real. The reason for splitting up the information about a syserr message into two entries is to minimize the information contained in a syserr_table_ entry. This is important since some syserr_table_ entries are wired. Most syserr_table_ entries will not be wired, but for the sake of consistency it is desirable to make all of the syserr_table_ entries have the same format.

The reason for putting the syserr_info_ entries into a separate segment and not putting them in an unwired part of the syserr_table_ segment involves system initialization considerations. That part of the syserr mechanism that writes syserr messages on the operator's console and puts messages into the wired_log is initialized early in collection 1. There is a critical limit to the space that is available during collection 1. The amount of information contained in a syserr_info_ entry may be so great that the syserr_info_ segment would be too large to be used during collection 1. Thus these two segments cannot be combined. The syserr_info_ table will not be used until collection 2, when the syserr logging mechanism is initialized. Below is a description of a syserr_info_ entry.

     dcl 1 sie based (sie_ptr) aligned,
          (  2 action_code fixed bin(8),                /* 1. */
             2 name_len fixed bin(8),                   /* 2. */
             2 desc_len fixed bin(17),                  /* 3. */
             2 sort_code fixed bin(17),                 /* 4. */
             2 format_code fixed bin(17)) unaligned,    /* 5. */
           2 name char(0 refer(sie.name_len)),          /* 6. */
           2 description char(0 refer(sie.desc_len));   /* 7. */

1. action_code - The syserr action code is duplicated in this entry for efficiency reasons.

2. name_len - This field contains the length of the syserr message name string.

3. desc_len - This field contains the length of the syserr message description string.

4. sort_code - This field contains a number that is used to sort syserr messages that have been logged.
Each class of syserr messages - device errors, audit messages, etc. - will be assigned a unique sorting code. For those syserr messages that do not fit into any special class a default value of 0 will be used. (See the section on the user ring processing of syserr messages.)

5. format_code - This field contains a number that can be used to format any binary data associated with this message. It should be very helpful to user ring programs that process syserr messages from the syserr_log. (See the section on the user ring processing of syserr messages.)

6. name - This field specifies the name of the syserr message. It is identical to the entry point name used in the declaration of a syserr_table_code. Continuing with our example, if this entry corresponds to the syserr_table_code syserr_table_$bad_devx then this field would contain "bad_devx".

7. description - This string contains a description of this syserr message. It may include a description of the circumstances that cause this message to be used, a description of any variables that may appear in the expanded message string, a description of any action that the operator should take in response to this message, or any other information useful to know about this message.

Ring Zero Syserr Processing

This section describes how the syserr_table_ and syserr_info_ segments are used by syserr_real and syserr_logger to process a syserr message. As an example, the calling sequence to the syserr$message entry point is described in detail below.

     syserr$message (syserr_table_code, arg1, ..., argi)

ARGUMENTS:

syserr_table_code (Input) (fixed bin(35))
     This argument specifies an offset into the segment syserr_table_. This offset references the entry in syserr_table_ that corresponds to this syserr message.

arg1, ..., argi (Input)
     These are optional arguments that will be used by formline_ to expand the control string.

The entry point syserr$message is an ALM interface to the entry point syserr_real$message.
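The calling sequence above can be sketched in miniature: the table code selects an entry, whose action code and control string drive the processing. This is Python for illustration only; the dictionary stands in for the syserr_table_ segment, and expand_message is a toy substitute for formline_ (the "^d"/"^a" items are placeholder control items, not the real control string syntax).

```python
# Sketch of syserr$message dispatch: the syserr_table_code selects an
# entry; its control string is expanded with the caller's arguments
# and its action code tells the caller how to dispose of the text.

syserr_table_ = {
    100: {"action_code": "log", "cstring": "iom_manager: bad devx ^d supplied."},
}

def expand_message(cstring, args):
    """Toy stand-in for formline_: substitute args left to right."""
    out = cstring
    for arg in args:
        for item in ("^d", "^a"):
            if item in out:
                out = out.replace(item, str(arg), 1)
                break
    return out

def syserr_message(syserr_table_code, *args):
    entry = syserr_table_[syserr_table_code]       # table lookup by offset
    text = expand_message(entry["cstring"], args)  # expand control string
    return entry["action_code"], text              # log / console per action

assert syserr_message(100, 3) == ("log", "iom_manager: bad devx 3 supplied.")
```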
Using the syserr_table_code argument, syserr will reference the syserr_table_ entry for this syserr message. From this entry it will get the action code for this message. All the new syserr entry points will check to see if any stack manipulation is needed. If the action code specifies a fatal error and if other conditions are met, then syserr will alter the stack that it is running on so that previous stack history information will be preserved for debugging purposes. Then syserr will call the corresponding entry point in syserr_real using the same argument list that it was called with.

The syserr_real entry point that is called will also use the syserr_table_code argument to make a pointer to the syserr_table_ entry associated with this message. It will get the action code from this entry. It will check that this action code is valid. Contrary to MTB-071, no log code (sorting code) value will be derived from this action code. The control string for this message will be copied from its syserr_table_ entry. Using this control string and the arguments passed by the caller, syserr_real will call formline_ to generate an expanded ASCII message. The message will be logged. Based upon the action code, syserr_real will write this message on the operator's console.

This message will be logged by syserr_real in basically the same way that it does now. However, the information put into the wired_log is somewhat different. Below is a description of the new wired_log entry.

     dcl 1 wmess based (wmess_ptr) aligned,
           2 head like wmess_header,                        /* 1. */
           2 text char(0 refer(wmess.head.text_len)),       /* 2. */
           2 data (0 refer(wmess.head.data_size)) bit(36),  /* 3. */
           2 next_wmess bit(36);                            /* 4. */

     dcl 1 wmess_header based aligned,
           2 seq_num fixed bin(35),                         /* 5. */
          (  2 info_off bit(18),                            /* 6. */
             2 text_len fixed bin(8),                       /* 7. */
             2 data_size fixed bin(8),                      /* 8. */
             2 time fixed bin(71)) unal;                    /* 9. */

1. head - The header of the wired_log message entry.

2. text - The ASCII message that was expanded from the control string of this syserr message.

3. data - The binary data that is copied into the wired_log by syserr_real. (See the section on binary data.)

4. next_wmess - Used to calculate the address of the next entry in the wired_log.

5. seq_num - The sequence number assigned to this syserr message by syserr_real. The sequence number count is initialized to 1 whenever the syserr_log is reinitialized. Due to the high number of syserr messages that will be generated by the protection audit mechanism this field has been expanded from its previous size.

6. info_off - Offset in syserr_info_ of the entry that corresponds to this syserr message.

7. text_len - Number of characters in the ASCII message string.

8. data_size - Number of words of binary data copied into this message entry. Zero implies that there is no binary data in this entry.

9. time - Raw clock time specifying when the syserr message was put into the wired_log.

When handling the log interrupt, syserr_logger will copy each entry in the wired_log into the syserr_log. It will copy the seq_num, text, data, and time fields from the wired_log. Using the info_off field in the wired_log entry it will generate a pointer to the syserr_info_ entry associated with this syserr message. From this syserr_info_ entry it will get the rest of the data that goes into the syserr_log entry. Below is a description of the new syserr_log entry.

     dcl 1 smess based (smess_ptr) aligned,
           2 head like smess_header,                        /* 1. */
           2 name char(0 refer(smess.head.name_len)),       /* 2. */
           2 text char(0 refer(smess.head.text_len)),       /* 3. */
           2 data (0 refer(smess.head.data_size)) bit(36),  /* 4. */
           2 next_smess bit(36);                            /* 5. */

     dcl 1 smess_header based aligned,
          (  2 next bit(18),                                /* 6. */
             2 prev bit(18)) unaligned,                     /* 7. */
           2 seq_num fixed bin(35),                         /* 8. */
          (  2 action_code fixed bin(8),                    /* 9. */
             2 name_len fixed bin(8),                       /* 10. */
             2 text_len fixed bin(8),                       /* 11. */
             2 data_size fixed bin(8),                      /* 12. */
             2 time fixed bin(71)) unal;                    /* 13. */

1. head - The header of the syserr_log message entry.

2. name - The name of this syserr message.

3. text - The expanded ASCII message.

4. data - The binary data saved for this syserr message.

5. next_smess - Used to calculate the address of the next entry in the syserr_log.

6. next - The offset of the next entry in the syserr_log.

7. prev - The offset of the previous entry in the syserr_log.

8. seq_num - The sequence number of this syserr message.

9. action_code - The action code of this syserr message. It tells how syserr_real processed this message.

10. name_len - Number of characters in the string that specifies the name of this syserr message.

11. text_len - Number of characters in the ASCII message string.

12. data_size - Number of words of binary data.

13. time - Raw clock time when the message was logged.

User Ring Syserr Processing

User ring programs may process syserr messages that have been logged. They will be able to get syserr messages directly from syserr_log or from one of the system log segments. They will be able to select syserr messages based on syserr message name, sequence number, action code, the time the message was logged, and sort code. They may print the message text as is, since it is already in ASCII and completely expanded. If the message has any binary data they must decide how to format it. This decision can be made using the format code for this message. If the format code is 0, or if it is not known to the program, then the binary data may be formatted as if it were an octal dump. However, if the format code is equal to some prearranged value that the user ring programs understand then they will be able to format the binary data in some special way. For example, a format code of 1 may imply that the binary data is SCU data. A format code of 2 may imply that it is history register data, etc. Neither the sort code nor the format code is found in the syserr_log entry.
They are found in the syserr_info_ entry associated with this syserr message. Any program that wants to find the syserr_info_ entry associated with a syserr_log entry must do the following:

1. Get the syserr message name from the syserr_log entry.

2. Using this entry point name and the segment name "syserr_table_", call hcs_$make_ptr to get a pointer to the syserr_table_ entry that corresponds to this syserr message.

3. Using the info_offset found in the syserr_table_ entry, generate a pointer to the corresponding syserr_info_ entry.

It may not be obvious to the reader why the syserr message name is saved in the syserr_log entry instead of the offset of the syserr_info_ entry itself. It is true that if the offset were saved then the algorithm described above would not be necessary. However, this offset is not saved in the syserr_log entry for the following reason. Syserr messages will be saved in the syserr_log and the system log segments for long periods of time, possibly months or even years. During this time it is inevitable that syserr messages will be added to and deleted from the system. The syserr_table_ mechanism must be able to process syserr messages that were generated from old versions of syserr_table_ and syserr_info_. Unless these segments are formatted in a very inefficient way, the offsets of their syserr message entries will change each time syserr_table_.st is recompiled. Thus we need to put something in the syserr_log entries that will identify a syserr message for all time. The syserr message name is such an entity.

If a program is processing a syserr message that has become obsolete, then there will be no corresponding entry in syserr_table_ or syserr_info_. The call to hcs_$make_ptr will not be successful, since it uses an unknown syserr_table_ entry point name.
The program will know that this syserr message is obsolete and will use default values for the information that it would have found in the syserr_info_ entry.

The ability to add and delete syserr messages from syserr_table_ and syserr_info_ is an important feature. Just as important, however, is the ability to change the information about a syserr message that is kept in these segments. Information relevant to a syserr message at the time it was generated (action code, text, binary data, time) is saved in the syserr_log entry. Information that is used at a later time to process, interpret, and describe this syserr message is kept in its syserr_info_ entry. Changing the description of a syserr message means that the new description will be available for past as well as future instances of that syserr message.

At this time, the function of the sort and format codes is not clearly understood. What is understood, however, is that the fields in a syserr_info_ entry such as the description, sort_code, and format_code do not affect the ring 0 processing of syserr messages. They represent a convention understood by the writers of syserr_table_.st source statements, syserr_table_compiler, and user ring programs that process syserr messages from the syserr_log.

Advantages of the Tabularization of Syserr Messages

1. The new syserr calling sequence will generate less object code. The main savings is due to the fact that the formline_ control string is no longer part of the object segment. Also, the most frequently used of the new syserr entry points, syserr$message, has one less argument than the current syserr entry point. Since all calls to syserr are made with descriptors, this implies that four words will be saved in each of these calls.

2. One could say that the savings described above are nullified by the space needed by syserr_table_ and syserr_info_ entries. However, this is only partly true.
First, the pages used by these two segments are only referenced when a syserr call is actually made. This is an infrequent occurrence. Some syserr messages are almost never used. With the old calling sequence the space used by these calls was in the object text and was therefore active each time the program was executed. Second, some syserr calls in different programs use the same message. With the old calling sequence the control string is duplicated in the object text of each program. The new calling sequence can eliminate this duplication. Many syserr messages have the same control string except for a program name. Such messages could be changed to use a single syserr code by having the program names specified as arguments.

3. The tabularization of syserr messages will result in a significant improvement in the administrative control over the use of this system function. Programmers modifying ring 0 programs will no longer be able to add, delete, or change syserr messages at will. They will have to change syserr_table_.st, and this should require an MCR. Changes to syserr_table_.st should be noted on the system change request form.

4. The syserr_table_.st source segment will be an instant source of documentation about syserr messages. The description of each syserr message will be especially helpful. In addition to the source segment itself, a program could be developed that would format this source segment as an actual document. Programs could also be developed that would return selective information about a syserr message.

5. The format and sort codes defined for each syserr message will be very helpful in processing syserr messages that have been logged. New syserr message processing features can easily be added, since this whole area of the syserr mechanism is merely a convention among user ring programs.

BINARY DATA

Recently, certain programs have been putting large amounts of binary data into the syserr_log.
History registers (128 words) and SCU data (48 words) have been put into the syserr_log. In the future, device status and other information associated with I/O device errors will be logged. The current syserr mechanism does not allow this to be done either conveniently or efficiently. The major problems involved with logging binary data via the current syserr mechanism are:

1. It is inconvenient for the calling program. It must go through the trouble of breaking up the binary data into pieces that syserr can handle.

2. Because the data must first be broken up, programs usually call syserr with only four words of data at a time. For example, in order to put all of the history register data into the syserr_log, 32 calls are made to syserr. The multiplicity of calls that results from giving syserr only a few words at a time is very inefficient.

3. Due to the multiple wired_log entries generated, each of which has header information, and due to the conversion from binary to ASCII, the wired_log entries for 128 words of binary data now use 648 words. When syserr is called from a program that is masked down to system level the log interrupt is inhibited. If this program repeatedly calls syserr the wired_log will overflow and messages will be lost from the log. Currently, if a program that is masked down to system level attempts to put history register data into the log, most of the syserr messages will be lost from the log. With the current implementation, in order to make the wired_log large enough to hold all of the history register data we would have to increase its size to 750 words, five times its current size of 150 words.

4. Since many log entries are needed to put large amounts of data into the syserr_log, it is possible for these entries to be interleaved in the syserr_log with other entries generated by the same program while it is simultaneously running on another processor.
It is likely that in such a case the data retrieved from the syserr_log would not be interpretable.

In order to solve these problems the new entry point to syserr described below will be implemented. It is designed to meet the following goals:

1. The calling program must be able to pass binary data to syserr in a convenient manner.

2. A reasonably large amount of data must be processed by a single call to syserr.

3. The binary data should not be converted to ASCII. It should be put into the syserr_log in its original binary format.

     syserr$binary (syserr_table_code, data_ptr, data_size, arg1, ..., argn)

ARGUMENTS:

data_ptr (Input) (ptr)
     Pointer to the first word of binary data to be logged.

data_size (Input) (fixed bin)
     The number of words of binary data to be put into the syserr_log. A maximum data size of 128 words will be allowed.

The entry point syserr$binary is an ALM interface to the entry point syserr_real$binary. syserr_real$binary performs all of the functions that are performed by syserr_real$message. It will support all of the defined syserr action codes. It will generate an ASCII string from the formline_ control string found in the syserr_table_ entry for this message. This ASCII string will be placed in the wired_log and later copied into the syserr_log. The ASCII string will be typed on the operator's console if this is specified by the syserr action code. In addition, syserr_real$binary will copy into the wired_log all of the binary data specified by the data_ptr and data_size arguments. This data will not be converted into ASCII. It will not be typed on the operator's console regardless of the action code. When syserr_logger handles the log interrupt it will copy all of this binary data into the corresponding syserr_log entry.

The log entry header for both the wired_log and syserr_log will be changed to include the size of the binary data that is contained in the entry. If this value is zero then there is no binary data.
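The space saved by keeping the data in binary can be estimated directly from the new wired_log entry layout. The sketch below is illustrative only: the 4-word header, the single trailing next_wmess word, and 4 characters per word are assumptions read off the declarations, not measured figures.

```python
# Sketch of wired_log space accounting under the new entry format:
# one entry = header + ASCII text (4 chars/word) + raw binary data
# + the next_wmess word. Binary data is stored as-is, so 128 words
# of history registers cost 128 words plus one fixed overhead,
# instead of being split across 32 four-word syserr calls.

import math

HEADER_WORDS = 4   # assumed: seq_num + packed (info_off, lengths) + 2-word time
TRAILER_WORDS = 1  # next_wmess

def wmess_entry_words(text_len, data_words):
    """Words consumed in the wired_log by one new-format entry."""
    return HEADER_WORDS + math.ceil(text_len / 4) + data_words + TRAILER_WORDS

# One call carrying all 128 words of history-register data and a short
# message stays a little over 128 words, versus the 648 words the memo
# reports for the old 32-call, binary-to-ASCII scheme.
single = wmess_entry_words(text_len=32, data_words=128)
assert single == 4 + 8 + 128 + 1 == 141
assert single < 648
```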
The other syserr_real entry points will always set this field to zero.

User ring programs will have to convert any binary data found in a log entry into a printable format. Instead of this conversion being done by syserr_real, a critical ring 0 program, it will be done in a higher ring. The format code found in the syserr_info_ entry can be used to tell user ring programs how this binary data should be formatted. As a default the binary data can be printed as if it were a dump.

ERROR TABLE MESSAGES

Many calls to syserr contain an error_table_ code as one of the arguments to formline_. This code is usually converted and printed as an octal number. This method of using error_table_ message codes within syserr messages has the following major disadvantages:

1. It is not easy for an operator who sees such a message typed on the operator's console to know what error_table_ message is being referenced. He must look in the source listing of error_table_.alm. Using the octal entry offset obtained from the syserr message he can then find the error_table_ message.

2. Finding the error_table_ message from these syserr messages once they have been logged may often be impossible. The error_table_ entry offsets that reference a previous version of the error_table_ will not be valid.

The new syserr entry point described below is intended to improve the use of error_table_ messages with syserr messages. Since this entry point will reference the unwired segment error_table_, it must not be called by any programs that cannot take page faults.

     syserr$error_code (syserr_table_code, error_table_code, arg1, ..., argn)

ARGUMENTS:

error_table_code (Input) (fixed bin(35))
     A standard error_table_ code.

The entry point syserr$error_code is an ALM interface to the entry point syserr_real$error_code. It performs all of the functions that are performed by syserr_real$message. In addition, it will use the error_table_code argument to obtain a message string from the system error_table_.
This message string will be appended to the expanded syserr message string. The concatenated string will be logged. If appropriate, the concatenated string will be typed on the operator's console.

SYSERR SOURCE LANGUAGE

This section discusses the source language used to define syserr messages. The definitions of all of the syserr messages will be combined in the single segment syserr_table_.st. The definition of a syserr message is comprised of several statements. An informal description of these statements is given below.

General Statement Syntax

     <statement>::= <statement name>: <statement variable>;

NAME STATEMENT

A name statement must be the first statement in the definition of a syserr message. The statement variable is the name of this syserr message. This name will become an entry point in the segment syserr_table_. Each syserr message name must be unique within syserr_table_.st.

END STATEMENT

An end statement must be the last statement in the definition of a syserr message. The statement variable is the name of this syserr message. It must match the name specified on the preceding name statement. Between the name statement and the end statement will be all of the other statements that define this syserr message. These statements may be in any order. Only one statement of each type is allowed in any one syserr message definition. The main purpose of the end statement is the convenience of those perusing a listing of the syserr_table_.st source segment. It identifies a syserr message definition that has spanned one or more pages of the listing.

ACTION STATEMENT

This statement defines the syserr action code for this syserr message. This statement is not optional. Syserr messages that use variable action codes (for example, those that use DEBG card values) must now be specified as separate messages. There must be one message for each possible action code. This is a rare case and its use should be discouraged.
The meaning of the various action codes is given below. The reader should note that the previously supported syserr action involving the termination of a process is no longer supported.

fatal - The message will be logged and then typed on the operator's console with alarm. Then a "Multics Not In Operation" message will be typed. Then Multics will be crashed.

write - The message will be logged and then typed on the operator's console without alarm.

write_alarm - The message will be logged and then typed on the operator's console with alarm.

log - The message will be logged. The message will not be written on the operator's console. However, if the message could not be logged due to a lack of space in the wired_log buffer then the message will be typed on the operator's console without alarm. The string "*LOST" will be prefixed to the syserr message.

log_only - The message will be logged. The message will not be written on the operator's console. If the message cannot be logged it will be lost without notification to the operator.

CONTROL STATEMENT

     <control statement>::= control: "<control string>";

The control statement is used to define the formline_ control string that is to be used to expand this syserr message. The variables defined within this formline_ control string must match the arguments passed in the call to syserr. The control string variable must be within quotes. Any quotes within the control string itself must be expressed as double quotes. This statement is not optional.

STATUS STATEMENT

     <status statement>::= status: <status variable>;
     <status variable>::= {wired|active|paged|init}

The status statement is used to define where this syserr message is to be placed in syserr_table_. All syserr message entries are grouped into one of four status classes. All of the entries from the same status class will be packed together in syserr_table_.
The wired status class entries will be placed at the top of syserr_table_, followed by the active status class entries, followed by the paged status class entries, and lastly followed by the init status class entries. The number of pages in syserr_table_ used by the wired status class entries will be calculated by syserr_table_compiler. These pages will be permanently wired at system initialization time. This is done in order to fulfill the requirement that all entries in syserr_table_ that are referenced by syserr_real on behalf of wired programs must themselves be wired.

This statement is optional. If it is missing a default status class of wired will be assumed. The exact meaning of the four syserr message status variables is:

wired - One of the calls to syserr that references this syserr message comes from a program that is wired.

active - This syserr message will be referenced only by paged programs. This syserr message is frequently used. The purpose of this status class is to hopefully put some of these frequently used paged syserr message entries into any unused space in the last page used by the wired syserr message entries.

paged - This syserr message will be referenced solely by paged programs.

init - This syserr message will be referenced solely during system initialization. By isolating this type of syserr message we can place these entries in pages at the end of syserr_table_. These pages will never be referenced after system initialization. Some calls to syserr come from initialization programs that are wired. However, this case occurs only during collection 1, when all of syserr_table_ is wired. Thus syserr messages whose status is both wired and init should be defined as init.

DESCRIPTION STATEMENT

     <description statement>::= description: "<description string>";

This statement is used to specify a description of the syserr message. The description string may contain any characters suitable for printing.
Quotes within this string must be expressed as double quotes. The description string may be as long as necessary to completely describe the meaning and reason for this syserr message. If appropriate, it should include a description of the action to be taken by the operator in response to this syserr message. This is an optional statement. If it is missing a null string will be used as a default.

SORT STATEMENT

     <sort statement>::= sort: <sort code>;

The sort code variable specified in this statement must be a non-negative decimal number. This variable specifies that this syserr message belongs to a particular sort class. The user ring programs may support an option that enables them to process only those syserr messages that belong to a particular sort class. The sort class numbers must be used according to conventions agreed upon by the writers of syserr_table_ statements. This statement is optional. If it is missing, a default sort code value of 0 will be used.

FORMAT STATEMENT

     <format statement>::= format: <format code>;

The format code variable specified in this statement must be a non-negative decimal number. This variable specifies the format to be used when printing binary data. The format codes must be used according to conventions understood by the writers of syserr_table_ statements and the user ring programs. This statement is optional. If it is missing a default format code value of 0 will be used. A format code of 0 implies that the binary data is to be printed as an octal dump.

NOTES

Spacing characters (blanks, tabs, newline characters, new page characters) may appear between statements and between the elements of a statement. PL/I-type comment strings, /*...*/, may appear anywhere that a spacing character may appear. (See Appendix A for sample definitions of syserr messages using this source language.)

IMPLEMENTATION PLAN

1. The syserr_table_compiler must be implemented.
It probably should be coded using the reduction_compiler.

2. The syserr_table_.st source segment must be generated. As calls to syserr are converted, their syserr messages must be defined in syserr_table_.st.

3. The programs syserr and syserr_real must be changed. The current syserr entry point must be maintained until all calls to syserr have been converted to use one of the three new entry points. Since there is no syserr_table_code argument passed to the current syserr entry point, default table information will be used. In order to work with the new wired_log and syserr_log entry formats, this entry point will use dummy syserr message entries that will be defined in syserr_table_.st. There will be one dummy message entry for each action code. The control strings in the syserr_table_ entries for these dummy messages will not be used by syserr_real. The three new syserr entry points and their corresponding syserr_real entry points must be implemented.

4. A change should be made to the syserr message text that is typed on the operator's console. The sequence number of the message should be included in the message text. This sequence number may then be used by the operator to obtain more information about the message.

5. The program syserr_logger must be changed to work with the new wired_log and syserr_log entry formats.

6. The program init_collections must be changed. The program syserr_log_init must be moved from collection 1 to collection 2.

7. The syserr_data data base must be changed. The size of the wired_log buffer should be doubled to 300 words. This will allow at least two syserr messages with the maximum amount of binary data to be logged.

8. All user ring programs that process syserr messages from the syserr_log must be changed to use the new syserr_log entry format. They must be changed to process binary data and to use the format and sort codes that are available in syserr_info_.

9. A new program should be implemented that would print selective information from a syserr_info_ entry. It should be able to find this entry given either a syserr message name or a valid syserr message sequence number.

10. A new program should be implemented that can generate a formal document from the syserr_table_.st source segment.

11. A new program should be implemented that would merge private versions of the syserr_table_.st source segment into one source segment. Any number of source segments of the type aaaaaa.st could be used as input. The result would be a new source segment with the name syserr_table_.st.

12. The programs that call syserr will have to be changed. They do not all have to be changed at once. The syserr calls that involve binary data should be changed first. Then, as ring 0 programs are added or changed, we can require that they use the new syserr calling sequence in order to be installed.

Appendix A

Sample Syserr Message Definitions

     /* This is a sample definition of a syserr message
      * using the syserr_table_compiler source language.
      *
      * call syserr$message (syserr_table_$bad_devx, devx);
      */

     name: bad_devx;
     action: fatal;          /* Crash the system. */
     control: "iom_manager: bad devx ^o supplied.";
     status: wired;
     format: 0;              /* No binary data. */
     sort: 1;                /* Sort class 1. */
     description: "This syserr message is generated when iom_manager is called with a bad device index. There is nothing the operator can do.";
     end: bad_devx;

     /* call syserr$message (syserr_table_$mylock, "name", lockp); */

     name: mylock;
     action: fatal;          /* Fatal error. */
     control: "^a: mylock error on ^p.";
     status: paged;          /* Fatal action => not active. */
     format: 0;
     sort: 2;                /* File system error. */
     description: "This syserr message is generated by file system programs that find a lock already locked to a process.";
     end: mylock;
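The statement syntax is regular enough that a toy parser conveys the front-end job of the compiler. The sketch below is Python for illustration only; parse_syserr_message and its defaulting rules are hypothetical, not the real reduction_compiler-based stc, and doubled quotes inside strings are not unescaped.

```python
# Toy parser for the syserr_table_.st statement syntax:
#   <statement> ::= <statement name>: <statement variable>;
# Comments /* ... */ are stripped, optional-statement defaults
# (status wired, sort 0, format 0, null description) are applied,
# and the name and end statements must match.

import re

DEFAULTS = {"status": "wired", "sort": "0", "format": "0", "description": ""}

def parse_syserr_message(source):
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.S)  # drop comments
    stmts = {}
    for name, value in re.findall(r'(\w+)\s*:\s*("(?:[^"]|"")*"|[^;]+);', source):
        stmts[name] = value.strip().strip('"')
    if stmts.get("name") != stmts.get("end"):
        raise ValueError("name/end statement mismatch")
    del stmts["end"]
    return {**DEFAULTS, **stmts}

msg = parse_syserr_message('''
name: bad_devx;
action: fatal;          /* Crash the system. */
control: "iom_manager: bad devx ^o supplied.";
sort: 1;
end: bad_devx;
''')
assert msg["name"] == "bad_devx" and msg["status"] == "wired"
assert msg["sort"] == "1" and msg["action"] == "fatal"
```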
Different Conceptions in Software Project Risk Assessment

Höst, Martin; Lindholm, Christin

Published in: Proceedings of the Conference on Software Engineering Research and Practise in Sweden, SERPS 2005

ABSTRACT

During software project risk management, a number of decisions are taken based on discussions and subjective opinions about the importance of identified risks. In this paper, different people's opinions about the importance of identified risks are investigated in a controlled experiment through the use of utility functions. Engineering students participated as subjects in the experiment. Differences were found with respect to the perceived importance, although the experiment could not explain the differences based on study program or undertaken role in a development course. The results and experiences from this experiment can be used when a larger experiment is planned.

1. INTRODUCTION

During project planning and management, procedures for risk management are crucial. This is, for example, acknowledged by the presence of risk management issues at level 3 in the Software Engineering Institute's Capability Maturity Model (e.g. [1]).
The objective of risk management is to identify relevant risks as early as possible in a project, in order to avoid or limit the effect of potential problems, such as project delays and cost overruns. More formally, risk management can be defined as "an organized process for identifying and handling risk factors; including initial identification and handling of risk factors as well as continuous risk management" [2].

Risk management is often carried out in a number of steps, e.g. risk identification, risk analysis, risk planning, and risk monitoring [6]. During risk identification, risks are identified by relevant people, e.g. by using checklists and brainstorming techniques. The identified risks are prioritized with respect to their probability of actually occurring in the project and their potential impact. The risks that are expected to have both high probability and large unwanted effects are the most important risks to continue to work with in the process. In the risk-planning step, plans are made in order to either lower the effects of the prioritized risks, lower their probability, or prepare for what to do if they actually occur. In the monitoring step, the risks are monitored during the course of the project.

There are, of course, no clear and objective rules available for how to prioritize the identified risks in the second step. This is instead carried out through discussions and subjective evaluations, where participants have different values and see the risks in different ways [5]. This means that there is a need to investigate methods that can help decision-makers in this discussion. In this paper, utility functions, as described below, are used in order to investigate these different values. It is, for example, important to know whether different people have different opinions about how important different risks are for a project. Utility functions (e.g. [8]) describe how different people value a property.
For example, a utility function could describe how people value the expected life-duration after different alternative medical treatments. If the utility function is linear, a life-duration of $2x$ years would be perceived as twice as good as a life-duration of $x$ years. The utility function does, however, not have to be linear, which affects how people make decisions when choosing between different treatments. In software engineering, relevant properties to study include, for example, the expected delay of a project and the number of faults that remain in the product after delivery. Based on the shape of the utility function it is possible to discuss whether different individuals act as risk-averse, i.e. they tend to avoid risks and choose a lower safe gain instead of an uncertain high gain, or risk-seeking, i.e. seeking a possible high gain instead of a more certain lower gain.

Safety critical projects include, as all other projects, a large amount of software. In all projects, risk management is important, and especially typical project-related risks play an important role. When it comes to risks that are more related to the product, e.g. the number of persistent faults in the product, they are very important in safety critical systems for two reasons. One reason is obviously that it is important to identify these risks as early as possible in order to secure the quality of the developed product. The second reason is that it is important to limit the number of problems during the project even if the quality of the product, with respect to the number of dormant faults, is acceptable when the product is delivered. This is because a large amount of changes during a project decreases the maintainability of the code, which may lower quality later on and thereby result in new faults.

The outline of the paper is as follows. In Section 2, the Trade-off method for deriving utilities is presented and the usage of utility functions in software risk assessment is discussed.
In Section 3 the research method and the research questions are presented, and the results are presented and discussed in Section 4. In Section 5 conclusions are presented.

2. ESTIMATION AND USAGE OF THE UTILITY FUNCTION

2.1. The Trade-off method

The objective of the Trade-off (TO) method is to estimate the utility function for one person, i.e. decide how this individual perceives different values of a factor. First the TO method is explained, and then the usage of utility functions is further discussed.

According to the TO method [8] the subject is iteratively asked to compare different "lotteries". A lottery is shown graphically in Figure 1.

Figure 1. A lottery.

Figure 1 shows that one of two events (event 1 and event 2) will occur, i.e., if the probability of event 1 is $p$, then the probability of event 2 is $1-p$. If event 1 occurs this will lead to result 1, and if event 2 occurs this will lead to result 2. An example of possible values for the lottery is shown in Table 1.

<table>
<thead>
<tr>
<th>Property</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>Event 1</td>
<td>Design expert NN is unable to follow the project</td>
</tr>
<tr>
<td>Event 2</td>
<td>Design expert NN is able to follow the project</td>
</tr>
<tr>
<td>Result 1</td>
<td>The revenue of the project will be 100 KEUR</td>
</tr>
<tr>
<td>Result 2</td>
<td>The revenue of the project will be 200 KEUR</td>
</tr>
</tbody>
</table>

Table 1. An example of a lottery.

The meaning of this example is that either the design expert will be able to follow the project or not. He/she may, for example, be ill during the project or transferred to another project. If the design expert is able to participate in the project, the expected revenue is 200 KEUR, and if he/she is unable to participate in the project the expected revenue is 100 KEUR.

In the TO method participants should iteratively compare pairs of lotteries. An example of a pair of lotteries is shown in Figure 2.

Figure 2. A pair of lotteries.

The upper lottery in Figure 2 shows what could happen if one condition is true (an old design is chosen) and the lower shows what could happen if another condition is true (a new design is chosen). The probabilities of the events are assumed to be independent of the conditions, i.e. the probability that design expert NN will be able to participate in the project is the same in the two lotteries. An advantage of the TO method compared to other methods for eliciting utility functions is that the value of the probability need not be explained to the person using the method.

In the TO method the subject is first asked to select a value of the revenue in the second lottery (Y in Figure 2) that makes the two lotteries equally attractive. When this has been done the subject is asked to compare two new lotteries. These two lotteries are similar to the first two lotteries, but with value X (see Figure 2) changed to the value that the subject chose in the last question. The subject is now asked to give a new value of Y that makes these two lotteries equally attractive. This process is iterated in order to give values of the utility function for the result factor (i.e. revenue in Figure 2).

If the X-value in the first comparison is called $x_0$, the first Y-value is called $x_1$, the second Y-value $x_2$, etc., then it can be shown that the utility function $u$ can be estimated as [8]

$$u(x_i) = i \times u(x_1)$$

which can be normalized to

$$u(x_i) = i \times a$$

where $a = 1/n$, and $n$ is the number of Y-values given by the subject. The proof for this is not provided in this paper; instead the interested reader is referred to [8].

In Figure 3 a hypothetical example of a utility function is shown. This example shows a concave curve. If this example denotes the utility of a gain, e.g. in monetary terms, it means that the subject values lower values relatively higher than higher values.
Often people with these values are denoted risk-averse, i.e., they prefer lower values with low risk before higher values with higher risk. The above discussion concerning the meaning of a concave utility function is relevant when the utility function denotes something positive, such as revenue. Since the concept of utility functions, at least as we see it, is easier to understand for results where a large amount is better than a small amount (e.g. revenue), the presentation in this section is based on revenue as an example. However, in the experiment that is presented in the remainder of the paper, the subjects compared e.g. different values of the number of remaining faults in a program. The value of this parameter should of course be as low as possible.

2.2. Tool

In the TO method the questions that are asked to the subject should, as described in Section 2.1, be based on the previous answer given by the subject. For example, if the subject answered "250" in the last round, then "250" should be one of the results that the subject should compare to in the next round. This means that it is hard to use the TO method based on completely pre-developed and parameterized instrumentation, e.g. paper forms. For the purpose of the research presented in this paper, a very simple tool was developed. A screen-shot from the tool is presented in Figure 4. From the screen-shot it can be seen that the appearance of the tool was not identical to the questionnaire that is described in [8]. In [8] the decision tree (e.g. Figure 2) was graphically presented to the subjects. In Figure 4, the user is asked to answer the same question as in Figure 2. If the user answers "250", the next question will be as presented in Figure 5.

2.3. Interpretation of utility functions in software engineering risk assessment

The factors that are considered in software risk assessment often refer to negative aspects and not to positive aspects.
For example, factors such as the number of remaining faults, delay, etc. are analysed instead of positive factors such as revenue, life-duration, etc. This means that the interpretation of the utility function cannot be carried out exactly as described in Section 2.1. In [3] the typical shape of utility functions for losses, e.g. in monetary terms, is discussed.

In this paper the focus is on determining the shapes of the utility functions and the differences between different people's shapes. The focus is not that much on the interpretation of the utility functions. However, a short attempt to explain the meaning of different shapes is given. If the utility function, e.g. for the remaining number of faults, is concave (i.e. as in Figure 3), this means that relatively the effect of every fault is higher if there are few faults than if there are many faults. This means that a person with this interpretation thinks that 2x faults are less than twice as serious as x faults. If this person were to choose between a fixed value x and a lottery with value 0 with probability 1/2 and value 2x with probability 1/2, this person would probably choose the lottery, since the expected utility value of the lottery is lower than for the fixed value x. Since this person chooses the lottery instead of the fixed value, we say that a person with a concave utility function is risk seeking. If the function is convex, the value of every fault is higher if there are many faults compared to if there are few faults. We say that a person with a convex utility function is risk averse.

Imagine a situation where a person should compare two different alternative ways of handling a risk in a project. Based on subjective evaluations it might be estimated that one of the alternatives will result in a certain expected amount of remaining faults and the other alternative will result in a higher amount of expected faults.
In this case a person with a concave utility function would probably not see the second alternative as negative as a person with a convex utility function would. This will of course affect how different people act during discussions on risk evaluation during risk management. It is therefore interesting to investigate how similar the utility functions of different people are.

3. RESEARCH DESIGN

3.1. Introduction

The objective of the research presented in this paper is to investigate the shape of utility functions for factors that are relevant in software project risk management. More specifically, the research questions are as follows:

- RQ1: What is the distribution between convex, concave and linear utility functions for properties that are relevant in software project risk assessment?
- RQ2: Is there any difference between different roles in a project with respect to the shape of the utility functions?
- RQ3: Is there any difference between the backgrounds of people with respect to the shape of the utility functions?
- RQ4: Is there any difference between the shapes of the utility functions for normal projects and projects developing safety-critical products?

The research questions are investigated in an experiment where students act as subjects.

3.2. The experiment

The experiment was conducted as a part of a software engineering project course given at LTH Campus Helsingborg during the spring of 2005. The students followed programmes in Computer Science, Software Engineering, Electrical Engineering, and Multimedia. The course is attended in the 2nd year of their university studies. The course where the experiment was conducted is a project course where the students work in projects with typically 17 persons in each project. All projects are given the assignment of implementing a number of services for a very basic telephone switching system.
In the beginning of the course the students are given a basic version of the system where only basic functions, such as providing simple telephone calls and managing what happens if the called party is already involved in a telephone call, are provided. Their assignment is to develop more advanced services such as call forwarding, billing, etc. The project group should follow a software development process based on the waterfall model, with steps such as project planning, requirements engineering, implementation, and testing. This experiment was conducted during the test phase of the project, i.e. after the project planning was carried out. In every project group the students are divided into the following roles:

- Project leaders
- Technical responsibility
- Developers
- Testers

The experiment was conducted during a seminar where all students participated. At the seminar the seminar leader first held a lecture on risk management, and then the students carried out the tasks of the experiment. In the experiment the utility function of every student was elicited with the TO method. The factor of interest was the remaining number of faults after delivery of a software system. In the assignment the students were presented with two scenarios (scenario 1 and scenario 2). Scenario 1 is based on the project assignment that they were involved in. The scenario was presented as follows (slightly modified and translated from Swedish to English):

Assume that there in your project was a design expert (NN) who could decide the design. NN is part of the "technical responsibility" group of your project, and NN has some new ideas about the design that are not exactly as the teachers in the course have thought about the design. The design proposed by NN is called "new design" and the ordinary design, as proposed by the teachers, is called "old design".
Based on experience data, the project leaders estimate that there will be a certain amount of faults remaining in the product at the acceptance test. Consider the following four cases:

Case 1A: The old design is used and NN is able to participate in the project. Then there will be 5 faults at the acceptance test.

Case 1B: The old design is used and NN is unable to participate in the project due to illness. Then there will be 6 faults at the acceptance test.

Case 2A: The new design is used and NN is able to participate in the project. Then there will be 2 faults at the acceptance test.

Case 2B: The new design is used and NN is unable to participate in the project due to illness. How many faults can there be at the acceptance test if the two designs should be equally attractive?

Scenario 2 is based on another system than the one they worked with in the course; it instead describes a safety critical system. Scenario 2 was presented as follows (slightly modified and translated from Swedish to English):

In an intensive care unit you have surveillance equipment connected to the patient that monitors the patient's condition. Different values are continuously registered, such as the patient's absorption of oxygen, cardiac activity, etc. The values are analysed by software in the surveillance equipment. The surveillance equipment sends an alarm if the analysed values in any way diverge from the normal values. If no attention is paid to the abnormal values (i.e., in the absence of an alarm), it can cause severe injury to the patient and in some cases even death. There is a great risk for serious damage if the alarm fails. The personnel need proper training to be able to connect and manage the surveillance equipment correctly. Most of the personnel have this type of training, but sometimes they do not have the training, due to lack of time. If the surveillance equipment is connected the wrong way there is a risk for absence of alarm, and the patient is exposed to danger.
Now there is a desire to try new software in the surveillance equipment. Consider the following four cases:

Case 1A: The present software is used. The personnel are trained on the surveillance equipment. At 7 occasions in a three-month period, there was absence of alarm from the surveillance equipment, despite the fact that there should have been alarms.

Case 1B: The present software is used. In this case personnel who have not received proper training on the equipment use the equipment. At 9 occasions in a three-month period, there was absence of alarm from the surveillance equipment, despite the fact that there should have been alarms.

Case 2A: The new software is used. The personnel are trained on the surveillance equipment. At 4 occasions in a three-month period, there was absence of alarm from the surveillance equipment, despite the fact that there should have been alarms.

Case 2B: The new software is used. In this case personnel who have not received proper training on the equipment use the equipment. How many alarms can be missed if the new software should be equally attractive?

The students were also given instructions on how to use the tool that is presented in Section 2.2. They used the tool when they answered questions iteratively according to the TO method. All students first worked with scenario 1 and then with scenario 2.

In the analysis the result from each student is characterized as concave, convex, linear or "other". A curve is classified as "other" if it does not have the same shape (convex or concave) for all x-values, e.g. if the first half of the curve is convex and the second half is concave. In order to investigate research question RQ1 the data from all students are pooled and the number of curves of each shape is compared.
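The two analysis steps just described (turning a subject's iterative TO answers into utility points, and classifying the resulting curve as concave, convex, linear or other) can be sketched as follows. This is a minimal illustration; the example answers are hypothetical and not the study's data:

```python
def estimate_utility(points):
    """Turn TO-method indifference points [x0, x1, ..., xn] into
    (x_i, u(x_i)) pairs, using u(x_i) = i * a with a = 1/n."""
    n = len(points) - 1
    return [(x, i / n) for i, x in enumerate(points)]

def classify_shape(points, tol=1e-9):
    """Classify the curve through the TO points as 'concave', 'convex',
    'linear' or 'other'. Utility levels are equally spaced, so each
    segment's slope is inversely proportional to its x-gap: widening
    gaps mean a flattening (concave) curve, shrinking gaps a convex one."""
    gaps = [b - a for a, b in zip(points, points[1:])]
    diffs = [g2 - g1 for g1, g2 in zip(gaps, gaps[1:])]
    if all(abs(d) <= tol for d in diffs):
        return "linear"
    if all(d >= -tol for d in diffs):
        return "concave"
    if all(d <= tol for d in diffs):
        return "convex"
    return "other"

# Hypothetical answers from four TO rounds (e.g. revenue in KEUR):
answers = [100, 160, 240, 340, 460]
curve = estimate_utility(answers)   # (100, 0.0) ... (460, 1.0)
shape = classify_shape(answers)     # widening gaps -> "concave"
```

Classifying by the x-gaps between equally spaced utility levels is one simple convention; a mixed pattern of widening and shrinking gaps falls into the "other" category, matching the rule used in the analysis.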
In order to investigate research question RQ2 the following independent and dependent variables [9] have been defined for the experiment:

- Independent variable: role in the project
- Dependent variable: number of curves of each shape

In order to investigate research question RQ3 the following variables have been defined:

- Independent variable: study program
- Dependent variable: number of curves of each shape (i.e. the same dependent variable as for RQ2)

In order to investigate research question RQ4 the following variables have been defined:

- Independent variable: scenario
- Dependent variable: number of curves of each shape (i.e. the same dependent variable as for RQ2 and RQ3)

That is, for all four research questions, the number of people with a certain shape of the utility function was chosen as the dependent variable. The analysis is presented in Section 4.

3.3. Validity

In order to evaluate the validity of the study, a checklist from [9] is used. Validity threats may be classified into the following four classes: conclusion validity, construct validity, internal validity, and external validity.

Conclusion validity is related to the possibility of drawing correct conclusions about relations between the independent and dependent variables of the experiment. Typical threats of this type are, for example, using wrong statistical tests, using statistical tests with too low power, or obtaining significant differences by measuring too many dependent variables ("fishing and the error rate"). Since, as shown in Section 4, in most analyses there is no possibility to determine any differences with statistical significance, there is no such risk. However, when discussing whether this means that there are no differences, it is important to remember that this may also be due to few data points. This is further discussed in Section 4.
Internal validity is affected by confounding factors that affect the measured values outside the control, or knowledge, of the researcher. This may, for example, be that the groups of subjects carried out their assignments under different conditions, or maturation of participants. In order to lower the internal threats in this experiment, all students carried out the assignment at the same time during a 90-minute seminar at which one of the researchers was present. One threat to this study is that the two scenarios were analysed in the same order by all students. This should be taken into account when the difference between the scenarios is analysed, i.e. when RQ4 is analysed. The reason for letting every participant work with the scenarios in the same order was that it was seen as positive that the students started with a scenario that presents a familiar project and system, i.e. a situation that is related to the course.

Threats to construct validity denote the relation between the concepts and theories behind the experiment and the measurements and treatments that were analysed. We have not identified any serious threats of this kind.

External validity reflects primarily how general the results are with respect to the subject population and the experiment object. The intention is that the subjects in this experiment should be representative of engineers working with this type of estimation in live projects. As we see it, the largest threat to validity is of this kind. It cannot be concluded with any large validity that the students that participated in this experiment are representative of professional practitioners. Scenario 2 is not in any way related to the students' course work, but scenario 1 was based on the projects that the students participated in during the course. However, the scenario was still a hypothetical scenario, and it was studied in the testing phase of the project, i.e. after the phase where risk assessment would be carried out in a real project.
According to [4], controlled experiments can be classified according to two dimensions, as displayed in Table 2. According to this classification, the experiment could be classified as (I2, E1) with respect to scenario 1 and as (I1, E1) for scenario 2. For an experiment classified in this way, it may be important to reflect on how valid the results are. In this case we believe that the results primarily could serve as a basis for continued experiments in the area. It is important to include more experienced people in continued experiments. The results from this experiment are, however, important when these experiments are planned.

4. RESULTS

4.1. Results from the empirical study

The experiment was conducted with 47 students, but one of them did not hand in any results, which means that 46 students completed the tasks of the experiment. The number of subjects that completed scenario 1 was 44, since 2 of the subjects were discarded because the scenario was only iterated three times; the minimum number of iterations was set to four. In scenario 2, 3 subjects were discarded for the same reason, so the number of subjects retained for further analysis was 43 in scenario 2.

In order to investigate research question RQ1 the distribution of utility functions was analysed. The distribution between concave, convex, linear and other utility functions for the two scenarios is displayed in Table 3. The result is presented in percent of the total number of subjects for each scenario, and in absolute figures in parentheses. In scenario 2 the linear utility functions dominate and there are only a few concave functions. Scenario 1 does not show this type of dominance.

The students had different roles in their project groups. There is a difference in the number of students connected to the various roles. The largest group was developers (18 students) and the smallest group was project leaders (6 students).
The values for each role and type of utility function are presented in Table 4. It can be seen in scenario 1 that both project leaders and those with technical responsibility show convex utility functions, whereas most of the developers and testers show linear utility functions. In scenario 2, the linear utility functions dominated for all the roles.

Not only the roles but also the backgrounds of the students differ, so the background (i.e. study program) was used as a variable against the number of curves of each shape. The students come from four different engineering programs: Computer Science, CS (12), Software Engineering, SE (8), Electrical Engineering, EE (9), and Multimedia, MM (17). Table 5 shows for scenario 1 that the students from Software Engineering have the same amount of convex and linear utility functions, and so have also the students from Multimedia. The students from the Electrical Engineering program have the same amount of concave and convex utility functions, while for the Computer Science students the majority of the utility functions are linear.

The data has been analysed with a number of chi-2 tests [7], as summarized in Table 6. In the analysis, data from people with responses other than convex, concave and linear was discarded. In analysis 1, a chi-2 goodness-of-fit test was carried out in order to see whether the three shapes were equally probable. Data from both scenarios was pooled (denoted "1+2" in the 4th column). It was clear that the shapes were not equally probable.
<table> <thead> <tr> <th>Incentive</th> <th>Experience</th> </tr> </thead> <tbody> <tr> <td>I1: Isolated artefact</td> <td>E1: Undergraduate student with less than 3 months recent industrial experience</td> </tr> <tr> <td>I2: Artificial project</td> <td>E2: Graduate student with less than 3 months recent industrial experience</td> </tr> <tr> <td>I3: Project with short-term commitment</td> <td>E3: Academic with less than 3 months recent industrial experience</td> </tr> <tr> <td>I4: Project with long-term commitment</td> <td>E4: Any person with industrial experience between 3 months and 2 years</td> </tr> <tr> <td></td> <td>E5: Any person with industrial experience for more than 2 years</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Role</th> <th>Scen</th> <th>Concave</th> <th>Convex</th> <th>Linear</th> <th>Other</th> </tr> </thead> <tbody> <tr> <td>Project leaders</td> <td>1</td> <td>17% (1)</td> <td>50% (3)</td> <td>33% (2)</td> <td>0% (0)</td> </tr> <tr> <td>Technical responsibility</td> <td>2</td> <td>0% (0)</td> <td>40% (2)</td> <td>60% (3)</td> <td>0% (0)</td> </tr> <tr> <td>Developer</td> <td>1</td> <td>17% (3)</td> <td>28% (5)</td> <td>39% (7)</td> <td>17% (3)</td> </tr> <tr> <td></td> <td>2</td> <td>0% (0)</td> <td>29% (7)</td> <td>65% (11)</td> <td>6% (1)</td> </tr> <tr> <td>Tester</td> <td>1</td> <td>25% (3)</td> <td>17% (2)</td> <td>33% (4)</td> <td>25% (3)</td> </tr> <tr> <td></td> <td>2</td> <td>0% (0)</td> <td>23% (3)</td> <td>54% (7)</td> <td>23% (3)</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Program</th> <th>Scen</th> <th>Concave</th> <th>Convex</th> <th>Linear</th> <th>Other</th> </tr> </thead> <tbody> <tr> <td>Software Engineering (SE)</td> <td>1</td> <td>25% (2)</td> <td>38% (3)</td> <td>38% (3)</td> <td>0% (0)</td> </tr> <tr> <td></td> <td>2</td> <td>0% (0)</td> <td>17% (1)</td> <td>83% (5)</td> <td>0% (0)</td> </tr> <tr> <td>Multimedia (MM)</td> <td>1</td> <td>18% (3)</td> <td>29% (5)</td> <td>29% (5)</td> <td>24% (4)</td> </tr> <tr> <td></td> <td>2</td> <td>12% (2)</td> <td>22% (4)</td> <td>47% (8)</td> <td>18% (3)</td> </tr> <tr> <td>Computer Science (CS)</td> <td>1</td> <td>10% (1)</td> <td>30% (3)</td> <td>40% (4)</td> <td>20% (2)</td> </tr> <tr> <td></td> <td>2</td> <td>0% (0)</td> <td>27% (3)</td> <td>45% (5)</td> <td>27% (3)</td> </tr> <tr> <td>Electrical Engineering (EE)</td> <td>1</td> <td>33% (3)</td> <td>33% (3)</td> <td>11% (1)</td> <td>22% (2)</td> </tr> <tr> <td></td> <td>2</td> <td>0% (0)</td> <td>22% (2)</td> <td>78% (7)</td> <td>0% (0)</td> </tr> </tbody> </table> The value of this analysis is limited, but it does show that the shape that results from the method is not completely random. Concerning RQ1, the most important contribution lies in the distribution of the different shapes.
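How an elicited utility function could be sorted into one of the shapes above can be illustrated with a small sketch. This is a hypothetical procedure, not the authors' actual classification method: it assumes the function is sampled at equally spaced points and inspects the sign of the second differences.

```java
public class ShapeClassifier {
    // Classify equally spaced utility samples by the sign of their second differences.
    // tol is a tolerance below which curvature is treated as zero.
    static String classify(double[] u, double tol) {
        boolean allNonNeg = true;  // no clearly negative second difference seen
        boolean allNonPos = true;  // no clearly positive second difference seen
        boolean anyCurved = false; // at least one second difference exceeds tol
        for (int i = 1; i < u.length - 1; i++) {
            double d2 = u[i + 1] - 2 * u[i] + u[i - 1]; // discrete second difference
            if (d2 > tol) { allNonPos = false; anyCurved = true; }
            if (d2 < -tol) { allNonNeg = false; anyCurved = true; }
        }
        if (!anyCurved) return "linear";
        if (allNonNeg) return "convex";   // curvature consistently upward
        if (allNonPos) return "concave";  // curvature consistently downward
        return "other";                   // mixed curvature
    }

    public static void main(String[] args) {
        // A convex sample: values 0, 1, 4, 9 (second differences are positive).
        System.out.println(classify(new double[]{0, 1, 4, 9}, 1e-9)); // prints convex
    }
}
```

A function with both upward and downward curvature falls into the "other" category, mirroring the four categories used in Tables 3 to 5.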
<table> <thead> <tr> <th>RQ</th> <th>Analysis</th> <th>Independent variable</th> <th>Data from scenario</th> <th>p</th> <th>Chi2 requirements ok</th> </tr> </thead> <tbody> <tr> <td>RQ1</td> <td>1</td> <td>-</td> <td>1+2</td> <td>0.0006***</td> <td>Yes</td> </tr> <tr> <td>RQ2</td> <td>2</td> <td>Role</td> <td>1</td> <td>0.94</td> <td>No</td> </tr> <tr> <td></td> <td>3</td> <td>Role</td> <td>2</td> <td>0.04(*)</td> <td>No</td> </tr> <tr> <td></td> <td>4</td> <td>Role</td> <td>1+2</td> <td>0.73</td> <td>No</td> </tr> <tr> <td></td> <td>5</td> <td>PL+TR vs D+T</td> <td>1+2</td> <td>0.66</td> <td>Yes</td> </tr> <tr> <td>RQ3</td> <td>6</td> <td>Program</td> <td>1</td> <td>0.83</td> <td>No</td> </tr> <tr> <td></td> <td>7</td> <td>Program</td> <td>2</td> <td>0.60</td> <td>No</td> </tr> <tr> <td></td> <td>8</td> <td>Program</td> <td>1+2</td> <td>0.95</td> <td>No</td> </tr> <tr> <td>RQ4</td> <td>9</td> <td>Scenario</td> <td>1+2</td> <td>0.012*</td> <td>Yes</td> </tr> </tbody> </table> *significant at the 5% level **significant at the 1% level ***significant at the 0.1% level Concerning RQ2, in analyses 2-4 there are too few data points to carry out a chi-2 test that compares the shapes across roles (“No” in the 6th column). Therefore, in analysis 5, data from project leaders and “technical responsibility” was pooled, and data from developers and testers was pooled, which means that an analysis comparing “management roles” to the more developer-oriented roles could be carried out. This analysis is valid with respect to the number of data points, but it was clear that there is no statistical difference. Concerning RQ3, there were not enough data points to carry out a chi-2 test, and no natural way to pool data. The data did not show any clear difference between the programs. In the analysis of RQ4 it was found that there is a clear difference between the two scenarios. This is, however, confounded with the order in which the two scenarios were analysed by the participants.
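The goodness-of-fit statistic behind analysis 1 can be sketched as follows. The observed counts used here are hypothetical (the paper's actual pooled counts are in Table 3); under the null hypothesis all three shapes are equally probable, so the expected count for each shape is simply the total divided by the number of categories.

```java
public class ChiSquareSketch {
    // Chi-squared goodness-of-fit statistic against equal expected frequencies:
    // sum over categories of (observed - expected)^2 / expected.
    static double chiSquareStatistic(int[] observed) {
        int total = 0;
        for (int o : observed) total += o;
        double expected = (double) total / observed.length; // H0: categories equally probable
        double stat = 0.0;
        for (int o : observed) {
            double d = o - expected;
            stat += d * d / expected;
        }
        return stat;
    }

    public static void main(String[] args) {
        // Hypothetical counts of concave, convex and linear responses.
        int[] counts = {10, 25, 35};
        double chi2 = chiSquareStatistic(counts);
        // With 3 categories there are 2 degrees of freedom; the 5% critical
        // value is about 5.99, so a statistic this large rejects equal probability.
        System.out.println(chi2 > 5.99); // prints true
    }
}
```

Comparing the statistic to the critical value for the relevant degrees of freedom gives the significance levels reported in Table 6.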
4.3. Discussion

The empirical study has shown that the distribution of utility functions varies between different kinds of scenarios. The result from scenario 2 shows a dominance of linear utility functions, but this type of dominance does not exist in scenario 1. In scenario 2 one might expect more risk-averse tendencies, because the scenario concerns severe injury to patients or even death, but this is not the case. One factor to consider is that the subjects were already used to the tool and knew how it works during scenario 2, see Section 3.3. Based on this study, it has not been possible to state that any role is more risk seeking than any other role. Perhaps there are too few subjects in the study, or the students have not had their roles long enough to identify with them. There is a difference between the two scenarios: in scenario 1 there is an indisputable majority of convex utility functions for project leaders and those with technical responsibility, but that is not the case in scenario 2. This has, however, not been further analysed. Looking at scenario 2 and the students’ backgrounds, we see the same pattern: for all the study programs the linear utility functions dominate. In scenario 1 there is no obvious connection between study program and shape of utility function. It can be discussed whether the students’ backgrounds really are that different: all four programs belong to the IT programs, they have the same admission criteria, and they lead to a bachelor’s degree. If we had a comparison group of students with an entirely different education and background, we might see a clearer difference in risk tendency.

5. CONCLUSIONS

From this study it is possible to conclude that different study participants have different opinions about the faults remaining after testing; some of the participants are more risk seeking than others. There is, however, no clear connection between the project roles and the shape of the utility functions in this study.
There are, as described in Section 3.3, some threats to the validity of this study, and future studies will, based on the experiences from this study, be changed in the following ways:
- People with more experience in general, and with more experience in their project roles, should be involved in the study. If students are involved, they should probably be sampled from populations with larger differences than students from study programs that are, to some extent, quite similar.
- If a similar experiment design is chosen, it should be adapted so that all subjects do not work with both scenarios in the same order. There were reasons for choosing this design in this research, but in further studies it is probably better not to have the same order for all participants.

Additional further work includes risk management in general, e.g. building a tool for risk assessment and follow-up, and software development in safety-critical development projects.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to the students that participated in the study. The authors would also like to acknowledge Gyllenstierna Krapperup Stiftelsen for funding the research studies of Christin Lindholm.

REFERENCES
Playing with the words of science online: birds of a feather flock together

David Llop Agramunt

Final Degree Project (Treball de Fi de Grau), Telematic Engineering
Escola Superior Politècnica, UPF, academic year 2013-2014
Directors of the project: Vanesa Daza and Rosa Estopà

To my lovely partner Núria and her future children.

Acknowledgement

Thanks to Miquel Cormudella for his patience and knowledge, to Núria Espuña for her love and guidance, to my family for their support, and to Rosa Estopà and Vanesa Daza for letting me be part of their project.

Abstract

This project is the web implementation of a board game created by Rosa Estopà’s team in the context of the “Jugant a definir la ciència” (Estopà, 2013) project. The aim of the project was to develop a tool for educational institutions, so that children can use new technologies and games as a medium that captures their attention and helps them learn about science. The game is played online and allows the user to play competing against a friend, against the computer, or alone, and it includes a choice of difficulty level. This document puts the game in the context of “Jugant a definir la ciència”, describes the design and the technical implementation of the game, which has been done using the object-oriented programming paradigm, and includes a user guide for the game.

Resum

Aquest projecte és la implementació web d’un joc de taula elaborat en el marc del projecte “Jugant a definir la ciència” (Estopà, 2013). El projecte s’ha desenvolupat amb intenció de ser una eina per a entitats educatives d’infants, que puguin utilitzar les noves tecnologies i els jocs com un recurs per atraure l’atenció dels infants i ajudar-los a treballar el lèxic científic.
El joc és en format online, i permet a l’usuari jugar amb diverses modalitats: jugar individualment o competir amb un amic o amb l’ordinador i, a més a més, inclou la possibilitat de seleccionar la dificultat de la partida. Aquest document posa el joc en el context de “Jugant a definir la ciència”, descriu el disseny i la implementació tècnica del joc, que ha estat feta usant el paradigma de la programació orientada a objectes, i inclou una guia d’usuari del joc.

Resumen

Este proyecto es la implementación web de un juego de mesa elaborado en el marco del proyecto “Jugant a definir la ciència” (Estopà, 2013). El proyecto se ha desarrollado con la intención de ser una herramienta para las entidades educativas de niños, que puedan usar las nuevas tecnologías y los juegos como un recurso para atraer la atención de los niños y ayudarles a trabajar el léxico científico. El juego es en formato online, y permite al usuario jugar en diversas modalidades: jugar individualmente o competir con un amigo o con el ordenador, y además incluye la posibilidad de seleccionar la dificultad de la partida. Este documento pone el juego en el contexto de “Jugant a definir la ciència”, describe el diseño y la implementación técnica del juego, que ha sido hecha usando el paradigma de la programación orientada a objetos, e incluye una guía de usuario del juego.

Summary

Abstract
List of figures
1. INTRODUCTION
   1.1 Motivation
   1.2 Objectives
   1.3 Context of the project
   1.4 The original board game
2. PLANNING
   2.1 The Scheduling Process
   2.2 The Development Process
      a) Analysis of the information system
      b) Design of the game
      c) Implementation of the game
3. TOOLS AND RESOURCES
   3.1 The Java Programming Language
   3.2 Vaadin
   3.3 The Eclipse tool
4. DESIGN
   4.1 Project Requirements
      a) Functional requirements
      b) Non-functional requirements
   4.2 Functionalities
   4.3 Design decisions
   4.4 Use Cases
      a) Use case: Choose game mode
      b) Use case: Choose game couples
      c) Use case: Play time attack
      d) Use case: Play versus a friend
      e) Use case: Play versus the computer
5. IMPLEMENTATION
   5.1 Structure of the code
   5.2 Implementation process
      a) The Start class
      b) The Block class
      c) The Game class
   5.3 Implementation handicaps
   5.4 Evaluation and testing
6. USER GUIDE
   6.1 Preparing the game
   6.2 General in-game features
   6.3 Playing Time Attack or freely
   6.4 Playing versus a friend
   6.5 Playing versus the computer
7. FINAL RESULT
8. CONCLUSIONS
9. FUTURE WORK
Bibliography
Annex

List of figures

Figure 2.1: Gantt diagram
Figure 2.2: Gantt diagram graphic
Figure 3.1: Car class example
Figure 3.2: Capture of the Eclipse webpage
Figure 3.3: Capture of the Eclipse text editor
Figure 4.1: Use case diagram
Figure 5.1: Class diagram
Figure 6.1: Initial layout
Figure 6.2: Initial layout, zoomed in
Figure 6.3: Couple choosing layout
Figure 6.4: Couple choosing layout, zoomed in
Figure 6.5: Time Attack with three couples
Figure 6.6: Time Attack with three couples, zoomed in
Figure 6.7: Time Attack finished
Figure 6.8: Versus a friend with six couples
Figure 6.9: Player 2’s turn
Figure 6.10: Player 1 wins
Figure 6.11: Draw
Figure 6.12: Versus the computer with ten couples
Figure 6.13: Versus the computer, mid-game
Figure 6.14: Versus the computer, end of user’s turn
Figure 6.15: Versus the computer, end of game

1. INTRODUCTION

This section explains the motivation for doing this project, its objectives, and the context in which the project was developed.

1.1 Motivation

The criteria for choosing a final degree project were based on two personal decisions.
The first one was an interest in participating in a social project, where my work could help provide tools that benefit society in some way, such as helping the development and education of children, as in this case. The second one was to expand the knowledge I started acquiring in the subject “Programació Orientada a Objectes”, a subject that I enjoyed very much. I think that to work efficiently it is important to enjoy the work, and that is why I wanted to practise more deeply what I had learned. The subject was an introduction to the object-oriented programming paradigm. It awakened an interest in me that I could not practise as much as I would have liked while I was studying it. I took this project as a test to see how creative and efficient I could be and how I want to orient my future in the programming world, but also as a contribution to children’s education in the country.

1.2 Objectives

The main objective of the project is to analyse, design, develop and evaluate an interactive application. The application will be used as an educational tool for learning scientific vocabulary, oriented to children in primary school. This tool will be a game application played online from the “Jugant a definir la ciència” webpage. The game is an adaptation of the classic game “Memory”. As part of the main objective, the game must be developed in the object-oriented programming paradigm. The technical nature of the project makes its development a chance to practise the methodology of implementing technical projects.

1.3 Context of the project

IULA (University Institute of Applied Linguistics) is a research centre of the Universitat Pompeu Fabra that brings together researchers, collaborators and trainees, participates in national and international organizations and networks, and periodically promotes research seminars and scientific activities.
The IULA researcher Rosa Estopà and her team created the “Jugant a definir la ciència” project with the aim of creating an environment for children to learn science. In the “Jugant a definir la ciència” project they assume that the bases of specialized knowledge are acquired during the first years of a person’s life, and that we humans have significant conceptions that help in the development of scientific knowledge. The main focus of study is the basic words of science in the context of children between 6 and 8 years old, like water, space, star, brain, ice, death, sun, heat, speed, air, life, etc. The main goal is to gather material that allows the study of what children know about the basics of science in their first years of school and to provide proper materials for them, such as ludic material to work with scientific vocabulary, elaborating a dictionary made from school definitions, and creating a catalogue of the cognitive representations of schoolchildren prior to any scientific knowledge. Nowadays, the project includes a purchasable collection of dictionaries (“My first Dictionary of Science” (Estopà, 2013) [1]), games (“La Maleta Viatgera”, “Juga amb les paraules de la ciència: Taller de jocs gegants”), applications (“Club LEXIC”, “El Microscopi”) and publications (“Jugant a definir la ciència: un diccionari de mots de ciència fet per i per a nens i nenes” (Estopà, 2011) [2], “El Microscopi. Banc obert de definicions terminològiques i catàleg de representacions” (Cornudella, 2013) [3], “Recursos per treballar el lèxic acadèmic col·laborativament” (Estopà, 2013) [4]). The game “The travelling suitcase” (“La Maleta Viatgera”) is in fact a pack of five different playable board games in a “suitcase”. These are intended to exercise skills in relations, memory, concentration, speed, visual speed, analysis, construction and mathematical analysis.
The board game that specifically focuses on memory and concentration is “Cada ovella amb la seva parella” (the Catalan expression for “Birds of a feather flock together”), an adaptation of the classical Memory game. In this context, the project “Playing with the words of science Online: Birds of a feather flock together”, detailed in the next chapters, is the computer version of the “Cada ovella amb la seva parella” game, which will be available to play online.

1.4 The original board game

This game uses the same structure as the classic Memory game, where the goal is to remember what is on the different cards and try to find the cards that form a couple. The classical Memory game consists simply of finding the same image twice, exercising only memory. The game “Cada ovella amb la seva parella” goes further, using studied concepts, images and drawings that make the users develop not only their memory but also the relations between images that represent concepts and words. This document is structured in sections: we present the planning of the whole project in Section 2, the tools and resources used in Section 3, the design in Section 4, the explanation of the implementation in Section 5, a detailed user guide in Section 6, and an evaluation of the final result in Section 7. The conclusions of the project are in Section 8, and Section 9 considers future work related to the project. Finally, outside the numbered sections, the bibliography and annexes can be found.

2. PLANNING

As a general approach, I have used the waterfall model for project planning. This is because the project is based on the completion of different phases in a certain order, but in this case it is not a pure waterfall model, because the phases are not completed strictly one after the other.
I have used some Rapid Application Development (RAD for short) methodology elements in the planning of the project, because I think that the most effective keys to developing a game are iterative development and the construction of prototypes. This attempts to reduce inherent project risk by breaking the project into smaller segments, and it requires active user involvement.

2.1 The Scheduling Process

The different phases of the project are decided during the scheduling process. After this phase, the development process starts.

2.2 The Development Process

a) Analysis of the information system

For this project, the information system refers to the technology used: the software and development tools. The steps consist of learning about this information system and getting to know what can be implemented with it. The next step is to install the required tools, and finally to get in contact with the theoretical subjects, acquiring as much knowledge as possible in order to be able to start the design.

b) Design of the game

The first step of designing the game is understanding how it should work in the end. The design is based on the structure of how the elements used behave and how they relate to each other. This phase is critical, because a bad or non-optimized design deeply affects the project. The usual consequences of a bad design become visible in the implementation phase, where it leads to an impossible or non-optimized implementation, meaning a mandatory re-design and a loss of resources. As a final step of the “Design of the game” phase, and as a link to the “Implementation of the game” phase, a first non-functional prototype of the interface is produced.

c) Implementation of the game

The implementation of the game is the execution of the design made in the previous phase. The first step of the phase is creating a primitive initial prototype, followed by a functional prototype and finally a final prototype.
As it has been said, the design has a lot of influence in the implementation steps. This phase includes the testing of the game in the functional and final prototype. A documentation has been made as part each phase to follow the evolution of all the tasks and help with the realization of this document. With the previous stages of the planification decided, I created a Gantt diagram to have a clear and reliable planification (see Figure 2.1 and Figure 2.2) <table> <thead> <tr> <th>Task Name</th> <th>Duration</th> <th>Start</th> <th>Finish</th> </tr> </thead> <tbody> <tr> <td>Final Degree Project</td> <td>504 hrs</td> <td>Wed 12/02/14</td> <td>Mon 12/05/14</td> </tr> <tr> <td>Scheduling Process</td> <td>8 hrs</td> <td>Wed 12/02/14</td> <td>Thu 13/02/14</td> </tr> <tr> <td>Study of the tasks and phases of the project</td> <td>2 days</td> <td>Wed 12/02/14</td> <td>Thu 13/02/14</td> </tr> <tr> <td>Development Process</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Analysis of the information system</td> <td>496 hrs</td> <td>Thu 13/02/14</td> <td>Mon 12/05/14</td> </tr> <tr> <td>Initial Abstract</td> <td>136 hrs</td> <td>Thu 13/02/14</td> <td>Mon 10/01/14</td> </tr> <tr> <td>Analis of the technology to apply</td> <td>0 days</td> <td>Thu 13/02/14</td> <td>Thu 13/02/14</td> </tr> <tr> <td>Search and installation of the development Software</td> <td>4 days</td> <td>Thu 13/02/14</td> <td>Mon 17/02/14</td> </tr> <tr> <td>Study of the utilization of the software</td> <td>10 days</td> <td>Mon 17/02/14</td> <td>Mon 24/02/14</td> </tr> <tr> <td>Documentation</td> <td>20 days</td> <td>Mon 24/02/14</td> <td>Mon 10/03/14</td> </tr> <tr> <td>Design of the game</td> <td>34 days</td> <td>Mon 10/03/14</td> <td>Mon 10/03/14</td> </tr> <tr> <td>Design of the User Interface</td> <td>140 hrs</td> <td>Mon 10/03/14</td> <td>Thu 03/04/14</td> </tr> <tr> <td>Design of the Classes</td> <td>5 days</td> <td>Mon 10/03/14</td> <td>Thu 13/03/14</td> </tr> <tr> <td>Prototype of the interface 
non-functional</td> <td>25 days</td> <td>Mon 13/03/14</td> <td>Mon 31/03/14</td> </tr> <tr> <td>Documentation</td> <td>5 days</td> <td>Mon 31/03/14</td> <td>Thu 03/04/14</td> </tr> <tr> <td>Implementation of the game</td> <td>35 days</td> <td>Mon 10/03/14</td> <td>Thu 03/04/14</td> </tr> <tr> <td>Initial Prototype</td> <td>220 hrs</td> <td>Thu 03/04/14</td> <td>Mon 12/05/14</td> </tr> <tr> <td>Functional Prototype (Beta)</td> <td>16 days</td> <td>Thu 03/04/14</td> <td>Tue 15/04/14</td> </tr> <tr> <td>Final Prototype</td> <td>24 days</td> <td>Tue 15/04/14</td> <td>Thu 01/05/14</td> </tr> <tr> <td>Documentation</td> <td>15 days</td> <td>Thu 01/05/14</td> <td>Mon 12/05/14</td> </tr> <tr> <td>Documentation</td> <td>55 days</td> <td>Thu 03/04/14</td> <td>Mon 12/05/14</td> </tr> </tbody> </table> Figure 2.1: Gantt diagram. Figure 2.2: Gantt diagram graphic. 3. TOOLS AND RESOURCES The Tools and Resources section describes all the software used during the development of the project. Microsoft Word 2013 has been used to write the documentation of the project, and MS Project 2013 to create the Gantt diagram. The programming tools and resources used to program in the Java programming language have been the Eclipse programming tool and the Vaadin plug-in for Eclipse. 3.1 The Java Programming Language During my university years I’ve seen several ways of programming, including languages like C, C++, C#, Java, Octave, Python, Scilab and Visual Basic.NET. I’ve developed a special interest in the Object Oriented Programming paradigm. Different programming paradigms exist in the world of programming. Each of them is based on rules and statements that define how the programmer must organize his ideas to create programs. One of these paradigms is the Object Oriented Programming paradigm. As its name says, it is based on the existence of “objects”.
The idea of this paradigm is to have different “objects” that specialize in different functions; the interaction between them allows the program to execute complex tasks. The “objects” have data fields, known as attributes, that describe the object, and associated procedures, known as methods. Some of the languages that use this paradigm are C++, Objective-C, C#, Perl, Python, Ruby, PHP, Smalltalk and Java. Specifically, the language chosen for this project is Java, one of the most popular programming languages in use. Two books to get started on understanding Java are “The Java Programming Language (4th Edition)” (Ken Arnold, 2005) [5] and “Head First Java (2nd Edition)” (Sierra, 2005) [6]. Java is a concurrent, class-based, object-oriented programming language, created with the goal of letting application developers “write once, run anywhere” (WORA): code that runs on one platform does not need to be recompiled every time it has to run on another platform. Java is compiled to bytecode (stored in “class files”), which means that it can run on any JVM (Java Virtual Machine, a virtual machine that can execute Java bytecode). In Java, the programmer creates “classes”, and every class has “attributes” and “methods”. Attributes can be, among others, integers, floats, Booleans or strings, and they hold a certain value. Methods are functions in charge of “doing” things, so by definition they are the ones that interact with the attributes of the instances. If classes are the objects that define “what kind of characteristics” there are, “instances” of those classes are the ones that specify the values of these characteristics. For example, one could have the class “Car”, with the attribute “weight” as a float and “brand” and “color” as strings, and the method “getcolor” that returns the value of the “color” attribute.
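The “Car” example just described can be sketched in Java. This is an illustrative sketch, not code taken from the project; the attribute and method names follow the text.

```java
// Illustrative sketch of the "Car" class described in the text.
public class Car {
    // Attributes describe the object.
    private float weight;
    private String brand;
    private String color;

    public Car(String brand, String color, float weight) {
        this.brand = brand;
        this.color = color;
        this.weight = weight;
    }

    // The "getcolor" method returns the value of the "color" attribute.
    public String getcolor() {
        return color;
    }

    public static void main(String[] args) {
        // An instance with concrete values for the attributes.
        Car ferrari = new Car("Ferrari", "red", 100f);
        System.out.println(ferrari.getcolor()); // prints "red"
    }
}
```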
An instance of this class could have “Ferrari” as the value of the string attribute “brand”, “red” for the “color” attribute and “100” as the value of the “weight” attribute. In a few words, this means that the class defines how the instances of that class will be, and the instances are just “real examples” of the class that can interact with each other. A visual example of the “Car” class is shown in Figure 3.1: ![Car class example](image) Figure 3.1: Car class example. ### 3.2 Vaadin Vaadin is an open source web application framework. It has the singularity of working with server-side Java code, which differentiates it from client-oriented Java frameworks such as GWT. The major part of the logic runs on the server, and it is described as a robust architecture for rapid application development thanks to its tools, which help the programmer build web user interfaces extremely fast. Vaadin is designed to be compatible with all the major internet browsers, in order to let the programmer focus on developing the application. Built-in themes like Reindeer, Runo or Chameleon, the option of customizing your own themes, creating your own components, and using and creating add-ons are features of Vaadin that serve the same goal of easy application development. The documentation, tutorials and community make it relatively easy to learn how to use Vaadin. For this project, the tutorials, the Book of Vaadin¹ and the forum have been particularly useful. The tutorials are clearly structured, with good sound and image quality, and with the steps specified to start a project with Vaadin. The Book of Vaadin is the full reference for Vaadin. It shows you how to get started and is a written guide to all the features that Vaadin has, their functionality and how to use them. The community offers a forum, a blog and a wiki that contain solutions to many common code doubts and problems, and anyone can participate in them by registering.
“Vaadin is a Java framework for building modern web applications that look great, perform well and make you and your users happy.”² When I learned about Vaadin, I realized that it was the kind of tool that I needed to implement the game. Since Vaadin is also open source, there is a lot of published information about how it works and what you can do with it. --- ² [www.vaadin.com](www.vaadin.com) The main goal of Vaadin is to help programmers create user interfaces. This is done through the hundreds of available components and the agility provided by the fact that it is built on HTML5. This allows the programmer to move applications to the web without requiring installations or plug-ins. The server-client concept is used with the intention of combining the speed of the server with the added flexibility of client-side solutions. There are several ways of getting Vaadin into a project, including Maven, the Eclipse plug-in or the All-in-One archive; for this project the chosen one has been the Eclipse plug-in, because I am familiar with Eclipse and feel comfortable working with it. ### 3.3 The Eclipse tool Eclipse is a suite of multiplatform open-source programming tools capable of building client applications, typically used as an Integrated Development Environment (IDE). This tool has a text editor with syntax highlighting that allows the user to write code in Java. It compiles in real time and can create projects and classes, making it a useful tool to program in Java. Eclipse is part of the “Eclipse community”, developed by the Eclipse Foundation, an open source community (see webpage capture in Figure 3.2). ![Eclipse webpage](image) 3 www.eclipse.org The following capture (Figure 3.3) is from the Eclipse interface.
The automatic highlighting of certain keywords, the line numbering and the indentation make the code a lot easier to understand, and so make programming more agile. 4. DESIGN This part of the document explains the criteria used in every step of the design of the game: the requirements, the functionality, the relations between the elements and their behavior. The design is the most important phase of the project. The reason is that the implementation of the game is strictly related to the design, so if the design is wrong the implementation may reveal that the game cannot accomplish the expected functionality; a re-design is then required, and a new implementation as well, losing valuable resources. A good design implies a clear and optimized structure of the elements that are involved and of how they are related. There are different ways to program, and different designs can produce the same results, but the best design is the one with the qualities of efficiency and scalability. 4.1 Project Requirements The first thing done in the project was establishing the project requirements. These were agreed upon with the tutor Rosa Estopà and her colleague Miquel Cornudella. The requirements are the following: - The game: Cada Ovella amb la seva parella - The platform: WEB - The software: Eclipse and Vaadin The first requirement was deciding which game was to be implemented. As mentioned in the context of the project, various games exist, but the one chosen was “Cada Ovella amb la seva parella”. The second requirement was on which platform the game would be available, and since the internet is nowadays the most accessible resource (not everyone has devices like tablets, but most schools and houses have internet⁴) it was established that the game had to be playable through a webpage.
In order to develop a game playable in a webpage, the tools Eclipse and Vaadin were proposed, because they are tools used to create web applications with easy interface features, and the “Jugant a definir la ciència” project webpage had already been developed with these tools. a) Functional requirements Functional requirements describe the behavior of the system in terms of its functionality. For this project, the system must allow the user to navigate between different scenarios and choose game parameters, and then it must behave as specified depending on the buttons and cards pressed. ⁴ http://data.worldbank.org/indicator/IT.NET.USER.P2 b) Non-functional requirements Non-functional requirements describe performance characteristics of the system. The performance requirement is having access to a computer with an internet connection; the platform constraint is having any of the common internet browsers specified (Firefox, Chrome, Explorer). 4.2 Functionalities The main functionalities of the resulting code are the following: - Capability of launching the application on Internet Explorer 8, Firefox, Chrome, Safari, Opera and all newer versions of these browsers. - Launching the game from different devices and browsers with complete independence. - Playing a single-player game, choosing between “time-attack” and “versus the computer”. - Playing a multiplayer game, choosing “versus a friend”. 4.3 Design decisions Being part of “Jugant a definir la ciència” has both conditioned and facilitated the design of the game. The design needed to be strictly related to the design of the original game and webpage⁵. For the interface design of the application I have used the images annexed to this document, selected by Rosa Estopà, and the cards that are played during the game are the same as the ones used in “Cada ovella amb la seva parella”.
Since the users will mostly be children, the different scenarios of the game are very similar, and the user interacts with a small number of buttons, making the interaction with the game graphically very clear and easy for the user. The different scenarios through which the user can navigate share the same background, but the elements change. Also, the cards change every time a scenario with cards is generated, so that the game has maximum variability and randomness to increase the “play again” value. What this design accomplishes is an easy understanding of how to play the game and, at the same time, the techniques that make a game replayable, encouraging users to use the application for as long as they want. ⁵ http://defciencia.iula.upf.edu/index.htm There are three main buttons to choose the game mode: - **Jugar a contrarellotge o lliure**: “Play freely or in time-attack mode”. This button prepares the game to be played in a scenario where only one player can play, freely or in time-attack mode. - **Jugar amb un amic**: “Play with a friend”. This button prepares the game to be played in a scenario where two people can compete against each other. - **Jugar contra l’ordinador**: “Play against the computer”. This button prepares the game to be played competing against the computer. The computer plays with artificial intelligence, trying to play like a real person. As a parameter of the game scenario, the number of couples must be decided: - **2 parelles, 3 parelles, 4 parelles, 5 parelles, 6 parelles, 7 parelles, 8 parelles, 9 parelles, 10 parelles**: Buttons to select how many couples the user wants to play with. - **Torna al lloc anterior**: Button to return to the previous scenario. In the time-attack or free scenario - Card button: Button that contains the card the user plays with. - **Torna al lloc anterior**: Button to return to the previous scenario. - **Juga un altre cop**: Button to restart the current scenario.
- **Comença el temps!**: Button to start the time counter. - **Fi!**: Button to stop the time counter and to show the time that has passed since the Comença el temps! button was pushed. This button is initially disabled to avoid confusing the user, and gets enabled once the Comença el temps! button is pushed. In the versus-a-friend scenario - Card button: Button that contains the card the user plays with. - **Torna al lloc anterior**: Button to return to the previous scenario. - **Juga un altre cop**: Button to restart the scenario. In the versus-the-computer scenario - Card button: Button that contains the card the user plays with. - **Torna al lloc anterior**: Button to return to the previous scenario. - **Juga un altre cop**: Button to restart the scenario. - **Següent ronda**: Button that lets the computer move when it is its turn. ### 4.4 Use Cases Use cases show the reactions to the actions that users can perform on the application. They show the behavior of the game during its whole life, and allow the analysis of its specific functionality. To understand the use case relations I made the use case diagram shown in Figure 4.1. The use case template is formed by a number of fields, described next, that define the information in the use cases: - **Use Case**: Title of the use case. - **Actor**: An actor is a person or other entity, external to the software system being specified, who interacts with the system and performs use cases to accomplish tasks. - **Description**: Provides a brief description of the sequence of actions and the outcome of executing the use case. - **Preconditions**: List of activities that must take place, or conditions that must be true, before the use case can be started. - **Postconditions**: Describes the state of the system at the conclusion of the use case execution.
- **Normal course of events**: Provides a detailed description of the user actions and system responses that will take place during execution of the use case under normal, expected conditions. - **Alternative courses**: Documents other legitimate usage scenarios that can take place within this use case. - **Includes**: List of any other use cases that are included or called by this use case. - **Notes and Issues**: Additional comments about this use case. ![Use case diagram](image) Figure 4.1: Use case diagram. **a) Use case: Choose game mode** <table> <thead> <tr> <th>Use Case</th> <th>Choose game mode</th> </tr> </thead> <tbody> <tr> <td><strong>Actor</strong></td> <td>User</td> </tr> <tr> <td><strong>Description</strong></td> <td>The user has to choose one of the three buttons corresponding to the three game modes.</td> </tr> <tr> <td><strong>Preconditions</strong></td> <td>The user has started the application to play the game.</td> </tr> <tr> <td><strong>Postconditions</strong></td> <td>The configuration related to the selected game mode is prepared and a new scenario is loaded.</td> </tr> </tbody> </table> b) Use case: Choose game couples <table> <thead> <tr> <th>Use Case</th> <th>Choose game couples</th> </tr> </thead> <tbody> <tr> <td><strong>Actor</strong></td> <td>User</td> </tr> <tr> <td><strong>Description</strong></td> <td>The user has to choose the number of couples for the game.</td> </tr> <tr> <td><strong>Preconditions</strong></td> <td>The user has chosen a game mode.</td> </tr> <tr> <td><strong>Postconditions</strong></td> <td>All the game features have been chosen; the game will start.</td> </tr> <tr> <td><strong>Normal course of events</strong></td> <td>The user clicks on the button that represents the desired number of couples, and the scenario has all the information ready to start the game.
A label indicates the chosen game mode.</td> </tr> <tr> <td><strong>Alternative courses</strong></td> <td>The button to go to the previous scenario is pushed; the current scenario is removed and the previous scenario is loaded again.</td> </tr> <tr> <td><strong>Includes</strong></td> <td>In the alternative course, the “Choose game mode” use case.</td> </tr> <tr> <td><strong>Notes and issues</strong></td> <td>-</td> </tr> </tbody> </table> c) Use case: Play time attack <table> <thead> <tr> <th>Use Case</th> <th>Play Time Attack</th> </tr> </thead> <tbody> <tr> <td><strong>Actor</strong></td> <td>User</td> </tr> <tr> <td><strong>Description</strong></td> <td>The number of couples and the game mode are displayed, ready to start playing.</td> </tr> <tr> <td><strong>Preconditions</strong></td> <td>The user selected the Time-Attack mode and a number of couples.</td> </tr> <tr> <td><strong>Postconditions</strong></td> <td>The user pushes the “Fi!” button and checks the elapsed time.</td> </tr> <tr> <td><strong>Normal course of events</strong></td> <td>The number of couples selected is displayed. A label with the game mode is displayed. The buttons “Torna al lloc anterior”, “Juga un altre cop” and “Comença el temps!” are displayed. The “Fi!” button is displayed but disabled. The user clicks the “Comença el temps!” button to start the Time-Attack and starts to play.</td> </tr> <tr> <td><strong>Alternative courses</strong></td> <td>The user can push the “Torna al lloc anterior” button to get back to the previous scenario.
The user can push the “Juga un altre cop” button to start a new game with the same game features that were selected.</td> </tr> </tbody> </table> d) Use case: Play versus a friend <table> <thead> <tr> <th>Use Case</th> <th>Play Versus a friend</th> </tr> </thead> <tbody> <tr> <td><strong>Actor</strong></td> <td>Two users</td> </tr> <tr> <td><strong>Description</strong></td> <td>The number of couples and the game mode are displayed, ready to start playing.</td> </tr> <tr> <td><strong>Preconditions</strong></td> <td>The users selected the versus-a-friend mode and a number of couples.</td> </tr> <tr> <td><strong>Postconditions</strong></td> <td>A notification informs the users of who the winner is.</td> </tr> <tr> <td><strong>Normal course of events</strong></td> <td>The number of couples selected is displayed. A label with the game mode is displayed. Two labels, “Player 1” and “Player 2”, are displayed with their scores. The buttons “Torna al lloc anterior” and “Juga un altre cop” are displayed. Every user has 2 clicks available in their turn.</td> </tr> <tr> <td><strong>Alternative courses</strong></td> <td>The user can push the “Torna al lloc anterior” button to get back to the previous scenario. The user can push the “Juga un altre cop” button to start a new game with the same game features that were selected.</td> </tr> <tr> <td><strong>Includes</strong></td> <td>For the first alternative course, the “Choose game couples” use case.
For the second alternative course, the “Play Versus a friend” use case.</td> </tr> <tr> <td><strong>Notes and issues</strong></td> <td>The specified buttons can be pushed at any time.</td> </tr> </tbody> </table> e) Use case: Play versus computer <table> <thead> <tr> <th>Use Case</th> <th>Play Versus computer</th> </tr> </thead> <tbody> <tr> <td><strong>Actor</strong></td> <td>User</td> </tr> <tr> <td><strong>Description</strong></td> <td>The number of couples and the game mode are displayed, ready to start playing.</td> </tr> <tr> <td><strong>Preconditions</strong></td> <td>The user selected the Play Versus Computer mode and a number of couples.</td> </tr> <tr> <td><strong>Postconditions</strong></td> <td>A notification informs the user of who the winner is.</td> </tr> <tr> <td><strong>Normal course of events</strong></td> <td>The number of couples selected is displayed. A label with the game mode is displayed. Two labels, “Player 1” and “Computer”, are displayed with their scores. The buttons “Torna al lloc anterior” and “Juga un altre cop” are displayed. Every time the user finishes his turn, the “Següent ronda” button is enabled.</td> </tr> <tr> <td><strong>Alternative courses</strong></td> <td>The user can push the “Torna al lloc anterior” button to get back to the previous scenario. The user can push the “Juga un altre cop” button to start a new game with the same game features that were selected.</td> </tr> <tr> <td><strong>Includes</strong></td> <td>For the first alternative course, the “Choose game couples” use case. For the second alternative course, the “Play Versus computer” use case.</td> </tr> <tr> <td><strong>Notes and issues</strong></td> <td>The specified buttons can be pushed at any time.</td> </tr> </tbody> </table> 5. IMPLEMENTATION This chapter describes how the game was structured and implemented, explaining the criteria followed and detailing the parts of the code that help to understand the operation of the game. 5.1 Structure of the code The structure of the code is based on the design parameters and functionalities.
For this project, and the way it has been approached, a fairly simple structure has been defined. The complexity of the code lies in the implementation of the in-game behavior and the detailed artificial intelligence criteria. Three classes have been designed; the details are explained in Section 5.2. The class Start has been designed to start running the game, generating the layout and setting the game ready. The class Block manages the structure and behavior of the main element of the game, the cards used to play. The most complex class is the Game class. This class defines the layout characteristics and manages all the game mode rules and elements, as well as the artificial intelligence of the computer player. In the structure, these class characteristics are reflected in the number of elements that make up each class (see the class diagram in Figure 5.1). Figure 5.1: Class diagram. 5.2 Implementation process Once the structure was defined, the implementation started. Understanding the relations and dependencies between the elements of the code defined the implementation schedule. As planned, in this part of the project the methodology was to develop a functionality, test it, and then evaluate how this functionality influences the game. Deciding to include or eliminate functionalities was done after seeing the result of the implementation, which meant modifying the design as well. In this part of the document, non-technical language is used to explain the code characteristics. a) The Start class The first part implemented was the file Start.java, which contains the “Start” class. This class is the entry point of the game, so it contains the method that executes the code. The function of this class is to start the application and create the initial layout of the game. To do this, the method InitialMenu was created.
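The three-class structure described above can be sketched as a skeleton. The method names are taken from the descriptions in this section; the bodies and signatures are simplified placeholders of my own, not the project's actual code, and the Vaadin layout details are omitted.

```java
// Simplified, hypothetical skeleton of the three classes of the game.
class Game {
    boolean modeButtonsShown;

    // Shows the three game-mode buttons (stand-in for the Vaadin layout code).
    void showModeButtons() { modeButtonsShown = true; }
}

class Start {
    // Entry point behavior: create the Game and show the initial menu.
    void InitialMenu(Game game) {
        game.showModeButtons();
    }
}

class Block {
    // A Block represents the two cards that form a couple.
    boolean firstPressed, secondPressed;

    // getback: turn both cards face down again.
    void getback() { firstPressed = secondPressed = false; }

    // The couple is completed when both cards have been pressed.
    boolean coupleCompleted() { return firstPressed && secondPressed; }
}
```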
InitialMenu: This method creates an instance of the class Game and sets its layout element as the main layout, establishing the background layout of the game. Then it adds to the layout the three buttons that allow the user to choose the game mode. b) The Block class In the structure it was decided that a class “Block” would be the best approach to managing the card elements of the game. A block represents the two cards that form a couple, and during the game this class is instantiated very often to generate cards. The methods of this class are getback and the buttons of the two cards. getback: This method turns around the two buttons that represent a couple of cards. This is done by taking the reverse image of a card and setting it as the button image. “Card” button: This method is launched every time the button related to the card is pressed. When it is called, the card checks whether the other card of the same block has been pressed, in which case the couple is considered completed. It also calls the method checkreset from the class Game to check the state of the remaining cards on the board. c) The Game class The Game class represents all the interaction between the user and the game. Most of the code is in this class, because the interaction with the game and the artificial intelligence are the densest parts. This is due to the specific behaviors of the game at certain points, which imply exceptions instead of rules. SecondMenu: When one of the three initial buttons (Time-Attack, Versus a friend, Versus the computer) is pushed, this method is called. The first thing the method does is remove the three previous buttons; then it recognizes the selected game mode and indicates it in a label, adds the button “Torna al lloc anterior” and adds the buttons corresponding to the number of couples to choose. “Torna al lloc anterior” button: This button removes the elements on the layout and adds the game mode buttons, recreating the starting scenario.
“couples” buttons: These buttons all call a method named thebuttoneffect. Depending on which button is pressed, the method gets an input (an integer) representing the number of couples chosen. thebuttoneffect: This method creates the game board. It removes the previous components of the layout and then adds a grid, created with getGameGrid, that is filled with the cards created with createBlocks and addBlocksToUI. Then, depending on the game mode chosen, it customizes the layout with customizegame and also adds the buttons “Juga un altre cop” and “Torna al lloc anterior”. getGameGrid: Creates a grid and dimensions it depending on the couples chosen. createBlocks: Depending on the number of couples chosen, this method randomly chooses couples from a list of possible cards in the game and generates the buttons representing the cards. addBlocksToUI: Method in charge of placing the cards in the grid. “Juga un altre cop” button: The button to restart the game. When pressed, this button resets the variables of the selected game and calls the thebuttoneffect method. “Torna al lloc anterior” button: This is the button mentioned in the thebuttoneffect method. In this case, the button removes all the components on the layout and then calls the SecondMenu method. customizegame: The common parts of the layout are added in the thebuttoneffect method, but the specific elements for the different game modes are added in this method. If the game mode selected was “Time Attack”, the buttons “Comença el temps” and “Fi!” are added. “Comença el temps” starts the time counter and “Fi!” finishes the count and notifies the user of the time that has passed. If the game mode selected was “Versus a friend”, this method adds two labels, “Jugador 1” and “Jugador 2”, with their respective score counters.
Finally, if the game mode is “Versus the computer”, the method adds the button “Següent ronda”, which allows the user to go to the next round, a difficulty box to choose the computer difficulty, and the labels “Jugador” and “Ordenador” with their score counters. getRandomList: Method that creates a list of random numbers, used as input for the createBlocks method. getButtons: A method that gets a random list of cards out of the list of blocks and serves as input for the addBlocksToUI method. checkreset: This method is called every time a card button is pressed. Its function is to control and check what the response must be every time a card is pressed. It considers the cases where the first, the second or a third card is pressed, executing the corresponding reactions when couples are found or the turn changes. Finally, it checks the game mode and calls the vsmode method if necessary. vsmode: When playing one of the versus modes, this method is called every time a card is pressed. It designates when the turn changes for each player, assigns points to the counters when a player discovers a couple, and announces who the winner is, or whether the score is a draw, once the game is over. reset: When the checkreset method concludes that the pressed cards have to be turned around, the reset method is called. Its function is to turn around the shown cards if they are not a couple. addseenlist: This method is part of the artificial intelligence created for the computer. It checks and updates a list of the cards that have been seen by any player in the current game, in order to represent the memory of the computer. playrandomly: This method is the behavior of the computer when playing at the lowest difficulty. It chooses two random cards, checks that they have not already been discovered or pressed in the current turn, and then presses the cards.
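A playrandomly-style selection can be sketched in plain Java. This is a standalone approximation of my own: the real method works on Vaadin buttons, and the names, signature and simplification (only already-discovered cards are excluded) are assumptions, not the project's code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of a "play randomly" move: pick two distinct cards
// that have not been discovered yet.
class RandomMove {
    static List<Integer> chooseTwo(int totalCards, Set<Integer> discovered, Random rnd) {
        // Collect the indices of the cards that are still in play.
        List<Integer> candidates = new ArrayList<>();
        for (int i = 0; i < totalCards; i++) {
            if (!discovered.contains(i)) {
                candidates.add(i);
            }
        }
        // Shuffle and take the first two: two distinct random cards.
        Collections.shuffle(candidates, rnd);
        return candidates.subList(0, 2);
    }
}
```

The same exclusion idea extends to cards already pressed in the current turn by adding them to the excluded set before choosing.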
artificialIntelligence: Every time the user presses the “Següent ronda” button, this method is called. It defines all the behavior of the computer when playing, emulating a real player in its turn. When playing against the computer, all the cards are disabled after the user’s turn to prevent the user from cheating, so the first thing this method does is enable all the cards except the couples already found. Depending on the difficulty chosen, the method decides at this point to play randomly (low difficulty) or to play smart (high difficulty). The difficulty criterion is that, every turn, a random number is compared to the difficulty chosen by the user. If that random number is bigger than the chosen difficulty, the computer plays randomly that turn. The result is that for a low difficulty level there is a low probability of smart play and a high probability of random play. If it plays smart, the next step is to update the “seen list” of the computer, which works as the memory of the game. To update it, the method removes the already completed couples from the seen list, to prevent the computer from pressing completed cards. The method considers three possible scenarios: in the first, no cards are shown before choosing; in the second, two cards are shown before choosing; and in the third, only one card has already been chosen. When there are no cards shown before choosing, the method checks whether there are any seen couples to push, or whether it has to push a random first card. In the “one card pressed” scenario, the method checks whether this first card forms a couple with any card in the seen list, or it presses the second card randomly.
In the case of two cards already shown, the method behaves very similarly, but it avoids pressing a card that is already pressed, because that is not a valid move in the game. So, if the computer has seen a couple (including the cards already shown), it selects the couple, first pressing the card of the couple that is not shown. If no couples have been seen, it presses one card randomly, leading to the “one card pressed” scenario explained previously. 5.3 Implementation handicaps Even with the most optimistic planning possible, the implementation phase of a project is where a programmer runs into the most unexpected problems. In this part of the document I explain the major implementation handicap that affected the project schedule. The Vaadin plug-in for Eclipse is software still in development, which means that some functionalities and behaviors are not completely under control. The plug-in offers a specific tool to create user interfaces easily, and the original plan included using this tool. The tool promised fast, intuitive user interface design, but the freedom it allows is limited and the tool is not perfectly effective, forcing the programmer into dead ends. This was solved by writing the interface in plain code instead of using this tool. The impact was not high, thanks to Vaadin’s specific code features that facilitate the creation of user interfaces. 5.4 Evaluation and testing To evaluate the application I sought the critical opinion of different professionals related to technology and human matters: friends, family and university colleagues who are audiovisual or computer engineers, sociologists, anthropologists, primary school teachers, graphic designers or pediatricians. Every member of this group tested the application and provided detailed feedback that helped to improve both the technical and the playing behaviors.
The technical testing of the application was done by pressing all the possible buttons in all the possible combinations. That way, all legal and illegal moves were checked and all unpredicted behaviors detected. The checking was done by including messages in the code that were printed in the different scenarios, helping me to confirm the correct behavior of the application. 6. USER GUIDE This user guide explains how the user will be able to use the game, showing the different scenarios the user can go through and the possible decisions that can be made. 6.1 Preparing the game When the application is launched, the first scenario the user finds is an initial layout with the background of the game and three buttons (Figure 6.1). The user must press one of the three buttons to choose the game mode he or she wishes to play (Figure 6.2). The three options are “Play Time Attack or freely” (“Jugar a contrarellotge o lliure”), “Play versus a friend” (“Jugar amb un amic”) and “Play versus the computer” (“Jugar contra l’ordinador”). Once any of the three buttons is pressed, the next scenario is shown. Here, the user can choose the number of couples to play with, from two to ten. The number of couples chosen determines how many cards will be on the board: two couples put four cards on the board and ten couples put twenty. A button to go back to the previous scenario for choosing the game mode is available (the “Torna al lloc anterior” button), and the game mode previously chosen is indicated at the top center of the interface (see Figures 6.3 and 6.4). With the game mode and the number of couples chosen, the user enters the specific playing scenario. 6.2 General in-game features In all in-game scenarios, the “Torna al lloc anterior” button is available for the user to go back to the previous scenario.
A “replay” button is also available: when it is pressed, the timer is set to zero and new cards are placed on the board. The game mode currently being played is indicated throughout the game in the label at the top center of the layout. The cards always behave the same way. The user can press and reveal two consecutive cards. If they form a couple, both cards stay face up but disabled. If they are different, both cards are turned back over when another card is pressed. When playing with two players, getting a couple right is rewarded with a point and the chance to press two more cards. 6.3 Playing Time Attack or freely For this example, the scenario in Figure 6.5 is a game with three couples. ![Figure 6.5: Time Attack with three couples.](image) In this game mode, the user can play freely without pressure elements such as competition or time. To play freely, nothing is required: the user can start pressing cards and finding the couples. Once all the couples are found, or whenever the user is ready for a change, he or she can press the “Juga un altre cop” button to play again or the “Torna al lloc anterior” button to change the game settings. In the example in Figure 6.6, to play in Time Attack mode, a “Start the time” (“Comença el temps”) button is available. Pressing this button starts a time counter, and the user must try to discover all the couples. Once they are discovered, the user must press the “Fi!” button and a notification will appear in the middle of the screen showing the elapsed time (see Figure 6.7). To be able to press any button, the notification must be pressed to hide it. 6.4 Playing versus a friend For this example, the scenario shown in Figure 6.8 is a game with six couples. In the “Versus a friend” game mode, there is a label and a score for each of the two players (“Jugador 1” and “Jugador 2”).
When playing, the label indicates whose turn it is by showing the text “ET TOCA” next to the player (see Figure 6.9). When all the couples are found, the player with the highest score is the winner, and a notification appears to declare the winner (Figure 6.10). ![Figure 6.10: Player 1 wins.](image) If both players achieve the same score, the notification declares a draw (Figure 6.11). ![Figure 6.11: Draw.](image) To be able to press any button, the notification must be pressed to hide it. 6.5 Playing versus the computer For this example, the scenario in Figure 6.12 is a game with ten couples. ![Figure 6.12: Versus the computer with ten couples.](image) When playing against the computer, a difficulty box “Escull nivell de dificultat (5 el més difícil)” is available, where the user must choose the difficulty of the game, which translates into how well the computer plays. This box is disabled once the user starts playing (Figure 6.13). ![Figure 6.13: Versus computer middle game.](image) To play in this mode, the user must press two cards. When both are pressed and they are not a couple, all the cards become disabled and a “Següent ronda” button is enabled. This prevents the user from pressing illegal elements and guides the user (see Figure 6.14). Figure 6.14: Versus computer. End of user’s turn. Pressing the “Següent ronda” button lets the computer play its turn. The high difficulty makes the computer play like a human who remembers every card it sees, and the computer plays under the same rules as a user, so it is normal for the computer to discover several couples in a row when playing at high difficulty, making the challenge even harder. In Figure 6.15 a computer victory is shown. Figure 6.15: Versus computer end. 7. FINAL RESULT The final result of the project is a computer adaptation of the game “Cada ovella amb la seva parella” that will be easily accessible over the internet with any web browser.
The game has a simple interface with intuitive usage, fulfilling the requirements implied by the age of the potential users, and thanks to its design and its competitive, flexible features it is a very replayable game. Combining these characteristics with the science-learning environment of the original game, the result is a tool disguised as a fun game that will have children hooked on learning science. 8 CONCLUSIONS The planning of the process, the steps followed, the research done, the design and implementation of the code, all combined with a large number of hours invested and the guidance of professionals, make this project a major learning experience within university. “Jugant a definir la ciència online: Cada ovella amb la seva parella” has been an excellent challenge for developing my project organization and execution skills, and a perfect chance to practice the programming knowledge learned during university. Being part of an already developed project such as “Jugant a definir la ciència” instead of creating a new one, and understanding its objectives and motivations, has been very important input for defining the character of the project. “Jugant a definir la ciència online: Cada ovella amb la seva parella” has become the tool that was expected. The result of the project is an attractive science-teaching game that accomplishes the goal of teaching science thanks to its entertaining game characteristics. This proves that the resources were well planned, structured and implemented. Analyzing the result obtained in the project, I can say that Eclipse and Vaadin have proven to be useful and well-suited tools for programming the game, especially thanks to their specific features focused on interface implementation. This application is important to promote the learning of scientific knowledge and the exercise of important cognitive functions such as memory and agility, especially for children.
As a final conclusion, I successfully analyzed, designed, developed and evaluated an interactive application with web access that will hopefully accomplish the goal of “learning while playing” for children, mostly those with Catalan as their mother tongue. 9 FUTURE WORK The adaptation made in this project is just one of the proposals for the “Cada ovella amb la seva parella” game. The scalability of the game makes it possible to extend it by adding other functionalities, such as other game modes with more cards. Some possible next features would be: recording the users’ scores in a database and establishing a score ranking, introducing a user profile structure so that users can log in, adding a feature to play against a friend online, adding sound tracks related to the cards so players can hear the pronunciation of the words, adding versions for specific science areas, and implementing versions of the game in other languages such as Spanish, English or French. The game could also be adapted to other platforms such as tablets and smartphones, and specific adaptations for people with neurofunctional disabilities are future work as well. “Jugant a definir la ciència online: Cada Ovella amb la seva parella” is just one of the games proposed by the project “Jugant a definir la ciència”. At the beginning of this document other titles of “Jugant a definir la ciència” are mentioned, and those could also be adapted to the digital world, like the dictionary “My First Dictionary of Science” (Estopà, 2013)[1] or the other three games available in “La Maleta Viatgera”.
Bibliography

http://docs.oracle.com/javase/specs/ [May 2014]
http://docs.oracle.com/javase/7/docs/api/java/awt/Button.html [February 2014]
http://es.wikipedia.org/wiki/Eclipse_(software)#Caracter%C3%ADsticas [May 2014]
http://medialab.di.unipi.it/web/IUM/Programmazione/OO/what/concepts.html [May 2014]
http://www.eclipse.org/ [June 2014]
https://vaadin.com/home [February 2014]
http://defciencia.iula.upf.edu/ [April 2014]
http://www.iula.upf.edu/breus/breu289ca.htm [April 2014]
http://stackoverflow.com/ [March 2014]
https://vaadin.com/forum# [March 2014]
[2] Estopà, Rosa. “Jugant a definir la ciència: un diccionari de mots de ciència fet per i per a nens i nenes” (2013)

Annex

Background image

Card images: laboratori, lupa, mapa, microscopi, nervi, neurona, ordinador, rellotge, telescopi, termòmetre (temperatura del cos), volcà, planta (vegetal), juga amb Paraules de Ciència
A Node-Positioning Algorithm for General Trees TR89-034 September, 1989 John Q. Walker II The University of North Carolina at Chapel Hill Department of Computer Science CB# 3175, Sitterson Hall Chapel Hill, NC 27599-3175 A TextLab Report UNC is an Equal Opportunity/Affirmative Action Institution. Abstract Drawing a tree consists of two stages: determining the position of each node, and actually rendering the individual nodes and interconnecting branches. The algorithm described in this paper is concerned with the first stage: given a list of nodes, an indication of the hierarchical relationship among them, and their shape and size, where should each node be positioned for optimal aesthetic effect? This algorithm determines the positions of the nodes for any arbitrary general tree; the result is the most desirable positioning with respect to certain widely accepted heuristics. The positioning, specified in x, y coordinates, minimizes the width of the tree. In a general tree, there is no limit on the number of offspring per node; this contrasts with binary and ternary trees, for example, which are trees with a limit of 2 and 3 offspring per node, respectively. This algorithm operates in time $O(N)$, where $N$ is the number of nodes in the tree. Previously, most tree drawings have been positioned by the sure hand of a human graphic designer. Many computer-generated positionings have either been trivial or contained irregularities. Earlier work by Wetherell and Shannon (1979) and Tilford (1981), upon which this algorithm builds, failed to correctly position the interior nodes of some trees. Radack (1988), also building on Tilford's work, has solved this same problem with a different method which makes four passes. The algorithm presented here correctly positions a tree's nodes using only two passes. It also handles several practical considerations: alternate orientations of the tree, variable node sizes, and out-of-bounds conditions.
Keywords Aesthetics, computer graphics, drawing methods, tree drawing, tree structures Contents Introduction ......................................................... 1 What is A General Tree? ........................................... 1 Aesthetic Rules ..................................................... 2 Application Areas ................................................... 3 How the Algorithm Works ......................................... 4 The Algorithm ....................................................... 6 An Example .......................................................... 17 Nodes Visited in the First Traversal ......................... 17 Nodes Visited in the Second Traversal ...................... 20 Changing the Orientation of the Root ......................... 21 Previous Work ....................................................... 24 Acknowledgements .................................................. 27 References ............................................................ 28 An Example Underlying Tree Structure ....................... 29 Introduction This algorithm addresses the problem of drawing tree structures. Trees are a common method of representing a hierarchically-organized structure. In computer science, trees are used in such areas as searching, compiling, and database systems; in non-computer applications, they are commonly used to construct organizational charts or to illustrate biological classifications. Visual displays of trees show hierarchical relationships clearly; they are often more useful than listings of trees in which hierarchical structure is obscured by a linear arrangement of the information. A key task in tree drawing is deciding where to place each node on the display or output page. This task is accomplished by a node-positioning algorithm that calculates the x and y coordinates for every node of the tree. A rendering routine can then use these coordinates to draw the tree. 
A node-positioning algorithm must address two key issues. First, the resulting drawing should be aesthetically pleasing. Second, the positioning algorithm should make every effort to conserve space. Each of these two issues can be handled straightforwardly by itself, but taking them together poses some challenges. Several algorithms for the positioning of general trees have been published; in the works of Sweet and Tilford, however, the authors describe anomalies with their algorithms that can cause drawings with less-than-desirable results. The algorithm presented here corrects the deficiencies in these algorithms and produces the most desirable positioning for all general trees it is asked to position. Radack has published a node-positioning algorithm that uses a different solution technique, but which produces results identical to those presented here. What Is A General Tree? This paper deals with rooted, directed trees, that is, trees with one root and hierarchical connections from the root to its offspring. No node may have more than one parent. A general tree is a tree with no restriction on the number of offspring each node has. A general tree is also known as an m-ary tree, since each node can have m offspring (where m is 0 or more). The common terms binary tree and ternary tree are restrictive examples of the general case; binary and ternary trees allow no more than 2 and 3 offspring per node, respectively. As a class, binary trees, in particular, differ from general trees in the following respect: An offspring of a node in a binary tree must be either the left offspring or the right offspring. It is common practice in drawing binary trees to preserve this left-right distinction. Thus, a single offspring is placed under its parent node either to the left or right of its parent's position. This left-right distinction does not apply in a general tree; if a node has a single offspring, the offspring is placed directly below its parent. 
This algorithm positions a binary tree by ignoring the distinction above. That is, it does not preserve left or right positioning of the offspring under the parent: if a node has exactly one offspring, it is positioned directly below its parent. Supowit and Reingold [9] noted that it is NP-hard to optimally position minimum-width binary trees (while adhering to the distinction above) to within a factor of less than about four percent. Aesthetic Rules In their paper, Wetherell and Shannon [13] first described a set of aesthetic rules against which a good positioning algorithm must be judged. Tilford [11] and Supowit and Reingold [9] have expanded that list in an effort to produce better algorithms. Tidy drawings of trees occupy as little space as possible while satisfying certain aesthetics: 1. Nodes at the same level of the tree should lie along a straight line, and the straight lines defining the levels should be parallel. [13] In parse trees, one might want all leaves to lie on one horizontal line; for that application Aesthetic 1 is not desirable. In this case, though, the width of the placement is fixed, and so the minimum-width placement problem for such parse trees is not interesting. We therefore restrict our attention to the wide class of applications for which Aesthetic 1 is desirable. [9] 2. A parent should be centered over its offspring. [13] 3. A tree and its mirror image should produce drawings that are reflections of one another; moreover, a subtree should be drawn the same way regardless of where it occurs in the tree. In some applications, one wishes to examine large trees to find repeated patterns; the search for patterns is facilitated by having isomorphic subtrees drawn isomorphically. [8] This implies that small subtrees should not appear arbitrarily positioned among larger subtrees. a.
Small, interior subtrees should be spaced out evenly among larger subtrees (where the larger subtrees are adjacent at one or more levels). b. Small subtrees at the far left or far right should be adjacent to larger subtrees. The algorithm described in this paper satisfies these aesthetic rules. Application Areas In the past, general trees displayed on a computer screen or in print have had one of the following characteristics: 1. they were positioned by hand by a graphic artist, 2. they were small, trivial, or special-case trees, able to be positioned by one of the existing algorithms, or 3. they had areas of irregularity within them where the algorithmic positioning was not aesthetically desirable. With this algorithm, a computer can reliably generate tree drawings equivalent to those done by a skilled human. Below are some of the applications that often use tree-drawings. - Drawings of B-trees and 2-3 trees - Structure editors that draw trees - Flow charts without loops - Visual LISP editors - Parse trees - Decision trees - Hierarchical database models - Hierarchically-organized file systems (for example, directories, sub-directories, and files) - Depth-first spanning trees (graph theory) - Organizational charts - Table of contents in printed matter - Biological classification How the Algorithm Works This algorithm initially assumes the common practice among computer scientists of drawing trees with the root at the top of the drawing. Node-positioning algorithms are concerned only with determining the x-coordinates of the nodes; the y-coordinate of a node can easily be determined from its level in the tree, due to Aesthetic 1 and the natural convention of a uniform vertical separation between consecutive levels. "Changing the Orientation of the Root" on page 21 presents a variation of the algorithm for altering the relationship of the x- and y-coordinates. This algorithm utilizes two concepts developed in previous positioning algorithms. 
First is the concept of building subtrees as rigid units. When a node is moved, all of its descendants (if it has any) are also moved—the entire subtree being thus treated as a rigid unit. A general tree is positioned by building it up recursively from its leaves toward its root. Second is the concept of using two fields for the positioning of each node. These two fields are: - a preliminary x-coordinate, and - a modifier field. Two tree traversals are used to produce the final x-coordinate of a node. The first traversal assigns the preliminary x-coordinate and modifier fields for each node; the second traversal computes the final x-coordinate of each node by summing the node’s preliminary x-coordinate with the modifier fields of all of its ancestors. This allows the simple moving of a large subtree and allows the algorithm to operate in time $O(N)$. For example, to move a subtree 4 units to the right, increment both the preliminary x-coordinate and the modifier field of the subtree’s root by 4. As another example, the modifier field associated with the apex node of the tree is used in determining the final position of all of its descendants. (The term apex node is used here to distinguish the root of the entire tree from the roots of individual internal subtrees.) The first tree traversal is a postorder traversal, positioning the smallest subtrees (the leaves) first and recursively proceeding from left to right to build up the position of larger and larger subtrees. Sibling nodes are always separated from one another by at least a predefined minimal distance (the sibling separation); adjacent subtrees are separated by at least a predefined subtree separation. Subtrees of a node are formed independently and placed as close together as these separation values allow. As the tree walk moves from the leaves to the apex, it combines smaller subtrees and their root to form a larger subtree. 
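The two-field scheme can be sketched as follows. This is an illustrative Python fragment, not the paper's code (the dict-based storage is my assumption); it shows that a node's final x-coordinate is its preliminary x-coordinate plus the modifiers of all its ancestors, so bumping a subtree root's preliminary x-coordinate and modifier by the same amount moves the whole subtree at once.

```python
def final_x(node, prelim, modifier, parent):
    """Final x of a node: its preliminary x-coordinate plus the
    modifier fields of all of its ancestors (what the second,
    preorder walk computes)."""
    x = prelim[node]
    p = parent.get(node)
    while p is not None:
        x += modifier[p]
        p = parent.get(p)
    return x
```

To move the subtree rooted at some node 4 units to the right, increment both `prelim[node]` and `modifier[node]` by 4: the node itself moves via its preliminary coordinate, and every descendant moves via the inherited modifier.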
For a given node, its subtrees are positioned one-by-one, moving from left to right. Imagine that its newest subtree has been drawn and cut out of paper along its contour. Superimpose the new subtree atop its neighbor to the left, and move them apart until no two points are touching. Initially their roots are separated by the sibling separation value; then at the next lower level, they are pushed apart until the subtree separation value is established between the adjacent subtrees at the lower level. This process continues at successively lower levels until we get to the bottom of the shorter subtree. Note that the new subtree being placed may not always bump against a descendant of its nearest sibling to the left; siblings much farther to the left, but with many offspring, may cause the new subtree to be pushed to the right. At some levels no movement may be necessary; but at no level are the subtrees moved closer together. When this process is complete for all of the offspring of a node, the node is centered over its leftmost and rightmost offspring. When pushing a new, large subtree farther and farther to the right, a gap may open between the large subtree and smaller subtrees that had been previously positioned correctly, but now appear to be bunched on the left with an empty area to their right. This produces an undesirable appearance; this characteristic of left-to-right gluing was the failing of the algorithms by Sweet, Wetherell and Shannon, and Tilford. The algorithm presented here produces evenly distributed, proportional spacing among subtrees. When moving a large subtree to the right, the distance it is moved is also apportioned to smaller, interior subtrees, satisfying Aesthetic 3. The moving of these subtrees is accomplished as above—by adding the proportional values to the preliminary x-coordinate and modifier fields of the roots of the small interior subtrees. 
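The proportional apportionment just described can be sketched with a hypothetical helper (not from the paper): for k bunched interior subtrees, subtree i is shifted right by i/(k+1) of the gap.

```python
def apportion_shifts(gap, num_interior):
    """Shift amounts for interior subtrees bunched at the left after
    a large subtree is pushed right: subtree i of k receives
    i/(k+1) of the gap, giving evenly distributed spacing."""
    return [gap * i / (num_interior + 1)
            for i in range(1, num_interior + 1)]
```

Each shift would be applied by adding it to the preliminary x-coordinate and modifier fields of the corresponding subtree root, as described above.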
For example, if three small subtrees are bunched at the left because a new large subtree has been positioned to the right, the first small subtree is shifted right by $\frac{1}{4}$ of the gap, the second small subtree is shifted right by $\frac{1}{2}$ of the gap, and the third small subtree is shifted right by $\frac{3}{4}$ of the gap. The second tree traversal, a preorder traversal, determines the final x-coordinate for each node. It starts at the apex node of the tree, summing each node's preliminary x-coordinate with the combined sum of the modifier fields of its ancestors. It also adds a value that guarantees centering of the display with respect to the position of the apex node of the drawing.

The Algorithm

Since the algorithm operates by making two recursive walks of the tree, several variables are taken to be global for the sake of runtime efficiency. These variables are described below, alphabetically. All other variables are local to their respective procedures and functions.

LevelZeroPtr - The algorithm maintains a list of the previous node at each level, that is, the adjacent neighbor to the left. LevelZeroPtr is a pointer to the first entry in this list.
xTopAdjustment - A fixed distance used in the final walk of the tree to determine the absolute x-coordinate of a node with respect to the apex node of the tree.
yTopAdjustment - A fixed distance used in the final walk of the tree to determine the absolute y-coordinate of a node with respect to the apex node of the tree.

The following global values must be set before the algorithm is called; they are not changed during the algorithm. They can be coded as constants.

LevelSeparation - The fixed distance between adjacent levels of the tree. Used in determining the y-coordinate of a node being positioned.
MaxDepth - The maximum number of levels in the tree to be positioned.
If all levels are to be positioned, set this value to positive infinity (or an appropriately large numerical value).

SiblingSeparation: The minimum distance between adjacent siblings of the tree.

SubtreeSeparation: The minimum distance between adjacent subtrees of a tree. For proper aesthetics, this value is normally somewhat larger than SiblingSeparation.

The algorithm is invoked by calling function POSITIONTREE, passing it a pointer to the apex node of the tree. If the tree is too wide or too tall to be positioned within the coordinate system being used, POSITIONTREE returns the boolean FALSE; otherwise it returns TRUE. For each node, the algorithm uses nine different functions. These values might be stored in the memory allocated for each node, or they might be calculated for each node, depending on the internal structure of your application. <table> <thead> <tr> <th>Function</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>PARENT(Node)</td> <td>The current node's hierarchical parent</td> </tr> <tr> <td>FIRSTCHILD(Node)</td> <td>The current node's leftmost offspring</td> </tr> <tr> <td>LEFTSIBLING(Node)</td> <td>The current node's closest sibling node on the left</td> </tr> <tr> <td>RIGHTSIBLING(Node)</td> <td>The current node's closest sibling node on the right</td> </tr> <tr> <td>XCOORD(Node)</td> <td>The current node's x-coordinate</td> </tr> <tr> <td>YCOORD(Node)</td> <td>The current node's y-coordinate</td> </tr> <tr> <td>PRELIM(Node)</td> <td>The current node's preliminary x-coordinate</td> </tr> <tr> <td>MODIFIER(Node)</td> <td>The current node's modifier value</td> </tr> <tr> <td>LEFTNEIGHBOR(Node)</td> <td>The current node's nearest neighbor to the left, at the same level</td> </tr> </tbody> </table> Upon entry to POSITIONTREE, the first four functions (the hierarchical relationships) are required for each node. Also, XCOORD and YCOORD of the apex node are required.
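For instance, with the triply-linked C structure given in the paper's appendix, most of these functions are direct field accesses; only LEFTSIBLING needs a short scan, since that structure keeps no left-sibling pointer. The macros and helper below are my own sketch, with only the link fields reproduced:

```c
#include <assert.h>
#include <stddef.h>

/* Pared-down slice of the appendix's node structure (link fields only). */
struct node {
    struct node *father;   /* PARENT(Node)       */
    struct node *lson;     /* FIRSTCHILD(Node)   */
    struct node *rlink;    /* RIGHTSIBLING(Node) */
};

#define PARENT(n)       ((n)->father)
#define FIRSTCHILD(n)   ((n)->lson)
#define RIGHTSIBLING(n) ((n)->rlink)

/* LEFTSIBLING(Node): scan the parent's child list for the node whose
 * right link is `n`; returns NULL for a first child or for the root. */
struct node *left_sibling(struct node *n)
{
    struct node *c;
    if (n->father == NULL)
        return NULL;
    for (c = n->father->lson; c != NULL && c->rlink != n; c = c->rlink)
        ;
    return c;
}
```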
Upon its successful completion, the algorithm sets the XCOORD and YCOORD values for each node in the tree.

```plaintext
function POSITIONTREE (Node): BOOLEAN;
begin
    if Node ≠ nil then
        begin
            (* Initialize the list of previous nodes at each level. *)
            INITPREVNODELIST;
            (* Do the preliminary positioning with a postorder walk. *)
            FIRSTWALK(Node, 0);
            (* Determine how to adjust all the nodes with respect *)
            (* to the location of the root.                       *)
            xTopAdjustment ← XCOORD(Node) - PRELIM(Node);
            yTopAdjustment ← YCOORD(Node);
            (* Do the final positioning with a preorder walk. *)
            return SECONDWALK(Node, 0, 0);
        end
    else
        (* Trivial: return TRUE if a null pointer was passed. *)
        return TRUE;
end.
```

Figure 1. Function POSITIONTREE. This function determines the coordinates for each node in a tree. A pointer to the apex node of the tree is passed as input. This assumes that the x- and y-coordinates of the apex node are set as desired, since the tree underneath it will be positioned with respect to those coordinates. Returns TRUE if there were no errors, otherwise returns FALSE.

```plaintext
procedure FIRSTWALK (Node, Level);
begin
    (* Set the pointer to the previous node at this level. *)
    LEFTNEIGHBOR(Node) ← GETPREVNODEATLEVEL(Level);
    SETPREVNODEATLEVEL(Level, Node);   (* This is now the previous. *)
    MODIFIER(Node) ← 0;                (* Set the default modifier value. *)
    if ISLEAF(Node) or Level = MaxDepth then
        begin
            if HASLEFTSIBLING(Node) then
                (* Determine the preliminary x-coordinate based on:  *)
                (* the preliminary x-coordinate of the left sibling, *)
                (* the separation between sibling nodes, and         *)
                (* the mean size of left sibling and current node.   *)
                PRELIM(Node) ← PRELIM(LEFTSIBLING(Node)) +
                               SiblingSeparation +
                               MEANNODESIZE(LEFTSIBLING(Node), Node);
            else
                (* No sibling on the left to worry about. *)
                PRELIM(Node) ← 0;
        end
    else
        (* This Node is not a leaf, so call this procedure *)
        (* recursively for each of its offspring.          *)
        begin
            Leftmost ← Rightmost ← FIRSTCHILD(Node);
            FIRSTWALK(Leftmost, Level + 1);
            while HASRIGHTSIBLING(Rightmost) do
                begin
                    Rightmost ← RIGHTSIBLING(Rightmost);
                    FIRSTWALK(Rightmost, Level + 1);
                end;
            Midpoint ← (PRELIM(Leftmost) + PRELIM(Rightmost)) / 2;
            if HASLEFTSIBLING(Node) then
                begin
                    PRELIM(Node) ← PRELIM(LEFTSIBLING(Node)) +
                                   SiblingSeparation +
                                   MEANNODESIZE(LEFTSIBLING(Node), Node);
                    MODIFIER(Node) ← PRELIM(Node) - Midpoint;
                    APPORTION(Node, Level);
                end
            else
                PRELIM(Node) ← Midpoint;
        end;
end.
```

Figure 2. Procedure FIRSTWALK. In this first postorder walk, every node of the tree is assigned a preliminary x-coordinate (held in field PRELIM(Node)). In addition, internal nodes are given modifiers, which will be used to move their offspring to the right (held in field MODIFIER(Node)).

```plaintext
function SECONDWALK (Node, Level, Modsum): BOOLEAN;
begin
    if Level ≤ MaxDepth then
        begin
            xTemp ← xTopAdjustment + PRELIM(Node) + Modsum;
            yTemp ← yTopAdjustment + (Level * LevelSeparation);
            (* Check to see that xTemp and yTemp are of the *)
            (* proper size for your application.            *)
            if CHECKEXTENTSRANGE(xTemp, yTemp) then
                begin
                    XCOORD(Node) ← xTemp;
                    YCOORD(Node) ← yTemp;
                    if HASCHILD(Node) then
                        (* Apply the Modifier value for this *)
                        (* node to all its offspring.        *)
                        Result ← SECONDWALK(FIRSTCHILD(Node),
                                            Level + 1,
                                            Modsum + MODIFIER(Node));
                    else
                        Result ← TRUE;
                    if Result = TRUE and HASRIGHTSIBLING(Node) then
                        Result ← SECONDWALK(RIGHTSIBLING(Node),
                                            Level, Modsum);
                end
            else
                (* Continuing would put the tree outside of *)
                (* the drawable extents range.              *)
                Result ← FALSE;
        end
    else
        (* We are at a level deeper than what we want to draw. *)
        Result ← TRUE;
    return Result;
end.
```

Figure 3. Function SECONDWALK. During a second preorder walk, each node is given a final x-coordinate by summing its preliminary x-coordinate and the modifiers of all the node's ancestors. The y-coordinate depends on the height of the tree. If the actual position of an interior node is right of its preliminary place, the subtree rooted at the node must be moved right to center the sons around the father.
Rather than immediately readjust all the nodes in the subtree, each node remembers the distance to the provisional place in a modifier field (MODIFIER(Node)). In this second pass down the tree, modifiers are accumulated and applied to every node. Returns TRUE if there were no errors, otherwise returns FALSE.

```plaintext
procedure APPORTION (Node, Level);
begin
    Leftmost ← FIRSTCHILD(Node);
    Neighbor ← LEFTNEIGHBOR(Leftmost);
    CompareDepth ← 1;
    DepthToStop ← MaxDepth - Level;
    while Leftmost ≠ nil and Neighbor ≠ nil and
          CompareDepth ≤ DepthToStop do
        begin
            (* Compute the location of Leftmost and where it *)
            (* should be with respect to Neighbor.           *)
            LeftModsum ← 0;
            RightModsum ← 0;
            AncestorLeftmost ← Leftmost;
            AncestorNeighbor ← Neighbor;
            for i ← 0 until CompareDepth do
                begin
                    AncestorLeftmost ← PARENT(AncestorLeftmost);
                    AncestorNeighbor ← PARENT(AncestorNeighbor);
                    RightModsum ← RightModsum + MODIFIER(AncestorLeftmost);
                    LeftModsum ← LeftModsum + MODIFIER(AncestorNeighbor);
                end;
            (* Find the MoveDistance, and apply it to Node's subtree. *)
            (* Add appropriate portions to smaller interior subtrees. *)
            MoveDistance ← (PRELIM(Neighbor) +
                            LeftModsum +
                            SubtreeSeparation +
                            MEANNODESIZE(Leftmost, Neighbor)) -
                           (PRELIM(Leftmost) + RightModsum);
            if MoveDistance > 0 then
                begin
                    (* Count interior sibling subtrees in LeftSiblings. *)
                    TempPtr ← Node;
                    LeftSiblings ← 0;
                    while TempPtr ≠ nil and TempPtr ≠ AncestorNeighbor do
                        begin
                            LeftSiblings ← LeftSiblings + 1;
                            TempPtr ← LEFTSIBLING(TempPtr);
                        end;
                    if TempPtr ≠ nil then
                        (* Apply portions to appropriate *)
                        (* left-sibling subtrees.        *)
                        begin
                            Portion ← MoveDistance / LeftSiblings;
                            TempPtr ← Node;
                            while TempPtr ≠ AncestorNeighbor do
                                begin
                                    PRELIM(TempPtr) ← PRELIM(TempPtr) +
                                                      MoveDistance;
                                    MODIFIER(TempPtr) ← MODIFIER(TempPtr) +
                                                        MoveDistance;
                                    MoveDistance ← MoveDistance - Portion;
                                    TempPtr ← LEFTSIBLING(TempPtr);
                                end;
                        end
                    else
                        (* Don't need to move anything--it needs to be  *)
                        (* done by an ancestor because AncestorNeighbor *)
                        (* and AncestorLeftmost are not siblings of     *)
                        (* each other.                                  *)
                        return;
                end;   (* of MoveDistance > 0 *)
            (* Determine the leftmost descendant of Node at the next *)
            (* lower level to compare its positioning against that   *)
            (* of its Neighbor.                                      *)
            CompareDepth ← CompareDepth + 1;
            if ISLEAF(Leftmost) then
                Leftmost ← GETLEFTMOST(Node, 0, CompareDepth);
            else
                Leftmost ← FIRSTCHILD(Leftmost);
        end;   (* of the while *)
end.
```

Figures 4 and 5. Procedure APPORTION, parts 1 and 2. This procedure cleans up the positioning of small sibling subtrees, thus fixing the "left-to-right gluing" problem evident in earlier algorithms. When moving a new subtree farther and farther to the right, gaps may open up among smaller subtrees that were previously sandwiched between larger subtrees. Thus, when moving the new, larger subtree to the right, the distance it is moved is also apportioned to smaller, interior subtrees, creating a pleasing aesthetic placement.

```plaintext
function GETLEFTMOST (Node, Level, Depth): NODE;
begin
    if Level ≥ Depth then
        return Node;
    else if ISLEAF(Node) then
        return nil;
    else
        begin
            (* Do a postorder walk of the subtree below Node. *)
            Rightmost ← FIRSTCHILD(Node);
            Leftmost ← GETLEFTMOST(Rightmost, Level + 1, Depth);
            while Leftmost = nil and HASRIGHTSIBLING(Rightmost) do
                begin
                    Rightmost ← RIGHTSIBLING(Rightmost);
                    Leftmost ← GETLEFTMOST(Rightmost, Level + 1, Depth);
                end;
            return Leftmost;
        end;
end.
```

Figure 6. Function GETLEFTMOST. This function returns the leftmost descendant of a node at a given Depth. This is implemented using a postorder walk of the subtree under Node, down to the level of Depth.
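Translating GETLEFTMOST into C over the appendix's node structure gives roughly the following. The field names `lson`/`rlink` follow the appendix; spelling out the NULL returns is my own choice:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal slice of the appendix's node structure. */
struct node {
    struct node *lson;    /* leftmost offspring */
    struct node *rlink;   /* right sibling      */
};

/* Leftmost descendant of `node` exactly `depth` levels below it, found by
 * a postorder walk; returns NULL if the subtree is not that deep. `level`
 * is the current depth below the original node, 0 on the initial call. */
struct node *get_leftmost(struct node *node, int level, int depth)
{
    struct node *leftmost, *rightmost;

    if (level >= depth)
        return node;
    if (node->lson == NULL)       /* a leaf cannot reach the target depth */
        return NULL;

    rightmost = node->lson;
    leftmost = get_leftmost(rightmost, level + 1, depth);
    while (leftmost == NULL && rightmost->rlink != NULL) {
        rightmost = rightmost->rlink;
        leftmost = get_leftmost(rightmost, level + 1, depth);
    }
    return leftmost;
}
```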
Level here is not the absolute tree level used in the two main tree walks; it refers to the level below the node whose leftmost descendant is being found.

```plaintext
function MEANNODESIZE (LeftNode, RightNode): REAL;
begin
    NodeSize ← 0;
    if LeftNode ≠ nil then
        NodeSize ← NodeSize + RIGHTSIZE(LeftNode);
    if RightNode ≠ nil then
        NodeSize ← NodeSize + LEFTSIZE(RightNode);
    return NodeSize;
end.
```

Figure 7. Function MEANNODESIZE. This function returns the mean size of the two passed nodes. It adds the size of the right half of the lefthand node to the size of the left half of the righthand node. If all nodes are the same size, this is a trivial calculation.

```plaintext
function CHECKEXTENTSRANGE (xValue, yValue): BOOLEAN;
begin
    if (xValue is a valid value for the x-coordinate) and
       (yValue is a valid value for the y-coordinate) then
        return TRUE;
    else
        return FALSE;
end.
```

Figure 8. Function CHECKEXTENTSRANGE. This function verifies that the passed x- and y-coordinates are within the coordinate system being used for the drawing. For example, if the x- and y-coordinates must be 2-byte integers, this function could determine whether xValue and yValue are too large.

```plaintext
procedure INITPREVNODELIST;
begin
    (* Start with the node at level 0--the apex of the tree. *)
    TempPtr ← LevelZeroPtr;
    while TempPtr ≠ nil do
        begin
            PREVNODE(TempPtr) ← nil;
            TempPtr ← NEXTLEVEL(TempPtr);
        end;
end.
```

Figure 9. Initialize the list of previous nodes at each level. Three list-maintenance procedures, GETPREVNODEATLEVEL, SETPREVNODEATLEVEL, and INITPREVNODELIST, maintain a singly-linked list. Each entry in the list corresponds to the node previous to the current node at a given level (for example, element 2 in the list corresponds to the node to the left of the current node at level 2). If the maximum tree size is known beforehand, this list can be replaced with a fixed-size array, and these procedures become trivial. Each list element contains two fields: PREVNODE, the previous node at this level, and NEXTLEVEL, a forward pointer to the next list element.
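As noted above, when the maximum depth is known in advance the linked list can be replaced by a fixed-size array, making all three routines trivial. A C sketch (the array bound and names are my own assumptions):

```c
#include <assert.h>
#include <stddef.h>

struct node { int id; };     /* stand-in; see the appendix for the real structure */

#define MAX_DEPTH 64         /* assumed bound on tree depth */

static struct node *prev_node_at[MAX_DEPTH + 1];

/* INITPREVNODELIST: clear the previous-node entry at every level. */
static void init_prev_node_list(void)
{
    int i;
    for (i = 0; i <= MAX_DEPTH; i++)
        prev_node_at[i] = NULL;
}

/* GETPREVNODEATLEVEL: previous node at this level, or NULL if none. */
static struct node *get_prev_node_at_level(int level)
{
    return (level >= 0 && level <= MAX_DEPTH) ? prev_node_at[level] : NULL;
}

/* SETPREVNODEATLEVEL: record the node most recently seen at this level. */
static void set_prev_node_at_level(int level, struct node *node)
{
    if (level >= 0 && level <= MAX_DEPTH)
        prev_node_at[level] = node;
}
```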
For performance, the list elements are not freed between calls to POSITIONTREE; INITPREVNODELIST simply resets each element.

```plaintext
function GETPREVNODEATLEVEL (Level): NODE;
begin
    (* Start with the node at level 0--the apex of the tree. *)
    TempPtr ← LevelZeroPtr;
    i ← 0;
    while TempPtr ≠ nil do
        begin
            if i = Level then
                return PREVNODE(TempPtr);
            TempPtr ← NEXTLEVEL(TempPtr);
            i ← i + 1;
        end;
    (* Otherwise, there was no node at the specified level. *)
    return nil;
end.
```

Figure 10. Get the previous node at this level. See Figure 9.

```plaintext
procedure SETPREVNODEATLEVEL (Level, Node);
begin
    (* Start with the node at level 0--the apex of the tree. *)
    TempPtr ← LevelZeroPtr;
    i ← 0;
    while TempPtr ≠ nil do
        begin
            if i = Level then
                begin
                    (* At this level, replace the existing *)
                    (* list element with the passed-in node. *)
                    PREVNODE(TempPtr) ← Node;
                    return;
                end
            else if NEXTLEVEL(TempPtr) = nil then
                (* There isn't a list element yet at this level, so *)
                (* add one. The following instructions prepare the  *)
                (* list element at the next level, not at this one. *)
                begin
                    NewNode ← ALLOCATE_A_NODE;
                    PREVNODE(NewNode) ← nil;
                    NEXTLEVEL(NewNode) ← nil;
                    NEXTLEVEL(TempPtr) ← NewNode;
                end;
            (* Prepare to move to the next level, to look again. *)
            TempPtr ← NEXTLEVEL(TempPtr);
            i ← i + 1;
        end;
    (* Should only get here if LevelZeroPtr is nil. *)
    LevelZeroPtr ← ALLOCATE_A_NODE;
    PREVNODE(LevelZeroPtr) ← Node;
    NEXTLEVEL(LevelZeroPtr) ← nil;
end.
```

Figure 11. Set an element in the list. See Figure 9. Function ALLOCATE_A_NODE (not shown here) requests a pointer to a block of memory, to be used to represent a node in the list.

An Example

The operation of the algorithm during these two walks can best be illustrated with an example. At least three levels are needed to illustrate its operation, since a small subtree must be centered between larger sibling subtrees. The following figure is an example tree positioned by this algorithm.
Its fifteen nodes have been lettered in the order that they are visited in the first postorder traversal. For this example, the mean size of each node is 2 units, and the sibling separation and subtree separation values are the same: 4 units. Figure 12. An example general tree, with 15 nodes Nodes Visited in the First Traversal The nodes are visited in a postorder walk. Their preliminary x-coordinate and modifier values are calculated in this traversal. A is a leaf with no left sibling. \[ \begin{align*} \text{PRELIM}(A) &= 0 \\ \text{MODIFIER}(A) &= 0 \end{align*} \] B is also a leaf with no left sibling. \[ \begin{align*} \text{PRELIM}(B) &= 0 \\ \text{MODIFIER}(B) &= 0 \end{align*} \] C is the right sibling of node B. It is separated from it by the sibling separation value plus the mean size of the two nodes. \[ \begin{align*} \text{PRELIM}(C) &= 0 + 4 + 2 = 6 \\ \text{MODIFIER}(C) &= 0 \end{align*} \] D is the parent of nodes B and C, and the right sibling of node A. It is separated from node A by the sibling separation value plus the mean size of the two nodes. Its modifier is set so that when it is applied to nodes B and C, they will appear centered underneath it. The modifier is determined by taking PRELIM(D) and subtracting the mean of the PRELIM values of its most widely separated offspring. \[ \begin{align*} \text{PRELIM}(D) &= 0 + 4 + 2 = 6 \\ \text{MODIFIER}(D) &= 6 - (0 + 6)/2 = 3 \end{align*} \] E is the parent of nodes A and D. It is centered over nodes A and D. \[ \begin{align*} \text{PRELIM}(E) &= (0 + 6)/2 = 3 \\ \text{MODIFIER}(E) &= 0 \end{align*} \] F is a right sibling of node E. It is separated from it by the sibling separation value plus the mean size of the two nodes. That would place it directly over node C.
We can see now that node N's subtree will later be placed much farther to the right, leaving the spacing between nodes E and F smaller, and hence different, than the spacing between nodes F and N. When node N is finally positioned, the position of node F will be adjusted. But for now, \[ \begin{align*} \text{PRELIM}(F) &= 3 + 4 + 2 = 9 \\ \text{MODIFIER}(F) &= 0 \end{align*} \] G is a leaf with no left sibling. \[ \begin{align*} \text{PRELIM}(G) &= 0 \\ \text{MODIFIER}(G) &= 0 \end{align*} \] H is a leaf with no left sibling. \[ \begin{align*} \text{PRELIM}(H) &= 0 \\ \text{MODIFIER}(H) &= 0 \end{align*} \] I is the right sibling of node H. It is separated from it by the sibling separation value plus the mean size of the two nodes. \[ \begin{align*} \text{PRELIM}(I) &= 0 + 4 + 2 = 6 \\ \text{MODIFIER}(I) &= 0 \end{align*} \] J is the right sibling of node I. As above, it is separated by the standard spacing from node I. \[ \begin{align*} \text{PRELIM}(J) &= 6 + 4 + 2 = 12 \\ \text{MODIFIER}(J) &= 0 \end{align*} \] K is the right sibling of node J. \[ \begin{align*} \text{PRELIM}(K) &= 12 + 4 + 2 = 18 \\ \text{MODIFIER}(K) &= 0 \end{align*} \] L is the right sibling of node K. \[ \begin{align*} \text{PRELIM}(L) &= 18 + 4 + 2 = 24 \\ \text{MODIFIER}(L) &= 0 \end{align*} \] M is the parent of nodes H through L, and the right sibling of node G. It is separated from node G by the sibling separation value plus the mean size of the two nodes. Its modifier is set so that when it is applied to nodes H through L, they will appear centered underneath it. \[ \begin{align*} \text{PRELIM}(M) &= 0 + 4 + 2 = 6 \\ \text{MODIFIER}(M) &= 6 - (0 + 24)/2 = -6 \end{align*} \] N is the parent of nodes G and M, and the right sibling of node F. It is first of all given its standard positioning to the right of node F, with a modifier that reflects the centering of its offspring beneath it. \[ \begin{align*} \text{PRELIM}(N) &= 9 + 4 + 2 = 15 \\ \text{MODIFIER}(N) &= 15 - (0 + 6)/2 = 12 \end{align*} \] Now we have to verify that node E's subtree and node N's subtree are properly separated.
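The per-level separation check can be sketched as a small helper. This is my own formulation of the arithmetic, not the paper's pseudocode: given the effective positions of the rightmost descendant of the left subtree and the leftmost descendant of the right subtree, the right subtree must move right by whatever amount the required minimum separation exceeds the actual gap.

```c
#include <assert.h>

/* How far the right subtree must be pushed right so that the gap between
 * the two descendants reaches min_sep (SubtreeSeparation plus the mean
 * node size). Returns 0 when the current gap is already sufficient. */
static double required_move(double left_pos, double right_pos, double min_sep)
{
    double move = min_sep - (right_pos - left_pos);
    return (move > 0.0) ? move : 0.0;
}
```

In the worked example, the second-level check gives required_move(6, 12, 6) = 0 (no move) and the third-level check gives required_move(9, 6, 6) = 9.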
Moving down one level, the leftmost descendant of node N, node G, currently has a position of 0 + 12 = 12 (PRELIM(G) plus MODIFIER(N), its parent). The rightmost descendant of node E, node D, is positioned at 6 + 0 = 6 (PRELIM(D) plus MODIFIER(E), its parent). Their difference is 12 - 6 = 6, which is equal to the minimum separation (subtree separation plus mean node size), so node N's subtree does not need to be moved; there is no overlap at this level. Moving down one more level, the leftmost descendant of node N is node H. It is positioned at 0 + (-6) + 12 = 6 (PRELIM(H) plus MODIFIER(M) and MODIFIER(N)). The rightmost descendant of node E, node C, is positioned at 6 + 3 + 0 = 9 (PRELIM(C) plus MODIFIER(D) and MODIFIER(E)). Their difference is 6 - 9 = -3; it should be 6, the minimum subtree separation plus the mean node size. Thus node N and its subtree need to be moved to the right a distance of 6 - (-3) = 9. \[ \begin{align*} \text{PRELIM}(N) &= 15 + 9 = 24 \\ \text{MODIFIER}(N) &= 12 + 9 = 21 \end{align*} \] This opens a gap of size 9 between sibling nodes E and N. This gap needs to be evenly distributed among the sibling subtrees between them, and node F is the only one. Node F is moved to the right a distance of 9/2 = 4.5. \[ \begin{align*} \text{PRELIM}(F) &= 9 + 4.5 = 13.5 \\ \text{MODIFIER}(F) &= 0 + 4.5 = 4.5 \end{align*} \] O is the parent of nodes E, F, and N. It is positioned halfway between the positions of nodes E and N. \[ \begin{align*} \text{PRELIM}(O) &= (3 + 24)/2 = 13.5 \\ \text{MODIFIER}(O) &= 0 \end{align*} \] **Nodes Visited in the Second Traversal** The nodes are all visited a second time, this time in a preorder traversal. Their final x-coordinates are determined by summing their preliminary x-coordinates with the modifier fields of all of their ancestors.
<table> <thead> <tr> <th>Node</th> <th>Final X-coordinate (preliminary x-coordinate + modifiers of ancestors)</th> </tr> </thead> <tbody> <tr> <td>O</td> <td>13.5</td> </tr> <tr> <td>E</td> <td>3 + 0 = 3</td> </tr> <tr> <td>A</td> <td>0 + 0 + 0 = 0</td> </tr> <tr> <td>D</td> <td>6 + 0 + 0 = 6</td> </tr> <tr> <td>B</td> <td>0 + 3 + 0 + 0 = 3</td> </tr> <tr> <td>C</td> <td>6 + 3 + 0 + 0 = 9</td> </tr> <tr> <td>F</td> <td>13.5 + 0 = 13.5</td> </tr> <tr> <td>N</td> <td>24 + 0 = 24</td> </tr> <tr> <td>G</td> <td>0 + 21 + 0 = 21</td> </tr> <tr> <td>M</td> <td>6 + 21 + 0 = 27</td> </tr> <tr> <td>H</td> <td>0 + (-6) + 21 + 0 = 15</td> </tr> <tr> <td>I</td> <td>6 + (-6) + 21 + 0 = 21</td> </tr> <tr> <td>J</td> <td>12 + (-6) + 21 + 0 = 27</td> </tr> <tr> <td>K</td> <td>18 + (-6) + 21 + 0 = 33</td> </tr> <tr> <td>L</td> <td>24 + (-6) + 21 + 0 = 39</td> </tr> </tbody> </table>

Changing the Orientation of the Root

As presented, the algorithm positions trees with the apex of the tree at the top of the drawing. Some simple modifications allow other common positionings, such as one where the root is on the left and its offspring are to its right. Four such orientations of the root can be readily identified; these will be the values taken by a new global constant, RootOrientation, to be set before the algorithm is called.

NORTH: root is at the top, as shown in the preceding algorithm
SOUTH: root is at the bottom, its offspring are above it
EAST: root is at the left, its offspring are to its right
WEST: root is at the right, its offspring are to its left

The ability to accommodate a change in orientation involves some minor changes to three functions: POSITIONTREE, SECONDWALK, and MEANNODESIZE. These changes are shown below.

```plaintext
function POSITIONTREE (Node): BOOLEAN;
begin
    if Node ≠ nil then
        begin
            (* Initialize the list of previous nodes at each level. *)
            INITPREVNODELIST;
            (* Do the preliminary positioning with a postorder walk. *)
            FIRSTWALK(Node, 0);
            (* Determine how to adjust all the nodes with respect *)
            (* to the location of the root.                       *)
            if RootOrientation = NORTH or RootOrientation = SOUTH then
                begin
                    xTopAdjustment ← XCOORD(Node) - PRELIM(Node);
                    yTopAdjustment ← YCOORD(Node);
                end
            else if RootOrientation = EAST or RootOrientation = WEST then
                begin
                    xTopAdjustment ← XCOORD(Node);
                    yTopAdjustment ← YCOORD(Node) + PRELIM(Node);
                end;
            .
            .
        end;
end.
```

Figure 13. Function POSITIONTREE. The final position of the tree's apex depends on the RootOrientation value.

```plaintext
function SECONDWALK (Node, Level, Modsum): BOOLEAN;
begin
    if Level ≤ MaxDepth then
        begin
            if RootOrientation = NORTH then
                begin
                    xTemp ← xTopAdjustment + (PRELIM(Node) + Modsum);
                    yTemp ← yTopAdjustment + (Level * LevelSeparation);
                end
            else if RootOrientation = SOUTH then
                begin
                    xTemp ← xTopAdjustment + (PRELIM(Node) + Modsum);
                    yTemp ← yTopAdjustment - (Level * LevelSeparation);
                end
            else if RootOrientation = EAST then
                begin
                    xTemp ← xTopAdjustment + (Level * LevelSeparation);
                    yTemp ← yTopAdjustment - (PRELIM(Node) + Modsum);
                end
            else if RootOrientation = WEST then
                begin
                    xTemp ← xTopAdjustment - (Level * LevelSeparation);
                    yTemp ← yTopAdjustment - (PRELIM(Node) + Modsum);
                end;
            (* Check to see that xTemp and yTemp are of the *)
            (* proper size for your application.            *)
            if CHECKEXTENTSRANGE(xTemp, yTemp) then
            .
            .
```

Figure 14. Function SECONDWALK. The values of xTemp and yTemp now depend on the RootOrientation value.

```plaintext
function MEANNODESIZE (LeftNode, RightNode): REAL;
begin
    NodeSize ← 0;
    if RootOrientation = NORTH or RootOrientation = SOUTH then
        begin
            if LeftNode ≠ nil then
                NodeSize ← NodeSize + RIGHTSIZE(LeftNode);
            if RightNode ≠ nil then
                NodeSize ← NodeSize + LEFTSIZE(RightNode);
        end
    else if RootOrientation = EAST or RootOrientation = WEST then
        begin
            if LeftNode ≠ nil then
                NodeSize ← NodeSize + TOPSIZE(LeftNode);
            if RightNode ≠ nil then
                NodeSize ← NodeSize + BOTTOMSIZE(RightNode);
        end;
    return NodeSize;
end.
```

Figure 15. Function MEANNODESIZE.
This function now returns the mean width of the two nodes if the RootOrientation is NORTH or SOUTH; if the RootOrientation is EAST or WEST, it returns the mean height of the two nodes.

Previous Work

Early algorithms concentrated on drawing binary trees. Knuth is generally credited with the first published algorithm for drawing binary trees. Its positioning of nodes in the drawing was sometimes rather crude; later algorithms tried to improve upon it.\textsuperscript{8, 12, 13, 14} The drawing of optimally-positioned general trees, because of their distinctions from binary trees (as noted above), is not an NP-hard problem. However, the problem was approached somewhat later than the problem of drawing binary trees. In the algorithms of Sweet\textsuperscript{10} and Tilford,\textsuperscript{11} each author notes his algorithm's irregular behavior under certain conditions. Sweet\textsuperscript{10} published his algorithm as an appendix to his dissertation and noted the following:

The principal shortcoming of the algorithm is somewhat difficult to demonstrate in a small tree. Figure B.3 is a somewhat contrived example that points out the problem. Consider the sons of node D: they are E, F, and G. Since the subtree E is "shallow," it is placed quite far to the left. The nodes F and G must be placed considerably farther to the right in order to make room for their subtrees. With larger, wider trees, a shallow subtree such as E can be so far from its brothers that it gets "lost." Shallow subtrees in son positions other than the first lead to uneven spacing of the sons. One could probably add a pass to the algorithm that, once having established the rightmost subtree of a node, then reformats the other subtrees for compactness and even spacing.

Figure B.3. Example tree showing a shortcoming of the printing algorithm. [image not reproduced]

Figure 16. Sweet's illustration showing deficiencies in his algorithm.
From page 96 of Sweet.\textsuperscript{10}

Tilford's Master's thesis, \textit{Tree Drawing Algorithms},\textsuperscript{11} included algorithms for both binary and general tree drawings. He stated:

Consider the three trees in Figure 5.1; in the first, subtrees were glued from left to right; in the second, from right to left. Of course, the most desirable positioning is given by the third drawing, which cannot be produced by Algorithm 5.1 [for drawing general trees] regardless of the gluing order, because Algorithm 5.1 always puts a pair of subtrees as close together as possible. The problem arises when a tree has the general shape shown in Figure 5.2, in which two non-adjacent subtrees are large, and the intervening ones are small enough that there is freedom in deciding where to place them. Although this will not always lead via Algorithm 5.1 to a violation of Aesthetic 4 [which states: "A tree and its mirror image should produce drawings that are reflections of one another; moreover, a subtree should be drawn the same way regardless of where it occurs in the tree."], it is clear that the small subtrees ought to be spaced out evenly rather than bunched up on one side or the other.

Figure 5.1. A small ternary tree positioned (a) by Algorithm 5.1, with left-to-right gluing; (b) by Algorithm 5.1, with right-to-left gluing; and (c) ideally. [image not reproduced]

Figure 5.2. The general shape of a tree for which Algorithm 5.1 produces an unsatisfactory positioning. [image not reproduced]

Figure 17. Examples of centering vs. two types of gluing. From page 37 of Tilford.

Figure 18. Tilford's illustration showing deficiencies in his algorithm. From page 38 of Tilford.

Radack\textsuperscript{7} follows directly from Tilford's work, solving the left-to-right-gluing problem by essentially running his algorithm twice. The first time, the subtrees are agglutinated left to right; the second time, from right to left. A node is positioned at the average of the two assigned positions.
Radack's algorithm positions the nodes in four passes. Andy Poggio of SRI International, describing an unpublished algorithm used on their CCWS system,\textsuperscript{6} noted, "As their display is not a central aspect of our research, we developed a simple, expedient algorithm for that purpose."\textsuperscript{6} It adheres to three aesthetic rules:

1. nodes at the same level are displayed on the same horizontal level,
2. all successors of a node are displayed below the node in an area bounded by the midpoint distance to adjacent nodes, and
3. the display root node is always centered at the top of the display area.

Trivial algorithms also exist for drawing general trees in an outline-like form, where the apex node is positioned to the left of the display and not centered above its offspring (for example, Petzold\textsuperscript{4}). The algorithms by Manning and Atallah\textsuperscript{3} are examples of the class of algorithms that do node positioning with a different set of aesthetic rules; their primary goal was to highlight the symmetry inherent in hierarchical relationships.

Acknowledgements

I appreciate the written correspondence I received from the experts on this topic (listed alphabetically): Brad A. Myers at the Computer Systems Research Institute in Toronto, Andy Poggio at SRI International, Edward Reingold at the University of Illinois, Bob Tarjan at AT&T Bell Laboratories, and C. S. Wetherell at AT&T Information Systems. Thanks to Jane Munn, Jim Staton, and Dr. Bill Wright at IBM, who reviewed an earlier version of this paper. Bob Gibson and John Broughton have also given me a lot of help.

An Example Underlying Tree Structure

In this example, I use the internal tree notation described by Knuth [Reference 2, Section 2.3.3] for a triply-linked tree. Each node consists of three pointers, FATHER, LSON, and RLINK, and its information in field INFO. FATHER points to the parent of the node. LSON points to the leftmost offspring of a node.
RLINK points to the right sibling of a node. Thus, if node T is the root of a binary tree, the root of its left subtree is LSON(T) and the root of its right subtree is RLINK(LSON(T)). This node structure is illustrated below, using the syntax of the C programming language. ```c struct position { float x_coordinate; /* the value identified as XCOORD(Node) */ float y_coordinate; /* the value identified as YCOORD(Node) */ float preliminary; /* the value identified as PRELIM(Node) */ float modifier; /* the value identified as MODIFIER(Node) */ }; struct information { char node_label[80]; }; struct node { struct node *father; /* pointer to the parent of this node */ struct node *lson; /* pointer to this node's leftmost offspring */ struct node *rlink; /* pointer to the right sibling of this node */ struct node *left_neighbor; /* pointer to the adjacent node to the left */ struct position pos; /* positioning values, as defined above */ struct information info; /* node information, as defined above */ }; ```
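A small helper (my own, not from the paper) shows how such nodes are linked: attaching children in order maintains the invariant that, for a binary tree rooted at T, LSON(T) is the left subtree and RLINK(LSON(T)) is the right subtree. Only the link fields are reproduced, under a distinct name to avoid clashing with the structure above.

```c
#include <assert.h>
#include <stddef.h>

/* Only the link fields of the structure above are needed here. */
struct tree_node {
    struct tree_node *father;
    struct tree_node *lson;
    struct tree_node *rlink;
};

/* Attach `child` as the rightmost offspring of `parent`. */
static void add_child(struct tree_node *parent, struct tree_node *child)
{
    struct tree_node *c;

    child->father = parent;
    child->rlink = NULL;
    if (parent->lson == NULL) {
        parent->lson = child;            /* first offspring */
    } else {
        for (c = parent->lson; c->rlink != NULL; c = c->rlink)
            ;                            /* find the rightmost offspring */
        c->rlink = child;
    }
}
```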
Integrating Evergreen with Other Tools

Documentation Interest Group

# Table of Contents

I. Introduction
 1. About This Documentation
 2. About Evergreen
II. Adding Evergreen Search to Web Browsers
 3. Adding OpenSearch to Firefox browser
III. Using Supercat
 4. Introduction
 5. ISBNs
 6. Records
  Record formats
  Retrieve records
  Recent records
IV. Using UnAPI
 7. URL format
V. Phonelist.pm Module
 8. Introduction
 9. Adding Parameters
 10. Output
 11. Holds
 12. Overdues
 13. Skipping patrons with email notification of holds
 14. Using the ws_ou parameter
 15. Automating the download
VI. Adding an Evergreen search form to a web page
 16. Introduction
 17. Simple search form
 18. Advanced search form
 19. Encoding
 20. Setting the document type
 21. Setting the library
VII. SIP Server
 22. About the SIP Protocol
 23. Installing the SIP Server
  Getting the code
  Configuring the Server
  Adding SIP Users
  Running the server
  Logging-SIP
  Testing Your SIP Connection
  More Testing
 24. SIP Communication
  01 Block Patron
  09/10 Checkin
  11/12 Checkout
  15/16 Hold
  17/18 Item Information
  19/20 Item Status Update
  23/24 Patron Status
  25/26 Patron Enable
  29/30 Renew
  35/36 End Session
 25. Patron privacy and the SIP protocol
  SIP server configuration
  SSH tunnels on SIP clients
A. Attributions
B. Admonitions
C. Licensing
Index

## List of Tables

8.1. Parameters for the phonelist program
11.1. Columns in the holds CSV file
12.1. Columns in the overdues CSV file

Part I. Introduction

Table of Contents

1. About This Documentation
2. About Evergreen

Chapter 1.
About This Documentation

This guide was produced by the Evergreen Documentation Interest Group (DIG), consisting of numerous volunteers from many different organizations. The DIG has drawn together, edited, and supplemented pre-existing documentation contributed by libraries and consortia running Evergreen that were kind enough to release their documentation into the creative commons. Please see the Attributions section for a full list of authors and contributing organizations. Just like the software it describes, this guide is a work in progress, continually revised to meet the needs of its users, so if you find errors or omissions, please let us know by contacting the DIG facilitators at docs@evergreen-ils.org.

This guide describes how to integrate Evergreen with other technologies, including Web browsers, Web sites, discovery layers, self-check machines, RFID equipment, SIP clients, auto-dialer phone scripts, and other applications. Copies of this guide can be accessed in PDF and HTML formats from http://docs.evergreen-ils.org.

Chapter 2. About Evergreen

Evergreen is open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials.

The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services.

The community’s development requirements state that Evergreen must be:

• Stable, even under extreme load.
• Robust, and capable of handling a high volume of transactions and simultaneous users.
• Flexible, to accommodate the varied needs of libraries.
• Secure, to protect our patrons’ privacy and data.
• User-friendly, to facilitate patron and staff use of the system.

Evergreen, which first launched in 2006, now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide.

Part II. Adding Evergreen Search to Web Browsers

# Table of Contents

3. Adding OpenSearch to Firefox browser

Chapter 3. Adding OpenSearch to Firefox browser

OpenSearch is a collection of simple formats for the sharing of search results. More information about OpenSearch can be found on their website. The following example illustrates how to add an OpenSearch source to the list of search sources in a Firefox browser:

1. Navigate to any catalog page in your Firefox browser, click on the top-right search box’s dropdown, and select the option for Add "Example Consortium OpenSearch". The label will match the current scope.
2. At this point, it will add a new search option for the location the catalog is currently using. In this example, that is CONS (searching the whole consortium).
3. Enter search terms to begin a keyword search using this source; for example, a search for "mozart" using the sample bib record set.
4. You can select which search source to use by clicking on the dropdown picker.

Part III. Using Supercat

# Table of Contents

4. Introduction
5. ISBNs
6. Records
 Record formats
 Retrieve records
 Recent records
 Filtering by Org Unit

Chapter 4. Introduction

You can use SuperCat to get data about ISBNs, metarecords, bibliographic records, and authority records. Throughout this section, replace `<hostname>` with the domain or subdomain of your Evergreen installation to try these examples on your own system.

Chapter 5. ISBNs

Given one ISBN, Evergreen can return a list of related records and ISBNs, including alternate editions and translations. To use the Supercat oISBN tool, use http or https to access the following URL.
For example, the URL http://gapines.org/opac/extras/oisbn/0439136350 returns the following list of catalog record IDs and ISBNs:

```xml
<?xml version='1.0' encoding='UTF-8' ?>
<idlist metarecord='436139'>
  <isbn record='5652044'>9780606323475</isbn>
  <isbn record='5767568'>9780780673809</isbn>
  <isbn record='1350528'>9780807286029</isbn>
  <isbn record='5708164'>9780780666964</isbn>
  <isbn record='2372013'>043965548X</isbn>
  <isbn record='5804511'>8498366969</isbn>
  <isbn record='4132282'>9780786222742</isbn>
  <isbn record='1530458'>9788478885190</isbn>
  <isbn record='2003291'>0736650962</isbn>
  <isbn record='1993002'>8478885196</isbn>
  <isbn record='1187595'>9780439554923</isbn>
  <isbn record='4591175'>8478885196</isbn>
  <isbn record='5676282'>8007282324</isbn>
  <isbn record='2363352'>8478885196</isbn>
  <isbn record='2315122'>1480614998</isbn>
  <isbn record='2304139'>8478886559</isbn>
  <isbn record='2012565'>9780613371063</isbn>
  <isbn record='5763645'>9782070528189</isbn>
  <isbn record='2383286'>0786222743</isbn>
  <isbn record='2489670'>9780529232696</isbn>
  <isbn record='1681685'>9780807282311</isbn>
  <isbn record='2160095'>0807286028</isbn>
  <isbn record='2219885'>9789500421157</isbn>
  <isbn record='1934218'>9780613355950</isbn>
  <isbn record='5682871'>9781594130021</isbn>
  <isbn record='1281164'>0807283150</isbn>
  <isbn record='1666656'>0747542155</isbn>
  <isbn record='4717734'>8478886559</isbn>
</idlist>
```

Chapter 6. Records

Record formats

First, determine which format you’d like to receive data in.
To see the available formats for bibliographic records, visit http://<hostname>/opac/extras/supercat/formats/record

Similarly, authority record formats can be found at http://libcat.linnbenton.edu/opac/extras/supercat/formats/authority and metarecord formats can be found at http://libcat.linnbenton.edu/opac/extras/supercat/formats/metarecord

For example, http://gapines.org/opac/extras/supercat/formats/authority shows that the Georgia Pines catalog can return authority records in the formats opac, marc21, marc21-full, and marc21-uris. Supercat also includes the MIME type of each format, and sometimes also refers to the documentation for a particular format.

```xml
<formats>
  <format>
    <name>opac</name>
    <type>text/html</type>
  </format>
  <format>
    <name>marc21</name>
    <type>application/xml</type>
    <docs>http://www.loc.gov/marc/</docs>
  </format>
  <format>
    <name>marc21-full</name>
    <type>application/xml</type>
    <docs>http://www.loc.gov/marc/</docs>
  </format>
  <format>
    <name>marc21-uris</name>
    <type>application/xml</type>
    <docs>http://www.loc.gov/marc/</docs>
  </format>
</formats>
```

atom-full is currently the only format that includes holdings and availability data for a given bibliographic record.

Retrieve records

You can retrieve records using URLs in the following format:

http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<record-ID>

For example, http://gapines.org/opac/extras/supercat/retrieve/mods/record/33333 returns the following record.

```xml
<?xml version="1.0"?>
<modsCollection xmlns="http://www.loc.gov/mods/" xmlns:mods="http://www.loc.gov/mods/" version="3.0">
```
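The retrieve URL pattern above is easy to script. A minimal Python sketch that only builds the URL (the hostname, format, and record ID are the placeholder values from the example in this chapter; fetching and parsing are left to the HTTP client of your choice):

```python
from urllib.parse import quote

def supercat_retrieve_url(hostname, fmt, record_type, record_id):
    """Build a SuperCat retrieve URL of the form
    http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<record-ID>
    """
    return "http://{}/opac/extras/supercat/retrieve/{}/{}/{}".format(
        hostname, quote(fmt), quote(record_type), record_id)

# Placeholder values taken from the example above:
url = supercat_retrieve_url("gapines.org", "mods", "record", 33333)
print(url)
# → http://gapines.org/opac/extras/supercat/retrieve/mods/record/33333
```

The same helper works for any format listed by the formats URL; substitute marc21, marc21-full, etc. as needed.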
Recent records

SuperCat can return feeds of recently edited or created authority and bibliographic records:

http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/<record-type>/<import-or-edit>/<limit>/<date>

Note the following features:

• Up to limit records imported or edited after the supplied date will be returned. If you do not supply a date, then the most recent limit records will be returned.
• If you do not supply a limit, then up to 10 records will be returned.
• feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2.

Filtering by Org Unit

You can generate a similar list, with the added ability to limit by Org Unit, using the item-age browse axis. To produce an RSS feed by item date rather than bib date, and to restrict it to a particular system within a consortium:

Example: http://gapines.org/opac/extras/browse/atom/item-age/ARL-BOG/1/10

Note the following:

• ARL-BOG should be the short name of the org unit you’re interested in
• 1 is the page (since you are browsing through pages of results)
• 10 is the number of results to return per page

Modifying the atom portion of the URL to atom-full will include catalog links in the results:

Example: http://gapines.org/opac/extras/browse/atom-full/item-age/ARL-BOG/1/10

Modifying the atom portion of the URL to html-full will produce an HTML page that is minimally formatted:

Example: http://gapines.org/opac/extras/browse/html-full/item-age/ARL-BOG/1/10

Part IV. Using UnAPI

Chapter 7. URL format

Evergreen’s unAPI support includes access to many record types. For example, the following URL would fetch bib 267 in MODS32 along with holdings, volume, copy, and record attribute information:

https://example.org/opac/extras/unapi?id=tag::U2@bre/267{holdings_xml,acn,acp,mra}&format=mods32

To access the new unAPI features, the unAPI ID should have the following form:

• **tag::U2@** followed by a class name, which may be:
  • **bre** (bibs)
  • **biblio_record_entry_feed** (multiple bibs)
  • **acl** (copy locations)
  • **acn** (volumes)
  • **acnp** (call number prefixes)
  • **acns** (call number suffixes)
  • **acp** (copies)
  • **acpn** (copy notes)
  • **aou** (org units)
  • **ascecm** (copy stat cat entries)
  • **auri** (located URIs)
  • **bmp** (monographic parts)
  • **cbs** (bib sources)
  • **ccs** (copy statuses)
  • **circ** (loan checkout and due dates)
  • **holdings_xml** (holdings)
  • **mmr** (metarecords)
  • **mmr_holdings_xml** (metarecords with holdings)
  • **mmr_mra** (metarecords with record attributes)
  • **mra** (record attributes)
  • **sbsum** (serial basic summaries)
  • **sdist** (serial distributions)
  • **siss** (serial issues)
  • **sisum** (serial index summaries)
  • **sitem** (serial items)
  • **sssum** (serial supplement summaries)
  • **sstr** (serial streams)
  • **ssub** (serial subscriptions)
  • **sunit** (serial units)
• followed by /
• followed by a record identifier (or in the case of the **biblio_record_entry_feed** class, multiple IDs separated by commas)
• followed, optionally, by limit and offset in square brackets
• followed, optionally, by a comma-separated list of "includes" enclosed in curly brackets. The list of includes is the same as the list of classes with the following addition:
  • **bre.extern** (information from the non-MARC parts of a bib record)
• followed, optionally, by / and org unit; "-" signifies the top of the org unit tree
• followed, optionally, by / and org unit depth
• followed, optionally, by / and a path. If the path is **barcode** and the class is **acp**, the record ID is taken to be a copy barcode rather than a copy ID; for example, in **tag::U2@acp/ACQ140{acn,bre,mra}/-/0/barcode**, **ACQ140** is meant to be a copy barcode.
• followed, optionally, by **&format=** and the format in which the record should be retrieved. If this part is omitted, the list of available formats will be retrieved.

Part V. Phonelist.pm Module

# Table of Contents

8. Introduction
9. Adding Parameters
10. Output
11. Holds
12. Overdues
13. Skipping patrons with email notification of holds
14. Using the `ws_ou` parameter
15. Automating the download

Chapter 8. Introduction

PhoneList.pm is a mod_perl module for Apache that works with Evergreen to generate calling lists for patron holds or overdues. It outputs a CSV file that can be fed into an auto-dialer script to call patrons with little or no staff intervention. It is accessed and configured via a special URL, with any parameters passed as a query string on the URL. The parameters are listed in the table below.

Table 8.1. Parameters for the phonelist program:

<table>
<thead>
<tr> <th>Parameter</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>user</td> <td>Your Evergreen login.
Typically your library’s circ account. If you leave this off, you will be prompted to log in.</td> </tr>
<tr> <td>passwd</td> <td>The password for your Evergreen login. If you leave this off, you will be prompted to log in.</td> </tr>
<tr> <td>ws_ou</td> <td>The ID of the system or branch you want to generate the list for (optional). If your account does not have the appropriate permissions for the location whose ID number you have entered, you will get an error.</td> </tr>
<tr> <td>skipemail</td> <td>If present, skip patrons with email notification (optional).</td> </tr>
<tr> <td>addcount</td> <td>Add a count of items on hold (optional). Only makes sense for holds.</td> </tr>
<tr> <td>overdue</td> <td>Makes a list of patrons with overdues instead of holds. If an additional, numeric parameter is supplied, it will be used as the number of days overdue. If no such extra parameter is supplied, then the default of 14 days is used.</td> </tr>
</tbody>
</table>

The URL is https://your.evergreen-server.tld/phonelist

A couple of examples follow:

https://your.evergreen-server.tld/phonelist?user=circuser&passwd=password&skipemail

The above example would sign in as user circuser with a password of password and get a list of patrons with holds to call who do not have email notification turned on. It would run at whatever branch is normally associated with circuser.

https://your.evergreen-server.tld/phonelist?skipemail

The above example would do more or less the same, but you would be prompted by your browser for the user name and password. If your browser or download script supports it, you may also use conventional HTTP authentication parameters.

https://user:password@your.evergreen-server.tld/phonelist?overdue&ws_ou=2

The above logs in as user with the password password and runs overdues for location ID 2. The following sections provide more information on getting what you want in your output.

Chapter 9.
Adding Parameters

If you are not familiar with HTTP/URL query strings, the format is quite simple. You add parameters to the end of the URL; the first parameter is separated from the URL page with a question mark (?) character. If the parameter is to be given an extra value, then that value follows the parameter name after an equals sign (=). Subsequent parameters are separated from the previous parameter by an ampersand (&).

Here is an example with 1 parameter that has no value:

https://your.evergreen-server.tld/phonelist?skipemail

An example of 1 argument with a value:

https://your.evergreen-server.tld/phonelist?overdue=21

An example of 2 arguments, 1 with a value and 1 without:

https://your.evergreen-server.tld/phonelist?overdue=21&skipemail

Any misspelled parameters, or parameters not listed in the table above, will be ignored by the program.

Chapter 10. Output

On a successful run, the program will return a CSV file named phone.csv. Depending on your browser or settings, you will alternately be prompted to open or save the file. Your browser may also automatically save the file in your Downloads or other designated folder. You should be able to open this CSV file in Excel, LibreOffice Calc, any other spreadsheet program, or a text editor.

If you have made a mistake and have mistyped your user name or password, or if you supply a ws_ou parameter with an ID where your user name does not have permission to look up holds or overdue information, then you will get an error returned in your browser.

Should your browser appear to do absolutely nothing at all, this is normal. When there is no information for you to download, the server will return a response with no content to your browser. Most browsers respond to this message by doing nothing at all.
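A download script has to handle this no-content case explicitly, since the response body will simply be empty. A minimal Python sketch, assuming the phonelist URL and parameters described in this chapter (authentication and cookie handling are omitted; `build_query` mirrors the query-string rules above, and parameters mapped to `None` are emitted without a value, like skipemail):

```python
import csv
import io
import urllib.request

def build_query(params):
    # Parameters with a value become "name=value"; parameters mapped to
    # None (e.g. skipemail) are emitted bare, joined with "&".
    return "&".join(k if v is None else f"{k}={v}" for k, v in params.items())

def fetch_phonelist(base_url, params):
    """Fetch the phonelist CSV and return its rows; an empty response
    body means there was nothing to report, so return []."""
    url = f"{base_url}/phonelist?{build_query(params)}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    if not body.strip():
        return []          # no content: nothing to call about today
    return list(csv.reader(io.StringIO(body)))

# Example query string for 21-day overdues at org unit 2, skipping
# email-notified patrons:
print(build_query({"overdue": 21, "ws_ou": 2, "skipemail": None}))
# → overdue=21&ws_ou=2&skipemail
```

Note that a production script would also need to supply credentials and accept cookies, as described in the chapter on automating the download.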
It is possible for there to be no information for you to retrieve if you added the skipemail option and all of your notices for that day were sent via email, or if you ran this in the morning and then again in the afternoon and there was no new information to gather. The program keeps track of which holds and overdues it has already reported and will skip them on later runs. This prevents duplicates to the same patron in the same run. It will, however, create a duplicate for the same patron if a different copy is put on hold for that patron in between two runs.

The specific content of the CSV file will vary depending on whether you are looking at holds or overdues. The specific contents are described in the appropriate sections below.

Chapter 11. Holds

The **phonelist** program will return a list of patrons with copies on hold by default, so long as you do not use the **overdue** parameter. You may optionally get a count of the items each patron currently has on hold by adding the **addcount** parameter. As always, you can add the skipemail parameter to skip patrons with email notification of their holds, see [Skipping patrons with email notification of holds](#) below.

Table 11.1. Columns in the holds CSV file:

<table>
<thead>
<tr> <th>Column</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Name</td> <td>Patron’s name, first and last.</td> </tr>
<tr> <td>Phone</td> <td>Patron’s phone number.</td> </tr>
<tr> <td>Barcode</td> <td>Patron’s barcode.</td> </tr>
<tr> <td>Count</td> <td>Number of copies on hold, if the <strong>addcount</strong> parameter is used; otherwise this column is not present in the file.</td> </tr>
</tbody>
</table>

Chapter 12. Overdues

If you add the **overdue** parameter, you can get a list of patrons with overdue copies instead of a list of patrons with copies on the hold shelf. By default, this will give you a list of patrons with copies that are 14 days overdue.
If you’d like to specify a different number of days, you can add the number after the parameter with an equals sign:

```
https://your.evergreen-server.tld/phonelist?overdue=21&ws_ou=2
```

The above will retrieve a list of patrons who have items that are 21 days overdue at the location with ID of 2.

The number of days is an exact lookup. This means that the program will look only at patrons who have items exactly 14 days overdue, or exactly the number of days specified. It does not pull up any that are less than or greater than the number of days specified.

As always, you can add the `skipemail` parameter to skip patrons with email notification of their overdues, see [Skipping patrons with email notification of holds](#) below.

Table 12.1. Columns in the overdues CSV file:

<table>
<thead>
<tr> <th>Column</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Name</td> <td>Patron’s name, first and last.</td> </tr>
<tr> <td>Phone</td> <td>Patron’s phone number.</td> </tr>
<tr> <td>Barcode</td> <td>Patron’s barcode.</td> </tr>
<tr> <td>Titles</td> <td>A colon-separated list of titles that the patron has overdue.</td> </tr>
</tbody>
</table>

Chapter 13. Skipping patrons with email notification of holds

Skipping patrons who have email notification for their holds or overdues is very simple. You just need to add the `skipemail` parameter to the URL query string. Doing so will produce the list without the patrons who have email notification for overdues, or for all of their holds. Please note that if a patron has multiple holds available, and even one of these holds requests a phone-only notification, then that patron will still show on the list. For this option to exclude a patron from the holds list, the patron must request email notification on all of their current holds. In practice, we find that this is usually the case.

Chapter 14. Using the ws_ou parameter

Generally, you will not need to use the ws_ou parameter when using the phonelist program.
The phonelist will look up the branch where your login account works and use that location when generating the list. However, if you are part of a multi-branch system in a consortium, then the ws_ou parameter will be of interest to you. You can use it to specify which branch, or the whole system, you wish to search when running the program.

Chapter 15. Automating the download

If you’d like to automate the download of these files, you should be able to do so using any HTTP programming toolkit. Your client must accept cookies and follow any redirects in order to function.

Part VI. Adding an Evergreen search form to a web page

Table of Contents

16. Introduction
17. Simple search form
18. Advanced search form
19. Encoding
20. Setting the document type
21. Setting the library

Chapter 16. Introduction

To enable users to quickly search your Evergreen catalog, you can add a simple search form to any HTML page. The following code demonstrates how to create a quick search box suitable for the header of your web site:

Chapter 17.
Simple search form

```
<form action="http://example.com/eg/opac/results" method="get" accept-charset="UTF-8">
  <input type="search" alt="Catalog Search" maxlength="200" size="20" name="query" placeholder="Search catalog for..." />
  <input type="hidden" name="qtype" value="keyword" />
  <input type="hidden" name="locg" value="4" />
  <input type="submit" value="Search" />
</form>
```

Replace 'example.com' with the hostname for your catalog. To link to the Kid’s OPAC instead of the TPAC, replace 'opac' with 'kpac'. Replace 'keyword' with 'title', 'author', 'subject', or 'series' if you want to provide more specific searches. You can even specify 'identifier|isbn' for an ISBN search. Replace '4' with the ID number of the organizational unit at which you wish to anchor your search. This is the value of the 'locg' parameter in your normal search.

Chapter 18. Advanced search form

<form role="search" id="searchForm" method="get" class="searchform" action="http://your_catalog/eg/opac/results" accept-charset="UTF-8">
<label id="searchLabel" for="search">Search the Catalog: </label>
<input type="search" value="" name="query" id="search" size="30">
<label id="search_qtype_label">Type:</label>
<select name="qtype" id="qtype" aria-label="Select query type:">
<option value='keyword' selected="selected">Keyword</option>
<option value='title'>Title</option>
<option value='jtitle'>Journal Title</option>
<option value='author'>Author</option>
<option value='subject'>Subject</option>
<option value='series'>Series</option>
</select>
&nbsp;&nbsp;
<label id="search_itype_label">Format: </label>
<select id='item_type_selector' name='fi:item_type' aria-label="Select item type:">
<option value='a'>Books and Journals</option>
<option value='i'>Nonmusical Sound Recording</option>
<option value='j'>Musical Sound Recording</option>
<option value='g'>Video</option>
</select>
&nbsp;&nbsp;
<label id="search_locg_label">Library: </label>
<select aria-label='Select search library' name='locg'>
    <option value='1' class="org_unit">All Libraries</option>
    <option value='2' selected="selected" class="org_unit">Central Library</option>
    <option value='10' class="org_unit">Little Library</option>
  </select>
  <input class="searchbutton" type="submit" value="Search" />
</form>

Chapter 19. Encoding

For non-English characters it is vital to set the attribute `accept-charset="UTF-8"` in the form tag (as in the examples above). If the attribute is not set, records with non-English characters will not be retrieved.

Chapter 20. Setting the document type

You can set the document types to be searched using the `value` attribute of the `option` elements in the form. For the value, use the MARC 21 code defining the type of record (i.e., Leader position 06). For example, for musical recordings you could use `<option value='j'>Musical Sound Recording</option>`

Chapter 21. Setting the library

Instead of searching the entire consortium, you can set the library to be searched using the `value` attribute of the `option` elements in the form. For the value, use the organizational unit ID from the Evergreen database.

Part VII. SIP Server

Table of Contents

22. About the SIP Protocol
23. Installing the SIP Server
    Getting the code
    Configuring the Server
    Setting the encoding
    Datatypes
    Adding SIP Users
    Running the server
    Logging-SIP
    Syslog
    Syslog-NG
    Testing Your SIP Connection
    More Testing
24. SIP Communication
    01 Block Patron
    09/10 Checkin
    11/12 Checkout
    15/16 Hold
    17/18 Item Information
    19/20 Item Status Update
    23/24 Patron Status
    25/26 Patron Enable
    29/30 Renew
    35/36 End Session
    37/38 Fee Paid
    63/64 Patron Information
    65/66 Renew All
    93/94 Login
    97/96 Resend
    99/98 SC and ACS Status
    Fields
25. Patron privacy and the SIP protocol
    SIP server configuration
    SSH tunnels on SIP clients
Chapter 22. About the SIP Protocol

SIP, standing for Standard Interchange Protocol, was developed by the 3M corporation to be a common protocol for data transfer between an ILS (referred to in SIP as an ACS, or Automated Circulation System) and a third-party device. Originally, the protocol was developed for use with 3M SelfCheck (often abbreviated SC, not to be confused with Staff Client) systems, but it has since expanded to other companies and devices. It is now common to find SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices. Some examples include:

- Patron Authentication (computer access, subscription databases)
- Automated Material Handling (AMH) - the automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria

Chapter 23. Installing the SIP Server

This is a rough intro to installing the SIP server for Evergreen.

Getting the code

Current SIP server code lives in the Evergreen git repository:

```
cd /opt
git clone git://git.evergreen-ils.org/SIPServer.git SIPServer
```

Configuring the Server

1. Type the following commands from the command prompt:

```
$ sudo su opensrf
$ cd /openils/conf
$ cp oils_sip.xml.example oils_sip.xml
```

2. Edit oils_sip.xml. Change the commented-out `<server-params>` section to this:

```
<server-params
  min_spare_servers='1'
  max_spare_servers='2'
  min_servers='3'
  max_servers='25'
/>
```

3. `max_servers` will directly correspond to the number of allowed SIP clients. Set the number accordingly, but bear in mind that too many connections can exhaust memory. On a 4G RAM/4 CPU server (that is also running Evergreen), it is not recommended to exceed 100 SIP client connections.

Setting the encoding

SIPServer looks for the encoding in the following places:

1. An `encoding` attribute on the `account` element for the currently active SIP account.
2. The `encoding` element that is a child of the `institution` element of the currently active SIP account.
3.
The `encoding` element that is a child of the `implementation_config` element that is itself a child of the `institution` element of the currently active SIP account.
4. If none of the above exist, then the default encoding (ASCII) is used.

Option 3 is a legacy option. It is recommended that you alter your configuration to move this element out of the `implementation_config` element and into its parent `institution` element. Ideally, SIPServer should not look into the implementation config, and this check may be removed at some point in the future.

Datatypes

The `msg64_hold_datatype` setting is similar to `msg64_summary_datatype`, but affects holds instead of circulations. When set to `barcode`, holds information will be delivered as a set of copy barcodes instead of title strings for patron info requests. With barcodes, SIP clients can both find the title strings for display (via item info requests) and make subsequent hold-related action requests, like holds cancellation.

Adding SIP Users

1. Type the following commands from the command prompt:

```
$ sudo su opensrf
$ cd /openils/conf
```

2. In the `<accounts>` section, add SIP client login information. Make sure that all `<login>` elements use the same institution attribute, and make sure the institution is listed in `<institutions>`. All attributes in the `<login>` section will be used by the SIP client.

3. In Evergreen, create a new profile group called SIP. This group should be a sub-group of Users (not Staff or Patrons).
Set the Editing Permission to group_application.user.sip_client and give the group the following permissions:

```
COPY_CHECKIN
COPY_CHECKOUT
CREATE_PAYMENT
RENEW_CIRC
VIEW_CIRCULATIONS
VIEW_COPY_CHECKOUT_HISTORY
VIEW_PERMIT_CHECKOUT
VIEW_USER
VIEW_USER_FINES_SUMMARY
VIEW_USER_TRANSACTIONS
```

Or use SQL like:

```sql
INSERT INTO permission.grp_tree (name, parent, description, application_perm)
VALUES ('SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client');

INSERT INTO permission.grp_perm_map (grp, perm, depth, grantable)
SELECT g.id, p.id, 0, FALSE
FROM permission.grp_tree g, permission.perm_list p
WHERE g.name = 'SIP'
  AND p.code IN (
    'COPY_CHECKIN',
    'COPY_CHECKOUT',
    'CREATE_PAYMENT',
    'RENEW_CIRC',
    'VIEW_CIRCULATIONS',
    'VIEW_COPY_CHECKOUT_HISTORY',
    'VIEW_PERMIT_CHECKOUT',
    'VIEW_USER',
    'VIEW_USER_FINES_SUMMARY',
    'VIEW_USER_TRANSACTIONS'
  );
```

Verify:

```sql
SELECT *
FROM permission.grp_perm_map pgpm
INNER JOIN permission.perm_list ppl ON pgpm.perm = ppl.id
INNER JOIN permission.grp_tree pgt ON pgt.id = pgpm.grp
WHERE pgt.name = 'SIP';
```

4. For each account created in the `<login>` section of `oils_sip.xml`, create a user (via the staff client user editor) that has the same username and password, and put that user into the SIP group. The expiration date will affect the SIP users' connections, so you might want to make a note of it somewhere.

Running the server

To start the SIP server, type the following commands from the command prompt:

```
$ sudo su opensrf
$ oils_ctl.sh -a [start|stop|restart]_sip
```

Logging-SIP

Syslog

It is useful to log SIP requests to a separate file, especially during initial setup, by modifying your syslog config file.

1. Edit syslog.conf.

```
$ sudo vi /etc/syslog.conf  # maybe /etc/rsyslog.conf
```

2. Add this:

```
local6.* -/var/log/SIP_evergreen.log
```

3. Syslog expects the logfile to exist, so create the file:

```
$ sudo touch /var/log/SIP_evergreen.log
```

4. Restart syslogd:

```
$ sudo /etc/init.d/sysklogd restart
```

Syslog-NG

1. Edit the logging config:

```
$ sudo vi /etc/syslog-ng/syslog-ng.conf
```

2.
Add:

```
# +SIP2+ for Evergreen
filter f_eg_sip { level(warn, err, crit) and facility(local6); };
destination eg_sip { file("/var/log/SIP_evergreen.log"); };
log { source(s_all); filter(f_eg_sip); destination(eg_sip); };
```

3. Syslog-ng expects the logfile to exist, so create the file:

```
$ sudo touch /var/log/SIP_evergreen.log
```

4. Restart syslog-ng:

```
$ sudo /etc/init.d/syslog-ng restart
```

Testing Your SIP Connection

- In the root directory of the SIPServer code:

```
$ cd SIPServer/t
```

- Edit SIPtest.pm, changing the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you’ll need to change all the variables in the Configuration section.

```
$ PERL5LIB=./ perl 00sc_status.t
```

This should produce something like:

```
1..4
ok 1 - Invalid username
ok 2 - Invalid username
ok 3 - login
ok 4 - SC status
```

- Don’t be dismayed at "Invalid username". That’s just one of the many tests that are run.

More Testing

Once you have opened up either the SIP or SIP2 ports to be accessible from outside, you can do some testing via telnet. In the following tests:

- Replace $server with your server hostname (or localhost if you want to skip testing external access for now);
- Replace $username, $password, and $instid with the corresponding values in the <accounts> section of your SIP configuration file;
- Replace the $user_barcode and $user_password variables with the values for a valid user;
- Replace the $item_barcode variable with the value for a valid item.

1. Start by testing your ability to log into the SIP server. We are using port 6001 here, which is associated with SIP2 as per our configuration.

```
$ telnet $server 6001
Connected to $server.
Escape character is '^]'.
9300CN$username|CO$password|CP$instid
```

If successful, the SIP server returns a 941 result. A result of 940, however, indicates an unsuccessful login attempt. Check the <accounts> section of your SIP configuration and try again.
2. Once you have logged in successfully, replace the variables in the following line and paste it into the telnet session:

```
2300120080623 172148AO$instid|AA$user_barcode|AC$password|AD$user_password
```

If successful, the SIP server returns the patron information for $user_barcode, similar to the following:

```
24 Y 00120100113 170738AEFirstName MiddleName LastName|AA$user_barcode|BLY|CQY|BHUSD|BV0.00|AFOK|AO$instid|
```

The response declares it is a valid patron (BLY) with a valid password (CQY) and shows the user’s name.

3. To test the SIP server’s item information response, issue the following request:

```
1700120080623 172148AO$instid|AB$item_barcode|AC$password
```

If successful, the SIP server returns the item information for $item_barcode, similar to the following:

```
1803020120160923 190132AB30007003601852|AJRégion de Kamouraska|CK001|AQOSUL|APOSUL|BHCAD|BV0.00|BGOSUL|CSCA2 PQ NR46 73R
```

The response declares it is a valid item, with the title, owning library, permanent and current locations, and call number.

Chapter 24. SIP Communication

SIP generally communicates over a TCP connection (either raw sockets or over telnet), but it can also communicate via serial connections and other methods. In Evergreen, the most common deployment is a raw socket connection on port 6001.

SIP communication consists of strings of messages. Each request and response begins with a 2-digit “command” - requests are usually an odd number, and the corresponding response is usually that number plus one (an even number). The combination of request and response command numbers is often referred to as a Message Pair (for example, a 23 command is a request for patron status, a 24 response is a patron status, and the message pair 23/24 is the patron status message pair). The table in the next section shows the message pairs and a description of them.

For clarification, the “Request” is from the device (selfcheck or otherwise) to the ILS/ACS. The response is... the response to the request ;).
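As a rough illustration of the message format, the following shell sketch pulls the command code and a couple of delimited fields out of a captured response (the sample 24 Patron Status response shown later in this chapter). The `sip_field` helper is our own illustrative function, not part of SIPServer:

```shell
# Hypothetical parsing sketch, not part of SIPServer: split a captured
# SIP2 response into its 2-digit command code and pipe-delimited fields.
resp='24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|'

# The first two characters are always the command code (24 = Patron Status Response).
code=$(printf '%s' "$resp" | cut -c1-2)

# Variable-length fields are |-delimited and begin with a 2-character field
# identifier; this prints the value of the first field whose identifier is $1.
sip_field() {
  printf '%s' "$2" | tr '|' '\n' | grep "^$1" | head -n 1 | cut -c3-
}

echo "command: $code"                         # command: 24
echo "barcode (AA): $(sip_field AA "$resp")"  # barcode (AA): 999999
echo "valid patron (BL): $(sip_field BL "$resp")"
```

The same field-by-identifier lookup works for any of the variable-length fields described in the next section.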
Within each request and response, a number of fields (either fixed-width, or separated with a | [pipe symbol] and preceded by a 2-character field identifier) are used. The fields vary between message pairs.

| Pair | Name | Supported? | Details |
|------|------|------------|---------|
| 01 | Block Patron | Yes | 01 Block Patron - ACS responds with 24 Patron Status Response |
| 09-10 | Checkin | Yes (with extensions) | 09/10 Checkin |
| 11-12 | Checkout | Yes (no renewals) | 11/12 Checkout |
| 15-16 | Hold | Partially supported | 15/16 Hold |
| 17-18 | Item Information | Yes (no extensions) | 17/18 Item Information |
| 19-20 | Item Status Update | No | 19/20 Item Status Update - Returns Patron Enable response, but doesn’t make any changes in EG |
| 23-24 | Patron Status | Yes | 23/24 Patron Status - 63/64 “Patron Information” preferred |
| 25-26 | Patron Enable | No | 25/26 Patron Enable - Used during system testing and validation |
| 29-30 | Renew | Yes | 29/30 Renew |
| 35-36 | End Session | Yes | 35/36 End Session |
| 37-38 | Fee Paid | Yes | 37/38 Fee Paid |
| 63-64 | Patron Information | Yes (no extensions) | 63/64 Patron Information |
| 65-66 | Renew All | Yes | 65/66 Renew All |
| 93-94 | Login | Yes | 93/94 Login - Must be the first command sent, to authenticate the connection |

01 Block Patron

A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as
attempts to disable multiple items during a single item checkout, multiple failed pin entries, etc.). In Evergreen, this command does the following:

- User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL Blocked Card Message field).
- Card is marked inactive.

The request looks like:

```
01<card retained><date>[fields AO, AL, AA, AC]
```

Card Retained: a single-character field of Y or N - tells the ACS whether the SC has retained the card (e.g., left in the machine) or not.

Date: an 18-character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone - 4 blanks when local time; “Z” (3 blanks and a Z) represents UTC (GMT/Zulu)).

Fields: See Fields for more details.

The response is a 24 “Patron Status Response” with the following:

- Charge privileges denied
- Renewal privileges denied
- Recall privileges denied (hard-coded in every 24 or 64 response)
- Hold privileges denied
- Screen Message 1 (AF): blocked

09/10 Checkin

The request looks like:

```
09<No block (Offline)><xact date><return date>[Fields AP,AO,AC,CH,BI]
```

No Block (Offline): a single-character field of Y or N - offline transactions are not currently supported, so send N.

xact date: an 18-character field for the date/time when the checkin occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being the zone - 4 blanks when local time; “Z” (3 blanks and a Z) represents UTC (GMT/Zulu)).

Fields: See Fields for more details.

The response is a 10 “Checkin Response” with the following:

```
10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AJ,CL,AA,CK,CH,CS,CT,CV,DA,AF,AG]
```

Example (with a remote hold):

```
09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|
101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996|CTBR3|CY373827|DANicholas Richard Woodard|CV02|
```

Here you can see a hold alert for patron CY 373827, named DA Nicholas Richard Woodard, to be picked up at CT “BR3”.
Since the transaction is happening at AO “BR1”, the alert type CV is 02 for hold at a remote library. The possible values for CV are:

- 00: unknown
- 01: local hold
- 02: remote hold
- 03: ILL transfer (not used by EG)
- 04: transfer
- 99: other

The logic for Evergreen to determine whether the content is magnetic_media comes from the item's circ modifier (see the config.circ_modifier table). The default is non-magnetic. The same is true for media_type (default 001). Evergreen does not populate the collection_code because it does not really have any, but it will provide the call_number where available.

Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID.

Don’t be confused by the different branches that can show up in the same response line:

- AO is where the transaction took place,
- AQ is the “permanent location”, and
- CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer).

11/12 Checkout

15/16 Hold

Evergreen supports the Hold message for the purpose of canceling holds. It does not currently support creating hold requests via SIP2.

17/18 Item Information

The request looks like:

```
17<xact_date>[fields: AO,AB,AC]
```

The request is very terse. AC is optional.

The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.)

```
18<circulation_status><security_marker><fee_type><xact_date>[fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS]
```

Example:

```
1720060110 215612AOBR1|ABno_such_barcode|
1801010120100609 162510ABno_such_barcode|AJ|
1720060110 215612AOBR1|AB1565921879|
1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AOBR1|APBR1|BGBR1|CTBR3|CSQA76.73.P33V76 1996|
```

The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10 for in transit between libraries.
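The fixed-length header of the 18 response can be sliced by character position. The following sketch (our own illustration, with the zone portion of the date expanded to its full four blanks) decodes the in-transit example above:

```shell
# Hypothetical sketch: decode the fixed-length header of an 18 Item
# Information Response by character position, per the layout above:
# 18 <circulation_status:2> <security_marker:2> <fee_type:2> <xact_date:18>
resp='1810020120100623    171415AB1565921879|AJPerl 5 desktop reference|'

code=$(printf '%s' "$resp" | cut -c1-2)         # 18
circ_status=$(printf '%s' "$resp" | cut -c3-4)  # 10 = in transit
security=$(printf '%s' "$resp" | cut -c5-6)     # 02
fee_type=$(printf '%s' "$resp" | cut -c7-8)     # 01
xact_date=$(printf '%s' "$resp" | cut -c9-26)   # 18-character date/time

echo "status=$circ_status security=$security fee=$fee_type date=$xact_date"
```

Because all fixed-length fields come before the first variable-length field, positional slicing like this is reliable for the header.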
The known values of circulation_status are enumerated in the spec.

EXTENSIONS: The CT field for destination location and the CS field for call number are used by Automated Material Handling systems.

19/20 Item Status Update

23/24 Patron Status

Example:

```
2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password|
24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password|
24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|
2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|
24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|
```

1. The BL field (SIP2, optional) is valid patron, so the N value means bad_barcode doesn’t match a patron, and the Y value means 999999 does.
2. The CQ field (SIP2, optional) is valid password, so the N value means bad_password doesn’t match 999999’s password, and the Y means userpassword does.

So if you were building the most basic SIP2 authentication client, you would check for |CQY| in the response to know the user’s barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in authentication, like whether the user is blocked from checkout, owes excessive fines, reported their card lost, etc. These limitations are reflected in the 14-character patron status string immediately following the 24 code. See the field definitions in your copy of the spec.

25/26 Patron Enable

Not yet supported.

29/30 Renew

Evergreen supports the Renew message. Evergreen checks whether a penalty is specifically configured to block renewals before blocking any SIP renewal.

35/36 End Session

```
3520100505 115901AOBR1|AA999999|
```

The Y/N code immediately after the 36 indicates success/failure.
Failure is not particularly meaningful or important in this context, and for Evergreen it is hardcoded Y.

37/38 Fee Paid

Evergreen supports the Fee Paid message.

63/64 Patron Information

Attempting to retrieve patron info with a bad barcode:

```
6300020060329 201700 AOBR1|AAbad_barcode|
64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1|
```

Attempting to retrieve patron info with a good barcode (but a bad patron password):

```
6300020060329 201700 AOBR1|AA999999|ADbadpwd|
64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00|BD2 Meadowvale Dr. St Thomas, ON Canada|BEdjfiander@somemail.com|BF(519) 555 1234|BLY|CQ|PB19640925|PCPatrons|PIUnfiltered|AFOK|AOBR1|
```

See 23/24 Patron Status for info on the BL and CQ fields.

65/66 Renew All

Evergreen supports the Renew All message.

93/94 Login

Example:

```
9300CNsip_01|CObad_value|CPBR1|
[Connection closed by foreign host.]

9300CNsip_01|COsip_01|CPBR1|
941
```

941 means successful terminal login. 940 or getting dropped means failure.

When using a version of SIPServer that supports the feature, the Location (CP) field of the Login (93) message will be used as the workstation name if supplied. Blank or missing location fields will be ignored. This allows users or reports to determine which selfcheck performed a circulation.
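The login exchange above is easy to script. This minimal sketch (the `build_login` helper is our own, and the credentials are the placeholder values from the example) assembles a 93 message and classifies a canned response code:

```shell
# Hypothetical helper: assemble a 93 Login message. The two '0' characters
# after 93 are the UID-algorithm and PWD-algorithm fields (0 = plain text).
build_login() {
  printf '9300CN%s|CO%s|CP%s|' "$1" "$2" "$3"
}

msg=$(build_login sip_01 sip_01 BR1)
echo "$msg"   # 9300CNsip_01|COsip_01|CPBR1|

# A 94 response beginning with 941 means the login succeeded; 940 (or the
# connection simply being dropped) means it failed.
resp='941'
case $resp in
  941*) echo 'login ok' ;;
  940*) echo 'login failed' ;;
esac
```

In a real client you would send `$msg` over the TCP connection (e.g., via a socket library or `nc`) and read the 94 response back before issuing any other command.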
97/96 Resend

99/98 SC and ACS Status

```
99<status code><max print width><protocol version>
```

All 3 fields are required:

- status code - 1 character:
  - 0: SC is OK
  - 1: SC is out of paper
  - 2: SC shutting down
- max print width - 3 characters - the integer number of characters the client can print
- protocol version - 4 characters - x.xx

```
98<on-line status><checkin ok><checkout ok><ACS renewal policy><status update ok><offline ok><timeout period><retries allowed><date/time sync><protocol version><institution id><library name><supported messages><terminal location><screen message><print line>
```

Example:

```
9910302.00
98YYYYNN6000320100510 1717292.00AOCONS|BXYYYYYYYYYNYNNYN|
```

The Supported Messages field BX appears only in SIP2, and specifies whether each of 16 different SIP commands is supported by the ACS or not.

Fields

All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple parsing. Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should already know the exact position where that field begins.

Chapter 25. Patron privacy and the SIP protocol

SIP traffic includes a lot of patron information, and is not encrypted by default. It is strongly recommended that you encrypt any SIP traffic.

SIP server configuration

On the SIP server, use `iptables` or TCP wrappers (`/etc/hosts.allow`) to allow SSH connections on port 22 from the SIP client machine. You will probably want to have very restrictive rules on which IP addresses can connect to this server.

SSH tunnels on SIP clients

SSH tunnels are a good fit for use cases like self-check machines, because it is relatively easy to automatically open the connection. Using a VPN is another option, but many VPN clients require manual steps to open the VPN connection.

1.
If the SIP client will be on a Windows machine, install Cygwin on the SIP client.

2. On the SIP client, use `ssh-keygen` to generate an SSH key.

3. Add the public key to `/home/my_sip_user/.ssh/authorized_keys` on your SIP server to enable logins without using the UNIX password.

4. Configure an SSH tunnel to open before every connection. You can do this in several ways:

a. If the SIP client software allows you to run an arbitrary command before each SIP connection, use something like this:

```bash
ssh -f -L 6001:localhost:6001 my_sip_user@my_sip_server.com sleep 10
```

b. If you feel confident that the connection won’t get interrupted, you can have something like this run at startup:

```bash
ssh -f -N -L 6001:localhost:6001 my_sip_user@my_sip_server.com
```

c. If you want to constantly poll to make sure that the connection is still running, you can do something like this as a cron job or scheduled task on the SIP client machine:

```bash
#!/bin/bash
instances=`/bin/ps -ef | /bin/grep ssh | /bin/grep -v grep | /bin/wc -l`
if [ $instances -eq 0 ]; then
  echo "Restarting ssh tunnel"
  /usr/bin/ssh -L 6001:localhost:6001 my_sip_user@my_sip_server.com -f -N
fi
```

Appendix A.
Attributions

Copyright © 2009-2016 Evergreen DIG
Copyright © 2007-2016 Equinox
Copyright © 2007-2016 Dan Scott
Copyright © 2009-2016 BC Libraries Cooperative (SITKA)
Copyright © 2008-2016 King County Library System
Copyright © 2009-2016 Pioneer Library System
Copyright © 2009-2016 PALS
Copyright © 2009-2016 Georgia Public Library Service
Copyright © 2008-2016 Project Conifer
Copyright © 2009-2016 Bibliomation
Copyright © 2008-2016 Evergreen Indiana
Copyright © 2008-2016 SC LENDS

Current DIG Members

- Hilary Caws-Elwitt, Susquehanna County Library
- Karen Collier, Kent County Public Library
- George Duimovich, NRCan Library
- Lynn Floyd, Anderson County Library
- Sally Fortin, Equinox Software
- Wolf Halton, Lyrasis
- Jennifer Pringle, SITKA
- June Rayner, eiNetwork
- Steve Sheppard
- Ben Shum, Bibliomation
- Roni Shwaish, eiNetwork
- Robert Soulliere, Mohawk College
- Remington Steed, Calvin College
- Tim Spindler, C/W MARS
- Jane Sandberg, Linn-Benton Community College
- Lindsay Stratton, Pioneer Library System
- Yamil Suarez, Berklee College of Music
- Jenny Turner, PALS

Appendix B. Admonitions

- Note
- Warning
- Caution
- Tip

Appendix C. Licensing

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
# The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems

Liming Zhu (a), Mark Staples (a) and Thong Nguyen (b)

(a) National Information and Communication Technology Australia (NICTA)
(b) Aerospace Division, Defence Science and Technology Organisation

DSTO-TR-2936

## ABSTRACT

The software architecture for a software-intensive system defines the main elements of the system, their relationships, and the rationale for them in the system. Software architecture is fundamental to whether a system can achieve its quality objectives. Architecture evaluation is an approach for assessing whether a software architecture can support the system needs, especially its non-functional requirements (also known as quality requirements). Architecture evaluation can be used at different stages of a project, and is an effective way of ensuring design quality early in the lifecycle to reduce overall project cost and to manage risks. This report describes software architecture and architecture evaluation, and summarises some of the key benefits for software architecture evaluation that have been observed both in industry and in international Defence contexts. We make some general recommendations about architecture evaluation in the context of Australian defence acquisition.

RELEASE LIMITATION: Approved for public release

UNCLASSIFIED

## Executive Summary

Australian Defence and the local defence industry are facing new challenges from ever-larger, more complex and interconnected software-intensive systems. Quality problems arising in the development of such systems are often only recognised late during development, leading to significant rework, and cost and schedule overruns. The risks for such problems can form at the start of an acquisition process, and at the earliest stages of high-level system design.
The software architecture for a software-intensive system defines the main elements of the system, their relationships, and the rationale for them in the system. Software architecture is fundamental to whether a system can achieve its quality objectives. Architecture evaluation is an approach for assessing whether a software architecture can support the system needs, especially its non-functional requirements (also known as quality requirements). Architecture evaluation can be used at different stages of a project, and is an effective way of ensuring design quality early in the lifecycle to reduce overall project cost and to manage risks. This report describes software architecture and architecture evaluation, and summarises some of the key benefits for software architecture evaluation that have been observed both in industry and in international Defence contexts. We make some general recommendations about architecture evaluation in the context of Australian defence acquisition.

**Recommendation 1**: Incorporate architectural requirements into the Request for Tender package.

**Recommendation 2**: Integrate architecture evaluation activities across the acquisition process.

**Recommendation 3**: Build architecture capabilities in Australian defence industry.

**Recommendation 4**: Provide appropriate contractor incentives for architecture definition, analysis, and evaluation throughout the project lifecycle.

## Acknowledgements

The authors would like to express their appreciation to the Defence Materiel Organisation for providing funding for this research, and to Mr Adrian Pittman for supporting this work.

**Dr Liming Zhu, NICTA**

Dr Liming Zhu is a research leader in the Software Systems Research Group at NICTA. NICTA is Australia’s Information and Communications Technology Research Centre of Excellence.
Dr Zhu formerly worked in several technology lead positions in the software industry before obtaining a PhD in software architecture at the University of New South Wales (UNSW). He is Australia’s expert delegate to the ISO/SC7 Working Group 42 on architecture-related ISO standards. His research interests include software architecture, service engineering, model driven development and business process modelling.

**Dr Mark Staples, NICTA**

Dr Mark Staples is a principal researcher in the Software Systems Research Group at NICTA. NICTA is Australia’s Information and Communications Technology Research Centre of Excellence. Prior to NICTA, he worked in industry in verification and process management roles for the development of safety-critical transportation systems and financial transactions infrastructure systems. Dr Staples has a PhD in computer science from the University of Cambridge, and completed undergraduate degrees in computer science and cognitive science at the University of Queensland. His research in software engineering has included topics in software architecture, software process, software configuration management, software product line development, and formal methods.

**Dr Thong Nguyen, Aerospace Division**

Dr Thong Nguyen is the project science and technology advisor to project AIR5440 and was the research manager in Mission Systems Integration Branch, Aerospace Division. Dr Nguyen has been conducting research in fields including insect-vision-based motion detection, Data Fusion, Situation and Threat Assessment, Automation, Software Architecture, Systems Integration Risk Assessment, and Systems Engineering. He has published over 50 research papers including book chapters, journal articles and conference papers. He has provided technical advice to various acquisition projects.
He also provided advice to DMO on how to introduce software architecture evaluation into earlier stages of the acquisition process in order to identify and mitigate architecture-related technical risks as early in the acquisition process as possible. He was posted to the Wedgetail IPT in Seattle for two years as a technical advisor on Situation and Threat Assessment and Sensor Management. He has 17 years of experience in using science and multidisciplinary research to enhance Defence capability and holds a BE (Honours I) in Computer Systems Engineering, a BSc in Applied Mathematics, and a PhD in Electronics Engineering, all from Adelaide University; and an MBA in Technology Management from Deakin University.

# Contents

GLOSSARY
1 INTRODUCTION
2 THE ROLE OF SOFTWARE IN MODERN COMPLEX SYSTEMS
3 SOFTWARE ARCHITECTURE EVALUATION
3.1 What is Software Architecture?
3.1.1 Software Architecture Defines Structure
3.1.2 Software Architecture Addresses Non-functional Requirements
3.2 What is Software Architecture Evaluation?
3.2.1 Architecture Evaluation Methods
3.2.2 International Standards on Architecture Evaluation
4 SOFTWARE ARCHITECTURE EVALUATION IN INDUSTRY AND DEFENCE
4.1 Software Architecture Evaluations in Industry
4.2 Software Architecture Evaluation in US DoD System Acquisition
4.2.1 The CLIP Example from US DoD
4.2.2 Integrating Software Architecture Evaluation in US DoD Acquisition
5 THE NEED FOR SOFTWARE ARCHITECTURE EVALUATION IN ACQUISITION OF SOFTWARE-INTENSIVE SYSTEMS IN AUSTRALIAN DEFENCE
6 CONCLUSION
7 REFERENCES

# Glossary

<table>
<thead>
<tr> <th>Abbreviation</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>ADF</td> <td>Australian Defence Force</td> </tr>
<tr> <td>ATAM</td> <td>Architecture Trade-off Analysis Method</td> </tr>
<tr> <td>COTS</td> <td>Commercial-Off-The-Shelf</td> </tr>
<tr> <td>DMO</td> <td>Defence Materiel Organisation</td> </tr>
<tr> <td>DSTO</td> <td>Defence Science and Technology Organisation</td> </tr>
<tr> <td>FPS</td> <td>Function and Performance Specification</td> </tr>
<tr> <td>GIG</td> <td>Global Information Grid</td> </tr>
<tr> <td>ISO</td> <td>International Organization for Standardization</td> </tr>
<tr> <td>MDA</td> <td>Model Driven Architecture</td> </tr>
<tr> <td>MDD</td> <td>Model Driven Development</td> </tr>
<tr> <td>MOTS</td> <td>Military-Off-The-Shelf</td> </tr>
<tr> <td>NCW</td> <td>Network Centric Warfare</td> </tr>
<tr> <td>NICTA</td> <td>National ICT Australia</td> </tr>
<tr> <td>OCD</td> <td>Operational Concept Document</td> </tr>
<tr> <td>RFP</td> <td>Request for Proposals</td> </tr>
<tr> <td>RFT</td> <td>Request for Tenders</td> </tr>
<tr> <td>SEI</td> <td>Software Engineering Institute</td> </tr>
</tbody>
</table>
# 1 Introduction

Large and complex software-intensive projects often exhibit quality problems in late stages, may not be able to deliver on promised functionality on time, and thus require significant rework late in their schedule (Boehm, Valerdi et al. 2008). These issues affect not only the developers of software-intensive systems but also acquirers. The risks for such problems can form at the start of an acquisition process, and at the earliest stages of high-level system design.

The software architecture for a software-intensive system defines the main elements of the system, their relationships, and the rationale for them in the system. Software architecture is fundamental to whether a software-intensive system can achieve its quality objectives, and is also important in structuring the system decomposition and associated work breakdown structure. Architecture evaluation is an approach for assessing whether a software architecture will be complete and consistent in terms of the system needs, especially the non-functional requirements (also known as quality requirements). Architecture evaluation can be used at different stages of a project, and is an effective way of ensuring design quality early in the lifecycle to reduce overall project cost and to manage risks.

For example, AT&T, Avaya, Lucent and Millennium Services have reported around 700 architecture evaluations between 1998 and 2005. They estimate that projects of 100,000 non-comment source lines of code have saved an average of US$1 million each by using software architecture evaluations to identify and resolve problems early (Maranzano, Rozsypal et al. 2005).
Another recent survey of the state of practice in architecture evaluation among 235 industry participants identified a range of benefits, the top five being: architecture evaluation’s ability to identify project risks (88% of responses), assess quality (77%), identify reuse opportunities (72%), promote good practices within an organisation (64%), and reduce project cost by reducing design defects (63%) (Babar and Gorton 2009). Architecture evaluations can also help to identify best practices in projects and socialise such practices across an organisation, thereby improving the organisation’s quality and operations (Maranzano, Rozsypal et al. 2005).

The Australian Defence Force (ADF) sponsors a large number of software-intensive system acquisition projects. Like many other systems, modern weapon systems are becoming more software-intensive, larger, more complex, and more networked (DoD 2009). The focus in the ADF is also shifting away from the technology readiness (i.e. maturity and feasibility) of individual technologies and from new development, towards the technical risk associated with systems and their integration with other systems (Smith, Egglestone et al. 2004).

Industry adoption of architecture evaluation has emerged out of the need to deal with large-scale integration projects. The global defence industry has also undergone significant consolidation over the last several decades, resulting in an industry dominated by a few very large defence companies, mostly based in Europe and North America. Many acquisition projects are encouraged to use Off-The-Shelf (OTS) systems, including Military-Off-The-Shelf (MOTS), Government-Off-The-Shelf (GOTS) or Commercial-Off-The-Shelf (COTS) systems, with limited customisation.
However, the use of such OTS systems poses significant risks to quality, schedule and sustainment due to reduced control and exposure to long supply chains and dispersed global manufacturing, especially when local industry capability is lacking (DoD 2010). Architecture evaluation techniques can accommodate analysis of black-box OTS software.

As outlined in the Defence white paper (DoD 2011), the specialisation entailed in complex projects has created new opportunities and dependencies in the global defence industry. The complexity of systems development and integration involved in modern defence projects has led to new partnerships and networks between prime suppliers and their suppliers and contractors. This is an opportunity for Australian defence companies to become part of the global supply chain of the primes. Explicit and well-managed architecture in major systems will assist Australian SMEs to identify opportunities for developing components that integrate into the global supply chains of international primes.

Similar issues have also been identified in the US Department of Defense (DoD). It was clearly observed that past acquisition evaluation methods (DoD 1985; DoD 1994) were inadequate (Hantos 2004) due to their lack of focus on quality attributes and integration challenges. The response was to introduce a focus on architecture-driven system acquisition. Architecture-related activities have been tightly integrated into RFP language (Bergey and Fisher 2001; Bergey, Fisher et al. 2002), and this has produced positive results (Bergey and Morrow 2005).

In this report, we introduce the key concepts in software architecture and architecture evaluation. We survey the academic and industry state-of-the-art and state-of-practice for architecture evaluation. We review the architecture evaluation practices in US DoD system acquisition. Finally, we point to some of the possibilities for integrating architecture evaluation into Australian defence acquisition.
# 2 The Role of Software in Modern Complex Systems

Software is rapidly becoming the centrepiece of complex system design. A large portion of complex system functionality is now achieved through software. For example, in a period of forty years, the percentage of functionality provided by software to pilots of military aircraft has risen from 8% in 1960 (F-4 Phantom) to 80% in 2000 (F-22 Raptor) (Ferguson 2001). As a consequence of software’s growing complexity and its critical role in modern systems, software architecture has been recognised as an important foundation for software systems.

Software architecture has a key influence on the quality of a software-intensive system. It is clear that the overall design of a software-intensive system can be critical in supporting system quality requirements, for example for reliability using redundant components, or for performance using caching components. It is thought that software architecture can be key in the achievement of any kind of system quality.

One important system quality enabled by software architecture is the ability to readily evolve the system. When a system is expected to undergo extensive evolution after deployment, it can be more important that the system be easily evolvable than that it be exactly correct at first deployment (Maier and Rechtin 2000). More broadly, maintenance cost has a strong connection with software architecture. Typically, the cost of maintaining a system over its whole life cycle is two thirds of the total system cost. In other words, Defence will spend twice the initial acquisition cost in the years to come on upgrading and enhancing a system’s capabilities over its life cycle. A software architecture designed to support maintainability can help to ensure that future upgrades are feasible at reasonable cost.
There are Defence examples of project cancellations due to software problems that can be traced back to limitations of software architecture, or to the lack of any high-level assurance activities to ensure that safety-critical requirements are satisfied (DoD 2009).

Software provides many important properties and functionalities in a modern complex system, but is just part of the whole system. The system architecture describes the overall system decomposition, including software-intensive subsystems. However, a software architecture will not always appear as a neatly contained subsystem in a system architecture. Often the functionality and system qualities supported by the overall software architecture depend on interactions and relationships between separate system components. Software architecture and system architecture can be inter-dependent. Analysis of a software architecture will be based partly on assumptions about the system architecture, to show how system qualities are satisfied. Analyses of software architectural decisions can be used to assess design trade-offs for meeting system qualities such as performance, modifiability, usability, or security.

# 3 Software Architecture Evaluation

## 3.1 What is Software Architecture?

Australian Defence reports refer to the definition of software architecture in the IEEE 1471-2000 standard (later the ISO/IEC 42010 Architecture Description standard (ISO 2011)): “fundamental concepts or properties of a system in its environment embodied in its elements, relationships, and in the principles of its design and evolution”. Software architecture represents system software structure in terms of software components, how they interact, and how they support various quality requirements such as performance, security, or reliability. It also defines system-wide design rules and considers how a system may adapt to new requirements or environments.
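As a simple illustration of these ideas (components, explicit interaction points, and structure chosen to support a quality requirement such as modifiability), consider the following sketch in Python. The component names and the scenario are invented for illustration and are not drawn from any actual Defence system.

```python
from abc import ABC, abstractmethod

class TrackStore(ABC):
    """An architectural interface: other components depend on this
    contract, not on any concrete storage technology."""
    @abstractmethod
    def save(self, track_id: str, position: tuple) -> None: ...
    @abstractmethod
    def load(self, track_id: str) -> tuple: ...

class InMemoryTrackStore(TrackStore):
    """One replaceable realisation of the interface; a database-backed
    store could be substituted without changing its clients."""
    def __init__(self):
        self._tracks = {}
    def save(self, track_id, position):
        self._tracks[track_id] = position
    def load(self, track_id):
        return self._tracks[track_id]

class MissionDisplay:
    """Depends only on the TrackStore abstraction, so swapping the
    storage component does not force a change here (modifiability)."""
    def __init__(self, store: TrackStore):
        self._store = store
    def show(self, track_id):
        return f"{track_id} at {self._store.load(track_id)}"

store = InMemoryTrackStore()
store.save("T-001", (34.9, 138.6))
display = MissionDisplay(store)
print(display.show("T-001"))  # prints: T-001 at (34.9, 138.6)
```

The architectural decision here is the `TrackStore` interface itself: it localises change, in the spirit of the loose coupling discussed in the following subsection.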
### 3.1.1 Software Architecture Defines Structure

Much of a software architect’s time is concerned with how to sensibly partition a software system into a set of inter-related sub-systems, components, or modules. Different system requirements and constraints will define the precise meaning of “sensibly”: an architecture must be designed to meet the specific requirements of the system it is intended for, under the system’s environment constraints. For example, a security requirement for a system may restrict where data can be stored and how it is communicated, while a user access requirement might require that some services are accessible from mobile devices. Both of these impose some structural constraints on system design but simultaneously leave open many design options for partitioning functionality across a collection of related components.

The software architecture describes how responsibilities are assigned to each constituent software component. These responsibilities define the tasks a component must perform within the software system. Each component plays a specific role, and the overall component ensemble that comprises the architecture provides all the required functionality.

A key structural issue for nearly all system design is minimising dependencies: creating a loosely coupled architecture from a set of highly cohesive sub-systems or components. A dependency exists between systems when a change in one potentially forces a change in others. By eliminating unnecessary dependencies, changes are localised and do not propagate throughout an architecture. This is critical for system integration and affects the ability for components to be independently upgraded later without affecting others. Excessive dependencies are simply a bad thing: they make systems more difficult to change, make changes more expensive to test, increase integration costs, and make concurrent, team-based development harder.
This is especially so when there is only limited control over possible modifications, for example when dependent components are third-party OTS systems, legacy systems or systems belonging to other organisations.

### 3.1.2 Software Architecture Addresses Non-functional Requirements

Non-functional requirements are concerned with how a system provides its required functionality. There are three distinct areas of non-functional requirements:

**Technical constraints:** These constrain design options by specifying certain technologies that the system must use. “We only have Java developers, so we must develop in Java.” “The existing database is Oracle.”

**Business constraints:** These constrain design options, but for business, not technical, reasons. For example, “In order to conduct a joint operation, we must interface with XYZ systems.” Another example is “The supplier of our current OTS system has raised prices prohibitively, so we’re moving to another system.”

**Quality attributes:** These define a system’s requirements in terms of scalability, availability, ease of change, portability, usability, performance, and so on. Quality attributes address issues of concern to system users, as well as other stakeholders like the project team itself or the project sponsor.

For a software-intensive system, the software architecture plays an important role in achieving these non-functional requirements, and software architecture evaluation is needed to determine whether the architecture has indeed addressed them adequately. Software architecture also provides the foundation for software reuse, OTS component use, and integration with other systems. Software architecture has a profound influence on a project organisation’s functioning and structure. Architecture is the key framework for major technical decisions and many business decisions.

## 3.2 What is Software Architecture Evaluation?

Software architectures are high-level designs.
The aim of software architecture evaluation is to increase the confidence of the evaluation team that a software architecture is fit for its purpose. Evaluation has to be achieved expeditiously, as detailed design, implementation or acquisition cannot generally commence until the architecture is agreed. The goal is to be as rigorous and efficient as possible, given the limited level of detail available on the proposed design.

Validating the proposed system is challenging because, whether for a new system or for the evolution of an existing system, the proposed system does not yet exist. The design will also likely include new systems that have yet to be built, as well as black-box OTS systems and existing operational systems that must be integrated. So the system cannot be executed or tested to determine whether it fulfils its requirements. Also, for software systems, there are few precise and accurate design analysis techniques, and approximate analyses are not always safe. Nonetheless, an assessment must be made about whether all the proposed parts in the design can be made to work together in the planned way to fulfil the functional and non-functional (quality) requirements.

### 3.2.1 Architecture Evaluation Methods

Abowd et al. (Abowd, Bass et al. 1997) proposed two broad categories of software architecture evaluation. Questioning approaches focus on qualitative analysis, and include scenario-based analysis techniques. Measurement approaches are quantitative in nature, and include metrics and simulation techniques. The architecture evaluation research community has developed various methods such as the Software Architecture Analysis Method (SAAM) and the Architecture Trade-off Analysis Method (ATAM) (Kazman, Klein et al. 2004), Architecture Level Maintainability Analysis (ALMA) (Bengtsson, Lassing et al. 2004) and Performance Assessment of Software Architectures (PASA) (Williams and Smith 2002).
There have been several comparative studies of these methods (Dobrica and Niemela 2002; Babar, Zhu et al. 2004). The architecture evaluation field is reasonably mature. Scenario-based architecture evaluation methods such as SAAM and ATAM are widely used. Scenario-based methods were developed at the Software Engineering Institute (SEI) to tease out non-functional requirements concerning a software architecture through manual evaluation, prototyping or high-level testing.

A scenario is a hypothetical but reasonable circumstance that may be faced by the system. Scenarios are related to architectural concerns such as quality attributes, and they aim to highlight the consequences of the software architectural decisions that are encapsulated in the design. Operational scenarios can be used to investigate non-functional requirements such as system performance or reliability, and development or maintenance scenarios can be used to investigate non-functional requirements such as modifiability or interoperability. Each scenario presents a stimulus to the system, and the various scenario-based methods involve working out how the system is to respond to that stimulus. If the response is desirable, then the scenario is deemed to be satisfied by the software architecture. If the response is undesirable, or hard to quantify, then a flaw or an area of risk in the architecture may have been uncovered.

Scenarios have been used for a long time in several disciplines, including military and business strategy. The software engineering community uses scenarios in user-interface engineering, requirements elicitation, performance modelling and software architecture evaluation (Kazman, Abowd et al. 1996). Scenarios are effective for architecture evaluation because they are very flexible: they can be used to evaluate most quality attributes.
For example, we can use scenarios that represent component failure to examine availability and reliability, scenarios that represent change requests to analyse modifiability, scenarios that represent threats to analyse security, or scenarios that represent ease of use to analyse usability. Scenarios are normally very concrete, enabling the user to understand their detailed effect (Lassing, Rijsenbrij et al. 2003).

The software architecture evaluation community has developed and assessed a number of techniques, both top-down and bottom-up, for eliciting a set of scenarios that is as complete as possible. Regardless of the technique used to elicit scenarios, the measure of completeness is with respect to the business goals: if some business goal is not covered by any of the gathered scenarios, then the set of scenarios is incomplete. Equally important is whether a scenario cannot be mapped to any business goal, which may indicate that the statement of business goals is itself incomplete.

In an ideal world, the architecturally significant requirements for a system to be reviewed would have been completely and unequivocally specified in a requirements specification document, evolving ahead of or in concert with the architecture specification. In reality, requirements documents sometimes do not properly address architecturally significant requirements. Specifically, the requirements for both existing and planned systems are often missing, vague, or incomplete. Typically, the first goal of an architecture analysis is therefore to elicit the specific quality goals against which the architecture will be judged.
Table 1: Elements of the scenario generation framework

<table>
<thead>
<tr> <th>Elements</th> <th>Brief Description</th> </tr>
</thead>
<tbody>
<tr> <td>Stimulus</td> <td>A condition that needs to be considered when it arrives at a system</td> </tr>
<tr> <td>Response</td> <td>The activity undertaken after the arrival of the stimulus</td> </tr>
<tr> <td>Source of Stimulus</td> <td>An entity (human, system, or any actuator) that generates the stimulus</td> </tr>
<tr> <td>Environment</td> <td>A system’s condition when a stimulus occurs, e.g., overloaded, running, etc.</td> </tr>
<tr> <td>Stimulated Artifact</td> <td>The artifact that is stimulated; this may be the whole system or a part of it</td> </tr>
<tr> <td>Response Measure</td> <td>The response to the stimulus should be measurable in some fashion so that the requirement can be tested</td> </tr>
</tbody>
</table>

The scenario generation framework shown in Table 1 is considered effective for eliciting and structuring information about quality-sensitive scenarios gathered from stakeholders or distilled from secondary sources of architecture knowledge such as design patterns. It has been argued (Kazman, Klein et al. 2004) that this framework provides a relatively rigorous and systematic approach to capturing and documenting general scenarios, which can be used to develop concrete scenarios and to select an appropriate reasoning framework from which to evaluate a software architecture.

Once the scenarios have been defined, the evaluation methods for different quality attributes vary. Some are very quantitative, such as for performance (Williams and Smith 2002). Some are qualitative, such as for modifiability (Bengtsson, Lassing et al. 2004). There are also methods that quantitatively deal with the trade-offs made among different quality attributes and the sensitivity of architecture decisions (Zhu, Aurum et al. 2005).

In order to deal with black-box OTS software, specific techniques have been developed to select OTS components in the context of architecture evaluation.
These methods include COTS mismatch resolution (Abdallah Mohamed 2008) and architecture-driven OTS selection (Liu and Gorton 2003). Some techniques propose new ways of judiciously exercising parts of a black-box OTS system for a specific application profile to gain performance insights into the future system (Zhu, Liu et al. 2007).

There are various practical architecture measurement concepts that are not necessarily linked to specific non-functional requirements or scenarios. Some general techniques have been developed to measure the level of reuse, the number of complex interfaces, the number of underused interfaces, trends, architecture-related work progress, architecture description completeness and consistency, and general metrics for structural cohesion and coupling (Bianchi 2012). Although these are not specific to individual quality attributes or application requirements, they can often be used as early indicators of architecture characteristics.

### 3.2.2 International Standards on Architecture Evaluation

Industry best practices are often formally standardised. The maturity and adoption of architecture evaluation is reflected in two international standards related to architecture description and evaluation. The ISO/IEC 42010 standard on architecture description (ISO 2011) was officially published in 2011. The ISO/IEC 42030 standard on architecture evaluation (ISO 2012) is currently being developed, with the first working draft being circulated; the final ISO/IEC 42030 standard is expected to be published in 2013. It is important to note that Australia is among the half-dozen countries actively developing the two ISO standards, not just providing comments and casting votes. The authors of this report are the Australian representatives to the working group responsible for the two standards.
In the past, early access to the ISO/IEC 42010 (Architecture Description) standard has been provided to Australian Defence, and feedback has been incorporated into the final version of the standard. Australian Defence therefore has a basis on which to encourage the use of these ISO standards both internally and externally.

#### 3.2.2.1 ISO/IEC 42010 (Architecture Description)

Figure 1 illustrates the core concepts in this architecture description standard. An architectural viewpoint defines the conventions for the construction, interpretation and use of an architectural view to frame an identified quality attribute. An architectural viewpoint will describe: the quality attribute framed by the viewpoint; the modelling languages, notations, rules, constraints, modelling techniques, or analytical methods to be used in constructing a view and its architectural models; consistency and completeness checks; and evaluation or analysis methods and required analysis models. An architectural view is a representation of a whole system from the perspective of an identified set of architecture-related concerns. An architectural description contains one or more architectural views. The requirement that architectural views depict the whole system is essential to the complete allocation of quality attribute assessments within an architectural description. Architectural models, within views, can be used to selectively present portions of the system to highlight points of interest. An architectural view consists of one or more **architectural models**. Architectural models provide a mechanism to modularise architectural views. In the ordinary case, a view consists of exactly one model. However, there are cases when a view may need to use more than one language, notation or modelling technique to address all of the concerns assigned to it. To do this, a view may consist of multiple architectural models. A **view correspondence** records a relation between two architectural views.
A view correspondence may be used to capture a consistency relation, a traceability relation, a constraint or obligation of one view imposed upon another, or other relations relevant to the architecture being described. An **architectural rationale** (or simply, rationale) is an explanation or justification for an architectural decision. A rationale explains the basis for the decision, the reasoning that led to it, the trade-offs that were considered, and the impact of the decision including its pros and cons, and points to further sources of information. An **architectural decision** (or simply, decision) is a choice that addresses one or more architecture-related quality attributes and affects (directly or indirectly) the architecture. A decision may address (in full or in part) more than one quality attribute. A decision may affect the architecture in several ways: the existence of an architectural entity, or the properties of some architectural entities. In most cases a decision provides a trace between architectural elements and quality attributes. A decision may raise additional concerns. Using ISO/IEC 42010, a potential supplier can prepare an architectural description to be used throughout the life cycle to make predictions of the fitness-for-use of a system whose architecture complies with the architectural description, or to assess how well architecture-related concerns have been addressed. The architectural description will typically evolve throughout the life cycle as understanding of the architecture grows, and can serve as a means for assessing changes to the system.

**Use of Architecture Frameworks**

An architecture framework establishes a common practice for creating, organizing, interpreting and analysing architectural descriptions used within a particular domain of application or stakeholder community. For example, DODAF and MODAF are two architecture frameworks, used by the US DoD and the UK MoD respectively. Australia has a similar framework, AUSDAF, for ICT-related defence systems.
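The ISO/IEC 42010 concepts above (viewpoint, view, model, correspondence, rationale and decision) form a small conceptual model, which can be sketched as data types. The following Python sketch is our own simplification, not the standard's normative metamodel; class and field names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArchitecturalModel:
    notation: str   # modelling language or notation, e.g. a component diagram
    content: str

@dataclass
class Viewpoint:
    name: str
    framed_concerns: List[str]   # quality attributes the viewpoint frames
    conventions: str             # notations, rules and analysis methods to use

@dataclass
class View:
    viewpoint: Viewpoint              # the conventions this view is built under
    models: List[ArchitecturalModel]  # ordinarily exactly one model per view

@dataclass
class ViewCorrespondence:
    left: View
    right: View
    relation: str   # consistency, traceability, constraint or obligation

@dataclass
class Decision:
    choice: str
    addressed_concerns: List[str]  # may partially address several attributes
    rationale: str                 # basis, trade-offs, pros and cons

@dataclass
class ArchitectureDescription:
    views: List[View] = field(default_factory=list)
    correspondences: List[ViewCorrespondence] = field(default_factory=list)
    decisions: List[Decision] = field(default_factory=list)

# A description with one performance view containing a single model:
vp = Viewpoint("performance", ["latency", "throughput"], "queueing models")
ad = ArchitectureDescription(
    views=[View(vp, [ArchitecturalModel("queueing network",
                                        "M/M/1 model of the gateway")])])
```

The point of the sketch is the containment and reference structure: views belong to the description, each view cites the viewpoint that governs it, and decisions and correspondences sit alongside the views rather than inside them.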
An architecture framework identifies a set of generic architecture-related concerns (quality attributes), generic stakeholders holding these concerns, and one or more predefined architectural viewpoints which frame these concerns. In a predefined viewpoint, a generic stakeholder is identified to motivate the concerns that the viewpoint frames. As part of applying the framework, generic stakeholders and generic concerns are instantiated within an actual architectural description.

#### 3.2.2.2 ISO/IEC 42030 (Architecture Evaluation)

This standard specifies the practices and products for planning, executing and documenting architecture evaluations for the purpose of:

- Determining the quality of an architecture,
- Verifying that an architecture addresses stakeholders’ concerns, or
- Supporting decision-making informed by the architecture of the System of Interest.

Figure 2 shows the parties involved in an architecture evaluation. The evaluator is often an independent expert outside the project team.

![Figure 2 Parties Involved in Architecture Evaluation (reproduced from ISO 42030 Working Draft)](image)

Architecture evaluations can be scoped to assess fulfilment of all or a subset of the concerns. Based on the scope and complexity of the architecture, the evaluation is performed by an architecture evaluator or a team of evaluators. Depending upon the context of the evaluation there may also be third-party evaluators. Multi-member evaluation teams need a leader, who plans, executes and manages the entire evaluation. An architecture evaluator can perform an architecture evaluation on behalf of a client. Architecture evaluations are commonly initiated either by the acquirer or as a joint acquirer-supplier review. The initiator becomes the evaluation sponsor and outlines the drivers for the exercise.
The evaluation sponsor will provide the purpose for evaluating the architecture. The evaluation team will elaborate the scope and objectives, determine evaluation criteria, determine the analysis techniques and tools to use, collect and understand the required information, assess the architecture, formulate findings and recommendations, communicate the results of the evaluation, and manage the architecture evaluation with a plan. Architecture evaluations are performed for specific purposes or concerns, within a fixed timeframe, by one or more evaluators using various sources of information (human or other). Architecture evaluations need to be managed and governed with a plan. Figure 3 depicts the concepts relating to the architecture evaluation plan and the key contents of the plan. In addition to the evaluation criteria, scope, schedule and resources for completing the evaluation, the plan clearly states how the scope of the evaluation will be covered in the form of one or more modules of work. In the context of architecture evaluation, these modules are referred to as assessment modules. Assessment modules are built around, or use, a specific assessment method. They aggregate the assessment method, the desired set of participant roles, the time and support expected from them, any special skills required of the evaluator, the kinds of input information, and the artefacts or roles that act as the sources of this information. Architecture viewpoints and architecture frameworks may recommend assessment modules for use by an architecture evaluation. The selection of assessment modules should be influenced by the analysis and evaluation methods specified in viewpoints or frameworks that are part of the architecture and architecture description.

![Figure 3 Architecture Evaluation Plan (reproduced from ISO 42030 Working Draft)](image)

The assessment module works as a template that can be reused when creating work units for architecture evaluations performed in the enterprise.
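As described above, an assessment module bundles an assessment method with the participants, skills and inputs it needs. A minimal sketch of such a template follows (the field names are our own; the ISO/IEC 42030 working draft does not prescribe this schema).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AssessmentModule:
    """Reusable template for one unit of evaluation work (illustrative fields)."""
    method: str                   # e.g. "ATAM", "performance modelling"
    participant_roles: List[str]  # roles whose time and support are expected
    evaluator_skills: List[str]   # special skills required of the evaluator
    inputs: List[str]             # kinds of input information required
    sources: List[str]            # artefacts or roles supplying those inputs

# Instantiating the template for a concrete evaluation replaces the generic
# roles and inputs with named resources and actual artefacts:
module = AssessmentModule(
    method="ATAM",
    participant_roles=["architect", "project manager", "end-user representative"],
    evaluator_skills=["scenario elicitation", "trade-off analysis"],
    inputs=["architecture description", "quality attribute scenarios"],
    sources=["supplier architecture team"],
)
```

Keeping the template generic is what makes it reusable across evaluations: only the instances carry project-specific names and artefacts.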
When an architecture evaluation is in progress, the evaluation plan instantiates the planned assessment modules. The actual instances of the assessment module will have the actual artefacts and named resources. The evaluation criteria are designed with the evaluation purpose or concerns in mind. Using the evaluation criteria, the outcomes and findings of the individual assessment modules are aggregated to draw conclusions or provide recommendations that are documented as part of the evaluation report. ISO/IEC 42030, once published, will provide an internationally recognised reference point for conducting architecture evaluations.

## 4 Software Architecture Evaluation in Industry and Defence

### 4.1 Software Architecture Evaluations in Industry

There have been many experience reports and case studies published on architecture evaluation. Most notably, AT&T and its sub-companies have reported around 700 architecture evaluations between 1998 and 2005. They estimate that projects of 100,000 non-comment source lines of code have saved an average of US$1 million each by using software architecture evaluations to identify and resolve problems early (Maranzano, Rozsypal et al. 2005). The primary goal of AT&T’s architecture evaluation was to find design problems early in development, when they are less expensive to fix.
But a number of other key benefits were also identified:

- Leverage experienced people by using their expertise and experience to help other projects in the company
- Let companies better manage software component suppliers
- Provide management with better visibility into technical and project management issues
- Generate good problem descriptions by having the review team critique them for consistency and completeness
- Rapidly identify knowledge gaps and establish training in areas where errors frequently occur (for example, creating a company-wide performance course when many reviews indicated performance issues)

They found architecture reviews could get management attention without personal retribution. Because independent experts conduct the reviews and the organization’s management sanctions them, reviews are a safe way to identify a project’s most serious issues. Business leaders can also learn more about the architecture and the reasons for key technical decisions. They also found architecture reviews assist organizational change: reviews provide an intervention opportunity for an organizational change effort. By observing both what projects are doing and how they do it, interactions among the various process participants can introduce change into individual projects and the larger organization. When conducting architecture evaluations, AT&T identified five key principles for success:

1. **A clearly defined problem statement drives the system architecture.** Problem statements constrain solutions through considerations or constraints on functionality, structure, environment, and the cost and time to develop and deploy the solution.
2. **Product line and business application projects require a system architect at all phases.** The system architect, whether a person or a small team, is an explicitly identified role.
The architect must explain the reasons for the technical decisions that define the system, including the alternatives considered, and must document choices in a concise, coherent architecture specification.

3. **Independent experts conduct reviews.** The chosen reviewers are recognized experts in their fields, generally with considerable experience. To assure impartiality, they are also independent of the particular project under review.
4. **Reviews are open processes.** The review team conducts the review openly and invites project members to attend. Everyone involved in the review process can see the architecture’s issues and strengths and note them.
5. **Companies conduct reviews for the project’s benefit.** Reaction to the issues discovered, such as a project’s architectural changes, replanning, or even cancellation, is the responsibility of the project team and management.

We believe many of these lessons are applicable to Australian defence acquisition projects. Another, more recent, survey of the state of practice in architecture evaluation among 235 industry participants identifies a range of benefits, of which the top five are architecture evaluation’s ability to identify project risks (88% of responses), assess quality (77%), identify reuse opportunities (72%), promote good practices within an organisation (64%) and reduce project cost by reducing design defects (63%) (Babar and Gorton 2009). The full list is shown in Table 2. Another industry survey, on risk management in architecture evolution, also clearly shows that early architecture risk identification leads to better project outcomes and cost savings (Slyngstad, Conradi et al. 2008).

### 4.2 Software Architecture Evaluation in US DoD System Acquisition

The most notable use of software architecture evaluation in the defence domain comes from the US DoD, which considers software architecture critical to the quality of a software-intensive system (Bergey and Morrow 2005).
For an acquisition organization such as the US Department of Defense (DoD), the ability to evaluate software architectures early in an acquisition was identified as having a favourable impact on the delivered system (Bergey and Fisher 2001). Software architecture can help mitigate many of the technical risks associated with software-intensive system development, thereby improving the ability of the acquisition to achieve the stated system objectives. In an acquisition context, these architecture evaluations provide a proactive means of gaining early visibility into critical design decisions that will drive the entire system-development effort. They can be performed before a system is selected and integrated, to determine if the system can be made to satisfy its desired qualities. Data gathered over multiple years confirmed that the use of architecture evaluations is generally beneficial to system acquisitions, and suggests that maximal benefit is achievable only if architecture-centric practices are built into the acquisition process (Nord, Bergey et al. 2009).

#### 4.2.1 The CLIP Example from US DoD

One example from the US DoD is the Common Link Integration Processing (CLIP) program, a US$275 million project. It involved cooperative Air Force/Navy programs to integrate Tactical Data Links (TDLs) across platforms with a TDL requirement. It needed to provide message processing, gateway functionality, and a common interface to enable the transition of new and legacy platforms to a Network Centric Warfare (NCW) environment.
The program had some architecture challenges, such as:

- Incremental acquisition supporting different platform integration needs and their expected dates of integration
- Developing software assets which would be portable to the different platforms using diverse hardware and software
- Integration of CLIP with other DoD systems under development
- Development of a common host interface

Two key architecture evaluation activities were used in various stages of the project. One is the Quality Attribute Workshop (QAW) method, which is essentially a quality-attribute-related scenario elicitation activity involving key stakeholders. The other is the ATAM, which can be performed relatively quickly and inexpensively. The ATAM involves project decision-makers, other stakeholders including managers, developers, maintainers, testers, re-users, end users and customers, and an architecture evaluation team. The ATAM asks questions to uncover:

- Risks: architecture decisions that might create future problems for some quality attribute
- Sensitivity points: properties of one or more components (and/or component relationships) that are critical for achieving a particular quality attribute response (i.e., where a slight change in a property can make a significant difference in a quality attribute)
- Tradeoffs: decisions affecting more than one quality attribute

Figure 4 illustrates the integration. A QAW was conducted during the acquisition planning stage, and all architecture-related requirements were integrated into the RFP, so that enough information on architecture was available in supplier responses for source selection. After the contract award in May 2005, one more QAW and four ATAMs were conducted to provide a common forum for discussing quality attribute requirements and architectural decisions and to gain stakeholder buy-in.
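The three kinds of ATAM findings listed above can be recorded in a uniform way. The sketch below is our own illustration, not part of the ATAM definition: each finding ties an architecture decision to the quality attributes it affects, and a tradeoff is simply a finding that touches more than one attribute.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AtamFinding:
    kind: str                      # "risk", "sensitivity point", or "tradeoff"
    decision: str                  # the architecture decision concerned
    quality_attributes: List[str]  # attributes whose responses are affected

# Hypothetical findings from an evaluation (decisions invented for illustration):
findings = [
    AtamFinding("risk", "single shared message bus", ["availability"]),
    AtamFinding("tradeoff", "encrypt all link traffic",
                ["security", "performance"]),
]

# A tradeoff, by definition, affects more than one quality attribute:
assert all(len(f.quality_attributes) > 1
           for f in findings if f.kind == "tradeoff")
```

A uniform record like this makes it easy to aggregate findings across several ATAM engagements, as the CLIP program conducted.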
Software architecture documentation was delivered in support of the Preliminary Design Review (PDR), and another ATAM engagement in November 2005 successfully increased communication among stakeholders, clarified quality attribute requirements and identified software risks early in the development cycle.

#### 4.2.2 Integrating Software Architecture Evaluation in US DoD Acquisition

From the CLIP example, generalisations were made to identify ways of integrating architecture evaluation into the acquisition process. In the US DoD, acquisition planning precedes the entire solicitation process and includes generating and validating product requirements (e.g., functional and quality requirements such as reliability or performance). In the pre-award phase, a solicitation package is developed. It informs potential suppliers what the requirements of the acquisition are, how to prepare their proposals, how proposals will be evaluated, and when to submit their proposals. Solicitation packages take various forms and are referred to differently; however, they all share the characteristics noted here. A Request For Proposal (RFP) is used here to represent all such packages. These are similar to Australian Defence’s Request For Tender (RFT) package, which includes the Function and Performance Specification (FPS), Operational Concept Document (OCD) and Test Concept Document (TCD), among others. Software architecture and software architecture evaluation requirements are integrated explicitly in different parts of the RFP. For example, Section C in the DoD’s RFP contains supplier work requirements in the form of a statement of objectives (SOO) or statement of work (SOW), along with product requirements such as a system performance specification (containing functional and quality requirements). A US DoD system specification typically has two main sections of interest. Section 1 specifies functional and quality requirements for the system.
Here, quality requirements refer to the quality attributes of the system and their respective characterizations. Modifiability, reliability, and security are examples of the types of system quality attributes that may be considered. For example, if reliability is a required quality attribute, a characterization might be that “the system will not fail under maximum load conditions”. Eliciting the quality attributes of primary interest, as well as their characterizations for the system in question, is part of the ATAM. Section 2 of the system specification describes the software architecture evaluation methods, such as the ATAM, to be used in determining if the software architecture can support the satisfaction of the requirements in Section 1. All of the key US DoD acquisition documents are architecture driven, including the Acquisition Strategy and Acquisition Plan, System Engineering Plan, Test and Evaluation Master Plan, Request for Proposal, Statement of Work and System Requirements Document. For example, Section C of the RFP, regarding the Statement of Work, explicitly says “An evaluation team shall conduct a series of software architecture evaluations in accordance with the special requirements of section H”. Section H of the RFP (special contract requirements) includes the detailed requirements specifying how the software architecture evaluations are to be conducted using the ATAM. Section J of the RFP (contract deliverables requirements list) includes “software architecture documentation” and “the software architecture evaluation report”. Section L (Proposal Preparation Instructions) describes what potential suppliers should address in their proposals and the response that is required. Typically, the acquirer would ask the potential suppliers for responses in several volumes, such as a technical volume, past performance volume, management volume, and cost volume. There are no set rules for exactly what these volumes contain.
In the technical volume, an acquirer may ask potential suppliers to describe their proposed approach for implementing the software architecture requirements and performing an architecture evaluation. In the past performance volume, an acquirer may ask suppliers to describe previous work on software architecture development and architecture evaluation. Section M (Evaluation Factors for Award) tells potential suppliers how their proposals will be evaluated. This typically includes specifically what areas of the supplier’s proposed approach are to be evaluated as part of the proposal evaluation, and the specific criteria to be used for judging the supplier’s proposed approach to meeting the RFP/contract requirements for these factors. To incorporate architecture evaluation, Section M must specify how the architecture evaluation will relate to the factors, and it must specify the criteria to be used in judging the bidder’s approach to satisfying the RFP/contract architecture requirements. Figure 5 and Figure 6, reproduced from (Bergey and Fisher 2001; Bergey and Morrow 2005), show some integration opportunities and the division of tasks between acquirers and suppliers.

![Figure 5 Opportunities for Conducting Architecture Evaluation during Acquisition](image)

During the post-award phase, making architecture evaluation a contractual checkpoint is also an effective way to gain insight into the architecture’s ability to satisfy system requirements early in its development and integration. Such an evaluation can help the acquirer and supplier to select among several candidate architecture decisions and to surface and mitigate risks associated with architectural decisions early.

![Figure 6 Key Acquirer and Contractor Tasks](image)

In the US DoD, the following must be addressed for software architecture evaluation requirements:

- What evaluation method is to be used and what are the steps?
- Who are the participants in the architecture evaluation?
- What are their roles and responsibilities?
- How many evaluations need to be conducted, and when?
- If multiple evaluations are involved, how are they to be staged?
- What are the prerequisites for conducting the evaluations?
- What is involved in terms of time, effort, and cost?
- How are evaluation team responsibilities to be transitioned?
- How will the objectivity of the participants be ensured?
- How are the evaluation results to be captured and used?
- What contract deliverables need to be included?
- How can the evaluations be carried out collaboratively to ensure both government and contractor stakeholders play an active role?
- What training will be provided for the evaluation team members?

A proven means of achieving this (Bergey 2009) is to provide a sample Software Architecture Evaluation Plan that a DoD program office can easily customize and use in its own Request for Proposal (RFP)/contract. The sample plan covers all aspects of the “who, why, when, where, and how” of the government’s approach to conducting a software architecture evaluation. We observe that many of the lessons here are applicable to the Australian Defence acquisition process.

## 5 The Need for Software Architecture Evaluation in Acquisition of Software-Intensive Systems in Australian Defence

The life cycle process for Australian Defence’s capability systems consists of several major phases: needs, requirements, acquisition, in-service and disposal. Our main focus is on the requirements and acquisition phases, especially when the DMO is involved. There are a number of documents that will be created in preliminary, draft and final form, such as the Request for Information (RFI), Request for Proposal (RFP), Materiel Acquisition Agreement, Technical Risk Assessment, and the Request for Tender package, which includes the Operational Concept Document (OCD), Function and Performance Specification (FPS) and Test Concept Document (TCD), among others.
Some are more important in the source selection and solicitation stage before awarding the contract; some remain very relevant after the contract award, when conducting various system reviews and when transitioning the materiel and capabilities into service. As we can see from the US DoD example, architecture-evaluation-related information requests, responses and activities can be integrated across the life cycle. We note that many acquisition projects in Australian defence acquire MOTS, COTS, GOTS and other OTS systems whenever possible, perhaps more so than the US DoD. Although some architecture decisions in OTS systems are difficult to change, there are still many key architecture decisions that need to be made or changed during integration and customisation. There is already some recognition in Defence of the importance of architecture generally. For example, the 2009 NCW roadmap paper outlines that a single Enterprise Architecture to support the networked force build is fundamental to achieving the necessary collaboration and synchronisation. A key aspect of architectural design is the development and use of architecture framework views to establish a ‘common language’ between diverse stakeholders, to manage the inherent complexity of systems of systems, particularly under the influence of diverse mission requirements, and to enable incremental capability development and integration into the force structure (DoD 2009). The Australian National Audit Office (ANAO)’s reports on the Collins Class Submarine and Seasprite also highlighted the importance of early design reviews and the benefits gained from having a software architecture that allowed for a graceful degradation in performance: even if the operator neglected to manage the object density, there was no possibility of system malfunction or software stoppage (DoD 2009). Another initiative between the DMO and DSTO has seen the establishment of the Defence Systems Integration Technical Advisory (DSI–TA).
This new organisation forms part of the DMO, with support provided by DSTO. The main objective of the DSI–TA is to provide independent identification, assessment and mitigation strategies for ‘systems’ and ‘systems of systems’ integration risks in current and future major DCP projects. One of its major tasks is to create an integrated defence architecture and sub-domain/section architectures. The quality of these architectures, and their application in individual systems, needs to be evaluated at different times. In addition, an explicit architecture also helps future maintenance and upgrades, and helps other suppliers (especially Australian SMEs) integrate better with key mission systems. We make the following general recommendations.

**Recommendation 1**: Incorporate software architectural requirements into the Request for Tender package.

Contract requirements frame the development of a system, so the best way to ensure adherence to a sound software architecture is to stipulate architectural requirements in the system specification. The first principle from AT&T’s industry experience in architecture evaluation is that a clearly defined problem statement drives the software-system architecture. The needs of the various system capabilities should drive software-system architecture development and should be expressed in an architecture-conscious way. It is therefore recommended that software architectural requirements are incorporated into the Request for Tender package. The FPS (DoD 2011) specifies the formal requirements for the Materiel System and provides the basis for design and Verification of the system. The FPS already provides the vehicle for the capture of formal, verifiable and unambiguous requirements, effectively ‘distilled’ from the OCD. The FPS is intentionally written using formal language, with all requirements in the FPS traceable to needs identified in the OCD.
In the current FPS, there are already large sections (sections 8.4.x) on quality attributes such as availability, reliability, maintainability, deployability, transportability, usability and safety. It also contains information on environmental conditions, adaptation requirements, and design and implementation constraints. There is an opportunity here to tease out the architecturally significant requirements and ask suppliers to respond to these requirements with a software architecture description and evaluation report. There is also a special section (Section 3.12) in the FPS that outlines the “Architecture, Growth and Expansion” requirements. This section specifies the applicable architectural and other requirements to accommodate the need for flexibility, growth, and expansion for relevant subsets of the Materiel System to support anticipated areas of growth or changes in technology, threat, or mission. Software architecture-driven modifiability analysis is a well-understood field, though not well-established in the Australian Defence context. There is an opportunity to require a more rigorous modifiability analysis based on explicit software architecture descriptions. International standards on architecture description such as ISO/IEC 42010 should be recognised in an appropriate way when defining policies or RFT conditions requesting information about software architectures.

**Recommendation 2:** Integrate software architecture evaluation activities across the acquisition process.
The successful use of software architecture evaluation in the US DoD provides grounds for believing that software architecture evaluation should be useful in Australia. The Australian Defence Architecture Framework (AUSDAF) is an authorised framework of architectural views that can be incorporated into the OCD, where they help stakeholders to understand the proposed system from their perspective. Applicable AUSDAF products are integrated into the OCD at relevant steps in the development process. AUSDAF products are currently only applicable to certain types of OCD (i.e. to the ICT elements). We believe that there is an opportunity to develop other suitable architecture frameworks for all defence systems beyond the ICT elements. Architecting a system is one of the earliest and hardest activities that the developer has to conduct. Any mistakes made at this stage will ripple through the system implementation if they are not detected and rectified early. It is well known that correcting errors at the software architecture stage is orders of magnitude cheaper than detecting and correcting them in the testing phase. It is therefore recommended that software architecture evaluations are conducted as a risk mitigation strategy in a software-intensive system acquisition. Two principles from AT&T’s industry experience of successful software architecture evaluation are that reviews should be open processes, and that they should be conducted for the project’s benefit. The review team should conduct the architecture evaluations openly, with all project members invited to attend. The review process enables all stakeholders to see the architecture’s issues and strengths. The project team and management then bear responsibility for the action plans to address the issues discovered, such as architectural changes or project replanning.
International standards on architecture evaluation such as ISO/IEC 42030 should be recognised when defining policies on conducting architecture evaluations.

**Recommendation 3:** Build software architecture capabilities in the Australian defence industry.

Defence acquisitions involve big international companies, which often reside overseas and are subject to their countries of origin’s rules and regulations, such as US ITAR. These rules may prevent the DMO from engaging international experts in software architecture to conduct software architecture evaluations on the DMO’s behalf. It is therefore imperative to develop and sustain software architecture capabilities in the Australian defence industry. Two principles from AT&T’s industrial experience of successful software architecture evaluation are that all projects require a software-system architect at all phases, and that independent experts should conduct reviews. The software-system architect, whether a person or a small team, should be an explicitly identified role; and the software architecture evaluation team should be led by recognized experts external to the project teams. There is a need for software architecture capability within both the DMO and industry. It is recommended that the DMO develops indigenous software architecture capabilities in the Australian defence industry by putting together a government and industry team to develop a specification, a data item description (DID) template, and contractual language for incorporation into the Request for Tender; to develop a procedure and techniques for evaluating software architectures; and to train and sustain a pool of software architecture evaluators to support the DMO in evaluating software architectures.

**Recommendation 4:** Provide appropriate contractor incentives for software architecture definition, analysis, and evaluation throughout the project life cycle.

Developing and analysing a proper software architecture design requires time and expertise.
This cost may arise even from the initial response to an RFT. Although defining a clear and appropriate software architecture early in the project may reduce overall project cost and schedule risk, there may nonetheless be further costs to maintain proper control of the software architecture throughout a project. Contractors may already conduct in-house software architecture design, but currently this activity and any software architecture documentation, if it exists, are not visible to the DMO. Where software architectural definition and analysis is currently performed, under pressures of time and budget, contractors may prioritise meeting immediate requirements at the expense of longer-term benefits from maintaining software architectural integrity. It is therefore recommended that contractors be given incentives to establish, maintain, and evaluate software architectures during the life cycle of system development.

6 Conclusion

Australian Defence and the local defence industry are facing new challenges from ever-larger, more complex and interconnected software-intensive systems. Many are based on OTS systems supplied by international primes. Risks need to be better managed in such an environment. In this report, we described the concept of architecture evaluation and its state of the art and practice in industry. For companies such as AT&T, significant costs were saved (an average saving of 1 million USD for 100K-lines-of-code projects) owing to early detection of design problems through architecture evaluation. Architecture evaluation can also socialise best practices within an organisation and provide visibility of system-level impacts from technical decisions to upper management. In the defence context, the US DoD has successfully incorporated architecture evaluation activities into different stages of acquisition projects through explicit requirements in both RFPs and contracts.
We also made some general recommendations in the context of Australian defence acquisition, especially for the development of the CDD and FPS. More detailed recommendations and implementation strategies require further investigation. We believe that integrating systematic architecture evaluation into Australian Defence acquisition will bring a number of key benefits:

- Early detection of design problems, significantly reducing later rework cost
- Better management of various kinds and levels of risk through key stakeholder involvement
- Help in identifying reusable components/systems and infrastructure to bring previously separate capabilities into multi-role platforms
- Socialisation of best practices within Australian Defence and across industry
- Help in building up industry capabilities in architecture and design, a key part of a knowledge-based economy
- Better integration of Australian SMEs with the global supply chain, using explicit integration architectures provided by both Defence and international primes
- Better communication with all stakeholders at various levels, using architecture as the centre for managing technical and non-technical risks

The software architecture for a software-intensive system defines the main elements of the system, their relationships, and the rationale for them in the system. Software architecture is fundamental to whether a system can achieve its quality objectives. Architecture evaluation is an approach for assessing whether a software architecture can support the system needs, especially its non-functional requirements (also known as quality requirements).
Architecture evaluation can be used at different stages of a project, and is an effective way of ensuring design quality early in the lifecycle, reducing overall project cost and helping to manage risks. This report describes software architecture and architecture evaluation, and summarises some of the key benefits of software architecture evaluation that have been observed both in industry and in international defence contexts. We make some general recommendations about architecture evaluation in the context of Australian defence acquisition.
Modular Domain-Specific Language Components in Scala

Christian Hofer, Aarhus University, Denmark (cmh@cs.au.dk)
Klaus Ostermann, University of Marburg, Germany (kos@informatik.uni-marburg.de)

Abstract

Programs in domain-specific embedded languages (DSELs) can be represented in the host language in different ways, for instance implicitly as libraries, or explicitly in the form of abstract syntax trees. Each of these representations has its own strengths and weaknesses. The implicit approach has good composability properties, whereas the explicit approach allows more freedom in making syntactic program transformations. Traditional designs for DSELs fix the form of representation, which means that it is not possible to choose the best representation for a particular interpretation or transformation. We propose a new design for implementing DSELs in Scala which makes it easy to use different program representations at the same time. It enables the DSL implementor to define modular language components and to compose transformations and interpretations for them.

Categories and Subject Descriptors D.1.5 [Programming Techniques]: Object-oriented Programming; D.2.13 [Software Engineering]: Reusable Software—Reusable Libraries; D.3.2 [Programming Languages]: Language Classifications—Extensible Languages, Specialized Application Languages

General Terms Languages, Design

Keywords Embedded Languages, Domain-Specific Languages, Term Representation, Visitor Pattern, Scala

1. Introduction

The methodology of domain-specific embedded languages, where a domain-specific language (DSL) is embedded as a library into a typed host language, instead of creating a stand-alone DSL, is nowadays well known. It goes back to Reynolds [26] and has been systematically described by Hudak [12]. It is ideal for prototyping a language for two reasons. Firstly, simple interpreters can be quickly derived and implemented from the denotational semantics of the DSL.
Secondly, many parts of the host language can be directly reused: not only its syntax, but also some semantic features like its libraries and even its type system. Furthermore, it is easy to extend the DSL or compose and integrate several DSLs into the same host language, since DSL composition is the same as library composition. Assume for example that we have three different DSLs: a language of regions, one of vectors, and a lambda calculus. We can simply compose these languages, if the concrete representations of their types in the host language match. Assuming Scala as our host language, we can then write a term like:

app(lam((x: Region) => union(vr(x), univ)), scale(circle(3), add(vec(1, 2), vec(3, 4))))

This term applies a function which maps a region to its union with the universal region to a circle that is scaled by a vector. However, the main advantage of this method is also its main disadvantage. It restricts the implementation to a fixed interpretation which has to be compositional, i.e., the meaning of an expression may depend only on the meaning of its sub-expressions, not on their syntactic form or some context. In the above example, we could perform two optimizations. First, the union of any region with the universal region is itself the universal region, allowing us to replace the body of the lambda abstraction with univ. And second, we see that the parameter x is not used (or without the first optimization: only used once) in the body, allowing us to inline the application [1], reducing the whole term to univ. Optimizations like these are hard to implement using compositional interpretations [2, 11]. In order to perform analyses on the code or write a more efficient implementation (or even a compiler) of the language, the DSL backend can be replaced by an explicit representation of the abstract syntax tree (AST) [8].
Then, arbitrary traversals over the AST can be implemented for analysis and interpretation. However, this flexibility comes at a price: extending the DSL with new operations, or even composing it with other languages, requires adaptation of both the data structure and the tree traversals. Recently, variants of the DSEL approach have been proposed that introduce a separation between language interface and implementation [2, 6, 11]. In this way, it is possible to define several (compositional) interpretations for the same language, e.g., partial evaluators or transformations to continuation-passing style [6]. Furthermore, the languages can still be extended and even composed with other languages, in the sense of extending and composing the interpretations [11]. However, two challenges remain. Firstly, can we express program transformations as independent modular components? This requires a program representation that is itself extensible and composable and can serve as the target domain of the transformation. Secondly, can we express non-compositional interpretations like the optimization mentioned above? We believe that a scalable approach to embedding DSLs should address these problems. We propose a design that extends techniques developed in recent studies of the visitor pattern [5, 21] and the expression problem [34], and builds on our own work on polymorphic embedding [11]. More specifically, the design goals set forth in this paper are:

- The design should enable the composition of independently developed languages and their representations.
  • Embedded languages are statically typed (uni-typed in the simple case, such as the region DSL). A composed language should preserve the types of the individual languages.
  • It should be possible to apply different (kinds of) interpretations on the same language representation.
    ▪ The target domain of the interpretation should be allowed to be the term representation (program transformation). It should be possible to compose several program transformations in this way before interpreting to another domain.
    ▪ It should be possible to define compositional as well as non-compositional interpretations.
  • The language representations and their interpretations should be independent. That is, it should be possible to add new interpretations without having to change the language representation.

We choose Scala as the implementation language, as its combination of language features both allows for solving the expression problem and makes DSL embedding smooth [11, 34]. In particular, we make use of Scala’s support for nested traits and mixin composition, abstract type members, higher-kinded types, self-type annotations, implicit conversions, type inference, and the flexible import statement. We will only show incomplete code examples for space reasons. Furthermore, we will not discuss infix operations in the paper. The complete source code with further examples can be downloaded at: http://www.cs.au.dk/~chmh/mdslcs/. The central contributions of this paper are:

1. We show how to integrate extensible term representations into a DSEL approach in Scala. These term representations can be used as the target domain of program transformations on DSL terms.
2. We show that those term representations are composable in the same way as the languages that they represent. In particular, this composition also reflects the types of the DSL expressions.
3. Our representation accommodates three kinds of interpretations: compositional interpretations, interpretations based on explicit AST traversal, and interpretations based on AST inspection. We motivate and compare these three options for writing interpretations.
4. We discuss name-binding in DSELs by means of composing with a lambda calculus language. In particular, we present an extensible term representation for the simply-typed lambda calculus using higher-order abstract syntax.
This representation allows all three kinds of interpretations.

5. We present DSEL development as an application domain for advanced visitor techniques.

We will introduce the core of the representation for a single, uni-typed language in Sec. 2. In Sec. 3 we show how different languages (and their type representations) can be composed. In Sec. 4 we present language composition for a more challenging type system: the simply-typed lambda calculus. At the same time, we present an extensible term representation for the simply-typed lambda calculus using higher-order abstract syntax. This representation allows all three kinds of interpretations. In Sec. 5 we discuss the design goals and the different kinds of interpretations that our representation allows. Related work is discussed in Sec. 6. Section 7 concludes.

2. Presentation of the Core Design

In this section, we present the core design for a simple, uni-typed embedded language. We will first show how to define the language interface and compositional interpretations. Then we will introduce the term representation and an interpretation to create it. We demonstrate how the representations can be used to apply both compositional and non-compositional interpretations. We will use a language of regions [12] as the running example.

2.1 Defining the Language Interface

Each language specifies the language interface as the signature of an algebra: abstract type members declare the sorts (domains) of the algebra, and methods declare its constants and operations [11]. The language interface of the regions language is shown in the trait RegionLI in Fig. 1. Region is the only sort in this algebra. univ is the universal region, circle is a circle around the origin, and union is a binary operation to construct the union of two regions. More operations could be defined in the same way. We could also add them later by defining an extended language interface that inherits from RegionLI. The language interface is extensible.
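Since Fig. 1 is not reproduced in this text, the following is a hedged reconstruction of the RegionLI language interface from the description above, together with an assumed pretty-printing interpretation (not from the paper) that illustrates how an object implementing the interface fixes one meaning for the operations.

```scala
// Hedged reconstruction of the RegionLI interface: the member names
// (Region, univ, circle, union) are taken from the prose; Fig. 1 itself
// is not reproduced in this extraction.
trait RegionLI {
  type Region                                    // the single sort of the algebra
  def univ: Region                               // the universal region
  def circle(radius: Double): Region             // a circle around the origin
  def union(reg1: Region, reg2: Region): Region  // the union of two regions
}

// An assumed pretty-printing interpretation: fixing Region = String turns
// every term built against the interface into its textual form.
object PrintRegion extends RegionLI {
  type Region = String
  def univ = "univ"
  def circle(radius: Double) = s"circle($radius)"
  def union(reg1: Region, reg2: Region) = s"union($reg1, $reg2)"
}
```

For example, PrintRegion.union(PrintRegion.circle(3.0), PrintRegion.univ) evaluates to the string "union(circle(3.0), univ)".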
To create a term of a language, we need an object that implements the language interface. However, it is easy to abstract over the actual interpretation such that the same DSL program can be interpreted in multiple ways. One way to do that is shown in the trait Example. Here, we specify a dependency of the example on some interpretation of the region language. The object ExampleInstance then fixes a specific interpretation. Note that the import construct in Scala can appear anywhere in the code and can refer to arbitrary values. We use it here to import the operations of the interpretation so that we can use them without prefixing them with the name regionInterpretation.

2.2 Defining Compositional Interpretations

Each interpretation is an algebra of the corresponding signature. It is implemented by defining the domains and the operations that are declared in the language interface [11]. As a consequence of the algebraic method, each interpretation is guaranteed – under the assumption that we do not use side-effects – to be compositional in the following sense: the interpretation of an expression depends only on the interpretation of its sub-expressions, not on their syntactic structure or some context. This can be seen in the declaration of the union method: its parameters are of type Region, which is the domain of the algebra, and not of some expression type that represents region expressions. If we look at it as a traversal of the AST, each interpretation is a primitive recursion (fold) over the tree, applying the interpretation recursively to all sub-expressions. Compositionality of interpretations is the selling point of denotational semantics. It enables compositional reasoning about programs and it eases language extension. To give an example of what an interpretation looks like, we define an evaluating interpretation of the region language in Fig. 2. The domain of regions is represented by a predicate on points in the coordinate space.
The universal region is the region that contains all points, the union of two regions is calculated by evaluating whether a point is contained in one of the united regions, etc.

trait EvalRegion extends RegionLI {
  type Region = (Double, Double) => Boolean
  def univ = (_, _) => true
  def circle(radius: Double) = (x, y) => x*x + y*y <= radius*radius
  def union(reg1: Region, reg2: Region) = (x, y) => reg1(x, y) || reg2(x, y)
}

Figure 2. A region evaluator written as a compositional interpretation

trait RegionAST {
  trait RExp {
    def acceptI[R](v: IVisitor[R]): R
    def acceptE[R](v: EVisitor[R]): R
  }
  case class Univ() extends RExp {
    def acceptI[R](v: IVisitor[R]): R = v.univ
    def acceptE[R](v: EVisitor[R]): R = v.univ
  }
  case class Circle(radius: Double) extends RExp {
    def acceptI[R](v: IVisitor[R]): R = v.circle(radius)
    def acceptE[R](v: EVisitor[R]): R = v.circle(radius)
  }
  case class Union(reg1: RExp, reg2: RExp) extends RExp {
    def acceptI[R](v: IVisitor[R]): R = v.union(reg1.acceptI(v), reg2.acceptI(v))
    def acceptE[R](v: EVisitor[R]): R = v.union(reg1, reg2)
  }
  type IVisitor[R] <: RegionLI { type Region = R }
  type EVisitor[R] <: RegionEVisitor[RExp, R]
}

trait RegionEVisitor[RExp, Region] {
  def univ: Region
  def circle(radius: Double): Region
  def union(reg1: RExp, reg2: RExp): Region
}

object RegionASTSealed extends RegionAST {
  type IVisitor[R] = RegionLI { type Region = R }
  type EVisitor[R] = RegionEVisitor[RExp, R]
}

Figure 3. A term representation for the region language

2.3 The Term Representation

In the next step, we define the explicit term representation. Its basic design is adapted, with some minor modifications, from the functional decomposition approach described by Zenger and Odersky [34], but implements both the internal and the external visitor pattern [5, 19]. The representation of region terms is shown in Fig. 3 in the trait RegionAST. For each sort in the signature of the algebra, we define an abstract syntax tree (AST) representation.
In the example, the only sort is Region. For each operation that maps to a value of that domain, we implement an AST node as a case class inheriting from a common super-node (RExp). That super-node declares the acceptI and the acceptE methods of the internal and external visitor pattern, respectively. We declare higher-kinded abstract type members [16] for both the internal and the external visitor interface (IVisitor[R] and EVisitor[R]). Each sort of the algebra is a type parameter of this type member. To be able to extend the language with new operators, we do not fix these types [34]. Only in the object RegionASTSealed do we create a concrete instance of the AST representation, where the visitor interface types are fixed. In RegionAST we only constrain them: the visitor interface of the internal visitor has to extend the language interface of the region language (see Fig. 1), with the type Region corresponding to the type parameter of the visitor. The visitor interface for the external visitor is shown in RegionEVisitor. There are two differences from the internal visitor interface: firstly, we define the sorts (Region) as type parameters and no longer as abstract type members, as we do not want to use the external visitor as a language interface. Secondly, and more importantly, the visitor takes another type parameter (or set of type parameters) that represents the expression types (RExp). This type parameter is used in the operations that take elements of the domains as parameters (here: union). In the external visitor interface, those operations take these expressions as parameters and not the domain elements. This reflects the difference between the internal and external visitor pattern. The internal visitor is applied to the sub-expressions before they are passed to the visitor of the expression (see the method acceptI in the class Union). This enforces compositional interpretations.
The acceptE method of the external visitor, in contrast, does not perform a recursive call on the sub-expressions, but passes them directly to the visitor. In that way, the visitor has access to the syntactic structure of the sub-expressions. This makes non-compositional interpretations possible.

2.4 Program Transformations with Internal Visitors

Having defined the term representation, we can now define program transformations, i.e., interpretations that map to this term representation. These interpretations can then be composed with other interpretations by applying the accept method to the latter. For example, we can write an optimization interpretation (a program transformation) and compose it with the evaluator by supplying the latter as a visitor to the result of the former. A trivial program transformation is the reification of the program. It takes a term and maps it to its representation. It is the identity element with respect to the composition of interpretations. The reification for the region language is shown in the trait ReifyRegion in Fig. 4. Being a compositional interpretation, it implements the region language interface and can be used as an internal visitor. The trait is parametrized by a value regAST that references the exact instantiation of the AST representation. This parametrization is needed to accommodate extensions of the region language with further operators. If we instantiate this value with an extended AST representation, reification operates as an injection into this richer structure. The type Region, which specifies the domain of the interpretation, is defined as the expression super-type in the chosen representation, making the interpretation a mapping into the term representation. The operations simply construct the corresponding AST nodes. A more interesting program transformation is optimization.
In our case, we define a simple optimization that makes use of an algebraic law on regions: that the union of some region with the universal region is equivalent to the universal region. The interpretation is shown in the trait OptimizeRegion and again is just a compositional interpretation (i.e., inheriting from the language interface). Again, the term representation it produces is parametrized by the value regAST. The interesting case is the implementation of union. Here, we use pattern-matching on the sub-expressions, i.e., we inspect the explicit AST representation. Note that the optimization has already been applied recursively to the parameters reg1 and reg2. In that way, the optimization is propagated through the AST.

2.5 Program Transformations with External Visitors

We can define an alternative optimization using an external visitor, shown in Fig. 5. All interpretations using an external visitor depend on the language module, i.e., they have to be nested in another trait. Here, the trait Optimize is nested in OptimizeRegionExternal. The reason is that acceptE has to be called recursively on the optimization visitor. To be able to call acceptE, we have to make sure that Optimize is in fact a valid visitor of type EVisitor[RExp]. We cannot guarantee this at this point, as the value of regAST, and therefore the visitor interface, is not yet fixed. But we can make the compiler ensure that each concrete instance of it has to be a valid visitor. This is done by declaring the type of this to be EVisitor[RExp] using Scala’s self-type annotations in the first line of the body of Optimize. The visitor is defined in regAST. If we had declared this value inside of the Optimize trait instead, we would not be able to refer to it in the self-type annotation. In the union case, we get the two sub-expressions as unevaluated expressions of type RExp as parameters.
In that way, we can implement a more efficient version of the optimizer than before: we first optimize only the first sub-expression recursively by calling acceptE on it. Only if the result is not the universal region do we optimize the other sub-expression as well. We will discuss the respective advantages of compositional interpretations and explicit tree traversals in Sec. 5. 3. Composing Domain-Specific Languages In this section, we discuss how to compose the representations of the embedded languages. If we only ever want to compose several languages that share the same sort (i.e., in an untyped setting), this is similar to extending a language with new operations. As the term representations we are using have been written with this extensibility in mind [34], this is easy. On the other hand, if we only want to introduce new sorts within a single language, we could make the different expression types share the same visitor, with the accept methods taking several type parameters (to reflect the different sorts) instead of one. The difficulty arises when we want to compose independent languages that each define their own sorts, as the accept methods cannot be extended with type parameters through inheritance. In the following, we will discuss how the term representation can be made to work with language and sort composition. At the same time, this will show how individual languages can be extended with new operators. As a running example, we will compose the region language with a simple language of vectors. Its language interface is defined in Fig. 6. We restrict ourselves to two operators: vec constructs a two-dimensional vector, and add is ordinary vector addition. Again, we can implement different compositional interpretations for this language interface, e.g., an interpreter or a pretty-printer. Furthermore, we assume that a term representation of the vector language has been defined.
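Returning briefly to the short-circuit optimization of Sec. 2.5, its recursion scheme can be sketched directly (a simplification of our own that inlines the external-visitor dispatch into a plain recursive function over simplified node types):

```scala
// Simplified AST for the region language.
sealed trait RExp
case object RUniv extends RExp
case class RCircle(r: Double) extends RExp
case class RUnion(e1: RExp, e2: RExp) extends RExp

// Lazy variant of the optimizer: the second operand of a union is only
// visited when the first one did not already collapse to the universal
// region.
object LazyOptimizer {
  def optimize(e: RExp): RExp = e match {
    case RUnion(e1, e2) =>
      optimize(e1) match {
        case RUniv => RUniv // short-circuit: e2 is never traversed
        case o1 =>
          optimize(e2) match {
            case RUniv => RUniv
            case o2    => RUnion(o1, o2)
          }
      }
    case other => other
  }
}
```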
The need to integrate region and vector language arises if we want to extend our region language with a new operator: scale, that scales a region by a vector. An example term of the composed language could be: scale(circle(2.0), add(vec(1,2), vec(0,.5))) We extend the language interface of the region language by inheriting from RegionLI, as shown in trait ExtRegionLI. Besides declaring the method scale, we declare an abstract type member Vector in addition to the inherited type member Region. We have shown how to compose language interfaces and their interpretations in this setting in earlier work [11]. Here, we focus on the corresponding composition of term representations. 3.1 Composing Term Representations To compose the different term representations in a modular way, we create an interface between them. The term representation for the vector language is not modified. The representation for the extended region language is shown in the trait ExtRegionAST in Fig. 7. It abstracts over the vector representation VectorRep: We do not necessarily have to compose with an explicit term representation of vectors, but can alternatively use a direct representation. For example, we could choose to represent vectors as pairs of numbers, which could be the result of an evaluating interpretation using the VectorLI language interface. The external visitor interface for the extended region language is shown in trait ExtRegionEVisitor. It takes the representation of vectors as an additional type parameter. Each non-compositional interpretation has the responsibility to deal with the respective representation of the other domain (here: vectors), be it an explicit term representation or a direct representation. The acceptE method in the class Scale is straightforward: It forwards the region expression and the vector representation to the visitor. 
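The key idea of abstracting the composed AST over the vector representation can be sketched in a few lines (our own condensed version of the role played by ExtRegionAST: a type parameter stands in for the abstract member, and only the scale node is shown):

```scala
// Condensed sketch: the region AST is parametrized by the vector
// representation VRep, so the vector side may be an explicit term
// representation or a direct one (e.g., pairs of Doubles).
trait RegionWithVectors[VRep] {
  sealed trait RExp
  case object RUniv extends RExp
  case class RCircle(r: Double) extends RExp
  case class RScale(reg: RExp, vec: VRep) extends RExp
}

// Instantiation with a direct vector representation: pairs of Doubles.
object RegionsOverPairs extends RegionWithVectors[(Double, Double)]
```

An instantiation with an explicit vector AST would supply that AST's expression type for `VRep` instead, without any change to the region side.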
---

**Figure 4.** Two compositional program transformations

```scala
trait ReifyRegion extends RegionLI {
  val regAST : RegionAST
  import regAST._
  type Region = RExp
  def univ : Region = Univ()
  ...
  def union(reg1 : Region, reg2 : Region) : Region = Union(reg1, reg2)
}

trait OptimizeRegion extends RegionLI {
  val regAST : RegionAST
  import regAST._
  type Region = RExp
  def univ : Region = Univ()
  ...
  def union(reg1 : Region, reg2 : Region) : Region = (reg1, reg2) match {
    case (Univ(), _) | (_, Univ()) => Univ()
    case _                         => Union(reg1, reg2)
  }
}
```

**Figure 5.** An optimizer as an external visitor

```scala
trait OptimizeRegionExternal {
  val regAST : RegionAST
  import regAST._
  trait Optimize {
    this : EVisitor[RExp] =>
    def univ : RExp = Univ()
    ...
    def union(reg1 : RExp, reg2 : RExp) : RExp =
      reg1.acceptE(this) match {
        case Univ() => Univ()
        case opt1   => reg2.acceptE(this) match {
          case Univ() => Univ()
          case opt2   => Union(opt1, opt2)
        }
      }
  }
}
```

**Figure 6.** Language interface for vector and extended region languages

```scala
trait VectorLI {
  type Vector
  def vec(x : Double, y : Double) : Vector
  def add(v1 : Vector, v2 : Vector) : Vector
}

trait ExtRegionLI extends RegionLI {
  type Vector
  def scale(reg : Region, vec : Vector) : Region
}
```

**Figure 8.** Two extensions of the optimizer as internal and external visitors, respectively

3.2 Defining Interpretations In the following, we present how to implement interpretations using internal and external visitors. We will show how to implement the extension of the optimizer in both variants. We will also show how to implement an evaluator on the extended language using an internal visitor in order to demonstrate how distinct representations of the different languages can be combined. We will start with the external visitors, as their implementation is more straightforward. The extended optimizer is shown in trait OptimizeExtRegionExternal in Fig. 8. It is independent of the representation of vectors and just reuses the one specified in the target domain regAST. The same also holds for the internal visitor, which is shown in trait OptimizeExtRegion.
It does not touch the representation of the vector language and thus defines interpretVector to be the identity function on the vector representation. Finally, it is worth looking at the implementation of an evaluator for the extended region language, to see how interpretVector operates on different representations, and thus how different representations of languages can be mixed. We present two variants of an evaluator in Fig. 9. Both make use of the same base evaluator defined for the extended region language, EvalExtRegion, which expects vectors to be represented as pairs of Doubles. In the first version (EvalRegionWithVector), we do not use a term representation of vectors. Accordingly, interpretVector is implemented as the identity function on the vector domain. In the second version (EvalRegionWithVecAST), vector terms are represented explicitly. We can still reuse EvalExtRegion. The method interpretVector applies a corresponding visitor for the vector language to obtain a value in the right domain. As the evaluator is now dependent on the specific term representation for vectors, the corresponding module vecAST becomes a dependency of the region evaluator. The object EvalRegionWithVecASTSealed is an example instantiation. To conclude, we note that while the composition of explicit term representations increases the dependencies on the side of the interpretations, the main task with respect to language composition is the extension of the individual representations (here: the region representation) to accommodate the new language constructs that bring the two languages together. As the chosen representations are extensible, language extension itself is straightforward. We will discuss a more advanced example of language composition in the next section. 4. Embedding the Lambda Calculus Both the region and vector languages are uni-typed, so combining them resulted in a language of two types.
However, the design also scales to more complex types in the embedded language. A prominent language with a more demanding type system is the simply-typed lambda calculus with its inductive construction of arrow types. We will therefore briefly sketch how the lambda calculus can be represented.² Introducing the lambda calculus also serves another purpose: as a showcase of how to handle name binding in the embedded language. In this section, we will first introduce a language interface for the typed lambda calculus using higher-order abstract syntax (HOAS) [23]. We will then show how to integrate it with the region language interface. Next, we will present an explicit term representation based on HOAS. Discussing the shortcomings of this representation, we will motivate a De Bruijn index representation for the untyped lambda calculus, which we will briefly present.

² The full code is in the accompanying code of the paper.

```scala
trait EvalExtRegion extends EvalRegion with ExtRegionLI {
  type Vector <: (Double, Double)
  def scale(r : Region, v : Vector) : Region =
    (x, y) ⇒ r(x / v._1, y / v._2)
}

object EvalRegionWithVector extends EvalExtRegion with ExtRegionIVisitor {
  type Vector = (Double, Double)
  type VRep = (Double, Double)
  def interpretVector(v : VRep) = v
}

trait EvalRegionWithVecAST {
  val vecAST : VectorAST
  import vecAST._
  trait Eval extends EvalExtRegion with ExtRegionIVisitor {
    type Vector = (Double, Double)
    type VRep = VectorExp
    def interpretVector(v : VRep) = v.acceptI(evalVector)
  }
}

object EvalRegionWithVecASTSealed extends EvalRegionWithVecAST {
  val vecAST = VectorASTSealed
  object Eval extends super.Eval {
    val evalVector = new EvalVector {}
  }
}
```

**Figure 9.** Two evaluators based on the internal visitor pattern

```scala
trait THoasLI {
  type Rep[_]
  type VRep[_]
  def vr[T](x : VRep[T]) : Rep[T]
  def lam[S, T](f : VRep[S] ⇒ Rep[T]) : Rep[S ⇒ T]
  def app[S, T](fun : Rep[S ⇒ T], param : Rep[S]) : Rep[T]
}
```

**Figure 10.** Language interface for the lambda calculus in higher-order abstract syntax
The language interface is shown in Fig. 10. We use the type constructor Rep[T] to represent lambda calculus expressions of type T, and VRep[T] for variables of type T. Lambda calculus terms are either variables (constructed with vr), lambda abstractions (lam), or applications (app). Lambda abstractions make use of HOAS, i.e., we use function literals in Scala to represent lambda abstraction. An example term is lam((x : VRep[Int]) ⇒ vr(x)), which represents the identity function on integers. The Scala type checker is not able to infer the type of the parameter x; therefore, we have to specify it explicitly. Some related works have proposed another representation that omits the vr constructor and the separate representation for variable types [2, 6]. That representation, however, does not give rise to a term representation [24]. Our representation can be regarded as a generalization of [33] to a typed representation. 4.1 A Term Representation for the Lambda Calculus A term representation for the lambda calculus is shown in Fig. 11. Only the constructor for lambda abstractions is shown; the others are straightforward. The main point to note is that we need a different representation for each type of variable representation. Therefore, the latter has to be supplied as a type parameter to THoasExp. The second type parameter T is a type index for the corresponding lambda calculus expression. The represented domain type for an interpretation is R[T]. That means that R[_] is the type operator that describes the interpretation of a type and is therefore the higher-kinded type parameter of the accept methods. Note that reification (see trait ReifyToTHoas) is always bound to a specific representation type for variables. 4.2 Integrating the Lambda Calculus with Other Languages We can compose the lambda calculus with the region language and get regions as a base type in the lambda calculus and, on the other hand, the capability to use name binding in the region language.
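To illustrate how an interpretation instantiates Rep, here is a minimal, self-contained evaluator for an interface of this shape (our own sketch, assuming the signature of Fig. 10; both type constructors are instantiated with the identity, so HOAS terms evaluate directly to host-language values):

```scala
// Language interface in the shape of Fig. 10.
trait THoasLI {
  type Rep[_]
  type VRep[_]
  def vr[T](x: VRep[T]): Rep[T]
  def lam[S, T](f: VRep[S] => Rep[T]): Rep[S => T]
  def app[S, T](fun: Rep[S => T], param: Rep[S]): Rep[T]
}

// Evaluating interpretation: a term of type T is interpreted as a value of
// type T, and variables carry their values directly.
object EvalHoas extends THoasLI {
  type Rep[T] = T
  type VRep[T] = T
  def vr[T](x: VRep[T]): Rep[T] = x
  def lam[S, T](f: VRep[S] => Rep[T]): Rep[S => T] = f
  def app[S, T](fun: Rep[S => T], param: Rep[S]): Rep[T] = fun(param)
}
```

With this instantiation, the example identity term `lam((x: Int) => vr(x))` is literally the Scala identity function on `Int`.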
To this end, we extend both the region and the lambda calculus language interface, as shown in Fig. 12. We define implicit conversions (i.e., type conversions that will be inserted automatically by the Scala type checker) toRegion and fromRegion to translate between the different representations of the region language and the lambda calculus. The extended interface of the region language needs to know how a lambda calculus type is represented (FunRep[T]). In the same way, the lambda calculus interface needs knowledge about the representation of regions. The main restriction compared to the integration with the vector language is that we need an index type (i.e., a type parameter to Rep) to refer to the atomic region type in the lambda calculus representation. This cannot be Region, as the representation of functions has to be independent of a concrete interpretation domain of regions. That, however, requires that a region representation is not touched when it is transformed to a HOAS term: the parameter of fromRegion is not of type Region, and we do not extend the visitor interface with a method interpretRegion. For symmetry, we also left out interpretFunction, making the interpretations themselves responsible for the interpretation step in the other domain. The corresponding extension of the term representation is straightforward and presented in the accompanying source code. 4.3 A Term Representation Based on De Bruijn Indices Unfortunately, HOAS is not a good choice for programming interpretations that need to recursively interpret the body of a lambda abstraction. To conclude, it is possible to represent a typed lambda calculus using HOAS and compose it with other languages. However, for many program transformations it is not obvious how to implement them in this representation. Therefore, we follow [2] in performing these operations on an untyped representation based on De Bruijn indices. 5.
Discussion In this section, we will first review how the presented design meets our design goals. In the second part, we will compare the different representations that are part of our encodings. Finally, we will briefly discuss alternative encodings for the visitor pattern. 5.1 Reviewing the Design Goals First of all, the design allows for the composition of independently developed languages and their representations. We have demonstrated how the representations of several languages can be composed in Sec. 3 and Sec. 4. We have furthermore demonstrated how the interpretations compose even for distinct representations of the different languages in the evaluator example of Fig. 9. We have seen that the composed language preserves the types of the individual languages. For example, the scale operation requires a region in the first parameter and a vector in the second. The implicit conversions between regions and lambda expressions ensure that only representations of regions can be converted. Furthermore, we have demonstrated that different interpretations can be applied to the same language representation. We have shown how we can define program transformations like the region optimization that transform to the same representation of the terms and can be composed with other interpretations. The representation can be used for defining compositional interpretations (using the internal visitor pattern) and non-compositional interpretations (using the external visitor pattern). Finally, we have kept language representations and interpretations independent. This is a major difference to the Zenger/Odersky design [34]. We can seal a language representation as demonstrated, e.g., in regionASTSealed in Fig. 3, and define interpretations like those in Fig. 4 independently from it, using dependency injection in the interpretations.
5.2 Comparing the Representations So far, we have focussed the discussion on two different representations, namely external versus internal visitors. However, we are in fact dealing with four different representations:

1. the implicit term representation defined by the language interface;
2. the Church encoding expressed by the acceptI method of the internal visitor;
3. the Scott encoding expressed by the acceptE method of the external visitor;
4. the explicit AST representation that is part of both the external and the internal visitor pattern.

In the following, we discuss each of these representations. 5.2.1 The Implicit Term Representation The implicit term representation is defined by the language interface: each operator of the DSL is represented by a method declaration in the language interface, and each term is at some point mapped by a concrete interpretation to a target domain. However, this representation cannot be used as a target domain of an interpretation by itself. If we want to define a program transformation, we immediately have to compose it with an interpretation to another target domain. As a consequence, the transformation of an expression has no access to the transformations of its sub-expressions, but only to the results of their final interpretation. To overcome this, we can define the target domain to be a pair, where the first component is the intended interpretation and the second component is some information that we need for performing the program transformation. We have demonstrated this for the optimization of regions in [11], where the second component was a Boolean flag that informed us whether a region was the universal region. We could then shortcut the intended interpretation in the first component whenever the Boolean flag was true. 5.2.2 The Church Encoding It is known that the internal visitor pattern corresponds to a Church encoding [5].
The Church encoding, as well as the standard visitor pattern, are not by themselves extensible. An extensible solution has to allow for adding domains and operations to the language interface. The presented design does exactly that. Defining an interpretation using the Church encoding makes it compositional. While compositionality is certainly beneficial for reasoning about a DSL term, not every interpretation can directly be encoded in this style. However, for many non-compositional interpretations to a domain we can find a compositional interpretation to a computation of that domain. This is the core idea behind using monads to define modular denotational semantics [15]. For example, the optimization interpretation in Fig. 4 is not optimal: in the union case, if the first region is the universal region, then we could short-circuit the interpretation of the second region, as the result will be the universal region. However, as we defined the parameters call-by-value, the interpretation of the second region has already taken place. To avoid this, we could redefine the language interface to take the parameters of union as call-by-need parameters. However, if we do not want to change the language interface, we could redefine the method of the optimization interpretation to be a function from the unit type to the AST representation. In that way, we can manually control the triggering of the optimization in the sub-expressions. Another example would be a language of arithmetics, where we cannot implement a compositional evaluator to a domain of numbers that handles division-by-zero, but we could implement a compositional evaluator to a domain of computations that can fail (described by the error monad). And finally, there are many interpretations that depend on a context. One example is the interpretation that counts the occurrences of a free variable in a De Bruijn index representation of a lambda expression. Atkey et al. 
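The error-monad example can be made concrete in a few lines (a sketch of our own, assuming a hypothetical two-operator arithmetic interface in the style of the paper's language interfaces): a compositional evaluator to Double cannot signal division by zero, but a compositional evaluator to Option[Double] can.

```scala
// Hypothetical arithmetic language interface; not part of the paper's code.
trait ArithLI {
  type Num
  def lit(d: Double): Num
  def div(a: Num, b: Num): Num
}

// Compositional evaluator into a domain of computations that can fail:
// Option[Double] plays the role of the error monad here.
object SafeEval extends ArithLI {
  type Num = Option[Double]
  def lit(d: Double): Num = Some(d)
  def div(a: Num, b: Num): Num =
    for { x <- a; y <- b; if y != 0.0 } yield x / y
}
```

The evaluator stays compositional: each operator is defined purely in terms of the interpretations of its sub-expressions, and failure propagates through the monadic bind.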
[2] define this interpretation as a recursion on an explicit AST representation. But we can also express it as a fold, as shown in Fig. 14. We represent the domain as a function that maps a De Bruijn index to the number of occurrences of the variable with this index. The De Bruijn index is the context that is passed through the interpretation of the sub-expressions and is increased inside a lambda body. Another limitation of the Church encoding is that it is hard to define accessor functions for the sub-expressions of an expression. We encounter this problem if we translate the shrinking reduction implementation from [2] to one based on internal visitors, as shown in Fig. 15. The shrinking reduction [1] is an inlining operation that performs a beta-reduction in cases where a bound variable is used at most once. For simplicity, this interpretation is hard-wired to a sealed version of the AST representation for internal visitors. Furthermore, it assumes a substitution interpretation (Subst). It also does not claim to be an efficient implementation. The interesting part is the interpretation of app: it performs a pattern-matching on the interpretation of the first parameter. If it is a lambda expression, it might perform a substitution to inline the application. If we wanted to avoid using pattern-matching, we would need an accessor to the body of the lambda abstraction. The solution to this problem was discovered by Kleene, who defined the predecessor function on Church numerals by a triple construction [13] together with a projection to the first component of the triple. This trick can be generalized to arbitrary accessors on inductive data structures. However, using this method, the sub-expression has to be fully reconstructed from the bottom up, making accessors a linear-time instead of a constant-time operation. It is also not obvious how to adapt Kleene's trick to access the body of a lambda expression in a higher-order abstract syntax representation.
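The counting fold of Fig. 14 can be condensed into a self-contained sketch (our own first-order De Bruijn interface, not the paper's visitor-based code): the interpretation domain is a function from a De Bruijn index to an occurrence count, and the index of interest is shifted under each binder.

```scala
// Minimal language interface for an untyped De Bruijn lambda calculus.
trait DbLI {
  type Exp
  def vr(index: Int): Exp
  def lam(body: Exp): Exp
  def app(fun: Exp, arg: Exp): Exp
}

// Context-passing fold: an expression is interpreted as a function from a
// De Bruijn index to the number of occurrences of that variable.
object CountOccurrences extends DbLI {
  type Exp = Int => Int // De Bruijn index => occurrence count
  def vr(index: Int): Exp = i => if (i == index) 1 else 0
  def lam(body: Exp): Exp = i => body(i + 1) // the index shifts under the binder
  def app(fun: Exp, arg: Exp): Exp = i => fun(i) + arg(i)
}
```

Although the interpretation is context-dependent, it remains a compositional fold: the context is threaded through as part of the interpretation domain.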
5.2.3 The Scott Encoding Just as the internal visitor pattern corresponds to a Church encoding, Oliveira et al. [21] have pointed out that the external visitor pattern corresponds to a Parigot encoding [22], which is a typed version of the Scott encoding. Like the Church encoding, the Scott encoding itself is not extensible. The Scott encoding makes it easy to define accessor operations on inductive data structures. In effect, that means that we can implement everything with the Scott encoding that we can implement by pattern matching. The interpretations are not guaranteed to be compositional. Using the presented version of the external visitor pattern has one core advantage over directly accessing the AST representation via pattern-matching: the interpretation stays extensible. If we extend a language by a new operation, we simply have to define the interpretation for this operation. If we had used pattern-matching instead, the interpretation for the extended language version would have to override the original interpretation in order to take the extended cases into consideration. 5.2.4 The Explicit AST Representation After this discussion, there seems to be no place where the explicit AST representation is really needed. Nevertheless, we used it in many examples, not as an alternative encoding, but inside the internal and external visitors. The representation is useful when we want to analyze the structure of the sub-expressions, typically after applying a code transformation on them.¹ Writing another interpretation that performs this analysis is in many cases cumbersome when a pattern match is so much easier. However, it should be kept in mind that this could conflict with the extensibility of the interpretation, which has to rely on correct defaults (see also [35]).

¹ This could of course include the conversion from region language terms.
5.3 Alternative Encodings for the Visitor Pattern While the visitor pattern in its basic version does not accommodate well for extending data types, there are several approaches to make the visitor pattern extensible and in that way give a solution to the expression problem [19, 30, 34]. We have adapted the solution presented in [34] to get an extensible visitor pattern as the basis of our design. We have modified it to separate representation from visitors and to make use of higher-kind type members [16]. Furthermore, we have implemented an internal visitor pattern variant for it. In Sec. 5.5 of [18], Oliveira presents a very similar design for an extensible visitor pattern in the context of data type generic programming. Instead of defining two different accept methods for external and internal visitors, this design merges both by using an additional type parameter and an additional implicit parameter. In that sense, it trades simplicity for genericity, but that choice is orthogonal to our design goals. Oliveira’s design is most similar to our representation of the lambda calculus in that it represents types uniformly by applying an abstract type constructor to them as we apply Rep to the types in the lambda calculus language. As a consequence, each interpretation instantiates this type constructor which is then uniformly applied to all represented types. This uniform representation, however, prevents mapping different domains like regions and vectors to different representation types. Common to both designs is the problem that combining several extensions of a language requires combining the visitors explicitly [34]. An interesting alternative encoding of visitors [19] overcomes this limitation and allows constructing internal and external visitors in a very customizable way. This gives the user fine-grained control over which language operators to include. 
A core advantage of this approach is that it renders the dependency injections that inform each interpretation about the exact language used (see, e.g., value regAST in Fig. 4) unnecessary. Instead, an interpretation of an extended language can always be used as an interpretation for a more restricted language. However, if we want to avoid dependency injection even when composing languages, the visitors and the constructors of the overarching language constructs have to take more type parameters. To integrate these visitors with the original language components, we had to curry the type parameters of the visitors by using Scala’s encoding of anonymous type functions [16]. As a result, the type parameters got very cluttered and Scala’s type checker was clearly pushed to its limits. However, if anonymous type functions get direct support in Scala, this might be the preferable approach. Finally, it would be interesting to see if an analogous design can be expressed in a functional programming language. Oliveira et al. [20] describe a solution to the expression problem in the field of generic programming which may be a promising starting point for a design using Haskell. 6. Related Work Espinosa [9] presented an (untyped) design for denotational semantics where the language interface is decoupled from the (compositional) interpretations. The interpretations Espinosa uses are implemented using monads and monad transformers, making them extensible with respect to different kinds of computational capabilities. Term representations as the domain of an interpretation are not considered. Carette et al. [6] have been using a Church-like encoding for the lambda calculus based on a typed version of [14]. They mainly focus on a MetaOCaml implementation. 
In this implementation, the terms themselves are not written in an explicit Church encoding in the sense of lambda abstracting over the interpretation of the lam and app terms, but instead they are encapsulated in functions that provide this abstraction. This prevents using a Church encoding as the target domain of an interpretation, although in their type-class-based Haskell implementation this would be possible. However, the encoding allows applying different interpretations with different target domains. For defining program transformations like partial evaluation, they use quoted MetaOCaml terms. In Haskell, they use an explicit AST representation. They do not discuss extension or composition of languages, but restrict themselves to the presentation of a lambda calculus with a fixed set of arithmetics and Boolean operations. In our own previous work [11], we have used an approach similar to [6] to representing terms in Scala that allows for easy composition of languages and interpretations. The definition of language interfaces as traits that declare the signature of an algebra was developed there. However, we only used the implicit term representation and did not consider the possibility to use a Church encoding of the target domain. As a consequence, program transformations like the optimization of regions could only be expressed by coupling them with an interpretation to another target domain, as has been discussed in Sec. 5. Atkey et al. [2] adapt the type-class based representation of [6] written in Haskell and present a typed and an untyped variant of it. They show that this representation is extensible and composable. On the other hand, they argue that an unembedding to an explicit data structure representation of ASTs using De Bruijn indices is necessary for some interpretations. This AST representation, however, is – in contrast to our representation – neither extensible nor composable. 
There is plenty of literature on using higher-order abstract syntax to represent the lambda calculus in a host language beyond the work already mentioned (recent articles are [10, 17, 25, 27, 29, 33]). Many of them use HOAS on an explicit data structure representation and discuss issues like adequacy of the representation that are beyond the scope of this paper. Our encoding of HOAS has been inspired by the untyped variant discussed in [33]. Stump [29] introduces a new meta-programming language, Archon, based on the untyped lambda calculus but extended with direct support for structural reflection, using HOAS to represent lambda abstraction. In contrast to approaches built on top of explicit encodings of the object language, Archon introduces explicit language constructs for opening lambda expressions, along with other language constructs for working on variables, and in this way overcomes the restrictions of HOAS representations. Buchlovsky and Thielecke [5] have analyzed the type theory of the visitor pattern. They observed the difference between the internal visitor pattern and the external visitor pattern and elaborated the correspondence of the former to the Böhm-Berarducci encoding [3]. Oliveira et al. [21] observed that external visitors correspond to the Parigot encoding [22]. These correspondences make the visitor pattern an ideal candidate for defining compositional and non-compositional interpretations on an AST representation in object-oriented languages. Keeping language representations and their interpretations as extensible components has been the eponymous example for the discussion of the expression problem [32], i.e., the problem of extending data types and the operations on them independently. We have adapted and extended the design from [34]. Our work has a different focus, though. The expression problem is about incrementally extending individual languages, not about composing independently developed languages and their representations.
More importantly, while the expression problem has been described for untyped expressions, our design had to accommodate a typed setting. Finally, there are other approaches to implementing embedded languages that use external tools to integrate the embedded language into the host language. Examples are the attribute-grammar-based approach of [31] and the term rewriting approach of MetaBorg using Stratego/XT [4]. On the other hand, Kiama [28] is a project to integrate such language processing tools as Scala DSELs for code generation from external DSLs. It may be worthwhile to inquire whether the ideas of Kiama can be merged with our design in order to get language tools as modular DSEL components for processing DSELs. 7. Conclusion We have presented a design for integrating extensible term representations into a typed DSEL approach. We showed how to use these term representations as target domains for program transformations on DSL terms and as starting points for writing non-compositional interpretations. Furthermore, we demonstrated how several DSELs can be composed in a type-preserving way. We discussed name binding by introducing the lambda calculus as a DSEL, together with two representations: a typed HOAS-based and an untyped De-Bruijn-index-based representation. Finally, we have discussed how the presented design accommodates three kinds of interpretations: compositional interpretations, interpretations based on explicit AST traversal, and interpretations based on AST inspection. We have compared the advantages and disadvantages of these three styles of interpretation. In the future, we want to further investigate typed lambda calculus representations and the limits of representability in a DSEL approach. Acknowledgments The authors would like to thank Adriaan Moors, Michael Achenbach, and the anonymous reviewers for their insightful comments and suggestions that helped improve the presentation of the paper. References
Java Technologies
Lecture VIII

Valdas Rapševičius
Vilnius University, Faculty of Mathematics and Informatics
2014.05.15

You will learn the Java Persistence APIs
- **JDBC (java.sql, javax.sql)**
  - Connection
  - Statement
  - ResultSet
  - RowSet
- **JPA (javax.persistence)**
  - Mapping Entities
  - PersistenceUnit, EntityManager
  - Queries: JPQL, Criteria API (+ Metamodel)
  - Locking, Caching

Database
- Two-tier Architecture for Data Access.
- Three-tier Architecture for Data Access.

• The JDBC API is a Java API that can access any kind of tabular data, especially data stored in a Relational Database
• JDBC helps you to write Java applications that manage these 3 programming activities:
  – Connect to a data source, like a database
  – Send queries and update statements to the database
  – Retrieve and process the results received from the database in answer to your query

`interface java.sql.Connection` — a connection (session) with a specific database. SQL statements are executed and results are returned within the context of a connection.

`interface java.sql.Statement` — the object used for executing a static SQL statement and returning the results it produces.
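`Connection`, `Statement`, and `ResultSet` are all `AutoCloseable`, which is why the JDBC examples in this lecture use try-with-resources. A minimal plain-Java sketch (no database involved; the `Resource` class here is a hypothetical stand-in for a JDBC object) showing that try-with-resources closes resources automatically, in reverse order of acquisition:

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static final List<String> log = new ArrayList<>();

    // Stand-in for a JDBC Connection/Statement/ResultSet
    public static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; log.add("open " + name); }
        @Override public void close() { log.add("close " + name); }
    }

    public static List<String> run() {
        log.clear();
        // Nested resources, as in Connection -> Statement
        try (Resource conn = new Resource("connection");
             Resource stmt = new Resource("statement")) {
            log.add("work");
        } // both closed here, statement before connection
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Because close-out is automatic even when an exception is thrown inside the block, the explicit `finally { rs.close(); }` boilerplate of pre-Java-7 JDBC code is unnecessary.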
`public interface PreparedStatement extends Statement` — an object that represents a precompiled SQL statement.

`public interface CallableStatement extends PreparedStatement` — the interface used to execute SQL stored procedures.

`interface java.sql.ResultSet` — a table of data representing a database result set, usually generated by executing a statement that queries the database. Subinterfaces: CachedRowSet, FilteredRowSet, JdbcRowSet, JoinRowSet, RowSet, SyncResolver, WebRowSet

JDBC: Main Classes (2)

```java
// Optional properties
Properties properties = new Properties();
properties.put("user", username);
properties.put("password", password);

// Look up the Driver class
Class.forName("com.mysql.jdbc.Driver").newInstance();

// Database-specific URL
String url = "jdbc:mysql://localhost/test?";

// Connect to the database, use the Connection
try (Connection conn = DriverManager.getConnection(url)) { ... }
try (Connection conn = DriverManager.getConnection(url, username, password)) { ... }
try (Connection conn = DriverManager.getConnection(url, properties)) { ... }

// From the JNDI container
Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/mydatabase");
try (Connection con = ds.getConnection(username, password)) { ... }
try (Connection con = ds.getConnection()) { ... }
```

```java
String query = "select COF_NAME, PRICE, SALES, TOTAL from COFFEES";
// Open statement
try (Statement stmt = conn.createStatement()) {
    // Get result set
    try (ResultSet rs = stmt.executeQuery(query)) {
        // Loop over the results
        while (rs.next()) {
            String coffeeName = rs.getString("COF_NAME");
            Float price = rs.getFloat(2);
            Integer sales = rs.getInt("SALES");
            Integer total = rs.getInt("TOTAL");
            System.out.println(coffeeName + " " + price + " " + sales + " " + total);
        }
    }
} catch (SQLException e) {
    e.printStackTrace();
}
```

```java
String selectSql = "SELECT * FROM COFFEES";
// createStatement(int resultSetType, int resultSetConcurrency)
try (Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE)) {
    try (ResultSet rs = stmt.executeQuery(selectSql)) {
        while (rs.next()) {
            Float price = rs.getFloat("PRICE");
            rs.updateFloat("PRICE", price * 1.1f);
            rs.updateRow();
        }
    }
} catch (SQLException e) {
    e.printStackTrace();
}
```

```java
String selectSql = "SELECT * FROM COFFEES";
try (Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE)) {
    try (ResultSet rs = stmt.executeQuery(selectSql)) {
        rs.moveToInsertRow();
        rs.updateString("COF_NAME", coffeeName);
        rs.updateFloat("PRICE", price);
        rs.updateInt("SALES", sales);
        rs.updateInt("TOTAL", total);
        rs.insertRow();
        rs.beforeFirst();
    }
} catch (SQLException e) {
    e.printStackTrace();
}
```

PreparedStatement

```java
conn.setAutoCommit(false);
String updateSql = "update COFFEES set SALES=? where COF_NAME=?";
try (PreparedStatement stmt = conn.prepareStatement(updateSql)) {
    // First update
    stmt.setInt(1, 100);
    stmt.setString(2, "French_Roast");
    stmt.executeUpdate();
    // Second update
    stmt.setInt(1, 101);
    stmt.setString(2, "Espresso");
    stmt.executeUpdate();
    conn.commit();
} catch (SQLException ex) {
    ex.printStackTrace();
    conn.rollback();
}
```

```java
conn.setAutoCommit(false);
try (Statement stmt = conn.createStatement()) {
    stmt.addBatch("INSERT INTO COFFEES VALUES('Amaretto', 49, 9.99, 0, 0)");
    stmt.addBatch("INSERT INTO COFFEES VALUES('Hazelnut', 49, 9.99, 0, 0)");
    stmt.addBatch("INSERT INTO COFFEES VALUES('Amaretto_decaf', 49, 10.99, 0, 0)");
    stmt.addBatch("INSERT INTO COFFEES VALUES('Hazelnut_decaf', 49, 10.99, 0, 0)");
    int[] updateCounts = stmt.executeBatch();
    conn.commit();
} catch (SQLException ex) {
    ex.printStackTrace();
    conn.rollback();
}
```

JDBC: RowSet
- A JDBC RowSet object holds tabular data in a way that makes it more flexible and easier to use than a result set
- Oracle has defined five RowSet interfaces for some of the more popular uses of a RowSet, and standard reference implementations are available for these RowSet interfaces
- Programmers are free to write their own versions of the `javax.sql.RowSet` interface, to extend the implementations of the five RowSet interfaces, or to write their own implementations

RowSet: Types
- **Connected** RowSet Objects - Only one of the standard RowSet implementations is a connected RowSet object: `JdbcRowSet`. Always being connected to a database, a `JdbcRowSet` object is most similar to a `ResultSet` object and is often used as a wrapper to make an otherwise non-scrollable and read-only `ResultSet` object scrollable and updatable.
- **Disconnected** RowSet Objects - A `CachedRowSet` object has all the capabilities of a `JdbcRowSet` object plus it can also do the following: - Obtain a connection to a data source and execute a query - Read the data from the resulting `ResultSet` object and populate itself with that data - Manipulate data and make changes to data while it is disconnected - Reconnect to the data source to write changes back to it - Check for conflicts with the data source and resolve those conflicts - A `WebRowSet` object has all the capabilities of a `CachedRowSet` object plus it can also do the following: - Write itself as an XML document - Read an XML document that describes a `WebRowSet` object - A `JoinRowSet` object has all the capabilities of a `WebRowSet` object (and therefore also those of a `CachedRowSet` object) plus it can also do the following: - Form the equivalent of a SQL JOIN without having to connect to a data source - A `FilteredRowSet` object likewise has all the capabilities of a `WebRowSet` object (and therefore also a `CachedRowSet` object) plus it can also do the following: - Apply filtering criteria so that only selected data is visible. This is equivalent to executing a query on a RowSet object without having to use a query language or connect to a data source. 
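The `FilteredRowSet` idea above — filtering criteria applied locally, so only selected data is visible, without a query language or a connection to a data source — can be sketched in plain Java (no `javax.sql.rowset` types involved; the `Row` record is a hypothetical stand-in for one row of the COFFEES table):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilteredRowsDemo {
    // Hypothetical stand-in for one row of the COFFEES table
    public record Row(String cofName, float price) {}

    // Only rows matching the filter are "visible", as with FilteredRowSet
    public static List<Row> visibleRows(List<Row> rows, Predicate<Row> filter) {
        return rows.stream().filter(filter).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("Colombian", 7.99f),
            new Row("Espresso", 9.99f),
            new Row("French_Roast", 8.99f));
        // Show only coffees cheaper than 9.00
        System.out.println(visibleRows(rows, r -> r.price() < 9.0f));
    }
}
```

The real `FilteredRowSet` expresses the criteria through its `Predicate` interface (`javax.sql.rowset.Predicate`), but the disconnected, in-memory evaluation is the same idea.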
RowSet: Create

```java
JdbcRowSet rs = new JdbcRowSetImpl();
rs.setCommand("select * from COFFEES");
rs.setUrl("jdbc:myDriver:myAttribute");
rs.setUsername(username);
rs.setPassword(password);
rs.execute();

RowSetFactory rsf = RowSetProvider.newFactory();
JdbcRowSet rs = rsf.createJdbcRowSet();
rs.setUrl("jdbc:myDriver:myAttribute");
rs.setUsername(username);
rs.setPassword(password);
rs.setCommand("select * from COFFEES");
rs.execute();
```

RowSet: Operations

```java
rs.absolute(3);
rs.updateFloat("PRICE", 10.99f);
rs.updateRow();

rs.moveToInsertRow();
rs.updateString("COF_NAME", "HouseBlend");
rs.updateFloat("PRICE", 7.99f);
rs.insertRow();

rs.last();
rs.deleteRow();

try {
    // CachedRowSet
    crs.acceptChanges();
} catch (SyncProviderException spe) {
    SyncResolver resolver = spe.getSyncResolver();
}
```

## JDBC: Advanced Data Types

<table> <thead> <tr> <th>Advanced Data Type</th> <th>getDataType method</th> <th>setDataType method</th> <th>updateDataType method</th> </tr> </thead> <tbody> <tr> <td>BLOB</td> <td>getBlob</td> <td>setBlob</td> <td>updateBlob</td> </tr> <tr> <td>CLOB</td> <td>getClob</td> <td>setClob</td> <td>updateClob</td> </tr> <tr> <td>NCLOB</td> <td>getNClob</td> <td>setNClob</td> <td>updateNClob</td> </tr> <tr> <td>ARRAY</td> <td>getArray</td> <td>setArray</td> <td>updateArray</td> </tr> <tr> <td>XML</td> <td>getSQLXML</td> <td>setSQLXML</td> <td>updateSQLXML</td> </tr> <tr> <td>Structured type</td> <td>getObject</td> <td>setObject</td> <td>updateObject</td> </tr> <tr> <td>REF(structured type)</td> <td>getRef</td> <td>setRef</td> <td>updateRef</td> </tr> <tr> <td>ROWID</td> <td>getRowId</td> <td>setRowId</td> <td>updateRowId</td> </tr> <tr> <td>DISTINCT</td> <td>getBigDecimal</td> <td>setBigDecimal</td> <td>updateBigDecimal</td> </tr> <tr> <td>DATALINK</td> <td>getURL</td> <td>setURL</td> <td>updateURL</td> </tr> </tbody> </table>

• Object-relational mapping (ORM) is software that converts data between incompatible type systems in
OO programming languages.
• Java ORM JPA2 projects:
  – Hibernate (LGPL)
  – EclipseLink (EPL)
  – Oracle TopLink (Oracle)
  – Apache OpenJPA (ALv2.0)
  – MyBatis (ALv2.0)

The Java Persistence API provides Java developers with an object/relational mapping facility for managing relational data in Java applications.
- JPA 1.0 at May 2006 (JSR 220)
- JPA 2.0 at December 2009 (JSR-000317)
- JPA 2.1 at April 2013 (JSR-000338)

Java Persistence consists of 4 areas:
- Object/relational mapping metadata
- The Java Persistence API
- The Query language
- The Java Persistence Criteria API

• An entity is a lightweight persistence domain object. Typically, an entity represents a table in a relational database, and each entity instance corresponds to a row in that table. The primary programming artifact of an entity is the entity class, although entities can use helper classes.
• The persistent state of an entity is represented through either persistent fields or persistent properties. These fields or properties use object/relational mapping annotations to map the entities and entity relationships to the relational data in the underlying data store.
• An entity class must follow these requirements:
  – The class must be annotated with the `javax.persistence.Entity` annotation.
  – The class must have a public or protected, no-argument constructor. The class may have other constructors.
  – The class must not be declared final. Methods and persistent instance variables must not be declared final.
  – If an entity instance is passed by value as a detached object, such as through a session bean's remote business interface, the class must implement the `Serializable` interface.
  – Entities may extend both entity and non-entity classes, and non-entity classes may extend entity classes.
  – Persistent instance variables must be declared private, protected, or package-private and can be accessed directly only by the entity class's methods. Clients must access the entity's state through accessor or business methods.
```java
@Entity
@Table(name="CUST", schema="RECORDS")
public class Customer { ... }

@Entity
@Table(name="CUST")
@SecondaryTable(name="CUST_DETAIL", pkJoinColumns={
    @PrimaryKeyJoinColumn(name="CUST_ID"),
    @PrimaryKeyJoinColumn(name="CUST_TYPE")})
public class Customer { ... }

@Entity
@NamedQuery(name="findAllCustomersWithName",
    query="SELECT c FROM Customer c WHERE c.name = :custName")
public class Customer { ... }
```

Entity: Primary Key

Every entity must have a primary key (PK). An entity may have either a simple or a composite primary key. Simple primary keys use the `javax.persistence.Id` annotation to denote the primary key property or field.

```java
package lt.vu.mif.model;

import java.io.Serializable;
import javax.persistence.*;

@Entity
@Table(name="FLIGHTS")
public class Flight implements Serializable {
    Long id;

    @Id
    @Column(name="FID")
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}
```

Every non-static, non-transient property (field or method, depending on the access type) of an entity is considered persistent, unless you annotate it as `@Transient`. Not having an annotation for your property is equivalent to the appropriate `@Basic` annotation. The `@Basic` annotation allows you to declare the fetching strategy for a property:

```java
private String firstname;

String getName() { ... }

@Basic
@Column(name="NAME", nullable=false, length=128)
String getName() { ... }   // persistent property

@Basic(fetch = FetchType.LAZY)
@Column(name="COMMENT", nullable=true, length=512)
private String comment;

@Temporal(TemporalType.TIME)
java.util.Date getDepartureTime() { ... }

// enum persisted as String in the database
@Enumerated(EnumType.STRING)
Starred getGender() { ... }
```
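The default rule — every non-static, non-transient property is persistent — can be illustrated with plain reflection. This is a sketch, not provider code (only the `transient` modifier is checked; a real provider also honors the `@Transient` annotation and the access type):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class PersistentFieldsDemo {
    // Sample entity-like class
    public static class Flight {
        Long id;
        String destination;
        transient int counter;        // not persistent
        static String providerName;   // not persistent
    }

    // Fields a provider would consider persistent by default
    public static List<String> persistentFields(Class<?> entity) {
        List<String> names = new ArrayList<>();
        for (Field f : entity.getDeclaredFields()) {
            int m = f.getModifiers();
            if (!Modifier.isStatic(m) && !Modifier.isTransient(m)) {
                names.add(f.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(persistentFields(Flight.class));
    }
}
```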
The `transient` modifier and the `@Transient` annotation exclude a property from persistence:

```java
public transient int counter;

@Transient
private String comment;

@Transient
String getName() { ... }
```

Multiplicity

- **One-to-one**: Each entity instance is related to a single instance of another entity. For example, to model a physical warehouse in which each storage bin contains a single widget, StorageBin and Widget would have a one-to-one relationship. One-to-one relationships use the `javax.persistence.OneToOne` annotation on the corresponding persistent property or field.
- **One-to-many**: An entity instance can be related to multiple instances of the other entities. A sales order, for example, can have multiple line items. In the order application, Order would have a one-to-many relationship with LineItem. One-to-many relationships use the `javax.persistence.OneToMany` annotation on the corresponding persistent property or field.
- **Many-to-one**: Multiple instances of an entity can be related to a single instance of the other entity. This multiplicity is the opposite of a one-to-many relationship. In the example just mentioned, the relationship to Order from the perspective of LineItem is many-to-one. Many-to-one relationships use the `javax.persistence.ManyToOne` annotation on the corresponding persistent property or field.
- **Many-to-many**: The entity instances can be related to multiple instances of each other. For example, each college course has many students, and every student may take several courses. Therefore, in an enrollment application, Course and Student would have a many-to-many relationship. Many-to-many relationships use the `javax.persistence.ManyToMany` annotation on the corresponding persistent property or field.

Bidirectional Relationships

- In a bidirectional relationship, each entity has a relationship field or property that refers to the other entity.
Through the relationship field or property, an entity class’s code can access its related object. Bidirectional relationships must follow these rules:
- The inverse side of a bidirectional relationship must refer to its owning side by using the mappedBy element of the @OneToOne, @OneToMany, or @ManyToMany annotation. The mappedBy element designates the property or field in the entity that is the owner of the relationship.
- The many side of many-to-one bidirectional relationships must not define the mappedBy element. The many side is always the owning side of the relationship.
- For one-to-one bidirectional relationships, the owning side corresponds to the side that contains the corresponding foreign key.
- For many-to-many bidirectional relationships, either side may be the owning side.

Unidirectional Relationships

- In a unidirectional relationship, only one entity has a relationship field or property that refers to the other. For example, LineItem would have a relationship field that identifies Product, but Product would not have a relationship field or property for LineItem. In other words, LineItem knows about Product, but Product doesn’t know which LineItem instances refer to it.

```java
@Entity
@Table(name="DEPARTMENT")
public class Department {
    @Id @Column(name="ID")
    private Integer id;
    ...
    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="HEAD_ID")
    private Person head;
    ...
}

@Entity
@Table(name="PERSONS")
public class Person {
    @Id
    private Integer id;
    ...
    @OneToOne(mappedBy="head")
    private Department headOfDepartment;
    ...
}
```

@OneToMany / @ManyToOne

```java
@Entity
@Table(name="DEPARTMENT")
public class Department {
    @Id @Column(name="ID")
    private Integer id;
    ...
    @OneToMany(mappedBy="department")
    private List<Person> staff;
    ...
}

@Entity
@Table(name="PERSONS")
public class Person {
    @Id
    private Integer id;
    ...
    @ManyToOne
    @JoinColumn(name="DEPT_ID")
    private Department department;
    ...
}
```

```java
@Entity
@Table(name="PROJECTS")
public class Project {
    ...
    @ManyToMany
    @JoinTable(name="PROJECT_PERSONS",
        joinColumns={
            @JoinColumn(name="PROJECT_ID", referencedColumnName="ID")},
        inverseJoinColumns={
            @JoinColumn(name="PERSON_ID", referencedColumnName="ID")})
    private List<Person> persons;
    ...
}

@Entity
@Table(name="PERSONS")
public class Person {
    ...
    @ManyToMany(mappedBy="persons")
    private List<Project> projects;
    ...
}
```

The `javax.persistence.CascadeType` enumerated type defines the cascade operations that are applied in the cascade element of the relationship annotations.

<table> <thead> <tr> <th>Cascade Operation</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>ALL</td> <td>All cascade operations will be applied to the parent entity’s related entity. All is equivalent to specifying cascade={DETACH, MERGE, PERSIST, REFRESH, REMOVE}</td> </tr> <tr> <td>DETACH</td> <td>If the parent entity is detached from the persistence context, the related entity will also be detached.</td> </tr> <tr> <td>MERGE</td> <td>If the parent entity is merged into the persistence context, the related entity will also be merged.</td> </tr> <tr> <td>PERSIST</td> <td>If the parent entity is persisted into the persistence context, the related entity will also be persisted.</td> </tr> <tr> <td>REFRESH</td> <td>If the parent entity is refreshed in the current persistence context, the related entity will also be refreshed.</td> </tr> <tr> <td>REMOVE</td> <td>If the parent entity is removed from the current persistence context, the related entity will also be removed.</td> </tr> </tbody> </table>

```java
@OneToMany(cascade=REMOVE, mappedBy="customer")
public Set<Order> getOrders() { return orders; }
```

Embeddable classes are used to represent the state of an entity but don’t have a persistent identity of their own, unlike entity classes. Instances of an embeddable class share the identity of the entity that owns it. Embeddable classes exist only as the state of another entity. An entity may have single-valued or collection-valued embeddable class attributes.
Embeddable classes have the same rules as entity classes but are annotated with the `javax.persistence.Embeddable` annotation instead of `@Entity`. ```java @Embeddable public class ZipCode { String zip; String plusFour; ... } ``` ```java @Entity public class Address { @Id protected long id; String street; String city; @Embedded ZipCode zipCode; String country; ... } ``` Inheritance **Entity:** ```java @Entity public abstract class Employee { @Id protected Integer uid; ... } @Entity public class FullTimeEmployee extends Employee { protected Integer salary; ... } @Entity public class PartTimeEmployee extends Employee { protected Float hourlyWage; ... } ``` **Not entity:** ```java @MappedSuperclass public class Employee { @Id protected Integer employeeId; ... } @Entity public class FullTimeEmployee extends Employee { protected Integer salary; ... } @Entity public class PartTimeEmployee extends Employee { protected Float hourlyWage; ... } ``` Inheritance Strategy You can configure how the Java Persistence provider maps inherited entities to the underlying datastore by decorating the root class of the hierarchy with the annotation `javax.persistence.Inheritance`. The following mapping strategies are used to map the entity data to the underlying database: - **The Single Table per Class Hierarchy Strategy** With this strategy, which corresponds to the default `InheritanceType.SINGLE_TABLE`, all classes in the hierarchy are mapped to a single table in the database. This table has a **discriminator column** containing a value that identifies the subclass to which the instance represented by the row belongs. - **The Table per Concrete Class Strategy** In this strategy, which corresponds to `InheritanceType.TABLE_PER_CLASS`, each concrete class is mapped to a separate table in the database. All fields or properties in the class, including inherited fields or properties, are mapped to columns in the class’s table in the database. 
- **The Joined Subclass Strategy** In this strategy, which corresponds to `InheritanceType.JOINED`, the root of the class hierarchy is represented by a single table, and each subclass has a separate table that contains only those fields specific to that subclass. That is, the subclass table does not contain columns for inherited fields or properties. The subclass table also has a column or columns that represent its primary key, which is a foreign key to the primary key of the superclass table.

InheritanceType.SINGLE_TABLE

```java
@Entity
@Table(name="EMPLOYEES")
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="ETYPE")
public abstract class Employee {
    ...
    @Id
    private Integer id;
    ...
}

@Entity
@DiscriminatorValue("FT")
public class Fulltime extends Employee { ... }

@Entity
@DiscriminatorValue("FL")
public class Freelance extends Employee { ... }
```

InheritanceType.JOINED

```java
@Entity
@Table(name="EMPLOYEES")
@Inheritance(strategy=InheritanceType.JOINED)
@DiscriminatorColumn(name="ETYPE")
public abstract class Employee {
    ...
    @Id
    private Integer id;
    ...
}

@Entity
@DiscriminatorValue("FT")
@Table(name="FULLTIMES")
public class Fulltime extends Employee { ... }

@Entity
@DiscriminatorValue("FL")
@Table(name="FREELANCES")
public class Freelance extends Employee { ... }
```

InheritanceType.TABLE_PER_CLASS

```java
@Entity
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)
public abstract class Employee {
    ...
    @Id
    private Integer id;
    ...
}

@Entity
@Table(name="FULLTIMES")
public class Fulltime extends Employee { ... }

@Entity
@Table(name="FREELANCES")
public class Freelance extends Employee { ... }
```

JPA Architecture

(Diagram of the core `javax.persistence` types: Persistence, EntityManagerFactory, EntityManager, EntityTransaction, Query, Entity.)

Persistence Unit

- A persistence unit defines a set of all entity classes that are managed by EntityManager instances in an application. This set of entity classes represents the data contained within a single data store.
- Persistence units are defined by the `persistence.xml` configuration file

```xml
<persistence>
  <persistence-unit name="OrderManagement">
    <description>This unit manages orders and customers.
      It does not rely on any vendor-specific features and can
      therefore be deployed to any persistence provider.</description>
    <jta-data-source>jdbc/MyOrderDB</jta-data-source>
    <jar-file>MyOrderApp.jar</jar-file>
    <class>com.widgets.Order</class>
    <class>com.widgets.Customer</class>
  </persistence-unit>
</persistence>
```

The `javax.persistence.EntityManager` API creates and removes persistent entity instances, finds entities by the entity’s primary key, and allows queries to be run on entities.

- **Container-Managed Entity Managers** An EntityManager instance’s persistence context is automatically propagated by the container to all application components that use the EntityManager instance within a single Java Transaction API (JTA) transaction. To obtain an EntityManager instance, inject the entity manager into the application component:

```java
@PersistenceContext
EntityManager em;
```

- **Application-Managed Entity Managers** With an application-managed entity manager, on the other hand, the persistence context is not propagated to application components, and the lifecycle of EntityManager instances is managed by the application.

```java
EntityManagerFactory emf = Persistence.createEntityManagerFactory("JpaTest");
EntityManager em = emf.createEntityManager();
...
em.close();
emf.close();
```

Managing Entities

```java
@PersistenceContext
EntityManager em;
```

• Finding Entities Using the EntityManager

```java
Customer cust = em.find(Customer.class, custID);
```

• Persisting Entity Instances

```java
Customer cust = new Customer();
em.persist(cust);
```

• Removing Entity Instances

```java
Order order = em.find(Order.class, orderId);
em.remove(order);
```

• Synchronizing Entity Data to the Database — to force synchronization of the managed entity to the data store, invoke the flush method of the EntityManager instance.
If the entity is related to another entity and the relationship annotation has the cascade element set to PERSIST or ALL, the related entity's data will be synchronized with the data store when flush is called. If the entity is removed, calling flush will remove the entity data from the data store. Querying Entities The Java Persistence API provides the following methods for querying entities. • The Java Persistence query language (JPQL) is a simple, string-based language similar to SQL used to query entities and their relationships. • The Criteria API is used to create typesafe queries using Java programming language APIs to query for entities and their relationships. Both JPQL and the Criteria API have advantages and disadvantages. • Just a few lines long, JPQL queries are typically more concise and more readable than Criteria queries. Developers familiar with SQL will find it easy to learn the syntax of JPQL. JPQL named queries can be defined in the entity class using a Java programming language annotation or in the application’s deployment descriptor. JPQL queries are not typesafe, however, and require a cast when retrieving the query result from the entity manager. This means that type-casting errors may not be caught at compile time. JPQL queries don’t support open-ended parameters. • Criteria queries allow you to define the query in the business tier of the application. Although this is also possible using JPQL dynamic queries, Criteria queries provide better performance because JPQL dynamic queries must be parsed each time they are called. Criteria queries are typesafe and therefore don’t require casting, as JPQL queries do. The Criteria API is just another Java programming language API and doesn’t require developers to learn the syntax of another query language. 
Criteria queries are typically more verbose than JPQL queries and require the developer to create several objects and perform operations on those objects before submitting the query to the entity manager.

JPQL Examples

select_statement ::= select_clause from_clause [where_clause] [groupby_clause] [having_clause] [orderby_clause]
update_statement ::= update_clause [where_clause]
delete_statement ::= delete_clause [where_clause]

```java
em.createQuery("SELECT p FROM Player p").getResultList();

em.createQuery("SELECT c FROM Customer c WHERE c.name LIKE :custName")
  .setParameter("custName", name)
  .getResultList();
```

```
SELECT DISTINCT p1 FROM Player p1, Player p2
WHERE p1.salary > p2.salary AND p2.name = :name

UPDATE Player p SET p.status = 'inactive'
WHERE p.lastPlayed < :inactiveThresholdDate

DELETE FROM Player p
WHERE p.status = 'inactive' AND p.teams IS EMPTY
```

```java
// Equivalent to SELECT p FROM Pet p
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);
Root<Pet> pet = cq.from(Pet.class);
cq.select(pet);
TypedQuery<Pet> q = em.createQuery(cq);
List<Pet> allPets = q.getResultList();
```

Using Metamodel

```java
CriteriaBuilder cb = em.getCriteriaBuilder();
Metamodel m = em.getMetamodel();
CriteriaQuery<Pet> cq = cb.createQuery(Pet.class);
Root<Pet> pet = cq.from(Pet.class);
Date firstDate = new Date(...);
Date secondDate = new Date(...);

// Without Metamodel classes (string-based, not typesafe)
cq.where(cb.equal(pet.get("name"), "Fido"));

// With static Metamodel classes (Pet_)
cq.where(cb.between(pet.get(Pet_.birthDate), firstDate, secondDate));
cq.where(cb.like(pet.get(Pet_.name), "*do"));

// With the runtime Metamodel API
EntityType<Pet> Pet_ = m.entity(Pet.class);
cq.where(cb.between(
    pet.get(Pet_.getSingularAttribute("birthDate", Date.class)),
    firstDate, secondDate));
```

Entity data is *concurrently accessed* if the data in a data source is accessed at the same time by multiple applications.
Special care must be taken to ensure that the underlying data’s integrity is preserved when accessed concurrently. - **Optimistic Locking** By default, persistence providers use optimistic locking, where, before committing changes to the data, the persistence provider checks that no other transaction has modified or deleted the data since the data was read. This is accomplished by a version column in the database table, with a corresponding version attribute in the entity class. When a row is modified, the version value is incremented. The original transaction checks the version attribute, and if the data has been modified by another transaction, a `javax.persistence.OptimisticLockException` will be thrown, and the original transaction will be rolled back. When the application specifies optimistic lock modes, the persistence provider verifies that a particular entity has not changed since it was read from the database even if the entity data was not modified. The `javax.persistence.Version` annotation is used to mark a persistent field or property as a version attribute of an entity. By adding a version attribute, the entity is enabled for optimistic concurrency control. The version attribute is read and updated by the persistence provider when an entity instance is modified during a transaction. The application may read the version attribute, but must not modify the value. ```java @Version protected int version; ``` - **Pessimistic Locking** Pessimistic locking goes further than optimistic locking. With pessimistic locking, the persistence provider creates a transaction that obtains a long-term lock on the data until the transaction is completed, which prevents other transactions from modifying or deleting the data until the lock has ended. Pessimistic locking is a better strategy than optimistic locking when the underlying data is frequently accessed and modified by many transactions. 
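The optimistic check described above — compare the version read at the start of the transaction against the current version before committing, and increment it on success — can be sketched without a database. The `Row` and `OptimisticLockException` types here are local stand-ins, not the JPA classes:

```java
public class OptimisticLockDemo {
    // Stand-in for a versioned database row (@Version column)
    public static class Row {
        public int version;
        public String data;
        public Row(int version, String data) { this.version = version; this.data = data; }
    }

    // Local stand-in for javax.persistence.OptimisticLockException
    public static class OptimisticLockException extends RuntimeException {}

    // Commit succeeds only if nobody changed the row since we read it;
    // on success the version is incremented, as with @Version
    public static void commit(Row stored, int versionWhenRead, String newData) {
        if (stored.version != versionWhenRead) {
            throw new OptimisticLockException();
        }
        stored.data = newData;
        stored.version++;
    }

    public static void main(String[] args) {
        Row row = new Row(1, "old");
        int readVersion = row.version;     // transaction A reads
        commit(row, readVersion, "A");     // A commits: ok, version incremented
        try {
            commit(row, readVersion, "B"); // B still holds the old version: conflict
        } catch (OptimisticLockException e) {
            System.out.println("conflict detected");
        }
    }
}
```

In JPA the failing transaction is rolled back by the provider; here the exception simply signals that the write was rejected.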
Caution - Using pessimistic locks on entities that are not subject to frequent modification may result in decreased application performance. Lock Modes <table> <thead> <tr> <th>Lock Mode</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>OPTIMISTIC</td> <td>Obtain an optimistic read lock for all entities with a version attribute.</td> </tr> <tr> <td>OPTIMISTIC_FORCE_INCREMENT</td> <td>Obtain an optimistic read lock for all entities with a version attribute, and increment the version attribute value.</td> </tr> <tr> <td>PESSIMISTIC_READ</td> <td>Immediately obtain a long-term read lock on the data to prevent the data from being modified or deleted. Other transactions may read the data while the lock is maintained, but may not modify or delete the data. The persistence provider is permitted to obtain a database write lock when a read lock was requested, but not vice versa.</td> </tr> <tr> <td>PESSIMISTIC_WRITE</td> <td>Immediately obtain a long-term write lock on the data to prevent the data from being read, modified, or deleted.</td> </tr> <tr> <td>PESSIMISTIC_FORCE_INCREMENT</td> <td>Immediately obtain a long-term lock on the data to prevent the data from being modified or deleted, and increment the version attribute of versioned entities.</td> </tr> <tr> <td>READ</td> <td>A synonym for OPTIMISTIC. Use of LockModeType.OPTIMISTIC is to be preferred for new applications.</td> </tr> <tr> <td>WRITE</td> <td>A synonym for OPTIMISTIC_FORCE_INCREMENT. 
Use of LockModeType.OPTIMISTIC_FORCE_INCREMENT is to be preferred for new applications.</td> </tr> <tr> <td>NONE</td> <td>No additional locking will occur on the data in the database.</td> </tr> </tbody> </table>

```java
EntityManager em = ...;

// Lock an entity that is already managed:
Person person = ...;
em.lock(person, LockModeType.OPTIMISTIC);

// Lock an entity at retrieval time:
Person person2 = em.find(Person.class, personPK, LockModeType.PESSIMISTIC_WRITE);

// Lock an entity and force a refresh from the database:
Person person3 = em.find(Person.class, personPK);
em.refresh(person3, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

// Set the lock mode of a query:
Query q = em.createQuery(...);
q.setLockMode(LockModeType.PESSIMISTIC_FORCE_INCREMENT);

@NamedQuery(name="lockPersonQuery",
            query="SELECT p FROM Person p WHERE p.name LIKE :name",
            lockMode=LockModeType.PESSIMISTIC_READ)
```

Second-level Cache

- A second-level cache is a local store of entity data managed by the persistence provider to improve application performance. It helps by avoiding expensive database calls, keeping the entity data local to the application. A second-level cache is typically transparent to the application, as it is managed by the persistence provider and underlies the persistence context of an application: the application reads and commits data through the normal entity manager operations without knowing about the cache.
- Note: Persistence providers are not required to support a second-level cache. Portable applications should not rely on persistence providers supporting one.
Cache Mode

- Persistence Unit:

```xml
<persistence-unit name="examplePU" transaction-type="JTA">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
  <jta-data-source>jdbc/__default</jta-data-source>
  <shared-cache-mode>DISABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>
```

- EntityManagerFactory:

```java
// Persistence.createEntityManagerFactory(String, Map) takes a Map of
// properties; java.util.Properties has no chainable add(...) method.
Map<String, String> props = new HashMap<>();
props.put("javax.persistence.sharedCache.mode", "ENABLE_SELECTIVE");
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("myExamplePU", props);
```

### Cache Mode Settings

<table> <thead> <tr> <th>Cache Mode Setting</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>ALL</td> <td>All entity data is stored in the second-level cache for this persistence unit.</td> </tr> <tr> <td>NONE</td> <td>No data is cached in the persistence unit. The persistence provider must not cache any data.</td> </tr> <tr> <td>ENABLE_SELECTIVE</td> <td>Enable caching for entities that have been explicitly set with the <code>@Cacheable</code> annotation.</td> </tr> <tr> <td>DISABLE_SELECTIVE</td> <td>Enable caching for all entities except those that have been explicitly set with the <code>@Cacheable(false)</code> annotation.</td> </tr> <tr> <td>UNSPECIFIED</td> <td>The caching behavior for the persistence unit is undefined. The persistence provider's default caching behavior will be used.</td> </tr> </tbody> </table>

```java
@Cacheable(true)
@Entity
public class Person { ... }

@Cacheable(false)
@Entity
public class Person { ... }

// Note: @Cacheable has only a boolean value element; expiry settings such as
// a time to live are vendor-specific and not part of the JPA @Cacheable API.
```

Cache Control

```java
EntityManager em = ...;
em.setProperty("javax.persistence.cache.storeMode", "BYPASS");

Map<String, Object> props = new HashMap<String, Object>();
props.put("javax.persistence.cache.retrieveMode", "BYPASS");
String personPK = ...;
Person person = em.find(Person.class, personPK, props);

CriteriaQuery<Person> cq = ...;
TypedQuery<Person> q = em.createQuery(cq);
q.setHint("javax.persistence.cache.storeMode", "REFRESH");

Cache cache = em.getEntityManagerFactory().getCache();
if (cache.contains(Person.class, personPK)) {
    // the data is cached
} else {
    // the data is NOT cached
}
cache.evict(Person.class, personPK); // evict a single instance
cache.evict(Person.class);           // evict all instances of Person
cache.evictAll();                    // clear the whole cache
```

Session Conclusions

- Old plain JDBC
  - JDBC Drivers (version, provider)
  - Connection
  - Statement
  - ResultSet, RowSet
- JPA!
  - Mapping Entities
    - Aggregation
    - Inheritance
  - FetchType, Cascade
  - PersistenceUnit
    - EntityManagerFactory
    - EntityManager
  - Queries
    - JPQL
    - Criteria API (+ Metamodel)
  - Locking, Caching
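The `javax.persistence.Cache` operations used under Cache Control above (`contains`, `evict`, `evictAll`) can be mirrored by a small map-backed illustration. `MiniCache` is our hypothetical class, not part of JPA; it only demonstrates the semantics of those calls:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Map-backed illustration of the javax.persistence.Cache operations.
public class MiniCache {
    private final Map<Class<?>, Set<Object>> entries = new HashMap<>();

    public void put(Class<?> entityClass, Object primaryKey) {
        entries.computeIfAbsent(entityClass, k -> new HashSet<>()).add(primaryKey);
    }

    // Cache.contains(Class, Object): is this entity instance cached?
    public boolean contains(Class<?> entityClass, Object primaryKey) {
        return entries.getOrDefault(entityClass, Set.of()).contains(primaryKey);
    }

    // Cache.evict(Class, Object): drop one entity instance.
    public void evict(Class<?> entityClass, Object primaryKey) {
        entries.getOrDefault(entityClass, new HashSet<>()).remove(primaryKey);
    }

    // Cache.evict(Class): drop all instances of one entity class.
    public void evict(Class<?> entityClass) {
        entries.remove(entityClass);
    }

    // Cache.evictAll(): clear the whole second-level cache.
    public void evictAll() {
        entries.clear();
    }
}
```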
On the Expressiveness of Single-Pass Instruction Sequences J.A. Bergstra · C.A. Middelburg Published online: 11 November 2010 © The Author(s) 2010. This article is published with open access at Springerlink.com Abstract We perceive programs as single-pass instruction sequences. A single-pass instruction sequence under execution is considered to produce a behaviour to be controlled by some execution environment. Threads as considered in basic thread algebra model such behaviours. We show that all regular threads, i.e. threads that can only be in a finite number of states, can be produced by single-pass instruction sequences without jump instructions if use can be made of Boolean registers. We also show that, in the case where goto instructions are used instead of jump instructions, a bound to the number of labels restricts the expressiveness. Keywords Single-pass instruction sequence · Regular thread · Expressiveness · Jump-free instruction sequence 1 Introduction The work presented in this paper is part of a research program which is concerned with different subjects from the theory of computation and the area of computer architectures where we come across the relevancy of the notion of instruction sequence. The working hypothesis of this research program is that this notion is a central notion of computer science. It is clear that instruction sequence is a key concept in practice, but strangely enough it has as yet not come prominently into the picture in theoretical circles. Program algebra [3], which is intended as a setting suited for developing theory from the above-mentioned working hypothesis, is taken for the basis of the development of theory under the research program. The starting-point of program algebra is the perception of a program as a single-pass instruction sequence, i.e. a finite or infinite sequence of instructions of which each instruction is executed at most once and can be dropped after it has been executed or jumped over. 
This perception is simple, appealing, and links up with practice. A single-pass instruction sequence under execution is considered to produce a behaviour to be controlled by some execution environment. Threads as considered in basic thread algebra [3] model such behaviours: upon each action performed by a thread, a reply from the execution environment determines how the thread proceeds. A thread may make use of services, i.e. components of the execution environment. Each Turing machine can be simulated by means of a thread that makes use of a service. The thread and service correspond to the finite control and tape of the Turing machine. The threads that correspond to the finite controls of Turing machines are examples of regular threads, i.e. threads that can only be in a finite number of states. The behaviours of all single-pass instruction sequences considered in program algebra are regular threads and each regular thread is produced by some single-pass instruction sequence. In this paper, we show that each regular thread can be produced by some single-pass instruction sequence without jump instructions if use can be made of services that make up Boolean registers. The primitive instructions of program algebra include jump instructions. An interesting variant of program algebra is obtained by leaving out jump instructions and adding labels and goto instructions. It is easy to see that each regular thread can also be produced by some single-pass instruction sequence with labels and goto instructions. In this paper, we show that a bound to the number of labels restricts the expressiveness of this variant. 
As part of the research program of which the work presented in this paper is part, issues concerning the following subjects from the theory of computation have been investigated from the viewpoint that a program is an instruction sequence: semantics of programming languages [4, 12], expressiveness of programming languages [9, 23], computability [10, 13], and computational complexity [7]. Performance related matters of instruction sequences have also been investigated in the spirit of the theory of computation [11, 12]. In the area of computer architectures, basic techniques aimed at increasing processor performance have been studied as part of this research program (see e.g. [5, 8]). The work referred to above provides evidence for our hypothesis that the notion of instruction sequence is a central notion of computer science. To say the least, it shows that instruction sequences are relevant to diverse subjects. In addition, it is to be expected that the emerging developments with respect to techniques for high-performance program execution on classical or non-classical computers require that programs are considered at the level of instruction sequences. All this has motivated us to continue the above-mentioned research program with the work on expressiveness presented in this paper. --- 1 In [3], basic thread algebra is introduced under the name basic polarized process algebra. This paper is organized as follows. First, we review basic thread algebra and program algebra (Sects. 2 and 3). Next, we present a mechanism for interaction of threads with services and give a description of Boolean register services (Sects. 4 and 5). After that, we show that each regular thread can be produced by some single-pass instruction sequence without jump instructions if use can be made of Boolean register services (Sect. 6). Then, we introduce the variant of program algebra obtained by leaving out jump instructions and adding labels and goto instructions (Sect. 7). 
Following this, we show that a bound to the number of labels restricts the expressiveness of this variant (Sect. 8). Finally, we make some concluding remarks (Sect. 9).

2 Basic Thread Algebra

In this section, we review BTA (Basic Thread Algebra), which is concerned with the behaviours that sequential programs exhibit on execution. These behaviours are called threads. In BTA, it is assumed that a fixed but arbitrary set \( \mathcal{A} \) of basic actions has been given. A thread performs actions in a sequential fashion. Upon each action performed, a reply from the execution environment of the thread determines how it proceeds. To simplify matters, there are only two possible replies: \( T \) and \( F \). BTA has one sort: the sort \( T \) of threads. To build terms of sort \( T \), it has the following constants and operators:

- the deadlock constant \( D : T \);
- the termination constant \( S : T \);
- for each \( a \in \mathcal{A} \), the binary postconditional composition operator \( \_ \preceq a \succeq \_ : T \times T \rightarrow T \).

We assume that there are infinitely many variables of sort \( T \), including \( x, y, z \). We introduce action prefixing as an abbreviation: \( a \circ p \) abbreviates \( p \preceq a \succeq p \). The thread denoted by a closed term of the form \( p \preceq a \succeq q \) will first perform \( a \), and then proceed as the thread denoted by \( p \) if the reply from the execution environment is \( T \) and proceed as the thread denoted by \( q \) if the reply from the execution environment is \( F \). The threads denoted by \( D \) and \( S \) will become inactive and terminate, respectively. This implies that each closed BTA term denotes a thread that will become inactive or terminate after it has performed finitely many actions. Infinite threads can be described by guarded recursion.
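As a simple illustration (our example, not from the original text), the infinite thread that performs \( a \) over and over is described by the guarded recursive specification

\[
\{X = a \circ X\},
\]

i.e. \( \{X = X \preceq a \succeq X\} \): whatever the reply to \( a \), the thread proceeds as itself. No closed BTA term denotes this thread, since each closed BTA term denotes a thread that performs only finitely many actions.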
A guarded recursive specification over BTA is a set of recursion equations \( E = \{ X = tX \mid X \in V \} \), where \( V \) is a set of variables of sort \( T \) and each \( tX \) is a BTA term of the form \( D \), \( S \) or \( t \preceq a \succeq t' \) with \( t \) and \( t' \) terms that contain only variables from \( V \). We write \( V(E) \) for the set of all variables that occur in \( E \). We are only interested in models of BTA in which guarded recursive specifications have unique solutions, such as the projective limit model of BTA presented in [2]. For each guarded recursive specification \( E \) and each \( X \in V(E) \), we introduce a constant \( \langle X \mid E \rangle \) of sort \( T \) standing for the unique solution of \( E \) for \( X \). The axioms for these constants are given in Table 1. In this table, we write \( \langle tX \mid E \rangle \) for \( tX \) with, for all \( Y \in V(E) \), all occurrences of \( Y \) in \( tX \) replaced by \( \langle Y \mid E \rangle \). \( X \), \( tX \) and \( E \) stand for an arbitrary variable of sort \( T \), an arbitrary BTA term of sort \( T \) and an arbitrary guarded recursive specification over BTA, respectively. Side conditions are added to restrict what \( X \), \( tX \) and \( E \) stand for.

Closed terms that denote the same infinite thread cannot always be proved equal by means of the axioms given in Table 1. We introduce AIP (Approximation Induction Principle) to remedy this. AIP is based on the view that two threads are identical if their approximations up to any finite depth are identical. The approximation up to depth \( n \) of a thread is obtained by cutting it off after it has performed \( n \) actions. In AIP, the approximation up to depth \( n \) is phrased in terms of the unary projection operator \( \pi_n : T \rightarrow T \). AIP and the axioms for the projection operators are given in Table 2.

### 3 Program Algebra

In this section, we review PGA (ProGram Algebra).
The perception of a program as a single-pass instruction sequence is the starting-point of PGA. In PGA, it is assumed that a fixed but arbitrary set $A$ of basic instructions has been given. PGA has the following primitive instructions: - for each $a \in A$, a plain basic instruction $a$; - for each $a \in A$, a positive test instruction $+a$; - for each $a \in A$, a negative test instruction $-a$; - for each $l \in \mathbb{N}$, a forward jump instruction $\#l$; - a termination instruction $!$. We write $\mathcal{I}$ for the set of all primitive instructions. The intuition is that the execution of a basic instruction $a$ produces either $T$ or $F$ at its completion. In the case of a positive test instruction $+a$, $a$ is executed and execution proceeds with the next primitive instruction if $T$ is produced. Otherwise, the next primitive instruction is skipped and execution proceeds with the primitive instruction following the skipped one. If there is no next instruction to be executed, deadlock occurs. In the case of a negative test instruction $-a$, the role of the value produced is reversed. In the case of a plain basic instruction $a$, execution always proceeds as if $T$ is produced. The effect of a forward jump instruction $\#l$ is that execution proceeds with the $l$-th next instruction. 
If \( l \) equals 0 or the \( l \)-th next instruction does not exist, deadlock occurs. The effect of the termination instruction \( ! \) is that execution terminates. PGA has the following constants and operators:

- for each \( u \in \mathcal{I} \), an instruction constant \( u \);
- the binary concatenation operator \( \_ ; \_ \);
- the unary repetition operator \( \_^\omega \).

We assume that there are infinitely many variables, including \( X, Y, Z \).

Table 1 Axioms for guarded recursion

<table> <thead> <tr> <th>Axiom</th> </tr> </thead> <tbody> <tr> <td>\( \langle X \mid E \rangle = \langle tX \mid E \rangle \) if \( X = tX \in E \)</td> </tr> <tr> <td>\( E \Rightarrow X = \langle X \mid E \rangle \) if \( X \in V(E) \)</td> </tr> </tbody> </table>

Table 2 Approximation induction principle

<table> <thead> <tr> <th>Axiom</th> <th>Name</th> </tr> </thead> <tbody> <tr> <td>\( \bigwedge_{n \geq 0} \pi_n(x) = \pi_n(y) \Rightarrow x = y \)</td> <td>AIP</td> </tr> <tr> <td>\( \pi_0(x) = D \)</td> <td>P0</td> </tr> <tr> <td>\( \pi_{n+1}(S) = S \)</td> <td>P1</td> </tr> <tr> <td>\( \pi_{n+1}(D) = D \)</td> <td>P2</td> </tr> <tr> <td>\( \pi_{n+1}(x \preceq a \succeq y) = \pi_n(x) \preceq a \succeq \pi_n(y) \)</td> <td>P3</td> </tr> </tbody> </table>

Table 3 Axioms of PGA

<table> <thead> <tr> <th>Axiom</th> <th>Name</th> </tr> </thead> <tbody> <tr> <td>\( (X; Y); Z = X; (Y; Z) \)</td> <td>PGA1</td> </tr> <tr> <td>\( (X^n)^\omega = X^\omega \)</td> <td>PGA2</td> </tr> <tr> <td>\( X^\omega; Y = X^\omega \)</td> <td>PGA3</td> </tr> <tr> <td>\( (X; Y)^\omega = X; (Y; X)^\omega \)</td> <td>PGA4</td> </tr> </tbody> </table>

Table 4 Defining equations for the thread extraction operation

<table> <thead> <tr> <th>Equation</th> </tr> </thead> <tbody> <tr> <td>\( |a| = a \circ D \)</td> </tr> <tr> <td>\( |a; X| = a \circ |X| \)</td> </tr> <tr> <td>\( |{+}a| = a \circ D \)</td> </tr> <tr> <td>\( |{+}a; X| = |X| \preceq a \succeq |\#2; X| \)</td> </tr> <tr> <td>\( |{-}a| = a \circ D \)</td> </tr> <tr> <td>\( |{-}a; X| = |\#2; X| \preceq a \succeq |X| \)</td> </tr> <tr> <td>\( |\#l| = D \)</td> </tr> <tr> <td>\( |\#0; X| = D \)</td> </tr> <tr> <td>\( |\#1; X| = |X| \)</td> </tr> <tr> <td>\( |\#l{+}2; u| = D \)</td> </tr> <tr> <td>\( |\#l{+}2; u; X| = |\#l{+}1; X| \)</td> </tr> <tr> <td>\( |!| = S \)</td> </tr> <tr> <td>\( |!; X| = S \)</td> </tr> </tbody> </table>
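As a worked example of how the thread extraction operation \( |\_| \) of Table 4 determines behaviour (our illustration, not from the original text), consider the closed PGA term \( {+}a; !; \#0^\omega \):

\[
|{+}a; !; \#0^\omega|
= |!; \#0^\omega| \preceq a \succeq |\#2; !; \#0^\omega|
= S \preceq a \succeq |\#1; \#0^\omega|
= S \preceq a \succeq |\#0; \#0^\omega|
= S \preceq a \succeq D.
\]

So this instruction sequence performs \( a \) once, terminates if the reply is \( T \), and deadlocks if the reply is \( F \).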
A closed PGA term is considered to denote a non-empty, finite or eventually periodic infinite sequence of primitive instructions.\(^2\) Closed PGA terms are considered equal if they denote the same instruction sequence. The axioms for instruction sequence equivalence are given in Table 3. In this table, \( n \) stands for an arbitrary natural number greater than 0. For each PGA term \( P \), the term \( P^n \) is defined by induction on \( n \) as follows: \( P^1 = P \) and \( P^{n+1} = P; P^n \). The equation \( X^\omega = X; X^\omega \) is derivable. Each closed PGA term is derivably equal to one of the form \( P \) or \( P; Q^\omega \), where \( P \) and \( Q \) are closed PGA terms in which the repetition operator does not occur. The repetition operator renders backward jump instructions superfluous. In [3], it is shown how programs in a program notation that is close to existing assembly languages with forward and backward jump instructions can be translated into closed PGA terms.

The behaviours of the instruction sequences denoted by closed PGA terms are considered threads, with basic instructions taken for basic actions. The thread extraction operation \( |\_| \) determines, for each closed PGA term \( P \), a closed term of BTA with guarded recursion that denotes the behaviour of the instruction sequence denoted by \( P \). The thread extraction operation is defined by the equations given in Table 4 (for \( a \in \mathcal{A} \), \( l \in \mathbb{N} \) and \( u \in \mathcal{I} \)) and the rule that \( |\#l; X| = D \) if \( \#l \) is the beginning of an infinite jump chain. This rule is formalized in e.g. [9].

\(^2\)An eventually periodic infinite sequence is an infinite sequence with only finitely many distinct suffixes.

4 Interaction of Threads with Services

A thread may make use of services. That is, a thread may perform an action for the purpose of interacting with a service that takes the action as a command to be processed.
The processing of an action may involve a change of state of the service, and at completion of the processing of the action the service returns a reply value to the thread. In this section, we introduce the use operators, which are concerned with this kind of interaction between threads and services.

It is assumed that a fixed but arbitrary set \( \mathcal{F} \) of foci and a fixed but arbitrary set \( \mathcal{M} \) of methods have been given. Each focus plays the role of a name of some service provided by an execution environment that can be requested to process a command. Each method plays the role of a command proper. For the set \( \mathcal{A} \) of actions, we take the set \( \{f.m \mid f \in \mathcal{F}, m \in \mathcal{M}\} \). Performing an action \( f.m \) is taken as making a request to the service named \( f \) to process command \( m \).

A service \( H \) consists of

- a set \( S \) of states;
- an effect function \( \mathrm{eff} : \mathcal{M} \times S \to S \);
- a yield function \( \mathrm{yld} : \mathcal{M} \times S \to \{T, F, B\} \);
- an initial state \( s_0 \in S \);

satisfying the following condition:

\[
\forall m \in \mathcal{M}, s \in S \cdot
(\mathrm{yld}(m, s) = B \implies \forall m' \in \mathcal{M} \cdot \mathrm{yld}(m', \mathrm{eff}(m, s)) = B).
\]

The set \( S \) contains the states in which the service may be, and the functions \( \mathrm{eff} \) and \( \mathrm{yld} \) give, for each method \( m \) and state \( s \), the state and reply, respectively, that result from processing \( m \) in state \( s \).

Let \( H = (S, \mathrm{eff}, \mathrm{yld}, s_0) \) be a service and let \( m \in \mathcal{M} \). Then the derived service of \( H \) after processing \( m \), written \( \frac{\partial}{\partial m} H \), is the service \( (S, \mathrm{eff}, \mathrm{yld}, \mathrm{eff}(m, s_0)) \); and the reply of \( H \) after processing \( m \), written \( H(m) \), is \( \mathrm{yld}(m, s_0) \).
When a thread makes a request to service \( H \) to process \( m \):

- if \( H(m) \neq B \), then the request is accepted, the reply is \( H(m) \), and the service proceeds as \( \frac{\partial}{\partial m} H \);
- if \( H(m) = B \), then the request is rejected.

We introduce the sort \( S \) of services and, for each \( f \in \mathcal{F} \), the binary use operator \( \_ /_f \_ : T \times S \to T \). The axioms for these operators are given in Table 5. Intuitively, \( p /_f H \) is the thread that results from processing all actions performed by thread \( p \) that are of the form \( f.m \) by service \( H \). When an action of the form \( f.m \) performed by thread \( p \) is processed by service \( H \), the postconditional composition concerned is eliminated on the basis of the reply value produced. No internal action is left as a trace of the processed action, like with the use operators found in papers on thread interleaving (see e.g. [6]). Combining TSU2 and TSU7, we obtain \( \bigwedge_{n \geq 0} \pi_n(x) /_f H = D \Rightarrow x /_f H = D \).

Table 5 Axioms for use operators

<table> <thead> <tr> <th>Axiom</th> <th>Name</th> </tr> </thead> <tbody> <tr> <td>\( S /_f H = S \)</td> <td>TSU1</td> </tr> <tr> <td>\( D /_f H = D \)</td> <td>TSU2</td> </tr> <tr> <td>\( (x \preceq g.m \succeq y) /_f H = (x /_f H) \preceq g.m \succeq (y /_f H) \) if \( f \neq g \)</td> <td>TSU3</td> </tr> <tr> <td>\( (x \preceq f.m \succeq y) /_f H = x /_f \frac{\partial}{\partial m} H \) if \( H(m) = T \)</td> <td>TSU4</td> </tr> <tr> <td>\( (x \preceq f.m \succeq y) /_f H = y /_f \frac{\partial}{\partial m} H \) if \( H(m) = F \)</td> <td>TSU5</td> </tr> <tr> <td>\( (x \preceq f.m \succeq y) /_f H = D \) if \( H(m) = B \)</td> <td>TSU6</td> </tr> <tr> <td>\( \bigwedge_{n \geq 0} \pi_n(x) /_f H = \pi_n(y) /_f H \Rightarrow x /_f H = y /_f H \)</td> <td>TSU7</td> </tr> </tbody> </table>

5 Instruction Sequences Acting on Boolean Registers

Our study of jump-free instruction sequences in Sect. 6 is concerned with instruction sequences that act on Boolean registers.
In this section, we describe services that make up Boolean registers. A Boolean register service accepts the following methods:

- a set to true method \( \mathrm{set{:}T} \);
- a set to false method \( \mathrm{set{:}F} \);
- a get method \( \mathrm{get} \).

We write \( \mathcal{M}_{BR} \) for the set \( \{\mathrm{set{:}T}, \mathrm{set{:}F}, \mathrm{get}\} \). It is assumed that \( \mathcal{M}_{BR} \subseteq \mathcal{M} \). The methods accepted by Boolean register services can be explained as follows:

- \( \mathrm{set{:}T} \): the contents of the Boolean register becomes \( T \) and the reply is \( T \);
- \( \mathrm{set{:}F} \): the contents of the Boolean register becomes \( F \) and the reply is \( F \);
- \( \mathrm{get} \): nothing changes and the reply is the contents of the Boolean register.

Let \( s \in \{T, F, B\} \). Then the Boolean register service with initial state \( s \), written \( BR_s \), is the service \( (\{T, F, B\}, \mathrm{eff}, \mathrm{eff}, s) \), where the function \( \mathrm{eff} \) is defined as follows (\( b \in \{T, F\} \)):

\[
\begin{aligned}
\mathrm{eff}(\mathrm{set{:}T}, b) &= T, & \mathrm{eff}(m, b) &= B && \text{if } m \notin \mathcal{M}_{BR}, \\
\mathrm{eff}(\mathrm{set{:}F}, b) &= F, & \mathrm{eff}(m, B) &= B, \\
\mathrm{eff}(\mathrm{get}, b) &= b.
\end{aligned}
\]

Notice that the effect and yield functions of a Boolean register service are the same.

6 Jump-Free Instruction Sequences

In this section, we show that each thread that can only be in a finite number of states can be produced by some single-pass instruction sequence without jump instructions if use can be made of Boolean register services. First, we make precise what it means that a thread can only be in a finite number of states.
We assume that a fixed but arbitrary model $\mathfrak{M}$ of BTA extended with guarded recursion and the use mechanism has been given. We use the term thread only for the elements from the domain of $\mathfrak{M}$, and we denote the interpretations of constants and operators in $\mathfrak{M}$ by the constants and operators themselves.

Let $p$ be a thread. Then the set of states or residual threads of $p$, written $\text{Res}(p)$, is inductively defined as follows:
- $p \in \text{Res}(p)$;
- if $q \trianglelefteq a \trianglerighteq r \in \text{Res}(p)$, then $q \in \text{Res}(p)$ and $r \in \text{Res}(p)$.

We say that $p$ is a regular thread if $\text{Res}(p)$ is finite. We will make use of the fact that being a regular thread coincides with being the solution of a finite guarded recursive specification of a restricted form. A linear recursive specification over BTA is a guarded recursive specification $E = \{X = t_X \mid X \in V\}$ in which each $t_X$ is a term of the form $S$, $D$ or $Y \trianglelefteq a \trianglerighteq Z$ with $Y, Z \in V$.

**Proposition 1** Let $p$ be a regular thread. Then there exists a finite linear recursive specification $E$ such that $p$ is the solution of $E$ for some $X \in V$.

**Proof** This proposition generalizes Theorem 1 from [23] from the projective limit model to an arbitrary model. However, the proof of that theorem is applicable to any model. □

In the proof of the next theorem, we associate a closed PGA term $P$ in which jump instructions do not occur with a finite linear recursive specification
$$E = \{X_i = X_{l(i)} \trianglelefteq a_i \trianglerighteq X_{r(i)} \mid i \in [1, n]\} \cup \{X_{n+1} = S, X_{n+2} = D\}.$$
In $P$, a number of Boolean register services is used for specific purposes. The purpose of each individual Boolean register is reflected in the focus that serves as its name:
- for each $i \in [1, n+2]$, $s{:}i$ serves as the name of a Boolean register that is used to indicate whether the current state of $\langle X_1 \mid E \rangle$ is $\langle X_i \mid E \rangle$;
- $\text{rt}$ serves as the name of a Boolean register that is used to indicate whether the reply upon the action performed by $\langle X_1 \mid E \rangle$ in its current state is $T$;
- $\text{rf}$ serves as the name of a Boolean register that is used to indicate whether the reply upon the action performed by $\langle X_1 \mid E \rangle$ in its current state is $F$;
- $e$ serves as the name of a Boolean register that is used to achieve that instructions not related to the current state of $\langle X_1 \mid E \rangle$ are passed correctly;
- $f$ serves as the name of a Boolean register that is used to achieve, with the instruction $+f.\text{set:F}$, that the following instruction is skipped.

Now we turn to the theorem announced above. It states rigorously that the solution of every finite linear recursive specification can be produced by an instruction sequence without jump instructions if use can be made of Boolean register services.

**Theorem 1** Let a finite linear recursive specification
$$E = \{X_i = X_{l(i)} \trianglelefteq a_i \trianglerighteq X_{r(i)} \mid i \in [1, n]\} \cup \{X_{n+1} = S, X_{n+2} = D\}$$
be given.
Then there exists a closed PGA term $P$ in which jump instructions do not occur such that
$$\langle X_1 \mid E \rangle = |(((\cdots(P /_{s:1} BR_F) \cdots /_{s:n+2} BR_F) /_{rt} BR_F) /_{rf} BR_F) /_{e} BR_F) /_{f} BR_F|.$$

**Proof** We associate a closed PGA term $P$ in which jump instructions do not occur with $E$ as follows:
$$P = s{:}1.\text{set:T}; (Q_1; \ldots; Q_{n+1})^\omega,$$
where, for each $i \in [1, n]$:
$$Q_i = +s{:}i.\text{get}; e.\text{set:T}; +e.\text{get}; s{:}i.\text{set:F}; +e.\text{get}; -a_i; +f.\text{set:F}; \text{rt}.\text{set:T}; +e.\text{get}; +\text{rt}.\text{get}; +f.\text{set:F}; \text{rf}.\text{set:T}; +\text{rt}.\text{get}; s{:}l(i).\text{set:T}; +\text{rf}.\text{get}; s{:}r(i).\text{set:T}; \text{rt}.\text{set:F}; \text{rf}.\text{set:F}; e.\text{set:F},$$
and
$$Q_{n+1} = +s{:}n{+}1.\text{get}; \,!.$$

We use the following abbreviations (for $i \in [1, n+1]$ and $j \in [1, n+2]$):
$P'_i$ for $Q_i; \ldots; Q_{n+1}; (Q_1; \ldots; Q_{n+1})^\omega$;
$P'^{br}_{i|j}$ for $|(((\cdots(P'_i /_{s:1} BR_{b_1}) \cdots /_{s:n+2} BR_{b_{n+2}}) /_{rt} BR_F) /_{rf} BR_F) /_{e} BR_F) /_{f} BR_F|$,
where $b_j = T$ and, for each $j' \in [1, n+2]$ such that $j' \neq j$, $b_{j'} = F$.

From the definition of thread extraction, the definition of Boolean register services, and axiom TSU4, it follows that
$$|(((\cdots(P /_{s:1} BR_F) \cdots /_{s:n+2} BR_F) /_{rt} BR_F) /_{rf} BR_F) /_{e} BR_F) /_{f} BR_F| = P'^{br}_{1|1}.$$
This leaves us to show that $\langle X_1 \mid E \rangle = P'^{br}_{1|1}$.
Using the definition of thread extraction, the definition of Boolean register services, and axioms P0, P2, TSU1, TSU2, TSU4, TSU5 and TSU7, we easily prove the following:
$$P'^{br}_{i|j} = P'^{br}_{i+1|j} \quad \text{if } 1 \leq i \leq n \land 1 \leq j \leq n+1 \land i \neq j, \tag{1}$$
$$P'^{br}_{i|j} = P'^{br}_{1|j} \quad \text{if } i = n+1 \land 1 \leq j \leq n+1 \land i \neq j, \tag{2}$$
$$P'^{br}_{i|i} = P'^{br}_{i+1|l(i)} \trianglelefteq a_i \trianglerighteq P'^{br}_{i+1|r(i)} \quad \text{if } 1 \leq i \leq n, \tag{3}$$
$$P'^{br}_{i|j} = S \quad \text{if } i = n+1 \land j = n+1, \tag{4}$$
$$P'^{br}_{i|j} = D \quad \text{if } 1 \leq i \leq n+1 \land j = n+2. \tag{5}$$

From Properties 1 and 2, it follows that $P'^{br}_{i|j} = P'^{br}_{j|j}$ if $1 \leq i \leq n+1 \land 1 \leq j \leq n+1 \land i \neq j$. From this and Property 3, it follows that $P'^{br}_{i|i} = P'^{br}_{l(i)|l(i)} \trianglelefteq a_i \trianglerighteq P'^{br}_{r(i)|r(i)}$ if $1 \leq i \leq n$. From this and Properties 4 and 5, it follows that $P'^{br}_{1|1}$ is a solution of $E$ for $X_1$. Because linear recursive specifications have unique solutions, it follows that $\langle X_1 \mid E \rangle = P'^{br}_{1|1}$. □

Theorem 1 goes through in the case where $E = \{X_1 = D\}$; a witnessing $P$ is $(f.\text{get})^\omega$. It follows from the proof of Proposition 1 given in [23] that, for each regular thread $p$, either $p$ is the solution of $\{X_1 = D\}$ for $X_1$ or there exists a finite linear recursive specification $E$ of the form considered in Theorem 1 such that $p$ is the solution of $E$ for $X_1$. Hence, we have the following corollary of Proposition 1 and Theorem 1:

**Corollary 1** For each regular thread $p$, there exists a closed PGA term $P$ in which jump instructions do not occur such that $p$ is the thread denoted by $|(((\cdots(P /_{s:1} BR_F) \cdots /_{s:n+2} BR_F) /_{rt} BR_F) /_{rf} BR_F) /_{e} BR_F) /_{f} BR_F|$.

In other words, each regular thread can be produced by an instruction sequence without jump instructions if use can be made of Boolean register services. The construction of such instruction sequences given in the proof of Theorem 1 is weakly reminiscent of the construction of structured programs from flow charts found in [14]. However, our construction is more extreme: it yields programs that contain neither unstructured jumps nor a rendering of the conditional and loop constructs used in structured programming.

7 Program Algebra with Labels and Goto's

In this section, we introduce PGA$_g$, a variant of PGA obtained by leaving out jump instructions and adding labels and goto instructions. In PGA$_g$, like in PGA, it is assumed that a fixed but arbitrary set $\mathfrak{A}$ of basic instructions has been given. PGA$_g$ has the following primitive instructions:
- for each $a \in \mathfrak{A}$, a plain basic instruction $a$;
- for each $a \in \mathfrak{A}$, a positive test instruction $+a$;
- for each $a \in \mathfrak{A}$, a negative test instruction $-a$;
- for each $l \in \mathbb{N}$, a label instruction $[l]$;
- for each $l \in \mathbb{N}$, a goto instruction $\#[l]$;
- a termination instruction $!$.

We write $\mathcal{I}_g$ for the set of all primitive instructions of PGA$_g$. The plain basic instructions, the positive test instructions, the negative test instructions, and the termination instruction are as in PGA. Upon execution, a label instruction $[l]$ is simply skipped. If there is no next instruction to be executed, deadlock occurs. The effect of a goto instruction $\#[l]$ is that execution proceeds with the next following occurrence of the label instruction $[l]$ if it exists. If there is no such occurrence, deadlock occurs.
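The way a goto instruction finds its target can be illustrated concretely. The following Python helper is our own sketch of the informal semantics above, with an ad-hoc tuple encoding of instructions; it scans for the next occurrence of $[l]$ and reports deadlock (None) when there is none.

```python
# Hedged sketch (ours): resolving goto targets in a PGA_g-style
# instruction sequence. A label instruction [l] is encoded as
# ("label", l) and a goto #[l] as ("goto", l).

def next_label(instrs, pos, l):
    """Return the index of the next occurrence of label l strictly
    after position pos, or None (deadlock) if there is none."""
    for i in range(pos + 1, len(instrs)):
        if instrs[i] == ("label", l):
            return i
    return None

prog = [("basic", "a"), ("goto", 1), ("basic", "b"), ("label", 1), ("basic", "c")]
assert next_label(prog, 1, 1) == 3      # #[1] at index 1 jumps to [1] at index 3
assert next_label(prog, 3, 1) is None   # no further [1]: deadlock
```

For a term under the repetition operator, the search would continue into the unfolding of the repeated part; this finite sketch covers only the straight-line case.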
PGA$_g$ has a constant $u$ for each $u \in \mathcal{I}_g$. The operators of PGA$_g$ are the same as those of PGA. Likewise, the axioms of PGA$_g$ are the same as those of PGA. Just like in the case of PGA, the behaviours of the instruction sequences denoted by closed PGA$_g$ terms are considered threads. These behaviours are indirectly given by the behaviour-preserving function pgag2pga from the set of all closed PGA$_g$ terms to the set of all closed PGA terms, defined by
$$\text{pgag2pga}(u_1; \ldots; u_n) = \text{pgag2pga}(u_1; \ldots; u_n; (\#[1])^\omega),$$
$$\text{pgag2pga}(u_1; \ldots; u_n; (u_{n+1}; \ldots; u_m)^\omega) = \phi_1(u_1); \ldots; \phi_n(u_n); (\phi_{n+1}(u_{n+1}); \ldots; \phi_m(u_m))^\omega,$$
where the auxiliary functions $\phi_j : \mathcal{I}_g \rightarrow \mathcal{I}$ are defined as follows ($1 \leq j \leq m$):
$$\phi_j([l]) = \#1,$$
$$\phi_j(\#[l]) = \#\text{tgt}_j(l),$$
$$\phi_j(u) = u \quad \text{if } u \text{ is not a label or goto instruction},$$
where
- $\text{tgt}_j(l) = i$ if the leftmost occurrence of $[l]$ in $u_j; \ldots; u_m; u_{n+1}; \ldots; u_m$ is the $i$-th instruction;
- $\text{tgt}_j(l) = 0$ if there are no occurrences of $[l]$ in $u_j; \ldots; u_m; u_{n+1}; \ldots; u_m$.

Let $P$ be a closed PGA$_g$ term. Then the behaviour of $P$ is $|\text{pgag2pga}(P)|$. The approach to semantics followed here was introduced under the name projection semantics in [3]. The function pgag2pga is called a projection.

8 A Bounded Number of Labels

In this section, we show that a bound on the number of labels restricts the expressiveness of PGA$_g$.
We will refer to PGA$_g$ terms that do not contain label instructions $[l]$ with $l > k$ as PGA$_g^k$ terms. Moreover, we will write $\mathcal{I}_g^k$ for the set $\mathcal{I}_g \setminus \{[l] \mid l > k\}$.

We define an alternative projection for closed PGA$_g^k$ terms, which takes into account that these terms contain only label instructions $[l]$ with $1 \leq l \leq k$. The alternative projection pgag2pga$^k$ from the set of all closed PGA$_g^k$ terms to the set of all closed PGA terms is defined by
$$\text{pgag2pga}^k(u_1; \ldots; u_n) = \text{pgag2pga}^k(u_1; \ldots; u_n; (\#[1])^\omega),$$
$$\text{pgag2pga}^k(u_1; \ldots; u_n; (u_{n+1}; \ldots; u_m)^\omega) = \psi(u_1, u_2); \ldots; \psi(u_n, u_{n+1}); (\psi(u_{n+1}, u_{n+2}); \ldots; \psi(u_{m-1}, u_m); \psi(u_m, u_{n+1}))^\omega,$$
where the auxiliary function $\psi$ on $\mathcal{I}_g^k \times \mathcal{I}_g^k$ is defined as follows:
$$\psi(u', u'') = \psi'(u'); \#k{+}2; \#k{+}2; \psi''(u''),$$
where the auxiliary functions $\psi'$ and $\psi''$ on $\mathcal{I}_g^k$ are defined as follows:
$$\psi'([l]) = \#1,$$
$$\psi'(\#[l]) = \#l{+}2 \quad \text{if } l \leq k,$$
$$\psi'(\#[l]) = \#0 \quad \text{if } l > k,$$
$$\psi'(u) = u \quad \text{if } u \text{ is not a label or goto instruction},$$
$$\psi''([l]) = (\#k{+}3)^{l-1}; \#k{-}l{+}1; (\#k{+}3)^{k-l},$$
$$\psi''(u) = (\#k{+}3)^k \quad \text{if } u \text{ is not a label instruction}.$$

In order to clarify the alternative projection, we explain how the intended effect of a goto instruction is obtained. If $u_j$ is $\#[l]$ with $l \leq k$, then $\psi'(u_j)$ is $\#l{+}2$.
The effect of $\#l{+}2$ is a jump to the $l$-th instruction in $\psi''(u_{j+1})$ if $j < m$ and a jump to the $l$-th instruction in $\psi''(u_{n+1})$ if $j = m$. If this instruction is $\#k{-}l{+}1$, then its effect is a jump to the occurrence of $\#1$ that replaces $[l]$. However, if this instruction is $\#k{+}3$, then its effect is a jump to the $l$-th instruction in $\psi''(u_{j+2})$ if $j < m - 1$, a jump to the $l$-th instruction in $\psi''(u_{n+1})$ if $j = m - 1$, and a jump to the $l$-th instruction in $\psi''(u_{n+2})$ if $j = m$.

In the proof of Theorem 2 below, chains of forward jumps are removed in favour of single jumps. The following proposition justifies these removals.

**Proposition 2** For each PGA context $C[\,]$:
$$|C[\#n{+}1; u_1; \ldots; u_n; \#m]| = |C[\#m{+}n{+}1; u_1; \ldots; u_n; \#m]|.$$

**Proof** Contexts of the forms $C[\,]^\omega; Q$ and $P; C[\,]^\omega; Q$ do not need to be considered because of axiom PGA3. For eight of the remaining twelve forms, the equation to be proved follows immediately from the equations to be proved for the other forms, to wit $\_; Q$, $P; \_; Q$, $P; \_^\omega$ and $P; (Q; \_)^\omega$, the axioms of PGA, the defining equations for thread extraction, and the easy to prove fact that $|P; \#0| = |P|$.

In the case of the form $\_; Q$, the equation concerned is easily proved by induction on $n$. In the case of the form $P; \_; Q$, only $P$ in which the repetition operator does not occur needs to be considered because of axiom PGA3. For such $P$, the equation concerned is easily proved by induction on the length of $P$, using the equation proved for the form $\_; Q$. In the case of the form $P; \_^\omega$, only $P$ in which the repetition operator does not occur needs to be considered because of axiom PGA3.
For such $P$, the equation for the approximating form $P; \_^k$ is easily proved by induction on $k$, using the equation proved for the form $\_; Q$. From these equations, the equation for the form $P; \_^\omega$ follows using AIP. In the case of the form $P; (Q; \_)^\omega$, the equation concerned is proved like in the case of the form $P; \_^\omega$. □

The following theorem states rigorously that the projections pgag2pga and pgag2pga$^k$ give rise to instruction sequences with the same behaviour.

**Theorem 2** For each closed PGA$_g^k$ term $P$, $|\text{pgag2pga}(P)| = |\text{pgag2pga}^k(P)|$.

**Proof** Because $\text{pgag2pga}(u_1; \ldots; u_n) = \text{pgag2pga}(u_1; \ldots; u_n; (\#[1])^\omega)$ and $\text{pgag2pga}^k(u_1; \ldots; u_n) = \text{pgag2pga}^k(u_1; \ldots; u_n; (\#[1])^\omega)$, we only consider the case where the repetition operator occurs in $P$.

We make use of an auxiliary function $|\_, \_|$. This function determines, for each natural number and closed PGA term in which the repetition operator occurs, a closed term of BTA with guarded recursion. The function $|\_, \_|$ is defined as follows:
$$|i, u_1; \ldots; u_n; (u_{n+1}; \ldots; u_m)^\omega| = |u_i; \ldots; u_m; (u_{n+1}; \ldots; u_m)^\omega| \quad \text{if } 1 \leq i \leq m,$$
$$|i, u_1; \ldots; u_n; (u_{n+1}; \ldots; u_m)^\omega| = D \quad \text{if } i = 0.$$

Let $P = u_1; \ldots; u_n; (u_{n+1}; \ldots; u_m)^\omega$ be a closed PGA$_g^k$ term, let $P' = \text{pgag2pga}(P)$, and let $P'' = \text{pgag2pga}^k(P)$. Moreover, let $\rho : \mathbb{N} \rightarrow \mathbb{N}$ be such that $\rho(i) = (k+3) \cdot (i-1) + 1$.
Then it follows easily from the definitions of $|\_|$, $|\_, \_|$, pgag2pga and pgag2pga$^k$, the axioms of PGA, and Proposition 2 that for $1 \leq i \leq m$:
$$|i, P'| = a \circ |i+1, P'| \quad \text{if } u_i = a,$$
$$|i, P'| = |i+1, P'| \trianglelefteq a \trianglerighteq |i+2, P'| \quad \text{if } u_i = +a,$$
$$|i, P'| = |i+2, P'| \trianglelefteq a \trianglerighteq |i+1, P'| \quad \text{if } u_i = -a,$$
$$|i, P'| = |i+1, P'| \quad \text{if } u_i = [l],$$
$$|i, P'| = |i+n, P'| \quad \text{if } u_i = \#[l] \land \text{tgt}_i(l) = n,$$
$$|i, P'| = S \quad \text{if } u_i = \,!,$$
and
$$|\rho(i), P''| = a \circ |\rho(i+1), P''| \quad \text{if } u_i = a,$$
$$|\rho(i), P''| = |\rho(i+1), P''| \trianglelefteq a \trianglerighteq |\rho(i+2), P''| \quad \text{if } u_i = +a,$$
$$|\rho(i), P''| = |\rho(i+2), P''| \trianglelefteq a \trianglerighteq |\rho(i+1), P''| \quad \text{if } u_i = -a,$$
$$|\rho(i), P''| = |\rho(i+1), P''| \quad \text{if } u_i = [l],$$
$$|\rho(i), P''| = |\rho(i+n), P''| \quad \text{if } u_i = \#[l] \land \text{tgt}_i(l) = n,$$
$$|\rho(i), P''| = S \quad \text{if } u_i = \,!$$
(where $\text{tgt}_i$ is as in the definition of pgag2pga). Because $|\text{pgag2pga}(P)| = |1, P'|$ and $|\text{pgag2pga}^k(P)| = |\rho(1), P''|$, this means that $|\text{pgag2pga}(P)|$ and $|\text{pgag2pga}^k(P)|$ are solutions of the same guarded recursive specification. Because guarded recursive specifications have unique solutions, it follows that $|\text{pgag2pga}(P)| = |\text{pgag2pga}^k(P)|$. □

The projection pgag2pga$^k$ yields only closed PGA terms that do not contain jump instructions $\#l$ with $l > k + 3$. Hence, we have the following corollary of Theorem 2:

**Corollary 2** For each closed PGA$_g^k$ term $P$, there exists a closed PGA term $P'$ not containing jump instructions $\#l$ with $l > k + 3$ such that $|\text{pgag2pga}(P)| = |P'|$.
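The block shape underlying pgag2pga$^k$ can be made concrete: each pair of consecutive instructions is compiled to $k + 3$ instructions, and no jump in the image exceeds $\#k{+}3$. The following Python sketch is ours (with an ad-hoc tuple encoding of instructions), not the paper's definition verbatim; it mirrors $\psi$, $\psi'$ and $\psi''$ and checks the $k + 3$ bound on the block length.

```python
# Hedged sketch (ours) of the block construction of pgag2pga^k.
# A label [l] is ("label", l), a goto #[l] is ("goto", l), and any
# other instruction is ("basic", text). Jumps are rendered as "#n".

def psi_prime(u, k):
    kind, payload = u
    if kind == "label":
        return ["#1"]                               # a label is simply skipped
    if kind == "goto":
        return ["#%d" % (payload + 2)] if payload <= k else ["#0"]
    return [payload]                                # basic/test instruction unchanged

def psi_second(u, k):
    kind, payload = u
    if kind == "label":                             # table entry l hits #(k-l+1)
        return (["#%d" % (k + 3)] * (payload - 1)
                + ["#%d" % (k - payload + 1)]
                + ["#%d" % (k + 3)] * (k - payload))
    return ["#%d" % (k + 3)] * k                    # all entries fall through

def psi(u1, u2, k):
    # psi'(u'); #k+2; #k+2; psi''(u'') -- k + 3 instructions in total
    return psi_prime(u1, k) + ["#%d" % (k + 2)] * 2 + psi_second(u2, k)

block = psi(("goto", 1), ("label", 1), 2)
assert len(block) == 2 + 3                          # each block has k + 3 instructions
assert block == ["#3", "#4", "#4", "#2", "#5"]
```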
It follows from Corollary 2 that, if a regular thread cannot be denoted by a closed PGA term that does not contain jump instructions $\#l$ with $l > k + 3$, it cannot be denoted by a closed PGA$_g^k$ term. Moreover, it is known that, for each $k \in \mathbb{N}$, there exists a closed PGA term for which there does not exist a closed PGA term not containing jump instructions $\#l$ with $l > k + 3$ that denotes the same thread (see e.g. [23], Proposition 3). Hence, we also have the following corollary:

**Corollary 3** For each $k \in \mathbb{N}$, there exists a closed PGA term $P$ for which there does not exist a closed PGA$_g^k$ term $P'$ such that $|P| = |\text{pgag2pga}(P')|$.

9 Conclusions

Program algebra is a setting suited for investigating single-pass instruction sequences. In this setting, we have shown that each behaviour that can be produced by a single-pass instruction sequence under execution can be produced by a single-pass instruction sequence without jump instructions if use can be made of Boolean register services. We consider this an interesting expressiveness result. An important variant of program algebra is obtained by leaving out jump instructions and adding labels and goto instructions. We have also shown that a bound on the number of labels restricts the expressiveness of this variant. Earlier expressiveness results on single-pass instruction sequences as considered in program algebra are collected in [23].

Program algebra does not provide a notation for programs that is intended for actual programming. However, to demonstrate that single-pass instruction sequences as considered in program algebra are suited for explaining programs in the form of assembly programs as well as programs in the form of structured programs, a hierarchy of program notations rooted in program algebra is introduced in [3].
One program notation belonging to this hierarchy, called PGLD$_g$, is a simple program notation, close to existing assembly languages, with labels and goto instructions. We remark that a projection from the set of all PGLD$_g$ programs to the set of all closed PGA$_g$ terms can easily be devised.

The idea that programs are in essence single-pass instruction sequences underlies the choice of the name program algebra. The name seems to imply that program algebra is suited for investigating programs in general. We do not intend to claim this generality, which in any case does not matter when investigating single-pass instruction sequences. The name program algebra might as well be used as a collective name for algebras that are based on any viewpoint concerning programs. To our knowledge, it is not common to use the name as such.

Most closely related to our work on instruction sequences is work on Kleene algebras (see e.g. [15–18, 24]), but programs are considered at a higher level in that work. For instance, programming features like jump instructions have never been studied there. In most work on computer architecture (see e.g. [1, 19–22]), instruction sequences are under discussion. However, the notion of instruction sequence is not subjected to systematic and precise analysis in the work concerned.

Acknowledgements This research was partly carried out in the framework of the Jacquard-project Symbiosis, which is funded by the Netherlands Organisation for Scientific Research (NWO). We thank Alban Ponse, colleague at the University of Amsterdam, and Stephan Schroevers, graduate student at the University of Amsterdam, for carefully reading a preliminary version of this paper and pointing out some flaws in it. Moreover, we thank an anonymous referee for suggesting improvements of the presentation of the paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
On semi-automated matching and integration of database schemas
Ünal Karakaş, Ö.

6 Empirical Validation of SASMINT

In order to measure the quality and performance of our approach to schema matching and integration, we have performed a number of experiments. These experiments consider and make use of schemas that include different types of schema heterogeneities, as addressed in Chapter 3. This chapter describes these experiments and their results. In this respect, in Section 6.1 we first address a number of related experiments performed by other main research efforts. A number of specific quality measures used for assessing the results of the schema matching and schema integration components of SASMINT are described next in Section 6.2. The main characteristics of the test schemas are addressed in Section 6.3. The setup and details of the performed experimental evaluations are given in Section 6.4. Our evaluation results are addressed in Sections 6.5 to 6.7. Finally, Section 6.8 concludes the chapter with a summary of the evaluation results. This chapter contains some research results which were previously published in the Journal of Knowledge and Information Systems (Unal & Afsarmanesh, 2010).

6.1 Schema Matching Evaluations in Related Research

The evaluation performed in most existing schema matching research does not use any benchmark; rather, each uses its own test schemas to evaluate specific aspects of the proposed system. A comparison of the different evaluations introduced for schema matching systems is provided in (Do et al., 2002). It specifies four different types of criteria to compare existing evaluations, including the evaluations of COMA (Do & Rahm, 2002), Cupid (Madhavan et al., 2001), Similarity Flooding (Melnik et al., 2002), SEMINT (Li & Clifton, 2000), and GLUE (Doan et al., 2002). These criteria include:

1) **Input**: Types of input data used, such as dictionaries used and schema specification.
2) **Output**: Information included in the match result, such as the mappings between different schema elements. 3) **Quality measures**: Measures used to assess the accuracy of the match result. 4) **Effort**: Types of needed manual effort measured in evaluations, such as pre-match and post-match efforts. As (Do et al., 2002) concludes, it is difficult to compare the results of different schema matching evaluations with each other, as these evaluations have been carried out in different ways and aimed at specific features. The authors further point out the need for a schema matching benchmark that would make it possible to compare the results of different research evaluations. Such a generic benchmark has, however, not yet been defined and/or considered in any research work. So far, only a benchmark for evaluating systems that match XML Schemas has been proposed, in (Duchateau et al., 2007). This benchmark, called XBenchMatch, consists of quality measures for both schema matching and schema integration. It also provides some evaluation of the matching performance. In XBenchMatch it is assumed that, for evaluating the quality of schema matching, mappings are given as XML path correspondences (e.g. person.person_name - person.lastName). Furthermore, for evaluating the quality of the “integrated schema”, a number of measures are introduced as part of XBenchMatch. However, these measures assume that the correct (ideal) integrated schema is also provided to XBenchMatch. The integrated schema generated as the output of the schema integration tool is then compared against this ideal integrated schema. Both of these schemas need to be in the XML Schema format. For the purpose of evaluating the quality of “schema matching”, XBenchMatch applies the four measures of Precision, Recall, F-measure, and Overall; most other evaluation approaches also apply some of these measures. A detailed description of these measures is provided in the next section.
In summary, most evaluation approaches consider only the quality of schema matching. Although the XBenchMatch prototype measures the quality of both schema matching and schema integration, it can only support the XML Schema format. Furthermore, there are some assumptions of XBenchMatch (e.g. the availability of the ideal integrated schema), as explained in the previous paragraph, which make the general use of this benchmark difficult. Since SASMINT works with relational schemas, and due to the other reasons addressed above, we could not apply XBenchMatch for the evaluation of SASMINT. Nevertheless, as addressed in detail in Section 6.2 below, nearly all measures introduced in other comparable research efforts are considered and applied for the validation of SASMINT.

6.2 Quality Measures Used for Evaluating SASMINT

The main goal of SASMINT is to automate the schema matching and integration processes to the extent possible. In other words, our main concern for SASMINT is its effectiveness, namely how accurately the system can identify the matching pairs and generate the integrated schema automatically. For this reason, we consider only quality and accuracy measures in our evaluations of the SASMINT system, and do not take into account time-performance-related measures and assessment. Performance measures depend on the underlying environment and the technologies used, and thus it is challenging to obtain neutral, objective evaluations. Furthermore, for schema matching and integration, performance is related not only to how fast the system works but also to how much time the user spends correcting the results manually. When the system produces more accurate results, the user needs to spend less manual time, and the overall performance increases. Therefore, the accuracy aim of SASMINT also improves the performance of its schema matching and schema integration.
We apply two types of quality measures in our experiments: 1) quality measures for schema matching, and 2) quality measures for schema integration. Details of these measures are provided in the next sub-sections. 6.2.1 Quality Measures for Schema Matching Similar to most other schema matching evaluations, we used the concepts of precision and recall from the information retrieval field (Cleverdon & Keen, 1966) for measuring the quality of schema matching. Precision (P) and Recall (R) are computed as follows: \[ P = \frac{x}{x+z} \quad \text{and} \quad R = \frac{x}{x+y} \] where \( x \) is the number of correctly identified similar strings (i.e. true positives), \( z \) is the number of strings found as similar while actually they were not (i.e. false positives), and \( y \) is the number of similar strings that the system failed to identify (i.e. false negatives). The higher the precision and recall values, the better the system. Although the precision and recall measures are widely used for a variety of evaluation purposes, neither of them alone can accurately assess the match quality. For instance, recall can be increased by returning all pairs as similar, but this increases the number of false positives and thus decreases the precision. Therefore, a measure combining precision and recall is better suited for accuracy evaluation. F-measure (Rijsbergen, 1979) is one such measure, combining recall and precision using the following formula; the higher the f-measure value, the better the system. \[ F = \frac{2}{\frac{1}{P} + \frac{1}{R}} \] Another such measure, called Overall, is proposed by (Melnik et al., 2002).
It differs from f-measure in that overall takes into account the amount of work needed to correct the results, namely to add the relevant matches that have not been discovered (false negatives) and to remove those matches that are incorrect but have been extracted by the matcher (false positives). Overall is always lower than f-measure, and if the precision is lower than 0.5, the overall value becomes negative (Melnik et al., 2002) (Do et al., 2002). Overall, represented by O and also called accuracy, is defined by the following formula; the higher the overall value, the better the system. \[ O = R \times (2 - \frac{1}{P}) \] As an example, assume that an automatic schema matching system correctly identifies 10 matches out of 25 real matches that can be identified manually by the user, and incorrectly identifies 4 other matches. In this case, the number of true positives (x) is 10, false negatives (y) is 25-10 = 15, and false positives (z) is 4. As a result, the system has the following precision, recall, f-measure, and overall values: \[ P = \frac{10}{10+4} = 0.71 \quad R = \frac{10}{10+15} = 0.40 \] \[ F = \frac{2}{\frac{1}{0.71}+\frac{1}{0.40}} = 0.51 \quad O = 0.40 \times (2 - \frac{1}{0.71}) = 0.24 \] 6.2.2 Quality Measures for Schema Integration Quality measures used for the assessment of schema integration in SASMINT benefit from the ideas presented in (Batini et al., 1986). The schema merging and restructuring processes described in (Batini et al., 1986) aim at improving the resulting schema with respect to the following three qualities: 1) **Completeness**: The merged or integrated schema must cover the concepts of all participating schemas. 2) **Minimality**: If the same concept is represented in more than one participating schema, then the integrated schema must contain only a single representation of this concept. In other words, redundancies must be eliminated.
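The four match-quality measures and the worked example above can be sketched as follows (a minimal illustration with our own helper names, not SASMINT code):

```python
# Sketch of the four match-quality measures, checked against the worked
# example: x = 10 true positives, y = 15 false negatives, z = 4 false positives.

def precision(x, z):
    # P = x / (x + z): fraction of returned matches that are correct
    return x / (x + z)

def recall(x, y):
    # R = x / (x + y): fraction of real matches that were found
    return x / (x + y)

def f_measure(p, r):
    # Harmonic mean of precision and recall
    return 2 / (1 / p + 1 / r)

def overall(p, r):
    # Overall ("accuracy") of Melnik et al.; goes negative when P < 0.5
    return r * (2 - 1 / p)

p, r = precision(10, 4), recall(10, 15)
print(round(p, 2), round(r, 2))       # 0.71 0.4
print(round(f_measure(p, r), 2))      # 0.51
print(round(overall(p, r), 2))        # 0.24
```

Note that for a perfect matcher (no false positives or negatives) all four values equal 1, which is the situation reported later for Schema Pair#6 under the "select max above threshold" strategy.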
3) **Understandability**: The resulting integrated schema must be easily understandable by the user. In the evaluation of SASMINT, we are interested in quantitative, objective measures. For this reason, we only consider measuring **completeness** and **minimality**, which produce objective results. The **understandability** of SASMINT, while not measured rigorously, was satisfactory in the empirical tests we performed in the lab. The two measures of completeness and minimality applied to SASMINT are inspired by (Batini et al., 1986). However, for each of these measures, we have introduced two additional measures, **key completeness** and **key minimality**, to validate the generated primary and foreign keys when measuring the quality of SASMINT’s schema integration approach. We believe that these added measures, which are missing from Batini’s approach, are required for a proper validation of schema integration. These measures are explained below: - **Completeness Measure**: In the resulting integrated schema, all concepts (i.e., tables and columns in a relational schema) of both the donor and recipient schemas must be covered. The completeness measure determines how far this goal has been achieved. Therefore, \( \forall c_i \in \{c_1, c_2, \ldots, c_k\} \), where \( c_i \) is a concept in the donor or recipient schema and \( k \) is the total number of concepts in the donor and recipient schemas, \( \exists c_j \in \{c_1, c_2, \ldots, c_l\} \), where \( c_j \) is a concept of the integrated schema with \( c_j \supseteq c_i \), and \( l \) is the number of concepts in the integrated schema.
Taking this definition as the base, the completeness of an integrated schema in SASMINT is measured using the following formula: \[ m_{\text{completeness}} = \frac{n_{\text{complete}}}{n_{\text{total}}}, \] where \( n_{\text{complete}} \) is the number of concepts of the recipient and donor schemas that are covered in the integrated schema and \( n_{\text{total}} \) is the total number of concepts involved in the donor and recipient schemas. Schema integration in SASMINT also handles primary and foreign keys, which will be referred to as “keys” from this point onward. Therefore, another completeness measure, called **key completeness**, is also defined for SASMINT to measure how many of the keys of the recipient and donor schemas are covered in the integrated schema. Given that $n_{\text{completeKey}}$ is the number of keys of the recipient and donor schemas that are covered in the integrated schema and $n_{\text{totalKey}}$ is the total number of keys involved in the donor and recipient schemas, the following formula measures the key completeness, $m_{\text{completenessKey}}$, of an integrated schema in SASMINT: $$m_{\text{completenessKey}} = \frac{n_{\text{completeKey}}}{n_{\text{totalKey}}}$$ - **Minimality Measure:** The amount of redundancy in the resulting integrated schema must be minimal to the extent possible. Each joint and/or related concept of the donor and recipient schemas shall appear only once in the integrated schema. Namely, if the donor and recipient schemas have common concepts, only one of them must be represented in the integrated schema. The minimality measure identifies how many redundant concepts exist in the integrated schema. Suppose that $\exists c_i \in \{c_1,c_2,..,c_k\}$, where $c_i$ is a concept of the donor schema and $k$ its total number of concepts, and $\exists c_j \in \{c_1,c_2,..,c_l\}$, where $c_j$ is a concept of the recipient schema and $l$ its total number of concepts.
If $\exists c_x,c_y \in \{c_1,c_2,..,c_m\}$, where $c_x$ and $c_y$ are concepts of the integrated schema and $m$ its total number of concepts, such that $c_i = c_j = c_x = c_y$, then either $c_x$ or $c_y$ is redundant. The following formula is used to calculate the amount of redundancy in an integrated schema: $$m_{\text{redundancy}} = \frac{n_{\text{redundant}}}{n_{\text{total}}}$$ where $n_{\text{redundant}}$ is the number of redundant concepts in the integrated schema and $n_{\text{total}}$ is the total number of concepts introduced in the donor and recipient schemas. Based on this formula, we derive the following formula to measure the minimality of the SASMINT integrated schema: $$m_{\text{minimality}} = 1 - \frac{n_{\text{redundant}}}{n_{\text{total}}}$$ Similar to the case of the completeness measure, another minimality measure, called key minimality, is also defined for SASMINT to determine if the resulting integrated schema is minimal considering its primary and foreign keys. Key minimality, $m_{\text{minimalityKey}}$, is measured using the following formula: $$m_{\text{minimalityKey}} = 1 - \frac{n_{\text{redundantKey}}}{n_{\text{totalKey}}}$$ where $n_{\text{redundantKey}}$ is the number of redundant primary and foreign keys in the integrated schema and $n_{\text{totalKey}}$ is the total number of such keys introduced in the donor and recipient schemas. ### 6.3 Test Schemas We have carried out the experimental evaluation of SASMINT using six pairs (donor and recipient) of “test schemas”; the characteristics of each pair are shown in Table 6.1, and the six pairs of schemas are presented in Appendix D. For the evaluation of schema matching, each pair was matched by SASMINT, and the results were then compared against the correct matches shown in Table 6.2. We carried out the same schema matching tests in COMA++ (a leading competitor) and compared its results with those of SASMINT.
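The integration-quality measures defined in Section 6.2.2 can be sketched as simple set computations (a minimal illustration with our own helper names and toy data, not SASMINT code; the key variants use the same formulas applied to key sets):

```python
# Sketch of the completeness and minimality measures of Section 6.2.2,
# computed from sets of concept names (tables and columns).

def completeness(covered, total):
    # m_completeness = n_complete / n_total
    return len(covered) / len(total)

def minimality(redundant, total):
    # m_minimality = 1 - n_redundant / n_total
    return 1 - len(redundant) / len(total)

# Toy input: 10 source concepts, of which 9 are covered in the integrated
# schema and 1 appears twice (i.e., one redundant representation).
source = {f"c{i}" for i in range(10)}
covered = source - {"c9"}
redundant = {"c3"}
print(completeness(covered, source))   # 0.9
print(minimality(redundant, source))   # 0.9
```

An ideal integration yields 1.0 for both measures: every donor and recipient concept is covered, and no concept is represented twice.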
On the other hand, for the evaluation of schema integration, the three pairs of schemas from the university domain (in Table 6.1) were integrated. Moreover, the first five schema pairs were used to evaluate the Sampler component. Details of these tests are provided in the next sections. **Table 6.1. Characteristics of Test Schemas** <table> <thead> <tr> <th>Test Schema Pair #</th> <th>Short Name</th> <th>Domain</th> <th>Donor/Recipient</th> <th>Number of Tables</th> <th>Number of columns</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>PO</td> <td>Purchase Order</td> <td>Recipient</td> <td>5</td> <td>27</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>5</td> <td>25</td> </tr> <tr> <td>2</td> <td>Hotel</td> <td>Hotel</td> <td>Recipient</td> <td>6</td> <td>21</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>5</td> <td>14</td> </tr> <tr> <td>3</td> <td>SDB</td> <td>Biology</td> <td>Recipient</td> <td>9</td> <td>21</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>9</td> <td>22</td> </tr> <tr> <td>4</td> <td>Univ1</td> <td>University</td> <td>Recipient</td> <td>9</td> <td>30</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>5</td> <td>22</td> </tr> <tr> <td>5</td> <td>Univ2</td> <td>University</td> <td>Recipient</td> <td>9</td> <td>38</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>7</td> <td>27</td> </tr> <tr> <td>6</td> <td>Univ3</td> <td>University</td> <td>Recipient</td> <td>5</td> <td>17</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Donor</td> <td>3</td> <td>10</td> </tr> </tbody> </table> We used schemas from four different domains: Schema Pair#1 contains two purchase order schemas that we generated ourselves. Schema Pair#2 consists of two hotel schemas; we modified the hotel schemas used for the MAPONTO (An et al., 2006) evaluation tests. Similarly, in Schema Pair#3, we used a modified version of the MAPONTO SDB schemas from the biology domain.
In Schema Pair#4, we used MAPONTO schemas from the university domain, again after modifying them. Schema Pair#5 consists of university schemas that we generated. As Schema Pair#6, we modified the test schemas of Similarity Flooding (Melnik et al., 2002) from the university domain. We intentionally selected three pairs from the university domain in order to also use them for the schema integration evaluation. Therefore, the schema integration tests integrated six schemas from the university domain. The correct matches presented in Table 6.2 were generated manually by ourselves; they constitute the reference for verifying the correctness of the automatic matchings.

**Table 6.2. Correct Matches for the Test Schema Pairs**

<table>
<thead>
<tr> <th>Schema Pair#</th> <th>Type</th> <th>Correct Matches</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>table-table</td> <td>purchase_order=po, customer=buyer, product=item</td> </tr>
<tr> <td>2</td> <td>table-table</td> <td>one_room=room, suite=room, town_house=room, num_beds=num_beds_attribute, on_floor=on_floor_attribute, smoking_preference=smoking_attribute</td> </tr>
<tr> <td></td> <td>column-column</td> <td>one_room:roomNum=room:roomNum, one_room:hasNumBedsAttribID=room:numBedsAttribID, one_room:hasOnFloorAttribID=room:onFloorAttribID, one_room:hasSmokingPreferenceAttribID=room:smokingOrNoAttribID, one_room:oneRoomID=room:roomID, suite:roomNum=room:roomNum, suite:hasNumBedsAttribID=room:numBedsAttribID, suite:hasOnFloorAttribID=room:onFloorAttribID, suite:hasSmokingPreferenceAttribID=room:smokingOrNoAttribID, town_house:townHouseID=room:roomID, townhouse:roomNum=room:roomNum, town_house:hasNumBedsAttribID=room:numBedsAttribID, town_house:hasOnFloorAttribID=room:onFloorAttribID, town_house:hasSmokingPreferenceAttribID=room:smokingOrNoAttribID, num_beds:numBedsID=num_beds_attribute:numBedsAttributeID, num_beds:numBedsAttrib=room:numBedsAttribID, on_floor:onFloorID=room:onFloorAttribID, on_floor:onFloorAttrib=room:onFloorAttribID, smoking_preference:smokingPreferenceID=smoking_attribute:smokingAttributeID, smoking_preference:smokingPreferenceAttrib=smoking_attribute:smokingAttribute</td> </tr>
<tr> <td>3</td> <td>table-table</td> <td>diagnoses=diagnoses, donor=donor, sample=sample, family_history=family_history, life_style_factors=life_style_factors, lab_test=lab_test, medications=medications, animal_donor=donor, human_donor=donor</td> </tr>
<tr> <td>4</td> <td>table-table</td> <td>course=course, student=student, faculty_member=academic_staff</td> </tr>
<tr> <td></td> <td>column-table</td> <td>faculty_member:researchInterest=areasOfInterest</td> </tr>
<tr> <td></td> <td>column-column</td> <td>faculty_member:email=academic_staff:email, faculty_member:faculty_member_id=academic_staff:academic_staff_id, faculty_member:personName=academic_staff:name, course:number=course:courseNumber, course:courseTitle=course:courseTitle, course:instructor=course:instructor, course:prerequisites=course:prerequisite, course:description=course:description, student:studentName=student:student_name, student:email=student:email, student:advisor=student:supervisor, student:student_id=student:student_id</td> </tr>
<tr> <td>6</td> <td>table-table</td> <td>professor=professor, student=student, workson=workson</td> </tr>
<tr> <td></td> <td>table-column</td> <td>address=professor:address</td> </tr>
<tr> <td></td> <td>column-column</td> <td>professor:Id=professor:Id, professor:Name=professor:Name, professor:Sal=professor:Salary, student:Name=student:Name, student:GPA=student:GradePointAverage, student:Yr=student:Year, workson:Name=workson:StudentName, workson:Proj=workson:Project</td> </tr>
</tbody>
</table>

6.4 Setup for the Experimental Evaluation

We compared the “schema matching” component of SASMINT against one of the state-of-the-art systems, COMA++ (Aumüeller et al., 2005).
We selected the COMA++ research prototype because it is the most complete schema matching tool developed so far, consisting of a library of a variety of matching algorithms and a sophisticated GUI. SASMINT and COMA++ are comparable, since they both support matching of relational schemas and aim at providing similar functionalities. Of course, not all algorithms or metrics that these two systems apply are the same; furthermore, how they combine the results of different algorithms is not the same either. The output of schema matching is given in the range [0-1] in both systems. However, it is not clear in what format COMA++ stores the results of schema matching internally; COMA++ has an internal repository where the results are stored, but how the results are represented there is not clear. Before starting the evaluation tasks, we inserted a number of abbreviations and their long forms into the abbreviation lists of both systems. One important difference between SASMINT and COMA++ is that SASMINT uses WordNet for semantic matching, whereas COMA++ requires the user to add all needed synonyms in the schema domains manually. Since WordNet might not contain all semantic relationships among the concepts of schemas, in order to make a fair comparison, we did not make any additions to COMA++'s default synonyms list. Furthermore, COMA++ uses only the synonymy relationship, whereas SASMINT also makes use of IS-A relationships as well as gloss overlaps, which are available in the WordNet dictionary. Representation of schemas through the GUI is also different in the two systems. COMA++ does not explicitly show foreign keys; instead of showing the foreign key column, it displays the table that is pointed to by the foreign key. However, in some cases this functionality of COMA++ does not work as expected.
Several different metrics or algorithms are considered and combined in both systems, as explained below: - **For SASMINT**: We selected the default strategy of SASMINT for combining the algorithms, which is their weighted sum with equal weights applied to each algorithm within each group of syntactic, semantic, and structure matching. Although assigning appropriate weights for each match task would give better results than this default approach, we decided to use SASMINT’s default strategy in order to make a fair comparison with COMA++. In other words, in real practice, the results of SASMINT would be better than what they are in these tests. Sampler can help the user to identify appropriate weights for the linguistic matching algorithms. The evaluation results reported in Section 6.5 and Appendix E are without applying the Sampler component of SASMINT. Results of experiments showing how Sampler can accomplish this improvement of result accuracy, i.e. how these weights affect the match results, are addressed in Section 6.6. - **For COMA++**: We used the default matching strategy of COMA++, which is called COMA. The COMA matcher combines the name, path, leaves, parents, and siblings matchers by averaging them. In their tests, this combination performed best, which is why we selected it. We used the default threshold, which is 0.5, in the experiments. For the selection of match results, we used two different approaches that we call “select all above threshold” and “select max above threshold”, as detailed below. Please note that the results of “select all above threshold” are presented in Section 6.5, while the results of “select max above threshold” are presented in Appendix E. 1) **Select All above Threshold:** Selecting all matched pairs that have a similarity above a certain threshold value. 2) **Select Max above Threshold:** Selecting the pairs with the maximum similarity.
In other words, whenever there is more than one concept matching a single concept in a schema, the one with the highest similarity is selected as the matching candidate. SASMINT and COMA++ use different strategies for selecting the maximal similar pairs. SASMINT’s approach is explained in Chapter 4. COMA++’s default strategy works as follows: when there is more than one match to the same concept, the one with the highest similarity is selected if the difference between the similarity values is more than 0.0080. We also carried out tests in order to validate the Sampler component, which helps to identify appropriate weights for each linguistic matching algorithm in the schema matching process. The results of these tests are presented in Section 6.6. The first five schema pairs (Schema Pairs #1, 2, 3, 4, and 5) were selected as the test schemas when evaluating Sampler. Although we compared SASMINT with COMA++ for the purpose of schema matching, we could not carry out a comparison of their schema integration functionality. COMA++ provides a simple schema merging functionality, but it is limited and not comparable to SASMINT’s schema integration. To the best of our knowledge, there is no other system supporting both schema matching and schema integration. Therefore, we evaluated the integration component of SASMINT alone. For this purpose, we used the six schemas from the university domain, introduced as Schema Pairs #4, 5, and 6. Since the aim of schema integration is integrating two schemas at a time, based on the correspondences between them, we corrected the wrong or missing matches after the schema matching step and then continued with the integration process. ### 6.5 Evaluation of Schema Matching – For “select all above threshold” strategy In the first experiment that we performed to evaluate SASMINT and compare it with COMA++, we used the “select all above threshold” strategy. We present the results of this experiment in Figures 6.1 through 6.8.
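The two result-selection strategies described in Section 6.4, together with an equal-weight combination of metric scores, can be sketched as follows (an illustrative reading with our own names and toy data, not the actual SASMINT or COMA++ code):

```python
# Sketch of the two result-selection strategies over a small similarity
# matrix in [0, 1], plus an equal-weight weighted sum standing in for the
# combined matcher output.

THRESHOLD = 0.5

def combined(scores, weights=None):
    # Weighted sum of individual metric scores; equal weights by default.
    weights = weights or [1 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

def select_all_above(sim, t=THRESHOLD):
    # Keep every pair whose combined similarity clears the threshold.
    return {(a, b) for (a, b), s in sim.items() if s >= t}

def select_max_above(sim, t=THRESHOLD):
    # For each source element, keep only its best-scoring candidate
    # (if that candidate clears the threshold).
    best = {}
    for (a, b), s in sim.items():
        if s >= t and (a not in best or s > best[a][1]):
            best[a] = (b, s)
    return {(a, b) for a, (b, _) in best.items()}

sim = {("customer", "buyer"): 0.8,
       ("customer", "item"): 0.6,
       ("product", "item"): 0.7}
print(sorted(select_all_above(sim)))  # all three candidate pairs
print(sorted(select_max_above(sim)))  # only (customer, buyer) and (product, item)
```

The sketch shows why "select all above threshold" trades precision for recall: every candidate above the threshold is returned, including the spurious (customer, item) pair, whereas "select max above threshold" keeps one candidate per element.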
We provide detailed explanations of the four comparison results of precision, recall, f-measure, and overall in Sections 6.5.1 to 6.5.4. Although the results gained from applying this strategy are worse than those of the “select max above threshold” strategy, this strategy is important when there is a need to suggest multiple candidates for each schema element and leave it to the user to identify the correct match among the alternatives. Namely, instead of proposing only one match candidate for each schema element, which could be incorrect, the system suggests all possible match candidates, which makes it easier for the user to determine the final match result. 6.5.1 Evaluation of Schema Matching Using Precision Precision shows how correctly the system works. Precision values for COMA++ and SASMINT are shown in Figure 6.1 and Figure 6.2, respectively. Since in the “select all above threshold” strategy all match pairs with similarity above the threshold are selected, the number of false positives was high for some schema pairs. Especially for schemas that consisted of element names with more than one token, precision was low. In our test cases, these schemas are the purchase order schemas (Schema Pair#1), the university schemas of MAPONTO (Schema Pair#4), and the university schemas that we generated ourselves (Schema Pair#5). In these cases, the low precision was due to the fact that for element names containing similar tokens, although the whole names were different, the final similarity result was usually above the threshold. Furthermore, the systems interpreted and treated all tokens equally, while some tokens had little or no effect on the meaning. For example, “deliverDate” and “deliver_zip” were identified as similar because both names contained the token “deliver”.
However, the first one is the name of the column that contains the date of delivery, whereas the second one is the name of the column that contains the zip code information. In such situations, both SASMINT and COMA++ found similarity values around 0.5. These cases could have been prevented by raising the threshold value, but then some correct matches would also have been missed. When precision is considered, SASMINT performed almost 9 times better than COMA++ for the Hotel schemas test case. For the other schemas, except for Schema Pair#6 from the university domain, for which COMA++ performed slightly better (around 1.05 times), SASMINT performed on average 2 times better than COMA++. The precision of SASMINT was on average 0.58, whereas that of COMA++ was 0.26. This result was caused by the high number of false positives identified by COMA++. In other words, COMA++ identified a high number of irrelevant matches, which can become a bigger problem when the schemas being compared are large. 6.5.2 Evaluation of Schema Matching Using Recall Recall shows how well the system finds all true matches, and thus it indicates the completeness of the applied system. The average recall for COMA++ was 0.92, whereas for SASMINT it was 0.85. Figures 6.3 and 6.4 show the recall values for COMA++ and SASMINT, respectively. For the UNIV-3 schemas, both had a recall value of 1.0, and for the SDB schemas, SASMINT was 1.14 times better than COMA++. For the remaining schemas, which were the purchase order, hotel, UNIV-1, and UNIV-2 schemas, COMA++ achieved slightly (on average 1.17 times) higher recall than SASMINT. However, it should be noted that this happened at the expense of very low precision values for COMA++. That means that, in order to achieve just slightly higher recall values, COMA++ sacrificed precision, resulting in very low precision values for these test cases, as indicated in Figures 6.1 and 6.2. This is due to the fact that there is an inverse relationship between precision and recall.
Since COMA++ tries to find all possible matches, it also identifies a large number of false positive matches, which decrease the precision. SASMINT missed some of the correct matches, mostly due to the low semantic similarity values it computed for some name pairs, such as (product, item) and (suite, room). Especially the gloss-based measure was not as successful as expected. Since the latest version of WordNet (3.0) is not yet available for the Windows operating system, we had to use the previous version (2.0) of WordNet. We think that when the new version is ready, WordNet will provide more types of semantic relationships, and therefore the semantic similarity values for both the path-based and gloss-based measures of SASMINT will improve considerably. ### 6.5.3 Evaluation of Schema Matching Using F-Measure As stated before, f-measure is used to combine the results of precision and recall; the higher the f-measure value, the better the quality of the system. Most evaluation experiments in fact use f-measure, and not the individual precision and recall values, as the measure to compare systems. When f-measure is considered, the difference between SASMINT and COMA++ becomes clearer. This is due to the fact that f-measure considers both precision and recall, and although the recall values for COMA++ were slightly higher than those for SASMINT, the precision of SASMINT was much better than that of COMA++, which results in higher f-measure values for SASMINT. As is clear from Figures 6.5 and 6.6, the f-measure values for SASMINT were on average 2.2 times higher than those for COMA++ for all schema pairs, except the last schema pair (UNIV-3), for which the two systems achieved almost the same value. What can be inferred from these results is that, considering the f-measure evaluation, the quality of results achieved by the SASMINT system is much higher than that of COMA++.
6.6 Evaluation of Schema Matching with Sampler In order to evaluate the Sampler component, we carried out tests using the first five schema pairs introduced in Table 6.1. As explained before, Sampler is used to compute the weights only for the linguistic matching algorithms. In test cases where the element names from the two schemas were highly similar, we set the threshold to a value higher than 0.5. In the other cases, we used the default threshold value, which was 0.5. After setting the threshold value, we performed the tests using both equal weights for the linguistic matching algorithms and the weights suggested by Sampler for these algorithms. We used the “select max above threshold” strategy for the Sampler tests. Furthermore, we did not use the last schema pair (Schema Pair#6) in these tests, because for this pair the precision, recall, f-measure, and overall values were already identified as 1 in the tests using the “select max above threshold” strategy with equal weights. Details of the tests with the Sampler component are explained below. ### 6.6.1 Test with Purchase Order Schemas-PO (Schema Pair#1) In this test, we used the default threshold value, which was 0.5. We provided the similar pairs shown in Table 6.3 to Sampler, which computed the weights for the semantic similarity algorithms shown in the same table. The results for precision, recall, f-measure, and overall were already high before the Sampler component was used. With the use of Sampler, the (product, item) pair was correctly identified as similar, which was a false negative before. As a result, Sampler helped to increase the values of recall, f-measure, and overall, as shown in Figure 6.9. Precision was 1 both before and after the use of the Sampler component. **Table 6.3.
Similar Pairs and Computed weights for Schema Pair#1**

<table> <thead> <tr> <th>Similar Pairs</th> <th>Computed Weights</th> </tr> </thead> <tbody> <tr> <td>Semantically Similar Pairs:</td> <td></td> </tr> <tr> <td>customer - buyer</td> <td>Wu and Palmer: 1.0</td> </tr> <tr> <td>product - item</td> <td>Gloss: 0.0</td> </tr> </tbody> </table>

Fig. 6.7. Overall values for COMA++ - select all above threshold strategy

Fig. 6.8. Overall values for SASMINT - select all above threshold strategy

### 6.6.2 Test with Hotel Schemas-Hotel (Schema Pair#2)

In this test, we set the threshold value to 0.7. We provided the similar pairs shown in Table 6.4 to Sampler, which computed the weights for the syntactic similarity algorithms, also shown in Table 6.4. When SASMINT used these weights for matching the hotel schemas, the results for recall, f-measure, and overall were on average 1.75 times (57%) better than in the case without Sampler. This result can be seen in Figure 6.10.

Table 6.4. Similar Pairs and Computed weights for Schema Pair#2

<table> <thead> <tr> <th>Similar Pairs</th> <th>Computed Weights</th> </tr> </thead> <tbody> <tr> <td>Syntactically Similar Pairs:</td> <td></td> </tr> <tr> <td>smoking_Preference_Attrib - smoking_Attrib</td> <td>Levenshtein: 0.0</td> </tr> <tr> <td>smoking_Preference_ID - smoking_Attribute_ID</td> <td>Jaccard: 0.11</td> </tr> <tr> <td>hasSmokingPreferenceAttribID - smokingOrNoAttribID</td> <td>LCS: 0.20</td> </tr> <tr> <td>on_floor - on_Floor_attribute</td> <td>Monge-Elkan: 0.22</td> </tr> <tr> <td>numBedsID - numBedsAttributeID</td> <td>Jaro: 0.22</td> </tr> <tr> <td></td> <td>TF*IDF: 0.25</td> </tr> </tbody> </table>

### 6.6.3 Test with Biology Schemas-SDB (Schema Pair#3)

The two schemas (donor and recipient) in Schema Pair#3 use the same names for most of their schema elements. We set the threshold value to 0.9 and provided the two similar pairs (animal_donor - donor) and (human_donor - donor).
Sampler computed 1.0 for the weight of the Monge-Elkan distance metric and 0.0 for the other syntactic similarity metrics, as shown in Table 6.5. When we ran SASMINT with these weights, the results were as shown in Figure 6.11. There was a slight decrease in precision when Sampler was used. This was due to the two false positives (donor, donor_visit) and (donorID, donorVisitID). However, recall, f-measure, and overall were all improved.

Table 6.5. Similar Pairs and Computed weights for Schema Pair#3

<table> <thead> <tr> <th>Similar Pairs</th> <th>Computed Weights</th> </tr> </thead> <tbody> <tr> <td>Syntactically Similar Pairs:</td> <td>Levenshtein: 0.0</td> </tr> <tr> <td>animal_donor - donor</td> <td>Jaccard: 0.0</td> </tr> <tr> <td>human_donor - donor</td> <td>LCS: 0.0</td> </tr> <tr> <td></td> <td>Jaro: 0.0</td> </tr> <tr> <td></td> <td>TF*IDF: 0.0</td> </tr> <tr> <td></td> <td>Monge-Elkan: 1.0</td> </tr> </tbody> </table>

### 6.6.4 Test with University Schemas-UNIV1 (Schema Pair#4)

For the test with this schema pair, we set the threshold value to 0.7. We provided the syntactically similar pairs shown in Table 6.6. As shown in Figure 6.12, precision was slightly better before, whereas the recall, f-measure, and overall values were higher with the use of Sampler. Since we provided Sampler with (personName, name) as a syntactically similar pair, the personName column of the faculty_member table was successfully matched to the name column of the academic_staff table. However, at the same time, it was also incorrectly matched to the name column of the admin_staff table. This, in turn, increased the number of false positives and thus slightly decreased the precision. However, since Sampler helped to identify a larger number of similar pairs, recall was much better than in the case without Sampler. As a result, f-measure and overall were better with the use of Sampler, as shown in Figure 6.12. Table 6.6.
Similar Pairs and Computed weights for Schema Pair#4

<table> <thead> <tr> <th>Similar Pairs</th> <th>Computed Weights</th> </tr> </thead> <tbody> <tr> <td>Syntactically Similar Pairs:</td> <td>Levenshtein: 0.0</td> </tr> <tr> <td>number - courseNumber</td> <td>Jaccard: 0.0</td> </tr> <tr> <td>personName - name</td> <td>Jaro: 0.15</td> </tr> <tr> <td>researchInterest - areasOfInterest</td> <td>LCS: 0.15</td> </tr> <tr> <td></td> <td>TF*IDF: 0.3</td> </tr> <tr> <td></td> <td>Monge-Elkan: 0.4</td> </tr> </tbody> </table>

### 6.6.5 Test with University Schemas-UNIV2 (Schema Pair#5)

In this test, we set the threshold value to 0.7 and provided the pairs shown in Table 6.7 to Sampler. The weights computed by Sampler for the syntactic similarity algorithms are presented in Table 6.7. Similar to the case addressed in Section 6.6.4, with the use of Sampler the precision decreased because some new false positive pairs were introduced. For example, the university_name column of the university table and the name column of the university_student table were identified as similar, which was incorrect. However, since the value of recall was much higher when Sampler was used, f-measure and overall increased, as also presented in Figure 6.13. Table 6.7.
Similar Pairs and Computed weights for Schema Pair#5

<table> <thead> <tr> <th>Similar Pairs</th> <th>Computed Weights</th> </tr> </thead> <tbody> <tr> <td>Syntactically Similar Pairs:</td> <td>Levenshtein: 0.0</td> </tr> <tr> <td>academic_semester - semester</td> <td>Jaccard: 0.0</td> </tr> <tr> <td>course_id - academic_course_id</td> <td>Jaro: 0.0</td> </tr> <tr> <td>course_instructor - academic_course_instructor</td> <td>LCS: 0.18</td> </tr> <tr> <td>staff_name - name</td> <td>Monge-Elkan: 0.35</td> </tr> <tr> <td>course - academic_course</td> <td>TF*IDF: 0.47</td> </tr> </tbody> </table>

## 6.7 Evaluation of Schema Integration Performance

In order to evaluate the schema integration component of SASMINT, we used schema pairs from the university domain. The three university schema pairs introduced in Table 6.1, namely Schema Pairs #4, #5, and #6, are used for this purpose. Note that Appendix F provides details of the evaluation steps. Figures 6.14 through 6.16 show the elements of these pairs. SASMINT integrates two schemas at a time, thereby incrementally generating the final integrated schema. The steps we followed for integrating these six schemas are explained below. We chose to start with the larger schemas first, namely Schema Pair #5. ![Fig. 6.14.
Schema Pair#5 (UNIV-2)](image-url) ### First Schema - **academic_programme** - academic_programme_ID - ACADEMIC_YEAR - ACADEMIC_SEMESTER - PROGRAM_REF - **academic_staff_member** - academic_staff_member_ID - STAFF_NAME - STAFF_EMAIL - STAFF_PHONE - STAFF_FAX - STAFF_IDENTIFICATION_NUM - STAFF_BIRTHDATE - **campus** - campus_ID - CAMPUS_NAME - CAMPUS_LOCATION - UNVCAMPUS - **course** - course_ID - COURSE_NAME - COURSE_CREDITS - COURSE_PROVIDER - COURSE_INSTRUCTOR - **department** - department_ID - DEPT_NAME - UNIVERSITY_REF - **faculty** - faculty_ID - FACULTY_NAME - DEAN_REF - UNIVERSITY_REF - **program** - program_ID - PROGRAM_NAME - PROGRAM_DESC - **registration** - registration_ID - REGISTRATION_ACADEMICSTAFFMEMBER_REF - REGISTRATION_COURSE_REF - REGISTRATION_ACADEMICPROGRAMME_REF - **university** - university_ID - UNIVERSITY_NAME - UNIVERSITY_WEBSITE - UNIVERSITY_ESTABLISHMENT_DATE ### Second Schema - **academic_course** - academic_course_ID - ACADEMIC_COURSE_NAME - ACADEMIC_COURSE_CREDITS - ACADEMIC_COURSE_PROVIDER - ACADEMIC_COURSE_INSTRUCTOR - **academic_institution** - academic_institution_ID - ACADEMIC_INSTITUTION_NAME - ACADEMIC_INSTITUTION_WEBSITE - **academic_programme** - academic_programme_ID - YEAR - SEMESTER - PROGRAM_REF - **department** - department_ID - DEPT_NAME - UNIVERSITY_REF - **faculty** - faculty_ID - FACULTY_NAME - DEAN_REF - UNIVERSITY_REF - **program** - program_ID - PROGRAM_NAME - PROGRAM_DESC - **university_academic_instructor** - university_academic_instructor_ID - NAME - ELECTRONIC_MAIL - OFFICE_ADDRESS - TELEPHONE - **university_student** - university_student_ID - NAME - ELECTRONIC_MAIL - TELEPHONE ![Fig. 6.13. 
Results of the Test with Schema Pair#5](image-url)

**Step-1: First Schema of Schema Pair#5 + Second Schema of Schema Pair#5**

In the first step of the schema integration test, SASMINT integrated the two schemas of Schema Pair#5, shown in Figure 6.14, resulting in the integrated schema whose elements are shown in Figure 6.17. During the integration process, one redundancy was automatically generated, namely the “UNIVERSITY_REF” column of the “department” table. Therefore, the minimality measure was 0.99, which is a substantial automated achievement. When key minimality is considered, one redundant foreign key relationship was defined on the same “UNIVERSITY_REF” column, which resulted in a key minimality of 97%. Although the resulting integrated schema had one redundant element and foreign key, it covered all the elements and keys of the two source schemas. Therefore, the result is considered 100% complete and 100% key complete, which is again a substantial automated achievement. Further details of this step are provided in Appendix F.

**Step-2: Integrated Schema#1 + First Schema of Schema Pair#6**

In this step, SASMINT integrated Integrated Schema#1 and the first schema of Schema Pair#6, generating Integrated Schema#2. Figure 6.18 shows only the newly added tables and those tables whose columns changed. Due to the redundant “UNIVERSITY_REF” column and the foreign key defined on it, the minimality measure was 0.99 and the key minimality measure was 0.97. However, since all the concepts and keys of the three schemas integrated so far (the first and second schemas of Schema Pair#5 and the first schema of Schema Pair#6) were covered in the integrated schema, completeness and key completeness were again 100%. **Fig.
6.17.** Elements of Integrated Schema#1 - **INTEGRATED_1:university** - university_ID (PK), UNIVERSITY_NAME, UNIVERSITY_ESTABLISHMENT_DATE, UNIVERSITY_WEBSITE - **INTEGRATED_1:program** - program_ID (PK), PROGRAM_NAME, PROGRAM_DESC - **INTEGRATED_1:academic_programme** - academic_programme_ID (PK), ACADEMIC_YEAR, ACADEMIC_SEMESTER, PROGRAM_REF - **INTEGRATED_1:department** - department_ID (PK), DEPT_NAME, UNIVERSITY_REF(FK), FACULTY_REF(FK) - **INTEGRATED_1:course** - course_ID (PK), COURSE_NAME, COURSE_CREDITS, COURSE_PROVIDER (FK), COURSE_INSTRUCTOR(FK) - **INTEGRATED_1:academic_staff_member** - academic_staff_member_ID (PK), STAFF_NAME, STAFF_IDENTIFICATION_NUM, STAFF_FAX, STAFF_BIRTHDATE, OFFICE_ADDRESS, STAFF_EMAIL, STAFF_PHONE - **INTEGRATED_1:campus** - campus_ID (PK), CAMPUS_NAME, CAMPUS_LOCATION, UNVCAMPUS (FK) - **INTEGRATED_1:faculty** - faculty_ID (PK), FACULTY_NAME, DEAN_REF(FK), UNIVERSITY_REF (FK) - **INTEGRATED_1:registration** - registration_ID (PK), REGISTRATION_ACADEMICSTAFFMEMBER_REF(FK), REGISTRATION_COURSE_REF(FK), REGISTRATION_ACADEMICPROGRAMME_REF(FK) - **INTEGRATED_1:university_student** - university_student_ID (PK), NAME, ELECTRONIC_MAIL, TELEPHONE **Fig. 6.18.** New Elements of Integrated Schema#2 - **INTEGRATED_2:payrate** - Rank (PK), HrRate - **INTEGRATED_2:workson** - Name, Proj, Hrs, ProjRank (FK) - **INTEGRATED_2:address** - Id (PK), Street, City, PostalCode - **INTEGRATED_2:academic_staff_member** - academic_staff_member_ID (PK), STAFF_NAME, STAFF_IDENTIFICATION_NUM, STAFF_FAX, STAFF_BIRTHDATE, STAFF_EMAIL, STAFF_PHONE, Sal, addr(FK) - **INTEGRATED_2:university_student** - university_student_ID (PK), NAME, ELECTRONIC_MAIL, TELEPHONE, GPA, Yr **Step-3: Integrated Schema#2 + Second Schema of Schema Pair#6** At Step-3, SASMINT generated Integrated Schema#3, by integrating the Integrated Schema#2 and the second schema of the Schema Pair#6. 
The only change in the new integrated schema was the addition of one new column, “Expenses”, to the “workson” table. Due to the redundant “UNIVERSITY_REF” column and the foreign key defined on it, the resulting schema was again 99% minimal and 97% key minimal. However, it was again 100% complete considering both the concepts and the keys.

**Step-4: Integrated Schema#3 + First Schema of Schema Pair#4**

In Step-4, SASMINT integrated Integrated Schema#3 and the first schema of Schema Pair#4, resulting in Integrated Schema#4. Figure 6.19 shows only the newly added tables and those tables whose columns changed in this step. Minimality and key minimality were 0.99 and 0.98 respectively, because of the redundant “UNIVERSITY_REF” column and the foreign key. Considering the concepts, the schema was 100% complete, but since three foreign keys were missed, as explained in Appendix F, the key completeness was 0.95 after this step.

**Step-5: Integrated Schema#4 + Second Schema of Schema Pair#4**

In the final step of schema integration, SASMINT integrated Integrated Schema#4 and the second schema of Schema Pair#4. The final integrated schema is called Integrated Schema#5. Figure 6.20 shows the elements of the final integrated schema. This schema was 99% minimal and 99% key minimal. The redundancy was again due to the “UNIVERSITY_REF” column and the foreign key defined on it. Although all the concepts of the six integrated schemas were covered in the final schema, resulting in 100% completeness, two more foreign keys were missed in this step, in addition to the ones missed in the previous step. Therefore, the key completeness was 0.93, as explained in detail in Appendix F.

## 6.8 Conclusions

This chapter presents the results of our evaluation of the SASMINT system. First, the state of the art in schema matching evaluation is addressed, and then the quality measures that were applied during our experiments are explained.
After that, the set of six test schemas used for evaluating the SASMINT system is introduced. Since no benchmark existed for relational schema matching systems, we generated our own test schemas, a number of which were identical or modified versions of schemas used in the evaluations of similar matching systems in related research. After this introductory part, the results of our experiments are presented. Schema matching in SASMINT was compared against a leading state-of-the-art schema matching system, COMA++. A brief summary of this comparison, based on the input, the combination of matchers, the output, the persistence store, and the quality criteria, is given below:

- **Input:** SASMINT accepts relational schemas, bearing in mind that most data are still stored in relational databases and the corresponding schemas are represented in relational DDL. As stated in Chapter 7 on Future Steps, it may be possible to extend SASMINT to also support matching of XML Schema. COMA++ accepts relational schemas, XML Schema, and OWL as input to its matching procedure. In addition to the schemas to be matched, SASMINT uses a number of auxiliary inputs. A file containing a number of well-known abbreviations is used; users can update (extend) this file with other abbreviations from the domain of the schemas. As the second auxiliary input, SASMINT uses WordNet for identifying semantic relationships between schema elements. Similar to SASMINT, COMA++ also utilizes a user-modifiable list of abbreviations. On the other hand, in order to detect synonymy relationships, COMA++ requires a user-provided list of synonyms. The disadvantage of this approach is that users are required to continuously update this list with pairs of synonyms from the domain of the schemas.
- **Combination of Matchers:** SASMINT and COMA++ both provide a library of matchers.
SASMINT provides the possibility of user-assigned weights for the different algorithms, together with a Sampler component, which helps the user to identify the appropriate weight for each linguistic matching metric. COMA++, on the other hand, supports different alternatives for combining, aggregating, and selecting match results from different metrics, but the user has to decide which approaches to apply. This makes it difficult for an inexperienced user to identify the best combination.
- **Output:** The output of a match system is a mapping, indicating which elements of the recipient and donor schemas correspond to each other. Both SASMINT and COMA++ represent these correspondences using a value between 0 and 1. Furthermore, they both support 1-to-1, 1-to-n, n-to-1, and m-to-n types of matches.
- **Persistence Store for the Results:** For matching and integration of schemas, SASMINT stores the results based on SDML. This allows the results to be used for federated query processing and for the decomposition of queries to be sent to different local schemas, as well as for the formal representation of the semi-automatically generated integrated schema derived from the recipient and donor schemas. COMA++ has an internal repository for the results, but users cannot see in which format the results are stored, and it is not clear how to use these results outside of the system.
- **Quality of Schema Matching:** The quality of schema matching supported by SASMINT and COMA++ was compared using their default settings for the combination of the different matchers. SASMINT’s default approach for combining linguistic and structure matching metrics calculates their weighted sum, in which the linguistic metrics have a higher impact (0.7) on the final result than the structural ones (0.3). In the evaluation between the two systems, however, each metric within the linguistic matching and structure matching groups was given equal weight.
Namely, in order to make a fair comparison with COMA++, we did not give higher weights to the metrics that could be more appropriate for some schema types. The COMA matcher combines the name, path, leaves, parents, and siblings matchers by averaging them. We updated the abbreviation lists of both systems with new abbreviations related to the schemas. However, we did not update the synonyms list of COMA++, because manually adding complex semantic correspondences to this list would also lead to an unfair comparison. We carried out experiments based on two types of result selection strategies, which we call: 1) select all above threshold and 2) select max above threshold. Both systems performed better with the second strategy. When the first strategy was used, the results for COMA++ were worse than those of SASMINT. With the second strategy, the systems performed the same for some schema pairs; for the remaining pairs, SASMINT performed better than COMA++. In order to evaluate the Sampler component of SASMINT, we performed tests using the same set of test schemas. For this purpose, after setting the threshold value, we provided the Sampler component with a number of similar pairs from the two schemas being compared. We then performed schema matching using both the weights computed by Sampler and equal weights for the linguistic matching algorithms. In some cases, using Sampler’s computed weights resulted in an increase in the number of false positives, and thus a decrease in precision. However, in every such case, since Sampler identified a higher number of correct matches by assigning appropriate weights, the corresponding recall was much better than in the case where Sampler was not used, which still resulted in an increase in the f-measure and the overall performance of SASMINT. Using Sampler was therefore shown to improve the quality of the match results.
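The two result selection strategies compared in the experiments can be sketched as follows. The function names and the dictionary-of-scores input are illustrative, not SASMINT's actual API; the input maps (source element, target element) pairs to combined similarity scores.

```python
def select_all_above_threshold(scores, threshold=0.5):
    """Keep every candidate pair whose combined similarity score
    reaches the threshold."""
    return {pair for pair, s in scores.items() if s >= threshold}

def select_max_above_threshold(scores, threshold=0.5):
    """For each source element, keep only its best-scoring target,
    provided that score reaches the threshold."""
    best = {}  # source element -> (best target, best score)
    for (src, tgt), s in scores.items():
        if s >= threshold and (src not in best or s > best[src][1]):
            best[src] = (tgt, s)
    return {(src, tgt) for src, (tgt, _) in best.items()}
```

The first strategy maximizes recall at the cost of many false positives, which is consistent with the low precision observed for COMA++ under "select all above threshold"; the second trades some recall for much better precision.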
After evaluating the schema matching approach of SASMINT against the leading system COMA++, we evaluated the schema integration approach of SASMINT. Since COMA++’s schema merging feature is very primitive and there was no other system at the level of SASMINT that can use its schema matching results for semi-automatic schema integration, we unfortunately could not compare the schema integration approach of SASMINT with any other system. Nevertheless, we performed the incremental integration of six schemas in order to evaluate SASMINT against the state-of-the-art criteria defined for automated schema integration. During the empirical evaluation, SASMINT achieved high minimality and completeness percentages for its integrated schemas, using a procedure that applies user-validated matches. To sum up, schema matching and schema integration are two challenging tasks. Different types of schema heterogeneities, such as semantic and structural ones, make these tasks difficult to achieve automatically, and even a semi-automatic system might perform badly on such schemas. Evaluation data sets therefore need to be carefully selected to cover different types of schema heterogeneities. Furthermore, in order to fairly evaluate schema matching and schema integration systems, measures need to be carefully selected and defined to consider all aspects of a system, such as the quality of the match and integration results, how the results are represented, how easily these results can be modified or corrected by the user, and whether it is possible to use these results in other processes, like query decomposition in federated query processing.
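One plausible reading of the minimality and completeness measures reported for the integration steps is sketched below in Python; the exact formulas, including the key-minimality and key-completeness variants, are defined in Appendix F and may differ in detail.

```python
def minimality(num_elements: int, num_redundant: int) -> float:
    """Fraction of the integrated schema's elements that are
    non-redundant (1.0 means no redundancy at all)."""
    return (num_elements - num_redundant) / num_elements

def completeness(source_concepts: set, integrated_concepts: set) -> float:
    """Fraction of the source schemas' concepts that are covered
    by the integrated schema (1.0 means nothing was lost)."""
    covered = source_concepts & integrated_concepts
    return len(covered) / len(source_concepts)
```

Under this reading, one redundant column among roughly a hundred integrated elements yields the reported minimality of 0.99, while full coverage of the source concepts yields 100% completeness.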
The following full text is a preprint version which may differ from the publisher's version. For additional information about this publication, follow this link: http://hdl.handle.net/2066/132716. Please be advised that this information was generated on 2017-10-01 and may be subject to change.

# Model-based Programming Environments for Spreadsheets

Jácome Cunha¹³, João Saraiva¹, and Joost Visser²

¹ HASLab / INESC TEC, Universidade do Minho, Portugal {jacome, jas}@di.uminho.pt
² Software Improvement Group & Radboud University Nijmegen, The Netherlands j.visser@sig.eu
³ Escola Superior de Tecnologia e Gestão de Felgueiras, IPP, Portugal

**Abstract.** Although spreadsheets can be seen as a flexible programming environment, they lack some of the concepts of regular programming languages, such as structured data types. This can lead the user to edit the spreadsheet in a wrong way and perhaps cause corrupt or redundant data. We devised a method for the extraction of a relational model from a spreadsheet and the subsequent embedding of the model back into the spreadsheet to create a model-based spreadsheet programming environment. The extraction algorithm is specific to spreadsheets, since it considers particularities such as layout and column arrangement. The extracted model is used to generate formulas and visual elements that are then embedded in the spreadsheet, helping the user to edit data in a correct way. We present preliminary experimental results from applying our approach to a sample of spreadsheets from the EUSES Spreadsheet Corpus.

## 1 Introduction

Developments in programming languages are changing the way in which we construct programs: naive text editors are now replaced by powerful programming language environments which are specialized for the programming language under consideration and which help the user throughout the editing process.
Helpful features like highlighting keywords of the language or maintaining a beautified indentation of the program being edited are now provided by several text editors. Recent advances in programming languages extend such naive editors to powerful language-based environments [1–4]. Language-based environments use knowledge of the programming language to provide the users with more powerful mechanisms to develop their programs. This knowledge is based on the structure and the meaning of the language. To be more precise, it is based on the syntactic and (static) semantic characteristics of the language. Having this knowledge about a language, the language-based environment is not only able to highlight keywords and beautify programs, but it can also detect features of the programs being edited that, for example, violate the properties of the underlying language. Furthermore, a language-based environment may also give information to the user about properties of the program under consideration. Consequently, language-based environments guide the user in writing correct and more reliable programs. Spreadsheet systems can be viewed as programming environments for non-professional programmers. These so-called end-user programmers vastly outnumber professional programmers [5]. In this paper, we propose a technique to enhance a spreadsheet system with mechanisms to guide end-users to introduce correct data. A background process adds formulas and visual objects to an existing spreadsheet, based on a relational database schema. To obtain this schema, we follow the approach used in language-based environments: we use the knowledge about the data already existing in the spreadsheet to guide end-users in introducing correct data. The knowledge about the spreadsheet under consideration is based on the meaning of its data that we infer using data mining and database normalization techniques. 
Data mining techniques specific to spreadsheets are used to infer functional dependencies from the spreadsheet data. These functional dependencies define how certain spreadsheet columns determine the values of other columns. Database normalization techniques, namely the use of normal forms [6], are used to eliminate redundant functional dependencies and to define a relational database model. Knowing the relational database model induced by the spreadsheet data, we construct a new spreadsheet environment that not only contains the data of the original one, but also includes advanced features which provide information to the end-user about correct data that can be introduced. We consider three types of advanced features: auto-completion of column values, non-editable columns, and safe deletion of rows. When the end-user introduces the value of a primary key, the auto-completion feature automatically fills in the columns that functionally depend on it. The non-editable columns feature prevents the end-user from editing columns that depend on a primary key value; note that such columns are automatically filled in upon selecting a primary key value, by using the auto-completion feature. Finally, the safe deletion of rows feature warns the end-user about the information that would be lost by deleting a selected row. Our techniques not only work for database-like spreadsheets, like the example we use throughout the paper, but they also work for realistic spreadsheets defined in other contexts (for example, inventory, grades, or modeling). In this paper we present our first experimental results, obtained by considering a large set of spreadsheets included in the EUSES Spreadsheet Corpus [7]. This paper is organized as follows. Section 2 presents an example used throughout the paper. Section 3 presents our algorithm to infer functional dependencies and how to construct a relational model. Section 4 discusses how to embed assisted editing features into spreadsheets. A preliminary evaluation of our techniques is presented in Section 5. Section 6 discusses related work and Section 7 concludes the paper.

## 2 A Spreadsheet Programming Environment

In order to present our approach we shall consider the following well-known example, taken from [8] and modeled in a spreadsheet as shown in Figure 1. This spreadsheet contains information related to a housing renting system. It gathers information about clients, owners, properties, prices, and renting periods. The name of each column gives a clear idea of the information it represents. We extend this example with three additional columns, named days (that computes the total number of renting days by subtracting the column rentStart from rentFinish), total (that multiplies the
2 A Spreadsheet Programming Environment In order to present our approach we shall consider the following well-known example taken from [8] and modeled in a spreadsheet as shown in Figure 1. This spreadsheet contains information related to a housing renting system. It gathers information about clients, owners, properties, prices and renting periods. The name of each column gives a clear idea of the information it represents. We extend this example with three additional columns, named days (that computes the total number of renting days by subtracting the column rentStart to rentFinish), total (that multiplies the... number of renting days by the rent per day value, \(rent\) and \(country\) (that represents the property’s country). As usually in spreadsheets, the columns \(days\) and \(rent\) are expressed by formulas. This spreadsheet defines a valid model to represent the information of the renting system. However, it contains redundant information: the displayed data specifies the house renting of two clients (and owners) only, but their names are included five times, for example. This kind of redundancy makes the maintenance and update of the spreadsheet complex and error-prone. A mistake is easily made, for example, by mistyping a name, thus corrupting the data on the spreadsheet. Two common problems occur as a consequence of redundant data: update anomalies and deletion anomalies [9]. The former problem occurs when we change information in one place but leave the same information unchanged in the other places. The problem also occurs if the update is not performed exactly in the same way. In our example, this happens if we change the rent of property number \(pg4\) from 50 to 60 only one row and leave the others unchanged, for example. The latter problem occurs when we delete some data and lose other information as a side effect. For example, if we delete row 5 in the our example all the information concerning property \(pg36\) is lost. 
The database community has developed techniques, such as data normalization, to eliminate such redundancy and improve data integrity [9, 10]. Database normalization is based on the detection and exploitation of functional dependencies inherent in the data [11]. Can we leverage these database techniques for spreadsheet systems so that the system eliminates the update and deletion anomalies by guiding the end-user to introduce correct data? Based on the data contained in our example spreadsheet, we would like to discover the following functional dependencies, which represent the four entities involved in our house renting system: \(countries\), \(clients\), \(owners\) and \(properties\).
\[
\begin{align*}
\text{country} & \rightarrow \\
\text{clientNr} & \rightarrow \text{cName} \\
\text{ownerNr} & \rightarrow \text{oName} \\
\text{propNr} & \rightarrow \text{pAddress, rent, ownerNr}
\end{align*}
\]
A functional dependency \(A \rightarrow B\) means that if we have two equal inhabitants of \(A\), then the corresponding inhabitants of \(B\) are also equal. For instance, the client number functionally determines his/her name, since no two clients have the same number. The right hand side of a functional dependency can be an empty set. This occurs, for example, in the \(country\) functional dependency. Note that there are several columns (labeled \(rentStart\), \(rentFinish\), \(days\) and \(total\)) that are not included in any functional dependency. This happens because their data do not define any functional dependency. --- **Fig. 1.** A spreadsheet representing a property renting system. Using these functional dependencies it is possible to construct a relational database schema. Each functional dependency is translated into a table where the attributes are the ones participating in the functional dependency and the primary key is the left hand side of the functional dependency. In some cases, foreign keys can be inferred from the schema. The relational database schema can be normalized in order to eliminate data redundancy.
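A functional dependency of this kind can be checked mechanically: group the rows by the values of the columns in \(A\) and verify that every group agrees on the columns in \(B\). The following minimal Python sketch is our own illustration (the `holds` helper and the sample rows are hypothetical, loosely based on the renting example):

```python
def holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in rows.

    rows is a list of dicts mapping column names to values;
    lhs and rhs are lists of column names."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if seen.setdefault(key, val) != val:
            return False  # two rows agree on lhs but differ on rhs
    return True

rows = [
    {"clientNr": "cr76", "cName": "John",  "propNr": "pg4",  "rent": 50},
    {"clientNr": "cr76", "cName": "John",  "propNr": "pg16", "rent": 70},
    {"clientNr": "cr56", "cName": "Aline", "propNr": "pg4",  "rent": 50},
]

print(holds(rows, ["clientNr"], ["cName"]))  # True: clientNr determines cName
print(holds(rows, ["cName"], ["propNr"]))    # False: John rents two properties
```

Any candidate dependency that fails this test on the spreadsheet data can be discarded immediately.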
A possible normalized relational database schema created for the house renting spreadsheet is presented below. \[ country \] \[ clientNr, cName \] \[ ownerNr, oName \] \[ propNr, pAddress, rent, ownerNr \] This database schema defines a table for each of the entities described before. Having defined a relational database schema, we would like to construct a spreadsheet environment that respects that relational model, as shown in Figure 2. For example, this spreadsheet would not allow the user to introduce two different properties with the same property number \( \text{propNr} \). Instead, we would like the spreadsheet to offer the user a list of possible properties, such that he can choose the value to fill in the cell. Figure 3 shows a possible spreadsheet environment where possible properties can be chosen from a combo box. Using the relational database schema we would like our spreadsheet to offer the following features: **Auto-completion of Column Values:** The columns corresponding to primary keys in the relational model determine the values of other columns; we want the spreadsheet environment to be able to automatically fill in those columns provided the end-user defines the value of the primary key. For example, the value of the property number (\( \text{propNr} \), column B) determines the values of the address (\( \text{pAddress} \), column D), rent per day (\( \text{rent} \), column I), and owner number (\( \text{ownerNr} \), column K). Consequently, the spreadsheet environment should be able to automatically fill in the values of columns D, I and K, given the value of column \( B \). Since \( \text{ownerNr} \) (column \( K \)) is a primary key of another table, transitively the value of \( oName \) (column \( L \)) is also defined. This auto-completion mechanism has been implemented and is presented in the spreadsheet environment of Figure 2.
**Non-Editable Columns:** Columns that are part of a table but not part of its primary key must not be editable. For example, column \( L \) is part of the owner table but not part of its primary key. Thus, it must be protected from being edited. The primary key of a table must also not be editable, since changing it could destroy the functional dependency. This feature prevents the end-user from introducing potentially incorrect data and, thus, from producing update anomalies. Figure 4 illustrates this edit restriction. **Safe Deletion of Rows:** Another usual problem with non-normalized data is the deletion problem. Suppose in our running example that row 5 is deleted. In such a scenario, all the information about the \( \text{pg36} \) property is lost. However, it is likely that the user wanted to delete only the renting transaction represented by that row. In order to prevent this type of deletion problem, we have added a button per spreadsheet row (see Figure 2). When pressed, this button detects whether the end-user is deleting important information included in the corresponding row. In case important information would be removed by such a deletion, a warning window is displayed, as shown in Figure 5. Apart from these new features, the user can still access traditional editing features, and can rely on recalculation of functional dependencies in the background. **Traditional Editing:** Advanced programming language environments provide both advanced editing mechanisms and traditional ones (i.e., text editing). In a similar way, a spreadsheet environment should allow the user to perform traditional spreadsheet editing too. In traditional editing the end-user is able to introduce data that may violate the relational database model that the spreadsheet data induces.
**Recalculation of the Relational Database Model:** Because traditional editing allows the end-user to introduce data violating the underlying relational model, we would like the spreadsheet environment to allow enabling/disabling the advanced features described in this section. When the advanced features are disabled, the end-user is able to introduce data that violates the (previously) inferred relational model. However, when the end-user returns to advanced editing, the spreadsheet should infer a new relational model that will be used in future (advanced) interactions. In this section we have described an instance of our techniques. In fact, the spreadsheet programming environment shown in Figures 2, 3, 4 and 5 was automatically produced from the original spreadsheet displayed in Figure 1. In the following sections we will present in detail the techniques that perform such an automatic spreadsheet refactoring. 3 From Spreadsheets to Relational Databases This section briefly explains how to extract functional dependencies from the spreadsheet data and how to construct a normalized relational database schema modeling such data. These techniques were introduced in detail in our work on defining a bidirectional mapping between spreadsheets and relational databases [12]. In this section we briefly present an extension to that algorithm that uses spreadsheet-specific properties in order to infer a more realistic set of functional dependencies. Relational Databases: A relational schema \( R \) is a finite set of attributes \( \{ A_1, \ldots, A_k \} \). Corresponding to each attribute \( A_i \) is a set \( D_i \), called the domain of \( A_i \). These domains are arbitrary, non-empty sets, finite or countably infinite. A relation (or table) \( r \) on a relation schema \( R \) is a finite set of tuples (or rows) of the form \( t = (t(A_1), \ldots, t(A_k)) \). For each \( t \in r \), \( t(A_i) \) must be in \( D_i \).
A relational database schema is a collection of relation schemas \( \{ R_1, ..., R_n \} \). A Relational Database (RDB) is a collection of relations \( \{ r_1, ..., r_n \} \). Each tuple is uniquely identified by a minimum non-empty set of attributes called a Primary Key (PK). On certain occasions there may be more than one set suitable for becoming the primary key. These are designated candidate keys, and only one is chosen to become the primary key. A Foreign Key (FK) is a set of attributes within one relation that matches the primary key of some relation. The normalization of a database is important to prevent data redundancy. Although there are several different normal forms, in general, an RDB is considered normalized if it respects the Third Normal Form (3NF) [8]. Discovering Functional Dependencies: In order to define the RDB schema, we first need to compute the functional dependencies present in the given spreadsheet data. In [12] we reused the well-known data mining algorithm \textsc{fun} to infer such dependencies. This algorithm was developed in the context of databases with the main goal of inferring all existing functional dependencies in the input data. As a result, \textsc{fun} may infer a large set of functional dependencies depending on the input data. For our example, we list the functional dependencies inferred from the data using \textsc{fun}: \[ \begin{align*} \text{clientNr} & \rightarrow \text{cName, country} \\ \text{propNr} & \rightarrow \text{country, pAddress, rent, ownerNr, oName} \\ \text{cName} & \rightarrow \text{clientNr, country} \\ \text{pAddress} & \rightarrow \text{propNr, country, rent, ownerNr, oName} \\ \text{rent} & \rightarrow \text{propNr, country, pAddress, ownerNr, oName} \\ \text{ownerNr} & \rightarrow \text{country, oName} \\ \text{oName} & \rightarrow \text{country, ownerNr} \end{align*} \] Note that the data contained in the spreadsheet exhibits all those dependencies.
In fact, even the non-natural dependency \( \text{rent} \rightarrow \text{propNr}, \text{country}, \text{pAddress}, \text{ownerNr}, \text{oName} \) is inferred. Indeed, the functional dependencies derived by the \textsc{fun} algorithm depend heavily on the quantity and quality of the data. Thus, for small samples of data, or data that exhibits too many or too few dependencies, the \textsc{fun} algorithm may not produce the desired functional dependencies. Note also that the \textit{country} column occurs in most of the functional dependencies although only a single country actually appears in a column of the spreadsheet, namely UK. Such single value columns are common in spreadsheets. However, for the \textsc{fun} algorithm they induce redundant fields and redundant functional dependencies. In order to derive more realistic functional dependencies for spreadsheets we have extended the \textsc{fun} algorithm so that it considers the following spreadsheet properties: - \textit{Single value columns}: these columns produce a single functional dependency with no right hand side (\textit{country} \( \rightarrow \), for example). These columns are not considered when finding other functional dependencies. - \textit{Semantics of labels}: we consider label names as strings and we look for the occurrence of words like \textit{code}, \textit{number}, \textit{nr}, \textit{id}, giving the corresponding columns more priority when considering them as primary keys. - \textit{Column arrangement}: we give more priority to functional dependencies that respect the order of columns. For example, \( \text{clientNr} \rightarrow \text{cName} \) has more priority than \( \text{cName} \rightarrow \text{clientNr} \). Moreover, to minimize the number of functional dependencies we consider the smallest subset that includes all attributes/columns in the original set computed by \textsc{fun}.
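These heuristics can be pictured as a scoring function over candidate dependencies. The sketch below is our own illustration of the idea, not the actual \textsc{fun} extension: identifier-like labels and left-to-right column order raise a candidate's priority.

```python
# Words suggesting an identifier-like column label (from the heuristics above).
ID_WORDS = ("code", "number", "nr", "id")

def score(fd, col_index):
    """Score a candidate dependency fd = (lhs, rhs); higher is better.

    col_index maps a column label to its position in the spreadsheet."""
    lhs, rhs = fd
    s = 0
    # Semantics of labels: identifier-like antecedents are preferred.
    if all(any(w in c.lower() for w in ID_WORDS) for c in lhs):
        s += 2
    # Column arrangement: antecedent columns should precede consequents.
    if all(col_index[a] < col_index[b] for a in lhs for b in rhs):
        s += 1
    return s

cols = ["clientNr", "cName"]
idx = {c: i for i, c in enumerate(cols)}
cands = [(("clientNr",), ("cName",)), (("cName",), ("clientNr",))]
best = max(cands, key=lambda fd: score(fd, idx))
print(best)  # (('clientNr',), ('cName',))
```

With this scoring, \( \text{clientNr} \rightarrow \text{cName} \) wins over its reverse, matching the behaviour described above.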
The result of our spreadsheet functional dependency inference algorithm is: \[ \begin{align*} \text{country} & \rightarrow \\ \text{clientNr} & \rightarrow \text{cName} \\ \text{ownerNr} & \rightarrow \text{oName} \\ \text{propNr} & \rightarrow \text{pAddress, rent, ownerNr, oName} \end{align*} \] This set of dependencies is very similar to the one presented in the previous section. The exception is the last functional dependency which has an extra attribute (\textit{oName}). \textit{Spreadsheet Formulas}: Spreadsheets use formulas to define the values of some elements in terms of other elements. For example, in the house renting spreadsheet, the column \textit{days} is computed by subtracting the column \textit{rentFinish} from \textit{rentStart}, and it is usually written as follows \( H3 = G3 - F3 \). This formula states that the values of \( G3 \) and \( F3 \) determine the value of \( H3 \), thus inducing the following functional dependency: \( \text{rentStart, rentFinish} \rightarrow \text{days} \). Formulas can have references to other formulas. Consider, for example, the second formula of the running example \( J3 = H3 \times I3 \), which defines the total rent by multiplying the total number of days by the value of the rent. Because \( H3 \) is defined by another formula, the values that determine \( H3 \) also determine \( J3 \). As a result, the two formulas induce the following functional dependencies: \[ \begin{align*} \text{rentStart, rentFinish} & \rightarrow \text{days} \\ \text{rentStart, rentFinish, rent} & \rightarrow \text{total} \end{align*} \] In general, a spreadsheet formula of the following form $X_0 = f(X_1, \ldots, X_n)$ induces the following functional dependency: $X_1, \ldots, X_n \rightarrow X_0$. In spreadsheet systems, formulas are usually introduced by copying them through all the elements in a column, thus making the functional dependency explicit in all the elements. 
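Deriving these formula-induced dependencies amounts to a transitive closure over column references. The following sketch is our own illustration, under an assumed representation where each computed column is mapped to the columns its formula mentions:

```python
def formula_fds(formulas):
    """Map each computed column to the input columns that determine it.

    formulas maps a column to the columns its formula references,
    e.g. days is computed from rentFinish and rentStart."""
    def inputs(col):
        refs = formulas.get(col)
        if refs is None:       # a plain input column determines itself
            return {col}
        out = set()
        for r in refs:         # expand references to other computed columns
            out |= inputs(r)
        return out

    return {col: sorted(inputs(col)) for col in formulas}

fds = formula_fds({
    "days":  ["rentFinish", "rentStart"],
    "total": ["days", "rent"],
})
print(fds["days"])   # ['rentFinish', 'rentStart']
print(fds["total"])  # ['rent', 'rentFinish', 'rentStart']
```

The `total` column ends up determined by `rentStart`, `rentFinish` and `rent`, exactly as in the two dependencies listed above.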
This may not always be the case and some elements can be defined otherwise (e.g., by using a constant value or a different formula). In both cases, all the cells referenced must be used in the antecedent of the functional dependency. These functional dependencies are useful for the mapping of spreadsheets to databases presented in [12]. In this work, however, they are not relevant, since the existing formulas are used to fill in those columns. **Normalizing Functional Dependencies:** Having computed the functional dependencies, we can now normalize them. Next, we show the results produced by the `synthesize` algorithm introduced by Maier in [13]. The `synthesize` algorithm receives a set of functional dependencies as argument and returns a new set of compound functional dependencies. A compound functional dependency (CFD) has the form $(X_1, \ldots, X_n) \rightarrow Y$, where $X_1, \ldots, X_n$ are all distinct subsets of a scheme $R$ and $Y$ is also a subset of $R$. A relation $r$ satisfies the CFD $(X_1, \ldots, X_n) \rightarrow Y$ if it satisfies the functional dependencies $X_i \rightarrow X_j$ and $X_i \rightarrow Y$, for all $1 \leq i, j \leq n$. In a CFD, $(X_1, \ldots, X_n)$ is the left side, $X_1, \ldots, X_n$ are the left sets and $Y$ is the right side. Next, we list the compound functional dependencies computed from the functional dependencies induced by our running example. $$ \begin{align*} \{ \text{country} \} & \rightarrow \{ \} \\ \{ \text{clientNr} \} & \rightarrow \{ \text{cName} \} \\ \{ \text{ownerNr} \} & \rightarrow \{ \text{oName} \} \\ \{ \text{propNr} \} & \rightarrow \{ \text{pAddress}, \text{rent}, \text{ownerNr} \} \end{align*} $$ **Computing the Relational Database Schema:** Each compound functional dependency defines several candidate keys for each table. However, to fully characterize the relational database schema we need to choose the primary key from those candidates.
To find such keys we use a simple algorithm: we produce all the possible tables using each candidate key as the primary key; we then use the same heuristics that are used to choose the initial functional dependencies to choose the best table. Note that, before applying the `synthesize` algorithm, all the functional dependencies whose antecedent attributes represent formulas should be eliminated, since a primary key must not change over time. The final result is listed below. $$ \begin{align*} \text{country} \\ \text{clientNr, cName} \\ \text{ownerNr, oName} \\ \text{propNr, pAddress, rent, ownerNr} \end{align*} $$ This relational database model corresponds exactly to the one shown in Section 2. Note that the `synthesize` algorithm removed the redundant attribute $\text{oName}$ that occurred in the last functional dependency. 4 Building Spreadsheet Programming Environments This section presents techniques to refactor spreadsheets into the powerful spreadsheet programming environments described in Section 2. This spreadsheet refactoring is implemented as the embedding of the inferred functional dependencies and the computed relational model in the spreadsheet. This embedding is modeled in the spreadsheet itself by standard formulas and visual objects: formulas are added to the spreadsheet to guide end-users to introduce correct data. Before we present how this embedding is defined, let us first define a spreadsheet. A spreadsheet can be seen as a partial function \( S : A \rightarrow V \) mapping addresses to spreadsheet values. Elements of \( S \) are called cells and are represented as \((a, v)\). A cell address is taken from the set \( A = \mathbb{N} \times \mathbb{N} \). A value \( v \in V \) can be an input plain value \( c \in C \), like a string or a number, a reference to another cell using an address, or a formula \( f \in F \) that can be applied to one or more values: \[ v \in V ::= c \mid a \mid f(v, \ldots, v).
\] Auto-completion of Column Values: This feature is implemented by embedding each of the relational tables in the spreadsheet. Each table is realized by a spreadsheet formula and a combo box visual object. The combo box displays the possible values of one column, associated to the primary key of the table, while the formula is used to fill in the values of the columns that the primary key determines. Let us consider the table \( ownerNr, oName \) from our running example. In the spreadsheet, \( ownerNr \) is in column \( K \) and \( oName \) in column \( L \). This table is embedded in the spreadsheet by introducing a combo box containing the existing values in the column \( K \) (as displayed in Figure 2). Knowing the value in the column \( K \), we can automatically introduce the value in column \( L \). To achieve this, we embed the following formula in row 7 of column \( L \): \[ S(L, 7) = \text{if (isna (vlookup (K7, K2 : L6, 2, 0)), "", vlookup (K7, K2 : L6, 2, 0))} \] This formula uses the (library) function \( \text{isna} \) to test whether a value has been introduced in column \( K \). In case that value exists, it searches (with the function \( \text{vlookup} \)) for the corresponding value in the column \( L \) and references it. If there is no selected value, it produces the empty string. The combination of the combo box and this formula guides the user to introduce correct data, as illustrated in Figure 2. We have just presented a particular case of the formula and visual object induced by a relational table. Next we present the general case. Let \( minr \) be the first row after the existing data in the spreadsheet, \( maxr \) the last row in the spreadsheet, and \( r1 \) the first row with already existing data.
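The concrete formula above can be generated mechanically from the table layout. A sketch of such a generator follows; the helper name and argument conventions are our own, not HaExcel's API:

```python
def autocomplete_formula(key_cell, lookup_range, col_offset):
    """Build the IF(ISNA(VLOOKUP(...))) formula used for auto-completion.

    key_cell     -- cell holding the chosen primary key, e.g. "K7"
    lookup_range -- range with the existing table data, e.g. "K2:L6"
    col_offset   -- 1-based index of the determined column in the range
    """
    lookup = f"VLOOKUP({key_cell},{lookup_range},{col_offset},0)"
    return f'=IF(ISNA({lookup}),"",{lookup})'

print(autocomplete_formula("K7", "K2:L6", 2))
# =IF(ISNA(VLOOKUP(K7,K2:L6,2,0)),"",VLOOKUP(K7,K2:L6,2,0))
```

Calling this generator for every non-key column and every editable row yields the formulas described by the general case below.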
Each relational database table \( a_1, \ldots, a_n, c_1, \ldots, c_m \), with \( a_1, \ldots, a_n, c_1, \ldots, c_m \) column indexes of the spreadsheet, induces, firstly, a combo box defined as follows: \[ \forall c \in \{a_1, \ldots, a_n\}, \forall r \in \{minr, \ldots, maxr\}: S(c, r) = \text{combobox} := \{\text{linked cell} := (c, r); \text{source cells} := (c, r1) : (c, r - 1)\} \] and, secondly, a spreadsheet formula defined as: \[ \forall c \in \{c_1, \ldots, c_m\}, \forall r \in \{minr, \ldots, maxr\}: \] In the case a primary key column value is chosen, the formula (a generalization of the isna/vlookup formula shown above) calculates the corresponding non-primary-key column value. Each conditional if is responsible for checking one primary key column. This formula must be used for each non-primary-key column created by our algorithm. The example table analysed before is an instance of this general one: in the table \( ownerNr, oName \), \( ownerNr \) is \( a_1 \), \( oName \) is \( c_1 \), \( c \) is \( L \), \( r1 \) is 2, and \( minr \) is 7. The value of \( maxr \) is always the last row supported by the spreadsheet system. Foreign keys pointing to primary keys become very helpful in this setting. For example, if we have the relational tables \( A, B \) and \( B, C \), where \( B \) in the first table is a foreign key referencing the second one, then when we perform auto-completion in column \( A \), both \( B \) and \( C \) are automatically filled in. This was the case presented in Figure 2. Non-Editable Columns: To prevent the wrong introduction of data and, thus, update anomalies, we protect some columns from editing. The relational table \( a_1, \ldots, a_n, c_1, \ldots, c_m \) induces the non-editability of columns \( a_1, \ldots, a_n, c_1, \ldots, c_m \). That is to say, all columns that form a table become non-editable. Figure 4 illustrates such a restriction. In the case where the end-user really needs to change the value of such protected columns, we provide traditional editing, as described later in this section. Safe Deletion of Rows: Another usual problem with non-normalized data is the deletion of data.
Suppose in our running example that row 5 is deleted. All the information about property \( pg36 \) is lost, although the user probably wanted to delete that renting transaction only. To correctly delete rows in the spreadsheet, a button is added to each row in the spreadsheet as follows: for each relational table \( a_1, \ldots, a_n, c_1, \ldots, c_m \), each button checks, on its corresponding row, the columns that are part of the primary key, \( a_1, \ldots, a_n \). For each primary key column, it verifies whether the value to remove is the last occurrence. Let \( c \in \{a_1, \ldots, a_n\} \), let \( r \) be the button row, \( r1 \) be the first row of column \( c \) with data and \( rn \) be the last row of column \( c \) with data. The test is defined as follows: if the value is the last one, the spreadsheet warns the user (showMessage), as can be seen in Figure 5. If the user presses the OK button, the spreadsheet will remove the row. In the other case, Cancel, no action will be performed. In the case the value is not the last one, the row is simply removed, deleteRow(r). For example, in column \( propNr \) of our running example, row 5 contains the last data about the house with code \( pg36 \). If the user tries to delete this row, the warning will be triggered. Traditional Editing: Advanced programming language environments provide both advanced editing mechanisms and traditional ones (i.e., text editing). In a similar way, a spreadsheet environment should allow the user to perform traditional spreadsheet editing too. Thus, the environment should provide a mechanism to enable/disable the advanced features described in this section. When the advanced features are disabled, the end-user is able to introduce data that violates the (previously) inferred relational model. However, when the end-user returns to advanced editing, the spreadsheet infers a new relational model that will be used in future (advanced) interactions.
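The test performed by each delete button boils down to asking whether the row holds the last occurrence of a primary key value in its column. A minimal sketch of that check (our own helper, not the actual implementation):

```python
def deletion_is_safe(column, row):
    """True when the value in this row also occurs in some other row,
    so deleting the row loses no information about that key value."""
    value = column[row]
    return any(v == value for r, v in enumerate(column) if r != row)

# Illustrative propNr column values, mirroring the running example.
propNr = ["pg4", "pg4", "pg16", "pg36", "pg16"]

print(deletion_is_safe(propNr, 0))  # True: pg4 also occurs elsewhere
print(deletion_is_safe(propNr, 3))  # False: last data about pg36
```

Only when the check fails for some primary key column does the environment show the warning dialog before removing the row.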
4.1 HaExcel Add-in We have implemented the FUN algorithm, the extensions described in this paper, the synthesize algorithm, and the embedding of the relational model in the HASKELL programming language [14]. We have also defined the mapping from spreadsheets to relational databases in the same framework, named HaExcel [12]. Finally, we have extended this framework to produce the visual objects and formulas that model the relational tables in the spreadsheet. An Excel add-in has also been constructed so that the end-user can use spreadsheets in this popular system and, at the same time, our advanced features. 5 Preliminary Experimental Results In order to evaluate the applicability of our approach, we have performed a preliminary experiment on the EUSES Corpus [7]. This corpus was conceived as a shared resource to support research on technologies for improving the dependability of spreadsheet programming. It contains more than 4500 spreadsheets gathered from different sources and developed for different domains. These spreadsheets are assigned to eleven different categories. Among the spreadsheets in the corpus, about 4.4% contain macros, about 2.3% contain charts, and about 56% do not have formulas, being used only to store data. In our preliminary experiment we have selected the first ten spreadsheets from each of the eleven categories of the corpus. We then applied our tool to each spreadsheet, with different results (see also Table 1): a few spreadsheets failed to parse, due to glitches in the Excel to Gnumeric conversion (which we use to bring spreadsheets into a processable form). Other spreadsheets were parsed, but no tables could be recognized in them, i.e., their users did not adhere to any of the supported layout conventions. The layout conventions we support are the ones presented in the UCheck project [15]. This was the case for about one third of the spreadsheets in our sample.
The other spreadsheets were parsed, tables were recognized, and edit assistance was generated for them. We will focus on this last group in the upcoming sections. **Processed Spreadsheets:** The results of processing our sample of spreadsheets from the EUSES corpus are summarized in Table 1. The rows of the table are grouped by category as documented in the corpus. The first three columns contain size metrics on the spreadsheets. They indicate how many tables were recognized, how many columns are present in these tables, and how many cells. For example, the first spreadsheet in the financial category contains 15 tables with a total of 65 columns and 242 cells. <table> <thead> <tr> <th>Spreadsheet</th> <th>Tables</th> <th>Columns</th> <th>Cells</th> <th>FDs</th> <th>Combo boxes</th> <th>Auto-compl.</th> <th>Locked</th> </tr> </thead> <tbody> <tr> <td>cs101</td> <td>5</td> <td>24</td> <td>402</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>Act4_023_capen</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>act3_23_bartholomew</td> <td>6</td> <td>21</td> <td>84</td> <td>1</td> <td>8</td> <td>1</td> <td>9</td> </tr> <tr> <td>act4_023_bartholomew</td> <td>6</td> <td>23</td> <td>365</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>meyer_Q1</td> <td>2</td> <td>8</td> <td>74</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>posey_Q1</td> <td>5</td> <td>23</td> <td>72</td> <td>0</td> <td>8</td> <td>0</td> <td>8</td> </tr> <tr> <td>%5CDepartmental%20Fol#A861A</td> <td>2</td> <td>4</td> <td>3463</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>00061rP802-15_TG2-Un#A7F69</td> <td>23</td> <td>55</td> <td>491</td> <td>0</td> <td>18</td> <td>4</td> <td>21</td> </tr> <tr> <td>00061rS802-15_TG2-Un#A7F6C</td> <td>30</td> <td>83</td> <td>600</td> <td>25</td> <td>21</td> <td>5</td> <td>26</td> </tr> <tr> <td>0104TexasNutrientdb</td> <td>5</td> <td>7</td> <td>77</td> <td>1</td> <td>1</td> <td>1</td> <td>2</td> </tr> <tr>
<td>01BTS_framework</td> <td>52</td> <td>80</td> <td>305</td> <td>4</td> <td>23</td> <td>2</td> <td>25</td> </tr> <tr> <td>03-1-report-annex-5</td> <td>20</td> <td>150</td> <td>1599</td> <td>12</td> <td>15</td> <td>8</td> <td>22</td> </tr> <tr> <td>BROWN</td> <td>5</td> <td>14</td> <td>9047</td> <td>2</td> <td>3</td> <td>1</td> <td>4</td> </tr> <tr> <td>CHOFAS</td> <td>6</td> <td>48</td> <td>4288</td> <td>3</td> <td>3</td> <td>1</td> <td>4</td> </tr> <tr> <td>financial</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>03PFMJOurnalBOOKSFina...</td> <td>15</td> <td>65</td> <td>242</td> <td>0</td> <td>7</td> <td>0</td> <td>7</td> </tr> <tr> <td>10-formc</td> <td>12</td> <td>20</td> <td>53</td> <td>8</td> <td>5</td> <td>4</td> <td>9</td> </tr> <tr> <td>forms3</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ELECLAB3.reichwja.xl97</td> <td>1</td> <td>4</td> <td>44</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>burnett-clockAsPieChart</td> <td>3</td> <td>8</td> <td>14</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>chen-heapSortTimes</td> <td>1</td> <td>2</td> <td>24</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>chen-insertSortTimes</td> <td>1</td> <td>2</td> <td>22</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>chen-lexicTimes</td> <td>1</td> <td>2</td> <td>22</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>chen-quickSortTimes</td> <td>1</td> <td>2</td> <td>24</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>cs515_npep_chart.reichwja.xl97</td> <td>7</td> <td>9</td> <td>93</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>cs515_polynomials.reichwja.xl97</td> <td>6</td> <td>12</td> <td>105</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>cs515_runtimeData.reichwja.XL97</td> <td>2</td> <td>6</td> <td>45</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> 
<td>grades</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>0304deptcal</td> <td>11</td> <td>41</td> <td>383</td> <td>19</td> <td>18</td> <td>17</td> <td>28</td> </tr> <tr> <td>03_04ballots1</td> <td>4</td> <td>20</td> <td>96</td> <td>6</td> <td>4</td> <td>0</td> <td>4</td> </tr> <tr> <td>030902</td> <td>5</td> <td>20</td> <td>110</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>031001</td> <td>5</td> <td>20</td> <td>110</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>031301</td> <td>5</td> <td>15</td> <td>51</td> <td>3</td> <td>1</td> <td>4</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Spreadsheet</th> <th>Tables</th> <th>Columns</th> <th>Cells</th> <th>FDs</th> <th>Combo boxes</th> <th>Auto-compl.</th> <th>Locked</th> </tr> </thead> <tbody> <tr> <td>homework</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>01_Intro_Chapter_Home#A9171</td> <td>6</td> <td>15</td> <td>2115</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> </tr> <tr> <td>01readdis</td> <td>4</td> <td>16</td> <td>953</td> <td>5</td> <td>4</td> <td>3</td> <td>6</td> </tr> <tr> <td>02%20bb%20medshor</td> <td>1</td> <td>7</td> <td>51</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>022timeline4dev</td> <td>28</td> <td>28</td> <td>28</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>026timeline4dev</td> <td>28</td> <td>28</td> <td>30</td> <td>0</td> <td>2</td> <td>0</td> <td>2</td> </tr> <tr> <td>03_Stochastic_Systems#A9172</td> <td>4</td> <td>6</td> <td>48</td> <td>0</td> <td>2</td> <td>0</td> <td>2</td> </tr> <tr> <td>04-05_proviso_list</td> <td>79</td> <td>232</td> <td>2992</td> <td>0</td> <td>25</td> <td>0</td> <td>25</td> </tr> <tr> <td>inventory</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>02MDE_framework</td> <td>50</td> <td>83</td> <td>207</td> <td>10</td>
<td>31</td> <td>1</td> <td>32</td> </tr> <tr> <td>02f202assignment%234soln</td> <td>37</td> <td>72</td> <td>246</td> <td>7</td> <td>20</td> <td>1</td> <td>21</td> </tr> <tr> <td>03-1-report-annex-2</td> <td>5</td> <td>31</td> <td>111</td> <td>10</td> <td>5</td> <td>5</td> <td>8</td> </tr> <tr> <td>03singapore_elec_gene#A8236</td> <td>9</td> <td>45</td> <td>153</td> <td>3</td> <td>5</td> <td>2</td> <td>7</td> </tr> <tr> <td>0038</td> <td>10</td> <td>22</td> <td>370</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>modeling</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>%7B94402d63-cdd8-4cc3#A8841</td> <td>1</td> <td>3</td> <td>561</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>%EC%86%90%ED%97%8C%…</td> <td>1</td> <td>10</td> <td>270</td> <td>13</td> <td>7</td> <td>5</td> <td>9</td> </tr> <tr> <td>%EC%9D%98%EB%8C%80%…</td> <td>1</td> <td>7</td> <td>1442</td> <td>4</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>%EC%A1%B0%EC%9B%90%…</td> <td>2</td> <td>17</td> <td>334</td> <td>18</td> <td>13</td> <td>5</td> <td>15</td> </tr> <tr> <td>%ED%99%98%EA%B2%BD%…</td> <td>3</td> <td>7</td> <td>289</td> <td>2</td> <td>1</td> <td>2</td> <td>3</td> </tr> <tr> <td>0,10900,0-0-45-109057-0.00</td> <td>4</td> <td>14</td> <td>6558</td> <td>9</td> <td>9</td> <td>2</td> <td>10</td> </tr> <tr> <td>00-323r2</td> <td>24</td> <td>55</td> <td>269</td> <td>31</td> <td>9</td> <td>6</td> <td>15</td> </tr> <tr> <td>00000r6xP802-15_Docum#A7D9E</td> <td>3</td> <td>13</td> <td>3528</td> <td>10</td> <td>9</td> <td>3</td> <td>11</td> </tr> <tr> <td>003_4</td> <td>25</td> <td>50</td> <td>2090</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> </tbody> </table> Table 1: Preliminary results of processing the selected spreadsheets. The fourth column shows how many functional dependencies were extracted from the recognized tables. 
These are the non-trivial functional dependencies that remain after we use our extension to the FUN algorithm to discard redundant dependencies. The last three columns are metrics on the generated edit assistance. In some cases, no edit assistance was generated, indicated by zeros in these columns. This situation occurs when no (non-trivial) functional dependencies are extracted from the recognized tables. In the other cases, the three columns respectively indicate: - For how many columns a combo box has been generated for controlled insertion. The same columns are also enhanced with the safe deletion of rows feature. - For how many columns the auto-completion of column values has been activated, i.e., for how many columns the user is no longer required to insert values manually. - How many columns are locked to prevent edit actions where information that does not appear elsewhere is deleted inadvertently. For example, for the first spreadsheet of the inventory category, combo boxes have been generated for 31 columns, auto-completion has been activated for 1 column, and locking has been applied to 32 columns. Note that for the categories jackson and personal, no results were obtained due to absent or unrecognized layout conventions or to the size of the spreadsheets (more than 150,000 cells). Observations: On the basis of these preliminary results, a number of interesting observations can be made. For some categories, edit assistance is successfully added to almost all spreadsheets (e.g. inventory and database), while for others almost none of the spreadsheets lead to results (e.g. the forms/3 category). The latter may be due to the small sizes of the spreadsheets in this category. For the financials category, we can observe that in only 2 out of 10 sample spreadsheets tables were recognized, but edit assistance was successfully generated for both of these. The percentage of columns for which edit assistance was generated varies. 
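The three kinds of edit assistance described above follow directly from the extracted functional dependencies. As a rough sketch of that mapping (not the authors' implementation; the representation of a dependency as a pair of antecedent/consequent column sets and the helper name are assumptions for illustration):

```python
def plan_assistance(fds):
    """Classify columns by the edit assistance they would receive.

    fds: iterable of (antecedent, consequent) pairs, each a set of column
    names, i.e. the non-trivial functional dependencies that survive
    filtering.
    """
    combo = set()   # combo box for controlled insertion + safe deletion of rows
    auto = set()    # auto-completion of column values
    for antecedent, consequent in fds:
        combo |= antecedent               # key columns: restrict input to known values
        auto |= consequent - antecedent   # determined columns: fill in automatically
    locked = combo | auto                 # any column involved in a dependency is guarded
    return combo, auto, locked

# Example: in a table where "name" determines "price",
# "name" gets a combo box and "price" is auto-completed.
combo, auto, locked = plan_assistance([({"name"}, {"price"})])
```

Counting the three resulting sets per spreadsheet corresponds, in spirit, to the last three columns of Table 1.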
The highest percentage was obtained for the second spreadsheet of the modeling category, with 9 out of 10 columns (90%). A good result is also obtained for the first spreadsheet of the grades category, with 28 out of 41 columns (68.3%). On the other hand, the fifth spreadsheet of the homework category gets edit assistance for only 2 out of 28 columns (7.1%). The number of columns with combo boxes often exceeds the number of columns with auto-completion. This may be due to the fact that many of the functional dependencies are small, with many having only one column in the antecedent and none in the consequent. Evaluation: Our experiment justifies two preliminary conclusions. Firstly, the tool is able to successfully add edit assistance to a series of non-trivial spreadsheets. A more thorough study of these and other cases can now be started to identify technical improvements that can be made to the algorithms for table recognition and functional dependency extraction. Secondly, in the enhanced spreadsheets a large number of columns are generally affected by the generated edit assistance, which indicates that the user experience can be impacted in a significant manner. Thus, a validation experiment can be started to evaluate how users experience the additional assistance and to what extent their productivity and effectiveness can be improved. 6 Related Work Our work is strongly related to a series of techniques by Abraham et al. Firstly, they designed and implemented an algorithm that uses the labels within a spreadsheet for unit checking [16, 17]. By typing the cells in a spreadsheet with unit information and tracking them through references and formulas, various types of user errors can be caught. We have adopted the view of Abraham et al. of a spreadsheet as a collection of tables, and we have reused their algorithm for identifying the spatial boundaries of these tables.
Rather than exploiting the labels in the spreadsheet to reconstruct implicit user intentions, we exploit redundancies in data elements. Consequently, the errors caught by our approach are of a different kind. Secondly, Abraham et al. developed a type system and corresponding inference algorithm that assigns types to values, operations, cells, formulas, and entire spreadsheets [18]. The type system can be used to catch errors in spreadsheets or to infer spreadsheet models that can help to prevent future errors. We have used such spreadsheet models, namely the ClassSheet models [19], to realize model-driven software evolution in the context of spreadsheets [20–22]. In previous work we presented techniques and tools to transform spreadsheets into relational databases and back [12]. We used the FUN algorithm to construct a relational model, but rather than generating edit assistance, the recovered information was used to perform spreadsheet refactoring. The algorithm for extracting and filtering functional dependencies presented in the current paper is an improvement over the algorithm that we used previously. We provided a short user-centered overview of the idea of generating edit assistance for spreadsheets via extraction of functional dependencies in a previous short paper [23]. In the current paper, we have provided the technical details of the solution, including the improved algorithm for extracting and filtering functional dependencies. Also, we have provided the first preliminary evaluation of the approach by applying it to a sample of spreadsheets from the EUSES corpus. 7 Conclusions Contributions: We have demonstrated how implicit structural properties of spreadsheet data can be exploited to offer edit assistance to spreadsheet users. To discover these properties, we have made use of our improved approach for mining functional dependencies from spreadsheets and subsequent synthesis of a relational database.
On this basis, we have made the following contributions: – Derivation of formulas and visual elements that capture the knowledge encoded in the reconstructed relational database schema. – Embedding of these formulas and visual elements into the original spreadsheet in the form of features for auto-completion, guarded deletion, and controlled insertion. – Integration of the algorithms for reconstruction of a schema, for derivation of corresponding formulas and visual elements, and for their embedding into an add-in for spreadsheet environments. A spreadsheet environment enhanced with our add-in compensates to a significant extent for the lack of structured programming concepts in spreadsheets. In particular, it assists users in preventing common update and deletion anomalies during edit actions. Future Work: There are several extensions of our work that we would like to explore. The algorithms running in the background need to recalculate the relational schema and the ensuing formulas and visual elements every time new data is inserted. For larger spreadsheets, this recalculation may incur waiting time for the user. Several optimizations of our algorithms can be attempted to eliminate such waiting times, for example, by use of incremental evaluation. Our approach could be integrated with similar, complementary approaches to cover a wider range of possible user errors. In particular, the work of Abraham et al. [18, 24] for preventing range, reference, and type errors could be combined with our work for preventing data loss and inconsistency. We have presented some preliminary experimental results to pave the way for more comprehensive validation experiments. In particular, we intend to set up a structured experiment for testing the impact on end-user productivity and effectiveness. Acknowledgment The authors would like to thank Martin Erwig and his team for providing us with the code from the UCheck project.
This work is funded by the ERDF through the Programme COMPETE and by the Portuguese Government through FCT - Foundation for Science and Technology, project reference PTDC/EIA–CCO/108613/2008. The first author was funded by FCT grant SFRH/BPD/73358/2010. References
# Upgrade to SAP MaxDB Database 7.9 on UNIX

Operating System: UNIX

# Content

1 **Introduction**
1.1 New Features
1.2 Before You Start
    SAP Notes for the Upgrade
    More Information on SAP Service Marketplace
    Naming Conventions
2 **Planning**
2.1 SAP MaxDB Isolated Installation
2.2 Database Requirements
2.3 Operating System Requirements
2.4 SAP System Requirements
2.5 Upgrade Strategy
3 **Preparation**
3.1 Preparing for an Upgrade with In-Place
3.2 Preparing for an Upgrade with Patch Installation
4 **Upgrade Process**
4.1 Performing an Upgrade for In-Place
4.2 Performing an Upgrade for Patch Installation
4.3 Upgrading the SAP MaxDB Client Software
5 **Post-Upgrade**
5.1 Performing Post-Upgrade Steps After an In-Place Upgrade
5.2 Updating the Database Software to the Current Release
5.3 Installing or Upgrading Database Studio for SAP MaxDB
5.4 Secure Sockets Layer Protocol for Database Server Communication
    Installing the SAP Cryptographic Library
    Generating the Personal Security Environment
    Configuring the SSL Communication between the Application Server and the Database Server
6 **Additional Information**
6.1 Database Directory Structure
6.2 Log Files for Troubleshooting

# Document History

**Note:** Before you start the implementation, make sure you have the latest version of this document, which is available at http://service.sap.com/instguides.

The following table provides an overview of the most important document changes:

<table> <thead> <tr> <th>Version</th> <th>Date</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1.05</td> <td>2019-10-03</td> <td>Revised version</td> </tr> <tr> <td>1.04</td> <td>2016-01-20</td> <td>Revised version</td> </tr> <tr> <td>1.03</td> <td>2015-01-28</td> <td>Revised version</td> </tr> <tr> <td>1.02</td> <td>2014-10-02</td> <td>Revised version</td> </tr> <tr> <td>1.01</td> <td>2012-04-17</td> <td>Revised version</td> </tr> <tr> <td>1.00</td> <td>2011-11-10</td> <td>Initial version</td> </tr> </tbody> </table>

# Introduction

This documentation explains how to upgrade the SAP MaxDB database for the SAP system:

- **From** at least SAP MaxDB version 7.5
- **To** SAP MaxDB version 7.9

⚠️ **Caution:** Make sure you have the latest version of this document. See the version number on the front page. You can always find the latest version at http://service.sap.com/instguides.

If you have a high-availability (HA) liveCache with a cluster environment, see SAP Note [2113981](http://service.sap.com/notes) for more information before starting the upgrade.

## Implementation Considerations

- Make sure that you read the relevant SAP Notes [page 5] before beginning the upgrade.
These notes contain the most recent information about the upgrade, as well as corrections to the documentation.
- For the most up-to-date information about the SAP MaxDB documentation and where to find it, see SAP Note [767598](http://service.sap.com/notes).

### 1.1 New Features

For more information about the most important enhancements and features for SAP MaxDB version 7.9, see SAP Note [1444241](http://service.sap.com/notes).

**Note:** As of SAP MaxDB version 7.6, support packages and patch levels have been introduced. For more information, see SAP Note [820824](http://service.sap.com/notes).

As of SAP MaxDB version 7.8, the installation has changed. For more information, see SAP MaxDB Isolated Installation [page 8].

### 1.2 Before You Start

Make sure that you read the following sections before you start the upgrade:

- SAP Notes for the Upgrade [page 5]
- More Information on SAP Service Marketplace [page 6]
- Naming Conventions [page 6]

#### 1.2.1 SAP Notes for the Upgrade

Read the SAP Notes before you begin the upgrade.
Make sure that you have the most recent version of each SAP Note, which you can find at http://service.sap.com/notes.

The following notes contain information relevant to the upgrade:

<table> <thead> <tr> <th>SAP Note</th> <th>Subject</th> </tr> </thead> <tbody> <tr> <td>1492467</td> <td>Additional Information for Upgrade to MaxDB 7.9</td> </tr> <tr> <td>820824</td> <td>Frequently Asked Questions (FAQ): SAP MaxDB</td> </tr> <tr> <td>498036</td> <td>Overview note on importing database versions</td> </tr> <tr> <td>668849</td> <td>Problems due to several DB versions on one host</td> </tr> <tr> <td>829408</td> <td>Upgrading a database in the UNIX cluster</td> </tr> <tr> <td>2113981</td> <td>SAP MaxDB / liveCache / Content Server Maintenance in High Availability System Environment</td> </tr> </tbody> </table>

⚠️ **Caution:** Before you begin the upgrade, always make sure that you read the first SAP Note listed above, 1492467, because it contains up-to-date information essential to the upgrade, including corrections not contained in this upgrade documentation. This note also contains the valid DVD numbers for SAP MaxDB 7.9.

#### 1.2.2 More Information on SAP Service Marketplace

You can find more information on SAP Service Marketplace as follows:

<table> <thead> <tr> <th>Description</th> <th>Address</th> </tr> </thead> <tbody> <tr> <td>Database upgrade guides</td> <td><a href="http://service.sap.com/instguides">http://service.sap.com/instguides</a></td> </tr> <tr> <td>Product Availability Matrix (PAM)</td> <td><a href="http://service.sap.com/pam">http://service.sap.com/pam</a></td> </tr> <tr> <td>SAP Notes</td> <td><a href="http://service.sap.com/notes">http://service.sap.com/notes</a></td> </tr> </tbody> </table>

#### 1.2.3 Naming Conventions

We use the following naming conventions in this documentation:

- **Release**

  Unless otherwise specified, we use "release" to refer to the release of SAP NetWeaver.

- **SAP MaxDB name**

  `DBSID` refers to the SAP MaxDB name.
For `<DBSID>` you need to substitute your SAP MaxDB name, for example, `MDB`.

- **SAP system name**

  `SAPSID` refers to the SAP system name. Pay attention to lowercase and uppercase. If `<SAPSID>` is used, insert your SAP system name, for example, `PRD`.

- **<SAPSID> user name**

  The user name is written in uppercase and abbreviated as `<SAPSID>ADM`.

  ⚠️ **Caution:** Always enter the user name `<sapsid>adm` in lowercase for the standalone database server.

- **Support Packages and patches**

  For more information, see http://service.sap.com/patches

- **SAP MaxDB operational states**

  There are the following SAP MaxDB operational states:

<table> <thead> <tr> <th>SAP MaxDB State Identifier</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>ONLINE</td> <td>The database instance has been started and users can log on.</td> </tr> <tr> <td>ADMIN</td> <td>The database instance is only available to administrators.</td> </tr> <tr> <td>OFFLINE</td> <td>The database instance is not running.</td> </tr> </tbody> </table>

# 2 Planning

**Prerequisites**

You have checked the SAP Notes for the upgrade [page 5].

**Process Flow**

You have to complete the following planning activities:

1. If required, you read about the SAP MaxDB isolated installation [page 8], which is the new kind of installation as of SAP MaxDB version 7.8.
2. You check the database requirements [page 10].
3. You check the operating system requirements [page 10].
4. You check the SAP system requirements [page 10].
5. You choose an upgrade strategy [page 11].
### 2.1 SAP MaxDB Isolated Installation

As of SAP MaxDB version 7.8, the installation principles of SAP MaxDB have changed, so that the following features are now supported:

- Multiple SAP MaxDB installations of the same version can be installed on one computer
- Multiple SAP MaxDB installations of different versions can be installed on one computer
- Multiple different clients as well as multiple versions of the same client can be installed on the same computer
- Any server or client installation can be maintained individually
- SAP MaxDB databases or liveCache installations can be maintained individually

When installing SAP MaxDB software of version 7.8 and higher, for SAP environments the software and database are stored in the following paths:

<table> <thead> <tr> <th>Path Name</th> <th>Variable Name</th> <th>Properties</th> <th>Shared</th> <th>Stored Components</th> </tr> </thead> <tbody> <tr> <td>Global programs path</td> <td>GlobalProgPath</td> <td>/sapdb/programs</td> <td>By all SAP MaxDB installations on this computer</td> <td>Installation tools (for example: sdbuninst, sdbverify, sdbconfig), global listener (sdbgloballistener)</td> </tr> <tr> <td>Global data path</td> <td>GlobalDataPath</td> <td>/sapdb/data<br>Once per computer only</td> <td>By all SAP MaxDB installations &lt; 7.8 on this computer</td> <td>Parameter and log files of SAP MaxDB versions &lt; 7.8</td> </tr> <tr> <td>Installation path</td> <td>InstallationPath</td> <td>For database server software: /sapdb/&lt;DBSID&gt;/db<br>For database client software: /sapdb/clients/&lt;SAPSID&gt;</td> <td>No</td> <td>Database server software, such as database kernel and X server software (for SAP MaxDB versions &lt; 7.8), and database client software, such as DBMCLI, SQLDBC, JDBC</td> </tr> <tr> <td>Private data path</td> <td>PrivateDataPath</td> <td>For database server software: /sapdb/&lt;DBSID&gt;/data<br>For database client software: /sapdb/clients/&lt;SAPSID&gt;/data</td> <td>No</td> <td>All database-related files are stored here (they are no longer stored in the global data path). These files include installation registry and log files, database parameter files, the knldiag file, and so on.</td> </tr> </tbody> </table>

As a consequence of the new installation principles, higher versions can no longer unintentionally update existing software from a previous version. Since more than one client can be installed on an individual computer, as of SAP MaxDB 7.8 each application server has its own SAP MaxDB runtime (client software installation). This lets you update any client installation without affecting any other client installation. For example, you can now update the SAP MaxDB runtime of an individual application server without affecting a second application server on the same computer. Likewise, you can upgrade a database together with its software to a higher version without affecting another database on this computer and its current connections. You also can run test systems and production systems on the same computer. The creation of system copies in SAP systems is now much easier, since a private data path is used for the SAP MaxDB software.

⚠️ **Caution:** There is still only one database instance allowed for each software installation.

### 2.2 Database Requirements

As part of the upgrade planning [page 8], make sure that your database meets the following requirements before you start the upgrade:

- The database is ready to run.
- The system tables have been loaded at least once for the existing instance.
- The database instance is the only instance that refers to the installation path of the software version that you want to upgrade.
- The database parameters of the database instance that you want to upgrade have not changed since the last restart.
- The database start version (that is, before you start the upgrade) is at least 7.5.

⚠️ **Caution:** If the database start version does not meet the above requirement, you must upgrade to this version before starting the upgrade. For more information about how to upgrade to the correct database start version, see SAP Note 498036.

### 2.3 Operating System Requirements

As part of the upgrade planning [page 8], make sure that your operating system meets the following requirements before you start the upgrade:

For the most up-to-date SAP MaxDB-specific release information on the database and operating system of your product, including required patch levels, check the SAP Product Availability Matrix (PAM) at http://service.sap.com/pam. There you can also find additional information on required operating system patch levels and patches for the C++ RTE.

As of SAP MaxDB version 7.6, we no longer support the operating system HP Tru64 UNIX.

### 2.4 SAP System Requirements

As part of the upgrade planning [page 8], make sure that your SAP system meets the following requirements before you start the upgrade:

- SAP MaxDB version 7.9 is initially released for SAP NetWeaver 7.0 Enhancement Package 3 and subsequent releases. For previous SAP releases, SAP Note 1353266 shows, with reference to the Product Availability Matrix (PAM), whether an official downward-compatible release exists for SAP products or whether a special release has been granted for the SAP upgrade start release.
For more information, see the Product Availability Matrix (PAM) at http://service.sap.com/pam. You can also find the information in the above SAP Notes at http://service.sap.com/notes.

### 2.5 Upgrade Strategy

As part of the upgrade planning [page 8], you choose an upgrade strategy, which depends on your database start version:

<table> <thead> <tr> <th>Your Database Start Version</th> <th>Your Upgrade Strategy</th> </tr> </thead> <tbody> <tr> <td>7.5 or later</td> <td>In-Place upgrade. With an In-Place upgrade, you upgrade the database instance and the database software. The start version of the database software must be version 7.5 or later and the target version must be 7.9 or later. For an In-Place upgrade, the software has a significant amount of new functionality that could cause incompatibilities between the existing data and the new software. Therefore, the adaptations to the new database functionality and structures are made internally during an In-Place upgrade.</td> </tr> <tr> <td>7.9</td> <td>Patch installation. With a patch installation, you only upgrade the database software. You can use this procedure if the only difference between the source and target version is the build number or the support package number, or both. For a patch installation, the software does not have a significant amount of new functionality that could cause incompatibilities between the existing data and the new software.</td> </tr> </tbody> </table>

# 3 Preparation

**Prerequisites**

You have completed planning the upgrade [page 8].

**Process Flow**

You have to complete the following preparations:

1. If your upgrade strategy is In-Place, you prepare for an In-Place upgrade [page 12].
2.
If your upgrade strategy is Patch installation, you prepare for an upgrade with Patch installation [page 13].

### 3.1 Preparing for an Upgrade with In-Place

**Use**

As part of the upgrade preparations [page 12] for an In-Place upgrade, you need to perform the preparations described below.

**Procedure**

1. Make sure that the operational state [page 6] of your database is ONLINE.
2. In case you need to recover the database, make sure that you have an installation kit with the start version of the database software. This means that you must have database version 7.5, 7.6, 7.7, or 7.8 with the same or a higher build.
3. Check the required free space in the database. You need at least 15% free space.
4. Check that there are no bad indexes in the database, using Database Studio or the Database Manager GUI. For more information about how to remove bad indexes, see SAP Note 566883.
5. Shut down the SAP system using the command stopsap or, if you have Windows platforms in your SAP system, the SAP Microsoft Management Console (SAP MMC). To use stopsap, enter the following command as user <sapsid>adm:

   ```
   stopsap <system ID> <system number> <SAPDIAHOST>
   ```

   **Note:** <SAPDIAHOST> refers to the instance ID of the additional application server instance. In SAP NetWeaver 7.0 or earlier, this is known as the dialog instance. For more information about SAP MMC, see http://help.sap.com/nw70
6. Bring the database to operational state OFFLINE using the Database Manager CLI command `db_offline`, Database Manager GUI, or Database Studio.
7. Bring the database to operational state ADMIN using the Database Manager CLI command `db_admin`, Database Manager GUI, or Database Studio.
8. Set up the database so that it can be recovered:
   - If you have a recent data backup, make an incremental data backup and a log backup.
   - Otherwise, make a complete data backup.

   ⚠️ **Caution:** If you do not have a backup, you might lose data in the event of a database failure that requires database recovery.
   After the backup, put the database in operational state ONLINE before the upgrade.
9. Exit the Database Manager CLI, GUI, or Database Studio and any other database applications that are running.
10. If you need to upgrade your operating system, do this now.

### 3.2 Preparing for an Upgrade with Patch Installation

**Use**

As part of the upgrade preparations [page 12] for an upgrade with Patch installation, you need to perform the preparations described below.

**Procedure**

1. Make sure that the operational state [page 6] of your database is ONLINE, ADMIN, or OFFLINE without any errors.
2. Stop application software, such as Database Studio or the Database Manager CLI or GUI, and the database that you intend to upgrade, so that you can run a full backup. In this case, you do not need to stop the X server.

   ⚠️ **Caution:** The upgrade tool does not check whether a database backup is available. Therefore, always perform a database backup before the upgrade, so that you can recover the database in the event of data loss.

# 4 Upgrade Process

**Prerequisites**

You have completed preparing for the upgrade [page 12].

**Process Flow**

You have to complete the following to perform the upgrade:

1. If your upgrade strategy [page 11] is In-Place, perform the upgrade for In-Place [page 15].
2. If your upgrade strategy [page 11] is Patch installation, perform the upgrade for Patch installation [page 17].
3. If required, you upgrade the SAP MaxDB client software [page 17].

### 4.1 Performing an Upgrade for In-Place

**Prerequisites**

**Note:** Depending on the size of your database catalog, the upgrade can take a long time to finish, especially the database migration.

⚠️ **Caution:** *Never terminate the upgrade.* If you terminate the upgrade, you risk losing all your data. Your only option then is to use data backups to recover the database instance, which is risky and time-consuming.
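The preparation steps above drive the database through its operational states with the Database Manager CLI. As a dry-run sketch that only prints the calls without executing them (the database name `MDB` and the DBM user `control` follow this guide's examples, `<password>` is a placeholder; verify the exact `dbmcli` syntax against your installed version):

```shell
# Print, without executing, the dbmcli state transitions used in section 3.1.
DBSID="MDB"
DBM_USER="control"

for step in db_state db_offline db_admin; do
    line="dbmcli -d ${DBSID} -u ${DBM_USER},<password> ${step}"
    echo "${line}"
done
```

Replacing `echo "${line}"` with `eval "${line}"` (after substituting the real password) would execute the same sequence.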
#### Context

As part of upgrading the database [page 15], you perform this procedure to upgrade your database if your chosen upgrade strategy [page 11] is In-Place.

#### Procedure

⚠️ Caution
When upgrading large databases, make sure that – regarding system limits – the environment of the OS user `root` corresponds to the environment which is normally used for the database. Otherwise, the upgrade process might stop due to insufficient limit settings.

1. Log on as user `root`.
2. Load the SAP MaxDB 7.9 DVD in the drive and mount the DVD.
3. Change the working directory as follows:

   ```bash
   cd <SAP MaxDB DVD>/DATA_UNITS/MAXDB_UPDATE
   ```

4. Start the upgrade as follows:

   ```bash
   ./DBUPDATE.SH
   ```

5. When the upgrade prompts you, enter the following:
   - SAP MaxDB name: `<DBSID>`
   - SAP system ID: `<SAPSID>`
   - DBM user name: `control`
   - DBM user password

#### Results

When you see the message confirming that the upgrade has completed successfully, this means that:

- The database instance and its software are now upgraded.
- The database instance is in the operational state ONLINE.
- Only valid for database start version 7.7 or lower: The isolated database client software (installation name `CL_<SAPSID>`) is now installed in the directory `/sapdb/clients/<SAPSID>`. This is only valid for database servers where associated SAP application software is installed.
- The DBENV scripts (`.dbenv_<hostname>.csh`, `.dbenv_<hostname>.sh`, `dbenv.csh`, and `dbenv.sh`) are now up-to-date in the home directories of the SAP system administrator (`<sapsid>adm`) and the SAP database administrator (`sqd_<sapsid>`). This is only valid for database servers where associated SAP application software is installed.
- The upgrade is flagged as complete.

### 4.2 Performing an Upgrade for Patch Installation

#### Use

As part of upgrading the database [page 15], you perform this procedure to upgrade your database if your chosen upgrade strategy [page 11] is Patch installation.

#### Procedure

1. Log on as user `root`.
2.
Load the SAP MaxDB 7.9 DVD in the drive and mount the DVD.
3. Change the working directory as follows:

   ```bash
   cd <SAP MaxDB DVD>/DATA_UNITS/MAXDB_UPDATE
   ```

4. Start the upgrade as follows:

   ```bash
   ./DBUPDATE.SH
   ```

5. When the upgrade prompts you, enter the following:
   - SAP MaxDB name: `<DBSID>`
   - SAP system ID: `<SAPSID>`
   - DBM user name: `control`
   - DBM user password

#### Result

When you see the message confirming that the upgrade has completed successfully, this means that:

- The database software is now upgraded.
- The database instance is now in the operational state ONLINE.
- The upgrade is flagged as complete.

### 4.3 Upgrading the SAP MaxDB Client Software

If required, as part of upgrading the database [page 15], you must upgrade the database client software for the host where the SAP central or dialog instance runs.

#### Prerequisites

Stop the following:

- The central and dialog instance
- Any other SAP MaxDB instances that are running on the central or dialog instance server

#### Procedure

1. Log on as the `root` user.
2. Load the SAP MaxDB 7.9 DVD in the drive and mount it.
3. Start the client software upgrade: `<SAP MaxDB DVD>/DATA_UNITS/MAXDB_UPDATE/DBUPDATE.SH -client <SAP System Name>`
4. Log on again as the `<sapsid>adm` or `sqd<dbsid>` user, or both.

   **Note**
   Only valid for database start version 7.7 or lower: Make sure that you log on from the beginning, because the environment of `<sapsid>adm` and `sqd<dbsid>` has been changed. After logging on again, restart the SAP service `SAP<SAPSID>_<InstanceNumber>` so that the environment changes become active.
5. Restart the SAP system using the commands `stopsap` and `startsap` or – if you have Windows platforms in your SAP system – the SAP Microsoft Management Console (SAP MMC).
To use the `stopsap` and `startsap` commands, enter the following commands as user `<sapsid>adm`:

   ```
   stopsap <system ID> <system number> <SAPDIAHOST>
   startsap <system ID> <system number> <SAPDIAHOST>
   ```

   **Note**
   `<SAPDIAHOST>` refers to the instance ID of the additional application server instance. In SAP NetWeaver 7.0 or earlier, this is known as the dialog instance.

   For more information about SAP MMC, see:
6. If required, check the client software version as described in SAP Note 822239. If you need to obtain the latest client software, see SAP Note 649814, which describes how to download it from SAP Service Marketplace.

## 5 Post-Upgrade

#### Prerequisites

You have completed the upgrade [page 15].

#### Process Flow

You have to complete the following post-upgrade steps:

1. If your upgrade strategy [page 11] is In-Place, perform post-upgrade steps after an In-Place upgrade [page 19].
2. You update the database software to the current release [page 21].
3. You install or upgrade Database Studio [page 21].
4. If required, you set up Secure Sockets Layer (SSL) protocol for database server communication [page 23].

### 5.1 Performing Post-Upgrade Steps After an In-Place Upgrade

#### Use

As part of the post-upgrade [page 19] steps, you perform this procedure if your chosen upgrade strategy [page 11] is In-Place.

#### Procedure

1. Perform a complete backup of the database data so that you can recover the new database if necessary.

   ⚠️ Caution
   We do not guarantee that you can recover the database using backups from different versions of the database.

2. Log on again as the `<sapsid>adm` or `sqd<dbsid>` user, or both.

   **Note**
   Only valid for database start version 7.7 or lower: Make sure that you log on from the beginning, because the environment of `<sapsid>adm` and `sqd<dbsid>` has been changed. After logging on again, restart the SAP service `SAP<SAPSID>_<InstanceNumber>` so that the environment changes become active.
3.
This step applies only to SAP installations that include AS Java (SAP J2EE Engine) and when the database start version is 7.7 or lower. Obtain the JDBC driver from the following location: `/sapdb/clients/<SAPSID>/runtime/jar`

   **Note**
   The location of the driver has changed compared to previous versions of SAP MaxDB. This is the old location of the JDBC driver, before SAP MaxDB version 7.8: `/sapdb/programs/runtime/jar`. SAP Note 867976 describes how to update the location of the JDBC driver for the Java application server.

4. Start the SAP system using the command `startsap` or – if you have Windows platforms in your SAP system – the SAP Microsoft Management Console (SAP MMC). To use `startsap`, enter the following command as user `<sapsid>adm`: `startsap <system ID> <system number> <SAPDIAHOST>`

   **Note**
   `<SAPDIAHOST>` refers to the instance ID of the additional application server instance. In SAP NetWeaver 7.0 or earlier, this is known as the dialog instance.

   For more information about SAP MMC, see: http://help.sap.com/nw70 > SAP NetWeaver 7.0 Library – English > SAP NetWeaver Library > SAP NetWeaver by Key Capability > Solution Life Cycle Management by Key Capability > Solution Monitoring > Monitoring in the CCMS > SAP Microsoft Management Console: Windows

5. We recommend that you update the optimizer statistics.
6. Only valid for database start version 7.7 or lower: After a successful update, and assuming that no SAP application is using it, you can deinstall the SAP MaxDB client software from version 7.7 or lower.

   ⚠️ Caution
   Only deinstall the old legacy SAP MaxDB client software if you are completely sure that you do not need it.

   Log in as user `root` and execute the following command: `/sapdb/programs/bin/sdbuninst -i Legacy`

### 5.2 Updating the Database Software to the Current Release

After the upgrade and before you start production operation, we strongly recommend that you update the database software to the latest SAP MaxDB patch available on the SAP Software Distribution Center (SWDC).
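The JDBC driver location change described in step 3 of Section 5.1 can be captured in a small helper. The version-to-path mapping uses the two directories named there; `SAPSID` and the version strings are placeholders:

```shell
# Sketch: pick the JDBC driver directory depending on the database version,
# using the two locations named in Section 5.1 (step 3).
jdbc_dir() {
  sapsid=$1; version=$2                # version as major.minor, e.g. 7.7
  case "$version" in
    7.[0-7]) echo "/sapdb/programs/runtime/jar" ;;          # before 7.8
    *)       echo "/sapdb/clients/$sapsid/runtime/jar" ;;   # 7.8 or higher
  esac
}
jdbc_dir PRD 7.9    # -> /sapdb/clients/PRD/runtime/jar
```

The isolated-installation path `/sapdb/clients/<SAPSID>/runtime/jar` only exists once the database has been upgraded, as described above.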
#### Procedure

Download the latest SAP MaxDB patches from http://support.sap.com/swdc > Databases > SAP MaxDB. For more information about upgrading to an SAP MaxDB patch from an SWDC Support Package, see SAP Note 735598.

### 5.3 Installing or Upgrading Database Studio for SAP MaxDB

This section describes how to install or upgrade Database Studio for SAP MaxDB and SAP liveCache. Database Studio is the database administration tool for SAP MaxDB. With Database Studio you can administer MaxDB databases version 7.6 and newer.

#### Prerequisites

- You can install Database Studio on Linux or Windows in your network, even if your database runs on a different operating system. You can then remotely administer the database on a different host. The instructions below refer mainly to the Windows version.

  **Note**
  To run Database Studio on Linux, you need to meet the requirements for the SAP MaxDB database server.

- Your PC must meet the following minimum requirements:
  - Software requirements:

<table>
<thead>
<tr><th>Operating System</th><th>Database Studio 7.9.08</th><th>Database Studio 7.9.09</th></tr>
</thead>
<tbody>
<tr><td>Windows 2008</td><td>X64</td><td>X64</td></tr>
<tr><td>Windows 2008 R2</td><td>X64</td><td>X64</td></tr>
<tr><td>Windows Vista</td><td>IA32 and X64</td><td>X64</td></tr>
<tr><td>Windows 7</td><td>IA32 and X64</td><td>X64</td></tr>
<tr><td>Windows 8</td><td>IA32 and X64</td><td>X64</td></tr>
<tr><td>Windows 10</td><td>IA32 and X64</td><td>X64</td></tr>
</tbody>
</table>

  - **Hardware requirements:**
    - RAM: 512 MB (recommended RAM: 1 GB)
    - Processor speed: 1.5 GHz
    - Free disk space: 200 MB
    - Monitor: 1024x768 pixels, 256 colors

- You can obtain the required files by downloading them from: https://launchpad.support.sap.com/#/softwarecenter > Databases > SAP MaxDB > Database Patches
> MAXDB GUI COMPONENTS/TOOLS > MAXDB DATABASE STUDIO 7.9

- Database Studio 7.9.09 comes with the SAP Java Runtime (SAP JVM). You no longer need to download a Java runtime.
- Database Studio 7.9.08 is still available for download. To check your Java version, enter the following command: `java -version`. To download Java, go to http://java.com/en/download.

#### Context

**Note**
Database Studio replaces Database Manager GUI and SQL Studio, which were available in previous releases. For up-to-date information about installing Database Studio, see SAP Note 1097311. For more information about Database Studio, including troubleshooting, see SAP Notes 1097311 and 1795588.

#### Procedure

1. Start the installation or upgrade by executing the downloaded `SDBSETUP.EXE` (Windows clients) or `SDBSETUP` (Linux clients) file. The Installation Manager starts.
2. Follow the Installation Manager steps to install or upgrade Database Studio.
3. If you are prompted to restart your computer after the installation, make sure that you first shut down any databases that are running.

### 5.4 Secure Sockets Layer Protocol for Database Server Communication

The SAP MaxDB database server supports the Secure Sockets Layer (SSL) / Transport Layer Security (TLS) protocol. You can use this protocol for communication between the database server and its client, here the Application Server (AS). SSL guarantees encrypted data transfer between the SAP MaxDB database server and its client applications. In addition, the server authenticates itself to the client. You need to install SAP's cryptographic library, SAPCRYPTOLIB. For more information on software versions, see SAP Note 2243688.

⚠️ Caution
There is a performance cost for SSL, since the data has to be encrypted, which requires time and processing power.
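For Database Studio 7.9.08, which still needs a separately installed Java runtime, a dotted-version comparison like the following can check a version string (as printed by `java -version`) against a required minimum. The minimum used here is illustrative, not a documented requirement:

```shell
# Sketch: compare two dotted version strings; true if $1 >= $2.
# Uses GNU 'sort -V' for natural version ordering.
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

ver_ge 1.8.0 1.6.0 && echo "Java version sufficient"
```

The same helper works for any dotted version check, for example comparing an installed client version against the value reported per SAP Note 822239.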
To use SSL, you need to install the SAP Cryptographic Library [page 23] and generate the personal security environment [page 25] (PSE) on the server (SSL Server PSE) and on the client (SSL Client PSE). In addition, you need to configure the SSL communication between the application server and the database server [page 28].

#### Related Information

- Installing the SAP Cryptographic Library [page 23]
- Generating the Personal Security Environment [page 25]
- Configuring the SSL Communication between the Application Server and the Database Server [page 28]

### 5.4.1 Installing the SAP Cryptographic Library

This section describes how to install the SAP Cryptographic Library.

#### Prerequisites

Download the appropriate installation package for your operating system and liveCache version from: https://launchpad.support.sap.com/#/softwarecenter > Support Packages & Patches > SAP TECHNOLOGY COMPONENTS > SAPCRYPTOLIB > COMMONCRYPTOLIB <version>

#### Context

The SAP Cryptographic Library supplies the cryptographic functions required to build a database server-client connection using the Secure Sockets Layer (SSL) protocol. Therefore, you need to install the SAP Cryptographic Library on the host machine of the SAP MaxDB database server and the SAP Application Server (AS).

The installation package consists of the following:

- The SAP Cryptographic Library:
  - SAP liveCache >= 7.9.09: CommonCryptoLib (CCL)
  - SAP liveCache < 7.9.09: SAPCRYPTOLIB
- The configuration tool `sapgenpse.exe`

The installation package is called `SAPCRYPTOLIBP_<patch_level>-<platform_id>.SAR`. For example, CCL 8.4.45 on 64-bit AIX is called `SAPCRYPTOLIBP_8445-20011699.SAR`. For more information on the CCL, see SAP Note 1848999. You use the configuration tool to generate key pairs and PSEs.

#### Procedure

1.
Unpack the installation package for the SAP Cryptographic Library with `sapcar.exe` (which you can find, for example, on your installation master media), using the following command:

   ```
   sapcar -xvf <name of your package>
   ```

   **Note**
   The remainder of the procedure (as described below) does not apply to client applications such as SQL Studio, which do not recognize an independent directory. In this case, you must copy the sapcrypto installation package to the installation directory of the application.

2. Copy the sapcrypto library to the `lib` subdirectory of the independent program directory. You can find the value of the independent program directory by entering the following command:

   ```
   dbmcli dbm_getpath IndepProgPath
   ```

   **Example**
   The independent program directory might be called the following: `/sapdb/programs/lib`

3. Copy the configuration tool `sapgenpse.exe` to the directory `<independent program>\lib`.
4. Create a subdirectory called `sec` under the independent data directory.

   **Example**
   The result might look as follows: `/sapdb/data/sec`

5. Make sure that the directory and the files that the `sec` directory contains – including the SSL Server PSE – belong to the user `lcown` and the group `lcadm`, and that the rights are restricted to `0660`.

### 5.4.2 Generating the Personal Security Environment

This section describes how to generate the SSL Server PSE and the SSL Client PSE.

#### Context

The information required by the database server or client application to communicate using Secure Sockets Layer is stored in the Personal Security Environment (PSE). The required information differs according to whether the PSE is for the server or the client:

- **SSL Server PSE**
  This PSE contains the security information from the database server, for example, the public-private cryptographic key pair and certificate chain. To install the SSL Server PSE, you need to generate the PSE. You can either do this for a single database server or system-wide.
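The copy steps of Section 5.4.1 can be previewed as a dry run. The package name is the AIX example from that section, the library file name `libsapcrypto.so` is an assumed example, the paths are the defaults named above, and `run` only prints each command:

```shell
# Dry-run sketch of the Section 5.4.1 installation steps.
PKG=SAPCRYPTOLIBP_8445-20011699.SAR   # example package name from the text
INDEP_PROG=/sapdb/programs            # from: dbmcli dbm_getpath IndepProgPath
INDEP_DATA=/sapdb/data
run() { echo "$*"; }                  # preview only; nothing is copied

run sapcar -xvf "$PKG"                                # step 1: unpack
run cp libsapcrypto.so "$INDEP_PROG/lib/"             # step 2 (assumed file name)
run cp sapgenpse.exe "$INDEP_PROG/lib/"               # step 3
run mkdir -p "$INDEP_DATA/sec"                        # step 4
run chmod 0660 "$INDEP_DATA/sec"                      # step 5, plus chown lcown:lcadm
```

On a real host you would drop the `run` wrapper and perform the `chown` to `lcown:lcadm` as described in step 5.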
The SSL Server PSE is called `SDBSSLS.pse`.

- **SSL Client PSE**
  The client requires an anonymous certificate called `SDBSSLA.pse`, which contains the list of the public keys of trustworthy database servers.

#### Procedure

1. You generate the SSL Server PSE [page 26].
2. You generate the SSL Client PSE [page 27].

### 5.4.2.1 Generating the SSL Server PSE

Proceed as follows to generate the SSL Server PSE.

#### Context

**Note**
You need to know the naming convention for the distinguished name of the database server. The syntax of the distinguished name, which you enter in the procedure below, depends on the Certification Authority (CA) that you are using.

#### Procedure

1. Change to the `<global programs>\lib` directory.
2. Set up the following environment variable:

   ```plaintext
   SECUDIR=<global data>\sec
   ```

3. Enter `<global program>/lib` in the environment variable `LD_LIBRARY_PATH`.
4. Create an SSL Server PSE, `SDBSSLS.pse`, and generate a certificate request file, `certreq`, in the directory defined by `SECUDIR` (see step 2):

   ```plaintext
   sapgenpse gen_pse -v -r <SECUDIR>\certreq -p SDBSSLS.pse "<your distinguished name>"
   ```

   For each database server that uses a server-specific PSE, you must set up a unique certificate request. If you are using a valid system-wide SSL Server PSE, you only need to set up a single certificate request for all servers.

5. Send the certificate request to the CA for signing. You can either send it to the SAP CA or to another CA. You must make sure that the CA offers a certificate corresponding to the PKCS#7 certificate chain format. Thawte CA at the Thawte website offers a suitable certificate, either SSL Chained CA Cert or PKCS#7 certificate chain format. The CA validates the information contained in the certificate request, according to its own guidelines, and sends a reply containing the public key certificate.
6.
After you have received the reply from the CA, make sure that the contents of the certificate request have not been destroyed during download. For example, if you requested the certificate on a UNIX system and stored it on a Windows front end, the formatting (that is, line indents and line breaks) is affected. To check the contents, open the certificate request with a text editor (such as Notepad) and repair the line indents and line breaks.

   **Example**
   This is an example of a certificate request:

   ```plaintext
   -----BEGIN CERTIFICATE REQUEST-----
   MIIBPzCBqQIBADAAMIIGMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD/302IT+/YWpkgnSw7U9FWneyWz3W110S18aFCYkRo00wCpD8UwcaC4dds4uGT6h12W1J0/FotUg+EQxon2bArk9sTaikn1mqx3YA0e/gEAdf1wwYkb0gjMk81lM/2b9Jd8srMFyoB9jMC7v5u7+tZWmWaRjnvc1v1GgMw1DAQQBoAAwDQYJKoZIhvcNAQEFBQADBgYEAx2uTAOKpdGmxUKY1WdasUsipw4vhfaHa7ZDBwipvKJ8akYCT+dpmVjhcp9E7cUjL80/6Rup5cnLAA05FhVt5MS6zJJa9YYSN9XP+5/MPF6Q4ayJOvTkSppbrPrWLbKhiDds97LVuQ/myKIAHECwyW6t7sAFJWn4P0fjxKm=
   -----END CERTIFICATE REQUEST-----
   ```

7. Import the reply to the SSL Server PSE:
   a. Copy the text to a temporary file called `srcert`.
   b. Enter the following command:

      ```bash
      sapgenpse import_own_cert -c srcert -p SDBSSLS.pse
      ```

   You have generated the SSL Server PSE. You can now start the X Server as usual (if it is already running, you must stop and restart it).

8. To check whether the SSL functionality is working correctly, view the trace file `niserver_<local computer name>.trace` in the `<global data>\wrk` directory.

### 5.4.2.2 Generating the SSL Client PSE

Proceed as follows to generate the SSL Client PSE.

#### Procedure

1. Change to the `<global programs>\lib` directory.
2. Set up the following environment variable:

   ```bash
   SECUDIR=<global data>\sec
   ```

3. Enter `<global program>/lib` in the environment variable `LD_LIBRARY_PATH`.
4.
Create an anonymous SSL Client PSE, `SDBSSLA.pse`, in the directory defined by `SECUDIR` (see previous step):

   ```bash
   sapgenpse gen_pse -v -noreq -p SDBSSLA.pse
   ```

   You can leave the distinguished name empty. Before you can establish an SSL connection to a database server, the server certificate must be entered in the PK list of the anonymous client certificate.

5. To see the database server certificate, enter the following command:

   ```bash
   ```

6. Start the import with this command: `x_ping -n <servermode> -i[import]`
7. To administer the PSE, use the configuration tool `sapgenpse`. For more information, enter the following command: `sapgenpse -h`

   **Note**
   For applications such as SQL Studio, replace the global data or global program directory in the above description with the relevant installation directory.

### 5.4.3 Configuring the SSL Communication between the Application Server and the Database Server

Set the connection information for each database connection for which SSL is to be used.

#### Procedure

Using transaction `dbco`, set the connection information for each database connection for which SSL is to be used as follows:

- Connection information for database connection `<name>`: `maxdb:remotes://host/database/<SID>-<SID>`
- Connection information for database connection `<name>`: `@DBM_SSL:host-SID`

For more information, see SAP Note 2190094.

#### Example

Database connection: Test

- `<host>`: lu12345
- `<SID>`: WB9

Connection information for database connection Test: `maxdb:remotes://lu12345/database/WB9-WB9`
Connection information for Test+: `@DBM_SSL:lu12345-WB9`

## 6 Additional Information

### 6.1 Database Directory Structure

You can set up several database instances with different releases in one user environment. For this, the database services are split into the following areas.

**Note**
As of SAP MaxDB version 7.8, with the introduction of the isolated installation, the database directory structure in SAP installations has changed.
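The two connection-string formats from Section 5.4.3 can be assembled with small helpers, using the host and SID from the example there:

```shell
# Sketch: build the two DBCO connection-string formats from Section 5.4.3.
ssl_url() { echo "maxdb:remotes://$1/database/$2-$2"; }   # <host> <SID>
dbm_ssl() { echo "@DBM_SSL:$1-$2"; }                      # <host> <SID>

ssl_url lu12345 WB9    # -> maxdb:remotes://lu12345/database/WB9-WB9
dbm_ssl lu12345 WB9    # -> @DBM_SSL:lu12345-WB9
```

The output strings match the Test / Test+ example values and would be pasted into the connection information fields of the respective DBCO entries.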
This section describes the new structure.

- **Global programs path: `GlobalProgPath`**
  This area contains all services that are only allowed to exist once per computer and are downward compatible (for example, installation tools and the global listener, `sdbgloballistener`). Therefore, only programs of the most recent installed version exist here. You can check the path for `GlobalProgPath` with the following `dbmcli` command:

  ```
  dbmcli dbm_getpath GlobalProgPath
  ```

  By default, `GlobalProgPath` is set as follows for the installation: `/sapdb/programs`

- **Global data path: `GlobalDataPath`**
  This area contains all data necessary for an instance with version 7.7 or lower, including run directories and their parameter files. The directory containing this data is called the `GlobalDataPath`. You can check the path for `GlobalDataPath` with the following `dbmcli` command:

  ```
  dbmcli dbm_getpath GlobalDataPath
  ```

  By default, `GlobalDataPath` is set as follows for the installation: `/sapdb/data`

- **Private data path: `PrivateDataPath`**
  This area contains all data necessary for an instance with version 7.8 or higher, including run directories and their parameter files. The directory containing this data is called the `PrivateDataPath`. You can check the path for `PrivateDataPath` with the following `dbmcli` command:

  ```
  dbmcli -s inst_enum <InstallationPath>
  ```

  By default, `PrivateDataPath` is set as follows for the installation:
  - SAP MaxDB server software installations: `/sapdb/<DBSID>/data`
  - SAP MaxDB client software installations: `/sapdb/clients/<SAPSID>/data`

- **Installation path: `InstallationPath`**
  This area contains all programs necessary for a running database instance or for client software.

  The `InstallationPath` of server software (for a database instance): The programs must all correspond to the instance version and are installed once per instance. The programs include, for example, `kernel`, `console`, `dbmsrv`, and so on.
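The default paths listed in this section can be summarized as a small lookup; `MAX` stands in for a `<DBSID>` or `<SAPSID>` placeholder:

```shell
# Sketch: the default directory layout from Section 6.1 as a lookup table.
default_path() {
  kind=$1; id=$2                       # id: <DBSID> or <SAPSID>
  case "$kind" in
    global_prog)  echo "/sapdb/programs" ;;           # GlobalProgPath
    global_data)  echo "/sapdb/data" ;;               # GlobalDataPath
    private_data) echo "/sapdb/$id/data" ;;           # server, version >= 7.8
    client_data)  echo "/sapdb/clients/$id/data" ;;   # client software
  esac
}
default_path private_data MAX    # -> /sapdb/MAX/data
```

On a real host, the authoritative values come from the `dbmcli` path queries shown above, not from these defaults.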
The storage location is known as the `InstallationPath` of the instance.

  The `InstallationPath` of client software: This area contains shared libraries and DLLs required by SAP clients at runtime connecting to database instances of version 7.8 or higher, including SQLDBC, JDBC, ODBC, and so on. The client software is installed on each computer, for each SAP instance separately.

  The installation sets up the directory as follows:
  - SAP MaxDB server software installations: `/sapdb/<DBNAME>/db`
  - SAP MaxDB client software installations: `/sapdb/clients/<SAPSID>`

  You can display instance names and the associated `InstallationPath` on a computer with the following `dbmcli` command: `dbmcli db_enum`

  You can display the `InstallationPath` of installed software on a computer with the following `dbmcli` command: `dbmcli inst_enum`

### 6.2 Log Files for Troubleshooting

This section provides information about how you can find log files relevant for the upgrade and the associated software installation.

All steps of the upgrade and the associated software installation are logged in the file with the following name: `/var/tmp/SDBUPD.log`

If the directory `<independent_data_path>` is not known at the time of failure, the log is written to the current directory.

**Note**
If you are updating the SAP MaxDB client software, you can find the log files here instead: `/var/tmp/SDBINST.log`

## Important Disclaimers and Legal Information

#### Hyperlinks

Some links are classified by an icon and/or a mouseover text. These links provide additional information. About the icons:

- Links with the icon 🌐: You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your agreements with SAP) to this:
  - The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
  - SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness.
SAP shall not be liable for any damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.

- Links with the icon 🚧: You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using such links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this information.

#### Beta and Other Experimental Features

Experimental features are not part of the officially delivered scope that SAP guarantees for future releases. This means that experimental features may be changed by SAP at any time for any reason without notice. Experimental features are not for productive use. You may not demonstrate, test, examine, evaluate or otherwise use the experimental features in a live operating environment or with data that has not been sufficiently backed up. The purpose of experimental features is to get feedback early on, allowing customers and partners to influence the future product accordingly. By providing your feedback (e.g. in the SAP Community), you accept that intellectual property rights of the contributions or derivative works shall remain the exclusive property of SAP.

#### Example Code

Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of example code unless damages have been caused by SAP's gross negligence or willful misconduct.

#### Gender-Related Language

We try not to use gender-specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.
A Hierarchical, Co-operative Exception Handling Mechanism

Jørgen Lindskov Knudsen
DAIMI PB - 204
January 1986

A Hierarchical, Co-operative Exception Handling Mechanism

Jørgen Lindskov Knudsen
Computer Science Department
Aarhus University, Denmark

Abstract

One of the most dominant philosophies within programming disciplines is the philosophy of layered systems. In a layered system (or hierarchical system) the layers are thought of as each implementing an abstract machine on top of the lower layers. Such an abstract machine in turn implements utilities (e.g. data structures and operations) to be used at higher layers. This paper will focus on exception handling in block-structured systems (as a special case of layered systems). It will be argued that none of the existing programming language proposals for exception handling support secure and well-behaved termination of activities in a block-structured system. Moreover, it is argued that certain termination strategies within block-structured systems cannot be implemented using the existing proposals. As a result of this discussion, and as a solution to the problems, a hierarchical, co-operative exception handling mechanism is proposed.

Introduction

The area of exception handling has been the topic of many papers during the last ten years. The foundation of the work has been the pioneering work by J.B. Goodenough, and most proposals for exception handling within programming languages owe a lot to this work. As a consequence hereof, the proposals share at least one characteristic, namely that exceptions and their handlers are defined separately, and handlers are associated with exceptions either by binding imperatives (e.g. bindings that are executed at run-time), or by dynamic bindings (e.g. exception handlers are found in the dynamic context of the raising statement). In this paper we will propose an exception handling mechanism that does not share the above characteristic.
We will examine the possibility of static binding of handlers to exceptions. To be more precise, we examine the possibility of declaring the exception and its handler together. Moreover, we examine static definition of the termination level, that is, static determination of which parts of the system should be destroyed as a consequence of raising a specific exception. The origin of the static approach to exception handling is the sequel concept, first introduced by R.D. Tennent. The origin of the sequel concept is the program-point concept, introduced by Landin. The static approach to exception handling was introduced by this author in a previous paper that was published independently of a paper by R.D. Tennent, who examined the introduction of sequels into Pascal. (Note that R.D. Tennent uses the term exit instead of sequel in references 25 and 26.) The present paper extends the proposal to take into account several aspects of exception handling within block-structured systems.

When an exception occurs in a block-structured system, an exception handling wave will go through a specific part of the system. This exception handling wave consists of exception handling within specific blocks (those affected by the exception handling wave). In order for this exception handling within a specific block to be secure and well-behaved when it is part of an exception handling wave, one must assume that outer blocks are in a consistent state that makes exception handling in this particular block possible. If consistency of outer levels is not ensured, handling of exceptions is difficult simply because the particular block cannot assume anything about the state of outer blocks. This is because the exception can occur at any of a number of different places (i.e. the state of outer blocks may be any of a number of different states).
Furthermore, the exception handling in this particular block must ensure that the block is in a consistent state before inner blocks are allowed to perform any exception handling actions. (Note that these consistent states are specific for exception handling and may not be consistent states if no exceptions had occurred.) Finally, the exception handling in this particular block must perform some specific actions, related to the handling of the exception in that block.

In order to cope with exception handling as described above, the static approach to exception handling is extended to include so-called prefixed and virtual sequels. It will be shown that the structure obtained by using prefixed and virtual sequels is well-suited for convoluting exception handling in inner blocks with exception handling in outer blocks. Traditional approaches to exception handling (such as the Goodenough proposal) are only well-suited for sequential exception handling in block-structured systems. That is, first perform exception handling in the innermost block involved, then in the next to innermost block, and so on until the outermost block involved is reached.

Exception handling has been dealt with from the viewpoint of verification, too. One notable approach in this direction is the work by F. Cristian. The exception handling mechanisms discussed in reference 4 are similar to the Clu mechanisms, and those discussed in reference 5 are similar to those of Ada. The aspect of verification of exception handling is important but outside the scope of this paper.

Organization of the paper

The static approach to exception handling is introduced in section 1. Section 2 contains a discussion of the problems of termination in block-structured systems and concludes that new language constructs are needed. Section 3 is the core of the paper and presents the proposal for a hierarchical, co-operative exception handling mechanism that is able to cope with the problems discussed in section 2.
Moreover, a discussion of other proposals for hierarchical exception handling is given. Appendices A, B and C are only introductory material and can be skipped by the knowledgeable reader. Appendix A is an introduction to prefixing and is intended to be read in connection with section 3.1. Appendix B is an introduction to virtual binding and is intended to be read in connection with section 3.2. Finally, appendix C is a taxonomy for exception handling and gives an introduction to the area of exception handling and clarifies several of the terms from the area. Appendix C is intended to be read either before, or in parallel with, reading the rest of the paper.

1 An Introduction to Static Exception Handling

The static approach to exception handling\textsuperscript{10} is an attempt to contribute to a better understanding of writing structured programs with exception handling. The kernel of the static approach is the sequel concept.

1.1 The Sequel Concept

A sequel is an abstraction of the goto-statement and is based on the procedure concept. A sequel is declared in the following way:

```
sequel S(...)
begin ... end
```

A **sequel definition** defines three aspects of exception handling: Firstly, it defines the **name** of an exception. Secondly, it defines the **handler** associated with the exception. Finally, it defines the **termination level** of the exception.

Semantically, the sequel concept is similar to the procedure concept except for the transfer of control after the sequel body has been executed. When a sequel is invoked and the sequel body has been executed, control is transferred to the termination point of the block in which the sequel is declared. A **sequel invocation** will initiate the execution of the body of the sequel. If the body terminates successfully, the **encloser** of the sequel will be terminated immediately. A sequel invocation will not terminate successfully if, during execution of the body of a sequel, another sequel is invoked.
That is, this second sequel will take over and the termination level of the entire exception handling will be the termination level of this second sequel invocation. This applies of course recursively if this second sequel invocation is interrupted by a third sequel invocation.* The consequences of this semantics are twofold: Firstly, during handling of an exception occurrence it might be discovered that the exception cannot be handled locally — that is, invocation of a sequel declared in an outer block. Secondly, it might be possible to discover that the exception can be handled more locally — that is, invocation of a sequel declared in an inner block. The example in figure 1 illustrates the use of sequels for exception handling. In the example, three sequels are declared: TableError, SearchError and ItemFound. They are declared in three different blocks and function TableSearchAndCount has a formal sequel parameter ItemNotFound. The sequel represents three aspects of exception handling: definition of the exception, association of the exception statically with a handler (the sequel body), and definition of the termination level. The semantics of the sequel concept are such that if TableError, SearchError or ItemFound is invoked and terminates successfully then it will result in termination of block $B_1$, block $B_2$, or the current invocation of function TableSearchAndCount, respectively. The semantics of sequels as parameters are such that if ItemNotFound is invoked within TableSearchAndCount($A', X'$, SearchError) and terminates successfully, then it will result in termination of block $B_2$ whereas within TableSearchAndCount($A', X'$, TableError) it will result in termination of level $B_1$. The difference is due to the termination level of SearchError and TableError, respectively. 
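The three aspects bundled in a sequel declaration can be approximated in a mainstream language. The following Python sketch is illustrative only (all names — `Sequel`, `block`, `search` — are invented, and Python exceptions stand in for the paper's static machinery): a sequel object carries its handler, bound at declaration time, and each block swallows exactly the sequels declared in it, so a successful sequel invocation terminates precisely its declaring block.

```python
class Sequel(Exception):
    """A sequel: invoking it runs its body, then terminates its declaring block."""
    def __init__(self, handler):
        self.handler = handler      # handler bound statically, at declaration time
    def invoke(self, *args):
        self.handler(*args)         # execute the sequel body ...
        raise self                  # ... then unwind to the declaring block

def block(body, *sequels):
    """Run `body`; a sequel declared in this block terminates exactly this block."""
    try:
        body()
    except Sequel as s:
        if s not in sequels:        # declared in an outer block: keep unwinding
            raise

# Usage, loosely after figure 1: table_error terminates the outer block,
# item_found only the current call of search.
log = []
table_error = Sequel(lambda x: log.append(f"table error on {x}"))

def search(x, item_not_found):
    item_found = Sequel(lambda i: log.append(f"found at {i}"))
    def body():
        if x == "bad":
            item_not_found.invoke(x)   # termination level chosen by the caller
        item_found.invoke(0)
    block(body, item_found)

block(lambda: search("ok", table_error), table_error)
block(lambda: search("bad", table_error), table_error)
print(log)   # ['found at 0', 'table error on bad']
```

Passing `table_error` as the `item_not_found` parameter mirrors the sequel-parameter mechanism of figure 1: the same raising statement terminates different levels depending on which sequel was passed in.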
In the previous paper on the static approach to exception handling, numerous examples of the use of the sequel concept are given and the semantics of the sequel concept are discussed in detail. Note that we use the terms exception and exception occurrence when we discuss certain events during program execution, whereas we use the terms sequel and sequel invocation when we discuss the specific language construct designed to be used for static exception handling.

*By encloser is meant the program unit (e.g. block, procedure invocation) in which the sequel is declared.

*Please note that this semantics of sequel invocation is not consistent with the semantics described in section 2.9 of the previous paper. In fact, the semantics described there does not solve the problems discussed, and the semantics described here is the only realistic alternative. The semantics of section 2.9 in the previous paper should therefore be abandoned.

```
declare (* block B1 *)
  ...
  function TableSearchAndCount(A:Table; X:Item;
                               sequel ItemNotFound(Item)):TableIndex;
    declare
      var I : TableIndex;
      sequel ItemFound(J:TableIndex);
      begin TableSearchAndCount := J; end;
    begin
      I := Hash(X);
      loop
        if A[I].Item = X then ItemFound(I)
        elseif A[I].Item = NullItem then ItemNotFound(X)
        end;
        I := (I mod TableMax) + 1;
      end;
      TableSearchAndCount := I;
    end;
  ...
  sequel TableError(X:Item);
  begin ... end;
  ...
begin
  ...
  declare (* block B2 *)
    ...
    sequel SearchError(X:Item);
    begin ... end;
    ...
  begin
    ...
    ItemPos := TableSearchAndCount(A', X', SearchError);
    ...
  end;
  ...
  ItemPos := TableSearchAndCount(A', X', TableError);
  ...
end

Figure 1: An example of sequels used for static exception handling
```

1.2 Static vs. Dynamic Exception Handling

In this section we will give a brief comparison of the static approach to exception handling with the dynamic approach to exception handling taken by Clu.16 We have chosen Clu because it is a very strict proposal; i.e. the dynamic aspects of the proposal are limited compared with other dynamic approaches such as the proposal by Goodenough, PL/1 and Ada. Moreover, the Clu proposal has been the inspiration for a proposal for an exception handling mechanism in Pascal.*

The Clu proposal consists of three parts. Firstly, each procedure or iterator (hereafter called routines) declares which exceptions it might raise during execution of the body of the routine. All exceptions raised within a routine are propagated to the immediate caller, which must handle the exception. This rule is not followed strictly: any exceptions not handled by the caller are converted into the universal exception Failure. This exception can be propagated through any routine-call without being handled. Secondly, exceptions are handled by the so-called catch-phrases attached to routine-calls. Thirdly, in order to be able to do local exception handling (i.e. exception handling internally in a routine) it is possible to raise exceptions that must be handled within the routine itself and cannot be propagated. If the exception is not handled locally, it is converted into the previously mentioned Failure exception. (Local exceptions are raised by the exit-statement, whereas non-local exceptions are raised by the signal-statement.)

To illustrate the differences between the dynamic approaches and the static approach to exception handling, two examples are given (figure 2 and figure 3). The examples are the same (useless) program, formulated in Clu and using the static approach, respectively. The example program is taken from reference 15.

```
begin
  a := sign(x) except when neg(i:int) S1; exit done end
  b := sign(y) except when neg(i:int) S2; exit done end
end except when done S3
```

Figure 2: A Clu example with exception handling

These examples illustrate several differences.
First of all, the static approach gives a clearer separation of the specification of the standard execution and the exceptional execution (they are textually separated). Secondly, specifically with respect to Clu, the examples show that within the static approach there is no need for two different ways of raising exceptions, as in Clu (signal for non-local exceptions and exit for local exceptions). This is because the static definition of the termination level makes it possible for the programmer to explicitly state the difference between local and non-local exceptions at the time of definition of the exception. (I.e. in the static approach, the termination level is a property of the exception declaration and not of the raising statement.) Thirdly, except for the sequel concept no additional concepts need to be introduced. Finally, although not explicit in the examples above, it is impossible, using the static approach, to raise an exception without it being handled. This avoids the need for mechanisms such as Failure exceptions with special propagation rules, as in Clu.

```
function sign(x:integer; sequel zero; sequel neg(integer)):integer;
begin
  if x < 0 then neg(x)
  elseif x = 0 then zero
  else sign := x
  end
end

declare
  sequel done; begin S3 end;
  sequel neg1(i:integer); begin S1; done end;
  sequel neg2(i:integer); begin S2; done end;
  sequel zero; begin ... end;
begin
  a := sign(x,zero,neg1);
  b := sign(y,zero,neg2)
end

Figure 3: The Clu example implemented using sequels
```

1.3 Discussion of Derived Definition

In addition to the sequel concept, derived definition is presented in the previous paper. Derived definition is not directly tied to exception handling, but is a very useful generalization of Ada's derived type and generic definition\textsuperscript{18}. However, the application of derived definition to sequels has shown some advantages that justify a discussion of derived definition in connection with exception handling.
Derived definition can be illustrated by modifying the example in figure 3. Assume that we, within the declare-block, several times want to handle the exceptions \texttt{neg} and \texttt{zero} as in the assignments to \texttt{a} and \texttt{b}. This can be expressed explicitly by derived definition, as illustrated in figure 4. That is, derived definition is used to specialize a parameterized program unit (such as a function, procedure, sequel, ...) by providing (some of) the parameters, such that these parameters are constantly bound within the scope of the derived definition. The concept of "curried" functions (see for instance reference 23) is an example of this specialization technique used for functions.

Unfortunately, the previous paper on the static approach did not discuss the semantics of derived definition applied to sequels. The problem is the definition of the termination level. Fortunately, the nature of the sequel concept leaves only two possibilities. Let us discuss the example of figure 5. The termination level of \texttt{S'} is either block \texttt{B}_{1} (i.e. the encloser of \texttt{S}), or block \texttt{B}_{2} (i.e. the encloser of the derived definition of \texttt{S}). In the latter case, the relationship between \texttt{S} and \texttt{S'} is only a matter of "lending" code. We find that \texttt{S'} is a specialization of \texttt{S} and as such, \texttt{S'} must inherit all of the properties of \texttt{S}, including the termination level. That is, the termination level of \texttt{S'} must be the termination level of \texttt{S} (i.e. block \texttt{B}_{1}). All in all, \texttt{S'} is a shorthand for \texttt{S} in that some of the parameters of \texttt{S} may be bound in \texttt{S'} and furthermore the remaining parameters of \texttt{S} may be specialized in \texttt{S'}.
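As the text notes, derived definition applied to functions is essentially currying, i.e. partial application. A hedged Python analogue of figure 4 (the function and parameter names are invented for illustration) uses `functools.partial` to bind the error-handling parameter constantly:

```python
from functools import partial

def table_search(table, item, on_not_found):
    """Search `table` for `item`; delegate the error case to a parameter."""
    if item in table:
        return table.index(item)
    return on_not_found(item)

# Derived definitions: the error-handling parameter is constantly bound,
# so each specialized function exposes only the remaining parameters.
search_or_minus1 = partial(table_search, on_not_found=lambda x: -1)

print(search_or_minus1(["a", "b"], "b"))   # 1
print(search_or_minus1(["a", "b"], "z"))   # -1
```

As with `a-sign` and `b-sign` in figure 4, the specialized function is a shorthand for the general one: the bound parameter cannot vary within its scope.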
In a recent report on a conceptual framework for programming languages\textsuperscript{11}, the derived definition concept is discussed in further detail as a general language mechanism supporting specialization. As will be shown in the following, the hierarchical, co-operative exception handling mechanism contains derived definition as a special case, and we therefore do not discuss derived definition of sequels any further.

```
function sign(...):...;
begin (* as in example 3 *) end;

declare
  (* declarations of sign, neg1, neg2 and zero as in example 3 *)
  function a-sign(x:integer) is new sign(x,zero,neg1);
  function b-sign(x:integer) is new sign(x,zero,neg2);
begin
  a := a-sign(x); ... x := b-sign(z);
  b := b-sign(y); ... y := a-sign(r);
end

Figure 4: An example of the use of derived definition on sequels
```

```
declare (* block B1 *)
  ...
  sequel S(...);
  begin ... end;
  ...
begin
  ...
  declare (* block B2 *)
    ...
    sequel S'(...) is new S(...);
    ...
  begin
    ... S'(...); ...
  end;
  ...
end

Figure 5: What is the termination level of S'?
```

2 Discussion of Termination in Block-Structured Systems

Many of the proposals for exception handling mechanisms are based on a termination model. That is, as a consequence of the occurrence of an exception, some part of the system is terminated. In this section we will discuss the problems of termination in block-structured systems as a motivation for a proposal for a hierarchical, co-operative exception handling mechanism.

Let us assume that we have a block-structured system as outlined in figure 6.

![Figure 6: A block-structured system](image)

Let us further assume that an exception occurrence has been identified in block $B_i$ and that an exception (named $E$) has therefore been raised in block $B_i$. Assuming that the exception handling mechanism being used follows a termination model, two problems arise: How do we determine the termination level? (i.e.
the outermost block that must be terminated as a consequence of the exception occurrence), and how do we terminate each intermediate block? The problem of how to determine the termination level has been discussed at length in the previous paper$^{10}$ and will furthermore be summarized in section C.2.3. We will therefore only discuss the termination process here.

Let us assume that the termination level is found to be block $B_j$ ($j < i - 1$; i.e. there is at least one intermediate block between the raising block $B_i$ and $B_j$). That is, blocks $B_i, B_{i-1}, \ldots, B_{j+1}, B_j$ have to be terminated. The most common termination process is the following: Terminate blocks $B_i, B_{i-1}, \ldots, B_{j+1}$ abruptly (i.e. no local clean-up actions possible), after which block $B_j$ is allowed to do some local clean-up actions (specified in the exception handler for the exception) before it terminates. In many cases this is too abrupt, since the intermediate blocks $B_i, B_{i-1}, \ldots, B_{j+1}$ have no opportunity to do local clean-up actions. Several of the proposals do therefore contain mechanisms that are oriented towards this situation (see section 3.5). Unfortunately, there exists a range of exception handling strategies that cannot be implemented using any of these mechanisms.

Let us again consider the system outlined in figure 6, in which exception $E$ is raised in block $B_i$ and where the termination level is block $B_j$. Now let us further assume that exception $E$ represents an erroneous computational state of the program and that we want some debugging information to be written on a specific file $X$ in the case of $E$ being raised. Let us further assume that clean-up handlers $H_{j+1}, \ldots, H_{i-1}, H_i$ (as discussed above) are specified in blocks $B_{j+1}, \ldots, B_{i-1}, B_i$. The problem is: Where do we specify the opening of the file $X$?
One solution (which we for obvious reasons will abandon) is to open the file at initialization of block $B_j$, regardless of whether $E$ will ever happen. If we specify the opening in handler $H_j$ in block $B_j$, the handlers $H_{j+1}, \ldots, H_{i-1}, H_i$ are unable to write debugging information on $X$, since it is not open when these handlers are activated (the handlers are activated in the sequence $H_i, H_{i-1}, \ldots, H_j$). Then, the only possible solution is to specify the opening in handler $H_i$. If, on the other hand, it could happen that $E$ were raised in block $B_{i-1}$, handler $H_{i-1}$ has to be able to open $X$. This implies that all handlers might specify opening of $X$. This implies further that a very careful programming style is needed in order to ensure that the file is opened when needed and that only one handler actually opens the file.

In general, the problem is that in some cases a block must be in a consistent state before any termination process of the inner blocks can be initiated, in order to ensure a consistent termination of these inner blocks. That is, the termination of a block is a three-phased process: Firstly, the state of the block must be made consistent. Secondly, the inner blocks are allowed to terminate consistently. Finally, the block itself may perform some local clean-up actions, ensuring further consistency. In the example above, the consistent state with respect to exception handling of $E$ in block $B_i$ is that file $X$ is open and that the blocks $B_j, B_{j+1}, \ldots, B_{i-1}$ have all done their pre-phase debugging actions. One final requirement is that the clean-up actions of each intermediate block can be made dependent on the identity of the exception that initiated the termination process.

We find that the two termination processes outlined above are very different and will therefore use two different terms to identify them: **abrupt termination** and **smooth termination**.
By **abrupt termination** is meant that blocks $B_i, B_{i-1}, \ldots, B_{j+1}$ are all terminated immediately, after which the handler in block $B_j$ is executed, and then block $B_j$ is terminated. By **smooth termination** is meant that the blocks $B_i, B_{i-1}, \ldots, B_j$ are each given the opportunity to do some local clean-up actions before they are terminated. We have not seen smooth termination discussed elsewhere, although its relevance is demonstrated by the above discussion. In section 3.5 other proposals for hierarchical exception handling will be discussed (AML/X\textsuperscript{20}, Mesa\textsuperscript{19}, Multics PL/I\textsuperscript{22}, Clu\textsuperscript{16}, Ada\textsuperscript{18}, the ANSI/IEEE Pascal proposal\textsuperscript{27} and Taxis\textsuperscript{21}).

In the next section we will introduce an extension to the static approach to exception handling that allows smooth termination without sacrificing the static behavior of the approach. The prime source of inspiration for the proposal can be found in an article on prefixed procedures\textsuperscript{28}. Prefixing used as a general program structuring mechanism is a very important aspect of the programming language Beta\textsuperscript{12}, and is discussed in depth in reference 11.

3 Hierarchical, Co-operative Exception Handling

Inspired by the above discussion, we want to define an exception handling mechanism that allows hierarchical exception handling. Furthermore, the multi-level termination process should involve co-operation of the involved levels during the termination process (i.e. smooth termination). Such an exception handling mechanism can be obtained by replacing the derived definition concept by the far more powerful prefixing concept. For an introduction to prefixing, see appendix A.

3.1 Prefixed Sequels

A prefixed sequel is similar to a derived sequel — the main difference is that the body-part of the sequel can be expanded by the prefixed sequel.
Assume that sequel S is defined in an outer block; then sequel S' can be defined as a prefixed sequel by means of the following declaration in an inner block:

```
sequel S'(...) prefix S(...)
begin ... end;
```

We call S' the prefixed sequel and S the prefix of S'. A prefixed sequel is a specialization of another sequel. The termination level of a prefixed sequel is the termination level of the prefix. The body of a prefixed sequel is a specific combination of the body of the prefix and the body of the prefixed sequel itself.

Figure 7 illustrates the prefixed sequel concept by repeating figure 5, now formulated using prefixed sequels. The differences between the two examples are the INNER statement in S and the prefix- and body-parts of S'.

```
declare (* block B1 *)
  ...
  sequel S(...)
  begin code1; INNER; code2; end;
  ...
begin
  ...
  declare (* block B2 *)
    ...
    sequel S'(...) prefix S(...)
    begin code3 end;
    ...
  begin
    ... S'(...); ...
  end;
  ...
end

Figure 7: The example in figure 5 implemented using prefixed sequels
```

The semantics of S are the same here as in figure 5 — in the case of S being invoked explicitly, the INNER statement is equivalent to the Skip statement. The semantics of S', on the other hand, are as follows: The termination level of S' is the termination level of the prefix (i.e. block B₁). When S' is invoked, control is immediately passed to the prefix S, which then executes code1. In this case, execution of the INNER statement means execution of the body of the prefixed sequel S'. Execution of the body of S' means execution of code3, after which control is returned to S, which in turn executes code2. If S terminates successfully, then blocks B₁ and B₂ will be terminated immediately. Note that the body of S' will not be executed if either no INNER statement is present in the body of S, or the INNER statement in the body of S is not executed.
In general, assume that we have sequels \( S_0, S_1, \ldots, S_i, \ldots, S_n \), where \( S_i \) is prefixed with \( S_{i-1} \) (as illustrated in figure 8). Then the termination level of \( S_i \) is the same as the termination level of \( S_0 \). When \( S_i \) is invoked, control is passed to \( S_0 \), which executes \( \text{pre}_0 \) and then passes control to \( S_1 \) (as a result of the INNER statement). \( S_1 \) then executes \( \text{pre}_1 \), etc., until control reaches \( S_i \), which executes \( \text{pre}_i \), then INNER as a Skip statement (since \( S_i \) is the invoked sequel), then executes \( \text{post}_i \), and then passes control to \( S_{i-1} \), which executes \( \text{post}_{i-1} \), and so on until control reaches \( S_0 \), which executes \( \text{post}_0 \). If this computation terminates successfully, then the blocks \( B_0, B_1, \ldots, B_i \) are terminated immediately. That is, the execution history of invoking \( S_i \) is:

```
exec(S_i) =>
    exec(pre_0); exec(pre_1); ...; exec(pre_i);
    exec(Skip);
    exec(post_i); ...; exec(post_1); exec(post_0);
    terminate(block B_0); terminate(block B_1); ...;
    terminate(block B_{i-1}); terminate(block B_i)
```

```
sequel S_0(...)
  declare decl_0
  begin pre_0; INNER; post_0 end;

sequel S_i(...) prefix S_{i-1}(...)
  declare decl_i
  begin pre_i; INNER; post_i end;

sequel S_n(...) prefix S_{n-1}(...)
  declare decl_n
  begin pre_n; INNER; post_n end;

Figure 8: Multi-layered prefixing of sequels
```

Additionally, within a prefixed sequel, the identifiers, etc., that are declared local to the prefix (and local to its prefix, etc.) are accessible. That is, \( S_i \) may use global identifiers, and all identifiers declared locally in \( S_0, S_1, \ldots, S_{i-1}, S_i \) (i.e. declared in \( \text{decl}_0, \text{decl}_1, \ldots, \text{decl}_i \)).
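The pre/INNER/post control flow just described can be mimicked by nesting one body per prefixing level. This Python sketch (illustrative names only, not the paper's notation) records the execution history of invoking the innermost sequel in a three-level chain, with each layer's `inner()` call playing the role of the INNER statement:

```python
trace = []

def layer(name, inner=lambda: None):
    """Return a body behaving as: pre_name; INNER; post_name."""
    def body():
        trace.append(f"pre_{name}")
        inner()                      # the INNER statement
        trace.append(f"post_{name}")
    return body

# S2 prefix S1 prefix S0: invoking S2 starts execution in S0's body,
# so the outermost prefix wraps inward.
s2 = layer("0", layer("1", layer("2")))
s2()
print(trace)  # ['pre_0', 'pre_1', 'pre_2', 'post_2', 'post_1', 'post_0']
```

The trace matches the execution history above (with the vacuous Skip omitted): all pre-parts run outermost-first, all post-parts innermost-first.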
Similarly to derived definition, it is possible to let a prefixed sequel bind some (or all) of the parameters defined in the prefix.

The control flow of prefixed sequels is complex, but this complexity is justified by four factors. First of all, all prefixed sequels follow the same pattern of control flow; it is therefore a matter of comprehending this uniform pattern of control flow initially, and then utilizing this pattern when approaching any actual prefixed sequel. Secondly, the problem of hierarchical exception handling as discussed above contains inherent complexity that must be reflected in any solution, whether it is explicitly programmed using one of the traditional methods, or by means of a specific language construct such as prefixed sequels. Thirdly, as pointed out by Dijkstra\(^7\), understanding of programs is eased if the static structure of the program directly reflects the structure of the corresponding computation. It is our claim that the static properties of prefixed sequels directly reflect the structure of the corresponding computation, and that the benefits of the proposal outweigh the initial difficulty in learning the concept. Moreover, implementation of prefixed sequels does not cause severe problems. Finally, the complexity of prefixed sequels (and virtual sequels, see later) is found to be comparable to the complexity of the exception handling mechanisms of languages like Ada.

Unfortunately, prefixed sequels are only a partial solution to the problem discussed in section 2. Indeed, prefixed sequels represent a hierarchical, co-operative exception handling mechanism that solves the problem of where to specify the opening of file \( X \) (namely in \( \text{pre}_0 \) — and \( X \) can then in turn be closed in \( \text{post}_0 \)). Moreover, as the scope of the identifiers declared in the prefix includes the prefixed sequel, declaring \( X \) local to \( S_0 \) would prevent unintended access to \( X \) (i.e.
accesses other than for debugging). The deficiency of prefixed sequels is that invoking \( S_i \) in block \( B_i \) will cause smooth termination of blocks \( B_i, B_{i-1}, \ldots, B_1, B_0 \), but invoking \( S_j \) (\( j < i \)) will cause abrupt termination of blocks \( B_i, B_{i-1}, \ldots, B_{j+1} \). Initially, this seems to be a problem of only minor concern, but if \( S_j \) is invoked within a routine declared in block \( B_j \) but invoked from block \( B_i \), the problem becomes apparent. Of course, returning to a dynamic approach would give an easy solution, but would abandon the merits of the static approach. We therefore extend the static approach (slightly) to cope with the problem — the extension is termed virtual sequels. For an introduction to virtual binding, see appendix B.

### 3.2 Virtual Sequels

A virtual sequel is very similar to an ordinary sequel or a prefixed sequel, and can be declared in one of the following ways:

```
sequel S(...) virtual
begin ... INNER ... end;

sequel S(...) virtual prefix S'(...)
begin ... INNER ... end;
```

The first declaration declares a virtual sequel by giving its default body (handler). (We refer to this first declaration as the initial virtual binding.) The second declaration combines the prefixing and virtual concepts — that is, it declares a virtual sequel by giving its default body (handler) as a prefixed sequel. A virtual sequel allows inner blocks to augment its body (handler) in such a way that all invocations of the sequel will invoke the augmented handler as long as the inner block is active. The termination level of a virtual sequel is the termination level of the initial virtual binding. The inner blocks may issue a further binding of the virtual sequel by means of the following declaration:

```
sequel S(...) virtual bind
begin ... INNER ... end;
```

The example in figure 9 illustrates the virtual sequel concept.

```
declare (* block B1 *)
  ...
  sequel S(...) virtual
  begin ... INNER ... end;
begin
  ...
  declare (* block B2 *)
    ...
    sequel S(...) virtual bind
    begin ... INNER ... end;
  begin
    ... S(...) ...
  end;
  ...
end

Figure 9: Virtual binding of sequels
```

In block \( B_1 \), \( S \) is declared virtual, and in block \( B_2 \) a further binding of \( S \) is done. A virtual sequel is a sequel that allows inner blocks to augment its handler. In many respects, the semantics of sequel \( S \) in block \( B_2 \) are the same as those of sequel \( S' \) in figure 7 — that is, sequel \( S \) in block \( B_2 \) can be considered as prefixed with sequel \( S \) in block \( B_1 \). The difference between the semantics of \( S \) in block \( B_2 \) in figure 9 and \( S' \) in figure 7 is that in figure 9, invoking \( S \) in block \( B_2 \) (either directly as in figure 9, or indirectly through a routine declared in block \( B_1 \)) will always lead to smooth termination of block \( B_2 \).

In general, if sequel \( S \) is declared virtual in block \( B_0 \), then any inner block is allowed to augment the sequel by a further binding (as \( S \) in block \( B_2 \) in figure 9). The initial virtual binding of a virtual sequel defines the termination level and, moreover, a partial handler that is concerned with the clean-up actions in that block. Furthermore, inner blocks are allowed to augment the handler to ensure local clean-up actions in those blocks.
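The augment-only discipline of virtual sequels can be sketched roughly in Python (invented names; this sketch keeps only the pre/post ordering of the bindings, not the full INNER nesting into a prefixed sequel): further bindings may wrap the handler, but can never replace or nullify it.

```python
class VirtualSequel:
    """A handler that inner blocks may augment, but never replace."""
    def __init__(self, pre, post):
        self.layers = [(pre, post)]          # the initial virtual binding
    def bind(self, pre, post):
        self.layers.append((pre, post))      # further binding: augment only
    def invoke(self):
        out = []
        for pre, _ in self.layers:           # pre-parts, outermost binding first
            out.append(pre)
        for _, post in reversed(self.layers):  # post-parts, innermost first
            out.append(post)
        return out

# Initial binding (outer block) opens the debugging file; an inner block
# augments the handler with its local clean-up.
s = VirtualSequel("open X", "close X")
s.bind("log entry B2", "flush B2")
print(s.invoke())   # ['open X', 'log entry B2', 'flush B2', 'close X']
```

Note how the outer binding's pre-part runs first and its post-part last, so the file is open for the whole of the inner block's clean-up — the property the file-X example in section 2 required.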
In some respects, virtual sequels resemble dynamic exception handling, but it is very important to note that when using virtual sequels it is impossible to nullify previous handlers — it is only possible to augment the previous handlers to take care of local clean-up. Even the termination level is static. We find that virtual sequels are a far more structured tool for co-operative exception handling than the dynamic approaches to exception handling discussed in section 3.5. The advantages of virtual sequels are that the static structure of the program reflects the desired computation directly, and moreover that the co-operation among the handlers during the smooth termination process is far more structured than can be obtained using any of the dynamic approaches.

### 3.3 Default Exception Handling

Default exception handling covers two distinct notions: Default Exception Handler and Default Smooth Termination (see appendix C). In several proposals these notions are dealt with by means of specialized constructs such as default handlers in Taxis, and Others-clauses in Clu and Ada. In the static approach it is always the case that an exception has a handler (namely the body of the sequel), and a notion of default handlers is therefore superfluous. The notion of virtual sequels can, though, be thought of as default binding of a handler in case no further bindings are defined.

Default smooth termination cannot immediately be dealt with within the static approach. However, virtual binding can be used to simulate a similar behaviour, as illustrated by the example in figure 10.

```
declare (* block B1 *)
   sequel Default virtual begin code1; INNER; code2 end
begin
   ...
   declare (* block B2 *)
      sequel S prefix Default begin code3; INNER; code4 end
   begin
      ...
      declare (* block B3 *)
         sequel Default virtual bind begin code5; INNER; code6 end
         ...
         sequel S' prefix S begin code7; INNER; code8 end
      begin
         ...
         declare (* block B4 *)
            sequel Default virtual bind begin code9; INNER; code10 end
         begin
            ...
            S'
            ...
         end
         ...
      end
```

Figure 10: Simulation of Default Smooth Termination

Let us examine what happens when \(S'\) is invoked in block \(B_4\). Since \(S'\) is prefixed with \(S\), which in turn is prefixed with Default, and Default is virtual with further bindings in blocks \(B_3\) and \(B_4\), the following execution path will be followed:

\[ \text{code1; code5; code9; code3; code7; Skip; code8; code4; code10; code6; code2; terminate}(B_1) \]

thereby terminating blocks \(B_2\), \(B_3\) and \(B_4\) as well. This usage of virtual prefixing results in a two-phased exception handling technique. The first phase is the default exception handling of all the blocks in which the virtual prefix is further bound (e.g. code1; code5; code9 and code10; code6; code2 in the above execution path). The second phase is exception handling of the particular exception occurrence (e.g. code3; code7 and code8; code4 in the above execution path).

However, the above usage of virtual prefixing is not the ultimate solution to default smooth termination. First of all, if \(S\) had not been prefixed with Default, \(B_4\) would have been terminated abruptly. That is, it is not possible to ensure that default smooth termination is always applied to a specific block without examining all sequels that might be invoked in inner blocks. That is, default smooth termination will not be a property of the block itself, but a property of some of the sequels that might be invoked in the block, or in inner blocks. Secondly, as a consequence of this simulation, the termination level of \(S'\) will be the same as the termination level of Default. That is, simulating default smooth termination as above will interact with the termination level of \(S'\). However, if the initial virtual binding of Default were at the same level as \(S\), there would not be any problems with this interaction.
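The execution path above can be reproduced with a short Python sketch (illustrative only; the layer encoding is ours). Each handler level contributes its initial actions before INNER and its final actions after it; the innermost INNER acts as Skip:

```python
# Illustrative reconstruction of figure 10's execution path. Each layer is a
# pair (initial-actions, final-actions); INNER passes control inward.

def prefix(layers):
    """Compose layers outermost-first around an empty innermost INNER (Skip)."""
    trace = []

    def run(i):
        if i < len(layers):
            pre, post = layers[i]
            trace.append(pre)
            run(i + 1)                # INNER
            trace.append(post)
    run(0)
    return trace

# Effective chain when S' is invoked in B4: the virtual Default (initial
# binding in B1, further bound in B3 and B4) prefixes S, which prefixes S'.
chain = [("code1", "code2"),   # Default, initial virtual binding (B1)
         ("code5", "code6"),   # further binding of Default (B3)
         ("code9", "code10"),  # further binding of Default (B4)
         ("code3", "code4"),   # S prefix Default (B2)
         ("code7", "code8")]   # S' prefix S (B3)

print("; ".join(prefix(chain)))
```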
This, however, leads to a programming style in which any declaration of a sequel is accompanied by an initial virtual binding of a default sequel to be used as a virtual prefix of the original sequel. That is, all sequels will be declared as virtually prefixed. This programming style is for most practical usages comparable to declaring the original sequel by means of an initial virtual binding. In fact, using virtual sequels is a far simpler mechanism for default smooth termination than the above-mentioned method and should be preferred. There is, however, one drawback to using virtual sequels as the basis for default smooth termination: default smooth termination is still a property of the individual sequels and not a property of the individual blocks. In fact, virtual sequels do not support default smooth termination as defined in appendix C, but on the other hand, virtual sequels alleviate the need for direct support of default smooth termination. In order to support default smooth termination within the static approach, we have to introduce the concept of default sequels. A default sequel is declared in the following way:

\[ \text{sequel } S(\ldots) \text{ default begin } \ldots \text{ INNER } \ldots \text{ end} \]

If a default sequel is invoked explicitly, its semantics are the same as if default had not been specified (that is, exactly as the equivalent simple sequel). However, a default sequel can also be invoked implicitly as part of an exception handling wave. Implicit invocation will take place if the block in which the default sequel is declared is terminated abruptly. When a default sequel implicitly becomes part of an exception handling wave, it is inserted in the prefix-chain of the explicitly invoked sequel. The position of the default sequel in the prefix-chain is determined in the following way: Let the prefix-chain of the explicitly invoked sequel \(S\) be \(E_0, E_1, \ldots, E_n\) (i.e. \(E_i\) is prefixed with \(E_{i-1}\) and \(E_n = S\)).
Let \(E_0, E_1, \ldots, E_n\) be declared in blocks $B_0, B_1, \ldots, B_n$. Let the default sequel $D$ be declared in block $B$. Then $B_j < B < B_{j+1}$ for some $j$ ($0 \leq j \leq n$), and therefore $D$ is implicitly inserted in the prefix-chain: $E_0, \ldots, E_j, D, E_{j+1}, \ldots, E_n$. If $S$ is non-virtual, this is read: $E_0, D$. Of course, all default sequels that are involved in the exception handling wave are implicitly inserted as above, and their relative positions in the chain are defined by the nesting of their defining blocks. That is, the resulting prefix-chain before the exception handling is initiated will be:

$$E_0, \ldots, E_{j_1}, D_1, E_{j_1+1}, \ldots, E_{j_2}, D_2, D_3, \ldots, D_k, E_{j_2+1}, \ldots, E_n, D_p, \ldots, D_m$$

That is, all default sequels corresponding to blocks that are abruptly terminated are inserted in the prefix-chain according to the nesting of their defining blocks. As can be seen from the above, the static approach needs specialized constructs for default smooth termination — just as Clu, Ada and others need Others-clauses to deal with it. This seems to be an inherent property of default smooth termination, and we should therefore not be surprised that specialized constructs appear in the static approach too.

### 3.4 Hierarchical, Co-operative Exception Handling — An Example

In this section we will present yet another example of the usage of the proposed language mechanisms for hierarchical, co-operative exception handling. Unfortunately, such examples are extensive by the nature of the domain, but it is our hope that the following example will prove valuable. Let us consider the following scenario. In some part of a program a resource is used. The resource is controlled by the well-known request-release scheme. We further assume that the resource is utilized through some utility operations (not important here), and in addition the resource has an Undo operation — undo as much as possible of the work done since the last request.
During the utilization of the resource, the program engages in a specific communication pattern with some third party. We assume that this communication pattern is described in a manner similar to scripts. We assume that the script is controlled by the following operations:

- *Engage* is used by an object to indicate that it is willing to engage in the script;
- *Disengage* is used by an object to indicate that it considers its obligations in the script as fulfilled;
- *ScriptError* is used by an object to indicate that it is not able to fulfill the obligations of the script (e.g. due to some exception occurrence internally in the object);

and moreover, there may be additional script control operations (not important here). We assume finally that if the script cannot be fulfilled, then the resource must undo up to the last request. (This example might seem artificial, but think of the resource as a database system and the script as describing a particular database transaction.)

The problem can now be formulated as follows: How do we handle an exception occurrence during engagement in the script in such a way that, first of all, the script is informed properly and, secondly, the resource is informed to undo and is then released? Using the language mechanisms proposed in this paper, the problem can be formulated as outlined in figure 11. Note first of all the clean separation of exception handling with respect to the resource and the script, respectively. It should also be noted that, although textually separated, the exception handling of the script is nested within the exception handling of the resource.

```
declare (* block B1 *)
   sequel ResourceError
      begin Resource!Undo; INNER; Resource!Release end
begin
   Resource!Request;
   ...
   declare (* block B2 *)
      sequel LocalError prefix ResourceError
         begin Script!ScriptError; INNER; Script!DisEngage end
   begin
      Script!Engage;
      ...
      LocalError; (* Exception occurrence *)
      ...
      Script!DisEngage
   end
   ...
   Resource!Release
end
```

Figure 11: Hierarchical Exception Handling — An Example

Although this example is programmed using prefixing, it could have been programmed using virtual binding (even with additional benefit). If we had declared *ResourceError* as virtual in block $B_1$ and further bound it in block $B_2$, instead of declaring *LocalError* in block $B_2$, then we would have had the additional benefit that even if *ResourceError* were invoked indirectly by an operation declared in $B_1$, exception handling of the script would be performed properly.

### 3.5 Mechanisms towards Hierarchical Exception Handling in Other Proposals

In this section we will give a brief overview of some of the other proposals for exception handling mechanisms and relate them to our proposal for a hierarchical, co-operative exception handling mechanism.

**AML/X** The hierarchical exception handler binding mechanism of AML/X\textsuperscript{20} can be used to specify a hierarchical structure of exceptions very similar to the one that can be specified using derived definition of sequels. The main differences between the two proposals are that the AML/X proposal is highly dynamic (e.g. it allows computable association of handlers) and that substitution of parameters down the hierarchy is not possible in AML/X. Furthermore, the AML/X proposal is a very broad proposal that is intended to be useful for many very different ways of handling exceptions. The hierarchical part of the AML/X proposal is essentially an exception renaming mechanism similar to the possibility of renaming exceptions in Ada\textsuperscript{18}. That is, the hierarchical structure is not suitable for the specification of smooth termination as discussed above.
**Mesa** The exception handling mechanism in Mesa\textsuperscript{19} contains the so-called *Unwind*-mechanism, which to some degree alleviates the problem of hierarchical exception handling as discussed in section 4. Intuitively, the *Unwind*-mechanism behaves as if the exception *Unwind* is raised within each intermediate block just before that block terminates. The termination level of *Unwind* is always the block itself. This means that if each intermediate block specifies a handler for the exception *Unwind*, local clean-up actions can be specified in that handler. The remaining problem is that the clean-up actions in this way are always the same, irrespective of which exception originally caused the termination process to be initiated. In this respect the *Unwind*-mechanism is similar to default sequels. The Multics System\textsuperscript{22} has extended the PL/1 exception handling mechanism to include an *Unwind*-mechanism similar to that of Mesa.

**Clu, Ada, and others** Most other proposals alleviate the problem of hierarchical exception handling by allowing handlers to reraise exceptions and by using a particular programming style. The method is as follows: Each intermediate block defines its own handler for the exception if local clean-up is needed. This handler contains the local clean-up actions and, immediately before the terminating end of the handler, the propagated exception is reraised (e.g. in Clu and Ada). This method is possible because of the dynamic or computable association of handlers. Another way to alleviate the problem of hierarchical exception handling is by means of the so-called *Others*-clause (in languages like Ada and Clu). An *Others*-clause in an exception handler specifies that the handler is ready to handle any exception that is propagated to it. The body of the *Others*-clause will be executed if an exception is propagated to the handler and not otherwise handled by that handler.
That is, the Others-clause specifies default exception handling. The effect of Unwind can now be implemented by reraising the propagated exception at the end of the body that is attached to the Others-clause.

**The ANSI/IEEE proposal for extending Pascal** The joint ANSI/IEEE proposal for the introduction of exception handling mechanisms into Pascal (by the joint ANSI X3J9 and IEEE P770 standards committee)\textsuperscript{27} contains a hierarchical structuring mechanism for exceptions that is useful both for default exception handling and for renaming of exceptions, as in the AML/X proposal. It is important to note that the hierarchical structure is a structure of the names of exceptions and not a hierarchical structure of the exception handlers (as in the proposals in this paper). Therefore, although the ANSI/IEEE proposal is an interesting proposal, it does not support hierarchical exception handling as discussed in section 3.

**Taxis** The exception handling mechanism of Taxis\textsuperscript{21} contains ideas similar to those presented in this paper, but there are some basic differences. The exception handling model is single-level termination with dynamic association of handlers. This implies that smooth termination as discussed above does not apply to the Taxis proposal. Moreover, all exceptions are raised implicitly when either pre- or post-conditions of transactions are violated, or the return-value of the transaction does not conform to the specification. The user can specify which exception should be raised when a specific pre- or post-condition is violated, whereas a pre-defined exception is raised when the return-value is nonconforming. Default exception handling is done by associating a default handler with the exception at the time of definition of the exception. (That is, static association of the default handler.) The hierarchical part of the proposal is divided into two parts.
Both exceptions and handlers are organized in a specialization hierarchy, but the two hierarchies are independent in the sense that any handler may be associated with any exception (of course, the parameters must match). The hierarchical structure of the exceptions is used in the same way as the hierarchical structure of exception names in the ANSI/IEEE proposal. That is, a handler associated with a specific exception is capable of handling occurrences of that exception and any specializations hereof. The hierarchical structure of the handlers is used to structure the handlers in the same way as transactions (in fact, handlers are transactions).

### 3.6 Final Remarks

We have presented a hierarchical, co-operative exception handling mechanism as an extension to the static approach to exception handling. The proposal is divided into three parts: prefixed sequels, which make it possible to specify smooth termination; virtual sequels, which make it possible to augment a handler in inner blocks; and finally default sequels, which make default exception handling possible within the static approach and thereby extend the possibilities of smooth termination.

Implementation of the static approach has not yet been attempted. But since the underlying principles of prefixing and virtual binding are well-known and implemented elsewhere (e.g. Simula67 and Beta), we do not expect that implementation of the proposal will give rise to significant difficulties. Implementing sequels is similar to implementing procedures with a non-local goto (to the end of the enclosing block) immediately before the terminating end.

This proposal is only a proposal for a new language construct, not an entire language proposal. We find that the proposed language constructs can be incorporated into any language in the Algol family (e.g. Algol60, Pascal, Ada) — that is, into any statically scoped language.
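The implementation note above — a sequel as a procedure followed by a non-local goto to the end of its declaring block — can be sketched in Python, simulating the non-local goto with an exception that unwinds to the declaring block (a hedged illustration; the helper names are ours, not part of the proposal):

```python
# Illustrative simulation: a sequel runs its handler body, then performs a
# "non-local goto" to the end of its declaring block, here encoded as a
# unique unwinding exception tagged with the declaring block's identity.

class _Terminate(Exception):
    def __init__(self, block_id):
        self.block_id = block_id

def make_sequel(block_id, body):
    """Declare a sequel in block `block_id`: run `body`, then unwind
    to the end of that block."""
    def invoke():
        body()
        raise _Terminate(block_id)    # non-local goto to end of block
    return invoke

trace = []

BLOCK = "B0"
try:                                  # entry of the declaring block
    S = make_sequel(BLOCK, lambda: trace.append("handler"))
    trace.append("before")
    S()                               # exception occurrence: invoke the sequel
    trace.append("unreachable")       # skipped: the block is terminated
except _Terminate as t:
    if t.block_id != BLOCK:
        raise                         # belongs to an outer block: keep unwinding
trace.append("after-block")

print(trace)
```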
The extra complexity introduced hereby is felt to be along the same lines as the extra complexity that is introduced by the exception handling mechanisms of, say, Ada. With that in mind, we find that prefixing and virtual binding are generally useful structuring mechanisms (as discussed in references 2 and 11). We would therefore prefer that this proposal were incorporated in a language in which prefixing and virtual binding were general structuring mechanisms. The proposal would in that way be part of a more homogeneous language design. Examples of languages of this sort are Galileo\textsuperscript{1}, Taxis\textsuperscript{21}, Simula67\textsuperscript{6} and Beta\textsuperscript{12}. (Note that Galileo and Taxis do not support virtual binding.)

Acknowledgements

The work reported here has benefited from many discussions with Kristine Stougaard Thomsen and Ole Lehmann Madsen. Thanks are also due to Brian H. Mayoh, Peter Mosses and the referees for reading earlier drafts of this paper and giving many valuable comments.

References

### A Introduction to Prefixing

The concept of prefixing originates from the programming language Simula67\textsuperscript{6}. It was further developed by J.G. Vaucher into a structuring mechanism for operations\textsuperscript{9}, and in the programming language Beta it is used as a general program structuring mechanism\textsuperscript{12}. We would like here to give a short general introduction to the concept of prefixing. The foundation of prefixing is the concept of some sort of descriptor* that may consist of a formal parameter part, a declarative part, and an action part. Examples are classes, procedures, functions and sequels. Let us consider the example in figure 12.
In order to make prefixing possible, we have introduced a special kind of action (called INNER). The semantics of INNER will be explained soon.

```
P = descriptor(FP_1, FP_2, ..., FP_n)   -- formal parameter part
       DP_1;
       DP_2;                            -- declarative part
       ...
       DP_m;
    begin
       S_1;
       ...
       S_{i-1};
       INNER;                           -- action part
       S_{i+1};
       ...
       S_t
    end
```

Figure 12: A descriptor

*For a further discussion of descriptors in programming languages, see reference 11.

Figure 13 shows how a new descriptor \(P'\) can be prefixed with \(P\).

```
P' = prefix P descriptor(FP'_1, ..., FP'_r)
        DP'_1;
        DP'_2;
        ...
        DP'_s;
     begin
        S'_1;
        ...
        S'_{j-1};
        INNER;
        S'_{j+1};
        ...
        S'_t
     end
```

Figure 13: A prefixed descriptor

The overall principle of prefixing is that the formal parameters of \(P'\) are the union of the formal parameters of \(P\) and those specified in \(P'\). The identifiers declared in \(P'\) are the union of those of \(P\) and those specified in the declarative part of \(P'\). Finally, the actions of \(P'\) are the actions of \(P\) with the action part of \(P'\) merged into the action part of \(P\) at the point of the INNER statement. Figure 14 illustrates the semantics of prefixing.

```
P'' = descriptor(FP_1, ..., FP_n, FP'_1, ..., FP'_r)   -- concatenation of
         DP_1;                                         -- formal param. parts
         ...
         DP_m;
         DP'_1;
         ...
         DP'_s;
      begin
         S_1;
         ...
         S_{i-1};
         S'_1;
         ...
         S'_{j-1};
         INNER;
         S'_{j+1};
         ...
         S'_t;
         S_{i+1};
         ...
         S_t
      end
```

Figure 14: \(P''\) illustrates the semantics of \(P'\) in figure 13 — except for the binding aspect of prefixing

However, it should be noted that \(P''\) is not literally the semantics of \(P'\), since the bindings in \(P\) are done in the environment of \(P\) only, whereas the bindings in \(P'\) are done in an environment similar to that of \(P''\). That is, \(P\) cannot use the bindings in \(P'\), but \(P'\) can use the bindings in \(P\).

The control pattern of prefixed procedures can be illustrated by figure 15. As an abstraction mechanism, the prefixed procedures \(S\), \(S'\) and \(S''\) express a three-layered system in which \(S'\) is a specialization of \(S\) and \(S''\) a further specialization of \(S'\). For a discussion of prefixing as a general abstraction mechanism, see references 2 and 11.

The execution of each prefixed procedure can be seen as a five-phased execution. When control enters the prefixed procedure, it is immediately passed to the prefix levels to let those execute their initial actions. When control is passed back from the prefix levels, the prefixed procedure itself executes some initial actions (e.g. initializing local data structures and bringing them into a consistent state). Then, when control reaches an INNER statement, it will be passed to the prefixed level (if any). When the prefixed levels have terminated their actions, control is passed back to the prefixed procedure, which then executes some finalization actions (e.g. local clean-up, collecting results from the prefixed levels, etc.). Finally, when the prefixed procedure has executed all of its actions, control will be passed to the prefix levels. This pattern of control can be pictured as in figure 15. Note that in the case of the prefixed level being non-existent, the INNER statement acts as a Skip/NoOp statement (see INNER in \(S''\) of figure 15).
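Assuming the control pattern described above, the three-layered chain can be sketched in Python (an illustration of ours; `S1` and `S2` stand in for \(S'\) and \(S''\)):

```python
# Illustrative trace of the five-phased control pattern for the chain
# S'' prefix S' prefix S. Each level contributes initial actions before
# INNER and finalization actions after it.

def descriptor(name, prefix=None):
    def run(trace, inner=lambda t: None):
        def self_part(t):
            t.append(f"{name}-initial")   # this level's initial actions
            inner(t)                      # INNER: pass control inward
            t.append(f"{name}-final")     # this level's finalization actions
        if prefix:
            prefix(trace, self_part)      # prefix level wraps this level
        else:
            self_part(trace)
        return trace
    return run

S  = descriptor("S")
S1 = descriptor("S'", prefix=S)           # S' prefix S
S2 = descriptor("S''", prefix=S1)         # S'' prefix S'

# Invoking S'' with no further prefixed level: its INNER acts as Skip.
print(S2([]))
```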
### B Introduction to Virtual Binding

The concept of virtual binding originates from the programming language Simula67\textsuperscript{6} and was further developed as part of the programming language Beta\textsuperscript{12}. In both languages a virtual binding can be introduced in a class definition, and further binding is only allowed in sub-class definitions of that class. We extend the concept of virtual binding slightly. First of all, we assume in this paper that virtual bindings can be introduced in ordinary blocks as well. Furthermore, we allow further bindings of virtual bindings in outer blocks to appear in inner blocks. That is, virtual bindings introduced in ordinary blocks are only allowed to be further bound in inner blocks. (In fact this is not an extension of the original virtual binding concept, but rather a consequence of considering blocks as anonymous program units and inner blocks as anonymously prefixed with the outer block.)

Let us discuss the block-structured system outlined in figure 16, and let us further assume that identifier $E$ is bound to some descriptor $D$ within block $S$. If $E$ is bound non-virtually within $S$, then all invocations of $E$ within $S$, $S'$, $S''$ and $S'''$ will result in execution of the action part of $D$. If, however, $E$ is bound to $D'$ within $S'$, invoking $E$ in $S$ will result in execution of the action part of $D$, whereas invocation of $E$ in $S'$, $S''$ or $S'''$ will result in execution of the action part of $D'$. If $E$ is bound to $D'$ in $S'$ and $D'$ is prefixed with $D$, then invocation of $E$ in $S$ will result in execution of the action part of $D$ only, whereas invocation of $E$ in $S'$, $S''$ or $S'''$ will result in execution of the action parts of both $D$ and $D'$ (as explained in the previous section on prefixing).

Virtual bindings behave differently. Let us assume that $E$ is virtually bound to $D$ in $S$.
The virtual binding of $E$ means that if $E$ is further bound to $D'$ in, say, $S'$, then first of all, $D'$ must be prefixed with $D$, and secondly, the further binding will have effect on invocations of $E$ in $S$. That is, if from within $S'$, $S''$ or $S'''$, $E$ is invoked in $S$, the action parts of both $D'$ and $D$ will be executed, just as if $E$ were invoked directly in $S'$, $S''$ or $S'''$. This is in contrast to a non-virtual binding of $E$ to $D'$ in $S'$, in which case invocations of $E$ in $S$ from within $S'$, $S''$ or $S'''$ would result in execution of the action part of $D$ only, and invocations of $E$ directly in $S'$, $S''$ or $S'''$ would result in execution of the action parts of both $D'$ and $D$. $E$ may be further bound to $D''$ in $S''$, and to $D'''$ in $S'''$. Since $S''$ and $S'''$ are parallel blocks, it is only necessary to demand that $D''$ and $D'''$ are both prefixed with $D'$. That is, $D''$ need not be prefixed with $D'''$, or vice versa.

Note that virtual binding is not the same as dynamic binding. A very important aspect of virtual binding is that it is impossible to nullify the binding of $E$ in $S$ and replace it with another, totally different binding. (As a remark, in Simula67 it is possible to nullify the initial virtual binding, which makes virtual binding in Simula67 resemble dynamic binding.) Virtual bindings allow rebindings of the original binding — rebindings that augment the original binding by prefixing. We say that the original virtual binding has been specialized. As indicated above, this presentation of virtual binding is mostly inspired by the definition of virtual binding used in the Beta programming language. In reference 11, a detailed discussion of virtual binding is given.

### C A Terminology for Exception Handling

The purpose of this section is to describe a taxonomy for exception handling in order to establish a unified terminology within the domain of exception handling.
The domain of exception handling is characterized by the definition of what to classify as exception occurrences.* We define an exception occurrence as being "a computational state that requires an extraordinary computation". Exceptions are associated with classes of exception occurrences. We say that an exception is raised if the corresponding exception occurrence has been reached. We say that an exception is handled when the extraordinary computation is initiated. An exception handler is the specification of an extraordinary computation. If an exception is raised but not handled at the same level of the program, we say that the exception is propagated to an outer level. If the exception is handled, and that handler raises the same exception, we say that the exception is reraised. Finally, we say that a specific exception is associated with a specific handler when raising the exception results in invocation of the handler.

*Note that we assume static scoping.

The taxonomy for exception handling is divided into seven categories, to be discussed in the following. The most important category with respect to this paper is the identification of the two different termination processes: abrupt termination and smooth termination.

C.1 Exception Occurrences

The overall spirit of an exception handling mechanism can be identified by considering the way in which the proposal characterizes exception occurrences. In most cases this classification will have a major impact on the proposal. In PL/I, exception occurrences are "computational states where hardware limitations have been exceeded". In a proposal for exception handling in C, exception occurrences are "computational states where something unusual has happened". In reference 5, exception occurrences are "invocations of an operation in a computational state that is outside the standard domain of the operation".
Finally, in the static approach, exception occurrences are "computational states where an alternative computation must be initiated".

C.2 Exception Handling Model

Perhaps the most important aspect of exception handling is the exception handling model. The exception handling model is concerned with the control pattern that will be the result of raising an exception. There are three major models: the resumption model, the signalling model, and the termination model.

C.2.1 The Resumption Model

In the resumption model the control pattern in the case of an exception occurrence is very similar to an interrupt. That is, raising an exception will result in suspension of the present computation, invocation and execution of the handler, and then resumption of the suspended computation. Examples are Goodenough's Notify exceptions and PL/I's conditions.

C.2.2 The Signalling Model

The control pattern of the signalling model is similar to that of the resumption model. The difference is that the handler might specify that resumption should be abandoned and termination initiated; that is, specify a change to the termination model. The Signal exceptions in Goodenough's proposal are examples of exception handling following the signalling model.

C.2.3 The Termination Model

In the termination model raising an exception will result in invocation of a handler and termination of at least the syntactic entity in which the exception is raised. The control pattern of the termination model can be characterized by the termination level and the termination process:

- **Termination Level** The termination level may be either single-level or multi-level:
  1. **Single-level** termination, in which only the syntactic entity in which the exception is raised is terminated (e.g. Clu\textsuperscript{16}).
  2. **Multi-level** termination, in which termination of several levels of syntactical units is possible (e.g. Ada). In multi-level termination, the termination level must be defined somehow.
There are at least two possibilities: (a) to the level of the declaration of the exception (e.g. sequels\textsuperscript{10,26}); (b) to the level of the handler (e.g. Ada\textsuperscript{18}).

- **Termination Process** In terms of figure 6: Let us assume that an exception is raised at level \( i \) and the termination level is level \( j \). The termination of levels \( i, i-1, \ldots, j+1, j \) can then follow two main patterns: abrupt termination or smooth termination.
  1. **Abrupt termination** By abrupt termination is meant that levels \( i, i-1, \ldots, j+1 \) are all terminated immediately, after which the handler is executed, and then level \( j \) is terminated (e.g. Ada).
  2. **Smooth termination** By smooth termination is meant that the levels \( i, i-1, \ldots, j \) are each given the opportunity to do some local clean-up actions before they are terminated. The role of the handler(s) in this case may differ from proposal to proposal. Prefixed sequels are an example of an exception handling mechanism that supports smooth termination directly. Languages like Clu and Ada contain mechanisms that make it possible to implement smooth termination to some degree.

C.3 Association of Handlers

There are three major approaches to handler association: computed association, dynamic association and static association.

1. **Computed association** Here the association is done by means of binding imperatives (e.g. the ON CONDITION statement in PL/I).
2. **Dynamic association** Here the handler is associated with the exception using dynamic binding of exception handlers to exception names. That is, the handler is found in the dynamic context of the raising statement (e.g. Ada, Clu and many others).
3. **Static association** Here the handler is associated with the exception using static binding. That is, the handler is found in the static context of the raising statement (e.g. sequels).

C.4 Handler Types

The handlers associated with an exception may be of several different types.
Some of the possibilities are:

- routines or routine-like constructs (e.g. sequels, AML/X)
- statements or blocks (e.g. Ada, Clu)
- expressions (e.g. Goodenough's proposal, AML/X)
- labels (e.g. AML/X)
- booleans (e.g. AML/X)

C.5 Parameterization of Exceptions and Handlers

The expressive power of the different proposals depends, moreover, on whether exceptions and their handlers may be parameterized, making it possible to communicate information to the handler when raising an exception.

C.6 Default Exception Handling

Default exception handling can follow two different paths:

1. **Default Exception Handler** By a default exception handler is meant a specific handler that is invoked if an exception is raised and not handled by another handler. The default handler might be a pre-defined handler (e.g. the implicit `except` statement attached to any routine body in a Clu program), or it might be user-definable (e.g. the default handler mechanism of Taxis).
2. **Default Smooth Termination** By default smooth termination is meant that a specific level may specify a handler to be invoked if the level is terminated as a consequence of an exception being propagated through the level and not otherwise handled in that level (e.g. default sequels, or the `others` clause in Clu and Ada).

C.7 Equivalence of User- and Language-defined Exceptions

Since we are examining language constructs for user-specified exception handling, it is important to observe whether user-defined exceptions and handlers are allowed, and whether these are treated in the same way as the language-defined exceptions.
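The contrast between abrupt and smooth termination can be made concrete in a modern language. The following Python sketch (our illustration, not code from any of the surveyed proposals) uses `try`/`finally` to give an intermediate level a chance to perform local clean-up while the exception propagates to the handler level, which is where termination stops under the termination model:

```python
class AppError(Exception):
    """A user-defined exception propagated through several levels."""

def level_3():
    # Level i: the exception is raised here.
    raise AppError("failure at the innermost level")

def level_2_smooth(log):
    # Intermediate level: smooth termination -- local clean-up runs
    # before this level is terminated. (Abrupt termination would simply
    # call level_3() with no finally clause.)
    try:
        level_3()
    finally:
        log.append("level 2 cleaned up")

def level_1(log):
    # Level j: the handler level; propagation terminates here.
    try:
        level_2_smooth(log)
    except AppError as exc:
        log.append(f"handled: {exc}")

log = []
level_1(log)
print(log)  # ['level 2 cleaned up', 'handled: failure at the innermost level']
```

As the survey notes for Clu and Ada, this only approximates smooth termination: each level must opt in explicitly, rather than the mechanism supporting it directly as prefixed sequels do.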
Verification of UML Model Elements Using B

Ninh Thuan Truong, Jeanine Souquières

HAL Id: hal-00097566, https://hal.archives-ouvertes.fr/hal-00097566
Submitted on 30 Oct 2007

Abstract

This paper describes the formal verification of UML model elements using B abstract machines. We study the UML metamodel of class diagrams, collaboration diagrams and statechart diagrams as well as their well-formedness rules. Each element of a UML model, which is an instance of a metaclass, is transformed into a B abstract machine. The relationship between the abstract machines is organised following the abstract syntax of the UML class diagram of the UML metamodel. B specifications are proved by a B prover which automatically generates proof obligations, allowing UML model elements to be verified. The correctness of the UML model elements is ensured by the well-formedness rules, which are transformed into B invariants. We illustrate our approach by a simple case study, the printing system.

Keywords: B method, UML models, UML metamodel, formal verification, well-formedness rules

1 Introduction

The application of formal methods [9] allows the rigorous definition and analysis of the functionality and the behaviour of a system.
Starting from rigorous specifications, formal methods can be used for the derivation of test cases, or for a validation and verification technique aimed at proving that the specification satisfies the requirements. The Unified Modelling Language (UML) [4] is a widely accepted modelling language that can be used to visualise, specify, construct and document the artifacts of a software system. It has been accepted as a standard object-oriented modelling language by the OMG [14], and is becoming the dominant modelling language in industry. The syntax and semantics of the notations provided in UML are defined in terms of its metamodel. In the UML metamodel [14], modelling constructs are defined using three distinct views: an abstract syntax in UML class diagrams, static semantics ensuring that all UML constructs are statically well formed, expressed in OCL [20], and dynamic semantics specifying the meaning of the constructs, mainly in English. The derivation from UML specifications into the B formal method [1] is considered an appropriate way to jointly use UML and B in practical, unified and rigorous software development. The transformation from UML diagrams to B specifications has been considered in [8, 13, 7]; these approaches provide a multi-view framework for the specification of a system but do not allow the complete verification of the properties of UML semantics. The transformation of the UML metamodel to formal methods (Object-Z, B) has been considered by K. Soon-Kyeong and C. David in [10] and by R. Laleau and F. Polack in [11]. However, these approaches only consider the specification process and do not allow the checking of UML model elements with support tools. Cavara et al. [5] build a framework to transform the UML metamodel and UML models into ASM (Abstract State Machines) [2], but the verification of this work is not presented. Concerning the validation of UML models and OCL constraints, M. Richter et al. [15] present an approach based on animation.
They developed a tool called USE (UML-based Specification Environment), which is an animator for simulating UML models together with an OCL interpreter for constraint checking. The purpose of our work is to use B support tools to check UML model elements. We transform the meta-classes of a UML specification, the objects of these classes (which are elements of UML models) and the well-formedness rules of the UML semantics proposed by the OMG [14] into B abstract machines. The corresponding B specification is then proved by support tools which automatically generate proof obligations. The transformation of the Core package and the verification of class diagrams is presented in [18]. The transformation of behavioural diagrams is presented in [19]. This paper integrates these derivations in a common approach to verify UML model elements of different diagrams.

The paper is organised as follows. Section 2 provides the basic concepts of our approach. Section 3 presents a case study to illustrate the transformation and verification. Section 4 presents our principal contribution; we compare the differences between the well-formedness rules of the packages and propose a general principle for transforming the UML metamodel into B. In the next three sections, we present the transformation into B of the metamodels of class diagrams, collaboration diagrams and statechart diagrams. We illustrate the transformation of well-formedness rules into B invariants and the proof of UML model elements using B support tools. Section 8 ends with some concluding remarks.

2 Background

In this section, we introduce the B method, the UML metamodel and its relation with UML models.

2.1 The B method

B [1, 16] is a formal software development method, originally developed by J. R. Abrial. The B notation is based on set theory, the language of generalised substitutions and first order logic.
It is one of the few formal methods which has robust, commercially available support tools for the entire development lifecycle, from specification through to code generation [17]. Specifications are composed of abstract machines, similar to modules or classes. Each abstract machine consists of a set of variables, invariant properties relating those variables, and operations. Variables of B specifications are strictly typed. The type of a variable is not given explicitly, as in the majority of programming languages, but is stated in the INVARIANT clause. The state of the system, i.e. the set of variable values, is only modifiable by operations, which must preserve its invariants. With the refinement mechanism, an abstract specification can evolve into a more concrete specification by adding new data or operations, allowing the behaviour of the system to be described in more detail.

2.2 UML metamodel and its relation with UML models

The UML metamodel [14] defines the complete semantics for representing object models using UML. It is defined in a metacircular manner, using a subset of UML notations and semantics to specify itself; in this way the UML metamodel bootstraps itself, similarly to how a compiler is used to compile itself. The UML metamodel is one layer of the four-layer metamodelling architecture: meta-metamodel, metamodel, model and user objects. Figure 1 shows an example of the relation between the UML metamodel and the UML model of the printing system. A model is an instance of the UML metamodel; each element of the UML model is an instance of a metaclass of the metamodel. For example, the notifyStatus() operation of the Computer class is an instance of the Operation metaclass of the UML metamodel.
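The abstract-machine discipline described above — typed state variables, an invariant over them, and operations that must preserve it — can be mimicked in ordinary code. The following Python sketch is our own illustration of the idea, not B syntax; in B, the prover discharges the invariant statically, whereas here it is merely checked at run time:

```python
class CounterMachine:
    """Toy analogue of a B abstract machine: VARIABLES, an INVARIANT
    relating them, and OPERATIONS that must re-establish it."""

    def __init__(self):
        self.counter = 0            # VARIABLES + INITIALISATION
        self._check_invariant()     # initialisation must establish the invariant

    def _check_invariant(self):
        # INVARIANT: counter is a natural number (its type lives here,
        # not in an explicit declaration, as in B).
        assert isinstance(self.counter, int) and self.counter >= 0

    def increment(self):            # an OPERATION
        self.counter += 1
        self._check_invariant()     # every operation must preserve the invariant

m = CounterMachine()
m.increment()
print(m.counter)  # 1
```

An operation that broke the invariant (say, setting `counter` to -1) would fail the assertion here; in B, the corresponding proof obligation would simply be undischargeable.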
Figure 1: Example of relation between UML metamodel and UML model

The metamodel is described in a semi-formal manner using three views, which are helpful to understand the UML semantics:

- **Abstract Syntax.** UML class diagrams are used to present the UML metamodel, its concepts (meta-classes), relationships and constraints.
- **Well-Formedness Rules.** A set of rules and constraints for UML model elements is defined. Rules are expressed in English prose and in the Object Constraint Language (OCL). OCL [20] is a specification language that uses logic for specifying invariant properties of systems.
- **Semantics.** The semantics of model usage is described in English prose.

Since the metamodel layer is relatively complex, it is decomposed into logical packages. The meta-classes within a package show strong cohesion with each other and loose coupling with the meta-classes in other packages. The metamodel is decomposed into many packages. We focus on three packages of the metamodel: the Core package, the Collaboration package and the State Machine package, which are respectively the metamodels of class diagrams, collaboration diagrams and statechart diagrams.

3 A case study

To illustrate our approach, we present the specification of the printing system, a system to print a file from a computer. This system works as follows: when a user gives a command to print a file, this command is transferred to the PrintServer. If the printer is busy, the file to print is stored in a queue; otherwise it is printed and the PrintServer notifies the status of the printing process to the computer. The class diagram of this system is presented in the UML model part of Figure 1. In this diagram, we only describe the elements and properties necessary to illustrate the transformation. The collaboration diagram is shown in the UML model part of Figure 2.
Figure 2: Collaboration diagram of the printing system and its metamodel

As for the class diagram, there is a package of the metamodel layer, called the Collaboration package, which is used to define collaboration diagram elements. Each element of a collaboration diagram is an instance of a metaclass in the Collaboration package. For instance, the message 1.3 notifyStatus of the collaboration diagram of the printing system is an instance of the Message metaclass of the UML metamodel (see Figure 2).

4 Transformation of the UML metamodel into B

Many approaches to the transformation of UML diagrams into a B specification have been proposed. In these approaches, the transformation of an attribute of a UML class to a variable of a B abstract machine is usually presented as follows [13]:

\[
\begin{array}{l}
\textbf{MACHINE} \\
\textbf{SETS}\ \mathit{OBJECTS} \\
\textbf{CONSTANTS}\ \mathit{CLASS} \\
\textbf{PROPERTIES}\ \mathit{CLASS} \subseteq \mathit{OBJECTS} \\
\textbf{VARIABLES}\ \mathit{attrib} \\
\textbf{INVARIANT}\ \mathit{attrib} \in \mathit{CLASS} \rightarrow \mathrm{TYPE}(\mathit{attrib}) \\
\textbf{END}
\end{array}
\]

With this transformation, it is easy to express the attributes of classes using the binary relation constructs of set theory. However, this transformation only applies when object identifiers are generated during the execution of the program (in this case, if an object is created, its identifier is assigned a random integer number, because a deferred set in B is defined as a non-empty subset of the integers, \( \mathit{OBJECTS} \in \mathbb{P}(\mathit{INT}) \)). With the metamodel, it is to be noticed that the UML abstract syntax is mapped to a set of MOF (Meta Object Facility) packages called the UML Interchange Metamodel. These packages are available as an XML document which is generated from the UML Interchange Metamodel following the rules of the XML Metadata Interchange (XMI) [14]. It is a standard allowing UML models to be exchanged between UML tool editors (Rational Rose, ArgoUML, ...) as a stream or as files.
For convenience, we work with the XMI structure and the attribute values it contains. This means that the object identifiers of meta-classes are determined by the identifiers in the XMI code generated by UML tool editors. Hence we can simply define the type of variables as follows:

\[ \mathit{attrib} \in \mathit{CLASS} \rightarrow \mathrm{TYPE}(\mathit{attrib}) \]

where \( S \rightarrow T \) denotes the set of all partial functions from \( S \) to \( T \), \( \mathit{CLASS} \) is a set of object identifiers and \( \mathrm{TYPE}(\mathit{attrib}) \) is the type of the attribute transformed into B. One use of this expression is in verification. In the verification process of the B method, support tools automatically generate proof obligations for proving predicates. These proof obligations always verify the correctness of the variable values of the substitutions in the operations against the invariants of the abstract machines. When the predicates in the INVARIANT clause contain existential and/or universal quantifiers, the variables of the abstract machines have to contain all potential values so that the support tool can perform the comparison and prove the predicates. With this definition of the type of variables, we introduce a new variable $attr$, typed like the corresponding variables of the individual objects, in order to merge all the values of the variables of the objects' machines:

$$attr \in CLASS \rightarrow TYPE(attr)$$
$$attr := attr_1 \cup attr_2 \cup \ldots \cup attr_n$$

The value of the variable $attr$ is a set of pairs mapping object identifiers to the attribute values of all objects of the class:

$$attr = \{object_1 \mapsto value_1, object_2 \mapsto value_2, \ldots, object_n \mapsto value_n\}$$

With the $attr$ set, considered as a variable, we can express the well-formedness rules as invariants of abstract machines.
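The merging of per-object variables into $attr$ can be sketched concretely. In the following hypothetical Python fragment (our illustration; names are invented), each B partial function is represented as a dict whose keys are object identifiers; merging is a union of maps with disjoint domains, so the result is still a partial function:

```python
# Each object's machine holds a one-entry map: object identifier -> value,
# the Python analogue of a singleton B partial function.
attr_1 = {"obj1": "value1"}
attr_2 = {"obj2": "value2"}
attr_3 = {"obj3": "value3"}

def merge(*maps):
    """Union of the per-object maps: attr := attr_1 U attr_2 U ... U attr_n.
    Object identifiers are distinct, so the domains are disjoint and the
    union remains functional."""
    merged = {}
    for m in maps:
        merged.update(m)
    return merged

attr = merge(attr_1, attr_2, attr_3)
print(attr)  # {'obj1': 'value1', 'obj2': 'value2', 'obj3': 'value3'}

# With all values gathered in one map, a universally quantified
# well-formedness rule can be checked over every object at once.
assert all(v is not None for v in attr.values())
```

This is exactly why the merged variable is needed: a quantified invariant can only be discharged once all potential values are visible in a single variable.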
Proof obligations generated by support tools can inspect the data of all objects to verify the existential and/or universal quantifications transformed from well-formedness rules. The structure of the metamodel of class diagrams and of behavioural diagrams (composed of collaboration and statechart diagrams) is similar. Class diagrams are used to describe static properties (attributes and associations) of UML models. However, an object's attribute in the Behavioural Elements package (the Collaboration and State Machine packages) may hold a set of values, whereas one in the Core package holds only a single value. Depending on the values an attribute may hold, each attribute is transformed into B as a variable whose type, stated in the invariant, is either a partial function or a relation between the type of the class in which it is introduced and the type of the attribute. Another difference between the Behavioural Elements package and the Core package lies in the well-formedness rules. In the Core package, well-formedness rules are usually simple; each rule expresses constraints on only one attribute. Well-formedness rules of the Behavioural Elements package are more complex, with several attributes participating together in a rule. To verify the correctness of each rule (transformed into an invariant of a B abstract machine), the values of all the variables which participate in the invariants of the B abstract machines should be determined; furthermore, all these variables must be assigned values at the same time, so that the B prover can inspect all variable values when proving the predicates. In the approach of the transformation of the Core package into B [18], we merge the data of the variables in separate operations of the B abstract machine. This approach cannot be applied to the transformation of the Behavioural Elements package, because variables which participate in the same invariant may not be assigned their values at the same time, and hence the result of the proof can be incorrect.
To solve this problem, we propose a common approach applied to the transformation of the Core package as well as the Behavioural Elements package. Before presenting the transformation procedure, let us give some definitions:

**Definition 1** A composite class is a class that serves as the "whole" within a composition relationship; a composite object is an instance of a composite class.

**Definition 2** A component class is a class that serves as the "part" within a composition relationship; a component object is an instance of a component class.

The transformation procedure is defined as follows:

- Each object of a meta-class (a UML model element) is transformed into a B abstract machine; object attributes are transformed into variables of the abstract machine. The type of these variables is expressed in the INVARIANT clause as a partial function from the set of object identifiers to the type of the attribute:

\[ \mathit{attr}_j \in \mathit{CLASS} \rightarrow \mathrm{TYPE}(\mathit{attr}_j) \]

- The value of each variable is initialised in the INITIALISATION clause with a singleton mapping the object identifier to the object's attribute value:

\[ \mathit{attr}_j := \{ \mathit{object}_j \mapsto \mathit{value}_j \} \]

- Machines of the composite objects contain not only the variables which are transformed from the attributes of these objects, but also variables to merge the values of the variables in the machines of the component objects.
These variables are typed identically to those of the component objects:

\[ \mathit{attr}_i \in \mathit{CLASS} \rightarrow \mathrm{TYPE}(\mathit{attr}_i) \]

- We add an extra operation in the OPERATIONS clause of the composite object's abstract machine to merge the variables of the component objects' machines into the additional variables of the composite object's machine:

\[
\begin{array}{l}
\mathit{mergeData} = \\
\quad \textbf{PRE} \\
\quad\quad \mathit{attr}_{11} = \mathit{value}_{11} \land \ldots \land \mathit{attr}_{mn} = \mathit{value}_{mn} \\
\quad \textbf{THEN} \\
\quad\quad \mathit{attr}_1 := \mathit{attr}_{11} \cup \mathit{attr}_{12} \cup \ldots \cup \mathit{attr}_{1n} \parallel \\
\quad\quad \mathit{attr}_2 := \mathit{attr}_{21} \cup \mathit{attr}_{22} \cup \ldots \cup \mathit{attr}_{2n} \parallel \\
\quad\quad \vdots \\
\quad\quad \mathit{attr}_m := \mathit{attr}_{m1} \cup \mathit{attr}_{m2} \cup \ldots \cup \mathit{attr}_{mn} \\
\quad \textbf{END}
\end{array}
\]

where \( \mathit{attr}_i \) (\( i = 1..m \)) are the additional variables of the composite object's machine, \( \mathit{attr}_{ij} \) (\( i = 1..m, j = 1..n \)) are the variables of the machines of the component objects, \( m \) is the number of attributes of the component objects, and \( n \) is the number of component objects of a composite object.

The substitution above allows the component objects' variables to be merged, because the type of the additional variables of the composite object's machine is the same as that of the component objects' machines.

- The well-formedness rules of the component class in the metamodel are transformed into invariants of the machines of the composite objects.

This is the main procedure which allows us to transform the UML metamodel of class diagrams, collaboration diagrams and statechart diagrams into B.

5 Transformation of the metamodel of UML class diagrams into B

Based on the previous transformation procedure, we perform a transformation of the metamodel of UML class diagrams into B, illustrated by the printing system presented in Figure 1, in order to verify UML model elements.
5.1 Transformation of the UML metamodel

First, we consider the transformation into B of an object of the Operation metaclass of the metamodel, the `createComputer` operation. An example of its XMI specification, as generated by UML tool editors, is presented as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<uml:Operation xmi.id="xmi.011">
  <uml:ModelElement.name>createComputer</uml:ModelElement.name>
  <uml:ModelElement.visibility xmi.value="public"/>
  <uml:BehavioralFeature.isSpecification xmi.value="false"/>
  <uml:Operation.isRoot xmi.value="false"/>
  <uml:Operation.isLeaf xmi.value="false"/>
  <uml:Operation.isAbstract xmi.value="false"/>
  <!-- the parameters are defined here -->
</uml:Operation>
```

The result of the transformation of the UML `createComputer` operation to a B abstract machine is given in Figure 3.

Figure 3: B abstract machine for the UML createComputer operation

As presented in the transformation procedure in Section 4, the machines of composite objects contain not only variables which are transformed from the attributes of these objects, but also additional variables to merge the values of the variables in the machines of the component objects. Note that each parameter of the operation `createComputer` is transformed into a B abstract machine (`CreateComputer_HostName`, `CreateComputer_IPAddress`, `CreateComputer_UserName`); the structure of these machines is similar to that of the CreateComputer machine presented in Figure 3. In the UML metamodel, the Parameter metaclass is a component of the Operation metaclass. The abstract machine of the operation `createComputer` therefore has to contain additional variables to merge the values of the variables in the machines of its parameters. The additional part (composed of the additional variables and the `mergeData` operation) of the specification of the CreateComputer abstract machine is presented in Figure 4.
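Reading such XMI fragments is straightforward with a standard XML parser. The following Python sketch is hypothetical (it is not part of the authors' tool chain, and the namespace URI is invented so the snippet parses on its own); it shows how the object identifier and attribute values that drive the transformation can be pulled out of an operation element:

```python
import xml.etree.ElementTree as ET

# A self-contained fragment in the spirit of the XMI above; the
# xmlns:uml declaration is added here only so the snippet is parseable.
XMI = """<uml:Operation xmlns:uml="http://example.org/uml" xmi.id="xmi.011">
  <uml:ModelElement.name>createComputer</uml:ModelElement.name>
  <uml:ModelElement.visibility xmi.value="public"/>
</uml:Operation>"""

NS = {"uml": "http://example.org/uml"}
op = ET.fromstring(XMI)

# Object identifiers are taken straight from the XMI attributes,
# as assumed by the transformation procedure.
op_id = op.get("xmi.id")
op_name = op.find("uml:ModelElement.name", NS).text
visibility = op.find("uml:ModelElement.visibility", NS).get("xmi.value")
print(op_id, op_name, visibility)  # xmi.011 createComputer public
```

These extracted pairs (identifier, attribute value) are exactly what populate the INITIALISATION clauses of the generated machines.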
Based on the structure of the UML metamodel represented by the XMI structure in the left part of Figure 5, the general structure of the B abstract machines transformed from the UML model elements of the printing system's class diagram is presented as follows (see the right part of Figure 5):

```
MACHINE CreateComputer
...
USES CreateComputer_HostName, CreateComputer_IPAddress, CreateComputer_UserName
/* The structure of the parameter abstract machines is similar to the
   one of the CreateComputer machine in Figure 3 */
VARIABLES
  parameter_name, parameter_direction, ...
INVARIANT
  parameter_name ∈ PARAMETER → PARAMETER_NAME ∧
  parameter_direction ∈ PARAMETER → DIRECTION_KIND ∧ ...
INITIALISATION
  parameter_name := ∅ || parameter_direction := ∅ || ...
OPERATIONS
mergeData =
  pre
    hostname_name = {P1 → hostName} ∧ hostname_direction = {P1 → in} ∧
      /* from CreateComputer_HostName machine */
    ipAddress_name = {P2 → ipAddress} ∧ ipAddress_direction = {P2 → in} ∧
      /* from CreateComputer_IPAddress machine */
    userName_name = {P3 → userName} ∧ userName_direction = {P3 → in} ∧ ...
  then
    parameter_name := hostname_name ∪ ipAddress_name ∪ userName_name ||
    parameter_direction := hostname_direction ∪ ipAddress_direction ∪ userName_direction || ...
  end
END
```

Figure 4: Additional part for the CreateComputer abstract machine

The machine of Model uses the machines of the objects of the Association class\(^1\) (Computer_PrintServer, PrintServer_Printer, ...) and the machines of the objects of the Class class (Computer, PrintServer, Queue, Printer). The machines of the objects of the Association class (Computer_PrintServer) use the machines of the objects of the AssociationEnd class (Computer_PrintServer_computer, Computer_PrintServer_printserver).

\(^1\)As the associations have no name, we give a name composed of the names of the two classes connected by the association.
The machines of the objects of the Class class (Computer) use the machines of the objects of the Attribute class (Computer_HostName, Computer_IPAddress, Computer_UserName) and the machines of the objects of the Operation class (Computer_createComputer, Computer_notifyStatus); their names are prefixed by the name of the class. The machines of the objects of the Operation class (Computer_createComputer) use the machines of the objects of the Parameter class (CreateComputer_HostName, CreateComputer_IPAddress, CreateComputer_UserName); their names are prefixed by the name of the operation to distinguish them from the names of the Attribute machines. All machines in the system see the Types machine, which defines all the sets of the system (the members of these sets are extracted from the XMI specification of the metamodel of the UML class diagram):

CLASS = \{C1, C2, C3, C4\}; /* xmi.id = C1, ... */
OPERATION = \{O11, O12\}; /* xmi.id = O11, ... */
VISIBILITYKIND = \{public, private, protected\};
DIRECTIONKIND = \{in, out, inout\};
...

Remarks. The abstract machines of the objects of the Multiplicity class are combined with the abstract machines of the objects of the AssociationEnd class to become one kind of machine, the abstract machines of objects of the AssociationEnd class. The attributes of the objects of the Multiplicity class are transformed into variables of the abstract machines of the objects of the AssociationEnd class. The goal of this transformation is to merge data and to work with the well-formedness rules for the verification of the Association machine.

Figure 5: General structure of the UML metamodel of class diagrams and their transformation to B

With this arrangement of the abstract machines, we keep a structure of machines that corresponds to the metaclasses of the UML metamodel. It is clear and simple; furthermore, we can use the well-formedness rules as invariants of the abstract machines and exploit the B theorem prover to prove their correctness.
5.2 Verification of UML model elements

Let us consider a well-formedness rule of the Core package of the UML metamodel:

**Rule WFR1:** All Parameters should have a unique name.

\[ \text{self.parameter->forAll}(p1, p2 \mid p1.\text{name} = p2.\text{name} \implies p1 = p2) \]

This OCL predicate can be transformed into a B invariant, as presented in Figure 6:

\[
\begin{array}{l}
\textbf{INVARIANT} \\
\forall (x, y).\,(x \in \mathit{PARAMETER\_NAME} \land y \in \mathit{PARAMETER\_NAME} \land x = y) \\
\quad \implies \mathit{parameter\_name}^{-1}(x) = \mathit{parameter\_name}^{-1}(y)
\end{array}
\]

Figure 6: Transformation of the well-formedness rule WFR1 to B

This well-formedness rule of the Parameter meta-class is included in the abstract machine of the composite objects (objects of the Operation class). In this case, it is presented in the abstract machine of the createComputer operation (Figure 4), with a renaming of the attribute name to parameter_name to match the notation of the variable of this abstract machine.
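The quantified invariant amounts to requiring that the merged parameter_name map be injective: distinct parameters never share a name. A hypothetical Python rendering of the check (our illustration, using the case study's parameter identifiers; the real check is discharged by the B prover, not executed):

```python
def wfr1_unique_names(parameter_name):
    """WFR1: all parameters of an operation have a unique name,
    i.e. the object-identifier -> name map is injective."""
    names = list(parameter_name.values())
    return len(names) == len(set(names))

# Merged data from the three parameter machines of createComputer.
parameter_name = {"P1": "hostName", "P2": "ipAdress", "P3": "userName"}
print(wfr1_unique_names(parameter_name))  # True

# A model element violating the rule would be rejected:
print(wfr1_unique_names({"P1": "name", "P2": "name"}))  # False
```

Injectivity is also what makes the inverse function parameter_name⁻¹ in the B invariant well defined.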
The PARAMETER and PARAMETER\_NAME sets are defined in the Types machine as follows:

\[ \mathit{PARAMETER} = \{P1, P2, P3\}; \quad \mathit{PARAMETER\_NAME} = \{\mathit{hostName}, \mathit{ipAdress}, \mathit{userName}\} \]

One of the proof obligations that the B method proposes for an abstract machine is of the form \( I \land P \Rightarrow [S]I \), where

- \(P\): precondition of the operation
- \(S\): body of the operation
- \(I\): invariant of the abstract machine

Applying this proof obligation to the invariant and the mergeData operation of the CreateComputer abstract machine, the proof obligation in the B prover is written as follows:

\[
\begin{array}{l}
(x \in \{\mathit{hostName}, \mathit{ipAdress}, \mathit{userName}\} \land y \in \{\mathit{hostName}, \mathit{ipAdress}, \mathit{userName}\} \land x = y) \\
\quad \implies \{P1 \mapsto \mathit{hostName}, P2 \mapsto \mathit{ipAdress}, P3 \mapsto \mathit{userName}\}^{-1}(x) = \\
\quad\quad\quad \{P1 \mapsto \mathit{hostName}, P2 \mapsto \mathit{ipAdress}, P3 \mapsto \mathit{userName}\}^{-1}(y)
\end{array}
\]

The prover establishes this predicate, so WFR1 holds. In a similar way, we transform the other well-formedness rules of the Core package into B to verify the correctness of the UML model elements of class diagrams.

6 Transformation of the metamodel of UML collaboration diagrams into B

The metamodel of UML collaboration diagrams, called the Collaboration package, is a sub-package of the Behavioural Elements package. It specifies the concepts needed to express how different elements of a model interact with each other from a structural view. This package uses constructs defined in the Foundation package of UML as well as in the Common Behaviour package.

6.1 Transformation of the UML metamodel

Based on the transformation procedure presented in Section 4, we transform the metamodel of UML collaboration diagrams into B to verify its elements.
Attributes of each object in the Collaboration package are transformed into variables of the abstract machine, with their type determined as follows:

- The type of a variable transformed from an attribute of an object which contains only one value is defined as a partial function from the set of object identifiers to the type of its values: \( \text{attr}_i \in \text{CLASS} \mathrel{⤔} \text{TYPE}(\text{attr}_i) \), written ⇸ below.
- The type of a variable transformed from an attribute of an object which may contain a set of elements is defined as a relation from the set of object identifiers to the type of its values: \( \text{attr}_i \in \text{CLASS} \leftrightarrow \text{TYPE}(\text{attr}_i) \).

```b
MACHINE Interaction
SEES Types
USES Message1, Message11, Message12, Message13
VARIABLES
    interaction_name, interaction_context,
    message_name, message_interaction, message_sender,
    message_activator, message_predecessor, ...
INVARIANT
    interaction_name ∈ INTERACTION ⇸ INTERACTION_NAME ∧
    interaction_context ∈ INTERACTION ⇸ COLLABORATION ∧
    message_name ∈ MESSAGE ⇸ MESSAGE_NAME ∧
    message_interaction ∈ MESSAGE ⇸ INTERACTION ∧
    message_sender ∈ MESSAGE ⇸ CLASSIFIER_ROLE ∧
    message_activator ∈ MESSAGE ⇸ MESSAGE ∧
    message_predecessor ∈ MESSAGE ↔ MESSAGE ∧
    ...
INITIALISATION
    interaction_name := {int1 ↦ interaction1} ||
    interaction_context := {int1 ↦ call1} ||
    message_name := ∅ || message_interaction := ∅ ||
    message_sender := ∅ || message_activator := ∅ ||
    message_predecessor := ∅ ...
OPERATIONS
    mergeData =
    pre
        message1_predecessor = ∅ ∧
        message11_predecessor = ∅ ∧
        message12_predecessor = {mess12 ↦ mess11} ∧
        message13_predecessor = {mess13 ↦ mess11, mess13 ↦ mess12} ∧
        ...
    then
        message_predecessor := message1_predecessor ∪ message11_predecessor
                               ∪ message12_predecessor ∪ message13_predecessor ||
        ...
    end
END
```

Figure 7: The Interaction B abstract machine

To illustrate the transformation of the metamodel of the UML collaboration diagram of the printing system (presented in Figure 2) into a B specification, we introduce the B abstract machines of the objects of the Message and Interaction classes. Four instances are identified for the Message meta-class: 1, 1.1, 1.2, 1.3. Based on the procedure of transformation presented in Section 4, each instance of the Message class is transformed into a B abstract machine, named Message1, Message11, Message12 and Message13. The Interaction meta-class of this case study has only one object. This object is transformed into the B abstract machine presented in Figure 7.

The Interaction abstract machine does not only contain variables transformed from the attributes of the Interaction object (prefixed with \textit{interaction}); it also contains variables that merge the data of the variables of the abstract machines of the objects of the Message class (prefixed with \textit{message}). The merging of variables is realised by the operation \textit{mergeData} of the Interaction machine. The purpose of this merging, as presented above, is to verify the UML model elements of the collaboration diagram. The machines of the component objects in the UML metamodel are used by the machines of the composite objects (in the XMI specification, component classes appear as nested child elements, composite classes as their parent elements). The general structure of the B machines transformed from the metamodel of a UML collaboration diagram is presented in Figure 8.

Figure 8: General structure of the UML metamodel of collaboration diagrams and their transformation to B

The left-hand part of the figure gives the XMI summary description of a collaboration diagram of UML models. The right-hand part is the structure of the corresponding B abstract machines.
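The effect of the mergeData operation can be sketched in Python (an assumption of ours, not output of the prototype), with each B relation message*_predecessor modelled as a set of (source, target) pairs:

```python
# Sketch (assumption): mergeData unions the per-object predecessor
# relations of the Message machines into one merged relation.
message1_predecessor = set()                 # mess1 has no predecessor
message11_predecessor = set()                # mess11 has no predecessor
message12_predecessor = {("mess12", "mess11")}
message13_predecessor = {("mess13", "mess11"), ("mess13", "mess12")}

# mergeData: the merged variable is the union of the component relations.
message_predecessor = (message1_predecessor | message11_predecessor
                       | message12_predecessor | message13_predecessor)
print(sorted(message_predecessor))
```

The merged relation is exactly the message_predecessor value used by the well-formedness checks in the next subsection.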
The machine of the object of the Model class uses (USES) the machines of the objects of the Collaboration class and of the CallAction class; the machines of the objects of the Collaboration class use the machines of the objects of the ClassifierRole class, the AssociationRole class and the Interaction class, and so on. All the machines in the system see (SEES) the Types machine, which defines all the sets of the system. In our case study, these sets are:

MESSAGE = \{mess1, mess11, mess12, mess13\};
MESSAGE_NAME = \{print, restore, notifyStatus\};
CLASSIFIER_ROLE = \{class1, class2, class3, class4\};
CLASSIFIER_ROLE_NAME = \{Computer, PrintServer, Queue, Printer\}; ...

6.2 Verification of UML model elements

Let’s consider the transformation of the well-formedness rules of the Message class and the verification of the UML model elements of the Collaboration package, which must satisfy these rules.

Rule WFR2. The predecessors and the activator must be contained in the same Interaction.

self.predecessor->forAll(p | p.interaction = self.interaction)
and
self.activator->forAll(a | a.interaction = self.interaction)

This OCL predicate can be transformed to the B invariant as presented in Figure 9.

<table>
<thead>
<tr>
<th>INVARIANT</th>
</tr>
</thead>
<tbody>
<tr>
<td>∀ pp.(pp ∈ MESSAGE ∧ message_predecessor[{pp}] ≠ ∅) ⇒ message_interaction[message_predecessor[{pp}]] = message_interaction[{pp}] ∧ ∀ aa.(aa ∈ MESSAGE ∧ message_activator[{aa}] ≠ ∅) ⇒ message_interaction[message_activator[{aa}]] = message_interaction[{aa}]</td>
</tr>
</tbody>
</table>

Figure 9: The B invariant transformed from the WFR2 rule

Applying this rule to the case study of the printing system and taking into account the values of the variables, we have MESSAGE = \{mess1, mess11, mess12, mess13\}. The values of the message_predecessor, message_activator and message_interaction relations established by the mergeData operation of the Interaction machine are:

message_predecessor = \{mess12 ↦ mess11, mess13 ↦ mess11, mess13 ↦ mess12\};
message_activator = \{mess11 ↦ mess1, mess12 ↦ mess1, mess13 ↦ mess1\};
message_interaction = \{mess1 ↦ int1, mess11 ↦ int1, mess12 ↦ int1, mess13 ↦ int1\}

Let’s analyse the proof obligation \( I \land P \Rightarrow [S]I \) of the B abstract machine, which verifies the preservation of the invariant \( I \) above, where \( P \) is the precondition of the mergeData operation and \( S \) is its body.

WFR2a = ∀ pp.(pp ∈ MESSAGE ∧ message_predecessor[{pp}] ≠ ∅ ⇒ message_interaction[message_predecessor[{pp}]] = message_interaction[{pp}])

Let’s examine each value of pp:

if pp = mess1 ⇒ message_predecessor[{mess1}] = ∅;
if pp = mess11 ⇒ message_predecessor[{mess11}] = ∅;

In these two cases, the premise of WFR2a is not satisfied.
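Before walking through the remaining cases, the whole of WFR2 can be replayed mechanically (our sketch, not the paper's tooling), with relations as Python sets of pairs and the relational image r[u] as a helper:

```python
# Sketch (assumption): checking WFR2a and WFR2b on the case-study data.
def image(r, u):
    """Relational image r[u] = { y | (x, y) in r and x in u }."""
    return {y for (x, y) in r if x in u}

MESSAGE = {"mess1", "mess11", "mess12", "mess13"}
message_predecessor = {("mess12", "mess11"),
                       ("mess13", "mess11"), ("mess13", "mess12")}
message_activator = {("mess11", "mess1"), ("mess12", "mess1"),
                     ("mess13", "mess1")}
message_interaction = {(m, "int1") for m in MESSAGE}

# WFR2a: the predecessors of a message belong to the same interaction.
wfr2a = all(image(message_interaction, image(message_predecessor, {m}))
            == image(message_interaction, {m})
            for m in MESSAGE if image(message_predecessor, {m}))

# WFR2b: the activator of a message belongs to the same interaction.
wfr2b = all(image(message_interaction, image(message_activator, {m}))
            == image(message_interaction, {m})
            for m in MESSAGE if image(message_activator, {m}))

print(wfr2a and wfr2b)  # WFR2 holds for the printing-system data
```

The `if image(...)` guards reproduce the non-empty premises of the invariant, so messages without predecessors or activators are skipped, just as in the case analysis below.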
if pp = mess12 ⇒ message_predecessor[{mess12}] = {mess11}
⇒ message_interaction[{mess11}] = {int1}
So message_interaction[message_predecessor[{mess12}]] = {int1}
On the other hand, message_interaction[{mess12}] = {int1} ⇒ WFR2a = true

if pp = mess13 ⇒ message_predecessor[{mess13}] = {mess11, mess12}
Note that r[u] = ran(u ◁ r), with u ⊆ s ∧ r ∈ s ↔ t (see The B-Book [1], p. 102)
⇒ message_interaction[{mess11, mess12}]
= ran({mess11, mess12} ◁ message_interaction)
= ran({mess11 ↦ int1, mess12 ↦ int1}) = {int1}
and message_interaction[{mess13}] = {int1} ⇒ WFR2a = true.

We deduce that WFR2a = true for each value of pp.

WFR2b = ∀ aa.(aa ∈ MESSAGE ∧ message_activator[{aa}] ≠ ∅ ⇒ message_interaction[message_activator[{aa}]] = message_interaction[{aa}])

if aa = mess1 ⇒ message_activator[{mess1}] = ∅
if aa = mess11 or aa = mess12 or aa = mess13 ⇒ message_activator[{aa}] = {mess1}
⇒ message_interaction[{mess1}] = {int1} and message_interaction[{aa}] = {int1} ⇒ WFR2b = true

As a consequence, we have WFR2 = WFR2a ∧ WFR2b = true.

Rule WFR3. The predecessors must have the same activator as the Message.

self.allPredecessors->forAll(p | p.activator = self.activator)

This OCL predicate can be transformed to the B invariant as presented in Figure 10.

```
INVARIANT
...
∀ xx.(xx ∈ MESSAGE ∧ message_predecessor[{xx}] ≠ ∅ ⇒
      message_activator[message_predecessor[{xx}]] = message_activator[{xx}])
```

Figure 10: The B invariant transformed from the WFR3 rule

if xx = mess1 ⇒ message_predecessor[{mess1}] = ∅;
if xx = mess11 ⇒ message_predecessor[{mess11}] = ∅;
if xx = mess12 ⇒ message_predecessor[{mess12}] = {mess11}
⇒ message_activator[{mess11}] = {mess1}
and message_activator[{mess12}] = {mess1}
⇒ WFR3 = true
if xx = mess13 ⇒ message_predecessor[{mess13}] = {mess11, mess12}
⇒ message_activator[{mess11, mess12}]
= ran({mess11, mess12} ◁ message_activator)
= ran({mess11 ↦ mess1, mess12 ↦ mess1}) = {mess1}
and message_activator[{mess13}] = {mess1}
⇒ WFR3 = true.

The result of this invariant is WFR3 = true.

Rule WFR4. A Message cannot be the predecessor of itself.

not self.allPredecessors->includes(self)

This OCL predicate can be transformed to the B invariant as presented in Figure 11.

<table>
<thead>
<tr>
<th>INVARIANT</th>
</tr>
</thead>
<tbody>
<tr>
<td>∀ xx.(xx ∈ MESSAGE ⇒ xx ∉ message_predecessor[{xx}])</td>
</tr>
</tbody>
</table>

Figure 11: The B invariant transformed from the WFR4 rule

if xx = mess1 ⇒ message_predecessor[{mess1}] = ∅;
if xx = mess11 ⇒ message_predecessor[{mess11}] = ∅;
if xx = mess12 ⇒ message_predecessor[{mess12}] = ran({mess12 ↦ mess11}) = {mess11}, and mess12 ∉ {mess11};
if xx = mess13 ⇒ message_predecessor[{mess13}] = ran({mess13 ↦ mess11, mess13 ↦ mess12}) = {mess11, mess12}, and mess13 ∉ {mess11, mess12}.

As a result, WFR4 = true.

The verification of the well-formedness rules can be executed with the support tool AtelierB [17], which can demonstrate theorems both automatically and interactively. When using the AtelierB tool to prove the Interaction abstract machine of the printing system, 15 proof obligations are proved automatically and 3 proof obligations are proved interactively.
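The WFR3 and WFR4 case analyses can likewise be replayed on the case-study relations (our sketch; note that for this shallow example the direct predecessor relation coincides with allPredecessors, so checking it directly suffices):

```python
# Sketch (assumption): WFR3 (predecessors share the message's activator)
# and WFR4 (no message is its own predecessor) on the case-study data.
def image(r, u):
    """Relational image r[u] = { y | (x, y) in r and x in u }."""
    return {y for (x, y) in r if x in u}

MESSAGE = {"mess1", "mess11", "mess12", "mess13"}
message_predecessor = {("mess12", "mess11"),
                       ("mess13", "mess11"), ("mess13", "mess12")}
message_activator = {("mess11", "mess1"), ("mess12", "mess1"),
                     ("mess13", "mess1")}

# WFR3: whenever a message has predecessors, they have the same activator.
wfr3 = all(image(message_activator, image(message_predecessor, {m}))
           == image(message_activator, {m})
           for m in MESSAGE if image(message_predecessor, {m}))

# WFR4: no message appears in the image of its own predecessor set.
wfr4 = all(m not in image(message_predecessor, {m}) for m in MESSAGE)
print(wfr3, wfr4)  # both rules hold for the printing-system data
```

A self-loop such as (mess12, mess12) in message_predecessor would make `wfr4` False, which is the situation the WFR4 invariant forbids.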
7 Transformation of the metamodel of UML statechart diagrams into B

The metamodel of statechart diagrams, called the State Machine package, is a sub-package of the Behavioural Elements package. It specifies a set of concepts that can be used for modelling behaviour through finite state-transition systems, and it is defined as an elaboration of the Foundation package. The State Machine package depends on concepts defined in the Common Behaviour package, enabling integration with the other sub-packages of Behavioural Elements.

The procedure of transformation of the State Machine package into B is similar to that of the Collaboration package. Based on the structure of the State Machine package, the structure of the B abstract machines is composed as presented in Figure 12.

Figure 12: General structure of the UML metamodel of statechart diagrams and their transformation to B

The machines of the objects of the Class class use the machines of the objects of the StateMachine class; the machines of the objects of the StateMachine class use the machines of the objects of the Transition class and of the CompositeState class, and so on. Because the verification with well-formedness rules proceeds similarly, we only present the transformation of the UML metamodel of statechart diagrams and do not illustrate its verification on the case study.

8 Conclusion

We have presented a technique to transform the metamodel of UML class diagrams, collaboration diagrams and statechart diagrams, together with their well-formedness rules, into B formal specifications. This transformation aims to verify that the UML model elements satisfy the well-formedness rules of the UML semantics. By exploiting the advantages of formal approaches to verification, our approach benefits from powerful provers such as AtelierB. In addition, OCL, which is used to specify the well-formedness rules of the UML semantics, and the B notation are both based on first-order predicate logic, so their reciprocal transformation is straightforward.
Furthermore, B is based on set theory, and the relation between classes and their objects in UML is similar to the relation between sets and their elements. Operations on attributes of classes correspond to operations on binary relations in set theory, so the proofs in the B provers are performed automatically and easily.

A prototype, ArgoUML+B [12], has been developed from ArgoUML², a freely available platform for editing UML diagrams. This prototype automatically transforms UML diagrams (class, statechart, collaboration) into B. Furthermore, the internal representation of a UML model is completely generated from the specification; that is, the values of the objects in the UML metamodel are saved as XMI code. We continue to develop this prototype to automatically generate B abstract machines from the XMI support. Inspired by this approach, we plan to build a formal approach to specify and verify object-based systems using the B method.

²http://argouml.tigris.org

References
A contribution-based framework for the creation of semantically-enabled web applications

Mariano Rico\textsuperscript{a}, David Camacho\textsuperscript{a,\textasteriskcentered}, Óscar Corcho\textsuperscript{b}

\textsuperscript{a}\textit{Computer Science Department, Universidad Autónoma de Madrid, 28049 Madrid, Spain}
\textsuperscript{b}\textit{Ontology Engineering Group, Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, Spain}

\textbf{ARTICLE INFO}

\textbf{Article history:} Received 26 November 2008; Received in revised form 17 April 2009; Accepted 4 July 2009

\textbf{Keywords:} Contributively-collaborative systems; Wiki-based applications; Semantic web technologies; Semantic web applications

\textbf{ABSTRACT}

We present Fortunata, a wiki-based framework designed to simplify the creation of semantically-enabled web applications. This framework facilitates the management and publication of semantic data in web-based applications, to the extent that application developers do not need to be skilled in client-side technologies, and it promotes application reuse by fostering collaboration among developers by means of wiki plugins. We illustrate the use of this framework with two Fortunata-based applications named OMEMO and VPOET, and we evaluate it with two experiments performed with usability evaluators and application developers respectively. These experiments show a good balance between the usability of the applications created with this framework and the effort and skills required from developers. © 2009 Elsevier Inc. All rights reserved.

1. Introduction

In the last years a large number of ontologies have been made available on the Internet, and sources of semantic data have also grown considerably [4], especially in the context of the LinkedData initiative (see http://linkeddata.org), which has seen the emergence of a good number of SPARQL Endpoints [12].
However, this wealth of information still remains mostly hidden behind these SPARQL Endpoints, ontology libraries and ontology search engines, and is not used extensively in semantically-enabled web applications. This is due to the fact that, on the one hand, there is an increasing difficulty in the design of attractive and easily reusable web applications, where a wide set of client-side technologies (e.g. HTML, Javascript, CSS, DHTML, Flash, or AJAX) and server-side technologies (e.g. ASP, JSP, JSF, .NET) need to be used, turning web designers into skilled programmers, as pointed out by Rochen et al. [15]. On the other hand, the complexity of some semantic web technologies still represents a hard adoption barrier for any web application developer.

As an example, let us suppose that an application developer in a company is interested in creating a prototype of a semantically-enabled web application, so as to make an initial check of the feasibility of this approach and its validity for the type of problem that she is trying to solve. This developer has to master general-purpose server and client web technologies (which she is probably already familiar with) as well as semantic web technologies (which are less frequent among developers). Even a simple application, such as a small prototype, requires a large set of competencies.

This paper presents Fortunata,\textsuperscript{1} a framework designed to facilitate these application development tasks. Fortunata allows developers to build semantically-enabled web applications more easily, by reducing the required competencies in web and

This work has been funded by the Spanish Ministry of Science and Technology under the projects TIN2007-65989 and TIN2007-64718.

\textasteriskcentered\ Corresponding author. Tel.: +34 91 497 2288; fax: +34 91 497 2235. \textit{E-mail addresses:} Mariano.Rico@uam.es (M. Rico), David.Camacho@uam.es (D. Camacho), ocorcho@fi.upm.es (Ó. Corcho).
\textsuperscript{1} A detailed description, and practical examples, can be found at http://ishtar.ii.uam.es/fortunata. The Fortunata source code, as well as the source code of VPOET and OMEMO, described later, is available at http://code.google.com/p/fortunata.

semantic web technologies. The framework provides a programming library able to delegate: (1) the client-side presentation tasks to the wiki engine on top of which it is built, and (2) the management and publication of semantic data, by incorporating a simple set of ontology management functions.

Ours is not the first approach that aims at combining semantic and wiki-based technologies. In fact, this is something that has been covered, at least partially, by semantic wikis, semantic portals and, more recently, Semantic Pipes. However, there are some important differences between all these approaches:

- Semantic wikis [11,3] are focused on the collaborative creation [5,13,17] of semantic data (e.g., RDF triples) and its publication in a wiki-like fashion, normally combined with natural language text. Fortunata is instead focused on the collaborative creation of applications that exploit that data, by means of wiki-based plugins (also known as Fortunata-plugins or F-plugins). That is, semantic wikis are focused on content and addressed to end users, while Fortunata is focused on applications and addressed to developers.
- Semantic portals [8] are also focused on the collaborative creation of semantic data, although, unlike semantic wikis, they are more rigid and enforce the use of specific knowledge models (ontologies), which are converted into forms. These portals normally present data in table-based representations, which are configured by the portal developers with ad hoc scripting languages. Besides being focused on applications instead of data, Fortunata provides application developers with the ability to control the data flow between the user’s form fields and the published semantic data.
- Semantic Pipes [9] focus on application developers who want to handle semantic data, as Fortunata does. They allow creating semantically-enabled web applications as workflows that connect the inputs and outputs of different semantic data services, and these applications can be contributed for others to use. Unlike Fortunata, they are not focused on the presentation of data but only on its transformation.

In summary, Fortunata aims at combining the advantages of semantic wikis (using their easy syntax for rendering information), semantic portals (allowing forms to enter user data that will be converted to semantic data) and semantic pipes (allowing semantic data transformation), while minimizing their drawbacks (uncontrolled editing of semantic data in semantic wikis, difficult transformation of user input in semantic portals, and lack of focus on presentation in semantic pipes). This is shown graphically in Fig. 1.

Any Fortunata-based web application comprises a set of plugins that integrate semantic data from any existing source (including other Fortunata-based applications), allow its transformation in different manners, and/or provide presentations for semantic data. While traditional development centralises the source code, applications designed under this architectural paradigm are created in a decentralised way. That is, in traditional development, extending the functionality of a semantic portal or wiki typically requires accessing the source code and compiling it, resulting in a new version of the application. Instead, plugins allow members of a community to contribute to the creation of new functionality with a minimal degree of interdependence (e.g., they do not need to have access to the other plugins’ code in order to compile them). When a developer has created and tested a new plugin, the source code is sent to the Fortunata-based wiki administrator.
If the code is considered valid and safe, it is compiled and added to the Fortunata-based wiki engine, and is made available to any user or developer.

Following the aforementioned example, after a short initial training on Fortunata, the developer can create her prototype in a short amount of time (as shown in the evaluation section). The Fortunata API hides most error-prone details, allowing developers to focus their efforts on the provision of the required functionality. Obviously, more advanced applications will probably require more advanced competencies in semantic web technologies, but critical and error-prone tasks such as web presentation and the publishing of semantic data do not need to be taken into account by the developer, since they are delegated to Fortunata.

To validate our approach, we have carried out two experiments. The first one evaluated the usability of our contribution-based framework at the initial design stages (that is, in prototype phases), previous to the creation of more complex applications. A study using Inspection Methods [7] was carried out by usability experts. The results of this study allowed us to improve the system (overcoming some of the limitations of the wiki engine on top of which Fortunata is developed), as well as providing future developers with a usability guide specifically oriented to Fortunata-based developers. Following this guide, two applications have been created: OMEMO and VPOET, which are also described in this work.

The second experiment focused on measuring the benefits that the contribution-based strategy provides for developers using Fortunata. Developers were requested to develop the same application, but were divided into two groups: one of them had to use traditional development tools, while the other had to use Fortunata. The results of this experiment show that Fortunata-based developers required fewer tools and needed less time to create their applications.
This paper is structured as follows. Section 2 describes Fortunata, focusing on how it has been built and on how it allows designing semantic web applications based on the contributive collaboration of developers. Section 3 describes OMEMO and VPOET, which are examples of the types of semantic web applications that can be created with this framework. Section 4 describes the two sets of evaluations carried out to check the validity of our approach, in terms of usability and development effort. Finally, Section 5 summarises the conclusions and future work.

2. Creating semantic web applications by developer contribution

This section provides a technical description of the architecture of Fortunata, the features provided by Fortunata, and a developer use case.

2.1. Fortunata features and architecture

As commented in the introduction, Fortunata aims at combining benefits from wiki engines and ontology management systems, providing semantic web application developers with the following features:

- **Wiki-style presentation.** Developers do not require competencies in client-side technologies (e.g., HTML, CSS, or Ajax), but only the wiki-style syntax, which can be learnt in a short period of time (around 10 min). This is because Fortunata provides developers with simple predefined methods to render a given wiki text. Forms are another feature required in order to create applications; in this case, the wiki text can specify a form easily. This feature is provided by the wiki engine described later.
- **Semantic data management.** Ontology management systems such as Jena, Sesame, etc., normally provide developers with rather large and rich APIs that allow creating and handling ontologies and other semantic data sources. Fortunata only provides a simplified set of methods to handle these sources, striking an appropriate balance between the richness of the API and the number of methods that need to be learnt by developers.
- **A set of utilities.** Limitations of JSPWiki such as data exchange between wiki pages, or a unified types set of user messages, are overcome by using a set of Fortunata utilities. --- **Fig. 2.** Fortunata architecture. In order to support these features, Fortunata has been created on top of the functionality provided by the JSPWiki wiki engine and by the ontology management library Jena, as shown in Fig. 2. Now we describe the different software components of this framework and justify their use for the creation of Fortunata. JSPWiki (See http://jspwiki.org) is an open source Java-based wiki engine that allows the use of plugins to extend its functionality. Some plugins belong to the “core” library (maintained by the JSPWiki community), while other are “contributions” (maintained by the plugin creators). In 2006 only two wiki engines allowed plugins and forms: JSPWiki (Java language) and Twiki (Perl language). As Jena requires Java language, our decision was to use JSPWiki as wiki engine. Besides extensibility, the core implementation of JSPWiki provides other features that can be used by Fortunata developers as well as by Fortunata-based application users. These benefits are briefly summarized as follows: - Forms creation by means of specific wiki tags. JSPWiki wiki syntax allows an easy creation of forms which links buttons actions to plugins. This key relationship is described in Section 2.2. - Web feeds (based in the standard RSS) provide users with a subscription mechanism. For example, a user subscribed to a wiki page (which may be generated from a semantic data source and hence represent a change in that semantic data source) will be notified whenever the subscribed page is changed. - Common access control mechanisms allow managing permissions to see or modify a wiki page, provide user authentication mechanisms, group management, etc., which are common functions to be included in (semantic) web-based applications. 
- Version control allows reverting any wiki page to its previous state. This feature encourages users to modify any wiki page with the guaranty that changes can be reverted. - Link management provides mechanisms to identify “null” links (links pointing to non-existing wiki pages), “inverse” links (pages linking to the current page), or orphan links (pages not linked by any page). - Indexing, implemented with Lucene, allows searches by keyword in all wiki pages, no matter whether they have been created manually or automatically from the semantic data sources. In order to support these features, Fortunata has been created on top of the functionality provided by the JSPWiki wiki engine and by the ontology management library Jena, as shown in Fig. 2. Now we describe the different software components of this framework and justify their use for the creation of Fortunata. JSPWiki (See http://jspwiki.org) is an open source Java-based wiki engine that allows the use of plugins to extend its functionality. Some plugins belong to the “core” library (maintained by the JSPWiki community), while other are “contributions” (maintained by the plugin creators). In 2006 only two wiki engines allowed plugins and forms: JSPWiki (Java language) and Twiki (Perl language). As Jena requires Java language, our decision was to use JSPWiki as wiki engine. Besides extensibility, the core implementation of JSPWiki provides other features that can be used by Fortunata developers as well as by Fortunata-based application users. These benefits are briefly summarized as follows: - Forms creation by means of specific wiki tags. JSPWiki wiki syntax allows an easy creation of forms which links buttons actions to plugins. This key relationship is described in Section 2.2. - Web feeds (based in the standard RSS) provide users with a subscription mechanism. 
Jena (see http://jena.sourceforge.net) is a Java library that provides developers with a programming environment to manage ontologies and semantic data. It can handle different ontology languages, such as OWL and RDFS, as well as different persistence and reasoning models. Fortunata hides this variety from developers: the Fortunata wiki administrator selects a fixed set of options. For example, in the implementation used for our evaluations, developers worked with a Fortunata configuration that used a file-based persistence model (which is quite natural for wiki-based systems) and OWL DL ontologies.
With that configuration, the methods provided by Fortunata prevent developers from having to carry out tasks such as managing files or Jena models. The developer only has to provide code for the creation of semantic data instances, as well as code for the creation of the classes and properties. Fig. 3 shows the code a developer writes to create an instance in the “Hello World” example from the developers’ tutorial. Compare these three lines with the amount of code that would otherwise be needed to provide this functionality (here supplied by Fortunata). 2.2. Plugin development in JSPWiki and Fortunata In this section we first describe how JSPWiki plugins can be created and then move on to the creation of F-plugins. 2.2.1. Contributing functionalities in JSPWiki by means of plugins Wiki plugins are pieces of code that extend the functionality of the wiki engine, which is otherwise focused on letting users edit wiki pages in a simple way. Wiki plugins automate actions on a wiki page and present the result of such actions in “view mode”. Examples of core JSPWiki plugins are TableOfContents, which generates a TOC from the wiki page contents, and ReferringPages, which finds and lists all pages that refer to the current page. The JSPWiki “core” comprises 25 plugins, and the set of “contributed” plugins is currently around one hundred (as of Nov. 2008). Fig. 4 shows an example of how a specific plugin available in the system can be invoked from any wiki page (on the left part of the figure), and its corresponding result after the invocation with a specific set of parameters (on the right part). The invocation text of the plugin is enclosed between “[” and “]”. This invocation contains arguments that specify a font size and an alignment, as well as a body containing a formula in LaTeX format. The result of this invocation is the wiki page in “view mode” that can be seen on the right.
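The bracketed invocation text can be parsed with a few lines of Java. The sketch below assumes the simplified JSPWiki form [{PluginName key='value' ...}] without a body; the plugin name LatexPlugin and its parameter names are invented for illustration:

```java
import java.util.*;
import java.util.regex.*;

class PluginCall {
    // Parses a simplified JSPWiki-style plugin invocation such as
    // [{LatexPlugin fontsize='20' align='center'}]
    // (the real wiki engine handles more cases, e.g. plugin bodies).
    static Map<String, String> parse(String text) {
        Matcher m = Pattern.compile("\\[\\{(\\w+)([^}]*)\\}\\]").matcher(text);
        if (!m.find()) throw new IllegalArgumentException("not a plugin call");
        Map<String, String> result = new LinkedHashMap<>();
        result.put("_plugin", m.group(1));                 // plugin name
        Matcher p = Pattern.compile("(\\w+)='([^']*)'").matcher(m.group(2));
        while (p.find()) result.put(p.group(1), p.group(2));
        return result;
    }
}
```

The wiki engine performs an equivalent step before handing the parameter map to the plugin's execute() method.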
This plugin is automatically executed each time the wiki page is displayed in “view mode”. The plugin execution results in an image displaying the formula. To create a JSPWiki plugin, developers only have to create a class implementing the interface WikiPlugin (see Fig. 5). This interface requires the implementation of the execute() method, which is invoked by the wiki engine when a user views the wiki page that contains the plugin. Within that method, developers have access to the plugin parameters and values, and have to include the code that performs the plugin operations (or an invocation of the corresponding method). 2.2.2. F-plugin development In a similar fashion to how plugins are implemented in JSPWiki, an F-plugin is implemented by means of a Java class (see Fig. 5) that implements the interface WikiPlugin (from the JSPWiki library). Additionally, it extends the class FortunataPlugin (from the Fortunata library), which provides developers with useful methods (e.g. renderWikiText()) concerning forms management and rendering. Fig. 4. Usage example of a JSPWiki plugin that generates an image from a LaTeX formula. (a) “Edition mode” shows how this plugin is invoked. (b) “View mode” showing the result. Fig. 5. Class diagram of Fortunata-based applications. The layer “Fortunata API” shows the main class methods that a developer must implement. In this example, the F-plugin contains one instance of the class Vpoet. Abstract classes and interfaces are written in italics. Fig. 5 shows the layers involved in the development of F-plugins. The upper layer is the API provided by JSPWiki, which provides the interface WikiPlugin. Below this layer, the Fortunata API provides a set of generic classes that can be exploited by specific application classes. This is the case of the semantic web application VPOET, which uses the class Vpoet, shown in the layer named “Semantic Web application”.
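The layering of Fig. 5 can be sketched with self-contained stand-ins. This is an illustration only: the class and method names (WikiPlugin, FortunataPlugin, FortunataSWApplication, createIndividual(), fillDataModel(), renderWikiText(), Vpoet, AddVisualization) come from the paper, but all signatures and bodies below are invented, and the real execute() also receives a WikiContext:

```java
import java.util.*;

// Stand-in for the JSPWiki contract (simplified signature).
interface WikiPlugin {
    String execute(Map<String, String> params);
}

// Stand-in for the Fortunata layer: helpers shared by every F-plugin.
abstract class FortunataPlugin {
    String renderWikiText(String text) {          // simplified rendering helper
        return "<p>" + text + "</p>";
    }
}

// Stand-in for the application layer; the two abstract methods mirror the
// ones Fortunata forces developers to implement for semantics persistence.
abstract class FortunataSWApplication {
    abstract String createIndividual(String name);
    abstract void fillDataModel(String individual);
}

// The application object shared by all plugins of one application.
class Vpoet extends FortunataSWApplication {
    final List<String> model = new ArrayList<>();
    String createIndividual(String name) { return "vpoet:" + name; }
    void fillDataModel(String individual) { model.add(individual); }
}

// An F-plugin: implements the wiki contract, inherits the Fortunata helpers,
// and delegates semantic data management to the shared application object.
class AddVisualization extends FortunataPlugin implements WikiPlugin {
    final Vpoet app = new Vpoet();
    public String execute(Map<String, String> params) {
        String id = app.createIndividual(params.getOrDefault("template", "unnamed"));
        app.fillDataModel(id);
        return renderWikiText("stored " + id);
    }
}
```

The point of the sketch is the division of labour: the wiki engine only knows about WikiPlugin, while the semantic bookkeeping is confined to the FortunataSWApplication subclass.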
The last layer, named “F-plugin” in the figure, is for specific application plugins. The class AddVisualization is an example of the kind of classes that exist in this layer, and of how it relates to the other layers. A semantically-enabled web application is represented by a class derived from the abstract class FortunataSWApplication, which provides developers with useful methods and forces them to implement the methods createIndividual() and fillDataModel(), which concern semantics persistence. All the plugins in a Fortunata-based application share one semantic web application object. In this example, the figure shows the class AddVisualization. This class is an F-plugin, and consequently it inherits the methods implemented in the base class FortunataPlugin and must implement three methods (one from the interface WikiPlugin and two from the class FortunataPlugin). This plugin contains an instance of the class Vpoet, which implements two methods from the class FortunataSWApplication concerning semantic data management. The process to create and contribute an F-plugin is detailed in the upper part of Fig. 6, and follows the usual procedure in any plugin-based architecture. First, the developer must create the F-plugin locally (steps 1–3) and perform an adequate number of tests to check that it works correctly (step 4). Then she must publish (step 5) the plugin source code and the documentation about its usage. The bottom part of the figure depicts the process to create new functionality by reusing the initial functionality, following a “contributive collaboration” schema.
It comprises the following steps: installation of an existing F-plugin (step 6); reading and understanding its associated ontologies (either by manually reading the OWL files, by using any off-the-shelf ontology editor, or by means of OMEMO) in order to find the elements that must be added, removed, or modified, or to decide whether a new set of ontologies has to be imported and used (step 7); local creation of the extended plugin (steps 8–10); local tests (step 11); and publication (step 12). The purpose of this detailed explanation is to show the low complexity of the plugin reuse and contribution process. Table 1 summarizes the development tasks that are normally associated with the development of a typical semantic web application, and compares the skills required to perform these tasks when using a traditional development approach and a Fortunata-based approach. Traditional development requires more competencies (more development tools and roles) than Fortunata-based development. This is one of the main results of the comparison performed with real developers, described in Section 4. 3. OMEMO and VPOET: examples of Fortunata-based semantic web applications This section illustrates how the Fortunata framework can be used to create two prototypical semantically-enabled web applications. These applications are not intended to be original or innovative, since similar types of applications are available in the current state of the art; rather, we aim at demonstrating that they are easy to implement and extend using our approach. OMEMO is focused on the HTML publication of ontologies (in a similar fashion to systems like OWLDoc [See http://www.code.org/downloads/owldoc]), and it is interesting as a case study since it exploits many features of the wiki infrastructure, such as orphan links or the simplicity of the wiki syntax.
VPOET is focused on semantic data visualization, and especially exploits the forms provided by the underlying JSPWiki infrastructure and the ontology publication functionality provided by OMEMO. 3.1. OMEMO OMEMO stands for “Ontologies for MEre Mortals”. It is aimed at users with no previous knowledge of ontology languages, who may find it difficult to understand the knowledge model that an ontology or set of ontologies provides in languages like OWL or RDF Schema. By using OMEMO, users can browse and navigate through the components (classes, properties, and individuals) of any ontology that they upload to the system. When a new ontology is added to the system, a set of wiki pages is generated automatically, providing a simplified view of the ontology components, oriented to showing the structure of the information, since the application targets non-technical users. Therefore, OMEMO hides knowledge representation aspects that are well known to ontology experts, such as “range”, “domain”, the differentiation between datatype properties and object properties, “functional properties”, “inverse functional properties”, etc. It is worth noting that an ontology can have different versions; e.g. the FOAF ontology is available in at least two different versions: 20050403 and 20050603. The page-generation process results in a page for the whole ontology, which links to a set of pages for each class, property, and individual. Fig. 7 shows a section of the wiki page generated for the class Person (version 20050403). The point marked ¤ indicates that other versions of the FOAF ontology exist and gives access to those pages through this link. The range of the interest property is Document, as defined in FOAF, whereas the range of the firstName property is Literal, as defined in the RDFS ontology. Whenever the RDFS ontology is stored in OMEMO, a link (solid underline) to the corresponding wiki page appears. Otherwise, the link will be underlined with a dotted line (an orphan link, i.e.
a link pointing to a non-existing page). The numbered list in Fig. 8 shows the names of the wiki pages pointed to. Finally, these automatically generated pages are not editable, so the “Edit” button is not activated on these pages. It is worth mentioning that users may also create manual pages pointing to these automatically generated pages, if they wish to add extra documentation to these ontologies (e.g., competency questions as identified in many ontology engineering methodologies, details of applications that are using them, etc.). Besides, all these pages (created manually or generated automatically by OMEMO) are indexed by the Lucene engine that is part of JSPWiki; therefore, the search facilities provided by JSPWiki cover any text available in the original ontologies plus any additional documentation. The process that follows once the user provides the ontology’s URL can be described as follows: - An HTTP connection to the specified URL is established. The file containing the ontology is downloaded and stored in a temporary folder. - The ontology is analyzed to detect which namespaces are used. This makes it possible to link related ontology pages to one another. For example, the FOAF ontology refers to the RDFS ontology in the definition of the FOAF:firstName property, when FOAF declares that FOAF:firstName is an RDFS:Literal (see Fig. 7, first row in the table). - Check 1: conflicts with prefixes. The system detects blank prefixes (namespaces with no prefix defined), duplicated prefixes (ontology O1 defines prefix p for namespace n, and ontology O2 defines the same prefix for a different namespace), and overwritten namespaces (ontology O1 defines namespace n with the prefix p1, and ontology O2 defines namespace n with the prefix p2). --- **Table 1** Comparison between traditional development of semantic web applications and the Fortunata-based approach.
<table> <thead> <tr> <th>Task</th> <th>Traditional development</th> <th>Fortunata-based development</th> </tr> </thead> <tbody> <tr> <td>Creation of pages</td> <td>CMS (Content Management System)</td> <td>Wiki</td> </tr> <tr> <td>Creation of forms</td> <td>HTML, CSS, DOM, AJAX</td> <td>Wiki forms</td> </tr> <tr> <td>Creation of web applications</td> <td>JSP, JSF, .NET technologies</td> <td>Wiki-engine</td> </tr> <tr> <td>Permissions, authentication</td> <td>Web server administrator competencies</td> <td>Wiki-engine</td> </tr> <tr> <td>Creation of ontologies</td> <td>Jena</td> <td>Fortunata</td> </tr> <tr> <td>Creation of semantic data</td> <td>Jena</td> <td>Fortunata</td> </tr> <tr> <td>Publication of ontologies and semantic data</td> <td>Jena + Web server administration</td> <td>Fortunata</td> </tr> </tbody> </table> --- **Fig. 7.** Snapshot of the OMEMO wiki page for class Person (version 20050403) belonging to the FOAF ontology. Check 2: ontology versions. O1 at URL1 and O2 at URL2 define the same namespace, but their content is different. For example, two versions of FOAF exist: 20050603 and 20050403. In version 603 there are properties, such as FOAF:isPrimaryTopicOf and FOAF:birthday, that are missing in version 403. Check 3: duplicated ontologies. The same ontology may be available at two or more different URLs. This is the case for Dublin Core, which involves two different URLs: - `purl.org/dc/elements/1.1/` - `dublincore.org/2006/08/28/dces.rdf`. Without this check, this would result in two different entries, `dc.20030324` and `dc.20060828`, whose content would be the same. Effective solutions to most of these problems have been implemented, resulting in a concrete naming scheme (Spec.prefix.version.elem) for the generated wiki pages, as shown in the numbered list in Fig. 7. 3.2.
VPOET VPOET [14] allows client-side web designers to create interactive templates for a given ontology component, not just to show semantic data (output templates) but also to request data from the user (input templates). These templates can be created by any user, from users with basic skills in client-side technologies, such as HTML or JavaScript, to professional web designers. This is possible because VPOET users only need to embed simple macros in the client-side web code, providing interaction templates for each ontology component. Information about each ontology component may be obtained, for instance, by reading the wiki pages generated by OMEMO, although this is not compulsory. Once the interaction template is finished, its creator indicates the features of the template, specifying details such as the template type (input or output), the behavior when the font size changes, sizes (preferred, minimum, maximum), the code type provided (HTML, JavaScript, CSS), or the dominant colors. As in any other Fortunata-based application, all the generated information is published as semantic data, so that it can be used by other internal or external applications. Although VPOET can be used by any user with basic skills in client-side web technologies, it has been created to let professional graphical web designers author attractive designs capable of handling semantic data. From a user’s point of view, this application is like any other web application, with form elements like text fields, radio buttons, or lists, as explained in the VPOET tutorial. The process to create a template starts by targeting an ontology component. In our example, the Person element from the FOAF ontology version 20050403 is selected. --- The process to create an output template comprises the following steps (see Fig. 8): 1. Getting information about the structure of the targeted element for the given ontology and version, \(e(o, v)\).
That is, knowing which sub-elements the element comprises. The visualization provider obtains this information by reading the wiki pages automatically generated by OMEMO. Fig. 7 shows a snapshot of the OMEMO wiki page for FOAF:Person in this version. 2. Authoring a graphical design in which the semantic data will be inserted. Web designers are free to use their favorite web authoring tool. 3. Choosing an identifier (ID) to create a wiki page with that ID. This wiki page shows information about the VP and its templates. 4. The graphical design comprises a set of files: images, and client-side code such as HTML, CSS, or JavaScript. (a) The client-side code is copied and pasted into the appropriate VPOET form fields. (b) Image files or “included” files must be uploaded to the provider wiki page, or to any web server. In either case, the client code must point correctly to these files. 5. A test loop starts that uses semantic data sources (typically external to VPOET) containing instances of the targeted element. A substitution process begins: (a) Absolute paths must be substituted by a specific macro. (b) In the locations of the semantic values, specific macros must be inserted. (c) The design is tested against the test data sources. (d) The loop finishes when the design produces a successful visualization for all the semantic test data sources. For this example, a test source can be http://www.eps.uam.es/~mrico/foaf.rdf. Part (a) of Fig. 9 shows a small part (two instances of Person) of the web page generated by using a given template. Each instance can be rendered individually (circles 1a and 2a), as well as each source (circles 1b and 2c). Circles 1c and 2c show the data stored in the data source about these instances. Part (b) of Fig. 9 shows the rendering of the individual by using the same template. This results in a semantic web browser that renders each source, jumping from data source to data source, for a given template and ontology element. 6.
The design is characterized by its creator, who provides information about the template features, such as template type (input or output), colors, size policy, or behavior under font changes. Most of the effort required to create a template goes into the test loop, especially into the insertion of macros. VPOET has been designed to let users reuse any other template. This is achieved by (1) rendering an element with a specified template (the designer’s own or someone else’s) and (2) links pointing to data that render the destination element of a relation, again with a specified template. **Fig. 9.** Testing the example template against a given semantic data source. 4. Evaluating usability and collaborative features Usability can be defined as “the ease of use and acceptability of a system for a particular class of users carrying out specific tasks in a specific environment” [7]. In the context of semantically-enabled web applications, usability is still a major challenge [6]. Hence we have focused our efforts on trying to measure and understand how usable Fortunata-based applications are. For this purpose, we have analysed the usability of the two applications described in the previous section (VPOET and OMEMO), as examples of the common types of applications that can be generated with the Fortunata framework. The literature on usability identifies two main groups of usability evaluation techniques [7]: - Test Methods, which are normally applied to running applications (or at least to early prototypes of these applications), and require real users. - Inspection Methods, which are normally used during the design phase of these applications to identify potential usability problems, and do not require real users but only usability evaluators. In our test we applied Inspection Methods, since our focus was to identify potential usability problems in applications during their design phase.
Inspection Methods comprise different techniques, such as Heuristic Evaluation (HE), Cognitive Walkthrough, and Action Analysis, with different levels of competencies required from the evaluators. The first one (HE) was selected because it requires the least competencies from the evaluators, which means that evaluators did not need to be usability experts, who are difficult to find. In fact, a set of 3–5 evaluators applying this technique can typically identify 75–80% of all usability problems [1]. Evaluators were asked whether the user interface followed some well-known usability principles, and the evaluation results were used to improve Fortunata and to create a “usability guide” for Fortunata-based developers. The second type of evaluation focused on the adequacy of the contributive collaboration feature of Fortunata. In this experiment, several developers with similar competencies in client-side technologies and a minimal background in semantic web technologies were selected and divided into two groups (identified respectively as “A” and “B”). A common goal was proposed for both groups, consisting of the incremental creation of three semantically-enabled web applications: 1. Implement a “Personal agenda”, with a user interface to provide name, mail, and telephone; 2. Use the data from step 1 to add the necessary functionality to provide a user interface to schedule meetings for a given person; 3. Use the data from step 2 to add the functionality to display the people involved in meetings for a given date. The first group (“A”) had no training on the Fortunata framework, so they were free to use their preferred technologies and development tools. The only restriction for them was that they had to follow the usability “eight golden rules” (described in the next subsection).
The second group (“B”) received some training (in the form of a 20-min practical tutorial) on the Fortunata framework, and was requested to use this framework and to follow the “usability guide” created from the first experiment. The experiment ended with the participants filling in a questionnaire with quantitative and qualitative questions. 4.1. Experimental setup 4.1.1. Usability experiment Five usability evaluators were recruited from our academic institutions. They were requested to evaluate independently early-prototype versions of VPOET and OMEMO. During the evaluation session, each evaluator used the applications several times, from different starting points, and inspected the interactive elements. They had to answer questions related to “the eight golden rules” [16], shown in Table 2. The questionnaire comprised ten questions from [10]. This is a Likert scale-based questionnaire, i.e. “one based on forced choice questions, where a statement is made and the respondent then indicates the degree of agreement or disagreement with the statement on a 5 (or 7) point scale” [2], with seven possible values for the answers, ranging from 1 (hard) to 7 (easy). Besides the choice, each question had an optional comments field. Table 3 shows the questions. Additionally, each evaluator provided us with a list of recommendations to improve usability. These comments and recommendations were analyzed after the independent evaluations were carried out. 4.1.2. Collaboration experiment For the second experiment, six students were selected from a “Semantic Web Technologies” master’s course in Computer Science at one of our institutions (Universidad Autónoma de Madrid). Three students were assigned to the “A” group (traditional developers) and three to the “B” group (Fortunata developers). The questionnaire comprised two main blocks of questions, related to complexity, collaboration, and contribution.
Some questions (Q3–Q6) used the Likert-based range of values described above, and the rest provide a numerical (continuous) value. These questions are shown in Table 4. Table 2 Usability “Eight golden rules”. <table> <thead> <tr> <th>ID</th> <th>Rule</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Consistency</td> <td>There must be consistency in the actions, terminology (messages, menus, and help windows), and graphics (colors, layout, and fonts)</td> </tr> <tr> <td>2</td> <td>Universal usability</td> <td>Users have diverse needs; the system must therefore provide facilities to transform content, catering not only to impaired users but also to differences between beginners and experts (beginners need explanations, experts need shortcuts) and across age ranges</td> </tr> <tr> <td>3</td> <td>Informative feedback</td> <td>Every action in the system must produce feedback. For frequent, minor actions the response can be modest, but infrequent or important actions must produce a more substantial response</td> </tr> <tr> <td>4</td> <td>Dialogs</td> <td>Dialogs must be designed to yield closure. Sequences of actions must be organized into groups with a beginning, middle, and end. An example is the shopping cart in web applications, with a visualization of finished and pending stages</td> </tr> <tr> <td>5</td> <td>Error prevention</td> <td>The system must prevent users from making mistakes, but if an error occurs, it must provide a simple, constructive, and specific solution</td> </tr> <tr> <td>6</td> <td>Undo</td> <td>Allow users to undo actions easily. Everything should be undoable</td> </tr> <tr> <td>7</td> <td>Internal locus of control</td> <td>Support the user’s internal locus of control. Expert users must have the sensation of being in control of the tool. Users must initiate actions, not merely respond to them</td> </tr> <tr> <td>8</td> <td>Memory load</td> <td>Reduce the short-term memory load.
Avoid multiple windows, codes, or complex sequences</td> </tr> </tbody> </table> Table 3 Questionnaire for usability evaluators of Fortunata-based applications. <table> <thead> <tr> <th>ID</th> <th>Question</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Visibility of system status</td> </tr> <tr> <td>2</td> <td>Match between system and the real world</td> </tr> <tr> <td>3</td> <td>User control and freedom</td> </tr> <tr> <td>4</td> <td>Consistency and standards</td> </tr> <tr> <td>5</td> <td>Error prevention</td> </tr> <tr> <td>6</td> <td>Recognition rather than recall</td> </tr> <tr> <td>7</td> <td>Flexibility and efficiency of use</td> </tr> <tr> <td>8</td> <td>Aesthetic and minimalist design</td> </tr> <tr> <td>9</td> <td>Help users recognize, diagnose, and recover from errors</td> </tr> <tr> <td>10</td> <td>Help and documentation</td> </tr> </tbody> </table> 4.2. Experimental results 4.2.1. Usability experiment For the first experiment, Fig. 10 shows the average values assigned by the evaluators to each question and their standard deviations. The average usability value (discontinuous line) was 5.66 (in the range [1,7]), with a std. dev. of 1.16 (dotted lines). Question Q3 (“User control and freedom”) had low values due to the lack of undo/redo features. Although the wiki engine provides a form of undo through version control, reverting a wiki page to any previous state, the functionality implemented by F-plugins should support this feature more prominently. The current versions of OMEMO and VPOET do not implement this feature. Many questions (Q2, Q3, Q5, Q8, Q9) had high consensus (low standard deviation) among evaluators (std. dev. = 0.55). The lowest consensus was for Q6 (std. dev. = 0.84). The (qualitative) recommendations from the evaluators and their relationship to the “eight golden rules” are summarized in Table 5. Some recommendations were incorporated into the Fortunata API (e.g. Rec1), others were generic guidelines (e.g.
Rec2 and Rec3) that do not have a specific implementation in Fortunata. Finally, there was a group of recommen- dations that can be achieved by “hacking” JSPWiki (e.g. Rec4 and Rec5), but other recommendations cannot be followed because the current wiki version (JSPWiki version 2.4) does not support them (e.g. Rec6 and Rec7) or because they are not implemented yet (e.g. Rec8). The “usability guide for Fortunata-based developers”, comprising these recommendations can be found at http://ishtar.ii.uam.es/fortunata/Wiki.jsp?page=UsabilityRecommendations4Fortunata. 4.2.2. Collaboration experiment From the questionnaire used in the second experiment (see Table 4), two kind of results can be considered, quantitative (Q1 and Q2 questions), and qualitative (Q3–Q6). Quantitative questions were designed to measure the development effort of Fortunata-based applications (see Fig. 11). The results for Q1 show that the number of development hours (sum of the three contributive steps) reduces about 40% using Fortunata against traditional technologies. The results obtained for Q2 show that Table 4 Questionnaire for semantically-enabled web applications developers. <table> <thead> <tr> <th>ID</th> <th>Evaluation goal</th> <th>Question</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Complexity</td> <td>Hours dedicated to create the application</td> </tr> <tr> <td>2</td> <td>Complexity</td> <td>Tools used to create the application</td> </tr> <tr> <td>3</td> <td>Complexity</td> <td>Level of difficulty to create the application</td> </tr> <tr> <td>4</td> <td>Collaboration</td> <td>Level of difficulty to follow the usability “eight golden rules”</td> </tr> <tr> <td>5</td> <td>Collaboration</td> <td>Level of dependencies on “previous code”</td> </tr> <tr> <td>6</td> <td>Collaboration</td> <td>Level of difficulty to share your application’s source code</td> </tr> </tbody> </table> ![Fig. 10. 
Questionnaire for semantically-enabled web applications developers.](image) Table 5 Aggregated usability recommendations provided by evaluators. <table> <thead> <tr> <th>ID</th> <th>Recommendation</th> <th>Rules involved</th> <th>Solution provided by Fortunata API</th> </tr> </thead> <tbody> <tr> <td>Rec1</td> <td>Execution messages must have a fixed location, font size and colors</td> <td>#1, #2</td> <td>Fortunata API provides a unified method for displaying execution messages, with three different warn levels (“success”, “warning” and “error”), visually differentiated</td> </tr> <tr> <td>Rec2</td> <td>Use links to data pages/help to avoid remembering codes or identifiers.</td> <td>#8, #7, #3</td> <td>General recommendation with no effects on Fortunata API</td> </tr> <tr> <td>Rec3</td> <td>Forms should fit in a single page (avoiding page scrolls). Tabs usage is recommended for large forms</td> <td>#5, #7, #8</td> <td>General recommendation with no effects on Fortunata API</td> </tr> <tr> <td>Rec4</td> <td>Redirection by means of links</td> <td>#6</td> <td>There is no dynamic redirection. Therefore, the execution of a plugin cannot change the wiki page. The solution is to place a link to the destination page in the plugin’s execution message text</td> </tr> <tr> <td>Rec5</td> <td>Improve form features</td> <td>#4</td> <td>JSPWiki imposes some restrictions on forms: (1) only buttons can fire plugins, (2) JavaScript is not allowed, (3) there are no lists. Evaluators did not find severe usability problems</td> </tr> <tr> <td>Rec6</td> <td>Dynamic change of the skin</td> <td>#2</td> <td>The current version of JSPWiki supports skins, but they cannot be changed dynamically. Most skins successfully support changes in the font size</td> </tr> <tr> <td>Rec7</td> <td>Undo/Redo features</td> <td>#6</td> <td>The wiki-engine provides a version control feature for wiki pages, allowing undo/redo for wiki contents. 
However, concerning F-plugin functionality, undo/redo must be implemented by the plugin’s developer</td> </tr> <tr> <td>Rec8</td> <td>Advanced Editors (e.g. colored source code, auto fill text fields)</td> <td>#5, #4</td> <td>This functionality has not been implemented in JSPWiki yet by any contributor. These features would improve VPOET templates editor</td> </tr> </tbody> </table> the number of development tools is reduced by about 60% (the tool used by the three developers belonging to the “B” group was just the Fortunata API, which produces a std. dev. = 0.0). Fig. 12 shows the results for the rest of the questions. In all of them, the reduction achieved by using Fortunata is between 10% (Q4) and 60% (Q6). Group A had uniform consensus for all the questions (std. dev. = 0.58) except Q4 (std. dev. = 1.0). Group B had the same uniform consensus for all the questions (std. dev. = 0.58) except Q2, in which the agreement was complete (std. dev. = 0.0). 5. Conclusions and future work The work presented in this paper is focused on the provision of a simple and easily extensible programming framework that facilitates the creation of semantically-enabled web applications, while at the same time allowing developers to contribute new applications that can be reused by others. The focus of our work is hence on: (1) simple and collaborative development environments, (2) facilities for reusing the contributed functionality, and (3) minimal dependencies between developers. To achieve these requirements, Fortunata takes advantage of the functions provided by an open source wiki-engine and an ontology management library (Jena), simplifying their use by hiding their complexity from non-expert developers. A usability study has been carried out in order to ensure that Fortunata-based applications fulfil basic usability requirements. The results of this study provided some improvements to the Fortunata API as well as a usability guide for Fortunata developers. 
Following this guide, an initial study with real developers has been used to measure quantitative (complexity of Fortunata-based development) and qualitative (contributive collaboration facilities) aspects of Fortunata. Results show good usability levels and a reduction in software development effort. As a proof-of-concept, two applications (VPOET and OMEMO) were created by using the Fortunata framework. The experiments show that this infrastructure is useful to implement real, reusable, and shareable semantically-enabled web applications. However, these applications also point out some of the main drawbacks of Fortunata, which are mainly due to the limitations inherited from JSPWiki in forms (e.g. no lists, no JavaScript support, no advanced editors, no dynamic page redirection). Some of these limitations have already been overcome, and future versions of Fortunata will address the rest, providing users with better user interfaces. We will also incorporate some of the ideas obtained from the current work on Semantic Pipes, especially with respect to the integration of data coming from external sources, to the declarative creation of transformational workflows for semantic data, and to visualization, providing better user interaction by means of integration with VPOET interactive templates. Finally, some of the work that we are currently doing is not focused on the platform itself but on one of the applications that were evaluated: VPOET. We are now proposing mechanisms to select the most appropriate template for a given user profile, considering aspects such as the user's visual impairments, interaction device, and aesthetic preferences. 
This experiment, aimed at real user needs, will show the power of reusing different types of visualizations for the same semantic data, something that has already been shown in VPOET and that so far has only been exploited by semantic portals.
Efficient Accessing and Searching in a Sequence of Numbers Jungjoo Seo and Myoungji Han Department of Computer Science and Engineering, Seoul National University, Seoul, Korea jjseo@theory.snu.ac.kr, mjhan@theory.snu.ac.kr Kunsoo Park* Department of Computer Science and Engineering and Korea Institute of Computer Technology, Seoul National University, Seoul, Korea kpark@theory.snu.ac.kr Abstract Accessing and searching in a sequence of numbers are fundamental operations in computing that are encountered in a wide range of applications. One of the applications of the problem is cryptanalytic time-memory tradeoff which is aimed at a one-way function. A rainbow table, which is a common method for the time-memory tradeoff, contains elements from an input domain of a hash function that are normally sorted integers. In this paper, we present a practical indexing method for a monotonically increasing static sequence of numbers where the access and search queries can be addressed efficiently in terms of both time and space complexity. For a sequence of \( n \) numbers from a universe \( U = \{0, \ldots, m-1\} \), our data structure requires \( n \lg(m/n) + O(n) \) bits with constant average running time for both access and search queries. We also give an analysis of the time and space complexities of the data structure, supported by experiments with rainbow tables. Category: Smart and intelligent computing Keywords: Data structure; Access/search; Rank/select; Time-memory tradeoff I. INTRODUCTION Given a monotonically increasing sequence \( A \) of \( n \) numbers from a finite universe \( U = \{0, \ldots, m-1\} \) of cardinality \( m \), let us consider the following two queries. - \( \text{access}(i) \) : return the \( i \)-th number in \( A \). - \( \text{search}(x) \) : return an index \( i \) such that \( A[i] = x \) and \(-1 \) otherwise. 
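Before any compression is involved, the two queries have a straightforward baseline on a plainly stored sorted array. The sketch below is ours, not from the paper (the class and method names are our own); it simply fixes the interface that the later data structures must match, using 0-based indices and Python's standard `bisect` module for the binary search.

```python
import bisect

class SortedSequence:
    """Baseline: store the sorted numbers verbatim (n * ceil(lg m) bits)."""

    def __init__(self, numbers):
        self.a = sorted(numbers)

    def access(self, i):
        """Return the i-th number (0-based here)."""
        return self.a[i]

    def search(self, x):
        """Return an index i with a[i] == x, or -1 if x is absent."""
        i = bisect.bisect_left(self.a, x)
        return i if i < len(self.a) and self.a[i] == x else -1
```

This costs the full \( n \lceil \lg m \rceil \) bits; the point of the paper is to approach \( n \lg(m/n) + O(n) \) bits while keeping both queries fast.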
We want to answer these two queries efficiently while consuming as little space as possible on the word RAM model with word size \( \Theta(\lg m) \) bits. One of the applications of the problem is cryptanalytic time-memory tradeoff (TMTO), which is aimed at a one-way function. In TMTO, a number of huge tables of integers are generated and stored in non-decreasing order. Storing a sorted sequence of integers is exactly the problem we want to address in this paper. There are numerous areas beyond TMTO that encounter integer indexing problems, such as databases, text indexing, and social network graphs [1]. There are several data structures that represent a sequence of numbers [2–8]. The wavelet tree represents a sequence of numbers from the range [1..r], supporting access, rank, and select in \( O(\log r) \) time where \( r = O(\text{polylog}(n)) \). Here, \( \text{rank}_c(p, V) \) returns the number of \( c \)'s up to position \( p \) in \( V \), and \( \text{select}_c(j, V) \) returns the position of the \( j \)-th \( c \) in \( V \). Ferragina et al. [8] improved the time complexity to constant time using \( nH_0(A) + O(n) \) bits, where \( H_0(A) \) is the zero-order empirical entropy of \( A \). Brodnik and Munro [4] presented a succinct data structure that supports search in constant time with a space requirement of \( B + o(B) \) bits, where \( B = \lceil \log_2 \binom{m}{n} \rceil \) is the information-theoretic lower bound on the space needed to store a set of \( n \) elements from a universe of size \( m \). Pagh’s data structure [5] achieved constant time with the improved space of \( B + O(n) \) bits. Raman et al. [7] gave a succinct data structure that also supports rank/select operations. In this paper, we show that there is a simple data structure that indexes a non-decreasing sequence of integers to support not only a membership query but also a random access operation. We also give an analysis of the time and space complexity of the data structure. 
The average running time of the two operations is constant, assuming that select is done in constant time, and the required space is \(n \log (m/n) + O(n)\). While theoretical succinct data structures in the literature are very complex to implement, the data structure explained in Section III is simple to implement. In Section IV, we give an improved data structure to support multisets, exploiting the idea of [7]. Because our data structures are based on rank/select, we adopted multiple implementations of rank/select data structures [7, 9, 10], and the experimental results are presented in Section V. To verify practicality, we tested our data structures on rainbow tables. II. PRELIMINARIES Here, we introduce a simple method to index a monotonically non-decreasing sequence \(A\) from \(U\) that is explained in [11, 12]. This method will be called Sindex throughout the paper. We denote \(m\) as the size of \(U\) and \(n\) as the size of \(A\). Then we can represent any element of \(U\) with \( \lceil \log m \rceil \) bits. Consider the \( s \) most significant bits of each number in \(A\), where \( s \leq \log n \). For each integer \(0 \leq i < 2^s\), if we can locate the boundaries of the maximal subarray \(A[l..h]\) that contains numbers having \(i\) as a prefix of their binary representation, the numbers can be stored with their \( \lceil \log m \rceil - s \) least significant bits without loss of information. To directly determine the boundaries, we build an index table \(I\). The index table \(I\) contains \(2^s\) elements of size \( \lceil \log n \rceil \) bits each, and \(I[i]\) is the smallest index \(j\) such that the \(s\) most significant bits of \(A[j]\) are greater than or equal to \(i\). With the index table \(I\) and the reduced integer array \(R\) of \(n\) numbers of size \( \lceil \log m \rceil - s \) bits each, all elements in \(A\) are stored without loss of information. A. 
Access To retrieve the value of \(A[i]\) for \( \text{access}(i) \), suppose \(A[i]\) is the concatenation of two bit strings \(q\) and \(r\) of size \(s\) and \( \lceil \log m \rceil - s \) bits, respectively. To compute \(q\), we binary search \(I\) for the position of the largest entry that is smaller than or equal to \(i\). \(r\) can be obtained by directly accessing the reduced array \(R\). Because the number of elements in \(I\) is \(2^s\), \( \text{access}(i) \) requires \(O(s)\) time with the index table. B. Search Let \(x\) be the given number to search for. Also, let \(x\) be the concatenation of two bit strings \(x_q\) and \(x_r\) where the sizes of \(x_q\) and \(x_r\) are \(s\) and \( \lceil \log m \rceil - s \) bits, respectively. First, we have to find the boundaries \(l\) and \(h\) of the maximal subarray so that all the elements in \(A[l..h]\) have \(x_q\) as their prefixes. \(l\) can be obtained by accessing \(I[x_q]\), and \(h\) is simply \(I[x_q + 1] - 1\). Note that if there are no numbers with the prefix \(x_q\) in \(A\), then \(h = l - 1\), which indicates that \(x\) does not exist. After \(l\) and \(h\) where \(l \leq h\) are computed, we can determine the existence and the position of \(x\) by finding \(x_r\) in \(R[l..h]\) using binary search. C. Space Requirement The number of bits required for the index table method is the sum of the bits for the two components \(I\) and \(R\). The spaces for \(I\) and \(R\) are \(2^s \lceil \log n \rceil\) and \(n(\lceil \log m \rceil - s)\) bits, respectively. We set \(s\) to \( \lfloor \log(n/\log n) \rfloor \) to minimize the space requirement for the whole data structure. Thus, the total space is \( n(\lceil \log m \rceil - s) + O(n) \) bits, since \( 2^s \lceil \log n \rceil = O(n) \) for this choice of \(s\). D. Time Complexity To analyze the time complexity of access and search, we set \(s\) to \( \lfloor \log(n/\log n) \rfloor \) to minimize the space requirement. The required time for access is \(O(s) = O(\log n)\) since binary search on the table of size \(2^s\) takes \(O(\log 2^s)\) time and accessing \(R\) takes \(O(1)\). 
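The Sindex construction and the two query procedures described in Sections II-A and II-B can be sketched as follows. This is our own illustrative Python (0-based indices; a sentinel entry is appended to the index table so the upper boundary is always defined), not the authors' implementation.

```python
import bisect

def build_sindex(A, m_bits, s):
    """Split each sorted number into an s-bit prefix and (m_bits - s)-bit
    remainder.  I[i] = first index whose prefix is >= i; the sentinel
    I[2**s] = len(A) makes h = I[q + 1] - 1 always well defined."""
    low = m_bits - s
    R = [x & ((1 << low) - 1) for x in A]   # reduced array: low bits only
    I, j = [], 0
    for i in range(1 << s):
        while j < len(A) and (A[j] >> low) < i:
            j += 1
        I.append(j)
    I.append(len(A))                        # sentinel
    return I, R, low

def access(I, R, low, i):
    """A[i]: recover the prefix as the largest q with I[q] <= i."""
    q = bisect.bisect_right(I, i) - 1
    return (q << low) | R[i]

def search(I, R, low, x):
    """Index of x in A, or -1: look up the prefix run, then binary search R."""
    q, r = x >> low, x & ((1 << low) - 1)
    l, h = I[q], I[q + 1] - 1               # maximal run with prefix q
    k = bisect.bisect_left(R, r, l, h + 1)  # binary search the remainders
    return k if k <= h and R[k] == r else -1
```

For \(A = [3, 5, 9, 12, 13]\) with \( \lceil \log m \rceil = 4 \) and \(s = 2\), the prefixes are 0, 1, 2, 3, 3, so \(I = [0, 1, 2, 3, 5]\) and \(R = [3, 1, 1, 0, 1]\).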
For search, we analyze the time complexity in the average case assuming that each element of \(A\) is chosen uniformly at random from \(U\). Theorem 1. Assume Sindex and each element of \(A\) is randomly chosen from \(U\). Given a number \(x \in U\), the binary search on \(R\) in search can be done in \(O(\log \log n)\) time in the average case. Proof. Consider a fixed element \(x \in U\) to search for. Now imagine choosing \(n\) numbers from \(U\) to construct \(A\). Let \(X_i\), \(1 \leq i \leq n\), be a random variable such that \(X_i = 1\) if the \(i\)-th chosen number has the same prefix of size \(s\) as that of \(x\), and \(X_i = 0\) otherwise. Let \(X = X_1 + \ldots + X_n\), i.e., \(X\) is the random variable that represents the number of elements in \(A\) that have the same prefix as that of \(x\). \(X\) is the size of the subarray on which binary search is performed in search. When a number is chosen randomly from \(U\), the probability that its prefix equals that of \( x \) is \( 1/2^s = 1/2^{\lfloor \log(n/\log n) \rfloor} \). Thus \( X_i \) is a Bernoulli random variable with \( p = 1/2^{\lfloor \log(n/\log n) \rfloor} \). Because the \( X_i \)'s are independent and identically distributed random variables, \( X \) is a binomial random variable with parameters \( n \) and \( p = 1/2^{\lfloor \log(n/\log n) \rfloor} \). Thus, \( E[X] = np = n/2^{\lfloor \log(n/\log n) \rfloor} \). By Jensen’s inequality [13], \[ E[\log X] \leq \log E[X] = \log \frac{n}{2^{\lfloor \log(n/\log n) \rfloor}} \leq \log \frac{n}{2^{\log(n/\log n) - 1}} = \log (2 \log n) = \log \log n + 1 = O(\log \log n) \] http://dx.doi.org/10.5626/JCSE.2015.9.1.1 III. PRACTICAL INDEXING In this section, a more efficient data structure with respect to time and space complexity is explained. The improved data structure will be called \( \text{Pindex} \) throughout this paper. 
To improve the space efficiency of the index table of \( \text{Sindex} \), we adopt a unary index scheme from Elias [14] and Fano [15] that is used frequently in the literature [16, 17]. As in Section II, prefixes of a fixed length of each number in the given sequence are used to construct an index. To make the presentation self-contained, we first explain the representation from [14] and give the analyses of time and space complexity. Given a monotonically non-decreasing sequence \( A \) of \( n \) numbers from a finite universe \( U \), let \( z = \lfloor \log n \rfloor \), the quotient \( q_i \) be the \( z \) most significant bits of \( A[i] \), and the remainder \( r_i \) be the \( \lceil \log m \rceil - z \) least significant bits. Note that the sequence of \( q_i \) is also monotonically non-decreasing, i.e., \( 0 \leq q_i \leq q_{i+1} < 2^z \) for \( 1 \leq i < n \). The remainders \( r_1, \ldots, r_n \) are stored in table \( R \) by simply concatenating them using \( n(\lceil \log m \rceil - z) \) bits. To store the quotients \( q_1, \ldots, q_n \), we use the unary representation for the differences of the consecutive quotients. More specifically, \( q_i \) is encoded to \( 0^{q_i - q_{i-1}}1 \), where \( q_0 = 0 \) and \( 0^x \) is the bit string consisting of \( x \) zeros. The encoded quotients are concatenated to a single bit string \( Q \). \( Q \) requires at most \( 2n \) bits, because the number of 1s is \( n \) and the number of 0s is at most \( 2^z \leq n \). Note that the number of 1s is greater than or equal to that of 0s. Before we proceed with the analysis, let us briefly introduce \( \text{rank} \) and \( \text{select} \), because they are performed on the bit string \( Q \) for \( \text{access} \) and \( \text{search} \). - \( \text{rank}_c(p, V) \) : return the number of \( c \)'s up to position \( p \) in \( V \). - \( \text{select}_c(j, V) \) : return the position of the \( j \)-th \( c \) in \( V \). \( c \) can be either 0 or 1. 
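The encoding just described, together with naive (linear-time) rank/select for reference, can be sketched in Python as follows. This is our own illustrative code with our own names; a real Pindex would replace the naive `select` with one of the constant-time structures cited above, and it uses \( z = \lfloor \lg n \rfloor \) so that \(Q\) stays within \(2n\) bits. The `access` and `search` functions below follow the procedures detailed next in Sections III-A and III-B (0-based indices).

```python
import bisect

def select(c, j, V):
    """Naive select_c(j, V): 0-based position of the j-th (1-based) c, or -1."""
    seen = 0
    for pos, bit in enumerate(V):
        if bit == c:
            seen += 1
            if seen == j:
                return pos
    return -1

def encode_pindex(A, m_bits):
    """Unary-encode quotient gaps into Q; keep low bits in the plain table R."""
    n = len(A)
    z = max(n.bit_length() - 1, 0)       # z = floor(lg n)
    low = m_bits - z
    parts, prev = [], 0
    for x in A:
        q = x >> low
        parts.append('0' * (q - prev) + '1')  # gap in unary, one 1 per element
        prev = q
    R = [x & ((1 << low) - 1) for x in A]
    return ''.join(parts), R, low

def access(Q, R, low, i):
    """A[i]: the quotient is the number of 0s before the (i+1)-th 1 in Q."""
    q = select('1', i + 1, Q) - i        # position minus the i ones before it
    return (q << low) | R[i]

def search(Q, R, low, x):
    """Index of x, or -1: boundaries come from two select-for-0 queries."""
    n = len(R)
    q, r = x >> low, x & ((1 << low) - 1)
    if q > Q.count('0'):                 # quotient larger than any stored one
        return -1
    l = (select('0', q, Q) - q + 1) if q > 0 else 0
    j = select('0', q + 1, Q)
    h = (j - q - 1) if j != -1 else n - 1
    k = bisect.bisect_left(R, r, l, h + 1)
    return k if k <= h and R[k] == r else -1
```

For \(A = [3, 5, 9, 12, 13]\) with \( \lceil \log m \rceil = 4 \): \(n = 5\), \(z = 2\), the quotients are 0, 1, 2, 3, 3, so \(Q = \texttt{10101011}\) (8 bits \( \leq 2n \)) and \(R = [3, 1, 1, 0, 1]\).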
There has been extensive research on rank/select data structures in the literature, aiming either to achieve theoretical optimality in time and space [7] or to give practical implementations with plentiful experiments [9, 10, 18-20]. A. Access We perform the same procedure that was introduced in [17]. Given a query \( \text{access}(i) \), \( q_i \) and \( r_i \) need to be computed to obtain \( A[i] \). To compute \( q_i \), we first compute the position of the \( i \)-th 1 in \( Q \), and then calculate the number of 0s up to the position of the \( i \)-th 1 in \( Q \). Because the number of 0s before the \( i \)-th 1 is \( \sum_{j=1}^{i} (q_j - q_{j-1}) = q_i \), \( q_i \) is the number of 0s up to the \( i \)-th 1 in \( Q \). Thus, \( q_i = \text{select}_1(i, Q) - i \). \( r_i \) can be obtained by accessing \( R \) directly. The required time for access is \( O(se) \), where \( se \) is the cost of a select. B. Search Given a query \( \text{search}(x) \) where \( x \in U \), let \( q \) and \( r \) be the quotient and the remainder of \( x \), respectively. As in Section II, we first determine the boundaries \( l \) and \( h \) of the maximal subarray so that all the numbers in \( A[l..h] \) have \( q \) as their prefixes. If such a subarray exists, the \( q \)-th 0 in \( Q \) is immediately followed by at least one 1, and the size of the subarray is equal to the number of consecutive 1s following the \( q \)-th 0. Thus, letting \( i \) and \( j \) be \( \text{select}_0(q, Q) \) and \( \text{select}_0(q + 1, Q) \), respectively, \( l \) and \( h \) are computed by \( l = i - q + 1 \) and \( h = j - q - 1 \). Note that \( h = l - 1 \) if there is no number that has \( q \) as its prefix in \( A \). Once we compute the boundary, the subarray \( A[l..h] \) is searched for \( r \) by binary search. THEOREM 2. Assume Pindex and each element of \( A \) is randomly chosen from \( U \). 
Given a number \( x \in U \), the binary search on the remainder table \( R \) in search can be done in \( O(1) \) time in the average case. Proof. Consider the random variable \( X \) in Section II-D. Because \( p = 1/2^z = 1/2^{\lfloor \log n \rfloor} \), \( E[X] = n/2^{\lfloor \log n \rfloor} \leq 2 \). By Jensen’s inequality, \( E[\log X] \leq \log E[X] = O(1) \). COROLLARY 1. Search for a given number \( x \in U \) requires \( O(se) \) time in the average case, where \( se \) is the cost of a \( \text{select} \) on \( Q \). C. Space Requirement The data structure explained in Section III consists of three components: the table \( R \), the bit string \( Q \), and an auxiliary data structure to support \( \text{select} \) on \( Q \). To store table \( R \), we need \( n(\lceil \log m \rceil - \lfloor \log n \rfloor) \) bits. \( Q \) requires at most \( 2n \) bits. Thus, the total space requirement depends on the data structure that is chosen to support \( \text{select} \) on \( Q \). Let \( L(n, u) \) be the space required to support \( \text{select} \) on a bit string of length \( u \) that contains \( n \) ones. Then the total space requirement is $O(n) + n \left(\lceil \log m \rceil - \lfloor \log n \rfloor \right) + L(n, 2n)$. There are many data structures in the literature that support select [10]. Although the space requirements of their implementations differ, one can construct the auxiliary data structure using extra space less than the size of Q. Thus, we can say that $L(n, 2n)$ is $O(n)$. By omitting ceilings and floors, the space complexity becomes $$n \log \frac{m}{n} + O(n).$$ IV. MULTISET While the Pindex data structure can accommodate multisets, there can be high redundancy in the \(R\) table. We show another data structure, Mindex, that reduces the space requirement in the case of multisets by the technique used in [7]. 
Let \(S\) be the set of distinct elements of a monotonically non-decreasing sequence \(A\) of size \(n\), and \(P\) be a bit vector of length \(n\) where \(P[i]\) is 0 if \(A[i] = A[i-1]\) and 1 otherwise. Then we build a Pindex of \(S\) and a rank/select data structure of \(P\) for 1-bits. The \(i\)-th element of \(A\) can be obtained by performing \( \text{access}(\text{rank}_1(i, P)) \) on \(S\) using its Pindex. To address \( \text{search}(x) \) on \(A\), we first compute \( i = \text{search}(x) \) using the Pindex of \(S\), and return \( \text{select}_1(i, P) \). Corollary 2. Mindex requires $\frac{n}{k} \log \frac{m}{n/k} + O(n)$ bits, where $k = n/|S|$ is the average number of times each element of \(A\) is repeated. V. EXPERIMENTAL RESULTS In the experiment, we measured the average size of the range for binary search on the \(R\) array for search, to verify the theorems that we proved, and tested the actual running time of access and search. The space requirement was also measured. The average binary search range on the \(R\) array does not depend on the implementation of the rank/select data structure, whereas the running time and space requirement do. In addition, we show the improvement of Mindex on the space requirement in the case of multisets. To demonstrate the efficiency of Pindex and Mindex in the real world, we conducted an experiment with rainbow tables. Various implementations of rank/select data structures were used for the experiment [7, 9, 10]. The implementations of [9], [10], and [7] are referred to as Kim, Vigna, and RRR, respectively. For RRR, we adopted the SDSL-Lite library [21], which implements RRR using techniques described in [19] and [20]. Table 1 shows the average size of the range for binary search on the \(R\) table for search with various sizes of sequences. For each sequence size, random sequences were generated from three different integer distributions: uniform, normal, and exponential. To generate a random number, we randomly chose a real number between 0 and 1, and then multiplied it by \(2^{40}\). 
The mean and standard deviation for the normal distribution were set to 0.5 and 0.05, respectively. The lambda for the exponential distribution was set to 6. The average size of the range for binary search on the \(R\) table for search was measured by performing search a million times with numbers chosen from the universe. As we expected from the theorems, the average range size for Pindex is constant, while it increases as \(n\) grows for Sindex. Note that the average size increases linearly with \(\log n\). It increases in discrete steps because a rounding function is used when computing \(s\) in Sindex. The space requirement for each implementation is shown in Fig. 1. All implementations of Pindex consume less space than Sindex. Among the rank/select data structures, the RRR implementation shows the best performance in terms of space requirement. For the sequence of size \(2^{28}\), the RRR-based Pindex reduces the space requirement by about 31% compared with Sindex. The measured running time of access is presented in Fig. 2. The x-axis is the size of the sequence and the y-axis is the time taken to perform a million accesses. As can be seen, all Pindex implementations except RRR took less time than Sindex. 
Table 1. The average size of the range for binary search on the \(R\) table for search <table> <thead> <tr> <th></th> <th colspan="3">Sindex</th> <th colspan="3">Pindex</th> </tr> <tr> <th>$n$</th> <th>Uniform</th> <th>Normal</th> <th>Exponential</th> <th>Uniform</th> <th>Normal</th> <th>Exponential</th> </tr> </thead> <tbody> <tr> <td>$2^2$</td> <td>1.999</td> <td>2.000</td> <td>2.001</td> <td>1.499</td> <td>1.624</td> <td>1.500</td> </tr> <tr> <td>$2^4$</td> <td>3.999</td> <td>3.998</td> <td>4.002</td> <td>1.626</td> <td>1.499</td> <td>1.435</td> </tr> <tr> <td>$2^6$</td> <td>7.998</td> <td>8.006</td> <td>8.012</td> <td>1.690</td> <td>1.594</td> <td>1.376</td> </tr> <tr> <td>$2^{10}$</td> <td>15.997</td> <td>16.013</td> <td>15.984</td> <td>1.642</td> <td>1.567</td> <td>1.433</td> </tr> <tr> <td>$2^{14}$</td> <td>15.998</td> <td>16.003</td> <td>15.998</td> <td>1.633</td> <td>1.573</td> <td>1.428</td> </tr> <tr> <td>$2^{18}$</td> <td>31.999</td> <td>31.976</td> <td>31.973</td> <td>1.634</td> <td>1.570</td> <td>1.434</td> </tr> <tr> <td>$2^{22}$</td> <td>32.012</td> <td>32.041</td> <td>32.030</td> <td>1.632</td> <td>1.569</td> <td>1.433</td> </tr> <tr> <td>$2^{26}$</td> <td>31.993</td> <td>32.028</td> <td>31.939</td> <td>1.631</td> <td>1.569</td> <td>1.433</td> </tr> </tbody> </table> The difference becomes bigger as the size of the sequence increases. There was no noticeable difference among the three implementations of Pindex apart from RRR. RRR showed the worst performance, because select in the RRR implementation [21] has \( O(\lg n) \) complexity rather than the constant time described in [7], where \( n \) is the size of the sequence. Similarly, Fig. 3 presents the measured running time of search. Theoretically, search of Sindex has \( O(\lg n) \) time complexity while Vigna and Kim have constant complexity. Nevertheless, the results showed no remarkable difference among them. This is because the range of binary search in search of Sindex is negligibly small compared to the size of a sequence (see Table 1). 
RRR again showed the worst performance, because select takes \( O(\lg n) \) in the implementation of RRR [21], and it is invoked twice in search. Table 2 shows the measured space requirements and running time of access and search of Pindex and Mindex on randomly generated multisets of size \( 2^{26} \) from a universe of size \( 2^{48} \). For the rank/select data structure, we chose RRR and Vigna, which showed efficiency in space and time, respectively. The running time was measured by performing access and search 10 million times with random queries. The value of improvement in Table 2 is the ratio of the space requirement of Pindex to that of Mindex with the same rank/select data structure. Because the size of the \( R \) table takes a dominant proportion of the space requirement, Mindex shows much better efficiency in space compared with Pindex in the case of multisets. Also, as the average redundancy grows, the space requirement decreases. The running times of Mindex for both access and search are slightly longer than those of Pindex because there is one more rank and select invocation in access and search, respectively. To demonstrate that Pindex and Mindex are efficient in real-world applications, we tested the two data structures on rainbow tables. A rainbow table is one of the cryptanalytic time/memory tradeoff methods that aims to invert cryptographic hash functions. In a rainbow table, elements from an input domain of a hash function, which are normally represented as integers, are stored in sorted order. There are two types of rainbow tables: perfect and non-perfect. All elements in a perfect rainbow table are distinct, while a non-perfect rainbow table may contain repeated elements. The rainbow tables that were used in the experiment were generated to invert the SHA-1 hash function, and the input domain is a set of strings of length from 1 to 8 with lowercase letters, uppercase letters, and digits. 
The size of the input domain is \( 221{,}919{,}451{,}578{,}090 \approx 2^{47.657} \). Table 3 shows the measured running time of access and search, and the space requirements of the Sindex and Pindex data structures for a perfect rainbow table of 80,517,490 distinct elements. Because a perfect rainbow table is a set, we do not consider Mindex here. Regardless of the choice of a rank/select data structure, Pindex consumes less space than Sindex, as we expected. Although RRR has a disadvantage in running time, it outperforms the others in terms of space requirement. Table 2. The measured running time of access and search (seconds), and the space requirement (megabytes) for multisets of various average redundancies <table> <thead> <tr> <th>Average redundancy</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> </tr> </thead> <tbody> <tr> <td>RRR</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Pindex</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Access</td> <td>15.28</td> <td>14.74</td> <td>14.82</td> <td>13.60</td> </tr> <tr> <td>Search</td> <td>27.28</td> <td>26.22</td> <td>26.25</td> <td>24.76</td> </tr> <tr> <td>Space (MB)</td> <td>1425.37</td> <td>1425.24</td> <td>1425.14</td> <td>1425.05</td> </tr> <tr> <td>Vigna</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Access</td> <td>5.96</td> <td>5.11</td> <td>4.47</td> <td>4.77</td> </tr> <tr> <td>Search</td> <td>6.73</td> <td>5.98</td> <td>6.13</td> <td>5.64</td> </tr> <tr> <td>Space (MB)</td> <td>1664.00</td> <td>1664.00</td> <td>1664.00</td> <td>1664.00</td> </tr> <tr> 
<td>RRR</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Mindex</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Access</td> <td>21.02</td> <td>21.31</td> <td>20.36</td> <td>19.88</td> </tr> <tr> <td>Search</td> <td>36.86</td> <td>36.88</td> <td>36.2</td> <td>35.23</td> </tr> <tr> <td>Space (MB)</td> <td>1474.11</td> <td>779.73</td> <td>523.73</td> <td>409.45</td> </tr> <tr> <td>Vigna</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Access</td> <td>7.21</td> <td>5.21</td> <td>4.87</td> <td>4.43</td> </tr> <tr> <td>Search</td> <td>9.17</td> <td>7.54</td> <td>6.73</td> <td>6.32</td> </tr> <tr> <td>Space (MB)</td> <td>1779.57</td> <td>988.77</td> <td>713.23</td> <td>575.15</td> </tr> </tbody> </table> The sizes of multisets and the universe are $2^{26}$ and $2^{48}$, respectively. Table 3. The measured running time of access and search (seconds), and the space requirement (megabytes) for a perfect rainbow table that consists of 80,517,490 elements <table> <thead> <tr> <th>Operation</th> <th>Sindex</th> <th>Pindex</th> </tr> </thead> <tbody> <tr> <td>Access</td> <td>5.94</td> <td>15.80</td> </tr> <tr> <td>Search</td> <td>7.84</td> <td>28.10</td> </tr> <tr> <td>Space (MB)</td> <td>1666.54</td> <td>1245.71</td> </tr> </tbody> </table> Table 4. 
The measured running time of access and search (seconds), and the space requirement (megabytes) for two non-perfect rainbow tables <table> <thead> <tr> <th>Size of multiset</th> <th>Operation</th> <th>Sindex</th> <th>Pindex</th> <th>Mindex</th> </tr> </thead> <tbody> <tr> <td>Access</td> <td>7.79</td> <td>16.20</td> <td>5.68</td> <td>23.05</td> </tr> <tr> <td>Search</td> <td>6.42</td> <td>28.30</td> <td>6.62</td> <td>39.74</td> </tr> <tr> <td>Space (MB)</td> <td>1666</td> <td>1245</td> <td>1486</td> <td>734</td> </tr> <tr> <td>Improvement</td> <td>-</td> <td>1.34</td> <td>1.12</td> <td>2.27</td> </tr> <tr> <td>Access</td> <td>9.47</td> <td>17.10</td> <td>6.69</td> <td>23.97</td> </tr> <tr> <td>Search</td> <td>6.66</td> <td>29.70</td> <td>7.60</td> <td>41.93</td> </tr> <tr> <td>Space (MB)</td> <td>3971</td> <td>2932</td> <td>3488</td> <td>1271</td> </tr> <tr> <td>Improvement</td> <td>-</td> <td>1.35</td> <td>1.62</td> <td>2.31</td> </tr> </tbody> </table> To test the performance of Mindex, two non-perfect rainbow tables of size 80,530,636 and 202,331,368 were used in the experiment. The real number below each multiset size is the average redundancy of that table. The improvement value is the ratio of the space of Sindex to that of each of the Pindex and Mindex data structures. As shown in the table, both Pindex and Mindex achieve better space efficiency than Sindex, and Mindex consumes much less space than Pindex with both rank/select data structures. VI. CONCLUSIONS In this paper, we introduced two fundamental operations on a non-decreasing sequence of numbers and showed that there are data structures that support them efficiently in both time and space. The running times of both operations are proven to be constant, assuming that the numbers are chosen uniformly at random from their universe.
We also showed that these data structures are practically efficient by performing experiments on real-world data, e.g., rainbow tables for the cryptanalytic time/memory tradeoff. It would be interesting to find more applications of these data structures. ACKNOWLEDGMENTS This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT & Future Planning (2011-0029924). REFERENCES **Jungjoo Seo** Jungjoo Seo received his B.S. degree in Computer Science and Engineering from Sungkyunkwan University in 2009. He is currently a Ph.D. student in the Department of Computer Science and Engineering at Seoul National University. His research interests are in algorithms, theory of computation, and cryptography. **Myoungji Han** Myoungji Han received his B.S. degree in Computer Science and Engineering from Seoul National University in 2010. He is currently a Ph.D. student in the Department of Computer Science and Engineering at Seoul National University. His research interests are in theory of computation and string algorithms. **Kunsoo Park** Kunsoo Park received his B.S. and M.S. degrees in Computer Engineering from Seoul National University in 1983 and 1985, respectively, and his Ph.D. degree in Computer Science from Columbia University in 1991. From 1991 to 1993, he was a Lecturer at King's College, University of London. He is currently a Professor in the Department of Computer Science and Engineering at Seoul National University. His research interests include design and analysis of algorithms, cryptography, and bioinformatics.
Implementation of Cloud Repository for Secure Data Sharing GRADUATE PROJECT REPORT Submitted to the Faculty of The School of Engineering & Computing Sciences Texas A&M University-Corpus Christi Corpus Christi, TX in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science by Niharika Medaboina Spring 2015 Committee Members: Dr. Dulal Kar, Committee Chairperson; Dr. Long-Zhuang Li, Committee Member ABSTRACT Storage is the most prominent feature of cloud computing and is growing rapidly in popularity; it gives immediate access to information through web service application programming interfaces or web-based content management systems. Cloud storage providers distribute the data across multiple servers. These servers are maintained by hosting companies to provide immediate access, which increases the risk of unauthorized access to the private content of the data. This risk can be reduced by using encryption techniques. In the proposed system, the user encrypts all files with distinct keys before uploading them to the cloud. The user can upload files as private or public. Public files can be downloaded directly, but to download a private file, a user must send a request to the file owner. The user has the flexibility to request single or multiple files at a time. When the file owner accepts the request, the application server provides a single Access key extracted from the attributes of the requested files. This Access key is shared with the requesting user, who uses it to retrieve the private keys of the files. Using the private key, the ciphertext is converted into plaintext, and the plaintext is downloaded. This technique increases the flexibility of sharing files, since a single Access key is shared for multiple requested files. # TABLE OF CONTENTS Abstract ............................................................................................................................................
ii
Table of Contents .................................................................................. iii
List of Figures ....................................................................................... vi
1 BACKGROUND AND RATIONALE ....................................................... 1
1.1 Introduction ...................................................................................... 1
1.2 Cloud Service Models ...................................................................... 1
1.2.1 Software as a Service ................................................................... 1
1.2.2 Platform as a Service .................................................................... 2
1.2.3 Infrastructure as a Service ............................................................ 3
1.2.4 Deployment Models ...................................................................... 3
1.3 Cloud Storage .................................................................................. 4
1.4 Security Issues with Cloud Storage ................................................. 6
1.5 Encryption ........................................................................................ 7
1.5.1 Encryption Algorithms ................................................................... 8
2 NARRATIVE .........................................................................................
10
2.1 Problem Statement ......................................................................... 10
2.2 Motivation ........................................................................................ 11
LIST OF FIGURES
FIGURE 1.1: RELATION BETWEEN SERVICE MODEL AND DEPLOYMENT MODEL [4] ....... 4
FIGURE 1.2: STATISTICS OF DATA USED FOR CLOUD STORAGE [5] ............................... 5
FIGURE 1.3: STATISTICS OF ISSUES WITH CLOUD STORAGE [6] ..................................... 6
FIGURE 1.4: WORKING OF ENCRYPTION TECHNIQUE ....................................................... 8
FIGURE 1.5: WORKING OF SYMMETRIC ENCRYPTION ....................................................... 9
FIGURE 1.6: WORKING OF ASYMMETRIC ENCRYPTION ..................................................... 9
FIGURE 3.1: CLOUD REPOSITORY SYSTEM ARCHITECTURE ........................................... 14
FIGURE 3.2: SYSTEM DESIGN ............................................................................................. 16
FIGURE 3.3: DATA FLOW DIAGRAM OF USER UPLOADING FILES ................................... 17
FIGURE 3.4: DATA FLOW DIAGRAM OF USER FOR ACCEPTING/REJECTING A REQUEST ....... 18
FIGURE 3.5: DATA FLOW DIAGRAM OF A USER FOR DOWNLOADING THE FILE ............ 19
FIGURE 3.6: USE CASE DIAGRAM ...................................................................................... 20
FIGURE 3.7: SEQUENCE DIAGRAM ....................................................................................
21
FIGURE 3.8: ACTIVITY DIAGRAM ........................................................................................ 22
FIGURE 4.1: GENERATING UNIQUE ID FOR EACH USER REGISTERED ......................... 26
FIGURE 4.2: ENCRYPTING USER PASSWORD .................................................................. 27
FIGURE 4.3: REGISTRATION PAGE ..................................................................................... 28
FIGURE 4.4: LOGIN PAGE .................................................................................................... 29
FIGURE 4.5: COMPARING ENTERED PASSWORD DIGEST WITH DIGEST STORED IN THE CLOUD ....... 30
FIGURE 4.6: UPLOADING FILES TO THE CLOUD ............................................................... 31
FIGURE 4.7: CLASS FOR GENERATING PRIVATE KEY ...................................................... 32
FIGURE 4.8: CLASS FOR CONVERTING PLAINTEXT INTO CIPHERTEXT ........................
33 <table> <thead> <tr> <th>Figure</th> <th>Description</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>4.9</td> <td>Plain text converted to cipher text</td> <td>34</td> </tr> <tr> <td>4.10</td> <td>My files screen</td> <td>35</td> </tr> <tr> <td>4.11</td> <td>Files screen</td> <td>36</td> </tr> <tr> <td>4.12</td> <td>Requesting private files</td> <td>37</td> </tr> <tr> <td>4.13</td> <td>Requested files screen</td> <td>38</td> </tr> <tr> <td>4.14</td> <td>Access key generation</td> <td>39</td> </tr> <tr> <td>4.15</td> <td>Accepting/rejecting requested private files</td> <td>40</td> </tr> <tr> <td>4.16</td> <td>Downloading received files</td> <td>42</td> </tr> <tr> <td>4.17</td> <td>Downloading received files at same time</td> <td>42</td> </tr> <tr> <td>4.18</td> <td>Class for decrypting cipher text to plain text</td> <td>43</td> </tr> <tr> <td>4.19</td> <td>Displaying downloaded files</td> <td>44</td> </tr> <tr> <td>5.1</td> <td>User authentication</td> <td>45</td> </tr> <tr> <td>5.2</td> <td>Displaying error messages</td> <td>46</td> </tr> <tr> <td>5.3</td> <td>Multiple users requesting same files</td> <td>47</td> </tr> <tr> <td>5.4</td> <td>Accepting one request and rejecting one request</td> <td>47</td> </tr> <tr> <td>5.5</td> <td>Received keys shared</td> <td>48</td> </tr> <tr> <td>5.6</td> <td>Rejected request</td> <td>49</td> </tr> <tr> <td>5.7</td> <td>Waiting request</td> <td>49</td> </tr> <tr> <td>5.8</td> <td>Three users using same access key</td> <td>50</td> </tr> <tr> <td>5.9</td> <td>Displaying error message for unauthorized access</td> <td>51</td> </tr> <tr> <td>5.10</td> <td>Displaying downloaded files for authorized access</td> <td>51</td> </tr> </tbody> </table> 1 BACKGROUND AND RATIONALE 1.1 Introduction Cloud computing has become an emerging infrastructure for organizations throughout the world. Cloud computing uses specialized connections with a network of servers that are pooled together to spread data processing across them.
Frequently, virtualization techniques are utilized to maximize the power of cloud computing [1]. Through the use of virtualization, organizations reduce the need to purchase, maintain, and update their own networks and computer systems, since computing resources are consumed as a service over a network. 1.2 Cloud Service Models There are many diverse cloud computing service models. The most fundamental service models are Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The acknowledged deployment models are: (i) public cloud (ii) private cloud (iii) community cloud (iv) hybrid cloud. The most common protocols used to deliver cloud computing services are HTTP (Hyper Text Transfer Protocol), HTTPS (Hyper Text Transfer Protocol Secure) to achieve information security and data integrity, and Secure Shell [2]. The overall function of these three services is to keep the flow of data and computation. 1.2.1 Software as a Service According to the definition on Wikipedia, Software as a Service (SaaS) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted [3]. This is also popularly known as "on-demand software". As the name suggests, this is a service that provides software on client request. Characteristics of SaaS are as follows [2]: - Provides web access to commercial software - Software is accessed from a central location. - Allows users to integrate different pieces of software using Application Programming Interfaces (APIs) - This service follows a "one to many" model. - Users do not need to worry about software upgrades or patches; the service handles them itself. ### 1.2.2 Platform as a Service This is a service where a user can develop applications or programs without worrying about buying or maintaining the infrastructure.
By using this service, the user can easily and quickly create a web application. The only difference between SaaS and PaaS is that a PaaS service additionally provides the facility to create your own applications rather than only using existing ones. Characteristics of PaaS are as follows [2]: - This service provides a single integrated development environment for developing, testing, deploying, hosting, and maintaining applications. - Users can create, modify, test, and deploy different UI scenarios in the web-based user interface provided by this service. - This service offers a multi-tenant architecture where multiple concurrent users utilize the same development application. - It provides built-in scalability, load balancing, and failover. - Web services and databases are integrated with the help of common standards. - Subscriptions and billing can be managed with the provided tools. 1.2.3 Infrastructure as a Service The service in this layer is the complete software infrastructure: network, operating systems, and storage. This is similar to the SaaS model discussed above, where software is served on client request, but in this layer users buy those resources as a fully outsourced service on demand. The characteristics of IaaS are as follows [2]: - This service delivers infrastructure resources as a service - Dynamic scaling is allowed - This service has a variable-cost, utility pricing model. - Multiple users can use this service on a single piece of hardware. 1.2.4 Deployment Models - **Private Cloud** – private infrastructure for an organization, managed by a third party. - **Community Cloud** – infrastructure shared by several organizations. - **Public Cloud** – infrastructure available to everyone. - **Hybrid Cloud** – a combination of two or more cloud infrastructures. Figure 1.1 [4] shows the relation between the deployment models and service models clearly.
- The IaaS service model uses both the Private Cloud and the Hybrid Cloud - The PaaS service model uses the Community Cloud and the Public Cloud - The SaaS service model uses all models except the Private Cloud - This is because the IaaS service model serves client purchase requests, and the private cloud is solely used for them [5]. 1.3 Cloud Storage Cloud storage is the place where digital data is saved in logical pools. Cloud storage spans multiple servers, and all rights to the physical environment belong to the hosting company. Clients purchase storage capacity from providers to host their assets on remote servers. In return, cloud storage provides instant access to the information through web service application programming interfaces or web-based content management systems. With cloud storage, we can store, update, and retrieve data. Since the data is stored online, there is no data loss, and users can access it from anywhere at any time as long as they have internet access. There are many cloud service providers; popular among them are Dropbox, SkyDrive, Google Drive, and iCloud, which offer limited free storage capacity that can be extended using premium account features. Figure 1.2: Statistics of Data Used for Cloud Storage [6] In Figure 1.2 [6], a large portion of the records maintained in cloud storage are photographs and personal information. Thus, users are concerned about data integrity and security, and accordingly they want to protect their data from being abused. Here, cloud storage suppliers are responsible for keeping the data available and accessible, and the physical environment protected and running.
Cloud storage is: - Made up of numerous distributed resources that still act as one - often referred to as federated storage clouds [7] - Highly fault tolerant through redundancy and distribution of data - Highly durable through the creation of versioned copies - Typically eventually consistent with regard to data replicas 1.4 Security Issues with Cloud Storage Security is the biggest complication in cloud storage. It is evident from Figure 1.3 [8] that the most prominent issue with cloud storage is the security concern. When data is distributed, it is stored at more locations, increasing the risk of unauthorized physical access to the data. Figure 1.3: Statistics of Issues with Cloud Storage [8] The major vulnerabilities in cloud storage are: - Data Leakage - Cloud Credentials - Snooping - Key Management - Performance Moreover, the computationally strong servers of the service provider cannot always be trusted, as clients do not have full control over them [9]. This implies that a major challenge any cloud computing service provider must overcome is to ensure that, even if its servers are attacked by hackers, client data cannot be stolen or misused [10]. The confidential client data must remain invisible even to the cloud service providers. To overcome the above-mentioned security issues in cloud storage, encryption techniques are used. 1.5 Encryption Encryption is a process used to protect data stored in the cloud from unauthorized users. In other words, the primary purpose of encryption is confidentiality. This technique is advancing day by day. Along with confidentiality, modern encryption provides other key elements of security. They are listed below: 1. Authentication 2. Integrity 3.
Non-repudiation Encryption is also referred to as cryptography, which uses a cipher system to transform plaintext into unintelligible text. An authorized user can easily decipher the message with the key provided by the owner to recipients, but unauthorized interceptors cannot decipher it. Figure 1.4 shows the working of the encryption technique. There are three basic encryption methods, each with its own merits: i. **Hashing Method** ii. **Symmetric Method** iii. **Asymmetric Method** **i. Hashing** Hashing creates a unique, fixed-length signature for a message or data set. Every "hash" is unique to a particular message, so even minor changes to that message are easy to detect. Once data is processed with a hashing technique, it is difficult to decipher or reverse [11]. Hashing, while not actually an encryption method as such, is still helpful for verifying data integrity and proving data hasn't been tampered with. **ii. Symmetric method** Symmetric encryption is also known as private-key cryptography, so called because the key used to encrypt and decrypt the message must remain secure: anyone with access to it can decrypt the data [11]. Using the symmetric method, a sender encrypts the data with one key, sends the data (the ciphertext), and the receiver uses the same key to decrypt the data, as shown in Figure 1.5. **iii. Asymmetric method** Asymmetric encryption, or public-key cryptography, differs from the previous method because it uses two keys for encryption and decryption (and as such has the potential to be more secure) [11]. In this method, a public key is made freely available to everyone and is used to encrypt messages, while a different, private key is used by the recipient to decrypt messages, as shown in Figure 1.6. 2 NARRATIVE 2.1 Problem Statement Users are concerned about the security and privacy of data uploaded into the cloud.
As all cloud services are available at remote locations, users cannot have complete control over their data, and it is their basic right to protect that data from unauthorized access. In essence, a user uploads data to the cloud using encryption, whereby plaintext is changed into ciphertext. To view the unintelligible ciphertext, it needs to be decrypted using a key, called the "Secret Key". This secret key is shared with the users who would like to access the data. Encryption of the files to be uploaded into the cloud can be done in one of two ways: I. The user can encrypt all the files using a single encryption key and upload the data to the cloud. II. The user can encrypt each file with a distinct key and upload the data to the cloud. Either way, the user uploads his/her data to the cloud storage system in a way that prevents access to the private content of the data. If he/she wants to share the encrypted data with their circle: I. The user needs to send the single encryption key, which is used to decrypt the saved data in the cloud storage system. II. The user needs to send the corresponding distinct keys, which are used to decrypt the files that are intended to be shared. In the first approach, providing a single encryption key would be inadequate, since all the undesirable data may likewise be revealed. In the second, sharing a large amount of cipher data with the corresponding private keys increases the cost, and decrypting cipher data with distinct keys results in a loss of efficiency, as the number of such keys is as large as the number of shared files [12]. 2.2 Motivation Cloud computing is trending in all sectors: governments, non-profits, small businesses, and even Fortune 500 companies.
However, as organizations continue to take advantage of cloud services, they must consider how the introduction of those services affects their privacy and security [13]. The motivation for the cloud repository system is as follows:

i. Providing security to the data stored on the cloud from unauthorized access, intruders, employees of the enterprise, and even the cloud service providers.
ii. Identity management to avoid serious crimes involving identity theft.
iii. Increasing the efficiency of the cloud storage system by encrypting files with distinct keys.
iv. Sharing multiple files securely with the registered users in the system.
v. Restricting access control levels for private and public files.
vi. Decrypting a distinct set of cipher data with a single Access key.
vii. Decreasing the cost of decrypting cipher data retrieved from the cloud storage system.

2.3 Project Objective

The primary objective of this project is to maintain the security of data stored in the cloud; the system therefore requires network connectivity to reach the cloud. The stored data is encrypted by the system using symmetric encryption. This encryption prevents unauthorized access, by intruders and even enterprise employees, who attempt to retrieve a cloud storage user's data while it is in transmission. In this technique, the system encrypts the data with a private key, converting plaintext into ciphertext. The resulting ciphertext is stored in the cloud, and the private key used for encryption is stored in the local database. As the stored data is secure, any type of data, whether personal, computed, or application data, can be stored. To access the files of other users, he/she can make a request. Whenever a request is made, the file owner generates an Access key for the requested set of files.
The user can then retrieve the shared data based upon the user credentials, file attributes, and the Access key [12].

2.4 System

The user needs to be registered in the cloud repository system. Once registered, he/she can log in to the system and upload files into the cloud. The user can upload files in two categories:

1. Public files
2. Private files

The names and attributes of every user's uploaded files can be seen by all registered users. Files uploaded as public can be downloaded directly; to download a private file, the user needs to request an Access key. The user can request single or multiple private files from the file owner, and the file owner can share an Access key for the requested file(s). Additionally, the file owner has the flexibility to accept or reject the request. The user can download the private files only if the file owner sends an Access key for the requested set of private files.

2.5 Project Functionality

The main functionalities of the cloud repository system are listed below:

- User authentication
- Providing security to the data stored in the cloud
- Restricting access control levels
- Requesting access for multiple private files
- Sharing access to multiple requested files
- Generating an Access key for a requested set of files
- Reducing the decryption cost
- Maintaining logs of downloaded files

3 System Design

This chapter discusses the architecture of the entire system, along with its data flow diagram, use case diagram, sequence diagram, and activity diagram.

3.1 System Architecture

The system architecture of the cloud repository system, shown in Figure 3.1, describes the various components and the communication between them. A user, as depicted in the system architecture, must be authorized to log in to the system. The user communicates with the application server through a web browser to store data in the cloud.
When the user uploads data, it is encrypted using a generated key and then uploaded to the cloud. Whenever a user requests files stored in the cloud, the file owner shares an Access key for the requested files. As soon as the user enters the Access key, the application server fetches the private key used to encrypt each file from the local database, decrypts the file, and downloads it.

3.2 System Design

Figure 3.2 shows the system design of the cloud repository system. The cloud stores information about the users, the files they upload, the requests made, and the Access keys generated for the requesting users. Login validation checks the username and password entered against those in the database and confirms or rejects the login accordingly. Upon confirmation, the application server establishes a connection with the cloud repository system, pulls the user's information from the cloud, and displays it. The application allows the user to store data in, or retrieve data from, the cloud repository system. Whenever a user uploads a file, a private key is generated and used to encrypt the file; the key is stored in the local database and the encrypted data in the cloud. When retrieving data, a public file can be downloaded directly, whereas for a private file the user needs to request an Access key. Using this Access key and the file name, the application server fetches the private key for that particular file from the local database, decrypts the file, and downloads it.

3.3 Data Flow Diagrams

A data flow diagram (DFD) is one of the prominent modelling tools used to model system components. These components include the input data to the system, the various processing steps carried out, the external entities that interact with the system, and the information flow within the system.
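Before turning to the individual diagrams, the storage and retrieval flow described in Section 3.2 can be summarized in a short sketch. The in-memory maps standing in for the local key database and the cloud store, and all names below, are illustrative assumptions, not the project's actual code.

```java
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

// Sketch of Section 3.2: each uploaded file gets its own AES key; the key
// stays in a local store while only the ciphertext goes to the cloud store.
public class StorageFlowSketch {
    // stands in for the local database of per-file private keys
    static final Map<String, String> localKeyStore = new HashMap<>();
    // stands in for the cloud store of encrypted file contents
    static final Map<String, String> cloudStore = new HashMap<>();

    public static void upload(String fileName, String plainText) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();               // per-file private key
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherBytes = c.doFinal(plainText.getBytes("UTF-8"));
        localKeyStore.put(fileName, Base64.getEncoder().encodeToString(key.getEncoded()));
        cloudStore.put(fileName, Base64.getEncoder().encodeToString(cipherBytes));
    }

    public static String download(String fileName) throws Exception {
        byte[] keyBytes = Base64.getDecoder().decode(localKeyStore.get(fileName));
        SecretKey key = new SecretKeySpec(keyBytes, "AES");
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.DECRYPT_MODE, key);
        byte[] plain = c.doFinal(Base64.getDecoder().decode(cloudStore.get(fileName)));
        return new String(plain, "UTF-8");
    }
}
```

The diagrams in the following subsections trace this same flow between the user, the application server, the local database, and the cloud.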
3.3.1 Data Flow Diagram for Uploading the Files

![Data Flow Diagram of User Uploading Files](image)

**Figure 3.3: Data Flow Diagram of User Uploading Files**

Figure 3.3 shows the flow of the process between the components while uploading files. The user can upload either text or image files. Whenever the user uploads a file, a private key is generated for it and the file is encrypted with that key. The private key is stored in the local database and the encrypted content in the cloud.

3.3.2 Data Flow Diagram for Accepting/Rejecting the Request

Figure 3.4: Data Flow Diagram of User for Accepting/Rejecting a Request

Figure 3.4 shows the data flow when a user receives a request. The user can either accept or reject the request: if the user rejects it, the process terminates; otherwise, an Access key is generated.

3.3.3 Data Flow Diagram for Downloading the Requested Files

Figure 3.5: Data Flow Diagram of a User for Downloading the File

Figure 3.5 shows the data flow when a user downloads a file. The process starts by downloading the encrypted content; the Access key is then used to retrieve the private key generated when the data was uploaded, and with that key the encrypted content is decrypted.

3.4 UML Diagrams

3.4.1 Use Case Diagram

A use case diagram is a behavioral diagram which depicts the behavior of the system. Use cases represent the activities or functionalities in the system. Figure 3.6 represents the use case diagram for the project, where the actors are the users and the components are the functions performed. Use case diagrams are mostly used in the requirements analysis phase of a system.

3.4.2 Sequence Diagram

A sequence diagram is a Unified Modeling Language (UML) interaction diagram that shows how processes operate with one another and in what order.
It is a construct of a Message Sequence Chart. Figure 3.7 shows the sequence diagram for the activities in the cloud repository system.

![Sequence Diagram](image-url)

**Figure 3.7: Sequence Diagram**

3.4.3 Activity Diagram

Activity diagrams are graphical representations of workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. An activity diagram shows the overall flow of control. Figure 3.8 shows the activity diagram for the cloud repository system.

Figure 3.8: Activity Diagram

4 System Implementation

4.1 Environment

The following are used in developing the project:

1. Java / JSP programming
2. NetBeans IDE
3. MySQL database
4. XAMPP control panel
5. Libraries

1. **Java / JSP**

In this project, J2EE is used for developing Java Servlets. Because it is platform independent and provides a set of services, APIs, and protocols for web-based applications, this technology is used for developing, building, and deploying the web application. In brief, Java Servlets are Java programs that run on the server side [14]. Whenever the application server gets a client request, the servlets are executed on the server. Additionally, these servlets provide the following:

1. Security: Java Servlets inherit the security features that the web container provides.
2. Session management: User identity and state are kept intact across multiple requests.
3. Instance persistence: Frequent disk access is avoided, which enhances server performance.

JSP, on the other hand, is a technology used for both web design and web development: HTML provides the layout of the web page, while Java code and JSP tags supply the main logic inside that layout. Because JSPs can embed Java functionality directly into an HTML page through special tags, a lot of time and effort can be saved.

2. **NetBeans IDE**

NetBeans IDE is a powerful and widely used development tool.
IDE stands for Integrated Development Environment, meaning an integrated tool in which applications in various languages, such as C, C++, Python, and Java, can be developed. An important feature of NetBeans is its wide range of plugins, which come in handy when developing a project, and it can be installed on any operating system that supports Java. NetBeans IDE version 7.2 is used in this project.

3. **MySQL**

There are two different editions: the open-source MySQL Community Server and the proprietary Enterprise Server. Of these, MySQL Community Server is the most widely used relational database management system. In this project, data such as files, usernames, and encrypted passwords is stored in the MySQL database bundled with XAMPP.

4. **XAMPP**

XAMPP is an open-source web server solution stack package developed by Apache Friends. The main components in XAMPP are:

- Apache 2.4.12
- MySQL 5.6.24
- PHP 5.6.8
- phpMyAdmin 4.3.11
- OpenSSL 1.0.11
- XAMPP Control Panel 3.2.1
- Webalizer 2.23-04
- Mercury Mail Transport System 4.63
- FileZilla FTP Server 0.9.41
- Tomcat 7.0.56 (with mod_proxy_ajp as connector)
- Strawberry Perl 7.0.56 Portable

The application stores its data in the MySQL database through the bundled Apache server, which is invoked via Tomcat. More importantly, XAMPP requires only a single zip, tar, 7z, or exe file to download and run.

5. **Libraries**

- Activation
- Bcprov-ext-jdk15on-151
- Cos-multipart
- Cos
- javax.servlet
- Mysql-connector-java-5.0.5
- Servlet-api
- Standard

4.2 Application Modules

The application modules for the cloud repository system are as follows:

1. Registration/Login
2. Uploading Files
3. Requesting Files
4. Sharing Files
5. Downloading Files

4.2.1 Registration/Login

In this module, a first-time user needs to register with the system before using the application.
In the registration page, shown in Figure 4.3, a form is displayed in which the user must fill valid information into the provided fields; a unique user id is generated using the code shown in Figure 4.1.

```java
Connection con = databasecon.getConnection();
Statement st = con.createStatement();
// count the existing users and offset by 101 to form the next unique id
ResultSet rs = st.executeQuery("select count(id) from reg");
if (rs.next()) {
    int u1 = Integer.parseInt(rs.getString(1));
    int u2 = u1 + 101;
    session.setAttribute("u2", Integer.toString(u2));
}
```

Figure 4.1: Generating a Unique Id for Each Registered User

All the required fields need to be filled in appropriately, and validations are performed on the entered fields. If the information in the form does not meet the requirements, the query fails; a catch block determines the reason and prompts an error message so the user can resolve the issue. Once the user clicks the submit button with valid information, the data is uploaded to the cloud. Before uploading the user information, however, the application server encrypts the password entered by the user, as shown in Figure 4.2, replaces the plaintext password with the encrypted value, and updates the cloud server. If the registration is successful, the user is redirected to the login page with a message confirming successful registration.
```java
Connection con1 = databasecon.getConnection();
// store the password in encrypted form using the application's AES key
PreparedStatement ps = con1.prepareStatement(
        "insert into reg (id,name,username,password,email,mobileno,date,age,address,gender) "
        + "values (?, ?, ?, AES_ENCRYPT(?, 'dnynI1xYAjeWiK0cHSQrw'), ?, ?, ?, ?, ?, ?)");
ps.setString(1, id);
ps.setString(2, f);
ps.setString(3, username1);
ps.setString(4, password1);
// ... the remaining fields are bound in the same way ...
int x = ps.executeUpdate();
```

**Figure 4.2: Encrypting the User Password**

Figure 4.3: Registration Page

In the login page, a form is displayed to the user, as shown in Figure 4.4, to enter the credentials provided during registration.

![Login Page](image)

**Figure 4.4: Login Page**

Validations are performed on the values entered. When the user clicks the submit button, the application server encrypts the password entered and compares it with the value stored in the cloud server, as shown in Figure 4.5. The user can log in if the username and password match the records in the cloud server; otherwise, an error message is shown so the user can resolve the issue. After a successful login, the user can start managing files in the cloud server.

```java
Connection con = databasecon.getConnection();
String name1 = request.getParameter("uname");
session.setAttribute("name2", name1);
String password1 = request.getParameter("pwd");
Statement st = con.createStatement();
// compare the encrypted form of the entered password with the stored value
ResultSet rs = st.executeQuery(
        "select * from reg where username='" + name1 + "' and password=AES_ENCRYPT('"
        + password1 + "', 'dnynI1xYAjeWiK0cHSQrw')");
if (rs.next()) {
    response.sendRedirect("userhome.jsp?success");
}
```

Figure 4.5: Comparing the Entered Password with the Value Stored in the Cloud

4.2.2 Uploading Files

In this module, a user can upload text files and image files as shown in Figure 4.6 (a). For each uploaded file, a unique id is generated by the application server as shown in Figure 4.6 (b).
Additionally, he/she can upload files as either public or private. Both private and public files are encrypted with the AES algorithm before storage; the class for encrypting the files is shown in Figure 4.8. While uploading, the user needs to supply the file name and submit the file. When the user clicks the submit button, a private key is generated and used to convert the plaintext into ciphertext, as shown in Figure 4.9. The private key used for encryption is handled by the class shown in Figure 4.7; the key is stored in the local database, and the extracted ciphertext is stored in the cloud server. If the file is uploaded, a success message is displayed as shown in Figure 4.6 (c); otherwise, a catch block determines the cause of the failure and prompts an error message so the user can resolve the issue.

Figure 4.6: Uploading Files to the Cloud

```java
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class AesEncrDec {

    // convert a SecretKey to its Base64 string form
    public String keytostring(SecretKey skey) {
        byte[] b = skey.getEncoded();               // encode the secret key
        String stringkey = Base64.encode(b);
        System.out.println("converted secretkey to string: " + stringkey);
        return stringkey;
    }

    // convert a Base64 string back to a SecretKey
    public SecretKey Stringtokey(String stringkey) {
        byte[] bs = Base64.decode(stringkey);
        SecretKey sec = new SecretKeySpec(bs, "AES");
        System.out.println("converted string to secretkey: " + sec);
        return sec;  // fixed: return the restored key, not an unset field
    }
}
```

Figure 4.7: Class for Generating Private Key

```java
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import sun.misc.BASE64Encoder;

public class Encryption {

    public String encrypt(String text, SecretKey seckey) {
        String plainData = text, cipherText = null;
        try {
            SecretKey secretKey = seckey;
            // encode the secret key as a string (for logging only)
            String skey = Base64.encode(secretKey.getEncoded());
            System.out.println("converted secretkey to string: " + skey);
            Cipher aesCipher = Cipher.getInstance("AES");       // get an AES cipher instance
            aesCipher.init(Cipher.ENCRYPT_MODE, secretKey);     // initialize for encryption
            byte[] byteDataToEncrypt = plainData.getBytes();
            byte[] byteCipherText = aesCipher.doFinal(byteDataToEncrypt);  // encrypt the data
            cipherText = new BASE64Encoder().encode(byteCipherText);       // encode ciphertext as a string
            System.out.println("\n given text : " + plainData
                    + " \n Cipher Data : " + cipherText
                    + "\n Secretkey: " + secretKey);
        } catch (Exception e) {
            System.out.println(e);
        }
        return cipherText;
    }
}
```

Figure 4.8: Class for Converting Plaintext into Ciphertext

Figure 4.9: Plain Text Converted to Cipher Text

All the uploaded files of a user can be seen in the "MY FILES" screen, as shown in Figure 4.10, where the user can also download or delete his/her files from the cloud repository system.

Figure 4.10: My Files Screen

Furthermore, a user can see the files of all other users in a single window, the "FILES" screen, as shown in Figure 4.11. However, the user can only download the files that are made public; to download a private file, the user needs to request an Access key from the file owner.
<table> <thead> <tr> <th>IMAGE ID</th> <th>IMAGE NAME</th> <th>OWNER NAME</th> <th>UPLOAD DATE</th> <th>TYPE</th> <th>DOWNLOAD</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>butterfly</td> <td>JohnathonTaylor</td> <td>05/04/2015 22:05</td> <td>public</td> <td>Download</td> </tr> <tr> <td>3</td> <td>Meet</td> <td>JohnathonTaylor</td> <td>05/04/2015 22:05</td> <td>public</td> <td>Download</td> </tr> <tr> <td>7</td> <td>welliams_center</td> <td>CassandraDavis</td> <td>05/04/2015 22:05</td> <td>public</td> <td></td> </tr> </tbody> </table>

<table> <thead> <tr> <th>IMAGE ID</th> <th>IMAGE NAME</th> <th>OWNER NAME</th> <th>UPLOAD DATE</th> <th>TYPE</th> </tr> </thead> <tbody> <tr> <td>8</td> <td>Animation</td> <td>LusaniMiller</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> <tr> <td>9</td> <td>iphone</td> <td>LusaniMiller</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> <tr> <td>5</td> <td>lab</td> <td>JohnathonTaylor</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> <tr> <td>2</td> <td>Lecture</td> <td>JohnathonTaylor</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> <tr> <td>1</td> <td>official_groupie</td> <td>JohnathonTaylor</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> <tr> <td>6</td> <td>ugadi_event</td> <td>CassandraDavis</td> <td>05/04/2015 22:05</td> <td>private</td> </tr> </tbody> </table>

**Figure 4.11: Files Screen**

4.2.3 Requesting Files

In this system, a user can see the files uploaded by all the users registered in the system, as shown in Figure 4.11. Files made public can be downloaded directly; to download a private file, the user needs to send a request to the file owner for an Access key. To make the request, the user navigates to the request page, then selects the type of the file as shown in Figure 4.12 (a) and the file owner name as shown in Figure 4.12 (b).
Eventually, all the private files of the selected file owner are displayed, and the user can request the Access key for a single file or multiple files as shown in Figure 4.12 (d). The request is sent to the file owner, and a success message is displayed to the user as shown in Figure 4.12 (c). The file owner can see the requests made by all users in a single window, the "REQUESTED FILES" screen, as shown in Figure 4.13.

![Figure 4.12: Requesting Private Files](image)

Figure 4.12: Requesting Private Files

Figure 4.13: Requested Files Screen

4.2.4 Sharing Files

Whenever a request is made by a user, it is shown in the "REQUESTED FILES" screen of the file owner, as shown in Figure 4.13. Here the file owner has the flexibility to accept or reject the requests. To accept or reject them, he/she selects the requesting user's name as shown in Figure 4.15 (a). All the files requested by that user are then displayed, and the owner can accept or reject some or all of them as shown in Figure 4.15 (b). Whenever the file owner accepts a request, a single Access key is generated for the accepted file(s) using the code shown in Figure 4.14 and sent to the requesting user, and a success message is displayed as shown in Figure 4.15 (c). The Access key generated for the requested files is valid only for the requesting user; no other user is able to decrypt the requested files using the same Access key. Once a requested file is accepted or rejected, it is removed from the list of requested files, as shown in Figure 4.15 (c).
```java
// sb holds the details of the accepted file(s); its construction is elided here
StringBuffer sb = new StringBuffer();
KeyGenerator keyGen = KeyGenerator.getInstance("AES");
keyGen.init(128);
SecretKey secretKey = keyGen.generateKey();        // fresh AES key for this grant
Cipher aesCipher = Cipher.getInstance("AES");
aesCipher.init(Cipher.ENCRYPT_MODE, secretKey);
System.out.println("String buffer: " + sb.toString());
// encrypt the accepted-file details to form the Access key material
String ss = new Encryption().encrypt(sb.toString(), secretKey);
String skey = new AesEncrDec().keytostring(secretKey);
```

**Figure 4.14: Access Key Generation**

Figure 4.15: Accepting/Rejecting Requested Private Files

4.2.5 Downloading Files

A user can download his/her own files directly from the "MY FILES" page, while requested files are downloaded from the "RECEIVED FILES" page. All the requests made by the user, together with the key associated with each, are displayed in the received files screen, where he/she can download the accepted files as shown in Figure 4.16. Whenever a request is made, its key element holds one of the statuses below:

- Waiting
- Accept
- Reject

First, when the request is made, the status of the key element remains "waiting" until the file owner accepts or rejects it. Second, if the file owner accepts the request, the status changes to the shared Access key. Last, if the file owner rejects the request, the status changes from "waiting" to "reject". Only the accepted files can be downloaded by the user. To download an accepted file, the user navigates to the "RECEIVED FILES" screen, selects the accepted file, and enters the Access key shared by the file owner, as shown in Figure 4.16. Whenever the Access key is entered, it is compared against the requesting username and the file name associated with it. If they match, the application server fetches the private key used to encrypt the file from the local database, decrypts the file using the class shown in Figure 4.18, and downloads it. If the Access key entered does not match, an appropriate error message is shown to the user.
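The comparison step just described, where an Access key is honored only for the exact user and file it was issued for, can be sketched as a small in-memory check. The `Grant` record, class names, and key strings below are illustrative assumptions, not the project's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch of the Access-key check: the key is valid only for the exact
// (requester, file) pair it was issued for.
public class AccessKeyCheck {
    static final class Grant {
        final String requester, fileName, accessKey;
        Grant(String requester, String fileName, String accessKey) {
            this.requester = requester;
            this.fileName = fileName;
            this.accessKey = accessKey;
        }
    }

    // stands in for the table of issued Access keys, keyed by requester and file
    static final Map<String, Grant> grants = new HashMap<>();

    static void recordGrant(String requester, String fileName, String accessKey) {
        grants.put(requester + "/" + fileName, new Grant(requester, fileName, accessKey));
    }

    // true only when the entered key matches the grant for this user and file
    static boolean mayDownload(String requester, String fileName, String enteredKey) {
        Grant g = grants.get(requester + "/" + fileName);
        return g != null && Objects.equals(g.accessKey, enteredKey);
    }
}
```

Under this check, a user who somehow obtains another user's Access key still cannot download the file, because the lookup is performed on the requesting user's own name.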
A user can download multiple files requested at the same time by selecting the request time and date, as shown in Figure 4.17.

Figure 4.16: Downloading Received Files

Figure 4.17: Downloading Received Files at the Same Time

```java
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import sun.misc.BASE64Decoder;

public class Decryption {

    public String decrypt(String txt, String skey) {
        String decryptedtext = null;
        try {
            // restore the secret key from its Base64 string form
            byte[] bs = Base64.decode(skey);
            SecretKey sec = new SecretKeySpec(bs, "AES");
            System.out.println("converted string to secretkey: " + sec);
            Cipher aesCipher = Cipher.getInstance("AES");       // get an AES cipher instance
            byte[] byteCipherText = new BASE64Decoder().decodeBuffer(txt);
            aesCipher.init(Cipher.DECRYPT_MODE, sec);
            byte[] byteDecryptedText = aesCipher.doFinal(byteCipherText);  // decrypt the data
            decryptedtext = new String(byteDecryptedText);
            System.out.println("Decrypted Text: " + decryptedtext);
        } catch (Exception e) {
            System.out.println(e);
        }
        return decryptedtext;
    }
}
```

Figure 4.18: Class for Decrypting Cipher Text to Plain Text

Whenever a user downloads a file, a "file downloaded" message is immediately displayed in the "DOWNLOADED FILES" screen along with the user name, file name, time, and date, as shown in Figure 4.19.

Figure 4.19: Displaying Downloaded Files

5 Testing and Evaluation

In this phase, the following functionalities of the application are tested:

1. User authentication
2. Restricting access control levels
3. Requesting and sharing multiple files
4. Generating a user-specific Access key
5. Eliminating unauthorized access
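One building block worth exercising in isolation before the functional test cases is the key handling itself: a key serialized to a string and restored must decrypt exactly what the original key encrypted. The sketch below performs that round trip using the standard `java.util.Base64` API instead of the internal `sun.misc` encoders used in the figures above; class and method names here are illustrative assumptions.

```java
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

// Round-trip check: key -> Base64 string -> key, encrypt with the original,
// decrypt with the restored copy, and recover the original plaintext.
public class RoundTripCheck {

    static String keyToString(SecretKey key) {
        return Base64.getEncoder().encodeToString(key.getEncoded());
    }

    static SecretKey stringToKey(String s) {
        return new SecretKeySpec(Base64.getDecoder().decode(s), "AES");
    }

    static String roundTrip(String plainText) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherBytes = enc.doFinal(plainText.getBytes("UTF-8"));
        // restore the key from its string form, as the download path does
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, stringToKey(keyToString(key)));
        return new String(dec.doFinal(cipherBytes), "UTF-8");
    }
}
```

If this round trip fails, every download in the system fails, so it makes a useful first sanity check before the end-to-end scenarios below.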
5.1 Test Case 1

In this test case, user authentication is tested on the registration and login pages. Validation of user credentials is verified as shown in Figure 5.1, and validations for empty fields on the login and registration pages are performed as shown in Figure 5.2. Validation of appropriate information is also verified on the registration page, as shown in Figure 5.2, where an error message is displayed when the given username already exists.

Figure 5.1: User Authentication

5.2 Test Case 2

In this test case, multiple functionalities are tested together: requesting and sharing multiple files, restricting access control levels, and eliminating unauthorized access. In this scenario, the users LusianMiller, MichelCorwin, and CassandraDavis send requests to share the same set of files to the user JohnathonTaylor, as shown in Figure 5.3. JohnathonTaylor then accepts the request made by LusianMiller as shown in Figure 5.4 (a), rejects the request made by CassandraDavis as shown in Figure 5.4 (b), and neither accepts nor rejects the request made by MichelCorwin, so that request remains in the waiting state.

Figure 5.3: Multiple Users Requesting the Same Files

Figure 5.4: Accepting One Request and Rejecting Another

Using the key shared by user JohnathonTaylor, shown in Figure 5.5, user LusianMiller can download the files.
<table> <thead> <tr> <th>FILE NAME</th> <th>ACCESS KEY</th> <th>STATUS</th> <th>OWNER NAME</th> <th>DOWNLOAD</th> </tr> </thead> <tbody> <tr> <td>lab</td> <td>riOSkxDgU7dD/kTZx8+4Jg==</td> <td>accept</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> <tr> <td>Lecture</td> <td>riOSkxDgU7dD/kTZx8+4Jg==</td> <td>accept</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> </tbody> </table>

Select Request Date to Download Files Select Date: 05/08/2015 15:05:47

Figure 5.5: Received Keys Shared

As shown in Figure 5.6, CassandraDavis's request is rejected, and as shown in Figure 5.7, MichelCorwin's request is still waiting; neither of them is able to download the files. Even if MichelCorwin or CassandraDavis obtained the key sent to LusianMiller, they would still not be able to download the files, as the key is restricted to LusianMiller. First, the file name for the Access key is verified. Second, the owner name and owner id are checked against the requesting user name and user id. If the comparison succeeds, the requesting user is allowed to download the file; if it fails, an error message asks the user to enter the correct Access key, as demonstrated in Figure 5.9.
Figure 5.6: Rejected Request

<table> <thead> <tr> <th>FILE NAME</th> <th>ACCESS KEY</th> <th>STATUS</th> <th>OWNER NAME</th> <th>DOWNLOAD</th> </tr> </thead> <tbody> <tr> <td>lab</td> <td>reject</td> <td>reject</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> <tr> <td>Lecture</td> <td>reject</td> <td>reject</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> </tbody> </table>

Select Request Date to Download Files Select Date: 05/08/2015 16:05:27 Submit clear

---

Figure 5.7: Waiting Request

<table> <thead> <tr> <th>FILE NAME</th> <th>ACCESS KEY</th> <th>STATUS</th> <th>OWNER NAME</th> <th>DOWNLOAD</th> </tr> </thead> <tbody> <tr> <td>lab</td> <td>waiting</td> <td>waiting</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> <tr> <td>Lecture</td> <td>waiting</td> <td>waiting</td> <td>JohnathonTaylor</td> <td>Download</td> </tr> </tbody> </table>

Select Request Date to Download Files Select Date: 05/08/2015 16:05:47 Submit clear

Figure 5.8: Three Users Using the Same Access Key

Here all three users try to use the same Access key, as shown in Figure 5.8, but only LusianMiller is able to download the file. When LusianMiller downloads the file, a "file downloaded" message is sent to JohnathonTaylor, as shown in Figure 5.10. If the other two users try to download the file with the same Access key that was sent to LusianMiller, an error message is displayed asking them to enter the correct Access key, as shown in Figure 5.9.

Figure 5.9: Displaying an Error Message for Unauthorized Access

Figure 5.10: Displaying Downloaded Files for Authorized Access

6 Conclusion and Future Work

This project provides security for the data stored in the cloud by encrypting the data before it is uploaded. Since encryption adds processing overhead, many cloud service providers apply basic encryption to only a few data fields. Moreover, if a cloud service provider encrypts the data, that same provider can also decrypt it.
To keep costs low while protecting highly sensitive data, it is better to encrypt the data before uploading it. In this project, we encrypt data using symmetric-key encryption, and the private keys of the files are stored in the local database. The system generates a single Access key for accessing multiple files; this Access key is stored in the cloud and is used to retrieve the private keys held in the local database. Because a single key is stored in the cloud for multiple files, flexibility in sharing any number of files increases and the cost of key management is reduced.

In future work, Access key generation can be enhanced: if the Access key itself could decrypt the requested files, maintaining private keys in the local database would no longer be necessary. Techniques for modifying a file without downloading it can also be developed, and the encryption technique itself can be strengthened further.

7 References

[3] "Software as a service," of Computer Science & Information Technology, June 2013. [Accessed March 2015].
A User's Guide to the Generalized Image Library

Leonard G. C. Hamey
Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213

Revised August 9, 1990

Abstract

This document describes those aspects of the generalized image library which affect the users of programs built with the library. A separate document provides information for programmers who wish to use the library.

Copyright © Leonard G. C. Hamey

This research was sponsored by the Defense Advanced Research Projects Agency, DOD, through ARPA Order No. 4976, and monitored by the Air Force Avionics Laboratory under contract F33615-84-K-1520. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or of the U.S. Government.

Contents

1 Introduction
2 What's In a Name?
3 Formats of Images on Disk
3.1 CMU Format
3.2 GIF Format
3.3 MIP Format
4 Naming Images
4.1 Images on Disk
4.2 Piped Images
4.3 Constant Images
4.4 Display Device
4.5 Matrox Boards
4.6 Androx Boards
4.7 Selecting Pixel Characteristics
4.8 Network Access
4.9 Shifting an Image
4.10 Cropping an Image
4.11 Dividing an Image
4.12 Quarters of an Image
4.13 Extending an Image
4.14 Tee
4.15 Magnify and Zoom
4.16 Digitization
4.17 Files and Devices
5 Multi-band Images
5.1 Specifying the Bands of a Multi-band Image
5.2 Multi-band Images on Disk
5.3 Color Images
5.4 Stereo Images
5.5 Three-Dimensional Viewing with Red/Blue Movie Glasses
5.6 Black and White Images
6 Cursor Positioning
7 Environment
8 Some Useful Utilities
8.1 Header
8.2 Copying Images
8.3 Cursor
9 Paying the Piper

1 Introduction

The generalized image library provides access to a variety of image devices and disk formats. Programs which are built with the library inherit this flexibility: the same program can access all the different image devices and disk formats, depending only upon the image name which the user specifies. This document describes the image naming syntax which is supported by the generalized image library and programs built upon it.

2 What's In a Name?

When the generalized image library opens or creates an image, the name which you give serves to do more than just name a disk file in which the image will reside.
At the very least, the name of an image indicates the format in which the image will be stored on disk; it may even indicate that the image is not to be stored on disk at all but is actually a physical device or a virtual image. Currently, three image formats are supported: CMU format, GIF format and MIP format. Also supported are a number of frame buffer devices, xwindows and suntools displays, network access, and a number of operations which can be applied to images. Figure 1 is a BNF grammar representing the set of image names which are currently supported.

Some of the above facilities can only be used when an existing image is being opened. Others are only valid when a new image is being created. The restrictions are as follows.

- Disk files cannot be created over the network. This is to prevent security problems.
- The shift keyword is not supported when a new image is being created.
- The constant keyword can only be used as an input image.
- The unsigned, signed and float keywords are not allowed when an existing image is being opened.

3 Formats of Images on Disk

The three image formats, CMU, GIF and MIP, have different uses and capabilities. This section describes each format briefly to assist you in choosing which to use.

3.1 CMU Format

CMU image format is an obsolete image format which was supported by the old image library. The pixel type information which was previously handled explicitly by some programs is now handled directly by the image library. This means that library programs are no longer confused about signed images and can even be expected to do reasonable things with floating-point images.
image-name:
      -
      (image-name)
      display
      matrox
      suntools
      xwin
      memory
      matrix
      file-name
      cmu:file-name
      gif:file-name
      mip:file-name
      color:image-name
      stereo:image-name
      threed:image-name
      3d:image-name
      bw:image-name
      digitize:image-name
      bands:band-names:image-name
      printer:printer[,width=width]
      constant:constant[,constant...]
      unsigned:bits-per-pixel:image-name
      signed:bits-per-pixel:image-name
      float:bits-per-pixel:image-name
      machine:image-name
      network:machine:image-name
      magnify:factor[,factor]:image-name
      quarter:piece-number:image-name
      shift:row-start,column-start:image-name
      multiply:multiplier:image-name
      add:addend:image-name
      ltrans:multiplier,addend:image-name
      crop:row-start,row-end,column-start,column-end:image-name
      extend:constant,row-start,row-end,col-start,col-end:image-name
      divide:row-divisions,column-divisions,piece-number:image-name
      tee:image-name,image-name

Figure 1: A BNF grammar for image names.
CMU format is useful for large images because the image data can be accessed on disk as it is needed. This is called software paging.

3.2 GIF Format

GIF (generalized image format) was developed exclusively for the generalized image library. GIF format provides full support for the pixel types which the generalized image library implements: unsigned, signed and float. GIF format images are stored internally as matrices (see matrix(3)). This places some size restrictions on images which can be stored in GIF format, as they must be loaded entirely into computer memory. The advantage of GIF format is that it provides repetition-based packing, which is especially useful for large sparse images.

3.3 MIP Format

MIP format was developed on the Suns. A MIP image consists of 480 rows of 512 columns of 8-bit unsigned pixels. MIP format is very restrictive because there is no support for pixel types other than unsigned 8-bit pixels, and because the image bounds are fixed. The only reason for using MIP format is to provide compatibility with existing software which requires MIP format.

4 Naming Images

The name of an image is not simply a file name. In section 2 a BNF grammar was presented which summarizes the expressions which may be used to name images. This section describes each expression in detail and gives examples of its use.

4.1 Images on Disk

The name of an image on disk specifies not only its file name, but also the format in which it is stored. There are two ways in which the format is specified: keywords and file types. File types are the simplest method of identifying image formats. File names which end in .mip are assumed to be MIP format unless a keyword is used. File names which end in .gif are assumed to be GIF format, and file names which end in .img are assumed to be CMU format.
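The resolution rules for disk formats (an explicit keyword takes precedence, then the file extension, then the CMU default) can be sketched as follows. The function name and tuple return value are invented for illustration; this is not part of the library:

```python
def resolve_format(name):
    """Resolve an image's on-disk format from its name.

    Precedence, sketching the rules of section 4.1: an explicit
    cmu:/gif:/mip: keyword, then the file extension, then CMU format.
    Returns (format, file_name).  Illustrative only.
    """
    for keyword in ("cmu", "gif", "mip"):
        if name.startswith(keyword + ":"):
            return keyword, name[len(keyword) + 1:]
    for ext, fmt in ((".mip", "mip"), (".gif", "gif"), (".img", "cmu")):
        if name.endswith(ext):
            return fmt, name
    return "cmu", name  # unidentified file types default to CMU format
```

For example, the name mip:sunset resolves to MIP format regardless of the file's extension, while a bare name such as sunset falls through to the CMU default.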
If an image format cannot be identified by its file type, then CMU format is assumed. If an image format is not correctly indicated by the file type, then a keyword may be used to indicate the format of the image. An image format keyword is one of the character strings cmu, mip or gif, followed by a colon and preceding the image name. For example, if the file sunset is a MIP format image file, then the name mip:sunset identifies the file and specifies that it contains a MIP format image. The following command illustrates the use of the imgcp image copying program to convert a MIP format file into CMU format.

```
imgcp mip:sunset sunset.img
```

4.2 Piped Images

An image name which is a single hyphen (-) represents a piped image. Piped images may be used for input and for output. A piped input image is read from the standard input and a piped output image is written to the standard output. Each program can use only one piped input image and one piped output image. Programs cannot use piped images if they use the standard channels for other purposes. This means that interactive programs cannot use piped input images. It also means that programs which print messages on the standard output cannot use piped output images. Piped images are useful for combining simple programs in shell commands. For example, the following shell command uses the smooth program to low-pass filter an image. It then pipes the low-pass image into subimg, where it is subtracted from the original image to produce a high-pass image.

```
smooth gauss -5 original.gif - | subimg original.gif - highpass.gif
```

4.3 Constant Images

The keyword constant: followed by an integer or floating-point constant may be used to open a constant image. Constant images are virtual images which are filled with a constant value. Every image fetch operation performed by a program will return the constant value. Constant images do not have known image bounds.
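In effect, a constant image is a virtual image whose every pixel fetch returns the same value, with no bounds of its own. A minimal Python sketch of that behaviour (the class and method names are invented for illustration):

```python
class ConstantImage:
    """Virtual image filled with a constant value (or one value per band)."""

    def __init__(self, *values):
        self.values = values  # e.g. (3,) or (255, 0, 0) for multi-band

    def fetch(self, row, col):
        # Bounds are essentially unlimited: any (row, col) is accepted.
        return self.values[0] if len(self.values) == 1 else self.values

# constant:3 -- every fetch returns 3, wherever we look
img = ConstantImage(3)
samples = {img.fetch(r, c) for r in (-1000, 0, 1000) for c in (-5, 5)}
```

Cropping such an image is what gives it concrete bounds, as the con3.img example below illustrates.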
They are essentially unlimited in size. Therefore their use requires a little care. Constant images are most useful in conjunction with programs such as add. Consider the following command.

```
add tree.img constant:100 bright.img
```

This command adds together the two images tree.img and constant:100, producing the new image bright.img. This has the effect of adding the constant value 100 to the pixels of tree.img and storing the new values in bright.img.

1. The hyphen syntax is in accordance with a Unix convention.
2. For this reason, the generalized image library does not use the standard output. Everything printed by the library is put on the standard error channel.

It should be noted that, because a constant image contains a fixed constant, it is incorrect to use one as anything but an input image to a program. Also, because the bounds of a constant image are essentially unlimited, it is impossible to copy a constant image to a disk image without specifying the region to be copied. The following command uses imgcp to create a CMU format disk image with 200 columns and 100 rows containing the integer constant value 3.

```
imgcp crop:0,99,0,199:constant:3 con3.img
```

The keyword constant: may also be used to create a multi-band constant image. Instead of a single constant value, several constants may be specified, separated by commas. The number of constants must match the number of bands in the multi-band image. For example, the following command fills the matrox display with red.

```
imgcp -c con:255,0,0 matrox
```

The keyword constant: may be abbreviated to con:.

4.4 Display Device

The keyword display (which does not require a colon) indicates the display device appropriate for the machine. This keyword is commonly an alias indicating a hardware frame buffer or a display on a remote machine.
The following command copies the image tree.gif to the display device where it can be viewed on a monitor.

```
imgcp tree.gif display
```

The keyword display can be abbreviated to dis.

4.5 Matrox Boards

The keyword matrox (which does not require a colon) indicates the Matrox display device. This keyword can only be used on machines which have a Matrox display. The Matrox supports 8-bit unsigned integer pixels. It has 480 rows and 512 columns. The following command copies the image sunset.gif to the Matrox display where it can be viewed on a monitor.

```
imgcp sunset.gif matrox
```

4.6 Androx Boards

The keyword androx indicates the Androx display device. This keyword can only be used on machines which have an Androx display. The Androx supports 8-bit unsigned integer pixels. It has 480 rows and 512 columns.

4.7 Selecting Pixel Characteristics

The keywords unsigned:, signed: and float: may be used to select the pixel characteristics of a new image which is being created by a program. The selected characteristics override the characteristics which the program would ordinarily select itself. However, the selected characteristics may be overridden by the library depending on the capabilities of the image file or device. The keyword unsigned: is followed by an integer, a colon and an image name expression. The selected integer pixel holds only positive pixel values. For example, an unsigned 8-bit pixel holds integral pixel values in the range 0 to 255. The integer, which is optional, indicates the number of bits needed to store each pixel. If it is omitted, then the image will be created with the number of bits per pixel that was selected by the program. The unsigned: keyword may be abbreviated to u:. The following example specifies that the Sobel edge detection algorithm is to allow 16 bits for the detected edge magnitudes.
```
edge Sobel house.gif -m u:16:house.mag.gif
```

The keyword signed: is similar to unsigned: but selects signed integer pixels. A signed 8-bit pixel holds integral pixel values in the range -128 to 127. The keyword signed: may be abbreviated to s:. The keyword float: selects floating-point pixels. As with the other pixel types, the number of bits per pixel may be omitted. In that case, it defaults to the size of a floating-point number on the machine. (Floating-point numbers are 32 bits on the Vax and the Sun.) The keyword float: may be abbreviated to f:. The following example copies an image and converts it to floating-point pixels.

```
imgcp sunset.gif f:32:sunset.float.gif
```

4.8 Network Access

The keyword network: indicates that an image is to be accessed over the network. Network image access is only available when the remote machine is running the image network daemon. The network: keyword is followed by the name of the remote machine, a colon and the name of the image on the remote machine. Due to the difficulty of validating remote users, it is not possible to create or modify image files over the network. For example, the following command copies the image ocean.img to the display device on the iusb Sun (i.e., its matrox).

```
imgcp ocean.img network:iusb:display
```

The keyword network: may be abbreviated to net:. The keyword may also be completely omitted if the name of the remote machine is known to the library. Thus, the following command also copies the image ocean.img to the display device on the iusb Sun.

```
imgcp ocean.img iusb:dis
```

4.9 Shifting an Image

The keyword shift: may be used to shift the co-ordinates of an image. The keyword is followed by the desired starting row and column co-ordinates. Either co-ordinate may be omitted and will default to the actual co-ordinate of the image.
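The relocation performed by shift: is a pure translation of co-ordinates. A Python sketch of how a shifted co-ordinate maps back to the underlying image (the function is invented for illustration, not library code):

```python
def make_shift(row_start, col_start, actual_origin):
    """Return a mapper from shift:ed co-ordinates to actual co-ordinates.

    row_start, col_start -- new starting co-ordinates; None means the
                            corresponding actual co-ordinate is kept
    actual_origin        -- (row, col) origin of the underlying image
    A sketch of the software-only relocation described in section 4.9.
    """
    actual_row, actual_col = actual_origin
    new_row = actual_row if row_start is None else row_start
    new_col = actual_col if col_start is None else col_start
    delta = (actual_row - new_row, actual_col - new_col)
    return lambda r, c: (r + delta[0], c + delta[1])

# shift:100,200 applied to an image whose origin is (0, 0)
to_actual = make_shift(100, 200, (0, 0))
```

Here a program's fetch at (100, 200) reaches pixel (0, 0) of the underlying image; the file itself is untouched.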
The image, which must already exist, will be relocated so that its rows and columns start at the specified co-ordinates. The shifting is done entirely in software and has no effect on the image file itself. The following command illustrates the use of imgcp to create a copy of house.img which starts at row 100 and column 200.

```
imgcp shift:100,200:house.img newhouse.img
```

4.10 Cropping an Image

The keyword crop: can be used to select a portion of an image. The keyword is followed by the desired row and column start and end bounds. The specified bounds must be within the actual bounds of the image. Any of the bounds may be omitted; they will default to the actual image bounds. The following command illustrates the use of imgcp to create a new image branches.img which contains a portion of the image tree.img. Only the ending row bound has been specified, so the other bounds default to the actual image bounds. The selected portion of tree.img is thus all the rows from the top of the image to row 200.

```
imgcp crop:,200,:tree.img branches.img
```

The crop: keyword may also be used in the name of an output image which is being created by the program. In that case the generalized image library opens an existing image or a display device and allows the program to overwrite the specified portion of the image. Thus, the following command copies small.img to the top-left corner of the display device. An easier way to achieve the same result is presented in the next section.

```
imgcp small.img crop:0,239,0,255:display
```

4.11 Dividing an Image

It is often useful to be able to access a portion of a display device. The divide: keyword can be used to divide an image into pieces and provide access to a specific piece. The keyword is followed by three integers separated by commas: the number of vertical divisions, the number of horizontal divisions and the piece number. The pieces of an image are numbered from one in standard row-order sequence.
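That row-order numbering can be sketched as follows. The function and its end-exclusive bounds are illustrative, and the library's handling of divisions that do not divide evenly may differ:

```python
def divide_bounds(vdiv, hdiv, piece, rows, cols):
    """Bounds of one piece of an image divided vdiv x hdiv ways.

    Pieces are numbered from 1 in standard row-order sequence:
    piece 1 is the top-left, piece hdiv is the top-right.
    Returns (row_start, row_end, col_start, col_end), end-exclusive.
    (Sketch only; uneven divisions are simply truncated here.)
    """
    piece_row = (piece - 1) // hdiv
    piece_col = (piece - 1) % hdiv
    height, width = rows // vdiv, cols // hdiv
    return (piece_row * height, (piece_row + 1) * height,
            piece_col * width, (piece_col + 1) * width)

# divide:2,3,3 on a 480x512 display selects the top right-hand piece
```

The same function with vdiv = hdiv = 2 gives the quarter numbering of section 4.12: piece 3 is the bottom-left quarter.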
For example, the following command copies a very small image to the top right-hand corner of a display which has been divided into six pieces: halves vertically and thirds horizontally.

```
imgcp tiny.img divide:2,3,3:display
```

The arrangement of the display divisions is shown in figure 2. The divide: keyword is especially useful in conjunction with the automatic zoom option -z of imgcp, which magnifies the input image to fit the specified output image. This allows arbitrary images to be displayed on portions of the screen with reduced resolution. For example, the following two commands copy a color stereo image to the display. The first command copies the left portion of the image to the left half of the screen. The second command copies the right portion of the image to the right half of the screen.

4.12 Quarters of an Image

The keyword quarter: may be used to select a quarter of an image. It is followed by an integer which is the quarter number, then a colon and an image name expression. The quarter number ranges from 1 in the top-left corner to 4 in the bottom-right corner, as shown in figure 3. The keyword quarter: is a synonym for divide:2,2. It may be abbreviated to q:. For example, the following command copies archway.img to the bottom-left corner of the display.

```
imgcp -zc archway.img q:3:display
```

4.13 Extending an Image

The extend: keyword is the opposite of crop:. Instead of reducing the size of the image, extend: increases the image size by filling around the image with either a constant value or replicated pixels. The keyword is followed by an optional constant fill value or, for multi-band images, a constant fill vector enclosed in parentheses. When using parentheses on shell command lines it is important to enclose the entire image name expression in quotes. The optional fill value is followed by the row and column start and end bounds of the extended image.
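Both crop: (section 4.10) and extend: resolve omitted bounds against the underlying image. That defaulting can be sketched with a hypothetical helper, assuming four comma-separated fields:

```python
def resolve_bounds(spec, actual):
    """Resolve "row-start,row-end,col-start,col-end" against actual bounds.

    Empty fields default to the corresponding actual bound, as with
    the crop: and extend: keywords.  (Hypothetical helper function.)
    """
    fields = spec.split(",")
    return tuple(int(f) if f else a for f, a in zip(fields, actual))

# crop with only the ending row bound given, on a 480x512 image:
bounds = resolve_bounds(",200,,", (0, 479, 0, 511))
```

Here bounds resolves to (0, 200, 0, 511): all rows from the top of the image to row 200, across the full width.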
Any of the new bounds may be omitted and defaults to the bounds of the underlying image expression. If a fill value is specified then the image is extended with the value. If no fill value is specified then the image is extended by replicating the border pixels. The following example extends an image by replicating the border pixels. The input image has 480 rows and 512 columns and has origin zero. The output image has 701 rows and columns with the origin at -100.

```
imgcp extend:,-100,600,-100,600:fred.mip extendfred.gif
```

As another example, the following command extends a color image by surrounding it with red pixels.

```
imgcp 'extend:(255,0,0),-100,600,-100,600:aus.red.mip' aus.red.img
```

4.14 Tee

The tee: keyword is useful for putting an output image in more than one place at once. For instance, it can be used to write the output of a program to an image file for later use and also display it on a display device. The tee: keyword is followed by the names of two images, separated by a comma. If the first image name contains a comma, then that name should be enclosed in parentheses. As an example of the use of tee:, the following command uses the sobel(1) program to detect edges. The detected edges are stored in the image file edge.img and are also displayed for viewing while the program is running.

```
edge Sobel house.img -m tee:edge.img,display
```

4.15 Magnify and Zoom

The magnify: keyword may be used to magnify or reduce an image. The keyword is followed by a rational number which is the magnification factor. The magnification factor may optionally be followed by a second magnification factor. The factors are followed by a colon and an image name expression. The magnification is done entirely in software and has no permanent effect on an input image. The magnification factor is a rational number, represented as two integers separated by a slash (/).
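As described below, a factor greater than one replicates pixels and a factor less than one selects a subset of them. A one-dimensional Python sketch of that sampling (the library's exact pixel selection may differ):

```python
from fractions import Fraction

def magnify_row(pixels, factor):
    """Nearest-neighbour scaling of one row by a rational factor.

    factor -- a rational such as "4" or "2/3", as written after magnify:
    Output pixel i is taken from input pixel floor(i / factor).
    (Illustrative sketch, not the library's implementation.)
    """
    f = Fraction(factor)
    out_len = int(len(pixels) * f)
    return [pixels[int(i / f)] for i in range(out_len)]
```

With factor "4", each input pixel is replicated four times along the row (16 times in two dimensions); with factor "2/3", six of every nine pixels in a row are kept, which in two dimensions amounts to four of every nine.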
If it is greater than one, then the image visible to the program will be a magnified version of the physical image. A magnification factor of less than one effects a reduction, in which the image visible to the program consists of pixels selected from the physical image. For example, suppose that small.img is a small image and we wish to display it magnified four times. We can use the magnify: keyword and the imgcp command as follows. The magnification is performed by replicating each input pixel value to occupy 16 pixels of the display. As another example, suppose that large.img is too large to fit on the display device. imgcp may be used to magnify it by a factor of 2/3 and display it as follows. The reduction is performed by selecting four of every nine input pixels.

```
imgcp magnify:2/3:large.img display
```

When two magnification factors are specified, the first is applied to the rows of the image and the second is applied to the columns. It is thus possible to achieve an aspect ratio adjustment. For example, the Matrox has an aspect ratio of approximately four rows to every three columns. This means that images digitized with the Matrox have a greater density of pixels in the row direction. When such images are printed with a standard aspect ratio they appear vertically stretched. This effect can be corrected by magnifying the columns of the image by a factor of 4/3 or, equivalently, magnifying the rows by 3/4. The following example corrects the aspect ratio of an image and prints it using iht.

```
iht magnify:3/4,1:tree.img
```

The zoom: keyword is similar to magnify: but is intended to be used with output images, especially display devices. It is important to note that both magnify: and zoom: establish a relationship between the external image which you see and the internal image which the program sees. The effect of magnify: on an output image is therefore somewhat counter-intuitive. For example, consider the following command.
```
imgcp phone.img magnify:2:display
```

At first sight, it may appear that this command displays phone.img enlarged by a factor of two. Actually, the displayed image will be two times smaller than phone.img. This occurs because the magnify: keyword establishes a relationship between the external display device and the internal image in which the external image is half the size of the internal image. In such situations, the zoom: keyword should be used. Zoom: achieves the intended effect because it is designed for use with output images. For example, the following command displays phone.img zoomed by a factor of two.

```
imgcp phone.img zoom:2:display
```

The zoom: keyword is especially useful in conjunction with the cursor program, described in section 8.3.

4.16 Digitization

The generalized image library supports two methods of digitizing images. Sequences of digitized images can be processed using the image sequence capabilities. Individual images can be digitized when they are opened by use of the camera: or digitize: keywords. The camera: keyword is followed by up to four integer parameters: the camera number, the gain and offset, and an interaction level. The camera number defaults to 0. The gain and offset default to reasonable values for the display device. The interaction level defaults to 1, which indicates interactive digitization. The parameters are followed by a colon and an image name expression. When the generalized image library encounters the camera: keyword, it first opens the image name expression following the camera parameters. The library expects that image to be some sort of physical device which supports digitization. After opening the image it uses the specified camera number, gain and offset to activate the digitizer. If the interaction level is zero, an image is grabbed and returned to the program. If the interaction level is not zero, a simple interactive command interpreter is entered.
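Parameter defaulting for camera: can be sketched as follows. The gain and offset defaults used here (255 and 0) are assumptions for illustration only; the library actually chooses values appropriate to the display device:

```python
def camera_params(spec):
    """Fill in defaults for the up-to-four camera: parameters.

    spec -- "camera[,gain[,offset[,interaction]]]", possibly empty
    Returns (camera, gain, offset, interaction).  The gain/offset
    defaults below are assumed; the library picks device-appropriate ones.
    """
    defaults = [0, 255, 0, 1]
    given = [int(field) for field in spec.split(",")] if spec else []
    return tuple(given + defaults[len(given):])
```

Under this sketch, digitize: behaves like camera_params("") does: every parameter takes its default, including interaction level 1 for interactive digitization.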
The command interpreter recognizes simple commands which allow the user to change the camera number, gain and offset. Figure 4 summarizes the commands which are available when digitizing an image.

?          Obtain a list of commands.
yes        Digitize the image (default).
camera     Select the camera number.
setcamera  Set camera gain and offset.

Figure 4: Digitization commands.

The user can watch the live image and press return when he is ready to capture the image. Once this is done, the digitized image is available for the program to use. The following command illustrates the use of imgcp to digitize an image:

```
imgcp camera:0,255,0,1:matrox mynewimg.img
```

Color digitization is achieved by opening a color image. In the preceding example, the matrox has been opened as a black-and-white device, so the digitized image will be a black-and-white image. In this case, the digitized image will consist of an average of the three color inputs: red, green and blue. In the case where only one input (usually the red input) is significant, it is necessary to either digitize a color image and discard two of the files or use the bmatrox keyword, which accesses the red board of a color matrox as though it were a stand-alone matrox board. In both cases the live image will be displayed red but the digitized image will be true black-and-white. The digitize: keyword is equivalent to the camera: keyword with all the default values. It can be abbreviated to dig:. For example:

```
imgcp dig:bmatrox newimg.gif
```

4.17 Files and Devices

The BNF grammar presented in section 2 identified certain keywords which represent physical devices. For example, the name display represents a display device. This means that an image file named 'display' cannot be specified by just giving its name. So, how can you access an image file named 'display'? There is, of course, more than one answer.
However, the most direct approach is to use one of the image format keywords to clearly identify that the name is an image file name and not a keyword. For example, the following command copies the CMU format image file display to the physical display device.

`imgcp cmu:display display`

Because of the confusion that could result from using file names which are the same as or similar to keywords, the generalized image library requires image file names to contain a period. This restriction can be lifted by using one of the image format keywords to clearly identify the name as a file name.

5 Multi-band Images

Traditionally, an image has been viewed as a rectangular array of pixel values. The pixel values may be intensity measurements or other scalar values obtained on an integer grid. Multi-band images extend this view by allowing each pixel position to have several different measurements. Each measurement is called a band of the image, and the multi-band image consists of a number of band images. For example, a standard color image is composed of three bands: the red, green and blue bands. When programs are operating on multi-band images, the generalized image library must know which bands are required by the program. This information is usually supplied by the program, so in most cases you do not need to specify the bands of a multi-band image. In particular, programs often open color images. In such cases it is sufficient to name a color device or one of the bands of a color image on disk. Some programs, however, require the user to indicate the bands to be used. This is true of general-purpose programs such as `imgcp` which are capable of manipulating arbitrary multi-band images. When using general-purpose programs, the \texttt{bands:} keyword may be used to specify the bands of an image.

5.1 Specifying the Bands of a Multi-band Image

The \texttt{bands:} keyword may be used to specify the bands of a multi-band image.
The keyword is followed by a \textit{bands specification}, then a colon and an image name expression. The \texttt{bands:} keyword is useful in two situations.

1. Some general-purpose programs such as \texttt{imgcp} allow you to specify arbitrary single-band or multi-band images. These programs rely on you to specify the bands of a multi-band image using the \texttt{bands:} keyword.

2. Many programs know in advance the type of multi-band image which they will be dealing with. The generalized image library has limited facilities for coercing multi-band images from one type to another. You can therefore specify a different multi-band image and rely on the library to do the appropriate conversion. For example, a program which expects to create a stereo image can be made to create a color image instead. The resulting color image is suitable for viewing with red/blue movie glasses.

\texttt{Imgcp} is a general-purpose program which is capable of copying arbitrary multi-band images. When it is opening the input image, it does not know in advance what bands the image should have. Instead, it opens the image and then finds out what bands it happens to have. Unless you specify the image bands with a keyword, the generalized image library will assume that the image is an ordinary single-band image. For example, consider the following two commands. The first command copies a single-band image. The image happens to be the red band of a color image. The second command copies a color image, copying all three bands at once. Notice that the \texttt{bands:} keyword is not required on the output image because \texttt{imgcp} assumes that the output image will have the same bands as the input image.

\begin{verbatim}
imgcp sunrise.red.img display
imgcp bands:red,green,blue:sunrise.red.img display
\end{verbatim}

The \texttt{bands} specification which follows the \texttt{bands:} keyword gives the names of all the bands contained in the image.
The specification consists of one or more \textit{band names} separated by commas. Each band name consists of one or more \textit{attribute names} separated from each other by periods. The attribute names represent actual attributes or characteristics of the image band being referred to.\footnote{If you use the \texttt{-c} option to specify a color image then \texttt{imgcp} opens a color image instead of using the general-purpose approach. Similarly, if you use the \texttt{-s} option, \texttt{imgcp} opens stereo images.} For example, the attribute name **red** indicates an image taken with a red filter. Similarly, the attribute name **left** indicates an image taken from the left camera position. Figure 5 lists some standard attribute names and their meanings.

**Figure 5:** Some Standard Attribute Names

The bands specification of a standard color image was used in the above `imgcp` examples. As we have already seen, such an image has three bands: one band has the **red** attribute, a second has the **green** attribute and the third has the **blue** attribute. Each band has only a single attribute, being the color, so the band names are **red**, **green** and **blue** respectively. The complete bands specification is a combination of the three band names: **red**, **green**, **blue**. As another example, consider a stereo image. A stereo image has two image bands, corresponding to the left and right camera positions. The left image has the **left** attribute and the right image has the **right** attribute, so the band specification is **left**, **right**. Because of the multi-band conversion capabilities of the generalized image library, `imgcp` can be used to copy a stereo image into a color image, producing a color image suitable for viewing with red/blue movie glasses. The input image is a stereo image consisting of two image files `chair.left.img` and `chair.right.img`.
The output color image consists of the files `chair3d.red.img`, `chair3d.green.img` and `chair3d.blue.img`. In the following example, the bands specifications have been typed in full.\footnote{The order of band names in a bands specification is significant. An image with bands **red, green, blue** is very different from an image with bands **blue, green, red**.} In practice, this is usually unnecessary because there are other keywords which represent commonly-used multi-band specifications such as color and stereo images.

```
imgcp bands:left,right:chair.left.img bands:red,green,blue:chair3d.red.img
```

As a final, more complex example of a bands specification, consider a color stereo image. Such an image has six bands. Each band has two attributes, being the camera position (left or right) and the filter color (red, green or blue). The first band of the image has the two attributes left and red. The band name is thus left.red. The complete bands specification for a color stereo image is

left.red, left.green, left.blue, right.red, right.green, right.blue

5.2 Multi-band Images on Disk

Multi-band images are stored on disk with each band in a separate image file. This provides greater flexibility than if the bands were stored in a single file. However, the generalized image library must be capable of determining the name of the file in which each band is stored. For this reason, a naming convention is employed. Multi-band images should be named according to this convention in order to facilitate their use with the generalized image library. The generalized image library requires all the bands of a multi-band image to be stored in the same directory. When a multi-band image name is being given, the user must specify the full name of one of the bands of the image. For example, chair.left.img is the full name of the left band of a stereo image. The library uses the supplied name and the bands specification to determine the full names of the other bands of the image.
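This determination can be sketched as follows, under the simplifying assumptions spelled out in the naming rules below (attributes delimited by non-alphanumeric characters, substitution only after the last slash). The sketch is written in Python for illustration; the function name and details are hypothetical, not the library's actual code.

```python
import os.path
import re

def band_file_names(supplied, supplied_band, all_bands):
    """Given the file name of one band, the name of that band, and the
    full bands specification, derive the file names of all the bands.

    e.g. band_file_names('scenes/chair.left.img', 'left', ['left', 'right'])
    """
    directory, base = os.path.split(supplied)  # substitute only after last slash
    names = {}
    for band in all_bands:
        new_base = base
        # substitute each attribute of the supplied band in turn
        for old_attr, new_attr in zip(supplied_band.split("."),
                                      band.split(".")):
            # attributes are delimited by non-alphanumeric characters,
            # but may be preceded by digits (naming rule 5 below)
            pattern = r"(?<![A-Za-z])%s(?![A-Za-z0-9])" % re.escape(old_attr)
            new_base = re.sub(pattern, new_attr, new_base)
        names[band] = os.path.join(directory, new_base)
    return names
```

The delimiter check matters: in a name like `fred.red.img`, only the `red` that stands alone as an attribute is substituted, not the `red` embedded in `fred`.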
In the example, the name of the right band of the stereo image is found by substituting right for left in the supplied name. Thus, chair.right.img is opened as the right band of the stereo image. In order for the substitution described above to succeed, the following rules must be strictly adhered to when naming multi-band images on disk.

1. The name of each image file must contain all the attributes which are relevant to that image. For example, the name of the red band of the left image of a color stereo image must contain both the attributes red and left.

2. Attribute names are only substituted after the last slash (/) in the file name. All the files belonging to a single multi-band image must therefore reside in a single directory. However, there may be more than one multi-band image in the same directory.

3. The names of corresponding image files must differ only in the attributes present in the names. Thus, sunset.red.img and sunset.green.img are part of the same color image, but tree.red.img and tree.blue.gif are not.

4. The order of corresponding attributes must be preserved. Thus, house.left.red.img can be part of the same color stereo image as house.right.green.img. However, house.left.red.img and house.green.right.img are not acceptable.

5. Attributes must be clearly delimited by non-alphanumeric characters. For example, \texttt{house.red.img} is acceptable but \texttt{housered.img} is not. The one exception to this rule is that attributes may be preceded by numeric characters. This is to provide compatibility with the naming convention which was employed previously.

6. Attributes may not be abbreviated, with the exception of the color attributes. The color attributes may be abbreviated to the single letters \texttt{r}, \texttt{g} and \texttt{b}. If the color attribute is abbreviated in one band of a multi-band image, then it must be abbreviated in all bands.

7. Attributes are arbitrary alpha-numeric strings.
However, the standard attribute names should be used whenever possible to avoid confusion. 5.3 Color Images The \texttt{color:} keyword may be used in place of the full bands specification \texttt{bands:red,green,blue:} when dealing with color images. The \texttt{color:} keyword may be abbreviated to \texttt{c:}. For example: \begin{verbatim} imgcp c:sunset.red.img display \end{verbatim} 5.4 Stereo Images The \texttt{stereo:} keyword may be used instead of the full bands specification \texttt{bands:left,right:} to indicate a stereo image. The \texttt{stereo:} keyword may be abbreviated to \texttt{st:}. For example: \begin{verbatim} imgcp shift:100,200:st:house.left.img newhouse.left.img \end{verbatim} 5.5 Three-Dimensional Viewing with Red/Blue Movie Glasses As explained above, the generalized image library can convert a stereo image into a color image suitable for viewing with red/blue movie glasses. This conversion takes place whenever a color image is supplied to a program which expects a stereo image. The keyword \texttt{threed:} can be used to explicitly force the generalized image library to create a 3D image. The keyword \texttt{threed:} can be abbreviated to \texttt{3d:}. \begin{verbatim} imgcp stereo:chair.left.img 3d:chair3d.red.img \end{verbatim} 5.6 Black and White Images The `bw:` keyword can be used to explicitly force an image to be a black-and-white image. This is useful if a program expects a color input image but you wish to give it an ordinary single-band image instead. In such cases, the single-band image is converted to a color image in the obvious way; the red, green and blue intensity values at each point are exactly the intensity values in the black-and-white image. For example: ``` imgcp -c bw:house.img house.red.img ``` 6 Cursor Positioning Some programs may use the cursor positioning facilities of the generalized image library. These facilities allow you to indicate pixel locations in an image by positioning a cursor. 
Figure 6 indicates the keys which may be used to position the cursor.

**Figure 6: Cursor movement keys.**

- `f`, `k`: Cursor to the right (forward) one pixel.
- `F`, `K`: Cursor to the right 8 pixels.
- `control-F`, `control-K`: Cursor right 64 pixels.
- `b`, `h`: Cursor to the left (backward) one pixel.
- `B`, `H`: Cursor to the left 8 pixels.
- `control-B`, `control-H`: Cursor left 64 pixels.
- `p`, `u`: Cursor up one pixel.
- `P`, `U`: Cursor up 8 pixels.
- `control-P`, `control-U`: Cursor up 64 pixels.
- `n`, `j`: Cursor down one pixel.
- `N`, `J`: Cursor down 8 pixels.
- `control-N`, `control-J`: Cursor down 64 pixels.
- `1`: Cursor down and to the left.
- `2`: Cursor down.
- `3`: Cursor down and to the right.
- `4`: Cursor left.
- `6`: Cursor right.
- `7`: Cursor up and to the left.
- `8`: Cursor up.
- `9`: Cursor up and to the right.

The numeric cursor positioning keys are intended to be used on terminals with a numeric keypad. The amount by which the cursor moves cannot be controlled by the shift and control keys as for the other methods of cursor positioning. Instead, three of the numeric keypad keys provide the ability to set the amount of cursor movement for the numeric cursor positioning keys. The minus (-) key sets the cursor movement to 1 pixel. The zero (0) key sets the cursor movement to 8 pixels, which is the default. The period (.) key sets the movement to 64 pixels. When the cursor is in the desired position, press return and the cursor position will be returned to the program. At any point, you may press the v (value) key. The library will then report the current cursor position and the value(s) of the image at that position. The t key may also be used to toggle the cursor on and off: pressing t when the cursor is displayed causes it to disappear, and pressing t when the cursor is not visible causes it to appear. The cursor is always visible after you move it. If you wish to abort a program during cursor positioning, your usual interrupt key may be used. If you need help, the ? key results in a message which describes the keys in detail. If you press an unrecognized key, then a brief message will be printed on your terminal.

7 Environment

The generalized image library makes use of the following environment variables: `IMDEBUG`, `IMCURSOR`, `IMSYNC`, `IMLOAD`, `IMPATH` and `IMCREATE`. `IMDEBUG` is used for debugging and error checking control. `IMCURSOR` is used to control the method of cursor movement in programs which use the GIL cursor facilities. `IMSYNC` is used to control the sync source for display devices. `IMLOAD` is used to select the library version to be loaded at run time. `IMPATH` contains a directory path list which is searched when opening image files. **IMCREATE** contains a single directory name which is used as a prefix when image files are being created.

The environment variable **IMDEBUG** may be used to control error checking and debugging in programs built with the generalized image library. The **IMDEBUG** environment variable contains a string of option letters which affect various portions of the library. The following options are available:

1. **b**: Bounds check. This option strictly checks all image accesses to ensure that they are within the allowable bounds. It reports any violations as an error. Without this option, the result of an out-of-bounds image access is not defined; it may result in a program crash or in corruption of the image.

2. **c**: Enable CRC check. Programs which are compiled with the option `-DDEBUG=1` have special code generated which can perform a simple cyclic redundancy check on the generalized image structure before invoking the pixel access routines. Normally, these checks are not performed because they are too time consuming. However, if the `c` option is enabled then the CRC checks are performed. Irrespective of whether the `c` option is present in **IMDEBUG**, cyclic redundancy checks are performed whenever an image is closed and at certain other strategic points in the generalized image library.

3. **d**: Dump core on error.
If the library detects an error which would cause the program to abort, the `d` option causes a core dump to be produced. This is particularly useful when developing programs which use the library. 4. **e**: Efficiency report. The `e` option causes the generalized image library to report information which may be useful in tracking down suspected efficiency problems. This is particularly useful when doing development work on the library. 5. **i**: Identify. If this option is present, the first attempt to open or create an image will cause the generalized image library to display its version identification. This is useful if you suspect that a program may have been built with an old version of the library. 6. **l**: List active images on error abort. When the program is aborted by the generalized image library, the active generalized image structures are listed. This provides useful information for debugging. 7. **n**: Network debugging. This option causes the program to use a debugging version of the network server. Useful for development work on the network server. 8. **q**: Quiet mode. This option suppresses informative messages that are otherwise produced by the library. For example, the message associated with **IMCREATE** is suppressed if the **q** option is present in **IMDEBUG**. 9. **v**: Value check. The **v** option enforces strict pixel value checking. Every operation which fetches or stores pixels is checked to ensure that the pixel values are in the valid range. The first range error is reported but execution continues. All out-of-range pixels are truncated to the nearest extreme value of the range. *Unimplemented.* 10. **z**: Wizard debugging information. The **z** option is used by developers to display debugging information. This information will probably be unintelligible to users. 11. **F**: Fake forks. When debugging GIL operations which involve forking, it is sometimes useful to disable the actual fork operations. 
When the option **F** is set, the GIL does not execute the fork operation but continues processing as though it were the child process. Useful for wizards only.

12. **M**: Macro expansion debugging. The **M** option is used by developers to debug GIL macro translation. This information will probably be unintelligible to users.

Within the **csh** shell, the **setenv** command may be used to set the **IMDEBUG** environment variable. For example, the following command sequence runs the program **myprog** with strict bounds checking on image access and with core to be dumped if an error abort occurs.

```csh
% setenv IMDEBUG bd
% myprog in.img out.img
```

Once **IMDEBUG** has been set, the options remain in effect until it is reset. To cancel all options, **IMDEBUG** may be set to the empty string as follows.

```csh
% setenv IMDEBUG ""
```

The **IMCURSOR** environment variable is used to select alternate methods of moving the GIL cursor. By default, cursor motion is based on keyboard commands as explained in section 6. If the environment variable **IMCURSOR** is set to **x** (or **X**) then the GIL will use the mouse under the X window system to move the GIL cursor.

The **IMPATH** environment variable contains a path list of directories which are searched for an image file that is being opened. The path list is not searched if the image file name is an absolute path, that is if it commences with a slash (/). The path list consists of any number of directory names separated from each other by colons. Normally, the first directory in the path is dot (.) which causes the library to look for the image in the current directory. As an example, consider a user fred who has many of his images stored in the directory /visi/fred and some additional images in the directory /visi/fred/extras. The following csh shell command could be used to set the image library's path to search both of these directories after the current directory.
```
% setenv IMPATH '.:/visi/fred:/visi/fred/extras'
```

After executing the above command to establish his path, the user fred can omit the full path names of image files which reside in either of the directories /visi/fred or /visi/fred/extras. So, he can copy the image /visi/fred/tree.img to the display device with the following command (assuming that there is no file tree.img in his current directory).

```
imgcp tree.img display
```

He can also copy a file called /visi/fred/experiment/tree.img to the display device with the following command.

```
imgcp experiment/tree.img display
```

The path searching strategy applies only when existing images are being named. When a new image is being created, the name must be specified in full unless IMCREATE has been set. For instance, if fred wished to copy his image /visi/fred/house.img into GIF format in the same directory, he would use the following command.

```
imgcp house.img /visi/fred/house.gif
```

The IMCREATE environment variable contains a single directory name. When an image file is being created, the generalized image library prefixes the file name with the contents of the IMCREATE environment variable. This is not done if the file name is an absolute path, that is if it commences with a slash (/). IMCREATE is also not used if the image file name commences with a period (.) as it is then assumed to be explicitly named relative to the current directory. When IMCREATE is used, the library reports the full name of the image file it is creating. The message may be suppressed by the q option in IMDEBUG. As an example, consider again the user fred who likes to store his images in /visi/fred. The following csh command could be used to set the IMCREATE environment variable so that the library would create images in /visi/fred by default.

```
% setenv IMCREATE /visi/fred
```

After executing the command to establish `IMCREATE`, the user `fred` can omit the full path name when creating image files in the directory `/visi/fred`.
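The resolution rules for `IMPATH` and `IMCREATE` described above can be sketched as follows. This is an illustrative Python sketch of the stated behaviour, not the library's actual code; the function names are hypothetical, and the `exists` parameter is introduced only so the search can be demonstrated without touching the filesystem.

```python
import os

def resolve_open(name, impath, exists=os.path.exists):
    """Search the IMPATH directories for an existing image file."""
    if name.startswith("/"):          # absolute paths are not searched
        return name
    for directory in impath.split(":"):
        candidate = os.path.join(directory, name)
        if exists(candidate):
            return candidate
    return name                       # fall through: let the open fail

def resolve_create(name, imcreate):
    """Prefix a new image file name with IMCREATE when appropriate."""
    if name.startswith("/") or name.startswith("."):
        return name                   # absolute, or explicitly relative to '.'
    return os.path.join(imcreate, name) if imcreate else name
```

For instance, with `IMPATH` set to `.:/visi/fred:/visi/fred/extras`, opening `tree.img` tries `./tree.img` first and then `/visi/fred/tree.img`; with `IMCREATE` set to `/visi/fred`, creating `house.gif` yields `/visi/fred/house.gif`, while `./house.gif` is left untouched.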
He can also abbreviate the path name of images created in directories beneath `/visi/fred`. For example, he can copy the image `/visi/fred/house.img` into GIF format in the same directory with the following command.

```
imgcp house.img house.gif
```

To copy his image `/visi/fred/extras/car.img` into GIF format, he could use the following command.

```
imgcp car.img extras/car.gif
```

The `IMSYNC` environment variable is provided to give the user explicit control over the sync signal used by display devices. Display devices such as the matrox are capable of synchronizing their output signals either to an external sync signal or to an internally generated signal. Since the external signal is often unstable, the library normally uses an internal signal except when images are being digitized. The following command may be used to set the `IMSYNC` environment variable and force the library to always use the external sync signal.

```
% setenv IMSYNC external
```

To return to the default mode of internally generated sync signal, use the following command.

```
% setenv IMSYNC internal
```

The `IMLOAD` environment variable is provided to give the user explicit control over the load-at-run-time library. Normally, the version of the library which is loaded at run time is determined by the version of `libgimage.a` with which the program was linked. However, the `IMLOAD` environment variable may be set to override this default. If the `IMLOAD` environment variable is set, then its contents are taken as the name of the load-at-run-time file. For example, the following command causes the experimental library to be used.

```
% setenv IMLOAD /usr/vision/experimental/lib/libgimage.ld
```

**Figure 7: The Syntax Summary for Cursor.**

```
% cursor
Usage: cursor input-image(s) [-d display] [-o output]
  -c: Use color.
  -o: Output label points image.
For help during cursor positioning, type '?'
```

Additional debugging information can be obtained by prefixing the file name with a hyphen.
```
% setenv IMLOAD -/usr/vision/experimental/lib/libgimage.ld
```

### 8 Some Useful Utilities

This section describes some useful utility programs which have been implemented using the generalized image library. These include programs for copying images, printing out the header information of an image and interacting with an image via the cursor positioning facilities of the library. If any of the programs is invoked without any arguments, it displays a brief description of how it is used. Once you are familiar with the program, this description will remind you of the exact syntax of the command and the switch names. For example, figure 7 shows the syntax summary for the `cursor` program.

#### 8.1 Header

The `header` program displays the header information of an image. The command syntax is as follows.

```
header [-p] image(s)
```

`Header` can be used with any generalized image and will display the image type, bounds and pixel characteristics. Additional information describing the paging characteristics is printed for CMU format images. The command switch `-p` causes `header` to display the property list of the image(s). The property list is used to store descriptive information. Figure 8 shows two examples of using the header program. In the first example, the characteristics of a CMU format image are displayed along with its property list. In the second example, the header information for the default display device is shown.

8.2 Copying Images

One of the most useful utilities is the imgcp program. The examples throughout this document generally involve copying images from one format to another, so imgcp is used. Imgcp combines with the naming conventions of the generalized image library to provide a powerful tool for creating and displaying images, and converting them from one data format to another. Command line switches provide additional features. The command syntax follows.
```
imgcp [-switches] input-image output-image
```

In its simplest form, imgcp can be used to copy any generalized image to any other generalized image. The input and output name expressions can employ the generalized image library keywords described previously to modify the image. If the output image refers to an existing disk file, it will be destroyed without any warning. If the output image is a display device, imgcp will only copy that portion of the input image which can be accommodated on the display. The command switch -c indicates that the input and output images are color images. The command switch -s indicates that the input and output images are stereo images. Combining the two switches indicates a color stereo image. The command switch -r causes pixel values to be converted from the input image pixel value range to the output image pixel value range. This is useful for displaying binary images, as in the following example.

```
imgcp -r sunthresh.img display
```

Linear transformations may also be invoked by the generalized image library naming syntax or by the -m and -a command switches. The -m switch is used to specify a multiplier and -a indicates a constant to be added after the multiplication. For example, the following command multiplies moon.img by three and adds 128.

```
imgcp -m3 -a128 moon.img display
```

When the pixel values of the input image are unknown, a reasonable display may be produced by using the -n command switch. The -n switch obtains a random sample of the input image and estimates the mean and standard deviation of the pixels. It then computes a linear transformation to the desired mean and standard deviation which are given as arguments to the \texttt{-n} switch. For example:

\texttt{imgcp -n128,32 weird.img display}

**Figure 8: Examples of the Header Program.**

```
% header -p pitt.img
/usr/vision/images/pitt.img:
    Image format: CMU
    351 rows (250:600), 381 columns (190:570)
    8 bits/pixel
    Pixel type: 'unsigned'    Pixel range: 0:255
    132 pages: 11 down, 12 across.
    Each page holds 32 rows, 32 columns of pixels.
    Total pixel storage space = 139264 bytes
    Properties:
        pixel type:  unsigned
        pixel range: 0:255
        description: A scene of Pittsburgh, PA

% header display
display:
    Image format: net
    480 rows (0:479), 512 columns (0:511)
    8 bits/pixel
    Pixel type: 'unsigned'    Pixel range: 0:255
```

The \texttt{-o} command switch can be used to produce complex displays by overlaying several input images. When the \texttt{-o} option is used, \texttt{imgcp} opens the output image and writes the input image to it instead of creating the output image from scratch. It is thus possible to use the \texttt{shift} and \texttt{crop} keywords to position input images on the output. The \texttt{-z} command switch computes an automatic magnification and shift of the input image to fit into the output image. For example, the following command produces a reasonable display of the color image \texttt{myscene.red.img}, magnifying or reducing it as necessary to fit on the display.

\texttt{imgcp -cz myscene.red.img display}

The \texttt{-S} command switch is used to copy image sequences. When this switch is used, \texttt{imgcp} opens the input and output images as sequences rather than as individual images. It then proceeds to interactively copy images from the input sequence to the output sequence. If the input sequence is a display device capable of digitization then input images will be digitized and copied to the output image sequence. For example, the following command may be used to digitize a sequence of images from live video input.
\texttt{imgcp -S matrox newdata.seq1.img}

### 8.3 Cursor

The \texttt{cursor} program provides cursor-based interaction with generalized images. It allows image values to be interrogated interactively and provides the ability to record selected co-ordinates in an output image. The command syntax is as follows.

\texttt{cursor input(s) [-d display] [-o output]}

In its simplest form, the \texttt{cursor} command may be used to interrogate the pixel values of an image under the control of the generalized image library's cursor package. Cursor movement is described in section 6. The input image will be copied to your machine's default \texttt{display} device. If the image is larger than the screen, automatic scrolling/panning will be performed as necessary. For example, the following command allows the image \texttt{large.img} to be viewed on the default display device.

\texttt{cursor large.img}

More than one input image may be specified. In that case, the cursor program will display the pixel values of all the input images whenever return is pressed. Only the first input image, however, will be displayed on the screen. For example, suppose that house.img is a house scene and houseseg.img is a labelled segmentation image. Then the following command can be used to query the region labels at points in the image.

```
cursor house.img houseseg.img
```

The -d switch may be used to override the default display. This is useful not only to specify an alternative display device but also to modify the way in which the display is used. In particular, the display can be zoomed to obtain higher resolution for more accurate positioning of the cursor. The following command example uses a zoom factor of four to enable the cursor to be positioned accurately to the nearest pixel.

```
cursor tree.img -d zoom:4:display
```

The cursor program may be used to record positional data in an output image. The -o switch specifies the output image.
The image will be created and filled with zeroes if it does not already exist. When the -o option is used, pressing return causes a pixel value to be stored in the output image. Pixel locations which have been selected will be marked with a white spot on the display. The stored value is specified interactively when cursor is first executed and may be changed at any time simply by pressing return twice at the same pixel location. You will then be prompted for the new value. To exit from the cursor program when using the -o option it is necessary to press return twice and select minus one.\footnote{This is a hack which should be fixed.} Exiting the program by means of your usual interrupt character will cause the stored values to be lost. This is a bug which remains to be corrected. For example, the following command could be used to locate positional features such as windows and corners in a house scene. The display is zoomed to facilitate accurate cursor positioning.

```
cursor house.img -d zoom:4:display -o housefeat.img
```

9 Paying the Piper

If you think that all this flexibility makes programs big and slow, you may have a point. The speed cost of using the flexibility of the generalized image library varies, but is usually low. The generalized image library provides faster access to CMU format images than was possible with the old library. The facility for loading the library at run-time makes programs smaller but involves a fixed overhead of loading the entire library every time a program is started.
Trustworthy Computing

Software Vulnerability Management at Microsoft

July 2010

Contents

- Introduction
- High-Quality Security Updates
  - The Microsoft Development Process for Security Updates
  - Minimizing the Number of Security Updates
  - Security Updates for All Affected Products Are Released Simultaneously
  - Coordinated Security Update Releases
  - Application Compatibility Testing
  - Decreased Complexity and Time to Deploy Security Updates
  - Consolidated Security Updates to Minimize System Restart
- Community-Based Defense
  - Sharing Information and Intelligence with Partners
  - Sharing Research and Knowledge with Independent Software Vendors
  - Working with Security Researchers and Advocating for Coordinated Vulnerability Disclosure
- Comprehensive Security Response Process
  - Predictable and Transparent Release Cycle
  - Communications and Guidance
  - Managing Risk through Standardized Workflow
- Conclusion
- Contributors

Introduction

Vulnerabilities are weaknesses in software that enable an attacker to compromise the integrity, availability, or confidentiality of that software or the data it processes. Some of the most severe vulnerabilities enable attackers to run software code of their choice, potentially compromising the system’s software. The disclosure of a vulnerability is the revelation of a vulnerability to the public at large.
Disclosures can come from various sources, including software vendors, security software vendors, independent security researchers, and those who create malicious software (also known as “malware”).

It is impossible to completely prevent vulnerabilities from being introduced during the development of large-scale software projects. As long as human beings write software code, no software is perfect and mistakes that lead to imperfections in software will be made. Some imperfections (“bugs”) simply prevent the software from functioning exactly as intended, but other bugs may present vulnerabilities. Not all vulnerabilities are equal; some vulnerabilities won’t be exploitable because specific mitigations prevent attackers from using them. Nevertheless, some percentage of the vulnerabilities that exist in a given piece of software poses the potential to be exploitable.

Manual code reviews performed by developers and testers, in concert with automated tools such as fuzzers and static analysis tools, are very helpful techniques for identifying vulnerabilities in code. But these techniques cannot find every vulnerability in large-scale software projects. As developers build more functionality into their software, their code becomes more and more complex. The challenge of finding vulnerabilities in very complex code is compounded by the fact that there are an infinite number of ways that developers can make coding errors that can create vulnerabilities, some of which are very subtle.

To illustrate how subtle a security vulnerability can be, the following small code sample contains a vulnerability that is difficult to find using code reviews, tools, or both.

```c
bool fAllowAccess = true;
if (AccessCheck(...) == 0 && GetLastError() == ERROR_ACCESS_DENIED)
    fAllowAccess = false;
```

In this real-world example, the developer who wrote the code intended to check whether the user running the program should be denied access to the program or granted access. The problem is that the function the developer is using to make that decision, AccessCheck(), can fail for many reasons, many of which are not conditions related to denying access. For example, if the application runs out of memory during this operation, the function could return an “out of memory” error instead of the “access denied” error that the developer was expecting. Because the developer only checks for an “access denied” error, this code will grant access to the user if any error other than “access denied” is returned. This is, therefore, a vulnerability that could potentially be exploited if an attacker could create the right conditions.

Again, developers can make an infinite number of potential mistakes during software development. Many cannot be readily detected by human review (individuals reviewing the code) or machine review (various code review tools and technologies). Some vulnerabilities are not identified because they aren’t known to be a vulnerability when the code is written; that is, security research continues to uncover new types of vulnerabilities that might affect products developed with the best practices available at the time.

Research in the Microsoft Security Intelligence Report (SIR) Volume 8\(^1\) shows that thousands of vulnerabilities of varying severities\(^2\) are disclosed continually across the entire software industry\(^3\) every year (see Figure 1).
**Figure 1: Industry-wide vulnerability disclosures by severity, by half-year from the first half of 2006 through the second half of 2009**

Managing vulnerabilities within software is an industry-wide challenge, one that must balance the flexibility of software design with consistency in coding practices. The customization of applications built on software platforms must be balanced with the wide variety of tools used to build and deploy solutions. Although thousands of disclosures are made throughout the software industry each year, the largest percentage of vulnerability disclosures relate to software applications. Figure 2 shows vulnerabilities for operating systems, browsers, and other components since the first half of 2006.

---

\(^1\) Microsoft Security Intelligence Report Volume 8, [www.microsoft.com/sir](http://www.microsoft.com/sir)

\(^2\) For an explanation of the Common Vulnerability Scoring System (CVSS) scoring methodology, see [http://www.first.org/cvss/cvss-guide.html](http://www.first.org/cvss/cvss-guide.html)

\(^3\) The nomenclature used to refer to different reporting periods is nHyy, where nH refers to either the first (1) or second (2) half of the year, and yy denotes the year. For example, 2H08 represents the period covering the second half of 2008 (July 1 through December 31), and 1H09 represents the period covering the first half of 2009 (January 1 through June 30).

**Figure 2: Industry-wide operating system, browser, and other vulnerability disclosures from the first half of 2006 through the second half of 2009**

In general, trends for Microsoft vulnerability disclosures have mirrored those for the industry as a whole, though vulnerabilities in Microsoft software represent a small fraction of the total population, as seen in Figure 3.
Over the past four years, vulnerability disclosures relating to software manufactured by Microsoft have consistently accounted for 3 to 5 percent of all disclosures industry-wide.\(^4\)

Another way Microsoft tries to manage risk for its customers is through company efforts related to responsible disclosure (RD). Responsible disclosure means disclosing vulnerabilities privately to the vendor who created the software so that a comprehensive security update to address the vulnerability can be developed before the details become public knowledge. Ideally, with responsible disclosure, the release of the security update coincides with the public release of vulnerability information. This helps to keep users safer by limiting the risk that attackers would learn about newly discovered vulnerabilities before security updates are available.

To reduce the risk of an attack to customers, Microsoft and others in the industry work to keep responsible disclosure rates as high as possible, using a variety of strategies to encourage reporting of vulnerabilities. Figure 4 shows responsible disclosures of vulnerabilities in Microsoft software that the Microsoft Security Response Center received in each half-year period since the first half of 2006, as a percentage of all disclosures.\(^5\)

In July 2010, Microsoft announced Coordinated Vulnerability Disclosure (CVD), an evolution of the Microsoft philosophy on the disclosure of vulnerabilities that encourages new levels of coordination between researchers and vendors. CVD generally follows the same principles as responsible disclosure, in an effort to minimize risk to customers and minimize criminal activity on the Internet.

---

\(^4\) Microsoft Security Intelligence Report Volume 8, www.microsoft.com/sir

\(^5\) Microsoft Security Intelligence Report Volume 8, www.microsoft.com/sir
It is important to note that no policy will eliminate risk or criminal/malicious activity; however, it is important to Microsoft customers that the focus always be on introducing less risk and minimizing attacks. The principles associated with Microsoft’s Coordinated Vulnerability Disclosure philosophy are as follows:

- Vendors and vulnerability finders need to work closely together toward a resolution
- Extensive efforts should be made to make a timely response
- Only in the event of active attacks is public disclosure, focused on mitigations and workarounds, likely the best course of action, and even then it should be coordinated as closely as possible

Advocating for CVD helps Microsoft to galvanize the security community around a key point: that coordination and collaboration are required to resolve issues in a way that minimizes risk and disruption for customers. For more details on CVD and the way it is envisioned to work, see http://blogs.technet.com/b/ecostrat/archive/2010/07/22/coordinated-vulnerability-disclosure-bringing-balance-to-the-force.aspx.

Microsoft has also worked collaboratively with software companies through responsible disclosure, and now through coordinated vulnerability disclosure, to help identify vulnerabilities in other commercially available software by leveraging the tools and processes that Microsoft uses to review its own software. The Microsoft Vulnerability Research (MSVR) program\(^6\) is designed to improve the security of software running on the Microsoft platform. Through the MSVR program, Microsoft extends its own security expertise and tools to discover vulnerabilities in third-party code that runs on the Microsoft platform, reports vulnerabilities to affected vendors, and then offers to assist the companies whose software has security vulnerabilities to resolve those vulnerabilities.

---

\(^6\) http://go.microsoft.com/?linkid=9738193
Another measure that suggests the focus on responsible disclosure, and now coordinated vulnerability disclosure, benefits Microsoft customers is the relatively low number of Microsoft “out-of-band” security updates. Microsoft implemented a predictable monthly security update release cycle in October 2003, whereby Microsoft releases security bulletins each month that fix vulnerabilities in Microsoft software. Security bulletins are typically released on the second Tuesday of each month, although on rare occasions Microsoft releases security updates between the monthly security updates (these are also known as “out-of-band” updates) when the vulnerability is determined to pose an urgent risk to customer systems.

Microsoft realizes that out-of-band security updates that address security vulnerabilities are unexpected and present planning and deployment difficulties for enterprise customers. With this in mind, Microsoft only releases out-of-band updates when the risk of exploitation using the vulnerability is determined to be elevated; these decisions are made by weighing the risk of exploitation and the potential impact to customers. Figure 5 shows the number of out-of-band security updates Microsoft released between 2005 and 2009 compared to regular security update releases during the same period.
| Period | Total Security Bulletins | Out-of-band Security Bulletins |
|--------|--------------------------|--------------------------------|
| 1H05   | 33                       | 0                              |
| 2H05   | 21                       | 0                              |
| 1H06   | 32                       | 1                              |
| 2H06   | 46                       | 1                              |
| 1H07   | 35                       | 1                              |
| 2H07   | 34                       | 0                              |
| 1H08   | 36                       | 0                              |
| 2H08   | 42                       | 2                              |
| 1H09   | 27                       | 0                              |
| 2H09   | 47                       | 2                              |
| Total  | 353                      | 7                              |

**Figure 5: Total security bulletins and out-of-band updates released by Microsoft between the first half of 2005 and the last half of 2009**

Despite the relatively low number of vulnerabilities in Microsoft products, historically high responsible disclosure rates, and a low number of out-of-band security update releases, Microsoft understands that security vulnerabilities in Microsoft products still have the potential to disrupt customer experiences. In order to help protect customers from potentially disruptive behavior by criminals seeking to gain access to systems and information through a cyber-attack, Microsoft has developed a security update release process that seeks to give customers a high level of consistency, predictability, quality, and transparency while minimizing risk. The goal is to make it easier for
This security update release process has evolved over the past decade based on insights from direct customer feedback to Microsoft and to keep pace with the constantly changing threat landscape that customers face.7 Still, some security industry influentials and security researchers question why some Microsoft security updates sometimes take long periods of time to release. Opinions in the industry on the topic of “time to fix” vary, but our priority at Microsoft for security updates is to minimize disruption to customers and to help protect against online criminal attacks. The goal of this paper is to elaborate on the strategies employed by Microsoft to minimize disruptions to its customers. Although these insights apply to all Microsoft customers, this paper focuses on benefits to enterprise customers in particular. For most consumers, the Microsoft Update service makes keeping Microsoft software up to date on their systems easy. Enterprises and organizations with large IT operations are often required to meet compliance requirements, standards, and guidelines. Subsequently these additional requirements demand a level of rigor and predictability that enterprise customers can rely on in order to minimize disruptions to their businesses. In this paper, you’ll learn how Microsoft uses a multipronged approach to help its customers manage their risks. This approach includes three key elements: - **High-quality security updates** – Using world class engineering practices produces high-quality security updates that customers can confidently deploy to hundreds of millions of diverse systems in the computing ecosystem and that help customers minimize disruptions to their businesses. - **Community-based defense** – Microsoft partners with many other parties when it investigates potential vulnerabilities in Microsoft software. 
Microsoft looks to mitigate exploitation of vulnerabilities through the collaborative strength of the industry and through partners, public organizations, customers, and security researchers. This approach helps to minimize potential disruptions to Microsoft customers’ businesses.

- **Comprehensive security response process** – Using a comprehensive security response process helps Microsoft effectively manage security incidents while providing the predictability and transparency that customers need in order to minimize disruptions to their businesses.

---

\(^7\) Details on Microsoft’s security update release process, and recommendations on how enterprise customers can leverage all of the outputs of this process, are available in the Microsoft Security Update Guide at [http://microsoft.com/securityupdateguide](http://microsoft.com/securityupdateguide)

High-Quality Security Updates

The challenge of minimizing disruptions to a diverse computing ecosystem of hundreds of millions of systems worldwide requires relentless focus on engineering excellence that leads to the development of high-quality products and, as needed, corresponding high-quality updates for those products. This section of the paper focuses on Microsoft security update releases and some of the ways the company works to minimize disruptions to enterprise customers’ operations, keep associated costs for customers low, and help provide protection from online criminal activity. To do this, Microsoft rigorously designs, develops, and tests comprehensive security updates with a focus on deployment ease for customers.
This section discusses several related topics, including the process that Microsoft uses to develop security updates, Microsoft efforts to minimize the number of security updates, the importance of releasing security updates in a simultaneous and coordinated fashion, some insights into functional testing and application compatibility testing for Microsoft security updates, and some ways Microsoft works to make it easier for enterprises to deploy security updates.

The Microsoft Development Process for Security Updates

When a security researcher finds a vulnerability in Microsoft software and reports it to Microsoft, the Microsoft Security Response Center (MSRC) uses a process to manage the investigation, development, and release of a security update for the vulnerability. This process is comprehensive to ensure that security updates released by Microsoft address the vulnerabilities identified with minimal disruption to the hundreds of millions of customers using Microsoft software around the world. The process of investigating and developing a security update includes several related categories of work:

- **Triage and reproduction:** Reproduce the issue, assess its severity, identify the root cause, determine affected components and products, identify and test existing mitigations, and determine if a new security update should be released.
- **Planning:** Assess whether new functional testing needs to be conducted, determine how vulnerability variant investigation will occur, and identify the appropriate release mechanism and target release date.
- **Finding variants:** Create fuzzing tools and/or improve existing ones to ensure proper coverage to identify related issues, identify new variants, review code fixes/assembly from a security perspective, and (when applicable) run static code-analysis tools to help identify any unidentified issues.
- **Code and engineering reviews:** Perform engineering reviews to ensure the code change addresses the underlying issue and determine the broader impact of the code change.
- **Functional testing:** Ensure code changes have not affected functionality in an unexpected way; setup and deployment testing as well as serviceability testing is performed for all affected platforms.
- **Integration testing:** Application compatibility testing and working with partners (other groups within Microsoft and ISVs) that are potentially affected by the vulnerability, the security update, or both.

The comprehensive nature of a vulnerability investigation and subsequent engineering work takes time. Although some of the categories of work described above can be completed simultaneously, some of the work cannot be rushed without increasing the risk of disruptions for customers (for example, application compatibility issues). Some of the benefits and challenges related to this process are discussed in more detail in the remainder of this section.

Minimizing the Number of Security Updates

The MSRC receives more than 100,000 email messages per year (273 email messages per day, or 11 per hour, from approximately 0.01% of customers using computers and devices running Windows) at secure@microsoft.com. Most of this email is from customers that have encountered a security issue but are not reporting a vulnerability; for example, customers who need help logging into Windows Live Hotmail. In these cases, the MSRC routes them to the correct support team within Microsoft for help. However, some of these email messages are from security researchers and passionate customers located all over the world who believe they might have identified a vulnerability in a Microsoft product. Security experts read and monitor every email message manually and are continually on the alert for information about newly discovered vulnerabilities.
The MSRC opens an investigation to determine if a reported vulnerability exists and what components and products the vulnerability affects. After a vulnerability has been confirmed, the development process for security updates described above is initiated.

This comprehensive approach to investigating and fixing vulnerabilities helps avoid the need to re-release a security update or to release multiple updates to address issues in the same components that otherwise could have been addressed in a single update. This type of scenario occurred when Microsoft released security bulletin MS03-026 on July 16, 2003. After the update was released, three additional vulnerabilities were discovered in the same component: RPCSS/DCOM. MS03-039 was then released on September 10, 2003, to address all vulnerabilities in the component.

Identifying variants of vulnerabilities in code is often difficult and time-consuming. For example, in some cases, new fuzzing techniques and/or new tools need to be developed so that security engineers can perform the meticulous testing required to ensure a high level of confidence that all variants of a vulnerability have been identified and addressed. After ensuring that the reported vulnerability and any variants have been addressed and testing the update, deep functional testing, setup and deployment testing, serviceability testing, and application compatibility testing are all completed to ensure the highest-quality security update possible is ready for release.

It is important to note that once a security update is released, it will be deployed to hundreds of millions of systems worldwide in a very short period of time. Given the scale of deployment required, it is imperative that the update has been rigorously engineered to avoid creating any type of disruption to these systems. Throughout the testing, the MSRC monitors the threat landscape for signs that the vulnerability or variants are being used in active attacks.
The MSRC does this by using comprehensive telemetry systems as well as data and information provided by customers and partners around the world, along with relationships with the security industry. This approach helps Microsoft balance the potential urgency of releasing an update for a vulnerability that is under active exploitation with the need for high confidence that the update will address the vulnerability and all of its variants and maintain the functionality and stability that customers expect from the affected products.

Microsoft tries to address vulnerabilities and all of their variants in as few updates as possible. Deploying updates costs enterprise customers time, effort, and money, and those costs increase if customers have to re-assess the effects of, and deploy, multiple updates. Taking additional time to complete a comprehensive examination helps ensure that the number of security updates Microsoft releases, and might need to re-release, is kept to a minimum, thus reducing the costs and potential disruption to enterprise customers’ operations. With the increased quality of security updates achieved over the last five years, some enterprise customers deploy security updates with little or no testing, and hundreds of millions of consumers continue to use the Automatic Update clients on their Windows systems to ensure that they stay protected automatically.

Security Updates for All Affected Products Are Released Simultaneously

One principle that Microsoft uses to help customers manage risk is to offer security updates for all products affected by a given vulnerability simultaneously. When a software vulnerability is reported to Microsoft, the MSRC opens an investigation to determine if the reported vulnerability exists and what components and products the vulnerability affects. In some cases a vulnerability exists in a single product, but in many cases, a single reported vulnerability is shared by multiple components and/or products.
For example, security bulletin MS04-028\(^{11}\) (published September 14, 2004) addressed a vulnerability in GDI+ that affected fifty-three separate products. Identifying all components and products, including applicable third-party products, that are affected by a vulnerability is a critical step in an MSRC investigation. The work to do this is not trivial from an engineering perspective because code in different products and components is modified from release to release. Often, the result is that a single reported vulnerability is identified as a variant in earlier-version products. Variants typically require different fixes that all must be tested differently.

Some have argued that to release security updates more quickly, Microsoft should release security updates as they become ready and not try to release updates for all affected products simultaneously. They argue that the time it takes to fully investigate the code in all potentially affected components and products exposes the computing ecosystem to increased risk.

When Microsoft releases a security update, security researchers and criminals immediately start reverse engineering the security update in an effort to identify the specific section of code that contains the vulnerability being fixed. Once they identify the vulnerability addressed by the update, they attempt to develop code that will allow them to exploit that vulnerability on systems that do not have the security update installed on them. They also try to identify whether the vulnerability exists in other products with the same or similar functionality. For example, if a vulnerability is addressed in one version of Windows, researchers investigate whether other versions of Windows have the same vulnerability. In the past, some researchers have been able to develop working exploit code for some vulnerabilities in a matter of days or even a few hours.
If Microsoft released security updates for affected products in a staggered manner instead of releasing updates for all affected products simultaneously, it would extend the window of opportunity that criminals can use to attack Microsoft customers, increasing the risk for customers. In the case of MS04-028 mentioned earlier in this paper, if Microsoft hadn’t released security updates for all the products that shared the same vulnerability at the same time, it would have increased the likelihood that attackers would have successfully attacked affected products that were not updated in the initial release. Such a process would provide an advantage to criminals and disadvantage customers. Some critics of this process lament that the Microsoft pledge to provide security updates for older versions of affected software (also known as “legacy code”) slows down the release of security updates for newer products. Microsoft customers benefit from one of the longest product support lifecycle policies in the software industry. One major advantage of the support lifecycle policy is continued security update support throughout the lifetime of the product. Typically, this provides 10 years of support for many products (5 years Mainstream Support and 5 years Extended Support).

\textsuperscript{11} http://www.microsoft.com/technet/security/bulletin/MS04-028.mspx

Software Vulnerability Management at Microsoft

This product-support-lifecycle policy helps protect enterprise customers and the entire ecosystem by providing security update support for the many versions of Windows-based systems in the ecosystem. For example, support for Windows 2000, which was originally released on March 31, 2000, ended on July 13, 2010.
The support lifecycle policy provides enterprise customers with the time and flexibility needed to maximize the return on investment from their applications running on Microsoft software and to minimize the total cost of ownership, while maintaining their ability to manage risks during the same period.

**Coordinated Security Update Releases**

There are security vulnerabilities that affect many applications developed by different software vendors across the ecosystem. Addressing these types of vulnerabilities requires far more coordinated work among Microsoft, its partners, and the broader software industry than other types of vulnerabilities. As a result, it takes significantly more time to investigate, identify the attack vectors, develop mitigations, engineer a solution that helps protect the entire ecosystem, and release and deploy the associated security updates. These cases can involve many third parties, not all of which have well-developed incident response mechanisms in place. For example, in 2008, a security researcher discovered a vulnerability in the Domain Name System (DNS) protocol, which all Internet users rely on to resolve domain names to IP addresses. Because this vulnerability was introduced in the design specifications that all software vendors use when they develop DNS-related code, it affected several major software vendors’ implementations of DNS. Several major software vendors with affected products and services subsequently released security updates to address this vulnerability, including Microsoft, which released Security Bulletin MS08-037. To help protect all users of the Internet and avoid giving criminals an advantage, the affected vendors worked together on coordinated releases of updates to address this problem.

**Application Compatibility Testing**

Application compatibility is a fundamental underlying requirement for users of any operating system, productivity suite, or browser.
Subtle changes in behavior can occur when an operating system is updated, potentially resulting in unpredictable behavior for applications. When mission-critical and/or line-of-business applications fail to operate as expected, business is disrupted. Therefore, application compatibility testing is a key component of the Microsoft approach to developing and releasing security updates for its products. Minimizing application compatibility issues through security updates involves both depth and breadth testing. When a security update affects multiple versions of Windows or multiple versions of Windows Internet Explorer, for example, the test matrix grows rapidly, as do the test plans required to ensure a very high level of confidence in the quality of the update. Security updates affecting Windows are tested on all supported versions of the operating system, including Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2. For enterprise customers who take advantage of the Microsoft Custom Support Program, Windows 2000 and/or Windows NT 4.0 might also be added to the test matrix (though this support option for Windows NT 4.0 ended in July 2010). Different SKUs of affected versions of Windows might also be tested (for example, Home Basic, Home Premium, Business, and Ultimate). Different service packs for Windows and hotfixes (QFEs) are part of the test matrix for security updates. Affected versions of Windows are also tested in many different languages (Arabic, Chinese, German, Japanese, Russian, and others). Because Windows is available for a wide variety of computing architectures, all of the testing is completed on the various architectures (that is, x86, x64, and Itanium). Thousands of the world’s most used Windows applications are tested across these architectures, Windows versions, service pack levels, and languages.
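The combinatorial growth of this test matrix can be illustrated with a back-of-the-envelope calculation. The counts below are hypothetical placeholders chosen for the sketch, not Microsoft's actual figures:

```python
from math import prod

# Illustrative (hypothetical) dimensions of a security-update test matrix.
# Real counts are larger and vary per update.
windows_versions = 6   # e.g., XP, Server 2003, Vista, Server 2008, 7, Server 2008 R2
service_packs = 2      # service-pack levels per version (illustrative)
architectures = 3      # x86, x64, Itanium
languages = 5          # a small subset of the shipped languages
applications = 100     # a small sample of tested applications

# Each added dimension multiplies, rather than adds to, the amount of testing.
configurations = prod([windows_versions, service_packs, architectures, languages])
test_runs = configurations * applications
print(configurations)  # 180 distinct OS configurations
print(test_runs)       # 18000 application test runs
```

Even with these deliberately small numbers, a handful of dimensions already yields tens of thousands of test runs, which is why the matrix "grows rapidly" as the text puts it.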
Figure 7: An illustration reflecting the size of an example test matrix containing a subset (for the purposes of example) of tested languages and applications.

This large test undertaking to address multiple architectures, products, and versions is a key way that Microsoft minimizes disruptions to enterprise customers. To further minimize the potential for disruption, in 2005, Microsoft started the Security Update Validation Program (SUVP). The SUVP seeks to ensure the quality of security updates by testing them in environments and configurations, and against applications (such as line-of-business applications), that Microsoft cannot easily duplicate. As a part of this quality control program, Microsoft makes security updates available to a limited group of customers, under strict non-disclosure agreements (NDAs), providing a way for customers to test updates in a broad range of configurations and environments before the updates are released for general availability. Participants are required to provide feedback based on their deployment experience to help identify potential compatibility problems before the MSRC releases the updates to the public. This program provides only the security updates to participants of the SUVP. Participants are not given any information about the underlying vulnerabilities, the area of code being updated, or information about how to exploit the vulnerabilities. The program has significantly reduced compatibility issues and enhanced the quality of security updates, making it easier for customers to deploy updates more quickly. The efforts outlined above have helped to significantly increase the quality of Microsoft security updates over the last five years.
These quality improvements have enabled some customers to reduce the amount of testing they perform on Microsoft security updates, reducing the resources and costs associated with such work within their IT environments and returning budget to fund other projects or reduce operating expenses for their corporations.

**Decreased Complexity and Time to Deploy Security Updates**

The time spent developing, testing, and releasing security updates (including the detection logic needed to determine whether each individual system around the world needs to be offered any given update) through Windows Update and Microsoft Update enables Microsoft to provide security updates to hundreds of millions of diverse systems worldwide quickly and securely. This helps enterprises manage risks in their IT environments by dramatically decreasing the time and effort it takes to assess and deploy security updates for Microsoft software. Before the release of the Windows 98 operating system, security updates from Microsoft were released using the Microsoft Download Center. Administrators had to manually download and install security updates for Windows and Microsoft products after they determined whether the update applied to any of the products they ran in their environments. This was not a trivial task for administrators to undertake; as a result, the security update deployment practices of many enterprises were sub-optimal. For example, a vulnerability in Microsoft SQL Server 2000 addressed in Microsoft Security Bulletin MS02-039 was originally released on the Microsoft Download Center because it pre-dated the availability of the Windows Update Service and Automatic Update Client. At the time, many administrators were unaware that they needed to install the update. Compounding the challenge, the steps required to install the update manually were hard to follow, and administrators had no tools to find systems running affected components in their environments.
Although the security update that addressed the vulnerability was released on July 24, 2002, the “SQL Slammer” worm successfully exploited the vulnerability on numerous systems around the world starting January 25, 2003 (six months later). Many businesses were disrupted as a result of this attack. The Windows Update service and the Automatic Update Client revolutionized security updating by automatically determining whether individual Windows systems required each security update released by Microsoft, based on the components installed on each system. These innovations drastically reduced the complexity and time required to assess and install security updates for Microsoft software, helping to protect the computing ecosystem more efficiently and effectively. Windows Update, and later Microsoft Update, along with tools such as Microsoft Baseline Security Analyzer (MBSA), Windows Server Update Services (WSUS), Microsoft System Center Configuration Manager, Systems Management Server 2003 Inventory Tool for Microsoft Updates, and others\textsuperscript{17}, helped enterprise customers save the time, effort, and costs associated with doing such work manually\textsuperscript{18}. The Microsoft Update service, launched in June 2005, provides all of the updates offered through Windows Update and also provides updates for other Microsoft software, such as the Microsoft Office system, Microsoft SQL Server, and Microsoft Exchange Server. Users can opt in to the service when they install software serviced through Microsoft Update or at the Microsoft Update website\textsuperscript{19}. Two key features of the Automatic Update Client are delta binary patching and background processing. These features allow enterprise administrators to download a full package to their WSUS server, while the desktops on the network download only the specific components of the package necessary to implement the update.
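The delta binary patching format used by the Automatic Update Client is proprietary, but the underlying idea, shipping copy/insert instructions against bytes the client already has instead of shipping the whole file, can be sketched in a few lines. Everything below is an illustrative toy with hypothetical names, not the actual Windows Update format:

```python
def make_delta(old: bytes, new: bytes) -> list:
    """Build a naive delta: runs already present in `old` become
    ("copy", offset, length) ops; everything else ships literally as
    ("insert", data). A toy illustration of delta patching only."""
    ops, i = [], 0
    while i < len(new):
        # Find the longest prefix of new[i:] occurring anywhere in old
        # (brute force here; real tools use suffix arrays or rolling hashes).
        best_off, best_len = -1, 0
        for off in range(len(old)):
            n = 0
            while off + n < len(old) and i + n < len(new) and old[off + n] == new[i + n]:
                n += 1
            if n > best_len:
                best_off, best_len = off, n
        if best_len >= 4:  # only reference runs long enough to be worth it
            ops.append(("copy", best_off, best_len))
            i += best_len
        else:
            ops.append(("insert", new[i:i + 1]))
            i += 1
    return ops

def apply_delta(old: bytes, ops: list) -> bytes:
    """Client side: reconstruct the new file from the old file plus the delta."""
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, length = op
            out += old[off:off + length]
        else:
            out += op[1]
    return bytes(out)

old = b"MSRC security update payload v1"
new = b"MSRC security update payload v2 (patched)"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new  # the client rebuilds the new binary
```

The point of the sketch is the bandwidth asymmetry: the server holds the full package, while each client downloads only the small instruction stream needed to transform bytes it already has.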
These features play a major role in reducing network and branch office traffic related to downloading Microsoft security updates, and they minimize the impact on information workers who are using desktops during deployments. Today, the efficiencies these innovations provide are necessary to help keep Microsoft customers ahead of online criminal attackers who seek to disrupt their businesses for profit\textsuperscript{20}.

**Consolidated Security Updates to Minimize System Restart**

Microsoft recognizes that restarting systems can disrupt its customers’ businesses and that uptime is critical. Restarting systems after installing Microsoft security updates is required only when absolutely necessary. There have been suggestions that Microsoft release fewer, larger update packages containing all necessary updates to reduce the number of system restarts required. The MSRC is constantly trying to find ways to reduce system restart requirements for security updates while it manages a broad set of considerations. A single security bulletin often addresses multiple vulnerabilities from the Common Vulnerabilities and Exposures (CVE) database\textsuperscript{21}, each of which is listed in the bulletin, along with any other relevant issues. The figure below shows the number of security bulletins released and the number of individual CVE-identified vulnerabilities they have addressed for each half-year period since 1H05\textsuperscript{22}. (Note that not all vulnerabilities are addressed in the period in which they are initially disclosed.)
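The ratio tracked in these figures is simply the number of CVEs addressed divided by the number of bulletins released in each half-year period. A minimal sketch, using hypothetical counts rather than the actual published figures:

```python
# Hypothetical per-period counts of security bulletins released and CVEs
# addressed; illustrative only, not the real figures from the Microsoft
# Security Intelligence Report.
periods = {
    "1H09": {"bulletins": 27, "cves": 85},
    "2H09": {"bulletins": 47, "cves": 104},
}

for period, counts in sorted(periods.items()):
    ratio = counts["cves"] / counts["bulletins"]
    print(f"{period}: {ratio:.2f} CVEs per bulletin")
```

A higher ratio reflects more aggressive consolidation: the same number of fixed vulnerabilities reaches customers in fewer bulletins, and therefore fewer test-and-deploy cycles and restarts.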
\textsuperscript{17} http://technet.microsoft.com/en-us/security/cc297183.aspx
\textsuperscript{18} http://microsoft.com/securityupdateguide
\textsuperscript{19} http://update.microsoft.com/microsoftupdate
\textsuperscript{20} http://microsoft.com/sir
\textsuperscript{21} http://cve.mitre.org
\textsuperscript{22} Microsoft Security Intelligence Report Volume 8, www.microsoft.com/sir

Whenever possible, the MSRC consolidates multiple vulnerabilities that affect a single binary or component and addresses them with a single security bulletin, to maximize the effectiveness of each update and minimize the potential disruption that customers face from testing and deploying individual security updates into their computing environments. When vulnerabilities affect different, unrelated components and must be addressed by separate updates, consolidation is not always feasible. Although the ratio of CVEs to security bulletins in 2H09 is down from the historic high achieved in the first half of the year, the ratio remains high in relation to most previous periods, and the overall trend is positive\textsuperscript{23}.

\textsuperscript{23} Microsoft Security Intelligence Report Volume 8, \url{www.microsoft.com/sir}

Figure 9: Average number of CVEs addressed per security bulletin from the first half of 2005 through the second half of 2009

**Community-Based Defense**

Security is a challenge for the entire industry and one that no single company can solve by itself. As the data in the introduction of this paper illustrates, thousands of security vulnerabilities are disclosed across the software industry each year, most in applications, and most of a high severity. Microsoft works with partners and the broader industry in a variety of ways to sustain resilient computing environments around the world.
Much of the information that an MSRC vulnerability investigation generates is used not only to engineer a security update to address the vulnerability and provide tested guidance for customers, but also to enable a large number of Microsoft partners to protect their customers. Enabling partners is a cornerstone of the Microsoft security strategy—one that benefits a wide variety of organizations and consumers. The ultimate goal of this work is to minimize disruptions and to continuously improve the capabilities of computers and devices to withstand disruption, attack, and theft in a variety of forms.

**Sharing Information and Intelligence with Partners**

Microsoft operates and/or participates in several software vulnerability sharing programs. The Microsoft Active Protections Program\(^{24}\) (MAPP) is a program for security software providers. Members of MAPP receive vulnerability information early so that they can provide timely, updated protections to enterprise customers through their security software or devices, such as antivirus software, network-based intrusion detection systems, or host-based intrusion prevention systems. The MAPP partner page includes links to the active protections partners (www.microsoft.com/security/msrc/mapp/partners.mspx). The result for enterprise customers is enhanced protections, from vendors they trust, that they can deploy while they test and deploy Microsoft security updates. Recognizing the important role the security response community plays in Microsoft security efforts, the company formed the Microsoft Security Response Alliance (MSRA) in 2006 as a framework for partners, vendors, governments, and infrastructure providers to collaborate in a secure and timely manner. The MSRA serves as an “umbrella” structure for a number of other alliances and initiatives, several of which pre-date the formation of the MSRA itself.
For example, the Microsoft Virus Initiative (MVI) was originally formed in 1997 to facilitate communication between Microsoft and antivirus (AV) software vendors about macro viruses, which led to the development of the Antivirus application programming interface (Antivirus API). Figure 10 lists the MSRA member organizations and their primary businesses. \(^{24}\) www.microsoft.com/security/msrc/collaboration/mappfaq.aspx <table> <thead> <tr> <th>Organization</th> <th>Focus</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td>The Global Infrastructure Alliance for Internet Safety (GIAIS)</td> <td>Internet service providers (ISPs)</td> <td>Fosters cooperation between Microsoft and the world’s leading ISPs to keep their customers safe on the Internet</td> </tr> <tr> <td>The Microsoft Virus Initiative (MVI)</td> <td>Security researchers, antivirus software vendors</td> <td>Enables Microsoft to share key technical details of Microsoft technologies with partners, to facilitate development of well-integrated security solutions</td> </tr> <tr> <td>Virus Information Alliance (VIA)</td> <td>Antivirus software vendors</td> <td>Provides AV partners with detailed technical information about significant viruses affecting Microsoft products and customers</td> </tr> <tr> <td>Microsoft Security Cooperation Program (SCP)</td> <td>Public sector infrastructure, law enforcement, public safety, and education</td> <td>Provides a framework for information exchange and collaboration between Microsoft and the public sector, primarily in the areas of response and outreach</td> </tr> <tr> <td>Microsoft Security Support Alliance (MSSA)</td> <td>Microsoft original equipment manufacturer (OEM) partners</td> <td>Provides authoritative and timely information on newly discovered security threats to Microsoft’s OEM partners, enabling them to better communicate security information to their customers</td> </tr> <tr> <td>Security Alliance for Financial Institutions (SAFI)</td> <td>Financial 
institutions</td> <td>Facilitates collaboration between Microsoft and financial institutions worldwide regarding the threats that such institutions face</td> </tr> </tbody> </table> **Figure 10: Organizations and working groups under the MSRA umbrella** For more information about MSRA, see [http://www.microsoft.com/security/msra/default.mspx](http://www.microsoft.com/security/msra/default.mspx) The Industry Consortium for the Advancement of Security on the Internet (ICASI)\(^25\), of which Microsoft is a founding member, is a trusted forum for addressing international, multi-product security challenges. This trusted forum extends the ability of information technology vendors to proactively address complex security issues and better protect enterprises, governments, and citizens, and the critical IT infrastructures that support them. ICASI shares the results of its work with the IT industry through papers and other media. **Sharing Research and Knowledge with Independent Software Vendors** The Microsoft Vulnerability Research (MSVR) program was announced in 2008. This program is intended to make use of Microsoft knowledge and experience in securing software to help other software and hardware vendors deal with vulnerabilities reactively as well as develop proactive internal programs to improve the overall security of their products. \(^25\) [http://www.icasi.org/index.htm](http://www.icasi.org/index.htm) The goals of the MSVR program include: - **Research**: Find vulnerabilities in third-party software running on the Windows platform, either manually or through the use of internally developed tools. - **Coordination**: Provide information about internally discovered vulnerabilities to the developers of the affected software and to other organizations that can help address the issues.
- **Protection**: Work with software vendors to help protect customers by taking advantage of security functionality built into the Windows platform, such as Internet Explorer ActiveX control kill bits and the proactive use of Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR).
- **Engineering Excellence**: Coordinate with the SDL Outreach team to help evangelize the advantages of Microsoft’s Security Development Lifecycle (SDL) and help vendors move towards more secure and proactive development processes.

### Working with Security Researchers and Advocating for Coordinated Vulnerability Disclosure

Microsoft collaborates with many other parties when it investigates potential vulnerabilities in Microsoft software. Microsoft looks to mitigate exploitation of vulnerabilities through the capabilities of the industry, partners, public organizations, customers, and security researchers. Along the way, Microsoft supports and encourages coordinated vulnerability disclosure (CVD). As mentioned earlier in this paper, coordinated vulnerability disclosure means that vendors and vulnerability finders work closely toward a resolution and make extensive efforts to respond in a timely manner; only in the event of active attacks is public disclosure, focused on mitigations and workarounds, the best course of action, and even in those instances disclosure should be coordinated as closely as possible. The goal of CVD is to encourage the coordination and collaboration in the security community necessary to resolve issues in a way that minimizes risk and disruption for customers. Ideally, with CVD, the security update is released to coincide with public availability of the vulnerability information. This process serves everyone’s best interests and ensures that users are not exposed to malicious exploitation while security updates are being developed.
When a security researcher is acknowledged in a Microsoft monthly security bulletin, the acknowledgment signals that the vulnerability was reported to the MSRC using CVD practices and that the individual security researcher or organization worked with Microsoft to help the company understand the vulnerability, the extent of the risk to the products and platforms, and possible mitigations. During the technical investigation and development of the update, the vulnerability reporter is kept apprised of the status and availability of the impending security update. In the end, this process helps provide a solution for deployment across customer systems before potential attackers are aware of the vulnerability or are able to leverage it for malicious use. In many cases, a researcher reporting a vulnerability to Microsoft has invested time and effort to uncover and detail the vulnerability in question. After the vulnerability is reported, it is understandable that a researcher is interested in seeing their discovery communicated externally as soon as possible. Yet, as this paper has described, an MSRC investigation includes looking for variants of a reported vulnerability, inspecting the related code around a reported vulnerability, and looking for the vulnerability in other components and products that share similar functionality. Investigations might also include working with multiple third parties, such as ISVs, to address related issues in intellectual property that they own. All of these stages, which span the size of the installed base, the affected version or versions, third-party product implications, and feedback loops through security partner programs, contribute to quality updates but extend the time to fix. The length of time between the report of a vulnerability in Microsoft software and the release of a security update has been mentioned as a point of disagreement by those who would like to see all discovered issues fixed quickly.
Some researchers have chosen to release details of a discovered vulnerability before a security update has been completed. This course of action can expose the ecosystem to greater risk and can lead to active attacks on consumers and enterprises alike. Microsoft takes action on both privately and publicly reported issues. Because uncoordinated vulnerability disclosure can lead to more attacks on customers, its benefits do not outweigh its risks. Microsoft continues to stand by its position that coordinated vulnerability disclosure—when reporting vulnerabilities to Microsoft or to any software vendor—is the most prudent and constructive course of action. Security researchers who report vulnerabilities to Microsoft live and work all over the world, and security-related conferences and events are held all over the world as well. The MSRC sponsors, attends, and delivers presentations at many of these conferences and events. Engaging with the security community by supporting worldwide events helps Microsoft learn about new areas of focus and industry trends within the security community, tools and techniques, and related cultural and philosophical elements that affect the security landscape. The conferences are a platform for technical information exchange, for developing new research and relationships, and for gaining a greater understanding of regional trends and research. Attending these events ultimately helps the MSRC provide timely and accurate information that helps better protect customers. These events are also an opportunity for the MSRC to give something back to the security community by presenting technical content related to vulnerabilities, tools, and more. The MSRC sponsors only those security conferences where there is strict adherence to CVD practices.

Figure 11: Conferences that the Microsoft Security Response Center sponsored and/or attended between 2005 and 2009.
**Comprehensive Security Response Process**

Over the past decade, the computing ecosystem has been disrupted by major attacks several times. In response to these attacks, Microsoft has built a comprehensive security response process to help minimize disruptions to customers. This process includes a predictable and transparent release cycle for security updates, with timely publication of customer-focused communications and guidance. It also includes managing risk for the computing ecosystem through a standardized workflow at Microsoft called the Software Security Incident Response Process, described later in this section.

**Predictable and Transparent Release Cycle**

Beginning in October 2003, Microsoft started releasing security bulletins to address discovered vulnerabilities in Microsoft software on a predictable, monthly schedule. Since then, security bulletins have been released on the second Tuesday of each month. The bulletins provide a standard list of details to help customers understand and assess the risk that the vulnerabilities documented in each bulletin pose to their computing environment, and to understand how these vulnerabilities can be addressed. Implementing a predictable monthly release schedule was a direct result of feedback from Microsoft enterprise customers, who indicated that monthly security update releases work best for them. The predictable monthly cadence makes it easier for enterprise administrators and operations teams to plan, resource, schedule, and budget for the deployment of security updates. A monthly cadence provides a reasonable amount of time for large enterprises to evaluate, test, and deploy security updates in their environments. Some consider this predictable monthly release cycle too slow and suggest that it exposes enterprises to greater risk. Before October 2003, Microsoft did not release security updates on a predictable monthly cycle.
Security updates were released as soon as they were ready, in whatever number happened to be ready in a given week. For example, between June and August 1998, twelve separate security updates were released. Today, only three updates would be released over the same period of time. The unpredictable nature of this process made it difficult for enterprise customers to plan, resource, schedule, and budget for deployments. Before October 2003, some of these customers developed their own risk assessment and deployment methodologies with a monthly or quarterly cadence to help them manage deployments in a way that could be planned, resourced, scheduled, and budgeted for. Although this approach enabled these customers to better plan deployments, it provided a head start to attackers because of the window of opportunity between the time a security update was publicly released and the time it was deployed. Many other customers simply couldn’t manage such an unpredictable release process and abandoned deploying security updates in favor of accepting more risk and deploying service packs on a much longer timeline. Some customers that could not tolerate the same risk levels tried to keep pace with the rapid, ad hoc release schedule but found it expensive to properly resource and budget for this work. Some customers told Microsoft that even if security updates were released the same day each week, this frequency was simply too difficult to manage, and they requested a better approach. For the same reasons, Microsoft tries to minimize out-of-band security update releases. Today, nearly seven years after implementing the predictable monthly security update release cycle, Microsoft receives feedback from enterprise customers that this release cycle continues to help them effectively manage risks, minimize disruptions, and optimize their operations.
Over time, several other major software companies with large enterprise customer install bases, such as Adobe (http://blogs.adobe.com/asset/), also adopted predictable security update cycles for their customers.

**Communications and Guidance**

In addition to providing customers with security updates to help protect them, Microsoft also provides communications that help keep customers informed about risks, detailed guidance that helps them assess risks and informs deployment strategies, and support options for when customers need help and advice. When there is relevant information about a vulnerability that threatens the security of its products, Microsoft sends notifications to customers. The breadth and depth of the communications that Microsoft produces are based on direct customer feedback about the type of information customers need and when they need it. Microsoft simultaneously publishes communications about security updates, localized in many different languages. This consistent, predictable approach enables enterprise customers around the world to take advantage of security vulnerability information and deployment guidance when they are assessing risks and deployment strategies. The technical information provided in these communications is written and reviewed by the appropriate security professionals across Microsoft. Information is the most important resource for enterprise customers during a security update release; without timely information, mitigations, and workarounds, Microsoft enterprise customers could not accurately assess risks or deploy security updates in their environments with the high level of confidence they require. The Microsoft Security Bulletin Advance Notification Service (ANS)\textsuperscript{26} helps enterprise customers plan the appropriate resources for an impending security update release.
An advance notification contains information about the number of new security bulletins being released, the products affected, the aggregate maximum severity, and information about detection tools relevant to the update. The level of detail included is balanced against the need to protect organizations until the security updates are released by not disclosing any information that could facilitate attacks. To help guard against surprises and to minimize possible confusion and disruption, the security bulletin advance notification also provides information about other updates that will be released on the same day that are not associated with security bulletins. Specifically, it details how many non-security updates will be released through Microsoft Update and Windows Update, in addition to any updates to the Malicious Software Removal Tool (MSRT). Where possible, Microsoft makes this notice available three business days before a security bulletin is released to give administrators and operations teams time to plan for the release without giving attackers an advantage. When security bulletins are released, the security bulletin advance notification is replaced by the security bulletin summary, providing the definitive resource for information about the security update. The security bulletin summary includes links to each security bulletin that the release includes, in addition to any related Knowledge Base articles that provide extra technical information to help IT professionals with risk evaluation. In addition to the information in the advance notification, security bulletin summaries contain an assessment of each vulnerability’s potential exploitability. The Exploitability Index\textsuperscript{27} was developed based on customer feedback to help IT professionals prioritize deployment of security updates based on the likelihood of exploitation.

\textsuperscript{26} http://www.microsoft.com/technet/security/bulletin/advance.mspx
For more information about Microsoft security bulletin summaries, see www.microsoft.com/technet/security/bulletin/summary.mspx. Each security bulletin contains detailed guidance and information about the security update and the vulnerability. Security bulletins are localized in 14 languages\textsuperscript{28} and contain frequently asked questions, vulnerability information, mitigations and workarounds, and other pertinent security update information. Security bulletin summaries are provided in 5 additional languages\textsuperscript{29}. For more information, see the Microsoft Security Bulletin Search page at www.microsoft.com/technet/security/current.aspx. The Microsoft Customer Service and Support (CSS) group writes Knowledge Base (KB) articles that link to the corresponding security bulletin without duplicating all of the same information in the security bulletin. Knowledge Base articles are also released to highlight known caveats or issues with security updates and will continue to be referenced in the security bulletin that provides the security updates. Microsoft security advisories, localized in 18 languages\textsuperscript{30}, are communications from Microsoft about potential vulnerabilities, active attacks, and other security information that is material to an IT installation’s overall security needs. Some security advisories may result in the release of a security update or may include guidance to help IT professionals mitigate the threat that is posed. Some notifications may not require a security bulletin or a security update, but may still affect customers’ overall security. Each security advisory is accompanied by a unique Knowledge Base article number that references additional information. Some examples of topics that security advisories may discuss include:

- Guidance and mitigations that may be applicable for publicly disclosed vulnerabilities.
- Clarifying information about potential threats that are publicly disclosed.

In addition, security and privacy experts at Microsoft share further insights, information, guidance, and knowledge in numerous blogs dedicated to security and privacy topics, including security update releases. To make it easier to consume all of the information published across all of these blogs, a “blog aggregator” page dynamically consolidates and features this blog content. You can access the Trustworthy Computing Security and Privacy blog aggregator at www.microsoft.com/twc/blogs.

\textsuperscript{27} http://technet.microsoft.com/security/cc998259.aspx
\textsuperscript{28} Languages include Chinese Traditional, Chinese Simplified, Dutch, French, German, Hungarian, Italian, Korean, Polish, Portuguese (Portugal), Portuguese (Brazil), Russian, Spanish, Turkish
\textsuperscript{29} Languages include Czech, Danish, Finnish, Hebrew, Norwegian
\textsuperscript{30} Languages include Chinese Traditional, Chinese Simplified, Czech, Danish, Dutch, French, German, Hebrew, Hungarian, Italian, Korean, Norwegian, Polish, Portuguese (Portugal), Portuguese (Brazil), Russian, Spanish, Turkish

**Managing Risk through Standardized Workflow**

When a security incident threatens Microsoft customers—whether it is an attack on the entire Internet or is more restricted in scope—the MSRC quickly mobilizes teams across Microsoft and around the world, including affected product teams, Customer Support and Services, Microsoft IT, and external partners. The MSRC uses the Microsoft worldwide Software Security Incident Response Process (SSIRP) to understand security incidents, that is, situations in which malicious users exploit vulnerabilities. The SSIRP enables Microsoft to quickly investigate, analyze, and resolve those incidents. SSIRP has several phases:

- **Watch**: MSRC and its partners are always on the alert for threats.
- **Alert and mobilize resources**: When a threat is identified, first responders are paged and mobilized into two teams of engineers and communications professionals.
- **Assess and stabilize**: The engineering team investigates and develops the solution, and the communications team reaches out to provide guidance to customers and partners.
- **Resolve**: MSRC provides tools and solutions, and the Watch phase resumes.

Microsoft SSIRP participants include the MSRC\(^{31}\), the MMPC\(^{32}\), the MSEC\(^{33}\), and Microsoft product groups—such as the Windows, Internet Explorer, SQL Server, and Microsoft Office teams. Collectively, these groups provide visibility into active attacks by using data from the massive telemetry systems they manage\(^{34}\). In addition, SSIRP participants include external partners and organizations like the GIAIS (a consortium of Internet Service Providers), the VIA (an alliance where partners exchange valuable technical information on newly discovered viruses), and the MVI (a forum designed to share information and improve responses to virus outbreaks)\(^{35}\). Microsoft leverages its SSIRP capabilities to balance the urgency of releasing a security update against the time required to perform all of the actions outlined in this paper: following a predictable and transparent release cycle, providing customer-focused communications and guidance, releasing high-quality updates, and leveraging community-based defense to increase the scale of protections for customers. Ultimately, the goal of all of these efforts is to minimize disruptions to customers’ businesses.
\(^{31}\) [http://microsoft.com/msrc](http://microsoft.com/msrc)
\(^{32}\) [http://www.microsoft.com/mmpc](http://www.microsoft.com/mmpc)
\(^{33}\) [http://www.microsoft.com/msec](http://www.microsoft.com/msec)
\(^{34}\) Telemetry systems as documented in the Microsoft Security Intelligence Report, [www.microsoft.com/sir](http://www.microsoft.com/sir)

**Conclusion**

It is impossible to completely prevent the introduction of vulnerabilities during the development of large-scale software projects. Microsoft uses a multipronged, customer-centric strategy to help minimize disruptions to customers that includes:

- **High-quality security updates**—World-class engineering practices produce high-quality security updates that can be confidently deployed to hundreds of millions of diverse systems in the computing ecosystem and help customers minimize disruptions to their businesses.
- **Community-based defense**—Microsoft partners with many other parties when it investigates potential vulnerabilities in Microsoft software. Microsoft looks to mitigate exploitation of vulnerabilities through the collaborative strength of the industry and through partners, public organizations, customers, and security researchers. This approach helps to minimize potential disruptions to Microsoft’s customers’ businesses.
- **Comprehensive security response process**—Using a comprehensive security response process helps Microsoft effectively manage security incidents while providing the predictability and transparency that customers need in order to minimize disruptions to their businesses.

Microsoft and the security engineers, product managers, program managers, and communications professionals it employs continue to be dedicated to creating secure, private, and reliable computing experiences for everyone.
Contributors

Adrian Stone, Microsoft Security Response Center
Adrienne Hall, Microsoft Trustworthy Computing
Baris Saydag, Microsoft Windows Serviceability
Damian Hasse, Microsoft Security Response Center
David Hicks, Microsoft Windows Compatibility
Hemanth Kaza, Microsoft Windows Compatibility
James Rodrigues, Microsoft Windows Serviceability
Jeff Jones, Microsoft Trustworthy Computing
Mark Miller, Microsoft Trustworthy Computing
Mike Reavey, Microsoft Security Response Center
Matt Thomlinson, Microsoft Security Response Center
Paul Potterff, Windows Consumer Product Management
Sue Bohn, Microsoft Windows Compatibility
Tim Rains, Microsoft Trustworthy Computing
RFC 8851
RTP Payload Format Restrictions

Abstract

In this specification, we define a framework for specifying restrictions on RTP streams in the Session Description Protocol (SDP). This framework defines a new “rid” ("restriction identifier") SDP attribute to unambiguously identify the RTP streams within an RTP session and restrict the streams’ payload format parameters in a codec-agnostic way beyond what is provided with the regular payload types. This specification updates RFC 4855 to give additional guidance on choice of Format Parameter (fmtp) names and their relation to the restrictions defined by this document.

Status of This Memo

This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8851.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Terminology
2. Introduction
3. Key Words for Requirements
4. SDP "a=rid" Media Level Attribute
5. "a=rid" Restrictions
6. SDP Offer/Answer Procedures
   6.1. Generating the Initial SDP Offer
   6.2. Answerer Processing the SDP Offer
        6.2.1. "a=rid"-Unaware Answerer
        6.2.2. "a=rid"-Aware Answerer
   6.3. Generating the SDP Answer
   6.4. Offerer Processing of the SDP Answer
   6.5. Modifying the Session
7. Use with Declarative SDP
8. Interaction with Other Techniques
   8.1. Interaction with VP8 Format Parameters
        8.1.1. max-fr - Maximum Frame Rate
        8.1.2. max-fs - Maximum Frame Size, in VP8 Macroblocks
   8.2. Interaction with H.264 Format Parameters
        8.2.1. profile-level-id and max-recv-level - Negotiated Subprofile
        8.2.2. max-br / MaxBR - Maximum Video Bitrate
        8.2.3. max-fs / MaxFS - Maximum Frame Size, in H.264 Macroblocks
        8.2.4. max-mbps / MaxMBPS - Maximum Macroblock Processing Rate
        8.2.5. max-smbps - Maximum Static Macroblock Processing Rate
   8.3. Redundancy Formats and Payload Type Restrictions
9. Format Parameters for Future Payloads

1. Terminology

The terms "source RTP stream", "endpoint", "RTP session", and "RTP stream" are used as defined in [RFC7656]. [RFC4566] and [RFC3264] terminology is also used where appropriate.

2. Introduction

The payload type (PT) field in RTP provides a mapping between the RTP payload format and the associated SDP media description. For a given PT, the SDP rtpmap and/or fmtp attributes are used to describe the properties of the media that is carried in the RTP payload. Recent advances in standards have given rise to rich multimedia applications requiring support for either multiple RTP streams within an RTP session [RFC8843] [RFC8853] or a large number of codecs.
These demands have unearthed challenges inherent with:

- The restricted RTP PT space in specifying the various payload configurations
- The codec-specific constructs for the payload formats in SDP
- Missing or underspecified payload format parameters
- Overloading of PTs to indicate not just codec configurations, but individual streams within an RTP session

To expand on these points: [RFC3550] assigns 7 bits for the PT in the RTP header. However, the assignment of static mapping of RTP payload type numbers to payload formats and multiplexing of RTP with other protocols (such as the RTP Control Protocol (RTCP)) could result in a limited number of payload type numbers available for application usage. In scenarios where the number of possible RTP payload configurations exceeds the available PT space within an RTP session, there is a need for a way to represent the additional restrictions on payload configurations and effectively map an RTP stream to its corresponding restrictions. This issue is exacerbated by the increase in techniques – such as simulcast and layered codecs – that introduce additional streams into RTP sessions. This specification defines a new SDP framework for restricting source RTP streams (Section 2.1.10 of [RFC7656]), along with the SDP attributes to restrict payload formats in a codec-agnostic way. This framework can be thought of as a complementary extension to the way the media format parameters are specified in SDP today, via the "a=fmtp" attribute. The additional restrictions on individual streams are indicated with a new "a=rid" ("restriction identifier") SDP attribute. Note that the restrictions communicated via this attribute only serve to further restrict the parameters that are established on a PT format. They do not relax any restrictions imposed by other mechanisms.
This specification makes use of the RTP Stream Identifier Source Description (SDES) RTCP item defined in [RFC8852] to provide correlation between the RTP packets and their format specification in the SDP. As described in Section 6.2.1, this mechanism achieves backwards compatibility via the normal SDP processing rules, which require unknown "a=" lines to be ignored. This means that implementations need to be prepared to handle successful offers and answers from other implementations that neither indicate nor honor the restrictions requested by this mechanism. Further, as described in Section 6 and its subsections, this mechanism achieves extensibility by: (a) having offerers include all supported restrictions in their offer, and (b) having answerers ignore "a=rid" lines that specify unknown restrictions. 3. Key Words for Requirements The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. 4. SDP "a=rid" Media Level Attribute This section defines new SDP media-level attribute [RFC4566], "a=rid", used to communicate a set of restrictions to be applied to an identified RTP stream. Roughly speaking, this attribute takes the following form (see Section 10 for a formal definition): ``` a=rid:<rid-id> <direction> [pt=<fmt-list>;]<restriction>=<value>... ``` An "a=rid" SDP media attribute specifies restrictions defining a unique RTP payload configuration identified via the "rid-id" field. This value binds the restriction to the RTP stream identified by its RTP Stream Identifier Source Description (SDES) item [RFC8852]. Implementations that use the "a=rid" parameter in SDP MUST support the RtpStreamId SDES item described in [RFC8852]. 
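As a non-normative illustration of the rough form shown above (the authoritative grammar is the ABNF in Section 10), an "a=rid" line can be decomposed into its rid-id, direction, optional "pt=" format list, and restriction key/value pairs. The sketch below only handles the common shape and is not a full ABNF-conformant parser:

```python
# Illustrative parser for the rough "a=rid" form shown above.
# This is a sketch, not the normative ABNF from Section 10:
#   a=rid:<rid-id> <direction> [pt=<fmt-list>;]<restriction>=<value>;...

def parse_rid(line):
    """Parse an "a=rid" attribute line into its components."""
    if not line.startswith("a=rid:"):
        raise ValueError("not an a=rid line")
    value = line[len("a=rid:"):]
    rid_id, _, rest = value.partition(" ")
    direction, _, params = rest.partition(" ")
    if direction not in ("send", "recv"):
        raise ValueError("direction must be 'send' or 'recv'")
    pts = []
    restrictions = {}
    for part in filter(None, params.split(";")):
        key, sep, val = part.partition("=")
        if key == "pt":
            pts = val.split(",")      # formats, most to least preferred
        elif sep:
            restrictions[key] = val   # restriction with a value
        else:
            restrictions[key] = None  # bare name, value left unset
    return {"rid_id": rid_id, "direction": direction,
            "pt": pts, "restrictions": restrictions}

example = parse_rid("a=rid:hd send pt=96,97;max-width=1280;max-height=720")
```

Note that a real implementation must also enforce rid-id uniqueness per media section and validate each "pt" value against the "m=" line, as described in Section 6.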
Such implementations MUST send that SDES item for all streams in an SDP media description ("m=") that have "a=rid" lines remaining after applying the rules in Section 6 and its subsections. Implementations that use the "a=rid" parameter in SDP and make use of redundancy RTP streams [RFC7656] -- e.g., RTP RTX [RFC4588] or Forward Error Correction (FEC) [RFC5109] -- for any of the source RTP streams that have "a=rid" lines remaining after applying the rules in Section 6 and its subsections MUST support the RepairedRtpStreamId SDES item described in [RFC8852] for those redundancy RTP streams. RepairedRtpStreamId MUST be used for redundancy RTP streams to which it can be applied. Use of RepairedRtpStreamId is not applicable for redundancy formats that directly associate RTP streams through shared synchronization sources (SSRCs) -- for example, [RFC8627] -- or other cases that RepairedRtpStreamId cannot support, such as referencing multiple source streams. RepairedRtpStreamId is used to provide the binding between the redundancy RTP stream and its source RTP stream by setting the RepairedRtpStreamId value for the redundancy RTP stream to the RtpStreamId value of the source RTP stream. The redundancy RTP stream MAY (but need not) have an "a=rid" line of its own, in which case the RtpStreamId SDES item value will be different from the corresponding source RTP stream. It is important to note that this indirection may result in the temporary inability to correctly associate source and redundancy data when the SSRC associated with the RtpStreamId or RepairedRtpStreamId is dynamically changed during the RTP session. This can be avoided if all RTP packets, source and repair, include their RtpStreamId or RepairedRtpStreamId, respectively, after the change. To maximize the probability of reception and utility of redundancy information after such a change, all the source packets referenced by the first several repair packets SHOULD include such information. 
It is RECOMMENDED that the number of such packets is large enough to give a high probability of actual updated association. Section 4.1.1 of [RFC8285] provides relevant guidance for RTP header extension transmission considerations. Alternatively, to avoid this issue, redundancy mechanisms that directly reference its source data may be used, such as [RFC8627]. The "direction" field identifies the direction of the RTP stream packets to which the indicated restrictions are applied. It may be either "send" or "recv". Note that these restriction directions are expressed independently of any "inactive", "sendonly", "recvonly", or "sendrecv" attributes associated with the media section. It is, for example, valid to indicate "recv" restrictions on a "sendonly" stream; those restrictions would apply if, at a future point in time, the stream were changed to "sendrecv" or "recvonly". The optional "pt=<fmt-list>" lists one or more PT values that can be used in the associated RTP stream. If the "a=rid" attribute contains no "pt", then any of the PT values specified in the corresponding "m=" line may be used. The list of zero or more codec-agnostic restrictions (Section 5) describes the restrictions that the corresponding RTP stream will conform to. This framework MAY be used in combination with the "a=fmtp" SDP attribute for describing the media format parameters for a given RTP payload type. In such scenarios, the "a=rid" restrictions (Section 5) further restrict the equivalent "a=fmtp" attributes. A given SDP media description MAY have zero or more "a=rid" lines describing various possible RTP payload configurations. A given "rid-id" MUST NOT be repeated in a given media description ("m=" section). The "a=rid" media attribute MAY be used for any RTP-based media transport. It is not defined for other transports, although other documents may extend its semantics for such transports. 
Though the restrictions specified by the "rid" restrictions follow a syntax similar to session-level and media-level parameters, they are defined independently. All "rid" restrictions MUST be registered with IANA, using the registry defined in Section 12. Section 10 gives a formal Augmented Backus-Naur Form (ABNF) [RFC5234] grammar for the "rid" attribute. The "a=rid" media attribute is not dependent on charset.

5. "a=rid" Restrictions

This section defines the "a=rid" restrictions that can be used to restrict the RTP payload encoding format in a codec-agnostic way. Please also see the preceding section for a description of how the "pt" parameter is used. The following restrictions are intended to apply to video codecs in a codec-independent fashion.

- **max-width**, for spatial resolution in pixels. In the case that stream-orientation signaling is used to modify the intended display orientation, this attribute refers to the width of the stream when a rotation of zero degrees is encoded.
- **max-height**, for spatial resolution in pixels. In the case that stream-orientation signaling is used to modify the intended display orientation, this attribute refers to the height of the stream when a rotation of zero degrees is encoded.
- **max-fps**, for frame rate in frames per second. For encoders that do not use a fixed frame rate for encoding, this value is used to restrict the minimum amount of time between frames: the time between any two consecutive frames **SHOULD NOT** be less than 1/max-fps seconds.
- **max-fs**, for frame size in pixels per frame. This is the product of frame width and frame height, in pixels, for rectangular frames.
- **max-br**, for bitrate in bits per second. The restriction applies to the media payload only and does not include overhead introduced by other layers (e.g., RTP, UDP, IP, or Ethernet). The exact means of keeping within this limit are left up to the implementation, and instantaneous excursions outside the limit are permissible. For any given one-second sliding window, however, the total number of bits in the payload portion of RTP **SHOULD NOT** exceed the value specified in "max-br".
- **max-pps**, for pixel rate in pixels per second. This value **SHOULD** be handled identically to max-fps, after performing the following conversion: max-fps = max-pps / (width * height). If the stream resolution changes, this value is recalculated. Due to this recalculation, excursions outside the specified maximum are possible near resolution-change boundaries.
- **max-bpp**, for maximum number of bits per pixel, calculated as an average of all samples of any given coded picture. This is expressed as a floating point value, with an allowed range of 0.0001 to 48.0. These values **MUST NOT** be encoded with more than four digits to the right of the decimal point.
- **depend**, to identify other streams that the stream depends on. The value is a comma-separated list of rid-ids. These rid-ids identify RTP streams that this stream depends on in order to allow for proper interpretation. The mechanism defined in this document allows for such dependencies to be expressed only when the streams are in the same media section.

All the restrictions are optional and subject to negotiation based on the SDP offer/answer rules described in Section 6. This list is intended to be an initial set of restrictions. Future documents may define additional restrictions; see Section 12.2. While this document does not define restrictions for audio codecs or any media types other than video, there is no reason such restrictions should be precluded from definition and registration by other documents. Section 10 provides formal Augmented Backus-Naur Form (ABNF) [RFC5234] grammar for each of the "a=rid" restrictions defined in this section.

6. SDP Offer/Answer Procedures

This section describes the SDP offer/answer procedures [RFC3264] when using this framework.
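The restrictions defined in Section 5 are simple numeric upper bounds, so checking an encoder configuration against a negotiated set reduces to a handful of comparisons. The sketch below is illustrative only (the function and field names are assumptions, not taken from this specification); it also shows the max-pps handling described above, where max-fps = max-pps / (width * height) at the current resolution:

```python
# Hedged sketch: check encode parameters against a set of negotiated
# "a=rid" restrictions (Section 5). Helper names are illustrative.

def conforms(width, height, fps, bitrate, restrictions):
    """Return True if the given encode parameters satisfy the restrictions.

    `restrictions` maps restriction names (e.g. "max-width") to numeric
    values; names mapped to None are unset and impose no limit.
    """
    r = {k: v for k, v in restrictions.items() if v is not None}
    if "max-width" in r and width > r["max-width"]:
        return False
    if "max-height" in r and height > r["max-height"]:
        return False
    if "max-fs" in r and width * height > r["max-fs"]:
        return False
    if "max-fps" in r and fps > r["max-fps"]:
        return False
    # max-pps is handled like max-fps after converting at the current
    # resolution: max-fps = max-pps / (width * height)
    if "max-pps" in r and fps > r["max-pps"] / (width * height):
        return False
    if "max-br" in r and bitrate > r["max-br"]:
        return False
    return True

# 1280x720 at 30 fps is exactly 27,648,000 pixels per second.
ok = conforms(1280, 720, 30, 1_500_000,
              {"max-width": 1280, "max-height": 720,
               "max-pps": 27_648_000})
```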
Note that "rid-id" values are only required to be unique within a media section ("m=" line); they do not necessarily need to be unique within an entire RTP session. In traditional usage, each media section is sent on its own unique 5-tuple (that is: combination of sending address, sending port, receiving address, receiving port, and transport protocol), which provides an unambiguous scope. Similarly, when using BUNDLE [RFC8843], Media Identification (MID) values associate RTP streams uniquely to a single media description. When restriction identifier (RID) is used with the BUNDLE mechanism, streams will be associated with both MID and RID SDES items. 6.1. Generating the Initial SDP Offer For each RTP media description in the offer, the offerer MAY choose to include one or more "a=rid" lines to specify a configuration profile for the given set of RTP payload types. In order to construct a given "a=rid" line, the offerer must follow these steps: 1. It **MUST** generate a "rid-id" that is unique within a media description. 2. It **MUST** set the direction for the "rid-id" to one of "send" or "recv". 3. It **MAY** include a listing of SDP media formats (usually corresponding to RTP payload types) allowed to appear in the RTP stream. Any payload type chosen **MUST** be a valid payload type for the media section (that is, it must be listed on the "m=" line). The order of the listed formats is significant; the alternatives are listed from (left) most preferred to (right) least preferred. When using RID, this preference overrides the normal codec preference as expressed by format type ordering on the "m=" line, using regular SDP rules. 4. The offerer then chooses zero or more "a=rid" restrictions (Section 5) to be applied to the RTP stream and adds them to the "a=rid" line. 5. 
If the offerer wishes the answerer to have the ability to specify a restriction but does not wish to set a value itself, it includes the name of the restriction in the "a=rid" line, but without any indicated value. Note: If an "a=fmtp" attribute is also used to provide media-format-specific parameters, then the "a=rid" restrictions will further restrict the equivalent "a=fmtp" parameters for the given payload type for the specified RTP stream. If a given codec would require an "a=fmtp" line when used without "a=rid", then the offerer **MUST** include a valid corresponding "a=fmtp" line even when using "a=rid". 6.2. Answerer Processing the SDP Offer 6.2.1. "a=rid"-Unaware Answerer If the receiver doesn't support the framework defined in this specification, the entire "a=rid" line is ignored following the standard offer/answer rules [RFC3264]. Section 6.1 requires the offer to include a valid "a=fmtp" line for any media formats that otherwise require it (in other words, the "a=rid" line cannot be used to replace "a=fmtp" configuration). As a result, ignoring the "a=rid" line is always guaranteed to result in a valid session description. 6.2.2. "a=rid"-Aware Answerer If the answerer supports the "a=rid" attribute, the following verification steps are executed, in order, for each "a=rid" line in a received offer: 1. The answerer ensures that the "a=rid" line is syntactically well formed. In the case of a syntax error, the "a=rid" line is discarded. 2. The answerer extracts the rid-id from the "a=rid" line and verifies its uniqueness within a media section. In the case of a duplicate, the entire "a=rid" line, and all "a=rid" lines with rid-ids that duplicate this line, are discarded and MUST NOT be included in the SDP answer. 3. If the "a=rid" line contains a "pt=", the list of payload types is verified against the list of valid payload types for the media section (that is, those listed on the "m=" line). 
Any PT missing from the "m=" line is discarded from the set of values in the "pt=". If no values are left in the "pt=" parameter after this processing, then the "a=rid" line is discarded. 4. If the "direction" field is "recv", the answerer ensures that the specified "a=rid" restrictions are supported. In the case of an unsupported restriction, the "a=rid" line is discarded. 5. If the "depend" restriction is included, the answerer MUST make sure that the listed rid-ids unambiguously match the rid-ids in the media description. Any "depend" "a=rid" lines that do not are discarded. 6. The answerer verifies that the restrictions are consistent with at least one of the codecs to be used with the RTP stream. If the "a=rid" line contains a "pt=", it contains the list of such codecs; otherwise, the list of such codecs is taken from the associated "m=" line. See Section 8 for more detail. If the "a=rid" restrictions are incompatible with the other codec properties for all codecs, then the "a=rid" line is discarded. Note that the answerer does not need to understand every restriction present in a "send" line: if a stream sender restricts the stream in a way that the receiver does not understand, this causes no issues with interoperability. 6.3. Generating the SDP Answer Having performed verification of the SDP offer as described in Section 6.2.2, the answerer shall perform the following steps to generate the SDP answer. For each "a=rid" line that has not been discarded by previous processing: 1. The value of the "direction" field is reversed: "send" is changed to "recv", and "recv" is changed to "send". 2. The answerer MAY choose to modify specific "a=rid" restriction values in the answer SDP. In such a case, the modified value MUST be more restrictive than the ones specified in the offer. The answer MUST NOT include any restrictions that were not present in the offer. 3. The answerer MUST NOT modify the "rid-id" present in the offer. 4. 
If the "a=rid" line contains a "pt=", the answerer is allowed to discard one or more media formats from a given "a=rid" line. If the answerer chooses to discard all the media formats from an "a=rid" line, the answerer MUST discard the entire "a=rid" line. If the offer did not contain a "pt=" for a given "a=rid" line, then the answer **MUST NOT** contain a "pt=" in the corresponding line.

5. In cases where the answerer is unable to support the payload configuration specified in a given "a=rid" line with a direction of "recv" in the offer, the answerer **MUST** discard the corresponding "a=rid" line. This includes situations in which the answerer does not understand one or more of the restrictions in an "a=rid" line with a direction of "recv".

Note: In the case that the answerer uses different PT values to represent a codec than the offerer did, the "a=rid" values in the answer use the PT values that are present in its answer.

### 6.4. Offerer Processing of the SDP Answer

The offerer **SHALL** follow these steps when processing the answer:

1. The offerer matches the "a=rid" line in the answer to the "a=rid" line in the offer using the "rid-id". If no matching line can be located in the offer, the "a=rid" line is ignored.

2. If the answer contains any restrictions that were not present in the offer, then the offerer **SHALL** discard the "a=rid" line.

3. If the restrictions have been changed between the offer and the answer, the offerer **MUST** ensure that the modifications are more restrictive than they were in the original offer and that they can be supported; if not, the offerer **SHALL** discard the "a=rid" line.

4. If the "a=rid" line in the answer contains a "pt=" but the offer did not, the offerer **SHALL** discard the "a=rid" line.

5. If the "a=rid" line in the answer contains a "pt=" and the offer did as well, the offerer verifies that the list of payload types is a subset of those sent in the corresponding "a=rid" line in the offer.
Note that this matching must be performed semantically rather than on literal PT values, as the remote end may not be using symmetric PTs. For the purpose of this comparison: for each PT listed on the "a=rid" line in the answer, the offerer looks up the corresponding "a=rtpmap" and "a=fmtp" lines in the answer. It then searches the list of "pt=" values indicated in the offer and attempts to find one with an equivalent set of "a=rtpmap" and "a=fmtp" lines in the offer. If all PTs in the answer can be matched, then the "pt=" values pass validation; otherwise, it fails. If this validation fails, the offerer **SHALL** discard the "a=rid" line. Note that this semantic comparison necessarily requires an understanding of the meaning of codec parameters, rather than a rote byte-wise comparison of their values.

6. If the "a=rid" line contains a "pt=", the offerer verifies that the attribute values provided in the "a=rid" attributes are consistent with the corresponding codecs and their other parameters. See **Section 8** for more detail. If the "a=rid" restrictions are incompatible with the other codec properties, then the offerer **SHALL** discard the "a=rid" line.

7. The offerer verifies that the restrictions are consistent with at least one of the codecs to be used with the RTP stream. If the "a=rid" line contains a "pt=", it contains the list of such codecs; otherwise, the list of such codecs is taken from the associated "m=" line. See **Section 8** for more detail. If the "a=rid" restrictions are incompatible with the other codec properties for all codecs, then the offerer **SHALL** discard the "a=rid" line.

Any "a=rid" line present in the offer that was not matched by step 1 above has been discarded by the answerer and does not form part of the negotiated restrictions on an RTP stream. The offerer **MAY** still apply any restrictions it indicated in an "a=rid" line with a direction field of "send", but it is not required to do so.
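The subset checks in steps 2 and 3 above can be sketched as follows. This is an illustrative simplification, not normative text from this specification: restrictions are modeled as a plain dictionary, full "a=rid" parsing and the semantic "pt=" matching are omitted, and only numeric "max-*" restrictions (where "more restrictive" means a smaller value) are handled.

```python
# Hypothetical sketch: validate that an answer's "a=rid" restrictions are
# an acceptable tightening of the offer's, per steps 2 and 3 above.
# Restrictions are modeled as dicts like {"max-width": 1280, "max-fps": 30};
# a value of None means the restriction was named without a value.

def answer_restrictions_valid(offer: dict, answer: dict) -> bool:
    for name, value in answer.items():
        if name not in offer:
            return False      # step 2: no restrictions absent from the offer
        offered = offer[name]
        if offered is not None and value is not None and value > offered:
            return False      # step 3: values may only become more restrictive
    return True

# The offer names "max-fps" without a value, so the answer may set one.
offer = {"max-width": 1280, "max-height": 720, "max-fps": None}

assert answer_restrictions_valid(offer, {"max-width": 640, "max-fps": 15})
assert not answer_restrictions_valid(offer, {"max-width": 1920})   # loosened
assert not answer_restrictions_valid(offer, {"max-br": 64000})     # new restriction
```

A real implementation would additionally handle the "depend" and "pt=" parameters, which are not ordered numerically and need the list-based checks described in steps 4 and 5.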
It is important to note that there are several ways in which an offer can contain a media section with "a=rid" lines, although the corresponding media section in the response does not. This includes situations in which the answerer does not support "a=rid" at all or does not support the indicated restrictions. Under such circumstances, the offerer **MUST** be prepared to receive a media stream to which no restrictions have been applied.

### 6.5. Modifying the Session

Offers and answers inside an existing session follow the rules for initial session negotiation. Such an offer **MAY** propose a change in the number of RIDs in use. To avoid race conditions with media, any RIDs with proposed changes **SHOULD** use a new ID rather than reusing one from the previous offer/answer exchange. RIDs without proposed changes **SHOULD** reuse the ID from the previous exchange.

### 7. Use with Declarative SDP

This document does not define the use of a RID in declarative SDP. If concrete use cases for RID in declarative SDP use are identified in the future, we expect that additional specifications will address such use.

### 8. Interaction with Other Techniques

Historically, a number of other approaches have been defined that allow restricting media streams via SDP. These include:

- Codec-specific configuration set via format parameters ("a=fmtp") -- for example, the H.264 "max-fs" format parameter [RFC6184]

- Size restrictions imposed by the "a=imageattr" attribute [RFC6236]

When the mechanism described in this document is used in conjunction with these other restricting mechanisms, it is intended to impose additional restrictions beyond those communicated in other techniques. In an offer, this means that "a=rid" lines, when combined with other restrictions on the media stream, are expected to result in a non-empty intersection.
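As a concrete illustration of this intersection rule, the sketch below (illustrative only; not part of this specification) treats each mechanism's constraints on a single property, such as width, as lower and upper bounds, and reports whether the combined range is non-empty:

```python
# Illustrative sketch: combining a property's limits from several SDP
# mechanisms (e.g., "a=rid", "a=fmtp", "a=imageattr"). The intersection
# is non-empty only if every lower bound is at or below every upper bound.

def effective_range(lower_bounds, upper_bounds):
    """Return the combined (low, high) range, or None if it is empty."""
    low = max(lower_bounds) if lower_bounds else 0
    high = min(upper_bounds) if upper_bounds else float("inf")
    return (low, high) if low <= high else None

# Width: "a=imageattr" demands at least 640, "a=rid" allows up to 1280.
assert effective_range([640], [1280]) == (640, 1280)

# Width: "a=imageattr" demands at least 640, but "a=rid" caps it at 320.
# The intersection is empty, so the "a=rid" line would be discarded
# under the rules of Section 6.2.2.
assert effective_range([640], [320]) is None
```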
For example, if image attributes are used to indicate that a PT has a minimum width of 640, then specification of "max-width=320" in an "a=rid" line that is then applied to that PT is nonsensical. According to the rules of Section 6.2.2, this will result in the corresponding "a=rid" line being ignored by the recipient.

In an answer, the "a=rid" lines, when combined with the other restrictions on the media stream, are also expected to result in a non-empty intersection. If the implementation generating an answer wishes to restrict a property of the stream below that which would be allowed by other parameters (e.g., those specified in "a=fmtp" or "a=imageattr"), its only recourse is to discard the "a=rid" line altogether, as described in Section 6.3. If it instead attempts to restrict the stream beyond what is allowed by other mechanisms, then the offerer will ignore the corresponding "a=rid" line, as described in Section 6.4.

The following subsections demonstrate these interactions using commonly used video codecs. These descriptions are illustrative of the interaction principles outlined above and are not normative.

8.1. Interaction with VP8 Format Parameters

[RFC7741] defines two format parameters for the VP8 codec. Both correspond to restrictions on receiver capabilities and never indicate sending restrictions.

8.1.1. max-fr - Maximum Frame Rate

The VP8 "max-fr" format parameter corresponds to the "max-fps" restriction defined in this specification. If an RTP sender is generating a stream using a format defined with this format parameter, and the sending restrictions defined via "a=rid" include a "max-fps" parameter, then the sent stream will conform to the smaller of the two values.

8.1.2. max-fs - Maximum Frame Size, in VP8 Macroblocks

The VP8 "max-fs" format parameter corresponds to the "max-fs" restriction defined in this document, by way of a conversion factor of the number of pixels per macroblock (typically 256).
If an RTP sender is generating a stream using a format defined with this format parameter, and the sending restrictions defined via "a=rid" include a "max-fs" parameter, then the sent stream will conform to the smaller of the two values; that is, the number of pixels per frame will not exceed:

\[ \min(\text{rid\_max\_fs},\ \text{fmtp\_max\_fs} \times \text{macroblock\_size}) \]

This fmtp parameter also has bearing on the max-height and max-width parameters. Section 6.1 of [RFC7741] requires that the width and height of the frame in macroblocks be less than int(sqrt(fmtp_max_fs * 8)). Accordingly, the maximum width of a transmitted stream will be limited to:

\[ \min(\text{rid\_max\_width},\ \text{int}(\sqrt{\text{fmtp\_max\_fs} \times 8}) \times \text{macroblock\_width}) \]

Similarly, the stream's height will be limited to:

\[ \min(\text{rid\_max\_height},\ \text{int}(\sqrt{\text{fmtp\_max\_fs} \times 8}) \times \text{macroblock\_height}) \]

8.2. Interaction with H.264 Format Parameters

[RFC6184] defines format parameters for the H.264 video codec. The majority of these parameters do not correspond to codec-independent restrictions:

- deint-buf-cap
- in-band-parameter-sets
- level-asymmetry-allowed
- max-rcmd-nalu-size
- max-cpb
- max-dpb
- packetization-mode
- redundant-pic-cap
- sar-supported
- sar-understood
- sprop-deint-buf-req
- sprop-init-buf-time
- sprop-interleaving-depth
- sprop-level-parameter-sets
- sprop-max-don-diff
- sprop-parameter-sets
- use-level-src-parameter-sets

Note that the max-cpb and max-dpb format parameters for H.264 correspond to restrictions on the stream, but they are specific to the way the H.264 codec operates and do not have codec-independent equivalents. The [RFC6184] codec format parameters covered in the following sections correspond to restrictions on receiver capabilities and never indicate sending restrictions.

8.2.1.
profile-level-id and max-recv-level - Negotiated Subprofile

These parameters include a "level" indicator, which acts as an index into Table A-1 of [H264]. This table contains a number of parameters, several of which correspond to the restrictions defined in this document. [RFC6184] also defines format parameters for the H.264 codec that may increase the maximum values indicated by the negotiated level. The following sections describe the interaction between these parameters and the restrictions defined by this document. In all cases, the H.264 parameters being discussed are the maximum of those indicated by [H264] Table A-1 and those indicated in the corresponding "a=fmtp" line.

8.2.2. max-br / MaxBR - Maximum Video Bitrate

The H.264 "MaxBR" parameter (and its equivalent "max-br" format parameter) corresponds to the "max-bps" restriction defined in this specification, by way of a conversion factor of 1000 or 1200; see [RFC6184] for details regarding which factor gets used under differing circumstances. If an RTP sender is generating a stream using a format defined with this format parameter, and the sending restrictions defined via "a=rid" include a "max-bps" parameter, then the sent stream will conform to the smaller of the two values -- that is:

\[ \min(\text{rid\_max\_bps},\ \text{h264\_MaxBR} \times \text{conversion\_factor}) \]

8.2.3. max-fs / MaxFS - Maximum Frame Size, in H.264 Macroblocks

The H.264 "MaxFS" parameter (and its equivalent "max-fs" format parameter) corresponds roughly to the "max-fs" restriction defined in this document, by way of a conversion factor of 256 (the number of pixels per macroblock). If an RTP sender is generating a stream using a format defined with this format parameter, and the sending restrictions defined via "a=rid" include a "max-fs" parameter, then the sent stream will conform to the smaller of the two values -- that is:

\[ \min(\text{rid\_max\_fs},\ \text{h264\_MaxFS} \times 256) \]

8.2.4.
max-mbps / MaxMBPS - Maximum Macroblock Processing Rate

The H.264 "MaxMBPS" parameter (and its equivalent "max-mbps" format parameter) corresponds roughly to the "max-pps" restriction defined in this document, by way of a conversion factor of 256 (the number of pixels per macroblock). If an RTP sender is generating a stream using a format defined with this format parameter, and the sending restrictions defined via "a=rid" include a "max-pps" parameter, then the sent stream will conform to the smaller of the two values -- that is:

\[ \min(\text{rid\_max\_pps},\ \text{h264\_MaxMBPS} \times 256) \]

8.2.5. max-smbps - Maximum Static Macroblock Processing Rate

The H.264 "max-smbps" format parameter operates the same way as the "max-mbps" format parameter, under the hypothetical assumption that all macroblocks are static macroblocks. It is handled by applying the conversion factor described in Section 8.1 of [RFC6184], and the result of this conversion is applied as described in Section 8.2.4.

8.3. Redundancy Formats and Payload Type Restrictions

Section 4 specifies that redundancy formats using redundancy RTP streams bind the redundancy RTP stream to the source RTP stream with either the RepairedRtpStreamId SDES item or other mechanisms. However, there exist redundancy RTP payload formats that result in the redundancy being included in the source RTP stream. An example of this is "RTP Payload for Redundant Audio Data" [RFC2198], which encapsulates one source stream with one or more redundancy streams in the same RTP payload.

Formats defining the source and redundancy encodings as regular RTP payload types require some consideration for how the "a=rid" restrictions are defined. The "a=rid" line "pt=" parameter can be used to indicate whether the redundancy RTP payload type and/or the individual source RTP payload type(s) are part of the restriction.
Example (SDP excerpt):

```
m=audio 49200 RTP/AVP 97 98 99 100 101 102
a=mid:foo
a=rtpmap:97 G711/8000
a=rtpmap:98 LPC/8000
a=rtpmap:99 OPUS/48000/1
a=rtpmap:100 RED/8000/1
a=rtpmap:101 CN/8000
a=rtpmap:102 telephone-event/8000
a=fmtp:99 useinbandfec=1; usedtx=0
a=fmtp:100 97/98
a=fmtp:102 0-15
a=maxptime:20
a=rid:5 send pt=99,102;max-br=64000
a=rid:6 send pt=100,97,101,102
```

The RID with ID=6 restricts the payload types for this RID to 100 (the redundancy format), 97 (G.711), 101 (Comfort Noise), and 102 (dual-tone multi-frequency (DTMF) tones). This means that RID 6 can either contain the Redundant Audio Data (RED) format, encapsulating encodings of the source media stream using payload types 97 and 98; 97 without RED encapsulation; Comfort Noise; or DTMF tones. Payload type 98 is not included in the RID and can thus not be sent except as redundancy information in RED encapsulation. If 97 were to be excluded from the "pt=" parameter, it would instead mean that payload types 97 and 98 are only allowed via RED encapsulation.

9. Format Parameters for Future Payloads

Registrations of future RTP payload format specifications that define media types that have parameters matching the RID restrictions specified in this memo SHOULD name those parameters in a manner that matches the names of those RID restrictions and SHOULD explicitly state what media-type parameters are restricted by what RID restrictions.

10. Formal Grammar

This section gives a formal Augmented Backus-Naur Form (ABNF) [RFC5234] grammar, with the case-sensitive extensions described in [RFC7405], for each of the new media and "a=rid" attributes defined in this document.
```
rid-syntax        = %s"a=rid:" rid-id SP rid-dir
                    [ rid-pt-param-list / rid-param-list ]

rid-id            = 1*(alpha-numeric / "-" / "_")

alpha-numeric     = < as defined in [RFC4566] >

rid-dir           = %s"send" / %s"recv"

rid-pt-param-list = SP rid-fmt-list *( ";" rid-param )

rid-param-list    = SP rid-param *( ";" rid-param )

rid-fmt-list      = %s"pt=" fmt *( "," fmt )

fmt               = < as defined in [RFC4566] >

rid-param         = rid-width-param
                    / rid-height-param
                    / rid-fps-param
                    / rid-fs-param
                    / rid-br-param
                    / rid-pps-param
                    / rid-bpp-param
                    / rid-depend-param
                    / rid-param-other

rid-width-param   = %s"max-width" [ ":" int-param-val ]

rid-height-param  = %s"max-height" [ ":" int-param-val ]

rid-fps-param     = %s"max-fps" [ ":" int-param-val ]

rid-fs-param      = %s"max-fs" [ ":" int-param-val ]

rid-br-param      = %s"max-br" [ ":" int-param-val ]

rid-pps-param     = %s"max-pps" [ ":" int-param-val ]

rid-bpp-param     = %s"max-bpp" [ ":" float-param-val ]

rid-depend-param  = %s"depend=" rid-list

rid-param-other   = 1*(alpha-numeric / "-") [ "=" param-val ]

rid-list          = rid-id *( "," rid-id )

int-param-val     = 1*DIGIT

float-param-val   = 1*DIGIT "." 1*DIGIT

param-val         = *( %x20-3A / %x3C-7E )
                    ; Any printable character except semicolon
```

11. SDP Examples

Note: See [RFC8853] for examples of RID used in simulcast scenarios.

11.1. Many Bundled Streams Using Many Codecs

In this scenario, the offerer supports the Opus, G.722, G.711, and DTMF audio codecs and VP8, VP9, H.264 (CBP/CHP, mode 0/1), H.264-SVC (SCBP/SCHP), and H.265 (MP/M10P) for video. An 8-way video call (to a mixer) is supported (send 1 and receive 7 video streams) by offering 7 video media sections (1 sendrecv at max resolution and 6 recvonly at smaller resolutions), all bundled on the same port, using 3 different resolutions. The resolutions include:

- 1 receive stream of 720p resolution is offered for the active speaker.

- 2 receive streams of 360p resolution are offered for the prior 2 active speakers.

- 4 receive streams of 180p resolution are offered for others in the call.
NOTE: The SDP given below skips a few lines to keep the example short and focused, as indicated by either the "..." or the comments inserted.

The offer for this scenario is shown below.

...
m=audio 10000 RTP/SAVPF 96 9 8 0 123
a=rtpmap:96 OPUS/48000
a=rtpmap:9 G722/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:0 PCMU/8000
a=rtpmap:123 telephone-event/8000
a=mid:a1
...
m=video 10000 RTP/SAVPF 98 99 100 101 102 103 104 105 106 107
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
a=rtpmap:98 VP8/90000
a=fmtp:98 max-fs=3600; max-fr=30
a=rtpmap:99 VP9/90000
a=fmtp:99 max-fs=3600; max-fr=30
a=rtpmap:100 H264/90000
a=fmtp:100 profile-level-id=42401f; packetization-mode=0
a=rtpmap:101 H264/90000
a=fmtp:101 profile-level-id=42401f; packetization-mode=1
a=rtpmap:102 H264/90000
a=fmtp:102 profile-level-id=640c1f; packetization-mode=0
a=rtpmap:103 H264/90000
a=fmtp:103 profile-level-id=640c1f; packetization-mode=1
a=rtpmap:104 H264-SVC/90000
a=fmtp:104 profile-level-id=530c1f
a=rtpmap:105 H264-SVC/90000
a=fmtp:105 profile-level-id=560c1f
a=rtpmap:106 H265/90000
a=fmtp:106 profile-id=1; level-id=93
a=rtpmap:107 H265/90000
a=fmtp:107 profile-id=2; level-id=93
a=sendrecv
a=mid:v1 (max resolution)
a=rid:1 send max-width=1280; max-height=720; max-fps=30
a=rid:2 recv max-width=1280; max-height=720; max-fps=30
...
m=video 10000 RTP/SAVPF 98 99 100 101 102 103 104 105 106 107
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
... same rtpmap/fmtp as above ...
a=recvonly
a=mid:v2 (medium resolution)
a=rid:3 recv max-width=640; max-height=360; max-fps=15
...
m=video 10000 RTP/SAVPF 98 99 100 101 102 103 104 105 106 107
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
... same rtpmap/fmtp as above ...
a=recvonly
a=mid:v3 (medium resolution)
a=rid:3 recv max-width=640; max-height=360; max-fps=15
...
m=video 10000 RTP/SAVPF 98 99 100 101 102 103 104 105 106 107
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id
... same rtpmap/fmtp as above ...
a=recvonly
a=mid:v4 (small resolution)
a=rid:4 recv max-width=320; max-height=180; max-fps=15
...

11.2. Scalable Layers

Adding scalable layers to a session within a multiparty conference gives a selective forwarding unit (SFU) further flexibility to selectively forward packets from a source that best match the bandwidth and capabilities of diverse receivers. Scalable encodings have dependencies between layers, unlike independent simulcast streams. RIDs can be used to express these dependencies using the "depend" restriction. In the example below, the highest resolution is offered to be sent as 2 scalable temporal layers (using Multiple RTP Streams on a Single Media Transport (MRST)). See [RFC8853] for additional detail about simulcast usage.

```
Offer:
...
m=audio ...same as previous example ...
...
m=video ...same as previous example ...
...same rtpmap/fmtp as previous example ...
a=sendrecv
a=mid:v1 (max resolution)
a=rid:0 send max-width=1280;max-height=720;max-fps=15
a=rid:1 send max-width=1280;max-height=720;max-fps=30;depend=0
a=rid:2 recv max-width=1280;max-height=720;max-fps=30
a=rid:5 send max-width=640;max-height=360;max-fps=15
a=rid:6 send max-width=320;max-height=180;max-fps=15
a=simulcast: send rid=0;1;5;6 recv rid=2
...
...same m=video sections as previous example for mid:v2-v7...
```

12. IANA Considerations

This specification updates [RFC4855] to give additional guidance on choice of Format Parameter (fmtp) names and their relation to RID restrictions.

12.1. New SDP Media-Level Attribute

This document defines "rid" as an SDP media-level attribute. This attribute has been registered by IANA under "Session Description Protocol (SDP) Parameters" under "att-field (media level only)". The "rid" attribute is used to identify the properties of an RTP stream within an RTP session. Its format is defined in Section 10. The formal registration information for this attribute follows.
Attribute name (as it will appear in SDP):
   rid

Long-form attribute name in English:
   Restriction Identifier

Type of attribute (session level, media level, or both):
   Media Level

Whether the attribute value is subject to the charset attribute:
   The attribute is not dependent on charset.

A one-paragraph explanation of the purpose of the attribute:
   The "rid" SDP attribute is used to unambiguously identify the RTP streams within an RTP session and restrict the streams' payload format parameters in a codec-agnostic way beyond what is provided with the regular payload types.

A specification of appropriate attribute values for this attribute:
   Valid values are defined by the ABNF in RFC 8851.

Multiplexing (Mux) Category:
   SPECIAL

12.2. Registry for RID-Level Parameters

This specification creates a new IANA registry named "RID Attribute Parameters" within the SDP parameters registry. The "a=rid" restrictions MUST be registered with IANA and documented under the same rules as for SDP session-level and media-level attributes as specified in [RFC4566].

Parameters for "a=rid" lines that modify the nature of encoded media MUST be of the form that the result of applying the modification to the stream results in a stream that still complies with the other parameters that affect the media. In other words, restrictions always have to restrict the definition to be a subset of what is otherwise allowable, and never expand it.

New restriction registrations are accepted according to the "Specification Required" policy of [RFC8126]. The registration MUST contain the RID parameter name and a reference to the corresponding specification.
The specification itself must contain the following information (not all of which appears in the registry):

- restriction name (as it will appear in SDP)
- an explanation of the purpose of the restriction
- a specification of appropriate attribute values for this restriction
- an ABNF definition of the restriction

The initial set of "a=rid" restriction names, with definitions in Section 5 of this document, is given below:

| RID Parameter Name | Reference |
|--------------------|-----------|
| pt                 | RFC 8851  |
| max-width          | RFC 8851  |
| max-height         | RFC 8851  |
| max-fps            | RFC 8851  |
| max-fs             | RFC 8851  |
| max-br             | RFC 8851  |
| max-pps            | RFC 8851  |
| max-bpp            | RFC 8851  |
| depend             | RFC 8851  |

Table 1: "a=rid" restriction names

It is conceivable that a future document will want to define RID-level restrictions that contain string values. These extensions need to take care to conform to the ABNF defined for rid-param-other. In particular, this means that such extensions will need to define escaping mechanisms if they want to allow semicolons, unprintable characters, or byte values greater than 127 in the string.

13. Security Considerations

As with most SDP parameters, a failure to provide integrity protection over the "a=rid" attributes gives attackers a way to modify the session in potentially unwanted ways. This could result in an implementation sending greater amounts of data than a recipient wishes to receive. In general, however, since the "a=rid" attribute can only restrict a stream to be a subset of what is otherwise allowable, modification of the value cannot result in a stream that is of higher bandwidth than would be sent to an implementation that does not support this mechanism.
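The character constraint on extension restriction values can be sketched as a small validity check. This is an illustrative sketch, not normative: it encodes the `param-val` rule from the formal grammar (`%x20-3A / %x3C-7E`, i.e., printable ASCII with the semicolon excluded, since ";" separates restrictions on an "a=rid" line).

```python
# Illustrative check: does a string satisfy the "param-val" grammar rule?
# Allowed: printable ASCII 0x20-0x7E, excluding the semicolon (0x3B).

def is_valid_param_val(value: str) -> bool:
    return all(0x20 <= ord(ch) <= 0x7E and ch != ";" for ch in value)

assert is_valid_param_val("some-extension value=1,2")
assert not is_valid_param_val("a;b")        # ";" would need escaping
assert not is_valid_param_val("caf\u00e9")  # byte values > 127 not allowed
```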
The actual identifiers used for RIDs are expected to be opaque. As such, they are not expected to contain information that would be sensitive, were it observed by third parties.

14. References

14.1. Normative References

[RFC2119]

14.2. Informative References

Acknowledgements

Many thanks to Cullen Jennings, Magnus Westerlund, and Paul Kyzivat for reviewing. Thanks to Colin Perkins for input on future payload type handling.

Contributors

The following individuals have contributed significant text to this document.

**Peter Thatcher**
Google
Email: pthatcher@google.com

**Mo Zanaty**
Cisco Systems
Email: mzanaty@cisco.com

**Suhas Nandakumar**
Cisco Systems
Email: snandaku@cisco.com

**Bo Burman**
Ericsson
Email: bo.burman@ericsson.com

**Byron Campen**
Mozilla
Email: bcampen@mozilla.com

**Author's Address**

Adam Roach (editor)
Mozilla
Email: adam@nostrum.com
SCons 4.0.1 Design

Steven Knight

Copyright © 2001 Steven Knight

Publication date 2001

Portions of this document, by the same author, were previously published Copyright 2000 by CodeSourcery LLC, under the Software Carpentry Open Publication License, the terms of which are available at http://www.software-carpentry.com/openpub-license.html.

# Table of Contents

1. Introduction
   1.1. About This Document
2. Goals
   2.1. Fixing Make's problems
   2.2. Fixing Cons's problems
3. Overview
   3.1. Architecture
   3.2. Build Engine
      3.2.1. Python API
      3.2.2. Single-image execution
      3.2.3. Dependency analysis
      3.2.4. Customized output
      3.2.5. Build failures
   3.3. Interfaces
      3.3.1. Native Python interface
      3.3.2. Makefile interface
      3.3.3. Graphical interfaces
4. Build Engine API
   4.1. General Principles
      4.1.1. Keyword arguments
      4.1.2. Internal object representation
   4.2. Construction Environments
      4.2.1. Construction variables
      4.2.2. Fetching construction variables
      4.2.3. Copying a construction environment
      4.2.4. Multiple construction environments
      4.2.5. Variable substitution
   4.3. Builder Objects
      4.3.1. Specifying multiple inputs
      4.3.2. Specifying multiple targets
      4.3.3. File prefixes and suffixes
      4.3.4. Builder object exceptions
      4.3.5. User-defined Builder objects
      4.3.6. Copying Builder Objects
      4.3.7. Special-purpose build rules
      4.3.8. The Make Builder
      4.3.9. Builder maps
   4.4. Dependencies
      4.4.1. Automatic dependencies
      4.4.2. Implicit dependencies
      4.4.3. Ignoring dependencies
      4.4.4. Explicit dependencies
   4.5. Scanner Objects
      4.5.1. User-defined Scanner objects
      4.5.2. Copying Scanner Objects
      4.5.3. Scanner maps
   4.6. Targets
      4.6.1. Building targets
      4.6.2. Removing targets
      4.6.3. Suppressing cleanup removal of build-targets
      4.6.4. Suppressing build-target removal
      4.6.5. Default targets
      4.6.6. File installation
      4.6.7. Target aliases
   4.7. Customizing output
   4.8. Separate source and build trees
   4.9. Variant builds
   4.10. Code repositories
   4.11. Derived-file caching
   4.12. Job management
5. Native Python Interface
   5.1. Configuration files
   5.2. Python syntax
   5.3. Subsidiary configuration files
   5.4. Variable scoping in subsidiary files
   5.5. Hierarchical builds
   5.6. Sharing construction environments
   5.7. Help
   5.8. Debug
6. Other Issues
   6.1. Interaction with SC-config
   6.2. Interaction with test infrastructures
   6.3. Java dependencies
   6.4. Limitations of digital signature calculation
   6.5. Remote execution
   6.6. Conditional execution
7. Background
8. Summary
9. Acknowledgements

List of Figures

3.1. SCons Architecture

1 Introduction

The SCons tool provides an easy-to-use, feature-rich interface for constructing software. Architecturally, SCons separates its dependency analysis and external object management into an interface-independent Build Engine that could be embedded in any software system that can run Python. At the command line, SCons presents an easily-grasped tool where configuration files are Python scripts, reducing the need to learn new build-tool syntax. Inexperienced users can use intelligent methods that “do the right thing” to build software with a minimum of fuss. Sophisticated users can use a rich set of underlying features for finer control of the build process, including mechanisms for easily extending the build process to new file types.

Dependencies are tracked using digital signatures, which provide more robust dependency analysis than file time stamps. Implicit dependencies are determined automatically by scanning the contents of source files, avoiding the need for laborious and fragile maintenance of static lists of dependencies in configuration files.

The SCons tool supports use of files from one or more central code repositories, a mechanism for caching derived files, and parallel builds. The tool also includes a framework for sharing build environments, which allows system administrators or integrators to define appropriate build parameters for use by other users.

1.1. About This Document

This document is an ongoing work-in-progress to write down the ideas and tradeoffs that have gone, and will go, into the SCons design. As such, it is intended primarily for use by developers and others working on SCons, although it is also intended to serve as a detailed overview of SCons for other interested parties.
It will be continually updated and evolve, and will likely overlap with other documentation produced by the project. Sections of this document that deal with syntax, for example, may move or be copied into a user guide or reference manual. So please don't assume that everything mentioned here has been decided and carved in stone. If you have ideas for improvements, or questions about things that don't seem to make any sense, please help improve the design by speaking up about them.

2 Goals

As a next-generation build tool, SCons should fundamentally improve on its predecessors. Rather than simply being driven by trying to not be like previous tools, SCons aims to satisfy the following goals:

**Practicality** The SCons design emphasizes an implementable feature set that lets users get practical, useful work done. SCons is helped in this regard by its roots in Cons, which has had its feature set honed by several years of input from a dedicated band of users.

**Portability** SCons is intended as a portable build tool, able to handle software construction tasks on a variety of operating systems. It should be possible (although not mandatory) to use SCons so that the same configuration file builds the same software correctly on, for example, both Linux and Windows NT. Consequently, SCons should hide from users operating-system-dependent details such as filename extensions (for example, `.o` vs. `.obj`).

**Usability** Novice users should be able to grasp quickly the rudiments of using SCons to build their software. This extends to installing SCons, too. Installation should be painless, and the installed SCons should work "out of the box" to build most software. This goal should be kept in mind during implementation, when there is always a tendency to try to optimize too early. Speed is nice, but not as important as clarity and ease of use.

**Utility** SCons should also provide a rich enough set of features to accommodate building more complicated software projects.
However, the features required for building complicated software projects should not get in the way of novice users. (See the previous goal.) In other words, complexity should be available when it's needed but not required to get work done. Practically, this implies that SCons shouldn't be dumbed down to the point where it excludes complicated software builds.

**Sharability** As a key element in balancing the conflicting needs of Usability and Utility, SCons should provide mechanisms to allow SCons users to share build rules, dependency scanners, and other objects and recipes for constructing software. A good sharing mechanism should support the model wherein most developers on a project use rules and templates that are created and maintained by a local integrator or build-master.

**Extensibility** SCons should provide mechanisms for easily extending its capabilities, including building new types of files, adding new types of dependency scanning, being able to accommodate dependencies between objects other than files, etc.

**Flexibility** In addition to providing a useful command-line interface, SCons should provide the right architectural framework for embedding its dependency management in other interfaces. SCons would help strengthen other GUIs or IDEs, and the additional requirements of the other interfaces would help broaden and solidify the core SCons dependency management.

2.1. Fixing Make's problems

2.2. Fixing Cons's problems

3 Overview

3.1. Architecture

The heart of SCons is its *Build Engine*. The SCons Build Engine is a Python module that manages dependencies between external objects such as files or database records. The Build Engine is designed to be interface-neutral and easily embeddable in any software system that needs dependency analysis between updatable objects.

The key parts of the Build Engine architecture are captured in the following quasi-UML diagram:

**Figure 3.1. SCons Architecture**

The point of SCons is to manage dependencies between arbitrary external objects. Consequently, the Build Engine does not restrict or specify the nature of the external objects it manages, but instead relies on subclasses of the `Node` class to interact with the external system or systems (file systems, database management systems) that maintain the objects being examined or updated.

The Build Engine presents to the software system in which it is embedded a Python API for specifying source (input) and target (output) objects, rules for building/updating objects, rules for scanning objects for dependencies, etc. Above its Python API, the Build Engine is completely interface-independent, and can be encapsulated by any other software that supports embedded Python.

Software that chooses to use the Build Engine for dependency management interacts with it through Construction Environments. A Construction Environment consists of a dictionary of environment variables, and one or more associated Scanner objects and Builder objects. The Python API is used to form these associations.

A Scanner object specifies how to examine a type of source object (C source file, database record) for dependency information. A Scanner object may use variables from the associated Construction Environment to modify how it scans an object: specifying a search path for included files, which field in a database record to consult, etc.

A Builder object specifies how to update a type of target object: executable program, object file, database field, etc. Like a Scanner object, a Builder object may use variables from the associated Construction Environment to modify how it builds an object: specifying flags to a compiler, using a different update function, etc.

Scanner and Builder objects will return one or more Node objects that represent external objects.
Node objects are the means by which the Build Engine tracks dependencies: A Node may represent a source (input) object that should already exist, or a target (output) object which may be built, or both. The Node class is sub-classed to represent external objects of specific types: files, directories, database fields or records, etc. Because dependency information is tracked by the top-level Node methods and attributes, however, dependencies can exist between nodes representing different external object types. For example, building a file could be made dependent on the value of a given field in a database record, or a database table could depend on the contents of an external file.

The Build Engine uses a Job class (not displayed) to manage the actual work of updating external target objects: spawning commands to build files, submitting the necessary commands to update a database record, etc. The Job class has sub-classes to handle differences between spawning jobs in parallel and serially.

The Build Engine also uses a Signature class (not displayed) to maintain information about whether an external object is up-to-date. Target objects with out-of-date signatures are updated using the appropriate Builder object.

### 3.2. Build Engine

More detailed discussion of some of the Build Engine's characteristics:

#### 3.2.1. Python API

The Build Engine can be embedded in any other software that supports embedding Python: in a GUI, in a wrapper script that interprets classic Makefile syntax, or in any other software that can translate its dependency representation into the appropriate calls to the Build Engine API. Chapter 5 describes in detail the specification for a "Native Python" interface that will drive the SCons implementation effort.

#### 3.2.2. Single-image execution

When building/updating the objects, the Build Engine operates as a single executable with a complete Directed Acyclic Graph (DAG) of the dependencies in the entire build tree.
This is in stark contrast to the commonplace recursive use of Make to handle hierarchical directory-tree builds.

#### 3.2.3. Dependency analysis

Dependency analysis is carried out via digital signatures (a.k.a. "fingerprints"). The contents of an object are examined and reduced to a number that can be stored and compared to see if the object has changed. Additionally, SCons uses the same signature technique on the command lines that are executed to update an object. If the command line has changed since the last time, then the object must be rebuilt.

#### 3.2.4. Customized output

The output of the Build Engine is customizable through user-defined functions. This could be used to print additional desired information about what SCons is doing, or to tailor output to a specific build analyzer, GUI, or IDE.

#### 3.2.5. Build failures

SCons detects build failures via the exit status from the tools used to build the target files. By default, a failed exit status (non-zero on UNIX systems) terminates the build with an appropriate error message. An appropriate class from the Python library will interpret build-tool failures via an OS-independent API. If multiple tasks are executing in a parallel build, and one tool returns failure, SCons will not initiate any further build tasks, but will allow the other build tasks to complete before terminating. A -k command-line option may be used to ignore errors and continue building other targets. In no case will a target that depends on a failed build be rebuilt.

### 3.3. Interfaces

As previously described, the SCons Build Engine is interface-independent above its Python API, and can be embedded in any software system that can translate its dependency requirements into the necessary Python calls. The "main" SCons interface, for implementation purposes, uses Python scripts as configuration files. Because this exposes the Build Engine's Python API to the user, it is currently called the "Native Python" interface.
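The content-and-command signature scheme of Section 3.2.3 can be modeled in a few lines of plain Python. This is only an illustrative sketch, not SCons's actual Signature class; the function names and the use of MD5 here are assumptions:

```python
import hashlib

def content_signature(path):
    # Reduce a file's contents to a number (here, an MD5 digest)
    # that can be stored and compared later.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def needs_rebuild(source_path, command, stored_signatures):
    # A target is out of date if either the source contents or the
    # command line used to build it has changed since the last build.
    current = (content_signature(source_path),
               hashlib.md5(command.encode()).hexdigest())
    return current != stored_signatures
```

Changing either the source file or the build command invalidates the stored signature pair, which is exactly the rebuild rule described above; note that touching a file's timestamp without changing its contents does not.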
This section will also discuss how SCons will function in the context of two other interfaces: the Makefile interface of the classic Make utility, and a hypothetical graphical user interface (GUI).

#### 3.3.1. Native Python interface

The Native Python interface is intended to be the primary interface by which users will know SCons--that is, it is the interface they will use if they actually type SCons at a command-line prompt. In the Native Python interface, SCons configuration files are simply Python scripts that directly invoke methods from the Build Engine's Python API to specify target files to be built, rules for building the target files, and dependencies. Additional methods, specific to this interface, are added to handle functionality that is specific to the Native Python interface: reading a subsidiary configuration file; copying target files to an installation directory; etc.

Because configuration files are Python scripts, Python flow control can be used to provide very flexible manipulation of objects and dependencies. For example, a function could be used to invoke a common set of methods on a file, and called iteratively over an array of files. As an additional advantage, syntax errors in SCons Native Python configuration files will be caught by the Python parser. Target-building does not begin until after all configuration files are read, so a syntax error will not cause a build to fail half-way.

#### 3.3.2. Makefile interface

An alternate SCons interface would provide backwards compatibility with the classic Make utility. This would be done by embedding the SCons Build Engine in a Python script that can translate existing Makefiles into the underlying calls to the Build Engine's Python API for building and tracking dependencies. Here are approaches to solving some of the issues that arise from marrying these two pieces:

- **Makefile** suffix rules can be translated into an appropriate **Builder** object with suffix maps from the Construction Environment.
- Long lists of static dependencies appended to a **Makefile** by various "**make depend**" schemes can be preserved but supplemented by the more accurate dependency information provided by **Scanner** objects.
- Recursive invocations of Make can be avoided by reading up the subsidiary **Makefile** instead.

Lest this seem like too outlandish an undertaking, there is a working example of this approach: Gary Holt's Make++ utility is a Perl script that provides admirably complete parsing of complicated **Makefiles** around an internal build engine inspired, in part, by the classic Cons utility.

### 3.3.3. Graphical interfaces

The SCons Build Engine is designed from the ground up to be embedded into multiple interfaces. Consequently, embedding the dependency capabilities of SCons into a graphical interface would be a matter of mapping the GUI's dependency representation (either implicit or explicit) into corresponding calls to the Python API of the SCons Build Engine. Note, however, that this proposal leaves the problem of designing a good graphical interface for representing software build dependencies to people with actual GUI design experience...

4 Build Engine API

4.1. General Principles

4.1.1. Keyword arguments

All methods and functions in this API will support the use of keyword arguments in calls, for the sake of explicitness and readability. For brevity in the hands of experts, most methods and functions will also support positional arguments for their most-commonly-used arguments. As an explicit example, the following two lines will each arrange for an executable program named foo (or foo.exe on a Win32 system) to be compiled from the foo.c source file:

```python
env.Program(target = 'foo', source = 'foo.c')
env.Program('foo', 'foo.c')
```

4.1.2. Internal object representation

All methods and functions use internal (Python) objects that represent the external objects (files, for example) for which they perform dependency analysis.
All methods and functions in this API that accept an external object as an argument will accept either a string description or an object reference. For example, the following two two-line examples are equivalent:

```python
env.Object(target = 'foo.o', source = 'foo.c')
env.Program(target = 'foo', source = 'foo.o')   # builds foo from foo.o
```

```python
foo_obj = env.Object(target = 'foo.o', source = 'foo.c')
env.Program(target = 'foo', source = foo_obj)   # builds foo from foo.o
```

### 4.2. Construction Environments

A construction environment is the basic means by which a software system interacts with the SCons Python API to control a build process. A construction environment is an object with associated methods for generating target files of various types (Builder objects), other associated object methods for automatically determining dependencies from the contents of various types of source files (Scanner objects), and a dictionary of values used by these methods. Passing no arguments to the Environment instantiation creates a construction environment with default values for the current platform:

```python
env = Environment()
```

### 4.2.1. Construction variables

A construction environment has an associated dictionary of construction variables that control how the build is performed. By default, the Environment method creates a construction environment with values that make most software build "out of the box" on the host system. These default values will be generated at the time SCons is installed using functionality similar to that provided by GNU Autoconf.¹ At a minimum, there will be pre-configured sets of default values that will provide reasonable defaults for UNIX and Windows NT.

¹ It would be nice if we could avoid re-inventing the wheel here by using some other Python-based Autoconf replacement--like what was supposed to come out of the Software Carpentry configuration tool contest. It will probably be most efficient to roll our own logic initially and convert if something better does come along.

The default construction environment values may be overridden when a new construction environment is created by specifying keyword arguments:

```python
env = Environment(CC = 'gcc',
                  CCFLAGS = '-g',
                  CPPPATH = ['.', 'src', '/usr/include'],
                  LIBPATH = ['/usr/lib', '.'])
```

### 4.2.2. Fetching construction variables

A copy of the dictionary of construction variables can be returned using the Dictionary method:

```python
env = Environment()
dict = env.Dictionary()
```

If any arguments are supplied, then just the corresponding value(s) are returned:

```python
ccflags = env.Dictionary('CCFLAGS')
cc, ld = env.Dictionary('CC', 'LD')
```

### 4.2.3. Copying a construction environment

A method exists to return a copy of an existing environment, with any overridden values specified as keyword arguments to the method:

```python
env = Environment()
debug = env.Copy(CCFLAGS = '-g')
```

### 4.2.4. Multiple construction environments

Different external objects often require different build characteristics. Multiple construction environments may be defined, each with different values:

```python
env = Environment(CCFLAGS = '')
debug = Environment(CCFLAGS = '-g')
env.Make(target = 'hello', source = 'hello.c')
debug.Make(target = 'hello-debug', source = 'hello.c')
```

Dictionaries of values from multiple construction environments may be passed to the `Environment` instantiation or the `Copy` method, in which case the last-specified dictionary value wins:

```python
env1 = Environment(CCFLAGS = '-O', LDFLAGS = '-d')
env2 = Environment(CCFLAGS = '-g')
new = Environment(env1.Dictionary(), env2.Dictionary())
```

The `new` environment in the above example retains `LDFLAGS = '-d'` from the `env1` environment, and `CCFLAGS = '-g'` from the `env2` environment.

### 4.2.5. Variable substitution

Within a construction command, any variable from the construction environment may be interpolated by prefixing the name of the construction variable with `$`:

```python
MyBuilder = Builder(command = "$XX $XXFLAGS -c $_INPUTS -o $target")
env.Command(targets = 'bar.out', sources = 'bar.in',
            command = "sed '1d' < $source > $target")
```

Variable substitution is recursive: the command line is expanded until no more substitutions can be made.

Variable names following the `$` may be enclosed in braces. This can be used to concatenate an interpolated value with an alphanumeric character:

```python
VerboseBuilder = Builder(command = "$XX -${XXFLAGS}v > $target")
```

The variable within braces may contain a pair of parentheses after a Python function name to be evaluated (for example, `${map()}`). SCons will interpolate the return value from the function (presumably a string):

```python
env = Environment(FUNC = myfunc)
env.Command(target = 'foo.out', source = 'foo.in',
            command = "${FUNC($<)}")
```

If a referenced variable is not defined in the construction environment, the null string is interpolated.

The following special variables can also be used:

**$targets** All target file names. If multiple targets are specified in an array, $targets expands to the entire list of targets, separated by a single space. Individual targets from a list may be extracted by enclosing the targets keyword in braces and using the appropriate Python array index or slice:

```
${targets[0]}    # expands to the first target
${targets[1:]}   # expands to all but the first target
${targets[1:-1]} # expands to all but the first and last targets
```

**$target** A synonym for ${targets[0]}, the first target specified.

**$sources** All input file names. Any input file names that are used anywhere else on the current command line (via ${sources[0]}, ${sources[1]}, etc.) are removed from the expanded list.
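The recursive expansion rule described above can be illustrated with a toy interpolation function. This is a deliberately simplified model (simple `$NAME` references only, no `${...}` indexing or function calls) and is not SCons's real implementation:

```python
import re

def interpolate(template, env):
    # Expand $NAME references from the construction-environment
    # dictionary, repeating until no more substitutions can be made
    # (i.e., substitution is recursive).  Undefined variables expand
    # to the null string.  Assumes no cyclic references.
    pattern = re.compile(r"\$(\w+)")
    while True:
        expanded = pattern.sub(lambda m: str(env.get(m.group(1), "")), template)
        if expanded == template:
            return expanded
        template = expanded
```

For example, with `CCCOM = '$CC $CCFLAGS -c'` and `CC = 'gcc'`, interpolating `$CCCOM` first yields `$CC $CCFLAGS -c`, and the next pass produces the fully expanded command line.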
Any of the above special variables may be enclosed in braces and followed immediately by one of the following attributes to select just a portion of the expanded path name:

- **.base** Basename: the directory plus the file name, minus any file suffix.
- **.dir** The directory in which the file lives. This is a relative path, where appropriate.
- **.file** The file name, minus any directory portion.
- **.suffix** The file name suffix (that is, the right-most dot in the file name, and all characters to the right of that).
- **.filebase** The file name (no directory portion), minus any file suffix.
- **.abspath** The absolute path to the file.

### 4.3. Builder Objects

By default, SCons supplies (and uses) a number of pre-defined Builder objects:

<table>
<thead>
<tr> <th>Object</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>compile</td> <td>compile or assemble an object file</td> </tr>
<tr> <td>archive</td> <td>archive files into a library</td> </tr>
</tbody>
</table>

#### 4.3.1. Specifying multiple inputs

Multiple input files that go into creating a target file may be passed in as a single string, with the individual file names separated by white space:

```
env.Library(target = 'foo.a', source = 'aaa.c bbb.c ccc.c')
env.Object(target = 'yyy.o', source = 'yyy.c')
env.Program(target = 'bar', source = 'xxx.c yyy.o foo.a')
```

Alternatively, multiple input files that go into creating a target file may be passed in as an array. This allows input files to be specified using their object representation:

```
env.Library(target = 'foo.a', source = ['aaa.c', 'bbb.c', 'ccc.c'])
yyy_obj = env.Object(target = 'yyy.o', source = 'yyy.c')
env.Program(target = 'bar', source = ['xxx.c', yyy_obj, 'foo.a'])
```

Individual string elements within an array of input files are not further split into white-space separated file names.
This allows file names that contain white space to be specified by putting the value into an array: ``` env.Program(target = 'foo', source = ['an input file.c']) ``` 4.3.2. Specifying multiple targets Conversely, the generated target may be a string listing multiple files separated by white space: 4.3.3. File prefixes and suffixes For portability, if the target file name does not already have an appropriate file prefix or suffix, the Builder objects will append one appropriate for the file type on the current system: ```python # builds 'hello.o' on UNIX, 'hello.obj' on Windows NT: obj = env.Object(target = 'hello', source = 'hello.c') # builds 'libfoo.a' on UNIX, 'foo.lib' on Windows NT: lib = env.Library(target = 'foo', source = ['aaa.c', 'bbb.c']) # builds 'libbar.so' on UNIX, 'bar.dll' on Windows NT: slib = env.SharedLibrary(target = 'bar', source = ['xxx.c', 'yyy.c']) # builds 'hello' on UNIX, 'hello.exe' on Windows NT: prog = env.Program(target = 'hello', source = ['hello.o', 'libfoo.a', 'libbar.so']) ``` 4.3.4. Builder object exceptions Builder objects raise the following exceptions on error: 4.3.5. User-defined Builder objects Users can define additional Builder objects for specific external object types unknown to SCons. 
A Builder object may build its target by executing an external command:

```python
WebPage = Builder(command = 'htmlgen $HTMLGENFLAGS $sources > $target',
                  suffix = '.html',
                  src_suffix = '.in')
```

Alternatively, a Builder object may also build its target by executing a Python function:

```python
def update(dest):
    # [code to update the object]
    return 1

OtherBuilder1 = Builder(function = update,
                        src_suffix = ['.in', '.input'])
```

An optional argument to pass to the function may be specified:

```python
def update_arg(dest, arg):
    # [code to update the object]
    return 1

OtherBuilder2 = Builder(function = update_arg,
                        function_arg = 'xyzzy',
                        src_suffix = ['.in', '.input'])
```

Both an external command and an internal function may be specified, in which case the function will be called to build the object first, followed by the command line.

User-defined Builder objects can be used like the default Builder objects to initialize construction environments:

```python
WebPage = Builder(command = 'htmlgen $HTMLGENFLAGS $sources > $target',
                  suffix = '.html',
                  src_suffix = '.in')
env = Environment(BUILDERS = ['WebPage'])
env.WebPage(target = 'foo.html', source = 'foo.in')
# Builds 'bar.html' on UNIX, 'bar.htm' on Windows NT:
env.WebPage(target = 'bar', source = 'bar.in')
```

The command-line specification can interpolate variables from the construction environment; see "Variable substitution," above.

A Builder object may optionally be initialized with a list of:

- the prefix of the target file (e.g., 'lib' for libraries)
- the suffix of the target file (e.g., '.a' for libraries)
- the expected suffixes of the input files (e.g., '.o' for object files)

These arguments are used in automatic dependency analysis and to generate output file names that don't have suffixes supplied explicitly.
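The command-and-function behavior just described (the function is called first, then the command line runs) can be modeled with a small hypothetical class. `SketchBuilder` is illustrative only; the real Builder objects also handle suffixes and dependency analysis:

```python
class SketchBuilder:
    """Toy model of a user-defined Builder: an optional Python function
    is called first, then an optional command template is expanded."""
    def __init__(self, command=None, function=None, function_arg=None):
        self.command = command
        self.function = function
        self.function_arg = function_arg

    def build(self, target, sources):
        if self.function:                       # internal function first
            if self.function_arg is not None:
                self.function(target, self.function_arg)
            else:
                self.function(target)
        if self.command:                        # then the command line
            return self.command.replace('$sources', ' '.join(sources)) \
                               .replace('$target', target)

calls = []
b = SketchBuilder(command='htmlgen $sources > $target',
                  function=lambda dest: calls.append(dest))
b.build('foo.html', ['foo.in'])   # 'htmlgen foo.in > foo.html'
```

Here the function records the target it was asked to update, and the returned string is the expanded command that a real Builder would execute.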
#### 4.3.6. Copying Builder Objects

A Copy method exists to return a copy of an existing Builder object, with any overridden values specified as keyword arguments to the method:

```python
build = Builder(function = my_build)
build_out = build.Copy(suffix = '.out')
```

Typically, Builder objects will be supplied by a tool-master or administrator through a shared construction environment.

#### 4.3.7. Special-purpose build rules

A pre-defined Command builder exists to associate a target file with a specific command or list of commands for building the file:

```
env.Command(target = 'foo.out', source = 'foo.in',
            command = "foo.process $sources > $target")

commands = [ "bar.process -o .tmpfile $sources",
             "mv .tmpfile $target" ]
env.Command(target = 'bar.out', source = 'bar.in', command = commands)
```

This is useful when it's too cumbersome to create a Builder object just to build a single file in a special way.

#### 4.3.8. The Make Builder

A pre-defined Builder object named Make exists to make simple builds as easy as possible for users, at the expense of sacrificing some build portability. The following minimal example builds the 'hello' program from the 'hello.c' source file:

```
Environment().Make('hello', 'hello.c')
```

Users of the Make Builder object are not required to understand intermediate steps involved in generating a file---for example, the distinction between compiling source code into an object file, and then linking object files into an executable. The details of intermediate steps are handled by the invoked method. Users that need to, however, can specify intermediate steps explicitly:

```
env = Environment()
env.Make(target = 'hello.o', source = 'hello.c')
env.Make(target = 'hello', source = 'hello.o')
```

The Make method understands the file suffixes specified and "does the right thing" to generate the target object and program files, respectively. It does this by examining the specified output suffixes for the Builder objects bound to the environment.
Because file name suffixes in the target and source file names must be specified, the Make method can't be used portably across operating systems. In other words, for the example above, the Make builder will not generate hello.exe on Windows NT.

#### 4.3.9. Builder maps

The env.Make method "does the right thing" to build different file types because it uses a dictionary from the construction environment that maps file suffixes to the appropriate Builder object. This BUILDERMAP can be initialized at instantiation:

```
env = Environment(BUILDERMAP = {
    '.o'    : Object,
    '.a'    : Library,
    '.html' : WebPage,
    ''      : Program,
})
```

With the BUILDERMAP properly initialized, the env.Make method can be used to build additional file types:

```python
env.Make(target = 'index.html', source = 'index.input')
```

Builder objects referenced in the BUILDERMAP do not need to be listed separately in the BUILDERS variable. The construction environment will bind the union of the Builder objects listed in both variables.

## 4.4. Dependencies

### 4.4.1. Automatic dependencies

By default, SCons assumes that a target file has automatic dependencies on the:

- tool used to build the target file
- contents of the input files
- command line used to build the target file

If any of these changes, the target file will be rebuilt.

### 4.4.2. Implicit dependencies

Additionally, SCons can scan the contents of files for implicit dependencies on other files. For example, SCons will scan the contents of a .c file and determine that any object created from it is dependent on any .h files specified via #include. SCons, therefore, "does the right thing" without needing to have these dependencies listed explicitly:

```bash
% cat Construct
env = Environment()
env.Program('hello', 'hello.c')
% cat hello.c
#include "hello_string.h"
main() { printf("%s\n", STRING); }
% cat > hello_string.h
#define STRING "Hello, world!\n"
% scons .
gcc -c hello.c -o hello.o
gcc -o hello hello.o
% ./hello
Hello, world!
% cat > hello_string.h
#define STRING "Hello, world, hello!\n"
% scons .
gcc -c hello.c -o hello.o
gcc -o hello hello.o
% ./hello
Hello, world, hello!
%
```

### 4.4.3. Ignoring dependencies

Undesirable automatic dependencies or implicit dependencies may be ignored:

```python
env.Program(target = 'bar', source = 'bar.c')
env.Ignore('bar', '/usr/bin/gcc', 'version.h')
```

In the above example, the `bar` program will not be rebuilt if the `/usr/bin/gcc` compiler or the `version.h` file changes.

### 4.4.4. Explicit dependencies

Dependencies that are unknown to SCons may be specified explicitly in an SCons configuration file:

```python
env.Depends(target = 'output1', dependency = 'input_1 input_2')
env.Depends(target = 'output2', dependency = ['input_1', 'input_2'])
env.Depends(target = 'output3', dependency = ['white space input'])
env.Depends(target = 'output_a output_b', dependency = 'input_3')
env.Depends(target = ['output_c', 'output_d'], dependency = 'input_4')
env.Depends(target = ['white space output'], dependency = 'input_5')
```

Just like the `target` keyword argument, the `dependency` keyword argument may be specified as a string of white-space separated file names, or as an array.

A dependency on an SCons configuration file itself may be specified explicitly to force a rebuild whenever the configuration file changes:

```python
env.Depends(target = 'archive.tar.gz', dependency = 'SConstruct')
```
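The implicit-dependency scan described in Section 4.4.2 amounts to a content search like the following sketch, which collects `#include "..."` file names from a C source. The `scan_c_includes` helper is hypothetical; real scanners would also honor search paths and angle-bracket system headers:

```python
import re

def scan_c_includes(file_contents):
    """Sketch of an implicit-dependency scan for C sources: return the
    file names mentioned in #include "..." lines (angle-bracket
    includes are deliberately ignored here)."""
    return re.findall(r'^\s*#\s*include\s+"([^"]+)"',
                      file_contents, re.MULTILINE)

source = '#include "hello_string.h"\n#include <stdio.h>\nmain() {}\n'
scan_c_includes(source)   # ['hello_string.h']
```

Feeding the `hello.c` from the session above through such a scan is what lets the tool know that `hello.o` depends on `hello_string.h` without the user ever saying so.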
### 4.5. Scanner Objects

Analogous to the previously-described Builder objects, SCons supplies (and uses) Scanner objects to search the contents of a file for implicit dependency files:

| CScan | scan .c, .C, .cc, .cxx, .cpp files for #include dependencies |

A construction environment can be explicitly initialized with associated Scanner objects:

#### 4.5.1. User-defined Scanner objects

```python
def scanner1(file_contents):
    # search for dependencies
    return dependency_list

FirstScan = Scanner(function = scanner1)
```

The scanner function must return a list of dependencies that it finds based on analyzing the file contents it is passed as an argument.

The scanner function, when invoked, will be passed the calling environment. The scanner function can use construction variables from the passed environment to affect how it performs its dependency scan—the canonical example being to use some sort of search-path construction variable to look for dependency files in other directories:

```python
def scanner2(file_contents, env):
    path = env['SCANNERPATH']
    # search for dependencies using 'path'
    return dependency_list

SecondScan = Scanner(function = scanner2)
```

The user may specify an additional argument when the Scanner object is created. When the scanner is invoked, the additional argument will be passed to the scanner function, which can be used in any way the scanner function sees fit:

```python
def scanner3(file_contents, env, arg):
    # skip 'arg' lines, then search for dependencies
    return dependency_list

Skip_3_Lines_Scan = Scanner(function = scanner3, argument = 3)
Skip_6_Lines_Scan = Scanner(function = scanner3, argument = 6)
```

#### 4.5.2. Copying Scanner Objects

A method exists to return a copy of an existing Scanner object, with any overridden values specified as keyword arguments to the method:

Typically, `Scanner` objects will be supplied by a tool-master or administrator through a shared construction environment.
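The scanner-function protocol shown above (file contents, calling environment, optional extra argument) can be modeled with a toy class. `SketchScanner` and `skip_lines_scanner` are hypothetical stand-ins, not the proposed API itself:

```python
class SketchScanner:
    """Toy model of the Scanner-object protocol: the scanner function
    receives the file contents, the calling environment, and, if one
    was supplied, the extra argument given at creation time."""
    def __init__(self, function, argument=None):
        self.function = function
        self.argument = argument

    def scan(self, file_contents, env):
        if self.argument is not None:
            return self.function(file_contents, env, self.argument)
        return self.function(file_contents, env)

def skip_lines_scanner(file_contents, env, arg):
    # skip 'arg' lines, then treat each remaining line as a dependency
    return file_contents.splitlines()[arg:]

Skip_2_Lines_Scan = SketchScanner(function = skip_lines_scanner, argument = 2)
Skip_2_Lines_Scan.scan('head1\nhead2\ndep1\ndep2\n', env = {})
# ['dep1', 'dep2']
```

The same scanner function can back several Scanner objects that differ only in their bound argument, just as `Skip_3_Lines_Scan` and `Skip_6_Lines_Scan` do above.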
### 4.5.3. Scanner maps

Each construction environment has a `SCANNERMAP`, a dictionary that associates different file suffixes with a Scanner object that can be used to generate a list of dependencies from the contents of that file. This `SCANNERMAP` can be initialized at instantiation:

```python
env = Environment(SCANNERMAP = {
    '.c'  : CScan,
    '.cc' : CScan,
    '.m4' : M4Scan,
})
```

`Scanner` objects referenced in the `SCANNERMAP` do not need to be listed separately in the `SCANNERS` variable. The construction environment will bind the union of the `Scanner` objects listed in both variables.

### 4.6. Targets

The methods in the build engine API described so far merely establish associations that describe file dependencies, how a file should be scanned, etc. Since the real point is to actually build files, SCons also has methods that actually direct the build engine to build, or otherwise manipulate, target files.

#### 4.6.1. Building targets

One or more targets may be built as follows:

```python
env.Build(target = ['foo', 'bar'])
```

Note that specifying a directory (or other collective object) will cause all subsidiary/dependent objects to be built as well:

```python
env.Build(target = '.')
env.Build(target = 'builddir')
```

By default, SCons explicitly removes a target file before invoking the underlying function or command(s) to build it.

#### 4.6.2. Removing targets

A "cleanup" operation of removing generated (target) files is performed with the `Clean` method:

```
env.Clean(target = ['foo', 'bar'])
```

Like the `Build` method, the `Clean` method may be passed a directory or other collective object, in which case the subsidiary target objects under the directory will be removed. (The directories themselves are not removed.)

#### 4.6.3. Suppressing cleanup removal of build-targets

By default, `SCons` explicitly removes all build-targets when invoked to perform "cleanup".
Files that should not be removed during "cleanup" can be specified via the `NoClean` method:

```
env.Library(target = 'libfoo.a', source = ['aaa.c', 'bbb.c', 'ccc.c'])
env.NoClean('libfoo.a')
```

The `NoClean` operation has precedence over the `Clean` operation. A target that is specified as both `Clean` and `NoClean` will not be removed during a clean. In the following example, target 'foo' will not be removed during "cleanup":

```
env.Clean(target = 'foo')
env.NoClean('foo')
```

#### 4.6.4. Suppressing build-target removal

As mentioned, by default, `SCons` explicitly removes a target file before invoking the underlying function or command(s) to build it. Files that should not be removed before rebuilding can be specified via the `Precious` method:

```
env.Library(target = 'libfoo.a', source = ['aaa.c', 'bbb.c', 'ccc.c'])
env.Precious('libfoo.a')
```

#### 4.6.5. Default targets

The user may specify default targets that will be built if there are no targets supplied on the command line:

```
env.Default('install', 'src')
```

Multiple calls to the `Default` method (typically one per `SConscript` file) append their arguments to the list of default targets.

#### 4.6.6. File installation

Files may be installed in a destination directory:

```
env.Install('/usr/bin', 'program1', 'program2')
```

Files may be renamed on installation:

```
env.InstallAs('/usr/bin/xyzzy', 'xyzzy.in')
```

Multiple files may be renamed on installation by specifying equal-length lists of target and source files:

```
env.InstallAs(['/usr/bin/foo', '/usr/bin/bar'],
              ['foo.in', 'bar.in'])
```
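The element-wise pairing that `InstallAs` performs on equal-length lists can be sketched as follows. The `install_as` helper is a hypothetical illustration of the pairing rule, not the proposed implementation:

```python
def install_as(targets, sources):
    """Pair each installation target with its source, mirroring the
    equal-length-lists rule described above; mismatched lengths are an
    error."""
    if len(targets) != len(sources):
        raise ValueError("target and source lists must be the same length")
    return list(zip(targets, sources))

install_as(['/usr/bin/foo', '/usr/bin/bar'], ['foo.in', 'bar.in'])
# [('/usr/bin/foo', 'foo.in'), ('/usr/bin/bar', 'bar.in')]
```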
#### 4.6.7. Target aliases

In order to provide convenient "shortcut" target names that expand to a specified list of targets, aliases may be established:

```
env.Alias(alias = 'install',
          targets = ['/sbin', '/usr/lib', '/usr/share/man'])
```

In this example, specifying a target of install will cause all the files in the associated directories to be built (that is, installed). An Alias may include one or more other Aliases in its list:

```
env.Alias(alias = 'libraries', targets = ['lib'])
env.Alias(alias = 'programs', targets = ['libraries', 'src'])
```

### 4.7. Customizing output

The SCons API supports the ability to customize, redirect, or suppress its printed output through user-defined functions. SCons has several pre-defined points in its build process at which it calls a function to (potentially) print output. User-defined functions can be specified for these call-back points when Build or Clean is invoked:

```
env.Build(target = '.',
          on_analysis = my_dump_dependency,
          pre_update = my_print_command,
          post_update = my_print_status,
          on_error = my_error_handler)
```

The specific call-back points are:

**on_analysis** Called for every object, immediately after the object has been analyzed to see if it's out-of-date. Typically used to print a trace of considered objects for debugging of unexpected dependencies.

**pre_update** Called for every object that has been determined to be out-of-date before its update function or command is executed. Typically used to print the command being called to update a target.

**post_update** Called for every object after its update function or command has been executed. Typically used to report that a top-level specified target is up-to-date or was not remade.

**on_error** Called for every error returned by an update function or command.
Typically used to report errors with some string that will be identifiable to build-analysis tools.

Functions for each of these call-back points all take the same arguments:

```python
my_dump_dependency(target, level, status, update, dependencies)
```

where the arguments are:

**target** The target object being considered.

**level** Specifies how many levels the dependency analysis has recursed in order to consider the target. A value of 0 specifies a top-level target (that is, one passed to the Build or Clean method). Objects which a top-level target is directly dependent upon have a level of 1, their direct dependencies have a level of 2, etc. Typically used to indent output to reflect the recursive levels.

**status** A string specifying the current status of the target ("unknown", "built", "error", "analyzed", etc.). A complete list will be enumerated and described during implementation.

**update** The command line or function name that will be (or has been) executed to update the target.

**dependencies** A list of direct dependencies of the target.

### 4.8. Separate source and build trees

SCons allows target files to be built completely separately from the source files by "linking" a build directory to an underlying source directory:

```python
env.Link('build', 'src')
SConscript('build/SConscript')
```

SCons will copy (or hard link) necessary files (including the SConscript file) into the build directory hierarchy. This allows the source directory to remain uncluttered by derived files.

### 4.9. Variant builds

The Link method may be used in conjunction with multiple construction environments to support variant builds.
The following SConstruct and SConscript files would build separate debug and production versions of the same program side-by-side:

```bash
% cat SConstruct
env = Environment()
env.Link('build/debug', 'src')
env.Link('build/production', 'src')
flags = '-g'
SConscript('build/debug/SConscript', Export(flags = flags))
flags = '-O'
SConscript('build/production/SConscript', Export(flags = flags))
% cat src/SConscript
env = Environment(CCFLAGS = flags)
env.Program('hello', 'hello.c')
```

The following example would build the appropriate program for the current compilation platform, without having to clean any directories of object or executable files for other architectures:

```bash
% cat SConstruct
build_platform = os.path.join('build', sys.platform)
Link(build_platform, 'src')
SConscript(os.path.join(build_platform, 'SConscript'))
% cat src/SConscript
env = Environment()
env.Program('hello', 'hello.c')
```

### 4.10. Code repositories

SCons may use files from one or more shared code repositories in order to build local copies of changed target files. A repository would typically be a central directory tree, maintained by an integrator, with known good libraries and executables.

```python
Repository('/home/source/1.1', '/home/source/1.0')
```

Specified repositories will be searched in-order for any file (configuration file, input file, target file) that does not exist in the local directory tree. When building a local target file, SCons will rewrite path names in the build command to use the necessary repository files. This includes modifying lists of -I or -L flags to specify an appropriate set of include paths for dependency analysis. SCons will modify the Python sys.path variable to reflect the addition of repositories to the search path, so that any imported modules or packages necessary for the build can be found in a repository, as well.

If an up-to-date target file is found in a code repository, the file will not be rebuilt or copied locally.
Files that must exist locally (for example, to run tests) may be specified:

```
Local('program', 'libfoo.a')
```

in which case SCons will copy or link an up-to-date copy of the file from the appropriate repository.

### 4.11. Derived-file caching

SCons can maintain a cache directory of target files which may be shared among multiple builds. This reduces build times by allowing developers working on a project together to share common target files:

```
Cache('/var/tmp/build.cache/i386')
```

When a target file is generated, a copy is added to the cache. When generating a target file, if SCons determines that a file that has been built with the exact same dependencies already exists in the specified cache, SCons will copy the cached file rather than re-building the target.

Command-line options exist to modify the SCons caching behavior for a specific build, including disabling caching, building dependencies in random order, and displaying commands as if cached files were built.

### 4.12. Job management

A simple API exists to inform the Build Engine how many jobs may be run simultaneously:

```
Jobs(limit = 4)
```

## 5. Native Python Interface

The "Native Python" interface is the interface that the actual SCons utility will present to users. Because it exposes the Python Build Engine API, SCons users will have direct access to the complete functionality of the Build Engine. In contrast, a different user interface such as a GUI may choose to only use, and present to the end-user, a subset of the Build Engine functionality.

## 5.1. Configuration files

SCons configuration files are simply Python scripts that invoke methods to specify target files to be built, rules for building the target files, and dependencies. Common build rules are available by default and need not be explicitly specified in the configuration files.
By default, the SCons utility searches for a file named `SConstruct`, `Sconstruct`, `sconstruct`, `SConstruct.py`, `Sconstruct.py` or `sconstruct.py` (in that order) in the current directory, and reads its configuration from the first file found. A `-f` command-line option exists to read a different file name.

## 5.2. Python syntax

Because SCons configuration files are Python scripts, normal Python syntax can be used to generate or manipulate lists of targets or dependencies:

```python
sources = ['aaa.c', 'bbb.c', 'ccc.c']
env.Make('bar', sources)
```

Python flow-control can be used to iterate through invocations of build rules:

```python
objects = ['aaa.o', 'bbb.o', 'ccc.o']
for obj in objects:
    src = obj.replace('.o', '.c')
    env.Make(obj, src)
```

or to handle more complicated conditional invocations:

Because SCons configuration files are Python scripts, syntax errors will be caught by the Python parser. Target-building does not begin until after all configuration files are read, so a syntax error will not cause a build to fail half-way.

## 5.3. Subsidiary configuration files

A configuration file can instruct SCons to read subsidiary configuration files. Subsidiary files are specified explicitly in a configuration file via the `SConscript` method. As usual, multiple file names may be specified with white space separation, or in an array:

```
SConscript('other_file')
SConscript('file1 file2')
SConscript(['file3', 'file4'])
SConscript(['file name with white space'])
```

An explicit `sconscript` keyword may be used:

```
SConscript(sconscript = 'other_file')
```

Including subsidiary configuration files is recursive: a configuration file included via `SConscript` may in turn `SConscript` other configuration files.

### 5.4. Variable scoping in subsidiary files

When a subsidiary configuration file is read, it is given its own namespace; it does not have automatic access to variables from the parent configuration file.
Any variables (not just SCons objects) that are to be shared between configuration files must be explicitly passed in the `SConscript` call using the `Export` method:

```
env = Environment()
debug = Environment(CCFLAGS = '-g')
installdir = '/usr/bin'
SConscript('src/SConscript',
           Export(env=env, debug=debug, installdir=installdir))
```

The exported variables may also be specified explicitly using a keyword argument:

```
env = Environment()
debug = Environment(CCFLAGS = '-g')
installdir = '/usr/bin'
SConscript(sconscript = 'src/SConscript',
           export = Export(env=env, debug=debug, installdir=installdir))
```

Explicit variable-passing provides control over exactly what is available to a subsidiary file, and avoids unintended side effects of changes in one configuration file affecting other far-removed configuration files (a very hard-to-debug class of build problem).

### 5.5. Hierarchical builds

The `SConscript` method is so named because, by convention, subsidiary configuration files in subdirectories are named `SConscript`:

```python
SConscript('src/SConscript')
SConscript('lib/build_me')
```

When a subsidiary configuration file is read from a subdirectory, all of that configuration file's targets and build rules are interpreted relative to that directory (as if SCons had changed its working directory to that subdirectory). This allows for easy support of hierarchical builds of directory trees for large projects.

### 5.6. Sharing construction environments

SCons will allow users to share construction environments, as well as other SCons objects and Python variables, by importing them from a central, shared repository using normal Python syntax:

```python
from LocalEnvironments import optimized, debug
optimized.Make('foo', 'foo.c')
debug.Make('foo-d', 'foo.c')
```

The expectation is that some local tool-master, integrator or administrator will be responsible for assembling environments (creating the `Builder` objects that specify the tools, options, etc.)
and make these available for sharing by all users. The modules containing shared construction environments (`LocalEnvironments` in the above example) can be checked in and controlled with the rest of the source files. This allows a project to track the combinations of tools and command-line options that work on different platforms, at different times, and with different tool versions, by using already-familiar revision control tools.

### 5.7. Help

The SCons utility provides a `Help` function to allow the writer of a `SConstruct` file to provide help text that is specific to the local build tree:

```python
Help("""
Type:  scons .      build and test everything
       scons src    build the software
       scons test   run the tests
       scons web    build the web pages
""")
```

This help text is displayed in response to the `-h` command-line option. Calling the `Help` function more than once is an error.

### 5.8. Debug

SCons supports several command-line options for printing extra information with which to debug build problems. See the `-d`, `-p`, `-pa`, and `-pw` options, below. All of these options make use of call-back functions to print information supplied by the Build Engine.

## 6. Open issues

No build tool is perfect. Here are some SCons issues that do not yet have solutions.

### 6.1. Interaction with SC-config

The SC-config tool will be used in the SCons installation process to generate an appropriate default construction environment so that building most software works "out of the box" on the installed platform. The SC-config tool will find reasonable default compilers (C, C++, Fortran), linkers/loaders, library archive tools, etc. for specification in the default SCons construction environment.

### 6.2. Interaction with test infrastructures

SCons can be configured to use SC-test (or some other test tool) to provide controlled, automated testing of software.
The Link method could link a `test` subdirectory to a build subdirectory:

```python
Link('test', 'build')
SConscript('test/SConscript')
```

Any test cases checked in with the source code will be linked into the `test` subdirectory and executed. If `SConscript` files and test cases are written with this in mind, then invoking:

```
% scons test
```

would run all the automated test cases that depend on any changed software.

### 6.3. Java dependencies

Java dependencies are difficult for an external dependency-based construction tool to accommodate. Determining Java class dependencies is more complicated than the simple pattern-matching of C or C++ `#include` files. From the point of view of an external build tool, the Java compiler behaves "unpredictably" because it may create or update multiple output class files and directories as a result of its internal class dependencies. An obvious SCons implementation would be to have the `Scanner` object parse output from `javac -depend -verbose` to calculate dependencies, but this has the distinct disadvantage of requiring two separate compiler invocations, thereby slowing down builds.

### 6.4. Limitations of digital signature calculation

In practice, calculating digital signatures of a file's contents is a more robust mechanism than time stamps for determining what needs building. However:

1. Developers used to the time stamp model of Make can initially find digital signatures counter-intuitive. The assumption that:

   ```
   % touch file.c
   ```

   will cause a rebuild of the file is a strong one.

2. Abstracting dependency calculation into a single digital signature loses a little information: It is no longer possible to tell (without laborious additional calculation) which input file dependency caused a rebuild of a given target file. A feature that could report, "I'm rebuilding file X because it's out-of-date with respect to file Y," would be good, but a digital-signature implementation of such a feature is non-obvious.
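The point about `touch` can be demonstrated directly: updating a file's time stamp leaves its content signature unchanged, so a signature-based tool sees nothing to rebuild. This sketch uses MD5 as the signature, which is an assumption here rather than something the design prescribes:

```python
import hashlib
import os
import tempfile

def content_signature(path):
    """Digital signature of a file's contents; unlike a time stamp,
    it is unaffected by 'touch'."""
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

with tempfile.NamedTemporaryFile('w', suffix='.c', delete=False) as f:
    f.write('#define STRING "Hello, world!"\n')
    path = f.name

before = content_signature(path)
os.utime(path, None)            # the moral equivalent of '% touch file.c'
after = content_signature(path)
before == after                 # True: the signature model sees no change
os.remove(path)
```

Only an actual edit to the file's contents would change the signature and trigger a rebuild, which is exactly the behavior developers trained on Make's time-stamp model find counter-intuitive.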
### 6.5. Remote execution

The ability to use multiple build systems through remote execution of tools would be good. This should be implementable through the Job class. Construction environments would need modification to specify build systems.

### 6.6. Conditional builds

The ability to check run-time conditions as suggested on the sc-discuss mailing list ("build X only if: the machine is idle / the file system has Y megabytes free space") would also be good, but is not part of the current design.

## 7. Background

Most of the ideas in SCons originate with Cons, a Perl-based software construction utility that has been in use by a small but growing community since its development by Bob Sidebotham at FORE Systems in 1996. The Cons copyright was transferred in 2000 from Marconi (who purchased FORE Systems) to the Free Software Foundation. I've been a principal implementer and maintainer of Cons for several years.

Cons was originally designed to handle complicated software build problems (multiple directories, variant builds) while keeping the input files simple and maintainable. The general philosophy is that the build tool should "do the right thing" with minimal input from an unsophisticated user, while still providing a rich set of underlying functionality for more complicated software construction tasks needed by experts.

In 2000, the Software Carpentry project sought entries in a contest for a new, Python-based build tool that would provide an improvement over Make for physical scientists and other non-programmers struggling to use their computers more effectively. Prior to that, the idea of combining the superior build architecture of Cons with the easier syntax of Python had come up several times on the cons-discuss mailing list. The Software Carpentry contest provided the right motivation to spend some actual time working on a design document. After two rounds of competition, the submitted design, named ScCons, won the competition.
Software Carpentry, however, did not immediately fund implementation of the build tool, instead contracting for additional, more detailed drafts of the design document. This proved to be not as strong a motivation as actual coding, and after several months of inactivity, I essentially resigned from the Software Carpentry effort in early 2001 to start working on the tool independently. After half a year of prototyping some of the important infrastructure, I accumulated enough code to take the project public at SourceForge, renaming it SCons to distinguish it slightly from the version of the design that won the Software Carpentry contest while still honoring its roots there and in the original Cons utility. And also because it would be a teensy bit easier to type.

SCons offers a robust and feature-rich design for an SC-build tool. With a Build Engine based on the proven design of the Cons utility, it offers a simplified user interface for unsophisticated users through the "do-the-right-thing" `env.Make` method, increased flexibility for sophisticated users through the addition of `Builder` and `Scanner` objects, a mechanism to allow tool-masters (and users) to share working construction environments, and embeddability to provide reliable dependency management in a variety of environments and interfaces.

I'm grateful to the following people for their influence, knowing or not, on the design of SCons:

**Bob Sidebotham** First, as the original author of Cons, Bob did the real heavy lifting of creating the underlying model for dependency management and software construction, as well as implementing it in Perl. During the first years of Cons' existence, Bob did a skillful job of integrating input and code from the first users, and consequently is a source of practical wisdom and insight into the problems of real-world software construction. His continuing advice has been invaluable.
**The SCons Development Team** A big round of thanks go to those brave souls who have gotten in on the ground floor: David Abrahams, Charles Crain, Steven Leblanc, Anthony Roach, and Steven Shaw. Their contributions, drawing on their knowledge of software build issues in general and Python in particular, have made SCons what it is today.

**The Cons Community** The real-world build problems that the users of Cons share on the cons-discuss mailing list have informed much of the thinking that has gone into the SCons design. In particular, Rajesh Vaidheeswarran, the current maintainer of Cons, has been a very steady influence. I've also picked up valuable insight from mailing-list participants Johan Holmberg, Damien Neil, Gary Oberbrunner, Wayne Scott, and Greg Spencer.

**Peter Miller** Peter has indirectly influenced two aspects of the SCons design. Miller's influential paper *Recursive Make Considered Harmful* was what led me, indirectly, to my involvement with Cons in the first place. Experimenting with the single-Makefile approach he describes in *RMCH* led me to conclude that while it worked as advertised, it was not an extensible scheme. This solidified my frustration with Make and led me to try Cons, which at its core shares the single-process, universal-DAG model of the *RMCH* single-Makefile technique. The testing framework that Miller created for his Aegis change management system changed the way I approach software development by providing a framework for rigorous, repeatable testing during development. It was my success at using Aegis for personal projects that led me to begin my involvement with Cons by creating the cons-test regression suite.

**Stuart Stanley** An experienced Python programmer, Stuart provided valuable advice and insight into some of the more useful Python idioms at my disposal during the original ScCons design for the Software Carpentry contest.
**Gary Holt** I don’t know which came first, the first-round Software Carpentry contest entry or the tool itself, but Gary's design for Make++ showed me that it is possible to marry the strengths of Cons-like dependency management with backwards compatibility for Makefiles. Striving to support both Makefile compatibility and a native Python interface cleaned up the SCons design immeasurably by factoring out the common elements into the Build Engine.
Distributed Data Management
Part 1 - Schema Fragmentation

Today's Questions
1. Distribution of Relational Databases
2. Horizontal Fragmentation of Relational Tables
3. Vertical Fragmentation of Relational Tables

Problem: Scalability for Large Databases
- Locate data on the available storage media in an efficient manner
  - blocks, files, network nodes, ...
- Provide efficient access to data via specific addressing methods (i.e., indexing)
  - combinations, probes, paths, ...
- Data access structures (tree, hash table, etc.)
- Example: B-Tree
  - tree nodes correspond to the blocks of the storage system
  - the tree is balanced
  - all operations (search, update) are logarithmic

©2006/7, Karl Aberer, EPFL-IC, Laboratoire de systèmes d'informations répartis

1. Relational Databases

The example database contains the relations DEPARTMENTS, EMPLOYEES, and SALARIES (the full DEPARTMENTS table is shown in section 2):

**EMPLOYEES**

<table>
  <thead>
    <tr><th>DNo</th><th>EName</th></tr>
  </thead>
  <tbody>
    <tr><td>P4</td><td>Smith</td></tr>
    <tr><td>P2</td><td>Lee</td></tr>
    <tr><td>P3</td><td>Miller</td></tr>
    <tr><td>P5</td><td>Davis</td></tr>
    <tr><td>P6</td><td>Jones</td></tr>
  </tbody>
</table>

**SALARIES**

<table>
  <thead>
    <tr><th>Skill</th><th>Salary</th></tr>
  </thead>
  <tbody>
    <tr><td>Director</td><td>200000</td></tr>
    <tr><td>Manager</td><td>100000</td></tr>
    <tr><td>Assistant</td><td>50000</td></tr>
  </tbody>
</table>

In this lecture we introduce some basic concepts of distributing relational databases. These concepts exhibit some important principles related to the problem of distributing data management in general; in particular, they apply beyond the relational data model. In the following we assume that the reader is familiar with the basic notions of the relational data model, including the notions of relation, attribute, query, primary and foreign keys, relational algebra operators, and relational calculus (in particular the notions of predicates and basic logical operators).
Distributed Relational Databases

- A1: find all employees with low salaries
- A2: update salaries
- A3: find all employees from Geneva with high salaries
- A4: find all budgets

Distribution:
- Geneva: Geneva DEPARTMENTS; EMPLOYEES with high salaries
- Bangalore: Bangalore DEPARTMENTS
- Munich: Munich and Paris DEPARTMENTS; EMPLOYEES with low salaries; all SALARIES

Given a relational database: improve the performance of queries by properly distributing the database over the physical locations (design of a distribution schema).

Having a (relational) database that is shared by distributed applications in a network immediately opens up the possibility of optimizing access to the database by taking into account which applications at which sites require which data with which frequency. An obviously useful idea is to move the data to the place where it is needed most, in order to reduce the cost of accessing data over the network: the communication cost of accessing data over the network is high compared to that of local access.

The example illustrates the situation where the relational database from the previous slide is distributed to the sites where it is accessed (applications are indicated by A1-A4). The distribution schema, i.e. the description of which parts of the database are distributed to which site, and the applications (queries) are given informally. A possible reasoning for distributing the data as shown could be as follows: since A3 needs all information about Geneva departments and high-salary employees, we put the related data at that site. In Bangalore only the DEPARTMENTS table is accessed, but parts of it are allocated to other sites as they are used there, so only the locally relevant data is kept. Paris has no applications, so no data is put there. Munich holds all other data, in particular, for example, the SALARIES table, which is also used in Geneva but more frequently in Munich, as it is updated there.
In the following we will introduce methods to make the specification of such a distribution schema precise, and provide algorithms that support the process of developing such a distribution schema.

What Do You Think?
- Problems to solve when distributing a relational database

Assumptions on Distribution Design
- Distributed relational database system is available
  - allows relational data to be distributed over physically distributed sites
  - takes care of transparent processing of database accesses
- Top-down design
  - no pre-existing constraints on how to organize the database
- Access patterns are known and static
  - no need to adapt to changes in access patterns (otherwise redesign)
- Replication is not considered
  - reasonable assumption if updates are frequent

The problem of distributing a relational database is a very general one; we will make a number of assumptions in order to be able to focus on specific questions. We will not concern ourselves with the issue of developing a distributed database system architecture. This requires solving a number of important problems, such as communication support, management of the data distribution schema, and processing of distributed queries. We assume that once we can specify how the data is to be distributed, all other issues are taken care of. Thus we focus on the problem of distribution schema design.

We also assume that there exist no a priori constraints, be they of a technical or organizational nature, on how we distribute the database. We are free to decide which data goes where. The access patterns are assumed to be static, or to change so slowly that we can afford to perform a redesign whenever needed. Thus we can design our distribution schema off-line. Finally, we do not take advantage of replication, which is a reasonable assumption in update-intensive environments.
Methods involving replication can pursue approaches similar to those we will describe, but replication introduces an additional design dimension, which for the sake of clarity we ignore here.

A first important question is: to what degree should fragmentation occur, i.e. which parts of a relation can be distributed independently? In the following we will call these parts "fragments". Restricting the distribution to complete relations appears too limited in general, in particular when considering tables containing information relevant for different sites. At the other extreme, deciding on the distribution of each single attribute value or tuple seems too complex a task when distributing very large tables.

A flexible way to create fragments that can be distributed to the sites is to use queries, which can select subsets of a relational table, as shown in the example. These fragments can be essentially of two different kinds:

1. **horizontal fragments** of a table are defined through selection (i.e. what is specified in the WHERE clause of a SQL query). These are subsets of the tuples of a relation.
2. **vertical fragments** of a table are defined through projection (i.e. what is specified in the SELECT clause of a SQL query). These are subtables consisting of a subset of the attribute columns.

Correct Fragmentation
- **Completeness**
  - a decomposition of a relation $R$ into fragments $R_1, \ldots, R_n$ is complete if every attribute value found in the relation is also found in one of the fragments.
- **Reconstruction**
  - if a relation $R$ is decomposed into fragments $R_1, \ldots, R_n$ then it should also be possible to reconstruct the relation $R$ from its fragments (e.g. by applying appropriate relational operators such as join, union, etc.).
- **Disjointness**
  - if a relation is decomposed into fragments $R_1, \ldots, R_n$ then every attribute value should be contained in only one of the fragments.
- **Attention**: Reconstruction and (full) disjointness cannot be achieved at the same time (more later).

When decomposing a relational table into fragments, a number of minimal requirements have to be satisfied in order to avoid the loss of information. First, we have to make sure that every data value of the original table is found in some fragment, otherwise we lose this data value. This property is called **completeness**. Second, we must be able to **reconstruct** the original table from the fragments. This problem is very similar to the one encountered when normalizing relational database schemas by decomposition of tables: there, too, an improper decomposition can make it impossible to reconstruct the original table. Finally, the fragments should be **disjoint** (in order to avoid update dependencies) as far as possible. We will see later that the last two conditions, reconstruction and disjointness, cannot in general be completely satisfied at the same time.

Summary
- Why should a relational database be fragmented?
- At which phase of the database lifecycle is fragmentation performed?
- What are the alternative approaches to fragment relations?
- Under which conditions is a fragmentation considered correct?
- In which environments would replication be an appropriate alternative to fragmentation?

2.
Primary Horizontal Fragmentation
- Horizontal fragmentation of a single relation
- Example
  - Application A1 running at Geneva: "update department budgets > 200000 three times a month, others monthly"

<table>
  <thead>
    <tr><th>DNo</th><th>DName</th><th>Budget</th><th>Location</th></tr>
  </thead>
  <tbody>
    <tr><td>P4</td><td>Sales</td><td>500000</td><td>Geneva</td></tr>
    <tr><td>P2</td><td>Marketing</td><td>300000</td><td>Paris</td></tr>
    <tr><td>P3</td><td>Development</td><td>250000</td><td>Munich</td></tr>
    <tr><td>P1</td><td>Development</td><td>150000</td><td>Bangalore</td></tr>
    <tr><td>P5</td><td>Marketing</td><td>120000</td><td>Geneva</td></tr>
    <tr><td>P6</td><td>Development</td><td>90000</td><td>Paris</td></tr>
    <tr><td>P7</td><td>Research</td><td>80000</td><td>Paris</td></tr>
  </tbody>
</table>

We begin by fragmenting a single relational table horizontally. Since the fragmentation of a table depends on the usage of the table, we first have to be able to describe how the table is accessed. In other words, we need a model for the table access. One possible model would be to give, for every single tuple, the frequency of access by a specific application, as illustrated in the histogram on the right. However, since we do not consider fragmenting a table into single tuples, this description is at too fine a granularity. Also, one can see that for many tuples the access frequency will be the same (as a consequence of the structure of the application executing an SQL query). Thus we model the access only for those parts of relations that potentially qualify for fragmentation.

The model we are interested in therefore has to describe two things: first, what the possible (horizontal) fragments are about which we want to say something, and second, what we want to say about the access.
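The two fragment kinds introduced earlier can be illustrated directly on the DEPARTMENTS table above. Encoding a relation as a Python list of dictionaries is purely an illustrative assumption, not part of the lecture's formalism:

```python
# DEPARTMENTS relation from the table above (dict-per-tuple encoding).
DEPARTMENTS = [
    {"DNo": "P4", "DName": "Sales",       "Budget": 500000, "Location": "Geneva"},
    {"DNo": "P2", "DName": "Marketing",   "Budget": 300000, "Location": "Paris"},
    {"DNo": "P3", "DName": "Development", "Budget": 250000, "Location": "Munich"},
    {"DNo": "P1", "DName": "Development", "Budget": 150000, "Location": "Bangalore"},
    {"DNo": "P5", "DName": "Marketing",   "Budget": 120000, "Location": "Geneva"},
    {"DNo": "P6", "DName": "Development", "Budget":  90000, "Location": "Paris"},
    {"DNo": "P7", "DName": "Research",    "Budget":  80000, "Location": "Paris"},
]

def horizontal_fragment(relation, predicate):
    """Selection: the tuples satisfying the predicate (SQL WHERE clause)."""
    return [t for t in relation if predicate(t)]

def vertical_fragment(relation, attributes):
    """Projection: the sub-table keeping only the given columns (SQL SELECT clause)."""
    return [{a: t[a] for a in attributes} for t in relation]

# Horizontal fragments for application A1's access pattern.
high = horizontal_fragment(DEPARTMENTS, lambda t: t["Budget"] > 200000)
low  = horizontal_fragment(DEPARTMENTS, lambda t: t["Budget"] <= 200000)

# Reconstruction for horizontal fragments is union; the size check also
# witnesses completeness and disjointness for this pair of predicates.
assert len(high) + len(low) == len(DEPARTMENTS)

# A vertical fragment keeps the key DNo, so the relation can later be
# reconstructed by a join on DNo.
budgets = vertical_fragment(DEPARTMENTS, ["DNo", "Budget"])
```

Note the design point hidden in the last line: vertical fragments must each retain the primary key, otherwise the reconstruction (join) property is lost.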
The answer to the first question is a consequence of our idea of using SQL to describe fragments: a horizontal fragment will be described by some form of predicate (or logical expression) consisting of conditions on the attributes of the table. As for the second question, we restrict ourselves to specifying that, for a given fragment, all tuples are accessed with the same frequency. This corresponds exactly to what we see in the example: we have two fragments F1 and F2, each described by a predicate, and all tuples in each of the fragments are accessed with uniform frequency.

Modeling Access Characteristics
- Describe (potential) horizontal fragments
  - select subsets of the relation using predicates
- Describe the access to horizontal fragments
  - all tuples in a fragment are accessed with the same frequency
- Obtain the necessary information
  - provided by the developer
  - analysis of previous accesses

The necessary information on access frequencies can either be provided by a developer, who knows the application and can derive from it the necessary (approximate) specification, or be obtained from an analysis of database access logs. The second approach is technically more challenging and will typically require statistical analysis or data mining tools (we will introduce basic data mining techniques at the end of this lecture).

Determining Access Frequencies
- What do we need to know about horizontal fragments?
- access frequency \( af(A_i, F_j) \): given a tuple in fragment \( F_j \), how often is it accessed by application \( A_i \) per time unit - Examples: - update of some tuple in \( F_j \) by \( A_i \) occurs \( t \) times per time unit: \[ af(A_i, F_j) = \frac{t}{\text{size}(F_j)} \] - query by \( A_i \) accesses all tuples in \( F_j \) and query occurs \( t \) times per time unit: \[ af(A_i, F_j) = t \] - query by \( A_i \) accesses 10% of all tuples in \( F_j \) and query occurs \( t \) times per time unit: \[ af(A_i, F_j) = \frac{t}{10} \] The access frequencies are measured in terms of average number of accesses to a tuple of the fragment within a time unit. Thus each access to each tuple is counted as a single access for an application. This information can be derived from the application in different ways, as is illustrated in the examples. Example Multiple Applications - A second application A2 running in Paris: - "request the Bangalore dept budget on average three times a month" - "request some Geneva dept budget twice a month" - "request some Paris dept budget 6 times a month" - "request Munich dept budget every second month" <table> <thead> <tr> <th>DNo</th> <th>DName</th> <th>Budget</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>P4</td> <td>Sales</td> <td>500000</td> <td>Geneva</td> </tr> <tr> <td>P2</td> <td>Marketing</td> <td>300000</td> <td>Paris</td> </tr> <tr> <td>P3</td> <td>Development</td> <td>250000</td> <td>Munich</td> </tr> <tr> <td>P1</td> <td>Development</td> <td>150000</td> <td>Bangalore</td> </tr> <tr> <td>P5</td> <td>Marketing</td> <td>120000</td> <td>Geneva</td> </tr> <tr> <td>P6</td> <td>Development</td> <td>90000</td> <td>Paris</td> </tr> <tr> <td>P7</td> <td>Research</td> <td>80000</td> <td>Paris</td> </tr> </tbody> </table> If we can describe the access to a relation by one application we can do the same also for other applications as shown in the example. 
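The three worked examples of \( af(A_i, F_j) \) reduce to one multiplication: the rate of the operation times the fraction of the fragment each occurrence touches. The helper name and the concrete numbers below are illustrative assumptions:

```python
def access_frequency(t_per_unit, fraction):
    """af(Ai, Fj): average accesses per tuple of Fj per time unit, for an
    operation occurring `t_per_unit` times and touching `fraction` of Fj."""
    return t_per_unit * fraction

size_Fj = 50   # assumed fragment size, for illustration only
t = 6          # the operation occurs 6 times per time unit

af_update_one  = access_frequency(t, 1 / size_Fj)  # one tuple:  af = t / size(Fj)
af_query_all   = access_frequency(t, 1.0)          # all tuples: af = t
af_query_tenth = access_frequency(t, 0.10)         # 10% of Fj:  af = t / 10
```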
For a single relation we know the potential horizontal fragments: those identified by the access model as having the same access frequency. With multiple applications, different possible combinations of access frequencies exist for different tuples, since different applications fragment the relation differently. Each different combination potentially might lead to a different decision on where to locate the tuple.

Example Access Frequencies to Fragments
- Each fragment can be described as a conjunction of predicates, e.g. \( \text{F1: Location = "Paris"} \land \text{Budget > 200000} \)
- There exist the following different combinations of access frequencies \(<af1, af2>\) for applications A1 and A2

<table>
  <thead>
    <tr>
      <th>&lt;af1, af2&gt;</th>
      <th>Location = &quot;Paris&quot;</th>
      <th>Location = &quot;Geneva&quot;</th>
      <th>Location = &quot;Munich&quot;</th>
      <th>Location = &quot;Bangalore&quot;</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Budget &gt; 200000</td>
      <td>&lt;3, 2&gt;</td>
      <td>&lt;3, 1&gt;</td>
      <td>&lt;3, 0.5&gt;</td>
      <td>n/a</td>
    </tr>
    <tr>
      <td>Budget &lt;= 200000</td>
      <td>&lt;1, 2&gt;</td>
      <td>&lt;1, 1&gt;</td>
      <td>n/a</td>
      <td>&lt;1, 3&gt;</td>
    </tr>
  </tbody>
</table>

Important observation: if all tuples in a set of tuples are accessed with the same frequency by all applications, then whichever method we use to optimize access to tuples, these tuples will be assigned to the same site. Therefore it makes no sense to make a further distinction among them, i.e. fragments in which all tuples are accessed with equal frequency are the smallest we have to consider.

What we can do is enumerate all possible combinations of access frequencies, as shown in this example: we take each possible combination of horizontal fragments from the two applications.
If we form the conjunction of the predicates describing the fragments of each application (using the logical AND connector), we obtain fragments of the relational table for which the access frequency is the same for all tuples for both applications. These access frequencies are found in the entries of the table capturing all possible combinations of predicates.

An important observation is that we need not fragment the table further than is done by combining all possible fragments of all applications: whichever method we use to distribute the tuples to different sites will not be able to distinguish such tuples (through the access frequency), and thus they will be moved to the same site.

Describing Horizontal Fragments
- **(Simple) predicates** $P$: testing the value of a single attribute
  - Example: $P = \{\text{Location} = \text{"Paris"}, \text{Budget} > 200000\}$
- **Minterm predicates** $M(P)$: combining all simple predicates taken from $P$ using "and" and "not" ($\land$ and $\lnot$)
  - Example: If $P = \{\text{Location} = \text{"Paris"}, \text{Budget} > 200000, \text{DName} = \text{"Sales"}\}$ then $\text{Location} = \text{"Paris"} \land \lnot (\text{Budget} > 200000) \land \text{DName} = \text{"Sales"}$ is 1 out of 8 possible elements of $M(P)$

As we have seen, we need conjunctions of predicates in order to describe fragments in the general case. We now make the description of horizontal fragments more precise. Given a relation, we can assume that there exists a set of atomic predicates that can be used to describe horizontal fragments; these are called simple predicates. From those we can compose complex predicates by using conjunctions and negations. More precisely, we consider all possible compositions of all simple predicates using conjunction and negation. This set we call **minterm predicates**, and it constitutes the set of all predicates that we consider for describing horizontal fragments.
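The construction of $M(P)$ can be sketched directly: for $n$ simple predicates there are $2^n$ minterm predicates, each taking every simple predicate either positively or negated, and together they partition the relation. The relation and predicate encodings below are assumptions for illustration, using the three simple predicates of the example above:

```python
from itertools import product

# DEPARTMENTS relation from the running example.
DEPARTMENTS = [
    {"DNo": "P4", "DName": "Sales",       "Budget": 500000, "Location": "Geneva"},
    {"DNo": "P2", "DName": "Marketing",   "Budget": 300000, "Location": "Paris"},
    {"DNo": "P3", "DName": "Development", "Budget": 250000, "Location": "Munich"},
    {"DNo": "P1", "DName": "Development", "Budget": 150000, "Location": "Bangalore"},
    {"DNo": "P5", "DName": "Marketing",   "Budget": 120000, "Location": "Geneva"},
    {"DNo": "P6", "DName": "Development", "Budget":  90000, "Location": "Paris"},
    {"DNo": "P7", "DName": "Research",    "Budget":  80000, "Location": "Paris"},
]

# Simple predicates P, as (name, test) pairs.
P = [
    ("Location='Paris'", lambda t: t["Location"] == "Paris"),
    ("Budget>200000",    lambda t: t["Budget"] > 200000),
    ("DName='Sales'",    lambda t: t["DName"] == "Sales"),
]

def minterm_fragments(relation, simple):
    """Partition `relation` by the 2^n minterm predicates over `simple`."""
    frags = {}
    for signs in product([True, False], repeat=len(simple)):
        name = " AND ".join(("" if s else "NOT ") + n
                            for s, (n, _) in zip(signs, simple))
        frags[name] = [t for t in relation
                       if all(f(t) == s for s, (_, f) in zip(signs, simple))]
    return frags

frags = minterm_fragments(DEPARTMENTS, P)
assert len(frags) == 8                                           # 2^3 minterms
# Minterm fragments are complete and disjoint: every tuple falls
# into exactly one fragment.
assert sum(len(f) for f in frags.values()) == len(DEPARTMENTS)
```

Contradictory or unpopulated minterms simply come out empty; for instance the text's example minterm, Paris ∧ ¬(Budget > 200000) ∧ Sales, selects no tuple of this table.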
One might wonder why disjunctions (OR) are not considered. In fact they would be of no use, as they would only allow defining fragments that are unions of fragments obtained from minterm predicates. In other words, with minterm predicates we obtain the finest partitioning of the relational table that can be obtained with a given set of simple predicates, and this is sufficient to describe the access frequencies of all tuples.

Horizontal Fragments
- A horizontal fragment $F_i$ of a relation $R$ consists of all tuples that satisfy a minterm predicate $m_i$
- Example:
  - $m_1 : \text{Location} = \text{"Paris"} \land \neg (\text{Budget} > 200000) \land \text{DName} = \text{"Research"}$
  - $m_2 : \neg (\text{Location} = \text{"Geneva"}) \land \text{Budget} > 200000$

<table>
  <thead>
    <tr><th>DNo</th><th>DName</th><th>Budget</th><th>Location</th></tr>
  </thead>
  <tbody>
    <tr><td>P4</td><td>Sales</td><td>500000</td><td>Geneva</td></tr>
    <tr><td>P2</td><td>Marketing</td><td>300000</td><td>Paris</td></tr>
    <tr><td>P3</td><td>Development</td><td>250000</td><td>Munich</td></tr>
    <tr><td>P1</td><td>Development</td><td>150000</td><td>Bangalore</td></tr>
    <tr><td>P5</td><td>Marketing</td><td>120000</td><td>Geneva</td></tr>
    <tr><td>P6</td><td>Development</td><td>90000</td><td>Paris</td></tr>
    <tr><td>P7</td><td>Research</td><td>80000</td><td>Paris</td></tr>
  </tbody>
</table>

All possible horizontal fragments are those subsets of a relation that can be selected by using a minterm predicate over a given set of simple predicates.

Complete and Minimal Fragmentation
- How many simple predicates do we need?
  - e.g. is \( P = \{ \text{Budget} > 200000, \text{Budget} \leq 200000 \} \) a good set?
- At least as many such that the access frequency within each horizontal fragment is uniform over all tuples for all applications (otherwise we could not model the access) \( \rightarrow \) complete set of simple predicates
- but no more \( \rightarrow \) minimal set of simple predicates

The situation is now as follows. Different applications will use (propose) different simple predicates in order to describe their access to a relation. They will need as many simple predicates as necessary to obtain fragments for which the access frequency of the specific application is uniform. As we have seen, in order to describe the combined access frequencies of multiple applications to the relation, we have to combine those simple predicates into complex predicates. Thus possible fragments are constructed from minterm predicates over the set of simple predicates that is the union of all simple predicates used by the different applications. This set allows us to construct any possible intersection of fragments originating from different applications through minterm predicates.

However, a set of simple predicates obtained in this manner can contain simple predicates that are not useful, so that we would consider too many minterm predicates, which lead to no additional fragments.

Example
- $P_1 = \{ \text{Location = "Paris", Budget > 200000 } \}$ not complete
- $P_2 = \{ \text{Location = "Paris", Location = "Munich", Location = "Geneva", Budget > 200000 } \}$ complete, minimal?
- $P_3 = \{ \text{Location = "Paris", Location = "Munich", Location = "Geneva", Budget > 200000, Budget <= 200000 } \}$ complete but not minimal

<table>
  <thead>
    <tr><th>DNo</th><th>DName</th><th>Budget</th><th>Location</th></tr>
  </thead>
  <tbody>
    <tr><td>P4</td><td>Sales</td><td>500000</td><td>Geneva</td></tr>
    <tr><td>P2</td><td>Marketing</td><td>300000</td><td>Paris</td></tr>
    <tr><td>P3</td><td>Development</td><td>250000</td><td>Munich</td></tr>
    <tr><td>P1</td><td>Development</td><td>150000</td><td>Bangalore</td></tr>
    <tr><td>P5</td><td>Marketing</td><td>120000</td><td>Geneva</td></tr>
    <tr><td>P6</td><td>Development</td><td>90000</td><td>Paris</td></tr>
    <tr><td>P7</td><td>Research</td><td>80000</td><td>Paris</td></tr>
  </tbody>
</table>

We illustrate the difference between a complete and a minimal set of predicates in this example. $P_1$ is not complete, since it does not allow us to distinguish, e.g., Geneva from Munich, which have different access frequencies for $A_1$ and $A_2$. $P_3$ is obviously not minimal. The question is whether $P_2$ is complete and minimal.
Example Minimal Fragmentation

F1 : Location="Paris" \(\land\) \(\neg\) Location="Geneva" \(\land\) Budget > 200000
F2 : Location="Paris" \(\land\) \(\neg\) Location="Geneva" \(\land\) \(\neg\) Budget > 200000
F3 : \(\neg\) Location="Paris" \(\land\) Location="Geneva" \(\land\) Budget > 200000
F4 : \(\neg\) Location="Paris" \(\land\) Location="Geneva" \(\land\) \(\neg\) Budget > 200000
F5 : \(\neg\) Location="Paris" \(\land\) \(\neg\) Location="Geneva" \(\land\) Budget > 200000
F6 : \(\neg\) Location="Paris" \(\land\) \(\neg\) Location="Geneva" \(\land\) \(\neg\) Budget > 200000

<table> <thead> <tr> <th>(&lt;AF1, AF2&gt;)</th> <th>Location = &quot;Paris&quot;</th> <th>Location = &quot;Geneva&quot;</th> <th>Location = &quot;Munich&quot;</th> <th>Location = &quot;Bangalore&quot;</th> </tr> </thead> <tbody> <tr> <td>Budget &gt; 200000</td> <td>(&lt;3, 2&gt; (F1))</td> <td>(&lt;3, 1&gt; (F3))</td> <td>(&lt;3, 0.5&gt; (F5))</td> <td>n/a</td> </tr> <tr> <td>Budget &lt;= 200000</td> <td>(&lt;1, 2&gt; (F2))</td> <td>(&lt;1, 1&gt; (F4))</td> <td>n/a</td> <td>(&lt;1, 3&gt; (F6))</td> </tr> </tbody> </table>

- \(P2' = \{\text{Location = "Paris", Location = "Geneva", Budget > 200000}\}\) is complete and minimal

Here we see that the predicate Location = "Munich" is actually not needed. The observation is that, for the given database, this predicate is only useful to distinguish \(F5\) from \(F6\), but this can already be done with another predicate, namely Budget > 200000. Therefore \(P2\) is in fact complete but not minimal. \(P2'\) is complete and minimal, since we cannot eliminate any further simple predicate from \(P2'\) without losing completeness. It is very important to understand that this fact depends on the actual state of the database, i.e., the content of the relation.
As soon as, for example, a tuple enters the database that contains a department in Munich with a budget below 200000, the predicate Location = "Munich" will be needed to describe the new fragment (provided it is accessed differently than the current fragment \(F6\)).

Determining a Minimal Fragmentation

- Given the fragments generated by a set of minterm predicates $M(P)$: we say a predicate $p$ is relevant to $M(P)$ if there exists at least one element $m \in M(P)$ such that, when creating the fragments corresponding to $m_1 = m \land p$ and $m_2 = m \land \neg p$, there exists at least one application that accesses the two fragments $F_1$ and $F_2$ generated by $m_1$ and $m_2$ differently.

Algorithm MinFrag
Start from a complete set of predicates $P$
Find an initial $p \in P$ such that $p$ is relevant to $M(\emptyset)$; set $P' = \{p\}$, $P = P \setminus \{p\}$
Repeat until $P$ is empty:
- find a $p \in P$ such that $p$ is relevant to $M(P')$
- set $P' = P' \cup \{p\}$, $P = P \setminus \{p\}$
- if there exists a $p \in P'$ that is not relevant to $M(P' \setminus \{p\})$, then set $P' = P' \setminus \{p\}$

The algorithm MinFrag determines a minimal set of simple predicates from a given set and for a given database. It proceeds by iteratively adding predicates from the given complete set of predicates. While doing so it observes two things: first, it adds only predicates that are relevant with respect to the currently selected set of predicates. This is expressed by the concept of RELEVANCE. Second, in each step it checks whether one of the already included predicates has become non-relevant through the addition of the new predicate. It might indeed be the case that one predicate $p_1$ is "more relevant" than another predicate $p_2$ included earlier, i.e., we can eliminate $p_2$ without losing interesting fragments, but not vice versa.
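The MinFrag loop can be sketched in Python over the DEPARTMENT example. Everything beyond the slide material is an assumption made for illustration: per-tuple access frequencies &lt;AF1, AF2&gt; are reconstructed from the example's frequency table, predicates are plain Python functions, and "accessed differently" is approximated by comparing the sets of access vectors of the two candidate sub-fragments.

```python
# Sketch of MinFrag over the DEPARTMENT example.
# Rows: (DNo, DName, Budget, Location), taken from the slide's table.
DEPARTMENT = [
    ("P4", "Sales",       500000, "Geneva"),
    ("P2", "Marketing",   300000, "Paris"),
    ("P3", "Development", 250000, "Munich"),
    ("P1", "Development", 150000, "Bangalore"),
    ("P5", "Marketing",   120000, "Geneva"),
    ("P6", "Development", 900000, "Paris"),
    ("P7", "Research",    800000, "Paris"),
]

# Per-tuple access frequencies <AF1, AF2>, reconstructed from the
# example's frequency table (an illustration assumption).
AF2_BY_LOCATION = {"Paris": 2, "Geneva": 1, "Munich": 0.5, "Bangalore": 3}

def access_vector(t):
    _, _, budget, location = t
    af1 = 3 if budget > 200000 else 1
    return (af1, AF2_BY_LOCATION[location])

# Simple predicates as (name, boolean function on a tuple),
# in the processing order used by the slide's worked example.
PREDICATES = [
    ("Location='Munich'", lambda t: t[3] == "Munich"),
    ("Budget>200000",     lambda t: t[2] > 200000),
    ("Budget<=200000",    lambda t: t[2] <= 200000),
    ("Location='Paris'",  lambda t: t[3] == "Paris"),
    ("Location='Geneva'", lambda t: t[3] == "Geneva"),
]

def minterm_fragments(preds, relation):
    """Group tuples by the truth vector of all predicates (minterms)."""
    frags = {}
    for t in relation:
        key = tuple(p(t) for _, p in preds)
        frags.setdefault(key, []).append(t)
    return list(frags.values())

def accessed_differently(f1, f2):
    """Approximation: fragments are 'accessed differently' if their
    sets of per-tuple access vectors differ."""
    return {access_vector(t) for t in f1} != {access_vector(t) for t in f2}

def relevant(pred, selected, relation):
    """pred is relevant to M(selected) if it splits some minterm
    fragment into two nonempty, differently accessed parts."""
    _, p = pred
    for frag in minterm_fragments(selected, relation):
        f1 = [t for t in frag if p(t)]
        f2 = [t for t in frag if not p(t)]
        if f1 and f2 and accessed_differently(f1, f2):
            return True
    return False

def min_frag(preds, relation):
    selected = []
    for pred in preds:                      # processing order matters
        if relevant(pred, selected, relation):
            selected.append(pred)
            # drop any predicate that has become non-relevant
            for q in list(selected):
                rest = [r for r in selected if r is not q]
                if not relevant(q, rest, relation):
                    selected.remove(q)
    return [name for name, _ in selected]

minimal = min_frag(PREDICATES, DEPARTMENT)
```

With this data and processing order, the run reproduces the worked example: Location='Munich' is added first but eventually dropped, leaving the minimal set \(P2'\).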
Example MinFrag Algorithm

- $P_3 = \{\text{Location = "Paris", Location = "Munich", Location = "Geneva", Budget > 200000, Budget <= 200000}\}$ is a complete set of predicates

1. Step 1: add Location = "Munich" (ok)
2. Step 2: add Budget > 200000 (ok)
3. Step 3: add Budget <= 200000 (no, is dropped)
4. Step 4: add Location = "Paris" (ok)
5. Step 5: add Location = "Geneva" (ok, but now Location = "Munich" is dropped)

We illustrate how the MinFrag algorithm would work for our example. The dropping of the predicate in step 3 happens for obvious reasons. In step 5 the predicate Location="Munich" is dropped since, as we have seen earlier, it is not required to distinguish all possible fragments. Note that if Location="Munich" had only been considered in the last step (rather than in the first), it would never have been included in the set $P'$. Thus the execution of the algorithm depends on the order in which predicates from the initial complete set are processed.

Eliminating Empty Fragments

- Not all minterm predicates constructed from a complete and minimal set of predicates generate useful fragments
- Example: \{Location = "Paris", Location = "Geneva", Budget > 200000\} is minimal
- All minterm predicates:

F1 : Location="Paris" \land \neg Location="Geneva" \land Budget > 200000
F2 : Location="Paris" \land \neg Location="Geneva" \land \neg Budget > 200000
F3 : \neg Location="Paris" \land Location="Geneva" \land Budget > 200000
F4 : \neg Location="Paris" \land Location="Geneva" \land \neg Budget > 200000
F5 : \neg Location="Paris" \land \neg Location="Geneva" \land Budget > 200000
F6 : \neg Location="Paris" \land \neg Location="Geneva" \land \neg Budget > 200000
F7 : Location="Paris" \land Location="Geneva" \land Budget > 200000
F8 : Location="Paris" \land Location="Geneva" \land \neg Budget > 200000

Finally, after executing MinFrag, it is still possible that certain minterm fragments have to be excluded for logical reasons.
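A short sketch of enumerating the minterm fragments for the minimal set above over the example table (encoding predicates as Python functions is an illustration choice). The contradictory minterms F7 and F8 (Paris AND Geneva) always come out empty; note that for this particular table instance further minterms happen to be empty as well, since no Paris department has a budget of at most 200000.

```python
from itertools import product

# DEPARTMENT rows as (DNo, DName, Budget, Location), from the slide's table.
DEPARTMENT = [
    ("P4", "Sales",       500000, "Geneva"),
    ("P2", "Marketing",   300000, "Paris"),
    ("P3", "Development", 250000, "Munich"),
    ("P1", "Development", 150000, "Bangalore"),
    ("P5", "Marketing",   120000, "Geneva"),
    ("P6", "Development", 900000, "Paris"),
    ("P7", "Research",    800000, "Paris"),
]

# The minimal set {Location="Paris", Location="Geneva", Budget>200000}.
SIMPLE_PREDICATES = [
    lambda t: t[3] == "Paris",
    lambda t: t[3] == "Geneva",
    lambda t: t[2] > 200000,
]

def minterm_fragments(relation, preds):
    """One candidate fragment per truth assignment over the simple
    predicates: 2^n minterm predicates for n simple predicates."""
    fragments = {}
    for signs in product([True, False], repeat=len(preds)):
        fragments[signs] = [
            t for t in relation
            if all(p(t) == s for p, s in zip(preds, signs))
        ]
    return fragments

fragments = minterm_fragments(DEPARTMENT, SIMPLE_PREDICATES)
nonempty = {signs: f for signs, f in fragments.items() if f}

# Minterm fragments are pairwise disjoint and together reconstruct
# the relation (reconstruction by union).
assert sum(len(f) for f in nonempty.values()) == len(DEPARTMENT)

# The contradictory minterms (Paris AND Geneva) are always empty.
assert fragments[(True, True, True)] == []    # F7
assert fragments[(True, True, False)] == []   # F8
```

For this table instance, 5 of the 8 minterms yield nonempty fragments.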
As this example illustrates, it is quite possible that we need a certain minimal set of simple predicates in order to properly describe all horizontal fragments of the relation, and yet can construct from this set minterm predicates that produce empty fragments. The typical case is when multiple equality conditions on the same attribute are included: the conjunction of two such predicates in their positive (unnegated) form always leads to a contradictory predicate, i.e., an empty fragment.

Summary Primary Horizontal Fragmentation
- **Properties**
  - Relation is completely decomposed
  - We can reconstruct the original relation from the fragments by union
  - The fragments are disjoint (by definition of minterm predicates)
- **Application provides information on**
  - what the fragments of single applications are
  - what the access frequencies to the fragments are
- **Algorithm MinFrag**
  - derives from a complete set of predicates a minimal set of predicates needed to decompose the relation completely
  - without producing unnecessary fragments

Since the process of fragmenting a single relation horizontally is a considerable effort, the question is whether such a fragmentation cannot be exploited further. In fact, there exists a good reason to do so when considering how relational database schemas are typically constructed. In general, one finds many foreign key relationships, where one relation refers to another relation by using its primary key as reference. Since these relationships carry a specific meaning, it is very likely that they will also be used during accesses to the database, i.e., by executing join operations over the two relations. This means that the corresponding tuples in the two relations will be jointly accessed. Thus it is advantageous to keep them at the same site in order to reduce communication cost.
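This "keep related tuples together" idea amounts to deriving fragments of a referencing relation from the fragments of the referenced one, by keeping exactly those tuples whose foreign key matches a tuple in the fragment. A minimal sketch, with a hypothetical EMPLOYEE relation (names and DNo values made up for illustration) referencing DEPARTMENT.DNo:

```python
# Deriving fragments of a referencing relation from given fragments
# of the referenced relation (sketch). DEPARTMENT fragments are
# assumed given, e.g. from primary horizontal fragmentation.

DEPT_FRAGMENTS = {
    "F_paris": [("P2", "Paris"), ("P6", "Paris"), ("P7", "Paris")],
    "F_other": [("P1", "Bangalore"), ("P3", "Munich"),
                ("P4", "Geneva"), ("P5", "Geneva")],
}

# Hypothetical referencing relation: (ENo, Name, DNo).
EMPLOYEE = [
    ("E1", "Smith", "P2"),
    ("E2", "Jones", "P4"),
    ("E3", "Lee",   "P6"),
    ("E4", "Wong",  "P3"),
]

def restrict_by_fragment(emp, dept_fragment):
    """Keep employee tuples whose DNo matches a department in the
    fragment, projected on EMPLOYEE's own schema (a semi-join)."""
    dnos = {d[0] for d in dept_fragment}
    return [e for e in emp if e[2] in dnos]

derived = {name: restrict_by_fragment(EMPLOYEE, frag)
           for name, frag in DEPT_FRAGMENTS.items()}
```

Each derived fragment can then be allocated to the same site as the DEPARTMENT fragment that induced it.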
As a consequence it is possible and advantageous to "propagate" a horizontal fragmentation that has been obtained for one relation to other relations that are related to it via a foreign key relationship, and to later keep the corresponding fragments at the same site. We call fragments obtained in this way derived horizontal fragments. Formally, the derivation of horizontal fragments can be introduced using the so-called **semi-join operator**. The semi-join is a relational algebra operator that takes the join of two relations and then projects the result onto the attributes of one of them. When computing the semi-join of a horizontal fragment with another relation, one obtains the corresponding derived horizontal fragment of the second relation.

Multiple Derived Horizontal Fragmentations
- Distribute the primary and derived fragment to the same site
  - tuples related through a foreign key relationship will frequently be processed together (relational joins)
- If multiple foreign key relationships exist, multiple derived horizontal fragmentations are possible
  - choose the one whose foreign key relationship is used most frequently

In general, different DHFs can be obtained if the same relation is related to multiple relations through foreign key relationships. In that case a decision has to be taken, since fragmenting according to different primary fragmentations at once would make no sense: it would not be possible to keep the tuples in the derived fragments together with the corresponding primary fragments if those are moved to different sites. Therefore the DHF is chosen that is induced by the relation expected to be used together most frequently with the relation for which the DHF is generated.

Summary
• How are horizontal fragments specified?
• When are two fragments considered to be accessed in the same way?
• What is the difference between simple and minterm predicates?
• How is relevance of simple predicates determined?
• Is the set of predicates selected in the MinFrag algorithm monotonically growing?
• Why are minterm predicates eliminated after executing the MinFrag algorithm?

3. Vertical Fragmentation
- Vertical fragmentation of a single relation
- Modeling the access to the relations
- Example

Similarly to tuples in horizontal fragmentation, for vertical fragmentation we can analyze how attributes are accessed by applications running on different sites. Using this information, the goal is then to place attributes where they are used most. In this example we see that two attributes are always accessed jointly, and that site S2 is the one that uses them most. So S2 might be a candidate site for placing these attributes. For similar reasons, DNo and Location are best placed at S3.

Correct Vertical Fragments
- **Vertical Fragmentation?**

<table> <thead> <tr> <th>DNo</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>P4</td> <td>Geneva</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table>

Fragment moved to S3

<table> <thead> <tr> <th>DName</th> <th>Budget</th> </tr> </thead> <tbody> <tr> <td>Sales</td> <td>500000</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table>

Fragment moved to S2

- **Primary key must occur in every vertical fragment, otherwise the original relation cannot be reconstructed from the fragments**

<table> <thead> <tr> <th>DNo</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>P4</td> <td>Geneva</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table>

Fragment moved to S3

<table> <thead> <tr> <th>DNo</th> <th>DName</th> <th>Budget</th> </tr> </thead> <tbody> <tr> <td>P4</td> <td>Sales</td> <td>500000</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table>

Fragment moved to S2

- **Possible vertical fragments are all subsets of attributes that contain the primary key**

If we partition the attributes in the way described before, we have a problem: since the fragment moved to S2 does not
contain the primary key of the relation (which is underlined), we can no longer reconstruct which tuple in this fragment corresponds to which tuple in the fragment kept at S3, and so we could not reconstruct the relation. Therefore there is no other possibility than to also keep the primary key attribute at S2. This form of replication of data values is unavoidable in vertical fragmentation. Therefore in the following we assume that the primary key attributes are always replicated to all fragments.

Modeling Access Characteristics

Possible Queries
- q1: `SELECT Budget FROM DEPARTMENT WHERE Location="Geneva"`
- q2: `SELECT Budget FROM DEPARTMENT WHERE Budget>100000`
- q3: `SELECT Location FROM DEPARTMENT WHERE Budget>100000`
- q4: `SELECT DName FROM DEPARTMENT WHERE Location="Paris"`
etc.

<table> <thead> <tr> <th></th> <th>DName</th> <th>Budget</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>q1, q3</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>q2</td> <td>0</td> <td>1</td> <td>0</td> </tr> <tr> <td>q4</td> <td>1</td> <td>0</td> <td>1</td> </tr> </tbody> </table>

DEPARTMENT

<table> <thead> <tr> <th>DNo</th> <th>DName</th> <th>Budget</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>P4</td> <td>Sales</td> <td>500000</td> <td>Geneva</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table>

Matrix Q describes which type of query accesses which attributes. Modeling the access characteristics for vertical fragmentation differs from the horizontal case for an important reason: the number of attributes is fairly small compared to the number of tuples. One consequence of this observation is that it is quite possible that many different applications access exactly the same subset of attributes of a relation. This is unlikely to occur in horizontal fragmentation for subsets of tuples. In the example we see how applications are characterized as different queries.
In the matrix $Q$ we now simply record which of the queries (applications) access which attributes. Queries accessing the same subset of attributes are treated alike in the following. What is relevant for modeling the access characteristics is how attributes are accessed at different sites. Thus we record in a second matrix $M$ the access frequency at each site (in our example three sites $S_1$, $S_2$, $S_3$) for each type of application. The application types correspond to the ones we identified previously with matrix $Q$. Matrices $Q$ and $M$ together thus model how the relation is accessed by the distributed applications.

Modeling Access Characteristics
• Every subset of attributes will most likely be accessed differently by some application
  - Thus trying to find subsets that are uniformly accessed is futile
  - Rather, find subsets of attributes that are accessed most similarly
• Determine how often attributes are accessed jointly (affinity)
  - \( \text{aff}(\text{DName}, \text{Budget}) = 0 \)
  - \( \text{aff}(\text{Budget}, \text{Location}) = 20 + 15 = 35 \)
  - \( \text{aff}(\text{DName}, \text{Location}) = 15 \)
  - \( \text{aff}(\text{DName}, \text{DName}) = 15 \)
  - \( \text{aff}(\text{Budget}, \text{Budget}) = 65 \)

\[ \text{aff}(A_i, A_j) = \sum_{k:\, Q_{ki} = 1 \,\wedge\, Q_{kj} = 1} \; \sum_{l=1}^{m} M_{kl} \]

<table> <thead> <tr> <th></th> <th>Budget</th> <th>DName</th> <th>Location</th> </tr> </thead> <tbody> <tr> <td>Budget</td> <td>65</td> <td>0</td> <td>35</td> </tr> <tr> <td>DName</td> <td>0</td> <td>15</td> <td>15</td> </tr> <tr> <td>Location</td> <td>35</td> <td>15</td> <td>50</td> </tr> </tbody> </table>

For horizontal fragmentation the goal was to group together in fragments those tuples that are accessed in exactly the same way by all applications. The corresponding idea for attributes would be to partition the set of attributes such that each subset of the partition is accessed in the same way by all applications.
Given the fact that the number of applications is probably large compared to the number of attributes, such an approach would most likely end up fragmenting the relation into fragments containing one attribute each (ignoring the additional primary key attribute); thus the finest possible fragmentation would always result. This is obviously not very interesting. We therefore relax our goals in order to achieve reasonably sized fragments containing multiple attributes, and require only that attributes are accessed similarly within a fragment. In the following we thus introduce a method, based on clustering, that allows us to detect similar access patterns to attributes. A first step towards this goal is to identify for each pair of attributes how often they are accessed jointly. Finding attributes that are accessed similarly by many applications and clustering them together will be the basis for identifying vertical fragments. For that purpose we compute from the information contained in our access model (matrices $M$ and $Q$) an affinity matrix $A$, as described and illustrated above. Given the matrix $A$ we can now determine how well neighboring attributes in $A$ (neighboring rows, resp. columns) fit each other in terms of access pattern. To that end we compute for each pair of neighboring columns in the matrix the neighborhood affinity as the scalar product of the columns. This provides a measure of how similar the two columns are (remember: if the scalar product is 0, the two vectors are orthogonal). Adding up the similarity values of all neighboring columns provides a global measure of how well the columns fit. The second figure shows an interesting point: by swapping two columns (and the corresponding rows) the global neighborhood affinity value increases substantially. Qualitatively we can see that a cluster has in fact formed in the matrix, where the columns related to Budget and Location appear similar to each other.
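The affinity matrix above can be reproduced from Q and per-query total frequencies. Since matrix M itself is not reproduced in the text, the totals used below (q1=20, q2=30, q3=15, q4=15, each summed over all sites) are reconstructed from the stated affinity values; treat them as an assumption.

```python
ATTRS = ["Budget", "DName", "Location"]

# Matrix Q: which query uses which attribute (1 = used).
Q = {
    "q1": {"Budget": 1, "DName": 0, "Location": 1},
    "q2": {"Budget": 1, "DName": 0, "Location": 0},
    "q3": {"Budget": 1, "DName": 0, "Location": 1},
    "q4": {"Budget": 0, "DName": 1, "Location": 1},
}

# Total access frequency of each query, summed over all sites
# (i.e. sum over l of M[k][l]); reconstructed values, M is not shown.
FREQ = {"q1": 20, "q2": 30, "q3": 15, "q4": 15}

def affinity(a_i, a_j):
    """aff(A_i, A_j): total frequency of queries using both attributes."""
    return sum(FREQ[k] for k in Q if Q[k][a_i] and Q[k][a_j])

AFF = {a: {b: affinity(a, b) for b in ATTRS} for a in ATTRS}
```

The result matches the affinity table shown above, e.g. aff(Budget, Location) = 20 + 15 = 35 and aff(Budget, Budget) = 65.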
For vertical fragmentation this indicates that the two attributes should stay together in the same vertical fragment, since in many applications whenever the one attribute is accessed the other is accessed as well, and thus less communication occurs if the attributes reside at the same site. One can also conclude that, in order to locate good clusters, it is important first to increase the global neighbor affinity measure.

Bond Energy Algorithm
- Clusters entities (in this case attributes) together in a linear order such that subsequent attributes have strong affinity with respect to $A$

**Algorithm** BEA (bond energy algorithm)
Given: an $n \times n$ affinity matrix $A$
Initialization: select one column and put it into the first column of the output matrix
Iteration step $i$: place one of the remaining $n-i$ columns in one of the possible $i+1$ positions in the output matrix that makes the largest contribution to the global neighbor affinity measure
Row ordering: order the rows the same way as the columns

Contribution of a column, when placing $A_k$ between $A_i$ and $A_j$:
$$\text{cont}(A_i, A_k, A_j) = \text{bond}(A_i, A_k) + \text{bond}(A_k, A_j) - \text{bond}(A_i, A_j)$$
$$\text{bond}(A_x, A_y) = \sum_{z=1}^{n} \text{aff}(A_x, A_z)\, \text{aff}(A_z, A_y)$$

The question is whether we can always reorganize the matrix in the way described before, by properly exchanging columns and rows (i.e., changing the attribute order). For a set of attributes there exist many possible orderings, i.e., for $n$ attributes $n!$ orderings. To that end there exists an efficient algorithm for finding clusters: the **bond energy algorithm** proceeds by linearly traversing the set of attributes. In each step one of the remaining attributes is added; it is inserted into the current order of attributes such that the maximal contribution is achieved. This is first done for the columns.
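The greedy column placement can be sketched as follows, using the 3x3 affinity matrix computed earlier for DEPARTMENT's non-key attributes (bond with the empty border position is taken as 0, as in the worked example):

```python
# Bond energy algorithm (sketch) over the DEPARTMENT affinity matrix.
ATTRS = ["Budget", "DName", "Location"]
AFF = {
    "Budget":   {"Budget": 65, "DName": 0,  "Location": 35},
    "DName":    {"Budget": 0,  "DName": 15, "Location": 15},
    "Location": {"Budget": 35, "DName": 15, "Location": 50},
}

def bond(x, y):
    """bond(A_x, A_y) = sum_z aff(A_x, A_z) * aff(A_z, A_y);
    the bond with the empty border position is defined as 0."""
    if x is None or y is None:
        return 0
    return sum(AFF[x][z] * AFF[z][y] for z in ATTRS)

def contribution(left, mid, right):
    return bond(left, mid) + bond(mid, right) - bond(left, right)

def bea(attrs):
    order = [attrs[0]]                      # first column placed directly
    for a in attrs[1:]:
        best_pos, best_cont = 0, None
        for pos in range(len(order) + 1):   # try every insertion position
            left = order[pos - 1] if pos > 0 else None
            right = order[pos] if pos < len(order) else None
            c = contribution(left, a, right)
            if best_cont is None or c > best_cont:
                best_pos, best_cont = pos, c
        order.insert(best_pos, a)
    return order  # rows are then reordered the same way

order = bea(ATTRS)
```

With these numbers, Budget and Location end up adjacent in the output ordering, reflecting the cluster discussed above.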
Once all columns are determined, the row ordering is adapted to the column ordering and the resulting affinity matrix exhibits the desired clustering. For computing the contribution to the global affinity value, one computes the gain obtained by adding a new column and subtracts from that the loss incurred through the separation of previously adjacent columns. The bond of a pair of columns is the scalar product of the columns, which is maximal if the columns exhibit the same value distribution (or, when considered as vectors in a vector space, point in the same direction). This example illustrates how BEA works. The first two columns can be chosen arbitrarily, since no decision needs to be made. For the third column, however, there exist three possible positions, and each placement yields a different contribution. In case the neighboring column required to compute the contribution is empty, the bond value is set to 0. The computation shows that A2 is to be positioned at the third position (if A1 and A2 had been added at the beginning, A3 would go into the second position in the second step). The last step results in adding A4 at the fourth position. Then the rows are reordered. From the resulting affinity matrix we can nicely "see" the clusters that would constitute the optimal vertical fragmentation. But how do we compute these clusters? For finding clusters we have to go back to our access model. We can see above that we have three possibilities to split the set of attributes into two fragments. For each of the possibilities we have to determine what the result would be: this is done by counting in how many cases accesses are made to attributes from only one of the two fragments (this is good) and to attributes from both fragments (this is bad). To compare, we compute the split quality by producing a positive contribution for the good cases and a negative one for the bad cases (see formula).
The computation of these numbers is simple given our access model. For each of the queries q1-q4 (note that this model is different from the one we introduced at the beginning in the first example) we select, by inspecting matrix Q, the cases where attributes from only one fragment and where attributes from both fragments are accessed. For these cases we add up, over all sites, the total number of accesses made, by taking them from matrix M.

Vertical Splitting Algorithm
- Two problems remain:
  - Clusters forming in the middle of the matrix A
    - shift rows and columns one by one
    - search the best splitting point for each "shifted" matrix
    - cost $O(n^2)$
  - A split into two fragments only is not always optimal
    - find $m$ splitting points simultaneously
    - try $1, 2, \ldots, m$ splitting points and select the best
    - cost $O(2^m)$

On the previous slide we considered only a simple way to split the matrix: namely, selecting a split point along the diagonal and taking the resulting upper and lower "quadrants" as fragments. Now assume that the attributes are rotated (a possible rotation is indicated). Then the same upper and lower fragments would look as illustrated in the upper figure, and we would no longer be able to identify the fragment using the simple method described before. This means we might miss "good" fragments, depending on the choice of the first attribute (which is random). Therefore a better way is to consider all possible rotations of the attributes, which increases the cost of the search but also allows us to investigate many more alternatives. Another issue is that only a simultaneous split into multiple fragments may reveal good clusters. If, for example, three good clusters exist, as indicated in the figure below, it is by no means always the case that a combination of two of the clusters will be recognized as good clusters, and thus such a split cannot be obtained by successive binary splits.
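The split-quality measure referred to above (rewarding queries that touch only one fragment, penalizing queries that touch both) is commonly written in the vertical-partitioning literature as z = CTQ * CBQ - COQ^2; since the slide's own formula is not reproduced in the text, this standard form is used here as an assumption. The query frequencies are again the reconstructed totals:

```python
# Split quality for a binary vertical split (sketch), using the
# standard form z = CTQ * CBQ - COQ^2 from the literature.

Q = {  # attribute sets used by each query type
    "q1": {"Budget", "Location"},
    "q2": {"Budget"},
    "q3": {"Budget", "Location"},
    "q4": {"DName", "Location"},
}
FREQ = {"q1": 20, "q2": 30, "q3": 15, "q4": 15}  # totals over all sites

def split_quality(top, bottom):
    ctq = cbq = coq = 0
    for q, attrs in Q.items():
        in_top = bool(attrs & top)
        in_bottom = bool(attrs & bottom)
        if in_top and in_bottom:
            coq += FREQ[q]          # query spans both fragments: bad
        elif in_top:
            ctq += FREQ[q]          # top fragment only: good
        elif in_bottom:
            cbq += FREQ[q]          # bottom fragment only: good
    return ctq * cbq - coq ** 2

# Compare the three possible binary splits of {Budget, DName, Location}:
z_dname = split_quality({"DName"}, {"Budget", "Location"})
z_budget = split_quality({"Budget"}, {"DName", "Location"})
z_location = split_quality({"Location"}, {"Budget", "DName"})
```

Splitting off DName scores best here, consistent with the Budget/Location cluster found by BEA.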
Summary Vertical Fragmentation
- **Properties**
  - Relation is completely decomposed
  - We can reconstruct the original relation from the fragments by relational join
  - The fragments are disjoint, with the exception of the primary key, which is replicated in every fragment
- **Application provides information on**
  - which attributes are accessed by single applications
  - what the access frequencies of the applications are
- **Algorithm BEA**
  - identifies clusters of attributes that are accessed similarly
  - the clusters are potential vertical fragments

To round out the picture we describe how horizontal and vertical fragmentation can be combined. The figure illustrates how a horizontal fragmentation of a relation R is further refined by performing vertical fragmentations of the obtained horizontal fragments as if they were separate relations. An interesting question is why one should perform the horizontal fragmentation first: again, the reason is that there exist many more tuples than attributes, and thus many more possible horizontal fragmentations than vertical fragmentations. Choosing first a unique vertical fragmentation for all the possible horizontal fragmentations would unnecessarily constrain the search space.

4. Fragment Allocation
- Problem
  - given fragments F1, ..., Fn
  - given sites S1, ..., Sm
  - given applications A1, ..., Ak
  - find the optimal assignment of fragments to sites such that the total cost of all applications is minimized and the performance is maximized
- Application costs
  - communication, storage, processing
- Performance
  - response time, throughput
- The problem is in general NP-complete
  - apply heuristic methods from operations research

Generating the fragments (both vertically and horizontally) creates a fragmentation that takes into account, in an optimized manner, the information that is available about the access behavior of the applications.
The fragmentation is as fine as necessary to take into account all important variations in behavior, but not finer. An important issue that we do not treat here is the problem of allocating the fragments that have been identified to the best possible sites. This problem can be described in a rather straightforward way by taking into account all types of costs that occur while processing the applications running on the database. The optimality criterion also has to balance the resource costs incurred against the performance achieved for the user (in terms of response time and throughput). Having formulated the problem in this manner, it can be reduced to standard operations research problems, and a number of heuristic approaches from OR have been adopted for its solution.

Summary
• Why do we use different clustering criteria for vertical and horizontal fragmentation (similar access vs. uniform access)?
• Why does the affinity measure for attributes lead to a useful clustering of attributes in vertical fragments?
• How does the Bond Energy Algorithm proceed in ordering the attributes?
• What is the criterion to find an optimal splitting of the ordered attributes?
• Which variants exist for searching optimal splits?

References
- Course material based on
  - Relevant articles
# Managing the ACM Programming Contest

Navin Bhaskar

Follow this and additional works at: http://scholarworks.rit.edu/theses

This Master’s Project is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works. It has been accepted for inclusion in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact ritscholarworks@rit.edu.

Project Report: Contest Management Application

Committee:

- Prof. Axel-Tobias Schreiner, Chair
- Prof. Rajendra K. Raj, Reader
- Prof. Paul Tymann, Observer

# Table of contents

1. Introduction
2. Objectives
3. Initial Hypothesis
4. Architecture
5. Technology Overview
    - 5.1 Java Servlet technology
    - 5.2 Java RMI technology
    - 5.3 Java Applet technology
    - 5.4 Java Server Pages technology
    - 5.5 Application Server
6. Application: User Interface
    - 6.1 Options: Team
    - 6.2 Options: Administrator
    - 6.3 Options: Judge
7. Application: Functioning
    - 7.1 File upload/submission and logging
    - 7.2 Obtaining the scoreboard implementation
    - 7.3 Downloading latest team submissions
    - 7.4 Scoreboard refresh and computation
    - 7.5 Clarification mechanism
    - 7.6 Application security
8. Application: Packages
    - 8.1 Package: server
    - 8.2 Package: scoreboard
    - 8.3 Package: io
    - 8.4 Package: db
    - 8.5 Package: data
    - 8.6 Package: config
    - 8.7 Package: jsp
    - 8.8 Package: users
9. Application: Configuration files
    - 9.1 Internationalization files: MessagesBundle_xx_XX.properties
    - 9.2 Application menu options file: LeftMenu.xml
    - 9.3 Contest user details file: userplus.xml
10. Logging and server crash recovery
    - 10.1 Activity Codes
11. Security
12. Choice of technology
    - 12.1 Microsoft based solution
    - 12.2 Java based solution
13. Other contest management systems
    - 13.1 PC²
    - 13.2 Mooshak
14. Future enhancements

# Abstract

The Association for Computing Machinery (ACM) conducts an international collegiate programming contest held on an annual basis.
Teams compete to solve multiple questions within the allotted six-hour duration of the contest. A panel of judges grades the solutions online. The team that answers the most questions correctly wins the contest. This application assists in the process of selecting the winners of the programming contest, and makes submitting solutions and grading the answers convenient. The ACM application is a Java-based, web-enabled application and is therefore platform independent.

# 1. Introduction

This project report describes the design, function, implementation and deployment of a contest management application for conducting programming contests. The first part of this document is primarily dedicated to product architecture, design and implementation. The latter sections compare and contrast it with currently available solutions and technology options. Two software products have been evaluated, PC² [2] and Mooshak [1]. With minor configuration changes, this contest management application can be reused to conduct other programming contests held along similar lines. System prerequisites, installation, configuration and deployment details are also elaborated.

# 2. Objectives

This product is primarily meant to assist in conducting the annually held ACM programming contest. With minimal configuration changes, it should be reusable for contests held along similar lines. It should not rely on proprietary libraries, whose use would limit customization and/or enhancement in the future. The code is organized into modules, and implementations have been wrapped behind Java language interfaces so that an implementation can be modified without breaking the application. This product is also compared with two available products, namely Mooshak [1] and PC² [2]. The first product is based on legacy CGI technology; the second requires separate installation.
Implemented in Java using distributed and object-oriented technologies, this contest management software requires minimal configuration and installation steps.

# 3. Initial Hypothesis

Unlike legacy CGI-based contest software, which employs process switching, a thread-based design would provide performance benefits. Though modeled primarily to assist in conducting the ACM programming contest, this product would be generic enough to permit reuse for other contests. Being rich in thread and network protocol APIs, Java technology could be used to develop and demonstrate a robust, modular and web-based contest management application without operating system, platform or shell (command-line interpreter) dependencies. Applet-Servlet communication and RMI technology could be leveraged to implement a near-real-time scoreboard. Modular coding, packaging and technologies could be utilized to extend the product in the future.

# 4. Architecture

The ACM contest management application is a 3-tier web application, with the middle tier being the application server and the final tier being a relational database (storage tier) used to log contest activity. The web application is composed of many web pages accessed using a web browser. Users have to be authenticated and authorized by the application server before accessing the web application. Authentication is achieved by password comparison. Authorization is verified by comparing the role assigned to the user with the role required to access the resource. This security mechanism prevents teams from accessing web pages reserved for judges.

The server application is a Java Servlet implementing the Remote interface. A single Servlet services all client requests. The ACM server is responsible for logging the contest activity, storing uploaded team submissions for subsequent grade assignment by a judge, and refreshing client scoreboards with the latest standings. Contest activity is maintained as records in a database table.
The JDBC APIs allow the ACM server (or simply, the server) to access and update the database tables. The choice of a relational database makes the task of storing and retrieving data convenient; it also makes the future creation of reports easy. Because of resource constraints, only one client can be used from a machine at a time.

Teams use a web form to upload solutions from their machines. The HTTP POST mechanism is used to upload the zip archive containing the team submission to the ACM server. Subsequently, a judge downloads the solution and assigns a grade after matching the output with the correct solution. Solution submission by a team and grade assignment by a judge cause the standings to change. An automatically refreshing scoreboard allows the participants to view the latest standings. Scoreboards are also capable of sorting the standings automatically; the leading team is always displayed at the top. The team that solves the maximum number of problems correctly is adjudged the winner of the programming contest. In case of a tie, the minimum cumulative time is used to determine the winner.

The scoreboard is implemented as an AWT Applet implementing the Remote interface. The ACM server exposes Remote methods to enable participant scoreboards to register. Scoreboards self-register with the server during Applet initialization; self-registration involves providing the scoreboard's Remote reference to the server. As soon as the standings change, the server updates each registered scoreboard with the latest standings (a server-initiated RMI method invocation).

Visually, the judge and team scoreboards appear to be identical. However, the scoreboard available to a judge is context sensitive: it allows a judge to download team solutions and assign grades with a right-click of the mouse. Once the submission has been downloaded by a judge, the process of archive extraction and output matching is performed manually.
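The cumulative and elapsed times used in the tie-break above are kept to hour-minute-second precision (see section 7.4). A small sketch of deriving such a display value from the contest start time; the class and method names are hypothetical, not the product's actual code:

```java
import java.time.Duration;
import java.time.Instant;

public class Elapsed {
    // Hypothetical helper: format the gap between contest start and a
    // submission in the hour-minute-second precision the scoreboard uses.
    static String format(Instant contestStart, Instant submittedAt) {
        Duration d = Duration.between(contestStart, submittedAt);
        long s = d.getSeconds();
        return String.format("%02d:%02d:%02d", s / 3600, (s % 3600) / 60, s % 60);
    }

    public static void main(String[] args) {
        Instant start = Instant.parse("2007-04-14T09:00:00Z");
        Instant sub = Instant.parse("2007-04-14T09:17:43Z");
        System.out.println(format(start, sub)); // prints 00:17:43
    }
}
```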
Unlike judges and teams, an administrative user is responsible for assisting the judges with the contest and is not considered a participant.

Teams can seek clarifications from judges for ambiguous questions. Clarifications are forwarded to the judge by the ACM server as an e-mail using the Java Mail API, which involves mail protocols like SMTP and POP3/IMAP. Secure versions of these mail protocols are also supported. File submission by a team and grade assignment activities are logged in the database for the duration of the contest. Clarification requests are not logged in the database. Though this implementation will be deployed on the Apache Jakarta Project's Tomcat Application Server, it can be ported to any J2EE-compliant application server.

# 5. Technology Overview

## 5.1 Java Servlet technology

A Java class is said to be a Servlet if it sub-classes (or extends) a basic Servlet implementation (GenericServlet) or an HTTP-specific implementation (HttpServlet). Servlets are regular classes, allowing the use of all Java programming language syntax, APIs, language features and constructs. Unlike a Java application that is executed from the command line (as a separate JVM/process), Servlets execute within the application server's JVM process. Unlike a legacy CGI server, which spawns a new child process per request to generate dynamic content [sections (b) and (c) in the figure below], a Servlet services client requests on separate threads within the server's (JVM) process. Each client request results in a call to the Servlet's service method [section (a) of the figure below]. The overhead of a thread context switch on the underlying operating system is substantially less than that of a process context switch. Servlets are loaded, instantiated and initialized by the application server's Servlet container (referred to henceforth as the container). The main advantage of using Servlet technology is that the container manages the protocol.
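Because a single Servlet instance services many requests on concurrent threads, any shared state must be guarded. The hazard and the fix can be sketched outside the Servlet API with a plain shared counter (names hypothetical); the worker threads stand in for concurrent service calls touching an instance variable:

```java
// Sketch: many "request" threads bump one shared field, much as concurrent
// service() calls would touch a Servlet instance variable.
public class SharedCounter {
    private int hits = 0;

    // synchronized makes the read-modify-write atomic across request threads;
    // without it, increments can be lost under contention.
    synchronized void increment() { hits++; }

    synchronized int hits() { return hits; }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(c.hits()); // prints 80000 with synchronization
    }
}
```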
Servlets permit the use of instance variables; however, these need to be explicitly synchronized because of concurrent (multi-threaded) access.

A Servlet container (container) is responsible for loading and managing Servlets. Web requests are intercepted by the container and routed to the requested Servlet based on information available in the deployment descriptor, an XML configuration file containing Servlet information. If the Servlet is listed in the deployment descriptor but not yet loaded in memory, the container attempts to load the Servlet class from the file system using a chain of class loaders. If the container determines that this is the first request for the Servlet, the Servlet's init method is invoked first.

For each request from the web browser, the container parses the input stream and creates new instances of the HttpServletRequest and HttpServletResponse objects. The ServletRequest instance contains the client's HTTP parameter, encoding and content information. The ServletResponse contains the cookie, status code and header information encapsulating the HTTP response provided to the client. The container invokes the Servlet's service method for each request, passing both objects as parameters. Conversely, when the server is being shut down or is attempting to free memory, the container calls the Servlet's destroy method before the Servlet class is unloaded from memory. A catastrophic failure, such as the process being killed, prevents the invocation of this method. The destroy method is an opportunity for the Servlet being unloaded to clean up and close its resources.

**Servlet lifetime**

From the time a Servlet is loaded by the Servlet container to the time it is ready to be garbage collected, three methods are invoked; these are therefore called lifetime methods. Subclasses can override these methods to provide additional functionality.
- Method: init - Once the Servlet class is loaded by the Servlet container('s class loader), the container is guaranteed to invoke the init method on it. The default implementation (provided by the superclass) can be extended to provide application-specific initialization steps. A Servlet can throw a ServletException during initialization; it can service requests only if the init method has returned normally.
- Method: service - The container is responsible for routing each client request to the Servlet's service method. Each service invocation executes concurrently in a separate thread within the server process unless the Servlet implements the SingleThreadModel interface. For this reason, all instance members need to be explicitly synchronized.
- Method: destroy - This is a container call-back method invoked prior to a server shutdown or a Servlet unloading. It is an opportunity for the Servlet to return and close important resources like files, sockets and database connections. It is the reciprocal of the init method. Once a Servlet has been destroyed, it can be garbage collected by the JVM.

The init and destroy methods are called once; the bulk of the Servlet's logic resides in the service method.

## 5.2 Java RMI technology

RMI technology allows method invocations between Java objects residing on disparate hosts (Remote objects). The caller and the callee can potentially reside on different machines, and the callee has to exist before the caller can invoke methods on it. Syntactically, an RMI method invocation appears similar to an in-JVM call (caller and callee executing in the same JVM). For such an invocation to happen, the callee (Remote object) must implement the marker Remote interface, and all distributed methods must be declared to throw RemoteException. The callee then registers itself with a naming facility in order to be looked up by callers. Sun Microsystems provides an executable (rmiregistry) implementing a basic naming service.
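A Remote interface for such a callee might be declared as follows; the interface and method names are hypothetical illustrations, not the product's actual API:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical Remote interface: it extends the marker interface Remote,
// and every distributed method declares RemoteException, as RMI requires.
public interface ScoreboardService extends Remote {
    // A scoreboard Applet would pass its own Remote reference here.
    void register(Remote scoreboardRef) throws RemoteException;

    // A client could poll the current standings as a string.
    String latestStandings() throws RemoteException;
}
```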
The caller/client looks up the remote object at the naming service based on a key. If the lookup is successful, the caller holds a reference to the remote object on which it can invoke distributed methods. The reference obtained as a result of the successful lookup is a local surrogate, or stub, of the Remote object implementation. RMI technology uses a combination of marshalling (with the use of serialization) and socket technologies to pass the parameters from the caller, execute the method (on the Remote object) and return a value (or Exception) to the caller.

## 5.3 Java Applet technology

Java Applets are client-side executing Java code. They execute within the web browser and are transported along with the web page, usually in the form of binary jar/zip files. Applets can make socket connections back to the server that delivered them to the client. An Applet's lifetime is closely related to the actions performed by the user accessing the web page containing the Applet. The Applet lifetime methods are summarized below:

- Method: `init` - One-time initialization of the Applet.
- Method: `start` - Page (re)visit or Applet (re)load.
- Method: `stop` - Invoked when control leaves the page containing the Applet or the user closes the browser window.
- Method: `destroy` - Release of requested resources before the Applet is unloaded from memory by the browser. It is the reciprocal of init and an opportunity for an Applet to clean up resources after use.

## 5.4 Java Server Pages technology

Java Server Pages (JSP) technology is used to dynamically generate HTML content from Java code embedded between HTML tags. Interestingly, a JSP page compiles into an HTTP Servlet; the application server's web container is responsible for compiling the JSP file containing the source code into a Servlet. All underlying Servlet APIs are therefore accessible from a JSP page. JSP simplifies programming by allowing the use of implicit objects.
The majority of the implicit objects are classes directly related to Servlets. It is also possible to programmatically invoke other Servlets from within the JSP code.

### 5.4.1 Implicit Objects

JSP extends Servlet technology by providing implicit objects for ease of use. They include, but are not limited to:

- `request` - `javax.servlet.ServletRequest`
- `response` - `javax.servlet.ServletResponse`
- `session` - `javax.servlet.http.HttpSession`
- `application` - `javax.servlet.ServletContext`

## 5.5 Application Server

The importance of the application server cannot be overstated: all of the technologies above have to be supported by it. It contains the Servlet container responsible for loading the Servlets, and it provides additional facilities like database connection pooling and a naming lookup facility. Though this implementation is deployed on the Tomcat Application Server (henceforth referred to as Tomcat), it can be deployed on any J2EE-compliant application server.

# 6. Application: User Interface

Because this is a protected web application, users are challenged to provide a valid user ID and password to access the contest management software. Even after a user has successfully logged in, the application server (hosting the web application) verifies that the user has the required role before serving each web page.

An administrator is responsible for user setup. User setup is a two-step process: the first step involves adding a user (team or judge) to the list of application server users. Tomcat maintains user information in a tomcat-users.xml file. The second step involves maintaining additional contest-specific information; this step is required because the application server does not maintain the additional attributes required for a product user. The details maintained for teams include the name of the institution, the user ID used to access the application, and the password used to access e-mail.
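These contest-specific details are kept in the product-specific userplus.xml file described next. Its actual schema is not reproduced in this report, so the fragment below is only a plausible sketch of what such a file might contain; all element and attribute names are hypothetical:

```xml
<!-- Hypothetical sketch only: the real userplus.xml schema is not shown in this report -->
<users>
  <team id="nxb8951">
    <institution>Rochester Institute of Technology</institution>
    <mailPassword>********</mailPassword>
  </team>
  <judge id="judge1">
    <email>judge1@cs.rit.edu</email>
    <room>X-1456</room>
    <phone>585-424-1001</phone>
  </judge>
</users>
```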
Similarly, the administrator also updates information relating to each judge, including e-mail address, room number and contact phone number. This information is separately maintained, in XML format, in the product-specific userplus.xml file. This file exclusively maintains information for users participating in the contest. Like all users participating in the programming contest, an administrator also has to be an existing application server user. Once the user setup is complete, the administrator updates the start time, end time and contest title. Once the application is redeployed with the contest-specific configuration, teams and judges can start using the web application. Some time prior to application redeployment, the teams are provided with a hard copy of the contest problems. Teams log into the application before uploading submissions; similarly, judges log in to answer clarifications and assign grades. The options available to a judge and to a team, based on their roles, are described in detail below.

## 6.1 Options: Team

a) Submit solution - For the duration of the contest, teams upload/submit source code and build files in the form of a single zip archive. A web form allows the team to upload the submission containing this archive; a copy of this submitted file is later downloaded by a judge to assign grades. The file upload web page provides a drop-down menu to select the problem for which the solution is being uploaded. Once the team confirms it by pressing the “Submit” button, the ACM server makes a copy of the uploaded solution. The correct output for the problem will also be inserted into the archive being maintained by the server.
All scoreboard Applets are also notified upon receipt of this submission by the server (the count of submissions and the elapsed time are updated).

b) View Scorecard - This functionality allows the teams to view the latest standings. The scoreboard is a matrix in which each team occupies a row and each problem a column. As soon as a team makes a submission, or a judge marks an uploaded solution as correct, the scoreboard refreshes automatically.

<table> <thead> <tr> <th>Available Options</th> <th>Scorecard</th> </tr> </thead> <tbody> <tr> <td>View Scorecard</td> <td></td> </tr> <tr> <td>Request Clarification</td> <td></td> </tr> <tr> <td>View All Clarifications</td> <td></td> </tr> <tr> <td>Documentation</td> <td></td> </tr> <tr> <td>Submit Solution</td> <td>Time: 22:58:52</td> </tr> <tr> <td>Log Off</td> <td></td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Team</th> <th>Problem 1</th> <th>Problem 2</th> <th>Summary</th> </tr> </thead> <tbody> <tr> <td></td> <td>Elapsed n</td> <td>Elapsed n</td> <td>Total n</td> </tr> <tr> <td>mxb8951</td> <td>00:57</td> <td>00:58</td> <td>1</td> </tr> <tr> <td>mxb8952</td> <td>00:59</td> <td>-</td> <td>-</td> </tr> </tbody> </table>

(Please note: the right and bottom parts of the above screen shot have been trimmed for formatting)

c) Request a clarification - This functionality allows the teams to request a clarification. Teams can use this facility if they feel there is any confusion or ambiguity in the problems. The provided text is sent to the ACM server using the HTTP POST mechanism, and the ACM server forwards the clarification to the judge as an e-mail using the Java Mail APIs. The judge can use any mail client to reply to the team's clarification. The judge has to manually forward the e-mail to a group account so that the information is available for the benefit of all the teams.
d) View all clarifications - This facility allows the teams to centrally view all the clarifications, sorted by problem. Clicking on a clarification allows the team to view the entire exchange between the team and the judge in detail.

e) View API documentation - This facility allows the teams to refer to web resources while solving the problems. The URL information is maintained in the Docs.xml file and is configured before the beginning of the contest.

## 6.2 Options: Administrator

a) Freezing the scoreboard - This facility allows an administrator to freeze the scoreboard until the judges have graded all the solutions and chosen the winner. Teams can continue to upload solutions, but the scoreboard will not be refreshed. The "defreeze" option returns the application to the active state.

    The current status is active, please choose a new status
    ☐ Freeze ☐ Defreeze ☐ Closed
    [submit] [Reset]

## 6.3 Options: Judge

a) View / Grade solution - As mentioned before, the team and judge scoreboards appear to be the same; however, judge scoreboards are context sensitive. They allow operations using a right-click of the mouse (pointing device): judges can download team submissions and also assign grades.

b) Look up judge information - This is an informative page containing the location and contact details of the other judges.

    Who, Where and What
    Judge 1: John Doe
    Telephone Extension: 585-424-1001
    Room No: X-1456
    E-mail address: judge1@cs.rit.edu
    Additional Information: Questions 1, 3 and clarifications

    Judge 2: Scott Adams
    Telephone Extension: 585-424-1502
    Room No: X-1857
    E-mail address: judge2@cs.rit.edu
    Additional Information: Questions 2, 6, 7

# 7. Application: Functioning

The container implements the ServletContext interface, which provides methods to acquire a RequestDispatcher for the current ServletContext.
The RequestDispatcher can be used to include or forward a request to a target Servlet for further processing; the two methods differ in the target Servlet's ability to modify the request header and response body. Once the include method is invoked for the ACM Servlet, its service method gets invoked. This mechanism can be used to route all contest activity centrally through the ACM Servlet, which helps the server assign consistent timestamps to each team submission and grade assignment, and to provide timely scoreboard updates.

## 7.1 File upload/submission and logging

This product accepts team submissions in the form of a zip archive. Teams can submit any number and type of files within the zip archive necessary for the compilation and execution of the program. As mentioned before, teams have to select the problem for which the solution is being uploaded from a drop-down menu. When a team submits the zip archive, the following sequence of events occurs:

1) The contents of the web form, including the zip file, are submitted using the HTTP POST mechanism.

Scorecard

<table> <thead> <tr> <th>Team</th> <th>Problem 1</th> <th>Problem 2</th> <th>Summary</th> </tr> </thead> <tbody> <tr> <td></td> <td>Elapsed n</td> <td>Elapsed n</td> <td>Total n</td> </tr> <tr> <td>nxb8651</td> <td>00:57</td> <td>00:58</td> <td>01:56</td> </tr> <tr> <td>nxb8652</td> <td>00:59</td> <td>-</td> <td>-</td> </tr> </tbody> </table>

2) Before the form is submitted, a numeric value (or action code) is set in the request object, to be read by the ACM Servlet's service method. The team name and the problem number of the file being uploaded are also set.

3) The RequestDispatcher reference for the ACM Servlet (the target Servlet) is acquired, and the JSP file's request and response references are passed as parameters to the include method. Calling the include method on the Servlet via the RequestDispatcher passes control to the ACM Servlet's service method.
4) The numeric value (action code) in the request allows the server to handle grade assignment and file upload differently. Based on the action code specified in the request (set before the file upload), the ACM Servlet delegates the task to the correct method of the ServiceProvider. In the case of a team submission, it parses the multipart form data and stores a copy of the file submitted by the team. A new record is also inserted into the ACTIVITY table to acknowledge the receipt of the team's submission.

5) The Servlet's service method updates a flag to indicate that the team standings have changed. Based on this flag, the "notifier" determines that the standings have changed and invokes the enqueueScore method on each scoreboard. The scoreboards are refreshed with the latest scores.

## 7.2 Obtaining the scoreboard implementation

The class files containing the team and judge Applets are packaged into separate zip files and placed at the root of the web application. This location is called the Applet codebase and is a URL relative to the root of the web application. When a team or judge opens the web page containing the Applet, the zip file is automatically downloaded from the location specified by the codebase attribute. Apart from the class files required for painting the scoreboard (and the related data structures), the archive also contains the surrogate, or stub, classes for the ACM server; the scoreboard uses these to invoke remote methods on the server.

## 7.3 Downloading latest team submissions

Judges have a facility to download team submissions directly from the scoreboard. The scoreboard Applet available to a judge is context-sensitive: the judge right-clicks with the mouse and chooses the "Get Submission" option to download the zip archive containing the solution. The menu is implemented as a PopupMenu, and the event generated as a result of the mouse click is handled by the Applet.
The actual download is implemented by redirecting the judge to a web page that independently retrieves the latest submission from the file system, based on the logs in the ACTIVITY table. The file download is achieved by setting the "Content-Disposition" response header and writing the contents of the file to the response stream.

## 7.4 Scoreboard refresh and computation

During scoreboard initialization, the Applet registers itself with the ACM server. During this self-registration process, it provides its Remote object reference to the ACM server, which allows the ACM server to maintain a collection of scoreboard references in a hash table. The scoreboard Applets, being Remote objects themselves, can be invoked from the ACM server; this mechanism is used by the ACM server to implement client callbacks.

When the ACM Servlet is initialized, it spawns a "notifier" thread. Periodically, this thread determines whether the standings have changed. Once the standings have changed, the notifier thread reads all the records from the ACTIVITY table and creates and populates a Score data structure for each problem and team. An array of this data structure, containing the standings for the problems solved by each team, is transported to the scoreboard Applet via RMI. This information is used by the Applet to recreate the scoreboard with the latest standings.

**Implementation: Scoreboard**

The scoreboard is implemented as an Applet implementing the `Remote` interface. Visually, the scoreboards visible to the judges and the teams appear to be the same; a scoreboard available to a judge additionally has the capability to download the latest team submissions and assign grades. Each row of the scoreboard (excluding the headers) is a `Panel` implementing the `Comparable` interface. Teams are arranged in descending order based on the value in the summary column (the last column in each row).
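The row ordering can be sketched as a `Comparable` implementation: more problems solved ranks higher, with ties broken by least cumulative time. The class and field names below are hypothetical; the real rows are AWT `Panel`s, but the comparison logic is the same:

```java
import java.util.Arrays;

// Hypothetical sketch of the scoreboard-row ordering described above.
public class Row implements Comparable<Row> {
    final String team;
    final int solved;          // number of correctly solved problems
    final long cumulativeSecs; // cumulative time over correct submissions

    Row(String team, int solved, long cumulativeSecs) {
        this.team = team;
        this.solved = solved;
        this.cumulativeSecs = cumulativeSecs;
    }

    @Override
    public int compareTo(Row other) {
        if (solved != other.solved)
            return Integer.compare(other.solved, solved); // more solved ranks first
        return Long.compare(cumulativeSecs, other.cumulativeSecs); // then less time
    }

    public static void main(String[] args) {
        Row[] rows = {
            new Row("nxb8952", 1, 1099), // 18:19
            new Row("nxb8951", 1, 1063), // 17:43
        };
        Arrays.sort(rows);
        System.out.println(rows[0].team); // prints nxb8951 (leading team first)
    }
}
```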
**Computation and Logic**

Once the standings have changed, the ACM server calls the `enqueueScore` method on each scoreboard Applet using its `Remote` reference, providing the `Score` array as a parameter. Each `Score` data structure represents the standing for one problem and team, and it is rendered automatically.

**Scoreboard auto-refresh**

The scoreboard is computed dynamically: whenever a team makes a submission or a judge assigns a grade, the scoreboards are refreshed. The scoreboard is a matrix in which each team occupies a row and each problem a column.

**Team Scorecard [at the beginning of the contest]**

<table> <thead> <tr> <th>Team</th> <th>Problem 1</th> <th>Problem 2</th> <th>Summary</th> </tr> </thead> <tbody> <tr> <td></td> <td>Elapsed n</td> <td>Elapsed n</td> <td>Total n</td> </tr> <tr> <td>nxb8951</td> <td>-</td> <td>-</td> <td>00:00 0</td> </tr> <tr> <td>nxb8952</td> <td>-</td> <td>-</td> <td>00:00 0</td> </tr> </tbody> </table>

**Team Scoreboard [after uploading a solution]**

As soon as a team uploads the solution for a given problem, the scoreboard refreshes itself. If the problem is being attempted again, the count in the problem column is incremented; the elapsed time changes as well. The elapsed time is the difference between the time the submission was made and the time the contest started, displayed in hour-minute-second precision.

The scoreboard available to a judge appears visually similar to the teams'; the major difference is that it is context-sensitive. A pop-up menu is displayed when the judge right-clicks over a team's standings. The options available to a judge include grade assignment and downloading of team solutions.

**Judge Scoreboard [Assigning the grade]**

Once the latest submission has been downloaded, the judge has to externally compile the source files using the build scripts provided by the team.
Output matching with the correct solution has to be done manually by the judge before grade assignment. The correct/required output for a problem is automatically inserted into the archive by the ACM server before the archive containing the team’s solution is saved in the server file system. A judge compares the output of the team’s submission with the correct solution prior to grade assignment. The “Assign Grade” option redirects the judge to a web page with radio buttons to assist the judge in assigning a grade. If the judge marks the solution as incorrect, a detailed reason can also be selected from a drop-down menu.

**Judge Scoreboard [marks them as correct]**

[Grading form: “The solution for problem 1 by nxb8951 is:” with radio buttons for Unassigned / Correct / Incorrect, a “Reason, if incorrect” drop-down (e.g. Incorrect Output), and Submit/Reset buttons.]

As soon as the judge marks the solution as correct, all scoreboards are automatically updated with the latest standings.

**Scoreboard Sorting**

The judge and team scoreboards are sorted automatically at run time. Teams leading the contest are arranged at the top. Sorting is performed exclusively on the summary column. For a team, this column displays the cumulative time for correct submissions and the total number of correct submissions. At the end of the contest, the winning team is the one that has solved the most problems correctly in the minimum cumulative time.

<table> <thead> <tr> <th>Team</th> <th>Problem 1 Elapsed n</th> <th>Problem 2 Elapsed n</th> <th>Summary Total n</th> </tr> </thead> <tbody> <tr> <td>nxb8951</td> <td>17:43</td> <td>-</td> <td>17:43 1</td> </tr> <tr> <td>nxb8952</td> <td>-</td> <td>-</td> <td>00:00 0</td> </tr> </tbody> </table>

The scoreboard sorts itself on the basis of the total number of correct submissions, followed by least cumulative time, in that order. Correct submissions are marked in green.
<table> <thead> <tr> <th>Team</th> <th>Problem 1 Elapsed n</th> <th>Problem 2 Elapsed n</th> <th>Summary Total n</th> </tr> </thead> <tbody> <tr> <td>nxb8951</td> <td>17:43</td> <td>-</td> <td>17:43 1</td> </tr> <tr> <td>nxb8952</td> <td>-</td> <td>18:19</td> <td>18:19 1</td> </tr> </tbody> </table>

**Safeguards against thread freezing**

The ACM Server invokes the Remote enqueueScore method of the scoreboard whenever there is a change in standings. This method internally calls the refresh method of the scoreboard. The refresh method obtains the system event queue reference and calls its invokeLater method. This method requires a target reference whose run method will be invoked. The scoreboards implement the run method themselves; therefore the “this” reference is passed as a parameter. The event-dispatching thread in turn calls the run method and executes the steps required to refresh the scoreboard. The invokeLater method returns immediately, since the code in the run method is executed asynchronously on the event-dispatching thread.

7.5 Clarification mechanism

Teams can seek clarifications from judges for ambiguous questions. A judge replies to these questions. Clarifications are classified on the basis of problem numbers. Teams provide the text of the clarification in the text area provided. The text is submitted to the ACM server using the HTTP POST mechanism. The submitted text is programmatically converted to an e-mail and transported to the judge. The JavaMail API supports the SMTP protocol. The e-mail is addressed to a single judge in the contest. The judge replies to the e-mail using any e-mail client of choice and forwards it to a group account so that this information is shared by all participating teams. This e-mail is available in the “View All Clarifications” page. This page displays all contest related e-mails.
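The subject-based selection behind this page can be sketched as follows; the tag string, class name and method names are assumptions, and the actual JavaMail calls are omitted so the sketch stays self-contained:

```java
import java.util.*;

public class ClarificationFilter {
    // Tag assumed for illustration; the real marker string lives in MailTransport.
    static final String TAG = "[ACM-CLARIFICATION]";

    // Applied before sending: prefix the subject so contest mail can be found later.
    static String tagSubject(String subject) {
        return TAG + " " + subject;
    }

    // Applied when reading the mailbox: keep only messages carrying the tag.
    static List<String> filterContestMail(List<String> subjects) {
        List<String> contest = new ArrayList<>();
        for (String s : subjects) {
            if (s.startsWith(TAG)) {
                contest.add(s);
            }
        }
        return contest;
    }

    public static void main(String[] args) {
        List<String> inbox = Arrays.asList(
                tagSubject("Problem 2 output format?"),
                "Unrelated newsletter");
        System.out.println(filterContestMail(inbox));
    }
}
```

In the application the same two steps are performed on real `Message` objects fetched over IMAP or POP; only the string matching is shown here.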
E-mails are displayed by using a pattern-matching mechanism (on the subject field) to selectively display contest related e-mails. The POP and IMAP protocols are supported by JavaMail to obtain e-mails from the mail server.

7.6 Application security

Only bona-fide users having the required roles can access the ACM application. Three roles have been defined for this product. From an application perspective, any user assigned an “acmTeam” role is a participating team. Similarly, “acmJudge” and “acmAdmin” roles exist for judge and administrative users respectively. The ACM application authenticates and authorizes the user before a page is displayed. This security feature prevents unauthorized access to web pages accessible to users having different roles (even if the URL of the web page is known to a potentially malicious user).

An administrator is responsible for creating the contest users and assigning them appropriate roles. In the case of Tomcat, the administrator first has to add users to the tomcat-users.xml file before adding the contest participants. This is necessary because the resources are protected using system user information. If participants attempt to access a web page normally unavailable to them, they receive an error message stating that their credentials do not permit access to the requested resource. Each web page in the application is protected by roles. The page-wise protection is specified in the web.xml web application deployment descriptor file.

8. Application: Packages

The ACM application is packaged into a single Web Application Resource or WAR file. Packaged within the WAR file are the JSP files, style sheets, images and Java classes that comprise the application. The Application Deployment Guide Document describes the WAR file and its content in complete detail. Some of the most important classes and their use are described below.
(Please note that this is not a comprehensive list of all the classes constituting this application. Some have been omitted for the sake of clarity.)

8.1 Package: server

Class: ACMServer

This class is a Servlet class. The init method creates the subdirectories for storing the team submissions and the subdirectories containing the judge’s data files used for comparison during grading. It also spawns a “notifier” thread that updates the clients with the latest standings in case of a change. The service method accepts team submissions and delegates them. It services judges’ grading requests and administrator requests to freeze and unfreeze the scoreboard.

The ACM Server subclasses `GenericServlet` and provides implementations of the `Runnable` and `Remote` interfaces. The `init` method is inherited from the `GenericServlet` superclass. The ACM Server overrides this implementation by performing the following activities in the order given below.

The ACM Server creates the directory structure required for storing team submissions. This is an optimization to avoid directory creation at run time. It also creates the directory structure for storing the correct output; the number of directories created equals the number of problems for the contest. The text files in these directories are automatically inserted into the archive when a team submits a solution for a given problem. This assists the judges in comparing the output of the team’s submission with the correct solution before grades are assigned. The `init` method also starts the “notifier” thread, which is responsible for updating the client scoreboards with the latest standings.

All contest specific requests are routed through the ACM Servlet’s service method. Therefore, this method is organized on the basis of activity codes. Requests reach the service method by invoking the `include` method on the ACM Servlet’s `RequestDispatcher`; before the include method is invoked, a numeric activity code is set on the request object.
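That routing can be sketched as a switch over the activity codes. The numeric values 5, 6 and 8 and the constant names are taken from Section 10.1; the strings returned here are purely illustrative, since the real code forwards to helper classes such as ServiceProvider:

```java
public class ActivityDispatch {
    // Codes from Section 10.1; names mirror the ActivityCode constants.
    static final int GRADE_CORRECT = 5;
    static final int GRADE_INCORRECT = 6;
    static final int SUBMIT_SOLUTION = 8;

    // Sketch of the switch inside the service method; the real code performs
    // the action rather than describing it.
    static String dispatch(int activityCode) {
        switch (activityCode) {
            case SUBMIT_SOLUTION:  return "store submission and log activity";
            case GRADE_CORRECT:    return "record correct grade and notify scoreboards";
            case GRADE_INCORRECT:  return "record incorrect grade and notify scoreboards";
            default:               return "unknown activity";
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(SUBMIT_SOLUTION));
    }
}
```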
The service method determines the steps to be performed based on this activity code. Syntactically, this method is broken down into the cases of a switch statement. The `destroy` method stops the “notifier” thread.

**Class: ServiceProvider**

The ACM Server delegates to this class the task of storing the team submission as an archive in the correct directory and updating the logs (ACTIVITY table) with the available information. Each team submission is handled by this class.

**Interface: ScoreboardService**

An implementation has to provide the `enlist` and `delist` methods. Only enlisted clients receive standing-change notifications. The onus of delisting is on the clients themselves; it happens automatically in the `destroy` method of the Applet. Recall that scoreboards are implemented as Applets. This interface also provides methods that supply information about participating teams and contest problems.

### 8.2 Package: scoreboard

#### System Event Queue and AWT Event Dispatch Thread

GUI classes from the AWT library push events (in response to user actions such as button presses and mouse movement) into the System Event Queue. The AWT Event thread periodically pops an event from the System Event Queue and delegates it to the appropriate event handler associated with that GUI component.

The AWT Event Dispatch Thread is important for this application because the ACM server notifies a standing change by calling the `enqueueScore` method on the scoreboard, which in turn calls the `invokeLater` method. This method queues code execution with the AWT Event Queue. It is a mechanism by which a non-event-dispatching thread delegates the code (to be executed) to the event-dispatching thread. It is the standard way of notifying the GUI of a non-GUI event without blocking. When the Event thread is free, it executes the code (in the run method) that was requested via the `invokeLater` method.
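As a minimal illustration of that hand-off: `java.awt.EventQueue.invokeLater` is the real API, while the `RefreshSketch` class and its `refresh` method are hypothetical stand-ins for the scoreboard code.

```java
import java.awt.EventQueue;
import java.util.concurrent.CountDownLatch;

public class RefreshSketch {
    // Called from the RMI thread; hands the repaint work to the
    // event-dispatching thread instead of touching the GUI directly.
    static void refresh(Runnable repaintWork) {
        EventQueue.invokeLater(repaintWork);  // returns immediately
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        refresh(done::countDown);  // stand-in for rebuilding the scoreboard rows
        done.await();              // the Applet itself never needs to wait
        System.out.println("scoreboard work ran on the event thread");
    }
}
```

The latch is only there to make the demonstration observable; in the product the RMI thread simply returns after queueing the work.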
#### Class: TeamScoreboard

*TeamScoreboard* is an AWT GUI Applet implementing the *Judge* interface. When a team logs into the contest management system, the Applet is sent to the client from the “codebase” along with the HTML page contents. When the client Applet code is executed within the browser, the client registers its reference with the ACM server. Being a *Remote* implementation, it can be invoked via a distributed call. This mechanism is utilized by the ACM Server to queue messages with the client scoreboard.

#### Class: JudgeScoreboard

JudgeScoreboard is an AWT GUI Applet. It also implements the Judge interface. When a judge logs into the contest management system, the Applet is sent to the client from the "codebase" along with the HTML page contents. When the client Applet code is executed within the browser, the client registers its reference with the ACM server. Being a Remote implementation, it can be invoked via a distributed call. This mechanism is utilized by the ACM Server to queue messages with the client scoreboard. The JudgeScoreboard receives all the score-change notifications that a team scoreboard receives. A JudgeScoreboard additionally allows a judge to download the team submissions from the server and assign grades.

8.3 Package: io

Class: FileUploadAssistantImpl

A factory implementation provides this class to the caller (the ACM Servlet). Teams upload their submissions to the server. The files are uploaded over HTTP using the FORM POST mechanism (as a multi-part message). This class reads the input stream, reconstructs the uploaded files and stores them at the appropriate locations in the server file system. It uses the Apache Commons library to provide this functionality. The files are stored as a zip archive under the directory structure. Before the archive is stored in the file system, the judge’s data file (used to compare the team’s output) is also inserted into the archive.
This file assists the judge in performing output comparison prior to grade assignment.

Class: MailTransport

The TransportFactory class provides an implementation for the transport of messages. The present implementation uses the SMTP and IMAP protocols for clarification transport. Each message is mapped to a *Clarification* data type. The *MailTransport* class provides implementations for two methods, namely send and receive. The *send* method sends a single clarification to the judges using the SMTP protocol as the medium of transport. Before the clarification from the team is sent to the SMTP server, the e-mail’s subject is modified with an appropriate string for easy lookup later. The receive method retrieves all the messages from the user’s mailbox, excluding messages in the team’s inbox that are irrelevant to the contest. The *receive* method filters the e-mail messages based on a string pattern in the e-mail’s subject. Recall that this pattern was introduced by the *send* method.

8.4 **Package: db**

**Class: ActivityManager**

During the course of the contest, if the scores or standings change, the ACM server notifies all the clients of the score change. The task of obtaining the latest scores from the underlying database is performed by this class. It queries the ACTIVITY table and obtains the details for teams that have submitted a solution at any time or have been graded. As soon as the standings change, the “notifier” thread invokes the *getStandings* method on the *ActivityManager* class, which returns an array of *Score* objects. This array is used to update the scoreboards with the latest standings. To reduce the payload size of the array sent from the server to the scoreboard, the *getStandings* method excludes *Score* objects for problems which the teams are yet to attempt or solve.

**Class: SQLQueryManager**

This is a placeholder class consolidating all the SQL statements issued to the database for execution in a central location.
SQL statements are stored in the form of strings. This centralization permits code change in case there are any non-standard query requirements from the underlying database.

**Class: ConnectionPool**

The current implementation wraps the *DataSource* class. However, the class leaves enough room for any special database or application specific requirements if necessary.

8.5 **Package: data**

**Class: Score**

This data structure represents the standing per problem per team. Each *Score* instance contains the problem number, team name, time elapsed and a boolean flag indicating whether this problem has been graded by a judge. Since this class is sent across the wire, it implements the *Serializable* marker interface. As soon as the standings change, the ACM server updates all registered *Scoreboard*s with updated scores in the form of an array of *Score*s. The *Scoreboard* iterates through the array and reconstructs the scoreboard.

Class: Contest

This data structure contains information about the contest itself, including details like start time, anticipated end time, status of the contest, the e-mail address to which team clarifications are forwarded, and locale information. Locale information from this class is used to achieve application internationalization. At contest start, this class is populated from the userplus.xml file. Once this information has been written to the database, subsequent reads are served from the database.

8.6 Package: config

Class: ContestDefaults

This class exposes only static methods. It provides methods to obtain information about the participating teams, problems, judges, locale and the contest. This class provides its callers [JSP pages and the Servlet] with meta-data such as the list of problems, teams, contest details and status. For optimization, this information is read once during server startup and requires a refresh if the underlying configuration files or database change.
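The Score data structure described in Section 8.5 can be sketched as follows; the field names are assumptions, and the elapsed-time display is rendered here as minutes:seconds to match the sample scoreboards (e.g. “17:43”):

```java
import java.io.*;

// Sketch of the Score data structure; the real class lives in the data package.
public class Score implements Serializable {
    String teamName;
    int problemNumber;
    long elapsedMillis;   // submission time minus contest start time
    boolean graded;       // true once a judge has assigned a grade

    Score(String teamName, int problemNumber, long elapsedMillis, boolean graded) {
        this.teamName = teamName;
        this.problemNumber = problemNumber;
        this.elapsedMillis = elapsedMillis;
        this.graded = graded;
    }

    // Elapsed time as shown in a scoreboard cell, assumed mm:ss here.
    String elapsedDisplay() {
        long totalSeconds = elapsedMillis / 1000;
        return String.format("%02d:%02d", totalSeconds / 60, totalSeconds % 60);
    }

    public static void main(String[] args) {
        Score s = new Score("nxb8951", 1, 1063000L, true);
        System.out.println(s.teamName + " problem " + s.problemNumber
                + " " + s.elapsedDisplay());
    }
}
```

Because the class implements `Serializable`, an array of these objects can be passed directly as the argument of the RMI `enqueueScore` call.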
8.7 Package: jsp

Class: LeftMenu

This class maintains the hyperlink information used to display the web application’s left menu. Information required by this class is loaded once during server startup from the LeftMenu.xml configuration file. This XML file is made up of three separate sections (one for each role). Each section in the configuration file lists the URLs accessible to a user having a particular role. This mechanism allows addition or removal of hyperlinks/options on the basis of roles without any change in application code.

8.8 Package: users

Class: Team

Each instance of this class represents a single participating team. Product specific user information, including e-mail password and name of the representing institution, is available to the product from this class. Since this information does not change during the duration of the contest, it is loaded from the userplus.xml file during server startup. The Scoreboard Applet queries the ACM Servlet for the list of Teams to display the scoreboard.

Class: Judge

Each instance of this class represents a single judge. Contact details (name, e-mail address) and location (room number) information is available to the product through this class. For performance optimization, this information is loaded from the userplus.xml file once during server startup and maintained until server shutdown. Information available in this class is specifically used to display the “Who, where and what” page, accessible only to judges.

9. Application: Configuration files

The ACM application can easily be customized if required. The application relies extensively on information available from configuration files. Some of the files described below are related to the contest and need to be modified for each new contest. The rest need not be touched under normal circumstances, unless there is a compelling reason to modify or customize the application itself.
9.1 Internationalization files: MessagesBundle_xx_XX.properties

The ACM application supports internationalization, i.e., the application displays the textual content (verbiage and error messages) based on the region or locale. To support this feature, all the web pages use the `ResourceBundle` class to obtain locale specific text from the MessagesBundle property files. File names containing locale specific information have to follow the ISO codes for specifying language (ISO 639) and country (ISO 3166). A separate file is maintained for each unique combination of language and country. Based on the locale specified, the appropriate file is read and the entire application appears in that language. Support for a new language not provided by default is as simple as creating an additional property file; it does not require source code recompilation. For example, a code snippet of an internationalized section of the application code would appear as follows:

```java
ResourceBundle messages =
    ResourceBundle.getBundle("MessagesBundle", defaultLocale);
messages.getString("UserName");
```

Depending on the locale set, one of the following region specific descriptions would be displayed.

<table> <thead> <tr> <th>Country/Language</th> <th>US/English</th> <th>Germany/German</th> <th>France/French</th> </tr> </thead> <tbody> <tr> <td>MessagesBundle_xx_XX.properties</td> <td>en_US</td> <td>de_DE</td> <td>fr_FR</td> </tr> <tr> <td>Sample name-value pairs</td> <td>[logon.jsp] UserName = User Name Password = Password</td> <td>[logon.jsp] UserName = Benutzer-Name Password = Kennwort</td> <td>[logon.jsp] UserName = Nom D'Utilisateur Password = Mot de passe</td> </tr> </tbody> </table>

9.2 Application menu options file: LeftMenu.xml

Depending on the user’s role, a user is provided different options. Therefore the options available to the team, judge and the administrator are different. Judges are allowed to download individual team submissions.
It logically follows that teams should not be allowed to access this feature. This information is maintained in a configurable XML based file named LeftMenu.xml. The actual protection and role-to-resource mapping is done in the web.xml file.

9.3 Contest user details file: userplus.xml

This file is in addition to any application server specific requirements (for example, the tomcat-users.xml file in the case of the Tomcat application server). This file maintains details about teams registered for the contest. Before the commencement of a contest, a new section has to be added to this file.

10. Logging and server crash recovery

The ACM application records important events during the duration of the contest using activity codes. Solution submission and grading activity is therefore recorded along with a timestamp. Each action results in a record in the ACTIVITY table. Important events in the lifetime of a contest, like solution submission and grading, are classified by the use of unique activity codes. Clarification requests have been excluded from the logging activity. After a crash, the server reads the records for the contest from the ACTIVITY table and automatically resurrects the scoreboard.

10.1 Activity Codes

The ACM Server logs contest information in the ACTIVITY table. Actions performed by a team and a judge are differentiated on the basis of activity codes. These activity codes assist the server in rendering the latest scoreboard during crash recovery. In the event of a change in standings, the ACM server’s “notifier” thread iterates through the ACTIVITY table and reads the records having activity codes 5 and 8. It creates an array of Scores and updates all the registered scoreboards with the latest standings. The main activity codes are listed below.

<table> <thead> <tr> <th>Sr.
No</th> <th>Activity</th> <th>Activity Code</th> </tr> </thead> <tbody> <tr> <td>1.</td> <td>Judge determines that the submitted solution is correct.</td> <td>5 (ActivityCode.GRADE_CORRECT)</td> </tr> <tr> <td>2.</td> <td>Incorrect solution.</td> <td>6 (ActivityCode.GRADE_INCORRECT)</td> </tr> <tr> <td>3.</td> <td>Team uploaded/submitted the solution to the ACM server</td> <td>8 (ActivityCode.SUBMIT_SOLUTION)</td> </tr> </tbody> </table>

11. Security

Security is implemented at various levels:

1. Security has been given importance right from the choice of technology and programming language. Java is a type-safe language. Java Applet technology, via its sandbox model, allows only limited access to the file system for code executing from the web browser.
2. Clarifications use mail technologies like IMAP and SMTP. This application is also secure-socket ready; clarifications can be transported over secure channels.
3. Client-side Applet code containing the scoreboard functionality is packaged into separate archives. Even if the file names are known, the user has to have the relevant credentials (via roles) to access them.
4. The client side policy restricts the work an Applet can do within the Applet sandbox model.
5. All web pages are protected by the corresponding roles. Therefore, teams cannot access resources available to a judge or an administrator even if the URLs somehow became known.

12. Choice of technology

12.1 Microsoft based solution

Microsoft provides ActiveX controls. ActiveX controls are self-registering Component Object Model (COM) components (reusable code implementing the IUnknown interface). Apart from implementing the mandatory IUnknown interface, an ActiveX control can implement user-defined interfaces to receive notifications or callbacks from a server. It can also invoke web services to query the latest standings. A web service is a platform neutral way of invoking a method or operation.
The input parameters, types, return values, and methods/operations are defined in a standard way using WSDL. The WSDL also specifies the location (host and port details) of the service. Based on the WSDL, the server/callee provides the implementation of the web service, and the caller uses the WSDL to generate the client code used to invoke the web service. The calling code is platform specific. ActiveX controls could use web services to query the latest scores from a service endpoint. However, the biggest disadvantage of a Microsoft ActiveX based solution is that it is tied to the Microsoft Internet Explorer browser and Windows technology. Moreover, popular browsers like Netscape and Mozilla are not capable of executing client side ActiveX controls as-is.

12.2 Java based solution

Over the years, Java technology has matured to provide seamless support for networking and security. Java compilers and runtimes are available for most operating systems today. Recall that while compiled Java bytecode is portable, the runtime itself is platform specific. Java is supported by almost all modern browsers, including Microsoft Internet Explorer, Netscape and Mozilla. Java is therefore the logical choice for this application. Deploying the Applet via the Java plug-in is a portable way of deploying the Applet.

**Choice of programming language**

1. Ease of programming. The Java language does not provide pointer operations. Compared with C and C++, that is a huge advantage for a novice programmer.
2. All modern browsers support Sun’s JVM. Therefore the Java APIs and their extensions can be used across browsers. Microsoft’s VM (MSJVM), capable of executing a proprietary version of Java, is being phased out.
3. The Java compiler generates bytecode as its target. The class files containing the bytecode (virtual machine instructions) can be executed on any platform providing a runtime environment.
4. Java provides APIs for accessing XML files with ease, and imposes no platform or OS requirements like Microsoft’s ActiveX.
This application does not rely on any platform or OS specific feature like Microsoft’s ActiveX technology, which is not supported by most non-Microsoft/Windows browsers. Therefore that technology was not used for the contest management application.

**13. Other contest management systems**

Two contest management products are available: PC² and Mooshak.

**13.1 PC²**

Developed by California State University, Sacramento, PC² is a Java based contest management application based on the client-server model. The server and client parts are stand-alone Java applications. Accounts are categorized as administrators, judges, teams and scoreboard. There is a separate application (“module”) corresponding to each account category. Teams use their own module to submit solutions.

**13.1.1 Product organization**

PC² is composed of multiple programs or modules. The team module is a tabbed stand-alone Java application. Having one team application per machine is recommended but not a necessity; the only requirement is that each module of a given type execute from a different directory in the file system. The team module assists the team in submitting the solution, requesting clarifications from a judge and viewing the status of a prior submission or run. Teams can also perform a test run of the solution using their own input file before submitting it for grading by the judges.

The judge module is also a tabbed stand-alone Java application executed separately. A judge uses this module to verify the solutions and update the standings. Clarifications are persisted, so if teams close their module in between, they can view the clarifications at a later time.

The scoreboard module periodically fetches the latest standings and stores them in the local file system as HTML files. More than one version of the scoreboard is provided. Only one scoreboard module is needed to view the latest standings.
An administrator can hyperlink the HTML files displaying the standings from a web server to let the contestants view their latest standings. The delay in refreshing the scoreboard is configurable. The administrator is responsible for starting/stopping the contest and the contest clock. Additionally, creation of problems and teams and configuration of Validators are also responsibilities of an administrator. The administrator can view all the prior team runs and the current team status (logged in / clarification requested / run made, etc.) using the module.

**Comparison**

Unlike this product, team standings in PC² are not updated automatically. Instead, the latest standings are fetched when the team requests an updated scoreboard.

**Additional features supported by PC²**

- PC² provides an optional facility to grade the submission via the use of *Validators*. *Validators* are implemented akin to the UNIX ‘diff’ command. Additionally, a rudimentary option for relaxing white-space comparison is provided, one option at a time.
- Facility for teams to perform test runs before the actual submission: code compilation and execution on the local machine using the participating team’s own data file (provided alongside).

**Disadvantages of PC²**

- PC² requires a client-side Java runtime installation, and the module containing the application needs to be available on the machine where it is run.
- Separate applications for teams, judges, administrator and scoreboard.
- The scoreboard is not real-time. In PC², a separate “board” module updates the scoreboard periodically (based on a configuration parameter). Hyperlinks to this location in the file system (or to a separate location where the files are copied) have to be used to view the scoreboard.

**13.2 Mooshak**

Mooshak is based on a two-tiered web model. It is implemented using legacy CGI technology and deployed on the open source Apache web server.
CGI technology relies on inter-process communication between the web server and the process generating dynamic content for each request using the CGI protocol. Tcl scripts deployed under the *cgi-bin* directory generate the HTML content eventually rendered by the web browser.

**13.2.1 Product organization**

Mooshak provides different sets of web interfaces (or views) depending on the permission level of the user attempting to access the application. It is very similar in function to the other software solutions. Mooshak requires any browser with basic support for JavaScript. Three main types of views are available, namely contestant, judge and administrator. A contestant view allows a team to upload submissions and request clarifications. Teams can also view clarifications requested by other teams using a web interface. A judge view allows judges to answer clarifications, grade submissions and manage printouts. Grading is performed automatically by the application, so a judge is not required to grade each submission; however, the option of re-evaluating a submission manually is also provided. Administrative access is required to configure or prepare the software for the contest. An administrator is also responsible for maintaining problems, adding teams, and maintaining language (compile-time and runtime) settings for automatic grading.

**Additional Features**

- Automatic grading – Automatic grading is a two step process involving close analysis of source code and program output. Static analysis includes verification of source code compilation and size. If the static test fails, Mooshak halts further analysis. Dynamic analysis involves verification of output, presentation errors and run time errors as a result of program execution. Depending on the results of the analysis, a numeric severity value for the team submission is assigned. A correct solution has the least severity.
- Printout management – Mooshak provides commands with which teams can take printouts of the source code and work on the solutions separately.

**Disadvantages of Mooshak**

- Mooshak is built using legacy CGI technology. Each web request results in the creation of a new server side process to cater to the request and generate HTML content.

**Advantages of this application over PC² and Mooshak**

- User experience is based on the roles assigned to the user. This product does not require separate client installation.
- The scoreboard is integrated into the web application. No special user/application is needed to view the scoreboard.
- The scoreboard is automatically updated as soon as the standings change.
- This product is a 3-tier web application using a modern object oriented language.
- This product is scalable: the database and the application server can be scaled up. Scaling up applications that use files for maintaining logs can be difficult.
- Servlet technology services client requests on separate threads rather than spawning child processes as in CGI based applications. It is less CPU intensive.

14. Future enhancements

1) This application relies on Java RMI technology to notify clients (scoreboard Applets) of score changes. Since some organizations disallow the required ports, a version of this solution capable of tunneling requests through a proxy server over HTTP would be beneficial.
2) The e-mail password (used to read the user’s inbox) is maintained in clear text. For added security, it could be maintained in an encrypted format.

Conclusion

A combination of technologies including Java Servlet, RMI and JDBC can be used to design and build a modular and efficient web based contest management application. A web based product design removes the need for separate application installation, typical of other contest management systems. Applet-Servlet communication can be used to implement an automatically refreshing scoreboard.
Scores are updated via callbacks as soon as the standings change. This mechanism results in fewer messages between the scoreboard (on the client side) and the server when compared with periodic page refresh. Networking protocols such as TCP/IP, HTTP, and SMTP have been used to implement the clarification mechanism; the Java networking and mail APIs provide implementations of these protocols. Servlet technology avoids the overhead of spawning many processes to service client requests. Even with Java's "Write once, run anywhere"[4] guarantee, Java Applet code can encounter web browser portability issues; Java plug-in technology could be utilized to minimize their effects. Newer technologies using web services could also be employed to implement an automatically refreshing scoreboard.

Appendix: Web Application Deployment

The ACM application is deployed as a WAR file (web archive) on the application server.

Steps to run and access the ACM application

- Admin: Download and copy the policy and key store files under the $HOME directory for each user logging in.
- Admin: Modify the userplus.xml file to include team, problem, and judge information for the current contest.
- Admin: Start the MySQL database, ensuring that the acmdb instance/database is created.
- Admin: Start the RMI registry service.
- Admin: Start the Tomcat application server.
- All: The ACM application is ready for use by all participants.

Software product requirements

The following software components are required for the ACM application to function. The application server, database, and Ant script requirements apply only to the host running the contest application. Client requirements are minimal, and most popular web browsers qualify as-is.

Application server: Apache Tomcat

On the Windows platform this amounts to downloading the archive (zip format) and extracting it to a path recognized by the Windows OS, or explicitly adding the installed package to the PATH environment variable.
The Tomcat application server, if not already present, has to be downloaded and extracted.

Database: MySQL

Contest information is ultimately maintained in a database. Storage and retrieval of data is relatively easy compared to text-file manipulation, and SQL queries can easily be used to generate meaningful reports. This implementation uses the MySQL database; however, any database allowing programmatic access via JDBC and supporting ANSI SQL statements can be used.

**Runtime and Compilation: Java**

The Java runtime and compiler are required to be installed on the host running the application server.

**Automation scripts: Apache Ant**

This requirement applies only to the host running the application server. Ant is used only for compiling the code from source files and deploying it on the application server.

**Client: Java enabled web browser**

Any web browser with Sun's Java enabled should suffice as the client; most popular browsers qualify as-is. Policy file modification might be required to permit socket communication between the scoreboard Applet and Servlet.

**External Java Library Requirements**

The ACM application relies on external libraries for database connections, file upload support over HTTP, mail services, and unit testing.
<table>
<thead>
<tr> <th>Sr. No.</th> <th>Library</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>2</td> <td>mysql-connector-java-x.jar</td> <td>JDBC drivers for MySQL database.</td> </tr>
<tr> <td>3</td> <td>activation.jar</td> <td>JavaBeans™ Activation Framework.</td> </tr>
<tr> <td>4</td> <td>mailapi.jar</td> <td>JavaMail API core classes.</td> </tr>
<tr> <td>5</td> <td>pop3.jar</td> <td>Provider for POP3 protocol.</td> </tr>
<tr> <td>6</td> <td>smtp.jar</td> <td>Provider for SMTP protocol.</td> </tr>
<tr> <td>7</td> <td>imap.jar</td> <td>Provider for IMAP protocol.</td> </tr>
<tr> <td>8</td> <td>commons-fileupload-1.0.jar</td> <td>Libraries allowing form-based file upload using HTTP.</td> </tr>
<tr> <td>9</td> <td>junit.jar</td> <td>Libraries for the JUnit test suite. Only required for unit testing purposes.</td> </tr>
<tr> <td>10</td> <td>ant-jmeter.jar</td> <td>Libraries for JMeter required by Ant.</td> </tr>
<tr> <td>11</td> <td>catalina-ant.jar</td> <td>Library containing Apache’s implementation of tasks for deployment and undeployment. This has to be placed in the “lib” directory where a new archive is being made.</td> </tr>
</tbody>
</table>

Files numbered 1 through 8 are to be placed under the $CATALINA_HOME/common/lib directory (of the Tomcat application server). Placing the archives at this location allows all the web applications deployed on this server to access the libraries. Files 9 and 10 are only required for unit testing using Ant scripts, so they are placed under the $ANT_HOME/lib directory. These libraries are not required by the application server.

**Building and Deployment process**

Building the web archive containing the implementation, and deploying it onto the application server, is achieved using the Ant utility. One build file is sufficient to automate all the tasks.

**Ant Utility**

Ant is a Java-based build utility without the limitations and problems associated with makefiles.
Ant interprets the task information from an XML file (named build.xml by default). It can be used to compile the sources, package the contents into a web archive file, and also deploy it. The build.xml file defines the various Ant tasks. The relevant Ant tasks for this application deployment are as follows:

<table>
<thead>
<tr> <th>Ant task (case sensitive)</th> <th>Description</th> <th>Notes</th> </tr>
</thead>
<tbody>
<tr> <td>all</td> <td>Do the pre-requisite tasks in a given order.</td> <td>Default task for this build file.</td> </tr>
<tr> <td>clean</td> <td>Remove the entire work directory.</td> <td></td> </tr>
<tr> <td>compile</td> <td>Compile source files and generate RMI stubs if required.</td> <td></td> </tr>
<tr> <td>install</td> <td>Install the latest version on the application server.</td> <td></td> </tr>
<tr> <td>dist</td> <td>Make a deployable distribution.</td> <td></td> </tr>
<tr> <td>javadoc</td> <td>Generate JavaDoc format source code documentation for all the compiled classes.</td> <td></td> </tr>
<tr> <td>list</td> <td>List installed applications on the Servlet container.</td> <td></td> </tr>
<tr> <td>prepare</td> <td>Create directories for building the archive.</td> <td></td> </tr>
<tr> <td>reload</td> <td>Reload the application on the Servlet container.</td> <td></td> </tr>
<tr> <td>remove</td> <td>Remove the application from the Servlet container.</td> <td></td> </tr>
<tr> <td>rmic</td> <td>Generate stubs for Remote types.</td> <td></td> </tr>
<tr> <td>archive</td> <td>Make zip file archives containing the scoreboard Applet.</td> <td></td> </tr>
<tr> <td>junit</td> <td>Perform unit test cases using JUnit.</td> <td></td> </tr>
<tr> <td>jmeter</td> <td>Perform stress testing using JMeter.</td> <td></td> </tr>
</tbody>
</table>

**Contents of the web archive file**

The Ant utility is used to compile the source files and include the web pages and resources (CSS files and images). The contents are packaged into a single Web Archive (WAR) file.
The WAR file is actually a JAR archive conforming to a particular directory structure. The structure and contents of the file are as follows:

<table>
<thead>
<tr> <th>Path</th> <th>Files/Directories/Packages</th> <th>Description/Use/Comments</th> </tr>
</thead>
<tbody>
<tr> <td>/</td> <td>*.jsp</td> <td>JSP source files for the web application.</td> </tr>
<tr> <td>/WEB-INF/classes</td> <td>MessagesBundle_xx_XX.properties</td> <td>Look-up file for internationalization.</td> </tr>
<tr> <td>/WEB-INF/classes</td> <td>acm/*</td> <td>Java class libraries/packages containing the implementation for the web application. (The packages are elaborated later in this document under a separate section.)</td> </tr>
<tr> <td>/WEB-INF/lib</td> <td>External jar archive files.</td> <td>Not being used.</td> </tr>
<tr> <td>/WEB-INF</td> <td>web.xml</td> <td>Deployment descriptor file.</td> </tr>
<tr> <td>/styles</td> <td>contest.css</td> <td>Style sheet for the entire application.</td> </tr>
<tr> <td>/images</td> <td>*.gif</td> <td>Images in GIF format used in the web pages.</td> </tr>
<tr> <td>/META-INF</td> <td>MANIFEST.MF</td> <td>Manifest file.</td> </tr>
<tr> <td>/META-INF</td> <td>context.xml</td> <td>XML file defining context, naming, session, and data sources.</td> </tr>
</tbody>
</table>

**Source file organization**

The ACM web application comprises Java source files, JSP web pages, images, style sheets, properties files, and XML configuration files. Java sources are included in the archive in compiled form. JSP files and other resources like images and style sheets are also included as-is. They are resolved relative to the WAR file root.

Java packages

The contents of the packages are described below:

1. **config**: Contains classes and interfaces that manage the application's configuration information.
2. **data**: Contains data structures that are used across other packages.
3.
**db**: Contains classes that persist data between the ACM Server and permanent storage (the database).
4. **io**: Contains classes that are responsible for the implementation of the underlying transport layer for clarifications.
5. **jsp**: Contains classes that assist in rendering JSP pages.
6. **scoreboard**: Contains client scoreboard Applet classes and interfaces.
7. **server**: Contains ACM Server interfaces and implementation classes.
8. **users**: Provides classes and interfaces that represent the categories of users that can access the ACM application.

Web page related resources

The following resources support the look-and-feel of the contest application.

- **styles**: This directory contains the style sheets required for the web pages. For the sake of simplicity, a single CSS file can change the style for the entire application. The only style sheet file is:
  - contest.css
- **images**: This directory contains the images required to support the web pages. The following images are present under the images directory:
  - dir.gif
  - dir_open.gif
  - file.gif
  - icon_bar.gif
  - icon_blank.gif
  - icon_folder_open.gif
  - icon_folder_open_topic.gif
  - red_alert.gif

Configuration files

The ACM application maintains important configuration information in XML and properties files. Information for internationalization is maintained in locale-specific properties files. They are as follows:

1. **contest.xml** - Main application configuration file.
2. **LeftMenu.xml** – Configuration file containing HTML link information for all users.
3. **userplus.xml** – Configuration file containing team and judge information. This file needs to be provided/updated in addition to the application-server-specific file containing user information [like tomcat-users.xml].
4. MessagesBundle_xx_XX.properties - Internationalization-related information file. Needs to be modified only if additional languages or locales need to be supported.
5. web.xml - ACM application deployment descriptor file.
6.
tomcat-users.xml – Tomcat server configuration file containing user account information [user name, password, and role information].
7. server.xml - Tomcat configuration file.
8. .java.policy - Java policy file, generated by the policy tool.
9. .keystore – Key store containing the trusted certificate (of the mail server); optional. Required only if a secure connection is used.

Files 1 to 4 are product-specific files. The Tomcat application server requires files 5, 6, and 7. Files 8 and 9 are required by Java at run time.

References

2. California State University at Sacramento: “Programming Contest Control System (PC²)”, USA. http://www.ecs.csus.edu/pc2/
Using Python with Caché

Caché Version 2018.1
2019-06-20

Copyright © 2019 InterSystems Corporation. All rights reserved.

InterSystems, InterSystems Caché, InterSystems Ensemble, InterSystems HealthShare, HealthShare, InterSystems TrakCare, TrakCare, InterSystems DeepSee, and DeepSee are registered trademarks of InterSystems Corporation. InterSystems IRIS Data Platform, InterSystems IRIS, InterSystems iKnow, Zen, and Caché Server Pages are trademarks of InterSystems Corporation. All other brand or product names used herein are trademarks or registered trademarks of their respective companies or organizations.

This document contains trade secret and confidential information which is the property of InterSystems Corporation, One Memorial Drive, Cambridge, MA 02142, or its affiliates, and is furnished for the sole purpose of the operation and maintenance of the products of InterSystems Corporation. No part of this publication is to be used for any other purpose, and this publication is not to be reproduced, copied, disclosed, transmitted, stored in a retrieval system or translated into any human or computer language, in any form, by any means, in whole or in part, without the express prior written consent of InterSystems Corporation.

The copying, use and disposition of this document and the software programs described herein is prohibited except to the limited extent set forth in the standard software license agreement(s) of InterSystems Corporation covering such programs and related documentation. InterSystems Corporation makes no representations and warranties concerning such software programs other than those set forth in such standard software license agreement(s). In addition, the liability of InterSystems Corporation for any losses or damages relating to or arising out of the use of such software programs is limited in the manner set forth in such standard software license agreement(s).
THE FOREGOING IS A GENERAL SUMMARY OF THE RESTRICTIONS AND LIMITATIONS IMPOSED BY INTERSYSTEMS CORPORATION ON THE USE OF, AND LIABILITY ARISING FROM, ITS COMPUTER SOFTWARE. FOR COMPLETE INFORMATION REFERENCE SHOULD BE MADE TO THE STANDARD SOFTWARE LICENSE AGREEMENT(S) OF INTERSYSTEMS CORPORATION, COPIES OF WHICH WILL BE MADE AVAILABLE UPON REQUEST.

InterSystems Corporation disclaims responsibility for errors which may appear in this document, and it reserves the right, in its sole discretion and without notice, to make substitutions and modifications in the products and practices described in this document.

For Support questions about any InterSystems products, contact:

InterSystems Worldwide Response Center (WRC)
Tel: +1-617-621-0700
Tel: +44 (0) 844 854 2917
Email: support@InterSystems.com

Table of Contents

About This Book .......................................................................................................................... 1
1 The Caché Python Binding ........................................................................................................ 3
1.1 Python Binding Architecture .......................................................................................... 3
1.2 Quick Start .................................................................................................................... 4
1.3 Installation and Configuration ....................................................................................... 4
1.3.1 Python Client Requirements ............................................................................... 5
1.3.2 UNIX® Installation ............................................................................................... 5
1.3.3 Windows Installation ............................................................................................ 6
1.3.4 Caché Server Configuration .................................................................................
6 1.4 Sample Programs .......................................................................................................... 7 2 Using the Python Binding ....................................................................................................... 9 2.1 Python Binding Basics .................................................................................................. 9 2.1.1 Connecting to the Caché Database ..................................................................... 10 2.1.2 Using Caché Database Methods ......................................................................... 10 2.1.3 Using Caché Object Methods ............................................................................. 11 2.2 Passing Parameters by Reference ................................................................................ 12 2.3 Using Collections and Lists .......................................................................................... 12 2.3.1 %Collection Objects .......................................................................................... 12 2.3.2 %List Variables .................................................................................................. 13 2.4 Using Relationships ...................................................................................................... 15 2.5 Using Queries ............................................................................................................... 15 2.6 Using %Binary Data ..................................................................................................... 16 2.7 Handling Exceptions ..................................................................................................... 17 2.7.1 Error reporting ..................................................................................................... 17 3 Python Client Class Reference ............................................................................................. 
19 3.1 Datatypes ..................................................................................................................... 19 3.2 Connections .................................................................................................................. 19 3.2.1 Connection Information ....................................................................................... 20 3.3 Database ....................................................................................................................... 21 3.4 Objects .......................................................................................................................... 22 3.5 Queries .......................................................................................................................... 22 3.5.1 prepare query ....................................................................................................... 22 3.5.2 set parameters ..................................................................................................... 23 3.5.3 execute query ....................................................................................................... 23 3.5.4 fetch results ......................................................................................................... 24 3.6 Times and Dates ............................................................................................................. 24 3.6.1 %TIME ................................................................................................................ 24 3.6.2 %DATE ............................................................................................................... 25 3.6.3 %TIMESTAMP .................................................................................................... 26 3.7 Locale and Client Version ............................................................................................. 
28

About This Book

This book is a guide to the Caché Python Language Binding. This book contains the following sections:

• The Caché Python Binding
• Using the Python Binding
• Python Client Class Reference

There is also a detailed Table of Contents. For general information, see Using InterSystems Documentation.

The Caché Python Binding

The Caché Python binding provides a simple, direct way to manipulate Caché objects from within a Python application. It allows Python programs to establish a connection to a database on Caché, create and open objects in the database, manipulate object properties, save objects, run methods on objects, and run queries. All Caché datatypes are supported. The Python binding offers complete support for object database persistence, including concurrency and transaction control. In addition, there is a sophisticated data caching scheme to minimize network traffic when the Caché server and the Python applications are located on separate machines.

This document assumes a prior understanding of Python and the standard Python modules. Caché does not include a Python interpreter or development environment.

1.1 Python Binding Architecture

The Caché Python binding gives Python applications a way to interoperate with objects contained within a Caché server. The Python binding consists of the following components:

- The `intersys.pythonbind` module — a Python C extension that provides your Python application with transparent connectivity to the objects stored in the Caché database.
- The Caché Object Server — a high performance server process that manages communication between Python clients and a Caché database server. It communicates using standard networking protocols (TCP/IP), and can run on any platform supported by Caché. The Caché Object Server is used by all Caché language bindings, including Python, Perl, C++, Java, JDBC, and ODBC.

The basic mechanism works as follows:

- You define one or more classes within Caché.
These classes can represent persistent objects stored within the Caché database or transient objects that run within a Caché server.
- At runtime, your Python application connects to a Caché server. It can then access instances of objects within the Caché server. Caché automatically manages all communications as well as client-side data caching.

The runtime architecture consists of the following:

- A Caché database server (or servers).
- The Python interpreter (see Python Client Requirements).
- A Python application.

At runtime, the Python application connects to Caché using either an object connection interface or a standard ODBC interface. All communications between the Python application and the Caché server use the TCP/IP protocol.

1.2 Quick Start

Here are examples of a few basic functions that make up the core of the Python binding:

- **Create a connection and get a database**

```python
conn = intersys.pythonbind.connection()
conn.connect_now(url, user, password, None)
database = intersys.pythonbind.database(conn)
```

database is your logical connection to the namespace specified in `url`.

- **Open an existing object**

```python
person = database.openid("Sample.Person", str(id), -1, -1)
```

person is your logical connection to a `Sample.Person` object on the Caché server.

- **Create a new object**

```python
person = database.create_new("Sample.Person", None)
```

- **Set or get a property**

```python
person.set("Name", "Doe, Joe A")
name = person.get("Name")
```

- **Run a method**

```python
answer = person.run_obj_method("Addition", [17, 20])
```

- **Save an object**

```python
person.run_obj_method("%Save", [])
```

- **Get the id of a saved object**

```python
id = person.run_obj_method("%Id", [])
```

- **Run a query**

```python
sqlstring = "SELECT ID, Name, DOB, SSN " \
            "FROM Sample.Person " \
            "WHERE Name %STARTSWITH ?"
query = intersys.pythonbind.query(database)
query.prepare(sqlstring)
query.set_par(1, "A")
query.execute()
while 1:
    cols = query.fetch([None])
    if len(cols) == 0:
        break
    print cols
```

1.3 Installation and Configuration

The standard Caché installation places all files required for the Caché Python binding in `<cachesys>/dev/Python`. (For the location of `<cachesys>` on your system, see Default Caché Installation Directory in the Caché Installation Guide.) You should be able to run any of the Python sample programs after performing the following installation procedures.

1.3.1 Python Client Requirements

Caché provides client-side Python support through the intersys.pythonbind module, which implements the connection and caching mechanisms required to communicate with a Caché server. This module requires the following environment:

- Python version 2.7 or Python 3.0+. For Windows, InterSystems supports only the ActiveState distribution, ActivePython© (www.activestate.com).
- A C++ compiler to generate the Python C extension. On Windows, you need Visual Studio .NET 2008 or higher (required by the ActiveState distribution). On UNIX®, you need GCC. The bitness of both your Python distribution and your compiler must match the bitness of Caché: 64-bit systems require the 64-bit versions of Python and compiler, and 32-bit systems require the 32-bit versions of Python and compiler.
- On RedHat Linux, the python-devel package must be installed in order to compile the python sample.
- Your PATH must include the `<cachesys>` directory. (For the location of `<cachesys>` on your system, see Default Caché Installation Directory in the Caché Installation Guide.)
- Set up your environment variables to support C compilation and linking, as described in the following sections.

1.3.2 UNIX® Installation

- Make sure <cachesys>/bin is on your PATH and in your LD_LIBRARY_PATH. (For the location of <cachesys> on your system, see Default Caché Installation Directory in the Caché Installation Guide.)
For example:

```bash
export PATH=/usr/cachesys/bin:$PATH
export LD_LIBRARY_PATH=/usr/cachesys/bin:$LD_LIBRARY_PATH
```

**Note:** Mac OS X uses DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH. For example:

```bash
export DYLD_LIBRARY_PATH=/usr/cachesys/bin:$DYLD_LIBRARY_PATH
```

- Run setup.py (located in <cachesys>/dev/python):

```bash
python setup.py install
```

The following prompt is displayed:

```bash
enter directory where you installed Cache'
```

- At the prompt, supply the location of <cachesys>. For example:

```bash
/usr/cachesys
```

The resulting lib and include paths will be displayed:

```bash
libdir=/usr/cachesys/bin
include dir=/usr/cachesys/dev/cpp/include
```

- Run test.py (located in dev/python/samples) to test the installation:

```bash
python test.py
```

Do not run test programs from <cachesys>/dev/python, or the test program will not be able to find the pythonbind module. The python path is relative, and you will pick up files from the intersys subdirectory instead.

1.3.3 Windows Installation

- Make sure your path and environment are set up to run the Microsoft C/C++ compiler. Follow the Microsoft instructions. For example, from the command line run `vsvars32.bat` for 32-bit systems, or `vcvarsall.bat x64` for 64-bit systems. These files set up the path and environment variables for using the Microsoft C/C++ compiler. Please read your Microsoft documentation to determine the location of these .bat files, since the location varies depending on the version of Visual Studio you are using.
- Run `setup.py`, located in `<cachesys>/dev/python` (for the location of `<cachesys>` on your system, see Default Caché Installation Directory in the Caché Installation Guide): `python setup.py install` The following prompt is displayed: `enter directory where you installed Cache'`
- At the prompt, supply the location of `<cachesys>`.
For example:

```
C:\Intersystems\Cache
```

The resulting lib and include paths will be displayed:

```
libdir=C:\Intersystems\Cache\dev\cpp\lib
include dir=C:\Intersystems\Cache\dev\cpp\include
```

- Run `test.py` (located in `<cachesys>/dev/python/samples`) to test the installation:

```
python test.py
```

Do not run test programs from `<cachesys>/dev/python` or the test program will not be able to find the pythonbind module. The Python path is relative, and you would pick up files from the intersys subdirectory instead.

### 1.3.4 Caché Server Configuration

Very little configuration is required to use a Python client with a Caché server. The Python sample programs provided with Caché should work with no change following a default Caché installation. This section describes the server settings that are relevant to Python and how to change them.

Every Python client that wishes to connect to a Caché server needs the following information:

- A URL that provides the server IP address, port number, and Caché namespace.
- A username and password.

By default, the Python sample programs use the following connection information:

- URL: "localhost[1972]:Samples"
- username: "_SYSTEM"
- password: "SYS"

Check the following points if you have any problems:

- Make sure that the Caché server is installed and running.
- Make sure that you know the IP address of the machine on which the Caché server is running. The Python sample programs use "localhost". If you want a sample program to default to a different system, you will need to change the connection string in the code.
- Make sure that you know the TCP/IP port number on which the Caché server is listening. The Python sample programs use "1972". If you want a sample program to default to a different port, you will need to change the number in the sample code.
- Make sure that you have a valid username and password to use to establish a connection. (You can manage usernames and passwords using the Management Portal.)
The Python sample programs use the administrator username "_SYSTEM" and the default password "SYS" or "sys". Typically, you will change the default password after installing the server. If you want a sample program to default to a different username and password, you will need to change the sample code.

- Make sure that your connection URL includes a valid Caché namespace. This should be the namespace containing the classes and data your program uses. The Python samples connect to the SAMPLES namespace, which is pre-installed with Caché.

## 1.4 Sample Programs

The standard Caché installation contains a set of sample programs that demonstrate the use of the Caché Python binding. These samples are located in:

```
<cachesys>/dev/python/samples/
```

(For the location of `<cachesys>` on your system, see Default Caché Installation Directory in the Caché Installation Guide.)

The following sample programs are provided:

- `CPTest2.py` — Get and set properties of an instance of Sample.Person.
- `CPTest5.py` — Process datatype collections.
- `CPTest6.py` — Process the result set of a ByName query.
- `CPTest7.py` — Process the result set of a dynamic SQL query.
- `CPTest8.py` — Process employee subclass and company/employee relationship.

All of these applications use classes from the Sample package in the SAMPLES namespace (accessible in Atelier).

**Arguments**

The sample programs are controlled by various switches that can be entered as arguments to the program on the command line. A default value is supplied if you don't enter an argument. For example, `CPTest2.py` accepts the following optional arguments:

- `-user` — the username you want to log in under (default is "_SYSTEM").
- `-password` — the password you want to use (default is "SYS").
- `-host` — the host computer to connect to (default is "localhost").
- `-port` — the port to use (default is "1972").
A `-user` argument would be specified as follows:

```
python CPTest2.py -user _MYUSERNAME
```

The `CPTest7.py` sample accepts a `-query` argument that is passed to an SQL query:

```
python CPTest7.py -query A
```

This query will list all Sample.Person records containing names that start with the letter A.

2 Using Python with Caché

This chapter provides concrete examples of Python code that uses the Caché Python binding. The following subjects are discussed:

- **Python Binding Basics** — the basics of accessing and manipulating Caché database objects.
- **Passing Parameters by Reference** — passing arguments by reference using Python lists.
- **Using Collections** — iterating through Caché lists and arrays.
- **Using Relationships** — manipulating related objects.
- **Using Queries** — running Caché queries and dynamic SQL queries.
- **Using %Binary Data** — moving data between a Caché %Binary and a Python list of integers.
- **Handling Exceptions** — handling Python exceptions and error messages from the Python binding.

Many of the examples presented here are modified versions of the sample programs. The argument processing and error trapping (try/except) statements have been removed to simplify the code. See Sample Programs for details about loading and running the complete sample programs.

## 2.1 Python Binding Basics

A Caché Python binding application can be quite simple.
Here is a complete sample program:

```python
import codecs, sys
import intersys.pythonbind

# Connect to the Cache' database
url = "localhost[1972]:Samples"
user = "_SYSTEM"
password = "SYS"
conn = intersys.pythonbind.connection()
conn.connect_now(url, user, password, None)
database = intersys.pythonbind.database(conn)

# Create and use a Cache' object
person = database.create_new("Sample.Person", None)
person.set("Name", "Doe, Joe A")
print "Name: " + str(person.get("Name"))
```

This code imports the intersys.pythonbind module, and then performs the following actions:

- Connects to the Samples namespace in the Caché database:
  - Defines the information needed to connect to the Caché database.
  - Creates a Connection object (conn).
  - Uses the Connection object to create a Database object (database).
- Creates and uses a Caché object:
  - Uses the Database object to create an instance of the Caché Sample.Person class.
  - Sets the Name property of the Sample.Person object.
  - Gets and prints the Name property.

The following sections discuss these basic actions in more detail.

### 2.1.1 Connecting to the Caché Database

The basic procedure for creating a connection to a namespace in a Caché database is as follows:

- Establish the physical connection:

```python
conn = intersys.pythonbind.connection()
conn.connect_now(url, user, password, timeout)
```

The `connect_now()` method creates a physical connection to a namespace in a Caché database. The `url` parameter defines which server and namespace the Connection object will access. The Connection class also provides the `secure_connect_now()` method for establishing secure connections using Kerberos. See Connections for a detailed discussion of both methods.

- Create a logical connection:

```python
database = intersys.pythonbind.database(conn)
```

The Connection object is used to create a Database object, which is a logical connection that lets you use the classes in the namespace to manipulate the database.
The following code establishes a connection to the Samples namespace:

```python
address = "localhost"  # server TCP/IP address ("localhost" is 127.0.0.1)
port = "1972"          # server TCP/IP port number
namespace = "SAMPLES"  # sample namespace installed with Cache'
url = address + "[" + port + "]:" + namespace
user = "_SYSTEM"
password = "SYS"
conn = intersys.pythonbind.connection()
conn.connect_now(url, user, password, None)
database = intersys.pythonbind.database(conn)
```

### 2.1.2 Using Caché Database Methods

The `intersys.pythonbind.database` class allows you to run Caché class methods and connect to Caché objects on the server. Here are the basic operations that can be performed with the Database class methods:

- **Create Objects**

The `create_new()` method is used to create a new Caché object. The syntax is:

```python
object = database.create_new(class_name, initial_value)
```

where `class_name` is the name of a Caché class in the namespace accessed by `database`. For example, the following statement creates a new instance of the Sample.Person class:

```python
person = database.create_new("Sample.Person", None)
```

In this example, the initial value of person is undefined.

- **Open Objects**

The `openid()` method is used to open an existing Caché object. The syntax is:

```python
object = database.openid(class_name, id, concurrency, timeout)
```

For example, the following statement opens the Sample.Person object that has an id with a value of 1:

```python
person = database.openid("Sample.Person", str(1), -1, -1)
```

Concurrency and timeout are set to their default values.

- **Run Class Methods**

You can run class methods using the following syntax:

```python
result = database.run_class_method(classname, methodname, [LIST])
```

where LIST is a list of method arguments.
For example, the `database.openid()` example shown previously is equivalent to the following code:

```python
person = database.run_class_method("Sample.Person", "%OpenId", [str(1)])
```

This method is analogous to the Caché `##class` syntax. For example, the following code:

```python
list = database.run_class_method("%ListOfDataTypes", "%New", [])
list.run_obj_method("Insert", ["blue"])
```

is exactly equivalent to the following ObjectScript code:

```objectscript
set list=##class(%ListOfDataTypes).%New()
do list.Insert("blue")
```

### 2.1.3 Using Caché Object Methods

The `intersys.pythonbind.object` class provides access to Caché objects. Here are the basic operations that can be performed with the Object class methods:

- **Get and Set Properties**

Properties are accessed through the `set()` and `get()` accessor methods. The syntax for these methods is:

```python
object.set(propname, value)
value = object.get(propname)
```

For example:

```python
person.set("Name", "Doe, Joe A")
name = person.get("Name")
```

Private and multidimensional properties are not accessible through the Python binding.

- **Run Object Methods**

You can run object methods by calling `run_obj_method()`:

```python
answer = object.run_obj_method(MethodName, [LIST])
```

For example:

```python
answer = person.run_obj_method("Addition", [17, 20])
```

This method is useful when calling inherited methods (such as **%Save** or **%Id**) that are not directly available.

- **Save Objects**

To save an object, use `run_obj_method()` to call **%Save**:

```python
object.run_obj_method("%Save", [])
```

To get the id of a saved object, use `run_obj_method()` to call **%Id**:

```python
id = object.run_obj_method("%Id", [])
```

## 2.2 Passing Parameters by Reference

It is possible to pass arguments by reference with the Python binding. Since Python does not have a native mechanism for passing by reference, the binding passes all arguments in a list that can be modified by the called function.
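Because the mechanism is ordinary Python list mutation, it can be illustrated without a server connection. The following is a minimal sketch; `callee()` is a hypothetical stand-in for a binding call whose Caché method modifies a ByRef argument:

```python
# The by-reference mechanism is plain list mutation: the caller builds a
# list, the callee assigns into it, and the caller reads the new value back.
def callee(args):
    # Hypothetical stand-in for run_class_method(): the binding overwrites
    # args[0] with the value the Cache' method assigned to its ByRef argument.
    args[0] = "goodbye"

args = ["hello"]   # args[0] is the ByRef argument
callee(args)
print(args[0])     # goodbye
```

The same pattern applies to `run_class_method()` and `run_obj_method()`: pass the argument list, then read the (possibly modified) elements after the call returns.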
For example, assume the following Caché class:

```plaintext
Class Caudron.PassByReference Extends %Persistent [ ProcedureBlock ]
{
ClassMethod PassByReference(ByRef Arg1 As %String)
{
    set Arg1="goodbye"
}
}
```

The following code passes the argument by reference:

```python
list = ["hello"]
print "passed to method PassByReference = " + str(list[0])
database.run_class_method("Caudron.PassByReference", "PassByReference", list)
print "returned by reference = " + str(list[0])
```

The print statements will produce the following output:

```
passed to method PassByReference = hello
returned by reference = goodbye
```

## 2.3 Using Collections and Lists

Caché %Collection objects are handled like any other Python binding object. Caché %List variables are mapped to Python lists. The following sections demonstrate how to use both of these items.

### 2.3.1 %Collection Objects

Collections are manipulated through object methods of the Caché %Collection classes. The following example shows how you might manipulate a Caché %ListOfDataTypes collection:

```python
# Create a %ListOfDataTypes object and add a list of colors
newcolors = database.create_new("%ListOfDataTypes", None)
color_list = ['red', 'blue', 'green']
print "Adding colors to list object:"
for i in range(len(color_list)):
    newcolors.run_obj_method("Insert", [color_list[i]])
    print "   added >" + str(color_list[i]) + "<"

# Add the list to a Sample.Person object.
person = database.openid("Sample.Person", str(1), -1, 0)
person.set("FavoriteColors", newcolors)

# Get the list back from person and print it out.
colors = person.get("FavoriteColors")
print "\nNumber of colors in 'FavoriteColors' list: %d" % (colors.get("Size"))

index = [0]
while (index[0] != None):
    color = colors.run_obj_method("GetNext", index)
    if (index[0] != None):
        print "Color #%d = >%s<" % (index[0], str(color))

# Remove and replace the second element
index = 2
if (colors.get("Size") > 0):
    colors.run_obj_method("RemoveAt", [index])
    colors.run_obj_method("InsertAt", ["purple", index])
    newcolor = colors.run_obj_method("GetAt", [index])
    print "\nChanged color #%d to %s." % (index, str(newcolor))
```

### 2.3.2 %List Variables

The Python binding maps Caché %List variables to Python lists.

**CAUTION:** While a Python list has no size limit, Caché %List variables are limited to approximately 32KB. The actual limit depends on the datatype and the exact amount of header data required for each element. Be sure to use appropriate error checking (as demonstrated in the following examples) if your %List data is likely to approach this limit.

The examples in this section assume the following Caché class:

```plaintext
Class Sample.List Extends %Persistent
{
Property CurrentList As %List;

Method InitList() As %List
{
    q $ListBuild(1,"hello",3.14)
}

Method TestList(NewList As %List) As %Integer
{
    set $ZTRAP="ErrTestList"
    set ItemCount = $ListLength(NewList)
    if (ItemCount = 0) {set ItemCount = -1}
    q ItemCount
ErrTestList
    set $ZERROR = ""
    set $ZTRAP = ""
    q 0
}
}
```

The TestList() method is used to test whether a Python list is a valid Caché list. If the list is too large, the method traps an error and returns 0 (Python false). If the list is valid, it returns the number of elements. If a valid list has 0 elements, it returns -1.
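A rough size check can also be done on the client before sending the list to the server. This is a sketch only: the 32768-byte figure and the per-element overhead constant below are assumptions for illustration (the real limit depends on datatypes and header bytes), so the server-side TestList() check above remains authoritative:

```python
# Rough client-side guard before assigning a Python list to a %List property.
CACHE_LIST_LIMIT = 32768
PER_ELEMENT_OVERHEAD = 8   # assumed worst-case header bytes per element

def fits_in_cache_list(items):
    total = 0
    for item in items:
        data = "" if item is None else str(item)
        total += len(data) + PER_ELEMENT_OVERHEAD
    return total < CACHE_LIST_LIMIT

print(fits_in_cache_list([1, "hello", 3.14]))   # True
print(fits_in_cache_list(["x" * 1022] * 40))    # False
```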
**Example 1: Caché to Python**

The following code creates a Sample.List object, gets a predefined Caché list from the InitList() method, transforms it into a Python list, and displays information about the list:

```python
listobj = database.create_new("Sample.List", None)
mylist = listobj.run_obj_method("InitList", [])
print "Initial List from Cache:"
print "array contents = " + str(mylist)
print "There are %d elements in the list:" % len(mylist)
for i in range(len(mylist)):
    print "   element %d = [%s] " % (i, mylist[i])
```

This code produces the following output:

```
Initial List from Cache:
array contents = [1, 'hello', 3.1400000000000001]
There are 3 elements in the list:
   element 0 = [1]
   element 1 = [hello]
   element 2 = [3.14]
```

A null list element in Caché corresponds to a value of None in the Python list (see the next example).

**Example 2: Python to Caché and Back Again**

The following code passes a list in both directions. It creates a small Python list, stores it in the Caché object's CurrentList property, then gets it back from the property and converts it back to a Python list.

```python
# Generate a small Python list, pass it to Cache' and get it back.
oldarray = [1, None, 2.78, "Just a small list."]
listobj.set("CurrentList", oldarray)
newarray = listobj.get("CurrentList")
print "The 'CurrentList' property now has %d elements:" % (len(newarray))
for i in range(len(newarray)):
    print "   element %d = [%s]" % (i, newarray[i])
```

This code produces the following output:

```
The 'CurrentList' property now has 4 elements:
   element 0 = [1]
   element 1 = [None]
   element 2 = [2.78]
   element 3 = [Just a small list.]
```

**Example 3: Testing List Capacity**

It is important to make sure that a Caché %List variable can contain the entire Python list. The following code creates a Python list that is too large, and attempts to store it in the CurrentList property.

```python
# Create a large array and print array information.
longitem = "1022 character element" + ("1234567890" * 100)
array = ["This array is too long."]
cache_list_size = len(array[0])
while (cache_list_size < 32768):
    array.append(longitem)
    cache_list_size = cache_list_size + len(longitem)
print "Now for a HUGE list:"
print "Total bytes required by Cache' list: more than %d" % (cache_list_size)
print "There are %d elements in the ingoing list." % len(array)

# Check to see if the array will fit.
bool = listobj.run_obj_method("TestList", [array])
print "TestList reports that",
if (bool):
    print "the array is OK, and has %d elements." % bool
else:
    print "the array will be damaged by the conversion."

# Pass the array to Cache', get it back, and display the results
listobj.set("CurrentList", array)
badarray = listobj.get("CurrentList")
print "There are %d elements in the returned array:" % len(badarray)
for i in range(len(badarray)):
    line = str(badarray[i])
    if (len(line) > 80):  # long elements are shortened for readability.
        line = line[:10] + "..." + line[-32:]
    print "   element %d = [%s]" % (i, line)
```

The printout shortens undamaged sections of the long elements to make the output more readable. The following (truncated) output is produced on a Unicode system:

```
Now for a HUGE list:
Total bytes required by Cache' list: more than 33749
There are 34 elements in the ingoing list.
TestList reports that the array will be damaged by the conversion.
There are 17 elements in the returned array:
   element 0 = [This array is too long.]
   element 1 = [1022 chara...90123456789012345678901234567890]
   element 2 = [1022 chara...90123456789012345678901234567890]
   element 3 = [1022 chara...90123456789012345678901234567890]
   element 4 = [1022 chara...90123456789012345678901234567890]
   element 5 = [1022 chara...90123456789012345678901234567890]
   element 6 = [1022 chara...90123456789012345678901234567890]
   element 7 = [1022 chara...90123456789012345678901234567890]
   element 8 = [1022 chara...90123456789012345678901234567890]
   element 9 = [1022 chara...90123456789012345678901234567890]
```

The damaged list contains only 17 of the original 34 elements, and the last element is corrupted.

## 2.4 Using Relationships

Relationships are supported through the relationship object and its methods. The following example generates a report by using the one-to-many relationship between the company object and a set of employee objects. The relationship object emp_relationship allows the code to access all employee objects associated with the company object:

```python
company = database.openid("Sample.Company", str(1), -1, 0)
emp_relationship = company.get("Employees")
index = [None]
print "Employee list:\n"
while 1:
    employee = emp_relationship.run_obj_method("GetNext", index)
    if employee != None:
        i = str(index[0])
        name = str(employee.get("Name"))
        title = str(employee.get("Title"))
        comp = employee.get("Company")  # separate name so company is not overwritten
        compname = str(comp.get("Name"))
        SSN = str(employee.get("SSN"))
        print "   employee #%s:" % i
        print "      name=%s  SSN=%s" % (name, SSN)
        print "      title=%s  companyname=%s" % (title, compname)
    else:
        break
```

The following code creates a new employee record, adds it to the relationship, and automatically saves the employee information when it saves company.
```python
# name, title, and SSN are assumed to have been assigned new values.
new_employee = database.create_new("Sample.Employee", "")
new_employee.set("Name", name)
new_employee.set("Title", title)
new_employee.set("SSN", SSN)
emp_relationship.run_obj_method("Insert", [new_employee])
company.run_obj_method("%Save", [])
```

## 2.5 Using Queries

The basic procedure for running a query is as follows:

- **Create the query object**

```python
query = intersys.pythonbind.query(database)
```

where database is an intersys.pythonbind.database object and query is an intersys.pythonbind.query object.

- **Prepare the query**

An SQL query uses the prepare() method:

```python
sql_code = query.prepare(sql_query)
```

where sql_query is a string containing the query to be executed. A Caché class query uses the **prepare_class()** method:

```python
sql_code = query.prepare_class(class_name, query_name)
```

- **Assign parameter values**

```python
query.set_par(idx, val)
```

The **set_par()** method assigns value val to parameter idx. The value of an existing parameter can be changed by calling **set_par()** again with the new value. The **Query** class provides several methods that return information about an existing parameter.

- **Execute the query**

```python
sql_code = query.execute()
```

The **execute()** method generates a result set using any parameters defined by calls to **set_par()**.

- **Fetch a row of data**

```python
data_row = query.fetch([None])
```

Each call to the **fetch()** method retrieves a row of data from the result set and returns it as a list. When there is no more data to be fetched, it returns an empty list. The **Query** class provides several methods that return information about the columns in a row.

Here is a simple SQL query that retrieves data from Sample.Person:

```python
# create a query
sqlstring = "SELECT ID, Name, DOB, SSN FROM SAMPLE.PERSON WHERE Name %STARTSWITH ?"
query = intersys.pythonbind.query(database)
query.prepare(sqlstring)
query.set_par(1, "A")
query.execute()

# Fetch each row in the result set, and print the
# name and value of each column in a row:
while 1:
    cols = query.fetch([None])
    if len(cols) == 0: break
    print "\nRow %s ===============" % str(cols[0])
    print "   %s: %s, %s: %s, %s: %s" % (str(query.col_name(2)), str(cols[1]),
        str(query.col_name(3)), str(cols[2]),
        str(query.col_name(4)), str(cols[3]))
```

For more information on the **intersys.pythonbind.query** class, see Queries in the Python Client Class Reference chapter. For information about queries in Caché, see Defining and Using Class Queries in "Using Caché Objects".

## 2.6 Using %Binary Data

The Python binding uses the Python **pack()** and **unpack()** functions to convert data between a Caché %Binary and a Python list of integers. Each byte of the Caché binary data is represented in Python as an integer between 0 and 255. In this example, the binary initially contains the string "hello". The following code changes the binary to "hellos":

```python
# reg is a previously opened object whose MyByte property is a %Binary.
binary = reg.get("MyByte")
for c in binary:
    print "%c" % c
binary.append(ord("s"))
reg.set("MyByte", binary)
for c in binary:
    print "%c" % c
```

## 2.7 Handling Exceptions

The Python binding uses Python exceptions to return errors from the C binding and elsewhere. Here is an example using Python exceptions:

```python
try:
    # some code
    pass
except intersys.pythonbind.cache_exception, err:
    print "InterSystems Cache' exception"
    print sys.exc_type
    print sys.exc_value
    print sys.exc_traceback
    print str(err)
```

### 2.7.1 Error Reporting

When processing an argument or a return value, error messages from the C binding are specially formatted by the Python binding layer. This information can be used by InterSystems WRC to help diagnose the problem.
Here is a sample error message:

```
file=PythonBIND.xs line=71 err=-1 message=cbind_variant_set_buf()
cpp_type=4 var.cpp_type=-1 var.obj.oref=178435886
class_name=%Library.RelationshipObject mtd_name=GetNext argnum=0
```

The error message components are:

<table>
  <thead>
    <tr>
      <th>Component</th>
      <th>Meaning</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>file=PythonBIND.xs</td>
      <td>file where the failure occurred.</td>
    </tr>
    <tr>
      <td>line=71</td>
      <td>line number in the file.</td>
    </tr>
    <tr>
      <td>err=-1</td>
      <td>return code from the C binding.</td>
    </tr>
    <tr>
      <td>message=cbind_variant_set_buf()</td>
      <td>C binding error message.</td>
    </tr>
    <tr>
      <td>cpp_type=4</td>
      <td>cpp type of the method argument or return type.</td>
    </tr>
    <tr>
      <td>var.cpp_type=-1</td>
      <td>variant cpp type.</td>
    </tr>
    <tr>
      <td>var.obj.oref=178435886</td>
      <td>variant oref.</td>
    </tr>
    <tr>
      <td>class_name=%Library.RelationshipObject</td>
      <td>class name of the object on which the method is invoked.</td>
    </tr>
    <tr>
      <td>mtd_name=GetNext</td>
      <td>method name.</td>
    </tr>
    <tr>
      <td>argnum=0</td>
      <td>argument number. 0 is the first argument and -1 indicates a return value.</td>
    </tr>
  </tbody>
</table>

3 Python Client Class Reference

This chapter describes how Caché classes and datatypes are mapped to Python code, and provides details on the classes and methods supported by the Caché Python binding. The following subjects are discussed:

- **Datatypes** — how Caché datatypes are mapped to Python.
- **Connections** — methods used to create a physical connection to a namespace in a Caché database.
- **Database** — methods used to open or create Caché objects, create queries, and run Caché class methods.
- **Objects** — methods used to manipulate Caché objects by getting or setting properties, running object methods, and returning information about the objects.
- **Queries** — methods used to run queries and fetch the results.
- **Times and Dates** — methods used to access the Caché %Time, %Date, or %Timestamp datatypes.
- **Locale and Client Version** — methods that provide access to Caché version information and Windows locale settings.

### 3.1 Datatypes

All Caché datatypes are supported. See the following sections for information on specific datatypes:

- The Caché %Binary datatype corresponds to a Python list of integers. See Using %Binary Data for examples.
- Collections such as %Library.ArrayOfDataTypes and %Library.ListOfDataTypes are handled through object methods of the Caché %Collection classes. See %Collection Objects for examples.
- Caché %Library.List variables are mapped to Python lists. A list can contain strings (ordinary or Unicode), integers, None values, and doubles. See %List Variables for examples.
- The Caché %Time, %Date, and %Timestamp datatypes are supported by corresponding classes in the Python binding. See Times and Dates for a description of these classes.

### 3.2 Connections

Methods of the intersys.pythonbind.connection package create a physical connection to a namespace in a Caché database. A Connection object is used only to create a Database object, which is the logical connection that allows Python binding applications to manipulate Caché objects. See Connecting to the Caché Database for information on how to use the Connection methods.

Here is a complete listing of Connection methods:

**connect_now()**

```python
conn = intersys.pythonbind.connection()
conn.connect_now(url, user, password, timeout)
```

See Connection Information later in this section for a detailed discussion of the parameters.

**secure_connect_now()**

```python
conn = intersys.pythonbind.connection()
conn.secure_connect_now(url, srv_principal, security_level, timeout)
```

`secure_connect_now()` establishes a secure (Kerberos) connection to the Caché namespace identified by `url`. This method takes the following parameters:

- **url** — See Connection Information later in this section for a detailed description of the URL format.
- **srv_principal** — A Kerberos "principal" is an identity that is represented in the Kerberos database, has a permanent secret key that is shared only with the Kerberos KDCs (key distribution centers), can be assigned credentials, and can participate in the Kerberos authentication protocol.
  - A "user principal" is associated with a person, and is used to authenticate to services, which can then authorize the use of resources (for example, computer accounts or Caché services).
  - A "service principal" is associated with a service, and is used to authenticate user principals and can optionally authenticate itself to user principals.
  - A "service principal name" (such as `srv_principal`) is the string representation of the name of a service principal, conventionally of the form:

    ```
    <service>/<instance>@<REALM>
    ```

    For example:

    ```
    cache/turbo.iscinternal.com@ISCINTERNAL.COM
    ```

  On Windows, the KDCs are embedded in the domain controllers, and service principal names are associated with domain accounts. See your system's Kerberos documentation for a detailed discussion of principals.

- **security_level** — Sets the connection security level, an integer that indicates the client/server network security services that are requested or required. It can take the following values:
  - 0 — None.
  - 1 — Kerberos client/server mutual authentication, no protection for data.
  - 2 — As 1, plus data source and content integrity protection.
  - 3 — As 2, plus data encryption.

- **timeout** — Number of seconds to wait before timing out.
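Both connect methods take the same `url` string, whose format is described in detail in the next section. As a pure-Python illustration (this helper is not part of intersys.pythonbind), the string can be assembled from its parts:

```python
# Assemble a Cache' connection URL of the form <address>[<port>]:<namespace>.
# Illustrative sketch only; the binding itself just takes the finished string.
def make_url(address, port, namespace):
    return "%s[%s]:%s" % (address, port, namespace)

url = make_url("localhost", 1972, "Samples")
print(url)  # localhost[1972]:Samples
```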
### 3.2.1 Connection Information

The following information is needed to establish a connection to the Caché database:

- **URL** — The URL specifies the server and namespace to be accessed as a string with the following format:

  ```
  <address>[<port>]:<namespace>
  ```

  For example, the sample programs use the following connection string:

  ```
  "localhost[1972]:Samples"
  ```

  The components of this string are:

  - `<address>` — The server TCP/IP address or Fully Qualified Domain Name (FQDN). The sample programs use "localhost" (127.0.0.1), assuming that both the server and the Python application are on the same machine.
  - `<port>` — The server TCP/IP port number for this connection. Together, the IP address and the port specify a unique Caché server.
  - `<namespace>` — The Caché namespace containing the objects to be used. This namespace must have the Caché system classes compiled, and must contain the objects you want to manipulate. The sample programs use objects from the SAMPLES namespace.

- **username** — The username under which the connection is being made. The sample programs use "_SYSTEM", the default SQL System Manager username.

- **password** — The password associated with the specified username. The sample programs use the default, "SYS".

### 3.3 Database

Database objects provide a logical connection to a Caché namespace. Methods of the intersys.pythonbind.database package are used to open or create Caché objects, create queries, and run Caché class methods. Database objects are created by calling:

```python
database = intersys.pythonbind.database(conn)
```

where `conn` is an intersys.pythonbind.connection object. See Connecting to the Caché Database for more information on creating a Database object.

Here is a complete listing of Database methods:

- **create_new()**

```python
obj = database.create_new(type, init_val)
```

Creates a new Caché object instance from the class named by `type`. Normally, `init_val` is None.
See Objects for details on the objects created with this method.

- **open()**

```python
obj = database.open(class_name, oid, concurrency, timeout, res)
```

Opens a Caché object instance using the class named by `class_name` and the oid of the object. The `concurrency` argument has a default value of -1. `timeout` is the ODBC query timeout.

- **openid()**

```python
obj = database.openid(class_name, id, concurrency, timeout)
```

Opens a Caché object instance using the class named by `class_name` and the id of the object. The `concurrency` argument has a default value of -1. `timeout` is the ODBC query timeout.

- **run_class_method()**

```python
value = database.run_class_method(class_name, method_name, [LIST])
```

Runs the class method `method_name`, which is a member of the `class_name` class in the namespace that `database` is connected to. Arguments are passed in `LIST`. Some of these arguments may be passed by reference, depending on the class definition in Caché. Return values correspond to the return values from the Caché method.

### 3.4 Objects

Methods of the `intersys.pythonbind.object` package provide access to a Caché object. An Object object is created by the `intersys.pythonbind.database` `create_new()` method (see Database for a detailed description). See Using Caché Object Methods for information on how to use the Object methods.

Here is a complete listing of Object methods:

- **get()**

```python
value = object.get(prop_name)
```

Returns the value of property `prop_name` in Caché object `object`.

- **run_obj_method()**

```python
value = object.run_obj_method(method_name, [LIST])
```

Runs method `method_name` on Caché object `object`. Arguments are passed in `LIST`. Some of these arguments may be passed by reference, depending on the class definition in Caché. Return values correspond to the return values from the Caché method.
- **set()** ```python object.set(prop_name, val) ``` Sets property `prop_name` in Caché object `object` to `val`. ### 3.5 Queries Methods of the `intersys.pythonbind.query` package provide the ability to prepare a query, set parameters, execute the query, and fetch the results. See Using Queries for information on how to use the `Query` methods. A `Query` object is created as follows: ```python query = intersys.pythonbind.query(database) ``` Here is a complete listing of `Query` methods: #### 3.5.1 prepare query - **prepare()** ```python query.prepare(string) ``` Prepares a query using the SQL string in `string`. - **prepare_class()** ```python query.prepare_class(class_name, query_name) ``` Prepares the query `query_name` defined in the class `class_name`. #### 3.5.2 set parameters - **set_par()** ```python query.set_par(idx, val) ``` Assigns value `val` to parameter `idx`. The method can be called several times for the same parameter; the previous parameter value is lost, and the new value can be of a different type. The `set_par()` method does not support by-reference parameters. - **is_par_nullable()** ```python nullable = query.is_par_nullable(idx) ``` Returns 1 if parameter `idx` is nullable, else 0. - **is_par_unbound()** ```python unbound = query.is_par_unbound(idx) ``` Returns 1 if parameter `idx` is unbound, else 0. - **num_pars()** ```python num = query.num_pars() ``` Returns the number of parameters in the query. - **par_col_size()** ```python size = query.par_col_size(idx) ``` Returns the size of the parameter column. - **par_num_dec_digits()** ```python num = query.par_num_dec_digits(idx) ``` Returns the number of decimal digits in the parameter. - **par_sql_type()** ```python type = query.par_sql_type(idx) ``` Returns the SQL type of the parameter. #### 3.5.3 execute query - **execute()** ```python query.execute() ``` Generates a result set, using any parameters defined by calls to `set_par()`. #### 3.5.4 fetch results - **fetch()** ```python data_row = query.fetch([None]) ``` Retrieves a row of data from the result set and returns it as a list.
When there is no more data to be fetched, it returns an empty list. - **col_name()** ```python name = query.col_name(idx) ``` Returns the name of the column. - **col_name_length()** ```python length = query.col_name_length(idx) ``` Returns the length of the column name. - **col_sql_type()** ```python sql_type = query.col_sql_type(idx) ``` Returns the SQL type of the column. - **num_cols()** ```python num_cols = query.num_cols() ``` Returns the number of columns in the query. ### 3.6 Times and Dates The `PTIME_STRUCTPtr`, `PDATE_STRUCTPtr`, and `PTIMESTAMP_STRUCTPtr` packages are used to manipulate the Caché `%TIME`, `%DATE`, and `%TIMESTAMP` datatypes. #### 3.6.1 %TIME Methods of the `PTIME_STRUCTPtr` package are used to manipulate the Caché %TIME data structure. Times are in *hh:mm:ss* format. For example, 5 minutes and 30 seconds after midnight would be formatted as `00:05:30`. Here is a complete listing of Time methods: **new()** ```python time = PTIME_STRUCTPtr.new() ``` Create a new Time object. **get_hour()** ```python hour = time.get_hour() ``` Return hour. **get_minute()** ```python minute = time.get_minute() ``` Return minute. **get_second()** ```python second = time.get_second() ``` Return second. **set_hour()** ```python time.set_hour(hour) ``` Set hour (an integer between 0 and 23, where 0 is midnight). **set_minute()** ```python time.set_minute(minute) ``` Set minute (an integer between 0 and 59). **set_second()** ```python time.set_second(second) ``` Set second (an integer between 0 and 59). **toString()** ```python stringrep = time.toString() ``` Convert the time to a string: *hh:mm:ss*. #### 3.6.2 %DATE Methods of the `PDATE_STRUCTPtr` package are used to manipulate the Caché %DATE data structure. Dates are in *yyyy-mm-dd* format. For example, December 24, 2003 would be formatted as `2003-12-24`. Here is a complete listing of Date methods: **new()** ```python date = PDATE_STRUCTPtr.new() ``` Create a new Date object.
**get_year()** ```python year = date.get_year() ``` Return year. **get_month()** ```python month = date.get_month() ``` Return month. **get_day()** ```python day = date.get_day() ``` Return day. **set_year()** ```python date.set_year(year) ``` Set year (a four-digit integer). **set_month()** ```python date.set_month(month) ``` Set month (an integer between 1 and 12). **set_day()** ```python date.set_day(day) ``` Set day (an integer between 1 and the highest valid day of the month). **toString()** ```python stringrep = date.toString() ``` Convert the date to a string: *yyyy-mm-dd*. #### 3.6.3 %TIMESTAMP Methods of the `PTIMESTAMP_STRUCTPtr` package are used to manipulate the Caché %TIMESTAMP data structure. Timestamps are in *yyyy-mm-dd hh:mm:ss.fffffffff* format. For example, December 24, 2003, five minutes and 12.5 seconds after midnight, would be formatted as: ``` 2003-12-24 00:05:12.500000000 ``` Here is a complete listing of TimeStamp methods: **new()** ```python timestamp = PTIMESTAMP_STRUCTPtr.new() ``` Create a new Timestamp object. **get_year()** ```python year = timestamp.get_year() ``` Return year in *yyyy* format. **get_month()** ```python month = timestamp.get_month() ``` Return month in *mm* format. **get_day()** ```python day = timestamp.get_day() ``` Return day in *dd* format. **get_hour()** ```python hour = timestamp.get_hour() ``` Return hour in *hh* format. **get_minute()** ```python minute = timestamp.get_minute() ``` Return minute in *mm* format. **get_second()** ```python second = timestamp.get_second() ``` Return second in *ss* format. **get_fraction()** ```python fraction = timestamp.get_fraction() ``` Return fraction of a second in *fffffffff* format. **set_year()** ```python timestamp.set_year(year) ``` Set year (a four-digit integer). **set_month()** ```python timestamp.set_month(month) ``` Set month (an integer between 1 and 12). **set_day()** ```python timestamp.set_day(day) ``` Set day (an integer between 1 and the highest valid day of the month). **set_hour()** ```python timestamp.set_hour(hour) ``` Set hour (an integer between 0 and 23, where 0 is midnight). **set_minute()** ```python timestamp.set_minute(minute) ``` Set minute (an integer between 0 and 59).
**set_second()** ```python timestamp.set_second(second) ``` Set second (an integer between 0 and 59). **set_fraction()** ```python timestamp.set_fraction(fraction) ``` Set fraction of a second (an integer of up to nine digits). **toString()** ```python stringrep = timestamp.toString() ``` Convert the timestamp to a string: *yyyy-mm-dd hh:mm:ss.fffffffff*. ### 3.7 Locale and Client Version Methods of the `intersys.pythonbind` package provide access to Caché version information and Windows locale settings. Here is a complete listing of these methods: **get_client_version()** ```python clientver = intersys.pythonbind.get_client_version() ``` Identifies the version of Caché in use on the Python client machine. **setlocale()** ```python newlocale = intersys.pythonbind.setlocale(category, locale) ``` Sets the default locale and returns a locale string for the new locale. For example: ```python newlocale = intersys.pythonbind.setlocale(0, "Russian")  # 0 stands for LC_ALL ``` would set all locale categories to Russian and return the following string: ``` Russian_Russia.1251 ``` If the `locale` argument is an empty string, the current default locale string is returned. For example, given the following code: ```python intersys.pythonbind.setlocale(0, "English") mylocale = intersys.pythonbind.setlocale(0, "") ``` the value of `mylocale` would be: ``` English_United States.1252 ``` For detailed information, including a list of valid `category` values, see the MSDN library (http://msdn.microsoft.com/library) entry for the setlocale() function in the Visual C++ runtime library. **set_thread_locale()** ```python intersys.pythonbind.set_thread_locale(lcid) ``` Sets the locale ID (LCID) for the calling thread. Applications that need to work with locales at runtime should call this method to ensure proper conversions. For a listing of valid LCID values, see the "Locale ID (LCID) Chart" in the MSDN library (http://msdn.microsoft.com/library). The chart can be located by a search on "LCID Chart".
For detailed information on locale settings, see the MSDN library entry for the SetThreadLocale() function, listed under "National Language Support".
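The prepare/set_par/execute/fetch cycle described in section 3.5 always ends when `fetch()` returns an empty list. Since running it for real requires a Caché server, the sketch below substitutes a tiny in-memory stand-in for the `Query` class (our own stub, not part of `intersys.pythonbind`), just to show the shape of the driver-side loop:

```python
class FakeQuery:
    """In-memory stand-in for intersys.pythonbind.query (illustration only)."""
    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = 0

    def execute(self):
        # (Re)start the result set, as the real execute() would.
        self._pos = 0

    def fetch(self, _unused=None):
        if self._pos >= len(self._rows):
            return []          # empty list signals end of data
        row = self._rows[self._pos]
        self._pos += 1
        return row

query = FakeQuery([["Smith", 44], ["Jones", 51]])
query.execute()
results = []
while True:
    row = query.fetch([None])  # same call shape as the real binding
    if not row:                # empty list -> no more rows
        break
    results.append(row)
```

With the real binding, only the construction of `query` changes; the fetch-until-empty loop stays the same.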
Efficient Scheduling of Tiled Iteration Spaces onto a Fixed Size Parallel Architecture

Maria Athanasaki, Evangelos Koukis and Nectarios Koziris
National Technical University of Athens, School of Electrical and Computer Engineering, Computing Systems Laboratory
e-mail: {maria, vkoukis, nkoziris}@cslab.ntua.gr

Abstract. In this paper we propose several alternative methods for the compile-time scheduling of Tiled Iteration Spaces onto a cluster of SMP nodes with a fixed number of nodes. We elaborate on the distribution of computations among processors, provided that we have chosen either a non-overlapping communication mode, which involves successive computation and communication steps, or an overlapping communication mode, which supposes a pipelined, concurrent execution of communication and computations. In order to utilize the available processors as efficiently as possible, we can either adopt a cyclic assignment schedule, or assign neighboring tiles to the same CPU, or adapt the size and shape of tiles to the underlying architecture. We theoretically and experimentally compare four different schedules, so as to select the one which achieves the minimum total execution time, depending on the cluster configuration, the internal characteristics of the underlying architecture, and the iteration space size and shape. Experimental results on a small Linux cluster of SMP nodes, using the GM library over the Myrinet high-performance interconnect, illustrate the merits and drawbacks of each approach.

1 Introduction

One of the most intriguing areas in the field of parallel computing is the efficient mapping of loop iterations onto a parallel system, taking into account its internal architectural characteristics. In order to achieve maximum acceleration of the final program, one of the key issues to be considered is the minimization of the communication overhead.
Among other solutions given in the literature, researchers have dealt with this problem by applying the supernode or tiling transformation. Supernode partitioning of the iteration space was proposed by Irigoin and Triolet in [13]. They introduced the initial model of loop tiling and gave conditions for a tiling transformation to be valid. Later, Ramanujam and Sadayappan in [19] showed the equivalence between the problem of finding a set of extreme vectors for a given set of dependence vectors and the problem of finding a tiling transformation H that produces valid, deadlock-free tiles. The problem of determining the optimal tile shape was surveyed, and more accurate conditions were also given by others, as in [22], [5], [11], [12]. Nevertheless, all of the above approaches do not investigate an ideal execution scheme in order to reduce the overall tiled space completion time. Hodzic and Shang [10] proposed a method to correlate optimal tile size and shape, based on overall completion time reduction. Their approach considers a straightforward execution scheme, where each processor executes all tiles along a specific dimension, by interleaving computation and communication phases. In [9] we proposed an alternative method for the problem of scheduling the tiles to single-CPU nodes. Each atomic tile execution involves a communication and a computation phase, and this is repeatedly done for all time planes. We compacted this sequence of communication and computation phases, by overlapping them for the different processors. The proposed method acts like enhancing the performance of a processor's datapath with pipelining [18], because a processor computes its tile at time step $k$ and concurrently receives data from all neighbors to use them at time step $k + 1$ and sends data produced at time step $k - 1$. In [4] we extended the method proposed in [9] for executing Tiled Iteration Spaces on SMP (Symmetric MultiProcessor) nodes. We grouped together neighboring tiles along a hyperplane.
Hyperplane-grouped tiles are concurrently executed by the CPUs of the same SMP node. In this way, we eliminate the need for tile synchronization and communication between intranode CPUs. As far as the scheduling of groups is concerned, we take advantage of the overlapping execution scheme of [9] in order to "hide" each group's communication volume within the respective computation volume. Under the above execution scheme, the iteration space involves the overlapped execution of communication and computation phases between successive groups of tiles. We thus avoid most of the communication overhead by allowing for actual computation-to-communication overlapping. However, the proposed schedule assumes the availability of an unlimited number of SMP nodes. In [3] Andronikos et al. have proposed an assignment scheme onto a fixed number of nodes; however, the complexity of evaluating which tiles should be assigned to which node is too high. In [6], [7] Boulet et al. and Calland et al. have theoretically proven the optimality of a cyclic assignment of 2-dimensional tiles onto a fixed number of single-CPU nodes. On the other hand, Manjikian and Abdelrahman have presented in [16] an alternative method for scheduling Tiled Iteration Spaces onto a fixed number of SMP nodes, without taking into account that there is no need for communication among CPUs of the same SMP node, since the data required are located in the node's shared memory. In this paper, we propose four different methods for scheduling tiled iteration spaces onto an existing clustered system with a fixed number of SMP nodes: the cyclic assignment schedule, the mirror assignment schedule, the cluster assignment schedule and the retiling schedule. First, we adapt the method proposed in [4] for a cluster of SMPs with a fixed number of nodes.
We discuss the approaches of [6], [7], [16] and generalize them for $n$-dimensional spaces, taking into account the possibility of immediate exchange of data among CPUs of the same SMP node. In addition, we apply to all four schedules two alternative execution schemes, the overlapping [9] and the non-overlapping [10] communication scheme, and we discuss the merits and drawbacks of each combined approach. The rest of this paper is organized as follows: In Section 2 we provide the mathematical background and terminology used throughout the paper and we briefly revise concepts, such as the grouping transformation, described in our previous work. In Section 3 we adapt the theory proposed in [4] to a fixed number of SMP nodes, using four different mapping methods. In Section 4 we use some exemplary Iteration Spaces, so as to experimentally delve into the advantages of each schedule. We deduce that our experimental results strongly confirm our theory. Finally, in Section 5 we summarize our conclusions.

2 Algorithmic Model - Grouping Transformation

Our proposed method can be applied to any code segment which can be transformed into a Tiled Iteration Space. However, without loss of generality, in this paper our model consists of perfectly nested FOR-loops with uniform data dependencies, as in [4], [9]. Throughout this paper, the following notation is used: \(N\) is the set of natural numbers, \(n\) is the number of nested FOR-loops of the algorithm. \(J^n \subseteq Z^n\) is the set of loop indices: \(J^n = \{ (j_1, \ldots, j_n) \mid j_i \in Z \wedge l_i \leq j_i \leq u_i, 1 \leq i \leq n\}\). Each point in this \(n\)-dimensional integer space is a distinct instantiation of the loop body. In a Supernode or Tiling Transformation, the iteration space \(J^n\) is partitioned into identical \(n\)-dimensional parallelepipeds (tiles or supernodes) formed by \(n\) independent families of parallel hyperplanes. The tiling transformation is defined by the \(n\)-dimensional square matrix \(H\).
Each row vector of \(H\) is perpendicular to one family of hyperplanes forming the tiles. Dually, the tiling transformation can be defined by \(n\) linearly independent vectors, which are the sides of the tiles. Similar to matrix \(H\), matrix \(P\) contains the side-vectors of a tile as column vectors. It holds that \(P = H^{-1}\). Formally, the tiling transformation is defined as follows: \[ r : Z^n \rightarrow Z^{2n}, \quad r(j) = \begin{bmatrix} \lfloor Hj \rfloor \\ j - H^{-1}\lfloor Hj \rfloor \end{bmatrix}, \] where \(\lfloor Hj \rfloor\) identifies the coordinates of the tile that index point \(j = (j_1, j_2, \ldots, j_n)\) is mapped to, and \(j - H^{-1}\lfloor Hj \rfloor\) gives the coordinates of \(j\) within that tile, relative to the tile origin. The resulting Tile Space \(J^S\) is defined as follows: \(J^S = \{ j^S \mid j^S = \lfloor Hj \rfloor, j \in J^n \}\). It can also be written as \(J^S = \{ (j_1^S, \ldots, j_n^S) \mid j_i^S \in Z \wedge l_i^S \leq j_i^S \leq u_i^S, 1 \leq i \leq n \}\), where \(l_i^S, u_i^S\) can be directly computed from the functions \(l_1, \ldots, l_n, u_1, \ldots, u_n\) and the tiling matrix \(H\), as described in [1], [8]. In the rest of this paper we shall consider that the non-overlapping and overlapping execution schemes, extensively discussed in [9] (sections 3, 4), [20], and the concept of grouping, introduced in [4] (section 4), are known. For example, let us consider an \(n\)-dimensional rectangular Tile Space \(J^S\), whose bounds are defined as follows: \(0 \leq j_i^S < u_i^S, i = 1, \ldots, n\) and \(u_1^S \geq u_i^S, i = 2, \ldots, n\).
It is grouped according to the matrices \[ P^G = \begin{bmatrix} 1 & -m_2 & \ldots & -m_n \\ 0 & m_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & m_n \end{bmatrix}, \quad H^G = (P^G)^{-1} = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ 0 & \frac{1}{m_2} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \frac{1}{m_n} \end{bmatrix} \] Thus, a tile $j^S$ belongs to group $j^G = (\sum_{i=1}^{n} j_i^S, \lfloor \frac{j_2^S}{m_2} \rfloor, \ldots, \lfloor \frac{j_n^S}{m_n} \rfloor)^T$. Following the overlapping execution scheme, if there are as many SMP nodes as required, it will be executed in the SMP node $(j_2^G, \ldots, j_n^G)$ during the time step $t = \sum_{i=1}^{n} j_i^G = \sum_{i=1}^{n} j_i^S + \sum_{i=2}^{n} \lfloor \frac{j_i^S}{m_i} \rfloor$ (according to the scheduling vector $\Pi^G = (1, 1, \ldots, 1)$). Thus, the number of steps required for the completion of the algorithm will be: $$P_{\text{unlimited-overlap}} = 1 + \sum_{i=1}^{n} (u_i^S - 1) + \sum_{i=2}^{n} \left( \left\lfloor \frac{u_i^S}{m_i} \right\rfloor - 1 \right) \Rightarrow$$ $$P_{\text{unlimited-overlap}} = \sum_{i=1}^{n} u_i^S + \sum_{i=2}^{n} \left\lfloor \frac{u_i^S}{m_i} \right\rfloor - 2n + 2 \quad (2)$$ Similarly, if we follow the non-overlapping execution scheme, then group $j^G$ will be executed during the time step $t = j_1^G = \sum_{i=1}^{n} j_i^S$ (according to the scheduling vector $\Pi^G = (1, 0, \ldots, 0)$). Thus, the number of steps required for the completion of the algorithm will be: $$P_{\text{unlimited-non-overlap}} = 1 + \sum_{i=1}^{n} (u_i^S - 1) \Rightarrow$$ $$P_{\text{unlimited-non-overlap}} = \sum_{i=1}^{n} u_i^S - n + 1 \quad (3)$$

3 Scheduling onto a Fixed Number of SMPs

3.1 Cyclic Assignment to SMPs

In [6], [7] the optimality of the cyclic assignment of 2-dimensional tiles onto a fixed number of processors was theoretically proven.
However, the calculations in [6], [7] did not take into account the communication overhead involved. Generalizing this approach for $n$-dimensional tiles and for clusters of SMP nodes, we consider that the available SMP nodes form a virtual $(n-1)$-dimensional mesh of $p_2 \times \ldots \times p_n = p$ SMP nodes. We cyclically assign the groups to the SMP nodes. That is, we assign group $j^G$ to the SMP node $(j_2^G \,\%\, p_2, \ldots, j_n^G \,\%\, p_n)$, as indicated in Fig. 1. Therefore, each SMP node will execute $\lfloor \frac{u_2^S}{m_2 p_2} \rfloor \times \ldots \times \lfloor \frac{u_n^S}{m_n p_n} \rfloor = \lfloor \frac{u_2^G}{p_2} \rfloor \times \ldots \times \lfloor \frac{u_n^G}{p_n} \rfloor$ rows of groups (where $u_i^G = \lfloor \frac{u_i^S}{m_i} \rfloor$, $i = 2, \ldots, n$). If the rows of groups assigned to an SMP node are executed in lexicographic order, the row $(x, j_2^G, \ldots, j_n^G)$ will be executed in the SMP node $(j_2^G \,\%\, p_2, \ldots, j_n^G \,\%\, p_n)$ after $\sum_{i=2}^{n} \lfloor \frac{j_i^G}{p_i} \rfloor \prod_{k=i+1}^{n} \lfloor \frac{u_k^G}{p_k} \rfloor$ rows, imposing a latency of $u_1^S \sum_{i=2}^{n} \lfloor \frac{j_i^G}{p_i} \rfloor \prod_{k=i+1}^{n} \lfloor \frac{u_k^G}{p_k} \rfloor$ time steps. In addition, as deduced from Fig. 1, the location of a group, relative to the corresponding chunk origin, is $(j_1^{G'}, j_2^G \,\%\, p_2, \ldots, j_n^G \,\%\, p_n)$, where $j_1^{G'} = j_1^S + \sum_{i=2}^{n} j_i^S \,\%\, (m_i p_i)$.

Fig. 1. Cyclic assignment to SMPs

Therefore, if the underlying architecture allows for the concurrent execution of computations and communication, following the overlapping execution scheme, group \( j^G \) will be computed during the time step \( t(j^G) = j_1^{G'} + \sum_{i=2}^{n} j_i^G \,\%\, p_i + u_1^S \sum_{i=2}^{n} \left\lfloor \frac{j_i^G}{p_i} \right\rfloor \prod_{k=i+1}^{n} \left\lfloor \frac{u_k^G}{p_k} \right\rfloor \).
Thus, the number of steps required for the completion of the algorithm will be \[ P_{\text{cyclic-overlap}} = \max t(j^G) - \min t(j^G) + 1 \Rightarrow \] \[ P_{\text{cyclic-overlap}} = \sum_{i=2}^{n} \left[ (u_i^S - 1)\,\%\,(m_i p_i) + \left( \left\lfloor \frac{u_i^S}{m_i} \right\rfloor - 1 \right)\%\, p_i \right] + u_1^S \prod_{i=2}^{n} \left\lfloor \frac{u_i^S}{m_i p_i} \right\rfloor \quad (4) \] The first term of the right-hand side of formula (4) represents the time required for filling the pipeline, while the second term corresponds to the time each processor is busy executing calculations. If we must make do with a conventional communication architecture as the node interconnect (i.e., without NIC support for relieving the CPU from the communication burden), following the non-overlapping execution scheme, group \( j^G \) will be computed during the time step \( t(j^G) = j_1^{G'} + u_1^S \sum_{i=2}^{n} \left\lfloor \frac{j_i^G}{p_i} \right\rfloor \prod_{k=i+1}^{n} \left\lfloor \frac{u_k^G}{p_k} \right\rfloor \). Thus, the number of steps required for the completion of the algorithm will be \[ P_{\text{cyclic-non-overlap}} = \max t(j^G) - \min t(j^G) + 1 \Rightarrow \] \[ P_{\text{cyclic-non-overlap}} = \sum_{i=2}^{n} \left[ (u_i^S - 1)\,\%\,(m_i p_i) \right] + u_1^S \prod_{i=2}^{n} \left\lfloor \frac{u_i^S}{m_i p_i} \right\rfloor \quad (5) \]

3.2 Mirror Assignment to SMPs

Let us consider another schedule, where we assign the tiles to SMP nodes as indicated in Fig. 2.
That is, we assign group \( j^G \) to the SMP node whose \(i\)-th coordinate \((i = 2, \ldots, n)\) is \[ \begin{cases} j_i^G \,\%\, p_i & \text{if } \lfloor j_i^G / p_i \rfloor \text{ is even} \\ (p_i - 1) - j_i^G \,\%\, p_i & \text{if } \lfloor j_i^G / p_i \rfloor \text{ is odd} \end{cases} \] This schedule has the advantage that there is no need for data transfer along the boundaries of chunks of tiles, thus less time is wasted on communication. Then, as in the cyclic assignment schedule, if the chunks of groups are executed in lexicographic order, the chunk containing row \((x, j_2^G, \ldots, j_n^G)\) will be executed after \(\sum_{i=2}^{n} \left\lfloor \frac{j_i^G}{p_i} \right\rfloor \prod_{k=i+1}^{n} \left\lfloor \frac{u_k^G}{p_k} \right\rfloor\) chunks. The latency imposed by each of the previous chunks, when combining the mirror assignment schedule with the overlapping execution scheme, is greater than the respective one when applying the cyclic assignment schedule. It equals \( u_1^S + \sum_{i=2}^{n} (m_i + 1)p_i - 2n + 2 \), as the computation of a whole chunk must finish before the computation of the next chunk starts. In addition, as deduced from Fig. 2, the position of a group, relative to the corresponding chunk origin, is \((j_1^{G'}, j_2^G \,\%\, p_2, \ldots, j_n^G \,\%\, p_n)\), where \( j_1^{G'} = j_1^S + \sum_{i=2}^{n} j_i^S \,\%\, (m_i p_i) \). Group \(j^G\) will be computed during the time step \[ t(j^G) = j_1^{G'} + \sum_{i=2}^{n} j_i^G \,\%\, p_i + \left[ u_1^S + \sum_{i=2}^{n} (m_i + 1)p_i - 2n + 2 \right] \sum_{i=2}^{n} \left\lfloor \frac{j_i^G}{p_i} \right\rfloor \prod_{k=i+1}^{n} \left\lfloor \frac{u_k^G}{p_k} \right\rfloor \] Thus, the number of steps required for the completion of the algorithm will be \[ P_{\text{mirror-overlap}} = \max t(j^G) - \min t(j^G) + 1 \Rightarrow \] \[ P_{\text{mirror-overlap}} = \sum_{i=2}^{n} \left[ (u_i^S - 1)\,\%\,(m_i p_i) + \left( \left\lfloor \frac{u_i^S}{m_i} \right\rfloor - 1 \right)\%\, p_i \right] - \sum_{i=2}^{n} (m_i + 1)p_i + 2n - 2 + \left[ u_1^S + \sum_{i=2}^{n} (m_i + 1)p_i - 2n + 2 \right] \prod_{i=2}^{n} \left\lfloor \frac{u_i^S}{m_i p_i} \right\rfloor \quad (6) \] If there is no shortage of processors (\(u_i^S \leq m_i p_i, \forall i = 2, \ldots, n\)), the proposed schedules are equivalent. Otherwise, it can be easily deduced from (4), (6) that \(P_{\text{cyclic-overlap}} < P_{\text{mirror-overlap}}\). Their difference is due to the fact that, following the mirror assignment schedule, every time the computation of a chunk finishes and the computation of the next one starts, there are some idle time steps for some of the processors, as indicated in Fig. 2. The cyclic schedule is thus preferable to the mirror one. Similarly, following the non-overlapping execution scheme, group $j^G$ will be computed during the time step $t(j^G) = j_1^{G'} + \left( u_1^S + \sum_{i=2}^n m_i p_i - n + 1 \right) \sum_{i=2}^n \left\lfloor \frac{j_i^G}{p_i} \right\rfloor \prod_{k=i+1}^n \left\lfloor \frac{u_k^G}{p_k} \right\rfloor$.
Thus, the number of steps required for the completion of the algorithm will be \[ P_{\text{mirror-non-overlap}} = \max t(j^G) - \min t(j^G) + 1 \Rightarrow \] \[ P_{\text{mirror-non-overlap}} = \sum_{i=2}^n \left[ (u_i^S - 1)\,\%\,(m_i p_i) \right] - \sum_{i=2}^n m_i p_i + n - 1 + \left[ u_1^S + \sum_{i=2}^n m_i p_i - n + 1 \right] \prod_{i=2}^n \left\lfloor \frac{u_i^S}{m_i p_i} \right\rfloor \quad (7) \] It can be deduced from (5), (7) that \(P_{\text{cyclic-non-overlap}} < P_{\text{mirror-non-overlap}}\). However, since the communication overhead is not hidden under the computation time, this schedule may sometimes result in a shorter total execution time, due to better exploitation of the available bandwidth. In particular, if there are only two SMP nodes along a dimension, no SMP node will have to both send and receive data along that dimension. Thus, the communication overhead will be halved.

3.3 Cluster Assignment to SMPs

Fig. 3. Cluster assignment to SMPs

Alternatively, following the approach of [16], generalizing it for $n$-dimensional spaces and taking into account that there is no need for communication among processors of the same SMP node, we may assign neighboring rows of tiles to the same CPU, as indicated in Fig. 3. In order to achieve this schedule, we cluster together neighboring tiles, mapping \(\lfloor \frac{u_i^S}{m_i p_i} \rfloor\) successive tiles along each dimension \(i = 2, \ldots, n\) to a supertile or "TILE". The resulting "TILE" space is then grouped and scheduled as in Section 2, so the "GROUP" \(j^G\) containing tile \((j_1^S, j_2^S, \ldots, j_n^S)\) will be executed, following the overlapping execution scheme, during the time "STEP" \(t(j^G) = \sum_{i=1}^{n} j_i^G\). As a "TILE" consists of \(\prod_{i=2}^{n} \lfloor \frac{u_i^S}{m_i p_i} \rfloor\) tiles, a "STEP" is equivalent to \(\prod_{i=2}^{n} \lfloor \frac{u_i^S}{m_i p_i} \rfloor\) time steps (excluding the DMA initialization and synchronization time).
Thus, the total number of steps required for the completion of the algorithm will be \( P_{\text{cluster-overlap}} = \prod_{i=2}^{n} \frac{u^S_i}{m_i} \left( \max t(j^S) - \min t(j^S) + 1 \right) \Rightarrow \)
\[ P_{\text{cluster-overlap}} = \prod_{i=2}^{n} \frac{u^S_i}{m_i} \left( u^S_1 - 2n + 2 + \sum_{i=2}^{n} \left\lceil \frac{u^S_i}{m_i} \right\rceil + \sum_{i=2}^{n} \left\lceil \frac{u^S_i}{m_i p_i} \right\rceil \right) \quad (8) \]
Lemma 1. It holds that \(P_{\text{cyclic-overlap}} \leq P_{\text{cluster-overlap}}\). Proof: Omitted due to lack of space.
Thus, this schedule results in more execution steps than the previous one. Their difference is due to the fact that, in this schedule, the filling of the pipeline is slower. In case \(u^S_1 \gg u^S_i \ (i = 2, \ldots, n)\), the time each processor is busy outweighs the pipeline filling time and it holds that \(P_{\text{cyclic-overlap}} \approx P_{\text{cluster-overlap}}\). However, the previous lemma has not taken into consideration the time required for the initialization of messages and for synchronization. Since the cluster assignment schedule requires fewer messages to be sent and less synchronization, in some cases it may prove more efficient in practice, as we will show in §4. Similarly, following the non-overlapping execution scheme, tile \((j^S_1, j^S_2, \ldots, j^S_n)\), corresponding to “GROUP” \(j^G = \left(j^S_1 + \sum_{i=2}^{n} \left\lfloor \frac{j^S_i}{m_i} \right\rfloor, \left\lfloor \frac{j^S_2}{m_2} \right\rfloor, \ldots, \left\lfloor \frac{j^S_n}{m_n} \right\rfloor\right)\), is executed during the time “STEP” \(t(j^S) = j^S_1 + \sum_{i=2}^{n} \left\lfloor \frac{j^S_i}{m_i} \right\rfloor\). A computation “subSTEP” is equivalent to \(\prod_{i=2}^{n} \frac{u^S_i}{m_i}\) computation substeps, but a communication “subSTEP” is equivalent to less than \(\prod_{i=2}^{n} \frac{u^S_i}{m_i}\) communication substeps. In particular, if the communication load is equal along all communication dimensions (as results from the method proposed in [22]), the amount of data to be transferred, as indicated in Fig.
4, is \(\prod_{i=2}^{n} \frac{u^S_i}{m_i} \sum_{i=2}^{n} \frac{1}{u^S_i} \leq \prod_{i=2}^{n} \frac{u^S_i}{m_i}\) times the communication load of a tile. Thus, the total number of steps required for the completion of the algorithm will be

Fig. 4. Clustering communication

\[ P_{\text{cluster-non-overlap}} = C \left( \max t(j^S) - \min t(j^S) + 1 \right), \quad \text{where } 1 \leq C \leq \prod_{i=2}^{n} \frac{u_i^S}{m_i} \Rightarrow \]
\[ P_{\text{cluster-non-overlap}} = C \left( u_1^S - n + 1 + \sum_{i=2}^{n} \left\lceil \frac{u_i^S}{m_i p_i} \right\rceil \right) \quad (9) \]
In conclusion, compared to the cyclic assignment schedule, this method has the drawback of slower pipeline filling. However, it results in less communication overhead, which significantly reduces the total execution time, especially when the non-overlapping execution scheme is applied.

3.4 Retiling

A more efficient schedule can be obtained if we adapt the size of tiles to the available number of SMPs (Fig. 5). That is, we retile the initial Iteration Space, so as to get \( u_i^{S'} = m_i p_i \ (i = 2, \ldots, n) \) and \( u_1^{S'} = u_1^S \prod_{i=2}^{n} \frac{u_i^S}{m_i p_i} \). Then, the size of a “new” tile will be equal to the size of an “old” tile and, consequently, a “new” computation step will be equivalent to an “old” computation step.
Following the overlapping execution scheme, the number of time steps required for the completion of the algorithm, according to formula (2), will be
\[ P_{\text{retile-overlap}} = \sum_{i=1}^{n} u_i^{S'} + \sum_{i=2}^{n} \left\lceil \frac{u_i^{S'}}{m_i} \right\rceil - 2n + 2 \Rightarrow \]
\[ P_{\text{retile-overlap}} = \sum_{i=2}^{n} (m_i + 1) p_i - 2n + 2 + u_1^S \prod_{i=2}^{n} \frac{u_i^S}{m_i p_i} \quad (10) \]
Using the non-overlapping execution scheme, the number of time steps required for the completion of the algorithm, according to formula (3), will be
\[ P_{\text{retile-non-overlap}} = \sum_{i=1}^{n} u_i^{S'} - n + 1 \Rightarrow \]
\[ P_{\text{retile-non-overlap}} = \sum_{i=2}^{n} m_i p_i - n + 1 + u_1^S \prod_{i=2}^{n} \frac{u_i^S}{m_i p_i} \quad (11) \]
From (5), (11), we can deduce that \( P_{\text{retile-non-overlap}} \leq P_{\text{cyclic-non-overlap}} \). In addition, a “new” computation substep is equivalent to an “old” computation substep, but a “new” communication substep is equivalent to less than an “old” communication substep. In particular, as in the cluster assignment schedule, if the communication load is equal along all communication dimensions, the amount of data to be transferred is \( \sum_{i=2}^{n} \frac{1}{(n-1) m_i p_i} \leq 1 \) times the communication load of an “old” tile. In conclusion, this schedule is in every case preferable to the previously proposed ones, assuming that there are no factors constraining the tile shape, such as false sharing or cache locality [14], [15], [21]. It can fully exploit the computational power of all the SMP nodes and it achieves a perfect load balance, without imposing any additional complexity on the initial schedule. But if, apart from parallel scheduling, there are other factors constraining the tile size and shape, this schedule may prove inefficient, since it totally reorganizes the execution order of iterations.
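As a quick numeric illustration, formulas (10) and (11) can be evaluated directly from the tile counts \(u_i^S\), the per-node CPU counts \(m_i\) and the node counts \(p_i\). The following Python sketch is ours (the parameter values in the example are hypothetical) and assumes, as retiling does, that each \(u_i^S\) is divisible by \(m_i p_i\):

```python
def retile_steps(u, m, p, overlap=True):
    """Evaluate formula (10) (overlap=True) or (11) (overlap=False).

    u = (u_1, ..., u_n): number of tiles per dimension before retiling
    m = (m_2, ..., m_n): CPUs per SMP node along each dimension
    p = (p_2, ..., p_n): SMP nodes along each dimension
    Assumes each u_i (i >= 2) is divisible by m_i * p_i.
    """
    n = len(u)
    # u_1' = u_1 * prod(u_i / (m_i * p_i)), the retiled first dimension
    u1_new = u[0]
    for ui, mi, pi in zip(u[1:], m, p):
        u1_new = u1_new * ui // (mi * pi)
    if overlap:  # formula (10)
        return sum((mi + 1) * pi for mi, pi in zip(m, p)) - 2 * n + 2 + u1_new
    else:        # formula (11)
        return sum(mi * pi for mi, pi in zip(m, p)) - n + 1 + u1_new

# Example: n = 2, u = (10, 8), one clustered dimension with
# m_2 = 2 CPUs per node on p_2 = 2 nodes (so u_2' = 4, u_1' = 20)
print(retile_steps((10, 8), (2,), (2,), overlap=True))   # -> 24
print(retile_steps((10, 8), (2,), (2,), overlap=False))  # -> 23
```

As expected from (10) and (11), the non-overlapping count is smaller here, but each of its steps also serializes communication after computation.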
4 Experimental Results

4.1 Experimental Platform

In order to evaluate the proposed methods, we use a Linux SMP cluster with 2 identical nodes. Each node has 1GB of RAM and 2 Pentium III @ 1266 MHz CPUs. The cluster nodes communicate through a Myrinet high performance interconnect, using the GM low level message passing system. In order to utilize the available processors in each SMP node as efficiently as possible, our implementation uses one multi-threaded process per SMP, with the number of threads equal to the number of CPUs. Multithreading support is based on the LinuxThreads library. Threads executing on the same SMP communicate using shared memory, eliminating the need for message passing. For the data exchange between processes executing on different SMPs, Myricom’s GM version 1.6.3 is used [17]. GM is a low-level message passing library for Myrinet. It comprises a library used by userspace programs, an OS driver (in our case, a Linux kernel module) and a Myrinet Control Program (MCP), which is executed on the LANai, the embedded RISC microprocessor on the Myrinet NIC. The GM driver is used during the execution of a userspace process to open and close ports and to allocate and free memory suitable for DMA transfers. A port is a communication endpoint, used as the interface between a userspace process and the NIC. Having opened a port, a process can communicate directly with the NIC without the need for system calls, bypassing the operating system. Thus, all data exchange is performed directly to and from userspace buffers. To provide flow control between the host and the NIC, sending and receiving messages is regulated by tokens. Initially, a process possesses a finite number of send and receive tokens. To be able to receive a message, the process must provide GM with a buffer in DMAable memory, relinquishing a receive token. When a message is received, the DMA engine on the Myrinet NIC places it directly into the userspace buffer.
The process polls for new messages and retrieves the receive token when a message arrives. The same applies to sending messages: the process relinquishes a send token by requesting the transmission of a message from a userspace buffer, then retrieves it when the send operation completes and an appropriate send completion callback function is executed by GM. As the data exchange between the host memory and the NIC is undertaken by the DMA engine on the NIC, without involving the CPU, overlapping of communication with computation is possible.

4.2 Experimental Data

We performed several series of experiments in order to evaluate and compare the practical speedups obtained using each one of the four alternative schedules, combined with both alternative execution schemes. Our test application code was the following:

    for (i=1; i<=X; i++)
        for (j=1; j<=Y; j++)
            for (k=1; k<=Z; k++)
                A[i][j][k] = func(A[i-1][j][k], A[i][j-1][k], A[i][j][k-1]);

where \( A \) is an array of \( X \times Y \times Z \) floats and \( X, Y \ll Z \). Without loss of generality, we consider, as a tile, a rectangle with \( ij \), \( ik \) and \( jk \) sides. The dimension \( k \) is the largest one, so all tiles along the \( k \)-axis are mapped onto the same processor, as proposed in [2, 9]. Each tile has its \( i \)- and \( j \)-dimensions equal to \( x \) and \( y \) respectively, and its \( k \)-dimension equal to \( z \). Thus, there are \( \frac{X}{x} \) tiles along dimension \( i \), \( \frac{Y}{y} \) tiles along dimension \( j \) and \( \frac{Z}{z} \) tiles along dimension \( k \). After implementing all four schedules in combination with both execution schemes, as described by the pseudo-code of Table 1, we measured the performance of all schedules and compared it with their theoretically expected performance. For various tile sizes, we conducted a series of experiments for each combination of schedule and execution scheme, varying the iteration space size.
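For reference, the test kernel above can be transcribed directly into a sequential Python version (our own sketch, assuming zero values on the \(i=0\), \(j=0\) and \(k=0\) boundary planes):

```python
def run_stencil(X, Y, Z, func):
    """Sequential version of the test loop nest.

    A[i][j][k] depends on its three 'lower' neighbors, which is exactly
    the dependence pattern that forces the wavefront/pipelined tiled
    execution discussed in the text. Boundary planes (index 0) are zero.
    """
    A = [[[0.0] * (Z + 1) for _ in range(Y + 1)] for _ in range(X + 1)]
    for i in range(1, X + 1):
        for j in range(1, Y + 1):
            for k in range(1, Z + 1):
                A[i][j][k] = func(A[i - 1][j][k], A[i][j - 1][k], A[i][j][k - 1])
    return A

# With func = a + b + c + 1, the values count weighted lattice paths:
A = run_stencil(2, 2, 2, lambda a, b, c: a + b + c + 1)
print(A[1][1][1], A[2][2][2])  # -> 1 16
```

Any tiled, parallel schedule of this nest must preserve exactly these three flow dependences per point.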
In Figs 6-8 we have plotted our experimental results along with the respective theoretical curves. As a measure of performance, we have used the ratio of the speedup obtained to the best possible speedup, that is, the ratio of the speedup obtained to the number of processors used. Thus, the closer a plot is to 1, the more efficient a schedule is. As can be seen in Figs 6-8, the practical completion times of our experiments differ from our theoretical predictions by at most 3%. For the overlapping communication schedules, this can be attributed to both the DMA engine on the Myrinet NIC and the CPU trying to access data in memory simultaneously.

**Table 1. Execution Schemes Implementation**

<table>
<thead>
<tr>
<th>Non-Overlapping Execution Scheme</th>
<th>Overlapping Execution Scheme</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<strong>Pre-computation part of communication:</strong><br>
<code>gm_provide_receive_buffer()</code><br>
do<br>
&nbsp;&nbsp;poll the GM event queue<br>
&nbsp;&nbsp;process the event<br>
until data received
</td>
<td>
If on first tile: execute a non-overlapping receive<br>
<code>gm_provide_receive_buffer()</code> for tile \((t_1 + 1, t_2, t_3)\)<br>
<code>gm_send_with_callback()</code> for tile \((t_1 - 1, t_2, t_3)\)
</td>
</tr>
<tr>
<td><strong>Computation of tile \((t_1, t_2, t_3)\)</strong></td>
<td><strong>Computation of tile \((t_1, t_2, t_3)\)</strong></td>
</tr>
<tr>
<td>
<strong>Post-computation part of communication:</strong><br>
<code>gm_send_with_callback()</code><br>
do<br>
&nbsp;&nbsp;poll the GM event queue<br>
&nbsp;&nbsp;process the event<br>
until data sent
</td>
<td>
If on last tile: execute a non-overlapping send
</td>
</tr>
<tr>
<td><strong>Barrier for threads in SMP</strong></td>
<td><strong>Barrier for threads in SMP</strong></td>
</tr>
</tbody>
</table>

**Fig. 6. Experimental Data: Tile Size 32 × 32 × 32**

One can easily deduce that in almost all cases the retiling schedule achieves the best performance, both theoretically and experimentally. This result was expected, since the retiling schedule exactly adjusts the tiles to the existing configuration of the cluster. However, in our experiments we have eliminated the effect of cache miss penalties by using small iteration space widths. If the iteration space dimensions that are not assigned to the same processor were too long, the retiling schedule could have destroyed the data locality achieved by optimally selected small tiles. Note also that the cluster assignment schedule using tile size $x$ is equivalent to the retiling schedule using tile size $4x$. This was expected, considering that by construction the iterations executed and the data sent in these two cases are the same. What differs is the execution order of iterations, but here we have eliminated the cache miss overhead, in order to test our schedules' optimality and not data locality. When following the non-overlapping execution scheme, the difference among the performances of the four schedules is mainly due to the volume of the data to be transferred. As depicted in Fig. 9, the mirror assignment schedule involves double the communication volume of the retiling and cluster assignment schedules, while the cyclic assignment schedule involves 6 times the same communication volume.
When following the overlapping execution scheme, since the communication volume is hidden under computation, their difference is due to the time steps that each SMP has to stall, waiting for the required data to arrive. The number of these time steps is the same for both the retiling and the cyclic assignment schedules. However, using the cluster or the mirror assignment schedule results in a multiple of this number of idle time steps, as depicted in Figs 1, 2. In addition, note that all schedules achieve better performance for long Iteration Spaces. This is due to the fact that, when the mapping dimension of the Iteration Space is comparatively short, the time required for the last processor to start computing after the first data have arrived is not negligible in comparison to the total execution time.

**Fig. 7.** Experimental Data: Tile Size 128 × 32 × 32

**Fig. 8.** Experimental Data: Tile Size 256 × 32 × 32

5 Conclusions

In this paper, we presented and experimentally compared four different methods for scheduling Tiled Iteration Spaces onto a cluster with a fixed number of SMPs. We concluded that the most efficient schedule is in most cases obtained when we adapt the size and shape of tiles to the configuration of the underlying architecture (retiling schedule). However, when this is not possible, or not desired, because tiles have already been optimally selected considering data locality [14], [15], [21], we propose either a cyclic assignment schedule, or clustering together neighboring tiles and handling them as a super-tile. The cyclic approach is preferable when the communication and computation substeps can be overlapped. In the opposite case, we propose the cluster assignment schedule, which considerably reduces the volume of data to be transferred.

Acknowledgement

Maria Athanasaki is partially supported by a research student scholarship, awarded by the A.S. Onassis public benefit foundation.

References
Erlang ODBC application version 2.0 Contents 1 Erlang ODBC User's Guide ...................................................... 1 1.1 Introduction .......................................................................... 1 1.1.1 Purpose ........................................................................... 1 1.1.2 Prerequisites ..................................................................... 1 1.1.3 About ODBC .................................................................... 1 1.1.4 About the Erlang ODBC application ................................ 1 1.2 Getting started ...................................................................... 2 1.2.1 Setting things up ............................................................. 2 1.2.2 Using the Erlang API ....................................................... 2 1.3 Databases ........................................................................... 5 1.3.1 Databases ........................................................................ 5 1.3.2 Database independence .................................................... 6 1.3.3 Data types ....................................................................... 6 1.3.4 Batch handling ............................................................... 8 1.4 Error handling ...................................................................... 8 1.4.1 Strategy ............................................................................ 8 1.4.2 The whole picture ........................................................... 9 2 Erlang ODBC Reference Manual ................................................. 13 2.1 odbc .................................................................................. 15 List of Figures ............................................................................ 23 List of Tables ............................................................................ 
25

Chapter 1 Erlang ODBC User's Guide

The Erlang ODBC Application provides an interface for accessing relational SQL-databases from Erlang.

1.1 Introduction

1.1.1 Purpose

The purpose of the Erlang ODBC application is to provide the programmer with an ODBC interface that has an Erlang/OTP touch and feel, so that the programmer may concentrate on solving his/her actual problem instead of struggling with pointers and memory allocation, which is not very relevant for Erlang. This user guide will give you some information about technical issues and provide some examples of how to use the Erlang ODBC interface.

1.1.2 Prerequisites

It is assumed that the reader is familiar with the Erlang programming language, concepts of OTP and has a basic understanding of relational databases and SQL.

1.1.3 About ODBC

Open Database Connectivity (ODBC) is a Microsoft standard for accessing relational databases that has become widely used. The ODBC standard provides a C-level application programming interface (API) for database access. It uses Structured Query Language (SQL) as its database access language.

1.1.4 About the Erlang ODBC application

Provides an Erlang interface to communicate with relational SQL-databases. It is built on top of Microsoft's ODBC interface and therefore requires that you have an ODBC driver to the database that you want to connect to. The Erlang ODBC application is designed using version 3.0 of the ODBC standard; however, using the option `scrollable_cursors, off` for a connection has been known to make it work for at least some 2.X drivers.

1.2 Getting started

1.2.1 Setting things up

As the Erlang ODBC application is dependent on third party products, there are a few administrative things that need to be done before you can get things up and running.

- The first thing you need to do is to make sure you have an ODBC driver installed for the database that you want to access.
Both the client machine where you plan to run your Erlang node and the server machine running the database need the ODBC driver. (In some cases the client and server may be the same machine.)
- Secondly, you might need to set environment variables and paths to appropriate values. This may differ a lot between different operating systems, databases and ODBC drivers. This is a configuration problem related to the third party product and hence we can not give you a standard solution in this guide.
- The Erlang ODBC application consists of both Erlang and C code. The C code is delivered as a precompiled executable for Windows and Solaris in the commercial build. In the open source distribution it is built the same way as all other applications, using configure and make. You may want to provide the path to your ODBC libraries using --with-odbc=PATH.

Note: The Erlang ODBC application should run on all Unix dialects, including Linux, and on Windows 2000, Windows XP and NT. But currently it is only tested for Solaris, Windows 2000, Windows XP and NT.

1.2.2 Using the Erlang API

The following dialog within the Erlang shell illustrates the functionality of the Erlang ODBC interface. The table used in the example does not have any relevance to anything that exists in reality; it is just a simple example. The example was created using SQL Server 7.0 with Service Pack 1 as database and the ODBC driver for SQL Server with version 2000.80.194.00.

1 > application:start(odbc).
ok

Connect to the database

2 > {ok, Ref} = odbc:connect("DSN=sql-server;UID=aladin;PWD=sesame", []).
{ok,<0.342.0>}

Create a table

3 > odbc:sql_query(Ref, "CREATE TABLE EMPLOYEE (NR integer, FIRSTNAME char varying(20), LASTNAME char varying(20), GENDER char(1), PRIMARY KEY(NR))").
{updated,undefined}

Insert some data

4 > odbc:sql_query(Ref, "INSERT INTO EMPLOYEE VALUES(1, 'Jane', 'Doe', 'F')").
{updated,1}

Check what data types the database assigned for the columns.
Hopefully this is not a surprise, but sometimes it can be! These are the data types that you should use if you want to do a parameterized query.

5 > odbc:describe_table(Ref, "EMPLOYEE").
{ok, [{"NR", sql_integer}, {"FIRSTNAME", {sql_varchar, 20}}, {"LASTNAME", {sql_varchar, 20}}, {"GENDER", {sql_char, 1}}]}

Use a parameterized query to insert many rows in one go.

6 > odbc:param_query(Ref, "INSERT INTO EMPLOYEE (NR, FIRSTNAME, "
"LASTNAME, GENDER) VALUES(?, ?, ?, ?)",
[{sql_integer, [2,3,4,5,6,7,8]},
 {{sql_varchar, 20},
  ["John", "Monica", "Ross", "Rachel", "Piper", "Prue", "Louise"]},
 {{sql_varchar, 20},
  ["Doe", "Geller", "Geller", "Green", "Halliwell", "Halliwell", "Lane"]},
 {{sql_char, 1}, ["M", "F", "M", "F", "F", "F", "F"]}]).
{updated,7}

Fetch all data in the table employee

7 > odbc:sql_query(Ref, "SELECT * FROM EMPLOYEE").
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],
          [{1,"Jane","Doe","F"},
           {2,"John","Doe","M"},
           {3,"Monica","Geller","F"},
           {4,"Ross","Geller","M"},
           {5,"Rachel","Green","F"},
           {6,"Piper","Halliwell","F"},
           {7,"Prue","Halliwell","F"},
           {8,"Louise","Lane","F"}]}

Associate a result set containing the whole table EMPLOYEE to the connection. The number of rows in the result set is returned.

8 > odbc:select_count(Ref, "SELECT * FROM EMPLOYEE").
{ok,8}

You can always traverse the result set sequentially by using next

9 > odbc:next(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{1,"Jane","Doe","F"}]}

10 > odbc:next(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{2,"John","Doe","M"}]}

If your driver supports scrollable cursors you have a little more freedom, and can do things like this.

11 > odbc:last(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{8,"Louise","Lane","F"}]}

12 > odbc:prev(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{7,"Prue","Halliwell","F"}]}

13 > odbc:first(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{1,"Jane","Doe","F"}]}

14 > odbc:next(Ref).
{selected,["NR","FIRSTNAME","LASTNAME","GENDER"],[{2,"John","Doe","M"}]}

Fetch the fields FIRSTNAME and NR for all female employees

15 > odbc:sql_query(Ref, "SELECT FIRSTNAME, NR FROM EMPLOYEE WHERE GENDER = 'F'").
{selected,["FIRSTNAME","NR"],
          [{"Jane",1},
           {"Monica",3},
           {"Rachel",5},
           {"Piper",6},
           {"Prue",7},
           {"Louise",8}]}

Fetch the fields FIRSTNAME and NR for all female employees and sort them on the field FIRSTNAME.

16 > odbc:sql_query(Ref, "SELECT FIRSTNAME, NR FROM EMPLOYEE WHERE GENDER = 'F' ORDER BY FIRSTNAME").
{selected,["FIRSTNAME","NR"],
          [{"Jane",1},
           {"Louise",8},
           {"Monica",3},
           {"Piper",6},
           {"Prue",7},
           {"Rachel",5}]}

Associate a result set that contains the fields FIRSTNAME and NR for all female employees to the connection. The number of rows in the result set is returned.

17 > odbc:select_count(Ref, "SELECT FIRSTNAME, NR FROM EMPLOYEE WHERE GENDER = 'F'").
{ok,6}

A few more ways of retrieving parts of the result set when the driver supports scrollable cursors. Note that next will work even without support for scrollable cursors.

18 > odbc:select(Ref, {relative, 2}, 3).
{selected,["FIRSTNAME","NR"],[{"Monica",3},{"Rachel",5},{"Piper",6}]}

19 > odbc:select(Ref, next, 2).
{selected,["FIRSTNAME","NR"],[{"Prue",7},{"Louise",8}]}

1.3 Databases

1.3.1 Databases

If you need to access a relational database such as SQL Server, MySQL, Postgres, Oracle, Sybase etc. from your Erlang application, using the Erlang ODBC interface is the way to go about it. The Erlang ODBC application should work for any relational database that has an ODBC driver, but currently it is only tested for SQL Server and Oracle.

1.3.2 Database independence

The Erlang ODBC interface is in principle database independent, i.e. an Erlang program using the interface could be run unchanged against different databases. But as SQL is used, it is alas possible to write database dependent programs.
Even though SQL is an ANSI standard meant to be database independent, different databases have proprietary extensions to SQL defining their own data types. If you keep to the ANSI data types you will minimize the problem. But unfortunately there is no guarantee that all databases actually treat the ANSI data types equivalently. For instance, an installation of Oracle Enterprise release 8.0.5.0.0 for Unix will accept that you create a table column with the ANSI data type integer, but when retrieving values from this column the driver reports that it is of type SQL_DECIMAL(0, 38) and not SQL_INTEGER as you may have expected. Another obstacle is that some drivers do not support scrollable cursors, which has the effect that the only way to traverse the result set is sequentially, with next, from the first row to the last, and once you pass a row you can not go back. This means that some functions in the interface will not work together with certain drivers. A similar problem is that not all drivers support "row count" for select queries, so the function select_count/[3,4] will return {ok, undefined} instead of {ok, NrRows}, where NrRows is the number of rows in the result set.

1.3.3 Data types

The following is a list of the ANSI data types. For details, turn to the ANSI standard documentation. Usage of other data types is of course possible, but you should be aware that this makes your application dependent on the database you are using at the moment.

- CHARACTER (size), CHAR (size)
- NUMERIC (precision, scale), DECIMAL (precision, scale), DEC (precision, scale), where precision is the total number of digits and scale is the total number of decimal places
- INTEGER, INT, SMALLINT
- FLOAT (precision)
- REAL
- DOUBLE PRECISION
- CHARACTER VARYING (size), CHAR VARYING (size)

When inputting data using sql_query/[2,3], the values will always be in string format as they are part of an SQL query. Example:

```erlang
odbc:sql_query(Ref, "INSERT INTO TEST VALUES(1, 2, 3)").
```

**Note:** When the value of the data to input is a string, it has to be quoted with '. Example:

```erlang
odbc:sql_query(Ref, "INSERT INTO EMPLOYEE VALUES(1, 'Jane', 'Doe', 'F')").
```

You may also input data using param_query/[3,4] [page 19], and then the input data will have the Erlang type corresponding to the ODBC type of the column. See ODBC to Erlang mapping [page 6]. When selecting data from a table, all data types are returned from the database to the ODBC driver as an ODBC data type. The tables below show the mapping between those data types and what is returned by the Erlang API.

<table>
<thead>
<tr>
<th>ODBC Data Type</th>
<th>Erlang Data Type</th>
</tr>
</thead>
<tbody>
<tr><td>SQL_CHAR(size)</td><td>String</td></tr>
<tr><td>SQL_NUMERIC(p,s) when (p ≥ 0) and (p ≤ 9) and (s = 0)</td><td>Integer</td></tr>
<tr><td>SQL_NUMERIC(p,s) when ((p ≥ 10) and (p ≤ 15) and (s = 0)) or ((s ≤ 15) and (s > 0))</td><td>Float</td></tr>
<tr><td>SQL_NUMERIC(p,s) when (p ≥ 16)</td><td>String</td></tr>
<tr><td>SQL_DECIMAL(p,s) when (p ≥ 0) and (p ≤ 9) and (s = 0)</td><td>Integer</td></tr>
<tr><td>SQL_DECIMAL(p,s) when ((p ≥ 10) and (p ≤ 15) and (s = 0)) or ((s ≤ 15) and (s > 0))</td><td>Float</td></tr>
<tr><td>SQL_DECIMAL(p,s) when (p ≥ 16)</td><td>String</td></tr>
<tr><td>SQL_INTEGER</td><td>Integer</td></tr>
<tr><td>SQL_SMALLINT</td><td>Integer</td></tr>
<tr><td>SQL_FLOAT</td><td>Float</td></tr>
<tr><td>SQL_REAL</td><td>Float</td></tr>
<tr><td>SQL_DOUBLE</td><td>Float</td></tr>
<tr><td>SQL_VARCHAR(size)</td><td>String</td></tr>
</tbody>
</table>

Table 1.1: Mapping of ODBC data types to the Erlang data types returned to the Erlang application.
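The precision/scale rules for SQL_NUMERIC and SQL_DECIMAL in Table 1.1 can be summarized by a small decision function. This is a Python sketch of the mapping logic only (not of the Erlang API), and since the table's conditions overlap, we assume its rows are checked top to bottom with the first match winning:

```python
def numeric_erlang_type(p, s):
    """Erlang type returned for an SQL_NUMERIC(p, s) or SQL_DECIMAL(p, s)
    column, following Table 1.1: small integral values map to Integer,
    medium-precision (or fractional) values to Float, and anything that
    could lose precision in a float is returned as a String.
    Rows are applied in table order (first match wins)."""
    if 0 <= p <= 9 and s == 0:
        return "integer"
    if (10 <= p <= 15 and s == 0) or (0 < s <= 15):
        return "float"
    return "string"

print(numeric_erlang_type(5, 0))   # -> integer
print(numeric_erlang_type(12, 0))  # -> float
print(numeric_erlang_type(5, 2))   # -> float
print(numeric_erlang_type(20, 0))  # -> string
```

The practical point of the table is simply that values which fit exactly in an Erlang integer or float are converted, and everything else is delivered as a string to avoid silent precision loss.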
<table>
<thead>
<tr> <th>ODBC Data Type</th> <th>Erlang Data Type</th> </tr>
</thead>
<tbody>
<tr> <td>SQL_TYPE_DATE</td> <td>String</td> </tr>
<tr> <td>SQL_TYPE_TIME</td> <td>String</td> </tr>
<tr> <td>SQL_TYPE_TIMESTAMP</td> <td>String</td> </tr>
<tr> <td>SQL_LONGVARCHAR</td> <td>String</td> </tr>
<tr> <td>SQL_BINARY</td> <td>String</td> </tr>
<tr> <td>SQL_VARBINARY</td> <td>String</td> </tr>
<tr> <td>SQL_LONGVARBINARY</td> <td>String</td> </tr>
<tr> <td>SQL_TINYINT</td> <td>Integer</td> </tr>
<tr> <td>SQL_BIT</td> <td>Boolean</td> </tr>
</tbody>
</table>

Table 1.2: Mapping of extended ODBC data types to the Erlang data types returned to the Erlang application.

**Note:** To find out which data types will be returned for the columns in a table, use the function `describe_table/[2,3]` [page 18].

1.3.4 Batch handling

Grouping of SQL queries can be desirable in order to reduce network traffic. Another benefit can be that the data source can sometimes optimize execution of a batch of SQL queries. Explicit batches and procedures, described below, will result in multiple results being returned from sql_query/[2,3], while with parameterized queries only one result will be returned from param_query/[3,4].

Explicit batches

The most basic form of a batch is created by semicolon-separated SQL queries, for example:

"SELECT * FROM FOO; SELECT * FROM BAR"

or

"INSERT INTO FOO VALUES(1, 'bar'); SELECT * FROM FOO"

Procedures

Different databases may also support creation of procedures that contain more than one SQL query. For example, the following SQL Server-specific statement creates a procedure that returns a result set containing information about the employees that work at a department and a result set listing the customers of that department.
```
CREATE PROCEDURE DepartmentInfo (@DepartmentID INT) AS
SELECT * FROM Employee WHERE department = @DepartmentID
SELECT * FROM Customers WHERE department = @DepartmentID
```

Parameterized queries

To effectively perform a batch of similar queries, you can use parameterized queries. This means that in your SQL query string you mark the places that would usually contain values with question marks, and then provide lists of values for each parameter. For instance, you can use this to insert multiple rows into the EMPLOYEE table while executing only a single SQL statement; for example code, see the "Using the Erlang API" [page 3] section in the "Getting Started" chapter.

1.4 Error handling

1.4.1 Strategy

On a conceptual level, a database connection started using the Erlang ODBC API is a basic client-server application. The client process uses the API to start and communicate with the server process that manages the connection. The strategy of the Erlang ODBC application is that programming faults in the application itself will cause the connection process to terminate abnormally. (When a process terminates abnormally its supervisor will log relevant error reports.) Calls to API functions during or after termination of the connection process will return {error, connection_closed}. Contextual errors, on the other hand, will not terminate the connection; they will only return {error, Reason} to the client, where Reason may be any Erlang term.

Clients

The connection is associated with the process that created it and can only be accessed through it. The reason for this is to preserve the semantics of result sets and transactions when `select_count/[2,3]` is called or `auto_commit` is turned off. Attempts to use the connection from another process will fail. This will not affect the connection. On the other hand, if the client process dies the connection will be terminated.

Timeouts

All requests made by the client to the connection are synchronous.
If the timeout is used and expires, the client process will exit with reason timeout. Probably the right thing to do is to let the client die and perhaps be restarted by its supervisor. But if the client chooses to catch this timeout, it is a good idea to wait a little while before trying again. If too many consecutive timeouts are caught, the connection process will conclude that there is something radically wrong and terminate the connection.

Guards

All API functions are guarded, and if you pass an argument of the wrong type a runtime error will occur. All input parameters to internal functions are trusted to be correct. It is good programming practice to only distrust input from truly external sources. You are not supposed to catch these errors; doing so will only make the code very messy and much more complex, which introduces more bugs and in the worst case also covers up the actual faults. Put your effort into testing instead; you should trust your own input.

1.4.2 The whole picture

As the Erlang ODBC application relies on third-party products and communicates with a database that probably runs on another computer in the network, there are plenty of things that might go wrong. To fully understand what might happen, it helps to know the design of the Erlang ODBC application, hence here follows a short description of the current design.

Note: The design is something that might, though not necessarily will, change in future releases, while the semantics of the API will not change, as they are independent of the implementation.

When you do application:start(odbc) the only thing that happens is that a supervisor process is started. For each call to the API function connect/2 a process is spawned and added as a child to the Erlang ODBC supervisor. The supervisor's only tasks are to provide error-log reports, should a child process die abnormally, and the possibility to do a code change.
Only the client process has the knowledge to decide if this connection-managing process should be restarted. The Erlang connection process spawned by connect/2 will open a port to a C-process that handles the communication with the database through Microsoft's ODBC API. The Erlang port will be kept open for exit-signal propagation: if something goes wrong in the C-process and it exits, we want to know as much as possible about the reason. The main communication with the C-process is done through sockets. The C-process consists of two threads: the supervisor thread and the database handler thread. The supervisor thread checks for shutdown messages on the supervisor socket, and the database handler thread receives requests and sends answers on the database socket. If the database thread seems to hang on some database call, the Erlang control process will send a shutdown message on the supervisor socket; in this case the C-process will exit. If the C-process crashes or exits, it will bring the Erlang process down too, and vice versa, i.e. the connection is terminated.

**Note:** The function `connect/2` will start the odbc application if that is not already done. In this case a supervisor information log will be produced, stating that the odbc application was started as a temporary application. It is really the responsibility of the application that uses the API to make sure it is started in the desired way.

Error types

The types of errors that may occur can be divided into the following categories:

- **Configuration problems** - Everything from the database not being set up correctly to the C-program that should be run through the Erlang port not being compiled for your platform.
- **Errors discovered by the ODBC driver** - If calls to the ODBC driver fail due to circumstances that cannot be controlled by the Erlang ODBC application programmer, an error string will be dug up from the driver. This string will be the `Reason` in the `{error, Reason}` return value.
How good this error message is will of course be driver dependent. Examples of such circumstances are trying to insert the same key twice, invalid SQL queries, and the database having gone offline.

- **Connection termination** - If a connection is terminated in an abnormal way, or if you try to use a connection that you have already terminated in a normal way by calling `disconnect/1`, the return value will be `{error, connection_closed}`. A connection could end abnormally because of a programming error in the Erlang ODBC application, but also if the ODBC driver crashes.
- **Contextual errors** - If API functions are used in the wrong context, the `Reason` in the error tuple will be a descriptive atom. For instance, if you try to call the function `last/[1,2]` without first calling `select_count/[2,3]` to associate a result set with the connection; if the ODBC driver does not support some functions; or if you disabled some functionality for a connection and then try to use it.

Erlang ODBC Reference Manual

Short Summaries

- Erlang Module **odbc** (page 15) - Erlang ODBC application

**odbc**

The following functions are exported:

- `commit(Ref, CommitMode)`: Commits or rolls back a transaction.
- `commit(Ref, CommitMode, TimeOut)`: Commits or rolls back a transaction.
- `connect(ConnectStr, Options)`: Connects to the database.
- `connect(ConnectStr, Options, TimeOut)`: Connects to the database.
- `disconnect(Ref)`: Disconnects from the database.
- `disconnect(Ref, TimeOut)`: Disconnects from the database.
- `describe_table(Ref, Table)`: Queries the database to find out the data types of the columns of the table `Table`.
- `describe_table(Ref, Table, Timeout)`: Queries the database to find out the data types of the columns of the table `Table`.
- `first(Ref)`: Returns the first row of the result set and positions a cursor at this row.
- `first(Ref, TimeOut)`: Returns the first row of the result set and positions a cursor at this row.
- `last(Ref)`: Returns the last row of the result set and positions a cursor at this row.
- `last(Ref, TimeOut)`: Returns the last row of the result set and positions a cursor at this row.
- `next(Ref)`: Returns the next row of the result set relative to the current cursor position and positions the cursor at this row.
- `next(Ref, TimeOut)`: Returns the next row of the result set relative to the current cursor position and positions the cursor at this row.
- `param_query(Ref, SQLQuery, Params)` [page 19]: Executes a parameterized SQL query.
- `param_query(Ref, SQLQuery, Params, TimeOut)` [page 19]: Executes a parameterized SQL query.
- `prev(Ref)` [page 19]: Returns the previous row of the result set relative to the current cursor position and positions the cursor at this row.
- `prev(Ref, TimeOut)` [page 19]: Returns the previous row of the result set relative to the current cursor position and positions the cursor at this row.
- `sql_query(Ref, SQLQuery)` [page 20]: Executes a SQL query or a batch of SQL queries. If it is a SELECT query the result set is returned, in the format `{selected, ColNames, Rows}`. For other query types the tuple `{updated, NRows}` is returned, and for batched queries, if the driver supports them, this function can also return a list of result tuples.
- `sql_query(Ref, SQLQuery, TimeOut)` [page 20]: Executes a SQL query or a batch of SQL queries. If it is a SELECT query the result set is returned, in the format `{selected, ColNames, Rows}`. For other query types the tuple `{updated, NRows}` is returned, and for batched queries, if the driver supports them, this function can also return a list of result tuples.
- `select_count(Ref, SelectQuery)` [page 20]: Executes a SQL SELECT query and associates the result set with the connection.
A cursor is positioned before the first row in the result set and the tuple `{ok, NrRows}` is returned.

- `select_count(Ref, SelectQuery, TimeOut)` [page 20]: Executes a SQL SELECT query and associates the result set with the connection. A cursor is positioned before the first row in the result set and the tuple `{ok, NrRows}` is returned.
- `select(Ref, Position, N)` [page 20]: Selects N consecutive rows of the result set.
- `select(Ref, Position, N, TimeOut)` [page 21]: Selects N consecutive rows of the result set.

odbc

Erlang Module

This application provides an Erlang interface to communicate with relational SQL databases. It is built on top of Microsoft's ODBC interface and therefore requires that you have an ODBC driver to the database that you want to connect to.

Note: The functions first/1,2, last/1,2, next/1,2, prev/1,2 and select/3,4 assume there is a result set associated with the connection to work on. Calling the function select_count/2,3 associates such a result set with the connection. Calling select_count again will remove the current result set association and create a new one. Calling a function which does not operate on an associated result set, such as sql_query/2,3, will remove the current result set association. Unfortunately, some drivers only support sequential traversal of the result set, i.e. they do not support what in the ODBC world is known as scrollable cursors. This has the effect that functions such as first/1,2, last/1,2, prev/1,2, etc. will return {error, driver_does_not_support_function}.

COMMON DATA TYPES

Here follow the type definitions that are used by more than one function in the ODBC API.

Note: The type TimeOut has the default value infinity, so for instance commit(Ref, CommitMode) is the same as commit(Ref, CommitMode, infinity). If the timeout expires the client will exit with the reason timeout.
- connection_reference() - as returned by connect/2
- time_out() = milliseconds() | infinity
- milliseconds() = integer() >= 0
- common_reason() = connection_closed | term() - some kind of explanation of what went wrong
- string() = list of ASCII characters
- col_name() = string() - Name of a column in the result set
- col_names() = [col_name()] - A list of the column names selected in the result set.
- row() = {value()} - Tuple of column values, e.g. one row of the result set.
- value() = null | term() - A column value.
- rows() = [row()] - A list of rows from the result set.
- result_tuple() = {updated, n_rows()} | {selected, col_names(), rows()}
- n_rows() = integer() - The number of affected rows for UPDATE, INSERT, or DELETE queries. For other query types the value is driver defined, and hence should be ignored.
- odbc_data_type() = sql_integer | sql_smallint | sql_tinyint | {sql_decimal, precision(), scale()} | {sql_numeric, precision(), scale()} | {sql_char, size()} | {sql_varchar, size()} | {sql_float, precision()} | sql_real | sql_double | sql_bit | atom()
- precision() = integer()
- scale() = integer()
- size() = integer()

ERROR HANDLING

The error handling strategy and possible error sources are described in the Erlang ODBC User's Guide [page 8].

Exports

commit(Ref, CommitMode) ->
commit(Ref, CommitMode, TimeOut) -> ok | {error, Reason}

Types:

- Ref = connection_reference()
- CommitMode = commit | rollback
- TimeOut = time_out()
- Reason = not_an_explicit_commit_connection | process_not_owner_of_odbc_connection | common_reason()

Commits or rolls back a transaction. Needed on connections where automatic commit is turned off.

connect(ConnectStr, Options) -> {ok, Ref} | {error, Reason}

Types:

- ConnectStr = string() - An example of a connection string: "DSN=sql-server;UID=alladin;PWD=sesame" where DSN is your ODBC Data Source Name, UID is a database user id and PWD is the password for that user.
These are usually the attributes required in the connection string, but some drivers have other driver-specific attributes, for example "DSN=Oracle8;DBQ=gandalf;UID=alladin;PWD=sesame" where DBQ is your TNSNAMES.ORA entry name, i.e. an Oracle-specific configuration attribute.

- Options = [] | [option()] - All options have default values.
- option() = {auto_commit, auto_commit_mode()} | {timeout, milliseconds()} | {tuple_row, tuple_mode()} | {scrollable_cursors, use_scrollable_cursors()} | {trace_driver, trace_mode()} - The default timeout is infinity.
- auto_commit_mode() = on | off - Default is on.
- tuple_mode() = on | off - Default is on. The option is deprecated and should not be used in new code.
- use_scrollable_cursors() = on | off - Default is on.
- trace_mode() = on | off - Default is off.
- Ref = connection_reference() - should be used to access the connection.
- Reason = port_program_executable_not_found | common_reason()

Opens a connection to the database. The connection is associated with the process that created it and can only be accessed through it. This function may spawn new processes to handle the connection. These processes will terminate if the process that created the connection dies or if you call disconnect/1. If automatic commit mode is turned on, each query will be considered an individual transaction and will be automatically committed after it has been executed. If you want more than one query to be part of the same transaction, the automatic commit mode should be turned off. Then you will have to call commit/3 explicitly to end a transaction. By default, result sets are returned as lists of tuples. The TupleMode option still exists to keep some degree of backwards compatibility. If the option is set to off, result sets will be returned as lists of lists instead of lists of tuples. Scrollable cursors are nice but cause some overhead.
For some connections speed might be more important than flexible data access, and then you can disable scrollable cursors for a connection, limiting the API but gaining speed. If trace mode is turned on, this tells the ODBC driver to write a trace log to the file SQL.LOG, which is placed in the current directory of the Erlang emulator. This information may be useful if you suspect there might be a bug in the Erlang ODBC application, and it might be relevant for you to send this file to our support. Otherwise you will probably not have much use for it.

Note: For more information about the ConnectStr, see the description of the function SQLDriverConnect in [1].

disconnect(Ref) -> ok | {error, Reason}

Types:

- Ref = connection_reference()
- Reason = process_not_owner_of_odbc_connection

Closes a connection to a database. This will also terminate all processes that may have been spawned when the connection was opened. This call will always succeed. If the connection cannot be disconnected gracefully it will be brutally killed. However, you may receive an error message as result if you try to disconnect a connection started by another process.

describe_table(Ref, Table) ->
describe_table(Ref, Table, Timeout) -> {ok, Description} | {error, Reason}

Types:

- Ref = connection_reference()
- Table = string() - Name of database table.
- Timeout = time_out() - Timeout for the query.
- Description = [{col_name(), odbc_data_type()}]
- Reason = common_reason()

Queries the database to find out the ODBC data types of the columns of the table Table.

first(Ref) ->
first(Ref, Timeout) -> {selected, ColNames, Rows} | {error, Reason}

Types:

- Ref = connection_reference()
- Timeout = time_out()
- ColNames = col_names()
- Rows = rows()
- Reason = result_set_does_not_exist | driver_does_not_support_function | scrollable_cursors_disabled | process_not_owner_of_odbc_connection | common_reason()

Returns the first row of the result set and positions a cursor at this row.
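As a hedged usage sketch of first/[1,2] (assuming Ref is an open connection with scrollable cursors enabled, and a table FRUIT that is an assumption, not part of this manual):

```erlang
%% Associate a result set with the connection, then jump to its
%% first row; this requires scrollable cursors to be enabled.
{ok, _NrRows} = odbc:select_count(Ref, "SELECT * FROM FRUIT"),
{selected, _ColNames, Rows} = odbc:first(Ref).
%% Rows contains a single row, and the cursor now stands on it.
```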
last(Ref) ->
last(Ref, Timeout) -> {selected, ColNames, Rows} | {error, Reason}

Types:

- Ref = connection_reference()
- Timeout = time_out()
- ColNames = col_names()
- Rows = rows()
- Reason = result_set_does_not_exist | driver_does_not_support_function | scrollable_cursors_disabled | process_not_owner_of_odbc_connection | common_reason()

Returns the last row of the result set and positions a cursor at this row.

next(Ref) ->
next(Ref, TimeOut) -> {selected, ColNames, Rows} | {error, Reason}

Types:

- Ref = connection_reference()
- TimeOut = time_out()
- ColNames = col_names()
- Rows = rows()
- Reason = result_set_does_not_exist | process_not_owner_of_odbc_connection | common_reason()

Returns the next row of the result set relative to the current cursor position and positions the cursor at this row. If the cursor is positioned at the last row of the result set when this function is called, the returned value will be {selected, ColNames, []}, i.e. the list of row values is empty, indicating that there is no more data to fetch.

param_query(Ref, SQLQuery, Params) ->
param_query(Ref, SQLQuery, Params, TimeOut) -> ResultTuple | {error, Reason}

Types:

- Ref = connection_reference()
- SQLQuery = string() - a SQL query with parameter markers/placeholders in the form of question marks.
- Params = [{odbc_data_type(), [value()]}]
- TimeOut = time_out()
- Values = term() - Must be consistent with the Erlang data type that corresponds to the ODBC data type ODBCDataType.

Executes a parameterized SQL query. For an example, see the "Using the Erlang API" [page 3] section in the "Getting Started" chapter.

Note: Use the function describe_table/[2,3] to find out which ODBC data type is expected for each column of a table. If a column has a data type that is described with capital letters, it is unfortunately not currently supported by the param_query function.
To know which Erlang data type corresponds to an ODBC data type, see the Erlang to ODBC mapping [page 6].

prev(Ref) ->
prev(Ref, TimeOut) -> {selected, ColNames, Rows} | {error, Reason}

Types:

- Ref = connection_reference()
- TimeOut = time_out()
- ColNames = col_names()
- Rows = rows()
- Reason = result_set_does_not_exist | driver_does_not_support_function | scrollable_cursors_disabled | process_not_owner_of_odbc_connection | common_reason()

Returns the previous row of the result set relative to the current cursor position and positions the cursor at this row.

```erlang
sql_query(Ref, SQLQuery) ->
sql_query(Ref, SQLQuery, TimeOut) -> ResultTuple | [ResultTuple] | {error, Reason}
```

Types:

- `Ref = connection_reference()`
- `SQLQuery = string()` - The string may be composed of several SQL queries separated by a ";"; this is called a batch.
- `TimeOut = time_out()`
- `ResultTuple = result_tuple()`
- `Reason = process_not_owner_of_odbc_connection | common_reason()`

Executes a SQL query or a batch of SQL queries. If it is a SELECT query the result set is returned, in the format `{selected, ColNames, Rows}`. For other query types the tuple `{updated, NRows}` is returned, and for batched queries, if the driver supports them, this function can also return a list of result tuples.

**Note:** Some drivers may not have the information about the number of affected rows available, and then the return value may be `{updated, undefined}`.

The list of column names is ordered in the same way as the list of values of a row, e.g. the first `ColName` is associated with the first `Value` in a `Row`.

```erlang
select_count(Ref, SelectQuery) ->
select_count(Ref, SelectQuery, TimeOut) -> {ok, NRows} | {error, Reason}
```

Types:

- `Ref = connection_reference()`
- `SelectQuery = string()` - SQL SELECT query.
- `TimeOut = time_out()`
- `NRows = n_rows()`
- `Reason = process_not_owner_of_odbc_connection | common_reason()`

Executes a SQL SELECT query and associates the result set with the connection. A cursor is positioned before the first row in the result set and the tuple `{ok, NRows}` is returned.

**Note:** Some drivers may not have the information about the number of rows in the result set; then `NRows` will have the value `undefined`.
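As a sketch of the batch behaviour of sql_query/[2,3] (the FRUIT table is an assumption, and whether a list of result tuples is returned depends on the driver):

```erlang
%% A batch of two queries in one call; drivers that support batches
%% return one result tuple per query in the batch.
Results = odbc:sql_query(Ref,
                         "INSERT INTO FRUIT VALUES(2, 'kiwi'); "
                         "SELECT * FROM FRUIT").
%% Results may then look like [{updated, 1}, {selected, Cols, Rows}].
```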
```erlang
select(Ref, Position, N) ->
select(Ref, Position, N, TimeOut) -> {selected, ColNames, Rows} | {error, Reason}
```

Types:

- `Ref = connection_reference()`
- `Position = next | {relative, Pos} | {absolute, Pos}` - Selection strategy; determines at which row in the result set to start the selection.
- `Pos = integer()` - Should indicate a row number in the result set. When used together with the option `relative` it will be used as an offset from the current cursor position; when used together with the option `absolute` it will be interpreted as a row number.
- `N = integer()`
- `TimeOut = time_out()`
- `Reason = result_set_does_not_exist | driver_does_not_support_function | scrollable_cursors_disabled | process_not_owner_of_odbc_connection | common_reason()`

Selects N consecutive rows of the result set. If Position is next, it is semantically equivalent to calling next/[1,2] N times. If Position is {relative, Pos}, Pos will be used as an offset from the current cursor position to determine the first selected row. If Position is {absolute, Pos}, Pos will be the number of the first row selected. After this function has returned, the cursor is positioned at the last selected row. If there are fewer than N rows left in the result set, the length of Rows will be less than N. If the first row to select happens to be beyond the last row of the result set, the returned value will be {selected, ColNames, []}, i.e. the list of row values is empty, indicating that there is no more data to fetch.
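A hedged sketch of the selection strategies (assuming Ref is an open connection with an associated result set and scrollable cursors enabled):

```erlang
%% Fetch rows three at a time, then jump to an absolute position.
{selected, _Cols, Rows1} = odbc:select(Ref, next, 3),
{selected, _Cols, Rows2} = odbc:select(Ref, {absolute, 10}, 1).
%% Rows2 is [] if the result set has fewer than 10 rows.
```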
REFERENCES

[1] Microsoft ODBC 3.0, Programmer's Reference and SDK Guide. See also http://msdn.microsoft.com/

List of Figures

1.1 Architecture of the Erlang odbc application .......... 10

List of Tables

1.1 Mapping of ODBC data types to the Erlang data types returned to the Erlang application. .......... 7
1.2 Mapping of extended ODBC data types to the Erlang data types returned to the Erlang application. .......... 7

Index of Modules and Functions

Modules are typed in this way. Functions are typed in this way.

odbc
  commit/2, 16
  commit/3, 16
  connect/2, 16
  describe_table/2, 18
  describe_table/3, 18
  disconnect/1, 18
  first/1, 18
  first/2, 18
  last/1, 18
  last/2, 18
  next/1, 19
  next/2, 19
  param_query/3, 19
  param_query/4, 19
  prev/1, 19
  prev/2, 19
  select/3, 20
  select/4, 21
  select_count/2, 20
  select_count/3, 20
  sql_query/2, 20
  sql_query/3, 20
Scientific Information Management in Collaborative Experimentation Environments
Kaletas, E.C.

Chapter 5

Management of Information in the VLAM-G Experimentation Environment

Chapter 4 defined a framework and generic/reusable methodologies for information management in collaborative experimentation environments (CEEs). In addition to modelling the information related to scientific experiments in a CEE, Chapter 4 also provided a description and modelling of the required CEE functionality for managing the experiment-related information. The models defined for data and functionality are both based on the experiment model, which was also introduced and defined in the previous chapter. Furthermore, a user environment was described to tackle the challenges of how to present the experiment, its related information, and the functionality provided by the CEE to its users. The information management framework defined in Chapter 4 is generic and can be implemented in different ways. The main subject of this chapter is to present one specific application of the information management framework described in the previous chapter and its implementation within the VLAM-G project; namely, the Virtual Laboratory Information Management for Cooperation (VIMCO).
VIMCO is the information management platform designed and implemented within the Grid-based Virtual Laboratory Amsterdam (VLAM-G) project, which was briefly introduced in Subsection 1.5.1. The remainder of this chapter is organized as follows. The chapter first revisits the VLAM-G experimentation environment, describes its experiment model and user environment (Front-End), and provides an example user session that includes the typical user activities within the VLAM-G. Then the chapter focuses on the description of VIMCO, first by providing an overview, and then describing its architecture, implementation, databases, and services. Since this chapter presents an implementation of the framework described in the previous chapter, several cross-references are provided throughout the text to the previous chapter.

5.1 VLAM-G Experimentation Environment – Revisited

The VLAM-G is a virtual laboratory environment supporting multiple disciplines. It was described in more detail in Subsection 1.5.1. This section describes the design and implementation of the VLAM-G experiment model and the VLAM-G Front-End. The experiment model adopted by the VLAM-G is an implementation of the experiment model introduced in the previous chapter, and the Front-End is an implementation of the user environment also described in the previous chapter.

5.1.1 Experiment Model of the VLAM-G

The experiment model presented in Section 4.1 is adopted by the VLAM-G with a different naming convention. Scientific experiments in VLAM-G consist of three components, namely Process Flow Template, Study, and Topology, corresponding respectively to the Experiment Procedure, Experiment Context, and Experiment's Computational Processing components of the experiment model described in Section 4.1.
This is summarized below:

\[ \begin{align*} \text{Process Flow Template} & \leftrightarrow \text{Experiment Procedure} \\ \text{Study} & \leftrightarrow \text{Experiment Context} \\ \text{Topology} & \leftrightarrow \text{Experiment's Computational Processing} \end{align*} \]

Since the experiment model was described in detail in the previous chapter, this section outlines only the main aspects of the VLAM-G experiment model. A Process Flow Template (PFT) is a step-wise description of a certain type of experiment and contains the steps involved in a typical experiment of this type. PFTs are designed by domain experts and serve as templates for scientists during experimentation. A Study is an instance of a PFT. It provides information about a particular experiment by describing the data elements and activities involved in the experiment. The computational aspects of scientific experiments are represented as a Topology in VLAM-G. A topology consists of a number of 'software modules' attached to each other to form a data flow, which is intended to solve a particular problem.

5.1.2 VLAM-G User Environment: Front-End

The Front-End is the user environment of the VLAM-G. Users interact with the VLAM-G only through the Front-End. It consists of a number of graphical user interfaces (GUIs), which present the experiment-related information managed by VIMCO and the functionalities provided by VLAM-G and VIMCO to users in a uniform way. Recall here that although the author was involved in the design activities, the architectural design and the actual implementation of the Front-End were realized by other VLAM-G members (see 'VLAM-G Development' in Subsection 1.5.1). The GUIs of the Front-End include three editors: the PFT Editor, the PFT Viewer, and the Topology Editor. Interfaces for the log-in process, for displaying the available services, and for the registration of new software modules to VLAM-G are the other GUIs in the Front-End.
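The three-component experiment model summarized above can be sketched as a minimal set of Java classes. This is an illustrative sketch only: the class names follow the VLAM-G terminology, but all fields and methods are assumptions, not the actual VIMCO schema.

```java
// Illustrative sketch of the VLAM-G experiment model; the class names follow
// the VLAM-G terminology, but all fields and methods are assumptions.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ProcessFlowTemplate {                 // the Experiment Procedure
    final String name;
    final List<String> steps = new ArrayList<>();  // step-wise description of the experiment type
    ProcessFlowTemplate(String name) { this.name = name; }
}

class Study {                               // the Experiment Context: an instance of a PFT
    final ProcessFlowTemplate pft;          // link to the template used to make the study
    final Map<String, String> stepValues = new LinkedHashMap<>(); // information filled in per step
    Study(ProcessFlowTemplate pft) { this.pft = pft; }
}
```

The important design point carried over from Chapter 4 is that a study never loses the reference to the PFT it instantiates.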
The PFT Editor supports basic PFT manipulation functionality, such as designing new PFTs and browsing and/or updating existing PFTs. A PFT defines an experiment from a specific domain, and the PFT Editor is available only to domain experts who have extensive knowledge of their domains. When designing a new PFT, the PFT Editor displays the types in the database schema selected by the domain expert. The domain expert defines the PFT using these database schema types. During the PFT design process, the PFT Editor assists the expert by, for instance, inserting compulsory elements, displaying the relationships between elements, connecting elements to each other, and generating and holding information about the graphical representation of the PFT. Figure 5.1 shows the PFT Editor, used for designing the Material Analysis PFT. In the upper-left side of the editor window, a list of the data types in the material analysis database schema can be seen. Information about the selected PFT element (in this case corresponding to the MaterialAnalysis class) is displayed in the lower-left side of the editor window.

The PFT Viewer is an editor for instantiating PFTs (i.e. for making studies) and for browsing and modifying existing studies. When a user logs in to the VLAM-G, s/he is presented with a list of PFTs for making a new study and a list of existing studies. If the user chooses to make a new study, the PFT corresponding to that type of study is loaded from VIMCO and displayed in the PFT Viewer. If the user selects an existing study, that study is loaded from VIMCO together with its PFT and displayed in the PFT Viewer. The graphical information for the proper display of a study on screen, as well as other necessary information related to the problem domain, is held by the PFT; hence, a link is maintained between a study and the PFT that was used to make that study. Clicking on any step in the study displays the detailed information about that step in a form.
Users make use of this form to create a new instance of a study step, or to browse and modify the information about a step. The form contains the attributes and their values for that step. Depending on the visibility policies specified for the attributes, the PFT Viewer may decide to filter out some of the attributes. The viewer assists the user in connecting the steps in a study to each other. This is not done automatically because a study step may contain more than one instance at a time, and it is not possible to automatically determine which instance must be connected to the next/previous step instances. This occurs, for instance, when a study contains a parallel data analysis operation, where two analysis step instances generate two data step instances (as output files). In this case the user must manually link each analysis to its output. Furthermore, users can enter the query mode for retrieving already existing instances of a step from the database by clicking on the query button for that step. The query mode displays a form containing the attributes of that step, fill-in boxes to specify the condition values for attributes, and the operators for the conditions. When the user enters the query conditions, the PFT Viewer formulates the query in SQL and submits it to VIMCO. The query results returned by VIMCO also include the PFT information, which is used by the viewer to properly display the results. Figure 5.2 shows the PFT Viewer used for creating a new material analysis study. The layout of the viewer window is the same as that of the PFT Editor, except that the PFT Viewer does not include a list showing the data types in the database schema. Information about the selected study element is, however, displayed on the left side of the viewer window (in this case the MaterialAnalysis instance). The Topology Editor allows users to compose their topologies using the available software modules, and to save and execute topologies.
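The notion of a topology, software module instances whose output ports are connected to input ports to form a data flow, can be sketched as follows. This is an illustrative sketch under assumed names; the actual VLAM-G classes differ.

```java
// Illustrative sketch of a topology: software module instances whose output
// ports are connected to input ports to form a data flow. The actual VLAM-G
// classes differ; these names are assumptions.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Module {
    final String name;
    Module(String name) { this.name = name; }
}

class Topology {
    final List<Module> modules = new ArrayList<>();
    // Each entry links "module.outPort" to "module.inPort".
    final Map<String, String> connections = new LinkedHashMap<>();

    Module add(String name) {               // drag-and-drop a module instance
        Module m = new Module(name);
        modules.add(m);
        return m;
    }

    void connect(Module from, String outPort, Module to, String inPort) {
        connections.put(from.name + "." + outPort, to.name + "." + inPort);
    }
}
```

Saving a topology then amounts to persisting the module list and the port-to-port connection map, which is what links an analysis step to its computational realization.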
The Topology Editor is launched by the Front-End when the user clicks on a study step in the PFT Viewer that is marked as an RTS process (i.e. as a processing step). Upon initiation, the Topology Editor loads from VIMCO the descriptions of the software modules that this user is authorized to use. A module description contains information about its functionality, its run-time requirements (e.g. CPU requirements), a brief description and detailed manuals, input and output ports, etc. Users create instances of the modules by dragging and dropping them into the working area of the Topology Editor, and set the actual parameter values. By connecting the input/output ports of the modules, users define a data flow, which can be saved and executed. The Topology Editor also allows users to display execution results, as well as to retrieve status information and change execution parameters at run-time. Figure 5.3 shows the Topology Editor, used for designing a topology for material analysis. In the upper-left side of the editor window, a list of available software modules can be seen. Information about the selected module (in this case *apodization*) is displayed in the lower-left side of the editor window.

### 5.1.3 Experimentation in the VLAM-G

This section provides an example user session to illustrate the typical activities performed by a user within the VLAM-G, as depicted in Figure 5.4. A user logs in to the VLAM-G by presenting her/his certificate using the Front-End log-in dialog. If the user presents a valid certificate, a new Session Manager instance is created, which then contacts VIMCO for the authentication of this user as a valid VLAM-G user. If the user is authenticated after this two-level authentication, VIMCO generates a list of the available services that this user is authorized to use.
The list contains the types of studies that this user is allowed to make, the existing studies that this user is allowed to browse/modify, and the active sessions at the time of login. Different types of studies are represented as PFTs. Users make new studies by creating instances of these PFTs within the VLAM-G. If the user is an expert in her/his domain, the list also includes the PFTs that this expert is authorized to manipulate. Each item in the list contains a name and a brief description, which helps the user to find and select the service that s/he needs. The user then chooses either to make a new study by selecting its template, to work on an existing study by selecting that study from the list, or to join an active session by selecting that session from the list. In the latter case, this user will be registered as a participant of the collaborative session that s/he joined (the implementation of collaborative sessions is planned for future work). In the first two cases, a new session is created and this user is assigned as its owner.

Figure 5.2: Snapshot of the PFT Viewer

This example assumes that the user selects a PFT to make a new study. As soon as the user selects the PFT, it is loaded and presented to the user in the Front-End. The user goes over the PFT steps and fills in the required information for each step. This way, s/he fills in the PFT and saves the study. Some of the steps in the PFT actually correspond to computational processes; for instance, the analysis of a data set generated by a laboratory instrument. In this case, after filling in the required information for this analysis step, the Topology Editor is launched, with a list of software modules that s/he is allowed to use. S/he defines a topology by dragging and dropping modules from the list and connecting them to each other, and executes it by sending the topology to the VLAM-G RTS.
At the same time, the topology is saved in VIMCO, and a link is created from the analysis step to this topology. When finished with the current study, the user either logs out from VLAM-G or continues by re-displaying the list of available services. In the former case, the session is terminated and VIMCO releases all resources allocated for this session (e.g. closes all database connections). In the latter case, the Session Manager updates the session information and persists it in VIMCO.

Figure 5.3: Snapshot of the Topology Editor

Figure 5.4: Experimentation in the VLAM-G: An example user session

5.2 VIMCO: Virtual Laboratory Information Management for Cooperation

VIMCO is the information management platform of the VLAM-G. In addition to data modelling for the diverse types of experiment-related information handled in the VLAM-G, VIMCO provides several mechanisms for platform-independent database access, distributed and multi-threaded manipulation of these diverse types of experiment-related information, and XML-based information exchange facilities. VIMCO is an implementation of the information management framework presented in Chapter 4. Therefore, VIMCO provides the functionalities described in that chapter to its users. VIMCO is designed and developed as an integral part of the VLAM-G architecture. The main goal behind the VIMCO development is to provide a scalable and flexible platform for managing the wide variety of experiment-related information, also considering the requirements listed in Section 2.6. Below, a summary of the main requirements for VIMCO is given:

• Modelling the wide variety of information related to scientific experiments.
• Providing mechanisms for the management of such information.
• Designing and developing a scalable and flexible platform that is open for adding new users and resources, and open for adding new types of information and components to support the management of new information.
• Designing and developing an open platform that does not depend on any third-party products.

In this direction, the design principles set for the design and development of VIMCO can be summarized as follows:

• Using standards as much as possible.
• A distributed, multi-threaded, and modular architecture.
• Clear, consistent, and generic interfaces that each module declares for its functionality.

The rest of this chapter describes the VIMCO design and development in detail, addressing the requirements and following the design principles described above. Note here that all architecture and data model diagrams provided in this chapter are object-oriented and use UML notation.

5.3 The VIMCO Architecture

The VIMCO architecture has been designed to support information management in the distributed VLAM-G environment. The diagram in Figure 5.5 provides an overview of the VIMCO architecture. The figure shows the VIMCO architectural components, the interactions among them, and their distribution over several nodes (on which the components run). The VIMCO architecture is open for extensions and improvements in the future. This is achieved through the modular architecture, the standardized and generic interfaces defined by the components, and the usage of driver components for third-party software packages. One advantage of standardized and generic interfaces and the usage of driver components is that they reduce or remove dependencies among components, as well as dependencies on third-party solutions. For instance, developing driver classes for different XML generator/parser packages that comply with the base XML Manager allows plugging in any XML package. The driver-components approach is similar to the mediator approach. Components of the VIMCO architecture are grouped into three different types of servers that are dispersed over three different types of nodes (see Figure 5.5):

• VIMCO Communication Servers
• VIMCO Core Functionality Server
• VIMCO DB Servers
In the rest of this section, the VIMCO components are described in detail.

5.3.1 VIMCO Communication Servers

The VIMCO Communication Servers are responsible for the communication between VIMCO and the outside world (i.e. the other VLAM-G components). Within the general VLAM-G architecture, no direct user access to VIMCO is permitted. The only client of VIMCO is the VLAM-G Session Manager, which acts as a broker in the VLAM-G environment. The communication servers present the VIMCO functionality to the outside world through interfaces based on different communication mechanisms. Three different communication servers have been developed for VIMCO: the RMI Server, the Activatable RMI Server, and Servlets. Although all VIMCO communication servers use the VIMCO Core Functionality Server as the back-end, having separate communication servers enables future modifications to the VIMCO Core Functionality Server by decoupling it completely from the client programs. Hence, the effects on clients of possible future changes in the VIMCO Core Functionality Server are minimized.

**VIMCO RMI Server** is the RMI interface of VIMCO. The current design of the VLAM-G Session Manager uses the VIMCO RMI Server to communicate with VIMCO. It forwards the arriving requests as RMI calls to the VIMCO Core Functionality Server, serializes the results into XML documents, and returns the XML documents to the requester.

**VIMCO Activatable RMI Server** provides an RMI-based communication between VIMCO and the outside world. The difference between this server and the VIMCO RMI Server is the following: in order to keep the VIMCO RMI Server process alive, the Java Virtual Machine process must also be kept alive. That is, the VIMCO Server must be continuously active and running in the foreground. The Activatable RMI Server, however, works as a daemon in the background and is activated only when there is a request. Thus, the Activatable RMI Server enables the VIMCO Server to be activated on demand.
**VIMCO HTTP Server** provides HTTP/HTTPS-based communication. The VIMCO HTTP Server has the capability of understanding HTTP requests, converting the message-based requests into calls to the VIMCO Core Functionality Server, and sending the results back using HTTP. The requests are formulated as messages over HTTP, while the actual data is transferred as XML. A protocol has been defined for the HTTP-based communication between VIMCO and the Session Manager [70]. The protocol messages have the following generic structure:

```
<sessionId>, <type>, <messagename>, <attributes>
```

In this protocol, *sessionId* indicates to which session the message belongs. Possible values for *type* are 'request', 'response', 'update', and 'inform'. *messagename* is the name of the message. *attributes* contains the relevant information for the message in XML format. The reason for using XML-based message content is that the intermediate communication layers are isolated from possible changes to the content or format of the XML messages.

### 5.3.2 VIMCO Core Functionality Server

The VIMCO Core Functionality Server is the main component of VIMCO. It provides the functionality to support a VLAM-G user throughout her/his experimentation. The components of the VIMCO Core Functionality Server work together to process the user requests that may arrive during a session. VIMCO stores the session information in a persistent storage, while the VLAM-G Session Manager is responsible for the management of sessions. A session contains information about the active user in the session at any time, the study that the active user is working on, the PFT of the study, the list of topologies that were submitted to the RTS during this session, and the other users collaborating with the active user on the study. The VIMCO Core Functionality Server uses the session information to process user requests.
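Returning to the HTTP protocol described for the VIMCO HTTP Server, its generic message structure can be encoded and decoded as in the following sketch. ProtocolMessage is a hypothetical class, not part of the actual VIMCO code; it splits only on the first three separators so that commas inside the XML payload survive.

```java
// Hypothetical sketch of the VIMCO-Session Manager wire format
// "<sessionId>, <type>, <messagename>, <attributes>". ProtocolMessage is an
// illustrative class, not part of the actual VIMCO code.
class ProtocolMessage {
    final String sessionId, type, name, attributes;

    ProtocolMessage(String sessionId, String type, String name, String attributes) {
        this.sessionId = sessionId;
        this.type = type;
        this.name = name;
        this.attributes = attributes;      // XML payload
    }

    // Serialize into the comma-separated wire form.
    String encode() {
        return sessionId + ", " + type + ", " + name + ", " + attributes;
    }

    // Split on the first three separators only, so commas inside the XML
    // attributes payload are preserved.
    static ProtocolMessage decode(String wire) {
        String[] p = wire.split(", ", 4);
        return new ProtocolMessage(p[0], p[1], p[2], p[3]);
    }
}
```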
As one example, the user information in a session object is used to enforce the access rights defined for the active user. The following describes the components of the VIMCO Core Functionality Server.

**VIMCO Server Manager** The VIMCO Server Manager component is used by the VIMCO Administrator only during system initialization and system shutdown. It provides two functionalities: startup and shutdown. During startup, the VIMCO Server Manager connects to the VIMCO DB, reads the information needed for the proper operation of the VIMCO Core Functionality Server, and initializes the internal data structures maintained by the Lookup Server. This information includes the available databases and the mappings between the databases and their contents. The Connection Manager and the RMI Server are also initialized by the VIMCO Server Manager at startup time. During shutdown, several operations are performed for a clean shutdown of the system; e.g. all open database connections are closed. The VIMCO Server Manager consists of the following classes and interfaces: the IVimcoServerManager interface defines the two methods that the VimcoServerManager class implements, which performs the actual startup and shutdown operations. The VimcoServerLauncher is a password-protected graphical interface for administrators to start up and shut down the system.

**Lookup Server** The Lookup Server maintains lookup lists in memory for fast access at run-time, containing information about the databases, active sessions, users and access rights, and the mapping between a database and its contents. The Lookup Server maintains a number of lists at run-time. The database list and the mapping between database IDs and content IDs are initialized by the VIMCO Server Manager at startup time. Session information, however, is inserted into the session list at run-time by the VLAM-G Session Manager using VIMCO services.
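A minimal sketch of these in-memory lookup lists is given below; the integer IDs, field types, and method names are illustrative assumptions rather than the actual Lookup Server API.

```java
// Minimal sketch of the Lookup Server's in-memory lists; integer IDs and the
// method names are illustrative assumptions.
import java.util.HashMap;
import java.util.Map;

class LookupServer {
    private final Map<Integer, String> databases = new HashMap<>();    // databaseId -> database name
    private final Map<Integer, Integer> contentToDb = new HashMap<>(); // contentId  -> databaseId
    private final Map<Integer, Object> sessions = new HashMap<>();     // sessionId  -> session object

    // Filled by the VIMCO Server Manager at startup time.
    void registerDatabase(int dbId, String name) { databases.put(dbId, name); }
    void mapContent(int contentId, int dbId) { contentToDb.put(contentId, dbId); }

    // Filled at run-time by the VLAM-G Session Manager through VIMCO services.
    void registerSession(int sessionId, Object session) { sessions.put(sessionId, session); }

    // Key-based lookup: which database holds the given content?
    String databaseForContent(int contentId) {
        return databases.get(contentToDb.get(contentId));
    }
}
```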
The methods provided by the Lookup Server enable the other VIMCO Core Functionality Server components to look up information by key, such as the ID of a database. For example, the Lookup Server is used to obtain a connection to a VIMCO database as follows: a request to read a study contains the session ID. VIMCO looks up the session information for the given ID, from which it obtains the ID of the database holding the study. The database ID is then used when obtaining a connection to the database from the Connection Manager.

**VIMCO Server** The main component in the VIMCO Core Functionality Server is the VIMCO Server. All VIMCO services, representing the overall VIMCO functionality, are implemented by the VIMCO Server. It also coordinates the activities of the other components in the VIMCO Core Functionality Server in order to process a user request. For instance, when a request to save a study arrives at the VIMCO Core Functionality Server, the VIMCO Server will first map the session information onto actual database information, using the session information and the other internal information provided by the Lookup Server, and contact the Connection Manager to get the correct data manager. Then it starts a transaction in the corresponding application database, saves the study, and commits the transaction using the Connection Manager. Every VIMCO service is executed within a transaction. Transactions are closed after the completion of the service execution. *VimcoServer* is the main class in the VIMCO Server component that implements the VIMCO functionality. It contains read, write, delete, and write-as methods for the different types of information managed by VIMCO, including, for instance, PFTs, studies, and session information. Among the other classes in the VIMCO Server, *VimcoServerConstants* defines the constant values used by all components in VIMCO, such as the reuse policy values and the attribute properties.
The *Session* class for representing the session information is also defined as part of the VIMCO Server. Finally, the VIMCO Server contains a number of 'info' classes, each of which provides a summary of the actual information in the VIMCO databases. VIMCO services are described in detail in Section 5.5.

**XML Manager** XML is used for the data transfer among the VLAM-G components. Since the VIMCO Server is object-oriented, the XML Manager is used to serialize Java objects into XML documents and to de-serialize XML documents back into Java objects. The XML Manager provides a standardized interface, so that the underlying XML package can be replaced by another package at any time if the need arises. The same XML Manager is also used by the Front-End and the Run Time System. Currently, besides the VimcoXML package, two third-party XML packages called JSX [166] and ElectricXML [167] are available for Java–XML conversion. The XML Manager consists of the following classes: the *XMLManager* class provides two methods for Java–XML conversion: *writeXML* for serializing Java objects into XML documents, and *readXML* for de-serializing XML documents into Java objects. The *XMLManager* class itself does not perform the actual conversion. It uses one of the available XML drivers for conversion, which is imported at compile time. For each of the available XML packages, a driver class is developed, which declares the same methods as the *XMLManager* class. The methods make the necessary calls to the underlying XML package for the serialization/de-serialization of Java objects. The `XML` data type is used to represent XML documents as Java objects; XML managers receive and generate XML documents as instances of this type. Its `value` attribute contains the contents of the XML document as a Java String. Following a mediator-like approach breaks the dependency of the VIMCO components on the conversion package used. All three packages mentioned above have been used and tested in VIMCO. At the time of testing, JSX did not provide support for all the Java types that were used in VIMCO. ElectricXML is a much more robust tool; however, the XML documents generated by ElectricXML contain considerable overhead, mainly due to the XML Schema used by the tool. To overcome this overhead, VimcoXML has been developed. The XML document of VimcoXML contains only the full class name and tags corresponding to the attributes of the object being serialized and their values.

<table>
 <thead>
 <tr> <th>Database name</th> <th># of objects in the DB</th> <th>DBMS backup size</th> <th>Uncompressed EXML</th> <th>Uncompressed VimcoXML</th> <th>Compressed EXML</th> <th>Compressed VimcoXML</th> </tr>
 </thead>
 <tbody>
 <tr> <td>VIMCO DB</td> <td>1075</td> <td>16632</td> <td>356</td> <td>218</td> <td>11</td> <td>7.5</td> </tr>
 <tr> <td>RTS DB</td> <td>869</td> <td>14656</td> <td>624</td> <td>392</td> <td>76</td> <td>62</td> </tr>
 <tr> <td>Project DB</td> <td>30</td> <td>1744</td> <td>28</td> <td>22</td> <td>2.7</td> <td>2.1</td> </tr>
 <tr> <td>MACS DB</td> <td>1136</td> <td>9624</td> <td>657</td> <td>436</td> <td>32</td> <td>20</td> </tr>
 <tr> <td>Expressive DB</td> <td>157557</td> <td>32304</td> <td>140000</td> <td>63000</td> <td>6900</td> <td>4400</td> </tr>
 </tbody>
</table>

Table 5.1: Comparison of the size of XML documents generated by ElectricXML (EXML) and VimcoXML (all file sizes are given in KB)
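The VimcoXML idea, an XML document containing only the class name plus one tag per attribute, can be illustrated with Java reflection. MiniXml and Sample below are hypothetical names for this sketch, not the actual VimcoXML classes.

```java
// Illustrative sketch of the VimcoXML approach: an XML document that contains
// only the class name and one tag per attribute, produced here via reflection.
// MiniXml and Sample are hypothetical names, not the actual VimcoXML classes.
import java.lang.reflect.Field;

class MiniXml {
    static String writeXML(Object o) {
        try {
            String cls = o.getClass().getName();
            StringBuilder sb = new StringBuilder("<" + cls + ">");
            for (Field f : o.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                sb.append('<').append(f.getName()).append('>')
                  .append(f.get(o))                 // the attribute value
                  .append("</").append(f.getName()).append('>');
            }
            return sb.append("</").append(cls).append('>').toString();
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }
}

class Sample {
    int id = 7;
    String module = "apodization";
}
```

Because no schema information is emitted, the document carries only the payload, which is consistent with the smaller file sizes reported in Table 5.1.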
A test was made to compare the sizes of the XML documents generated by ElectricXML and VimcoXML. The test was to back up the VIMCO databases by serializing their contents into XML documents. Table 5.1 shows the test results. For each database, the number of objects in the database, the size of the backup file generated by the DBMS itself, and the uncompressed and compressed file sizes of the XML documents generated by the two packages are given. In all cases, VimcoXML generated smaller files than ElectricXML. When compressed, the VimcoXML files are even smaller than the DBMS backup files.

**Connection Manager** The Connection Manager uses the available database drivers to provide connections to the other VIMCO Core Functionality Server components for accessing the VIMCO databases. A database driver is a library for manipulating the data stored in a specific DBMS. The Connection Manager opens a connection to the specified database and manipulates the data in that database using the corresponding database driver. In contrast to transactions, connections to the VIMCO databases are kept open across many user requests. Because the access rights are defined for individual users and their enforcement requires the user ID, connections to the databases must be opened by the users themselves. Therefore, performance improvement mechanisms requiring anonymous access to databases, such as connection pooling, cannot be applied. Instead, connection caching is used as an alternative mechanism for reducing the overhead introduced by opening and closing connections, where connections are kept open during the lifetime of a user session. Open connections are cached based on the user name, the database name, and the current session ID. The first time a user requests a connection to a database within a session, a new connection is opened for this user. This connection is returned to the requester (i.e. the VIMCO Server), and at the same time it is cached internally.
When the same user requests a connection to the same database within the same session, the Connection Manager returns the cached connection. When a session ends, the VIMCO Server issues a remove command to the Connection Manager to close and remove any cached connections opened to the specified database within that session. The Connection Manager component consists of a manager class, a standardized interface to different DBMSs, and drivers implementing this interface. The IConnection interface defines the basic database connection and data manipulation methods, such as connect/disconnect, start/commit/abort transactions, read/write/delete objects, and execute queries. Two drivers have been implemented so far, both for the Matisse object-oriented database management system (ODBMS) [168]. The JMatisse driver is implemented on top of the Matisse C API, using the Java Native Interface (JNI). JMatisse uses persistence by reachability for reading and writing database objects. That is, given an entry-point object, all objects reachable from the entry-point object (through navigating the relationships) are persisted to the database or retrieved from the database. JMatisse defines a depth parameter to overcome the object boundary problem. The depth specifies the number of recursions during the traversal of the object graph (i.e. the depth of the object graph to be traversed). The MtsJava driver, on the other hand, uses indirect object pointers to overcome the object boundary problem. MtsJava is implemented using the Matisse Java API. ConnectionManager is the class that implements the connection management and caching functionality, as described above. It maintains a list of open connections based on the session ID and the database ID. Connections can be closed one by one upon request, or all connections in a session can be closed at session termination.
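The per-(user, database, session) caching behaviour described above can be sketched as follows. ConnectionCache is an illustrative stand-in for the actual ConnectionManager, and a plain String stands in for a real database connection.

```java
// Sketch of per-(user, database, session) connection caching; ConnectionCache
// is an illustrative stand-in for the actual ConnectionManager, and a plain
// String stands in for a real database connection.
import java.util.HashMap;
import java.util.Map;

class ConnectionCache {
    private final Map<String, String> cache = new HashMap<>();
    private int opened = 0;                       // number of "real" connections opened

    // Return a cached connection if one exists for this user/database/session;
    // otherwise open (here: fabricate) a new one and cache it.
    String getConnection(String user, String db, int sessionId) {
        String key = user + "/" + db + "/" + sessionId;
        return cache.computeIfAbsent(key, k -> "conn-" + (++opened));
    }

    // On session termination, close and drop every connection of that session.
    void closeSession(int sessionId) {
        cache.keySet().removeIf(k -> k.endsWith("/" + sessionId));
    }

    int openCount() { return opened; }
}
```

Keying the cache on the session ID is what makes the session-termination cleanup a simple sweep over the cached entries.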
ConnectionManager also provides an init method, which is used during VIMCO startup to open connections to the VIMCO DB in order to initialize the Lookup Server.

**Log Manager** In order to log the errors that occur and to trace the execution flow at run-time, a logging system has been developed. Several exceptions have been defined to represent the errors that may occur during VIMCO execution, and traces have been identified that correspond to the important points in the execution flow. These errors and traces are prioritized with a level number, specifying the importance of the error or trace. The required error or trace log level is set at startup time by the administrator, and error messages and traces whose level number is above the specified value are logged in two separate files at run-time. The Log Manager was especially useful during the software development phase.

### 5.3.3 VIMCO DB Servers

The VIMCO DB Servers correspond to the VIMCO databases. Currently, all VIMCO databases are developed using the Matisse ODBMS [168]. The VIMCO databases are described in Section 5.4.

### 5.3.4 Implementation of VIMCO

Several technologies have been used for the development of the different VIMCO components, as shown in Figure 5.6.

Figure 5.6: Technologies/tools used for the implementation of VIMCO

The entire VIMCO system is implemented in Java, except for the VIMCO RTS Module, which is implemented in C/C++. The RMI and RMI Activatable technologies and the RMI Registry tool of Sun Microsystems are used for the implementation of the VIMCO RMI Servers, while Java Servlets and the Apache Tomcat servlet container are used for the implementation of the VIMCO HTTP Server. As mentioned earlier, the ElectricXML and VimcoXML tools are used for the serialization/de-serialization of Java objects. The Matisse ODBMS is used for the implementation of all VIMCO databases. Currently, work on using a relational DBMS is ongoing. For database access, mainly the MtsJava library, developed on top of the Matisse Java API, is used.
JDBC is also used for some operations, such as querying Matisse databases. The VLAM-G RTS, its libraries, and the provided support tools are used for the development of the VIMCO RTS module. VIMCO is currently running on both a Linux cluster and a Sun server. Some reserved nodes of the Linux cluster host the VIMCO Communication Servers, the VIMCO Core Functionality Server, and the VIMCO databases.

## 5.4 VIMCO Databases

VIMCO maintains several databases for storing the different types of information generated and handled within the VLAM-G. VIMCO databases include the application databases storing experiment-related information, and three other databases: VIMCO DB for internal VIMCO information, Project DB for multi-disciplinary projects, and RTS DB for module descriptions and topologies. In the remainder of this section, the VIMCO databases and their contents are briefly described.

**VIMCO DB**

VIMCO DB is the database for information that is used internally by the VLAM-G and VIMCO. Contents of the VIMCO DB include user information and access rights definitions, session information, information about available data sources, and other information that is only used by VIMCO for internal purposes, such as a counter for unique identifiers. The UML diagram for the VIMCO DB schema is given in Figure 5.7. VIMCO DB implements the user model defined in Subsection 4.3.3 for managing user information and access rights definitions. In addition, the VIMCO DB schema contains the DataSource class to maintain information about the available VIMCO data sources, such as the description of the data source, the host on which it resides, and the driver for accessing the data source and for manipulating its contents. Currently, all VIMCO data sources are database management systems, represented as instances of the DB class, which in turn inherits from DataSource. The Session class is defined in the VIMCO DB schema to provide persistence to session information.
The Session class was described in Subsection 4.3.4. The remaining two classes in the VIMCO DB schema are Password and Identifier. Password is a singleton class with only one instance, holding the common password for all VIMCO users. To access a database, a user must have an account on the underlying DBMS, with a unique username and password. However, in order to provide a single sign-on facility, VIMCO does not require the users to provide a password for accessing VIMCO databases. Since no direct access to VIMCO databases is allowed, defining a single password that is common to all VIMCO users is sufficient. This password is never published to users, but is only used by the Connection Manager to connect to the VIMCO databases.

Figure 5.7: Class diagram for the VIMCO DB Schema

Finally, Identifier is the other singleton class, which is used to generate unique integer identifiers. There can be more than one type of identifier, each of which is unique only within its own context. For example, currently, only an identifier for session objects is generated and maintained. If needed, more identifiers can be defined in the Identifier class as new attributes.

**Application Databases**

One database is developed for each application in an experimental science domain. VIMCO currently maintains two application databases: MACS DB for material analysis experiments, and Expressive DB for microarray experiments. Application databases contain PFTs, studies, and other information that is specific to the application. For the representation of PFTs, application databases implement the Procedure Data Model, which was described in Subsection 4.3.2. However, as mentioned earlier in this chapter, VLAM-G uses the term 'Process-Flow Template' (PFT) to refer to experiment procedures. The PFT Data Model, which is appended to the schema of each application database, is given in Figure 5.8.

Figure 5.8: Data model for PFTs in application databases
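The per-context identifier generation of the Identifier singleton described above can be sketched as follows. This is an illustrative in-memory sketch only: the real class is a persistent object in the VIMCO DB, and its counters are stored as attributes rather than in a map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Identifier singleton: one counter per context, each
// unique only within its own context (currently only session IDs).
class IdentifierSketch {
    private static final IdentifierSketch INSTANCE = new IdentifierSketch();
    private final Map<String, Integer> counters = new HashMap<>();

    private IdentifierSketch() {}

    static IdentifierSketch getInstance() { return INSTANCE; }

    // Return the next identifier for the given context, e.g. "session".
    synchronized int next(String context) {
        int value = counters.getOrDefault(context, 0) + 1;
        counters.put(context, value);
        return value;
    }
}
```

New identifier kinds correspond to new counters (in the real schema, new attributes of the Identifier class), without touching existing ones.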
As also described in Subsection 4.3.2, application database schemas extend the Experimentation Environment Data Model for modelling the domain-specific experiment-related information. The Expressive DB schema will be described in detail in Chapter 6, and the schema designed for MACS DB can be found in [9]. Another structure that is included in the application database schemas is the one for representing the link between studies and PFTs. The model of this structure was described in Subsection 4.4.1; however, Figure 5.9 shows the data model with the naming convention used in the implementation.

Figure 5.9: Data model for study-PFT links in application databases

As described in Subsection 4.4.2 in the previous chapter, the OriginCopy class was defined for storing information about reused objects. For the same purpose, this class is appended to the application database schemas as-is.

**Project DB**

Multi-disciplinary projects consist of experiments from different domains. VIMCO adopts the approach for multi-disciplinary projects defined in Subsection 4.5.1, which uses the general structure of scientific experiments and the project concept as the facilitator. In order to facilitate multi-disciplinary projects in VIMCO, a specific database for projects, called Project DB, is developed. All information about existing projects is stored in this database, while the actual experiment information (i.e. studies) is stored in multiple application databases. Representation of multi-disciplinary projects is illustrated in Figure 5.10. The schema of the Project DB is given in Figure 5.11. The schema consists of two classes: the Project class for representing (multi-disciplinary) projects, and the ExpXRef class for representing the experiments in projects. A Project object contains high-level information about the project, such as its description and start/end dates, and links to the experiments included in the project.
The links are actually to the ExpXRef objects, which act as proxies to experiment objects in the application databases. An ExpXRef object contains the ID of the database storing the actual experiment object and the OID of that experiment object. In the current implementation of VIMCO, application database schemas still contain the Project class; however, it is not instantiated. When writing a study, VIMCO writes all the objects in the study to the corresponding application database, except the Project object. Since the project information is not stored in the same application database, the relationships from the Experiment object to its project are set to null. The Project object is then written into the Project DB, and an ExpXRef instance is created as a proxy to the Experiment object that was stored in the application database. VIMCO then sets the attributes of the ExpXRef object to the correct values, and creates the links between the Project and ExpXRef objects. Similarly, when a read request for a study arrives at VIMCO, the project information for the study is retrieved from the Project DB; or, if a query involving the Project class arrives, VIMCO redirects the query to the Project DB. Furthermore, ordering of experiments in a project is realized by ordering the ExpXRef objects in the Project DB rather than the actual Experiment objects in the application databases. All these operations are performed by VIMCO transparently to the user. In fact, users are not even aware of the existence of the Project DB.

**RTS DB**

*RTS DB* is the VIMCO database for storing information related to the computations performed by the VLAM-G RTS. Module definitions provided by domain experts and topologies defined by scientists are stored in the RTS DB. From one point of view, topologies can be seen as application data/information, and hence they should be stored in the application databases together with the studies.
However, topologies are composed of modules, and a scientist can make use of any kind of module in her/his topology, without being restricted only to those modules that are developed specifically for her/his field of expertise. Also considering that one of the aims of the VLAM-G is to promote inter-disciplinary research, all modules must be available to all scientists, although specific security and access rights are still applicable, for instance for the use of a very expensive device. Thus, instead of replicating the module definitions in all application databases, a specific database holding both the module definitions and the topologies is a better approach.

Figure 5.10: Representing multi-disciplinary projects

RTS DB stores the topologies. Since a topology is part of a study, that is, it corresponds to one or more processing steps in a study, a link is maintained from those steps in the study to the topology. Currently, this link is implemented as a pointer attribute in the study step, whose value is the topology ID. After the topology is saved in the RTS DB, the PFT Viewer sets the value of this attribute to the correct topology ID. The RTS DB schema is given in Figure 5.12. This schema implements the data models developed for software entity descriptions and computational processing of an experiment, as described in Subsection 4.3.2. It also includes the data model for user information as defined in Subsection 4.3.3. Hence, the schema diagram in Figure 5.12 consists of three parts (represented using the UML subsystem symbol). Part A of the RTS DB schema is depicted in Figure 5.13; it includes module descriptions, their input/output ports and data types, required environment variables, parameters, etc. Part B of the RTS DB schema is given in Figure 5.14, which models the topologies with the modules in the topology and their connections to each other.
Figure 5.15 shows part C of the RTS DB schema, which includes the data types for user information.

## 5.5 Functionality/Services Provided by VIMCO

In order to maintain the security and consistency of information, and to provide higher-level functionality to scientists, no direct access to VIMCO databases is allowed. VIMCO provides a number of services to access the databases and manipulate their contents. In this section, first an overview of the VIMCO services is given, followed by their descriptions.

---

**Figure 5.11: Class diagram for the Project DB Schema**

### 5.5.1 Overview of the VIMCO API

The VIMCO API consists of the services declared by the four interfaces shown in Figure 5.16, each offering a different level of functionality. This figure shows the hierarchy of VIMCO interfaces, which is based on the different types of users; namely application users (IVimcoUserServer), administrators and domain experts (IVimcoAdminServer), and the VLAM-G Session Manager (IVimcoSesManServer). IVimcoServer is the top-level interface, supporting functionality that is available to all types of users, such as reading information about the current user and the type of the current user. The IVimcoSesManServer interface is presented to the VLAM-G Session Manager, and extends the IVimcoServer interface with methods for session information management and authentication. The IVimcoUserServer interface defines methods for ordinary VIMCO users, providing the basic information management functionality to support experimentation in VLAM-G, such as reading and writing studies and topologies, reading PFTs, and executing queries. The IVimcoAdminServer interface is reserved for administrators and domain experts, extending the standard user functionality with, among others, update functionality for PFTs, module descriptions, and user information.
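The hierarchy can be sketched as Java interfaces. The interface names follow Figure 5.16; the example methods are illustrative assumptions based on the functionality described above, not the actual VIMCO API signatures.

```java
// Sketch of the VIMCO API interface hierarchy (cf. Figure 5.16).
// Method names and signatures are illustrative assumptions.
interface IVimcoServer {
    // Functionality available to all user types, e.g. reading the user type.
    String getUserType(int sessionId);
}

interface IVimcoSesManServer extends IVimcoServer {
    // Session information management and authentication, for the Session Manager.
    int authenticateUser(String distinguishedName);
}

interface IVimcoUserServer extends IVimcoServer {
    // Basic information management for ordinary users, e.g. reading studies.
    String readStudy(int sessionId, int dbId, int studyId);
}

interface IVimcoAdminServer extends IVimcoUserServer {
    // Admin/domain-expert functionality, e.g. updating PFTs.
    String writePFT(int sessionId, int dbId, String pftXml);
}
```

Casting a server reference to one of these interfaces restricts the caller to exactly the methods that its user type is entitled to, which is how the access control described below is enforced at the API level.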
The VimcoRMIServer and VimcoRMIServerActivatable classes both implement these interfaces, and the protocol defined for the VimcoHTTPServer messages covers all the functionality defined by these three interfaces. As mentioned earlier, all VIMCO Communication Servers utilize the VimcoServer as the back-end, which provides the actual functionality. The VIMCO services are briefly described in the following subsections. Note that the notation given in Figure 5.17 is used in the remainder of this section for all figures representing VIMCO services.

### 5.5.2 Services for Accessing VIMCO

Figure 5.18 shows the services for accessing VIMCO. Users can access VIMCO through one of the RMI Servers or through the HTTP Server. The following describes the steps involved in accessing VIMCO. In order to access VIMCO, a user must first authenticate herself/himself. For this purpose, the Session Manager extracts the distinguished name of the user from her/his Grid proxy and passes this information to the RMI Server, which forwards the request to the VimcoServer. The VimcoServer retrieves the user object with this distinguished name. If such a user exists, then the user ID is returned; otherwise a negative integer is returned. The Session Manager then uses this user ID to retrieve the user type. Depending on the user type, the RMI proxy for VIMCO is cast to the correct interface type (i.e. one of the three interfaces inheriting from the root interface shown in Figure 5.16) to enforce access control for services. The next step is to retrieve the available services. Available services are presented to the users as a number of info objects (as described in Subsection 4.3.4). The info classes include ProjectInfo, StudyInfo, PFTInfo, and SessionInfo.
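The authentication lookup described above (return the user ID for a known distinguished name, a negative integer otherwise) can be sketched as follows; the map stands in for the VIMCO DB query, and the class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the authentication lookup: distinguished name (from the
// user's Grid proxy) -> user ID, with a negative value for unknown users.
class AuthSketch {
    private final Map<String, Integer> usersByDn = new HashMap<>();

    // Stand-in for the user records stored in the VIMCO DB.
    void register(String dn, int userId) { usersByDn.put(dn, userId); }

    int authenticate(String dn) {
        Integer id = usersByDn.get(dn);
        return (id != null) ? id : -1; // negative integer signals "no such user"
    }
}
```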
To obtain the PFT, Project, and Study information, the VimcoServer queries each application database and the Project DB to retrieve all instances of the PFT, Project, and Experiment classes considering the current user's access rights, and extracts the information necessary for the info classes. For Session information, the VIMCO DB is queried to retrieve all Session objects. Finally, in addition to the other info objects, one DBInfo object is created for each VIMCO database to be sent to the user, since the database ID is required for the calls to VIMCO (except for the Project DB, which is totally transparent to users). These steps are the same for all users and for all types of VIMCO access. The subsequent steps differ, however, depending on the service that the user chooses. Note that some of these steps are performed by the Session Manager on behalf of the user, without her/his involvement. Steps for retrieving the user type and casting the interface, and for instantiating a session, are not visible to the user.

Figure 5.16: Class diagram showing the hierarchy of interfaces in the VIMCO API

Figure 5.17: Notation used for the VIMCO services

Figure 5.18: Services for accessing VIMCO

### 5.5.3 Services for Session, User and Access Rights Management

This subsection describes the VIMCO services for session information management, user management, and access rights management.

**Session Information Management Services**

VIMCO provides the services depicted in Figure 5.19 for the management of session information. As mentioned earlier, VIMCO only maintains a persistent copy of the session object, and it never modifies the contents of the session object. Actual session management is performed by the VLAM-G Session Manager.

Figure 5.19: VIMCO Session Information Management Services

**User Management Services**

VIMCO users are the users that can access VIMCO and request one of the VIMCO services.
VIMCO maintains information about the registered users, their roles, and restrictions on available data types. VIMCO implements the user data model that was defined in Subsection 4.3.3. This data model is implemented by each VIMCO database to store the user-related information in a homogeneous way, excluding the roles and restrictions, which are only defined in the VIMCO DB. The user management services of VIMCO are given in Figure 5.20. Since every VIMCO user is assumed to be employed by an Organization, the organization information must already be in the database. The XML documents in the user and organization management calls also include the ContactInfo object for the user or the organization being manipulated.

Figure 5.20: VIMCO User Management Services

The main repository for user information is the VIMCO DB. All user management operations are primarily performed on this database. For instance, in order to create a new user for an application database, the user must first be created in the VIMCO DB. When writing user information in a database other than the VIMCO DB, VIMCO first checks whether this user is already defined in the VIMCO DB, and throws an exception otherwise. Then VIMCO checks whether the `username` specified for the new user is unique for insertions, and whether it already exists for updates. Once the `User` object is inserted into the database, a user account must be created in the underlying DBMS. Currently, this is done manually by the administrator, though it could be automated by calling the DBMS command from within the `writeUser` method. If the user information is required in all VIMCO databases, it is the administrator's responsibility to make the necessary calls for each database. On the other hand, when deleting a user from a database other than the VIMCO DB, the user information is not removed from the database; only the user account on the underlying DBMS is deleted.
If the target database is the VIMCO DB, then the user information is removed from the database, and the user accounts on all DBMSs are deleted. If a user that was deleted from the VIMCO DB is defined once more, VIMCO maintains consistency by ensuring that each real user has only one `User` object in any VIMCO database (since the `User` objects are not removed from the other databases).

### Access Rights Management Services

Access rights information in VIMCO is stored in the VIMCO DB, which implements the model for roles and access rights that was shown in Figure 4.18. Roles that can be assumed by users were described in Subsection 4.5.3. Below, the specific roles defined for current VIMCO users are summarized:

- **Expressive User**, an Application User that can access the Expressive DB, and read and write microarray information.
- **MACS User**, an Application User that can access the MACS DB, and read and write material analysis information.
- **VLAM-G Admin**, a CEE Admin that can create user accounts in VLAM-G and in VIMCO.
- **VIMCO Admin**, an Information Management Admin responsible for maintaining the information stored in the VIMCO DB.
- **Expressive Expert**, a Domain Expert responsible for maintaining the microarray PFTs in the Expressive DB and the microarray-related module descriptions in the RTS DB.
- **MACS Expert**, a Domain Expert responsible for maintaining the material analysis PFTs in the MACS DB and the material analysis-related module descriptions in the RTS DB. Note here that an application database schema is defined by the VIMCO Admin together with an expert from the domain of the application.
- **System**, which has all privileges and is used only in emergency cases.

As described in Subsections 4.3.3 and 4.5.3, access rights for each role are defined as a set of restrictions based on the role definitions given above. An overview of the VIMCO functionality for access rights management is given in Figure 5.21.
```
readRole(sessionId : int, dbId : int, roleId : int) : XML
writeRole(sessionId : int, dbId : int, role : XML) : XML
deleteRole(sessionId : int, dbId : int, roleId : int)
readRestriction(sessionId : int, dbId : int, restrictionId : int) : XML
writeRestriction(sessionId : int, dbId : int, restriction : XML) : XML
deleteRestriction(sessionId : int, dbId : int, restrictionId : int)
addUserRole(sessionId : int, dbId : int, userId : int, roleId : int)
removeUserRole(sessionId : int, dbId : int, userId : int, roleId : int)
addRoleRestriction(sessionId : int, dbId : int, roleId : int, restrictionId : int)
removeRoleRestriction(sessionId : int, dbId : int, roleId : int, restrictionId : int)
```

Figure 5.21: VIMCO Access Rights Management Services

Access rights in VIMCO are defined on the VIMCO data types. In this version of VIMCO, the data types consist of the different types of information managed by VIMCO. Although it is called a data type, a data type can actually consist of several types in the VIMCO databases. The VIMCO data types include PFTs, studies, module definitions, topologies, session information, data source information (databases), user information, access rights (roles and restrictions), identifier, password, and the info classes. VIMCO considers all these types as atomic, meaning that a restriction is defined on an entire study, but not on the individual elements of the study. However, restrictions can also be defined on any data type in a database schema, which are only enforced during query execution. These data types are used to define the restrictions that are applied to users when requesting a service that manipulates one of these data types. However, VIMCO provides some services which do not work on any of these data types. In order to support access rights on these services, similar mechanisms must be developed for those services.
Ideally, this mechanism should be the *service* counterpart of the *data type*, where restrictions are defined for *all* VIMCO services. However, for practical reasons and for simplicity, the current VIMCO version provides access rights management for only a subset of all VIMCO services, and treats them as data types. These services are write objects, get user type, and authenticate user. For each of these services, one restriction is defined per role (if there should be any restrictions).

### 5.5.4 Services for Managing Experiment-Related Information

Services provided by VIMCO for managing experiment-related information (i.e. PFTs, studies, module definitions, topologies) are described in this subsection.

**PFT Management Services**

Figure 5.22 shows the VIMCO services for PFT management. A PFT consists of a PFT object, a number of PFTElements and PFTConnections, and one PFTGUI object for each PFTElement and PFTConnection object. The PFT manipulation services consider a PFT as an atomic unit. When a PFT is passed to VIMCO, VIMCO traverses the PFT graph starting from the PFT object, and inserts all objects into the specified database. Similarly, when reading a PFT, first the PFT object with the specified ID is retrieved from the database, then the other objects in the PFT are traversed and retrieved.
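The atomic treatment of a PFT can be illustrated with a depth-first traversal that gathers every object reachable from the PFT object before writing or after reading. The classes below are illustrative stand-ins, not the actual VIMCO PFT classes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of treating a PFT as an atomic unit: starting from the PFT
// object, every element, connection, and PFTGUI object reachable from it
// is collected, so the whole graph is handled in one operation.
class PftSketch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    // Depth-first traversal gathering every object reachable from the root.
    static List<String> collect(Node root) {
        List<String> out = new ArrayList<>();
        collectInto(root, out);
        return out;
    }

    private static void collectInto(Node n, List<String> out) {
        out.add(n.name);
        for (Node c : n.children) collectInto(c, out);
    }
}
```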
| Service | Parameters | Description |
|---------|------------|-------------|
| readPFT | (in sessionid : int, in dbid : int, in pftid : int) : XML | Retrieve a PFT by ID |
| writePFT | (in sessionid : int, in dbid : int, in pft : XML) : XML | Create a PFT object |
| writePFTAs | (in sessionid : int, in dbid : int, in pft : XML) : XML | Create a PFT object as a template |
| deletePFT | (in sessionid : int, in dbid : int, in pftid : int) | Delete a PFT by ID |

Figure 5.22: VIMCO PFT Management Services

In VIMCO, different versions of the same PFT are grouped together. The available services list always contains the latest version of a PFT; hence, new studies can only be created using the latest versions of the PFTs. A PFT is neither deleted nor updated as long as there are studies created using that PFT. Domain experts can retrieve older versions of a PFT by specifying the ID of the older version.

**Study Management Services**

Similar to PFTs, studies are also considered as atomic units. The VIMCO services for study management are given in Figure 5.23. When writing a study, a link is stored for each element in the study that points to the corresponding PFTElement. These link objects are also included in the study XML received from the Front-End as part of the study manipulation calls. One of the major issues with study management is the boundary of a study. Since an object can be included in more than one study (e.g. a software instance), traversing the links starting from the Experiment object is not sufficient. The current version of VIMCO uses the study-PFT links to retrieve the study elements. Given a study, VIMCO first retrieves the link objects, and obtains the OIDs of the study elements from these objects. Then the study objects are retrieved one by one using the OIDs.
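The boundary computation described above can be sketched as follows; a map stands in for the application database, and the names are illustrative rather than the actual VIMCO code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of delimiting a study's boundary: the study-PFT link objects
// yield the OIDs of the study elements, which are then fetched one by
// one, instead of blindly traversing from the Experiment object.
class StudyBoundarySketch {
    static List<String> readStudy(List<Long> elementOids, Map<Long, String> database) {
        List<String> elements = new ArrayList<>();
        for (long oid : elementOids) {
            elements.add(database.get(oid)); // retrieve each study element by OID
        }
        return elements;
    }
}
```

Driving the retrieval from the link objects, rather than from reachability, is what keeps shared objects (such as a software instance used in two studies) from dragging a second study's objects into the result.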
As mentioned in Subsection 4.4.2, the reuse policy values are used to determine which study elements must actually be removed from the database when deleting a study. The same algorithm is used when an existing study is saved as a new one, to determine which objects must be copied and which must not. The execQuery method is called by the Front-End to retrieve the instances of a study step satisfying certain criteria. The query is executed by VIMCO as described in Subsection 4.4.2. Queries in the current VIMCO version are formulated in Matisse SQL. The getReuseObjects method is called by the Front-End when a user issues a query and selects one object from the query result set. Upon this call, VIMCO retrieves the semantically related objects from the database based on the reuse policy values (as described in Subsection 4.4.2). If objects are copied, then one OriginCopy instance for each copied object is created and sent back to the user together with the related objects. If the user decides to use a copied object in her/his study, only then is the corresponding OriginCopy object persisted in the database.

**Module Information Management Services**

VIMCO provides a number of services for module information management (see Figure 5.24). These services are only available to domain experts. Similar to PFTs and studies, a module is also considered as an atomic unit by VIMCO, i.e. a request for module information manipulation contains not only descriptive information about the module itself, but also information about its ports and their data types, required parameters and environment variables, and executables.

**Topology Manipulation Services**

The VIMCO services for topology management are given in Figure 5.25. These services only work on the instances of the module descriptions.
In other words, when reading a topology, VIMCO first retrieves the Topology object with the specified ID, then traverses all the Module objects in the topology following their connections to each other. During the traversal, all relationships of objects in the topology are followed, except those that link the topology objects to the module description objects (i.e. relationships having a name that starts with hasMeta). Similarly, when writing a topology, the Front-End does not include the module description objects in the topology XML.

```
readStudy(in sessionId : int, in dbId : int, in studyId : int) : XML
writeStudy(in sessionId : int, in dbId : int, in study : XML) : XML
writeStudyAs(in sessionId : int, in dbId : int, in study : XML) : XML
deleteStudy(in sessionId : int, in dbId : int, in studyId : int)
execQuery(in sessionId : int, in dbId : int, in query : string) : XML
getReuseObjects(in sessionId : int, in dbId : int, in copyStudyElmId : int, in currentPFTElmId : int) : XML
```

Figure 5.23: VIMCO Study Management Services

**VIMCO RTS Module**

Users can also access VIMCO through the VLAM-G Run Time System (RTS). The current version of VIMCO provides a single RTS module for querying a Matisse database, which is mainly used for testing purposes. This module accepts host name, database name, user name, password, and query as parameters from the user, and directly accesses the specified database. This is the only case where direct access to VIMCO databases occurs. This module will be replaced with a new one, which uses HTTP to access VIMCO and XML as the data exchange format. This will allow both the enforcement of VIMCO access rights on requests from the RTS and the usage of more than one database in a request (in comparison to directly accessing a single database at a time).
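Returning to the topology traversal described above: the name-based relationship filter (follow everything except relationships whose name starts with hasMeta) can be sketched in isolation. The method and class names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the relationship filter used when traversing a topology:
// every relationship is followed except those that link topology objects
// back to module description objects, i.e. names starting with "hasMeta".
class TopologyTraversalSketch {
    static List<String> followable(List<String> relationshipNames) {
        List<String> kept = new ArrayList<>();
        for (String name : relationshipNames) {
            if (!name.startsWith("hasMeta")) kept.add(name); // skip meta links
        }
        return kept;
    }
}
```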
**Utility Services**

In order to ease the development and testing of VIMCO and VLAM-G, several utility services and tools have been developed. Two of the utility services are instantiateSession and writeObjects. instantiateSession creates and initializes a Session object using the provided session information. writeObjects is used by administrators, mainly to insert large amounts of data into the application databases for testing the developed VIMCO services. The utility tools include SchemaClassGenerator, which is used to generate Java classes for the types defined in a Matisse database. Another class, Utility, implements several helper methods, such as tokenizing a string value and generating keys.

Although it supports more functionality than the above-mentioned utility services and tools, the attribute properties mechanism can be considered a utility for the Front-End. Attribute properties are used to direct the Front-End about how to treat attributes of the objects displayed in the Front-End editors. For instance, the `pftGroupId` attribute of the PFT class is set by VIMCO, and should not be modified by users. Another example is the `oid` attribute, which can be set only by VIMCO or by the Front-End itself. To handle such situations, the `AttProperty` class is defined (see Figure 5.26), which defines, for each attribute of a class, the visibility level, the handling type of the attribute, and whether the value of the attribute can be set to null. It also defines the name that should be used for display in the Front-End. The possible values for the handling type and access modes are defined in the `VimcoServerConstants` class (which are also shown in Figure 5.27). An attribute may correspond to a file location for which the file selection dialog needs to be displayed, or it can be a date or a list requiring special formatting. Also, a user can read and write an attribute, or may only view the value of the attribute. In some cases, users cannot see the attribute at all.
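A sketch of the `AttProperty` class along these lines is given below. The constant names and values are illustrative stand-ins for the actual definitions in `VimcoServerConstants` (Figure 5.27), and the field names are assumptions based on the description above.

```java
// Sketch of the AttProperty class: per-attribute display name, handling
// type, access mode, and nullability, used to direct the Front-End.
class AttPropertySketch {
    // Illustrative access modes (stand-ins for VimcoServerConstants values).
    static final int READ_WRITE = 0, READ_ONLY = 1, HIDDEN = 2;
    // Illustrative handling types: plain value, file location, date, list.
    static final int PLAIN = 0, FILE = 1, DATE = 2, LIST = 3;

    final String displayName;   // name shown in the Front-End editors
    final int handlingType;     // how the Front-End should render/edit the value
    final int accessMode;       // read-write, read-only, or hidden
    final boolean nullable;     // whether the value may be set to null

    AttPropertySketch(String displayName, int handlingType, int accessMode, boolean nullable) {
        this.displayName = displayName;
        this.handlingType = handlingType;
        this.accessMode = accessMode;
        this.nullable = nullable;
    }

    boolean isVisible() { return accessMode != HIDDEN; }
}
```

For example, `pftGroupId` would carry a read-only access mode so the Front-End displays but never edits it, while an internal attribute would be marked hidden.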
Figure 5.26: Class `AttProperty`

Figure 5.27: Possible values for `handlingType` and `accessMod` in `AttProperty`

## 5.6 Conclusions

This chapter presented the architectural design and implementation of the information management framework that was described in the previous chapter. Specifically, first the implementation of the experiment model and the user environment within the VLAM-G experimentation environment was described. Then the chapter focused on the design and implementation of VIMCO, the information management platform of VLAM-G. The VIMCO architecture, the databases maintained by VIMCO, and the services provided by VIMCO were described, all of which implement the corresponding data or functionality models defined in the information management framework of Chapter 4. The following can be mentioned among the key characteristics of the VIMCO design and implementation:

- **Modular architecture.** VIMCO has a modular architecture, which allows for easy maintenance and modification of the VIMCO software.
- **Well-defined interfaces.** Components in the VIMCO architecture present well-defined, uniform programming interfaces that are easy to understand. This makes the development of a new component easy, reduces the possibility of error, and improves the interoperability and reusability of components.
- **Independent components.** VIMCO components are independent from each other. Independence is achieved through interfaces and driver components, where driver components implement the interfaces. As such, any driver component can be replaced by another component without disturbing the system, as long as the new component complies with the interface.
- **Scalability.** The VIMCO architecture is scalable, flexible, and open to the addition of new users and resources over time.

As a result, VIMCO is an evolvable information management platform that implements the generic and uniform data/functionality models of the information management framework.
FP-Hadoop: Efficient Processing of Skewed MapReduce Jobs

Miguel Liroz-Gistau\textsuperscript{a}, Reza Akbarinia\textsuperscript{a,∗}, Divyakant Agrawal\textsuperscript{b}, Patrick Valduriez\textsuperscript{a}

\textsuperscript{a}INRIA Montpellier, France
\textsuperscript{b}Department of Computer Science, University of California, Santa Barbara

HAL Id: lirmm-01377715 (https://hal-lirmm.ccsd.cnrs.fr/lirmm-01377715), submitted on 7 Oct 2016.

Abstract

Nowadays, we are witnessing the fast production of very large amounts of data, particularly by the users of online systems on the Web. However, processing this big data is very challenging, since both space and computational requirements are hard to satisfy. One solution for dealing with such requirements is to take advantage of parallel frameworks, such as MapReduce or Spark, that allow building powerful computing and storage units on top of ordinary machines. Although these key-based frameworks have been praised for their high scalability and fault tolerance, they show poor performance in the case of data skew: there are important cases where a high percentage of the processing in the reduce side ends up being done by only one node.
In this paper, we present FP-Hadoop, a Hadoop-based system that renders the reduce side of MapReduce more parallel by efficiently tackling the problem of reduce-side data skew. FP-Hadoop introduces a new phase, denoted intermediate reduce (IR), where blocks of intermediate values are processed by intermediate reduce workers in parallel. With this approach, even when all intermediate values are associated with the same key, the main part of the reducing work can be performed in parallel, taking advantage of the computing power of all available workers. We implemented a prototype of FP-Hadoop and conducted extensive experiments over synthetic and real datasets. We achieved excellent performance gains compared to native Hadoop, e.g., more than 10 times in reduce time and 5 times in total execution time.

Keywords: MapReduce, Data Skew, Parallel Data Processing

1. Introduction

In the past few years, advances in the Web have made it possible for the users of information systems to produce large amounts of data. However, processing this big data is very challenging, since both space and computational requirements are hard to satisfy. One solution for dealing with such requirements is to take advantage of parallel frameworks, such as MapReduce\textsuperscript{1} or its IO-efficient versions such as Spark\textsuperscript{2}, that allow building powerful computing and storage units on top of ordinary machines. The idea behind MapReduce is simple and elegant. Given an input file of key-value pairs and two functions, map and reduce, each MapReduce job is executed in two main phases. In the first phase, called map, the input data is divided into a set of splits, and each split is processed by a map task on a given worker node. These tasks apply the map function on every key-value pair of their split and generate a set of intermediate pairs. In the second phase, called reduce, all the values of each intermediate key are grouped and assigned to a reduce task.
Reduce tasks are also assigned to worker machines and apply the reduce function on the created groups to produce the final results. Although the MapReduce and Spark frameworks have been praised for their high scalability and fault tolerance, they show poor performance in the case of data skew. There are important cases where a high percentage of the processing in the reduce side ends up being done by only one node. Let's illustrate this with an example.

Example 1. Top accessed pages in Wikipedia. Suppose we want to analyze the statistics that the free encyclopedia, Wikipedia, has published about the visits of its pages by users (available at http://dumps.wikimedia.org/other/pagecounts-raw/). In the statistics, for every hour there is a file in which, for each visited page, there is a line containing some information including, among others, its URL, language, and the number of visits. Given such a file, we want to return, for each language, the top-k% accessed pages, e.g., the top 1%. To answer this query, we can write a simple program such as Algorithm 1 below. (This program is just for illustration; it is actually possible to write more efficient code by leveraging the sorting mechanisms of MapReduce.)

```
map(id : K1, content : V1)
    foreach line (lang, page_id, num_visits, ...) in content do
        emit (lang, (num_visits, page_id))
    end

reduce(lang : K2, pages_info : list(V2))
    sort pages_info by num_visits
    foreach page_info in top k% do
        emit (lang, page_id)
    end
```

Algorithm 1: Map and reduce functions for Example 1

In this example, the load of the reduce workers may be highly skewed. In particular, the worker that is responsible for reducing the English language will receive a lot of values. According to the statistics published by Wikipedia (http://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia), the percentage of English pages over the total was more than 70% in 2002 and more than 25% in 2007. This means, for example, that if we use the pages published up to 2007 and the number of reduce workers is more than 4, then there is no way to balance the load, because one of the nodes would receive more than 1/4 of the data. The situation is even worse when the number of reduce tasks is high, e.g., 100; in that case, after some time all reduce workers but one will have finished their assigned tasks, and the job has to wait for the task responsible for the English pages to finish. The execution time of the reduce phase is then at least equal to the execution time of this task, no matter the size of the cluster. There have been some proposals to deal with the problem of reduce-side data skew. One of the main approaches is to try to uniformly distribute the intermediate values to the reduce tasks, e.g., by dynamically repartitioning the keys to the reduce workers. However, this approach is not efficient in many cases, e.g., when there is only one single intermediate key, or when most of the values correspond to one of the keys.
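To make Example 1 concrete, here is a minimal single-process Python simulation of Algorithm 1. This is our own sketch: the in-memory grouping stands in for MapReduce's shuffle, and the sample data and the 40% threshold are invented for illustration:

```python
from collections import defaultdict
import math

def map_fn(line):
    # Each input line: (lang, page_id, num_visits); emit lang -> (visits, page).
    lang, page_id, visits = line
    return (lang, (visits, page_id))

def reduce_fn(lang, pages_info, k_pct):
    # Sort by number of visits (descending) and keep the top k% of pages.
    pages_info.sort(reverse=True)
    top_n = max(1, math.ceil(len(pages_info) * k_pct / 100))
    return [(lang, page) for _, page in pages_info[:top_n]]

lines = [("en", "Cat", 90), ("en", "Dog", 70), ("en", "Ant", 10),
         ("fr", "Chat", 50), ("fr", "Chien", 40)]

groups = defaultdict(list)
for line in lines:
    key, value = map_fn(line)
    groups[key].append(value)       # "shuffle": group values by key

results = []
for lang, infos in groups.items():
    results.extend(reduce_fn(lang, infos, k_pct=40))
print(results)  # [('en', 'Cat'), ('en', 'Dog'), ('fr', 'Chat')]
```

Note how the "en" group accumulates the most values; in a real cluster, all of them would land on a single reduce worker, which is exactly the skew the paper targets.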
One solution for decreasing the reduce-side skew is to filter the intermediate data as much as possible on the map side, e.g., by using a combiner function. However, the input of the combiner function is restricted to the data of one map task, so its filtering power is very limited for some applications. Let's illustrate this using our top-1% problem. Suppose we have 1TB of Wikipedia data and 200 nodes for processing them. To be able to filter some intermediate data with the combiner function, we would need more than 1% of the total values of at least one key (language) in a single map task. Thus, if we use the default splits of Hadoop (64 MB size), the combiner function can filter no data. The solution is to increase the size of input splits significantly, e.g., to more than 10GB (1% of the total). However, using big splits is not advised, since it significantly decreases MapReduce performance due to the following disadvantages: 1) more map-side skew: with big splits, some map tasks may take too much time (e.g., because of a slow CPU), and this would significantly increase the total MapReduce execution time; 2) less parallelism: a big split size means a small number of map tasks, so several nodes (or at least some of their computing slots) may have nothing to do in the map phase. In our example, with 10GB splits, there would be only 100 map tasks, so half of the nodes would be idle. This performance degradation is confirmed by our experimental results reported in Section 5.11. In this paper, we propose FP-Hadoop, a Hadoop-based system that uses a novel approach for dealing with data skew in the reduce side. In FP-Hadoop, there is a new phase, called intermediate reduce (IR), whose objective is to make the reduce side of MapReduce more parallel. More specifically, the programmer replaces his/her reduce function by two functions: an intermediate reduce (IR) and a final reduce (FR) function.
Then, FP-Hadoop executes the job in three phases, each corresponding to one of the functions: the map, intermediate reduce (IR), and final reduce (FR) phases. In the IR phase, even if all intermediate values belong to only one key (i.e., the extreme case of skew), the reducing work is done using the computing power of all available workers. Briefly, the data reducing in the IR phase has the following distinguishing features:

- **Parallel reducing of each key**: The intermediate values of each key can be processed in parallel by multiple intermediate reduce workers.
- **Distributed intermediate block construction**: The input of each intermediate worker is a block composed of intermediate values distributed over multiple nodes of the system, chosen using a scheduling strategy, e.g., a locality-aware one.
- **Hierarchical execution**: The processing of intermediate values in the IR phase can be done in several levels (iterations). This permits hierarchical execution plans for jobs such as top-k% queries, in order to progressively decrease the size of the intermediate data.
- **Non-overwhelming reducing**: The size of the intermediate blocks is bounded by a configurable maximum value that prevents the intermediate reducers from being overwhelmed by very large blocks of intermediate data.

We implemented a prototype of FP-Hadoop and conducted extensive experiments over synthetic and real datasets. The results show excellent performance gains of FP-Hadoop compared to native Hadoop. For example, in a cluster of 20 nodes with 120GB of input data, FP-Hadoop outperformed Hadoop by a factor of about 10 in reduce time, and a factor of 5 in total execution time. This paper is a major extension of \cite{4} and \cite{5}, with at least 30% new material. In the current paper, we propose a fault-tolerance mechanism that ensures the correctness of the results in the case of failures, and reduces the amount of data to be re-processed compared to native Hadoop.
Additionally, we describe the design in more detail and provide a more extensive experimental evaluation. The rest of this paper is organized as follows. In Section 2, we explain how Hadoop's MapReduce works and give the details necessary to present our approach. In Section 3, we present the principles of FP-Hadoop, including its programming model. Then, in Section 4, we give more details about its design. In Section 5, we report the results of the experiments done to evaluate the performance of FP-Hadoop. In Section 6, we discuss related work, and Section 7 concludes.

2. MapReduce Background

In this section, we first briefly explain how MapReduce works in Hadoop. This will be useful to understand the technical details of FP-Hadoop. Then, we give an abstract view of the MapReduce execution. This abstract view is useful to better understand the main differences between the programming models of Hadoop and FP-Hadoop.

2.1. Job Execution in Hadoop

In Hadoop, executing a MapReduce job requires a master node for coordinating the job execution and some worker nodes for executing the map and reduce tasks [6]. The worker nodes can be configured with a predefined number of slots for map and reduce tasks, so that each slot is able to execute a single task at a given time. When a MapReduce job is submitted to a node, it computes the input splits. The number of input splits can be customized, but typically there is a one-to-one relationship between splits and file chunks in the filesystem, which by default have a size of 64MB. The location of these splits and some information about the job are submitted to the master, which creates a job object with all the necessary information, including the map and reduce tasks to be executed. One map task is created per input split. When a map task is assigned to a worker, it is executed in a Java Virtual Machine (JVM). The task reads the corresponding input split, applies the map function on each input element (e.g.
line), and generates intermediate key-value pairs, which are first kept in a buffer in main memory. When the content of the buffer reaches a threshold (by default 80% of its size), the buffered data is written to disk in a file called a spill. Before the content of the buffer is written to the spill, the keys are divided into several partitions (as many as there are reduce tasks) using a partitioning function, and then the values of each key are sorted and written to the corresponding partition in the spill. An optional combiner function may be applied to the buffered data just before it is written to the spills. The objective of this function is to decrease the size of the intermediate data that should be transferred to the reduce workers. Once a map task is completed, the generated spills are merged into a final output file and the master is notified. In the reduce phase, each partition is assigned to one of the reduce tasks. Each reduce task retrieves the key-value pairs corresponding to its partition from all the map output files and merges them using the merge-sort algorithm. The transfer of data from map workers to reduce workers is called shuffling, and it can start as soon as a map task finishes its work. However, the reduce function cannot be applied until all the map tasks have finished and their outputs have been merged and grouped. Each reduce task groups the values of the same key, applies the reduce function on the corresponding values, and generates the final output results. When all reduce tasks of a job are completed successfully, the user is notified by the master.

2.2. An Abstract View

In an abstract view, the input of the map phase in MapReduce can be considered as a set of data splits, which should be processed by all workers. Thus, each map worker takes one split, processes it, and then takes another one until there are no splits left in the set.
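The partition-and-sort step described in Section 2.1 can be sketched in a few lines. This is a simplified model of our own: Hadoop's actual spill machinery is more involved, and `hash(key) % num_partitions` stands in for its default hash-based partitioner:

```python
from collections import defaultdict

NUM_REDUCERS = 2

def partition(key, num_partitions=NUM_REDUCERS):
    # Hash partitioning: every pair with the same key lands in the
    # same partition, hence on the same reduce task.
    return hash(key) % num_partitions

def write_spill(buffered_pairs):
    # Divide buffered pairs into partitions and sort each partition by
    # key, mimicking what happens before a spill file is written.
    spill = defaultdict(list)
    for k, v in buffered_pairs:
        spill[partition(k)].append((k, v))
    for p in spill:
        spill[p].sort()
    return dict(spill)

spill = write_spill([("b", 1), ("a", 2), ("a", 3), ("c", 4)])
assert all(pairs == sorted(pairs) for pairs in spill.values())
```

Because the partition is a pure function of the key, all values of one key are guaranteed to meet at a single reducer, which is precisely why a single hot key cannot be load-balanced by repartitioning alone.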
Thus, we can hope for good parallelism on the map side, since no worker is idle as long as there are splits left in the set. On the reduce side, however, there is no such parallelism, because the values of each key must be processed by one reduce worker. Thus, there may be situations where a high volume of the work is done by a single worker or a small number of workers, while the others are idle. We have made FP-Hadoop more parallel and efficient than Hadoop, since all reduce workers can contribute to reducing the map output even if it belongs to only one key.

3. FP-Hadoop Principles

In this section, we introduce the programming model of FP-Hadoop, its main phases, and the functions that are necessary for executing jobs. The design details of FP-Hadoop are given in the next section.

3.1. Programming Model

In FP-Hadoop, the output of the map tasks is organized as a set of blocks (splits) which are consumed by the reduce workers. More specifically, the intermediate key-value pairs are dynamically grouped into splits, called Intermediate Result Splits (IR splits for short). The size of an IR split is bounded between two values, MinIRSize and MaxIRSize, configurable by the user. Formally, each IR split is a set of \((k, V)\) pairs such that \(k\) is an intermediate key and \(V\) is a subset of the values generated for \(k\) by the map tasks. FP-Hadoop executes jobs in three different phases (see Figure 1): map, intermediate reduce, and final reduce. The map phase is almost the same as that of Hadoop, in the sense that the map workers apply the map function on the input splits and produce intermediate key-value pairs. The only difference is that in FP-Hadoop, the map output is managed as a set of IR fragments that are used for constructing IR splits. There are two different reduce functions: the intermediate reduce (IR) and final reduce (FR) functions.
In the intermediate reduce phase, the IR function is executed in parallel by reduce workers on the IR splits, which are constructed using a scheduling strategy from the intermediate values distributed over the nodes. More specifically, in this phase, each ready reduce worker takes an IR split as input, applies the IR function on it, and produces a set of key-value pairs which may be used for constructing future IR splits. When a reduce worker finishes its input split, it takes another split, and so on until there are no more IR splits. In general, programming the IR function is not very complicated; it can be done in a way similar to the combiner function of Hadoop. In Section 3.2, we give more details about the IR function and how it can be programmed. The intermediate reduce phase can be repeated in several iterations, to apply the IR function several times on the intermediate data and incrementally reduce the final splits consumed by the FR function (see Figure 2). The maximum number of iterations can be specified by the programmer, or be chosen adaptively, i.e., iterating as long as the input/output size ratio of the intermediate reduce tasks is higher than a given threshold. In the final reduce phase, the FR function is applied on the IR splits generated as the output of the intermediate reduce phase. The FR function is in charge of performing the final grouping and producing the results of the job. Like in Hadoop, the keys are assigned to the reduce tasks according to a partitioning function. Each reduce worker pulls all IR splits corresponding to its keys, merges them, applies the FR function on the values of each key, and generates the final job results. Since in FP-Hadoop the final reduce workers receive values on which the intermediate workers have already worked, the load of the final reduce workers in FP-Hadoop is usually much lower than that of the reduce workers in Hadoop.
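The three-phase flow can be illustrated with SUM, a function whose IR and FR parts coincide. In this sketch (our own simplification: a single hot key, and sequential calls standing in for parallel workers), the IR phase shrinks 1000 values into 10 partial sums before the final reduce runs:

```python
def ir(key, partial_values):
    # Intermediate reduce: pre-aggregate one bounded block of values.
    return (key, [sum(partial_values)])

def fr(key, values):
    # Final reduce: produce the final result for the key.
    return (key, sum(values))

# Extreme skew: all intermediate values belong to a single key.
values = list(range(1000))
max_ir_size = 100

# IR phase: workers consume bounded IR splits; each split could run
# on a different worker.
splits = [values[i:i + max_ir_size] for i in range(0, len(values), max_ir_size)]
partials = []
for split in splits:
    _, out = ir("hot-key", split)
    partials.extend(out)

# FR phase: one worker finishes from the much smaller partials list.
key, total = fr("hot-key", partials)
assert total == sum(values) == 499500
```

The final reducer sees 10 values instead of 1000, which is how FP-Hadoop keeps the last, serialized step cheap even when one key dominates.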
In the next subsection, we give more details about the IR and FR functions and explain how they can be programmed.

### 3.2. IR and FR Functions

To take advantage of the intermediate reduce phase, the programmer should replace his/her reduce function by intermediate and final reduce functions. Formally, the inputs and outputs of the map (M), intermediate reduce (IR), and final reduce (FR) functions are as follows:

\[ M : (K_1, V_1) \rightarrow \text{list}(K_2, V_2) \]
\[ IR : (K_2, \text{partial list}(V_2)) \rightarrow (K_2, \text{partial list}(V_2)) \]
\[ FR : (K_2, \text{list}(V_2)) \rightarrow \text{list}(K_3, V_3) \]

Notice that the IR function can receive any partial set of intermediate values as input, whereas the FR function is passed all the values of an intermediate key. Given a reduce function, to write the IR and FR functions the programmer should separate the sections that can be processed in parallel and put them in the IR function, and the rest in the FR function. Formally, given a reduce function \(R\), the programmer should find two functions IR and FR such that, for any intermediate key \(k\) and its list of values \(S\):

\[ R(k, S) = FR(k, \langle IR(k, S_1), \ldots, IR(k, S_n) \rangle) \quad \text{for every partitioning } S_1 \cup \ldots \cup S_n = S \]

In Table 1, we enumerate some important functions and show their IR and FR functions. For many functions, the original reduce function itself can be used. The following example shows how a Top-k query can be implemented in FP-Hadoop using the same function for IR and FR.

**Example 2. Top-k.** Consider a job that, given a scoring function, computes the top-k tuples of a big table. In this job, the map function computes the score of each read tuple and emits \((\text{key}, \{\text{tupleID}, \text{score}\})\), where key is the identifier of the set of values on which we want to find the top-k values.
Then, both the IR and FR functions can be implemented as a function that, given a set of \(\{\text{tupleID}, \text{score}\}\) pairs, returns the \(k\) pairs that have the highest scores. Thus, in practice, the intermediate reduce workers generate partial top-k results (the top-k tuples of their input IR splits), and the FR function produces the final results from these partial results. If the value of \(k\) is big, e.g., 1% of the input as in our motivating example in the Introduction, then we may need more iterations in the intermediate phase to progressively reduce the intermediate data. In this case, the execution of the job for overloaded key(s) can be hierarchical, as in Figure 2. To plan such executions, it is sufficient to set the maximum number of iterations to the required level; FP-Hadoop then organizes multiple iterations for overloaded keys when needed. A class of functions that can usually take advantage of the intermediate reduce phase is that of aggregate functions. Examples of such functions are SUM and MIN/MAX (using the same function as IR and FR), and AVG and STD (using different functions for IR and FR). Aggregate functions can be classified into three groups:

- **Definition 1 (Distributive).** Let \(F\) be a reduce function, \(S\) a set of values, and \(S_1, \ldots, S_n\) a partitioning of \(S\). \(F\) is distributive if there is a function \(G\) such that \(F(S) = G(F(S_1), \ldots, F(S_n))\).
- **Definition 2 (Algebraic).** Let \(F\) be a reduce function, \(S\) a set of values, and \(S_1, \ldots, S_n\) a partitioning of \(S\). \(F\) is algebraic if there is an \(m\)-valued function \(G\) (with bounded \(m\)) and a function \(H\) such that \(F(S) = H(G(S_1), \ldots, G(S_n))\).
- **Definition 3 (Holistic).** Let \(F\) be a reduce function, \(S\) a set of values, and \(S_1, \ldots, S_n\) a partitioning of \(S\). \(F\) is holistic if there is no \(m\)-valued function \(G\) (with bounded \(m\)) that characterizes the computation.
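Definitions 1 and 2 can be checked concretely. In this minimal sketch, AVG is algebraic with \(G = (\text{sum}, \text{count})\) as the bounded sub-aggregate and \(H\) combining the sub-aggregates:

```python
def g(values):                      # runs independently on each partition
    # Bounded (m = 2) sub-aggregate for AVG.
    return (sum(values), len(values))

def h(sub_aggregates):              # combines the partition results
    total = sum(s for s, _ in sub_aggregates)
    count = sum(c for _, c in sub_aggregates)
    return total / count

values = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]
partitions = [values[:2], values[2:5], values[5:]]
# F(S) = H(G(S_1), ..., G(S_n)) holds for any partitioning of S.
assert h([g(p) for p in partitions]) == sum(values) / len(values)
```

For a distributive function such as SUM, `g` and `h` collapse into the function itself; for a holistic one, no bounded-size `g` exists.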
In other words, \(F\) is holistic if it is neither distributive nor algebraic. Intuitively, a function is distributive if it can be computed in a distributed manner. Examples of such functions are COUNT, SUM, MIN, and MAX. Distributive functions can particularly benefit from the usage of intermediate reduce tasks, as these usually reduce the data size significantly. Algebraic functions are functions that can be computed using a bounded number of distributive functions. Examples of algebraic functions are AVG and STD. These functions can also take advantage of intermediate reduce tasks. For holistic functions, there is no constant bound on the storage size needed to describe a sub-aggregate. Some holistic functions, such as SKYLINE, may still take great advantage of running an intermediate phase. For other holistic functions, it may be difficult to find an efficient intermediate function. Note that if it is difficult to find an efficient IR function for a job, the programmer can simply omit it; the final reduce phase then starts just after the map phase, i.e., as in Hadoop.

4. Design Details

In this section, we describe our design choices in FP-Hadoop. We focus on the activities that do not exist in Hadoop or are very different in FP-Hadoop, particularly: 1) management of the IR fragments used for making IR splits; 2) intermediate task scheduling; 3) intermediate and final reduce task creation; and 4) management of multiple iterations in the intermediate reduce phase.

4.1. IR Fragments for Constructing IR Splits

In this subsection, we describe our approach for constructing the IR splits, which are the working blocks of the reducers in the intermediate reduce phase. In Hadoop, the output of the map tasks is kept in the form of temporary files (called spills). Each spill contains a set of partitions, such that each partition involves a set of keys and their values.
These spills are merged at the end of the map task, and the data of each partition is sent to one of the reducers. In FP-Hadoop, the spills are not merged. Each partition of a spill generates an IR fragment, and the IR fragments are used for making IR splits. When a spill is produced by a map task, the information about the spill's IR fragments, which we call IRF metadata, is sent to the master node using the heartbeat message-passing mechanism. An IR fragment is uniquely identified by the task that produced it, its spill number, and the partition. The data flow between FP-Hadoop components is shown in Figure 3. Notice that the choice of using spills as the output unit of the map phase provides the following advantages. The intermediate phase may start as soon as sufficient spills have been produced, even if no map task has yet finished, thus increasing the parallelism. Moreover, the fact that spills are bounded in size guarantees that the IR split maximum size is respected. This would not be the case if the whole output of a map task were used, since its size is unbounded and could by itself be bigger than the IR split limit. For keeping IRF metadata, the master of FP-Hadoop uses a specific data structure called the IR fragment table (IRF Table). Each partition has an entry in the IRF Table that points to a list keeping the IR fragment metadata of the partition, e.g., its size, spill, and the ID of the map worker where the IR fragment was produced. The master uses the information in the IRF Table for constructing IR splits and assigning them to ready reduce workers. This is done mainly based on the scheduling strategies described in the next subsection.

4.2. Scheduling Strategies

The master node is responsible for scheduling the intermediate and final reduce tasks. For this, it uses a component called the Reduce Scheduler, which schedules the tasks using a customizable scheduling strategy.
Here we describe the scheduling strategies used in FP-Hadoop; we present the details of the Reduce Scheduler in Section 4.4. For scheduling an intermediate reduce task, the most important issue is to choose the IR fragments that make up the IR split to be processed by the task. The strategies currently implemented in FP-Hadoop are:

• Greedy: In this strategy, IR fragments are chosen from within the biggest partition, i.e., the partition whose IR fragments account for the highest total size. In the IRF Table, for each partition, along with the list of IR fragments we store its total size, allowing the partition to be chosen by a simple vertical scan of the IRF Table. After choosing the partition, we select IR fragments starting from the head of the list until reaching the MaxIRSize value, i.e., the upper bound on the size of an IR split. This is the default strategy in FP-Hadoop.

• Locality-aware: In this strategy, the objective is to choose for a worker \(w\) the IR fragments that are on its local disk or close to it. Consequently, we choose the partition \(p\) for which the IR fragments produced on \(w\) account for the biggest size, provided that \(p\)'s total size is at least MinIRSize. From the partition's fragments, we select those local to \(w\) until reaching MaxIRSize. If their total size is inferior to MinIRSize, we keep taking fragments until reaching MinIRSize, first choosing from those produced in the same rack as \(w\) and then from the same data center.

(The heartbeat mechanism mentioned in Section 4.1 is used for communication between the master and workers.)

4.3. Multiple Iterations

FP-Hadoop can be configured to execute the intermediate reduce phase in several iterations, in such a way that the output of each iteration is consumed by the next one. Notice that the output of the IR tasks in iteration \(n\) produces the IR fragments that are consumed in iteration \(n + 1\).
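Returning to the Greedy strategy of Section 4.2, its split construction can be sketched over a toy IRF Table. The fragment identifiers and sizes below are hypothetical; the real table also tracks spill numbers and worker locations:

```python
# Toy IRF Table: partition id -> list of (fragment_id, size) metadata.
irf_table = {
    0: [("m1-s0-p0", 40), ("m2-s0-p0", 35), ("m3-s0-p0", 30)],
    1: [("m1-s0-p1", 20)],
}
MAX_IR_SIZE = 64

def greedy_split(table, max_ir_size=MAX_IR_SIZE):
    # Pick the partition with the biggest total size, then take
    # fragments from the head of its list up to MaxIRSize.
    part = max(table, key=lambda p: sum(sz for _, sz in table[p]))
    split, total = [], 0
    while table[part] and total + table[part][0][1] <= max_ir_size:
        frag_id, size = table[part].pop(0)
        split.append(frag_id)
        total += size
    return part, split, total

part, split, total = greedy_split(irf_table)
```

Here partition 0 (total size 105) is chosen; only the first fragment fits under MaxIRSize, so the split is `["m1-s0-p0"]` and the remaining fragments stay in the table for future splits.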
An example of Top-k query execution over Wikipedia data (described in Section 5) is shown in Figure 3, where each phase is depicted as a rectangle whose height represents its input size and whose length represents its execution time in seconds. In this example, there are two iterations of the intermediate reduce phase (shown with gray rectangles). We can see how each iteration further reduces the intermediate data size, so that when the final reduce phase (in blue) is executed, its input size is sufficiently small. In FP-Hadoop, a parameter MaxNumIter defines the maximum number of iterations. Notice that this parameter just establishes a maximum; in practice, each partition may be processed in a different number of iterations, for instance depending on its size, input/output ratio, or skew.
FP-Hadoop provides the following strategies for configuring the number of iterations:

- **Size-based**: In this approach, an iteration is launched only if its input size is more than a given threshold, **MinIterSize**. By default, FP-Hadoop uses the same value as **MinIRSize** (i.e., the minimum size of IR splits). The actual number of iterations may vary among partitions, and the maximum number may not be reached, e.g., if the size of intermediate data was originally small or has been sufficiently reduced during previous iterations. The extreme scenario corresponds to the case where the map output for a given partition is less than MinIterSize; in this case, the final reduce phase is launched directly.

- **Input/Output-Ratio-based**: In this approach, an iteration is launched when the ratio between the input and output size of the previous iteration is greater than a system parameter, IORatio. The rationale is that if the input/output ratio is lower than some value, further intermediate iterations will not help reduce the size of the partition in consideration and, hence, will not improve the job execution performance.

- **Skew-based**: In this approach, an intermediate iteration is launched for a given partition if its size is SkewRatio times higher than the average partition size. Thus, more iterations are executed only for the overloaded partitions. For example, by setting SkewRatio to 2, only the partitions whose size is at least twice the average size are consumed in a new iteration; the others are sent to the final reduce phase.

### 4.4. Reduce Scheduler

To schedule the intermediate and final reduce tasks, the master node uses a component called the Reduce Scheduler.
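The three iteration-launch strategies described in Section 4.3 can be condensed into a single decision function. A minimal Python sketch (parameter names mirror the text, but the defaults and the function itself are illustrative, not FP-Hadoop's actual API):

```python
# Decide whether a new intermediate iteration should be launched for a
# partition, under one of the three strategies described in the text.
def should_launch_iteration(strategy, partition_size, prev_input=None,
                            prev_output=None, avg_partition_size=None,
                            min_iter_size=512, io_ratio=2.0, skew_ratio=2.0):
    if strategy == "size":
        # Size-based: launch only if the input exceeds MinIterSize.
        return partition_size > min_iter_size
    if strategy == "io-ratio":
        # IO-ratio-based: launch if the previous iteration shrank the
        # data by more than IORatio.
        return prev_input / prev_output > io_ratio
    if strategy == "skew":
        # Skew-based: launch only for partitions at least SkewRatio
        # times larger than the average partition.
        return partition_size >= skew_ratio * avg_partition_size
    raise ValueError("unknown strategy: " + strategy)
```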
The scheduling is done using a generic algorithm whose pseudo-code is shown in Algorithm 2. The algorithm scans the IRF Table and selects the best partition for constructing the IR split that should be consumed by the worker. Then, based on the size of the data in the partition and the maximum number of iterations, it decides whether to launch a final reduce task or an intermediate reduce task. If it decides to launch a final reduce task, it uses all IR fragments of the partition. Otherwise, i.e., for an intermediate reduce task, it chooses the fragments of the IR split from the best partition based on the scheduling strategy. The main functions used in the generic scheduling algorithm (Algorithm 2) are as follows:

- **isBestPartition()**: Based on the scheduling strategy, this function selects the partition from which the IR fragments of the task will be chosen. For example, with the Greedy strategy, it selects the partition that has the biggest data size among the partitions satisfying the scheduling constraints, i.e., their total size is at least MinIRSize and their number of fragments is at least MinIRFs, which can be configured by the user.

- **shouldRunFR()**: This function decides whether the final reduce task should be launched for the data of a partition. The decision depends on the number of iterations that should be executed in the intermediate reduce phase. If the current iteration for a partition is the last one to be done, and the tasks of the iteration have finished successfully, the final reduce task can be scheduled.

```
Input: w: worker with an idle slot, P: set of partitions
Result: t: task to run
begin
    chosen_Partition ← ∅
    for each p ∈ P do
        if isBestPartition(p, chosen_Partition, w) then
            chosen_Partition ← p
        end
    end
    if shouldRunFR(chosen_Partition) then
        t ← create_FR_Task(chosen_Partition)
    else
        t ← create_IR_Task(chosen_Partition)
    end
    return t
end
```

Algorithm 2: Generic scheduling algorithm
- **create_FR_Task()**: This function creates a final reduce task. It simply uses all the IR fragments of the partition given by the isBestPartition function.

- **create_IR_Task()**: This function creates an intermediate reduce task. It takes the partition given by the isBestPartition function, and chooses a set of IR fragments as the IR split of the task. The fragments are chosen based on the scheduling strategy: for instance, local fragments are favored in the locality-aware strategy, whereas in the basic Greedy strategy, IR fragments are added to the task's IR split as long as they do not surpass MaxIRSize. The task metadata are sent to the ready worker (see Figure 3).

To implement new scheduling strategies in the Reduce Scheduler, it is usually sufficient to change the above functions. For instance, to implement the locality-aware strategy instead of the Greedy strategy, we changed the isBestPartition function (which selects the partition to use) so that it gives priority to the partition with the maximum IR fragments generated in the ready worker, and then changed the generateIntermediateRT function to choose the IR fragments accordingly, e.g., to favor local fragments.

### 4.5. Fault Tolerance

In Hadoop, the master marks a task as failed either if an error is reported (e.g., an exception in the program or a sudden exit of the JVM) or if it has not received a progress update from the task for a given period [1]. In both cases, the task is scheduled for re-execution, if possible on a different node. During the execution of a job, the output of completed map tasks must be kept available on the workers' disks until the job finishes. The reason is that any reduce task reads data from the output of all map tasks. If a worker fails, the intermediate data stored on its disk is no longer available. Consequently, in addition to the running tasks, all successfully completed map tasks executed on that worker need to be rescheduled to make their output available again.
In FP-Hadoop, the behavior in the case of failure is slightly different, because we have to consider intermediate reduce tasks, and we can also profit from the fact that they only read data from a subset of the map tasks. As in Hadoop, we distinguish two scenarios: task failure and worker failure.

4.5.1. Task Failure

For map and final reduce tasks, the behavior is exactly the same as in native Hadoop: the failed task is scheduled for re-execution, if possible on a different node. In the case of IR tasks, we need to re-inject the metadata of their input IR fragments into the IRF Table. The fragments will then be assigned to another task, not necessarily grouped in the same IR split. Since new IR fragments may have become available, the Reduce Scheduler may choose to combine them in different ways.

The case of an IR task failure is illustrated in the example shown in Figure 5. The state of the IRF list corresponding to a given partition \( p \) in the IRF Table is presented before and after each event. At time \( t = t_1 \), task \( ir_i \) is scheduled with input fragments \( f_1 \) and \( f_2 \). Then, at time \( t = t_2 \), the task fails and, consequently, its input fragments are injected back into the list. Notice that since \( t_1 \) some fragments have already been consumed (e.g., \( f_3 \)) and new fragments inserted (e.g., \( f_6 \)). Finally, when an idle worker asks for a new task at time \( t = t_3 \), the fragments are grouped differently and its input is composed of fragments \( f_6 \) and \( f_1 \).

Figure 5: Task failure example.

Figure 6: Worker failure example.

4.5.2. Worker Failure

If a worker node fails, the tasks that were running on it need to be re-scheduled. Map tasks are simply re-executed, and the input fragments of intermediate and final reduce tasks are re-inserted into the IRF Table.
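The task-failure scenario of Figure 5 can be replayed with a small Python model of the IRF Table, in which fragment metadata is reduced to fragment names (a sketch; all names are illustrative):

```python
# On an IR task failure, the task's input fragments are simply
# re-injected into the partition's fragment list in the IRF Table,
# where the scheduler may later group them with newer fragments.
def fail_ir_task(irf_table, partition, input_fragments):
    irf_table.setdefault(partition, []).extend(input_fragments)

# Replaying Figure 5 for partition p:
irf = {"p": ["f3", "f4"]}            # t1: task ir_i was scheduled with f1, f2
irf["p"].remove("f3")                 # meanwhile f3 is consumed by another task
irf["p"].append("f6")                 # and a new fragment f6 is inserted
fail_ir_task(irf, "p", ["f1", "f2"])  # t2: ir_i fails; f1, f2 are re-injected
# The list now holds f4, f6, f1, f2, ready to be regrouped at t3.
```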
In FP-Hadoop, as opposed to native Hadoop, not all completed tasks' output is needed in the future, since their data may have already been consumed and reduced by intermediate reduce tasks in subsequent iterations of the execution tree. Indeed, FP-Hadoop uses a mechanism that requires the re-execution of only a minimal number of tasks in the case of a worker failure. Each IR fragment stores within its metadata information about its provenance, that is, a reference to the fragments that were used in its generation. When a worker fails, just after the input IR fragments of the running tasks are re-inserted into the IRF Table, each fragment stored on the failed node is replaced by the fragments that were used in its construction. If some of those fragments are themselves stored on the failed node, they are replaced by their predecessors as well. Fragments are replaced recursively by their predecessors until they are available or were generated by a map task; only in the latter case is the map task scheduled for re-execution. Re-injected fragments can be consumed by new IR tasks following the standard procedure.

Let us illustrate this with the example of Figure 6, in which the execution tree for a given partition \( p \) is shown. Reduce task \( r \) was running on the worker node when it failed. IR fragments stored on the failed node are shadowed. The input fragments \( f_{12} \) and \( f_{14} \) are re-inserted into the IRF Table, as in a single task failure. Then, the unavailable fragments are replaced recursively by their predecessors: \( f_{14} \) is replaced by \( f_{11} \) and \( f_{12} \), and then \( f_{12} \) by \( f_7 \) and \( f_8 \). No map task needs to be re-executed, and the re-injected fragments can be consumed again by new tasks. FP-Hadoop's recovery strategy re-processes only the data needed to finish the execution of the job, as opposed to native Hadoop, which needs to re-execute all map tasks and non-finished reduce tasks executed on the failed worker.
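The recursive replacement can be sketched with a minimal Python model in which a provenance map takes a fragment to its predecessors, and map outputs are fragments with no recorded predecessors (a sketch; names are illustrative):

```python
# Replace a fragment lost with a failed worker by its nearest available
# predecessors; lost map outputs are collected for re-execution.
def resolve(fragment, provenance, lost, maps_to_rerun):
    if fragment not in lost:
        return [fragment]                 # fragment is still available
    preds = provenance.get(fragment)
    if preds is None:                     # a lost map output: rerun the map task
        maps_to_rerun.add(fragment)
        return []
    out = []
    for p in preds:                       # recurse on the predecessors
        out.extend(resolve(p, provenance, lost, maps_to_rerun))
    return out
```

On the Figure 6 example, with provenance `{"f14": ["f11", "f12"], "f12": ["f7", "f8"]}` and fragments `f12`, `f14` lost, resolving `f14` yields `f11`, `f7` and `f8`, and no map task is re-executed.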
Furthermore, as the IR tasks have already reduced the size of the intermediate data, the amount of data to be treated is smaller, thus accelerating the recovery process.

5. Performance Evaluation

We implemented a prototype of FP-Hadoop by modifying Hadoop's components. In this section, we report on the results of our experiments evaluating the performance of FP-Hadoop\footnote{The executable code of FP-Hadoop as well as the tested jobs are accessible at the following page: \url{http://gforge.inria.fr/plugins/mediawiki/wiki/FP-Hadoop/}}. We first discuss the experimental setup, such as the datasets, queries and the experimental platform. Then, we discuss the results of the tests done to study the performance of FP-Hadoop in different situations, particularly when varying parameters such as the number of nodes in the cluster, the size of the input data, etc.

5.1. Setup

We run the experiments on the Grid5000 platform\footnote{http://www.grid5000.fr} in a cluster with up to 50 nodes. The nodes are provided with Intel Quad-Core Xeon L5335 processors (4 cores each) and 16GB of RAM. Nodes are connected through a switch providing a Gigabit Ethernet connection to each node. We compare FP-Hadoop with standard Apache Hadoop and with SkewTune [3], which is the closest related work to ours (see a brief description in the Related Work section). In all our experiments, we use a combiner function (for Hadoop, FP-Hadoop and SkewTune) that is executed on the results of the map tasks before sending them to the reduce tasks. This function is used to decrease the amount of data transferred from the map to the reduce workers, and thus to decrease the load of the reduce workers. The results of the experiments are the average of three runs. We measure two metrics:

- **Execution time.** This is the time interval (in seconds) between the moment when the job is launched and the moment when it ends. This is the default metric reported for most of the results.
- **Reduce time.** We use this metric to consider the time used only for shuffling and reducing. It is measured as the time interval between the moment when the last map task finishes and the end of the job.

With respect to Hadoop's configuration, the number of slots was set to the number of cores. All the experiments are executed with a number of reduce workers equal to the number of machines. We change \texttt{io.sort.factor} to 100, as advised in \cite{skewtune}, which actually favors Hadoop. For the rest of the parameters, we employ Hadoop's default values.

The default values for the parameters employed in our experiments are as follows. The default number of nodes in our cluster is 20. Unless otherwise specified, the input data size in the experiments is 20 GB. In FP-Hadoop, we use the default Greedy scheduling strategy, as the high throughput of the network limits the impact of the locality-aware strategy. The default value of \texttt{MinIRSize} is 512 MB. The value of \texttt{MaxIRSize} is always twice that of \texttt{MinIRSize}, and the maximum number of iterations is set to 1.

5.2. Queries and Datasets

We use the following combinations of MapReduce jobs and datasets to assess the performance of our prototype:

Top-k\% (TK). This job, which is the default job in our experiments, corresponds to the query from the Wikipedia example described in the introduction of the paper. The input dataset is stored in the form of lines with the schema:

\textit{Visits}\{language, article, num\_views, other\_data\}

Our query consists of retrieving, for each language, the k\% most visited articles. The default value of \(k\) is 1, i.e., by default the query returns 1\% of the input data. We have used real-world and synthetic datasets. The real-world dataset (TK-RD) is obtained from the logs about Wikipedia page visits\(^7\), consisting of a set of files each containing the statistics collected for a single hour.
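The reduce logic of the Top-k% job can be sketched as follows. This is a simplified Python version (the actual job is a Hadoop Java job using a secondary sort); because keeping the k% highest-ranked records of a block is a partial reduction, the same logic can also serve as an intermediate reduce function. Names are illustrative:

```python
# Keep only the k% most visited articles of one language (one
# intermediate key). `records` is a list of (article, num_views) pairs.
def top_k_percent(records, k):
    keep = max(1, (len(records) * k) // 100)  # at least one record survives
    return sorted(records, key=lambda r: r[1], reverse=True)[:keep]
```

For 100 records and k = 1, only the single most visited article is kept.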
We also produced two synthetic datasets, in which we can control the number of keys and their skew. In the first synthetic dataset (TK-SK), which is the default dataset, the number of articles per language follows a Zipfian distribution function

\[ f(l; S, N) = \frac{1/l^{S}}{\sum_{n=1}^{N} (1/n^{S})} \]

that returns the frequency of rank \(l\), where \(S\) and \(N\) are the parameters that define the distribution, i.e., \(f(1; S, N)\) returns the frequency of the most popular language. The default value of the Zipf exponent parameter \(S\) is 1, and the parameter \(N\) is equal to the number of languages (10 by default). In the second synthetic dataset (TK-U), the articles are uniformly distributed over the keys. We perform several tests varying the data size, among other parameters, up to 120GB. The query is implemented using a secondary sort\(^8\), where intermediate keys are sorted first by language and then by the article's number of visits, but only grouped by language.

Inverted Index (II). This job consists of generating an inverted index with the words of the English Wikipedia articles\(^9\). We use a RADIX partitioner to map letters of the alphabet to reduce tasks and produce a lexicographically ordered output. We execute the job with a dataset containing 20GB of Wikipedia articles.

PageRank (PR). This query applies the PageRank algorithm to a graph in order to assign weights to the vertices. As in II, we use the implementation provided by CloudDB. As the dataset, we use the PLD graph from Web Data Commons\(^10\), whose size is about 2.8GB.

Wordcount (WC). Finally, we use the wordcount job provided in the standard Hadoop framework. We apply it to a dataset generated with the RandomWriter job provided in the Hadoop distribution. We test this job with a 100GB dataset.

5.3. Scalability

We investigate the effect of the input size on the performance of FP-Hadoop compared to Hadoop.
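For reference, the Zipfian frequency function used to generate TK-SK translates directly into code (a minimal sketch in Python):

```python
# f(l; S, N): frequency of the language of rank l among N languages,
# with Zipf exponent S, as in the formula for the TK-SK dataset.
def zipf_frequency(l, S, N):
    return (1.0 / l ** S) / sum(1.0 / n ** S for n in range(1, N + 1))
```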
Using the TK-SK dataset, Figures 7a and 7b show the reduce time and execution time, respectively, when varying the input size up to 120 GB, with \textit{MinIRSize} set to 5 GB and the other parameters set to the default values described in Section 5.1. Figures 7c and 7d show the performance using the TK-RD dataset with sizes up to 100 GB, with the other parameters set to the default values described in Section 5.1.

As expected, increasing the input size increases the execution time of both Hadoop and FP-Hadoop, because more data has to be processed by the map and reduce workers. However, the performance of FP-Hadoop is much better than that of Hadoop when we increase the size of the input data. For example, in Figure 7b, the speed-up of FP-Hadoop vs. Hadoop on execution time is around 1.4 for an input size of 20GB, but this gain increases to around 5 when the input size is 120GB. For the latter data size, the reduce time of FP-Hadoop is more than 10 times lower than that of Hadoop.

The reason for this significant performance gain is that in the intermediate reduce phase of FP-Hadoop the reduce workers collaborate on processing the values of the keys containing a high number of values, while in Hadoop a single task has to process all this data. This is illustrated in Figures 7e and 7f, where we compare the execution time of the longest reduce tasks of Hadoop and FP-Hadoop for TK-RD and 20GB. We can see that the longest task in Hadoop is responsible for the poor performance of the reduce phase, which explains the total execution time. In FP-Hadoop, the longest task is 4 times shorter, as a consequence of the improved parallelism.

5.4. Effect of Cluster Size

We study the effect of the number of nodes of the cluster on performance. Figure 8d shows the execution time when varying the number of nodes, with the other parameters set to the default values described in Section 5.1. Increasing the number of nodes decreases the execution time of both Hadoop and FP-Hadoop.
However, FP-Hadoop benefits more from the increasing number of nodes. In Figure 8d, with 5 nodes, FP-Hadoop outperforms Hadoop by a factor of around 1.75, but when the number of nodes is equal to 50, the improvement factor is around 4. This increase in the gain can be explained by the fact that when there are more nodes in the system, more nodes can collaborate on the values of hot keys in FP-Hadoop. In contrast, in Hadoop, although using a higher number of nodes can decrease the execution time of the map phase, it cannot significantly decrease the reduce phase time, in particular if there are intermediate keys with a high number of values.

Figure 7: Scalability of FP-Hadoop: With TK-SK (a) reduce time and (b) execution time; with TK-RD (c) execution time, (d) reduce time and longest reduce tasks with 20GB for (e) Hadoop and (f) FP-Hadoop.

Figure 8: Effect of several parameters on the performance of FP-Hadoop: (a) number of intermediate keys, (b) Zipfian exponent, (c) data skew and queries, (d) cluster size, (e) intermediate data filtering.

5.5. Effect of the Number of Intermediate Keys

In our tests, we use the attribute language as the intermediate key for grouping the intermediate values. Here, we report the results of our experiments on the effect of this parameter on the performance of FP-Hadoop and Hadoop. Figure 8a shows the execution time when varying the number of keys (languages) from 5 to 50, with the other parameters set to the default values. When the number of keys is lower than 20, Hadoop cannot take advantage of all available reduce nodes (there were 20 cluster nodes in this test). However, in FP-Hadoop, the intermediate reduce workers can contribute to processing the values of the keys that have high numbers of values. This is why there is a significant difference between the execution times of FP-Hadoop and Hadoop in these cases.
Even in the cases where the number of keys is higher than the number of nodes (i.e., 20), the execution time of FP-Hadoop is better than that of Hadoop, because in these cases there are keys with a high number of values, and Hadoop cannot balance the load of the reduce workers well. For Hadoop, when increasing the number of keys, the execution time decreases up to some number of keys and then becomes constant. We performed our tests with up to 200 keys and observed that beyond 60 keys there is no further decrease in the execution time of Hadoop.

5.6. Effect of High Skew in Reduce Workers Load

Here, we study the effect of high data skew on performance by varying the Zipf exponent (S) used for generating the synthetic datasets with Zipfian distribution. The higher S is, the higher the skew in the size of the data (articles) assigned to the keys (languages). Figure 8b shows the execution time when varying the Zipf exponent S from 1 to 5, with the other parameters set to the default values. The figure shows that data skew has a big negative impact on the performance of Hadoop, but its impact on FP-Hadoop is slight. The gain factor of FP-Hadoop increases with S. The reason is that by increasing S, there are intermediate keys with a higher number of intermediate values, and these keys are the bottlenecks in Hadoop, because the values of each key are processed by only one reduce worker.

5.7. Effect of Data Skew on Different Queries

Figure 8c shows the execution time of FP-Hadoop using several queries and their corresponding data, as described in Section 5.2. The results show that FP-Hadoop outperforms Hadoop in all cases except for the word-count query (WC). The extent of the gain depends on the amount of reduction that can be performed in the intermediate phase as a result of the partial aggregation.
This explains why the best case is TK-SK, where each IR task can reduce the data back to k tuples, and also why the reduction in PR is small, since only the mass contributions can be aggregated while the node structure has to be passed along untouched. For the word-count query, the execution time of FP-Hadoop is a little (3%) higher than that of Hadoop, and this increase corresponds to the overhead of the intermediate reduce phase. Indeed, in this query there is no skew on the reduce side, because the partitioner can balance the reduce load well among the workers. FP-Hadoop spends some time detecting that there is no skew in the intermediate results, and then launches the final reduce phase. Thus, its execution time is slightly higher than Hadoop's. Notice that even this small overhead can be avoided by disabling the intermediate reduce phase.

5.8. Effect of Intermediate Data Filtering

Using the parameter k in the top-k% query, we can control the amount of data that may be filtered by the intermediate reduce tasks. In fact, the top-k% query returns the k% of the data that have the highest values. Thus, in the output of a map task or the input data of an intermediate reduce task, if the amount of data is higher than k% of the total data, we can keep the k% highest-ranked data and filter out the rest. Therefore, the lower k is, the higher the capacity of data filtering by the intermediate reduce tasks. Figure 8e shows the performance of the two systems when varying k in the top-k% query, with the other parameters set to the default values. The figure shows that the lower k is, the better the performance gain of FP-Hadoop. The reason is that with lower k values, the intermediate reduce tasks can filter more intermediate values. In particular, for values lower than 10%, FP-Hadoop profits well from the filtering in the intermediate reduce workers. For values higher than 10%, the data filtering by intermediate reduce workers decreases significantly, which is why the gain of FP-Hadoop is reduced.
However, even for k values higher than 10%, there is a significant difference between the execution times of FP-Hadoop and Hadoop.

5.9. Analysis of FP-Hadoop Parameters

We investigate the effect of the MinIRSize parameter on the performance of FP-Hadoop. For this, we use two configurations: 1) FP-Hadoop: the default configuration described in Section 5.1, with one iteration in the intermediate reduce phase; 2) FP-Hadoop-Iterations: in this configuration, we set the maximum number of iterations to 10. As discussed in Section 4.3, the real number of iterations may be lower than this maximum value, e.g., because in an iteration the size of a partition may be lower than MinIRSize.

Figure 9(a) shows the execution time of FP-Hadoop and FP-Hadoop-Iterations when varying the MinIRSize parameter for processing an input of size 100GB, with the other parameters set to the default values. The results show that in FP-Hadoop with one iteration, when we set MinIRSize to small values (e.g., lower than 128MB), the execution time is not very good. This suggests not choosing very small values for MinIRSize when using only one iteration. The reason is that in these cases we cannot take full advantage of the single iteration of the intermediate reduce phase, since with very small IR splits the amount of data that can be filtered by the intermediate tasks may not be large. However, by using more iterations in FP-Hadoop-Iterations, even with small MinIRSize values, the response time is good, since it uses more iterations to take advantage of the intermediate phase. Figure 9(b) shows the number of iterations done by FP-Hadoop-Iterations when using different MinIRSize values. We observe that as MinIRSize increases, fewer iterations are needed to complete the intermediate phase. In our experiments, when using FP-Hadoop with one iteration, the best configuration for MinIRSize is when it is close to the ratio of the map output size to the number of reduce workers.
We also study the number of IR tasks launched by the Reduce Scheduler. Figure 9(c) depicts the number of scheduled IR tasks vs. the partition sizes for TK-SK with 100GB of data and different MinIRSize values. We observe that as MinIRSize increases, fewer IR tasks are needed. This is an expected behavior of FP-Hadoop, which tries to take full advantage of parallelism for the overloaded partitions.

5.10. Comparison with SkewTune

Here, we compare FP-Hadoop with SkewTune [3]. Figures 10a and 10b show the reduce time and execution time of both approaches, using the default data, sizes and parameters described in Section 5.1 (e.g., 20GB of data for TK-SK). For these experiments, we downloaded the SkewTune prototype, which is publicly accessible [11]. As the results show, FP-Hadoop outperforms SkewTune by significant factors. This is particularly noticeable for TK-SK, where SkewTune cannot divide the execution of the most popular key into several tasks.

5.11. Effect of Map Split Size

As discussed in the Introduction of this paper, in order to use the combiner function to decrease the reduce skew in jobs such as Top-k%, we need to increase the size of the map splits significantly, giving more chances to the combiner function.
Figure 9: Analysis of FP-Hadoop parameters: (a) MinIRSize and iterations, (b) number of performed iterations, (c) number of IR tasks.

5.12. Discussion

Overall, the performance results show the effectiveness of FP-Hadoop in dealing with data skew on the reduce side. For example, the results show that for 120GB of input data, FP-Hadoop can outperform Hadoop by factors of 10 and 5 in reduce time and total execution time, respectively. This gain increases when the data size grows. The results also show that increasing the number of nodes of the cluster can significantly increase the gain of FP-Hadoop compared to Hadoop.

The results also show that there are jobs for which the IR phase has no benefit. This occurs particularly in the cases where there is no skew in the intermediate data, or where the skew can be significantly decreased in the map phase by using a combiner function, e.g., in the word count job we tested. In these cases, it is sufficient not to declare an IR function; FP-Hadoop then executes the job without performing the IR phase.

6. Related Work

In the literature, there have been many efforts to improve MapReduce \cite{9}, particularly by supporting high-level languages on top of it, e.g., Pig \cite{10}, optimizing I/O cost, e.g., by using column-oriented techniques for data storage \cite{11,12,13}, supporting loops \cite{14}, adding indexes \cite{15}, caching intermediate data \cite{16}, supporting special operations such as join and skyline \cite{17,18,19,20,21,22,23,24}, balancing data skew \cite{25,26}, etc. Hereafter, we briefly present those that are the most related to our work.

The approach proposed in \cite{25} tries to balance data skew in reduce tasks by subdividing keys with large value sets.
It requires some user interaction, or user knowledge of statistics or sampling, in order to estimate in advance the size of the values of each key and then subdivide the keys with large value sets. Gufler et al. \cite{25} propose an adaptive approach that collects statistics about intermediate key frequencies and assigns them to the reduce tasks dynamically at scheduling time. In a similar approach, Sailfish \cite{27} collects some information about the intermediate keys and uses it to optimize the number of reduce tasks and the partitioning of the keys to the reduce workers. However, these approaches are not efficient when all or a big part of the intermediate values belong to only one key or a small number of keys (i.e., fewer than the number of reduce workers).

SkewTune \cite{3} adopts an on-the-fly approach that detects straggling reduce tasks and dynamically repartitions their input keys among the reduce workers that have completed their work. This approach can be efficient in the cases where the slow progress of a reduce task is due to an inappropriate initial partitioning of the key-values to the reduce tasks. But it does not allow the collaboration of reduce workers on the same key.

HaLoop \cite{14} extends MapReduce to serve applications that need iterative programs. Although iterative programs in MapReduce can be executed as a sequence of MapReduce jobs, they may suffer from large data transfers between the reduce and map workers of successive iterations. HaLoop offers a programming interface to express iterative programs and implements a task scheduling that enables data reuse across iterations. However, it does not allow hierarchical execution plans for reducing the intermediate values of one key, as in our intermediate reduce phase.

Main Memory Map Reduce (M3R) \cite{28} is an implementation of Hadoop that keeps the results of map tasks in memory and transfers them to the reduce tasks via message passing, i.e., without going through the local disks.
M3R is very efficient, but can be used only for applications in which the intermediate key-values fit in memory. SpongeFiles is a system that uses the available memory of the nodes in the cluster to construct a distributed memory, in order to minimize disk spilling in MapReduce jobs and thereby improve performance. The idea of using main memory for data storage has also been exploited in Spark, an alternative to MapReduce that uses the concept of Resilient Distributed Datasets (RDDs) to transparently store data in memory and persist it to disk only when needed. The concept of the intermediate reduce phase proposed in FP-Hadoop can be used as a complementary mechanism in systems such as Hadoop, M3R, SpongeFiles and Spark, to resolve the problem of data skew when reducing the intermediate data.

In MapReduce Online, instead of waiting for the reduce tasks to pull the map outputs, the map tasks push their results periodically to the reduce tasks. This increases the overlap between the map and shuffle phases and, consequently, reduces the total execution time. Similarly, we also benefit from an increased overlap, as we do not need to wait for the end of a map task to start transferring its output. But MapReduce Online does not resolve the problem of data skew in overloaded keys.

There have been systems proposing new phases for MapReduce in order to deal with specific problems. For example, in Map-Reduce-Merge, in addition to the map and reduce phases, a third phase called merge is added to MapReduce in order to merge the reduce outputs of two different MapReduce jobs. The merge phase is particularly used for implementing multi-join operations. However, Map-Reduce-Merge and other solutions proposed for join query processing cannot be used for resolving the problem of data skew due to overloaded keys.
In general, none of the existing solutions in the literature can deal with data skew in the cases where most of the intermediate values correspond to a single key, or where the number of keys is less than the number of reduce workers. FP-Hadoop addresses this problem by enabling the reducers to work, in the IR phase, on dynamically generated blocks of intermediate values, which may all belong to a single key.

7. Conclusion

In this paper, we presented FP-Hadoop, a system that brings more parallelism to MapReduce job processing by allowing the reduce workers to collaborate on processing the intermediate values of a key. We added a new phase to the job processing, called the intermediate reduce phase, in which the input of the reduce workers is treated as a set of IR splits (blocks). The reduce workers collaborate on processing IR splits until all of them are finished, so no reduce worker becomes idle in this phase. In the final reduce phase, we simply group the results of the intermediate reduce phase. By enabling the collaboration of reduce workers on the values of each key, FP-Hadoop significantly improves the performance of jobs, in particular in the case of skew in the values assigned to the intermediate keys, and it does so without requiring any statistical information about the distribution of values. We evaluated the performance of FP-Hadoop through experiments over synthetic and real datasets. The results show excellent gains compared to Hadoop. For example, over a cluster of 20 nodes with 120 GB of input data, FP-Hadoop outperforms Hadoop by a factor of about 10 in reduce time, and by a factor of 5 in total execution time. The results also show that the higher the number of nodes and the bigger the input data, the higher the improvement gain of FP-Hadoop can be.
Acknowledgements

Experiments presented in this paper were carried out using the Grid'5000 experimental testbed, being developed under the INRIA ALADDIN development action with support from CNRS, RENATER and several universities as well as other funding bodies (see https://www.grid5000.fr).

References

[14] Y. Bu, B. Howe, M. Balazinska, M. D. Ernst, The HaLoop approach to large-scale iterative data analysis, VLDB J. 21 (2).
METHOD OF OPTIMIZING RECOGNITION OF COLLECTIVE DATA MOVEMENT IN A PARALLEL DISTRIBUTED SYSTEM

Inventors: Takeshi Ogasawara, Hachioji; Hideaki Komatsu, Yokohama, both of Japan
Assignee: International Business Machines Corporation, Armonk, N.Y.
Appl. No.: 873,472
Filed: Jun. 12, 1997
Foreign Application Priority Data
Int. Cl.: G06F 15/80
U.S. Cl.: 395/800.1; 395/706
Field of Search: 395/200.3, 800.01, 800.1, 800.16, 670, 376, 705, 706, 500, 701, 712

References Cited, U.S. PATENT DOCUMENTS:
5,475,842 12/1995 Gilbert et al. 395/706

ABSTRACT

To optimize collective data movement recognition in a parallel distributed system, a data movement set is formed into a data structure where access regularity is efficiently used with respect to regular problems, and a processor expression independent of the number of processors in the parallel distributed system is introduced. By using the data structure and the processor expression, the data movement set is calculated for each dimension of an array, and the collective data movement is extracted when constructing data movement from the data movement set of each dimension. 2 Claims, 4 Drawing Sheets

FIG. 1

```
subroutine lu_fact_kernel(a,k,n)
real*8 a(n,n)
!HPF$ processors p(number_of_processors())
!HPF$ distribute a(*,cyclic) onto p
do j=k+1, n
  do i=k+1, n
    a(i,j) = a(i,j) - a(i,k)*a(k,j)
  end do
end do
end
```

FIG. 2: (conceptual diagram of the matrix regions moved in the prefetch)

FIG. 3

```
integer a(N,N)
!HPF$ processors p(number_of_processors() / 4, 4)
!HPF$ distribute a(block, block) onto p
call sub(a)
end

subroutine sub(a)
integer a(N,N)
!HPF$ processors p(number_of_processors() / 4, 4)
!HPF$ template t(N,N)
!HPF$ align a(i,*) with t(i,*)
!HPF$ distribute t(block, block) onto p
```

FIG. 4, FIG. 8: (diagrams; not reproduced)

FIG. 9 (example ITR lists for a two-dimensional array; total number of processors: 8)

Array dimension 1 (ITR master: mps=1, mnd=2):
  ITR block 1: (1:49) -> (1:1:2:4); (50:50) -> (2:1:2:4)
  ITR block 2: (51:99) -> (2:1:2:4)
Array dimension 2 (ITR master: mps=2, mnd=2):
  ITR block 1: (0:49) -> (1:2:2:4)
  ITR block 2: (50:99) -> (2:2:2:4)

FIG. 10: LU decomposition (n=2000), one-to-one vs. broadcast (multicast) data movement: scalability vs. number of nodes.

FIELD OF THE INVENTION

The present invention relates to a method of optimizing recognition of collective data movement in a parallel distributed system.

BACKGROUND OF THE INVENTION

In software applications described in a parallel data language for a parallel distributed system such as a parallel distributed computer, the codes for transmitting data on a remote host that is necessary for a calculation on a local host are generated by a compiler. Parallel data languages have an important role in writing parallel programs for a parallel computer. Particularly, when such a system executes a large-scale numerical calculation, it is very important that the compiler can suitably detect the collective data movement of the processors.
Collective data movement recognition by a compiler processing system can be described in terms of: 1) the kinds of data movement in a parallel data language program; 2) the optimization of data movement by a compiler; and 3) the collective data movement library.

1) Data Movement in a Parallel Data Language Program

Data movement between processors in a parallel data language program is roughly classified into: i) data movement for prefetching data before execution of a parallel loop, and ii) data redistribution for a change in the array distribution method. The term "prefetch" means that data is transmitted in advance, before loop execution, from a processor that owns data in a right-side array region, which is read out and referred to, to a processor which performs the calculation of an executive statement. The term "redistribution" means distributing an array again. Redistribution becomes necessary where it is expected, from the nature of the algorithm, that a change in the array distribution method can enhance execution performance. In such a case, a right-side array region, which is read out and referred to, is suitably redistributed before the algorithm executes. Redistribution is also performed in the case where the distribution method of an argument array changes at a subroutine boundary.

i) Prefetch Example

FIG. 1 contains a program list (a kernel loop of LU decomposition), in High Performance Fortran (hereinafter referred to as HPF), used in describing a prefetch. The data movement analysis which is performed by a parallel data language compiler is the calculation of a data movement set, in which one processor P_i transmits a right-side array region R_k, which is accessed during loop execution, to another processor P_j. To form a parallel loop, the compiler generates the data prefetch code for the right-side array operand, a(k,j), before the doubly nested loop.
If the loop is executed according to the "owner computes rule" by NP processors, the processor P that holds the array region a(k+1:n,k) will need to transmit it to the processors which own the array region a(k+1:n,k+1:n) that is accessed on the left side. That is, the data movement set comprises the transmitting processor group to which a(k+1:n,k) is distributed and the receiving processor group to which a(k+1:n,k+1:n) is distributed. FIG. 2 is a conceptual diagram used to explain the data movement in such a prefetch. In the figure, the reference characters (k, i, j, and n) correspond to the variables in the program of FIG. 1. A strip of regions indicates an array that has been cyclically divided, and the shaded portion indicates the array region which is transmitted.

ii) Redistribution Example

Redistribution takes place, where an array has been divided into (block, block) regions, when the subroutine in a program list such as that shown in FIG. 3 is called. FIG. 4 is a conceptual diagram used to explain the data movement in redistribution. In the figure, square regions indicate the regions into which the array was divided. The array region distributed first to the two-dimensional processor form p(i,j) is taken to be A_0. When the subroutine (sub) is called, redistribution takes place so that each processor of the first dimension i of the two-dimensional processors has the same array regions (A_0, . . . , A_n). In the data movement set, the processors p(i,j) receive the array regions (A_0, . . . , A_n) from the respective distributors, and the processors p(i,j) transmit A_y to the processors p(y,i).

2) Optimization of Data Movement by a Compiler

The data movement between processors in a program described in a parallel data language such as HPF is extracted by a compiler. In this extraction, it is important to optimize the data movement so that the characteristics of the parallel computer are used efficiently.
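The prefetch set of the FIG. 1 kernel can be made concrete with a short sketch (hypothetical helper names; it assumes the (*,cyclic) column distribution of FIG. 1 and 0-based processor IDs):

```python
def owner_of_column(j, NP):
    """Under a (*, cyclic) distribution, column j (1-based)
    resides on processor (j - 1) mod NP."""
    return (j - 1) % NP

def prefetch_set(k, n, NP):
    """Data movement set for step k of the LU kernel: the owner
    of column k must send a(k+1:n, k) to every processor owning
    one of the left-side columns k+1..n."""
    sender = owner_of_column(k, NP)
    receivers = {owner_of_column(j, NP) for j in range(k + 1, n + 1)}
    return sender, receivers

sender, receivers = prefetch_set(k=3, n=100, NP=8)
# With n - k >= NP, every processor holds some left-side column,
# so the prefetch is in fact a broadcast from the column owner.
print(sender, sorted(receivers))
```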
This optimization becomes very important in a parallel data language where an array is distributed between the processors of a distributed-memory parallel computer having a memory for each processor. In such a computer, a local processor makes a calculation by using data which is not resident in its memory. Since moving data is very slow compared with accessing memory, the overhead of the data movement has a direct influence on the performance of the parallel computer.

3) Collective Data Movement Library

A parallel computer system is provided with a collective data movement library, which is a data movement optimizing subsystem for making efficient use of the network. When data movement between processors matches a pattern defined in the library, optimum data movement is performed in accordance with the library. The Message Passing Interface (MPI) is well known as the international standard of the collective data movement library. In order to enhance the performance of a parallel application, it is important how efficiently the collective data movement library can be used when data movement is needed. For example, in the program list of FIG. 1, a library routine called broadcast data movement can be applied. In the list of FIG. 3, a library routine called all gather data movement can be applied. In the conventional approach to collective data movement, the method of distributing a left-side array and a right-side array, existing in the main body of a loop, is determined at compile time. This determination is made under the assumption that the distributed processor forms are identical. This approach is limited in that only one kind of processor form is recognized at compile time. In addition, the approach is not applicable where the distribution method is determined at runtime or where the array region to be transmitted varies at runtime.
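The benefit of matching such a pattern to a collective routine can be seen from a simple round-count sketch (an illustration of a binomial-tree broadcast, a common implementation strategy for collectives, not the behavior of any specific MPI library):

```python
import math

def one_to_one_rounds(p):
    """A naive sender transmits the same region p-1 times
    in sequence."""
    return p - 1

def tree_broadcast_rounds(p):
    """A binomial-tree broadcast doubles the number of informed
    processors each round, finishing in ceil(log2(p)) rounds."""
    return math.ceil(math.log2(p))

for p in (8, 32, 128):
    print(p, one_to_one_rounds(p), tree_broadcast_rounds(p))
```

The gap between p-1 sequential sends and log2(p) rounds is why recognizing the broadcast pattern, rather than emitting one-to-one sends, matters so much at scale.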
Therefore, the conventional approach cannot be applied where: i) the processor forms on the left and right sides are different; ii) the method of distributing the right-side or left-side array is determined at runtime; or iii) the right-side or left-side region to be accessed is determined at runtime. Since, in commonly used applications, there are many cases where prefetch or redistribution is performed with an arbitrary processor form and an arbitrary array division method, the conventional compile-time approach cannot be effectively applied to realistic applications. A method of recognizing collective data movement at runtime has been proposed (i.e., a method which utilizes scatter data movement, gather data movement, and all-to-all data movement). However, its calculation quantity of $O(n_1 + \ldots + n_m)$ for an $m$-dimensional array, where $n_i$ is the size of the $i$-th dimension, is not suitable for large-scale numerical calculations. Therefore, it is an object of the present invention to provide a method of optimizing recognition of collective data movement in parallel distributed systems. It is another object of the present invention to provide the steps of forming a data movement set into a data structure where access regularity is efficiently used with respect to regular problems and introducing a processor expression independent of the number of processors, calculating the data movement set for each dimension of an array by using said data structure and said processor expression, and extracting the collective data movement when constructing data movement from said data movement set of each dimension.
SUMMARY OF THE INVENTION

A method comprising the steps of forming a data movement set into a data structure (an ITR list) where access regularity is efficiently used with respect to regular problems and introducing a processor expression (a quadruplet) independent of the number of processors, calculating the data movement set for each dimension of an array by using said data structure and said processor expression, and extracting the collective data movement when constructing data movement from said data movement set of each dimension. In the present invention, collective data movement can be recognized at high speed at runtime with respect to regular problems. An expression method, the processor quadruplet, is introduced in order to specify the processors which participate in data movement, and the data movement performed by a certain processor is described with an array region and this set of four expressions. When data movement is analyzed, recognition of collective data movement can be performed at runtime in time on the order of the number of array dimensions, regardless of the size of the array and the number of processors of the parallel computer, by making the best use of the regularity of a regular problem and of the set arithmetic defined on quadruplets. Therefore, the present invention not only reduces the overhead during execution, but is also suitable for large-scale numerical calculation on a massively parallel computer.

DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example of a program list of LU decomposition described in HPF for explaining a prefetch; FIG. 2 is a conceptual diagram used to explain the data movement in the prefetch; FIG. 3 shows an example of a program list described in HPF for explaining redistribution; FIG. 4 is a conceptual diagram used to explain the data movement in the redistribution; FIG. 5 is a conceptual diagram used to explain the range to which the present invention is applicable; FIG.
6 is a conceptual diagram used to explain the data movement set calculation on an ITR list; FIG. 7 is a conceptual diagram showing the calculation of the participating data movement processors based on a quadruplet; FIG. 8 is a conceptual diagram describing a quadruplet in detail; FIG. 9 is a diagram showing an example of ITR lists; and FIG. 10 is a graph showing the performance difference between the case where LU decomposition (kernel $n=2000$) used one-to-one data movement and the case where it used broadcast data movement, by actually applying the algorithm of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention employs an ITR list and a quadruplet, and consequently there is no need to exchange information with other processors at runtime. Collective data movement for both prefetch and redistribution can be extracted by the same method, with a calculation quantity on the order of the number of dimensions of the array. This does not depend upon the size of the array or the scale of the parallel computer. FIG. 5 is a conceptual diagram used to explain the range to which the present invention is applicable. The following is a description of the ITR list and the quadruplet, which are the features of the algorithm of the present invention. Further, high-speed recognition of collective data movement at runtime, based on the ITR list and the set of four expressions, will be explained. A data movement set is formed into a data structure where access regularity can be efficiently used. The data movement set is expressed with an ITR list generated for each dimension of an array. The ITR list is constituted by a row of ITRs and an ITR master, which is the management information of the ITRs. ITR: expresses an array region specified with a triplet (start:end:step) and the group of opponent processors of that region.
ITR master: a management data structure containing the information of which processor group should acquire which ITR on an ITR list. FIG. 6 is a conceptual diagram for explaining the data movement set calculation performed on the ITR list. When analyzing data movement, the ITR on the ITR list which is used is calculated on each processor. In many cases the array access method has regularity between processors with respect to regular problems, and this regularity is expressed in a compressed form, based on the list rule, by the ITR list. For example, for the array $a(1:100)$ distributed to five processors in blocks, if an access is made to the whole array $a(1:100)$, the array region accessed by processor $p$ $(p = 0, \ldots, 4)$ is parameterized by the processor ID as follows: \[ a(1+20p : 20+20p). \] In the calculation of the data movement set, a resultant ITR list is obtained by performing arithmetic on two ITR lists for each dimension of the array: one for how the array regions are referred to, and the other for how the array regions are owned by the processors. An array section which is transmitted is expressed by a combination of the ITRs which each processor acquired for itself from the ITR list of each dimension. The data structure of the ITR list will now be specifically described. ITR lists such as those shown in FIG. 9 are generated, one for each dimension of an array. As described above, the ITR list is constituted by ITR blocks and an ITR master, which is a management data structure. An ITR is constituted by an array section $R_{ITR}$, specified by a triplet of start (st), end (ed), and step (sp), and a quadruplet $Q_{ITR}$, which specifies the data movement opponent of the section. Thus, an ITR is expressed as follows: $$ITR = [R_{ITR}, Q_{ITR}] = [(st:ed:sp), (dx:ps:nd:np)]$$ [Equation 1] An ITR block on the ITR list is a group of ITRs and is also specified by a quadruplet called the data movement source quadruplet.
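The block-distribution parameterization above can be checked with a few lines (a sketch with processors indexed 0 through 4, as in the formula):

```python
def block_range(p, n=100, nprocs=5):
    """Array region of a(1:n) owned by processor p under block
    distribution: a(1 + 20p : 20 + 20p) for n=100, nprocs=5."""
    size = n // nprocs          # 20 elements per processor
    return (1 + size * p, size + size * p)

ranges = [block_range(p) for p in range(5)]
print(ranges)
```

Every processor's region is the same expression evaluated at its own ID, which is exactly the regularity the ITR list compresses.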
This is distinguished from the aforementioned set of four which an ITR describes, called the data movement destination quadruplet. In the case of transmission, the former represents the transmitting processors and the latter represents the receiving processors; in the case of reception, they are reversed. In the ITR master $M_{ITR}$, the "mps," "mnd," and "mnp" of the data movement source quadruplet are described. Each processor converts its processor ID to a data movement source quadruplet and acquires the ITR block to which it is related. The decomposition position of the data movement source quadruplet which includes the processor with "pid" as its ID is calculated by the following equation: $$mdx = 1 + mod(\lfloor (pid - 1)/mps \rfloor, mnd)$$ [Equation 2] Equation 2 calculates the decomposition position, mdx, by using $M_{ITR}$. The decomposition position mdx indicates the position of the ITR block on the ITR list (ITRL). The ITRL is expressed as follows: $$ITRL = \langle mps, mnd, \{[(st:ed:sp), (dx:ps:nd:np)]\} \rangle$$ [Equation 3] The ITRs of the ITR blocks of each dimension, obtained in the aforementioned way, are combined over all dimensions by the following ITR product (represented by $\otimes$), which specifies a single array region and the opponent data movement processors. In the case of a $d$-dimensional array, each of the $n_i$ ITRs selected at the $i$-th dimension $(1 \leq i \leq d)$ is expressed as $ITR_{k_i,i}$ $(1 \leq k_i \leq n_i)$. If done in this way, all the data movements to which a processor relates, that is, the data movement set, is expressed as follows: $$\{ITR_{k_1,1} \otimes \ldots \otimes ITR_{k_d,d} \mid 1 \leq k_i \leq n_i,\; 1 \leq i \leq d\}$$ [Equation 4] In Equation (4), each $ITR_{k_1,1} \otimes \ldots \otimes ITR_{k_d,d}$ is a data movement descriptor.
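Equation (2) can be exercised directly (a sketch using 1-based processor IDs; the mps and mnd values are those of the FIG. 9 example discussed below):

```python
def mdx(pid, mps, mnd):
    """Decomposition position of the data movement source
    quadruplet for processor `pid` (Equation 2)."""
    return 1 + ((pid - 1) // mps) % mnd

# Processor P3 against the FIG. 9 ITR masters:
print(mdx(3, mps=1, mnd=2))  # array dimension 1 -> first ITR block
print(mdx(3, mps=2, mnd=2))  # array dimension 2 -> second ITR block
```

Each processor evaluates this locally from its own ID, which is why no inter-processor exchange is needed to locate its ITR blocks.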
The specific operation of the ITR product is as follows. If the array section and the data movement destination quadruplet, which are the two constituent elements of an ITR, are denoted $R_{k_i,i}$ and $Q_{k_i,i}$, then the ITR product is expressed as follows: $$ITR_{k_1,1} \otimes \ldots \otimes ITR_{k_d,d} = [R_{k_1,1} \times \ldots \times R_{k_d,d},\; Q_{k_1,1} \cap \ldots \cap Q_{k_d,d}] = [R, Q]$$ [Equation 5] In Equation (5), $[R, Q]$ is the data movement descriptor, which has an array section $R$ that is transmitted and a data movement destination processor group $Q$. $R$ is the Cartesian product of the array sections of the combined ITRs, and $Q$ is the intersection of the $Q_{k_i,i}$ regarded as sets of processors. The arithmetic of the intersection can be executed at high speed because it is a bit-vector conjunction. Also, for the ITR list, as with an augmented regular section descriptor, a row of ITR blocks is compressed by making the best use of the regularity of the ITR blocks. For example, $[l_{10:9} \ldots l_{4:3} \ldots l_{1}]$ is a compressed form which expresses three ITRs. With a compressed form, a processor can know the array section and data movement destination processor which are the content of an ITR to which a processor other than itself relates, without retrieving the ITR list. This is efficiently used when shift data movement is recognized. When the ITR product is performed, the $\otimes$ arithmetic is also performed for the data movement source quadruplets of the ITR masters $M_{ITR}$. The result of this arithmetic is the group of processors having the same data movement descriptor. A plurality of processors with the same data movement source quadruplet exist. The data movement source quadruplet gives a natural expression of the processors having the same data movement pattern for each dimension of an array.
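The bit-vector conjunction used in the ITR product can be sketched by expanding each quadruplet into a bit vector of processors (an illustration; the helper names are hypothetical):

```python
def quad_to_bitvector(dx, ps, nd, np):
    """Bit vector of the processors specified by (dx:ps:nd:np);
    bit i-1 set means processor i belongs to the group."""
    bits = 0
    for r in range(np // (ps * nd)):          # rf iterations
        base = r * ps * nd + (dx - 1) * ps    # start of chosen decomposition
        for i in range(base + 1, base + ps + 1):
            bits |= 1 << (i - 1)
    return bits

def procs(bits):
    """Decode a bit vector back into a set of 1-based IDs."""
    return {i + 1 for i in range(bits.bit_length()) if bits >> i & 1}

# Destination quadruplets of the two ITRs combined for P3 in FIG. 9:
# a single conjunction of bit vectors yields the destination group.
q = quad_to_bitvector(2, 1, 2, 4) & quad_to_bitvector(2, 2, 2, 4)
print(procs(q))
```

The intersection reduces to one machine-word AND per word of the bit vector, which is what makes the quadruplet set arithmetic fast even on large machines.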
From the foregoing, by calculating the data movement descriptors to which it is related, a processor can know the other processors which have the same data movement pattern. The IOS shown in FIG. 9 is an example for a two-dimensional array. As an example, the data movement to which processor $P_3$ is related is obtained from the ITR list of each dimension of the IOS. For array dimension 1 of FIG. 9, the first ITR block is extracted by $1+mod(\lfloor(3-1)/1\rfloor,2)$, obtained by substituting the values of the table of FIG. 9 into Equation (2). In like manner, for array dimension 2 of FIG. 9, the second ITR block is extracted by $1+mod(\lfloor(3-1)/2\rfloor,2)$. In this way, an ITR block is extracted from each dimension (two ITRs in the ITR block of the first dimension and a single ITR in the ITR block of the second dimension). If the ITR product of the ITR [50:50, (2:1:2:4)] from the first-dimension block and the ITR [50:99, (2:2:2:4)] from the second-dimension block is considered, (50, 50:99) is obtained for the array section and (4:1:4:4) for the data movement destination quadruplet. This represents that $P_3$ transmits the array section (50, 50:99) to $P_4$. For the data movement source quadruplet, it is found from $(1:1:2:8) \cap (2:2:2:8)$ that $P_7$ performs the same data movement.

Quadruplet Expression

A quadruplet is employed as an expression of processors which does not depend upon the number of processors. An ITR on an ITR list and the opponent data movement processors described by the ITR are expressed by the following set of four: the decomposition position, the processor step, the number of decompositions, and the total number of processors. FIG. 7 is a conceptual diagram showing the calculation of the participating data movement processors based on a quadruplet. The number of iterations in this figure will be described later. The group of processors specified by a quadruplet is a one-dimensional processor form expression of processors in arbitrary processor form.
Therefore, the processor form difference in data movement between arrays distributed to different processor forms is absorbed. In the conversion for obtaining an array section which is transmitted, the set arithmetic of the quadruplets of the ITRs and the ITR master is performed, and the opponent data movement processor group and the data movement source processor group are respectively calculated. One-dimensional processor form is generally realized by a bit vector, and a computer can process bit-vector arithmetic at high speed, so high-speed arithmetic can be accomplished even for highly parallel computers. In addition, in the algorithm of the present invention, collective data movement is recognized in exactly the same way as described above even in the replicated case where there are a plurality of transmitting processors, i.e., owners of an array element, or in the state where a plurality of receiving processors exist. By making the best use of a plurality of transmitting processors and dividing the data movement by the number of transmitting processors, it becomes possible to reduce the number of synchronizations between processors. The data structure of the set of four in a quadruplet is as follows:

- Decomposition position (dx)
- Processor step (ps)
- Number of decompositions (nd)
- Total number of processors (np)

The set of four, $(dx:ps:nd:np)$, specifies a plurality of processors in one-dimensional processor form.
The relationship between the four elements is as follows. Where the total number of processors is $np$ (processors 1 through $np$) and one processor decomposition represents a group of $ps$ consecutive processors, the one-dimensional processor form is constituted by $rf$ iterations of a row of $nd$ processor decompositions, where $rf$ is given by the following equation: \[ rf = \frac{np}{ps \cdot nd} \] The decomposition position $dx$ of the quadruplet specifies a position in the row of $nd$ processor decompositions. Specifying a processor decomposition is thus equal to specifying $ps \cdot rf$ processors, as shown in FIG. 8. For example, consider the quadruplet $(1:2:4:32)$. In the one-dimensional processor form of 32 processors, a single processor decomposition represents 2 processors. The row of four processor decompositions is iterated four times $(4 = 32/(2 \cdot 4))$. One iteration covers 8 processors. The decomposition position 1 is iterated and indicates the first decomposition of each iteration. Consequently, this quadruplet specifies processors 1, 2, 9, 10, 17, 18, 25, and 26.

High-speed recognition of collective data movement at runtime

With an ITR list and a quadruplet, broadcast data movement, shift data movement, and all gather data movement can be detected by the calculation of the data movement related to the processor itself.

Broadcast data movement

The set arithmetic of the quadruplets, performed when a data movement section is obtained, leaves in the ITR master the information indicating which processors likewise acquire the ITR which the processor itself acquires. Therefore, when the data movement of an array section which becomes broadcast data movement is obtained, the quadruplet of the ITR master, in the calculation of the data movement set of reception, indicates that a plurality of processors receive the same array section. Such an ITR master is obtained in the respective processors.
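The expansion just described can be written out in a few lines (a sketch; `expand_quadruplet` is a hypothetical name):

```python
def expand_quadruplet(dx, ps, nd, np):
    """Enumerate the 1-based processor IDs specified by (dx:ps:nd:np):
    rf = np/(ps*nd) iterations of a row of nd decompositions of ps
    processors each; dx picks one decomposition in each iteration."""
    rf = np // (ps * nd)
    return sorted(
        r * ps * nd + (dx - 1) * ps + i
        for r in range(rf)
        for i in range(1, ps + 1)
    )

# The worked example from the text: quadruplet (1:2:4:32).
print(expand_quadruplet(1, 2, 4, 32))
```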
In the calculation of a data movement set of transmission, on the other hand, the quadruplet of the combined ITRs indicates the group of receiving processors which receive the same array section. With this, broadcast data movement is recognized. Since recognition such as this is performed without transmitting or receiving information between processors, the processor group necessary for calling a collective data movement library can be constituted without taking unnecessary synchronization between processors.

Shift data movement

Shift data movement is the result of regular array access based on a fixed offset. In this case, only a regular ITR list is constituted, and shift data movement is recognized. In regular problems, the regularity of array access is reflected in the ITR list, and the ITR list is compressed by exploiting that regularity. This compression is not performed solely for the purpose of reducing memory consumption: if the ITR list of a certain array dimension is compressed by regularity, access to another processor in that dimension can be calculated without scanning the ITR list.

All gather data movement and all-to-all data movement

Consider a processor group constituted from a plurality of processors. In the case where a certain transmitting processor of the group transmits the same array region to all other receiving processors, and where this transmission is performed by all transmitting processors, the collective data movement is called all gather data movement. All gather data movement typically appears in matrix multiplication. In the case of all gather data movement, the processor group (the quadruplet of the ITR master) which tries to acquire an ITR from an ITR list becomes equal to the processor group (the quadruplet of the ITR) which becomes the data movement opponent of the array section. In that case, all gather data movement is recognized. All-to-all data movement is recognized likewise.
All-to-all data movement differs from all gather data movement in that a different array region is transmitted to each receiving processor. FIG. 10 is a graph showing, as a function of the number of processors, the performance difference between LU decomposition (kernel \(n=2000\)) using one-to-one data movement and LU decomposition using broadcast data movement, obtained by actually applying the algorithm of the present invention. This graph shows that it is very important to make the best use of a broadcast data movement library, and at the same time shows the performance improvement achieved by the algorithm of the present invention. It is found that, as compared with the case where the collective data movement library provided by a parallel computer system is not used, performance is enhanced by making the best use of the collective data movement library to which the algorithm of the present invention is applied. An embodiment of the present invention is executed in the following flow:

1. AOS Generation
2. LIS Generation
3. IOS Generation
4. Collective Data Movement Recognition

AOS is short for array ownership set and is a data structure in which array decomposition is described. In the case of HPF, the result of distributing an array is described with block(n) and cyclic(n). LIS is short for local iteration set and is the iterative loop space distributed to each processor based on the owner computes rule. These are generated at compile time as far as possible, which reduces execution-time overhead; in the case of insufficient information, however, the generation is performed at runtime. Even when generation is performed at runtime, it is possible to reduce the execution-time overhead by reusing the result of the generation.

(1)
AOS Generation

In the case of HPF, mapping between a pd-dimensional processor form \(P(m_{1}, \ldots, m_{pd})\) and an ad-dimensional array \(A(n_{1}, \ldots, n_{ad})\) is performed. \(A_{i}\), the \(i\)-th dimension of \(A\), is either divided by \(P_{i}\), the \(i\)-th dimension of \(P\), or it is not divided at all (i.e., it is collapsed). A dimension of \(P\) over which no division is performed is replicated. In the case where \(A_{i}\) has been divided by \(P_{i}\), the \(i\)-th ITR list of the AOS can be written as in the following equation, where \(R_{k}\) is the array section of the \(k\)-th ITR and is determined according to the distribution method (block(n) or cyclic(n)), and \(m_{i}\) is the number of processors in \(P_{i}\):

\[ \mathrm{ITR}_{\mathrm{AOS},i} = \{ \langle R_{k},\ (k;1;m_{i};m_{i}) \rangle : 1 \le k \le m_{i} \} \]

(2) LIS Generation

The iterative space of a nested loop before being distributed to processors is called a global iteration set, hereinafter referred to as a GIS. The \(d\)-th dimension of the GIS, \(\mathrm{GIS}^{d}\), is classified based on whether it corresponds to a dimension of the left-side array \(A\). When there is correspondence, the subscript expression of the corresponding array dimension can be expressed by employing the index variable \(iv_{d}\) of the GIS. When there is no correspondence, \(\mathrm{GIS}^{d}\), as it is, constitutes an LIS. If \(\mathrm{GIS}^{d}\) corresponds to the \(i\)-th array dimension, \(\mathrm{GIS}^{d}\) is divided based on the owner computes rule and becomes \(\mathrm{LIS}_{d}\). Here, a loop is described by its lower limit, upper limit, and step, and the subscript expression of the array dimension is written \(S_{i}(iv_{d})\).
With this assumption, by employing \(\mathrm{GIS}^{d}\), the region which each processor accesses on the left side (hereinafter referred to as an LWS) is obtained as the intersection between \(S_{i}(\mathrm{GIS}^{d})\), the image of the GIS under the subscript expression, and the array section \(R_{k}\) of each ITR on \(\mathrm{ITR}_{\mathrm{AOS},i}\). Here \(P_{i}\) is the dimension of the processor form \(P\), over which the left side is distributed, to which the array dimension corresponds:

\[ R_{\mathrm{LWS},k} = S_{i}(\mathrm{GIS}^{d}) \cap R_{k} \]

\(\mathrm{LIS}_{d}\) can then be obtained as the inverse image of the LWS under the subscript expression:

\[ \mathrm{LIS}_{d} = S_{i}^{-1}(R_{\mathrm{LWS},k}) \]

(3) IOS Generation

The array region of the right side \(B\) which is accessed with the LIS is called a local read set (LRS). An IOS is calculated by arithmetic between an LRS and an AOS. The \(i\)-th dimension \(B_{i}\) of the right side \(B\) is treated according to whether it corresponds to a dimension of the GIS. When there is no correspondence, \(B_{i}\), as it is, constitutes an LRS. Consider the case where \(B_{i}\) and \(\mathrm{GIS}^{d}/\mathrm{LIS}_{d}\) correspond with each other. The LRS is calculated with the right-side subscript expression \(T_{i}(iv_{d})\):

\[ R_{\mathrm{LRS},k} = T_{i}(\mathrm{LIS}_{d}) \]

Now, ITR division (denoted here by \(\div\)) will be defined. ITR division is performed by dividing each ITR of the ITR list which is the dividend by the ITRs of the ITR list which is the divisor.
As a result of the division, if the array section of a dividend ITR is taken to be \(R_{a}\) and the array section of a divisor ITR is taken to be \(R_{b}\), then \(R_{a} \cap R_{b}\) is obtained for the array section of the resulting ITR, and the quadruplet of the divisor ITR (with the decomposition position of the divisor ITR master) is obtained for the quadruplet. The division is performed for each divisor ITR for which \(R_{a} \cap R_{b}\) is non-empty. Here, if it is assumed that \(P'\) is the processor form over which the right side is distributed and that \(P'_{j}\) is the dimension of \(P'\) to which the array dimension corresponds, then \(\mathrm{ITR}_{\mathrm{AOS},j}\) denotes the corresponding ITR list of the AOS of \(B\). The IOS is obtained by taking the ITR division of the two ITR lists in both directions:

\[ \mathrm{ITR}_{\mathrm{in}} = \mathrm{ITR}_{\mathrm{LRS}} \div \mathrm{ITR}_{\mathrm{AOS}} \]

\[ \mathrm{ITR}_{\mathrm{out}} = \mathrm{ITR}_{\mathrm{AOS}} \div \mathrm{ITR}_{\mathrm{LRS}} \]

These are together called the in/out set (IOS). The meaning of this arithmetic is as follows. If the ITR list of the LRS is divided by the ITR list of the AOS, the resulting ITR list is an in set (\(\mathrm{ITR}_{\mathrm{in}}\)), which describes from which processor each processor reads the respective array regions accessed on the right side. Conversely, if the ITR list of the AOS is divided by the ITR list of the LRS, the resulting ITR list is an out set (\(\mathrm{ITR}_{\mathrm{out}}\)), which describes, among the array regions that each processor owns, which region is read and by which processor. For an array dimension which has not been divided, \(\Phi\), indicating that there is no division, is put into the decomposition position of the quadruplet of the ITR.
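The core of the in/out computation is intersecting read regions with owned regions. The sketch below (illustrative only; the patent's ITR lists carry quadruplets and compression, which are omitted here, and the function names are assumptions) models each region as a closed index range and derives both sets in one pass.

```python
def overlap(a, b):
    """Intersection of two closed index ranges, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def in_out_sets(lrs, aos):
    """Divide the LRS list by the AOS list (in set) and vice versa
    (out set), with regions modeled as (low, high) ranges."""
    in_set, out_set = {}, {}
    for p, read_region in lrs.items():
        for q, owned_region in aos.items():
            ov = overlap(read_region, owned_region)
            if ov is not None:
                in_set.setdefault(p, []).append((q, ov))   # p reads ov from q
                out_set.setdefault(q, []).append((p, ov))  # q sends ov to p
    return in_set, out_set

# Block ownership over two processors, reads shifted by one element:
aos = {1: (1, 10), 2: (11, 20)}    # AOS: who owns which section
lrs = {1: (2, 11), 2: (12, 20)}    # LRS: who reads which section
ins, outs = in_out_sets(lrs, aos)
print(ins[1])
# [(1, (2, 10)), (2, (11, 11))]
```

Processor 1 reads 2:10 from itself and element 11 from processor 2, which is exactly the shape of the in-set example given in the text.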
For example, consider \(\mathrm{ITR}_{\mathrm{LRS}} = \langle 1:2:4 \rangle \{\langle 2:11, \Phi\rangle, \langle 12:20, \Phi\rangle\}\) and \(\mathrm{ITR}_{\mathrm{AOS}} = \langle 2:2:4 \rangle \{\langle 1:10, \Phi\rangle, \langle 11:20, \Phi\rangle\}\). Then \(\langle 1:2:4 \rangle \{\langle 2:10, 1:2:2:4\rangle, \langle 11, 2:2:2:4\rangle\}\) is obtained for the in set, and \(\langle 2:2:4 \rangle \{\langle 2:10, 1:2:2:4\rangle, \langle 11, 1:2:2:4\rangle\}\) is obtained for the out set.

(4) Collective Data Movement Recognition

The method of making an ITR product of the ITR lists of each dimension of the in set or the out set and calculating a data movement set has already been described in the column "ITR List." Here, a description is given of how collective data movement is specifically detected when a data movement descriptor is obtained.

Shift Data Movement

An ITR list is compressed if there is regularity. Now, consider the case where the ITR lists of all dimensions of the IOS have been compressed by regularity, or the case where the ITR lists are common to all dimensions. When the quadruplet of the data movement destination of an ITR can be expressed as a linear expression of the decomposition position of the ITR master, the compressed dimension \(i\) is recognized as shift data movement. In shift data movement, the data movement occurs between processors whose distance is a fixed vector. Therefore, optimization specific to shift data movement becomes possible.

Broadcast Data Movement

In the case where there are other processors which share the data movement descriptor obtained from the in set, i.e., where there are a plurality of processors which receive the same array region, broadcast data movement is recognized. However, the cases of the following all gather data movement and all-to-all data movement are excluded.
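The broadcast rule above, together with the all gather, all-to-all, gather, and scatter rules described next, can be sketched as a single classification function. This is a simplified illustration, not the patent's mechanism: processor groups are modeled as plain sets rather than quadruplets, and the inputs are assumed to have already been derived from the in/out set.

```python
def classify_collective(senders, receivers, same_region, fixed_offset):
    """Classify a data movement descriptor per the recognition rules:

    senders/receivers : sets of processor numbers from the out/in set
    same_region       : True if every receiver gets the same array region
    fixed_offset      : True if sender->receiver is a constant shift
    """
    if senders == receivers:
        # Destination group equals source group:
        return "all-gather" if same_region else "all-to-all"
    if fixed_offset:
        return "shift"
    if len(receivers) > 1 and same_region:
        return "broadcast"            # excluded cases handled above
    if len(receivers) == 1 and len(senders) > 1:
        return "gather"
    if len(senders) == 1 and len(receivers) > 1 and not same_region:
        return "scatter"
    return "point-to-point"

print(classify_collective({1}, {1, 2, 3, 4}, True, False))   # broadcast
print(classify_collective({1, 2}, {1, 2}, True, False))      # all-gather
```

The ordering of the tests mirrors the text: the all gather / all-to-all case is checked first so that it is excluded from broadcast recognition.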
With the ITR product taken when obtaining a data movement descriptor from the in set, the \(\cap\) arithmetic is performed for the data movement source quadruplet, and it is also performed for the reception quadruplet. When the resulting reception quadruplet specifies a plurality of processors, the data movement descriptor of the result of the ITR product is common to those processors. A recognition example of broadcast data movement is shown for the LU decomposition described later.

All Gather Data Movement and All-to-All Data Movement

In the case where, on the in/out set, the processor group expressed by the data movement destination quadruplet equals the processor group expressed by the data movement source quadruplet, all gather data movement or all-to-all data movement is recognized. The difference between the two is that the former transmits the same array region, while the latter transmits a different region to each opponent processor.

Gather Data Movement and Scatter Data Movement

Consider the case where only a single processor can constitute a data movement descriptor from the in set, and where the ITR block to which the processor relates has a plurality of ITRs indicating different processors. Also, all ITR lists of the out set have either been compressed by regularity or are common to all processors. It follows that all transmitting processors transmit data to a single processor. This case is recognized as gather data movement. Scatter data movement is the reverse of this.

Collective Data Movement Recognition Example: LU Decomposition

With the program list of the LU decomposition shown in FIG. 1, a description will be made of the actual operation by which the present algorithm recognizes broadcast data movement when \(n\) is 128 and the number of processors is 32.

(1) AOS Generation

In the program list of FIG.
1, the second dimension of the AOS of array a is cyclic, and for processor \(p\) \((1 \le p \le 32)\) the ITR list is expressed by employing mod arithmetic on \((p-1)\) with respect to 32. Here \(R[p \ldots q]\) indicates the ITR from processor \(p\) to processor \(q\).

(2) LIS Generation

From the GIS of \((113:128:1)\), in the case of \(k=112\), the LWS is obtained from the array subscript expression, its inverse \(S^{-1}\), and \(\mathrm{ITR}_{\mathrm{AOS}}\). Since \(\mathrm{ITR}_{\mathrm{AOS}}\) and \(S^{-1}\) are self-evident, the ITR list of the LIS of the \(j\) loop follows directly.

(3) IOS Calculation

Now, pay attention to the right-side array \(a(i,k)\) in the case of \(k=112\). Since \(\mathrm{ITR}_{\mathrm{LIS}}\) and the array subscript expression \(i\) of the first dimension correspond, the LRS of \(a(i,112)\) is obtained accordingly.
The present invention relates to a methodology for translating exact interpretations of keyword queries into meaningful and grammatically correct plain-language queries in order to convey the meaning of these interpretations to the initiator of the search. The method includes the steps of generating at least one grammatically valid plain-language sentence interpretation for a keyword query from generated plain-language sentence clauses, wherein the grammatically valid plain-language sentence is based upon differing matching elements, and presenting at least one grammatically valid plain-language sentence interpretation for the keyword query to a keyword query system user for the user's review.

[FIG. 2, merge operation: assign a type instance to each match (205); do the types have the same alias? (210); if the aliases refer to different instances, generate a clause for the group (215).]

[FIG. 3, sentence generation customization: specify a template for each path (305); generate clauses from the templates (310); determine whether templates can be merged (315); merge clauses (320).]

ENGLISH-LANGUAGE TRANSLATION OF EXACT INTERPRETATIONS OF KEYWORD QUERIES

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application is a continuation of U.S. patent application Ser. No. 11/615,115, filed Dec. 22, 2006, the contents of which are incorporated by reference herein in their entirety.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] This invention relates to the field of information retrieval techniques, in particular to the English-language translation of exact interpretations of keyword queries.

[0004] 2. Description of Background

[0005] Before our invention, keyword searching was the most important paradigm for Information Retrieval (IR). Conventionally, an Avatar Semantic Search was accomplished by generating precise queries from a keyword query based upon a domain-specific type system. For a given keyword query, several possible interpretations of the keyword query may be produced within a search.
Semantic optimizers using semantic knowledge and heuristics operate to prune keyword query interpretations, wherein the remaining keyword query interpretations are utilized to assist in the keyword search. In structure, keyword query interpretations are XPath expressions; thus, displaying the keyword query interpretations directly to a user is of little value, since the interpretations cannot be easily understood and reviewed by the user. Therefore, there exists a need for an approach for displaying plain-language interpretations of XPath expressions for review by the initiator of an Avatar Semantic Search.

SUMMARY OF THE INVENTION

[0006] Aspects of the present invention relate to a methodology for the translation of exact interpretations of keyword queries into meaningful and grammatically correct plain-language queries in order to convey the meaning of these interpretations to the initiator of the keyword search.

[0007] The shortcomings of the prior art are overcome and additional advantages are provided through the provision of a method for translating an interpretation of a keyword query into a grammatically correct plain-language query, the method comprising the steps of acquiring at least one keyword to perform a keyword query search upon, and semantically interpreting the acquired keyword, further including the step of building a translation index to determine matching elements, wherein matching elements are derived from information comprising type names, attribute names, and atomic attribute values that are associated with a specific keyword.
[0008] The method further comprises the steps of merging the matching elements in the event that differing keywords comprise the same matching element and type alias; providing a clause template for the customization of a plain-language sentence clause, wherein the plain-language sentence clause is based upon the matching elements that are selected for customization; generating at least one plain-language sentence clause; and determining whether the plain-language sentence clauses can be merged, wherein the determination is based upon the matches on the attribute paths for a given type element. Further, the method comprises the steps of specifying the plain-language sentence clauses that are to be merged, the plain-language sentence clause mergers being based on the attribute paths for a given matching type element, and merging the plain-language sentence clauses. Further, the method comprises a language for specifying custom templates for generating clauses and sentences.

[0009] Yet further, the method comprises the steps of generating at least one grammatically valid plain-language sentence interpretation for the keyword query from the generated plain-language sentence clauses, wherein the grammatically valid plain-language sentence is based upon differing matching elements, and presenting at least one grammatically valid plain-language sentence interpretation for the keyword query to a keyword query system user for the user's review.

[0010] System and computer program products corresponding to the above-summarized methods are also described and claimed herein.

[0011] Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.
[0012] As a result of the summarized invention, technically we have achieved a solution that assists in the translation of interpretations of keyword queries into meaningful and grammatically correct plain-language queries, the meaning of these interpretations thereafter being displayed to the initiator of the search.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:

[0014] FIG. 1 illustrates one example of a flow diagram illustrating aspects of the methodology that relates to the present invention.

[0015] FIG. 2 illustrates one example of a flow diagram detailing aspects of a clause merge operation.

[0016] FIG. 3 illustrates one example of a flow diagram detailing aspects of a sentence generation customization operation.

[0017] The detailed description explains the preferred embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.

DETAILED DESCRIPTION OF THE INVENTION

[0018] One or more exemplary embodiments of the invention are described below in detail. The disclosed embodiments are intended to be illustrative only, since numerous modifications and variations therein will be apparent to those of ordinary skill in the art.

[0019] Document collections often have valuable structured information that is associated with each document present within the collection. Traditional information retrieval (IR) models used in keyword searching employ text-centric representations of queries and documents (e.g., term vectors, bags of index terms, etc.).
As a result, such IR models are incapable of effectively utilizing structured metadata as part of keyword retrieval operations. To address the mismatch between the need for a simple keyword-based search interface and the need for complex queries to exploit structured data, Avatar Semantic Search operations employ the concept of query interpretation. In particular, Avatar Semantic Searching enumerates several possible interpretations of a keyword query and expresses each interpretation as a complex query over the underlying collection of documents. Conventionally, query interpretation is the process of generating a set of precise queries over a data set, one for each possible interpretation of a given keyword query. An interpretation for a keyword assigns specific semantics to that particular keyword. By assigning specific semantics to each keyword in the query, very precise interpretations for the query are subsequently produced. Thus, given a keyword query, the system generates a set of interpretations for that query. Turning now to the drawings in greater detail, FIG. 1 shows a flow diagram detailing aspects of the present keyword translation methodology. The method comprises the step of the party desirous of the keyword search supplying the keyword(s) that will form the basis of the search (step 105). At step 110, the keyword search is initiated, and at step 115, a clause is generated for each keyword match that occurs within the query. Next, at step 120, the clauses generated for word matches that have occurred within the search are combined into a single clause. Lastly, at step 125, the clauses from the type match, path match, and value match occurrences in the search are combined with the keyword match clause to form a plain-language interpretation of the keyword query search. As an example, let us consider a keyword search over a body of email documents.
Given the task of looking for the telephone number of an individual named Philip by locating an email message in which the number is mentioned, a natural user query would be 'Philip telephone'. In the absence of any structured data, a traditional IR engine would return documents that contain the tokens 'Philip' and 'telephone' (ignoring synonym expansion, stemming, etc.). Now assume that in addition to the actual text, each document is automatically associated with four structured attributes corresponding to the email headers: from, date, to, and subject. Additionally, consider that the following text analysis engines (TAEs) are executed over the entire corpus of email:

1. Entity recognition engines to extract names of persons and organizations.
2. Pattern recognition engines to extract telephone numbers and URLs.
3. A signature identifier to process email signatures and extract persons, companies, websites, numbers, etc. from the text of the signature.

In order to figure out possible interpretations for any keyword, the system builds a translation index. The translation index is a keyword-matching engine built over the set of all type names (e.g., Email, Person, Telephone, ...), attribute names (firstname, number, ...), and atomic attribute values (Philip, pdf, 408, ...). This index allows us to restrict the potential space of semantic interpretations for each keyword. Given a keyword, the translation index returns a set of one or more matching elements (types, paths, or values) from the semantic catalog. Within aspects of the present invention, type matches are based on type names, path matches are based on attribute names, and value matches are based on the atomic attribute values. For instance, given the keyword 'telephone', the translation index may return a type match [type Telephone] and a path match [path Signature.phone].
Similarly, given the keyword Philip, the translation index may return one or more of the following value matches: [val Person.name], [val Signature.person.name], [val Email.from], and [val Email.to]. Notice how the type and path matches depend only on the type system, while the value matches actually depend on the data. During the query interpretation stage, each token in the query is probed against the translation index to enumerate all possible semantic interpretations. In our case, this step results in:

Philip =>
(1) [val Email.from]
(2) [val Signature.person.name]
(3) [val Email.to]
(4) [word Email.body]

Telephone =>
(1) [type Telephone]
(2) [path Signature.phone]
(3) [word Email.body]

The fact that a token can be simply treated as a keyword is reflected by the match [word Email.body] on the original document text. Queries are generated by taking all of the possible combinations of matches for each keyword. Some sample queries are given below. The query labels below are designed to reflect the interpretations used for each keyword.

- q1:1 retrieve emails from Philip containing a telephone number
- q2:2 retrieve emails containing Philip's signature with a telephone number
- q3:1 retrieve emails sent to Philip containing a telephone number
- q3:2 retrieve emails from Philip containing the keyword Telephone

Each of these query label interpretations corresponds to a precise query over the data set. These precise queries are evaluated, and the results of the evaluation are presented to the user. Each interpretation of a query represents particular semantics for that query. It is very useful to display to the user the semantics that the system is using, so that the user can see the correlation between the results and the particular interpretation. One way to display the semantics is to show the user the precise query corresponding to the interpretation.
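The combination step described above is a Cartesian product over the per-keyword match lists. The following sketch (illustrative only; the dictionary layout is an assumption, and the match lists are taken from the example in the text) enumerates all interpretations of 'Philip telephone':

```python
from itertools import product

# Matches returned by the translation index for each keyword
# (taken from the example in the text)
matches = {
    "Philip": ["[val Email.from]", "[val Signature.person.name]",
               "[val Email.to]", "[word Email.body]"],
    "telephone": ["[type Telephone]", "[path Signature.phone]",
                  "[word Email.body]"],
}

# One interpretation = one choice of match per keyword
interpretations = [dict(zip(matches, combo))
                   for combo in product(*matches.values())]

print(len(interpretations))   # 12 = 4 x 3
print(interpretations[0])
```

In a real system, a semantic optimizer would then prune this enumeration before the precise queries are generated.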
However, the precise query is expressed in the Avatar query language, and this language may prove difficult for the user to understand without first having an understanding of the Avatar object model and query language. An alternative approach to informing the user of the relationship between the results and an interpretation is to generate an English-language equivalent for the query interpretation and display the English-language equivalent to the user. Such an interpretation will be easy for any user to understand, and the user can also straightforwardly compare the different interpretations, selecting the interpretation that accurately captures what they intended for the query. For example, see the English-language interpretations of the query 'Philip telephone' listed above. The problem that this invention solves can be described as follows:

1. Given a set of keywords and their semantic interpretations, generate a grammatically valid English sentence to represent the interpretation.
2. The sentence generation should be easily customizable so that specific clauses can be generated for different types and matches.

Generating Clauses: The present invention provides solutions for generating a clause for each match and combining these clauses into a meaningful sentence. There are four types of possible matches:

1. Type match (type k T): this indicates that the keyword k matches the name of a type T in the system. For example, the keyword 'Telephone' generates a type match (type 'Telephone' Telephone).

2. Path match (path k T.a.b): this indicates that a keyword k matches the name of an attribute path 'a.b' for type T. Since the type system is hierarchical, attributes can be other types; we use a dot notation to denote a chain of attributes. For example, the keyword 'Telephone' generates a path match (path 'Telephone' Signature.phone).

3.
Value match (value k T.a.b)—this indicates that a keyword k matches one of the values taken by an attribute path 'a.b' for type T in the document body. For example, the keyword 'Philip' generates a value match (value 'Philip' Signature.person.name), since there is an instance of Signature in the body that has a person with name 'Philip'. 4. Word match (word k)—this indicates that k should be treated simply as a keyword to match against the document. For example, the keyword 'Philip' generates a word match (word 'Philip'). For each kind of match, we have a default clause that gets generated: 1. Type match (type k T): the clause generated is either 'a T' or 'an T', depending on the first letter of T. For example, (type 'Telephone' Telephone) generates 'a Telephone'. 2. Path match (path k T.a.b.c): the clause generated is 'a/an T having a/an a with a/an b with a/an c'. For example, (path 'Telephone' Signature.phone) generates the clause 'a Signature having a phone'. 3. Value match (value k T.a.b.c): the clause generated is 'a/an T having a/an a with a/an b with a/an c containing k'. For example, (value 'Philip' Signature.person.name) generates the clause 'a Signature having a person with a name containing 'Philip''. 4. Word match (word k): the clause generated is ''k''. For example, (word 'Philip') generates the clause ''Philip''. Combining the Clauses: The clauses generated from the matches are put together in a sentence. Within aspects of the present invention, the construction of a valid sentence from clauses is based upon the grammatical rules of the English language; however, the present methodology can be adapted to conform to the grammatical rules of languages other than English. In the present implementation, since the sentence is of a very specific form, we can construct it in a more direct manner. Let Ck1, Ck2, …, Ckm be the clauses from the word matches. First, these clauses are put together into a single clause Ck = "the keywords Ck1, Ck2, …, Ckm". 
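The default clause rules above can be sketched in code. This is a minimal illustration with hypothetical function names, not the patent's implementation; matches are represented as plain tuples.

```python
# Hypothetical sketch of the default clause generation rules.

def article(word):
    """Choose 'a' or 'an' from the first letter of the word."""
    return "an" if word[0].lower() in "aeiou" else "a"

def path_phrase(type_name, path):
    """'a T having an a with a b with a c' for path T.a.b.c."""
    steps = path.split(".")
    sub = " with ".join(f"{article(s)} {s}" for s in steps)
    return f"{article(type_name)} {type_name} having {sub}"

def default_clause(match):
    kind = match[0]
    if kind == "type":                       # (type k T)
        _, _, t = match
        return f"{article(t)} {t}"
    if kind == "path":                       # (path k T.a.b.c)
        _, _, t, path = match
        return path_phrase(t, path)
    if kind == "value":                      # (value k T.a.b.c)
        _, k, t, path = match
        return path_phrase(t, path) + f" containing '{k}'"
    if kind == "word":                       # (word k)
        _, k = match
        return f"'{k}'"
    raise ValueError(f"unknown match kind: {kind}")

print(default_clause(("type", "Telephone", "Telephone")))
# a Telephone
print(default_clause(("path", "Telephone", "Signature", "phone")))
# a Signature having a phone
print(default_clause(("value", "Philip", "Signature", "person.name")))
# a Signature having a person with a name containing 'Philip'
```

Note how the `a`/`an` choice and the "with" chaining follow directly from the rules stated for each match kind.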
For example, if there are two word clauses 'Philip' and 'Telephone', the combined clause Ck is "the keywords 'Philip' and 'Telephone'". Let C1, C2, …, Cn be the clauses generated from the type, path, and value matches, plus the combined word clause. The final sentence is of the form: "Retrieve documents that contain C1, C2, …, Cn". For example, consider an interpretation of the keyword query 'Philip telephone' that includes the matches (path 'Telephone' Signature.phone) and (word 'Philip'). The clauses generated are "a Signature having a phone" and "the keyword 'Philip'". Putting these together, we get the final sentence: "Retrieve documents that contain a Signature having a phone and the keyword 'Philip'". Handling Type Merge: In some interpretations, different keywords might match the same type. For example, (value 'Philip' Signature.person.name) and (path 'Telephone' Signature.phone) both refer to the type Signature. In this event there are two possibilities: either the two matches refer to different Signature instances, or they refer to the same Signature instance. The semantics of the two choices are different. In the first case, we are looking for emails that contain a signature having a person with name 'Philip', and a signature (which may be the same or a different one) having a phone number. In the single-instance case, we are looking for emails that contain a signature having a person with name 'Philip' and a phone number. The process of having different matches for a type refer to the same instance is called type merging (see FIG. 2). These two choices are treated as separate interpretations and are generated by the system using type merge. As shown at step 205, an initial determination is made to assign a type instance to each match. The information about the type instance for any match is stored in an interpretation using a type alias. If the alias for two matches is the same, they refer to the same instance (step 210). 
Adding the type alias to our notation, the two choices are: 1. (value 'Philip' Signature.person.name s1), (path 'Telephone' Signature.phone s2), where the matches refer to different instances of Signature, s1 and s2. 2. (value 'Philip' Signature.person.name s1), (path 'Telephone' Signature.phone s1), where the matches refer to the same instance of Signature, s1. To generate an appropriate English representation for an interpretation with type merge, we first group matches by their type alias. For example: 1. If the matches are (value 'Philip' Signature.person.name s1), (path 'Telephone' Signature.phone s2), we have two groups: s1: {(value 'Philip' Signature.person.name s1)} and s2: {(path 'Telephone' Signature.phone s2)}. 2. If the matches are (value 'Philip' Signature.person.name s1), (path 'Telephone' Signature.phone s1), we have a single group s1: {(value 'Philip' Signature.person.name s1), (path 'Telephone' Signature.phone s1)}. Type merge affects the way clauses are generated for matches. Type merge is not applicable for a type match, since the system automatically prunes multiple type matches to the same type. Type merge is also not applicable for a word match, since word matching applies to the document content and not to any particular type instance. Let us now revisit clause generation for path and value matches. Type merge implies a clause merge on the generated English clause. Rather than generating a clause for each match, we generate a clause for each group when matches are grouped by the type alias (step 215). The clause for a group mentions the type once and has a sub-clause for each different match in the group. Consider these examples: 1. Only Path Matches [0058] After grouping by type alias, consider a group that contains [0059] t1: {(path K1 T.a.b.c t1), (path K2 T.e.f t1)} [0060] The clause generated is ‘a/an T having a/an a with a/an b with a/an c and a/an e with a/an f’. 
[0061] For example, the clause for the interpretation with the group s1: {(path ‘Philip’ Signature.person.name s1), (path ‘Telephone’ Signature.phone s1)} will be ‘a Signature having a person with a name and a phone’. 2. Only Value Matches [0062] The different value matches might refer to the same path or to different paths on the type. To handle these cases, we do a further grouping by the path used in the value matches. [0063] A) Different paths: [0064] t1: {(value K1 T.a.b.c t1), (value K2 T.e.f t1)} [0065] The clause generated is ‘a/an T having a/an a with a/an b with a/an c containing K1 and a/an e with a/an f containing K2’. [0066] B) Common path a.b.c: [0067] t1: {(value K1 T.a.b.c t1), (value K2 T.a.b.c t1)} [0068] The clause generated is ‘a/an T having a/an a with a/an b with a/an c containing K1 and K2’. [0069] For example, the clause for the interpretation with the group s1: {(value ‘Philip’ Signature.person.name s1), (value ‘Thomas’ Signature.person.name s1)} will be “a Signature having a person with a name containing ‘Philip’ and ‘Thomas’.” 3. Both Path and Value Matches [0070] We combine the steps described in 1 and 2. Consider a group that contains: [0071] t1: {(path K1 T.a.b.c t1), (value K2 T.e.f t1), (value K3 T.e.f t1)} [0072] The clause generated is ‘a/an T having a/an a with a/an b with a/an c and a/an e with a/an f containing K2 and K3’. [0073] For example, the clause for the interpretation with the group s1: {(value ‘Philip’ Signature.person.name s1), (path ‘Telephone’ Signature.phone s1)} will be “a Signature having a phone and a person with a name containing ‘Philip’”. Customizing the Sentence Generation [0074] The algorithm presented so far treats all types uniformly and generates clauses for them based on type and attribute names. However, users very often want to customize the plain-language English sentence that is generated. The sentence is more readable if customized clauses are generated for certain types and their matches. 
For example, rather than saying [0075] “a Signature having a person with a name containing ‘Philip’”, one can say “Philip’s Signature”. [0076] We have defined a template-based algorithm for allowing these customizations (see FIG. 3). At step 305, the user can provide a clause template for the types and matches that she wants to customize. At step 310, the custom clauses are generated from these templates. A design issue to consider is the level of sentence customization that can be allowed. For example, a given type T can have multiple attributes (and consequently multiple attribute paths). Due to type merge, we may have multiple paths matching for the same type instance. To be fully general, we would need to be able to specify a clause for matches on each subset of attributes for a type. Consider the type Signature, which has the attributes person.name and phone. For the match (value ‘Philip’ Signature.person.name s1), we want to generate the clause “Philip’s Signature”, and for the match (path ‘Telephone’ Signature.phone s2), we want to generate the clause “a Signature having a phone number”. [0077] In the event that the two matches are merged, the ideal clause to be generated is “Philip’s Signature having his phone number”. There is no obvious way to generate this from the two individual clauses specified by the user; the user has to specify this merged clause explicitly, to be used in case there is a match on both person.name and phone for a given instance of Signature. Specifying a clause for each subset of attributes leads to an exponential blowup in the number of clause templates that must be specified. As a tradeoff, users are allowed to specify templates for each path separately and also to indicate whether these templates can be merged. If merging is allowed (step 315), our algorithm will merge the clauses automatically (step 320). The details of the templates and algorithms utilized within aspects of the invention are explained below. 
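Before turning to those template details, the default (non-customized) grouping-and-merge behavior described above can be sketched as follows. This is a minimal illustration with hypothetical names, covering only the path-match case (case 1 above).

```python
from collections import defaultdict

# Hypothetical sketch: group path matches by type alias and emit one
# merged clause per group, mentioning the type only once.

def article(word):
    return "an" if word[0].lower() in "aeiou" else "a"

def path_subclause(path):
    """'a person with a name' for path person.name."""
    return " with ".join(f"{article(s)} {s}" for s in path.split("."))

def merged_path_clause(type_name, paths):
    subs = [path_subclause(p) for p in paths]
    joined = subs[0] if len(subs) == 1 else ", ".join(subs[:-1]) + " and " + subs[-1]
    return f"{article(type_name)} {type_name} having {joined}"

def group_by_alias(matches):
    """Each match is (keyword, type, path, alias)."""
    groups = defaultdict(list)
    for m in matches:
        groups[m[3]].append(m)
    return groups

matches = [("Philip", "Signature", "person.name", "s1"),
           ("Telephone", "Signature", "phone", "s1")]
for alias, group in group_by_alias(matches).items():
    type_name = group[0][1]
    print(merged_path_clause(type_name, [m[2] for m in group]))
# a Signature having a person with a name and a phone
```

If the two matches carried different aliases (s1 and s2), two separate clauses would be produced instead, matching the two-instance interpretation.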
Template Specification: [0078] A template is a string that comprises embedded processing instructions and placeholders. The placeholders and instructions are specified within the delimiters “<<” and “>>”. Templates are arranged hierarchically; in particular, a template is provided for the overall sentence. Within aspects of the present invention, templates have placeholders for clauses, and each clause is in turn generated using a template. A clause can have sub-clauses depending on the match type. An example of a simple sentence template is “Retrieve all emails <<CLAUSE0>><<CLAUSE1>>”. This template has two placeholders, <<CLAUSE0>> and <<CLAUSE1>>. The constructs allowed in templates are described below: [0079] <<CLAUSEX>>: This is a placeholder for a clause of type X. Clauses can be of different types, numbered 0, 1, …, n. A clause of type X will be inserted at the location of <<CLAUSEX>>. Having clauses of different types enables us to enforce positional constraints on where different clauses occur in the final sentence. [0080] <<TRIPLE: s1; s2; s3>>: This is a processing instruction and provides a mechanism for generating different strings depending on the position of the clause. For example, let T be a template that contains the instruction <<TRIPLE: s1; s2; s3>>, and let E be the enclosing template, i.e., T generates a clause that is inserted into E. The semantics are: [0081] => If T is the first clause to be inserted into E, then the TRIPLE generates s1 in T. [0082] => If T is the last but not the first clause to be inserted into E, then the TRIPLE generates s3 in T. [0083] => If T is neither the first nor the last clause to be inserted into E, then the TRIPLE generates s2 in T. [0084] For example, let the template for a type match on Signature be T1 = “<<TRIPLE: that contain; ,; and>> a signature”. The template for a type match on Phone is T2 = “<<TRIPLE: that contain; ,; and>> a phone number”. T1 and T2 are clauses of type 0. 
The enclosing template is the sentence template E = "Retrieve all emails <<CLAUSE0>>". If the interpretation has two type matches, the first on Signature and the second on Phone, then applying the semantics of TRIPLE, the first clause generated is "that contain a signature" and the second clause is "and a phone number". Substituting these into the enclosing template E, we get "Retrieve all emails that contain a signature and a phone number". The TRIPLE allows us to generate "that contain" in one case and "and" in the other, depending on where the clause will be placed in the sentence. <<K>>: This is a placeholder for the keyword in a word match. <<V>>: This is a placeholder for a value in a value match. SET: … is generated in the clause and Var is reset to false; otherwise, nothing is generated and this instruction has no effect. [0085] SET and CHKUST give more fine-grained control over the strings to generate and might be useful in cases where TRIPLE is not sufficient. This template specification language is powerful enough to handle a great assortment of linguistic cases. [0086] Next we describe the templates that need to be specified for the different cases: 1. Sentence template: This is the overall template of the sentence. It has placeholders of the form <<CLAUSEX>> to indicate where the clauses of the different types are to be inserted. Example: [0088] Sentence template = "Retrieve all emails<<CLAUSE0>><<CLAUSE1>>" 2. Type match template: For each type, we specify: [0090] A) a template that generates the clause for a match on that type (this clause will be substituted into the sentence template); [0091] B) the type of the clause generated. We refer to these templates as Type Match templates. Example: for type Telephone: Type Match template = "<<TRIPLE: that contain; ,; and>> a phone number", type=1. [0092] 3. Path and Value matches: For each type, we specify a Path/Value Match Type template that generates the type part of the clause for a path or value match. 
Example: for type Signature: Path/Value Match Type template = "<<TRIPLE: that contain; ,; and>><<CLAUSE0>>Signature<<CLAUSE1>>", type=1. For each path, we specify a Path Match Path template that generates the sub-clause that gets inserted into the type clause generated by the Path/Value Match Type template. Example: for type Signature and path phone: Path Match Path template = "<<TRIPLE: having; ,; and>> a phone number", type=1, mergeable=true. For each path, we also specify a value match template that is applicable for value matches: [0101] a template that generates the value clause to be inserted into the path clause generated by the Path Match template. We refer to these templates as Value Match Value templates. Example: for type Signature and path person.name: Value Match Value template = "<<TRIPLE: ; ; and>><<V>>". 4. Word matches: We specify: [0103] A) a template to generate the keyword clause that will be inserted into the sentence template; [0104] B) the type of the clause generated. We refer to this template as the Word Match template. Example: a keyword template could be Word Match template = "<<TRIPLE: that contain; ,; and>><<K>>", type=1. Consider an interpretation that has the matches (value 'Philip' Signature.person.name s1) and (path 'Telephone' Signature.phone s1). Note that the types have been merged. The clause generation proceeds as follows: [0105] a => The Value Match Value template for Signature.person.name is "<<TRIPLE: ; ; and>><<V>>". For the value "Philip" this resolves to "Philip"; the TRIPLE generates an empty string since this is the first value in the enclosing template. 
[0106] b => The Value Match Path template for the value match on Signature.person.name is "<<CLAUSE>>'s", with type=0 and mergeable=true. Substituting the value clause, this resolves to "Philip's". This is a clause of type 0. [0107] c => The Path Match Path template for the path match on Signature.phone is "<<TRIPLE: having; ,; and>> a phone number", with type=1 and mergeable=true. Since this is the first clause of type 1, this resolves to "having a phone number". This is a clause of type 1. [0108] d => The Path/Value Match Type template for path and value matches on Signature is "<<TRIPLE: that contain; ,; and>><<CLAUSE0>>Signature<<CLAUSE1>>", with type=1. Substituting the clauses generated in steps b and c in their appropriate places and resolving the TRIPLE, we get "that contain Philip's Signature having a phone number". We could do this because the clauses generated in b and c are both mergeable. This is a clause of type 1. [0109] e => Finally, substituting this into the sentence template "Retrieve all emails<<CLAUSE0>><<CLAUSE1>>", we get the final sentence "Retrieve all emails that contain Philip's signature having a phone number". [0110] Thus the template-based sentence generation methodologies of the present invention allow for the straightforward customization of generated English sentences. If customization for a type or path is not needed, then the user does not have to specify templates for that type or path; in these cases, the system automatically uses default templates that generate sentences as described initially. For the Signature example, with default templates the system will generate: [0111] "Retrieve documents that contain a signature having a person with a name containing 'Philip' and a phone" [0112] The capabilities of the present invention can be implemented in software, firmware, hardware, or some combination thereof. 
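As one such software illustration, the position-dependent <<TRIPLE: s1; s2; s3>> instruction described above might be resolved as follows. This is a hypothetical sketch, not the patent's implementation; the regular expression and function names are assumptions.

```python
import re

# Hypothetical sketch of resolving <<TRIPLE: s1; s2; s3>> processing
# instructions from a clause's position in its enclosing template.
TRIPLE = re.compile(r"<<TRIPLE:\s*([^;>]*);\s*([^;>]*);\s*([^>]*)>>")

def resolve_triple(template, index, total):
    """index: 0-based position of this clause; total: number of clauses."""
    def pick(m):
        s1, s2, s3 = m.group(1), m.group(2), m.group(3)
        if index == 0:
            return s1            # first clause inserted into E
        if index == total - 1:
            return s3            # last (but not first) clause
        return s2                # middle clause
    return TRIPLE.sub(pick, template)

t_sig = "<<TRIPLE: that contain; ,; and>> a signature"
t_phone = "<<TRIPLE: that contain; ,; and>> a phone number"
clauses = [resolve_triple(t, i, 2) for i, t in enumerate([t_sig, t_phone])]
sentence = "Retrieve all emails " + " ".join(c.strip() for c in clauses)
print(sentence)
# Retrieve all emails that contain a signature and a phone number
```

This reproduces the worked TRIPLE example above: "that contain" is chosen for the first clause and "and" for the last.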
[0113] As one example, one or more aspects of the present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately. [0114] Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention, can be provided. [0115] The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified. All of these variations are considered a part of the claimed invention. [0116] While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described. What is claimed: 1. 
A method for translating an interpretation of a keyword query into a grammatically correct plain-language query statement, the method comprising the steps of: - acquiring at least one keyword to perform a keyword query search upon; - semantically interpreting the acquired keyword, further including the step of building a translation index to determine matching elements, wherein matching elements are derived from information comprising type names, attribute names, and atomic attribute values that are associated with a specific keyword; - merging the matching elements in the event that differing keywords comprise a same matching element and type alias; - providing a clause template for the customization of a plain-language sentence clause, wherein the plain-language sentence clause is based upon the matching elements that are selected for customization; - generating at least one plain-language sentence clause; - determining if the plain-language sentence clauses can be merged, wherein the determination is based upon the attributes matched for a given type element; - specifying the plain-language sentence clauses that are to be merged, the plain-language sentence clause mergers being based upon the attributes matched for a given type element; - merging the plain-language sentence clauses; - generating at least one grammatically valid plain-language sentence for the keyword query from the generated plain-language sentence clauses, wherein the grammatically valid plain-language sentence is based upon differing matching elements; and - presenting the at least one grammatically valid plain-language sentence for the keyword query to a keyword query system user for the user’s review. 2. The method of claim 1, wherein the matching elements comprise elements that are related to a type match, a path match, a value match, and a word match. 3. 
The method of claim 2, further comprising the step of associating a differing default plain-language sentence clause with each matching element. 4. The method of claim 1, wherein the generated grammatically valid plain-language sentence is based on grammatical rules associated with the English language. 5. The method of claim 1, wherein multiple word matching elements are combined into a plain-language sentence clause. 6. The method of claim 1, wherein the matching elements having the same type alias are grouped together, and their plain-language sentence clauses are merged. 7. The method of claim 1, further comprising the step of providing a template for the overall structure of the at least one grammatically valid plain-language sentence. 8. The method of claim 7, wherein the template comprises at least one placeholder for the information that is contained within a plain-language sentence clause. 9. The method of claim 7, wherein the templates are hierarchical in structure, the templates being configured to generate clauses, and sub-clauses that are comprised within the clauses, the clauses and sub-clauses of the template being used to construct plain-language sentences. 10. The method of claim 7, wherein the plain-language sentence clauses are classified as consecutively numbered types. 11. 
A computer program product that includes a computer readable medium usable by a processor, the medium having stored thereon a sequence of instructions which, when executed by the processor, causes the processor to translate an interpretation of a keyword query into a grammatically correct plain-language query, wherein the computer program product executes the steps of: acquiring at least one keyword to perform a keyword query search upon; generating a keyword query in order to semantically interpret the acquired keyword, further including the step of building a translation index to determine matching elements, wherein matching elements are derived from information comprising type names, attribute names, and atomic attribute values that are associated with a specific keyword; merging the matching elements in the event that differing keywords comprise a same matching element and type alias; providing a clause template for the customization of a plain-language sentence clause, wherein the plain-language sentence clause is based upon the matching elements that are selected for customization; generating at least one plain-language sentence clause; determining if the plain-language sentence clauses can be merged, wherein the determination is based upon the attributes matched for a given type element; specifying the plain-language sentence clauses that are to be merged, the plain-language sentence clause mergers being based upon the attributes matched for a given matching element; generating at least one grammatically valid plain-language sentence interpretation for the keyword query from the generated plain-language sentence clauses, wherein the grammatically valid plain-language sentence is based upon differing matching elements; and presenting the at least one grammatically valid plain-language sentence interpretation for the keyword query to a keyword query system user for the user's review. 12. 
The computer program product of claim 11, further comprising the step of providing a template for the overall structure of the at least one grammatically valid plain-language sentence. 13. The computer program product of claim 12, wherein the template comprises at least one placeholder for the information that is contained within a plain-language sentence clause. 14. The computer program product of claim 12, wherein the templates are hierarchical in structure, the templates being configured to generate clauses, and sub-clauses that are comprised within the clauses, the clauses and sub-clauses of the template being used to construct plain-language sentences. 15. The computer program product of claim 12, wherein the plain-language sentence clauses are classified as consecutively numbered types. 16. The computer program product of claim 12, wherein the plain-language sentence templates can be optionally labeled as having the capability of being merged; in the event that the plain-language sentence templates are labeled as having the capability to be merged, the clauses that correspond to the plain-language sentence templates are thereafter merged. * * * * *
Security Mechanism Names for Media draft-dawes-sipcore-mediasec-parameter-10.txt Abstract Negotiating the security mechanisms used between a Session Initiation Protocol (SIP) user agent and its next-hop SIP entity is described in [2]. This document adds the capability to distinguish security mechanisms that apply to the media plane by defining a new Session Initiation Protocol (SIP) header field parameter to label such security mechanisms. Requirements Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [1]. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on November 29, 2019. Copyright Notice Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents. Table of Contents 1. Problem Statement ........................................... 3 2. Introduction ................................................ 3 3. Access Network Protection ................................. 4 4. Solution .................................................... 4 4.1. Signaling security negotiation ......................... 4 4.2. Header fields for signaling security negotiation ... 5 4.3. 
Syntax ................................................ 5 4.4. Protocol Operation ..................................... 5 4.4.1. The "mediasec" Header Field Parameter ....... 5 4.4.2. Client Initiated .............................. 5 4.5. Security Mechanism Initiation ......................... 7 4.6. Duration of Security Associations .................. 7 4.7. Summary of Header Field Use ......................... 7 5. Backwards Compatibility .................................... 7 6. Examples ................................................... 7 6.1. Initial Registration 3GPP .......................... 7 6.2. Re-Registration 3GPP ............................... 12 6.3. Client Initiated as per RFC 3329 ................. 14 6.4. Server Initiated as per RFC 3329 .................. 16 6.5. Using Media Plane Security ......................... 18 7. Formal Syntax .............................................. 20 8. Acknowledgements ........................................... 20 9. IANA Considerations ........................................ 20 9.1. Registry for Media Plane Security Mechanisms ... 20 9.2. Registration Template .............................. 21 9.3. Header Field Names ................................ 21 9.4. Response Codes ..................................... 21 10. Security Considerations .................................... 21 11. References ................................................. 21 11.1. Normative References ............................. 21 11.2. Informative References ............................ 22 Appendix A. Additional stuff .................................. 22 Author’s Address .............................................. 22 1. Problem Statement In the 3GPP defined architecture and SIP profile for packet-domain communication, SIP signaling is security protected at the network layer but media-plane traffic is not (it is protected by the cellular wireless access). 
The SIP signaling security used by 3GPP runs from the user device to the first-hop proxy; negotiation of the security mechanism and the start of security protection are described in [2]. Because the 3GPP architecture also allows access technologies that do not protect media, e.g., WiFi, this document extends the negotiation of security mechanisms to the media plane. During previous discussion of the topic of media plane security, it was suggested that DTLS-SRTP should be used, but 3GPP considered this impractical to implement in the 3GPP-defined architecture and also limited in terms of meeting all 3GPP requirements, which include protection of non-RTP media such as MSRP.

The purpose of this specification is to define a new header field parameter for the Session Initiation Protocol (SIP) that distinguishes security mechanisms that apply to the media plane, and to create an IANA registry for these mechanisms. This header field parameter may be used with the Security-Client, Security-Server, and Security-Verify header fields defined by [2]. The header field parameter introduced by this draft originates from 3GPP specifications and related procedures, and header field values are also described in 3GPP specifications, primarily in 3GPP TS 33.328. The purpose of this draft is to name the header field parameter, give several illustrative examples to make it clear how it is used, and set up an IANA registry for existing and future values. This draft does not propose that the IETF define any new security setup procedures, ciphering, integrity protection, etc.

2. Introduction

[2] describes negotiation of a security mechanism for SIP signaling between a UAC and its first-hop proxy and allows a client or network to ensure that protection of SIP signaling is turned on when the client registers with the network. SIP signaling is then protected as it traverses the access network.
To enable similar protection for media, this document enables the client and network to exchange their security capabilities for the media plane, combined with the negotiation described in [2]. As on the signaling plane, the evolution of security mechanisms for media often introduces new algorithms, or uncovers problems in existing ones, making capability exchange of such mechanisms a necessity.

3. Access Network Protection

Some access technologies, such as many cellular wireless accesses, protect the data passed over them by default, but some, such as WLAN, do not. For accesses with no inherent protection, it is useful for the media controlled by SIP signaling to be protected by default because of vulnerability to eavesdropping. It is currently possible for a UA to request protection of the media plane end-to-end by including the crypto attribute in SDP at session setup. This does not guarantee protection, however, because it relies on support of encryption by the called UA, or by another entity in the path taken by the media. In some cases, the session will originate in an access that protects the media and terminate in one that does not, meaning that media is protected in all but some hops of its path.

In cases where the same provider supplies the user equipment and provides the IP access, the IP access technology that the UA will use is predictable and the media is vulnerable only as far as the core network. In such cases, it is possible to protect the media plane by encrypting at the UA and decrypting at the edge of the core network, and for the user agent that originates or terminates the session to expect the edge of the core network to be capable of encrypting and decrypting media. The header field parameter described in this document enables this case of first-hop protection, which is typically provided by default to a user agent.

4. Solution

4.1.
Signaling security negotiation

A specification already exists for setting up security for SIP signaling between a client and its first-hop proxy, as defined in [2], which gives an overview of the mechanism as follows:

```
1. Client ----------client list---------> Server
2. Client <--------server list---------- Server
3. Client ------(turn on security)------- Server
4. Client ---------server list-----------> Server
5. Client <--------ok or error---------- Server
```

Figure 1: Security agreement message flow from RFC 3329

The security mechanism above ensures that SIP signaling is protected between a client and its first-hop entity, but the media plane is still unprotected. This document proposes that client and server additionally exchange their media plane security capabilities at steps 1 and 2. Media plane security needs to be applied on a per-media basis at the time that media is initiated. Therefore, the client and server need not turn on media plane security immediately. This document defines the "mediasec" header field parameter that labels any of the Security-Client, Security-Server, or Security-Verify header fields as applicable to the media plane and not the signaling plane.

4.2. Header fields for signaling security negotiation

The "mediasec" header field parameter defined in this document is used with the procedures defined in [2] to distinguish media plane security, with the difference that media plane security need not be started immediately and can be applied and removed on the fly as media are added and removed within a session. The SIP responses that can contain the Security-Client, Security-Server, and Security-Verify header fields are the 421 (Extension Required) and 494 (Security Agreement Required) responses defined in [2].

4.3. Syntax

This document does not define any new SIP header fields; it defines a header field parameter for the Security-Client, Security-Server, and Security-Verify header fields defined in [2].

4.4. Protocol Operation

4.4.1.
The "mediasec" Header Field Parameter The "mediasec" header field parameter may be used in the Security-Client, Security-Server, or Security-Verify header fields defined in [2] to indicate that a header field applies to the media plane. Any one of the media plane security mechanisms supported by both client and server, if any, may be applied when a media stream is started. Or a media stream may be set up without security. Values in the Security-Client, Security-Server, or Security-Verify header fields labelled with the "mediasec" header field parameter are specific to the media plane and specific to the secure media transport protocol used on the media plane. 4.4.2. Client Initiated A client wishing to use the security capability exchange of this specification MUST add a Security-Client header field to a request addressed to its first-hop proxy (i.e., the destination of the request is the first-hop proxy). This header field contains a list of all the media plane security mechanisms that the client supports. The client SHOULD NOT add preference parameters to this list. The client MUST add a "mediasec" header field parameter to the Security-Client header field. The contents of the Security-Client header field may be used by the server to include any necessary information in its response. As described in [2], the response will be 494 if the client includes "sec-agree" in the Require and Proxy-Require header fields, or a 2xx response if the Require and Proxy-Require header fields do not contain "sec-agree". The server MUST add its list to the response even if there are no common security mechanisms in the client’s and server’s lists. The server’s list MUST NOT depend on the contents of the client’s list. Any subsequent SIP requests sent by the client to that server MAY make use of the media security capabilities exchanged in the previous step by including media plane security parameters in SDP in the session or the media description. 
These requests MUST contain a Security-Verify header field that mirrors the server's list received previously in the Security-Server header field. The server MUST check that the security mechanisms listed in the Security-Verify header field of incoming requests correspond to its static list of supported security mechanisms. Note that, following the standard SIP header field comparison rules defined in [3], both lists have to contain the same security mechanisms in the same order to be considered equivalent. In addition, for each particular security mechanism, its parameters in both lists need to have the same values.

The server can proceed with processing a particular request if, and only if, the list was not modified. If modification of the list is detected, the server MUST respond to the client with a 494 (Security Agreement Required) response. This response MUST include the server's unmodified list of supported security mechanisms.

Once security capabilities have been exchanged between two SIP entities, the same SIP entities MAY use the same security when communicating with each other in different SIP roles. For example, if a UAC and its outbound proxy exchange some media-plane security mechanisms, they may try to use the same security for incoming requests (i.e., the UA will be acting as a UAS). The user of a UA SHOULD be informed about the results of the security mechanism agreement. The user MAY decline to accept a particular security mechanism, and abort further SIP communications with the peer.

4.5. Security Mechanism Initiation

Once the client chooses a security mechanism from the list received in the Security-Server header field from the server, it MAY initiate that mechanism on a session level, or on a media level when it initiates new media in an existing session.

4.6. Duration of Security Associations

Once media-plane security capabilities have been exchanged, both the server and the client need to know until when they can be used.
The media plane security mechanism setup is valid for as long as the UA has a SIP signaling relationship with its first-hop proxy or until new keys are exchanged in SDP. The SDP used to set up media plane security will be protected by a security association used to protect SIP signaling, and the media plane security mechanism can be used until the signaling plane security association expires.

4.7. Summary of Header Field Use

The Security-Client, Security-Server, and Security-Verify header fields, labelled with the parameter defined in this document, may be used to exchange supported media plane security mechanisms between a UAC and other SIP entities including UAS, proxy, and registrar. Information about the use of these header fields in relation to SIP methods and proxy processing is given in Table 1 of [2].

5. Backwards Compatibility

Security mechanisms that apply to the media plane only MUST NOT have the same name as any signaling plane mechanism. If a signaling plane security mechanism name is re-used for the media plane and distinguished only by the "mediasec" parameter, then implementations that do not recognize the "mediasec" parameter may incorrectly use that security mechanism for the signaling plane.

6. Examples

The following examples illustrate the use of the "mediasec" header field parameter defined above.

6.1. Initial Registration 3GPP

At initial registration, the client includes its supported media plane security mechanisms in the SIP REGISTER request. The first-hop proxy returns its supported media plane security mechanisms in the SIP 401 (Unauthorized) response. As per [2], a UA negotiates the security mechanism for the media plane to be used with its outbound proxy without knowing beforehand which mechanisms the proxy supports, as shown in Figure 2 below.
```
UAC                        Proxy                      Registrar
user1_public1@home1.net    pcscf1.home1.net           registrar.home1.net

 ------(1) REGISTER------->
   Security-Client: sdes-srtp; mediasec
                            -----(2) REGISTER----->
                            <-------(3) 401--------
 <-------(4) 401-----------
   Security-Server: sdes-srtp; mediasec
 ------(5) REGISTER------->
   Security-Client: sdes-srtp; mediasec
   Security-Verify: sdes-srtp; mediasec
                            -----(6) REGISTER----->
                            <------(7) 200 OK------
 <------(8) 200 OK---------
 ------(9) INVITE--------->
   Security-Verify: sdes-srtp; mediasec
   Content-Type: application/sdp
   a=3ge2ae
   a=crypto:1 AES_CM_128_HMAC_SHA1_80
     inline:WVNfX19zZW1jdGwgKCkgewkyMjA7fQp9CnVubGVz|2^20
     FEC_ORDER=FEC_SRTP
```

Figure 2: Exchange of Media Security Mechanisms at Initial Registration

The UAC sends a REGISTER request (1) to its outbound proxy indicating the media plane security mechanisms that it supports in a Security-Client header field. Media security mechanisms are identified by the "mediasec" header field parameter. The outbound proxy forwards the REGISTER request (2) to the registrar with the Security-Client header field removed, as described in [2]. The registrar responds with a 401 (Unauthorized) response (3) to the REGISTER request.
The outbound proxy forwards the 401 (Unauthorized) response (4) to the UAC with its own list of security mechanisms for the media plane in the Security-Server header field. Security mechanisms for the media plane are distinguished by the "mediasec" header field parameter.

The UAC sends a second REGISTER request (5) using the security credentials it received in the 401 (Unauthorized) response. The UAC includes the media plane security mechanisms that it supports in a Security-Client header field. The UAC also echoes, in a Security-Verify header field, the list of security mechanisms it received from the outbound proxy in the Security-Server header field. Media security mechanisms are distinguished by the "mediasec" header field parameter. The REGISTER request is forwarded to the registrar (6), and the registrar responds with 200 OK (7), which is forwarded to the UAC (8).

When the connection is successfully established, the UAC sends an INVITE request (9) including an SDP description of the media plane security to be used (a=3ge2ae and a crypto attribute). This INVITE contains a copy of the server's security list in a Security-Verify header field. The server verifies it, and since it matches its static list, it processes the INVITE and forwards it to the next hop.

If this example were run without the Security-Server header field in Step (2), the UAC would not know what kind of security the server supports and would be forced to make error-prone trials. More seriously, if the Security-Verify header field were omitted in Step (3), the whole process would be prone to man-in-the-middle (MitM) attacks. An attacker could remove the media plane security description from the header in Step (1), thus preventing protection of the media plane.
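The server-side check from Section 4.4.2, which defeats this list-stripping attack, can be sketched in a few lines. This is an illustrative sketch only, not part of the draft: the function names and the simplified parsing are assumptions, and a complete implementation would apply the full header field comparison rules of [3].

```python
# Illustrative sketch (not from the draft): the server checks that the
# Security-Verify list mirrored by the client matches its own static
# list exactly -- same mechanisms, same order, same parameter values.

def parse_security_values(header_values):
    """Parse values such as 'sdes-srtp; mediasec' into (mechanism, params)."""
    parsed = []
    for value in header_values:
        parts = [p.strip() for p in value.split(";")]
        mechanism, params = parts[0], {}
        for p in parts[1:]:
            name, _, val = p.partition("=")
            params[name.strip()] = val.strip()
        parsed.append((mechanism, params))
    return parsed

def list_unmodified(server_list, security_verify):
    """True iff the mirrored list equals the server's list (order matters)."""
    return parse_security_values(server_list) == parse_security_values(security_verify)

server_list = ["ipsec-3gpp; alg=hmac-sha-1-96", "sdes-srtp; mediasec"]
intact = list_unmodified(server_list,
                         ["ipsec-3gpp; alg=hmac-sha-1-96", "sdes-srtp; mediasec"])
stripped = list_unmodified(server_list,
                           ["ipsec-3gpp; alg=hmac-sha-1-96"])  # media line removed
```

When the comparison fails (as for `stripped` above), the server would answer 494 (Security Agreement Required) carrying its unmodified list.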
(1) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd];comp=sigcomp;branch=z9hG4bKnashds7
    Max-Forwards: 70
    P-Access-Network-Info: 3GPP-UTRAN-TDD; utran-cell-id-3gpp=234151D0FCE11
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>
    Contact: <sip:[5555::aaa:bbb:ccc:ddd];comp=sigcomp>;expires=60000
    Call-ID: apb03a0s09dkjdfglkj49111
    Authorization: Digest username="user1_private@home1.net",
        realm="registrar.home1.net", nonce="",
        uri="sip:registrar.home1.net", response=""
    Security-Client: ipsec-3gpp; alg=hmac-sha-1-96; spi-c=23456789;
        spi-s=12345678; port-c=2468; port-s=1357
    Require: sec-agree
    Proxy-Require: sec-agree
    Supported: path
    Content-Length: 0

(2) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP pcscf1.home1.net;branch=z9hG4bK351g45.1,
        SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd];comp=sigcomp;branch=z9hG4bKnashds7
    Max-Forwards: 69
    P-Access-Network-Info:
    Path:
    Require:
    P-Visited-Network-ID:
    P-Charging-Vector:
    From:
    To:
    Contact:
    Call-ID:
    Authorization:
    CSeq: 1 REGISTER
    Supported:
    Content-Length: 0

(3) SIP/2.0 401 Unauthorized
    Via: SIP/2.0/UDP pcscf1.home1.net;branch=z9hG4bK351g45.1,
        SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd];comp=sigcomp;branch=z9hG4bKnashds7
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>; tag=5ef4
    Call-ID: apb03a0s09dkjdfglkj49111
    WWW-Authenticate: Digest realm="registrar.home1.net",
        nonce=base64(RAND + AUTN + server specific data),
        algorithm=AKAv1-MD5,
        ik="00112233445566778899aabbccddeeff",
        ck="ffeeddccbbaa11223344556677889900"
    CSeq: 1 REGISTER
    Content-Length: 0

(4) SIP/2.0 401 Unauthorized
    Via: SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd];comp=sigcomp;branch=z9hG4bKnashds7
    From:
    To:
    Call-ID:
    WWW-Authenticate: Digest realm="registrar.home1.net",
        nonce=base64(RAND + AUTN + server specific data),
        algorithm=AKAv1-MD5
    CSeq: 1 REGISTER
    Content-Length: 0

(5) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp;branch=z9hG4bKnashds7
    Max-Forwards: 70
    P-Access-Network-Info: 3GPP-UTRAN-TDD; utran-cell-id-3gpp=234151D0FCE11
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>
    Contact: <sip:[5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp>;expires=600000
    Call-ID: apb03a0s09dkjdfglkj49111
    Authorization: Digest username="user1_private@home1.net",
        realm="registrar.home1.net",
        nonce=base64(RAND + AUTN + server specific data),
        algorithm=AKAv1-MD5, uri="sip:registrar.home1.net",
        response="6629fae49393a05397450978507c4ef1"
    Security-Client: ipsec-3gpp; alg=hmac-sha-1-96; spi-c=23456789;
        spi-s=12345678; port-c=2468; port-s=1357
    Security-Verify: ipsec-3gpp; q=0.1; alg=hmac-sha-1-96; spi-c=98765432;
        spi-s=87654321; port-c=8642; port-s=7531
    Require: sec-agree
    Proxy-Require: sec-agree
    CSeq: 2 REGISTER
    Supported: path
    Content-Length: 0

(6) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP pcscf1.home1.net;branch=z9hG4bK351g45.1,
        SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp;branch=z9hG4bKnashds7
    Max-Forwards: 69
    Path: <sip:term@pcscf1.visited1.net;lr>
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>
    Call-ID: apb03a0s09dkjdfglkj49111
    Contact: <sip:[5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp>;expires=600000
    CSeq: 2 REGISTER
    Supported: path
    Content-Length: 0

(7) SIP/2.0 200 OK
    Via: SIP/2.0/UDP pcscf1.home1.net;branch=z9hG4bK351g45.1,
        SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp;branch=z9hG4bKnashds7
    Path: <sip:term@pcscf1.visited1.net;lr>
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>
    Call-ID: apb03a0s09dkjdfglkj49111
    Contact: <sip:[5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp>;expires=600000
    CSeq: 2 REGISTER
    Supported: path
    Content-Length: 0

Dawes                  Expires November 29, 2019                [Page 11]

6.2. Re-Registration 3GPP

Media plane security mechanisms are also exchanged when a registration is refreshed or a new public identity is registered.
The UAC sends a REGISTER request (1) and includes the media plane security mechanisms that it supports in a Security-Client header field. The UAC also echoes, in a Security-Verify header field, the list of security mechanisms it received from the outbound proxy in the Security-Server header field. Media security mechanisms are distinguished by the "mediasec" header field parameter. In the example below, the Security-Verify header field is included as required by clause 5.1.1.4.2 when setting up ipsec-3gpp signaling plane security. The REGISTER request is forwarded to the registrar (2), and the registrar responds with 200 OK (3), which is forwarded to the UAC (4).

(1) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp;branch=z9hG4bKnashds7
    Max-Forwards: 70
    P-Access-Network-Info: 3GPP-UTRAN-TDD; utran-cell-id-3gpp=234151D0FCE11
    From: <sip:user1_public1@home1.net>;tag=4fa3
    To: <sip:user1_public1@home1.net>
    Contact: <sip:[5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp>;expires=600000
    Call-ID: apb03a0s09dkjdfglkj49111
    Authorization: Digest username="user1_private@home1.net",
        realm="registrar.home1.net",
        nonce=base64(RAND + AUTN + server specific data),
        uri="sip:registrar.home1.net",
        response="6629fae49393a05397450978507c4ef1",
        integrity-protected="yes"
    Security-Client: ipsec-3gpp; alg=hmac-sha-1-96; spi-c=23456789;
        spi-s=12345678; port-c=2468; port-s=1357
    Security-Client: sdes-srtp; mediasec                       ***new***
    Security-Verify: ipsec-3gpp; q=0.1; alg=hmac-sha-1-96; spi-c=98765432;
        spi-s=87654321; port-c=8642; port-s=7531
    Security-Verify: sdes-srtp; mediasec                       ***new***
    Require: sec-agree
    Proxy-Require: sec-agree
    CSeq: 3 REGISTER
    Supported: path
    Content-Length: 0

(2) REGISTER sip:registrar.home1.net SIP/2.0
    Via: SIP/2.0/UDP pcscf1.home1.net;branch=z9hG4bK240f34.1,
        SIP/2.0/UDP [5555::aaa:bbb:ccc:ddd]:1357;comp=sigcomp;branch=z9hG4bKnashds7
    P-Access-Network-Info:
    Max-Forwards: 69
    Path:
    Require:
    P-Visited-Network-ID:
    P-Charging-Vector:
    From:
    To:
    Contact:
    Call-ID:
    Authorization:
    CSeq:
    Supported:
    Content-Length:

Figure 5: Use of mediasec parameter

6.3. Client Initiated as per RFC 3329

Media plane security mechanisms are also exchanged at client-initiated security negotiation as described in [2]. After exchange of security capabilities, the UAC sends an INVITE request (3) including an SDP description of the media plane security to be used (a=3ge2ae and a crypto attribute). This INVITE contains a copy of the server's security list in a Security-Verify header field. The server verifies it, and since it matches its static list, it processes the INVITE and forwards it to the next hop.

```
UAC                       Proxy                      UAS

 ---(1) OPTIONS------>
 <----(2) 494---------
 <========TLS========>
 ---(3) INVITE------->
                       ----(4) INVITE---->
                       <----(5) 200 OK----
 <----(6) 200 OK------
 ---(7) ACK---------->
                       ----(8) ACK------->
 <-(Protected media)->
                       <-----(Media)----->
```

Figure 6: Negotiation Initiated by the Client.

6.4.
Server Initiated as per RFC 3329

Media plane security mechanisms are also exchanged at server-initiated security negotiation as described in [2].

Figure 7: Negotiation Initiated by the Server.

Media security mechanisms are included in Security-Server and Security-Client header fields in the same way as signaling security mechanisms.

(1) INVITE sip:uas.example.com SIP/2.0

(2) SIP/2.0 421 Extension Required
    Security-Server: ipsec-ike;q=0.1
    Security-Server: tls;q=0.2
    Security-Server: mechanism; mediasec

(4) INVITE sip:uas.example.com SIP/2.0
    Security-Verify: ipsec-ike;q=0.1
    Security-Verify: tls;q=0.2
    Security-Verify: mechanism; mediasec

Figure 8: Negotiation Initiated by the Server.

6.5. Using Media Plane Security

To request end-to-access-edge media security on either a session or a media level, the UE sends, for example, an SDP offer for an SRTP stream containing one or more SDES crypto attributes, each with a key and the other security context parameters required according to [4], together with the attribute "a=3ge2ae".
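As a rough, hypothetical illustration of assembling such an offer (the function name, originator, addresses, and key material are made up for this sketch; the attribute layout follows the example in Figure 9 and the SDES format of [4]):

```python
# Hypothetical sketch: build an SDP offer for an SRTP audio stream that
# requests end-to-access-edge protection via "a=3ge2ae" and carries an
# SDES crypto attribute with a freshly generated 30-byte master key+salt
# (16-byte key + 14-byte salt for AES_CM_128_HMAC_SHA1_80).
import base64
import os

def e2ae_srtp_offer(ip, port):
    key_salt = base64.b64encode(os.urandom(30)).decode()
    lines = [
        "v=0",
        "o=alice 2890844526 2890844526 IN IP4 %s" % ip,
        "s=-",
        "c=IN IP4 %s" % ip,
        "t=0 0",
        "m=audio %d RTP/SAVP 0" % port,          # SRTP transport profile
        "a=3ge2ae",                              # request e2ae media security
        "a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:%s|2^20|1:4" % key_salt,
        "a=rtpmap:0 PCMU/8000",
    ]
    return "\r\n".join(lines) + "\r\n"

offer = e2ae_srtp_offer("192.0.2.101", 49172)
```

The key lifetime (`2^20`) and MKI (`1:4`) values simply mirror the figures in this section.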
(3) INVITE sip:bob@ua2.example.com SIP/2.0
    Security-Verify: ipsec-ike;q=0.1
    Security-Verify: tls;q=0.2
    Security-Verify: sdes-srtp;mediasec
    Route: proxy.example.com
    Require: sec-agree
    Proxy-Require: sec-agree
    Via: SIP/2.0/TCP proxy.example.com:5060;branch=z9hG4bK74bf9
    Max-Forwards: 70
    From: Alice <sip:alice@ua1.example.com>;tag=9fxced76s1
    To: Bob <sip:bob@ua2.example.com>
    Call-ID: 38482762982201885110@ua1.example.com
    CSeq: 1 INVITE
    Contact: <sip:alice@ua1.example.com;transport=tcp>
    Content-Type: application/sdp
    Content-Length: 285

    v=0
    o=alice 2890844526 2890844526 IN IP4 ua1.example.com
    s=-
    c=IN IP4 192.0.2.101
    t=0 0
    m=audio 49172 RTP/SAVP 0
    a=3ge2ae
    a=crypto:1 AES_CM_128_HMAC_SHA1_80
      inline:WVNfX19zZW1jdGwgRCKgewkyMjA7fQp9CnVubGVz|2^20|1:4
      FEC_ORDER=FEC_SRTP
    a=rtpmap:0 PCMU/8000

(4) INVITE sip:bob@ua2.example.com SIP/2.0
    Route: sip:proxy.example.com

(5) SIP/2.0 200 OK

(6) SIP/2.0 200 OK
    Security-Server: tls;q=0.2
    Security-Server: sdes-srtp;mediasec

    a=3ge2ae
    a=crypto:1 AES_CM_128_HMAC_SHA1_80
      inline:PS1uQCVeeCFCanVmcjKpPywjNWhcYD0mXXtxaVBR|2^20|1:4

Figure 9: Using media security

7. Formal Syntax

The following syntax specification uses the augmented Backus-Naur Form (ABNF) as described in RFC 5234 [RFC5234]. "mediasec" is a "header field parameter", as defined by [RFC3968].

Header fields in which the parameter can appear:

- Security-Client
- Security-Server
- Security-Verify

Header Field       Parameter Name   Predefined Values   Reference
----------------   --------------   -----------------   ---------------
Security-Client    mediasec         No                  [this document]
Security-Server    mediasec         No                  [this document]
Security-Verify    mediasec         No                  [this document]

Name of the header field parameter being registered: "mediasec"

8.
Acknowledgements

This template was extended from an initial version written by Pekka Savola and contributed by him to the xml2rfc project.

9. IANA Considerations

This specification creates a new registry for media plane security mechanisms.

9.1. Registry for Media Plane Security Mechanisms

The IANA has created a subregistry for media plane security mechanism token values to be used with the 'mediasec' header field parameter under the Session Initiation Protocol (SIP) Parameters registry. As per the terminology in [RFC5226], the registration policy for new media plane security mechanism token values shall be 'Specification Required'.

9.2. Registration Template

To: ietf-sip-sec-agree-mechanism-name@iana.org
Subject: Registration of a new SIP Security Agreement mechanism

Mechanism Name: (Token value conforming to the syntax described in Section 4.3.)

Published Specification(s): (Descriptions of new SIP media plane security agreement mechanisms require a published specification.)

9.3. Header Field Names

This specification registers no new header fields.

9.4. Response Codes

This specification registers no new response codes.

10. Security Considerations

This specification is an extension of [2] and as such shares the same security considerations. A further consideration of this specification is protection of the cryptographic key to be used for SRTP and carried in SDP. In order to protect this key, one of the security mechanisms defined in [2] SHOULD be used in parallel with this specification.

11. References

11.1. Normative References

[1] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[2] Arkko, J., Torvinen, V., Camarillo, G., Niemi, A., and T. Haukka, "Security Mechanism Agreement for the Session Initiation Protocol (SIP)", RFC 3329, January 2003.

[3] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, June 2002.

[4] Andreasen, F., Baugher, M., and D. Wing, "Session Description Protocol (SDP) Security Descriptions for Media Streams", RFC 4568, July 2006.

11.2. Informative References

[5] 3GPP, "IP multimedia call control protocol based on Session Initiation Protocol (SIP) and Session Description Protocol (SDP); Stage 3", 3GPP TS 24.229 10.13.0, September 2013.

Appendix A.
Additional stuff

Author's Address

   Peter Dawes
   Vodafone Group Services Ltd.
   Newbury
   UK

   Email: peter.dawes@vodafone.com
The Open Geospatial Consortium (OGC®)
Request for Quotation and Call for Participation in the
FUTURE CITY PILOT PHASE 1 (FCP1)
Annex A — Development Approach

RFQ Issuance Date: 5 February 2016
Proposal Due Date: 4 March 2016

Table of Contents

1 Introduction
2 Interoperability Initiative Process Framework
2.1 Tasks
2.1.1 Coordination
2.1.2 Assessment and Analysis
2.1.3 Concept Development
2.1.4 Architecture Development
2.1.5 Initiative Preparation and Startup
2.1.6 Specification Development
2.1.7 Component Development
2.1.8 Testing and Integration
2.1.9 Solution Transfer
2.1.10 Demonstration
2.1.11 Documentation
3 Work Breakdown Structure (WBS)
4 Concept of Operations
4.1 Project Lifecycle Phases
4.1.1 Proposal Development
4.1.2 Proposal Evaluation, Selection and Negotiations
4.1.3 Kickoff Workshop
4.1.4 FCP Interface and Demonstration Development
4.1.5 Network Integration and Solution Transfer
4.2 Progress Reporting
4.3 Integrated Initiatives
5 Communications Plan
5.1 Overview
5.2 Communications Plan Details
5.2.1 FCP Email Reflector
5.2.2 FCP Public Web Site and Participant Portal
5.2.3 Web-Based Upload Mechanism
5.2.4 Project Wiki
5.2.5 Teleconference / GoToMeeting Procedure
5.3 Progress Reporting
6 Interoperability Program Code of Conduct
6.1 Abstract
6.2 Introduction
6.3 Principles of Conduct
6.4 Acknowledgements

Appendix: WBS Outline
1 Coordination
1.1 Collaborative Environment
1.1.1 Routine and ad hoc telecons as assigned
1.1.2 E-mail review and comment
1.1.3 Action Item status reporting
1.2 Initiative Plan Development
1.2.1 Project Plan Development
1.2.2 Project Schedule Development
1.2.3 WBS Development
1.2.4 Concept of Operations Development
2 Assessments and Analysis
2.1 Organizational Capability Review
2.2 Organizational OGC Requirements Review
3 Concept Development
3.1 Sponsor Feasibility Study Review
3.2 RFT Development
3.3 RFT Response Analysis
3.4 RFT Response Review
4 Architecture Development
4.1 Operational Architecture Development
4.2 System Architecture Development
4.3 Technical Architecture Development
5 Initiative Preparation
5.1 Sponsor Planning TEMs
5.2 RFQ Development
5.3 Participant Budget Development
5.4 Contract Development
5.5 SOW/SOP Development
6 Specification Development
6.1 Model Development
6.2 Schema Development
6.3 Encoding Development
6.4 Interface Development
6.5 Specification Program Coordination
7 Component Development
7.1 Prototypical Interoperable Software Development
7.1.1 Server software development
7.1.2 Client software development
8 Testing and Integration
8.1 Configuration Management
8.1.1 CM Plan Development
8.1.2 Initiative CM
8.2 Infrastructure Setup
8.3 Technology Integration Experiments (TIE)
8.3.1 Iterations 1-N
8.4 System Tests
8.4.1 Functional Test
8.4.2 Interface Test
9 Solution Transfer
9.1 Software Installation
9.2 Software Integration
9.3 Data Loading
10 Demonstration
10.1 Use Case Development
10.2 Storyboard Development
10.3 Venue Access
10.4 Data Requirements Assessment
10.5 Data Acquisition and Distribution
10.6 Demonstration Preparation and Delivery
11 Documentation
11.1 ER Development
11.2 System Documentation Development
11.2.1 Functional Specification
11.2.2 Installation Guide
11.2.3 Training Material & Users Guide
11.3 Planning Study Report
12 Compliance Test Development
12.1 Summarize TIEs, demo results and data issues
12.2 Compliance Test
12.2.1 Test Cases
12.2.2 Data
12.2.3 Recommendations

1 Introduction

This Annex A document is an integral part of this RFQ/CFP. It contains and describes the following:

1) Interoperability Initiative Process Framework
2) Work Breakdown Structure (WBS)
3) Concept of Operations
4) Communications Plan
5) Interoperability Program Code of Conduct

2 Interoperability Initiative Process Framework

This section describes a flexible framework of standard, repeatable processes, which can be combined and adapted as necessary to address the requirements of each Interoperability Initiative. These tasks are executed within a Virtual Team Infrastructure. This Process Framework forms the basis for the OGC Initiative Work Breakdown Structure.

![Figure 1, Interoperability Initiative Process Framework](image)

2.1 Tasks

2.1.1 Coordination

This task enables overall coordination between OGC Staff, the OGC IP Team, Sponsors, selected organizations, and other TC/PC Members as needed to perform the following subtasks:

- **Collaborative Environment** - The OGC IP Team provides synchronous and asynchronous collaboration environments for cross-organizational, globally distributed, virtual teams working interdependently to execute Initiative Orders. Activities under this subtask include reading email and engaging in collaborative discussions, including teleconferences.
- **Management** - Services ensuring that Initiative Order participants stay within designated budgets, that the work progresses according to the agreed schedule, and that the tasks identified in the Statement of Work are executed. This includes status reporting.
- **Communication** - Includes communicating ongoing and planned Initiative and Work Item status to OGC, Sponsors, and other organizations such as ISO. This task does not include IP Business Development functions.

2.1.2 Assessment and Analysis

This task requires assessment/evaluation and analysis of issues, documentation of an organization's or domain's existing capabilities, and assessment of requirements for OGC-compliant technology. This task is implemented during the planning stages for the initiative.

2.1.3 Concept Development

This task conducts a Feasibility Study that assesses emerging technologies and architectures capable of supporting eventual Interoperability Initiatives (e.g., Pilot, Testbed). Part of the concept development process is the use of a Request for Technology (RFT) to gain a better understanding of the current state of a potential technology thrust and the architecture(s) used in support of that technology. The feasibility study examines alternative prototype mechanisms that enable commercial web-services technology to interoperate. The study may also assess the costs and benefits of the architectural approaches, technologies, and candidate components to be utilized in a pilot or testbed and potential demonstration. This task also collates Sponsor requirements and assesses the applicability of current specifications.

2.1.4 Architecture Development

This task defines the architectural views for any given Initiative. In the context of the OGC Interoperability Program, there are three - and perhaps more - architectural views for any given effort. These views are the Enterprise View, Information View and Computational View (based on RM-ODP, ISO 10746).
Part of the Architecture Development task may be the use of an RFQ issued to industry to enable organizations interested in participating in an Interoperability Initiative to respond with a proposal. This task may also be implemented during the planning stages of an initiative.

2.1.5 Initiative Preparation and Startup

This task defines the participant budget (if any) and develops and executes the agreements and contracts that define the roles and responsibilities of each participant. This task may refine the Work Package.

2.1.6 Specification Development

This task defines and develops the models, schemas, encodings, and interfaces necessary to realize the required Architectures, and includes specification Pre-design and Design tasks. This task may include activities to coordinate ongoing Initiatives with Specification Program activities.

2.1.7 Component Development

This task develops prototype interoperable commercial software components, based on draft candidate implementation specifications or adopted specifications, necessary to realize the required Architecture.

2.1.8 Testing and Integration

This task integrates, documents and tests functioning interoperable components and infrastructures that execute the operational elements, assigned tasks, and information flows required to fulfill a set of user requirements. It includes Technology Integration Experiments (TIEs).

2.1.9 Solution Transfer

This task prepares prototypical interoperable components so that they can be assembled at required sites.

2.1.10 Demonstration

This task defines, develops and deploys functioning interoperable components and infrastructures that execute the operational elements, assigned tasks, and information flows required to fulfill a set of user requirements.
2.1.11 Documentation

This task ensures development and maintenance of the pre-specification, pre-conformant interoperable technologies and OGC work products, including draft and final Interoperability Engineering Reports (ERs) and systems-level documentation, such as example user documentation, necessary to conduct the Initiative. This task may include coordination with OGC Specification Program activities, including the Documentation Team.

3 Work Breakdown Structure (WBS)

The Work Breakdown Structure (WBS) provided in the Appendix of this Annex is derived from the OGC Interoperability Initiative Process Framework. A proposing organization does not have to respond to all tasks in the WBS. However, bold italic text in a task explanation indicates which tasks are mandatory or conditional. Conditional tasks are those that become mandatory if a proposing organization takes on certain non-mandatory tasks.

All responses shall use this WBS to structure their responses. Evaluations of responses will be based on whether a proposal addresses the WBS task items. An organization anticipating working on a particular task that fails to indicate its intent by using the WBS structure below will not be considered for the desired task. The project plan and schedule will use this WBS as a template as well.

4 Concept of Operations

This section describes the Concept of Operations for the FCP Initiative. It is organized around eight particular time frames or phases. The phases are:

- Proposal Development — the time during which RFQ respondent proposals will be developed. This time will also be used by the OGC to develop draft management and communication plans for the initiative operational phases.
- Proposal Evaluation, Selection and Negotiations — During this period, the OGC IP Team will analyze responses for funded and unfunded work items in the WBS described in Section 3 of this Annex A.
OGC will communicate with RFQ respondents concerning their proposals, negotiate their participation for funded and In-Kind (unfunded) Contributions, and communicate the status of the FCP Initiative with Sponsors and the OGC Technical and Planning Committees. During this time, Participant Agreements with Statements of Work (SOW) will be signed.

- Task Initiation Workshop — the Task Initiation Workshop will be a face-to-face meeting lasting approximately two days. During the Workshop, participants will (a) develop generic interfaces and protocols to be used as the starting place for software components, (b) finalize the initial System Architecture, and (c) refine the Demonstration Concept. Attendance at the Workshop is required of participants and includes Sponsor representation.
- Preliminary Design and Deliverables — a milestone established for participants to complete initial draft documents, such as design documents or preliminary service implementations needed for initiative coordination and integration, as determined by the IP Team and Participants during the Task Initiation Workshop. This milestone activity may be conducted using GoToMeeting and telecons.
- Design, Development, Testing, and Evaluation Sprints — During each of these periods, selected organizations will draft or update software, hardware, and data designs as needed; develop or identify supporting software, hardware, and instance data; conduct analysis and testing of target information exchanges and system capabilities; and then evaluate the designs in light of running-code experience.
- Final Demonstration Milestone — a milestone established for submitting drafts of engineering reports and demonstration material. These preparations are typically organized and coordinated among Participants according to roles and system architectural dependencies to achieve integration of components, support the demonstration, and complete final delivery.
- Final Delivery — This milestone is the close of funded activity, when all final reports and demonstration materials are due. Further development may take place to refine demonstrations for public viewing on the OGC website, or for subsequent OGC meetings.

4.1 Project Lifecycle Phases

4.1.1 Proposal Development

The following guidelines are provided to proposing organizations concerning proposal development:

- Proposing organizations must be members of OGC, or must submit an application for membership if their proposals are accepted.
- OGC Standards will cover some of the technology areas under consideration in the RFQ. Proposing organizations should note the relationship between the content of their proposal and the relevant OGC standards.
- Proposing organizations should plan on performing all development work at their own facilities. These facilities should include a server (where applicable) that is accessible to other participants via the Internet. Technology Integration Experiments (TIEs) will be carried out among the participants based on these Internet-accessible servers.
- The immediate outcomes of the initiative will include Engineering Report(s), which may become new OGC specifications or Best Practices, or implementations that become part of the OGC Network. Proposals covering technologies that require licensing should indicate how these technologies can be made available as a (permanent) part of the OGC Network. Proposals should include a description of technologies requiring specific hardware or software environments.
- Proposals need not address the full spectrum of this initiative's architecture as outlined in Annex B. Proposals can focus on specific tasks or portions of that architecture.
- Proposing organizations should be prepared to build interoperable components and thus should be prepared to cooperate with other participant development teams, regardless of whether their proposals cover the full initiative architecture or portions of it.
- Software components implemented in this initiative should either be based upon currently shipping products, or should be prototypes or pre-release versions of products that the responding organization intends to sell or otherwise distribute for ultimate deployment.
- Responding organizations must participate in the full course of interface and component development, test and integration experiments, and other essential activities throughout the initiative in order to have access to and participate in demonstration exercises.
- Proposal selection and funding may be awarded on the basis of the portions of a proposal deemed most likely to lead to a successful project implementation.
- Proposing organizations may propose alternatives to the project architecture. However, proposals will be selected on the basis of how successfully the various components of all the selected responses interoperate. Radically different architectures that would require intensive rework on the part of a majority of the participants would have to be supported by cost/benefit analysis. Advance coordination with affected participants to present a coherent, realistic, and reasonable approach will greatly improve the chances of acceptance by the proposal review team.
- Proposing organizations should be familiar with the existing OGC Network. The OGC Network provides a set of services, datasets, components, toolkits, and reference materials that can and should be used to leverage results for this initiative.
- Proposing organizations shall use the supplied template and forms to prepare their proposals.
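Since TIEs depend on each participant's server being reachable over the Internet, a bidder may want to verify its endpoint before a scheduled experiment. The following is a purely illustrative sketch, not part of this RFQ: the endpoint URL and helper names are assumptions, and the probe simply issues a standard OGC GetCapabilities request.

```python
# Illustrative TIE readiness check (hypothetical helper, not an RFQ requirement).
from urllib.parse import urlencode


def capabilities_url(base_url: str, service: str = "WFS", version: str = "2.0.0") -> str:
    """Build a standard OGC GetCapabilities request URL for the given endpoint."""
    query = urlencode({"service": service, "request": "GetCapabilities", "version": version})
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}{query}"


def is_reachable(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if the server answers a GetCapabilities probe with HTTP 200."""
    import urllib.request
    try:
        with urllib.request.urlopen(capabilities_url(base_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A participant working behind a firewall could run such a probe from an external host to confirm that the steps taken to expose the server (see the firewall guidance under Testing and Integration) actually worked.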
Organizations choosing to respond to this RFQ/CFP are expected to have representatives available to attend or participate in the following teleconferences or activities:

1. Questions Due and Bidders' Q&A teleconference
2. Negotiations with selected organizations

Organizations selected and awarded cost-share funding to participate in the initiative, and participants offering In-Kind Contributions, shall plan to send at least one technical representative to the Kickoff Workshop. Specific dates for the events identified above are provided in the project's Master Schedule (RFQ/CFP Main Body, Section 4).

4.1.1.2 Management Approach and Communications Plan

The FCP IP Team will apply the standard OGC Initiative management approach and initiate its communication plan during the period between the release of the RFQ and the submission of responses. These activities will provide guidance to the FCP Team and participants for the conduct of the FCP Pilot. The management approach for the project, as for other OGC IP initiatives, is outlined in the Interoperability Program Policy and Procedures documents available on the OGC website (http://www.opengeospatial.org/ogc/policies/ipp). These documents provide details on the following roles and responsibilities of individuals providing management support to OGC initiatives:

1. **Sponsor Team** — representatives from the organizations that have provided sponsorship for the FCP initiative. Note that some sponsor organizations may also provide components in the initiative, effectively also acting as participants.
2. **OGC Initiative Manager** — the OGC staff person responsible for the overall management of the FCP initiative.
3. **Initiative Architect** — the individual(s) responsible for the overall initiative architecture during the course of the initiative.
4. **Participants** — organizations that provide the development and demonstration effort of the initiative.
Participants develop component interface and protocol definitions, implement components, revise interface and protocol definitions, and evolve the initiative architecture. Participants prepare scenarios for demonstrations, design tests that exercise the components, perform data development in support of these scenarios, build demonstrations and tests, and evolve the demonstration concept.

5. **Demonstration Manager** — the individual responsible for planning and managing the Demonstration activity of the FCP initiative; this role may be performed as part of other roles.
6. **Communications and Outreach** — the individual(s) responsible for messaging to media, sponsors, and industry concerning the initiative.
7. **OGC IP Team** — a group composed of the OGC Initiative Manager, Initiative Architect, Demonstration Manager, and Communications and Outreach personnel.

The Communications Plan, provided in Section 5, gives details on resources and procedures for reporting and exchanging information with participants, relevant working groups (WGs), the Technical Committee (TC), the Planning Committee (PC), the Strategic Member Advisory Committee (SMAC) and sponsors. This plan includes the development of an OGC Portal web page with appropriate documents and updates for project information. The OGC IP Team will provide an email list server for participants to exchange project-relevant content and for discussion. A teleconferencing plan and an online collaboration plan will be developed to further support communications among participants.

4.1.2 Proposal Evaluation, Selection and Negotiations

The IP Team, Sponsors and partners will review the RFQ responses beginning immediately after the deadline for submission. During the review and evaluation process, the OGC IP Team may need to contact proposing organizations for clarification or to understand a Bidder's proposed Initiative Design and Demonstration Concept.
The process leading up to the Kickoff Workshop is detailed in the following paragraphs.

4.1.2.1 Component and Requirement Analysis

The review team will accomplish the following tasks:

1. Analyze the elements proposed in the RFQ responses in the context of the WBS.
2. Compare the proposed efforts with the requirements of the initiative and determine viability.
3. Assess the feasibility of the RFQ responses against the use cases.
4. Analyze proposed specification development.
5. Analyze proposed testing methodologies, including but not limited to performance testing methodologies.

4.1.2.2 Initiative Architecture Recommendation

The proposal review team will then draft a candidate system architecture, which will include the set of proposed components for development within the initiative, and relate them to the hardware, software and data available. Any candidate interface and protocol specifications received during the RFQ process will be included with the draft system architecture as annexes.

4.1.2.3 Demonstration Concept Recommendation

The team will incorporate results from the evaluation of responses into a preliminary demonstration concept. The demonstration concept will discuss the ability of proposed hardware and software components and related data to work together in the demonstration context, and will identify gaps.

For proposals that include tasks to provide clients, data access services or processing components, the IP evaluation team will assess the ability of the proposed components to support the anticipated demonstration scenarios and requirements. The IP Team will then estimate the extent to which proposed datasets, other data assets and related metadata would be suitable for use in support of the project and demonstration. In cases where components are intended to operate on dynamic data such as sensor observations, data assets will be needed to support both live testing and repeatable demo simulations.
Bidders are encouraged to provide as much information as possible to describe the type, extent and suitability of the data available to support development, testing and demonstration for the initiative. In addition to the clients, service components and datasets to be provided by participants, the demonstration concept aims to identify existing or emerging resources on the OGC Network. This initiative will culminate in a demonstration and supporting materials for sponsors, stakeholders and OGC members.

4.1.2.4 Decision Technical Evaluation Meeting (TEM) I

At Decision TEM I, the OGC IP Team will present to the sponsors:

- The Initiative Architecture Recommendation
- The Demonstration Concept Recommendation
- Evaluation of the RFQ/CFP responses
- Selections for awards of cost-share funding

This presentation will be made in the context of first drafts of the plans described above:

- Communications Plan
- Sponsor Requirements

The primary decisions to be made at this TEM are:

- Is the recommended Initiative Architecture workable? If not, how can it be made workable?
- Which RFQ responses, or subsets thereof, should be provided cost-sharing funds, and at what level, given all inputs?
- Is the Demonstration Concept workable? If not, how can it be made workable?
- Are the management approach and the Communications Plan reasonable and complete?

Following Decision TEM I, the Initiative Manager will begin to contact selected organizations based on the outcomes of discussions during TEM I. The Initiative Manager will revise plans and concepts accordingly and make budgetary adjustments based on sponsor inputs.

4.1.2.5 Decision TEM II

At Decision TEM II, the OGC IP Team will present to the sponsors:

- The revised Initiative (System) Architecture
- The revised Demonstration Concept
- Updated Participant Selections

The primary decisions to be made at this TEM are:

- Is the revised Initiative Architecture workable? If not, how can it be made workable?
- Are the participant selections correct and affordable?
- Is the Demonstration Concept workable? If not, how can it be made workable?
- Are the management approach and Communications Plan reasonable and complete?

Following Decision TEM II, the IP Team will 1) finalize the Initiative Architecture and Concept of Operations (now including the Demonstration Concept), 2) begin to insert specific information into the SOW template for each selected participant organization, and 3) adjust the description of task specifics for all participants using the Participant Agreement template. The Initiative Manager will identify each participant's primary and alternate POCs for technical and business matters. The output of Decision TEM II will be a final Initiative Architecture and Demonstration Concept.

4.1.3 Kickoff Workshop

The project will be launched officially with a Kickoff Workshop meeting. Prior to the Workshop, all participants must commit to a preliminary Statement of Work (SOW), with the understanding that their SOW may change somewhat during the Workshop as the participants, architects and sponsors gain a better understanding of the project scope, the architecture needed, and implementation issues. Following the Workshop, all participant organizations must sign a Contract, based on the final SOW, that includes a description of the assigned work items in Section 3 of this Annex A, subject to any mutually agreed changes decided during the Workshop.

The Workshop will address two development activities in the OGC IP process: 1) component interface and protocol definitions, and 2) demonstration scenario development. The demonstration scenarios used in the project will be derived from those presented in the RFQ and other candidates provided by OGC and the sponsors. The two development activities will interact and affect each other, and the interaction will be iterative.
During the Workshop, both activities will begin with a preliminary specification development provided by the IP Team and the Sponsors, together with other assets that participants bring to the Workshop. Participants will be asked to provide technical recommendations to address any perceived shortfalls; these may also be included in the final Engineering Report as factors or considerations. The Initiative Manager will lead plenary meetings for the exchange of information.

An additional product of the Workshop will be a development schedule that defines specific milestones in the Interface Development and Demonstration Development activities. These milestones will include component-to-component interactions across the interfaces under development, and component insertion into demonstration scenarios. Milestones for Technology Integration Experiments (TIEs) will be identified and planned during the Specification Development activities (see WBS task items 6 and 8.3).

During the Workshop, participants will nominally organize into teams to 1) begin developing component interface definitions and use cases, and 2) begin developing the demonstration scenario and use cases for the Pilot initiative. Interface design and demonstration design activities should be shared and coordinated to ensure they are developed to achieve common objectives based on the scenario, use cases and specific requirements of the RFQ/CFP.

Each participant organization is expected to have systems and/or software engineers attend the Workshop to contribute to the initial assessment and iteration of the interfaces. This may include UML modeling of the interfaces. Workshop participants and the IP Team will prepare presentations, to facilitate communication and common understanding, that describe how the components to be used in the initiative scenario interface with one another.
The scenario design must account for the requirements and dependencies of the overall initiative, including client designs, server designs, service interfaces and encodings. Live presentations of contributed hardware components are welcome as well, but not required.

Technical plenary sessions will be conducted during the course of the Workshop. The plenary sessions are intended to allow participants working on interface and protocol definitions to interact with participants working on demonstration development. These plenaries will use UML use case and UML sequence diagrams to describe the interaction between the scenario and demonstration development effort and the interface definition effort.

4.1.4 FCP Interface and Demonstration Development

This section defines an initial concept for the conduct of development activities in this initiative. The actual schedule and further information will be provided at the Initiative Workshop.

4.1.4.1 Interface Development

The Interface Development (ID) Phase corresponds with WBS Tasks 6, 7 and 8 and their related subtasks. The schedule and further information will be developed and provided at the Workshop.

During the ID phase, the Technical Architecture (System Architecture) will be refined while groups of participants work on the development of specific components. Interface development work will also be shaped by the Scenario and Data Development tasks. Progress on demonstration and scenario development provides the details necessary to identify key actions and behaviors of "actors" in the scenarios, which are needed as clear, measurable, short-term goals for the technical development teams to pursue. The technical implementation activity also provides feedback to the demonstration scenario and data preparation activities. This mutual interaction will allow problems and successes to surface early, and will guide early TIEs, without waiting until Demonstration Integration and testing time (see WBS task item 8 and related subtasks).
Demonstration Integration and Testing will integrate already tested interfaces into a larger, cohesive unit capable of supporting the end-to-end nature of the scenarios. Technology Integration Experiments (TIEs) will be conducted on a regular basis, in an iterative manner, as outlined by the initiative architects in the development schedule. During identified TIE phases of the initiative, participants developing components within the Architecture shall test interfaces for component accessibility, behavior, and, most importantly, interoperability. The IP Team will develop a TIE matrix defining the nature of the TIEs that shall be conducted and their scheduled occurrence within the initiative. Participants will report the outcome of each TIE following the TIE reporting template provided by the IP Team.

TIEs will be conducted within the development cycle of the Initiative. TIEs will follow initial interface design, interface construction, component creation, and integration of the interface with application logic. Server components under test shall have data loaded to allow client software to exercise the current functionality. Participants working behind firewalls shall take any necessary steps to allow the test to be conducted through the firewall or outside of the firewall. Participants for components under test are expected to provide appropriate documentation to allow the successful conduct of these experiments. Participants are expected to upload or update a reference to their components on the OGC project portal or wiki for each TIE. Participants shall report the outcome of TIEs to the project email list and the project architecture team.

This FCP project will conduct development and testing in a series of short-duration Sprints that each focus on a limited and defined set of requirements to be addressed. These monthly software/hardware development sprints and informal demonstrations will contribute to the refinement of the overall pilot framework.
Design and development work will be coordinated through weekly web conferences and web-based collaboration tools. To the extent possible, software code, hardware designs, tests, and other documentation will be managed and made accessible for review and testing on GitHub. Informal demonstrations of developed software and hardware components will be organized on a bimonthly basis and wherever possible deployed as a persistent online capability. Issues exposed in each Sprint will drive requirements for the following Sprint, including interface definition, refinement, coding, and testing.

The Technical Architecture in Annex B describes a notional architecture that represents an initial set of services and interface mechanisms. Individual items in the notional architecture are to be refined during the Workshop meeting and will be further refined during the interface design phase. Since development will occur in a series of Sprints, it is expected there will be periods of development followed by a collaborative retrospective among the various component developers in preparation for the next Sprint. This will allow issues to be resolved and documented in order to avoid divergence between comparable components (e.g., two servers) or dependent components (e.g., servers and clients).

**4.1.4.2 Demonstrations**

This activity builds upon the initiative characteristics developed during the Workshop demonstration scenario design and creation discussions. To be successful, participants must execute four activities—designing a demonstration, building a demonstration, testing the live demonstration, and packaging the demonstration on presentation media. Capitalizing on the Use Case work performed at the Workshop, the demonstration development aims to expand these initiatives in four design areas—completing demonstration storyboards, finalizing specifications, finalizing datasets and providers, and finalizing client applications to exercise the various services for the demonstration.
- Review and Finalize Storyboards—participants identify and refine the relationships between the data, the sponsor scenarios, and the components.
- Finalize component interfaces—given the nature of work during a pilot, some inconsistencies may remain between specifications and interfaces, and between different implementations. Participants must identify and resolve these differences with appropriate solutions.
- Finalize supporting data—access to the appropriate data is essential to exercising the initiative architecture and capturing a representative demonstration. Participants must ensure that appropriate data exists and is available.
- Finalize nature and extent of datasets—where applicable, OGC Implementation Specification conformant data sources are preferred. However, given the nature of the various sensors and data feeds available, this may not always be possible. Other important issues are the quality, availability, schema, and interoperability of the datasets.
- Manage supporting datasets—on-line supporting data requires that the participants identify the data stores, availability, throughput limitations, and data loading process. Successful execution of data pre-staging will require the participants to have a data plan, so valuable time is not lost due to inadequate preparation.
- Incorporate supporting datasets—participants must identify how data will process through initiative components to be exercised for the demonstration.

The elements of the demonstrations include but are not limited to the following:

1. Deployed service components and clients
2. Supporting datasets, schemas, schema instances
3. Supporting documentation, installation instructions, scripts, etc.

Participation in demonstration exercises is predicated upon full engagement with development, testing, and planning activities throughout this initiative.
4.1.5 Network Integration and Solution Transfer

Network Integration will be complete when the interfaces and demonstrations developed during the interface development and demonstration development activities have been integrated into the OGC Network initiative infrastructure. This activity will result in configuration-controlled components that are considered stable enough to use on a pilot basis. Solution transfer entails the deployment of software components developed during the pilot at a data provider facility unless other arrangements have been proposed and agreed. This task will be complete when sufficient documentation or instruction has been provided, and adequate licensing procedures completed, to allow the Sponsor organizations to exercise, evaluate, and deploy these products or product prototypes. Solution transfer is not required for all components.

4.2 Progress Reporting

The OGC IP Team will provide monthly progress reports and briefings to the Sponsors pertaining to the current status of the initiative. The OGC IP Team and the sponsors intend to provide regular status reports about the program to the OGC Technical Committee, Planning Committee, and the OGC Strategic Member Advisory Committee. Participant presentations to the TC will include presentations on Engineering Reports and Demonstration scenarios.

4.3 Integrated Initiatives

Other ongoing IP activities may present opportunities to support this initiative and be coordinated with the activities within this initiative. Any such resources and related activities may be integrated with those of this initiative in order to take advantage of economies of scale, and possibly to explore the deployment of innovations coming from this initiative.

5 Communications Plan

5.1 Overview

This section describes the Communications Plan for this FCP initiative.
The plan includes a defined OGC approach as well as policies and procedures for effective communications among selected organizations, participants, sponsors, and the OGC Interoperability Program (IP) Staff, also referred to as the IP Team.

Each organization, regardless of any teaming arrangement, shall provide a designated Point of Contact (POC) who will be available for scheduled communications about project status. That POC shall identify alternates who will support the designated POC in scheduled activities and represent the organization as needed in ad hoc discussions of IP issues. The designated and alternate POCs shall provide contact information including their e-mail addresses and phone numbers. All proposals shall include a statement or documentation of their understanding, acceptance, and handling of the communications plan.

OGC will designate technical Team Leaders for activities described in the Work Breakdown Structure for this Initiative. The Team Leaders shall work with the IP Team, responsible participants, and the sponsors to ensure that project tasks/activities are properly assigned and executed. Team Leaders are accountable for activity and schedule control and team communication, and must raise concerns regarding schedule slippage or resource issues to the IP Team in a timely manner.
5.2 Communications Plan Details

The following objectives of the communications plan are directed to one or more tasks/activities in the WBS identified in Section 3 and provided in the Appendix to this Annex A:

- Provide timely and appropriate notifications to participants of events, deadlines, and decisions that affect them
- Keep participants apprised of the status of all participants to ensure coordination and cross-communication
- Participants need to post items of interest, status reports, and software for distribution amongst the participants
- Participants need to provide software and/or data for installation at various support sites to IP Staff or other participants
- Participants need to communicate/discuss and resolve ongoing definitional and development issues and related solutions among the affected groups and teams

The following tools are implemented for use during this initiative:

- Interoperability Program email reflector (fcp@lists.opengeospatial.org)
- Public project web site (http://www.opengeospatial.org/projects/initiatives/###)
- Project Wiki site for team collaboration
- Web portal (http://portal.opengeospatial.org/) with the following modules:
  - Calendar for assigning, viewing and coordinating schedules
  - Contact list of participants, staff and other key individuals
  - Discussion Forum for technical discussions
  - A web-based file upload mechanism
  - Project timeline tracking
  - Action items tracking, and
  - A procedure for arranging, announcing, and executing teleconferences.

Each of these tools is described below.

5.2.1 FCP Email Reflector

Electronic mail communications should be sent to the single email reflector for the FCP project. This email list is fcp@lists.opengeospatial.org. All technical discussions will take place on this email list. Reminders will be issued if these guidelines are not followed. Participants should carefully consider the subject line of each email.
To facilitate sorting, email sent to this list will automatically have the prefix [FCP] added to the Subject line of each message. The project email list can receive heavy traffic. In order to facilitate efficient handling of message traffic and to reduce redundancy, all replies will go to the list, not the sender.

OGC is currently using the Mailman software package to manage and maintain our lists. Mailman allows project users to customize many preferences; for example, you can change your settings to receive a daily digest of messages, to receive “no mail” when you are on vacation, etc.

PLEASE NOTE: the email reflector is not intended for exchanging files with others. Rather, a procedure for uploading files to the project web sites is described below. When files are uploaded, automatic notification may be sent to participants.

5.2.2 FCP Public Web Site and Participant Portal

A Portal project will be created within the OGC member portal for the FCP initiative. **Figure 2** below shows the initial hierarchy of the overall portal information system and the FCP portal project.

**Figure 2—FCP Portal project within the OGC Portal Information System.**

The initial pages and their content are described here:

- OGC Portal Information System—Repository of important current and historic data regarding everything from requirements and use cases, to contact information and documentation status. There are various levels of access within the OGC Information System. This asset will continue to grow and mature.
- OGC Public Website—Publicly available information to help aid in the process of Specification Development.
- OGC Members Portal—A valuable resource for all members to get the latest information from the Specification Program.
- OGC IP Initiative Homepage—Links to archived, current and future IP Initiatives: ([http://www.opengeospatial.org/initiatives](http://www.opengeospatial.org/initiatives))
- Project Initiative Homepage—Publicly accessible home page for this Initiative effort
- OGC Members Portal—User-specific page giving basic tools and information based upon the user's login access
- Calendar—Calendar for assigning, viewing and coordinating schedules
- Contact Information—A listing of the participants, Staff and other key individuals
- Wiki—A set of project-specific web pages for collaborative editing of content and structure by project users
- Task/Action Items—Action Items tracking
- Project Tracking—Project timeline tracking

Although the project portal will begin with the above layout, it may change and evolve over the life of the project. Participants who would like to contribute content should follow the directions in the next section for submitting material for the project portal site.

5.2.3 Web-Based Upload Mechanism

Participants that wish to upload materials onto the project portal described above may transfer these materials to any of the file upload locations described in this section. Participants should follow the procedure described below to ensure effective communication of file uploads.

PLEASE NOTE: The preferred mechanism for sharing files with other participants is via upload to the portal, with a reference to the storage location on the portal via a URL. This mechanism reduces load on email servers and lists, and spares those who have no need for the files from receiving them, while making sure that all parties are informed of the availability of files.

Portal Access and Posting

1. The participant shall log in to the OGC members’ portal via a web browser ([http://portal.opengeospatial.org/](http://portal.opengeospatial.org/)) using their assigned individual Username and Password.
Project participants or observers can request access to the project portal by contacting the Initiative Manager. Note that project Observers typically do not have permission to upload files to the project portal; however, contact the Initiative Manager if the need arises.

a. Log into the Portal with the assigned Username and Password.
b. Select the project in the “Project Quick Selector” drop-down on the right end of the top navigation menu.

2. In the project portal, the second-tier Navigation Menu has a tab labeled “Files”. Select this tab. You are now viewing the File Manager web page for the portal project.

3. The participant may upload a single file at a time by selecting the “New File” link or icon. File uploads may be packaged (even if only a single file) using one of several formats:

a. an archive format (such as WinZip) for Windows-based submissions
b. tar and gzip, or
c. tar and compress, for Unix-based submissions.

4. The participant should provide certain metadata for tracking and recognizing the files on-line:

a. **Title**: title of the submission
b. **Authors**: work group or area for which the document was developed
c. **Description**: or abstract, a paragraph describing the purpose and content of the submission
d. **OGC Doc Type**: identify the type of document for the upload
e. **Upload File**: submitted file to be selected from the submitter’s system.

5. Click “Go” on the “Artifact Details” dialog window.

5.2.4 Project Wiki

A wiki site will be established for project collaboration purposes. This wiki will be a location for group collaboration and for preparation and editing of raw content that may be further developed into content for an ER, and for coordination of task activities, including teleconference meeting notes, preliminary designs, Technology Integration Experiment (TIE) activities and results, descriptions of service capabilities and endpoints, and schema development, to name a few.
When the editor(s) and contributors reach consensus on the form and content for a publication (for example, an ER), it should be moved to the Project Portal where it can be controlled (with versions) in a more formal manner.

### 5.2.5 Teleconference / GoToMeeting Procedure

In general, any teleconference may involve either or both audio and webcast connections. The project will set up a standing teleconference time, usually occurring each week, with a voice line and GoToMeeting (as needed) reserved for the duration of the project. OGC maintains a GoToMeeting account as a primary and preferred resource for teleconferences using Voice over Internet Protocol (VoIP), which avoids international connection charges. GoToMeeting sessions are accompanied by a list of in-country dial-in phone numbers for attendees to use when they may only have access to a phone connection for a particular meeting. Whenever necessary, participants may schedule additional teleconferences in coordination with the Initiative Manager or Initiative Architect.

In addition to GoToMeeting resources, OGC also maintains a sufficient number of telephone/audio-only lines to accommodate several simultaneous teleconferences without conflict. Portal resources are shared among OGC TC working groups, the OGC Planning Committee, Board of Directors, and executive staff, so plan ahead to ensure availability. The guidelines described below have evolved over time to ensure productive and efficient use of these teleconference resources.

There are three phases in the execution of a project teleconference: initiation, planning, and execution. The procedure for each phase is defined below.

#### 5.2.5.1 Teleconference Initiation

Due to the need to carefully manage the resources of the IP effort, a teleconference must be appropriately planned and coordinated with the Initiative Architect and the IP Initiative Manager.
Before making a request, always check the Events Calendar on the OGC Members Portal to avoid obvious conflicts with other scheduled teleconferences. However, depending on the requesting participant’s position and access permissions, not all scheduled events may be visible. For example, some OGC committee meetings are only visible to committee members and OGC staff. This is the main reason for following the guidelines below.

An **authorized discussion leader** must lead a teleconference. These individuals are typically identified during the Kickoff Workshop. However, any participant may initiate a teleconference by first contacting a designated discussion leader to pre-plan the teleconference. The discussion leader must then coordinate with the IP Initiative Manager or Initiative Architect to set up the meeting. Approval is gained by sending an email with the subject line “IP Teleconference Request” to the Initiative or Operations Manager for FCP with the following format and content:

1. **Proposed Date and Time**: the proposed date and time
2. **Purpose**: a description of the purpose of the teleconference
3. **Designated Discussion Leader**: identify the designated discussion leader; prior coordination with the Initiative Architect or Initiative Manager, if necessary, to ensure adequate facilitation is available
4. **Participants**: expected audience or participants needed for the meeting; include name, organization, and email for attendees who may be subscribed to established project email reflectors used for the meeting announcement
5. **Resources Required**: identify the appropriate meeting resource: GoToMeeting (preferred) or teleconference line (audio only)
6. **Expected Duration**: planned duration of the teleconference
7.
**Agenda**: an agenda, including an estimate of time to be spent on each topic, as appropriate to facilitate meeting progress

Approval and setup should be coordinated well in advance to avoid conflicts with other teleconference schedules, but in any case should be planned at least two business days prior to the proposed teleconference date. It is recommended that teleconferences involving participants on multiple continents (Australia, Europe, Asia, and North America) be scheduled and announced at least three days in advance. Once the schedule has been agreed upon, a member of the OGC IP Team or Teleconference Moderator will set up the teleconference by entering the meeting information into the portal calendar and reserving the teleconference resource. When necessary to resolve scheduling conflicts, the IP Initiative Manager will work with the requesting individual, organization, or group to reach a solution satisfactory to all.

5.2.5.2 Teleconference Planning

A member of the OGC IP Team or designated Meeting Organizer will plan the teleconference. Members of the project will be notified of the meeting with details that include date, time, proposed agenda, and other information needed to facilitate the meeting.

5.2.5.3 Teleconference Execution

Required participants are expected to join the teleconference at the appointed date and time. Teleconferences may be extended depending on availability of resources and required participants. The designated teleconference leader is responsible for keeping the teleconference on schedule with the agenda. This means that vital agenda items should be covered early in the agenda. If the meeting should take longer than planned, the teleconference leader should adjust the agenda or coordinate with attendees for an additional date and time to continue. The teleconference leader will prepare minutes of the teleconference.
The notes should contain a record of decisions reached, action items (including a description and action item holder), and issues for resolution. The meeting minutes will be posted on the project portal or on the wiki page.

5.3 Progress Reporting

The OGC IP staff will provide regular (monthly) progress reports and briefings for the FCP project to the sponsors. To do this, **participants must submit technical and business progress reports by the 6th of each month, as detailed in WBS Section 1.3.1 of this Annex A**.

Besides reporting progress in terms of “percentage complete” on each of the deliverables expected, another purpose of the monthly **technical reports** is to capture and report:

- Record of decisions and actions taken
- Results obtained
- Lessons learned
- Recommendations for any changes to the work program.

This becomes a valuable record of the project activity experience.

The purpose of the monthly **business report** is to provide the Initiative Manager, Financial Officer, and IP Executive Director with a quick indicator of project health, from each Participant’s perspective. These reports have proved crucial to identifying underlying issues needing to be addressed, which may not have received adequate attention in the weekly telecons and other daily communications.

The Initiative Architect consolidates the monthly technical reports and sends them to the Initiative Manager by the 15th of each month. The Initiative Manager then consolidates these into the progress reports submitted to the sponsors by the 20th of each month.

The OGC IP staff and the sponsors also provide status reports about the program to the OGC Technical Committee and the OGC Planning Committee as feasible and appropriate. At those times the participants may present interface designs and other reports to the TC and PC. Demonstration scenarios and the architecture to support those demonstrations would be included in these presentations.
OGC IP staff will review action item status on a weekly basis with Team Leads and participants that are responsible for the completion of those actions. Action item status reports will be posted to the FCP web sites each week. Email will be used to notify Team Leads and responsible parties of pending actions for a given week. 6 Interoperability Program Code of Conduct 6.1 Abstract This section outlines the Principles of Conduct that shall govern personal and public interactions in any OGC activity. The Principles recognize the diversity of OGC process participants, emphasize the value of mutual respect, and stress the broad applicability of our work. A separate section of the Policies and Procedures details consequences that may occur if the Principles of Conduct are violated. 6.2 Introduction The work of the OGC relies on cooperation among a broad cultural diversity of peoples, ideas, and communication styles. The Principles for Conduct guide our interactions as we work together to develop multiple, interoperable technologies for the Internet. All OGC process participants aim to abide by these Principles as we build consensus in person, at OGC meetings, in teleconferences, and in e-mail. If conflicts arise, we resolve them according to the procedures outlined in the OGC TC and IP Policies and Procedures. 6.3 Principles of Conduct OGC process participants extend respect and courtesy to their colleagues at all times. OGC process participants come from diverse origins and backgrounds and are equipped with multiple capabilities and ideals. Participants in related tasks are often employed by competing organizations. Regardless of these individual differences, participants treat their colleagues with respect as persons--especially when it is difficult to agree with them. Seeing from another's point of view is often revealing, even when it fails to be compelling. 
English is the de facto language of the OGC process, but it is not the native language of many OGC process participants. Native English speakers are requested to speak clearly and a bit slowly, and to limit the use of slang in order to facilitate the comprehension of all listeners.

OGC process participants develop and test ideas impartially, without finding fault with the colleague proposing the idea. We dispute ideas by using reasoned argument, rather than through intimidation or ad hominem attack. Or, said in a somewhat more consensus-like way: "Less heat and more light."

OGC process participants think globally, devising solutions that meet the needs of diverse technical and operational environments. The goal of the OGC is to maintain and enhance a working, viable, scalable, global set of interfaces and protocols that provide a framework for interoperability in the geospatial domain. Many of the problems we encounter are genuinely very difficult. OGC participants use their best engineering judgment to find the best solution for the whole domain of geospatial interoperability, not just the best solution for any particular network, technology, vendor, or user. We follow the intellectual property Principles outlined in http://www.opengeospatial.org/legal/.

Individuals who attend OGC facilitated meetings are prepared to contribute to the ongoing work of the membership and the organization. OGC participants who attend OGC meetings read the relevant Pending Documents, RFCs, and e-mail archives beforehand, in order to familiarize themselves with the technology under discussion. This may represent a challenge for newcomers, as e-mail archives can be difficult to locate and search and it may not be easy to trace the history of longstanding Working Group, Revision Working Group, SIG, Standard Working Group, Domain Working Group or Initiative debates.
With that in mind, newcomers who attend OGC meetings are encouraged to observe and absorb whatever material they can, but should not interfere with the ongoing process of the group. OGC meetings run on a very limited time schedule, and are not intended for the education of individuals. The work of the group will continue on the mailing list, and many questions would be better expressed on the list in the months that follow. It is expected that many of the participants working on related tasks are from competing organizations. To preserve and sustain our productive environment in which ideas are discussed openly, and all participants’ viewpoints are respected, it is imperative that participants refrain from using OGC resources (mail lists, portal, wiki, teleconferences, etc.) for commercial messages favoring any particular products, business models, or ideology. 6.4 Acknowledgements OGC acknowledges the work done by the IETF on a code of conduct (specifically RFC 3184). These principles of conduct are modeled on their work. Appendix: WBS Outline The following Work Breakdown Structure (WBS) is derived from the OGC Interoperability Initiative Process Framework. This WBS should be interpreted in the following manner: - Items that are shaded gray are either IP Team tasks, have already been completed, or are not required for this Initiative. - Bold text is a task grouping or subtask grouping. - Plain text indicates tasks against which proposing organizations should respond. - Italic text represents the task explanation. 1 Coordination 1.1 Collaborative Environment The following subtasks are mandatory for selected organizations. 1.1.1 Routine and ad hoc telecons as assigned The proposing organization shall provide a technical representative and an alternate to participate in regularly scheduled telecons. 
If a participant organization has a representative who is requested or volunteers to participate in an ad hoc telecon, then that representative or a reasonable alternate shall join the ad hoc telecon if at all possible.

1.1.2 E-mail review and comment
The proposing organization shall provide technical representatives to participate in specification and prototypical component development discussions via the FCP mail list.

1.1.3 Action Item status reporting
Proposing organizations’ representatives shall report the status of their work in response to any action item accepted by them in whole or part. Action Items will be assigned to relevant work groups with an identified work group leader. Action item status shall be reported to the relevant work group leader.

1.2 Initiative Plan Development
1.2.1 Project Plan Development
1.2.2 Project Schedule Development
1.2.3 WBS Development
1.2.4 Concept of Operations Development

1.3 Management
The following subtasks are mandatory for all selected organizations.

1.3.1 Status Reporting
Proposing organizations’ business representatives shall report the status of their work as assigned to and accepted by them in their SOW. Status reports will reflect the SOW item number and name; the "health" of the effort, with **green** indicating optimal, **yellow** indicating issues have arisen that appear resolvable, and **red** indicating that issues have arisen that require immediate resolution or the effort will not succeed; and a description of the work done to fulfill the WBS item.

**Workshop Status Report:** A one-time Workshop status report shall be provided by each participant organization that includes a list of personnel assigned to support the Initiative. The Workshop status report shall be submitted to the portal and the Initiative Manager no later than the last day of the Workshop in soft copy format only.
**Thread Teleconference Meetings:** Weekly or biweekly thread-level teleconferences will be conducted and recorded in minutes posted on the portal, beginning after the Workshop. These are for verbal updates and additions of tasks and actions listed on the portal, and to respond to requests for status among participants, by the IP Team and Sponsors. **Formal Status Reports:** - Formal status reports will be submitted on a Monthly basis on the portal for compilation to an overall thread and initiative status. - **Due by the sixth (6th) of each month or the first Monday thereafter.** Two kinds of status reports are required (report templates will be provided on the project portal): **Monthly Technical Report** - Word document posted on portal, and the Thread Architect notified - Narrative to describe work accomplished during this reporting period by the participant’s technical team - Show % Complete on assigned subtasks within a Participant’s SOW (no cost or labor figures) **Monthly Thread Summary Report** - The IP Team will compile the participant Technical Reports into a **Monthly Summary Report**, due according to Sponsor schedule requirements each month following the completion of the Workshop. **Monthly Business Report** - Word document posted on portal, then the IP Executive Director, Initiative Manager, and OGC Business Manager notified - Work status overview, by WBS element and name, with Green-Yellow-Red indicators - Accomplishments (% completion in work and dollars) - Expenditures, such as labor and Other Direct Costs – budgeted, actual, projected, and cumulative totals - Identification of potential technical performance and/or cost issues and risk mitigation - Summary of work expected to be performed during the next period **Final Summary Report** - Each participant organization shall prepare and submit a final monthly technical report summarizing the Participant’s overall contribution to the project throughout the project from Kickoff to completion. 
- This report shall include a summary description of results achieved for the participant’s contribution for all assigned tasks in the project for the entire project’s period of performance.

1.3.2 Initiative Accounting
Cost-share compensation to selected organizations is typically invoiced and paid in three bi-monthly installments. The dates of these installments for the initiative will be identified in the Participant Agreement. Business/contract representatives for selected organizations shall submit an invoice to the OGC Business Office at OGC Headquarters. The invoice shall include:
- OGC Accounting Job Code provided in the contract
- Work completed during the prior period
- Itemized list of Deliverables
- The SOW budget not-to-exceed amount

1.4 Communication
1.4.1 OGC Internal IP Status Briefings
1.4.2 OGC External IP Status Briefings

2 Assessments and Analysis
2.1 Organizational Capability Review
2.2 Organizational OGC Requirements Review

3 Concept Development
3.1 Sponsor Feasibility Study Review
3.2 RFT Development
3.3 RFT Response Analysis
3.4 RFT Response Review

4 Architecture Development
4.1 Operational Architecture Development
4.2 System Architecture Development
4.3 Technical Architecture Development

5 Initiative Preparation
5.1 Sponsor Planning TEMs
5.2 RFQ Development
5.3 Participant Budget Development
5.4 Contract Development
5.5 SOW/SOP Development

6 Specification Development
The Bidder’s proposal shall include brief resume(s) or qualifications of the technical representative(s) who will lead the Specification Development effort for each applicable task listed below. All selected organizations shall send technical representatives to the Workshop meeting. Attendance at this meeting is mandatory for all selected organizations.

6.1 Model Development
Technical representatives of selected organizations shall develop or support the development of models that represent a service, interface, operation, message, or encoding that is being developed for the initiative.
These models may be in UML or some other appropriate modeling language. The final form of models developed in the initiative should be posted to OGC Network™ (http://www.ogcnetwork.net/).

6.2 Schema Development
Technical representatives of selected organizations shall develop or support the development of schemas that specify an interface that is being developed for the initiative. These schemas will be written in XML Schema or some other appropriate language. All schemas developed in the initiative will be posted to OGC Network™ (http://www.ogcnetwork.net/).

6.3 Encoding Development
Technical representatives of selected organizations shall develop or support the development of encodings that specify an interface that is being developed for the initiative. These encodings will be specified in XML Schema or some other appropriate language. As applicable, all encodings developed in the initiative will be posted to OGC Network™ (http://www.ogcnetwork.net/).

6.4 Interface Development
Technical representatives of selected organizations shall develop or support the development of interfaces that specify operations, encodings, or messages that are being developed for the initiative. These interfaces will be specified in XML Schema or some other appropriate language. As applicable, all interfaces developed in the initiative should be posted to OGC Network™ (http://www.ogcnetwork.net/).

6.5 Specification Program Coordination
Technical representatives of selected organizations shall submit Engineering Reports (ERs) pertaining to interface developments for the initiative to the OGC Technical Committee for review. Technical representatives shall present these Engineering Reports to the relevant OGC TC working groups and work with OGC members to resolve comments or issues that the OGC members may raise with regard to the ER and the interface(s) described therein.
7 Component Development
Technical representative(s) of selected organizations shall lead the Component Development effort, as applicable, for each of the tasks listed below.

7.1 Prototypical Interoperable Software Development
Selected organizations shall either develop software or modify existing product software to provide the interfaces necessary as assigned for this initiative.

7.1.1 Server software development
Selected organizations shall deploy or develop server software or modify existing product server software to exercise the interfaces developed or enhanced under the Specification Development task in item 6 above for this initiative. The selected organizations will make this server software available for review, testing, and input during the course of this initiative.

7.1.2 Client software development
Selected organizations shall develop client software or modify existing client software to exercise the servers developed under the Component Development tasks of this initiative. Selected organizations shall develop client software to support their server software or make arrangements with other participants to use their client software to exercise other servers during the course of the initiative. This is subject to approval by the IP Team and sponsors to ensure that the client is appropriate for exercising the functionality of the relevant server. If the proposing organization is developing server software and client software, then the client software shall exercise all initiative or other OGC services provided by their server.

8 Testing and Integration
8.1 Configuration Management
8.1.1 CM Plan Development
The selected organization shall provide a representative to develop a configuration management plan for interfaces and components developed during the initiative.

8.1.2 Initiative CM
The selected organization shall provide a representative to exercise the configuration management plan for interfaces and components developed during the initiative.
8.2 Infrastructure Setup
The selected organization shall deploy its components on the same hardware and operating systems as their final deployments, to the extent possible. This item is mandatory for all organizations proposing to provide software and/or hardware components for the initiative.

8.3 Technology Integration Experiments (TIE)
8.3.1 Iterations 1-N
8.3.1.1 Component Interface Test
The selected organization shall provide a technical representative to conduct TIEs that exercise server and/or client component software's ability to properly implement the interfaces, operations, encodings, and messages developed during this initiative. There will be multiple TIEs during the course of the initiative that will exercise various interfaces, operations, encodings, and messages developed during the initiative. This item is mandatory for all organizations proposing to provide software components for this initiative.

8.3.1.2 Test Result Analysis
The selected organization shall provide a technical representative to report the outcome and relevant software reporting messages from TIEs in which the proposing organization participates. The results of these TIEs should be coordinated with the Initiative Architect and reported on the initiative email list, on a designated page of the project wiki, and within the Monthly Status Report. This item is mandatory for all organizations proposing to provide software components for this initiative.

8.4 System Tests
8.4.1 Functional Test
The selected organization shall demonstrate the functionality of all software delivered against the Scenario and Use Cases described in Annex B, Technical Architecture. This item is mandatory for all organizations proposing to provide software components for this initiative.

8.4.2 Interface Test
The selected organization shall demonstrate conformance with the appropriate OGC interfaces by using the OGC CITE Web site, where the appropriate test suites are available.
This item is mandatory for all organizations proposing to provide software components for this initiative.

9 Solution Transfer
9.1 Software Installation
The selected organization shall provide endpoint links for relevant software components deployed in this initiative. This may be accomplished by making the software component(s) available from an open site on their network --OR-- by installing it on a sponsor or other host machine on the OGC Network. If the latter option is taken, then the selected organization shall provide a technical representative to install the software component(s). This item is mandatory for all organizations proposing to develop software components for this initiative.

9.2 Software Integration

9.3 Data Loading
The selected organization shall provide a technical representative to load data to any server components the proposing organization may provide. This task includes data loading on the Participant’s server OR on the OGC Network based servers if deployed there. This item is mandatory for all organizations proposing to develop server components for this Initiative.

10 Demonstration
10.1 Use Case Development
The selected organization shall provide a technical representative to develop or support the development of a scenario and associated use cases that define and explain the utility of the components or capabilities developed during this Initiative. These scenario/use cases shall be used to provide a basis for demonstration storyboards and the demonstration itself.

10.2 Storyboard Development
The selected organization shall provide a technical or business representative to develop or support the development of the demonstration storyboards that will define the structure and content of the demonstration to which their components contribute.
10.3 Venue Access
10.4 Data Requirements Assessment
10.5 Data Acquisition and Distribution

10.6 Demonstration Preparation and Delivery
The selected organization shall provide a technical and/or business representative to develop or support the development of the demonstration that will exercise the functionality of the capabilities developed during this Initiative. The representative(s) will also support the demonstration event(s) as required. The proposing organization will maintain server and client software developed for the initiative for a period of no less than one year after the completion of the Initiative demonstration. This item is mandatory for all organizations proposing to provide software components for this Initiative.

11 Documentation
11.1 ER Development
Selected organizations shall provide a technical representative to serve as editor of a relevant Engineering Report (ER). Not all organizations responding to this item will be required to provide an editor; however, each participant shall support the editor by providing authors to contribute applicable material for sections of the ER and to review the Draft ER. The ER is the deliverable of the work items within this Initiative. Participants shall use the appropriate Document template posted on the OGC portal when preparing reports for submittal as part of this Pilot initiative. In some cases, the documentation required may be a Change Request to an existing OGC standard. All Change Requests are to be entered into the public, online CR system, found here: [http://www.opengeospatial.org/standards/cr](http://www.opengeospatial.org/standards/cr)

11.2 System Documentation Development
11.2.1 Functional Specification
11.2.1.1 Architectural Overview
The selected organization shall provide a technical representative to develop an architectural overview of their software component(s) relevant to the Initiative architecture.
This item is mandatory for all organizations proposing to provide software components for this Initiative.

11.2.1.2 Scenario and Use Cases
The selected organization shall provide a technical representative to refine the scenario and use cases to show the functionality of their software components in the context of the Initiative architecture. This item is mandatory for all organizations proposing to provide software components for this Initiative.

11.2.1.3 UML System Models
The selected organization shall provide a technical representative to develop valid UML documents describing information models and architectures involved in their contribution to this Initiative.

11.2.1.4 System Configuration
The selected organization shall provide a technical representative to develop a detailed document describing the combined environment of hardware and software component(s) that compose their contribution to this Initiative. **This item is mandatory for all organizations proposing to develop software components for this Initiative to be installed at a data provider or other host sites.**

11.2.2 Installation Guide
The selected organization shall provide a technical representative to develop an installation guide for their software component(s). **This item is mandatory for all organizations proposing to develop software components for this Initiative to be installed at a data provider or other host sites.**

11.2.3 Training Material & Users Guide
The selected organization shall provide a technical representative to develop a User's Guide and Training Materials pertaining to their software component(s) developed or modified for this Initiative. The documents shall be provided to the IP Team and sponsors to support their ability to demonstrate the proposing organization's contributions to the initiative.
**This item is mandatory for all organizations proposing to develop software components for this Initiative.**

11.3 Planning Study Report

12 Compliance Test Development
Technical representatives of selected organizations shall develop Draft Compliance Test documentation pertaining to an interface developed or enhanced for this Initiative. Compliance test documentation shall be submitted as part of an associated Engineering Report. This task includes coordination with the OGC Compliance Program. Bidder’s proposals shall address this task along with Task 6, Specification Development, and Task 11, Documentation, in this Annex.

12.1 Summarize TIEs, demo results and data issues
Technical representatives of selected organizations shall provide information detailing progress pertaining to the implementation, integration, or enhancement of an interface by including TIE results, lessons learned from the demo, and particular data issues.

12.2 Compliance Test
Technical representatives of selected organizations shall outline all of the necessary information to conduct a valid compliance test of the interface, including the sub-items below.

12.2.1 Test Cases
Technical representatives of selected organizations shall outline a valid compliance test for the interface. A valid compliance test will include:
- identification of all required and optional server requests in the interface and the acceptable results for testing servers;
- the syntax checks to perform for testing client requests;
- an explanation of an acceptable verification of the results (machine, human, etc.);
- a list of expected/valid warnings or exceptions to interface behavior;
- a matrix of test dependencies and an explanation of ordering tests appropriately for inherent tests and dependencies.

12.2.2 Data
Technical representatives of selected organizations shall identify appropriate data sets for use in conducting a compliance test for an interface.
12.2.3 Recommendations
Technical representatives of selected organizations shall document recommendations to resolve issues with the current state of the interface, or with the compliance tests.
**Machine-Learning-Guided Selectively Unsound Static Analysis**

Kihong Heo, Seoul National University, Seoul, Korea
Hakjoo Oh*, Korea University, Seoul, Korea
Kwangkeun Yi*, Seoul National University, Seoul, Korea

---

**Abstract**—We present a machine-learning-based technique for selectively applying unsoundness in static analysis. Existing bug-finding static analyzers are unsound in order to be precise and scalable in practice. However, they are uniformly unsound and hence at the risk of missing a large amount of real bugs. By being sound, we can improve the detectability of the analyzer, but it often suffers from a large number of false alarms. Our approach aims to strike a balance between these two approaches by selectively allowing unsoundness only when it is likely to reduce false alarms, while retaining true alarms. We use an anomaly-detection technique to learn such harmless unsoundness. We implemented our technique in two static analyzers for full C. One is a taint analysis for detecting format-string vulnerabilities, and the other is an interval analysis for buffer-overflow detection. The experimental results show that our approach significantly improves the recall of the original unsound analysis without sacrificing the precision.

**Keywords**—Static Analysis, Machine Learning, Bug-finding

---

**I. Introduction**

Any realistic bug-finding static analyzer is designed to be unsound. Ideally, a static analyzer is expected to be sound, precise, and scalable; that is, it should be able to consider all program executions, and hence miss no real bug, while avoiding false positives and scaling to large programs. In reality, however, achieving the three at the same time is extremely challenging, and therefore existing commercial static analysis tools (e.g., [1]) and published static bug-finders (e.g., [2], [3], [4], [5], [6]) trade soundness in order to obtain acceptable performance in precision and scalability.
To our knowledge, all of the existing unsound analysis tools are uniformly unsound. For instance, since loops and unknown library calls are major sources of imprecision in static analysis, most static bug-finding tools compromise soundness in analyzing them (e.g., [2], [3], [4], [5], [6]); loops are unrolled a fixed number of times and subsequent loop iterations are ignored entirely, and unknown library calls are modeled with pre-defined behaviors such as skip. All of these approaches are uniformly unsound in that they ignore every loop and library call in a given program regardless of their different conditions.

However, this uniform approach to unsoundness has a considerable shortcoming: it causes the analysis to miss a significant number of real bugs. For instance, our taint analysis for detecting format-string vulnerabilities ignores the possible data flows of all unknown library calls in the program and therefore reports only 5 false alarms in the 13 benchmark C programs (Section V). However, it detects only 16 bugs among the 106 potentially detectable format-string bugs. In other words, this unsound analysis has a low false positive rate (FPR = #False Alarms / #All Alarms) but a high false negative rate (FNR = #Missing Bugs / #All Bugs).

On the other hand, a simple-minded, uniformly sound analysis poses the opposite problem: it has low FNR at the cost of high FPR. For example, a simple way to decrease the FNR of the unsound taint analysis is to modify the analysis to consider the potential data flows of every unknown library call in the program. This uniformly sound analysis is able to find all 106 bugs in the benchmark programs. However, it reports 276 false alarms too.

Our work is to reduce the FNR of an unsound bug-finder while maintaining the original (low) FPR, by being selectively unsound only when it is likely to be harmless.
For example, we unsoundly analyze library calls only when it is likely to reduce FPR while maintaining low FNR. With our approach, the selectively unsound taint analysis reports 92 real bugs (among 106) with only 27 false alarms. We achieve this by using a machine learning technique that is specialized for anomaly detection [7]. Our key insight is that the program components (e.g., loops and library calls) that produce false alarms are alike, predictable, and share some common properties. Meanwhile, the real bugs are often caused by different reasons that are atypical and unpredictable in their own ways (Section III-B2) [8]. Based on this observation, we aim to capture the common characteristics of the harmless, precision-decreasing program components by using one-class support vector machines. The entire learning process in our approach (i.e., generating labelled data and learning a classifier) is fully automatic once a codebase with known bugs is given.

The experimental results show that our method effectively reduces false negatives of the baseline analyzer without sacrificing its precision. We evaluated our method with two realistic static analyzers for C and open-source benchmarks. The first experiment is done with a taint analysis for finding format-string bugs. In our benchmarks with 106 bugs, the baseline, uniformly unsound analysis detects 16 bugs with 5 false alarms (FPR: 24%, FNR: 85%). Uniformly improving the soundness impairs the precision too much: it reports 106 real bugs with 276 false alarms (FPR: 72%). Our selectively unsound analysis maintains the original precision while greatly decreasing the number of false negatives: it reports 92 bugs with 27 false alarms (FPR: 23%, FNR: 13%).

The second experiment is done with an interval analysis for buffer-overflow detection, where we control the soundness for both loops and library calls. In the benchmarks with 138 bugs, the uniformly unsound analysis detects 33 bugs with 104 false alarms (FPR: 76%, FNR: 76%). The uniformly sound analysis detects 118 bugs with 677 false alarms (FPR: 85%). Our selectively unsound analysis detects 96 bugs with 266 false alarms (FPR: 73%, FNR: 30%).

To summarize, our contributions are as follows:
- We present a new approach of selectively employing unsoundness in static analysis. All of the existing bug-finding static analyzers are uniformly unsound.
- We present a machine-learning technique that can automatically tune a static analysis to be selectively unsound. Our technique is based on anomaly detection with automatic generation of labelled data.
- We demonstrate the effectiveness of the technique by experiments with two bug-finding static analyzers for C.

*Corresponding authors

II. OVERVIEW

We illustrate our approach using a static analysis with the interval domain. The goal of the analysis is to detect buffer overflow bugs in a program. For simplicity, we only concern ourselves with loops in this section, which are a potential cause of buffer overflow bugs. Consider the simple program in Figure 1.

\begin{verbatim}
str = "hello world";
for (i = 0; str[i]; i++)   // buffer access 1
  skip;
size = positive_input();
for (i = 0; i < size; i++)
  skip;
... = str[i];              // buffer access 2
\end{verbatim}
Fig. 1. Example program

In the program, there are two loops and two buffer-access expressions. The first loop iterates over a constant string until the null value in the string is found. In the loop, buffer access 1 is always safe, since i is guaranteed to be smaller than the length of str inside the loop.
On the other hand, buffer access 2 is not always safe, because the index i has the value of size after the second loop, which can be an arbitrary value due to the external input and may cause a buffer overflow.

A. Uniformly Unsound Analysis

Consider an analysis that is uniformly unsound for every loop. That is, all the loops in the given program are unrolled for a fixed number of times, and subsequent loop iterations are ignored during the analysis. From the perspective of such an unsound analysis, the example program is treated as follows.

\begin{verbatim}
str = "hello world";
i = 0;
if (str[i])                // buffer access 1
  skip;
size = positive_input();
i = 0;
if (i < size)
  skip;
... = str[i];              // buffer access 2
\end{verbatim}

Note that each loop is unrolled once and replaced with an if-statement. The analysis does not report a false alarm for buffer access 1, since the value of i remains [0,0]. However, it also fails to report a true alarm for buffer access 2; the value of i is approximated to [0,0], hence the analysis considers the buffer access to be safe.

B. Uniformly Sound Analysis

On the other hand, a sound interval analysis can detect the bug at buffer access 2, at the cost of a false alarm at buffer access 1. Inside the first loop, the analysis conservatively approximates the value of i to [0,+∞], since this value is not refined by the loop condition str[i]. This is because the interval domain cannot capture non-convex properties (e.g., i ≠ 11, where 11 is the index of the terminating null character of str). Thus, the analysis reports an alarm for buffer access 1 as a potential buffer overflow error, which is a false alarm that we want to avoid. Meanwhile, the variable i in the second loop is upper-bounded by size, whose range is approximated as [0,+∞] due to the unknown input value.
Therefore the analyzer reports an alarm for buffer access 2, which is a true alarm in this case.

C. Selectively Unsound Analysis

Our selectively unsound analyzer applies unsoundness only to the loops whose unsound treatment is likely to remove false alarms alone. In the example program in Figure 1, we ignore the first loop, since analyzing it soundly results in reporting a false alarm at buffer access 1. The second loop, on the other hand, needs to be analyzed soundly, since it has the possibility of causing an actual buffer overflow. The selectively unsound analysis on the given program corresponds to analyzing the following program.

\begin{verbatim}
str = "hello world";
i = 0;
if (str[i])                // buffer access 1
  skip;
size = positive_input();
for (i = 0; i < size; i++)
  skip;
... = str[i];              // buffer access 2
\end{verbatim}

Note that we only unroll the first loop, not the second loop. By being unsound for the first loop and sound for the second loop, the analysis is able to report the true alarm for buffer access 2 while avoiding the false alarm for buffer access 1.

D. Our Learning Approach

We achieve the selectively unsound analysis via machine-learning-based anomaly detection. Assume that we have a codebase and a set of features. The codebase is a set of programs in which all the bugs are found and their locations are annotated, so that we can classify alarms into true or false alarms. Then, we need to decide to which set of program components to apply unsoundness selectively. In our example, it is the set of loops in the program we want to analyze. The features in this case describe general characteristics of the loops.

The learning phase consists of three steps.

1) We collect harmless loops from the codebase. A loop is harmless if unsoundly analyzing the loop does not cause the analysis to miss real bugs but reduces false alarms. For simplicity, we assume there is only one program in the codebase, and the program contains \( n \) loops.
When analyzed soundly, it reports certain number of true alarms and false alarms. Then, we examine each loop by replacing it with an if-statement (i.e., unrolling) one by one and compare the result to that of the original program. If the replacement of a loop makes the number of true alarms remain same, but makes the number of false alarms decrease, we consider the loop to be harmless. We collect all the loops satisfying the condition. 2) Next, we represent the loops as feature vectors. Once all the harmless loops in the codebase are collected, we create a feature vector for each loop using the set \( f = \{ f_1, f_2, \ldots, f_k \} \) where \( f_i \) is a predicate over loops. For example, \( f_1 \) may indicate whether a loop has a conditional statement containing nulls. 3) Finally, having the generated feature vectors as training data, we learn a classifier that can distinguish such harmless loops. We use one-class classification algorithm [7] for learning the classifier that requires only positive examples (i.e., harmless loops). We use the anomaly detection algorithm to learn the common characteristics and regularities of the harmless loops. In the testing phase, the classifier takes the feature vectors of all the loops in a new program as an input. If the classifier considers a loop to be harmless, then the loop is analyzed unsoundly, meaning that it is unrolled once and replaced with an if-statement. Otherwise, if the classifier considers a loop to be harmful (i.e., anomaly), then the loop is analyzed soundly. III. Our Technique Our goal is to find harmless components and selectively employ unsoundness only to them. In this section, we describe how to build a selectively unsound static analyzer in detail. First, we introduce a parameterized static analysis that applies unsoundness only to certain program components. Then, we explain how to learn a statistical model from an existing codebase, which is used to derive a soundness parameter. A. 
Parameterized Static Analysis

Our analysis employs a parameterized strategy for selecting the set of program components that will be analyzed soundly. This is a variant of the well-known setting for parameterized static analysis [9], [10], except that the parameter controls the soundness of the analysis, not its precision. Let \( P \in Pgm \) be a program that we want to analyze. \( C_P \) is the set of program points in \( P \). \( J_P \) is a set of program components, such as the set of loops, the set of library calls, or the set of other operations in \( P \). In the rest of this section, we omit the subscript \( P \) from \( C_P \) and \( J_P \) when there is no confusion. The selectively unsound static analyzer is a function

\[ F : Pgm \times \wp(J) \rightarrow \wp(C) \]

which is parameterized by the soundness parameter \( \pi \in \wp(J) \) (i.e., a set of program components). Given a program \( P \) and its parameter \( \pi \), the analyzer outputs alarms (i.e., a set of program points).

A soundness parameter \( \pi \in \wp(J) \) is a set of program components which need to be analyzed soundly. In other words, it selects the program components that are likely to produce true alarms as a result of detecting real bugs in the program. For instance, when \( J = \{ j_1, \ldots, j_n \} \) is the set of loops in the program \( P \), \( j_i \in \pi \) means that the \( i \)-th loop in the program is not considered to be harmless; we analyze the loop as it is rather than unrolling it once and ignoring all the subsequent loop iterations. We want to find a good soundness parameter which allows the analyzer to apply costly soundness only to the necessary components, i.e., those that are not harmless. Let \( 1 \) be the parameter where every component is selected and \( 0 \) be the parameter where no component is selected.
Then, \( F(P, 1) \) denotes the fully sound analysis, which can detect the maximum number of real bugs along with many false alarms. \( F(P, 0) \) denotes the fully unsound analysis, which reports the minimum number of false alarms at the risk of missing many real bugs. For our analysis, it is important to find a proper parameter that strikes a balance between 1 and 0, reporting as few false alarms as possible while detecting most of the real bugs.

B. Learning a Classifier

We want to build a classifier which can predict whether a given program component is harmless or not. The classifier in our approach exploits general properties of harmless components and uses this information for new, unseen programs.

1) Features: We define features to capture common properties of program components. Features are either syntactic or semantic properties of program components, with either binary or numeric values. For simplicity, we assume they are binary properties: \( f_j : J \rightarrow \{ 0, 1 \} \). Given a set of features, we can derive a feature vector for each program component. Suppose that we have \( n \) features: \( f = \{ f_1, \ldots, f_n \} \). With this set of features, each program component \( j \in J \) can be represented as a feature vector \( f(j) = (f_1(j), \ldots, f_n(j)) \). Our approach requires analysis designers to come up with a set of features for each parameterized static analysis $F$. In Section IV, we discuss how to construct program features with two case studies for loops and library calls.

2) Learning Process: A classifier is defined as a function $C : \{0,1\}^n \rightarrow \{0,1\}$ which takes a feature vector of a program component as input. It returns 1 if it considers the component to be harmless and 0 otherwise.
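To make the roles of $F$, $\pi$, and the classifier $C$ concrete, the following is a minimal, self-contained sketch. Everything in it is invented for illustration (the program encoding, the alarm sets, and the distance-based stand-in for a learned one-class classifier); it only mirrors the interfaces defined above.

```python
# Toy sketch: a classifier C over feature vectors decides which components
# are harmless; components judged harmful (C = 0) are analyzed soundly.
# The analyzer F and all data below are hypothetical.

def C(v, training=((1, 0, 0), (1, 1, 0)), max_dist=1):
    """Stand-in for a learned one-class classifier: v is harmless (1)
    iff it lies close to some training example, harmful (0) otherwise."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return int(any(dist(v, t) <= max_dist for t in training))

def F(P, pi):
    """F(P, pi): toy analyzer. Each soundly analyzed component contributes
    its alarms, on top of the alarms found outside any component."""
    alarms = set(P["base_alarms"])
    for j in pi:
        alarms |= set(P["alarms_from"].get(j, []))
    return alarms

P = {
    "base_alarms": {"a0"},
    # feature vectors f(j) of the two loops of P
    "features": {"loop1": (1, 0, 0), "loop2": (0, 1, 1)},
    # sound analysis of loop1 yields only a false alarm; loop2 a true one
    "alarms_from": {"loop1": {"false1"}, "loop2": {"true2"}},
}

ONE = set(P["features"])   # fully sound parameter
ZERO = set()               # fully unsound parameter
# selective parameter: soundly analyze only components classified harmful
pi = {j for j, v in P["features"].items() if C(v) == 0}

print(sorted(F(P, ONE)))   # all alarms, true and false
print(sorted(F(P, ZERO)))  # minimum alarms; the true alarm is missed
print(sorted(F(P, pi)))    # true alarm kept, false alarm avoided
```

Here `loop1` (a typical loop, near the training data) is unrolled, while the atypical `loop2` is analyzed soundly, so only the parameter `pi` keeps the true alarm without the false one.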
We define a model $M : Pgm \rightarrow \wp(J)$ that is used to derive a soundness parameter for a given program as follows:

$$M(P) = \{j \in J_P \mid C(f(j)) = 0\}.$$

The model collects the program components that are potentially harmful, i.e., that may cause real bugs. With the model, we run the static analysis for a new, unseen program $P$ as $F(P, M(P))$. That is, we first obtain the soundness parameter $M(P)$ from the model and instantiate the static analysis with the parameter. As a result, the analysis becomes sound for the program components that are selected by the parameter from the model and unsound for the others.

We learn the model with a One-Class Support Vector Machine (OC-SVM) [7]. OC-SVM is an unsupervised algorithm that learns a model for anomaly detection: classifying new data as similar or different to its training data. Our intuition is that harmless program components tend to be typical, sharing common properties, whereas harmful components are atypical and therefore difficult to characterize. This is because bugs in the real world are introduced unexpectedly by nature. In addition, collecting examples for all kinds of bugs is infeasible, whereas collecting and generalizing the characteristics of harmless components is relatively easy. Therefore, we use this one-class classification method; it requires only positive examples (e.g., harmless loops) that are expected to share some regularities, learns such regularities, and classifies new data as similar or different to its training data. Note that the characteristics of harmless components are largely determined by the design choices of a given static analysis (e.g., abstract domain), whereas those of harmful components are not affected by the analysis design. For example, for an interval analysis of C programs, the following loops are typically harmless:

- Loops iterating over constant strings:
```
char str[] = "hello world";
for (i = 0; !str[i]; i++) // false alarm
    ...
```
As explained before, analyzing such loops soundly is likely to cause false alarms rather than detect true bugs, because of the non-disjunctive limitation of the interval domain.

- Loops involving variable relationships:
```
int len = 10;
for (i = 0; i < len; i++)
    p[i] = ... // false alarm
```
Sound analysis of this kind of loop is likely to produce false alarms because of the non-relational limitation of the interval domain. The analysis cannot track the relationship between the value of `len`, the value of `i`, and the size of the buffer `p`.

3) Generating Training Data: From an existing codebase, we generate training data for learning the classifier. The training dataset is composed of a set of feature vectors. Note that we only collect the feature vectors of harmless components, because OC-SVM is designed to learn the regularities of positive examples. The positive examples in our case are the harmless components. The codebase of the system is a set of annotated programs $P = \{(P_1, B_1), \ldots, (P_n, B_n)\}$, in which each program $P_i$ is associated with a set of buggy program points $B_i \subseteq C_{P_i}$. Once all the programs in the codebase are annotated accordingly, we can automatically generate training data for the classifier. We apply unsoundness to each component one by one, run the analysis, and collect the feature vectors from all the harmless components in the given codebase. We consider a program component to be harmless if, when it is analyzed unsoundly, the number of true alarms remains the same and the number of false alarms decreases. The algorithm for generating training data is shown in Algorithm 1. For each program $P_i$ in the codebase, we run the fully sound static analysis and classify the output alarms $A_i$ into true alarms $A_t$ and false alarms $A_f$ (lines 3 and 4). Then, for each program component $j \in \mathbb{J}_i$, we run the static analysis without the $j$-th component (i.e.
$\mathbb{J}_i \setminus \{j\}$) (line 6). The component $j$ is considered to be harmless when the analysis that is unsound for $j$ still captures all the real bugs (i.e., $|A_t| = |A_t'|$) and reports fewer false alarms (i.e., $|A_f'| < |A_f|$) than the fully sound analysis (line 8). We collect the feature vectors of all the harmless program components into the training set $T \subseteq \{0,1\}^n$.

IV. INSTANCE ANALYSES

In this section, we present a generic static analysis that is selectively unsound for loops and library calls, as well as a set of features for them. We have chosen loops and library calls because they are the main sources of false alarms from real-world static analyzers and thus are often made unsound in practice (e.g., [2], [3], [4], [5], [6]). In the analysis, loops are unrolled a fixed number of times and library calls are treated as skips. Our aim is to selectively unroll and ignore loops and library calls, respectively, only when doing so is harmless.

Algorithm 1 Training data generation
```
1:  T := ∅
2:  for all (P_i, B_i) ∈ P do
3:      A_i := F(P_i, 1)
4:      (A_t, A_f) := (A_i ∩ B_i, A_i \ B_i)
5:      for all j ∈ J_i do
6:          A_i' := F(P_i, 1 \ {j})
7:          (A_t', A_f') := (A_i' ∩ B_i, A_i' \ B_i)
8:          if |A_t| = |A_t'| ∧ |A_f'| < |A_f| then
9:              T := T ∪ {f(j)}
10:         end if
11:     end for
12: end for
```

$$F_\pi(L := E,\, s) = s\{L(L, s) \mapsto V(E, s)\}$$
$$F_\pi(C_1 ; C_2,\, s) = F_\pi(C_2,\, F_\pi(C_1, s))$$
$$F_\pi(\text{if } C_1\ C_2,\, s) = F_\pi(C_1, s) \sqcup F_\pi(C_2, s)$$
$$F_\pi(\text{while}_l\ E\ C,\, s) = \begin{cases} \mathsf{fix}(\lambda X.\, s \sqcup F_\pi(C, X)) & \text{if } l \in \pi \\ F_\pi(C, s) & \text{otherwise} \end{cases}$$
$$F_\pi(L := \text{lib}_l(),\, s) = \begin{cases} s\{L(L, s) \mapsto \top\} & \text{if } l \in \pi \\ s & \text{otherwise} \end{cases}$$

Fig. 2. Static analysis selectively unsound for loops and library calls.
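Algorithm 1 can be transcribed almost line for line. The sketch below does so; the analyzer `F`, the component set `J`, and the feature map `f` are toy stand-ins (assumptions for illustration only), with per-component alarm effects chosen to mimic the two loops of Figure 1.

```python
# Transcription of Algorithm 1 (training data generation). F is a
# hypothetical analyzer returning the alarm set of a program under a
# soundness parameter pi (a set of soundly analyzed components).

def generate_training_data(codebase, F, J, f):
    """codebase: list of (P_i, B_i) pairs; J(P): components of P;
    f(j): feature vector of component j. Returns the training set T."""
    T = set()
    for P_i, B_i in codebase:                        # line 2
        A = F(P_i, J(P_i))                           # line 3: fully sound run
        At, Af = A & B_i, A - B_i                    # line 4: true/false split
        for j in J(P_i):                             # line 5
            A2 = F(P_i, J(P_i) - {j})                # line 6: unsound for j
            At2, Af2 = A2 & B_i, A2 - B_i            # line 7
            if len(At) == len(At2) and len(Af2) < len(Af):   # line 8
                T.add(f(j))                          # line 9: j is harmless
    return T

# Toy instantiation: one program with two loops (cf. Figure 1).
def F(P, pi):
    alarms = set()
    if "loop1" in pi: alarms.add("fa1")   # sound loop1 -> a false alarm
    if "loop2" in pi: alarms.add("ta2")   # sound loop2 -> a true alarm
    return alarms

J = lambda P: {"loop1", "loop2"}
f = lambda j: (1, 0) if j == "loop1" else (0, 1)
bugs = {"ta2"}

print(generate_training_data([("P1", bugs)], F, J, f))  # -> {(1, 0)}
```

Only `loop1` satisfies the line-8 condition (same true alarms, fewer false alarms), so only its feature vector enters the training set.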
We present two instances of the analysis, one for an interval analysis and the other for a taint analysis. The interval analysis is used to find potential buffer-overflow errors, and the taint analysis is for detecting format string vulnerabilities (i.e., uses of untrusted inputs as format strings). In Section IV-A, we define the generic analysis with features. Section IV-B and Section IV-C present the two instances, namely the interval analysis and the taint analysis.

A. A Generic, Selectively Unsound Static Analysis

1) Abstract Semantics: We consider a family of static analyses whose soundness is parametric for loops and library calls. Consider the following simple imperative language:

$$C \rightarrow L := E \mid C_1 ; C_2 \mid \text{if } C_1\ C_2 \mid \text{while}_l\ E\ C \mid L := \text{lib}_l()$$
$$E \rightarrow n \mid L \mid \text{alloc}_l(E) \mid \&L \mid E_1 + E_2$$
$$L \rightarrow x \mid *E \mid E_1[E_2]$$

A command is an assignment, a sequence, an if statement, a while statement, or a call to an unknown library function. In the program, loops and library calls are labelled, and the set of labels forms $J$ in Section III-A. The parameter space is the set of all subsets of program labels, i.e., $\wp(J)$. We assume that labels in the program are all distinct. An expression is an integer ($n$), an $l$-value expression ($L$), an array allocation ($\text{alloc}_l(E)$) where $E$ is the size of the array to be allocated and $l$ is the label for the allocation site, an address-of expression ($\&L$), or a compound expression ($E_1 + E_2$). An $l$-value expression is a variable ($x$), a pointer dereference ($*E$), or an array access expression ($E_1[E_2]$). The abstract semantics of the analysis is defined in Figure 2. The analysis is parameterized by $\pi \in \wp(J)$, a set of labels, and is unsound for loops and library calls not included in $\pi$.
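The loop and library-call cases of Figure 2 can be prototyped in a few lines. In the sketch below, the flat constant domain (a value is an integer or `TOP`) is an assumption chosen only to keep the fixpoint finite; it is not the interval domain of the actual instances.

```python
# Prototype of the selectively unsound semantics of Fig. 2 over a tiny
# flat constant domain (value = int or TOP). States map variables to
# abstract values; this simplified domain is for illustration only.

TOP = "TOP"

def join_val(a, b):
    return a if a == b else TOP

def join(s1, s2):
    return {x: join_val(s1.get(x, s2.get(x)), s2.get(x, s1.get(x)))
            for x in set(s1) | set(s2)}

def eval_exp(e, s):
    if isinstance(e, int): return e
    if isinstance(e, str): return s.get(e, TOP)      # variable
    op, a, b = e                                     # ('+', e1, e2)
    va, vb = eval_exp(a, s), eval_exp(b, s)
    return TOP if TOP in (va, vb) else va + vb

def F(cmd, s, pi):
    kind = cmd[0]
    if kind == "assign":                 # ('assign', x, e)
        _, x, e = cmd
        return {**s, x: eval_exp(e, s)}
    if kind == "seq":                    # ('seq', c1, c2)
        return F(cmd[2], F(cmd[1], s, pi), pi)
    if kind == "while":                  # ('while', label, body)
        _, label, body = cmd
        if label in pi:                  # sound: fixpoint of X -> s ⊔ F(body, X)
            x = s
            while True:
                nxt = join(s, F(body, x, pi))
                if nxt == x: return x
                x = nxt
        return F(body, s, pi)            # unsound: unroll once
    if kind == "lib":                    # ('lib', label, x)
        _, label, x = cmd
        return {**s, x: TOP} if label in pi else s
    raise ValueError(kind)

prog = ("while", "l1", ("assign", "i", ("+", "i", 1)))
print(F(prog, {"i": 0}, pi={"l1"}))   # sound: i joins 0, 1, ... -> TOP
print(F(prog, {"i": 0}, pi=set()))    # unsound, unrolled once: i = 1
```

With `l1` in the parameter, the fixpoint joins the incremented values of `i` up to `TOP`; with `l1` excluded, the body runs exactly once, mirroring the two branches of the `while` case in Figure 2.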
The abstract semantics is defined by the semantic function $F_{\pi} : C \times S \to S$, where $S$ is the domain of abstract states mapping abstract locations to abstract values, i.e., $S = L \to V$. The analysis is generic in that the abstract locations ($L$) and values ($V$) are left unspecified. They will be given for each analysis instance in the subsequent subsections. We assume that the abstract domain is accompanied by two functions $L : L \times S \to \wp(L)$ and $V : E \times S \to V$, which compute the abstract locations and values of given $l$-value and $r$-value expressions, respectively. The abstract semantics is standard except for the selective treatment of soundness. For a loop statement ($\text{while}_l\ E\ C$), the analysis applies the usual (sound) fixed point computation (fix is a pre-fixpoint operator) when the label $l$ is included in the parameter $\pi$. When a loop is not included in $\pi$, the analysis ignores the loop and executes the body $C$ only once (i.e., unrolls the loop once). For unknown library calls, the analysis conservatively updates the return location $L$ when $l$ is chosen, i.e., $l \in \pi$. Otherwise, we completely ignore the effect of the library call. Thus, $\pi$ determines how soundly we analyze the program with respect to loops and unknown library calls. For instance, when $\pi = \emptyset$, the analysis is completely unsound and ignores all of the loops and library calls in the program.

2) Features: We have designed a set of features for loops and library calls, which can be used for instantiating the generic analysis above. We examined open-source C programs and identified 37 features (Figure 5) that describe common characteristics of loops and library calls in typical C programs. The features are classified into syntactic and semantic features. A syntactic feature describes a property that can be checked by a simple syntax analysis.
For example, a syntactic feature characterizes loops whose conditional expressions involve constant values, or library calls whose return type is an integer. A semantic feature describes a property that requires a (yet simple) data-flow analysis. For instance, a semantic feature for loops describes that the loop condition involves a value originating from an external input, as in:

```c
C = input(); // external input
```

We designed these features with generality in mind so that they can be reused for different analyses as much as possible. Note that the features in Figure 5 are not dependent on a particular static analysis, but describe rather general syntactic and semantic program properties. We use the same set of features for the interval and taint analyses and show that we can effectively tune the soundness of both analyses with this single set of features, as shown in Section V.

B. Instantiation 1: Interval Analysis

We first instantiate the generic analysis with the interval domain and use it to find potential buffer-overflow errors in the program. The generic analysis left out the definitions of abstract locations (L), abstract values (V), and the evaluation functions for them (L and V). These definitions for the interval analysis are given in Figure 3. An abstract location is either a variable or an allocation site. An abstract value is a tuple of an interval (I), which is an abstraction of a set of numeric values, a points-to set (℘(L)), and a set of abstract arrays (℘(A)). An abstract array (a, o, s) has an abstract location (a ∈ L), an offset (o ∈ I), and a size (s ∈ I).
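Given these definitions, the buffer-overflow check of the interval instance can be sketched with plain interval pairs. This is a simplification for illustration (offsets are ignored and intervals are finite `(lo, hi)` tuples); it is not the full abstract semantics of Figure 3.

```python
# Sketch of the interval instance's buffer-overflow check: an alarm is
# raised when the index may reach or exceed the array size. Intervals are
# (lo, hi) pairs; offsets are omitted for simplicity.

def may_overflow(idx, size):
    """Alarm iff some index value can be >= some size value, or the index
    may be negative, i.e. the access arr[idx] may fall outside the array."""
    idx_lo, idx_hi = idx
    size_lo, size_hi = size
    return idx_hi >= size_lo or idx_lo < 0

# An array of size [5, 10] accessed with an index of interval [3, 7]:
print(may_overflow((3, 7), (5, 10)))   # True: size may be 5 while index is 7
print(may_overflow((0, 4), (5, 10)))   # False: the index is always < size
```

The first call reproduces the situation where an alarm must be raised because the concrete size may be 5 while the concrete index is 7.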
The evaluation function L takes an l-value expression and an abstract state, and computes the set of abstract locations that the l-value denotes. The function V(E, s) evaluates to the abstract value of E under s. In the definitions, we write V(E, s).n for the nth component of the abstract value V(E, s). The analysis reports a buffer-overflow alarm when the index of an array can reach or exceed its size according to the analysis results. For example, consider an expression arr[idx]. Suppose the analysis concludes that arr has an abstract array (l, [0, 0], [5, 10]) (i.e., an array of size [5, 10]) and the interval value of idx is [3, 7]. The analysis raises an alarm at the array expression because the index value may exceed the size of the array (e.g., when the size is 5 and the index is 7).

C. Instantiation 2: Taint Analysis

The second instance is a taint analysis for detecting format string vulnerabilities in C programs. The abstract domain and semantics are given in Figure 4. The analysis combines a taint analysis and a pointer analysis, and therefore an abstract location is still either a variable or an allocation site. An abstract value is a tuple of a taint value and a points-to set. The taint domain consists of two abstract values: ⊤ is used to indicate that the value is tainted and ⊥ represents untainted values. For simplicity, we model taint sources by a particular set T ⊆ ℤ of integers; a constant integer n generates the taint value ⊤ if n ∈ T. In an actual implementation, tainted values are produced by function calls that receive user input, such as fgets. The analysis reports an alarm whenever a taint value is involved in a format string parameter of a function.

V. Experiments

We empirically show the effectiveness of our approach on selectively applying unsoundness only to harmless program components.
We design the experiments to address the following questions:

- **Effectiveness of Our Approach**: How much better is the selectively unsound analysis than the fully sound or fully unsound analyses?
- **Efficacy of OC-SVM**: Does the one-class classification algorithm outperform two-class classification algorithms?
- **Feature Design**: How should we choose a set of features to effectively predict harmless program components?
- **Time Cost**: How does our technique affect the cost of analysis?

A. Setting

1) Implementation: We have implemented our method on top of a static analyzer for full C. It is a generic analyzer that tracks numeric, pointer, array, and string values with flow-, field-, and context-sensitivity. The baseline analyzer is unsound by design to achieve a precise bug-finder; it ignores complex operations (e.g., bitwise operations and weak updates) and filters out reported alarms that are unlikely to be true. We modified the baseline analyzer and created two instance analyzers, an interval analysis and a taint analysis, as described in Section IV. For each analysis, we built a fully sound version (BASELINE), a uniformly unsound version (UNIFORM), and a selectively unsound version (SELECTIVE) with respect to the soundness parameter in Section IV. In the interval analysis for buffer-overflow errors, UNIFORM is uniformly unsound for every loop and library call, and SELECTIVE is selectively unsound for them. In the taint analysis for format string vulnerabilities, UNIFORM is uniformly unsound for all the library calls (but not for loops), and SELECTIVE is selectively unsound for them. To implement the OC-SVM classifier, we used the scikit-learn machine-learning package [11] with the default setting of the algorithm (specifically, we used the radial basis function (RBF) kernel with γ = 0.1 and ν = 0.1).

2) Benchmark: Our experiments were performed on 36 programs whose buggy program points are known.
They are programs from open-source software packages or from previous work on static analysis evaluation [12], [13]. Tables I and II list the benchmark programs for the interval and the taint analysis, respectively. SM-X, BIND-X, and FTP-X are model programs from [12], which contain buffer overflow vulnerabilities. Most of the bugs in the benchmarks were reported as critical vulnerabilities by authorities such as CVE [14]. In total, our benchmark programs have 138 real buffer-overflow bugs and 106 real format string bugs.

### B. Effectiveness of Our Approach

We evaluate the effectiveness of our approach by comparing the precision of SELECTIVE to that of the other analyzers, BASELINE and UNIFORM. We use cross-validation, a model validation technique for assessing how the results of a statistical analysis will generalize to new data. We show the results from three types of cross-validation: leave-one-out, 2-fold, and 3-fold cross-validation.

1) Leave-one-out Cross-validation: This is one of the most common types of cross-validation, which uses one observation as the validation set and the remaining observations as the training set. In the case of the interval analysis, for example, among the 23 benchmark programs, one program is used for validating and measuring the effectiveness of the learned model, and the remaining 22 programs are used for training. Table I shows the results of the leave-one-out cross-validation for the interval analysis. We measured the number of true (T) and false (F) alarms from BASELINE, UNIFORM, and SELECTIVE. In terms of true alarms, BASELINE detects 118 real bugs (FNR: 14.5%) in the programs. While UNIFORM detects only 33 bugs (FNR: 76.1%), SELECTIVE effectively de-

![Fig. 5.
Features for typical loops and library calls in C programs](image)

<table>
<thead>
<tr> <th>Target</th> <th>Feature</th> <th>Property</th> <th>Type</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Library</td> <td>Const</td> <td>Syntactic</td> <td>Binary</td> <td>Whether the parameters contain constants or not</td> </tr>
<tr> <td></td> <td>Void</td> <td>Syntactic</td> <td>Binary</td> <td>Whether the return type is void or not</td> </tr>
<tr> <td></td> <td>Int</td> <td>Syntactic</td> <td>Binary</td> <td>Whether the return type is int or not</td> </tr>
<tr> <td></td> <td>CString</td> <td>Syntactic</td> <td>Binary</td> <td>Whether the function is declared in string.h or not</td> </tr>
<tr> <td></td> <td>InsideLoop</td> <td>Syntactic</td> <td>Binary</td> <td>Whether the function is called in a loop or not</td> </tr>
<tr> <td></td> <td>#Args</td> <td>Syntactic</td> <td>Numeric</td> <td>The (normalized) number of arguments</td> </tr>
<tr> <td></td> <td>DefParam</td> <td>Semantic</td> <td>Binary</td> <td>Whether a parameter is defined in a loop or not</td> </tr>
<tr> <td></td> <td>UseRet</td> <td>Semantic</td> <td>Binary</td> <td>Whether the return value is used in a loop or not</td> </tr>
<tr> <td></td> <td>UptParam</td> <td>Semantic</td> <td>Binary</td> <td>Whether a parameter is updated via the library call</td> </tr>
<tr> <td></td> <td>Escape</td> <td>Semantic</td> <td>Binary</td> <td>Whether the return value escapes the caller</td> </tr>
<tr> <td></td> <td>GVar</td> <td>Semantic</td> <td>Binary</td> <td>Whether global variables are accessed in the loop</td> </tr>
<tr> <td></td> <td>FinInterval</td> <td>Semantic</td> <td>Binary</td> <td>Whether a variable has a finite interval value in the loop</td> </tr>
<tr> <td></td> <td>FinArray</td> <td>Semantic</td> <td>Binary</td> <td>Whether a variable has a finite size of array in the loop</td> </tr>
<tr> <td></td> <td>LSize</td> <td>Semantic</td> <td>Binary</td> <td>Whether a variable has an array of which the size is a left-closed interval</td> </tr>
<tr> <td></td> <td>LOffset</td> <td>Semantic</td> <td>Binary</td> <td>Whether a variable has an array of which the offset is a left-closed interval</td> </tr>
<tr> <td></td> <td>#AbsLoc</td> <td>Semantic</td> <td>Numeric</td> <td>The (normalized) number of abstract locations accessed in the loop</td> </tr>
<tr> <td></td> <td>#ArgString</td> <td>Semantic</td> <td>Numeric</td> <td>The (normalized) number of string arguments</td> </tr>
</tbody>
</table>

**Table I**
<table>
<thead> <tr> <th>Program</th> <th>LOC</th> <th>Bug</th> <th>BASELINE</th> <th>SELECTIVE</th> <th>UNIFORM</th> </tr> </thead> <tbody> <tr> <td>SM-1</td> <td>0.5K</td> <td>28</td> <td>18</td> <td>15</td> <td>5</td> </tr> <tr> <td>SM-2</td> <td>0.8K</td> <td>2</td> <td>16</td> <td>4</td> <td>0</td> </tr> <tr> <td>SM-3</td> <td>0.7K</td> <td>3</td> <td>3</td> <td>3</td> <td>0</td> </tr> <tr> <td>SM-4</td> <td>0.7K</td> <td>10</td> <td>10</td> <td>6</td> <td>6</td> </tr> <tr> <td>SM-5</td> <td>1.7K</td> <td>3</td> <td>3</td> <td>6</td> <td>0</td> </tr> <tr> <td>SM-6</td> <td>0.4K</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>SM-7</td> <td>1.1K</td> <td>2</td> <td>2</td> <td>32</td> <td>0</td> </tr> <tr> <td>BIND-1</td> <td>1.2K</td> <td>1</td> <td>1</td> <td>35</td> <td>1</td> </tr> <tr> <td>BIND-2</td> <td>1.7K</td> <td>1</td> <td>1</td> <td>45</td> <td>0</td> </tr> <tr> <td>BIND-3</td> <td>0.5K</td> <td>1</td> <td>1</td> <td>4</td> <td>0</td> </tr> <tr> <td>BIND-4</td> <td>1.1K</td> <td>2</td> <td>2</td> <td>0</td> <td>0</td> </tr> <tr> <td>FTP-1</td> <td>0.8K</td> <td>4</td> <td>4</td> <td>13</td> <td>4</td> </tr> <tr> <td>FTP-2</td> <td>1.5K</td> <td>1</td> <td>1</td> <td>7</td> <td>1</td> </tr> <tr> <td>FTP-3</td> <td>1.5K</td> <td>24</td> <td>24</td> <td>25</td> <td>17</td> </tr> <tr> <td>polymorph-0.4.0</td> <td>0.7K</td> <td>10</td> <td>10</td> <td>6</td> <td></td> </tr> <tr> <td>ncompress-4.2.4</td> <td>1.9K</td> <td>12</td> <td>0</td> <td>10</td> <td></td> </tr> <tr> <td>129.compress</td> <td>2.0K</td> <td>7</td> <td>7</td> <td>34</td> <td></td> </tr> <tr> <td>spell-1.0</td> <td>2.2K</td> <td>1</td> <td>0</td> <td>0</td> <td></td> </tr> <tr> <td>man-1.5h1</td> <td>4.7K</td> <td>6</td> <td>5</td> <td></td> <td></td> </tr> <tr> <td>gzip-1.2.4a</td> <td>4.9K</td> <td>3</td> <td>3</td> <td></td> <td></td> </tr> <tr> <td>bc-1.06</td> <td>17.0K</td> <td>2</td> <td>0</td> <td>57</td> <td></td> </tr> <tr> <td>sed-4.0.8</td> <td>25.9K</td> <td>1</td> 
<td>4</td> </tr> </tbody> </table>

| Total | 138 | 118 | 677 | 100 | 264 |

**Table I** The number of alarms in interval analysis

it is still insignificant since both of them are much more imprecise than our system.

D. Feature Design

1) Winning Features: The learned classifier tells us which features are most useful for learning harmless unsoundness. The features we used capture general characteristics of harmless program components. In order to determine the ordering of features, we used information gain, which is the expected reduction in entropy when a feature is used to classify training examples (in classification, low entropy, i.e., pure data, is preferred) [16]. The results show that harmless loops tend to have pointers iteratively accessing (PtrInc) arrays (Array) or strings (FinString), with loop conditions that compare array contents with null (Null) or constant values (Const). These features collectively describe loops like the first example loop in Section II. The results also show that most harmless library calls for the interval analysis return integer values (Int) and manipulate strings (CString). This is because our interval analyzer aggressively abstracts string values, so unsound treatment of string libraries (e.g., strlen, strcat) is likely to improve the analysis precision. For the taint analysis, the results show that library calls with fewer arguments (#Args) and abstract locations (#AbsLoc) (e.g., random, strlen) are likely to be irrelevant to the propagation of user inputs compared to ones with more arguments (e.g., fread, recv).

2) Different Feature Sets: We measured the performance of the classifier with fewer features in three ways: 1) with syntactic features only; 2) with semantic features only; and 3) with a randomly-chosen half of the features.
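The information-gain ranking used for the winning features above is a standard computation: the entropy of the labels minus the weighted entropy after splitting on one feature. Below is a minimal sketch with invented toy data (the real ranking is over the training examples of Section III).

```python
# Sketch of ranking features by information gain: the expected reduction
# in entropy when the examples are split on a binary feature.
from math import log2

def entropy(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def info_gain(examples, labels, i):
    """examples: feature vectors; labels: 1 = harmless, 0 = harmful;
    i: index of the binary feature to evaluate."""
    total = entropy(labels)
    for value in (0, 1):
        subset = [l for x, l in zip(examples, labels) if x[i] == value]
        total -= len(subset) / len(labels) * entropy(subset)
    return total

# Toy data: feature 0 separates the labels perfectly; feature 1 does not.
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
print(info_gain(X, y, 0))   # -> 1.0 (maximal gain)
print(info_gain(X, y, 1))   # -> 0.0 (no gain)
```

Features are then ordered by decreasing gain, so a perfectly separating feature like feature 0 here ranks first.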
For the interval analyzer, the classifier learned with only syntactic features reported 1% more bugs but 26% more false alarms than the classifier with all features; the classifier with only semantic features reported 1% more false alarms and missed 41% more bugs; and the classifier with half of the features reported 17% more false alarms and missed 1% more bugs on average.

E. Time Cost

We measured how long each analysis takes on our benchmark programs and compared it with the time our selectively unsound analysis takes. For the benchmark programs in Table I, the sound interval analysis BASELINE took 42.1 seconds to analyze all the listed programs; UNIFORM took only 27.7 seconds, reducing the total time by 14.4 seconds (34.2%). SELECTIVE took 33.8 seconds, reducing the total time by 8.3 seconds (19.7%). RANDA and RANDB took longer than SELECTIVE: 35.4 and 37.5 seconds, respectively. In summary, SELECTIVE takes less time than BASELINE, RANDA, and RANDB.

F. Discussion

As shown in the experiments, our technique may miss some true alarms which can be detected by the fully sound analysis, or fail to avoid some false alarms which are not reported by the fully unsound analysis. In this section, we discuss why these limitations occur and how to overcome them.

1) Remaining False Alarms: Compared to the fully unsound analysis, our technique reports more false alarms. This is mainly because reporting these false alarms is inevitable in order to detect true alarms. Consider the following example from SM-5:

```
size = 10 + positive_input();
arr = malloc(size);
for(i = 0; i < size; i++){
    arr[i] = ...   // buffer access 1
    arr[i+1] = ... // buffer access 2
}
```

By soundly analyzing the loop, the analysis reports an alarm for the buffer-overflow bug at buffer access 2 at the cost of a false alarm at buffer access 1. The unsound analysis removes the false alarm, but it also fails to report the true alarm.
Our selective method may decide to analyze such a loop soundly in order to detect the bug, even though reporting the false alarm is inevitable. We found that these inevitable false alarms are the primary reason why SELECTIVE reports more false alarms than UNIFORM. For example, when analyzing SM-5 in our benchmark programs, five of the six false alarms are inevitable. In order to remove such false alarms in a harmless way, we would need a more fine-grained parameter space for soundness so that we can apply different degrees of soundness to different statements in a single loop, which would be an interesting future direction to investigate.

2) Missing True Alarms: Compared to the fully sound analysis, our technique reports fewer true alarms. This is mainly because some bugs are involved in typically-harmless loops. Consider the following code snippet from man-1.5h1:

```
char arr[10] = "string";
size = positive_input();
for(i = 0; i < size; i++)
    skip;
arr[i] = 0;                 // buffer access 1
for(i = 0; !arr[i]; i++)    // buffer access 2
    skip;
```

The two buffer access expressions both contain buffer overflow bugs. However, our technique detects the first bug, but not the second, because it classifies the second loop as harmless: it has learned that loops that iterate over constant strings are likely to be harmless. However, we found that most of the missed bugs share root causes with other bugs detected by our technique. For instance, in the above example, fixing the first bug (buffer access 1) automatically fixes the second one. In our case, therefore, missing true alarms is in fact not a serious drawback in terms of practicality.

VI. RELATED WORK

A. Unsoundness in Static Analysis

Existing unsound static analyses are all uniformly unsound (e.g., [2], [3], [4], [5], [6]).
In addition to their unsound handling of every loop and library call in a given program, they consider only a specific branch of all conditional statements [2], deactivate all recursive calls [5], [3], or ignore all possible inter-procedural aliasing [2], [5], [3]. As shown in this paper, these uniform approaches have a considerable drawback: they significantly impair the capability of detecting real bugs. This paper is the first to tackle this problem, presenting a novel approach of selectively employing unsoundness only when it is likely to be harmless.

Mangal et al. proposed an interactive system that controls the unsoundness of static analysis online based on user feedback [17]. They define a probabilistic Datalog analysis with "hard" and (unsound) "soft" rules, where the goal of the analysis is to find a solution that satisfies all of the hard rules while maximizing the weight of the satisfied soft rules. The feedback from analysis users is encoded as soft rules; based on the feedback, the analysis is re-run and produces a report that optimizes the updated constraints. In our (non-Datalog) setting, however, it is not straightforward to tune the unsoundness from user feedback. Instead, our approach automatically learns harmless unsoundness and selectively applies unsound strategies depending on the circumstances.

Our goal is different from the existing work on unsoundness by Christakis et al. [18], which empirically evaluated the impact of unsoundness in a static analyzer using runtime checking. They instrumented programs with the unsound assumptions of the analyzer and checked whether the assumptions are violated at runtime. By contrast, we introduce a new notion of selective unsoundness and evaluate its impact in terms of the number of true alarms and false alarms reported.

B.
Parametric Static Analysis

Our work uses a parametric static analysis in a novel way: the parameters specify the degree of soundness, not the precision setting of the analysis. Existing parametric static analyses have focused on balancing the precision and the cost of static analysis [19], [20], [21], [10]. They infer a cost-effective abstraction for a newly given program by iterative refinements [19], [20], impact pre-analyses [21], or learning from a codebase [10]. Our goal, on the other hand, is to find a soundness parameter striking the right balance between the existing fully sound and fully unsound approaches. Furthermore, the existing techniques for deriving static analysis parameters (e.g., [19], [20], [21]) cannot be used for our purpose, since it is simply impossible to automatically judge the truth or falsehood of alarms. We address this problem by designing a supervised learning method that learns a strategy from a given codebase with known bugs. Because we have labelled data, using a learning algorithm via black-box optimization [10] is inappropriate. Instead, we use an off-the-shelf learning method that relies on gradient-based optimization and works much faster than the black-box approach.

C. Statistical Alarm Filtering

Our approach is orthogonal to statistical post-processing of alarms [22], [23], [24]. The post-processing (e.g., ranking) approach aims to remove false positives (reported false alarms); our approach instead aims to remove false negatives (unreported true alarms). Starting from the undiscerning, uniformly unsound analysis, which would leave too many true alarms unreported, we tune it to be selectively unsound. These post-processing systems are also complementary to our approach. Because in practice any realistic bug-finding static analyzer cannot help being unsound (for the sake of precision and scalability), our technique provides a guide on how to design an unsound one. The existing post-processing techniques (e.g.
ranking) can then be applied to the results from such selectively unsound static analyzers.

VII. CONCLUSION

In this paper, we presented a novel approach for selectively employing unsoundness in static analysis. Unlike existing uniformly unsound analyses, our technique applies unsoundness only when it is likely to be harmless (i.e., in a way that reduces the number of false alarms while retaining true alarms). We proposed a learning-based method for automatically tuning the soundness of static analysis in such a harmless way. The experimental results showed that the technique is effectively applicable to two bug-finding static analyzers and reduces their false-negative rates while retaining their original precision.

ACKNOWLEDGMENT

We thank Jonggwon Kim and Woosuk Lee for their implementation of the taint analysis, and Mina Lee for comments on an early version of the paper. This work was partly supported by Samsung Electronics, Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-07), and Institute for Information & communications Technology Promotion (IITP) grants funded by the Korea government (MSIP) (No. B0717-16-0098, Development of homomorphic encryption for DNA analysis and biometry authentication, and No. R0190-16-2011, Development of Vulnerability Discovery Technologies for IoT Software Security). This work was also supported by BK21 Plus for Pioneers in Innovative Computing (Dept. of Computer Science and Engineering, SNU) funded by the National Research Foundation of Korea (NRF) (21A20151113068) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1C1B2014062).

REFERENCES
A Level-based Approach to Computing Warranted Arguments in Possibilistic Defeasible Logic Programming

Teresa Alsinet, Carlos Chesñevar and Lluís Godo

Abstract. Possibilistic Defeasible Logic Programming (P-DeLP) is an argumentation framework based on logic programming which incorporates a treatment of possibilistic uncertainty at the object-language level. In P-DeLP, the closure of justified conclusions is not always consistent, which has been detected to be an anomaly in the context of so-called rationality postulates for rule-based argumentation systems. In this paper we present a novel level-based approach to computing warranted arguments in P-DeLP which ensures the above rationality postulate. We also show that our solution presents some advantages in comparison with the use of a transposition operator applied to strict rules.

Keywords. Formal models of argument, Possibilistic logic, Rationality postulates for argumentation.

1. Introduction and motivation

Possibilistic Defeasible Logic Programming (P-DeLP) [1] is an argumentation framework based on logic programming which incorporates the treatment of possibilistic uncertainty at the object-language level. Indeed, P-DeLP is an extension of Defeasible Logic Programming (DeLP) [9], a logic programming approach to argumentation which has been successfully used to solve real-world problems in several contexts such as knowledge distribution [3] and recommendation systems [8], among others. As in the case of DeLP, the P-DeLP semantics is skeptical, based on a query-driven proof procedure which computes warranted (justified) arguments. Following the terminology used in [5], P-DeLP can be seen as a member of the family of rule-based argumentation systems, as it is based on a logical language defined over a set of (weighted) literals and the notions of strict and defeasible rules, which are used to characterize a P-DeLP program.
Recently Caminada & Amgoud have defined several rationality postulates [5] which every rule-based argumentation system should satisfy. One such postulate (called Indirect Consistency) requires that the closure of warranted conclusions be guaranteed to be consistent. Failing to satisfy this postulate implies some anomalies and unintuitive results (e.g. the modus ponens rule cannot be applied based on justified conclusions). A number of rule-based argumentation systems are identified in which such a postulate does not hold (including DeLP [9] and Prakken & Sartor [11], among others). As a solution to this problem, the use of transposed rules is proposed to extend the representation of strict rules. For grounded semantics, the use of a transposition operator ensures that all rationality postulates are satisfied [5, pp.294]. In this paper we present a novel level-based approach to computing warranted arguments in P-DeLP which ensures the above rationality postulate without requiring the use of transposed rules. Additionally, in contrast with DeLP and other argument-based approaches, we do not require the use of dialectical trees as underlying structures for characterizing our proof procedure. We show that our solution presents some advantages in comparison with the use of a transposition operator applied to strict rules, which might lead to unintuitive results in some cases. In particular, we show that adding transposed rules can turn a valid (consistent) P-DeLP program into an inconsistent one, disallowing further argument-based inferences on the basis of such a program. The rest of the paper is structured as follows. Section 2 summarizes the main elements of P-DeLP. Section 3 discusses the role of the rationality postulate of indirect consistency introduced in [4], and the solution provided in terms of a transposition operator $Cl_{tp}$. We also show some aspects of this approach which may be problematic in P-DeLP.
Section 4 presents our new level-based definitions of warrant for P-DeLP, as well as some illustrative examples. We also show that this characterization ensures that the above postulate can now be satisfied without requiring the use of transposed rules or the computation of dialectical trees. Finally, Section 5 discusses some related work and concludes.

2. Argumentation in P-DeLP: an overview

In order to make this paper self-contained, we will next present the main definitions that characterize the P-DeLP framework. For details the reader is referred to [1]. The language of P-DeLP is inherited from the language of logic programming, including the usual notions of atom, literal, rule and fact, but over an extended set of atoms where a new atom "$\sim p$" is added for each original atom $p$. Therefore, a literal in P-DeLP is either an atom $p$ or a (negated) atom of the form $\sim p$, and a goal is any literal. A weighted clause is a pair of the form $(\varphi, \alpha)$, where $\varphi$ is a rule $Q \leftarrow P_1 \land \ldots \land P_k$ or a fact $Q$ (i.e., a rule with empty antecedent), where $Q, P_1, \ldots, P_k$ are literals, and $\alpha \in [0, 1]$ expresses a lower bound for the necessity degree of $\varphi$. We distinguish between certain and uncertain clauses. A clause $(\varphi, \alpha)$ is referred to as certain if $\alpha = 1$ and uncertain otherwise.
A set of P-DeLP clauses $\Gamma$ will be deemed contradictory, denoted $\Gamma \vdash \perp$, if, for some atom $q$, $\Gamma \vdash (q, \alpha)$ and $\Gamma \vdash (\sim q, \beta)$, with $\alpha > 0$ and $\beta > 0$, where $\vdash$ stands for deduction by means of the following particular instance of the generalized modus ponens rule:

$$ \frac{(Q \leftarrow P_1 \land \ldots \land P_k, \alpha) \quad (P_1, \beta_1), \ldots, (P_k, \beta_k)}{(Q, \min(\alpha, \beta_1, \ldots, \beta_k))} \quad \text{[GMP]} $$

Formally, we will write $\Gamma \vdash (Q, \alpha)$, where $\Gamma$ is a set of PGL clauses, $Q$ is a literal and $\alpha > 0$, when there exists a finite sequence of PGL clauses $C_1, \ldots, C_m$ such that $C_m = (Q, \alpha)$ and, for each $i \in \{1, \ldots, m\}$, either $C_i \in \Gamma$, or $C_i$ is obtained by applying the GMP rule to previous clauses in the sequence. A P-DeLP program $\mathcal{P}$ (or just program $\mathcal{P}$) is a pair $(\Pi, \Delta)$, where $\Pi$ is a non-contradictory finite set of certain clauses, and $\Delta$ is a finite set of uncertain clauses. Formally, given a program $\mathcal{P} = (\Pi, \Delta)$, we say that a set $\mathcal{A} \subseteq \Delta$ of uncertain clauses is an argument for a goal $Q$ with necessity degree $\alpha > 0$, denoted $\langle \mathcal{A}, Q, \alpha \rangle$, iff: 1. $\Pi \cup \mathcal{A}$ is non-contradictory; 2. $\alpha = \max\{\beta \in [0, 1] \mid \Pi \cup \mathcal{A} \vdash (Q, \beta)\}$, i.e. $\alpha$ is the greatest degree of deduction of $Q$ from $\Pi \cup \mathcal{A}$; 3. $\mathcal{A}$ is minimal wrt set inclusion, i.e. there is no $\mathcal{A}_1 \subset \mathcal{A}$ such that $\Pi \cup \mathcal{A}_1 \vdash (Q, \alpha)$.
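The GMP rule and the contradiction check can be sketched as a forward-chaining procedure over weighted clauses. The tuple encoding `(head, body, alpha)` and the "~" prefix for negated atoms below are illustrative choices, not the paper's representation.

```python
def derive(clauses):
    """Forward-chain the GMP rule: propagate min(alpha, beta_1,...,beta_k)
    and keep the greatest necessity degree derived for each literal."""
    degree = {}
    changed = True
    while changed:
        changed = False
        for head, body, alpha in clauses:
            if all(p in degree for p in body):    # facts have empty body
                d = min([alpha] + [degree[p] for p in body])
                if d > degree.get(head, 0.0):
                    degree[head] = d
                    changed = True
    return degree

def contradictory(clauses):
    """Gamma |- bottom iff some q and ~q are both derivable."""
    degree = derive(clauses)
    return any(q.startswith("~") and q[1:] in degree for q in degree)
```

For instance, from `(q <- p ∧ r, 0.8)`, `(p, 1.0)`, and `(r, 0.6)`, the procedure derives `q` with degree `min(0.8, 1.0, 0.6) = 0.6`.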
Moreover, if $\langle\mathcal{A}, Q, \alpha\rangle$ and $\langle\mathcal{S}, R, \beta\rangle$ are two arguments wrt a program $\mathcal{P} = (\Pi, \Delta)$, we say that $\langle\mathcal{S}, R, \beta\rangle$ is a subargument of $\langle\mathcal{A}, Q, \alpha\rangle$, denoted $\langle \mathcal{S}, R, \beta \rangle \subseteq \langle \mathcal{A}, Q, \alpha \rangle$, whenever $\mathcal{S} \subseteq \mathcal{A}$. Notice that the goal $R$ may be any subgoal associated with the goal $Q$ in the argument $\mathcal{A}$. From the above definition of argument, note that if $\langle\mathcal{S}, R, \beta\rangle \subseteq \langle \mathcal{A}, Q, \alpha \rangle$ it holds that: (i) $\beta \geq \alpha$, and (ii) if $\beta = \alpha$, then $\mathcal{S} = \mathcal{A}$ iff $R = Q$. Let $\mathcal{P}$ be a P-DeLP program, and let $\langle\mathcal{A}_1, Q_1, \alpha_1\rangle$ and $\langle\mathcal{A}_2, Q_2, \alpha_2\rangle$ be two arguments wrt $\mathcal{P}$. We say that $\langle\mathcal{A}_1, Q_1, \alpha_1\rangle$ counterargues $\langle\mathcal{A}_2, Q_2, \alpha_2\rangle$ iff there exists a subargument (called disagreement subargument) $\langle\mathcal{S}, Q, \beta\rangle$ of $\langle\mathcal{A}_2, Q_2, \alpha_2\rangle$ such that $Q_1 \equiv \sim Q$. Moreover, if the argument $\langle\mathcal{A}_1, Q_1, \alpha_1\rangle$ counterargues the argument $\langle\mathcal{A}_2, Q_2, \alpha_2\rangle$ with disagreement subargument $\langle\mathcal{A}, Q, \beta\rangle$, we say that $\langle\mathcal{A}_1, Q_1, \alpha_1\rangle$ is a proper (respectively blocking) defeater for $\langle\mathcal{A}_2, Q_2, \alpha_2\rangle$ when $\alpha_1 > \beta$ (respectively $\alpha_1 = \beta$). In P-DeLP, as in other argumentation systems [7,12], argument-based inference involves a dialectical process in which arguments are compared in order to determine which beliefs or goals are ultimately accepted (justified or warranted) on the basis of a given program. This process is formalized in terms of an exhaustive dialectical analysis of all possible argumentation lines rooted in a given argument.
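The comparison of necessity degrees that classifies an attack reduces to a few lines; the string labels below are a minimal illustrative encoding of the two defeater kinds.

```python
def defeat_kind(alpha1, beta):
    """Classify an attack by an argument with degree alpha1 on a
    disagreement subargument with degree beta, per the definition above."""
    if alpha1 > beta:
        return "proper"     # strictly stronger: proper defeater
    if alpha1 == beta:
        return "blocking"   # equal strength: blocking defeater
    return None             # weaker: the attack is not a defeat
```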
An argumentation line starting in an argument $\langle\mathcal{A}_0, Q_0, \alpha_0\rangle$ is a sequence of arguments $\lambda = [\langle\mathcal{A}_0, Q_0, \alpha_0\rangle, \langle\mathcal{A}_1, Q_1, \alpha_1\rangle, \ldots, \langle\mathcal{A}_n, Q_n, \alpha_n\rangle, \ldots]$ such that each $\langle\mathcal{A}_i, Q_i, \alpha_i\rangle$ is a defeater for the previous argument $\langle\mathcal{A}_{i-1}, Q_{i-1}, \alpha_{i-1}\rangle$ in the sequence, $i > 0$. In order to avoid fallacious reasoning, additional constraints are imposed, namely:

1. **Non-contradiction**: given an argumentation line $\lambda$, the set of arguments of the proponent (respectively opponent) should be non-contradictory wrt $\mathcal{P}$.$^2$
2. **Progressive argumentation**: (i) every blocking defeater $\langle\mathcal{A}_i, Q_i, \alpha_i\rangle$ in $\lambda$ with $i > 0$ is defeated by a proper defeater$^3$ $\langle\mathcal{A}_{i+1}, Q_{i+1}, \alpha_{i+1}\rangle$ in $\lambda$; and (ii) each argument $\langle\mathcal{A}_i, Q_i, \alpha_i\rangle$ in $\lambda$, with $i \geq 2$, is such that $Q_i \neq \sim Q_{i-1}$.$^1$

$^1$In what follows, for a given goal $Q$, we will write $\sim Q$ as an abbreviation to denote "$\sim q$", if $Q \equiv q$, and "$q$", if $Q \equiv \sim q$.
$^2$Non-contradiction for a set of arguments is defined as follows: a set of arguments $S = \{\langle\mathcal{A}_1, Q_1, \alpha_1\rangle, \ldots, \langle\mathcal{A}_n, Q_n, \alpha_n\rangle\}$ is contradictory wrt $\mathcal{P}$ iff $\Pi \cup \bigcup_{i=1}^{n}\mathcal{A}_i$ is contradictory.
$^3$It must be noted that the last argument in an argumentation line is allowed to be a blocking defeater for the previous one.

The non-contradiction condition disallows the use of contradictory information on either side (proponent or opponent). The first condition of progressive argumentation enforces the use of a proper defeater to defeat an argument which acts as a blocking defeater, while the second condition avoids non-optimal arguments in the presence of a conflict. An argumentation line satisfying the above restrictions is called acceptable, and can be proven to be finite.
The set of all possible acceptable argumentation lines results in a structure called a dialectical tree. Given a program $\mathcal{P} = (\Pi, \Delta)$ and a goal $Q$, we say that $Q$ is warranted wrt $\mathcal{P}$ with a maximum necessity degree $\alpha$ iff there exists an argument $\langle\mathcal{A}, Q, \alpha\rangle$, for some $\mathcal{A} \subseteq \Delta$, such that: i) every acceptable argumentation line starting with $\langle\mathcal{A}, Q, \alpha\rangle$ has an odd number of arguments; and ii) there is no other argument of the form $\langle\mathcal{A}_1, Q, \beta\rangle$, with $\beta > \alpha$, satisfying the above. In the rest of the paper we will write $\mathcal{P} \models^w \langle\mathcal{A}, Q, \alpha\rangle$ to denote this fact.

3. Indirect consistency as rationality postulate. Transposition of strict rules

In a recent paper, Caminada and Amgoud [5] have defined a very interesting characterization of three rationality postulates that, according to the authors, any rule-based argumentation system should satisfy in order to avoid anomalies and unintuitive results. We will summarize next the main aspects of these postulates, and their relationship with the P-DeLP framework. Their formalization is intentionally generic, based on a defeasible theory $T = (S, D)$, where $S$ is a set of strict rules and $D$ is a set of defeasible rules. The notion of negation is modelled in the standard way by means of a function $\neg$. An argumentation system is a pair $(\text{Args}, \text{Def})$, where $\text{Args}$ is a set of arguments (based on a defeasible theory) and $\text{Def} \subseteq \text{Args} \times \text{Args}$ is a defeat relation. The closure of a set of literals $L$ under the set $S$, denoted $CL_S(L)$, is the smallest set such that $L \subseteq CL_S(L)$, and if $\phi_1, \ldots, \phi_n \rightarrow \psi \in S$ and $\phi_1, \ldots, \phi_n \in CL_S(L)$, then $\psi \in CL_S(L)$. A set of literals $L$ is consistent iff there do not exist $\psi, \phi \in L$ such that $\psi = \neg \phi$; otherwise it is said to be inconsistent.
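A direct reading of $CL_S$ and the consistency check can be sketched as a saturation loop. Rules are encoded (illustratively) as `(frozenset_of_premises, conclusion)` and negation as a "~" prefix on atoms.

```python
def closure(literals, rules):
    """CL_S(L): saturate the set of literals L under the strict rules S."""
    closed = set(literals)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

def consistent(literals):
    """L is consistent iff no literal occurs together with its negation."""
    return not any("~" + p in literals for p in literals
                   if not p.startswith("~"))
```

For instance, closing `{y, a, b}` under the single strict rule `a, b -> ~y` yields an inconsistent set, which is exactly the situation indirect consistency rules out.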
An argumentation system $(\text{Args}, \text{Def})$ can have different extensions $E_1, E_2, \ldots, E_n$ ($n \geq 1$) according to the adopted semantics. The conclusions associated with the arguments belonging to a given extension $E_i$ are denoted $\text{Concs}(E_i)$, and the output of the argumentation system is defined skeptically as $\text{Output} = \bigcap_{i=1}^{n} \text{Concs}(E_i)$. On the basis of the above concepts, Caminada & Amgoud [5, pp.294] present three important postulates: direct consistency, indirect consistency and closure. Let $T$ be a defeasible theory, $(\text{Args}, \text{Def})$ an argumentation system built from $T$, $\text{Output}$ the set of justified (warranted) conclusions, and $E_1, \ldots, E_n$ its extensions under a given semantics. Then these three postulates are defined as follows:

- $(\text{Args}, \text{Def})$ satisfies closure iff (1) $\text{Concs}(E_i) = CL_S(\text{Concs}(E_i))$ for each $1 \leq i \leq n$ and (2) $\text{Output} = CL_S(\text{Output})$.
- $(\text{Args}, \text{Def})$ satisfies direct consistency iff (1) $\text{Concs}(E_i)$ is consistent for each $1 \leq i \leq n$ and (2) $\text{Output}$ is consistent.
- $(\text{Args}, \text{Def})$ satisfies indirect consistency iff (1) $CL_S(\text{Concs}(E_i))$ is consistent for each $1 \leq i \leq n$ and (2) $CL_S(\text{Output})$ is consistent.

Closure requires that the set of justified conclusions, as well as the set of conclusions supported by each extension, be closed. Direct consistency requires that the set of justified conclusions and the different sets of conclusions corresponding to each extension be consistent. Indirect consistency involves a more subtle case, requiring that the closure of both $\text{Concs}(E_i)$ and $\text{Output}$ be consistent. Caminada and Amgoud show that many rule-based argumentation systems (e.g.
Prakken & Sartor [11] and DeLP [9]) fail to satisfy indirect consistency, and propose as a solution a special transposition operator $Cl_{tp}$ for computing the closure of strict rules. This amounts to reading every strict rule $r = \phi_1, \phi_2, \ldots, \phi_n \rightarrow \psi$ as a material implication in propositional logic, which is equivalent to the disjunction $\neg\phi_1 \lor \neg\phi_2 \lor \ldots \lor \neg\phi_n \lor \psi$. From that disjunction, different rules of the form $\phi_1, \ldots, \phi_{i-1}, \neg\psi, \phi_{i+1}, \ldots, \phi_n \rightarrow \neg\phi_i$ can be obtained (transpositions of $r$). If $S$ is a set of strict rules, $Cl_{tp}(S)$ is the minimal set such that (1) $S \subseteq Cl_{tp}(S)$ and (2) if $s \in Cl_{tp}(S)$ and $t$ is a transposition of $s$, then $t \in Cl_{tp}(S)$. The use of such an operator allows the three rationality postulates to be satisfied in the case of the grounded extension (which corresponds to the one associated with systems like DeLP or P-DeLP).

Theorem 1 [5] Let $\langle\text{Args}, \text{Def}\rangle$ be an argumentation system built from $\langle Cl_{tp}(S), D\rangle$, where $Cl_{tp}(S)$ is consistent, $\text{Output}$ is the set of justified conclusions and $E$ its grounded extension. Then $\langle\text{Args}, \text{Def}\rangle$ satisfies closure and indirect consistency.

Caminada & Amgoud show that DeLP does not satisfy the indirect consistency postulate. The same applies to P-DeLP, as illustrated next. Consider the program $\mathcal{P} = (\Pi, \Delta)$, where $\Pi = \{ (y, 1), (\sim y \leftarrow a \land b, 1) \}$ and $\Delta = \{ (a, 0.9), (b, 0.9) \}$. It is easy to see that the arguments $\langle\{(a, 0.9)\}, a, 0.9\rangle$ and $\langle\{(b, 0.9)\}, b, 0.9\rangle$ have no defeaters wrt $\mathcal{P}$. Thus $\{y, a, b\} = \text{Output}$ turns out to be warranted, and it holds that $y, \sim y \in CL_{\Pi}(\{y, a, b\})$, so that indirect consistency does not hold. We think that Caminada & Amgoud's postulate of indirect consistency is indeed valuable for rule-based argumentation systems, as in some sense it allows one to perform "forward reasoning" from warranted literals. However, P-DeLP and DeLP are Horn-based systems, so that strict rules should be read as inference rules rather than as material implications.
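The transposition step underlying $Cl_{tp}$ is mechanical. Below, a rule is (illustratively) a pair `(body_tuple, head)` and negation toggles a "~" prefix on the literal.

```python
def neg(lit):
    # Toggle the "~" prefix (illustrative literal encoding).
    return lit[1:] if lit.startswith("~") else "~" + lit

def transpositions(rule):
    """From phi_1,...,phi_n -> psi, build each transposition
       phi_1,...,neg(psi),...,phi_n -> neg(phi_i)."""
    body, head = rule
    out = []
    for i, phi in enumerate(body):
        new_body = body[:i] + (neg(head),) + body[i + 1:]
        out.append((new_body, neg(phi)))
    return out
```

On the rule $\sim y \leftarrow a \land b$ from the counterexample above, this yields $\sim a \leftarrow y \land b$ and $\sim b \leftarrow y \land a$.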
In this respect, the use of transposed rules might lead to unintuitive situations in a logic programming context. Consider e.g. the program $\mathcal{P} = \{ (q \leftarrow p \land r, 1), (s \leftarrow \sim r, 1), (p, 1), (\sim q, 1), (\sim s, 1) \}$. In P-DeLP, the facts $(p, 1)$, $(\sim q, 1)$ and $(\sim s, 1)$ would be warranted literals. However, the closure under transposition $Cl_{tp}(\mathcal{P})$ would include the rule $(\sim r \leftarrow p \land \sim q, 1)$, resulting in inconsistency (both $(\sim s, 1)$ and $(s, 1)$ can be derived), so that the whole program would be deemed invalid. Our goal is to retain a Horn-based view for a rule-based argumentation system like P-DeLP while satisfying the indirect consistency postulate. To do this we will not take transposed rules into account, introducing instead the notion of level-based warranted literals, as discussed in the next section.

4. A level-based approach to computing warranted arguments

In a logic programming system like P-DeLP, the use of transposed rules to ensure indirect consistency has some drawbacks that have to be taken into consideration. Apart from the problem mentioned at the end of the last section, of turning an apparently valid program into a non-valid one, there are two other issues: (i) a computational limitation, in the sense that extending a P-DeLP program with all possible transpositions of every strict rule may lead to an important increase in the number of arguments to be computed; and (ii) when doing so, the system can possibly establish as warranted goals conclusions which are not explicitly expressed in the original program. For instance, consider the program $\mathcal{P} = \{ (\sim y \leftarrow a \land b, 1), (y, 1), (a, 0.9), (b, 0.7) \}$. Transpositions of the strict rule $(\sim y \leftarrow a \land b, 1)$ are $(\sim a \leftarrow y \land b, 1)$ and $(\sim b \leftarrow y \land a, 1)$.
Then the argument $\langle \mathcal{A}, \sim b, 0.9 \rangle$, with $\mathcal{A} = \{ (y, 1), (a, 0.9), (\sim b \leftarrow a \land y, 1) \}$, is warranted wrt $\mathcal{P}$, although no explicit information is given for the literal $\sim b$ in $\mathcal{P}$. In this paper we will provide a new formal definition of warranted goal with maximum necessity degree which takes into account direct and indirect conflicts between arguments. Indirect conflicts will be detected without explicitly transposing strict rules, distinguishing between warranted and blocked goals. Direct conflicts between arguments refer to the case of both proper and blocking defeaters. For instance, consider the program $\mathcal{P} = \{ (a \leftarrow b, 0.9), (b, 0.8), (\sim b, 0.8) \}$. The arguments $\langle\{(b, 0.8)\}, b, 0.8\rangle$ and $\langle\{(\sim b, 0.8)\}, \sim b, 0.8\rangle$ are a pair of blocking defeaters expressing (direct) contradictory information, and therefore $b$ and $\sim b$ will be considered a pair of blocked goals with maximum necessity degree 0.8. Note that although the argument $\langle\{(\sim b, 0.8)\}, \sim b, 0.8\rangle$ is a blocking defeater for the argument $\langle \mathcal{A}, a, 0.8 \rangle$, with $\mathcal{A} = \{ (a \leftarrow b, 0.9), (b, 0.8) \}$, the goals $a$ and $\sim b$ do not express contradictory information, and therefore $a$ is neither a blocked nor a warranted goal with necessity degree 0.8. On the other hand, we will refer to indirect conflicts between arguments when there exists an inconsistency emerging from the set of certain (strict) clauses of a program and arguments with no defeaters. For instance, consider the program $\mathcal{P} = \langle \Pi, \Delta \rangle$ with $\Pi = \{ (\sim y \leftarrow a \land b, 1), (y, 1), (\sim z \leftarrow c \land d, 1), (z, 1) \}$ and $\Delta = \{ (a, 0.7), (b, 0.7), (c, 0.7), (d, 0.6) \}$. In standard P-DeLP [1] (i.e.
without extending the program with transpositions of rules of $\Pi$), $\langle \{ (a, 0.7) \}, a, 0.7 \rangle$ and $\langle \{ (b, 0.7) \}, b, 0.7 \rangle$ are arguments with no defeaters and therefore their conclusions would be warranted. However, since $\Pi \cup \{ (a, 0.7), (b, 0.7) \} \vdash \bot$, the arguments $\langle \{ (a, 0.7) \}, a, 0.7 \rangle$ and $\langle \{ (b, 0.7) \}, b, 0.7 \rangle$ express (indirect) contradictory information. Moreover, as both goals are supported by arguments with the same necessity degree 0.7, neither of them can be warranted or rejected, and therefore we will refer to them as (indirect) blocked goals with maximum necessity degree 0.7. A similar situation appears with $\langle \{ (c, 0.7) \}, c, 0.7 \rangle$ and $\langle \{ (d, 0.6) \}, d, 0.6 \rangle$. As before, $\Pi \cup \{ (c, 0.7), (d, 0.6) \} \vdash \bot$, but in this case the necessity degree of goal $c$ is greater than the necessity degree of goal $d$. Therefore $c$ will be considered a warranted goal with maximum necessity degree 0.7. Let $ARG(P) = \{ \langle A, Q, \alpha \rangle \mid A$ is an argument for $Q$ with necessity $\alpha$ wrt $P \}$ and let $Concl(P) = \{ (Q, \alpha) \mid \langle A, Q, \alpha \rangle \in ARG(P) \}$. An output for a P-DeLP program $P$ will be a pair $(Warr, Block)$, where $Warr, Block \subseteq Concl(P)$, denoting respectively the sets of warranted and blocked goals (together with their degrees) and fulfilling a set of conditions that ensure a proper handling of the problem of global inconsistency discussed earlier, and that will be specified in the following definition. Since the intended construction of the sets $Warr$ and $Block$ is done level-wise, starting from the first level and iteratively going from one level to the next level below, we introduce some useful notation.
Indeed, if $1 \geq \alpha_1 > \alpha_2 > \ldots > \alpha_p > 0$ are the weights appearing in arguments from $ARG(P)$, we can stratify the sets by putting $Warr = Warr(\alpha_1) \cup \ldots \cup Warr(\alpha_p)$ and similarly $Block = Block(\alpha_1) \cup \ldots \cup Block(\alpha_p)$, where $Warr(\alpha_i)$ and $Block(\alpha_i)$ are respectively the sets of the warranted and blocked goals with maximum degree $\alpha_i$. We will also write $\text{Warr}(> \alpha_i)$ to denote $\bigcup_{\beta > \alpha_i} \text{Warr}(\beta)$, and analogously for $\text{Block}(> \alpha_i)$. In what follows, given a program $\mathcal{P} = (\Pi, \Delta)$ we will denote by $\text{rules}(\Pi)$ and $\text{facts}(\Pi)$ the sets of strict rules and strict facts of $\mathcal{P}$, respectively.

**Definition 1 (Warranted and blocked goals)** Given a program $\mathcal{P} = (\Pi, \Delta)$, an output for $\mathcal{P}$ is a pair $(\text{Warr}, \text{Block})$ where the sets $\text{Warr}(\alpha_i)$ and $\text{Block}(\alpha_i)$, for $i = 1, \ldots, p$, are required to satisfy the following constraints:

1. An argument $\langle A, Q, \alpha_i \rangle \in \text{ARG}(\mathcal{P})$ is called acceptable if it satisfies the following three conditions:
(i) $(Q, \beta) \notin \text{Warr}(> \alpha_i) \cup \text{Block}(> \alpha_i)$ and $(\sim Q, \beta) \notin \text{Block}(> \alpha_i)$, for all $\beta > \alpha_i$;
(ii) for any subargument $\langle B, R, \beta \rangle \subseteq \langle A, Q, \alpha_i \rangle$ such that $R \neq Q$, $(R, \beta) \in \text{Warr}(\beta)$;
(iii) $\text{rules}(\Pi) \cup \text{Warr}(> \alpha_i) \cup \{(R, \alpha) \mid \langle B, R, \alpha \rangle \subseteq \langle A, Q, \alpha_i \rangle\} \not\vdash \bot$.

2.
For each acceptable $\langle A, Q, \alpha_i \rangle \in \text{ARG}(\mathcal{P})$, $(Q, \alpha_i) \in \text{Block}(\alpha_i)$ whenever (i) there exists an acceptable $\langle B, \sim Q, \alpha_i \rangle \in \text{ARG}(\mathcal{P})$; or (ii) there exists $G \subseteq \{(P, \alpha_i) \mid \langle C, P, \alpha_i \rangle \in \text{ARG}(\mathcal{P})$ is acceptable and $\sim P \notin \text{Block}(\alpha_i)\}$ such that $\text{rules}(\Pi) \cup \text{Warr}(> \alpha_i) \cup G \not\vdash \bot$ and $\text{rules}(\Pi) \cup \text{Warr}(> \alpha_i) \cup G \cup \{(Q, \alpha_i)\} \vdash \bot$. Otherwise, $(Q, \alpha_i) \in \text{Warr}(\alpha_i)$.

Note that in Def. 1 the notion of argument ensures that for each argument $\langle A, Q, \alpha \rangle \in \text{ARG}(\mathcal{P})$, the goal $Q$ is non-contradictory wrt the set $\Pi$ of certain clauses of $\mathcal{P}$. However, it does not ensure non-contradiction wrt $\Pi$ together with the set $\text{Warr}(> \alpha)$ of warranted goals with degree greater than $\alpha$ (as required by the indirect consistency postulate [5]). Therefore, for each argument $\langle A, Q, \alpha \rangle \in \text{ARG}(\mathcal{P})$ satisfying that each subgoal is warranted, the goal $Q$ can be warranted at level $\alpha$ only after explicitly checking indirect conflicts wrt the set $\text{Warr}(> \alpha)$, i.e. after verifying that $\text{rules}(\Pi) \cup \text{Warr}(> \alpha) \cup \{(Q, \alpha)\} \not\vdash \bot$. For instance, consider the program $\mathcal{P} = (\Pi, \Delta)$ with $$\Pi = \{(y, 1), (\sim y \leftarrow a \land c, 1)\} \text{ and}$$ $$\Delta = \{(a, 0.9), (b, 0.9), (c \leftarrow b, 0.8)\}.$$ According to Def. 1, the goal $y$ is warranted with necessity degree 1 and the goals $a$ and $b$ are warranted with necessity degree 0.9. Then $$\langle \{(b, 0.9), (c \leftarrow b, 0.8)\}, c, 0.8 \rangle$$ is an argument for $c$ such that the subargument $\langle \{(b, 0.9)\}, b, 0.9 \rangle$ is warranted.
However, as $\text{rules}(\Pi) \cup \{(y, 1), (a, 0.9), (b, 0.9)\} \cup \{(c, 0.8)\} \vdash \bot$, the goal $c$ is neither warranted nor blocked wrt $\mathcal{P}$. Suppose now, in Def. 1, that an argument $\langle A, Q, \alpha \rangle \in \text{ARG}(\mathcal{P})$ involves a warranted subgoal with necessity degree $\alpha$. Then $Q$ can be warranted only after explicitly checking indirect conflicts wrt its set of subgoals, i.e. after verifying that $\text{rules}(\Pi) \cup \text{Warr}(> \alpha) \cup \{(R, \alpha) \mid \langle B, R, \alpha \rangle \subseteq \langle A, Q, \alpha \rangle\} \not\vdash \bot$. For instance, consider the program $\mathcal{P} = (\Pi, \Delta)$, with \[ \Pi = \{(y, 1), (\sim y \leftarrow a \land b, 1)\} \] and \[ \Delta = \{(a, 0.7), (b \leftarrow a, 0.7)\}. \] Then $y$ and $a$ are warranted goals with necessity degrees 1 and 0.7, respectively, and although it is not possible to compute a defeater for the argument \[ \langle\{(a, 0.7), (b \leftarrow a, 0.7)\}, b, 0.7\rangle \] in $\mathcal{P}$ and the subgoal $a$ is warranted with necessity degree 0.7, $b$ is not warranted since $\{(\sim y \leftarrow a \land b, 1)\} \cup \{(y, 1)\} \cup \{(a, 0.7)\} \cup \{(b, 0.7)\} \vdash \bot$. Finally, note that in Def. 1, direct conflicts invalidate possible indirect conflicts in the following sense. Consider the program $\mathcal{P} = (\Pi, \Delta)$, with \[ \Pi = \{(y \leftarrow a, 1), (\sim y \leftarrow b \land c, 1)\} \] and \[ \Delta = \{(a, 0.7), (b, 0.7), (c, 0.7), (\sim c, 0.7)\}. \] Then, $c$ and $\sim c$ are blocked goals with necessity degree 0.7 and thus $a$, $b$ and $y$ are warranted goals with necessity degree 0.7. The next example illustrates some interesting cases of the notion of warranted and blocked goals in P-DeLP.

**Example 2** Consider the program $\mathcal{P}_1 = (\Pi_1, \Delta_1)$, with \[ \Pi_1 = \{(y, 1), (\sim y \leftarrow a \land b, 1)\} \] and \[ \Delta_1 = \{(a, 0.7), (b, 0.7), (\sim a, 0.5)\}. \] According to Def.
1, $(y, 1)$ is warranted and $(a, 0.7)$ and $(b, 0.7)$ are blocked. Then, as $a$ is blocked with necessity degree 0.7, $\langle \{(\sim a, 0.5)\}, \sim a, 0.5 \rangle$ is not an acceptable argument and hence the goal $\sim a$ is neither warranted nor blocked wrt $\mathcal{P}_1$. Now consider the program $\mathcal{P}_2 = (\Pi_2, \Delta_2)$ with \[ \Pi_2 = \{(y, 1), (\sim y \leftarrow a \land c, 1)\} \] and \[ \Delta_2 = \{(a, 0.9), (b, 0.9), (c \leftarrow b, 0.9)\}. \] According to Def. 1, $(y, 1)$ and $(b, 0.9)$ are warranted. On the other hand, $\langle \{(a, 0.9)\}, a, 0.9 \rangle$ is an argument for $a$ with an empty set of subarguments and $\langle \{(b, 0.9), (c \leftarrow b, 0.9)\}, c, 0.9 \rangle$ is an argument for $c$ satisfying that the subargument $\langle \{(b, 0.9)\}, b, 0.9 \rangle$ is warranted. However, as $\{(\sim y \leftarrow a \land c, 1)\} \cup \{(y, 1), (b, 0.9)\} \cup \{(a, 0.9), (c, 0.9)\} \vdash \bot$, $a$ and $c$ are a pair of blocked goals wrt $\mathcal{P}_2$ with necessity degree 0.9. Finally, consider the program $\mathcal{P}_3 = (\Pi_2, \Delta_3)$ with \[ \Delta_3 = \{(a, 0.9), (c, 0.9), (b \leftarrow c, 0.9), (d \leftarrow a \land c, 0.9)\}. \] In that case $(y, 1)$ is warranted and $(a, 0.9)$ and $(c, 0.9)$ are blocked. Then, according to Def. 1, as $c$ is a blocked goal with necessity degree 0.9, $\langle \{(c, 0.9), (b \leftarrow c, 0.9)\}, b, 0.9 \rangle$ is not an acceptable argument and hence the goal $b$ is neither warranted nor blocked wrt $\mathcal{P}_3$. Notice that since $a$ and $c$ are contradictory wrt $\Pi_2$, no argument can be computed for the goal $d$. It can be shown that if $(\text{Warr}, \text{Block})$ is an output of a P-DeLP program, the set $\text{Warr}$ of warranted goals (according to Def. 1) is indeed non-contradictory and satisfies indirect consistency with respect to the set of strict rules.
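Before turning to these formal properties, the level-wise construction behind Def. 1 can be illustrated with a small executable sketch. The Python fragment below is our own illustration, not part of the P-DeLP formalism: all names (`forward_close`, `warrant_levels`, ...) are invented, literals are plain strings with `~` marking strong negation, and the subargument-acceptability checks of Def. 1 are deliberately omitted, so only conflict detection over already-derived conclusions is modelled. The $\vdash \bot$ test is approximated by forward chaining over the strict rules, and condition 2(ii) by a search over subsets of same-level conclusions.

```python
# Illustrative sketch only -- not part of the P-DeLP formalism. Literals are
# strings, with "~" marking strong negation; strict rules are (head, body) pairs.
from itertools import combinations

def neg(lit):
    """Complement of a literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def forward_close(facts, rules):
    """Saturate a set of literals under strict Horn rules (approximates |-)."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in closed and all(b in closed for b in body):
                closed.add(head)
                changed = True
    return closed

def inconsistent(facts, rules):
    """Does the strict closure contain a complementary pair (|- bottom)?"""
    closed = forward_close(facts, rules)
    return any(neg(lit) in closed for lit in closed)

def warrant_levels(levels, rules):
    """levels: [(alpha, goals)] sorted by strictly decreasing alpha, where
    goals are the conclusions of would-be acceptable arguments at that level.
    Returns (warr, block) as {literal: alpha} maps."""
    warr, block = {}, {}
    for alpha, goals in levels:
        # condition (i): skip goals already decided (or whose negation was
        # blocked) at a higher level
        live = [q for q in goals
                if q not in warr and q not in block and neg(q) not in block]
        base = set(warr)
        for q in live:
            blocked = neg(q) in live          # direct conflict at this level
            if not blocked:
                # approximation of condition 2(ii): some consistent context G
                # of same-level conclusions becomes inconsistent once q is added
                others = [p for p in live if p != q]
                for r in range(len(others) + 1):
                    for G in combinations(others, r):
                        ctx = base | set(G)
                        if (not inconsistent(ctx, rules)
                                and inconsistent(ctx | {q}, rules)):
                            blocked = True
                            break
                    if blocked:
                        break
            (block if blocked else warr)[q] = alpha
    return warr, block
```

On the indirect-conflict example of this section (strict rules $\sim y \leftarrow a \land b$ and $\sim z \leftarrow c \land d$ with strict facts $y$ and $z$ at level 1, defeasible facts $a$, $b$, $c$ at 0.7 and $d$ at 0.6), the sketch warrants $y$, $z$ and $c$ and blocks $a$, $b$ and $d$, in line with the discussion above.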
Proposition 3 (Indirect consistency) Let $\mathcal{P} = (\Pi, \Delta)$ be a P-DeLP program and let $(\text{Warr}, \text{Block})$ be an output for $\mathcal{P}$. Then: (i) $\text{facts}(\Pi) \subseteq \text{Warr}$; (ii) $\text{Warr} \not\vdash \bot$; and (iii) $\text{rules}(\Pi) \cup \text{Warr} \vdash (Q, \alpha)$ implies $(Q, \beta) \in \text{Warr}$, for some $\beta \geq \alpha$. Actually, (iii) above can also be read as saying that $\text{Warr}$ satisfies (in a somewhat softened form) the closure postulate with respect to the set of strict rules. Indeed, it could be recovered in the full sense if the deduction relation characterized by $\vdash$ were defined taking into account only those derivations yielding maximum degrees of necessity. Proof: We prove (ii) and (iii), as (i) is straightforward. (ii) Suppose that for some goal $Q$, $\{(Q, \alpha), (\sim Q, \beta)\} \subseteq \text{Warr}$. Then, there should exist $A \subseteq \Delta$ and $B \subseteq \Delta$ such that $\Pi \cup A \vdash (Q, \alpha)$ and $\Pi \cup B \vdash (\sim Q, \beta)$. If $\alpha = \beta$, $\langle A, Q, \alpha \rangle$ and $\langle B, \sim Q, \beta \rangle$ are a pair of blocking defeaters; otherwise, one is a proper defeater for the other one; and in either case $\text{rules}(\Pi) \cup \{(Q, \alpha), (\sim Q, \beta)\} \vdash \bot$. Hence, by Def. 1, $\{(Q, \alpha), (\sim Q, \beta)\} \not\subseteq \text{Warr}$. (iii) Suppose that, for some goal $Q$, $\text{rules}(\Pi) \cup \text{Warr} \vdash (Q, \alpha)$ and $(Q, \beta) \notin \text{Warr}$, for all $\beta \geq \alpha$. Then, there should exist a strict rule in $\Pi$ of the form $(Q \leftarrow P_1 \land \ldots \land P_k, 1)$ such that either for each $i = 1, \ldots, k$, $(P_i, \alpha_i) \in \text{Warr}$ or, recursively, $\text{rules}(\Pi) \cup \text{Warr} \vdash (P_i, \alpha_i)$, with $\min(\alpha_1, \ldots, \alpha_k) = \alpha$. Now, if $(Q, \alpha) \notin \text{Warr}$, by Def.
1, it follows that either $(Q, \beta) \in \text{Warr}$ or $(\sim Q, \beta) \in \text{Warr}$ for some $\beta > \alpha$, or $\text{rules}(\Pi) \cup \text{Warr} \vdash (\sim Q, \alpha)$. As $\alpha = \min(\alpha_1, \ldots, \alpha_k)$, it follows that $\alpha = \alpha_i$, for some $1 \leq i \leq k$. Then, if $(\sim Q, \beta) \in \text{Warr}$, with $\beta > \alpha$, or $\text{rules}(\Pi) \cup \text{Warr} \vdash (\sim Q, \alpha)$, by Def. 1, there should exist at least one goal $P_i$, with $1 \leq i \leq k$, such that $(P_i, \alpha_i) \notin \text{Warr}$ and $\text{rules}(\Pi) \cup \text{Warr} \not\vdash (P_i, \alpha_i)$. Hence, if $\text{rules}(\Pi) \cup \text{Warr} \vdash (Q, \alpha)$, then $(Q, \beta) \in \text{Warr}$, for some $\beta \geq \alpha$. Next we show that if $(\text{Warr}, \text{Block})$ is an output of a P-DeLP program $\mathcal{P} = (\Pi, \Delta)$, the set $\text{Warr}$ of warranted goals indeed contains each literal $Q$ satisfying that $\mathcal{P}^* \models^w \langle A, Q, \alpha \rangle$ and $\Pi \cup A \vdash (Q, \alpha)$, with $\mathcal{P}^* = (\Pi \cup \text{Cl}_{tp}(\text{rules}(\Pi)), \Delta)$, whenever $\Pi \cup \text{Cl}_{tp}(\text{rules}(\Pi))$ is non-contradictory.

Proposition 4 Let $\mathcal{P} = (\Pi, \Delta)$ be a P-DeLP program such that $\Pi \cup \text{Cl}_{tp}(\text{rules}(\Pi))$ is non-contradictory and let $Q$ be a literal such that $\mathcal{P}^* \models^w \langle A, Q, \alpha \rangle$. If $\Pi \cup A \vdash (Q, \alpha)$, then $(Q, \alpha) \in \text{Warr}$ for every output $(\text{Warr}, \text{Block})$ of $\mathcal{P}$. Notice that the converse of Prop. 4 does not hold; i.e. assuming that $\Pi \cup \text{Cl}_{tp}(\text{rules}(\Pi))$ is non-contradictory, it can be the case that $(Q, \alpha) \in \text{Warr}$ while $(Q, \alpha)$ is not warranted wrt the extended program $\mathcal{P}^*$.
This is due to the fact that the new level-wise approach for computing warranted goals allows us to consider a more specific treatment of both direct and indirect conflicts between literals.

⁴In what follows, proofs are omitted for space reasons.

In particular, we have that each blocked literal invalidates all rules in which the literal occurs. For instance, consider the program $\mathcal{P}_1 = (\Pi_1, \Delta_1)$, with \[ \Pi_1 = \{(y, 1), (\sim y \leftarrow a \land b, 1)\} \text{ and } \Delta_1 = \{(a, 0.7), (b, 0.7), (\sim b, 0.7)\}. \] According to Def. 1, $(y, 1)$ and $(a, 0.7)$ are warranted and $(b, 0.7)$ and $(\sim b, 0.7)$ are blocked. However, when considering the extended program $\mathcal{P}_1^* = (\Pi_1 \cup \text{Cl}_{tp}(\text{rules}(\Pi_1)), \Delta_1)$ one is considering the transposed rule $(\sim a \leftarrow y \land b, 1)$ and therefore \[ \lambda_1 = [\langle \{(a, 0.7)\}, a, 0.7 \rangle, \langle \{(b, 0.7)\}, \sim a, 0.7 \rangle] \] is an acceptable argumentation line wrt $\mathcal{P}_1^*$ with an even number of arguments, and thus $(a, 0.7)$ is not warranted wrt $\mathcal{P}_1^*$. Another case that can be analyzed is the following one. Consider now the program $\mathcal{P}_2 = (\Pi_1, \Delta_2)$, with \[ \Delta_2 = \{(a, 0.7), (b \leftarrow a, 0.7), (\sim b, 0.7)\}. \] According to Def. 1, $(y, 1)$ and $(a, 0.7)$ are warranted and, as indirect conflicts are not allowed, $\langle B, b, 0.7 \rangle$ with $B = \{(b \leftarrow a, 0.7), (a, 0.7)\}$ is not an acceptable argument for $(b, 0.7)$, and therefore $(\sim b, 0.7)$ is warranted. However, when considering the extended program $\mathcal{P}_2^* = (\Pi_1 \cup \text{Cl}_{tp}(\text{rules}(\Pi_1)), \Delta_2)$, \[ \lambda_2 = [\langle \{(\sim b, 0.7)\}, \sim b, 0.7 \rangle, \langle B, b, 0.7 \rangle] \] is an acceptable argumentation line wrt $\mathcal{P}_2^*$ with an even number of arguments, and thus $(\sim b, 0.7)$ is not warranted wrt $\mathcal{P}_2^*$. Actually, the intuition underlying Def.
1 can be stated as follows: an argument $\langle A, Q, \alpha \rangle$ is warranted or blocked if each subargument $\langle B, R, \beta \rangle \subseteq \langle A, Q, \alpha \rangle$, with $Q \neq R$, is warranted. Then, it is warranted if it induces neither direct nor indirect conflicts, and blocked otherwise. The following results provide an interesting characterization of the relationship between warranted and blocked goals in a P-DeLP program.

**Proposition 5** Let $\mathcal{P} = (\Pi, \Delta)$ be a P-DeLP program and let $(\text{Warr}, \text{Block})$ be an output for $\mathcal{P}$. Then:

1. If $(Q, \alpha) \in \text{Warr} \cup \text{Block}$, then there exists $\langle A, Q, \alpha \rangle \in \text{ARG}(\mathcal{P})$ such that, for each subargument $\langle B, R, \beta \rangle \subseteq \langle A, Q, \alpha \rangle$ with $R \neq Q$, $(R, \beta) \in \text{Warr}$.

2. If $(Q, \alpha) \in \text{Warr} \cup \text{Block}$, then for every argument $\langle A, Q, \beta \rangle$, with $\beta > \alpha$, there exists a subargument $\langle B, R, \gamma \rangle \subseteq \langle A, Q, \beta \rangle$ with $R \neq Q$, such that $(R, \gamma) \notin \text{Warr}$.

3. If $(Q, \alpha) \in \text{Warr}$, there is no $\beta > 0$ such that $(Q, \beta) \in \text{Block}$ or $(\sim Q, \beta) \in \text{Block}$.

4. If $(Q, \alpha) \notin \text{Warr} \cup \text{Block}$ for each $\alpha > 0$, then either $(\sim Q, \beta) \in \text{Block}$ for some $\beta > 0$, or for each argument $\langle A, Q, \alpha \rangle$ there exists a subargument $\langle B, R, \beta \rangle \subseteq \langle A, Q, \alpha \rangle$ with $R \neq Q$ such that $(R, \beta) \notin \text{Warr}$, or $\text{rules}(\Pi) \cup \text{Warr}(\geq \alpha) \cup \{(Q, \alpha)\} \vdash \bot$.

Finally, we come to the question of whether a program $\mathcal{P}$ always has a unique output $(\text{Warr}, \text{Block})$ according to Def. 1.
In general, the answer is yes, although we have identified some recursive situations that might lead to different outputs. For instance, consider the program \[ \mathcal{P} = \{ (p, 0.9), (q, 0.9), (\sim p \leftarrow q, 0.9), (\sim q \leftarrow p, 0.9) \}. \] Then, according to Def. 1, $p$ is a warranted goal iff $q$ and $\sim q$ are a pair of blocked goals and, vice versa, $q$ is a warranted goal iff $p$ and $\sim p$ are a pair of blocked goals. Hence, in that case we have two possible outputs, $(\text{Warr}_1, \text{Block}_1)$ and $(\text{Warr}_2, \text{Block}_2)$, where \[ \begin{align*} \text{Warr}_1 &= \{ (p, 0.9) \}, & \text{Block}_1 &= \{ (q, 0.9), (\sim q, 0.9) \}, \\ \text{Warr}_2 &= \{ (q, 0.9) \}, & \text{Block}_2 &= \{ (p, 0.9), (\sim p, 0.9) \}. \end{align*} \] In such a case, either $p$ or $q$ can be a warranted goal (but just one of them).\(^5\) Thus, although our approach is skeptical, we can sometimes get alternative extensions for warranted beliefs. A natural solution for this problem would be to adopt the intersection of all possible outputs in order to define the set of those literals which are ultimately warranted. Namely, let $\mathcal{P}$ be a P-DeLP program, and let $\text{output}_i(\mathcal{P}) = (\text{Warr}_i, \text{Block}_i)$, $i = 1, \ldots, n$, denote all possible outputs for $\mathcal{P}$. Then the skeptical output of $\mathcal{P}$ could be defined as $\text{output}_{\text{skep}}(\mathcal{P}) = (\bigcap_{i=1}^{n} \text{Warr}_i, \bigcap_{i=1}^{n} \text{Block}_i)$. It can be shown that $\text{output}_{\text{skep}}(\mathcal{P})$ also satisfies, by construction, Prop. 3 (indirect consistency). It remains as a future task to study the formal properties of this definition.

5. Related Work and Conclusions

We have presented a novel level-based approach to computing warranted arguments in P-DeLP.
In order to do so, we have refined the notion of conflict among arguments, providing refined definitions of blocking and proper defeat. The resulting characterization makes it possible to compute warranted goals in P-DeLP without making use of dialectical trees as underlying structures. More importantly, we have also shown that our approach ensures the satisfaction of the indirect consistency postulate proposed in [4,5] without requiring the use of transposed rules. Assigning levels or grades to warranted knowledge has been a source of research within the argumentation community in recent years, and to the best of our knowledge it can be traced back to the notion of degree of justification addressed by John Pollock [10]. In that paper, Pollock concentrates on the "on sum" degree of justification of a conclusion in terms of the degrees of justification of all relevant premises and the strengths of all relevant reasons. However, his work is more focused on epistemological issues than ours, not addressing the problem of indirect consistency, nor using the combination of logic programming and possibilistic logic to model argumentative inference. An alternative direction is explored by Besnard & Hunter [2], who characterize aggregation functions such as categorisers and accumulators which allow more evolved forms of computing warrant to be defined (e.g. counting arguments for and against, etc.). However, this research does not address the problem of indirect consistency, and it performs the grading on top of a classical first-order language, where clauses are weighted as in our case. More recently, the research work of Cayrol & Lagasquie-Schiex [6] pursues a more ambitious goal, providing a general framework for formalizing the notion of graduality in valuation models for argumentation frameworks, focusing on the valuation of arguments and

\(^5\)A complete characterization of these pathological situations is a matter of current research.
the acceptability according to different semantics. The problem of indirect consistency is not addressed there either, and the underlying formalism is Dung's abstract argumentation framework, rather than a logic programming framework as in our case. We contend that our level-based characterization of warrant can be extended to other alternative argumentation frameworks in which weighted clauses are used for knowledge representation. Part of our current research is focused on finding a suitable generalization for capturing the results presented in this paper beyond the P-DeLP framework.

Acknowledgments

This research was partially supported by CICYT Projects MULOG (TIN2004-07933-C03-01/03) and IEA (TIN2006-15662-C02-01/02), by CONICET (Argentina), and by the Secretaría General de Ciencia y Tecnología de la Universidad Nacional del Sur (Project PGI 24/ZN10).

References
Hybrid Microkernel

Akshay Khole - 1100226
Anuja Tupe - 1100227
David Lindskog - 1033581

COEN 283 - Operating Systems
November 10, 2014

Preface

In computer science, a **microkernel** is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system. These mechanisms include low-level address space management, thread management, and interprocess communication (IPC). The aim of our project is to overcome the issues related to security, extensibility, reliability and flexibility faced by monolithic kernels. Microkernels are one way to achieve this, since they divide the work among different servers. Failure of one server does not affect the functioning of the other servers, which makes microkernels a more robust solution. However, microkernels have performance issues due to the repeated message passing through kernel space required to carry out a task. We will try to overcome this shortcoming by switching servers dynamically from user space to kernel space based on the load on each server and the permissions granted to it.

Acknowledgement

We would like to thank Professor Ming-Hwa Wang for his guidance, his constant supervision and for providing us with the necessary information regarding the project. We would also like to thank the authors of the IEEE papers that helped us with our research.

Table of Contents

List of Tables and Figures
1. Abstract
2. Introduction
   Objective
   What is the problem?
   Why is this project related to this class?
   Why other approaches are not good?
   Why we think our approach is better
   Statement of the problem
   Area or scope of investigation
3. Theoretical basis and literature review
   Definition of the problem
   Theoretical background of the problem
   Related research to solve this problem
   Advantage / disadvantage of that research
   Our solution to solve this problem
   Why our solution is better and how it differs from others
4. Hypothesis (or goals)
   Positive / Negative hypothesis
   Multiple hypotheses
5. Methodology
   Generating / Collecting Input Data
   Solving the Problem
   Language Used
   Algorithm Design
   Tools Used
   Generating Output
   Testing Against Hypothesis
6. Implementation
   Code
   Design Document and Flowchart
7. Data Analysis and Discussion
   Output Generation
   Output Analysis
   Compare output against hypothesis
8. Conclusions and Recommendations
   Summary and Conclusions
   Recommendations for future studies
9. Bibliography
10. Appendices

List of Tables and Figures

<table>
<thead>
<tr>
<th>Figure number</th>
<th>Figure Name</th>
<th>Page number</th>
</tr>
</thead>
<tbody>
<tr>
<td>Figure 1</td>
<td>Design difference between monolithic kernels and microkernels</td>
<td>7</td>
</tr>
<tr>
<td>Figure 2</td>
<td>Physical memory allocation in Fiasco.OC.</td>
<td>11</td>
</tr>
<tr>
<td>Figure 3</td>
<td>Main program flowchart</td>
<td>13</td>
</tr>
<tr>
<td>Figure 4</td>
<td>Scheduler adder flowchart</td>
<td>14</td>
</tr>
<tr>
<td>Figure 5</td>
<td>Scheduler popper flowchart</td>
<td>15</td>
</tr>
<tr>
<td>Figure 6</td>
<td>Graph of process id versus their execution time</td>
<td>20</td>
</tr>
</tbody>
</table>

1. Abstract

In this project proposal, we present a type of kernel architecture that can be used to overcome the problems of the traditional kernels widely used in most modern operating systems like Linux, Windows, and older Mac OS. This architecture, called the microkernel, focuses on modularizing each component of the monolithic kernel into independent 'servers' that run in user space rather than kernel space. Running these servers in user space limits the permission level each component has.

2. Introduction

Objective

To simulate the implementation of a microkernel and demonstrate how the microkernel architecture can solve certain problems faced by the traditional monolithic kernel architecture.

What is the problem?
Most of the popular and widely used operating systems like Unix, MS-DOS and older versions of Mac OS follow the monolithic kernel architecture. Monolithic kernels are designed in such a way that all components of the kernel reside in one single module. These components permanently run in the 'kernel space'. Not all of these components require the privileges that are given to modules running in the kernel space. Because they run in the kernel space, it is possible that they may execute commands that they are not supposed to. This poses problems of security, reliability, flexibility and extensibility, as we shall discuss further. Because of the monolithic nature of older kernels, if one of the components of the kernel crashes, it causes the entire kernel to crash.

Figure 1. Design difference between monolithic kernels and microkernels

Why is this project related to this class?

The term 'microkernel' speaks for itself. Every operating system has an integral part without which it cannot function, and that part is called the kernel. It is responsible for all the low-level operations like communicating with the hardware, managing memory, handling interrupts and managing processes.

Why other approaches are not good?

The layered OS was one of the solutions developed to overcome the problems that were prevalent in operating systems running on the monolithic kernel. This type of OS reduces the security problems of the monolithic kernel by layering the OS; however, implementing security mechanisms proved quite difficult due to the constant communication between adjacent layers. Additionally, code changes in one layer would have numerous effects on the adjacent layers. Hence, this solution was not very helpful.

Why we think our approach is better

Our approach investigates the roots of the problems faced by the traditional monolithic kernels.
The fundamental design of the monolithic kernel generates certain problems that can be solved by changing the architecture and making components of the kernel independent of each other.

Statement of the problem

The traditional kernel, i.e. the monolithic kernel, is difficult to extend, maintain and debug due to the nature of its design. We try to solve this problem using the microkernel design.

Area or scope of investigation

- We will be simulating the most basic functionalities like memory management, IPC and process management.
- We shall try improving the performance of microkernels by dynamically switching servers from user space to kernel space, avoiding all the message passing required to carry out a task.
- Future scope - We would like to add the interrupt handling functionality to the project at a later stage.

3. Theoretical basis and literature review

Definition of the problem

Modifying or extending monolithic kernels to suit the requirements of modern personal and embedded computers is hard. A microkernel allows us to easily make changes to kernel components to suit the requirements of different types of modern computers.

Theoretical background of the problem

For years there have been debates about which architecture an operating system should ideally follow. There is no silver bullet for this problem, and one of the solutions is to modify the architecture according to its applications to better perform specific tasks.

Related research to solve this problem

There have been many publications in the area of microkernel architectures to be used on modern computers, including smartphones. L4, one of the most popular microkernels, is already deployed on billions of devices.

Advantage / disadvantage of that research

Research in this field has opened up many possibilities of using the kernel for different types of OS running on different types of devices. One clear disadvantage is speed, which is compromised to gain the advantages.
We try to overcome this disadvantage using our 'hybrid microkernel' approach.

Our solution to solve this problem
The layered OS approach was not very helpful. Hence, we plan on using microkernels. The philosophy underlying the microkernel is that only absolutely essential core operating system functions should be in the kernel; other auxiliary components should live in user space.

Why our solution is better and where our solution differs from others
Our solution makes the operating system more flexible, as functionalities can be removed as and when wanted; more extensible, as functionalities can be added without affecting the kernel code; and more reliable and secure, as all communication and calls are made via the kernel based on the permissions assigned to the servers. Since microkernels are modular, with all components running as independent servers, if a single component crashes the kernel may still be usable for executing other tasks. Also, part of our solution tries to improve the performance of the microkernel by switching servers back and forth between kernel mode and user mode depending on the current load and security settings.

4. Hypothesis (or goals)

Positive / negative hypothesis
The microkernel will be more secure, reliable and extensible than the monolithic kernel. Microkernels are generally slower than monolithic kernels due to the added IPC calls between kernel components and servers running in user space.

Multiple hypotheses
- The hybrid microkernel aims to increase speed when necessary, i.e., when security and reliability can be sacrificed for speed of execution.
- This also gives the user the flexibility to switch between modes of execution.

5. Methodology

Generating / Collecting Input Data
Since the activity of our project is distributed amongst various modules, input for the most part will be done through IPC.
For overall test cases, we may read commands from a text file, or simply hard-code scenarios for simplicity.

Solving the Problem

Language Used
We are designing our project in Java in order to take advantage of its services such as threads and synchronization. The purpose of the project is to simulate a functioning microkernel. By using Java, we reduce the time spent building basic services and data structures and increase the time we can spend on bigger ideas.

Algorithm Design
Functionality can live in either kernel space or user space. In a microkernel design, only essential services are kept in kernel space and the rest are in user space. This creates great modularity, security, stability, etc., at the cost of speed. As shown in Figure 3 below, allocating memory in a microkernel requires 7 messages to be sent between the kernel and various servers. To reduce the amount of communication between servers and the kernel during high-demand circumstances, we want to implement a 'space shifting' algorithm that will allow the kernel to access server functionality directly. Process management is the only module that knows how much process traffic there is; therefore we let it decide which servers to put in kernel mode and when. If a server is in kernel mode, the kernel can bypass the external user-space functionality and use internal kernel functionality directly (ex. Kernel PMA vs User PMA). By enabling the kernel to skip communication steps, the amount of time needed to get things done is significantly reduced.

Tools Used
We will be using Java to program our project; therefore Eclipse will be a tool we use in our development. We also used the Python libraries numpy and matplotlib (pyplot) to implement graph-generating scripts that illustrate our output more graphically. We also use Git for source code versioning and management.

Generating Output
Output is generated as each module communicates and executes.
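For intuition, the way the simulation marks time per message hop can be sketched as below. This is a simplified, hypothetical illustration (the class and method names are not from the project code); the real simulation uses the same idea of threads sleeping for random amounts, as described next.

```java
import java.util.Random;

// Sketch: each message "hop" between modules is simulated by a short
// random sleep, so elapsed time reflects how many hops a request needed.
public class MessageTimingSketch {

    static final Random RNG = new Random();

    // Simulate one message hop costing between 1 and maxMillis milliseconds.
    static long hop(String from, String to, int maxMillis) {
        long cost = RNG.nextInt(maxMillis) + 1;
        try {
            Thread.sleep(cost);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(from + " -> " + to + " (" + cost + " ms)");
        return cost;
    }

    public static void main(String[] args) {
        // User-mode allocation: kernel -> sigma0 -> memory manager (two hops).
        long userMode = hop("kernel", "sigma0", 20) + hop("sigma0", "memoryManager", 20);
        // Kernel-mode allocation: the kernel calls the memory manager directly (one hop).
        long kernelMode = hop("kernel", "memoryManager", 20);
        System.out.println("user-mode ms: " + userMode + ", kernel-mode ms: " + kernelMode);
    }
}
```

With more hops, the expected elapsed time grows, which is exactly what the output graphs visualize.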
Since we are only simulating a microkernel, we use time to illustrate the transfer of data and messages. To show that each message is being passed or that processing is being done, we have threads wait for a random amount of time and display how long tasks are taking. Output is generated in the form of graphs as well: text files containing process runtime data are created and then read by simple Python scripts. These scripts graph each process's start time and duration so it is more visually clear when the kernel shifts servers into kernel space.

Testing Against Hypothesis
Since we are actively displaying how much time tasks take, we use this data to compare the speed with and without our space-shifting algorithm.

6. Implementation

Code
The project is implemented in Java. The file names are as follows.
1. Main.java
2. User.java
3. Userspace.java
4. Kernel.java
5. SchedulerAdder.java
6. SchedulerPopper.java
7. Process.java
8. ProcessQueue.java
9. Message.java
10. Server.java
11. ServerCrashedException.java
12. ServerPermission.java
13. Sigma.java
14. MemoryManager.java
15. MemoryTree.java
16. Node.java
17. Memory.java
18. Simulation.java
19. FileManager.java
20. Cpu.java
21. Log.java
22. Report.java
23. Graph.java

We have also used a Python library to display the output of our project in the form of graphs. The file names are as follows.
1. plot_graph_2.py
2. plot_graph_3.py

The Python modules create two graphs which display the time taken by every process created by the user. The switch from user mode to kernel mode and back to user mode is visible due to the time variations for every process.

Design Document and Flowchart

Main Algorithm -
1) Main
2) User
3) Create process
4) Send process to kernel
5) Kernel sends process to scheduler
6) Scheduler schedules process (explained in the following algorithms)
7) CPU executes each command of the process
8) If in user mode:
   a. If memory command:
      i. Send request message to Sigma0
      ii.
Sigma0 sends the request to MemoryManager
      iii. MemoryManager reads the request type and executes the command
9) If in kernel mode:
   a. If memory command:
      i. Retrieve the MemoryManager reference from the permissionTable
      ii. Directly tell the MemoryManager to execute the command

Scheduler Adder Algorithm -
1) Process is created
2) Process is scheduled using the public schedule method of SchedulerAdder
3) Is there space in the scheduler queue?
   a) Yes: adds the process to the queue and notifies the scheduler popper waiting for the queue to get processes. Locks the queue so that the scheduler popper doesn't pop a process from the queue while a process is being added.
   b) No: prints that the queue has no more space, tells the kernel there is no space, and waits till the queue has space.
   c) Releases the lock.

Figure 4. Scheduler Adder Flowchart

Scheduler Popper Algorithm -
1) Decides which OS mode is being used - user mode or kernel mode - based on the number of processes present in the queue for execution:
   a) If the process queue size is greater than the mentioned threshold and the OS mode is not kernel mode, switch the OS mode to kernel mode.
   b) If the process queue size is smaller than the mentioned threshold and the OS mode is not user mode, switch the OS mode to user mode.
2) Is the scheduler queue empty?
   a) If yes, the scheduler popper waits on the queue till it gets at least one element.
   b) If no, pops the process from the top of the queue and notifies the scheduler adder waiting on the queue for space. Locks the queue so that the scheduler adder doesn't add a process to the queue while a process is being popped.
      i) Gives this popped process to the CPU for execution.

Figure 5. Scheduler Popper Flowchart

Memory Allocation Algorithm -
Memory allocation in our project is implemented as a buddy system. We have a slab size for the system. Any requests bigger than the slab size are allocated in the buddy system; otherwise they are allocated in the slab system, which is a pre-split tree.
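The size classes involved can be sketched as follows. This is a hypothetical, simplified sketch (the real implementation lives in MemoryManager.java and MemoryTree.java): a request either fits in the slab, or is rounded up to a power-of-two buddy block.

```java
// Sketch of buddy-system block sizing: a request larger than the slab
// size is rounded up to the nearest power of two before a block is
// split off the buddy tree.
public class BuddySketch {

    // Round a request size up to the nearest power of two (the buddy block size).
    static int blockSize(int request) {
        int size = 1;
        while (size < request) {
            size <<= 1;
        }
        return size;
    }

    // Decide whether a request goes to the slab tree or the buddy system.
    static String placement(int request, int slabSize) {
        return (request <= slabSize) ? "slab" : "buddy";
    }

    public static void main(String[] args) {
        int slabSize = 64; // hypothetical slab size chosen from frequent request sizes
        for (int r : new int[] { 10, 64, 100, 300 }) {
            System.out.println("request=" + r + " -> " + placement(r, slabSize)
                    + ", block=" + blockSize(r));
        }
    }
}
```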
The slab size is decided by the request sizes that occur most frequently.
1) Process comes in with a memory requirement and PID
2) MemoryManager checks the type of command
3) If the command is Alloc, it checks if that PID is already holding some memory
   a) If yes, displays a message "Memory is already allocated for PID"
   b) If no, MemoryManager checks if the request for memory is less than or equal to the slab size
      i) If the request is smaller than or equal to the slab size and there is space in the slab tree, the request is fulfilled.
      ii) If the request is bigger than the slab size and there is space in the buddy system, the request is fulfilled.
4) If the command is Free, it checks if the PID is holding memory in the system
   a) If yes, it frees the memory held by that PID
   b) If no, it displays a warning message "PID not found. Could not free memory for PID"
5) If the command is Realloc, it checks if the PID is holding memory in the system
   a) If yes, it follows step 4) followed by step 3)
   b) If no, it displays a warning message "PID not found. Could not realloc memory for PID"

7. Data Analysis and Discussion

Output Generation
For simplicity, we have presented three test cases. All outputs are displayed on the console. We have also maintained a detailed report.txt file which records all the details of the kernel, scheduler, CPU, processes and servers. Additionally, we display the performance variation of the processes when the OS mode switch comes into action; this is shown in the form of graphs implemented using the Python library matplotlib.

**Simulation of basic microkernel** - This test case displays the basic working of the microkernel and the improvements introduced by us. The microkernel switches the OS mode to kernel mode depending on the traffic in the scheduler.
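That switching decision can be sketched as below. This is a simplified sketch of the threshold rule (the full logic is in setOSMode in the appendix; the mode constants mirror ServerPermission):

```java
// Sketch of the threshold rule used by the scheduler popper: a server is
// shifted into kernel mode when queue traffic reaches its threshold, and
// back into user mode when traffic drops below it.
public class ModeSwitchSketch {

    static final int MODE_KERNEL = 0;
    static final int MODE_USER = 1;

    // Decide the mode for a server given the current queue length and its threshold.
    static int decideMode(int queueSize, int threshold) {
        return (queueSize >= threshold) ? MODE_KERNEL : MODE_USER;
    }

    public static void main(String[] args) {
        int threshold = 5; // hypothetical per-server threshold
        for (int queueSize : new int[] { 2, 5, 7, 3 }) {
            int mode = decideMode(queueSize, threshold);
            System.out.println("queue=" + queueSize + " -> "
                    + (mode == MODE_KERNEL ? "KERNEL" : "USER"));
        }
    }
}
```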
If the number of processes goes beyond the threshold of the servers, the OS mode is changed to kernel mode, which is faster than user mode since the communication between the servers is skipped.

**Simulation of security in microkernel** - This test case displays the strong security of the microkernel. We make a server call a command for which it does not have permission, and the kernel denies execution of the command as the server does not have the rights to run it.

**Simulation of reliability** - This test case displays the reliability of microkernels. We make a server crash. Any calls to the crashed server are not completed; however, the rest of the microkernel continues to work properly. We crash the memory manager, and the file manager keeps running perfectly even when the memory manager has crashed.

**Output Analysis**
From the output, the following observations can be made:
1. When any server is in user mode, the number of steps for execution increases, since many messages need to be passed between the kernel and the server, and between the server and its internal components, before any work is actually done. This adds overhead.
2. When any server is in kernel mode, the kernel is in full control and can directly invoke the server's components to which it holds references. This eliminates the overhead of message passing and makes the code run faster.
3. Because the kernel also needs some understanding for directly operating servers when they are in kernel mode, additional code needs to be kept within the kernel. This might increase the size of the kernel.
4. After generating output, we realized that the queue may fill much faster than processes are executed, so it is possible that even though a server has a threshold value of, say, 5, the 2nd process itself may start executing in kernel mode.
This is because even though only the 2nd process is currently running, the process queue may hold a number of processes well over the threshold value (5).

Compare output against hypothesis
Our hypothesis states that the time required for the execution of a process is reduced when a module switches from user mode to kernel mode, increasing the performance of the OS by eliminating the overhead caused by the intermediate message-passing steps via the kernel. Our hypothesis stands true. We have depicted this using a graph. We run 20 processes one after the other. The first process stays in user mode and takes a considerable amount of time to execute. However, as more and more processes arrive in the scheduler, a mode switch takes place. When the module is in kernel mode, the processes run faster than they would in user mode. Additionally, the last few processes run in user mode again, as no more processes are coming in.

Figure 6. Graph of process id versus their execution time

Figure 6 gives us the following findings:

Average time taken for execution in user mode, for PIDs 1, 16, 17, 18, 19 and 20:
\[ (4 + 5 + 4.5 + 4 + 5.7 + 5) / 6 = 4.7 \]
If a server executes in user mode all the time, it takes about 4.7 time units per execution.

Average time taken for execution in kernel mode, for PIDs 2-15:
\[ (0.3 + 0.5 + 0.2 + 1 + 0.9 + 0.4 + 0.4 + 0.3 + 0.7 + 0.6 + 0.2 + 0.6 + 0.8 + 0.5) / 14 = 0.52 \]

While these values are highly idealized and represent simulation conditions, we expect actual findings to be similar.

8. **Conclusions and Recommendations**

**Summary and Conclusions**
The project demonstrates the merits of using a microkernel. We have explored the security, reliability and mode-switching aspects of the microkernel.
The demonstrations given by us indicate that using a microkernel is a good choice when you have tight security constraints and at the same time need good performance from the OS. The back-and-forth switch between user mode and kernel mode shows how the microkernel may solve the performance issues faced by monolithic kernels.

**Recommendations for future studies**
- HelenOS [http://www.helenos.org/](http://www.helenos.org/) - Our future studies include studying HelenOS, which aims to become a complete and usable modern operating system, offering room for experimentation and research. HelenOS uses its own microkernel written from scratch and supports SMP, multitasking and multithreading on both 32-bit and 64-bit, little-endian and big-endian processor architectures. HelenOS is developed mostly by faculty members and former and contemporary students of the Faculty of Mathematics and Physics at Charles University in Prague, though the project is open to everyone and has developers with different backgrounds from various places around the world. The source code is open and available under the BSD license.
- Future project goals include the implementation of device drivers.

9. Bibliography
- A Practical Look at Micro-Kernels and Virtual Machine Monitors - François Armand, Michel Gien, Member, IEEE
- Research on Microkernel Technology - Wang Chengjun, Department of Computer Science and Technology, Weifang University
- A Scalable Physical Memory Allocation Scheme For L4 Microkernel - Chen Tian, Daniel Waddington, Jilong Kuang
- Design of Embedded OS Micro-kernel Experiment Series on ARM - Bo Qu

10. Appendices

Program Flowchart
Create different users for different use cases
User creates processes randomly
Process is sent to kernel for further processing
Scheduler schedules process
CPU checks the mode of the OS, whether it is user mode or kernel mode.
Is OS in user mode?
If in user mode and the command is a memory command:
- Send message to Sigma0
- Sigma0 sends a message to the MemoryManager
- MemoryManager reads the request type and executes the command

If in kernel mode and the command is a memory command:
- Retrieve the MemoryManager reference from the permissionTable
- Directly tell the MemoryManager to execute the command

Scheduler Adder -

package hybrid_mircokernel;

import java.util.ArrayList;

public class SchedulerAdder extends Thread {

    private ArrayList<Process> process_table;
    public int QUEUE_SIZE;
    public static final int RANGE = 1000;
    public static final int MIN_NUMBER = 0;

    /* Constructor */
    public SchedulerAdder(ArrayList<Process> process_table, int size) {
        this.process_table = process_table;
        this.QUEUE_SIZE = size;
    }

    /* Adds the incoming process to the process table, i.e., the scheduler queue */
    public boolean schedule(Process p) {
        Log.i("PROCESS TABLE SIZE: ", String.valueOf(process_table.size()));
        if (process_table.size() == QUEUE_SIZE) {
            synchronized (process_table) {
                try {
                    Log.i("QUEUE IS FULL..");
                    process_table.wait(); // wait until the popper makes space
                } catch (InterruptedException e) {
                    Log.e("SchedulerAdder.schedule()", e);
                }
            }
        }
        synchronized (process_table) {
            p.setBurstTime((long) (Math.random() * RANGE) + MIN_NUMBER);
            process_table.add(p);
            p.appendToLog("Process added to queue");
            process_table.notify();
        }
        return true;
    }
}

Scheduler Popper -

package hybrid_mircokernel;

import java.util.ArrayList;
import java.util.Iterator;

public class SchedulerPopper extends Thread {

    private ArrayList<Process> process_table;
    public int QUEUE_SIZE;
    boolean running = false;

    /* Constructor */
    public SchedulerPopper(ArrayList<Process> process_table, int size) {
        this.process_table = process_table;
        this.QUEUE_SIZE = size;
    }

    /* Toggles the OS mode between kernel mode and user mode based on the traffic in the queue */
    private void setOSMode() {
        Iterator<ServerPermission> iterator = Kernel.permissionTable.values().iterator();
        while (iterator.hasNext()) {
            ServerPermission temp = iterator.next();
            if (process_table.size() >= temp.getThreshold()
                    && temp.getMode() != ServerPermission.MODE_KERNEL) {
                temp.setMode(ServerPermission.MODE_KERNEL);
                Log.i("Changed to Kernel Mode");
            } else if (process_table.size() < temp.getThreshold()
                    && temp.getMode() != ServerPermission.MODE_USER) {
                temp.setMode(ServerPermission.MODE_USER);
                Log.i("Changed to User Mode");
            }
            System.out.println("NAME: " + temp.getName() + " MODE: " + temp.getMode());
        }
    }

    /* Pops the next process from the head of the queue for execution */
    private void popProcess() {
        Process activeProcess;
        setOSMode();
        while (process_table.isEmpty()) {
            synchronized (process_table) {
                try {
                    Log.i("QUEUE IS EMPTY..");
                    process_table.wait();
                } catch (InterruptedException e) {
                    Log.e(e);
                }
            }
        }
        synchronized (process_table) {
            process_table.notifyAll();
            activeProcess = process_table.get(0);
            process_table.remove(0);
            activeProcess.appendToLog("Process popped from queue and given to CPU");
        }
        if (activeProcess != null) {
            Cpu.execute(activeProcess);
        }
    }

    public void run() {
        running = true;
        while (running) {
            popProcess();
        }
    }

    public void shutdown() {
        running = false;
    }
}

CPU -

package hybrid_mircokernel;

import java.util.Random;

public class Cpu extends Thread {

    /* Constructor */
    private Cpu() {}

    /*
     * Method to execute the processes sent by the scheduler.
     * Executes the requested commands by the processes.
     */
    public static void execute(Process p) {
        if (Simulation.type == Simulation.SECURITY) {
            simulateSecurity(p);
        } else if (Simulation.type == Simulation.CRASH) {
            simulateReliability(p);
        } else {
            long burstTime = p.getBurstTime();
            p.appendToLog("Starting to execute process with burst time: " + burstTime);
            p.setProcessExecutionStartTime();
            StringBuffer report = new StringBuffer("Process: " + p.getPid() + "\n");
            try {
                sleep(burstTime);
                if (Kernel.permissionTable.get("memoryManager").getMode() == ServerPermission.MODE_USER) {
                    report.append("Mode: USER\n");
                    p.appendToLog("==========>>> Running " + p.getPid() + " in User mode.");
                    // We check if this server has the permission to execute ALLOC
                    UserSpace.getSigma0().receive(new Message(p, Message.CODE_ALLOC));
                    for (int i = 0; i < 5; i++) {
                        executeRandomCommand(p, ServerPermission.MODE_USER);
                    }
                    UserSpace.getSigma0().receive(new Message(p, Message.CODE_FREE));
                } else if (Kernel.permissionTable.get("memoryManager").getMode() == ServerPermission.MODE_KERNEL) {
                    report.append("Mode: KERNEL\n");
                    p.appendToLog("==========>>> Running " + p.getPid() + " in Kernel mode.");
                    MemoryManager memoryManager = (MemoryManager) Kernel.permissionTable
                            .get("memoryManager").getReference();
                    memoryManager.receive(new Message(p, Message.CODE_ALLOC));
                    for (int i = 0; i < 5; i++) {
                        executeRandomCommand(p, ServerPermission.MODE_KERNEL);
                    }
                    memoryManager.receive(new Message(p, Message.CODE_FREE));
                }
            } catch (InterruptedException e) {
                Log.e("Cpu.execute()", e);
            }
            p.appendToLog("Finished executing process.");
            p.setProcessExecutionEndTime();
            Graph.printValuesForGraph(p);
            report.append("Start: " + p.getExecutionStartTime() + "\n");
            report.append("End: " + p.getExecutionEndTime() + "\n");
            report.append("Duration: " + p.processExecutionDuration() + "\n");
            report.append("\n======================================================\n\n");
            Report.write(report.toString());
            p.appendToLog("Process executed for " + p.processExecutionDuration() + " nsec\n");
            p.appendToLog("\n=================================================");
        }
    }

    /* Method to randomly select the memory commands to be executed by the processes */
    private static void executeRandomCommand(Process p, int mode) {
        int randomNo = (new Random()).nextInt(100);
        if (randomNo % 2 == 0) {
            p.setNewMemoryRequirements();
            if (mode == ServerPermission.MODE_USER) {
                UserSpace.getSigma0().receive(new Message(p, Message.CODE_REALLOC));
            } else {
                ((MemoryManager) Kernel.permissionTable.get("memoryManager")
                        .getReference()).receive(new Message(p, Message.CODE_REALLOC));
            }
        }
    }

    /* Method to randomly select the file commands to be executed by the processes */
    private static void executeRandomFileCommand(Process p, FileManager fm) {
        int randomNo = (new Random()).nextInt(100);
        if (randomNo % 2 == 0) {
            p.setNewMemoryRequirements();
            if (Kernel.permissionTable.get("memoryManager").getMode() == ServerPermission.MODE_USER) {
                UserSpace.getSigma0().receive(new Message(p, Message.CODE_REALLOC));
            } else {
                ((MemoryManager) Kernel.permissionTable.get("memoryManager")
                        .getReference()).receive(new Message(p, Message.CODE_REALLOC));
            }
        } else {
            if (Kernel.permissionTable.get("fileManager").getMode() == ServerPermission.MODE_USER) {
                fm.receive(new Message(p, Message.CODE_CREATE_FILE));
            } else {
                ((FileManager) Kernel.permissionTable.get("fileManager")
                        .getReference()).receive(new Message(p, Message.CODE_CREATE_FILE));
            }
        }
    }

    /* Method to simulate the security aspect of the microkernel */
    private static void simulateSecurity(Process p) {
        FileManager fileManager = new FileManager();
        int[] fileCommands = { Message.CODE_CREATE_FILE, Message.CODE_FREE, Message.CODE_DELETE_FILE };
        for (int i = 0; i < fileCommands.length; i++) {
            if (Kernel.permissionTable.get("fileManager").hasPermission(fileCommands[i])) {
                fileManager.receive(new Message(p, fileCommands[i]));
            } else {
                Log.e("***EXCEPTION: File manager server tried to execute a FREE MEMORY command ("
                        + fileCommands[i] + ") but was not executed as it does
not have the necessary permissions");
            }
        }
    }

    /* Method to simulate the reliability aspect of the microkernel */
    private static void simulateReliability(Process p) {
        FileManager fileManager = new FileManager();
        for (int i = 0; i < 20; i++) {
            executeRandomFileCommand(p, fileManager);
        }
    }

    /* Exception handler for server crashes */
    public static void catchException(ServerCrashedException e) {
        if (e.getCode() == ServerCrashedException.CODE_CRASH) {
            Log.e("File manager has crashed during process " + e.getProcess().getPid());
        } else if (e.getCode() == ServerCrashedException.CODE_TIMEOUT) {
            Log.e("File manager is not responding.");
        }
    }
}

ServerPermission -

package hybrid_mircokernel;

public class ServerPermission {

    public static final int MODE_KERNEL = 0;
    public static final int MODE_USER = 1;

    private static int processCount;
    private boolean[] permissions;
    private int mode;
    private String name;
    private Object reference;
    private int threshold;

    /* Constructor */
    public ServerPermission(String name, Object reference, int threshold) {
        mode = ServerPermission.MODE_USER;
        this.name = name;
        this.reference = reference;
        this.threshold = threshold;
    }

    /* getters/setters */
    public int getMode() { return mode; }
    public void setMode(int mode) { this.mode = mode; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    protected Object getReference() { return reference; }
    public void setReference(Object reference) { this.reference = reference; }
    public void setThreshold(int threshold) { this.threshold = threshold; }
    public int getThreshold() { return threshold; }
    public void upCount() { processCount++; }
    public void downCount() { processCount--; }
    protected void setPermissions(boolean[] permissions) { this.permissions = permissions; }
    public boolean hasPermission(int code) { return permissions[code]; }
}

Kernel -

package hybrid_mircokernel;

import java.util.ArrayList;
import java.util.HashMap;

public class Kernel extends Thread {

    private
    boolean running;
    private SchedulerAdder schedulerAdder;
    private SchedulerPopper schedulerPopper;
    private static final int QUEUE_SIZE = 10;
    public static HashMap<String, ServerPermission> permissionTable;
    private ArrayList<Process> process_table = new ArrayList<Process>();
    private Object state;

    /* Constructor */
    public Kernel() {}

    /* Boots the kernel */
    public void boot() {
        schedulerAdder = new SchedulerAdder(process_table, QUEUE_SIZE);
        schedulerPopper = new SchedulerPopper(process_table, QUEUE_SIZE);
        schedulerPopper.start();
        permissionTable = new HashMap<String, ServerPermission>();
        state = new Object();
        UserSpace.create();
        addToPermissionTable(UserSpace.getServers());
        running = true;
        try {
            this.start();
        } catch (Exception e) {
            Log.e(e);
        }
    }

    /* Shuts down the kernel */
    public void shutdown() {
        UserSpace.shutDownServers();
        schedulerPopper.shutdown();
        try {
            schedulerPopper.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        Report.close();
        running = false;
        wakeup();
    }

    public void run() {
        while (running) {
            synchronized (state) {
                try {
                    state.wait();
                } catch (InterruptedException e) {
                }
            }
        }
    }

    private void wakeup() {
        synchronized (state) {
            state.notify();
        }
    }

    public boolean receive(Message message) {
        wakeup();
        if (message == null) {
            Log.e("Message was null.");
            return false;
        }
        boolean status = schedulerAdder.schedule(message.getProcess());
        return status;
    }

    /* Adds a server permission to the server permission table */
    protected void addToPermissionTable(String name, Object reference, int threshold) {
        ServerPermission s = new ServerPermission(name, reference, threshold);
        Kernel.permissionTable.put(name, s);
    }

    /* Adds the server permissions to the server permission table */
    private void addToPermissionTable(ServerPermission[] servers) {
        int i = 0;
        ServerPermission s = servers[i];
        while (s != null) {
            // Fresh permission set per server; indices are the Message codes.
            boolean[] permissions = { false, false, false, false, false, false };
            if (servers[i].getName().equals(FileManager.NAME)) {
                permissions[Message.CODE_CREATE_FILE] = true;
                permissions[Message.CODE_DELETE_FILE] = true;
                permissions[Message.CODE_FREE] = false;
                permissions[Message.CODE_ALLOC] = false;
                permissions[Message.CODE_REALLOC] = false;
            } else if (servers[i].getName().equals(MemoryManager.NAME)) {
                permissions[Message.CODE_ALLOC] = true;
                permissions[Message.CODE_REALLOC] = true;
                permissions[Message.CODE_FREE] = true;
                permissions[Message.CODE_CREATE_FILE] = false;
                permissions[Message.CODE_DELETE_FILE] = false;
            } else if (servers[i].getName().equals(Sigma.NAME)) {
                // no special permissions
            }
            s.setPermissions(permissions);
            Kernel.permissionTable.put(s.getName(), s);
            i++;
            // The servers array is null-terminated; guard against running off the end.
            s = (i < servers.length) ? servers[i] : null;
        }
    }

    /* Removes a server permission from the server permission table */
    protected ServerPermission removeFromPermissionTable(String name) {
        return permissionTable.remove(name);
    }
}

Input/Output Listing

Choose from the following test cases:
(1) Simulate Microkernel Security
(2) Simulate reliability
(3) Hybrid use case
(0) Quit
1
INFO:: QUEUE IS EMPTY..
INFO:: PROCESS TABLE SIZE: :: 0
INFO:: Creating a new file for process 1
ERROR:: ***EXCEPTION: File manager server tried to execute a FREE MEMORY command (3) but was not executed as it does not have the necessary permissions
INFO:: Deleting a file requested by process 1
NAME: fileManager MODE: 1
NAME: memoryManager MODE: 1
NAME: sigma0 MODE: 1
INFO:: QUEUE IS EMPTY..

Choose from the following test cases:
(1) Simulate Microkernel Security
(2) Simulate reliability
(3) Hybrid use case
(0) Quit
2
INFO:: QUEUE IS EMPTY..
INFO:: PROCESS TABLE SIZE: :: 0
INFO:: Memory reallocated for process 1
ERROR:: File manager has crashed!
ERROR:: File manager has crashed during process 1
ERROR:: File manager is not responding.
ERROR:: File manager is not responding.
INFO:: Memory reallocated for process 1
ERROR:: File manager is not responding.
INFO:: Memory reallocated for process 1
INFO:: Memory reallocated for process 1
INFO:: Memory reallocated for process 1
INFO:: Memory reallocated for process 1
ERROR:: File manager is not responding.
ERROR:: File manager is not responding.

Choose from the following test cases:
(1) Simulate Microkernel Security
(2) Simulate reliability
(3) Hybrid use case
(0) Quit
3
INFO:: QUEUE IS EMPTY..
INFO:: PROCESS TABLE SIZE: :: 0
INFO:: PROCESS TABLE SIZE: :: 0
INFO:: PROCESS TABLE SIZE: :: 1
INFO:: Memory allocated for process 1
INFO:: PROCESS TABLE SIZE: :: 2
INFO:: PROCESS TABLE SIZE: :: 3
INFO:: Memory reallocated for process 1
INFO:: PROCESS TABLE SIZE: :: 4
INFO:: PROCESS TABLE SIZE: :: 5
INFO:: Memory reallocated for process 1
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: PROCESS TABLE SIZE: :: 7
INFO:: Memory with PID '1' free'd.
INFO:: Changed to Kernel Mode
NAME: fileManager MODE: 0
INFO:: Changed to Kernel Mode
NAME: memoryManager MODE: 0
INFO:: Changed to Kernel Mode
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 7
INFO:: Memory allocated for process 2
INFO:: Memory reallocated for process 2
INFO:: Memory reallocated for process 2
INFO:: Memory reallocated for process 2
INFO:: Memory reallocated for process 2
INFO:: Memory with PID '2' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 7
INFO:: Memory allocated for process 3
INFO:: Memory reallocated for process 3
INFO:: Memory with PID '3' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: Memory allocated for process 4
INFO:: Memory reallocated for process 4
INFO:: Memory with PID '4' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: Memory allocated for process 5
INFO:: Memory reallocated for process 5
INFO:: Memory reallocated for process 5
INFO:: Memory with PID '5' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: PROCESS TABLE SIZE: :: 7
INFO:: PROCESS TABLE SIZE: :: 8
INFO:: Memory allocated for process 6
INFO:: Memory reallocated for process 6
INFO:: Memory reallocated for process 6
INFO:: Memory reallocated for process 6
INFO:: Memory reallocated for process 6
INFO:: Memory with PID '6' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: Memory allocated for process 7
INFO:: Memory reallocated for process 7
INFO:: Memory reallocated for process 7
INFO:: Memory with PID '7' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 7
INFO:: Memory allocated for process 8
INFO:: Memory reallocated for process 8
INFO:: Memory reallocated for process 8
INFO:: Memory with PID '8' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: Memory allocated for process 9
INFO:: Memory reallocated for process 9
INFO:: Memory reallocated for process 9
INFO:: Memory with PID '9' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: Memory allocated for process 10
INFO:: Memory reallocated for process 10
INFO:: Memory reallocated for process 10
INFO:: Memory reallocated for process 10
INFO:: Memory with PID '10' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 5
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: Memory allocated for process 11
INFO:: Memory reallocated for process 11
INFO:: Memory reallocated for process 11
INFO:: Memory reallocated for process 11
INFO:: Memory with PID '11' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: Memory allocated for process 12
INFO:: Memory reallocated for process 12
INFO:: Memory reallocated for process 12
INFO:: Memory reallocated for process 12
INFO:: Memory with PID '12' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 5
INFO:: Memory allocated for process 13
INFO:: Memory reallocated for process 13
INFO:: Memory with PID '13' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: PROCESS TABLE SIZE: :: 6
INFO:: Memory reallocated for process 14
INFO:: Memory reallocated for process 14
INFO:: Memory reallocated for process 14
INFO:: Memory with PID '14' free'd.
NAME: fileManager MODE: 0
NAME: memoryManager MODE: 0
NAME: sigma0 MODE: 0
INFO:: Memory allocated for process 15
INFO:: Memory reallocated for process 15
INFO:: Memory reallocated for process 15
INFO:: Memory reallocated for process 15
INFO:: Memory reallocated for process 15
INFO:: Memory reallocated for process 15
INFO:: Memory with PID '16' free'd.
INFO:: Changed to User Mode
NAME: fileManager MODE: 1
INFO:: Changed to User Mode
NAME: memoryManager MODE: 1
INFO:: Changed to User Mode
NAME: sigma0 MODE: 1
INFO:: Memory allocated for process 17
INFO:: Memory reallocated for process 17
INFO:: Memory reallocated for process 17
INFO:: Memory reallocated for process 17
INFO:: Memory with PID '17' free'd.
NAME: fileManager MODE: 1
NAME: memoryManager MODE: 1
NAME: sigma0 MODE: 1
INFO:: Memory allocated for process 18
INFO:: Memory reallocated for process 18
INFO:: Memory reallocated for process 18
INFO:: Memory reallocated for process 18
INFO:: Memory with PID '18' free'd.
NAME: fileManager MODE: 1
NAME: memoryManager MODE: 1
NAME: sigma0 MODE: 1
INFO:: Memory allocated for process 19
INFO:: Memory reallocated for process 19
INFO:: Memory reallocated for process 19
INFO:: Memory with PID '19' free'd.
NAME: fileManager MODE: 1
NAME: memoryManager MODE: 1
NAME: sigma0 MODE: 1
INFO:: Memory allocated for process 20
INFO:: Memory reallocated for process 20
INFO:: Memory reallocated for process 20
NAME: fileManager MODE: 1
NAME: memoryManager MODE: 1
NAME: sigma0 MODE: 1
INFO:: QUEUE IS EMPTY..

REPORT:
Process: 1  Mode: USER   Start: 108956947565975 End: 108961189705041 Duration: 4242139066
Process: 2  Mode: KERNEL Start: 108961258141953 End: 108961864459608 Duration: 606317655
Process: 3  Mode: KERNEL Start: 108961864723513 End: 108962667291049 Duration: 802567536
Process: 4  Mode: KERNEL Start: 108962667487512 End: 108962856228442 Duration: 188740930
Process: 5  Mode: KERNEL Start: 108962856465481 End: 108963098319620 Duration: 241854139
Process: 6  Mode: KERNEL Start: 108963098582477 End: 108963916640750 Duration: 818058273
Process: 7  Mode: KERNEL Start: 108963916864063 End: 108964625452530 Duration: 708588467
Process: 8  Mode: KERNEL Start: 108964625727167 End: 108965408312697 Duration: 782585530
Process: 9  Mode: KERNEL Start: 108965408489761 End: 108965648508061 Duration: 240018300
Process: 10 Mode: KERNEL Start: 108965648859072 End: 108965688428910 Duration: 39569838
Process: 11 Mode: KERNEL Start: 108965688643080 End: 108966580560522 Duration: 891917442
Process: 12 Mode: KERNEL Start: 108966580729162 End: 108967223508025 Duration: 642778863
Process: 13 Mode: KERNEL Start: 108967223736608 End: 108967679180434 Duration: 455443826
Process: 14 Mode: KERNEL Start: 108967679350347 End: 108968263406866 Duration: 584056519
Process: 15 Mode: KERNEL Start: 108968263624732 End: 108968625040930 Duration: 361416198
Process: 16 Mode: KERNEL Start: 108968625290858 End: 108968686031732 Duration: 60740874
Process: 17 Mode: USER   Start: 108968686386421 End: 108973919294752 Duration: 5232908331
Process: 18 Mode: USER   Start: 108973919548011 End: 108979164301151 Duration: 5244753140
Process: 19 Mode: USER   Start: 108979164682227 End: 108983710903237 Duration: 4546221010

Other related material

Matplotlib - Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell (à la MATLAB® or Mathematica®), web application servers, and six graphical user interface toolkits.
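The Duration column in the REPORT above is simply End − Start, in nanoseconds. A minimal sanity check, using two timestamp pairs copied from the logged run:

```python
# Duration = End - Start (nanoseconds); timestamps copied from the report.
samples = [
    # (pid, start, end, reported_duration)
    (1, 108956947565975, 108961189705041, 4242139066),
    (19, 108979164682227, 108983710903237, 4546221010),
]

for pid, start, end, reported in samples:
    assert end - start == reported, f"process {pid}: duration mismatch"
    print(f"process {pid}: {end - start} ns ≈ {(end - start) / 1e9:.2f} s")
# -> process 1: 4242139066 ns ≈ 4.24 s
# -> process 19: 4546221010 ns ≈ 4.55 s
```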
Advanced query language for manipulating complex entities

Timo Niemi (a,*), Marko Junkkari (a), Kalervo Järvelin (b) and Samu Viita (a)

(a) Department of Computer and Information Sciences, FIN-33014 University of Tampere, Finland
(b) Department of Information Studies, FIN-33014 University of Tampere, Finland

(*) Corresponding author: Dr. Timo Niemi, Department of Computer and Information Sciences, FIN-33014 University of Tampere, Finland. E-mail: tn@cs.uta.fi, Tel: +358 3 215 6782, Fax: +358 3 215 6070

Abstract

Complex entities are one of the most popular ways to model relationships among data. In particular, complex entities known as physical assemblies are common in several application areas. Typically, a complex entity consists of several parts organized at many nested levels. Contemporary query languages intended for manipulating complex entities support only extensional queries. Moreover, the user has to master the structure of complex entities completely, which is impossible when a physical assembly consists of a huge number of parts. Further, existing query languages do not support the manipulation of documents related to the parts of physical assemblies. In this paper we introduce a novel, declarative and powerful query language in which these deficiencies have been eliminated. Our query language supports text information retrieval related to parts, and it contains intensional and combined extensional-intensional query features. These features support queries of new types. In the paper we give several sample queries which demonstrate the usefulness of these query types. In addition, we show that conventional extensional queries can be formulated intuitively and compactly in our query language. Among other things, this is because our query primitives free the user from specifying navigation explicitly.
Keywords: Complex entities, physical assembly, query language, information retrieval, XML documents

1. Introduction

An information system consists of entities, their properties and relationships. Similar entities are grouped into an entity type with a unique name. Entities belong to the extensional level (the instance level) whereas entity types belong to the intensional level (the schema level). The properties (attributes) of entity types belong to the intensional level whereas their values belong to the extensional level. Relationships may also be represented at both the extensional and the intensional level. At the intensional level a relationship is represented through entity types, and the representation at the extensional level is based on entities. A relationship may contain, in addition to the participating entity types, attributes that express its characteristics. In modeling relationships among entities three basic relationships are usually distinguished: the is-a relationship (or specialization/generalization), the association (or member-of relationship) and the part-of relationship (Rumbaugh et al., 1991; Rumbaugh et al., 1999; Motschnig-Pitrik & Kaasböll, 1999; Renguo et al., 2000; Wand et al., 1999). In the is-a relationship one organizes similar entity types hierarchically. If X and Y are entity types and X is-a Y holds, then X is called the subentity type of Y and Y the superentity type of X. At the extensional level, each entity belonging to the entity type X also belongs to the entity type Y. Association models an event, a phenomenon or a fact among independent entity types / entities. Typically, each entity type / entity participating in an association plays some role. In an association the participating entity types are assumed to be conceptually at the same level (Renguo et al., 2000). In a part-of relationship entities / entity types are not conceptually at the same level because they differ in complexity.
In modeling a part-of relationship it is essential to recognize the entities / entity types that play the roles of parts in an entity / entity type which is the whole. The modeling of a part-of relationship requires structuring among entities / entity types and results in several hierarchy levels among them. Therefore transitivity is a primary characteristic of the part-of relationship. In this paper we deal only with part-of relationships. The part-of relationship has no established terminology. For example, it has been called whole-part association (Civello, 1993), part-whole relationship (Motschnig-Pitrik & Kaasböll, 1999), whole-part relationship (Barbier et al., 2000), part-whole hierarchy (Pazzi, 1999), part-of structure (Rousset & Hors, 1996), aggregation (Rumbaugh et al., 1991), complex object (Savnik et al., 1999) and composition relation (Urtado & Oussalah, 1998). Here, following Järvelin and Niemi (1999), we call a complex entity an entity modeled by the part-of relationship whose parts are organized as several nested substructures. The context tells whether we mean its intensional or extensional level, or both. Our notion of complex entity refers to a single unit which contains all its parts and which has its own function in the real world. It is typical of physical assemblies that they are complex entities constructed for a specific purpose in the real world. For example, we can consider a car as a physical assembly, which consists immediately of a body, engine, transmission, etc.; in turn, a body consists of a frame, doors, windows, etc. In other words, a car is a single unit in the real world which is capable of moving in a controlled way, whereas none of its parts has this property. Most database query languages support only extensional queries, i.e., query results consist only of extensional level information.
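Because part-of is transitive, the indirect parts of a whole are exactly the transitive closure of the immediate part-of relation. A minimal sketch of this idea, using a hypothetical fragment of the car example above:

```python
# Immediate part-of relation: whole -> set of its immediate parts.
# This structure is an illustrative fragment of the car example in the text.
immediate_parts = {
    "car":  {"body", "engine", "transmission"},
    "body": {"frame", "doors", "windows"},
}

def all_parts(whole, rel):
    """All immediate and indirect parts of `whole` (transitive closure)."""
    result = set()
    stack = [whole]
    while stack:
        for part in rel.get(stack.pop(), set()):
            if part not in result:
                result.add(part)
                stack.append(part)
    return result

print(sorted(all_parts("car", immediate_parts)))
# -> ['body', 'doors', 'engine', 'frame', 'transmission', 'windows']
```

Note that the frame appears in the result even though it is only an indirect part of the car (via the body); this is the transitivity the text refers to.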
We develop a declarative query language for complex entities, which supports both extensional and intensional queries. In the semantic sense, a component and its immediate components are often of special interest in complex entities. For example, the assembly and disassembly of a complex entity usually happen in phases where a component and its immediate components are treated in one phase only. In many applications it is natural to associate a document with each part of the physical assembly at hand, e.g. for giving instructions for assembly or maintenance. Further, in many applications the manipulation of complex entities and their documentation is needed at the same time. To the best of our knowledge, our query language is the first proposal for this purpose. The rest of the paper is organized as follows. In Section 2 we review both approaches to modeling complex entities and query languages for manipulating them. We shall also present the goals of our query language. In Section 3 we introduce the complex entity modeling and related XML documentation of our system. In this section we also illustrate our sample application. The primitives of our query language and the notion of variable are introduced in Section 4. In Section 5 we formulate sample queries of different types. The properties and implementation of our language are discussed in Section 6. A summary is given in Section 7.

2. Related work

Complex entities are common in the real world. They have an important role in many advanced applications in engineering, manufacturing and graphics design. They have also been used for organizing medical terminologies (Liu et al., 1996). Niemi and Järvelin (1995; Järvelin & Niemi, 1999), like several other authors (e.g. Sacks-Davis et al., 1995; Zobel et al., 1991; Lambrix & Padgham, 2000), have proposed complex entities for representing and manipulating hierarchical documents. Järvelin and others (2000) have shown that complex entities are natural structures for informetrics.
Complex entities have also proven useful in the Web, embedded in Web pages defined in the OHTML syntax and in the Object Exchange Model (OEM) (Riet, 1998). Complex entities are popular because they support the modeling of semantically different relationships in applications. Many authors (e.g. Winston et al., 1987; Kim et al., 1987; Civello, 1993; Motschnig-Pitrik & Kaasböll, 1999; Halper et al., 1998; Barbier et al., 2000) have focused on the semantics of complex entities and on categorizing them. For us, the most important semantic distinction is whether a complex entity is exclusive or shared. A component is exclusive if it can be attached as an immediate part to at most one more complex entity, whereas a shared component may be attached to any number of more complex entities (Artale et al., 1996; Halper et al., 1998). In this paper we assume that a complex entity consists only of exclusive components. This constraint is characteristic of physical assemblies such as vehicles or buildings (Halper et al., 1998). In this paper we consider physical assemblies with their related textual documents.

2.1. Approaches to modeling and representing complex entities

There are two basic approaches to representing information in databases: the value-oriented and the object-oriented approach (Ullman, 1988). In the former, the values of some attributes are used for the identification of entities or relationships, i.e., these attributes act as their keys. In the object-oriented approach a unique identifier is assigned to each entity (called an object) and is used to refer to the entity. In addition to attributes, an entity may have functional properties. These functional properties are called methods and they are implemented as pieces of code. The relational model is based on the value-oriented approach. It does not support the representation of complex entities directly. In relational databases a complex entity is represented in several relations.
Thus the same values of attributes must be stored in several relations in order to maintain the semantic connections among data. The manipulation of a complex entity as a single unit therefore requires collecting data from several relations. The construction of complex entities with several hierarchy levels presupposes the specification of many relational joins, for which the user is responsible. Yet the result of a relational query is always a flat relation, which does not make the structure of a complex entity obvious. Therefore better ways of representing complex entities are needed. NF² relations, or non-first normal form relations, are also based on the value-oriented approach but support the complex entity notion (Roth et al., 1988). They allow relation-valued attributes, which may in turn contain relation-valued attributes, etc. Järvelin and Niemi (1995; 1999) review the use of NF² relations in the IR area. NF² relations make the structuring among the component entities of a complex entity explicit. From the perspective of complex entities the NF² relational model has two essential disadvantages. First, it does not contain operations for analyzing complex entity types. Second, it requires that each atomic-valued and relation-valued attribute have a unique name. However, the same component entity type may appear in several composite types of a complex type; e.g., bolts may belong to several parts of an airplane. From the semantic perspective, it is desirable to use the same entity type name wherever the entity type appears. Object-orientation has concentrated on modeling and manipulating the is-a relationship rather than complex entities (complex objects in object-oriented terminology) (Renguo et al., 2000). One indication of this is the expressed desire for object-oriented programming to support complex entities in the same way it supports the is-a relationship (Motschnig-Pitrik & Kaasböll, 1999).
Likewise, the indexing mechanisms of object-oriented databases mainly support the manipulation of the is-a and association relationships. An exception is the work by Renguo and others (2000), who developed indexing for complex entities. Several authors have observed that in object-orientation complex entities are often treated as a kind of association, although they usually require particular semantics and update mechanisms (see e.g. Motschnig-Pitrik & Kaasböll, 1999; Renguo et al., 2000). Neither can they be represented through ordinary attributes, because the distinction between a property and a component is lost in this case (Civello, 1993; Artale et al., 1996). Unfortunately, this is a very common practice in object-orientation (e.g. Cattel & Barry, 2000; Cluet, 1998; Hua & Tripathy, 1994). Often the attributes containing the identifiers of the objects are called complex attributes (Lee & Lee, 1998) or object-valued attributes (Pazzi, 1999). However, this kind of implementation makes the traversal of complex entities difficult in any order other than the established one. Although the popular object-oriented modeling language UML (Rumbaugh et al., 1999) distinguishes the modeling of complex entities from the modeling of other relationships, it falls short in modeling several essential details. It is important to specify all the constraints which complex entities must satisfy. Civello (1993) discusses constraints for their representation (see also Halper et al., 1998; Motschnig-Pitrik & Kaasböll, 1999). Likewise, the object-oriented modeling methods OMT (Rumbaugh et al., 1991) and UML do not deal with the inheritance of properties in complex objects. In the is-a relationship the inheritance mechanism is always downward, i.e., all properties of an entity type are also properties of its subentity types. The inheritance mechanism in complex entities may be both downward and upward (inheritance from a part to its composite entity).
For example, if a car obtains its color from the color of its body, then upward inheritance appears. If the date of an article is the date of the newspaper publishing it, we have downward inheritance. Different kinds of inheritance within complex entities are discussed in (Halper et al., 1993; 1998). Both constructor-oriented formalisms and description logics have been proposed for the exact representation of complex entities. The former emphasize structural aspects among entities. There are both value-oriented (Riet, 1998) and object-oriented (Bancilhon & Khoshafian, 1986) constructor-based formalisms. Description logics emphasize logic-based representation and reasoning. Description logic systems have been extended to represent complex entities (Rousset & Hors, 1996; Lambrix & Padgham, 2000).

2.2. Query languages for manipulating complex entities

In the current query languages proposed for the manipulation of complex entities the user must know what entities / entity types (s)he manipulates. In many applications (e.g. the processing of hierarchical documents) it is impossible to find one stable structure suitable for all user needs. Therefore the restructuring capability of query languages is necessary in these applications. Physical assemblies, however, tend to possess a stable structure but need greater analyzing power than contemporary query languages offer. Further, these languages are difficult to use, as argued below. There are several SQL-like language proposals based on the NF² relational model for manipulating complex entities. These languages have a value-oriented origin. Niemi and Järvelin (1995) give a thorough survey of these languages and analyze the difficulty of query formulation in them.
Typically, in addition to the conventional SQL specification, users are required to master both the semantics of the restructuring operations and the design of large nested expressions where restructuring expressions are embedded in conventional SQL expressions. Niemi and Järvelin (1995; Järvelin and Niemi, 1995; 1999) introduce a truly declarative query language minimizing end-user effort in query formulation. However, from the viewpoint of manipulating physical assemblies the proposed language has several disadvantages. For example, it lacks primitives for analyzing the intensional level of complex entities. It is important in the object-oriented approach that complex entities can be manipulated bidirectionally (see e.g. Halper et al., 1994), i.e. forward and backward traversal of complex entities is needed. In object-orientation two basic alternatives have been proposed for this. One is based on applying methods of component entity types (e.g. Halper et al., 1994). In these alternatives the user controls the use of methods through hierarchy levels of a complex entity, i.e. it requires programming skills. In the second alternative complex attributes are allowed. If a complex entity is implemented in a unidirectional way, then the expression of forward traversal is straightforward but the expression of backward traversal is troublesome and requires procedural thinking (Lee & Lee, 1998). When a complex entity is implemented bidirectionally, there is no difference between backward and forward traversals (e.g., Cattel & Barry, 2000). However, the synchronization of both traversals is very demanding (Lee & Lee, 1998). Although considerable progress in developing object-oriented SQL-like query languages has taken place they are not yet suitable for lay users. We discuss this by using OQL (Cluet, 1998), a typical object-oriented query language, as an example. 
Although the user need not master actual algorithmic programming (s)he must master many aspects of object-orientation such as object identity, class, attribute, method, inheritance, literal etc. In addition, in the OQL the user must combine iterators (e.g., select-from-where, grouping and sorting iterators) with each other. Therefore (s)he must also understand how a variable within an iterator will be instantiated with different entities until all entities belonging to the entity type to which the variable refers have been processed. In complex queries the user must nest iterators whereby there may be several instantiations of one variable for one instantiation of another variable. Thus the OQL user must think iteratively. In object-oriented query languages complex entities are typically constructed by applying available constructors like the set, list, tuple and tree constructors on atomic data types. Dar and Agrawal (1993) argue that constructors make query formulation quite complicated for lay users. This is particularly true when users must nest constructors within each other. QAL is a functional object-oriented query language supporting the manipulation of complex entities (Savnik et al., 1999). QAL supports typical database queries such as 'retrieve the values of selected attributes from any nesting levels of complex entities'. In addition, QAL is able to query the intensional level and to express the connection between the extensional and intensional levels as well. These are useful features in a query language intended for physical assemblies. However, QAL has three disadvantages. First, the QAL user must know the entities of interest and give path expressions leading to them. More powerful primitives are needed for analyzing complex structures because in many queries one needs to find entities / entity types which satisfy specific properties without knowing where they reside in a complex entity. Second, the user must master relevant constructors. 
Third, QAL does not contain any mechanism for combining text retrieval with the manipulation of complex entities.

2.3. Goals for an advanced query language for complex entities

Physical assemblies have their own special characteristics and needs. A physical assembly may consist of a huge number of components at several hierarchy levels. The same component type can reside in several different constructs. Therefore the management of structural aspects needs particular support. Next we describe two practical situations where such support is needed. First, assume that a user must change some parts of a complex entity based on their age, for maintenance reasons. It is likely that the user does not know where precisely such decaying components reside. Second, assume that the production process of a company is improved by changing some tools and methods. Now one should find all components affected by this change. If information on the tools and methods related to components has been stored in documents, one must manipulate complex entities and their documents at the same time. The examples above demonstrate that a query language for complex entities should support queries where the user cannot express which entities or entity types (s)he should manipulate. Likewise, primitives for manipulating complex entities and their documentation together are needed. Our purpose is to offer such a query language for the advanced manipulation of physical assemblies. This query language has the following goals:

- The degree of declarativity must be much higher than in contemporary query languages for complex entities. The user therefore need not master programming (e.g., iterative or recursive thinking), nor should the user need to specify complex nested expressions.
- The user need not apply constructors in queries.
- It is possible to express extensional, intensional and combined extensional-intensional queries.
The language must also contain primitives connecting the intensional and extensional levels in a straightforward way.

- The language must support both forward and backward traversal in complex entities. This support must be general, so that the user may refer to any component of a given entity / entity type at any hierarchy level.
- The query language must be able to process the textual documentation of complex entities. Therefore text retrieval must be integrated with the manipulation of complex entities.

3. A database for complex entities and related documents

Our system contains two components: a database component consisting of physical assemblies and a document database consisting of documents related to the entity types in the physical assemblies. First we consider how a physical assembly is represented as a complex entity, and next we describe how documents are associated with physical assemblies.

3.1. The representation of a complex entity

In the representation of a complex entity we have combined the advantages of the NF² relational and object-oriented approaches. Niemi and others (2002) discuss these advantages in detail. Figure 1 presents our sample database. It contains only three complex entities: one tricycle and two bicycle entities. It is a user-oriented view of the data, and we assume that its information content is self-explanatory except for the columns "oid", "W." and "M.". The "oid" columns express the identities of entities, of which the user need not be aware, whereas "W." and "M." are abbreviations for the attributes Weight and Material, respectively.

[Figure 1. The extensional level of the sample database.]

3.2. Documents related to complex entities

In many applications one needs to associate documents with the entity types in physical assemblies. A document is attached to each entity type and contains information common to all entities belonging to that type.
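In the spirit of NF² relations, a complex entity can be stored as a nested structure in which each component carries its own attributes and a list of subcomponents. A minimal sketch; the oids and attribute values below are illustrative placeholders, not the data of Figure 1:

```python
# One complex entity as a nested (NF²-style) structure: each node carries an
# oid, attributes (W. = Weight, M. = Material) and its immediate parts.
# All oids and values here are invented for illustration.
bicycle = {
    "oid": "b1", "type": "bicycle", "weight": 12.0, "material": "steel",
    "parts": [
        {"oid": "f1", "type": "frame", "weight": 3.0, "material": "steel",
         "parts": []},
        {"oid": "w1", "type": "wheel", "weight": 1.5, "material": "alloy",
         "parts": [{"oid": "s1", "type": "spoke", "weight": 0.01,
                    "material": "steel", "parts": []}]},
    ],
}

def components(entity):
    """Yield the entity itself and every nested component, at any level."""
    yield entity
    for part in entity["parts"]:
        yield from components(part)

print([e["type"] for e in components(bicycle)])
# -> ['bicycle', 'frame', 'wheel', 'spoke']
```

Because the nesting is explicit, the whole entity can be traversed as a single unit, without the relational joins discussed in Section 2.1.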
Therefore our document database contains one document for each entity type in the database. In the context of physical assemblies these documents can be, e.g., instructions for the assembly or service of parts. In our system the documents are represented as XML documents (http://www.w3.org/XML/). Note that in some applications documents could also be associated with entities, e.g., giving their use and maintenance history. Thus our sample document database contains XML documents for each entity type in TRICYCLE and BICYCLE. These documents have the same structure. The tools needed in disassembling an entity belonging to an entity type are listed between the tags `<tools>` and `</tools>`. The text identified by the tags `<disassembly_instructions>` describes the disassembly instructions for the entities of this entity type. The skills needed in the disassembly are indicated by the tags `<skill_requirements>`. Safety instructions for the disassembly are coded by the tags `<safety_requirements>`. The string nil indicates missing information. The Appendix presents the documents for BICYCLE. Sample query evaluations will be based on these documents.

4. Query language for manipulating complex entities

Users of contemporary query languages must know which entities they manipulate. They must also know exactly the structure of complex entities and specify navigation in this structure. An advanced query language should also support queries where the user does not know these aspects in detail. For that reason our query language offers primitives for analyzing both the intensional and extensional levels and for moving between them.

4.1. The notion of variable in query formulation

In our approach, users may refer to unknown factors in their queries. For example, entity types, their properties, entities, documents, subdocuments, and values of properties may be such unknown factors. Each variable in a query refers to some unknown factor.
A variable may be associated with a construct at the intensional or the extensional level. The role of query primitives is to express semantic associations among variables. This guarantees a declarative query language. Query processing is responsible for finding the values of variables satisfying the criteria given in the primitives. The notion of variable in our query language was borrowed from deductive databases (e.g., Liu, 1999). A variable starts with an uppercase letter, whereas constants are numbers or strings starting with a lowercase letter. Query primitives are connected by commas or semicolons, which indicate logical conjunctions and disjunctions, respectively. If the same variable appears in two or more query primitives it is a shared variable. Such a variable must be instantiated to the same value in all of these query primitives. From the user viewpoint, query formulation consists of combining query primitives containing variables.

4.2. The intensional primitives

The intensional level query primitives are as follows:

(1) Arg1 is_whole_type_of_type Arg2
(2) Arg1 is_part_type_of_type Arg2
(3) Arg is_top_type
(4) Arg is_basic_type
(5) Arg1 is_property_of Arg2
(6) Arg1 is_path_to Arg2

In these primitives, Arg, Arg1 and Arg2 are arguments that the user specifies when applying the primitives. The user may explicate an argument or refer to it by a variable. In the explicit specification the user gives a constant possibly found in some complex entity. Thus, for example, the names of known entity types and properties are expressed with constants. The primitive (1) expresses that Arg1 is an immediate or indirect composite entity type of the entity type Arg2. The primitive (2) is the opposite, i.e., Arg1 is an immediate or indirect component of the entity type Arg2. The primitives (3) and (4) are used to find an entity type (Arg) that has no composite or no component entity type in the complex entities considered, respectively.
The primitive (5) refers to any property (Arg1) of the entity type (Arg2). The primitive (6) forms a path Arg1 that starts from a top entity type and leads to the entity type Arg2. Let us assume that we apply the primitives to the database given in Figure 1. For example, a composite entity type for the entity type PEDALS may be found by the expression X is_whole_type_of_type pedals. This means that X may be instantiated to the values tricycle, steering, bicycle, and drivegear. The expression Z is_whole_type_of_type Y finds all pairs Z and Y where Y is an entity type and Z any of its (possibly indirect) composite entity types. The primitive B is_top_type instantiates the variable B to the values tricycle and bicycle. In the primitive P is_property_of frame, the possible instantiations for the variable P are frame_no, material, and weight. The primitive P is_property_of E refers to any property (P) of any entity type (E).

4.3. *The extensional primitives*

Our query language contains the following primitives for manipulating information at the extensional level:

(7) Arg is_basic_entity
(8) Arg is_top_entity
(9) Arg1 is_whole_entity_of_entity Arg2
(10) Arg1 is_part_entity_of_entity Arg2

The primitives (7), (8), (9) and (10) correspond to the primitives (4), (3), (1) and (2) of the intensional level. For example, assume that the variable Z has been instantiated to the entity with object identifier o16, i.e., it is an entity of type PEDALS in the complex entity BICYCLE. Now the variable Y in the primitive Y is_whole_entity_of_entity Z may be instantiated to the entities with identifiers o28 (DRIVE GEAR) and o37 (BICYCLE).

4.4. *The primitives for connecting the intensional and extensional levels*

It is important to be able to analyze the structure of complex physical assemblies and to manipulate data based on this analysis.
The following primitives for connecting the intensional and extensional levels are therefore needed:

(11) Arg1 : Arg2
(12) Arg1 is_instance_of Arg2
(13) Arg1 is_whole_entity_of_type Arg2
(14) Arg1 is_part_entity_of_type Arg2

The primitive (11) refers to the value of a property (Arg2) of an entity (Arg1), i.e., Arg1 is at the extensional level whereas Arg2 belongs to the intensional level. Assume that the variable Y has been instantiated to the entity with object identifier o15 (of type CHAIN RING). Now the primitive Y:diam means the value 4. The primitive (12) can be used to refer to any entity (Arg1) belonging to the entity type given in Arg2. For example, in the primitive Z is_instance_of drivegear the variable can be instantiated to the entities with object identifiers o28 and o33. The primitives (13) and (14) refer to a composite and a component entity (Arg1) of the entity type given in Arg2, respectively.

4.5. *The primitives for integrating complex entities with documents*

Present query languages for manipulating complex entities do not offer mechanisms for formulating queries involving information in text documents describing entity types. Our query language contains primitives for manipulating structured text documents together with the related complex entities:

(15) Arg1 is_doc_of Arg2
(16) Arg1 is_sub_doc Arg2
(17) Arg1 contains Arg2

The primitive (15) expresses that Arg1 is the document attached to the entity type Arg2. For example, in the primitive X is_doc_of frame the variable X denotes the document attached to the entity type frame (see Appendix). Our text documents are structured into four parts by the tags <tools>, <disassembly_instructions>, <skill_requirements> and <safety_requirements>. Each part can be seen as a subdocument. In the primitive (16) the argument Arg2 has the form [Tag, Doc_name].
This primitive assigns to Arg1 the subdocument that is structured by Tag in the document Doc_name. The primitive (17) expresses that the string Arg2 is included in the document or subdocument Arg1. A string is expressed between apostrophes.

5. Sample queries

In our query language a query has the following structure: <form of result> where <primitive sequence>. The construct <form of result> expresses the content of the result. It has the form res(x1, x2, …, xn) where res is the name selected for the result. The components x1, x2, …, xn are the columns of the result. The string where is a reserved word separating the form from the conditions the result must satisfy. In the construct <primitive sequence> primitives are connected by conjunction (comma) or disjunction (semicolon). Typically the components x1, x2, …, xn of the result and the primitives in <primitive sequence> contain several shared variables. Query processing finds the variable instantiations in x1, x2, …, xn which satisfy the criteria given. The database component may contain several complex entities. A background assumption is that the primitives are applied to all complex entities in it. If the user wants to apply the primitives only to some complex entities (s)he can restrict them by the expression apply_to [e1, …, en]. Here the list [e1, …, en] expresses the scope of the primitives. For example, the expression apply_to [bicycle] means that the primitives are applied only within the entity BICYCLE. This expression is a part of <primitive sequence>.

5.1. Extensional queries

Most present query languages support only extensional queries, where the result consists only of extensional level data. The background assumption of extensional queries is that the user knows exactly what information (s)he needs from complex entities. Unlike in the present approaches, in our language the user need not specify paths in complex entities.
In Sample Query 1 the user wants to know the prices and weights of bicycles and the frame numbers of their frames. We number the lines of our queries in order to indicate their parts.

*Sample Query 1*

(1) bicycle_info(Bi:weight, Bi:price, Fr:frame_no) where
(2) Bi is_instance_of bicycle,
(3) Fr is_instance_of frame,
(4) Bi is_whole_entity_of_entity Fr.

On line (1) the user expresses the form of the result based on the variables used in the *where*-part. The variable Bi denotes an instance of the bicycle type (see (2)) whereas the variable Fr (see (3)) refers to any entity of type frame. In (4) (s)he specifies that the specific frame Fr must be a part of the bicycle Bi. Figure 2 gives the query result based on the information in Figure 1.

<table>
<thead>
<tr>
<th>bicycle_info</th>
<th>weight</th>
<th>price</th>
<th>frame_no</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>16.2</td>
<td>400</td>
<td>43285</td>
</tr>
<tr>
<td></td>
<td>18.2</td>
<td>500</td>
<td>8265</td>
</tr>
</tbody>
</table>

Fig. 2. The result of Sample Query 1.

Sample Query 1 demonstrates how a query can be specified simply although the result contains information from several hierarchy levels of complex entities. Selection conditions can also be specified simply in the *where*-part of our query language.

### 5.2. Intensional queries

In intensional queries the result contains only intensional information. Conventional query languages do not support intensional queries. Intensional queries are necessary in analyzing the structure of complex entities. Our query language offers powerful primitives for structural analysis. These primitives allow the formulation of queries without knowing the structure of complex entities exactly. Sample Query 2 demonstrates this. Assume that the user is interested in the physical assembly TRICYCLE and (s)he wants to know which basic component types (i.e., component types that have no parts) are associated with the entity type REAR.
In addition (s)he wants to know the properties of these basic component types.

*Sample Query 2*

(1) basic_info(Btype, Prop) where
(2) apply_to [tricycle],
(3) Btype is_basic_type,
(4) rear is_whole_type_of_type Btype,
(5) Prop is_property_of Btype.

Through (2) and (3) Btype stands for any basic component type in the complex entity TRICYCLE. Line (4) specifies that any such basic component type must be an immediate or indirect part of the entity type REAR. The variable Prop stands for any property of the basic component type. The result is in Figure 3.

<table>
<thead>
<tr>
<th>basic_info</th>
<th>Btype</th>
<th>Prop</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>rear_axle</td>
<td>diam</td>
</tr>
<tr>
<td></td>
<td>rear_axle</td>
<td>weight</td>
</tr>
<tr>
<td></td>
<td>wheel</td>
<td>diam</td>
</tr>
<tr>
<td></td>
<td>wheel</td>
<td>r_type</td>
</tr>
<tr>
<td></td>
<td>wheel</td>
<td>weight</td>
</tr>
</tbody>
</table>

Fig. 3. The result of Sample Query 2.

The result of Sample Query 2 is produced by manipulating only information at the intensional level. Sometimes intensional queries may also require the manipulation of the extensional level. There is often a need to find component types of complex entities that contain specific values. For example, if some material (at the extensional level) has been found hazardous, it would be useful to find the entity types that contain this material. The user may want to find, for example, all component types of BICYCLE which may contain plastic. Because of space limitations we leave the formulation of this query as an exercise.

5.3. Combined extensional-intensional queries

Queries producing both extensional and intensional information are combined extensional-intensional queries. Such queries are usual for physical assemblies when the user wants to know both the result of a structural analysis and the information related to the corresponding entities.
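For concreteness, one possible formulation of the exercise mentioned at the end of Section 5.2 (finding the component types of BICYCLE that may contain plastic) is sketched below in the notation of this paper. The equality test on a property value in line (5) is our assumed syntax for a selection condition; the paper states only that selection conditions can be expressed in the *where*-part.

*Exercise Query (a sketch)*

(1) plastic_types(Ctype) where
(2) apply_to [bicycle],
(3) Ctype is_part_type_of_type bicycle,
(4) Inst is_instance_of Ctype,
(5) Inst:material = plastic.

Here the shared variable Ctype is instantiated to a component entity type of BICYCLE only if some entity Inst of that type has the value plastic for the property material.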
In Sample Query 3 the user wants to find all component entity types of the entity type DRIVE GEAR of bicycles and the corresponding component entities. In the result (s)he is interested only in the values of the properties Diameter and Weight of the components.

*Sample Query 3*

(1) result(Comp, Inst:diam, Inst:weight) where
(2) apply_to [bicycle],
(3) Comp is_part_type_of_type drivegear,
(4) Inst is_instance_of Comp.

In (3) the variable Comp stands for any component entity type of DRIVE GEAR. In (4) the variable Inst is instantiated to an entity belonging to the entity type represented by Comp. The result is in Figure 4. The entity type CHAIN does not have the property Diam (see Figure 1) and thus this property has the value 'null'.

<table>
<thead>
<tr>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Comp</td>
</tr>
<tr>
<td>chain</td>
</tr>
<tr>
<td>chainring</td>
</tr>
<tr>
<td>chainring</td>
</tr>
<tr>
<td>chainring</td>
</tr>
<tr>
<td>pedals</td>
</tr>
<tr>
<td>pedals</td>
</tr>
</tbody>
</table>

Fig. 4. The result of Sample Query 3.

5.4. Queries for integrating complex entities with their documents

Integration of complex entities and their documentation is needed in two query types. There are queries that find information only in complex entities but use documents in the selection of this information. Moreover, there are queries that find documents attached to entity types. Sample Query 4 and Sample Query 5 demonstrate these two query types. The information in the Appendix is needed in their evaluation. Sample Query 4 finds those component entity types of BICYCLE whose disassembly requires tongs or gloves.

**Sample Query 4**

(1) result(Part) where
(2) Part is_part_type_of_type bicycle,
(3) Part_Doc is_doc_of Part, Tools is_sub_doc [tools, Part_Doc],
(4) Safe is_sub_doc [safety_requirements, Part_Doc],
(5) (Tools contains 'tongs'; Safe contains 'gloves').

The variable Part_Doc refers to the document attached to any component type of BICYCLE.
The variables Tools and Safe stand for its subdocuments. They contain information on tools and safety aspects, respectively. In (5) we test that either the subdocument on tools contains the string 'tongs' or the subdocument on safety instructions contains the string 'gloves'. The result of this intensional query is in Figure 5.

<table>
<thead>
<tr>
<th>result</th>
<th>Part</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>drivegear</td>
</tr>
<tr>
<td></td>
<td>steering</td>
</tr>
<tr>
<td></td>
<td>chain</td>
</tr>
<tr>
<td></td>
<td>chainring</td>
</tr>
<tr>
<td></td>
<td>front_axle</td>
</tr>
</tbody>
</table>

Fig. 5. The result of Sample Query 4.

Sample Query 5 finds the composite entity types containing as immediate or indirect parts both the entity types CHAIN and PEDALS. In addition, it retrieves the subdocuments of these entity types containing the disassembly instructions.

**Sample Query 5**

(1) type_subdoc(Type, SubDoc) where
(2) Type is_whole_type_of_type chain,
(3) Type is_whole_type_of_type pedals,
(4) Doc is_doc_of Type,
(5) SubDoc is_sub_doc [disassembly_instructions, Doc].

By using the shared variable Type in (2) and (3) we specify that Type is a common immediate or indirect composite entity type for CHAIN and PEDALS. The variable SubDoc is instantiated to the subdocument of each entity type of this kind which provides the disassembly instructions. The result is in Figure 6.

<table>
<thead>
<tr>
<th>type_subdoc</th>
<th>Type</th>
<th>SubDoc</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>bicycle</td>
<td>The saddle is separated as follows: Loosen the screw using an adjustable wrench and draw the saddle from the frame. If the saddle does not move then hit lightly the downward side of the saddle by a hammer. When separating the steering it has to be partially disassembled. However, when separating drive gear it has to be fully disassembled. See their disassembly instructions.</td>
</tr>
<tr>
<td></td>
<td>drivegear</td>
<td>First, the rear wheel has to be released using an adjustable wrench. Next, the chain is separated.
The chain ring in the wheel is loosened using gear wrench. When separating pedals and the front chain ring the boss must be disassembled.</td> </tr> </tbody> </table> Fig. 6. The result of Sample Query 5. 6. Discussion In contemporary database query languages queries are represented mainly with intensional elements, which produce answers composed entirely of extensional information. This also applies to most query languages intended for manipulating complex entities. However, increasing attention has recently been paid to the possibility of supporting intensional queries. Intensional queries increase the expressive power and offer more intelligent query languages (Motro, 1994). In this paper we deal with complex entities called physical assemblies. Large physical assemblies may consist of a large number (possibly thousands) of parts. In practice the users are not able to master such structures in detail. However, most query languages expect the user to know exactly the structure of complex entities in query formulation. Through intensional queries and intensional query primitives of our query language one may refer to unspecified structural information and manipulate it in physical assemblies. This requires the capability of analyzing structural aspects of physical assemblies and this kind of mechanism is necessary in any advanced query language for managing the complexity of physical assemblies. Our sample queries showed that the primitives of our query language remove the explicit specification of navigation from the user. We have presented several sample queries showing that extensional, intensional and combined extensional-intensional queries are needed in the context of physical assemblies. In order to support queries of different types our query language contains primitives connecting the intensional and extensional levels. 
Through these primitives the user can easily transfer data manipulation from the extensional level to the intensional level and vice versa. For example, the user may first analyze the entity types satisfying given structural criteria and then manipulate the entities belonging to these entity types. Our sample queries showed that often both forward and backward traversal in complex entities is needed. Such traversals may occur both at the extensional and the intensional level. In contemporary object-oriented approaches the integration of forward and backward traversal is very troublesome (Lee & Lee, 1998). Although our approach is also object-oriented, forward and backward traversal is very straightforward. This is because our representation of physical assemblies combines the strengths of the value-oriented and object-oriented representations. As in NF² relations, we store subentities in the entities that contain them. Therefore forward/backward traversal among subentities is based on nesting. In other words, link manipulation, common in object-oriented approaches, is unnecessary. As Junkkari (2001) has shown, our indexing mechanism enables one to find all component or composite entities or entity types related to a specific entity or entity type. Physical assemblies often have several instructions associated with their parts, such as assembly, disassembly and service instructions. Such information is usually represented as text documents. As our sample queries exemplified, information from both complex entities and their related documents is often needed. Our query language contains primitives for referring to a document or subdocument related to a specific entity type. Further, there is a primitive for investigating the content of text documents. Extensions to more comprehensive IR primitives are straightforward. By using shared variables in query components the user can express semantically related information intuitively and compactly.
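To illustrate the nesting-based traversal discussed above, the following sketch (our own illustration, not the paper's Prolog++ implementation; the dict-based representation and entity names are assumptions for the example) stores subentities inside their containing entities, so forward (whole to parts) and backward (part to wholes) traversal are both simple recursive walks with no link manipulation:

```python
# Illustrative sketch: NF²-style nesting of subentities inside their
# containing entity. Names follow the bicycle example of the paper;
# the representation itself is our own simplification.

def parts_of(entity):
    """Forward traversal: all immediate or indirect component entities."""
    for child in entity.get("parts", []):
        yield child
        yield from parts_of(child)

def wholes_of(root, target, ancestors=()):
    """Backward traversal: all composite entities of `target` within `root`."""
    found = []
    if root is target:
        found.extend(ancestors)
    for child in root.get("parts", []):
        found.extend(wholes_of(child, target, ancestors + (root,)))
    return found

pedals = {"type": "pedals"}
drivegear = {"type": "drivegear", "parts": [pedals]}
bicycle = {"type": "bicycle", "parts": [drivegear]}
```

Because containment is represented by nesting, `wholes_of` needs no stored back-links: the ancestors are accumulated during the same recursive descent that implements `parts_of`.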
There are several query language proposals for complex entities. Some have prototype implementations. A prototype implementation of the present language was programmed in Prolog++ (Moss, 1994), which combines logic programming and object-oriented programming into one homogeneous deductive object-oriented programming framework. This kind of framework was ideal for the implementation of our query language because, in addition to object-oriented features, it supports the notion of variable used in our query language.

7. Conclusions

A powerful and declarative query language is needed for manipulating complex entities, especially physical assemblies. In contemporary query language approaches the user needs to know exactly the structure and content of complex entities. For example, the user is expected to specify the navigation paths leading to the data of interest. This is an unrealistic assumption for physical assemblies that consist of a large number of parts. Therefore the primitives of our query language were designed so that the user can manipulate data and structures which (s)he does not know in advance. For example, (s)he may easily refer to any composite or component entity / entity type of a specific entity / entity type. This feature considerably facilitates the specification of forward and backward traversal in complex entities in comparison to other languages. Our language contains primitives of three kinds. In addition to extensional and intensional query primitives, it contains primitives for transferring data manipulation from the extensional level to the intensional level and vice versa. These primitives support the formulation of intensional and combined extensional-intensional queries, in addition to conventional extensional queries. Our sample queries demonstrated that this support has great practical significance.
Unlike contemporary query languages, our query language also contains primitives for manipulating structured text documents related to parts of physical assemblies. The possibility to use information in documents increases the expressive power of our language. This feature is necessary in many applications of physical assemblies. We borrowed the notion of variable for our query language from deductive databases and showed that this makes query formulation intuitive and compact. Acknowledgement This research was supported by the Academy of Finland under the grant number 52894. References APPENDIX The sample document database consists of the XML documents related to entity types in TRICYCLE and BICYCLE. Only the latter document is given below. Here […] indicates suppressed components. Doc. of BICYCLE: <doc> <tools> screwdriver, adjustable wrench, cotter pin extractors, tongs, hammer </tools> <disassembly_instructions> The saddle is separated as follows: Loosen the screw using an adjustable wrench and draw the saddle from the frame. If the saddle does not move then hit lightly the downward side of the saddle by a hammer. When separating the steering it has to be partially disassembled. However, when separating drive gear it has to be fully disassembled. See their disassembly instructions. </disassembly_instructions> <skill_requirements> High technical skills </skill_requirements> <safety_requirements> Protective gloves are required in disassembling the drive gear. </safety_requirements> </doc> Doc. of DRIVE GEAR: <doc> <tools> gear wrench, adjustable wrench </tools> <disassembly_instructions> First, the rear wheel has to be released using an adjustable wrench. Next, the chain is separated. The chain ring in the wheel is loosened using gear wrench. When separating pedals and the front chain ring the boss must be disassembled. </disassembly_instructions> <skill_requirements> Releasing the chain requires low technical skills. 
Separating the pedals and chain rings requires the skills of a cycle mechanic. </skill_requirements> <safety_requirements> Protective gloves are required in disassembling the chain and chain rings. </safety_requirements> </doc> Doc. of STEERING: <doc> <tools> adjustable wrench, cotter pin extractors, tongs </tools> <disassembly_instructions> The disassembly of the steering starts by separating the wheel from the fork of the bicycle. For this the nuts must be loosened using an adjustable wrench. The handlebars are separated by turning the bolt on top of the holder with an adjustable wrench. The axle is separated by turning the guard using tongs. After that the cotter must be disconnected using cotter pin extractors. Finally the axle is drawn out from the shaft tunnel. </disassembly_instructions> <skill_requirements> Low technical skills. </skill_requirements> </doc> Doc. of FRAME: [...] Doc. of CHAIN: <doc> <tools> nil </tools> <disassembly_instructions> nil </disassembly_instructions> <skill_requirements> nil </skill_requirements> </doc> Doc. of CHAIN RING: <doc> <tools> nil </tools> <disassembly_instructions> nil </disassembly_instructions> <skill_requirements> nil </skill_requirements> <safety_requirements> Protective gloves are required. </safety_requirements> </doc> Doc. of SADDLE: [...]
Defining a Knowledge Graph Development Process Through a Systematic Review

Tamašauskaitė, G.; Groth, P.
DOI: 10.1145/3522586
Publication date: 2023
Document Version: Final published version
Published in: ACM Transactions on Software Engineering and Methodology
License: CC BY

Defining a Knowledge Graph Development Process Through a Systematic Review

GYTĖ TAMAŠAUSKAITĖ and PAUL GROTH, University of Amsterdam, Netherlands

Knowledge graphs are widely used in industry and studied within the academic community. However, the models applied in the development of knowledge graphs vary. Analysing and providing a synthesis of the commonly used approaches to knowledge graph development would provide researchers and practitioners a better understanding of the overall process and methods involved. Hence, this article aims at defining the overall process of knowledge graph development and its key constituent steps. For this purpose, a systematic review and a conceptual analysis of the literature was conducted. The resulting process was compared to case studies to evaluate its applicability. The proposed process suggests a unified approach and provides guidance for both researchers and practitioners when constructing and managing knowledge graphs.

CCS Concepts: • Software and its engineering → Software development process management; • Computing methodologies → Ontology engineering; Semantic networks; • Information systems → Semantic web description languages; Information integration;

Additional Key Words and Phrases: Knowledge graphs, knowledge graph construction, development process, semantic network, information integration

1 INTRODUCTION

Knowledge graphs—graph-structured knowledge bases [57]—are widely employed to represent structured knowledge and perform a variety of AI driven tasks in the context of diverse, dynamic, and large-scale data [32, 87].
Given this increasing adoption, there is a need for guidance on knowledge graph development that would assist researchers, developers, and engineers in the process of creating and maintaining knowledge graphs [9]. While there are descriptions of methods for knowledge graph development [37, 80] that outline the necessary steps to take in order to develop a knowledge graph, these methods vary per article and there is a lack of a global view of the development of these software artifacts. While generally applicable development processes exist in such areas as software development [3], ontology construction [26], and knowledge engineering [64], it is unclear to what extent these existing theories can be directly applied to knowledge graph development, due to the complex combination of data and software used for their construction. Indeed, from a software engineering perspective, the development of knowledge graphs requires understanding of the structure of the graph, the relationships between the nodes, and the processes involved in maintaining the graph.

Authors' address: G. Tamašauskaitė and P. Groth, University of Amsterdam, Faculty of Science, Postbus 94323, 1090 GH, Amsterdam, Netherlands; emails: gyte.tama@gmail.com, p.t.groth@uva.nl. This work is licensed under a Creative Commons Attribution International 4.0 License. © 2023 Copyright held by the owner/author(s). 1049-331X/2023/02-ART27 https://doi.org/10.1145/3522586 ACM Transactions on Software Engineering and Methodology, Vol. 32, No. 1, Article 27. Publication date: February 2023.

Thus, considering the growth of knowledge graphs and the lack of a global process view of their development, this article focuses on formulating key process steps when managing the construction and maintenance of knowledge graphs. Specifically, this article contributes a synthesis of common steps in knowledge graph development described in the academic literature.
The aim is to provide guidance for both academia and industry in planning and managing the process of knowledge graph development. Moreover, we hope this analysis can provide for a better understanding of how other development lifecycles can be applied to knowledge graphs. This article is structured as follows: Section 2 covers related work in the area of knowledge graphs. Then, the methodology behind the systematic review is presented in Section 3. This is followed by the results of the review, in Section 4, which describe the proposed knowledge graph development process, its steps, and how they interrelate. The process is assessed by mapping the proposed steps to the case studies in Section 5. Finally, Section 6 discusses the strengths and limitations of the research. Section 7 outlines the main findings and future work.

2 RELATED WORK

This section presents knowledge graphs, trends in their development, and development practices more broadly.

2.1 Knowledge Graphs

The term “knowledge graph” was first used in 1972; however, it became widely adopted after 2012, following the announcement of the Google Knowledge Graph [1, 29]. This event also led to the growth of the development and use of knowledge graphs in industry [27, 32, 58]. The term “knowledge graph” can be defined as “a graph of data intended to accumulate and convey knowledge of the real world, whose nodes represent entities of interest and whose edges represent relations between these entities” [32]. Thus, knowledge graphs are structured to represent facts that cover entities, relations, and semantic descriptions [37]. Knowledge graphs can be formally defined as a directed graph \(G = (V, E)\) [2], where \(V\) is the set of vertices (nodes) that represent the real-world entities and \(E\) is the set of edges (links) between the nodes that represent the relations between the entities.
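The formal definition above can be sketched in a few lines of plain Python; the entity and relation names below are invented for illustration:

```python
# A tiny knowledge graph G = (V, E): vertices are entities,
# labelled directed edges are (subject, predicate, object) triples.
triples = {
    ("Amsterdam", "capital_of", "Netherlands"),
    ("Amsterdam", "instance_of", "City"),
    ("Netherlands", "instance_of", "Country"),
}

# V: every entity appearing as a subject or an object.
vertices = {s for s, _, _ in triples} | {o for _, _, o in triples}

# E: the labelled edges between those entities.
edges = {(s, o, p) for s, p, o in triples}
```

This also previews the triple representation discussed next: each labelled edge of \(G\) is exactly one (subject, predicate, object) statement.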
Commonly, entities and their relations are presented as triples (subject, predicate, and object) [2] and in graph form (see Figure 1). Knowledge graphs are used for multiple tasks, including search and querying (e.g., Google, Bing), serving as a semantic database (e.g., Wikidata), and big data analytics (e.g., Walmart) [87].

Fig. 1. Entities and relations in a knowledge graph [37].

In practice, the literature distinguishes between two types of knowledge graphs—generic knowledge graphs and domain-specific knowledge graphs [2]. The first type provides access to multiple domains, commonly with encyclopedic content, e.g., Wikidata [71], YAGO [68], and DBpedia [7, 44]. The second type is focused on a narrower domain, often for a specific problem or industry [2]. In this article, both types are included in the analysis to ensure a broad overview of the field.

2.2 Trends in Knowledge Graph Development

Knowledge graph development is commonly categorised into two types, either as top-down or bottom-up [2, 23, 45, 92]. The top-down approach refers to when the ontology (or data schema) is defined first and, based on the ontology, knowledge is extracted [45]. The bottom-up approach refers to when the knowledge is extracted from data and, based on the data, the ontology of the knowledge graph is defined [45]. Current research presents multiple instances of how knowledge graphs can be developed [37, 80, 87, 88]. However, it commonly focuses on state-of-the-art techniques (e.g., machine learning and other advanced algorithms) that can be used in the development of knowledge graphs rather than the overall process of knowledge graph development. For example, the techniques discussed in one study [87] include data extraction from various sources, harvesting relations between entities, building rules and inference, as well as storage and management of the knowledge graph.
In another study [80], the techniques are grouped differently—knowledge integration, entity discovery and typing, entity canonicalisation, construction of attributes and relationships, open schema construction, and knowledge base curation. Yet another study [88] focuses on the techniques of structured knowledge extraction, classification and non-classification relationship extraction, and graph optimisation. Thus, there are both different approaches to, as well as different vocabularies used with respect to, knowledge graph development. Therefore, this article focuses on reviewing different knowledge graph development processes presented in the literature. It contributes to the field by providing a summary of how knowledge graphs are being constructed as well as a synthesized description of the process.

2.3 Applicability of Existing Development Processes

Similar processes of development are described in other areas of computer science, for example, in software engineering or ontology construction. In software engineering, there are several development life cycles, e.g., waterfall, V-model, incremental, iterative, and spiral [3]. While, in general, these life cycles could be applied when developing a knowledge graph, it is not known to what extent they could cover the specific requirements of knowledge graph development. In ontology construction, there are also several approaches, such as the Cyc method, Uschold and King’s method, Grüninger and Fox’s methodology, the KACTUS approach, METHONTOLOGY, and others [26]. Ontologies and knowledge graphs have similarities, though ontologies primarily focus on capturing the knowledge models (i.e., data models), while knowledge graphs primarily focus on capturing the large amounts of data itself [63]. Additionally, ontology construction is commonly seen as one of the steps in knowledge graph development [23, 92].
Thus, it is not apparent whether ontology construction methodologies are fully suitable for knowledge graph development. While it is useful to understand these existing approaches, it is also beneficial to take into account the specificity of knowledge graph development. Understanding how knowledge graphs are developed allows for better insights into how these existing approaches can be applied.

3 METHODOLOGY

To understand the overall process of knowledge graph development, we conducted a systematic review of the literature to understand the key process steps—identifying, describing, and integrating these concepts. To evaluate the applicability of the process, we compared it to real-world case studies. The methodology was designed based on the principles for systematic reviews in software engineering [43] and the main phases of the conceptual framework analysis [35]. The following sections present the details of how the data collection and data analysis were conducted as well as the evaluation approach.

3.1 Data Collection

As a basis for this article, relevant and recent research articles were collected and analysed. The overall flow of selecting articles for the systematic review is presented in Figure 2 as a PRISMA workflow [60]. The data collection and screening were performed by a single author, with checks in terms of protocol conducted by the other author.

Data sources. Articles were collected from eight well-established online data sources for academic research (ACM Digital Library, IEEE Xplore, ScienceDirect, arXiv, SpringerLink, Zeta AI Research Navigator, Semantic Scholar, and Google Scholar) within the period of March–April 2021. The majority of the sources are recommended in particular when performing software engineering reviews [43].

Inclusion and exclusion criteria. For the search, two keywords were used: knowledge graph development and knowledge graph construction.
Only articles from 2012 onward were considered, as the growth of the topic started in 2012 with the announcement of the Google Knowledge Graph [32]. Only articles in English were reviewed. The 50 most relevant articles per source (as determined by the data source’s ranking) were screened, setting a threshold for prioritising the review of articles due to the large number of identified articles and the decreasing relevancy of search results [59]. First, the title and abstract were screened to determine whether the article covers knowledge graph development. Then, the content of the article was skimmed to assess whether it covers explicit process steps. If the article met these criteria, it was added to the reference management system for further analysis. Considering that the articles were chosen from credible sources and that the articles focused on knowledge graphs as a result, rather than reflecting on the development process, the evaluation of the experimental results of the articles was not performed.

Search outcome. Overall, 57 articles were selected for the analysis, ranging from 2016 to 2021\(^1\) (the full list is in Appendix A); this focused time period ensures that the relevant articles are covered. The distribution of the year of publication is presented in Table 1. The majority of the articles covered the development of domain-specific knowledge graphs (Table 1). These articles focus on presenting knowledge graphs built for a specific purpose and what techniques were used for their development. Another type of article was categorised as methodological, presenting a more theoretical overview of knowledge graphs and development methods. Furthermore, the majority of articles covered the bottom-up knowledge graph development approach (Table 1).
Although the majority of articles do not indicate the type of development approach used, the distinction was made by determining whether ontology development was performed as the first step of knowledge graph development. After having selected the articles, the required data was extracted from the articles and analysed in multiple iterations. As a first iteration, the type of the article, the type of the knowledge graph development process, and the process itself were written down. Then, an extensive list of the process’ steps was compiled.

\(^1\)Note that during screening no relevant articles from 2012 to 2016 were identified and, thus, none were included in this review. While there are a number of articles on knowledge graphs in 2012–2016, the main focus of these articles is on technological or theoretical analysis of knowledge instead of presenting the development process.

The processes were of different granularity, some including the algorithms and techniques used in the knowledge graph construction as steps, while others only indicated the main phases. The process steps were written out in three different levels, specifying the more generic steps and what they consist of (see Figure 3). Level I steps provide a more generic description of the step, Level II steps divide Level I tasks into smaller stages, while Level III steps are specific and focus on describing the algorithms and techniques used.

3.2 Data Analysis

The overall data analysis workflow is presented in Figure 4, which describes how steps of knowledge graph development were extracted and processed. Initially, a total of 620 steps of all levels were indicated, of which 519 steps were unique. However, some steps were synonymous with each other; thus, the list was manually amended by changing similar tasks to the same expressions, e.g., relationship extraction was changed to relation extraction; data, data input, and similar were changed to data source.
After adjusting the synonyms, there were 414 unique values in the final list, of which 182 were of Level I, 196 of Level II, and 60 of Level III. The Level III steps were specific, indicating the algorithms and techniques used, and thus were not considered in further analysis. The full list of process steps is available in a dataset repository.\(^2\) The frequency of each step was counted to determine the most common steps in knowledge graph development. This was used as guidance in formulating the general process steps. Additionally, the process figures were extracted from each article, which allowed analysis of how the process is presented visually (Appendix C and dataset repository). Using information about frequent steps and the visually presented processes, the first draft of the knowledge graph development process was prepared. Then, having these steps, each article was reviewed again in order to record the relevant data per each indicated process step. Finally, using the described steps of the knowledge graph development process and the visual representations of the processes, the final proposed process was developed; it is described in more detail in the following section.

3.3 Evaluation through Case Studies

In order to evaluate the applicability and generalisability of the proposed knowledge graph development process, a comparison to case studies was carried out. The proposed process was compared and mapped to real-life knowledge graphs and how they are constructed and maintained. The evaluation covers the comparison of two types of knowledge graphs—generic open knowledge graphs and domain-specific knowledge graphs. As a result, this evaluation provided insights into the extent to which the proposed process is suitable and relevant to real-life examples, as well as possible areas for future work with respect to development lifecycles.
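The synonym-merging and frequency-counting procedure described above can be sketched with the standard library; the step names and the synonym map below are illustrative stand-ins, not the authors' actual list:

```python
from collections import Counter

# Illustrative raw Level I/II step names pulled from several articles.
raw_steps = [
    "relationship extraction", "relation extraction", "data input",
    "data source", "entity extraction", "relation extraction", "data",
]

# Map synonymous step names to one canonical expression,
# mirroring the manual amendments described in the text.
synonyms = {
    "relationship extraction": "relation extraction",
    "data": "data source",
    "data input": "data source",
}

normalised = [synonyms.get(step, step) for step in raw_steps]

# Step frequencies then guide which steps enter the process draft.
frequency = Counter(normalised)
```

`frequency.most_common()` then surfaces the steps that appear most often across articles, which is the signal used when drafting the general process.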
4 RESULTS

The knowledge graph development process based on the review and analysis of the selected articles is presented in Figure 5. The process consists of six main steps: (i) Identify data, (ii) Construct the knowledge graph ontology, (iii) Extract knowledge, (iv) Process knowledge, (v) Construct the knowledge graph, and (vi) Maintain the knowledge graph. The process incorporates both top-down and bottom-up approaches. Each step and its sub-steps are described in the following sections.

4.1 Identify Data

The objective of this step is to identify a domain of interest, a data source, and a way of data acquisition. As mentioned before, knowledge graphs can either be generic or domain-specific [2, 32]. Usually, generic knowledge graphs cover multiple domains and are publicly available, while domain-specific knowledge graphs address a specific domain or problem and are commonly used in organisations for their operations. Defining the domain of the knowledge graph allows for better identification of data sources and helps determine how data can be extracted later [87]. The domain can be as broad or narrow as needed, e.g., education [4, 6, 13, 16, 18, 67, 69, 90], healthcare [34, 47, 52, 78], social media [48, 66], and so on. Having chosen the domain, it is important to identify the data sources, as this influences the overall knowledge graph development process as well as the choice of knowledge extraction techniques. In general, data can be either structured, semi-structured, or unstructured and can be extracted from multiple sources. Structured data is a type of data that has an explicit structure, e.g., data in tables or relational databases [82]. Semi-structured data has a certain structure, but it is not strict, e.g., XML data [82]. Unstructured data do not have a predefined structure, e.g., text [82].

\(^2\)https://zenodo.org/record/5608878.
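As an illustration of the semi-structured case, the following stdlib sketch pulls records out of an XML fragment; the element names and values are invented, and real sources would be larger and messier:

```python
import xml.etree.ElementTree as ET

# A small semi-structured source: tagged, but with an optional field.
xml_data = """
<courses>
  <course id="c1"><title>Databases</title><lecturer>A. Smith</lecturer></course>
  <course id="c2"><title>Logic</title></course>
</courses>
"""

records = []
for course in ET.fromstring(xml_data).findall("course"):
    records.append({
        "id": course.get("id"),
        "title": course.findtext("title"),
        # Semi-structured: the lecturer element may be missing entirely.
        "lecturer": course.findtext("lecturer"),
    })
```

The optional `lecturer` element is the point: the data has structure, but the structure is not strictly enforced, which is what separates semi-structured from structured sources.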
For instance, data can be acquired from an online encyclopedia such as Wikipedia (e.g., [28, 83]), a structured database (e.g., [16]), semi-structured documents (e.g., [90]), unstructured text (e.g., [21]), or a mix of several data sources (e.g., [86]). Finally, the data acquisition methods are chosen based on the type of data and data source. Web resources can be acquired using web crawlers (e.g., [14]), databases can be harvested using data mining techniques (e.g., [85]), and files can be downloaded or accessed directly (e.g., [70]). A suitable method should be chosen considering what data are needed for constructing a knowledge graph. As a result of this step, the data required for knowledge graph development is acquired and prepared for knowledge extraction.

4.2 Construct the Knowledge Graph Ontology

The objective of this step is to construct the knowledge graph ontology that provides a top-level structure for the knowledge graph. This step is needed when the top-down approach is used. The top-down approach is usually used either when (i) there is already a clear domain ontology (e.g., a medical classification in a healthcare domain [52]) that can be used as a basis for the knowledge graph ontology, or (ii) there is structured data that provides a framework for the ontology to be constructed (e.g., a course syllabus structure in an education domain [6]). Constructing the knowledge graph ontology provides predefined types of entities and relations between them. As the basis for ontology construction, common ontologies such as FOAF [11], Geonames [81], or others relevant for the domain, as well as common ontology languages such as RDF(S) [73], OWL [72], and XML [74], can be reused. Ontologies can be constructed manually or automatically. Domain experts can manually develop the ontology, but this is labour-intensive. Additionally, it may be complicated to find relevant experts if the domain is narrow [45].
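A manually built, top-down ontology of the kind described above boils down to predefined entity classes and the relations allowed between them. A minimal sketch in plain Python (class and relation names are invented; a real ontology would use RDF(S) or OWL):

```python
# Top-down ontology: entity classes and the relations allowed
# between them, fixed before any knowledge is extracted.
ontology = {
    "classes": {"Course", "Lecturer", "Topic"},
    "relations": {
        # relation name -> (domain class, range class)
        "taught_by": ("Course", "Lecturer"),
        "covers": ("Course", "Topic"),
    },
}

def conforms(subject_class, relation, object_class, onto=ontology):
    """Check whether a typed triple fits the ontology's schema."""
    return onto["relations"].get(relation) == (subject_class, object_class)
```

Extraction can then be constrained by `conforms`, so that only triples whose types match the predefined schema enter the graph.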
The automatic approach is driven by data and is described in Section 4.4.2.

4.3 Extract Knowledge

Having acquired the data, the next step is to extract knowledge from it. The objective of this step is to extract entities, relations between them, and attributes. There are a number of methods to apply for knowledge extraction, and for different types of data, different techniques are needed. Knowledge extraction from semi-structured and unstructured data requires more effort and more complex techniques, while for structured data, entities and relationships are identified more easily.

4.3.1 Extract Entities. Entity extraction is aimed at discovering and detecting entities in a wide range of data. The objective of this step is both to discover multiple entities for a given type and to identify more informative types for a certain entity [80]. One of the most frequently applied methods is **named-entity recognition (NER)**, which focuses on the discovery and classification of entities into predefined categories or types [14, 34, 36, 38, 40, 47, 48, 51, 53, 69, 84, 86, 87, 92]. Other machine learning methods include dictionary-based or pattern-based discovery, sequence labelling, word and entity embeddings, and so on [80]. The quality of extracted entities highly affects the efficiency and quality of the subsequent knowledge extraction tasks (relations, attributes). Thus, it is a crucial step in knowledge graph development [92].

4.3.2 Extract Relations. After having been extracted, entities are isolated and not linked together; therefore, it is necessary to extract relations among the entities as well [38]. This step also depends on the type of data. For structured data, relations are explicit and easily identifiable. For semi-structured data, pattern-based and rule-based approaches can be used, as well as other machine learning techniques [80, 87].
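A toy version of the pattern-based approach just mentioned can be sketched as a single hand-written rule that yields triples; the pattern, relation name, and sentences are invented, and real systems combine many such rules with NER/NLP pipelines:

```python
import re

# One hand-written rule: "<X> is located in <Y>" yields a
# (X, located_in, Y) triple. Real pattern-based extractors use many rules.
PATTERN = re.compile(r"(\w+) is located in (\w+)")

def extract_triples(text):
    return [(s, "located_in", o) for s, o in PATTERN.findall(text)]

triples = extract_triples(
    "Amsterdam is located in Netherlands. Paris is located in France."
)
```

The output is already in the (subject, predicate, object) form needed for the knowledge graph, which is why rule-based extraction is attractive for semi-structured or templated text.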
In the case of unstructured, textual data, relation extraction requires interpreting semantic information, and natural language processing (NLP) methods are commonly used [14, 38, 84], such as semantic role labeling [21, 54, 66, 87] or neural information extraction [80, 87]. Other examples of relation extraction methods include Open Information Extraction (OIE), bootstrapping and distant supervision for automatic labelling, methods based on frame semantics, such as FrameNet [32], kernel methods, and word embeddings [87]. If an ontology is available (as defined in Step 2, Section 4.2), then the relations between extracted entities can be assigned based on the ones defined in the ontology [78]. As a result of this step, the extracted entities and relations make it possible to construct the triples that are used in the knowledge graph.

4.3.3 Extract Attributes. Attribute extraction refers to acquiring and aggregating the information about a specific entity [48, 51]. In some cases, attribute extraction is seen as the discovery of special types of relations [48] between entities. Nevertheless, the main objective of this step is to describe the entity more clearly [92]. For attribute extraction, methods similar to those used for relation extraction can be applied, e.g., semantic role labeling [54] or machine learning techniques [80]. In some cases, the type of attribute can be predefined before extracting or gathering the data, e.g., attributes for road signs are colour, shape, and so on [42].

4.4 Process Knowledge

The next step in the process is the processing of knowledge. The objective of this step is to ensure that the knowledge extracted is of high quality. The unprocessed extracted entities, relations, and attributes may be ambiguous, redundant, or incomplete. Furthermore, knowledge from different sources has to be aligned. Therefore, it is necessary to integrate the knowledge, map it to an ontology, and complete missing values before constructing the knowledge graph.
4.4.1 Integrate Knowledge. Knowledge integration, also known as knowledge fusion, refers to integrating knowledge from different sources and cleaning it to eliminate redundancy, contradiction, and ambiguity [45, 48, 62]. First of all, all knowledge should be cleaned by removing unnecessary signs, stop words, and other noise, if there is any [19]. This improves the overall quality of the knowledge and prepares the data for entity resolution. In order to remove duplicates and eliminate ambiguity, it is necessary to perform entity resolution [40, 92], which is also referred to as entity alignment [51, 84, 92], entity canonicalisation [80], and entity matching [38, 92]. The objective of this task is to evaluate whether different entities refer to the same real-world objects and, if so, to link them in the knowledge graph. Furthermore, all entities should be linked to unique identifiers (such as URIs or IRIs) that allow the definition of custom namespaces [55]. Entity resolution involves the tasks of blocking, which is used to cluster similar entities into blocks, and similarity, which is used to evaluate whether there are duplicates in a block [40]. There are a variety of methods to be applied per each task, including traditional blocking, sorted neighbourhoods, canopies for blocking, and machine learning methods for similarity, such as feature vector computation and others [40]. Relations can also be semantically similar but syntactically different; thus, it is also necessary to merge similar relations and only keep the main ones (e.g., exploit, use, and adopt are similar [19]).

4.4.2 Construct Ontology or Map to it. If the ontology was not constructed in Step 2 (Section 4.2), then it is recommended to develop it after having integrated the knowledge. The ontology in Step 2 defines the structure of the knowledge graph before extracting knowledge, whereas, in this step, the structure of the knowledge graph is defined based on the extracted knowledge.
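The blocking-and-similarity scheme for entity resolution described above can be sketched as follows; the blocking key (first letter) and the string-similarity measure are deliberately simplistic placeholders for the real methods cited in the text:

```python
from collections import defaultdict
from difflib import SequenceMatcher

entities = ["Amsterdam", "amsterdam city", "Rotterdam", "AMSTERDAM"]

# Blocking: cluster candidates by a cheap key so that the expensive
# pairwise comparisons only happen inside each block.
blocks = defaultdict(list)
for e in entities:
    blocks[e.lower()[0]].append(e)

def duplicates(block, threshold=0.8):
    """Similarity: flag pairs within a block that exceed a threshold."""
    pairs = []
    for i, a in enumerate(block):
        for b in block[i + 1:]:
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
                pairs.append((a, b))
    return pairs

dupes = duplicates(blocks["a"])
```

Flagged pairs would then be merged under one unique identifier (URI/IRI), so that both surface forms point at the same node in the graph.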
The ontology of a knowledge graph allows creating a model of how the knowledge graph is represented in a structured way [23] and describes relations between concepts within a domain [33]. It also helps to evaluate the quality of the extracted data and how complete the knowledge is. While constructing the ontology, it is possible to analyse the knowledge graph and check that the use of domain knowledge is not redundant [70] or predict incomplete ontological triples [36]. Moreover, the construction of the ontology should follow good practices of ontology development [10]. If the ontology was developed in Step 2 and additional knowledge was extracted in Step 3 (Section 4.3), then at this step a mapping between the ontology and the extracted knowledge should be done. Thereby, the types of entities and relations should be aligned to the ones defined in the ontology [78]. Additionally, the previously developed ontology can be enriched based on the extracted knowledge [28]. Thus, the ontology of the knowledge graph should be continuously reviewed and updated.

4.4.3 Complete Knowledge. The objective of this step is to complete and enrich the knowledge in the knowledge graph as well as to improve its overall quality. This includes performing reasoning and inference, validating the triples, and optimising the knowledge graph. Knowledge reasoning and inference refer to developing and enriching the knowledge graph by establishing new relations among entities based on existing relations and discovering new knowledge from existing knowledge [48, 77]. In general, this can be done by logical inference, which is based on the existing rules between relations, and through the use of machine learning (e.g., statistical relational learning or building embedding-based link predictors and node classifiers) [61, 87]. The latter notion also comes under the heading of knowledge graph refinement [61].
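Rule-based logical inference of the kind mentioned above can be sketched with a single transitivity rule; the relation name and triples are invented, and real reasoners support far richer rule sets:

```python
# Known triples; `part_of` is assumed transitive for this sketch.
triples = {
    ("engine", "part_of", "car"),
    ("piston", "part_of", "engine"),
}

def infer_transitive(triples, relation="part_of"):
    """Apply (a, r, b) and (b, r, c) => (a, r, c) until a fixed point."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(inferred):
            for (b2, r2, c) in list(inferred):
                if r1 == r2 == relation and b == b2:
                    new = (a, relation, c)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

closure = infer_transitive(triples)
```

The inferred triple (piston, part_of, car) is new knowledge derived purely from existing relations, which is exactly the enrichment this step is after.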
The validation of triples ensures that only valid and relevant knowledge is included in the knowledge graph. This can be done by setting integrity and other constraints [23] or by setting the features necessary for a triple to be considered valid [21]. In addition, a labelling process can be applied to tag triples as valid or not valid [19]. Finally, knowledge graph optimisation can be performed by removing nodes that are not relevant to the domain [88]. This should be based on consistent and logical rules that allow identifying and eliminating conflicts and gaps in the knowledge graph [80].

4.5 Construct the Knowledge Graph

The objective of this step is to ensure that the knowledge graph is accessible and available for use. This includes storing the knowledge graph in a suitable database, displaying and visualising it for exploration, as well as enabling its use.

4.5.1 Store Knowledge Graph. Knowledge graphs can be stored in various ways due to a wide variety of data models, graph algorithms, and applications [87]. This includes relational databases, key/value stores, triple stores, map/reduce storage [87], and graph databases [32]. Relational databases can be used for storage, even though they may not be the most suitable for large graph management [87]. This type of database can be implemented on top of an existing relational database in the organisation’s infrastructure [77, 87]. Key/value stores are NoSQL database systems that improve the scalability of knowledge graphs and offer more flexibility with regard to data types [87]. Triple stores are databases that store knowledge as triples (subject, predicate, object). The majority of triple stores focus on storing knowledge graphs as RDF triples, which provide a unified framework for representing information online [87].
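The core behaviour of a triple store, storing (subject, predicate, object) facts and answering simple pattern queries, can be sketched as a toy in-memory class (not SPARQL, and unindexed; real stores index several subject/predicate/object permutations):

```python
class TripleStore:
    """Toy in-memory triple store for illustration only."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a variable in a SPARQL pattern.
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("Wikidata", "instance_of", "knowledge_graph")
store.add("Wikidata", "uses", "RDF")
```

A call such as `store.query(s="Wikidata")` mirrors the pattern `("Wikidata", ?p, ?o)`, which is the basic lookup shape the later discussion of SPARQL querying builds on.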
Map/reduce storage is used for processing large knowledge graphs, as it distributes the nodes across different machines so that each machine performs a relatively small amount of computation [87]. Graph databases allow for the storage of nodes, edges, and properties of graphs. These databases provide a variety of functionalities for querying and graph mining; however, the update of knowledge can be slow [92]. As an example of a graph database, Neo4j is widely used in knowledge graph development [14, 15, 22, 33, 38, 49, 51, 53, 56, 70, 86, 88, 90]. It has built-in functionality for, among other things, graph analysis and querying [22, 53].

4.5.2 Display Knowledge Graph. Knowledge graphs are useful because they can be not only analysed in the database but also inspected visually. For this, it is necessary to create a knowledge graph visualisation in order to enable analysis, navigation, and discovery of related knowledge [40, 69, 75, 78, 92]. An example of knowledge graph visualisation is presented in Figure 6(a). Some knowledge graph databases have built-in tools for graph visualisation, for example, Neo4j [56]. Another option is to develop the visualisation using front-end tech stacks, for example, using suitable JavaScript libraries [33, 51, 54, 67, 69]. When developing knowledge graph visualisations, it is important to ensure interactivity and follow best practices of information visualisation. Nevertheless, the display of the visualisation depends on the application of the knowledge graph. For example, Google presents the nodes of the Google Knowledge Graph as infoboxes in the search results (Figure 6(b)). Thus, knowledge graphs can be displayed in multiple ways, and the most suitable one should be chosen considering the intended use of the graph.

Fig. 6. Examples of knowledge graph visualisations.

4.5.3 Enable Use.
Knowledge graphs can have multiple applications, such as web search [87], question answering [78], recommendation generation [46], chatbot functionalities [46], decision support systems [47], text understanding [80, 87], and so on. The application depends on the purpose of the knowledge graph and the domain. Regardless of the chosen application, it is then necessary to implement tools that enable effective knowledge graph use. The implementation is highly dependent on the required functionality. Furthermore, it is important to consider the end users, what kind of skills they have, and how they are going to use the knowledge graph. Querying is one of the key functions of knowledge graphs. It allows users to explore and discover knowledge. Query functions may already be built into the graph database [90]. For example, Neo4j supports the Cypher graph query language, which allows data queries [70]. Other RDF triple stores support SPARQL, which is widely used as the standard query language of knowledge graphs [23, 92]. Querying functionality can also be developed based on specific needs, for example, using knowledge graph matching, distributional semantic matching, or other techniques [78].

4.6 Maintain the Knowledge Graph

As knowledge is constantly changing and evolving, knowledge graphs are never complete. Thus, it is necessary to constantly monitor the knowledge graph, its usage, and the data sources relevant for the domain, and to update the knowledge graph as needed.

4.6.1 Evaluate the Knowledge Graph. Besides the evaluation of completeness and quality, which are addressed in Step 4 (Section 4.4), knowledge graphs can be tested through their application by gathering user feedback [91]. By analysing feedback, it is possible to identify gaps in the knowledge graph and set the development directions.
This feedback may help identify newly available data or provide suggestions on how to improve the application of the knowledge graph, e.g., make it faster or add new functionalities. For this, Step 5 (Section 4.5) has to be repeated by evaluating possible improvements in the storage, display, and/or use of the knowledge graph.

4.6.2 Update the Knowledge Graph. In general, updating the knowledge graph may be needed when (i) there is new data in a data source already used, or (ii) there is a new data source relevant for the knowledge domain [23]. To identify new data in a data source, version and update management is needed both in the data source and in the knowledge graph. Comparing the version and the latest date of update allows easy identification of newly available data. However, this is not always possible, as not all data have version management. In particular, it may be more difficult with unstructured, free-text data. Thus, other update mechanisms should be introduced, such as periodic extraction of new knowledge and mapping with current entities and relations [82]. Once new data is identified, the process is repeated from Step 3 (Section 4.3). The discovery of new relevant data sources is a more complex task. It requires manual research to identify and access new data sources; it can also include legal agreements for data use [23]. Once a new data source is identified, the process is repeated from Step 1 (Section 4.1).

5 CASE STUDIES

In order to evaluate the applicability of the proposed knowledge graph development process, the process is compared to the development of two knowledge graphs—DBpedia as a generic open knowledge graph and the User Experience Practices Knowledge Graph as a domain-specific knowledge graph.

5.1 Comparison to DBpedia

DBpedia is a crowd-sourced open knowledge graph project aimed at extracting structured content from the Wikimedia projects [44].
Data are accessible as Linked Data and through standard Web browsers or automated crawlers. DBpedia’s development process includes the following steps [20]:
1. Definition of mappings and ontology editing;
2. Execution of the knowledge extraction process over Wikipedia dumps;
3. Parsing and validation of the data against strict rules;
4. Release of (intermediate) data artifacts;
5. ID management and knowledge fusion from all language editions;
6. Deployment of the resulting knowledge graph.

The steps of the proposed knowledge graph development process can be mapped to the DBpedia process as follows:
1. **Identify data.** This step is omitted in DBpedia’s development process, as the data source is already identified and clearly defined. As mentioned, DBpedia uses data from various Wikimedia projects. This covers a wide variety of domains, thus making DBpedia a generic knowledge graph.
2. **Construct knowledge graph ontology.** This step corresponds to the step “Definition of mappings and ontology editing”. DBpedia’s ontology was first developed based on infoboxes within Wikipedia and is continuously updated [44]. Currently, the ontology has over 700 classes and 3,000 properties [20].
3. **Extract knowledge.** This step corresponds to the step “Execution of the knowledge extraction process over Wikipedia dumps”. DBpedia extracts data from Wiki pages through the continuous knowledge extraction process (defined by the **DBpedia Information Extraction Framework (DIEF)**) and live extraction, including entity, relation, and attribute extraction. The continuous extraction is performed every month. DBpedia extraction is available through mapping-based (rule-based), generic (automatic), text, and Wikidata extraction [20].
3.1 **Extract entities.** A key method of entity extraction in DBpedia is to extract unmapped information in Wikipedia infoboxes and create entities from the attribute values [20, 44].
3.2 **Extract relations.** Mapping-based extraction is one of the methods used to extract relations from Wikipedia infoboxes [20].
3.3 **Extract attributes.** Other attributes are extracted directly from the article text. Attributes can then be mapped to the existing properties [20].
4. **Process knowledge.** This step corresponds to the steps “Parsing and validation of the data against strict rules” and “ID management and knowledge fusion from all language editions”.
4.1 **Integrate knowledge.** At first, the data themselves are parsed and validated for early release. Then, they are processed globally, focusing on eliminating redundancy and instability of IRI identifiers. For this process, DBpedia uses the FlexiFusion approach, which provides flexibility in processing a large variety of data [20].
4.2 **Construct ontology or map to it.** The knowledge is mapped to DBpedia’s ontology [20]. This also allows for the evaluation of the completeness of the extracted knowledge.
4.3 **Complete knowledge.** Finally, multiple data validation and quality rules are applied to ensure the completeness of the RDF triples, for example, reviewing conformance to the predefined schema and ontology restrictions as well as identifying missing artifacts [30].

Fig. 7. The workflow of the UX Methods Knowledge Graph [24].

(5) **Construct knowledge graph.** This step corresponds to the steps “Release of (intermediate) data artifacts” and “Deployment of the resulting knowledge graph”. The extracted and processed knowledge is published in an accessible way, enabling its use twice—firstly, as intermediate data after strict parsing and validation, and secondly, as a completed knowledge graph [20].
(5.1) **Store the knowledge graph.** The data are stored on the DBpedia Databus platform.
(5.2) **Display the knowledge graph.** The Databus platform is accessible online as datasets; also, DBpedia Live can be accessed as an API.
DBpedia also exposes human-readable representations (i.e., HTML pages) of its knowledge in the form of Linked Data [44].
(5.3) **Enable use.** Data search is enabled by the DBpedia SPARQL endpoint as well as through Linked Data [44]. This allows users to access the data and use them for their own needs.
(6) **Maintain knowledge graph.** The entire process of DBpedia’s development is iterative and constantly reviewed, which allows capturing the most recent and relevant data. This structured release cycle helps ensure that the knowledge graph is kept up to date [30].
(6.1) **Evaluate the knowledge graph.** Community reviews, contributions, and feedback are used to further develop DBpedia. The knowledge graph and its ontology are widely accessible for users to provide feedback and suggestions on how the data should be updated [20, 44].
(6.2) **Update the knowledge graph.** As DBpedia is based on Wikipedia data that are constantly changing, DBpedia is also continuously maintained, and updated versions are released in accordance with the release cycle [30].

Overall, DBpedia’s process is similar to the proposed one. Nevertheless, DBpedia’s process steps are specified to better correspond to the operations and procedures as they are executed in DBpedia. In addition, DBpedia has two stages of processing and releasing data, which allows earlier access to data, even if they are not yet completed as a knowledge graph.

### 5.2 Comparison to User Experience Practices Knowledge Graph

UX Methods is a domain-specific boutique knowledge graph aimed at gathering and integrating knowledge related to user experience design [25]. Its development workflow is presented in Figure 7 and consists of five main stages—(i) Capture, (ii) **Extract Transform Load (ETL)**, (iii) Semantic Reasoning, (iv) Publication, and (v) Iteration.
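As a rough illustration of the ETL stage listed above, a minimal transform from a semi-structured form submission into triples might look as follows (a hypothetical sketch: the field names and predicate labels are invented for illustration and are not the project's actual knowledge model):

```python
# Illustrative sketch: turn a semi-structured method submission
# (as captured via a form) into subject-predicate-object triples.
# The field names and predicate labels below are hypothetical.

def submission_to_triples(submission):
    """Map one captured method record to a list of triples."""
    method = submission["name"]
    triples = [
        (method, "rdf:type", "ux:Method"),
        (method, "ux:description", submission["description"]),
    ]
    # One triple per listed outcome and per subsequent method.
    for outcome in submission.get("outcomes", []):
        triples.append((method, "ux:hasOutcome", outcome))
    for nxt in submission.get("subsequent_methods", []):
        triples.append((method, "ux:followedBy", nxt))
    return triples

record = {
    "name": "Card Sorting",
    "description": "Participants organise topics into categories.",
    "outcomes": ["Information architecture"],
    "subsequent_methods": ["Tree Testing"],
}
triples = submission_to_triples(record)
for t in triples:
    print(t)
```

In the actual workflow, the captured data would additionally pass through semantic reasoning before publication.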
The steps of the proposed knowledge graph development process can be mapped to the UX Methods process as follows:
(1) **Identify data.** This step corresponds to the stage “Capture”. The data are submitted by users using Google Forms in a semi-structured way, providing such information as the method name, description, steps, outcomes, subsequent methods, and available web resources. Additionally, a headless content management system is used to capture information.
(2) **Construct the knowledge graph ontology.** As the ontology of UX Methods is predefined, this step is omitted in the overall workflow. However, UX Methods uses an ontology to describe relationships between different disciplines and methods. It is constantly evolving as new knowledge is added [25].
(3) **Extract knowledge.** This step corresponds to the stage “ETL”. The manually captured data are extracted and transformed to RDF, including entities, relations, and attributes. For this purpose, different techniques are used, including auto-classification, semantic data integration, and NLP [24].
(3.1) **Extract entities.** Entities are gathered through a headless content management system.
(3.2) **Extract relations.** The knowledge model provides a set of relation types that are then used to create relations between entities.
(3.3) **Extract attributes.** Attributes are also gathered through the headless content management system and mapped to the knowledge model.
(4) **Process knowledge.** This step corresponds to the stages “ETL” and “Semantic Reasoning”.
(4.1) **Integrate knowledge.** Newly extracted knowledge can be linked to the existing entities when the data are being updated; however, this is not explicitly explained, as the data are manually gathered.
(4.2) **Construct ontology or map to it.** The extracted knowledge is mapped to the ontology.
(4.3) **Complete knowledge.** The data are processed by the Protégé reasoner, which enables inference and identifies additional relations, thus completing the knowledge graph [24].
(5) **Construct knowledge graph.** This step corresponds to the stage “Publication” [25].
(5.1) **Store the knowledge graph.** The data are stored on the Data.world platform [24].
(5.2) **Display the knowledge graph.** The processed knowledge is published in RDF/XML format and is used to populate the website, allowing users to query and view it [25].
(5.3) **Enable use.** Multiple front-end tools are used to provide access and enable use, including Jekyll and Jekyll-RDF for querying, Lunr.js for implementing the search functionality, and Gulp for automating the development workflow [25].
(6) **Maintain knowledge graph.** This step corresponds to the stage “Iteration”.
(6.1) **Evaluate the knowledge graph.** The evaluation, recapturing, and reintegration of knowledge are performed based on user input, traffic analytics, and search analytics [25].
(6.2) **Update the knowledge graph.** With each iteration, data are re-integrated into the data models, mappings, and queries. The knowledge graph relies on the users’ feedback both for the update of the knowledge and the maintenance of the knowledge graph itself [25].

Overall, the UX Methods process is similar to the proposed one, as it includes all the identified steps and employs different techniques and algorithms to develop a knowledge graph. However, UX Methods leverages the users’ input, feedback, and interaction to further develop the knowledge graph, whereas this is not captured in the proposed process.

### 6 DISCUSSION

Based on the case studies, the proposed knowledge graph development process is applicable. The main steps cover the essential development steps and, thus, can be applied in practice.
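The six main steps and their composition can be summarised as a simple pipeline skeleton (purely illustrative: every function below is a placeholder for the project-specific tools and algorithms discussed in Section 4, and the names are our own):

```python
# Skeleton of the six-step process; each stage is a placeholder
# for project-specific tools and algorithms.

def identify_data(domain):                   # Step 1: identify data
    return {"domain": domain, "sources": []}

def construct_ontology(data):                # Step 2: construct the ontology
    return {"classes": [], "properties": []}

def extract_knowledge(data):                 # Step 3: entities, relations, attributes
    return {"entities": [], "relations": [], "attributes": []}

def process_knowledge(knowledge, ontology):  # Step 4: integrate, map, complete
    return {"triples": [], "ontology": ontology}

def construct_graph(processed):              # Step 5: store, display, enable use
    return {"graph": processed["triples"]}

def maintain(graph):                         # Step 6: evaluate, update (loops back)
    return {"needs_update": False}

def run_pipeline(domain):
    data = identify_data(domain)
    ontology = construct_ontology(data)
    knowledge = extract_knowledge(data)
    processed = process_knowledge(knowledge, ontology)
    graph = construct_graph(processed)
    status = maintain(graph)
    return graph, status

graph, status = run_pipeline("user-experience")
print(graph, status)
```

In practice, the maintenance step feeds back into Steps 1 or 3 (Sections 4.1 and 4.3) rather than terminating, as described in Section 4.6.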
However, there are several considerations as to what extent the proposed process is suitable for use in all cases of knowledge graph development.

**Initial vs. continuous development of knowledge graphs.** Based on this systematic review, the research literature focuses on the initial development of knowledge graphs, while the case studies focused on presenting the continuous development of knowledge graphs. In the case studies, initial considerations (such as Step 1 “Identify data”) are done once, when establishing a need for a knowledge graph. In addition, in the case studies, the knowledge graph ontology is already present, and it is not explained whether it was developed separately or based on the knowledge used in the graph. Therefore, if the proposed process is used for an existing knowledge graph, Step 2 “Construct ontology” is not needed, whereas Step 4.2 “Construct ontology or map to it” is performed, focusing on mapping the new data to the existing ontology and, if needed, updating the ontology based on the extracted knowledge. While Steps 1 and 2 are essential for determining the scope and structure of the knowledge graph, they are not necessarily revised with each update of the knowledge graph. The nature of scientific articles also affects the “pipeline-like” visualisation of the proposed process. Since articles focus on presenting how a knowledge graph was developed for a specific case, they commonly do not consider feedback loops and continuous iterations. Thus, more focus is on the initial one-time development rather than on continuous updates. Under these considerations, our proposed process appears to be more useful for initial knowledge graph development, where it is necessary to determine the data and the structure of the knowledge graph.
We believe that applying this process to existing knowledge graphs would require additional adaptation, since many of the main decision points have typically already been made and the main focus is on acquiring new data and processing it in order to update the graph.

**Level of abstraction.** The proposed process aims at providing overall guidance in knowledge graph development. However, the developers (a person or a team responsible for developing the knowledge graph) have to perform additional research and make decisions in order to construct the knowledge graph. Based on various factors, such as the type of data, the choice of algorithms, the type of graph storage, the application of knowledge, and others, the process can differ between knowledge graphs. The process can be useful as a tool to check whether all aspects and considerations are covered. Nevertheless, there is still a need for the developers to choose appropriate methods and algorithms for data acquisition, knowledge extraction, and processing, as well as to set appropriate measures for maintaining, updating, and managing the knowledge graph (e.g., setting the frequency and procedure for the knowledge graph update).

**User perspective.** The reviewed literature does not focus on discussing the role of knowledge graph users in the knowledge graph development process. This may be a result of the fact that research articles focus on presenting the most efficient algorithms and how they work rather than on how the knowledge graph will be used once developed. In contrast, the case studies take into account user feedback and how the knowledge graph is used (e.g., traffic or search analytics) for the knowledge graph maintenance and further development. User feedback and analytics can indicate what data need to be included, how the knowledge graph should be updated, and how the application itself could be improved.
Therefore, while the user perspective was not considered in the literature we reviewed, it can be a valuable addition when maintaining a knowledge graph. Positively, we note that the success of Wikidata [71] has led to greater interest by the research community in users and knowledge graph development [5, 39].

**Applying the proposed process.** The proposed process provides a starting point when developing a knowledge graph, as well as the main steps and areas to consider. It assists in deciding whether a top-down or bottom-up approach should be used, as well as in planning the work that needs to be done. Nevertheless, the process is generic and requires additional research and decision-making from the individual or team applying it on what tools and techniques to adopt. There are multiple tools and techniques that can be used in each step of the knowledge graph development process, and they depend on multiple development decisions that were described in this article (Section 4). While some algorithms and methods are mentioned here, there are other resources that describe such methods in detail (e.g., [31, 80]). The main focus of the reviewed articles is generic or domain-specific knowledge graphs and building them from the beginning rather than improving them. For this reason, the proposed process is better suited for initial knowledge graph development than for improving an existing knowledge graph. In addition, more types of knowledge graphs are emerging that were not described in the reviewed articles, for example, personal knowledge graphs [8]. Such knowledge graphs are focused on a user or a person rather than a specific domain. Additionally, for simple knowledge graphs, the proposed process may be too complex and include unnecessary steps. Lastly, the vast majority of reviewed articles did not base their approach to knowledge graph development on a solid framework but rather described the workflow of their project.
The process described here is a synthesis of the knowledge graph development approaches in the literature. Thus, the described process provides an evidence-based framework for organising and managing knowledge graph development in a structured manner.

**Validity of the research.** While this article achieves its goal of providing a summary of knowledge graph development processes found in multiple articles, several considerations about its validity need to be taken into account. Internal validity is affected by the methodology and research design. The systematic review is highly dependent on the interpretation and biases of the author in the choice of articles, coding, and setting priorities. Moreover, while we believe that our method captured the research base, as the most relevant articles in multiple major scientific sources were screened, we cannot guarantee that we retrieved all relevant articles, as we applied a threshold and did not perform snowballing due to time constraints. To help ensure validity, the PRISMA guidelines were followed, focusing on transparency of the review process. A checklist can be found in Appendix B. In addition, external validity concerns the extent to which the results apply to a population. Only scientific articles were analysed, and the evaluation was based on two case studies. While the evaluated case studies show that the proposed process corresponds to actual industry cases, there is no empirical basis to determine to what extent the proposed process can be applied and generalised to the whole population. Additional evaluation methods could lead to a broader understanding of its general applicability (e.g., interviews with experts or organisations using knowledge graphs in their operations). Furthermore, the practical implementation of a knowledge graph could be carried out following the proposed process to examine its efficacy as a guide to knowledge graph development.
7 CONCLUSION

This article aimed at understanding the main steps in the knowledge graph development process and how they are interrelated. This was done through a systematic review and conceptual analysis. The main steps of the development process include: (i) identify data, (ii) construct the knowledge graph ontology, (iii) extract knowledge, (iv) process knowledge, (v) construct the knowledge graph, and (vi) maintain the knowledge graph. The relations between the steps are presented in Figure 5. The proposed process suggests a unified approach to knowledge graph development and provides guidance for both researchers and practitioners when constructing and managing knowledge graphs. There are a number of avenues for future work, including:
— **Researching additional industry cases.** While this research focuses mostly on the development of knowledge graphs as reported in the literature, a study on how organisations perform this process would provide further richness to the process in practice.
— **Evaluating the proposed process with experts and organisations using knowledge graphs in their activities.** This would allow for a more accurate assessment of the proposed process, its added value, and how it can be used in practice.
— **Examining how existing software development, ontology development, or other methodologies in the field of computer science can be applied to knowledge graph development.** This article focused on synthesising and analysing knowledge graph development processes. Examining the proposed process by comparing it to other existing methodologies would allow this extensive literature to be incorporated and compared.
— **Developing a knowledge graph using the proposed process.** This would allow for the evaluation of the practicality and applicability of the proposed process.
— **Researching tools and techniques for each step of knowledge graph development.** While this article is focused on organising and managing the knowledge graph development process, additional research and mapping of tools and techniques for each step could provide further assistance for researchers and developers.

Overall, we hope this research provides a foundation for further investigation into how software and data engineering methodologies can be used to assist developers and researchers in the construction and maintenance of knowledge graphs.

**APPENDICES**

**A SUMMARY OF ARTICLES**

<table> <thead> <tr> <th>No.</th> <th>Article</th> <th>Year</th> <th>Article type</th> <th>Process type</th> <th>Process label</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Sun K. et al. [69]</td> <td>2016</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>3</td> <td>Lian H. et al. [48]</td> <td>2017</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>4</td> <td>Qui L. et al. [62]</td> <td>2017</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>5</td> <td>Zhao Y. et al. [91]</td> <td>2017</td> <td>Domain specific</td> <td>Top-down</td> <td>Aspects</td> </tr> <tr> <td>6</td> <td>Lin Z. Q. et al. [49]</td> <td>2017</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Overview</td> </tr> <tr> <td>7</td> <td>Xin H. et al. [85]</td> <td>2018</td> <td>Domain specific</td> <td>Top-down</td> <td>Workflow</td> </tr> <tr> <td>8</td> <td>Yan J. et al. [87]</td> <td>2018</td> <td>Methodological</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>10</td> <td>Wang C. et al. [75]</td> <td>2018</td> <td>Domain specific</td> <td>Top-down</td> <td>Workflow</td> </tr> <tr> <td>11</td> <td>Martinez-Rodriguez J. L. et al. [54]</td> <td>2018</td> <td>Methodological</td> <td>Bottom-up</td> <td>Method</td> </tr> <tr> <td>12</td> <td>Shekarpour S. et al.
[66]</td> <td>2018</td> <td>Domain specific</td> <td>Top-down</td> <td>Pipeline</td> </tr> <tr> <td>13</td> <td>Zhao Z. et al. [92]</td> <td>2018</td> <td>Methodological</td> <td>Bottom-up</td> <td>Architecture</td> </tr> <tr> <td>14</td> <td>Yang C. et al. [16]</td> <td>2018</td> <td>Domain specific</td> <td>Top-down</td> <td>Procedure</td> </tr> <tr> <td>15</td> <td>Chenglin Q. et al. [15]</td> <td>2018</td> <td>Domain specific</td> <td>Top-down</td> <td>Technologies</td> </tr> <tr> <td>16</td> <td>Wu T. et al. [82]</td> <td>2018</td> <td>Methodological</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>17</td> <td>Sharafeledeen D. et al. [65]</td> <td>2019</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Workflow</td> </tr> </tbody> </table> (Continued) <table> <thead> <tr> <th>No.</th> <th>Article</th> <th>Year</th> <th>Article type</th> <th>Process type</th> <th>Process label</th> </tr> </thead> <tbody> <tr> <td>18</td> <td>Mehta A. et al. [55]</td> <td>2019</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Pipeline</td> </tr> <tr> <td>19</td> <td>Huang L. et al. [34]</td> <td>2019</td> <td>Domain specific</td> <td>Top-down</td> <td>Process</td> </tr> <tr> <td>20</td> <td>Zhou Y. et al. [94]</td> <td>2019</td> <td>Domain specific</td> <td>Top-down</td> <td>Framework</td> </tr> <tr> <td>21</td> <td>Hu H. et al. [33]</td> <td>2019</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>22</td> <td>Wu T. et al. [83]</td> <td>2019</td> <td>Methodological</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>23</td> <td>Christophides V. et al. [17]</td> <td>2019</td> <td>Methodological</td> <td>Bottom-up</td> <td>Workflow</td> </tr> <tr> <td>24</td> <td>Kejriwal M. [40]</td> <td>2019</td> <td>Methodological</td> <td>Bottom-up</td> <td>-</td> </tr> <tr> <td>26</td> <td>Wang P. et al. [77]</td> <td>2019</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>27</td> <td>Chen Y. et al. 
[14]</td> <td>2019</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>28</td> <td>Yu H. et al. [88]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>29</td> <td>Weikum G. et al. [80]</td> <td>2020</td> <td>Methodological</td> <td>Bottom-up</td> <td>Roadmap</td> </tr> <tr> <td>30</td> <td>Li F. et al. [46]</td> <td>2020</td> <td>Domain specific</td> <td>Top-down</td> <td>Process</td> </tr> <tr> <td>31</td> <td>Su Y. et al. [67]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Method</td> </tr> <tr> <td>32</td> <td>Hertling S. et al. [28]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Workflow</td> </tr> <tr> <td>33</td> <td>Nitisha J. [36]</td> <td>2020</td> <td>Domain specific</td> <td>Top-down</td> <td>Approach</td> </tr> <tr> <td>34</td> <td>Li L. et al. [47]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Procedure</td> </tr> <tr> <td>35</td> <td>Mao S. et al. [53]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>36</td> <td>Kim J. E. et al. [42]</td> <td>2020</td> <td>Domain specific</td> <td>Top-down</td> <td>Approach</td> </tr> <tr> <td>37</td> <td>Wang Q. et al. [78]</td> <td>2020</td> <td>Domain specific</td> <td>Top-down</td> <td>Framework</td> </tr> <tr> <td>38</td> <td>Xiao D. et al. [84]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Method</td> </tr> <tr> <td>39</td> <td>Yu S. et al. [89]</td> <td>2020</td> <td>Domain specific</td> <td>Not clear</td> <td>Framework</td> </tr> <tr> <td>40</td> <td>Elhammadi S. et al. [21]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Pipeline</td> </tr> <tr> <td>41</td> <td>Fang W. et al. [22]</td> <td>2020</td> <td>Domain specific</td> <td>Top-down</td> <td>Workflow</td> </tr> <tr> <td>42</td> <td>Wang M. et al. 
[76]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Pipeline</td> </tr> <tr> <td>43</td> <td>Malik K. M. et al. [52]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Architecture</td> </tr> <tr> <td>44</td> <td>Muhammad I. et al. [56]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Approach</td> </tr> <tr> <td>45</td> <td>Liu S. et al. [51]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>46</td> <td>Jin Y. et al. [38]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>47</td> <td>Li F. et al. [45]</td> <td>2020</td> <td>Methodological</td> <td>Bottom-up</td> <td>Flow chart</td> </tr> <tr> <td>48</td> <td>Fensel D. et al. [23]</td> <td>2020</td> <td>Methodological</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>49</td> <td>Dessì D. et al. [19]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Pipeline</td> </tr> <tr> <td>51</td> <td>Yan H. et al. [86]</td> <td>2020</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>52</td> <td>Kim H. [41]</td> <td>2021</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>53</td> <td>Yu X. et al. [90]</td> <td>2021</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Process</td> </tr> <tr> <td>54</td> <td>Liu J. et al. [50]</td> <td>2021</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Workflow</td> </tr> <tr> <td>55</td> <td>Zhou B. et al. [93]</td> <td>2021</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Framework</td> </tr> <tr> <td>56</td> <td>Dessì D. et al. [18]</td> <td>2021</td> <td>Domain specific</td> <td>Bottom-up</td> <td>Workflow</td> </tr> <tr> <td>57</td> <td>Tan J. et al. [70]</td> <td>2021</td> <td>Domain specific</td> <td>Top-down</td> <td>Framework</td> </tr> </tbody> </table> ## B PRISMA 2020 CHECKLIST ![PRISMA 2020 Checklist](image) **Fig. B.1. 
The PRISMA checklist for this review [60].**

C VISUALISATIONS OF THE KNOWLEDGE GRAPH DEVELOPMENT IN THE SELECTED ARTICLES

Fig. C.1. Workflow of subjective KB construction [85].
Fig. C.2. Framework of the construction method [88].
Fig. C.3. The framework of knowledge graph [87].
Fig. C.4. Data mining workflow for knowledge graph construction [65].
Fig. C.5. The general framework of Chinese knowledge graph construction [82].
Fig. C.6. System architecture of K12EduKG [13].
Fig. C.7. The architecture of subject KG construction [67].
Fig. C.8. Knowledge base construction system [4].
Fig. C.9. The overall framework of TCM knowledge graph construction [94].
Fig. C.10. Functional design framework [33].
Fig. C.11. Proposed framework (OE: Online encyclopedia) [83].
Fig. C.12. The generic end-to-end workflow for Entity Resolution [17].
Fig. C.13. The overall workflow creating the DBkWik knowledge graph [28].
Fig. C.14. First approach for NER for artwork titles [36].
Fig. C.15. Technique procedure in this study [75].
Fig. C.16. Overview of the proposed method [54].
Fig. C.17. Proposed systematic procedure of medical KG construction [47].
Fig. C.18. Overall process of analyzing formal concepts from collected specifications and transforming a product knowledge graph [41].
Fig. C.19. Construction process of the knowledge graph [53].
Fig. C.20. The pipeline of the required steps for developing a knowledge graph of interlined events [66].
Fig. C.21. The process of creating this knowledge graph [90].
Fig. C.22. COVID-KG overview: from data to semantics to knowledge [78].
Fig. C.23. The overview of our method [84].
Fig. C.24. The financial knowledge extraction pipeline [21].
Fig. C.25. The architecture of knowledge graph [92].
Fig. C.26. The workflow of the proposed hybrid semantic computer vision approach [22].

Defining a Knowledge Graph Development Process

Fig. C.27. Building a knowledge graph flowchart [16].
Fig. C.28. Constructing process of the visual analysis platform [69].
Fig. C.29. Functional view of automated knowledge graph architecture [52].
Fig. C.30. The framework for construction for WRKG [93].
Fig. C.31. Workflow of the approach for building a scientific knowledge graph from scientific textual resources [18].
Fig. C.32. The flowchart of knowledge graph in the domain of culture [79].
Fig. C.33. The three-level framework of the ontology-based literatures’ knowledge reasoning network modeling [12].
Fig. C.34. Stages involved in the construction of a literature knowledge graph using OIE4KGC [56].
Fig. C.35. Construction framework of Chinese ancient historical and cultural knowledge graph [51].
Fig. C.36. Research framework for the construction and complement of knowledge graphs in the field of urban traffic [70].
Fig. C.37. The united process to construct the graph personal relationships [38].
Fig. C.38. Overview of Sogou knowledge graph construction framework [77].
Fig. C.39. The process of knowledge graph construction [48].
Fig. C.40. Improved flow chart of knowledge graph construction [45].
Fig. C.41. The process of constructing Uyghur knowledge graph [62].
Fig. C.42. Collaborative development of industrial knowledge graph [91].
Fig. C.43. The framework of AgriKG [14].
Fig. C.44. Schema of the pipeline to extract and handle entities and relations [19].
Fig. C.45. System architecture [6].
Fig. C.46. Logical overview of the software knowledge graph construction platform [49].
Fig. C.47. The process of built knowledge graph [86].

REFERENCES

[34] Lan Huang, Congcong Yu, Yang Chi, Xiaohui Qi, and Hao Xu. 2019. Towards smart healthcare management based on knowledge graph technology. In *Proceedings of the 2019 8th International Conference on Software and Computer Applications*. Association for Computing Machinery, 330–337. DOI: https://doi.org/10.1145/3316165.3316678
[70] Jiuyan Tan, Qianqian Qiu, Weiwei Guo, and Tingshuai Li. 2021.
Research on the construction of a knowledge graph and knowledge reasoning model in the field of urban traffic. Sustainability 13, 6 (3 2021), 3191. DOI: https://doi.org/10.3390/su13063191 ACM Transactions on Software Engineering and Methodology, Vol. 32, No. 1, Article 27. Publication date: February 2023. Received 3 August 2021; revised 4 February 2022; accepted 24 February 2022
Package ‘groHMM’ Version 1.10.0 Date 2016-02-11 Title GRO-seq Analysis Pipeline Author Charles G. Danko, Minho Chae, Andre Martins, W. Lee Kraus Maintainer Anusha Nagari <anusha.nagari@utsouthwestern.edu>, Venkat Malladi <venkat.malladi@utsouthwestern.edu>, Tulip Nandu <tulip.nandu@utsouthwestern.edu>, W. Lee Kraus <lee.kraus@utsouthwestern.edu> Depends R (>= 3.0.2), MASS, parallel, S4Vectors (>= 0.9.25), IRanges (>= 2.5.27), GenomeInfoDb, GenomicRanges (>= 1.23.16), GenomicAlignments, rtracklayer Suggests BiocStyle, GenomicFeatures, edgeR, org.Hs.eg.db, TxDb.Hsapiens.UCSC.hg19.knownGene Description A pipeline for the analysis of GRO-seq data. URL https://github.com/Kraus-Lab/groHMM BugReports https://github.com/Kraus-Lab/groHMM/issues License GPL-3 biocViews Sequencing, Software LazyLoad yes NeedsCompilation yes R topics documented: groHMM-package, averagePlot, breakTranscriptsOnGenes, combineTranscripts, countMappableReadsInInterval, detectTranscripts, evaluateHMMInAnnotations, expressedGenes, getCores, getTxDensity, limitToXkb, makeConsensusAnnotations, metaGene, metaGeneMatrix, metaGene_nL, pausingIndex, polymeraseWave, readBed, RgammaMLE, Rnorm, Rnorm.exp, runMetaGene, tlsDeming, tlsLoess, tlsSvd, windowAnalysis, writeWiggle groHMM-package Description groHMM was developed for analysis of GRO-seq data, which provides a genome-wide 'map' of the position and orientation of all transcriptionally active RNA polymerases. groHMM predicts the boundaries of transcriptional activity across the genome de novo using a two-state hidden Markov model (HMM). 
The model essentially divides the genome into 'transcribed' and 'non-transcribed' regions in a strand-specific manner. We also use HMMs to identify the leading edge of Pol II at genes activated by a stimulus in GRO-seq time course data. This approach allows the genome-wide interrogation of transcription rates in cells. In addition to these advanced features, groHMM provides wrapper functions for counting raw reads, generating wiggle files for visualization, and creating metagene (averaging) plots. Although groHMM is tailored towards GRO-seq data, the same functions and analytical methodologies can, in principle, be applied to a wide variety of other short-read data sets. Details Package: groHMM Type: Package Version: 0.99.0 Date: 2014-04-02 License: GPL (>=3) LazyLoad: yes Depends: R (>= 2.14.0), MASS, GenomicRanges, rtracklayer, parallel Author(s) Charles G. Danko, Minho Chae, Andre Martins Maintainer: Minho Chae <minho.chae@gmail.com> averagePlot Returns the average profile of tiling array probe intensity values or wiggle-like count data centered on a set of genomic positions (specified by 'Peaks'). Description Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, use the argument 'mc.cores'. Usage averagePlot(ProbeData, Peaks, size = 50, bins = seq(-1000, 1000, size)) Arguments ProbeData Data.frame representing chromosome, window center, and a value. Peaks Data.frame representing chromosome and window center. size Numeric. The size of the moving window. Default: 50 bp. bins The bins of the metagene, i.e., the number of moving windows to break it into. Default: +/- 1 kb from center. Value A vector representing the 'typical' signal centered on the peaks of interest. Author(s) Charles G. Danko and Minho Chae breakTranscriptsOnGenes **Description** Breaks transcripts when they overlap multiple well-annotated genes. 
**Usage** ``` breakTranscriptsOnGenes(tx, annox, strand = "+", geneSize = 5000, threshold = 0.8, gap = 5, plot = FALSE) ``` **Arguments** - `tx` : GRanges of transcripts. - `annox` : GRanges of non-overlapping annotations for reference. - `strand` : Takes "+" or "-". Default: "+" - `geneSize` : Numeric. Minimum gene size in annox to be used as reference. Default: 5000 - `threshold` : Numeric. Ratio of overlapped region relative to a gene width. Only transcripts greater than this threshold are subject to being broken. Default: 0.8 - `gap` : Numeric. Gap between broken transcripts. Default: 5 - `plot` : Logical. If set to TRUE, show each step in a plot. Default: FALSE **Value** Returns GRanges object of broken transcripts. **Author(s)** Minho Chae and Charles G. Danko **Examples** ``` tx <- GRanges("chr7", IRanges(1000, 30000), strand="+") annox <- GRanges("chr7", IRanges(start=c(1000, 20000), width=c(10000,10000)), strand="+") bPlus <- breakTranscriptsOnGenes(tx, annox, strand="+") ``` **combineTranscripts** **Description** Combines transcripts that are within the same gene annotation, combining smaller transcripts for genes with low regulation into a single transcript representing the gene. **Usage** ```r combineTranscripts(tx, annox, geneSize = 1000, threshold = 0.8, plot = FALSE) ``` **Arguments** - `tx`: GRanges of transcripts. - `annox`: GRanges of non-overlapping annotations for reference. - `geneSize`: Numeric. Minimum gene size in annotations to be used as reference. Default: 1000 - `threshold`: Numeric. Ratio of overlapped region relative to transcript width. Only transcripts greater than this threshold are subject to being combined. Default: 0.8 - `plot`: Logical. If set to TRUE, show each step in a plot. Default: FALSE **Value** Returns GRanges object of combined transcripts. **Author(s)** Minho Chae and Charles G. 
Danko **Examples** ```r tx <- GRanges("chr7", IRanges(start=c(1000, 20000), width=c(10000,10000)), strand="+") annox <- GRanges("chr7", IRanges(1000, 30000), strand="+") combined <- combineTranscripts(tx, annox) ``` --- **countMappableReadsInInterval** *countMappableReadsInInterval* counts the number of mappable reads in a set of genomic features. **Description** Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, use the argument 'mc.cores'. Usage countMappableReadsInInterval(features, UnMap, debug = FALSE, ...) Arguments features A GRanges object representing a set of genomic coordinates. UnMap List object representing the position of un-mappable reads. Default: not used. debug If set to TRUE, provides additional print options. Default: FALSE ... Extra argument passed to mclapply Value Returns a vector of counts, each representing the number of reads inside each genomic interval. Author(s) Charles G. Danko and Minho Chae detectTranscripts detectTranscripts detects transcripts de novo using a two-state hidden Markov model (HMM). Description Read counts can be specified as either a GRanges object (reads), or using a fixed-step wiggle-format passed in a list (Fp and Fm). Either reads or BOTH Fp and Fm must be specified. Usage detectTranscripts( reads = NULL, Fp = NULL, Fm = NULL, LtProbA = -5, LtProbB = -200, UTS = 5, size = 50, threshold = 0.1, debug = TRUE, ...) Arguments reads A GRanges object representing a set of mapped reads. Fp Wiggle-formatted read counts on "+" strand. Optionally, Fp and Fm represent list() filled with a vector of counts for each chromosome. Can detect transcripts starting from a fixed-step wiggle. Fm Wiggle-formatted read counts on "-" strand. LtProbA Log probability of t... . Default: -5. 
One of these is just an initialization, and the final value is set by EM. The other is a holdout parameter. LtProbB Log probability of t... . Default: -200. UTS Variance in read counts of the untranscribed sequence. Default: 5. size The size of the moving window (bp). Default: 50. threshold Threshold change in total likelihood, below which EM exits. debug If set to TRUE, provides additional print options. Default: FALSE ... Extra argument passed to mclapply Details Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors set the option 'mc.cores'. Value Returns a list of emisParams, transParams, viterbiStates, and transcripts. The transcripts element is a GRanges object representing the predicted genomic coordinates of transcripts on both the + and - strand. Author(s) Charles G. Danko and Minho Chae Examples ```r S0mR1 <- as(readGAlignments(system.file("extdata", "S0mR1.bam", package="groHMM")), "GRanges") ## Not run: # hmmResult <- detectTranscripts(S0mR1, LtProbB=-200, UTS=5, threshold=1) # txHMM <- hmmResult$transcripts ``` evaluateHMMInAnnotations Evaluates HMM calling. Description Evaluates HMM calling of transcripts compared to known annotations. Usage ```r evaluateHMMInAnnotations(tx, annox) ``` Arguments - **tx** - GRanges of transcripts predicted by HMM. - **annox** - GRanges of non-overlapping annotations. Value - A list of error information: merged annotations, dissociated annotations, total, and rate. Author(s) - Minho Chae Examples ```r tx <- GRanges("chr7", IRanges(start=seq(100, 1000, by=200), width=seq(100, 1000, by=200)), strand="+") annox <- GRanges("chr7", IRanges(start=seq(110, 1100, by=150), width=seq(100, 1000, by=150)), strand="+") error <- evaluateHMMInAnnotations(tx, annox) ``` **expressedGenes** *Function identifies expressed features using the methods introduced in Core, Waterfall, Lis; Science, Dec. 
2008.* **Description** Supports parallel processing using mclapply in the `parallel` package. To change the number of processors use the argument `mc.cores`. **Usage** ```r expressedGenes(features, reads, Lambda = NULL, UnMap = NULL, debug = FALSE, ...) ``` **Arguments** - **features**: A GRanges object representing a set of genomic coordinates. The meta-plot will be centered on the start position. There can be an optional "ID" column for gene IDs. - **reads**: A GRanges object representing a set of mapped reads. - **UnMap**: List object representing the position of un-mappable reads. Default: not used. - **debug**: If set to TRUE, returns the number of positions. Default: FALSE. - **...**: Extra argument passed to mclapply **Value** Returns a data.frame representing the expression p.values for features of interest. **Author(s)** Charles G. Danko **getCores** *Returns the number of cores.* **Description** Returns the number of cores. **Usage** ```r getCores(cores) ``` **Arguments** - `cores` The number of cores to use; returns 1 on the Windows platform. **Examples** ```r cores <- getCores(2L) ``` --- **getTxDensity** *getTxDensity Calculates transcript density.* **Description** Calculates transcript density for transcripts which overlap with annotations. For 'run genes together' or 'broken up a single annotation' errors, the best-overlapped transcripts or annotations are used. **Usage** ```r getTxDensity(tx, annox, plot = TRUE, scale = 1000L, nSampling = 0L, samplingRatio = 0.1, ...) ``` **Arguments** - `tx` GRanges of transcripts. - `annox` GRanges of non-overlapping annotations. - `plot` Logical. If TRUE, plot transcript density. Default: TRUE - `scale` Numeric. Scaled size of a gene for transcript density calculation. Default: 1000L - `nSampling` Numeric. Number of subsamplings. Default: 0L - `samplingRatio` Numeric. Ratio of sampling for annotations. Default: 0.1 - `...` Extra argument passed to mclapply. 
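getTxDensity has no example in this manual, so a minimal sketch follows. The toy `tx` and `annox` objects are hypothetical, and the call itself is shown under the manual's own `## Not run:` convention because it requires the groHMM package to be installed:

```r
library(GenomicRanges)

## Hypothetical toy data: two predicted transcripts and two
## non-overlapping annotations on the "+" strand.
tx <- GRanges("chr7", IRanges(start=c(1000, 20000), width=c(9000, 11000)), strand="+")
annox <- GRanges("chr7", IRanges(start=c(1200, 21000), width=c(8000, 9000)), strand="+")

## Not run:
# td <- getTxDensity(tx, annox, plot=FALSE)
```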
**Details** Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors set the option 'mc.cores'. limitToXkb ## Description limitToXkb truncates a set of genomic intervals at a constant, maximum size. ## Usage ```r limitToXkb(features, offset = 1000, size = 13000) ``` ## Arguments - `features`: A GRanges object representing a set of genomic coordinates. - `offset`: Starts the interval at this position relative to the start of each genomic feature. - `size`: Specifies the size of the window. ## Value Returns GRanges object with new genomic coordinates. ## Author(s) Minho Chae and Charles G. Danko ## Examples ```r tx <- GRanges("chr7", IRanges(start=seq(1000, 4000, by=1000), width=seq(1000, 1300, by=100)), strand=rep("+", 4)) limitedTx <- limitToXkb(tx) ``` **makeConsensusAnnotations** *makeConsensusAnnotations* Makes a consensus annotation **Description** Makes a non-overlapping consensus annotation. Gene annotations are often overlapping due to multiple isoforms for a gene. In a consensus annotation, isoforms are first reduced so that only redundant intervals are used to represent a genomic interval for a gene, i.e., a gene id. Remaining unresolved annotations are further reduced by truncating the 3' ends of annotations. **Usage** ```r makeConsensusAnnotations(ar, minGap = 1L, minWidth = 1000L, ...) ``` **Arguments** - `ar` GRanges of annotations to be collapsed. - `minGap` Minimum gap between overlapped annotations after truncation. Default: 1L - `minWidth` Minimum width of consensus annotations. Default: 1000L - `...` Extra argument passed to mclapply. **Details** Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, use the argument `mc.cores`. **Value** Returns GRanges object of annotations. 
**Author(s)** Minho Chae **Examples** ```r ## Not run: # library(TxDb.Hsapiens.UCSC.hg19.knownGene) # txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene # tx <- transcripts(txdb, columns=c("gene_id", "tx_id", "tx_name"), # filter=list(tx_chrom="chr7")) # tx <- tx[grep("random", as.character(seqnames(tx)), invert=TRUE),] # ca <- makeConsensusAnnotations(tx) ``` **metaGene** *Returns a histogram of the number of reads in each section of a moving window centered on a certain feature.* **Description** Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, set the option 'mc.cores'. **Usage** ```r metaGene(features, reads = NULL, plusCVG = NULL, minusCVG = NULL, size = 100L, up = 10000L, down = NULL, ...)``` **Arguments** - `features`: A GRanges object representing a set of genomic coordinates. The meta-plot will be centered on the transcription start site (TSS). - `reads`: A GRanges object representing a set of mapped reads. Instead of 'reads', 'plusCVG' and 'minusCVG' can be used; Default: NULL. - `plusCVG`: A RangesList object for reads with '+' strand. - `minusCVG`: A RangesList object for reads with '-' strand. - `size`: The size of the moving window. - `up`: Distance upstream of each feature to align and histogram. Default: 10 kb. - `down`: Distance downstream of each feature to align and histogram. If NULL, same as up. Default: NULL. - `...`: Extra argument passed to mclapply. **Value** Returns an integer-Rle representing the 'typical' signal centered on a point of interest. **Author(s)** Charles G. Danko and Minho Chae **Examples** ```r features <- GRanges("chr7", IRanges(1000, 1000), strand="+") reads <- GRanges("chr7", IRanges(start=c(1000:1004, 1100), width=rep(1, 6)), strand="+") mg <- metaGene(features, reads, size=4, up=10) ``` **metaGeneMatrix** **Description** Supports parallel processing using `mclapply` in the ‘parallel’ package. To change the number of processors, use the argument ‘mc.cores’. 
**Usage** ```r metaGeneMatrix(features, reads, size = 50, up = 1000, down = up, debug = FALSE, ...) ``` **Arguments** - `features` A GRanges object representing a set of genomic coordinates. - `reads` A GRanges object representing a set of mapped reads. - `size` The size of the moving window. - `up` Distance upstream of each feature to align and histogram. Default: 1 kb. - `down` Distance downstream of each feature to align and histogram. Default: same as up. - `debug` If set to TRUE, provides additional print options. Default: FALSE - `...` Extra argument passed to `mclapply` **Value** Returns a vector representing the ’typical’ signal across genes of different length. **Author(s)** Charles G. Danko and Minho Chae --- **metaGene_nL** **Description** Supports parallel processing using `mclapply` in the ‘parallel’ package. To change the number of processors, use the argument ‘mc.cores’. **Usage** ```r metaGene_nL(features, reads, n_windows = 1000, debug = FALSE, ...) ``` Arguments - **features**: A GRanges object representing a set of genomic coordinates. - **reads**: A GRanges object representing a set of mapped reads. - **n_windows**: The number of windows to break genes into. - **debug**: If set to TRUE, provides additional print options. Default: FALSE - ... Extra argument passed to mclapply Value Returns a vector representing the 'typical' signal across genes of different length. Author(s) Charles G. Danko and Minho Chae pausingIndex Returns the pausing index for different genes. TODO: DESCRIBE THE PAUSING INDEX. Description Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, use the argument 'mc.cores'. Usage pausingIndex(features, reads, size = 50, up = 1000, down = 1000, UnMAQ = NULL, debug = FALSE, ...) Arguments - **features**: A GRanges object representing a set of genomic coordinates. - **reads**: A GRanges object representing a set of mapped reads. - **size**: The size of the moving window. 
- **up**: Distance upstream of each feature to align and histogram. - **down**: Distance downstream of each feature to align and histogram. Default: 1000. - **UnMAQ**: Data structure representing the coordinates of all un-mappable regions in the genome. - **debug**: If set to TRUE, provides additional print options. Default: FALSE - ... Extra argument passed to mclapply Value Returns a data.frame of the pausing indices for the input genes. Author(s) Charles G. Danko and Minho Chae. Examples ```r features <- GRanges("chr7", IRanges(2394474,2420377), strand="+") reads <- as(readGAlignments(system.file("extdata", "S0mR1.bam", package="groHMM")), "GRanges") ## Not run: # pi <- pausingIndex(features, reads) ``` **polymeraseWave** *Given GRO-seq data, identifies the location of the polymerase 'wave' in up- or down-regulated genes.* **Description** The model is a three-state hidden Markov model (HMM). States represent: (1) the 5' end of genes upstream of the transcription start site, (2) upregulated sequence, and (3) the 3' end of the gene through the polyadenylation site. **Usage** ```r polymeraseWave(reads1, reads2, genes, approxDist, size = 50, upstreamDist = 10000, TSmooth = NA, NonMap = NULL, prefix = NULL, emissionDistAssumption = "gamma", filterWindowSize = 10000, limitPCRDups = FALSE, returnVal = "simple", debug = TRUE) ``` **Arguments** - `reads1`: Mapped reads in time point 1. - `reads2`: Mapped reads in time point 2. - `genes`: A set of genes in which to search for the wave. - `approxDist`: The approximate position of the wave. Suggest using 2000 [bp/min] * time [min] for mammalian data. - `upstreamDist`: The amount of upstream sequence to include. Default: 10 kb. - `TSmooth`: Optionally, outlying windows are set to a maximum value over the inter-quantile interval, specified by TSmooth. Reasonable value: 20. Default: NA (for no smoothing). 
Users are encouraged to use this parameter ONLY in combination with the normal distribution assumptions. - `NonMap`: Optionally, un-mappable positions are treated as missing data. NonMap passes in the list() structure for un-mappable regions. - `prefix`: Optionally, writes out png images of each gene examined for a wave. 'Prefix' denotes the file prefix for image names written to disk. Users are encouraged to create a new directory and write in a full path. - `emissionDistAssumption`: Takes values "norm", "normExp", and "gamma". Specifies the functional form of the 'emission' distribution for states I and II (i.e. 5' of the gene, and inside of the wave). In our experience, "gamma" works best for highly-variable 'spikey' data, and "norm" works for smooth data. As a general rule of thumb, "gamma" is used for libraries made using the direct ligation method, and "norm" for circular ligation data. Default: "gamma". filterWindowSize Method returns 'quality' information for each gene to which a wave was fit. Included in these metrics are several that define a moving window. The moving window size is specified by filterWindowSize. Default: 10 kb. limitPCRDups If true, counts only 1 read at each position with >= 1 read. NOT recommended to set this to TRUE. Default: FALSE. returnVal Takes value "simple" (default) or "alldata". "simple" returns a data.frame with Pol II wave end positions. "alldata" returns all of the available data from each gene, including the full posterior distribution of the model after EM. debug If TRUE, prints error messages. Details The model computes differences in read counts between the two conditions. Differences are assumed to fit a functional form which can be specified by the user (using the emissionDistAssumption argument). 
Currently supported functional forms include a normal distribution (good for GRO-seq data prepared using the circular ligation protocol) and a gamma distribution (good for 'spikey' ligation-based GRO-seq data); a long-tailed normal+exponential distribution was implemented, but never deployed. Initial parameter estimates are based on initial assumptions of transcription rates taken from the literature. Subsequently all parameters are fit using Baum-Welch expectation maximization. Value Returns either a data.frame with Pol II wave end positions, or a list() structure with additional data, as specified by returnVal. Author(s) Charles G. Danko Examples genes <- GRanges("chr7", IRanges(2394474,2420377), strand="+", SYMBOL="CYP2W1", ID="54905") reads1 <- as(readGAlignments(system.file("extdata", "S0mR1.bam", package="groHMM")), "GRanges") reads2 <- as(readGAlignments(system.file("extdata", "S40mR1.bam", package="groHMM")), "GRanges") approxDist <- 2000*10 ## Not run: # pw <- polymeraseWave(reads1, reads2, genes, approxDist) **readBed** *readBed Returns a GenomicRanges object constructed from the specified bed file.* **Description** Bed file format is assumed to be either four columns (seqnames, start, end, strand) or six columns (seqnames, start, end, name, score, and strand). A three-column format is also possible when there is no strand information. **Usage** ```r readBed(file, ...) ``` **Arguments** - `file` Path to the input file. - `...` Extra argument passed to read.table **Details** Any additional arguments available to read.table can be specified. **Value** Returns GRanges object representing mapped reads. **Author(s)** Minho Chae and Charles G. Danko. --- **RgammaMLE** *RgammaMLE fits a gamma distribution to a specified data vector using maximum likelihood.* **Description** RgammaMLE fits a gamma distribution to a specified data vector using maximum likelihood. 
**Usage** ```r RgammaMLE(X) ``` **Arguments** - `X` A vector of observations, assumed to be real numbers in the interval [0, +Inf). **Value** Returns a list of parameters for the best-fit gamma distribution (shape and scale). Rnorm Rnorm fits a normal distribution to a specified data vector using maximum likelihood. Description Rnorm fits a normal distribution to a specified data vector using maximum likelihood. Usage Rnorm(X) Arguments X A vector of observations, assumed to be real numbers in the interval (-Inf, +Inf). Value Returns a list of parameters for the best-fit normal distribution (mean and variance). Author(s) Charles G. Danko Rnorm.exp Rnorm.exp fits a normal+exponential distribution to a specified data vector using maximum likelihood. Description Distribution function defined by: alpha*Normal(mean, variance) + (1-alpha)*Exponential(lambda). Usage Rnorm.exp(xi, wi = rep(1, NROW(xi)), guess = c(0.5, 0, 1, 1), tol = sqrt(.Machine$double.eps), maxit = 10000) Arguments xi A vector of observations, assumed to be real numbers in the interval (-Inf, +Inf). wi A vector of weights. Default: vector of repeating 1, indicating all observations are weighted equally. (Are these normalized internally?! Or do they have to be [0,1]?) guess Initial guess for parameters. Default: c(0.5, 0, 1, 1). tol Tolerance for convergence. Default: sqrt(.Machine$double.eps). maxit Maximum number of iterations. Default: 10,000. Details Fits nicely with data types that look normal overall, but have a long tail for positive values. Value Returns a list of parameters for the best-fit normal+exponential distribution (alpha, mean, variance, and lambda). Author(s) Charles G. Danko --- **runMetaGene** Runs metagene analysis for sense and antisense direction. Description Supports parallel processing using mclapply in the `parallel` package. To change the number of processors, set the option `mc.cores`. 
Usage ``` runMetaGene(features, reads, anchorType = "TSS", size = 100L, normCounts = 1L, up = 10000L, down = NULL, sampling = FALSE, nSampling = 1000L, samplingRatio = 0.1, ...) ``` Arguments - **features**: GRanges A GRanges object representing a set of genomic coordinates, i.e., set of genes. - **reads**: GRanges of reads. - **anchorType**: Either 'TSS' or 'TTS'. Metagene will be centered on the transcription start site (TSS) or transcription termination site (TTS). Default: TSS. - **size**: Numeric. The size of the moving window. Default: 100L. - **normCounts**: Numeric. Normalization vector such as average reads. Default: 1L. - **up**: Numeric. Distance upstream of each feature to align and histogram. Default: 10 kb. - **down**: Numeric. Distance downstream of each feature to align and histogram. If NULL, down is same as up. Default: NULL. - **sampling**: Logical. If TRUE, subsampling of Metagene is used. Default: FALSE. - **nSampling**: Numeric. Number of subsamplings. Default: 1000L. - **samplingRatio**: Numeric. Ratio of sampling for features. Default: 0.1. - **...**: Extra argument passed to mclapply. Value A list of integer-Rle for sense and antisense. Author(s) Minho Chae Examples ```r features <- GRanges("chr7", IRanges(start=1000:1001, width=rep(1,2)), strand=c("+", "-")) reads <- GRanges("chr7", IRanges(start=c(1000:1003, 1100:1101), width=rep(1, 6)), strand=rep(c("+","-"), 3)) ## Not run: # mg <- runMetaGene(features, reads, size=4, up=10) ``` **tlsDeming** A 'total least squares' implementation using Deming regression. **Description** A 'total least squares' implementation using Deming regression. **Usage** ```r tlsDeming(x, y, d = 1) ``` **Arguments** - `x` X values. - `y` Y values. - `d` Ratio of variances. Default: 1, for orthogonal regression. **Value** Parameters for the linear model. **Author(s)** Charles G. Danko **tlsLoess** A 'total least squares'-like hack for LOESS. Works by rotating points 45 degrees, fitting LOESS, and rotating back. 
**Description** A 'total least squares'-like hack for LOESS. Works by rotating points 45 degrees, fitting LOESS, and rotating back. **Usage** ```r tlsLoess(x, y, theta = -pi/4, span = 1) ``` **Arguments** - **x** X values. - **y** Y values. - **theta** Amount to rotate; sets the ratio of variances assumed by the hack. Default: \(-\pi/4\) radians (45 degrees), for orthogonal regression. - **span** The LOESS span parameter. Default: 1. **Value** List of input values and LOESS predictions. **Author(s)** Charles G. Danko --- **tlsSvd** A 'total least squares' implementation using singular value decomposition. --- **Description** A 'total least squares' implementation using singular value decomposition. **Usage** ```r tlsSvd(x, y) ``` **Arguments** - **x** X values. - **y** Y values. **Value** Parameters for the linear model \(Y = aX + e\). **Author(s)** Charles G. Danko windowAnalysis windowAnalysis Returns a vector of integers representing the counts of reads in a moving window. Description Supports parallel processing using mclapply in the 'parallel' package. To change the number of processors, set the option 'mc.cores'. Usage windowAnalysis(reads, strand = '*', windowSize = stepSize, stepSize = windowSize, chrom = NULL, limitPCRDups = FALSE, ...) Arguments reads GenomicRanges object representing the position of reads mapping in the genome. strand Takes values of '+', '-', or '*'. '*' denotes collapsing reads on both strands. Default: '*'. windowSize Size of the moving window. Either windowSize or stepSize must be specified. stepSize The number of bp moved with each step. chrom Chromosome for which to return data. Default: returns all available data. limitPCRDups If TRUE, counts only one read mapping to each start site. NOTE: assumes that all reads are the same length (don't use for paired-end data). Default: FALSE. ... Extra argument passed to mclapply. Value Returns a list object, each element of which represents a chromosome.
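The windowing scheme behind windowAnalysis is essentially a strided count of read start positions. A minimal illustrative sketch (plain Python, not the package's Rle-based R implementation; the function name `window_counts` is invented here):

```python
def window_counts(starts, chrom_len, window_size, step_size):
    """Count reads (by start position) in each moving window.

    Windows are [pos, pos + window_size), advanced by step_size,
    for as long as they fit entirely inside the chromosome.
    """
    counts = []
    pos = 0
    while pos + window_size <= chrom_len:
        counts.append(sum(pos <= s < pos + window_size for s in starts))
        pos += step_size
    return counts
```

With window_size equal to step_size the windows tile the chromosome without overlap, which mirrors windowAnalysis's mutual defaults for the two arguments.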
Author(s) Charles G. Danko and Minho Chae Examples S0mR1 <- as(readGAlignments(system.file("extdata", "S0mR1.bam", package="groHMM")), "GRanges") ## Not run: # Fp <- windowAnalysis(S0mR1, strand="+", windowSize=50) writeWiggle writes a wiggle track or BigWig file suitable for uploading to the UCSC genome browser. Description writeWiggle writes a wiggle track or BigWig file suitable for uploading to the UCSC genome browser. Usage ```r writeWiggle(reads, file, strand = "*", fileType = "wig", size = 50, normCounts = NULL, reverse = FALSE, seqinfo = NULL, track.type.line = FALSE, ...) ``` Arguments - `reads` GenomicRanges object representing the position of reads mapping in the genome. - `file` Specifies the filename for output. - `strand` Takes values of "+", "-", or "*". Writes a wiggle on the specified strand; "*" denotes collapsing reads on both strands. Default: "*". - `fileType` Takes values of "wig" or "BigWig". Default: "wig". - `size` Size of the moving window. - `normCounts` A normalization factor correcting for library size or other effects. For example, total mappable read counts might be a reasonable value. Default: NULL (no normalization). - `reverse` If set to TRUE, multiplies values by -1. Used for reversing GRO-seq data on the negative (-) strand. Default: FALSE. - `seqinfo` Seqinfo object for reads. Default: NULL. - `track.type.line` If set to TRUE, prints a header identifying the file as a wiggle. Necessary to upload a custom track to the UCSC genome browser. Default: FALSE. - `...` Extra argument passed to mclapply. Author(s) Minho Chae and Charles G. Danko Examples ```r S0mR1 <- as(readGAlignments(system.file("extdata", "S0mR1.bam", package="groHMM")), "GRanges") ## Not run: # writeWiggle(reads=S0mR1, file="S0mR1_Plus.wig", fileType="wig", # strand="*", reverse=FALSE) ``` Index averagePlot, breakTranscriptsOnGenes, combineTranscripts, countMappableReadsInInterval, detectTranscripts, evaluateHMMInAnnotations, expressedGenes, getCores, getTxDensity, groHMM (groHMM-package), limitToXkb, makeConsensusAnnotations, metaGene, metaGene_nL, metaGeneMatrix, pausingIndex, polymeraseWave, readBed, RgammaMLE, Rnorm, Rnorm.exp, runMetaGene, tlsDeming, tlsLoess, tlsSvd, windowAnalysis, writeWiggle
Types Semantics and Application to Program Verification Antoine Miné École normale supérieure, Paris year 2013–2014 Course 3 5 March 2014 Introduction Purposes of typing: - *avoid errors* during the execution of programs by *restricting* them - help compile programs efficiently - document properties of programs In this course, we look at typing from a *formal* and *semantic* point of view: what semantics can we give to types and typing? What semantic information is guaranteed by types? We don't discuss: typing in language design and implementation; type theory as an alternative to set theory; relations between type theory and proof theory. **Type**: set of values with a specific machine representation (often, distinct types denote non-overlapping value sets, but this is not always the case: e.g., short/int/long in C, or subtyping in Java and C++) Variables are assigned a type that defines their possible values. **static vs. dynamic typing:** - **static**: the type of each variable is known at compile time (C, Java, OCaml) - **dynamic**: the type of each variable is discovered during the execution and may change (Python, Javascript) **strongly vs. loosely typed languages:** - **loose**: typing does not prevent invalid value construction and use (e.g., viewing an integer as a pointer in C, C++, assembly) - **strong**: all type errors are detected (Java, OCaml, Python, Javascript) **static strong typing**: well-typed programs cannot go wrong [Milner78] **type checking vs. type inference**: - **checking**: checks the consistency of variable use according to user declarations (C, Java) - **inference**: discovers (almost) automatically a (most general) type consistent with the use (OCaml, except modules...)
**Goal:** strong static typing for imperative programs Classic workflow to introduce types: - **design a type system** set of logical rules stating whether a program is “well typed” - **prove the soundness with respect to the (operational) semantics** well-typed programs cannot go wrong - **design algorithms to check** typing from user-given type annotations or to **infer** type annotations that make the program well typed Less classic view: - **design typing by abstraction of the semantics** sound by construction (static analysis) Type systems Simple imperative language Expressions: \( \textit{expr} ::= \textit{X} \quad \text{(variable)} \) \[ \quad | \quad \textit{c} \quad \text{(constant)} \] \[ \quad | \quad \Diamond \textit{expr} \quad \text{(unary operation)} \] \[ \quad | \quad \textit{expr} \Diamond \textit{expr} \quad \text{(binary operation)} \] Statements: \( \textit{stat} ::= \textit{skip} \quad \text{(do nothing)} \) \[ \quad | \quad \textit{X} \leftarrow \textit{expr} \quad \text{(assignment)} \] \[ \quad | \quad \textit{stat}; \textit{stat} \quad \text{(sequence)} \] \[ \quad | \quad \textbf{if} \ \textit{expr} \ \textbf{then} \ \textit{stat} \ \textbf{else} \ \textit{stat} \quad \text{(conditional)} \] \[ \quad | \quad \textbf{while} \ \textit{expr} \ \textbf{do} \ \textit{stat} \quad \text{(loop)} \] \[ \quad | \quad \textbf{local} \ \textit{X} \ \textbf{in} \ \textit{stat} \quad \text{(local variable)} \] - constants: \( \textit{c} \in \mathbb{I} \overset{\text{def}}{=} \mathbb{Z} \cup \mathbb{B} \) \text{(integers and booleans)} - operators: \( \Diamond \in \{+,-,\times,/,<,\leq,\neg,\land,\lor,=,\neq\} \) - variables: \( \textit{X} \in \mathbb{V} \) \text{(\( \mathbb{V} \): set of all program variables)} variables are now local, with limited scope and must be declared \( \text{(no type information...yet!)} \) e.g.: \texttt{local Y in (local X in (X \leftarrow 0; \textbf{while} X < Y \textbf{ do} X \leftarrow X + 1); Y \leftarrow 2)} Reminders: 
deductive systems **Deductive system:** set of axioms and logical rules to derive theorems; defines what is provable in a formal way **Judgments:** \( \Gamma \vdash \text{Prop} \): a fact, meaning "under hypotheses \( \Gamma \), we can prove Prop" **Rules:** rule: \( \frac{J_1 \cdots J_n \text{ (hypotheses)}}{J \text{ (conclusion)}} \) axiom: \( J \) (fact) **Proof tree:** complete application of rules from axioms to conclusion; example in propositional calculus: \[ \dfrac{\dfrac{\dfrac{\vdots}{\Gamma, A \vdash B} \qquad \dfrac{\vdots}{\Gamma, A \vdash C}}{\Gamma, A \vdash B \wedge C}}{\Gamma \vdash A \rightarrow (B \wedge C)} \] Typing judgments Types \[ \text{type} ::= \text{int} \ (\text{integers}) \mid \text{bool} \ (\text{booleans}) \] Hypotheses \( \Gamma \): set of type assignments \( X : t \), with \( X \in \mathbb{V} \), \( t \in \text{type} \) (meaning: variable \( X \) has type \( t \)) Judgments: - \( \Gamma \vdash \text{stat} \): given the type assignments \( \Gamma \), \( \text{stat} \) is well-typed - \( \Gamma \vdash \text{expr} : \text{type} \): given the types of the variables in \( \Gamma \), \( \text{expr} \) is well-typed and has type \( \text{type} \) Expression typing \[ \Gamma \vdash c : \text{int} \quad (c \in \mathbb{Z}) \qquad \Gamma \vdash c : \text{bool} \quad (c \in \mathbb{B}) \qquad \Gamma \vdash X : t \quad ((X:t) \in \Gamma) \] \[ \frac{\Gamma \vdash e : \text{int}}{\Gamma \vdash -e : \text{int}} \qquad \frac{\Gamma \vdash e : \text{bool}}{\Gamma \vdash \neg e : \text{bool}} \] \[ \frac{\Gamma \vdash e_1 : \text{int} \quad \Gamma \vdash e_2 : \text{int}}{\Gamma \vdash e_1 \diamond e_2 : \text{int}} \quad (\diamond \in \{+, -, \times, /\}) \qquad \frac{\Gamma \vdash e_1 : \text{int} \quad \Gamma \vdash e_2 : \text{int}}{\Gamma \vdash e_1 \diamond e_2 : \text{bool}} \quad (\diamond \in \{=, \neq, <, \leq\}) \] \[ \frac{\Gamma \vdash e_1 : \text{bool} \quad \Gamma \vdash e_2 : \text{bool}}{\Gamma \vdash e_1 \diamond e_2 : \text{bool}} \quad (\diamond \in \{=, \neq, \wedge, \vee\}) \] Note: the syntax of an expression uniquely identifies a rule to apply, up to the choice of types for \( e_1 \) and \( e_2 \) in the rules for \( = \) and \( \neq \). Statement typing \[ \Gamma \vdash \text{skip} \qquad \frac{\Gamma \vdash e : t}{\Gamma \vdash X \leftarrow e} \quad ((X : t) \in \Gamma) \qquad \frac{\Gamma \vdash s_1 \quad \Gamma \vdash s_2}{\Gamma \vdash s_1 ; s_2} \] \[ \frac{\Gamma \vdash e : \text{bool} \quad \Gamma \vdash s_1 \quad \Gamma \vdash s_2}{\Gamma \vdash \text{if } e \text{ then } s_1 \text{ else } s_2} \qquad \frac{\Gamma \vdash e : \text{bool} \quad \Gamma \vdash s}{\Gamma \vdash \text{while } e \text{ do } s} \qquad \frac{\Gamma \cup \{(X : t)\} \vdash s}{\Gamma \vdash \text{local } X \text{ in } s} \] **Definition:** \( s \) is well-typed if we can prove \( \emptyset \vdash s \) **Note:** the syntax of a statement uniquely identifies a rule to apply, up to the choice of \( t \) in the rule for \( \text{local } X \text{ in } s \) ### Soundness of typing #### Types and errors **Goal:** well-typed programs "cannot go wrong" The operational semantics has several kinds of errors: 1. **Type mismatches** in operators \((1 \lor 2, \text{true} + 2)\) 2. **Value errors** (divide or modulo by 0, use of uninitialized variables) Typing seeks to prevent statically only the first kind of error. Value errors can be prevented with static analyses; this is much more complex and costly, and we will discuss it later in the course. Typing aims at a "sweet spot": detect at compile time all errors of a certain kind. **Soundness:** well-typed programs have no type mismatch error. It is proved based on an operational semantics of the program.
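The expression and statement typing rules can be transcribed almost mechanically into a checker. The following is an illustrative Python sketch, not part of the course material: the tuple-based AST encoding, the use of None in the role of the type error \( \Omega_t \), and the ASCII spellings of the operators are all assumptions of this example. Variable types are supplied in an environment playing the role of \( \Gamma \):

```python
# Expressions: ('const', v) | ('var', name) | ('unop', op, e) | ('binop', op, e1, e2)
# type_expr returns 'int', 'bool', or None (None plays the role of a type error).
ARITH, CMP, LOGIC = {'+', '-', '*', '/'}, {'<', '<='}, {'and', 'or'}

def type_expr(gamma, e):
    tag = e[0]
    if tag == 'const':
        return 'bool' if isinstance(e[1], bool) else 'int'
    if tag == 'var':
        return gamma.get(e[1])          # None if the variable is undeclared
    if tag == 'unop':
        op, t = e[1], type_expr(gamma, e[2])
        if op == '-' and t == 'int':
            return 'int'
        if op == 'not' and t == 'bool':
            return 'bool'
        return None
    # binop: both operands must have the same, well-defined type
    op, t1, t2 = e[1], type_expr(gamma, e[2]), type_expr(gamma, e[3])
    if t1 is None or t1 != t2:
        return None
    if op in ARITH and t1 == 'int':
        return 'int'
    if op in CMP and t1 == 'int':
        return 'bool'
    if op in LOGIC and t1 == 'bool':
        return 'bool'
    if op in {'==', '!='}:              # equality works at both types
        return 'bool'
    return None

def type_stat(gamma, s):
    tag = s[0]
    if tag == 'skip':
        return True
    if tag == 'assign':                 # ('assign', x, e): e must match x's type
        t = type_expr(gamma, s[2])
        return t is not None and t == gamma.get(s[1])
    if tag == 'seq':
        return type_stat(gamma, s[1]) and type_stat(gamma, s[2])
    if tag == 'if':
        return (type_expr(gamma, s[1]) == 'bool'
                and type_stat(gamma, s[2]) and type_stat(gamma, s[3]))
    if tag == 'while':
        return type_expr(gamma, s[1]) == 'bool' and type_stat(gamma, s[2])
    if tag == 'local':                  # ('local', x, t, body): declared type, as below
        return type_stat({**gamma, s[1]: s[2]}, s[3])
    return False
```

The 'local' case already takes a declared type, anticipating the annotated declarations introduced for type checking; without annotations, the checker would have to guess \( t \), which is exactly the indeterminacy discussed next.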
Soundness of typing Reminder: denotational semantics of expressions \[ E[\text{expr}] : \mathcal{E} \rightarrow \mathcal{P}(\mathbb{I} \cup \{\Omega_t, \Omega_v\}) \quad \mathcal{E} \overset{\text{def}}{=} \mathbb{V} \rightarrow (\mathbb{I} \cup \{\omega\}) \] \begin{align*} E[c] \rho & \overset{\text{def}}{=} \{c\} \\ E[[c_1, c_2]] \rho & \overset{\text{def}}{=} \{ c \in \mathbb{Z} \mid c_1 \leq c \leq c_2 \} \\ E[X] \rho & \overset{\text{def}}{=} \{ \rho(X) \mid \text{if } \rho(X) \in \mathbb{I} \} \cup \{ \Omega_v \mid \text{if } \rho(X) = \omega \} \\ E[-e] \rho & \overset{\text{def}}{=} \{-v \mid v \in (E[e] \rho) \cap \mathbb{Z} \} \cup \\ & \quad \{ \Omega \mid \Omega \in (E[e] \rho) \cap \{\Omega_t, \Omega_v\} \} \cup \\ & \quad \{ \Omega_t \mid \text{if } (E[e] \rho) \cap \mathbb{B} \neq \emptyset \} \\ E[e_1/e_2] \rho & \overset{\text{def}}{=} \{ v_1/v_2 \mid v_1 \in (E[e_1] \rho) \cap \mathbb{Z}, v_2 \in (E[e_2] \rho) \cap \mathbb{Z} \} \cup \\ & \quad \{ \Omega \mid \Omega \in ((E[e_1] \rho) \cup (E[e_2] \rho)) \cap \{\Omega_t, \Omega_v\} \} \cup \\ & \quad \{ \Omega_t \mid \text{if } ((E[e_1] \rho) \cup (E[e_2] \rho)) \cap \mathbb{B} \neq \emptyset \} \cup \\ & \quad \{ \Omega_v \mid \text{if } 0 \in E[e_2] \rho \} \end{align*} - \( \omega \) denotes the special "non-initialized" value - the special values \( \Omega_t \) and \( \Omega_v \) denote type and value errors - we show here how to mix non-determinism and errors: errors \( \Omega \in \{\Omega_t, \Omega_v\} \) from sub-expressions are propagated; new type errors \( \Omega_t \) and value errors \( \Omega_v \) may be generated; we return a set of values and errors Soundness of typing Reminder: operational semantics of statements \[ \tau[^{l_1}\text{stat}^{l_2}] \subseteq \Sigma^2 \] where \( \Sigma \overset{\text{def}}{=} (\mathcal{L} \times \mathcal{E}) \cup \{\Omega_t, \Omega_v, \omega\} \) \[ \tau[^{l_1}\text{skip}^{l_2}] 
\overset{\text{def}}{=} \{(l_1, \rho) \rightarrow (l_2, \rho) \mid \rho \in \mathcal{E}\} \] \[ \tau[^{l_1}X \leftarrow e^{l_2}] \overset{\text{def}}{=} \{ (l_1, \rho) \rightarrow (l_2, \rho[X \mapsto v]) \mid v \in (E[e] \rho) \cap \mathbb{I} \} \cup \{ (l_1, \rho) \rightarrow \Omega \mid \Omega \in (E[e] \rho) \cap \{\Omega_t, \Omega_v\} \} \] \[ \tau[^{l_1}\text{while}^{l_2}e \text{ do }^{l_3}s^{l_4}] \overset{\text{def}}{=} \{ (l_1, \rho) \rightarrow (l_2, \rho) \mid \rho \in \mathcal{E} \} \cup \{ (l_2, \rho) \rightarrow (l_3, \rho) \mid \text{true} \in E[e] \rho \} \cup \{ (l_2, \rho) \rightarrow (l_4, \rho) \mid \text{false} \in E[e] \rho \} \cup \{ (l_2, \rho) \rightarrow \Omega_t \mid (E[e] \rho) \cap \mathbb{Z} \neq \emptyset \} \cup \{ (l_2, \rho) \rightarrow \Omega \mid \Omega \in (E[e] \rho) \cap \{\Omega_t, \Omega_v\} \} \cup \tau[^{l_3}s^{l_2}] \] (and similarly for if e then s_1 else s_2) \[ \tau[^{l_1}s_1;^{l_2}s_2^{l_3}] \overset{\text{def}}{=} \tau[^{l_1}s_1^{l_2}] \cup \tau[^{l_2}s_2^{l_3}] \] \[ \tau[^{l_1}\text{local } X \text{ in } s^{l_2}] \overset{\text{def}}{=} \{ (l_1, \rho) \rightarrow (l_2, \rho'[X \mapsto \rho(X)]) \mid (l_1, \rho[X \mapsto \omega]) \rightarrow (l_2, \rho') \in \tau[^{l_1}s^{l_2}] \} \cup \{ (l_1, \rho) \rightarrow \Omega \mid (l_1, \rho[X \mapsto \omega]) \rightarrow \Omega \in \tau[^{l_1}s^{l_2}], \Omega \in \{\Omega_t, \Omega_v\} \} \] - when entering its scope, a local variable is assigned the "non-initialized" value \( \omega \) - at the end of its scope, its former value is restored - the special \( \Omega_t, \Omega_v \) states denote errors (blocking states) - errors \( \Omega \) from expressions are propagated; new type errors \( \Omega_t \) are generated Type soundness Operational semantics: maximal execution traces \[ t[s] \overset{\text{def}}{=} \{(\sigma_0, \ldots, \sigma_n) \mid n \geq 0, \sigma_0 \in l, \sigma_n \in B, \forall i < n: \sigma_i \rightarrow \sigma_{i+1}\} \cup \{(\sigma_0, \ldots) \mid \sigma_0 \in 
l, \forall i \in \mathbb{N}: \sigma_i \rightarrow \sigma_{i+1}\} \] Type soundness \[ s \text{ is well-typed} \implies \forall (\sigma_0, \ldots, \sigma_n) \in t[s]: \sigma_n \neq \Omega_t \] (well-typed programs never stop on a type error at run-time) Type checking **Problem:** how do we prove that a program is well typed? **Bottom-up reasoning:** construct a proof tree ending in $\emptyset \vdash s$ by applying rules "in reverse" - given a conclusion, there is generally only one rule to apply - the only rule that requires imagination is: $$\frac{\Gamma \cup \{(X : t)\} \vdash s}{\Gamma \vdash \text{local } X \text{ in } s}$$ $t$ is a free variable in the hypothesis $\implies$ we need to guess a good $t$ that makes the proof work - to type $\Gamma \vdash e_1 = e_2 : \text{bool}$, we also have to choose between $\Gamma \vdash e_1 : \text{bool}$ and $\Gamma \vdash e_1 : \text{int}$ **Solution:** ask the programmer to **add type information** to all variable declarations; we change the syntax of declaration statements into: \[ \text{stat ::= local } X : \text{type in stat} \\ \mid \cdots \] The typing rule for local variable declarations becomes **deterministic**: \[ \frac{\Gamma \cup \{(X : t)\} \vdash s}{\Gamma \vdash \text{local } X : t \text{ in } s} \] Given variable types, we assign a single type to each expression (this solves the indeterminacy in the typing of \(e_1 = e_2\)) **Algorithm:** propagation by induction on the syntax \[ \tau_e : ((\mathbb{V} \rightarrow \text{type}) \times \text{expr}) \rightarrow (\text{type} \cup \{\Omega_t\}) \] \[ \tau_e(\Gamma, c) \overset{\text{def}}{=} \text{int} \quad \text{if } c \in \mathbb{Z} \] \[ \tau_e(\Gamma, c) \overset{\text{def}}{=} \text{bool} \quad \text{if } c \in \mathbb{B} \] \[ \tau_e(\Gamma, X) \overset{\text{def}}{=} \Gamma(X) \] \[ \tau_e(\Gamma, -e) \overset{\text{def}}{=} \text{int} \quad \text{if } \tau_e(\Gamma, e) = \text{int} \] \[ \tau_e(\Gamma, \neg e) \overset{\text{def}}{=} \text{bool} \quad \text{if } 
\tau_e(\Gamma, e) = \text{bool} \] \[ \tau_e(\Gamma, e_1 \diamond e_2) \overset{\text{def}}{=} \text{int} \quad \text{if } \tau_e(\Gamma, e_1) = \tau_e(\Gamma, e_2) = \text{int}, \diamond \in \{+, -, \times, /\} \] \[ \tau_e(\Gamma, e_1 \diamond e_2) \overset{\text{def}}{=} \text{bool} \quad \text{if } \tau_e(\Gamma, e_1) = \tau_e(\Gamma, e_2) = \text{int}, \diamond \in \{=, \neq, <, \leq\} \] \[ \tau_e(\Gamma, e_1 \diamond e_2) \overset{\text{def}}{=} \text{bool} \quad \text{if } \tau_e(\Gamma, e_1) = \tau_e(\Gamma, e_2) = \text{bool}, \diamond \in \{=, \neq, \land, \lor\} \] \[ \tau_e(\Gamma, e) \overset{\text{def}}{=} \Omega_t \quad \text{otherwise} \] \(\Omega_t\) indicates a type error Type checking Type propagation in statements Type checking is performed by induction on the syntax of statements: \[ \tau_s : ((\mathbb{V} \to \text{type}) \times \text{stat}) \to \mathbb{B} \] \[ \tau_s(\Gamma, \text{skip}) \overset{\text{def}}{=} \text{true} \] \[ \tau_s(\Gamma, (s_1 ; s_2)) \overset{\text{def}}{=} \tau_s(\Gamma, s_1) \land \tau_s(\Gamma, s_2) \] \[ \tau_s(\Gamma, X \leftarrow e) \overset{\text{def}}{=} (\tau_e(\Gamma, e) = \Gamma(X)) \] \[ \tau_s(\Gamma, \text{if } e \text{ then } s_1 \text{ else } s_2) \overset{\text{def}}{=} \tau_s(\Gamma, s_1) \land \tau_s(\Gamma, s_2) \land \tau_e(\Gamma, e) = \text{bool} \] \[ \tau_s(\Gamma, \text{while } e \text{ do } s) \overset{\text{def}}{=} \tau_s(\Gamma, s) \land \tau_e(\Gamma, e) = \text{bool} \] \[ \tau_s(\Gamma, \text{local } X : t \text{ in } s) \overset{\text{def}}{=} \tau_s(\Gamma[X \mapsto t], s) \] (in particular, \( \tau_s(\Gamma, s) = \text{false} \) if \( \tau_e(\Gamma, e) = \Omega_t \) for some expression \( e \) inside \( s \)) **Theorem** \[ \tau_s(\emptyset, s) = \text{true} \iff \emptyset \vdash s \text{ is provable} \] - we have an algorithm to check if a program is well-typed - the algorithm also assigns statically a type to every sub-expression (useful to compile expressions efficiently, without dynamic
type checks) Type inference **Problem:** can we avoid specifying types in the program? **Solution:** automatic type inference - each variable $X \in \mathbb{V}$ is assigned a type variable $t_X$ - we generate a set of type constraints ensuring that the program is well typed - we solve the constraint system to infer a type value for each type variable **Type constraints:** we need equalities on types and type variables \[ \begin{align*} type\ const &::= type\ expr = type\ expr \quad \text{(type equality)} \\ type\ expr &::= \text{int} \quad \text{(integers)} \\ &| \quad \text{bool} \quad \text{(booleans)} \\ &| \quad t_X \quad \text{(type variable for } X \in \mathbb{V}) \end{align*} \] Generating type constraints for expressions **Principle:** similar to type propagation \[ \tau_e : \text{expr} \rightarrow (\text{type expr} \times \mathcal{P}(\text{type const})) \] \[ \begin{align*} \tau_e(c) & \overset{\text{def}}{=} (\text{int}, \emptyset) \quad \text{if } c \in \mathbb{Z} \\ \tau_e(c) & \overset{\text{def}}{=} (\text{bool}, \emptyset) \quad \text{if } c \in \mathbb{B} \\ \tau_e(X) & \overset{\text{def}}{=} (t_X, \emptyset) \\ \tau_e(-e_1) & \overset{\text{def}}{=} (\text{int}, C_1 \cup \{t_1 = \text{int}\}) \\ \tau_e(\neg e_1) & \overset{\text{def}}{=} (\text{bool}, C_1 \cup \{t_1 = \text{bool}\}) \\ \tau_e(e_1 \diamond e_2) & \overset{\text{def}}{=} (\text{int}, C_1 \cup C_2 \cup \{t_1 = \text{int}, t_2 = \text{int}\}) \quad \text{if } \diamond \in \{+, -, \times, /\} \\ \tau_e(e_1 \diamond e_2) & \overset{\text{def}}{=} (\text{bool}, C_1 \cup C_2 \cup \{t_1 = \text{int}, t_2 = \text{int}\}) \quad \text{if } \diamond \in \{<, \le\} \\ \tau_e(e_1 \diamond e_2) & \overset{\text{def}}{=} (\text{bool}, C_1 \cup C_2 \cup \{t_1 = \text{bool}, t_2 = \text{bool}\}) \quad \text{if } \diamond \in \{\land, \lor\} \\ \tau_e(e_1 \diamond e_2) & \overset{\text{def}}{=} (\text{bool}, C_1 \cup C_2 \cup \{t_1 = t_2\}) \quad \text{if } \diamond \in \{=, \ne\} \end{align*} \] where
\((t_1, C_1) = \tau_e(e_1)\) and \((t_2, C_2) = \tau_e(e_2)\) - we return the type of the expression (possibly a type variable) and a set of constraints to satisfy to ensure it is well typed - no type environment is needed: variable \(X\) has symbolic type \(t_X\) - \(e_1 = e_2\) and \(e_1 \neq e_2\) reduce to type equality Generating type constraints for statements \[ \tau_s : \text{stat} \rightarrow \mathcal{P}(\text{type const}) \] \[ \begin{align*} \tau_s(\text{skip}) & \overset{\text{def}}{=} \emptyset \\ \tau_s(s_1; s_2) & \overset{\text{def}}{=} \tau_s(s_1) \cup \tau_s(s_2) \\ \tau_s(X \leftarrow e) & \overset{\text{def}}{=} C \cup \{t_X = t\} \\ \tau_s(\text{if } e \text{ then } s_1 \text{ else } s_2) & \overset{\text{def}}{=} \tau_s(s_1) \cup \tau_s(s_2) \cup C \cup \{t = \text{bool}\} \\ \tau_s(\text{while } e \text{ do } s) & \overset{\text{def}}{=} \tau_s(s) \cup C \cup \{t = \text{bool}\} \\ \tau_s(\text{local } X \text{ in } s) & \overset{\text{def}}{=} \tau_s(s) \\ \end{align*} \] where \((t, C) \overset{\text{def}}{=} \tau_e(e)\) - we return a set of constraints to satisfy to ensure the statement is well typed - for simplicity, scoping in \textbf{local } X \textbf{ in } s is not handled \[\implies\] we assign a single type to all local variables of the same name Solving type constraints \( \tau_s(s) \) is a set of equalities between type variables and the constants \texttt{int}, \texttt{bool} **Solving algorithm:** compute equivalence classes by unification consider \( T = \{ \texttt{int}, \texttt{bool} \} \cup \{ t_X \mid X \in \mathbb{V} \} \) - start with the disjoint equivalence classes \( \{ \{ t \} \mid t \in T \} \) - for each equality \( (t_1 = t_2) \in \tau_s(s) \), merge the classes of \( t_1 \) and \( t_2 \) (with a union-find data structure: \( O(|\tau_s(s)| \times \alpha(|T|)) \) time cost) - if \texttt{int} and \texttt{bool} end up in the same equivalence class, the program is not typable; otherwise, there exist type assignments \( \Gamma \in \mathbb{V} \rightarrow
\text{type} \) such that the program is typable Solving type constraints If the program is typable, we end up with several equivalence classes: - the class containing `int` gives the set of integer variables - the class containing `bool` gives the set of boolean variables - other classes correspond to “polymorphic” variables e.g. `local X in if X = X then ···` such classes can be assigned either type `bool` or `int` however, we can prove that these variables are in fact never initialized $\implies$ polymorphism is not useful in this language Types as semantic abstraction Type semantics We return to our simple imperative language: \[ \begin{align*} \text{expr} & ::= \quad X \\ & \mid c \\ & \mid [c_1, c_2] \\ & \mid \diamond \text{expr} \\ & \mid \text{expr} \diamond \text{expr} \\ \text{stat} & ::= \quad \text{skip} \\ & \mid X \leftarrow \text{expr} \\ & \mid \text{stat}; \text{stat} \\ & \mid \text{if } \text{expr} \text{ then } \text{stat} \text{ else } \text{stat} \\ & \mid \text{while } \text{expr} \text{ do } \text{stat} \\ & \mid \text{local } X \text{ in } \text{stat} \end{align*} \] Principle: derive typing from the semantics - view types as sets of values - modify the non-deterministic denotational semantics to reason on types instead of sets of values (abstraction) \[\Rightarrow\] the semantics expresses the absence of dynamic type error (\(\Omega_t\) never occurs in any computation) - the semantics on types is computable and always terminates \[\Rightarrow\] we have a static analysis **Types \(\mathbb{I}^\#\):** representative subsets of $\mathbb{I} \overset{\text{def}}{=} \mathbb{Z} \cup \mathbb{B} \cup \{\Omega_t, \Omega_v, \omega\}$: - we distinguish integers, booleans, and type errors $\Omega_t$ - but we do not distinguish value errors $\Omega_v$ nor non-initialization $\omega$ from valid values - a type in $\mathbb{I}^\#$ over-approximates a set of values in $\mathcal{P}(\mathbb{I})$ $\implies$ every subset of $\mathbb{I}$ must have an over-approximation in $\mathbb{I}^\#$ - $\mathbb{I}^\#$
should be closed under $\cap$ $\implies$ every $I \subseteq \mathbb{I}$ has a best over-approximation: $\alpha(I) \overset{\text{def}}{=} \cap\{ t \in \mathbb{I}^\# \mid I \subseteq t \}$ We define a **finite lattice** $\mathbb{I}^\# \overset{\text{def}}{=} \{\text{int}^\#, \text{bool}^\#, \text{all}^\#, \bot, \top\}$ where - $\text{int}^\# \overset{\text{def}}{=} \mathbb{Z} \cup \{\Omega_v, \omega\}$ - $\text{bool}^\# \overset{\text{def}}{=} \mathbb{B} \cup \{\Omega_v, \omega\}$ - $\text{all}^\# \overset{\text{def}}{=} \mathbb{Z} \cup \mathbb{B} \cup \{\Omega_v, \omega\}$ (no information, no type error) - $\bot \overset{\text{def}}{=} \{\Omega_v, \omega\}$ (value error, non-initialization) - $\top \overset{\text{def}}{=} \mathbb{Z} \cup \mathbb{B} \cup \{\Omega_t, \Omega_v, \omega\}$ (no information, type error) $\implies (\mathbb{I}^\#, \subseteq, \cup, \cap, \bot, \top)$ forms a complete lattice Abstract denotational semantics of expressions \[ E^\#[\text{expr}] : \mathcal{E}^\# \rightarrow \mathbb{I}^\# \quad \text{where} \quad \mathcal{E}^\# \overset{\text{def}}{=} \mathbb{V} \rightarrow \mathbb{I}^\# \] \[ \begin{align*} E^\#[c]\,\rho & \overset{\text{def}}{=} \text{int}^\# & \text{if } c \in \mathbb{Z} \\ E^\#[c]\,\rho & \overset{\text{def}}{=} \text{bool}^\# & \text{if } c \in \mathbb{B} \\ E^\#[[c_1, c_2]]\,\rho & \overset{\text{def}}{=} \text{int}^\# & \text{if } c_1 \leq c_2 \\ E^\#[[c_1, c_2]]\,\rho & \overset{\text{def}}{=} \bot & \text{if } c_1 > c_2 \\ E^\#[X]\,\rho & \overset{\text{def}}{=} \rho(X) \\ E^\#[\diamond\, e]\,\rho & \overset{\text{def}}{=} \diamond^\#\,(E^\#[e]\,\rho) \\ E^\#[e_1 \diamond e_2]\,\rho & \overset{\text{def}}{=} (E^\#[e_1]\,\rho) \diamond^\# (E^\#[e_2]\,\rho) \end{align*} \] - an abstract environment \( \rho \in \mathcal{E}^\# \) assigns a type to each variable - we return \( \bot \) when using a non-initialized variable (\( \rho(X) = \bot \)) or when the expression has no value (\( [c_1, c_2] \) where \( c_1 > c_2 \)) - we use abstract unary operators \( \diamond^\# : \mathbb{I}^\# \rightarrow \mathbb{I}^\# \) and abstract binary operators \( \diamond^\# : (\mathbb{I}^\# \times \mathbb{I}^\#) \rightarrow \mathbb{I}^\# \)
(defined in the next slide) The abstract operators $\diamond^\#$ are defined as: $$ \begin{align*} -^\#\, x & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \\ \text{int}^\# & \text{if } x = \text{int}^\# \\ \top & \text{if } x \in \{\text{bool}^\#, \text{all}^\#, \top\} \end{cases} & \neg^\#\, x & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \\ \text{bool}^\# & \text{if } x = \text{bool}^\# \\ \top & \text{if } x \in \{\text{int}^\#, \text{all}^\#, \top\} \end{cases} \end{align*} $$ $$ \begin{align*} x +^\# y & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \lor y = \bot \\ \text{int}^\# & \text{if } x = y = \text{int}^\# \\ \top & \text{otherwise} \end{cases} & x \lor^\# y & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \lor y = \bot \\ \text{bool}^\# & \text{if } x = y = \text{bool}^\# \\ \top & \text{otherwise} \end{cases} \end{align*} $$ $$ \begin{align*} x <^\# y & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \lor y = \bot \\ \text{bool}^\# & \text{if } x = y = \text{int}^\# \\ \top & \text{otherwise} \end{cases} & x =^\# y & \overset{\text{def}}{=} \begin{cases} \bot & \text{if } x = \bot \lor y = \bot \\ \text{bool}^\# & \text{if } x = y \in \{\text{int}^\#, \text{bool}^\#\} \\ \top & \text{otherwise} \end{cases} \end{align*} $$ and the other operators are similar: binary $-^\# \overset{\text{def}}{=} \times^\# \overset{\text{def}}{=} /^\# \overset{\text{def}}{=} +^\#$, $\land^\# \overset{\text{def}}{=} \lor^\#$, $\le^\# \overset{\text{def}}{=} <^\#$, and $\ne^\# \overset{\text{def}}{=} =^\#$ - the operators are strict (return $\bot$ if one argument is $\bot$) - the operators propagate type errors (return $\top$ if one argument is $\top$) - the operators create new type errors (return $\top$ when an argument has the wrong type) Abstract denotational semantics of statements We consider the
complete lattice \((\mathbb{V} \rightarrow \mathbb{I}^\#, \subseteq, \cup, \cap, \bot, \top)\) (point-wise lifting) \[ S^\#[\text{stat}] : \mathcal{E}^\# \rightarrow \mathcal{E}^\# \quad \text{where} \quad \mathcal{E}^\# \stackrel{\text{def}}{=} \mathbb{V} \rightarrow \mathbb{I}^\# \] \[ S^\#[\text{skip}]\,\rho \overset{\text{def}}{=} \rho \] \[ S^\#[s_1; s_2] \overset{\text{def}}{=} S^\#[s_2] \circ S^\#[s_1] \] \[ S^\#[X \leftarrow e]\,\rho \overset{\text{def}}{=} \begin{cases} \top & \text{if } \rho = \top \lor E^\#[e]\,\rho = \top \\ \bot & \text{if } E^\#[e]\,\rho = \bot \\ \rho[X \mapsto E^\#[e]\,\rho] & \text{otherwise} \end{cases} \] - the possibility of a type error is denoted by \(\top\) and is propagated (we never construct \(\rho\) where \(\rho(X) = \top\) and \(\rho(Y) \neq \top\)) - using a non-initialized variable results in \(\bot\) (we can have \(\rho(X) = \bot\) and \(\rho(Y) \neq \bot\), if \(X\) is not initialized but \(Y\) is; however, \(X \leftarrow X + 1\) will then output the environment \(\bot\), in which \(Y\) also maps to \(\bot\)) Abstract denotational semantics of statements \[ S^\#[\text{local } X \text{ in } s]\,\rho \overset{\text{def}}{=} \begin{cases} \top & \text{if } \rho = \top \\ S^\#[s](\rho[X \mapsto \bot]) & \text{otherwise} \end{cases} \] \[ S^\#[\text{if } e \text{ then } s_1 \text{ else } s_2]\,\rho \overset{\text{def}}{=} \begin{cases} \top & \text{if } \rho = \top \lor E^\#[e]\,\rho \notin \{\text{bool}^\#, \bot\} \\ \bot & \text{if } E^\#[e]\,\rho = \bot \\ (S^\#[s_1]\,\rho) \cup (S^\#[s_2]\,\rho) & \text{otherwise} \end{cases} \] - returns an error \( \top \) if \( e \) is not boolean - merges the types inferred from \( s_1 \) and \( s_2 \): if \( (S^\#[s_1]\,\rho)(X) = \text{int}^\# \) and \( (S^\#[s_2]\,\rho)(X) = \text{bool}^\# \), we get \( X \mapsto \text{all}^\# \) (i.e., depending on the branch taken, \( X \) may be an integer or a boolean) Note: constructing \( \rho \) such that \( \rho(X) = \text{all}^\# \) is not a type error, but a type error
is generated if \( X \) is used when \( \rho(X) = \text{all}^\# \) Abstract denotational semantics of statements \[ S^\# \left[ \textbf{while } e \textbf{ do } s \right] \rho \overset{\text{def}}{=} S^\# \left[ e \right] (\text{lfp } F) \] where \( F(x) \overset{\text{def}}{=} \rho \cup S^\# \left[ s \right] (S^\# \left[ e \right] x) \) and \( S^\# \left[ e \right] \rho \overset{\text{def}}{=} \begin{cases} \top & \text{if } \rho = \top \lor E^\# \left[ e \right] \rho \notin \{\text{bool}^\#, \bot\} \\ \bot & \text{if } E^\# \left[ e \right] \rho = \bot \\ \rho & \text{otherwise} \end{cases} \) - similar to tests \( S^\# \left[ \textbf{if } e \textbf{ then } s \right] \), but with a fixpoint - the sequence \( X_0 \overset{\text{def}}{=} \bot, X_{i+1} \overset{\text{def}}{=} X_i \cup F(X_i) \) is: - increasing: \( X_i \subseteq X_{i+1} \) - convergent in finite time (the lattice is finite) - its limit \( X_\delta \) satisfies \( X_\delta = X_\delta \cup F(X_\delta) \) - and so \( F(X_\delta) \subseteq X_\delta \): \( X_\delta \) is a post-fixpoint of \( F \) \( \Rightarrow S^\# \left[ s \right] \) can be computed in finite time Consider a standard (non-abstract) denotational semantics: \[ S[s] : \mathcal{P}(E) \to \mathcal{P}(E) \text{ where } E \overset{\text{def}}{=} \{\Omega_t, \Omega_v\} \cup (\mathbb{V} \to (\mathbb{Z} \cup \mathbb{B} \cup \{\omega\})) \] **Soundness theorem** \[ \Omega_t \in S[s](\lambda X.\omega) \implies S^\#[s]\,\bot = \top \] Proof sketch: every set of environments \( R \) can be over-approximated by \( \alpha_E(R) \in \mathcal{E}^\# \) \[ \alpha_E(R) \overset{\text{def}}{=} \begin{cases} \top & \text{if } \Omega_t \in R \\ \lambda X.\alpha_{\mathbb{I}}(\{\rho(X) \mid \rho \in R \setminus \{\Omega_t, \Omega_v\}\}) & \text{otherwise} \end{cases} \] where we abstract sets of values \( V \) as \( \alpha_{\mathbb{I}}(V) \in \mathbb{I}^\# \) \[ \alpha_{\mathbb{I}}(V) \overset{\text{def}}{=} \begin{cases} \bot & \text{if } V \subseteq \{\omega\} \\ \text{int}^\# & \text{else if } V \subseteq \mathbb{Z}
\cup \{\omega\} \\ \text{bool}^\# & \text{else if } V \subseteq \mathbb{B} \cup \{\omega\} \\ \text{all}^\# & \text{otherwise} \end{cases} \] we can then prove by induction on \( s \) that \( \forall R : (\alpha_E \circ S[s])(R) \subseteq (S^\#[s] \circ \alpha_E)(R) \) we conclude by noting that \( \alpha_E(\{\lambda X.\omega\}) = \bot \) and \( \alpha_E(R) = \top \iff \Omega_t \in R \) \[ \implies S^\#[s] \text{ can find statically all dynamic typing errors!} \] The typing analysis is not complete in general: $S^\#[s]\,\bot = \top \not\Rightarrow \Omega_t \in S[s](\lambda X.\omega)$ Examples: correct programs that are reported as incorrect - $P \overset{\text{def}}{=} X \leftarrow 10; \text{if } X < 0 \text{ then } X \leftarrow X + \text{true}$ the erroneous assignment $X \leftarrow X + \text{true}$ is never executed, so $\Omega_t \notin S[P]\,R$, but $S^\#[P]\,\bot = \top$ as $S^\#[P]$ cannot prove that the branch is never executed - $P \overset{\text{def}}{=} X \leftarrow 10; (\text{while } X > 0 \text{ do } X \leftarrow X + 1); X \leftarrow X + \text{true}$ similarly, $X \leftarrow X + \text{true}$ is never executed, but $S^\#[P]$ cannot express (and so cannot infer) non-termination $\implies S^\#[s]$ can report spurious typing errors (checking exactly whether $\Omega_t \in S[s]\,R$ holds is undecidable, by reduction to the halting problem) Comparison with classic type inference The analysis is **flow-sensitive**, classic type inference is **flow-insensitive**: - type inference assigns a single static type to each variable - $S^♯[s]$ can assign different types to $X$ at different program points example: “$X ← 10; ⋮; X ← true$” is not well typed, but its execution has no type error and $S^♯[s] ⊥ \neq ⊤$ The analysis takes “dead variables” into account: non-typable variables do not necessarily result in a typing error example: “(if $[0, 1] = 0$ then $X ← 10$; else $X ← true$); ⋮” is not well typed as $X$ cannot store values of
type either `int` or `bool` at ⋮, but its execution has no type error and $S^♯[s] ⊥ \neq ⊤$ ⇒ **static type analysis is more precise than type inference** (but it does not always give a unique, program-wide type assignment for each variable) It is also possible to design a **flow-insensitive version** of the analysis (e.g., replace $S^♯[s]X$ with $X \cup S^♯[s]X$) **Problem:** imprecision of the type analysis \[ P \overset{\text{def}}{=} (\text{if } [0, 1] = 0 \text{ then } X \leftarrow 10 \text{ else } X \leftarrow \text{true});\ Y \leftarrow X;\ Z \leftarrow (X = Y) \] - \(S[P]\) has no type error, as \(X\) and \(Y\) always hold values of the same type - \(S^\#[P]\,\bot = \top\): a spurious type error - \(S^\#[P]\) gives the environment \([X \mapsto \text{all}^{\#}, Y \mapsto \text{all}^{\#}]\) - which represents environments such as \([X \mapsto 12, Y \mapsto \text{true}]\) - on which \(X = Y\) causes a type error **Solution:** polymorphism: represent a set of type assignments \(\mathcal{E}^{\#} \overset{\text{def}}{=} \mathcal{P}(\mathbb{V} \rightarrow \mathbb{I}^{\#})\) (instead of \(\mathcal{E}^{\#} \overset{\text{def}}{=} \mathbb{V} \rightarrow \mathbb{I}^{\#}\)) e.g. \(\{ [X \mapsto \text{int}^{\#}, Y \mapsto \text{int}^{\#}], [X \mapsto \text{bool}^{\#}, Y \mapsto \text{bool}^{\#}] \}\), on which \(X =^{\#} Y\) gives \(\text{bool}^{\#}\) and no error - we can represent relations between types (e.g., \(X\) and \(Y\) have the same type) - this typing analysis is more precise, but still incomplete - the analysis is more costly (\(|\mathcal{E}^{\#}|\) is larger) but still decidable and sound Conclusion Type systems are added to programming languages to help ensure **statically** the **correctness** of programs. Traditional type **checking** is performed by propagation of declarations. Traditional type **inference** is performed by **constraint solving**.
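The constraint-generation functions ($\tau_e$, $\tau_s$) and the union-find unification step can be sketched concretely. The following is a minimal illustration in Python; the tuple encoding of the abstract syntax is an assumption made for this sketch, not the course's concrete syntax:

```python
# Sketch of constraint-based type inference for the toy language:
# constraints are equalities between type terms ("int", "bool", or a
# type variable "t_X"), solved by merging union-find equivalence
# classes; the program is untypable iff "int" and "bool" get merged.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, t):
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]  # path halving
            t = self.parent[t]
        return t

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def tau_e(e):
    """Return (type expr, constraint set) for an expression."""
    if isinstance(e, bool):                 # test bool before int:
        return "bool", set()                # True is also an int in Python
    if isinstance(e, int):
        return "int", set()
    if isinstance(e, str):                  # variable X has type t_X
        return "t_" + e, set()
    op, l, r = e                            # binary operator node
    (t1, c1), (t2, c2) = tau_e(l), tau_e(r)
    if op in ("+", "-", "*", "/"):
        return "int", c1 | c2 | {(t1, "int"), (t2, "int")}
    if op in ("<", "<="):
        return "bool", c1 | c2 | {(t1, "int"), (t2, "int")}
    if op in ("and", "or"):
        return "bool", c1 | c2 | {(t1, "bool"), (t2, "bool")}
    if op in ("=", "!="):                   # equality adds t1 = t2
        return "bool", c1 | c2 | {(t1, t2)}
    raise ValueError(op)

def tau_s(s):
    """Return the constraint set for a statement."""
    op, *args = s
    if op == "skip":
        return set()
    if op == "seq":
        return tau_s(args[0]) | tau_s(args[1])
    if op == "assign":                      # X <- e adds t_X = t
        t, c = tau_e(args[1])
        return c | {("t_" + args[0], t)}
    if op in ("if", "while"):               # the condition must be bool
        t, c = tau_e(args[0])
        return c | {(t, "bool")} | set().union(*(tau_s(b) for b in args[1:]))
    raise ValueError(op)

def solve(constraints):
    """Unify; return the union-find structure, or None if not typable."""
    uf = UnionFind()
    for a, b in constraints:
        uf.union(a, b)
    return None if uf.find("int") == uf.find("bool") else uf
```

For `X ← 1; Y ← (X < 2); if Y then Y ← true else skip`, `solve` places `t_X` in the class of `int` and `t_Y` in the class of `bool`; for `X ← 1; X ← true` it merges the classes of `int` and `bool` and reports the program untypable.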
We can also view typing as an **abstraction** of the **dynamic semantics**, which can be computed **statically** (in a way similar to the denotational semantics). Typing always results in a **conservative approximation**, but the amount of approximation can be **controlled** (flow-sensitivity, relationality, etc.).
Aiding Code Change Understanding with Semantic Change Impact Analysis Quinn Hanam Electrical and Computer Engineering University of British Columbia Vancouver, Canada qhanam@ece.ubc.ca Ali Mesbah Electrical and Computer Engineering University of British Columbia Vancouver, Canada amesbah@ece.ubc.ca Reid Holmes Computer Science University of British Columbia Vancouver, Canada rtholmes@cs.ubc.ca Abstract—Code reviews are often used as a means for developers to manually examine source code changes to ensure the behavioural effects of a change are well understood. Unfortunately, the behavioural impact of a change can include parts of the system outside of the area syntactically affected by the change. In the context of code reviews this can be problematic, as the impact of a change can extend beyond the diff that is presented to the reviewer. Change impact analysis is a promising technique which could potentially assist developers by helping surface parts of the code not present in the diff but that could be affected by the change. In this work we investigate the utility of change impact analysis as a tool for assisting developers understand the effects of code changes. While we find that traditional techniques may or may not benefit developers, more precise techniques may reduce time and increase accuracy. Specifically, we propose and study a novel technique which extracts semantic, rather than syntactic, change impact relations from JavaScript commits. We (1) define four novel semantic change impact relations and (2) implement an analysis tool called SEMCIA that interprets structural changes over partial JavaScript programs to extract these relations. 
In a study of 2,000 commits from the version history of three popular NodeJS applications, SEMCIA reduced false positives by 9–37% and further reduced the size of change impact sets by 19–91% by splitting up unrelated semantic relations, compared to change impact sets computed with Unix diff and control and data dependencies. Additionally, through a user study in which developers performed code review tasks with SEMCIA, we found that reducing false positives and providing stronger semantics had a meaningful impact on their ability to find defects within code change diffs. I. INTRODUCTION Code reviews, a part of modern software engineering practice, require developers to understand how atomic code changes (commonly called commits) affect program behaviour [20]. Research has shown that developers desire tool support for understanding the effects of code changes during code review [7], [33]. Automatically tracing the effects of code changes is the domain of change impact analysis. Broadly defined as the practice of “identifying the potential consequences of a change...” [6], change impact analysis is a technique which provides additional context that can help developers understand the effects of code changes. More specifically, static change impact analysis is commonly used to detect potential changes to program behaviour by using program slicing to compute control and data dependencies for statements touched by a change (e.g., Gethers et. al. [14]). In this work, we investigate the efficacy of using static change impact analysis to help developers understand the effects of code changes. First, we investigate whether traditional change impact analysis (i.e., techniques based on data and control dependencies) can help developers understand code changes. We conduct a user study where developers performed simple code review tasks relating to code navigation and bug finding. 
We found no evidence that traditional change impact analysis increased code change understanding performance. Second, we theorize that traditional change impact analysis did not increase performance because of various sources of noise in our change impact analysis tool. Specifically, (1) change impact analysis suffered from high false positive rates due to imprecise syntactic change information and (2) multiple semantic relations were grouped together inside syntactic data-dependency and control-dependency relations, which obscured relevant semantic information. Third, we address the problem of noise in static change impact analysis by presenting a novel change impact analysis tool which computes semantic, rather than syntactic, change impact relations. We introduce novel semantics for four change impact analyses, which show relationships between structural changes and changes to program behaviour. We implement our analyses in a tool called SEMCIA, which identifies these semantic relationships for JavaScript. SEMCIA is optimized for ease-of-use by performing a partial (intra-file) analysis which shows local results robustly without needing a complete system, and which could be directly integrated into existing code review tooling. SEMCIA is available online [3]. We found that using AST diff instead of Unix diff reduced false positives by 29-53%, and that using semantic relations reduced the size of the change impact sets by 20-90%. Fourth, we investigate whether our semantics-based change impact analysis implementation can help developers understand code changes. Using the same user study setup as our initial investigation, we provide empirical evidence that reducing false positives and providing stronger semantics can have a meaningful impact on tasks which require understanding the effects of code changes.
We found that while traditional static change impact analysis is likely unsuited for helping developers understand code changes, semantic change impact analysis reduced the completion time of targeted tasks by 30-90%. Our main contributions in this work include the following: 1) A controlled user study which evaluates the utility of traditional change impact analysis techniques on code change understanding tasks. 2) Four novel semantic change impact relations, aimed at supporting developers performing specific code change understanding tasks. 3) A novel semantic change impact analysis tool for JavaScript, which removes sources of noise (i.e., false positives and syntactic relations) from traditional change impact analysis techniques. 4) A controlled user study which evaluates the utility of semantic change impact analysis techniques on code change understanding tasks. II. BACKGROUND When Bacchelli and Bird interviewed code review practitioners at Microsoft, they found that code reviewers struggle to understand the effects of changes. One developer summarized the problem by stating “...big-picture impact analysis requires contextual understanding. When reviewing a small, unfamiliar change, it is often necessary to read through much more code than that being reviewed” [7]. More specific information needs related to change impact have also been identified separately by Ko et al. (i.e. “How have resources I depend on changed?”) [20] and Tao et al. (i.e. “How does this change alter the program’s... behaviour?” and “Who references the changed classes/methods/fields?”) [33]. These information needs often generalize to code tracing problems, where a developer must trace through code to infer semantic information. However, developers have difficulty inferring semantic information from low levels of abstraction because of failures and limitations of human memory [30], and require tool support to do so accurately and efficiently. 
The most common tool for viewing source code changes, the Unix diff utility, displays line-level edit operations [28] that transform one version of source code to another. Because Unix diff only displays information about syntactic changes, using Unix diff to infer the effects of code changes requires developers to manually trace control or data flow beginning with lines that contain syntactic changes. This lack of tool support makes it difficult to understand the effects of code changes during tasks such as code review. Change impact analysis provides one potential solution to providing support for understanding the effects of code changes. Roughly speaking, change impact analysis is the process of determining what regions of code are impacted by a change. Because of the wide variety of applications for which change impact analysis is used, many different change impact analysis techniques have been developed [14]. A task like code review requires precisely tracking behavioural changes in source code as it is evolved, and we therefore focus on the change impact analysis technique that uses static analysis, or static change impact analysis. Other change impact analysis techniques are only loosely related to source code (e.g., by mining bug reports or by tracking meta information about file changes) and do not provide information about changes to runtime behaviour. Prior approaches to static change impact analysis (e.g., [14], [8], [34], [4], [11]) have almost exclusively used a technique where Unix diff is used to identify a slicing criterion (i.e., variables in modified lines), and data and control dependencies are computed for all variables in the criterion. We begin by conducting a preliminary study to determine whether or not this form of change impact analysis can improve the performance of developers performing code change understanding tasks. III.
PRELIMINARY STUDY It is unclear whether or not traditional static change impact analysis can aid developers performing code change understanding tasks, such as code review. To gain insight into this, we perform a user study in which we evaluate change impact analysis as a tool for assisting with code review tasks. Specifically, our goal is to answer the following research question: RQ1: Are there code review tasks for which traditional change impact analysis (i.e., one that computes control and data dependencies) can improve speed or accuracy over a typical (i.e., Unix diff) diff utility? For our study, we ask software developers to perform code review tasks with two different tools, which replicate (1) the common functionality available in diff utilities, and (2) a diff utility augmented with traditional change impact analysis information. A. Tools Under Evaluation We implemented the following two (web-based) change impact analysis tools for the study: UnixDiff is modelled after the functionality of the diff utility used by the popular version control host GitHub1. This tool shows a Unix diff in split view, where the original file and the new file are shown side by side and aligned according to the Unix diff. Deleted lines are highlighted in red and inserted lines are highlighted in green. Two unchanged lines adjacent to inserted or deleted lines are shown for context, while all other unchanged lines are hidden but can be expanded through a context menu. SynCIA is implemented on top of UnixDiff, and provides the results of a traditional change impact analysis. This change impact analysis displays program slices containing data and control dependencies, where the slicing criterion is everything inside the Unix diff. The slices are shown when selected from a context menu. For every line containing a criterion and dependency that is part of the slice, two lines surrounding that line are shown for context while all other lines are hidden.
SynCIA also provides basic code navigation by highlighting definitions and uses of selected values. 1https://github.com/ Fig. 1: Experience of user study participants in years. TABLE I: Subject commits <table> <thead> <tr> <th>Subject</th> <th>GitHub Project</th> <th>Commit</th> <th>Lines</th> <th>Time (s)</th> </tr> </thead> <tbody> <tr> <td>Karma</td> <td>karma-runner/karma</td> <td>82f1c1</td> <td>162</td> <td>&lt;1</td> </tr> <tr> <td>PM2</td> <td>Unitech/pm2</td> <td>d0f9c36</td> <td>2,389</td> <td>39</td> </tr> <tr> <td>Popcorn</td> <td>popcorn-official/popcorn-desktop</td> <td>d7b9dc8</td> <td>392</td> <td>&lt;1</td> </tr> </tbody> </table> B. Subjects Participants. We recruited 11 participants (eight graduate students and three industrial developers) to participate in our study. Figure 1 shows our participants’ experience as developers and with JavaScript. All our participants had experience using at least one diff tool. Commits. Our study participants were not familiar with the source code for any of the projects used in the study. To find non-trivial, but not overwhelming, commits we mined 134 Node.js projects and randomly selected 10 commits that met two requirements: (1) exactly one file was modified, and (2) between 10 and 30 lines were labelled as inserted by Unix diff. Among these 10 commits, we then randomly selected three for use in the study that contained four types of behavioural changes: changes to variables, (function) values, conditions and callsites. Table I shows the details of these three commits. C. Tasks We created code review tasks that captured common bug patterns identified by Hanam et al. [16] or code navigation tasks identified by Murphy et al. [27]. The goal of the code review tasks was to force the participants to understand the code changes with respect to different semantic domains. The four review tasks were: Scope Conflict Bug.
We introduced one scope conflict, where a new variable hid another variable declared at a higher scope, in each of the subject diffs and asked participants to locate it. Incorrect Condition Bug. We injected one bug into a branch condition in each of the subject diffs and asked the participants to locate it. The expected behaviour of the conditions was either obvious (threw a null dereference exception), or explained by code comments inserted into the diff. Incorrect Arguments Bug. We introduced one bug into the arguments of one callsite in each of the subject diffs and asked participants to locate it. The bug was either a missing argument or an incorrect argument order, which was obvious from comparing argument values at the callsite to parameter names. Callsites of Modified Functions. We asked participants to identify all callsites (within the file only) of functions whose behaviour was modified by the change. Participants were not told what functions changed or how many callsites there were. In the Karma and Popcorn diffs, only one callsite called a modified function. In the PM2 diff, six callsites called a modified function, with four of them being callback functions. Each participant performed six reviews: three tasks with each of the two diff tools. The diffs were presented in the same order for each task (Karma, PM2, Popcorn). Task order and the diff tool used for each task were randomly selected. Before each of the nine reviews, participants completed a short tutorial. During this tutorial, the participant learned how to use the selected diff tool and performed the task on a small training commit. The tutorial ensured that participants were familiar with both the code pattern they were looking for and how to use the diff tool. For bug identification tasks, participants were instructed to first identify and explain the bug to the researcher conducting the session.
If correct, participants indicated the location of the bug on the web page and triggered a timer which logged the time taken to identify the bug. If the participant had not completed a review after seven minutes, they were stopped and the search time was recorded as seven minutes. A total of 64 reviews were performed by the participants; one participant performed four reviews instead of six. At least two reviews and at most five reviews were performed for each {diff, tool} pair. Table II shows the results of the study. Columns 3–4 show the mean time participants spent searching for bugs or callsites. The mean search times for SYNCIA (column 4) are shown relative to the mean search time of UNIXDIFF. A negative value means that the mean search time was less than for UNIXDIFF. Columns 5–6 show the percent of bugs or callsites that were successfully found during the reviews. D. Summary of Findings (RQ1) The use of SYNCIA did not show a statistically significant benefit over Unix diff. This preliminary study suggests that traditional change impact analysis may be unsuited to helping developers understand the effects of code changes. Next, we investigate reasons why traditional change impact analysis may be unsuited to code comprehension tasks. Specifically, we theorize that the imprecision of these techniques causes noise in analysis results that obscures relevant information and negates any potential benefit. IV. NOISE IN CIA As discussed in Section II, traditional static change impact analysis uses a technique where Unix diff is used to identify a slicing criterion (i.e., variables referenced in modified lines), and data and control dependencies are computed for all variables in the criterion. This approach to static change impact analysis has two major limitations: (1) it suffers from high false positive rates, and (2) it does not provide semantic relationships between code changes and changes to program behaviour.
We characterize these limitations as adding noise to sets of semantic change impact relations. The following two sections describe these two sources of noise in greater detail. 1) False Positives in Change Impact Analysis: False positives in change impact analysis reports are reported differences in behaviour where no differences actually exist. False positives are pernicious in a code review context as they represent additional information added to a diff that has no value to the developer. Existing static change impact analysis tools use character-based changes provided by Unix diff to determine the slicing criterion. Such changes select each statement with one or more character changes as a slicing criterion. These include changes which do not modify the Abstract Syntax Tree (AST), such as whitespace changes. It is trivial to prove the semantic equivalence of unchanged parts of the code by parsing each version of the source code into an AST and checking subtree equivalence. Consider Listings 1 and 2, which show the old (Pold) and new (Pnew) versions of a program which computes the length of the hypotenuse of right-angle triangles. We will use this change as a running example. Figure 2 shows a method rename refactoring, which is part of the change made in our running example. A newline character has been added before the left brace at line 1. If we use a Unix diff tool, the function definition for pythag is selected as the criterion. Not knowing which elements of the function declaration were changed, the analysis must assume that pythag returns a new value. It therefore records a dependency relation between the value of x and the change to pythag. We can eliminate these types of false positives simply by parsing each version of the source code into an AST and computing a set of transformations to the AST rather than the text file.
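The running example of Listings 1 and 2 can be sketched as follows. This is a reconstruction consistent with the values and line numbers reported in this section; the `Math.pow`/`Math.sqrt` helpers and the exact layout of the omitted lines are assumptions of ours:

```javascript
// P_old (Listing 1): computes the hypotenuse of a right-angle triangle.
function pythag(a, b) {
  a = Math.pow(a, 2);
  b = Math.pow(b, 2);
  var c = Math.sqrt(a + b);
  return c;
}
var x = pythag(3, 4); // x = 5

// P_new (Listing 2): pythag is renamed to hypLen and a callsite is added.
function hypLen(a, b) {     // line 1: renamed declaration
  a = Math.pow(a, 2);       // line 5
  b = Math.pow(b, 2);       // line 6
  var c = Math.sqrt(a + b); // line 7
  return c;                 // line 8
}
var x = hypLen(3, 4);       // line 10: existing callsite
var y = hypLen(6, 8);       // line 11: new callsite with new argument values
```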
Furthermore, because AST diff is more precise than Unix diff, fewer AST nodes will be included in the list of structural changes, which allows the change impact analysis to ignore larger portions of the program. 2) Syntactic vs. Semantic Relations: Existing static change impact analysis tools use data and control dependencies to compute an over-approximation of locations in code that contain modified states. Because control and data dependencies represent syntactic rather than semantic relations [26], multiple semantic relations end up grouped together. This has a similar effect to that of false positives, in that it makes it more difficult for developers to find the semantic relations they are interested in. Consider the left hand side of Figure 3, which shows two data dependencies on expressions that were modified in our running example (since JavaScript has first-class functions, hypLen is a variable that points to a function object). These two data dependencies have different semantics (i.e., they change the program’s state in different ways). One dependency is caused by a variable renaming and does not affect program behaviour, while the other dependency is caused by a new value being passed to the function, which causes a number of variables to point to new values in memory. Similarly, consider the right hand side of Figure 3, which shows two control dependencies on expressions that were modified by a change (not shown) to the program in our running example. These two control dependencies also have different semantics. One change is caused by a callsite being added, which causes new statement executions, while the other is caused by a change to the branch condition, which causes statements to be executed under different conditions. V. SEMANTIC CHANGE IMPACT ANALYSIS Our solution to noise in change impact analysis involves two components. First, using AST diff rather than Unix diff to select the criterion (i.e.
the syntactic changes that impact behaviour) is a simple change that uses existing technology. Second, developing a semantics-based change impact analysis is a novel solution that requires us to define our desired semantics. In this section, we provide a sketch of these semantics. A more detailed and formal presentation can be found in our tech report [1]. The first step in comparing the behaviour of two programs is to decide which points in the programs should be compared. To make clear what we want to compare, we define a program point as one or more executions of a single statement which occur in an equivalent context. Consider again our running example (Listings 1 and 2). The statement at line 8 is executed once for each callsite of the function declared at line 1; once in \( P_{old} \) and twice in \( P_{new} \). This yields the following two program points and their states: <table> <thead> <tr> <th>Program Point</th> <th>Statement in \( P_{old} \)</th> <th>Value of \( c \) in \( P_{old} \)</th> <th>Statement in \( P_{new} \)</th> <th>Value of \( c \) in \( P_{new} \)</th> </tr> </thead> <tbody> <tr> <td>\( s_1 \)</td> <td>return \( c \);</td> <td>5</td> <td>return \( c \);</td> <td>5</td> </tr> <tr> <td>\( s_2 \)</td> <td></td> <td></td> <td>return \( c \);</td> <td>10</td> </tr> </tbody> </table> Because programs can have large state spaces, it is impractical to keep track of the state after each possible execution. Static analysis tools solve this problem by merging program points that are executed within equivalent contexts. For simplicity, in this work we merge all program points with the same statement. In static analysis terms, this is known as a flow sensitive, context insensitive analysis.
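This merging strategy can be illustrated with a small sketch of our own (the `join` helper and per-line stores are hypothetical names): the body of `hypLen` from the running example is replayed once per callsite, and the values observed at each line are joined into per-line sets:

```javascript
// Sketch: flow-sensitive, context-insensitive state merging. Each program
// point maps a variable to the SET of values it may hold, joined over all
// calling contexts.
function join(store, variable, value) {
  // Join one observed value into the abstract store of a program point.
  if (!store[variable]) store[variable] = new Set();
  store[variable].add(value);
}

const line5 = {}, line6 = {}, line7 = {}; // abstract stores, one per line

// Replay hypLen's body once per callsite (lines 10 and 11 of P_new),
// merging the states observed at each line.
for (const [a0, b0] of [[3, 4], [6, 8]]) {
  const a = Math.pow(a0, 2);  join(line5, 'a', a); // a in {9, 36}
  const b = Math.pow(b0, 2);  join(line6, 'b', b); // b in {16, 64}
  const c = Math.sqrt(a + b); join(line7, 'c', c); // c in {5, 10}
}
// Precision loss: line7 records c in {5, 10}, but no longer records which
// callsite produced which value (context insensitivity).
```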
In our running example, this strategy yields the following program points and their states at lines 5–8: <table> <thead> <tr> <th>PP</th> <th>Statement in \( P_{old} \)</th> <th>Values in \( P_{old} \)</th> <th>Statement in \( P_{new} \)</th> <th>Values in \( P_{new} \)</th> </tr> </thead> <tbody> <tr> <td>5</td> <td>\( a = \text{pow}(a, 2) \)</td> <td>\( a : \{9\} \)</td> <td>\( a = \text{pow}(a, 2) \)</td> <td>\( a : \{9, 36\} \)</td> </tr> <tr> <td>6</td> <td>\( b = \text{pow}(b, 2) \)</td> <td>\( b : \{16\} \)</td> <td>\( b = \text{pow}(b, 2) \)</td> <td>\( b : \{16, 64\} \)</td> </tr> <tr> <td>7</td> <td>\( c = \sqrt{a+b} \)</td> <td>\( c : \{5\} \)</td> <td>\( c = \sqrt{a+b} \)</td> <td>\( c : \{5, 10\} \)</td> </tr> <tr> <td>8</td> <td>return \( c \)</td> <td></td> <td>return \( c \)</td> <td></td> </tr> </tbody> </table> While merging program points makes program analysis tractable, the cost is a loss of precision. For example, while we know that at line 8, the value of \( c \) is either 5 or 10, we no longer know for which callsite (i.e., line 10 or 11) each value holds. To compare the behaviour of two programs, we must decide which program points should be compared. We refer to the process of aligning the program points of two programs as interleaving, where two program points are interleaved if they are judged to occur at the same point in time in an execution. Table III gives a basic example of an interleaving, where program points are interleaved with each other if they occur at the same line number. Computing a good interleaving is an active research area. The chosen interleaving can affect the precision and soundness of the change impact analysis. Prior approaches for computing an execution interleaving range from doing it manually (e.g., [21]) to the method proposed by Partush and Yahav [31], which uses speculative correlation to approximately minimize the differences between the abstract states in both versions.
A third approach is to interleave statements which are matched by the structural diff utility. This approach is unsound (it does not maintain temporal order) but is automated and fast. A. Semantic Relations for Static Change Impact Analysis We propose and implement four semantic relations for static change impact analysis that can provide additional information about the impact of a code change while addressing the problems with false positives identified previously. The four relations we identify are by no means exhaustive, and represent a first step towards identifying meaningful semantic elements that may be relevant to a code evolution task. The semantic relations we define should provide developers support for navigating the impact of code changes. To identify potential semantic relations, we take inspiration from popular IDE navigation features in Eclipse, such as those identified by Murphy et al. [27]. These include search for reference, open declaration, highlight variables and expand block. Such features may help developers answer questions about changes. While a full specification of each relation is available in our tech report [1], we give brief examples of each relation here. **Modified Callsites** The open declaration navigation feature suggests the need to view new or modified function calls and the definitions/bodies of those function calls. An example of such an analysis is shown in Figure 4.A. The execution of `hypLen`, and consequently the statements at lines 2–8, are affected by the new callsite at line 11. **Modified Branch Conditions** The `expand block` navigation feature suggests the need to identify what branch conditions have changed and what statements are affected by these changes. An example of such an analysis is shown in Figure 4.B. The execution of the statements at lines 4–8 may be affected by the change to the condition at line 3.
**New Value Propagation** The search for reference navigation feature suggests the need to track how new values (including modified functions) are propagated or used throughout the program. An example of such an analysis is shown in Figure 4.C. The value of `b` inside function `hypLen` is affected by the new integer literal at line 11. **Modified Variables** The highlight variables navigation feature suggests the need to track where new variables are used in the program. An example of such an analysis is shown in Figure 4.D. The variable `hypLen` is renamed at line 1 and used at line 11. Even though this relation is unrelated to program behaviour (it is a refactoring), from a slicing perspective it can be specified as a criterion/dependency relationship in the same way as the other relations. **B. Semantic Change Impact Analysis by Interpreting Structural Changes** The approach we selected for our analysis uses abstract interpretation to interpret and track the effects of structural changes. While a more complete specification of our analysis is available in our tech report [1], for clarity we provide an example-oriented description here. **Modified Callsites** Consider the two versions (`P_old` and `P_new`) of a program in Table IV. There is a structural change at line 8, where a new callsite to `bar` has been added. When the analysis reaches line 8, the AST diff tells the interpreter that the callsite has been added. When the analysis proceeds to line 5, it pushes a new stack frame and its modification state (changed) onto the abstract call stack. The stack frame is popped when control flows out of `bar`. **Modified Branch Conditions** Consider the two versions (`P_old` and `P_new`) of a program in Table V. There is a structural change at line 3, where the branch condition of the `if` statement has changed. When the analysis reaches line 3, the AST diff tells the interpreter that the branch condition has changed.
When the analysis proceeds to line 4, it pushes the condition and its modification state (changed) onto the branch condition stack. The condition is popped from the stack when control flows out of the condition’s block. **New Value Propagation** Consider the two versions (`P_old` and `P_new`) of the program in Table VI. There is a structural change at line 2, where the integer literal is changed from 0 to 1. When the analysis reaches line 2, the AST diff tells the interpreter that the integer literal has changed and the analysis updates the value of `y` to changed in the abstract store. When the analysis reaches line 4, the analysis interprets the result of `x + y` as changed, and updates the value of `z` to changed in the abstract store. **Modified Variables** Consider the two versions (`P_old` and `P_new`) of the program in Table VII. There is a structural change at line 2, where the variable is renamed from `y` to `z`. During variable hoisting, the AST diff tells the interpreter that the variable name has changed. The analysis places `z` and its modification state (changed) into the abstract environment. TABLE VII: Example of modified variable analysis. <table> <thead> <tr> <th>Line</th> <th>Interleaved Statement from $P_{old}$</th> <th>Interleaved Statement from $P_{new}$</th> <th>Concrete Environment $P_{old}$/$P_{new}$</th> <th>Abstract Environment in $P$</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>var x;</td> <td>var x;</td> <td>x, x</td> <td>{x, unchanged}</td> </tr> <tr> <td>2</td> <td>var y;</td> <td>var z;</td> <td>y, z</td> <td>{z, changed}</td> </tr> <tr> <td>3</td> <td>log(x);</td> <td>log(x);</td> <td>x, x</td> <td>{x, unchanged}</td> </tr> <tr> <td>4</td> <td>log(y);</td> <td>log(z);</td> <td>y, z</td> <td>{z, changed}</td> </tr> </tbody> </table> C. Implementation We implemented our semantic change impact analysis as a tool named SEMCIA, built on top of the CommitMiner [3] static analysis framework.
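As a toy illustration of the interpretation the preceding tables describe, the new value propagation case can be sketched as follows. The encoding is an assumption of ours: the two-point lattice {unchanged, changed} is represented with booleans, the three statements are a guess at the program in Table VI, and `evalBinOp` stands in for the interpreter's expression evaluation:

```javascript
// Sketch: new value propagation over a two-point modification lattice
// (true = changed, false = unchanged).
const CHANGED = true, UNCHANGED = false;
const store = {}; // abstract store: variable -> modification state

function evalBinOp(left, right) {
  // An expression's result is changed if either operand is changed.
  return left || right;
}

// var x = 2;     -- untouched by the AST diff
store.x = UNCHANGED;
// var y = 1;     -- AST diff: integer literal updated (0 -> 1)
store.y = CHANGED;
// var z = x + y; -- the new value propagates through the expression to z
store.z = evalBinOp(store.x, store.y);
```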
CommitMiner is an abstract interpreter for JavaScript, similar to the formally specified JSAI [18], but which (1) enables change impact analysis by providing user-specified analyses with information from diff utilities (i.e., Unix diff or AST diff), and (2) enables partial program analysis by discovering entry points and recovering type and control flow information (in a similar fashion to Dagenais and Hendren [10]). We configured CommitMiner to perform a flow sensitive, context insensitive analysis. VI. FALSE POSITIVE STUDY Section IV described two sources of noise present in change impact analysis. These sources of noise can be measured by the number of false or semantically unrelated dependencies created by a change impact analysis. Recall that in semantic change impact analysis, a criterion is an AST node that has some syntactic change (i.e., inserted, removed, or updated) applied to it, and a dependency is a program element (e.g., a variable or statement) whose accessible state has changed because of the criterion. By answering the following research questions, we evaluate our technique’s ability to reduce noise in change impact analysis: RQ2: How many false dependencies are eliminated by computing the criterion with AST diff rather than Unix diff? RQ3: How are control and data dependencies partitioned into our four semantic dependencies? To answer these, we analyzed the commit histories of three open source Node.js projects: MediacenterJS$^2$, PM2$^3$ and Karma$^4$. These applications are medium-sized JavaScript projects selected for their diversity and relatively long commit histories. SEMCIA successfully analyzed 444 file changes from 299 MediacenterJS commits, 958 file changes from 594 PM2 commits and 1,572 file changes from 1,127 Karma commits. Files were ignored if they were minified (i.e., library code), or did not have a .js extension. Merge commits were ignored, because their changes are already included in the commit history.
SEMCIA was not able to parse some files, either because they used non-JavaScript 1.6 syntax or because they had syntax errors. In terms of execution durations, nearly all analyses completed in under one second; only outliers ran for more than one second, and no analysis took longer than one minute. A. RQ2: Unix diff vs AST diff. We first investigate the effect of using AST diff instead of Unix diff to compute structural changes (i.e., the criterion), since Unix diff has traditionally been used inside static change impact analysis tools. As demonstrated by Falleri et al. [12], AST diff is substantially more precise than Unix diff when used to compute an AST transformation. We can therefore safely use AST diff as ground truth, since the criterion it creates is almost always a subset of the criterion created by Unix diff. For this experiment, the structural diff utility (i.e., Unix diff or AST diff) is the independent variable. The number of dependencies created is the dependent variable. The flow analysis (SEMCIA) is a control variable and behaves the same for both diff utilities. The first column of Table VIII shows the number of AST nodes in the criterion computed by AST diff. The second column shows the number of dependencies computed by SEMCIA using AST diff. The third column shows the number of AST nodes in the criterion computed by Unix diff. The fourth column shows by what % the size of the criterion set increased. The fifth column shows the number of dependencies computed by SEMCIA using Unix diff. The sixth column shows by what % the number of dependencies increased. These results show that a large number of false positive dependencies (23–49% of the total annotations) were created when Unix diff was used instead of AST diff. This suggests that change impact analysis utilities can improve their precision significantly by basing their analysis on a criterion computed by AST diff rather than Unix diff. B.
RQ3: Syntactic vs Semantic Relations We now investigate how SEMCIA partitions the syntactic dependencies created by control and data dependency analysis into separate semantic dependencies. To compute data and control dependencies, we implemented a tool called SYNCIA, which is the same as SEMCIA but computes syntactic (data and control) dependencies. For our experiment, the flow analysis (i.e., SEMCIA or SYNCIA) is the independent variable. The number of dependencies added (to the criterion and dependency sets) is the dependent variable. The structural diff utility (AST diff) is a control variable and behaves the same for both change impact analyses. We compare the number of dependencies computed by SEMCIA to the number of dependencies computed by SYNCIA. Variable and value dependencies are compared to data dependencies, while call and condition dependencies are compared to control dependencies. Note that while variable dependencies are a subset of data dependencies, value dependencies are not a subset of data dependencies. Data dependency analysis uses all variables and values which are labelled as changed as the slicing criterion, while value dependency analysis uses only values which are labelled as changed as the slicing criterion (semantically, this means there is a new value in memory). The condition dependencies and call dependencies are both subsets of control dependencies. The fourth column group of Table VIII shows the results of this experiment. The largest increase in dependencies occurs for modified call site dependencies, where the number of dependencies is increased by 748%. This occurs because the number of modified call site dependencies is high relative to the number of branch condition dependencies. --- 2https://github.com/jansmolders86/mediacenterjs 3https://github.com/Unitech/pm2 4https://github.com/karma-runner/karma
For example, if we wanted to see which functions were called because of a change and used SYNCIA rather than SEMCIA, we might expect most of the dependencies we are shown to be caused by changes to branch conditions, rather than changes to call sites. Regarding the modified value dependencies, the number of dependencies actually decreases by 9% when using SYNCIA. This occurs because many of the dependencies that SYNCIA considers to be part of the criterion, SEMCIA considers to be dependencies. This is caused by the fact that in data dependency analysis, variables and values are considered part of the criterion, whereas in modified value dependency analysis, only values are considered part of the criterion. C. Summary of Findings (RQ2 and RQ3) Regarding RQ2, our results show that interpreting Unix diff introduces a significant number of false positives over AST diff. Regarding RQ3, our results show that using data dependencies and control dependencies as a proxy for semantic dependencies introduces a significant amount of noise. VII. USER STUDY Ultimately, we believe the information generated by SEMCIA about code changes can help developers better understand the implications of a change during code review. While the previous section demonstrated that SEMCIA can reduce noise in static change impact analysis, we next need to see whether developers could benefit from using SEMCIA during code review tasks. Specifically, our goal is to answer the following research question: RQ4: Are there code review tasks for which SEMCIA can improve speed or accuracy over (1) a typical diff utility or (2) control and data dependencies? To answer this, we extend our user study from Section III to include SEMCIA, where each participant was given three additional reviews to complete using SEMCIA. We implemented an additional (web-based) change impact analysis tool: SEMCIA, which is implemented on top of UNIXDIFF and provides the results of our semantic change impact analysis.
This change impact analysis provides variable, value, call and condition slices as defined in Section V. The slices are shown when selected from a context menu. For every line containing a criterion or dependency that is part of the slice, two lines surrounding that line are shown for context while all other lines are hidden. SEMCIA also provides basic code navigation by highlighting definitions and uses of selected values. Figure 5 shows a screenshot of this tool displaying a slice of modified call sites. The slicing criterion is annotated in red, while dependencies are annotated in blue. Criterion and dependency annotations are highlighted differently depending on the slice and the AST node being annotated. For example, the ‘function’ keyword of a function declaration is highlighted in the value slice, while the entire function (usually spanning multiple lines) is highlighted in the call slice. Each participant performed three additional reviews: three tasks using SEMCIA. The conditions of the study are the same as in the preliminary study (and were, in fact, completed at the same time). A. Results A total of 32 additional reviews were performed by the participants; one participant performed two additional reviews instead of three. Table IX shows the results of the study. Columns 3–5 show the mean time participants spent searching for bugs or callsites. The mean search times for SYNCIA (column 4) and SEMCIA (column 5) are shown relative to the mean search time of UNIXDIFF. A negative value means that the mean search time was less than for UNIXDIFF. Columns 6–8 show the percent of bugs or callsites that were successfully found during the reviews. We test statistical significance with a two-tailed t-test assuming homoscedastic (equal) variances. Our null hypothesis is that there is no difference in task completion time between those who used SEMCIA and those who used either SYNCIA or UNIXDIFF.
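For reference, the statistic behind the two-tailed, homoscedastic (pooled variance) t-test can be sketched as follows. The function names are ours; converting the statistic to a p-value additionally requires the t distribution's CDF, which is omitted here:

```javascript
// Sketch: two-sample t statistic with pooled (homoscedastic) variance.
function mean(xs) { return xs.reduce((s, v) => s + v, 0) / xs.length; }

function tStatistic(xs, ys) {
  const nx = xs.length, ny = ys.length;
  const mx = mean(xs), my = mean(ys);
  // Sums of squared deviations from each sample mean.
  const ssx = xs.reduce((s, v) => s + (v - mx) ** 2, 0);
  const ssy = ys.reduce((s, v) => s + (v - my) ** 2, 0);
  const pooledVar = (ssx + ssy) / (nx + ny - 2); // pooled variance estimate
  return (mx - my) / Math.sqrt(pooledVar * (1 / nx + 1 / ny));
}
```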
**Scope Conflict Bug.** Participants using SEMCIA outperformed UNIXDIFF and SYNCIA in both search time and success rate. Participants using SEMCIA spent 46%–75% less time (statistically significant; \( p = 0.013 \)) on average searching for scope conflicts. Participants using SEMCIA successfully found the scope conflicts more often for two out of the three files. Using the variable slice provided by SEMCIA, participants immediately identified which variables could be problematic and were able to see all uses of those variables inside the slice without scrolling. This allowed participants to focus on solving the problem rather than navigating the code. **Incorrect Condition Bug.** For this bug, there is no clear evidence that participants using either SYNCIA or SEMCIA outperformed UNIXDIFF for search time (\( p = 0.378 \)) or success rate. This may be because, unlike the other three tasks, there was little code navigation required to find the modified conditions or diagnose the problem. The modified branch conditions and the statements which they controlled were also relatively easy to find by inspecting the line changes provided by UNIXDIFF. Some participants mentioned that the control slice provided by SEMCIA made the task more difficult because it reduced the amount of context surrounding each modified condition. **Incorrect Arguments Bug.** Participants using SEMCIA outperformed UNIXDIFF and SYNCIA in both search time and success rate. Participants using SEMCIA spent 76%–84% less time (statistically significant; \( p = 0.01 \)) on average searching for incorrect arguments. Participants using SEMCIA successfully found the incorrect arguments more often for two out of the three files. Using the call slice provided by SEMCIA, participants immediately identified which callsites could be problematic and were able to see the function declarations of the callees inside the slice.
This seems to have allowed the participants to focus on solving the problem rather than navigating the code. **Callsites of Modified Functions.** Participants using SEMCIA outperformed UNIXDIFF and SYNCIA in both search time and success rate. Participants using SEMCIA spent 29%–59% less time (statistically significant; \( p = 0.015 \)) on average searching for callsites of modified functions. Participants using SEMCIA found more callsites for one of the three files. Using the value slice provided by SEMCIA, participants immediately identified the functions whose behaviour was modified and were able to see all callsites of those functions within the slice. The file where success rate was improved over UNIXDIFF was PM2, the longest file with the most (six) callsites to find. Four callsites called functions as callbacks. In these cases the declarations of these modified functions were nested inside callsites, and the variables pointing to these functions at their callsites were aliases of the function. This made it especially difficult for participants to figure out what functions had changed without the aid of the code navigation feature included with SEMCIA and SYNCIA. **B. Summary of Findings (RQ4)** This study suggests that SEMCIA can help developers better understand how code has evolved during code review tasks. Specifically, for code reviewing tasks that require code navigation, reduced noise in change impact slices can help reviewers identify and quickly navigate to relevant parts of the program. Furthermore, the study provides evidence that the reduction in noise that SEMCIA provides makes it easier to understand the effects of code changes. VIII. DISCUSSION Threats to validity. Our analysis framework currently handles JavaScript 1.6 syntax. Like most flow analysis frameworks [24], ours has sources of unsoundness.
It does not support dynamic code evaluation (eval), reflection, or the JavaScript event loop, and does not model the behaviour of many JavaScript API functions. These limitations typically manifest as dependencies which are missing from the change impact set. Since all of the change impact tools we study use the same analysis framework, the results are internally consistent. In terms of generalizability, other JavaScript projects may have different commit sizes, file sizes, design patterns and control flows that may affect the accuracy reported in Section VI. Additionally, our approach may work differently for non-JavaScript languages. While the analysis will likely have better soundness and precision on less dynamic languages, other languages may have different commit sizes, file sizes, design patterns and control flows that may change the results for those languages. Our user study was limited in that we only had 11 participants, eight of whom were graduate students. The code review tasks were also performed on codebases the participants were not familiar with, which is not the usual case when one is performing code reviews. That said, the level of familiarity was equivalent across all treatments evaluated in the user study. Applications. While the primary motivation for this work was to help developers understand the evolution of their systems, our approach can also be used to analyze code changes automatically without requiring fully-buildable systems. Given the improvements our tool provides in terms of false positives, and the design decisions we made in favour of performance, we believe the approach could be used to automatically generate large-scale datasets for automated analyses that require broad collections of semantic change information. Artifacts. We have made the artifacts created during this work openly accessible.
The following are available in our downloadable companion [2]: (1) our formal specification of semantic change impact analysis, (2) the datasets from our mining study, and (3) the diff utility used in our user study. The code for SEMCIA is publicly available on GitHub [3]. IX. RELATED WORK Change Impact Analysis. Prior work that uses program slicing for change impact analysis has almost exclusively used syntactic relationships and data and control dependencies [14]. As our empirical study has shown, this type of change impact analysis can yield imprecise results with unclear semantics. While some applications of change impact analysis (e.g., test selection) may tolerate imprecision and noise, human code comprehension tasks [17], [23] are less tolerant of both. One notable exception is the work by Gyori et al. [15], which uses symbolic execution to reduce false positives in C/C++ change impact analysis. They use their symbolic equivalence checking tool, SymDiff, to check small sections of modified code for semantic equivalence during dataflow analysis. When two sections are proved semantically equivalent (e.g., \( x = y \) and \( x = y + 0 \)), the dataflow analysis ignores the change. While equivalence checking with symbolic execution subsumes abstract interpretation as a technique for eliminating false positive impacts, symbolic execution is time consuming relative to abstract interpretation. Symbolic execution also requires a symbolic execution engine with adequate language support, which does not yet exist for JavaScript. Finally, Gyori et al. do not address the problem of specifying or separating different semantic domains, which we do using program slicing. Data and Control Dependencies. JDiff [5] uses the structure of object-oriented programs to compute changes to control flow graphs, which can be used to detect changes to control and data dependencies.
Techniques for decomposing unrelated code changes leverage change impact analysis based on data and control dependencies, such as the work by Barnett et al. [8] and the work by Tao and Kim [34]. These techniques may achieve more precise results with the improvements proposed in this work. Behavioural Equivalence. Equivalence checking is the process of determining whether the outputs of two pieces of code are always the same given the same input. In the context of the semantics of changes, equivalence checking can either verify that the behaviour of a function does not differ between versions or label the points in the program where values can differ between versions. The SymDiff [21], [22] tool checks for behavioural equivalence between program versions by using a constraint solver to symbolically compute output values and check where output values differ. Because runtimes for single methods range from a few seconds to over one hour, equivalence checking is generally limited to critical code where formal verification is needed. Higher Level Semantics. Semantics at a higher level than changes to data and control dependencies or to symbolic values have also been proposed for generating useful information about code changes. Various approaches summarize structural or behavioural code changes as higher level semantics [19], [32], [13], [29], [25], [9]. X. CONCLUSION In this paper, we defined four new semantic relations for change impact analysis and implemented a tool called SEMCIA that extracts these semantic relations from code changes. SEMCIA reduced false positive annotations by 23–49%, and reduced annotations with unrelated semantics by 19–91%. The reductions in noise provided by SEMCIA helped developers perform code review tasks more quickly and accurately. Ultimately, we believe that semantic change impact analysis could help developers better understand how their systems are evolving. REFERENCES
Action Language $\mathcal{BC}$: Preliminary Report Joohyung Lee\(^1\), Vladimir Lifschitz\(^2\) and Fangkai Yang\(^2\) \(^1\) School of Computing, Informatics and Decision Systems Engineering, Arizona State University joolee@asu.edu \(^2\) Department of Computer Science, University of Texas at Austin {vl,fkyang}@cs.utexas.edu Abstract The action description languages $\mathcal{B}$ and $\mathcal{C}$ have a significant common core. Nevertheless, some expressive possibilities of $\mathcal{B}$ are difficult or impossible to simulate in $\mathcal{C}$, and the other way around. The main advantage of $\mathcal{B}$ is that it allows the user to give Prolog-style recursive definitions, which is important in applications. On the other hand, $\mathcal{B}$ solves the frame problem by incorporating the commonsense law of inertia in its semantics, which makes it difficult to talk about fluents whose behavior is described by defaults other than inertia. In $\mathcal{C}$ and its extension $\mathcal{C}+$, the inertia assumption is expressed by axioms that the user is free to include or not to include, and other defaults can be postulated as well. This paper defines a new action description language, called $\mathcal{BC}$, that combines the attractive features of $\mathcal{B}$ and $\mathcal{C}+$. Examples of formalizing commonsense domains discussed in the paper illustrate the expressive capabilities of $\mathcal{BC}$ and the use of answer set solvers for the automation of reasoning about actions described in this language. 1 Introduction Action description languages are formal languages for describing the effects and executability of actions.
“Second generation” action description languages, such as $\mathcal{B}$ [Gelfond and Lifschitz, 1998, Section 5], $\mathcal{C}$ [Giunchiglia and Lifschitz, 1998], and $\mathcal{C}+$ [Giunchiglia et al., 2004, Section 4], differ from the older languages STRIPS [Fikes and Nilsson, 1971] and ADL [Pednault, 1989] in that they allow us to describe indirect effects of an action—effects explained by interaction between fluents. The languages $\mathcal{B}$ and $\mathcal{C}$ have a significant common core [Gelfond and Lifschitz, 2012]. Nevertheless, some expressive possibilities of $\mathcal{B}$ are difficult or impossible to simulate in $\mathcal{C}$, and the other way around. The main advantage of $\mathcal{B}$ is that it allows the user to give Prolog-style recursive definitions. Recursively defined concepts, such as the reachability of a node in a graph, play an important role in applications of automated reasoning about actions, including the design of the decision support system for the Space Shuttle [Nogueira et al., 2001]. On the other hand, the language $\mathcal{B}$, like STRIPS and ADL, solves the frame problem by incorporating the commonsense law of inertia in its semantics, which makes it difficult to talk about fluents whose behavior is described by defaults other than inertia. The position of a moving pendulum, for instance, is a non-inertial fluent: it changes by itself, and an action is required to prevent the pendulum from moving. The amount of liquid in a leaking container changes by itself, and an action is required to prevent it from decreasing. A spring-loaded door closes by itself, and an action is required to keep it open. Work on the action language $\mathcal{C}$ and its extension $\mathcal{C}+$ was partly motivated by examples of this kind. In these languages, the inertia assumption is expressed by axioms that the user is free to include or not to include.
Other default assumptions about the relationship between the values of a fluent at different time instants can be postulated as well. On the other hand, some recursive definitions cannot be easily expressed in $\mathcal{C}$ and $\mathcal{C}+$. In this paper we define a new action description language, called $\mathcal{BC}$, that combines the attractive features of $\mathcal{B}$ and $\mathcal{C}+$. This language, like $\mathcal{B}$, can be implemented using computational methods of answer set programming [Marek and Truszczyński, 1999; Niemelä, 1999; Lifschitz, 2008]. The main difference between $\mathcal{B}$ and $\mathcal{BC}$ is similar to the difference between inference rules and default rules. Informally speaking, a default rule allows us to derive its conclusion from its premise if its justification can be consistently assumed; default logic [Reiter, 1980] makes this idea precise. In the language $\mathcal{B}$, a static law has the form $\langle \text{conclusion} \rangle \text{ if } \langle \text{premise} \rangle$. In $\mathcal{BC}$, a static law may include a justification: $\langle \text{conclusion} \rangle \text{ if } \langle \text{premise} \rangle \text{ ifcons} \langle \text{justification} \rangle$ (ifcons is an acronym for “if consistent”). Dynamic laws may include justifications also. The semantics of $\mathcal{BC}$ is defined by transforming action descriptions into logic programs under the stable model semantics. When static and dynamic laws of the language $\mathcal{B}$ are translated into the language of logic programming, as in [Balduccini and Gelfond, 2003], the rules that we get do not contain negation as failure. Logic programs corresponding to $\mathcal{B}$-descriptions do contain negation as failure, but this is because inertia rules are automatically included in them. In the case of $\mathcal{BC}$, on the other hand, negation as failure is used for translating justifications in both static and dynamic laws. 
We define here three translations from BC into logic programming. Their target languages use slightly different versions of the stable model semantics, but we show that all three translations give the same meaning to BC-descriptions. The first version uses nested occurrences of negation as failure [Lifschitz et al., 1999]; the second involves strong (classical) negation [Gelfond and Lifschitz, 1991] but does not require nesting; the third produces multi-valued formulas under the stable model semantics [Bartholomew and Lee, 2012]. The third translation is particularly simple, because BC and multi-valued formulas have much in common: both languages are designed for talking about non-Boolean fluents. But we start with defining the other two translations, because their target languages are more widely known. Examples of formalizing commonsense domains discussed in this paper illustrate the expressive capabilities of BC and the use of answer set solvers for the automation of reasoning about actions described in this language. We state also two theorems relating BC to B and to C^+. ### 2 Syntax An action description in the language BC includes a finite set of symbols of two kinds, fluent constants and action constants. Fluent constants are further divided into regular and statically determined. A finite set of cardinality \( \geq 2 \), called the domain, is assigned to every fluent constant. An atom is an expression of the form \( f = v \), where \( f \) is a fluent constant, and \( v \) is an element of its domain. If the domain of \( f \) is \( \{ \mathbf{f}, \mathbf{t} \} \) then we say that \( f \) is Boolean. A static law is an expression of the form \[ A_0 \text{ if } A_1, \ldots, A_m \text{ ifcons } A_{m+1}, \ldots, A_n \quad (1) \] \( (n \geq m \geq 0) \), where each \( A_i \) is an atom. It expresses, informally speaking, that every state satisfies \( A_0 \) if it satisfies \( A_1, \ldots, A_m \), and \( A_{m+1}, \ldots, A_n \) can be consistently assumed.
If \( m = 0 \) then we will drop if; if \( m = n \) then we will drop ifcons. A dynamic law is an expression of the form \[ A_0 \text{ after } A_1, \ldots, A_m \text{ ifcons } A_{m+1}, \ldots, A_n \quad (2) \] \( (n \geq m \geq 0) \), where - \( A_0 \) is an atom containing a regular fluent constant, - each of \( A_1, \ldots, A_m \) is an atom or an action constant, and - \( A_{m+1}, \ldots, A_n \) are atoms. It expresses, informally speaking, that the end state of any transition satisfies \( A_0 \) if its beginning state and its action satisfy \( A_1, \ldots, A_m \), and \( A_{m+1}, \ldots, A_n \) can be consistently assumed about the end state. If \( m = n \) then we will drop ifcons. For any action constant \( a \) and atom \( A \), \[ a \text{ causes } A \] stands for \[ A \text{ after } a. \] For any action constant \( a \) and atoms \( A_0, \ldots, A_m \) \( (m > 0) \), \[ a \text{ causes } A_0 \text{ if } A_1, \ldots, A_m \] stands for \( A_0 \text{ after } a, A_1, \ldots, A_m \). An action description in the language BC is a finite set consisting of static and dynamic laws. ### 3 Defaults and Inertia Static laws of the form \[ A_0 \text{ if } A_1, \ldots, A_m \text{ ifcons } A_0 \quad (3) \] and dynamic laws of the form \[ A_0 \text{ after } A_1, \ldots, A_m \text{ ifcons } A_0 \quad (4) \] will be particularly useful. They are similar to normal defaults in the sense of [Reiter, 1980]. We will write (3) as \[ \text{default } A_0 \text{ if } A_1, \ldots, A_m, \] and we will drop if when \( m = 0 \). We will write (4) as \[ \text{default } A_0 \text{ after } A_1, \ldots, A_m. \] For any regular fluent constant \( f \), the set of the dynamic laws \[ \text{default } f = v \text{ after } f = v \quad (5) \] for all \( v \) in the domain of \( f \) expresses the commonsense law of inertia for \( f \). We will denote this set by \[ \text{inertial } f.
\] ### 4 Semantics For every action description \( D \), we will define a sequence of logic programs with nested expressions \( PN_0(D), PN_1(D), \ldots \) so that the stable models of \( PN_l(D) \) represent paths of length \( l \) in the transition system corresponding to \( D \). The signature \( \sigma_{D,l} \) of \( PN_l(D) \) consists of - expressions \( i : A \) for nonnegative integers \( i \leq l \) and all atoms \( A \), and - expressions \( i : a \) for nonnegative integers \( i < l \) and all action constants \( a \). Thus every element of the signature \( \sigma_{D,l} \) is a “time stamp” \( i \) followed by an atom in the sense of Section 2 or by an action constant. The program consists of the following rules: • the translations \[ i : A_0 \leftarrow i : A_1, \ldots, i : A_m, \text{not not } i : A_{m+1}, \ldots, \text{not not } i : A_n \] \( (i \leq l) \) of all static laws (1) from \( D \), • the translations \[ (i + 1) : A_0 \leftarrow i : A_1, \ldots, i : A_m, \text{not not } (i + 1) : A_{m+1}, \ldots, \text{not not } (i + 1) : A_n \] \( (i < l) \) of all dynamic laws (2) from \( D \), • the choice rule\(^1\) \( \{ 0 : A \} \) for every atom \( A \) containing a regular fluent constant, • the choice rule \( \{i : a\} \) for every action constant \( a \) and every \( i < l \), • the existence of value constraint \[ \perp \leftarrow \text{not } i : (f = v_1), \ldots, \text{not } i : (f = v_k) \] for every fluent constant \( f \) and every \( i \leq l \), where \( v_1, \ldots, v_k \) are all elements of the domain of \( f \), • the uniqueness of value constraint \[ \perp \leftarrow i : (f = v), i : (f = w) \] for every fluent constant \( f \), every pair of distinct elements \( v, w \) of its domain, and every \( i \leq l \). The transition system \( T(D) \) represented by an action description \( D \) is defined as follows. For every stable model \( X \) of \( PN_0(D) \), the set of atoms \( A \) such that \( 0 : A \) belongs to \( X \) is a state of \( T(D) \). In view of the existence of value and uniqueness of value constraints, for every state \( s \) and every fluent constant \( f \) there exists exactly one \( v \) such that \( f = v \) belongs to \( s \); this \( v \) is considered the value of \( f \) in state \( s \).
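The translation just defined is purely syntactic and easy to mechanize. As an illustration (ours, not part of the paper), the Python sketch below generates the text of the \( PN_l(D) \) rules contributed by static laws (1), dynamic laws (2), and the inertia laws (5); atoms are plain strings such as "f=v", and the choice rules and existence/uniqueness constraints are omitted.

```python
# Sketch (our illustration): generate the text of PN_l(D) rules from BC laws.
# Atoms are plain strings like "f=v"; action constants are strings too.

def rule(head, body):
    return head + (" <- " + ", ".join(body) if body else "") + "."

def translate_static(a0, premise, justification, l):
    # i:A0 <- i:A1, ..., i:Am, not not i:Am+1, ..., not not i:An   (i <= l)
    return [rule(f"{i}:{a0}",
                 [f"{i}:{a}" for a in premise] +
                 [f"not not {i}:{a}" for a in justification])
            for i in range(l + 1)]

def translate_dynamic(a0, premise, justification, l):
    # (i+1):A0 <- i:A1, ..., i:Am, not not (i+1):Am+1, ...          (i < l)
    return [rule(f"{i + 1}:{a0}",
                 [f"{i}:{a}" for a in premise] +
                 [f"not not {i + 1}:{a}" for a in justification])
            for i in range(l)]

def inertial(f, domain, l):
    # "inertial f" abbreviates the laws "default f=v after f=v" (5),
    # i.e. dynamic laws whose justification repeats the head atom.
    rules = []
    for v in domain:
        rules += translate_dynamic(f"{f}={v}", [f"{f}={v}"], [f"{f}={v}"], l)
    return rules

# two inertia rules for a fluent Loc(b1) with domain {b2, table}, l = 1:
for r in inertial("Loc(b1)", ["b2", "table"], 1):
    print(r)
```

Note how the justification atoms, and only they, acquire the double negation "not not" in the generated rules, mirroring the role of ifcons.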
For every stable model \( X \) of \( PN_1(D) \), \( T(D) \) includes the transition \( \langle s_0, \alpha, s_1 \rangle \), where \( s_i (i = 0, 1) \) is the set of atoms \( A \) such that \( i : A \) belongs to \( X \), and \( \alpha \) is the set of action constants \( a \) such that \( 0 : a \) belongs to \( X \). The soundness of this definition is guaranteed by the following fact: **Theorem 1** For every transition \( \langle s_0, \alpha, s_1 \rangle \), \( s_0 \) and \( s_1 \) are states. We promised that stable models of \( PN_l(D) \) would represent paths of length \( l \) in the transition system corresponding to \( D \). For \( l = 0 \) and \( l = 1 \), this is clear from the definition of \( T(D) \); for \( l > 1 \) this needs to be verified. For every set \( X \) of elements of the signature \( \sigma_{D,l} \), let \( X^i (i < l) \) be the triple consisting of • the set of atoms \( A \) such that \( i : A \) belongs to \( X \), • the set of action constants \( a \) such that \( i : a \) belongs to \( X \), and • the set of atoms \( A \) such that \( (i + 1) : A \) belongs to \( X \). **Theorem 2** For every \( l \geq 1 \), \( X \) is a stable model of \( PN_l(D) \) iff \( X^0, \ldots, X^{l-1} \) are transitions. The rules contributed to \( PN_l(D) \) by static law (3) have the form \[ i : A_0 \leftarrow i : A_1, \ldots, i : A_m, \text{not not } i : A_0. \] They can be equivalently rewritten as \[ \{i : A_0\} \leftarrow i : A_1, \ldots, i : A_m \] (see [Lifschitz et al., 2001]). Similarly, the rules contributed to \( PN_l(D) \) by dynamic law (4) have the form \[ (i + 1) : A_0 \leftarrow i : A_1, \ldots, i : A_m, \text{not not } (i + 1) : A_0. \] They can be equivalently rewritten as \[ \{(i + 1) : A_0\} \leftarrow i : A_1, \ldots, i : A_m. \] In particular, the rules contributed by the commonsense law of inertia (5) can be rewritten as \[ \{(i + 1) : f = v\} \leftarrow i : f = v.
\] 5 Other Abbreviations In \( BC \)-descriptions that involve Boolean fluent constants we will use abbreviations similar to those established for multi-valued formulas in [Giunchiglia et al., 2004, Section 2.1]: if \( f \) is Boolean then we will write the atom \( f = \mathbf{t} \) as \( f \), and the atom \( f = \mathbf{f} \) as \( \sim f \). A static constraint is a pair of static laws of the form \[ \begin{align*} f &= v & \text{if } A_1, \ldots, A_m \\ f &= w & \text{if } A_1, \ldots, A_m \end{align*} \quad (6) \] where \( v \neq w \) and \( m > 0 \). We will write (6) as impossible \( A_1, \ldots, A_m \). The use of this abbreviation depends on the fact that the choice of \( f, v, \) and \( w \) in (6) is inessential, in the sense of Theorem 3 below. About action descriptions \( D_1 \) and \( D_2 \) we say that they are strongly equivalent to each other if, for any action description \( D \) (possibly of a larger signature), \( T(D \cup D_1) = T(D \cup D_2) \). This is similar to the definition of strong equivalence for logic programs [Lifschitz et al., 2001]. **Theorem 3** Any two static constraints (6) with the same atoms \( A_1, \ldots, A_m \) are strongly equivalent to each other. The rules contributed to \( PN_l(D) \) by (6) can be equivalently written as \[ \perp \leftarrow i : A_1, \ldots, i : A_m. \] A dynamic constraint is a pair of dynamic laws of the form \[ \begin{align*} f &= v & \text{after } a_1, \ldots, a_k, A_1, \ldots, A_m \\ f &= w & \text{after } a_1, \ldots, a_k, A_1, \ldots, A_m \end{align*} \quad (7) \] where \( v \neq w \), \( a_1, \ldots, a_k \) \( (k > 0) \) are action constants, and \( A_1, \ldots, A_m \) are atoms. We will write (7) as nonexecutable \( a_1, \ldots, a_k \) if \( A_1, \ldots, A_m \), and we will drop if in this abbreviation when \( m = 0 \).
The use of this abbreviation depends on the following fact: **Theorem 4** Any two dynamic constraints (7) with the same action constants \( a_1, \ldots, a_k \) and the same atoms \( A_1, \ldots, A_m \) are strongly equivalent to each other. The rules contributed to \( PN_l(D) \) by (7) can be equivalently written as \[ \perp \leftarrow i : a_1, \ldots, i : a_k, i : A_1, \ldots, i : A_m. \] 6 Example: The Blocks World The description of the blocks world below ensures that every block belongs to a tower that rests on the table; there are no blocks or groups of blocks “floating in the air.” Let \( Blocks \) be a finite non-empty set of symbols (block names) that does not include the symbol \( Table \). The action description below uses the following fluent and action constants: • for each \( B \in Blocks \), regular fluent constant \( Loc(B) \) with domain \( Blocks \cup \{Table\} \), and statically determined Boolean fluent constant \( InTower(B) \); • for each \( B \in Blocks \) and each \( L \in Blocks \cup \{Table\} \), action constant \( Move(B, L) \). In the list of static and dynamic laws, \( B, B_1 \) and \( B_2 \) are arbitrary elements of Blocks, and \( L \) is an arbitrary element of Blocks \( \cup \{ \text{Table} \} \). Two different blocks cannot rest on the same block: impossible \( \text{Loc}(B_1) = B, \text{Loc}(B_2) = B \) \( \quad (B_1 \neq B_2) \). The definition of \( \text{InTower}(B) \): \[ \begin{align*} \text{InTower}(B) & \quad \text{if Loc}(B) = \text{Table}, \\ \text{InTower}(B) & \quad \text{if Loc}(B) = B_1, \text{InTower}(B_1), \\ \text{default} & \quad \sim \text{InTower}(B). \end{align*} \] Blocks don’t float in the air: impossible \( \sim \text{InTower}(B) \). The commonsense law of inertia: inertial \( \text{Loc}(B) \). The effect of moving a block: \( \text{Move}(B, L) \) causes \( \text{Loc}(B) = L \). A block cannot be moved unless it is clear: nonexecutable \( \text{Move}(B, L) \) if \( \text{Loc}(B_1) = B \).
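As a sanity check on these static laws (our illustration, not from the paper; the function name `states` is ours), the states of this description can be enumerated by brute force: an assignment of locations to blocks is a state exactly when no two blocks rest on the same block and the least fixpoint of the InTower definition covers every block.

```python
from itertools import product

# Sketch (our illustration): enumerate the states of the blocks-world
# description by brute force.  A state assigns each block a location
# (another block or the table) such that the static laws hold:
#   - no two blocks rest on the same block, and
#   - every block is InTower, i.e. the least fixpoint of the recursive
#     InTower definition covers all blocks (no "floating" groups).

def states(blocks):
    table = "table"
    locations = list(blocks) + [table]
    result = []
    for locs in product(locations, repeat=len(blocks)):
        loc = dict(zip(blocks, locs))
        if any(loc[b] == b for b in blocks):
            continue  # a block cannot rest on itself
        on = [l for l in locs if l != table]
        if len(on) != len(set(on)):
            continue  # impossible Loc(B1)=B, Loc(B2)=B   (B1 != B2)
        # least fixpoint of the recursive InTower definition
        intower = {b for b in blocks if loc[b] == table}
        changed = True
        while changed:
            new = {b for b in blocks if loc[b] in intower} - intower
            intower |= new
            changed = bool(new)
        if intower == set(blocks):  # impossible ~InTower(B)
            result.append(loc)
    return result

print(len(states(["b1", "b2", "b3"])))  # 13 configurations of 3 blocks
```

The fixpoint test is what excludes cyclic configurations such as b1 on b2 and b2 on b1, which pass the injectivity check but float in the air; for three blocks the enumeration yields the 13 configurations mentioned below.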
Here is a representation of the logic programs \( PN_l(D) \) (Section 4), for this action description \( D \), in the input language of the grounder GRINGO:\(^2\)

% declarations of variables for steps,
% blocks, and locations
step(0..l). #domain step(I).
block(b(1..n)). #domain block(B).
#domain block(B1). #domain block(B2).
location(X) :- block(X). location(table).
#domain location(L).
% translations of static laws
:- loc(B1,B,I), loc(B2,B,I), B1!=B2.
intower(B,true,I) :- loc(B,table,I).
intower(B,true,I) :- loc(B,B1,I), intower(B1,true,I).
{intower(B,false,I)}.
:- intower(B,false,I).
% translations of dynamic laws
{loc(B,L,I+1)} :- loc(B,L,I), I<l.
loc(B,L,I+1) :- move(B,L,I), I<l.
:- move(B,L,I), loc(B1,B,I), I<l.
% standard choice rules
{loc(B,L,0)}.
{move(B,L,I)} :- I<l.
% uniqueness and existence of value
:- not 1{loc(B,LL,I) : location(LL)}1.
:- not 1{intower(B,false,I), intower(B,true,I)}1.

The values of the symbolic constants \( l \) (the number of steps) and \( n \) (the number of blocks) are supposed to be specified in the command line. The stable models generated by an answer set solver for this input file will represent all trajectories of length \( l \) in the transition system corresponding to the blocks world with \( n \) blocks. For instance, if we ground this program with the GRINGO options \(-c \ l=0 \ -c \ n=3\) then the resulting program will have 13 stable models, corresponding to all possible configurations of 3 blocks. The rules involving intower can be written more economically if we use strong (classical) negation and replace \( \text{intower}(B,\text{true},I), \text{intower}(B,\text{false},I) \) with \( \text{intower}(B,I), \sim \text{intower}(B,I) \). That would make the uniqueness of value constraint for intower redundant. 7 Example: A Leaking Container The example above includes the inertiality assumption for all regular fluents. In some cases, the commonsense law of inertia for a regular fluent is not acceptable and needs to be replaced by a different default.
Consider, for instance, a container of capacity \( n \) that has a leak, so that it loses \( k \) units of liquid per unit of time, unless more liquid is added. This domain can be described using the regular fluent constant \( \text{Amt} \), with domain \( \{0, \ldots, n\} \), for the amount of liquid in the container, and the action constant FillUp. There are two dynamic laws: \[ \begin{align*} & \text{default } \text{Amt} = \max(a-k,0) \text{ after } \text{Amt} = a \quad (a=0, \ldots, n), \\ & \text{FillUp causes } \text{Amt} = n. \end{align*} \] (When \( k = 0 \), the first of them turns into inertial \( \text{Amt} \).) Consider the following temporal projection problem involving this domain, with \( n = 10 \) and \( k = 3 \): initially the container is full, and it is filled up at time 3; we would like to know how the amount of liquid in the container will change with time. The program below consists of the rules of \( PN_l(D) \) and rules encoding the temporal projection problem.

% declarations of variables for steps
% and amounts
step(0..l). #domain step(I).
amount(0..n). #domain amount(A).
% translations of dynamic laws
{amt(AA,I+1)} :- amt(A,I), AA=(|A-k|+(A-k))/2, I<l.
amt(n,I+1) :- fillup(I), I<l.
% standard choice rules
{amt(A,0)}.
{fillup(I)} :- I<l.
% uniqueness and existence of value
:- not 1{amt(AA,I) : amount(AA)}1.

(The arithmetic expression (|A-k|+(A-k))/2 computes \( \max(A-k,0) \).) A multi-valued signature is a set $\sigma$ of symbols, called constants, along with a nonempty finite set $\text{Dom}(c)$ of symbols, disjoint from $\sigma$, assigned to each constant $c$ and called the domain of $c$. An atom of the signature $\sigma$ is an expression of the form $c = v$ (“the value of $c$ is $v$”), where $c \in \sigma$ and $v \in \text{Dom}(c)$. If $\text{Dom}(c) = \{\mathbf{f}, \mathbf{t}\}$ then we say that the constant $c$ is Boolean. A multi-valued formula is a propositional combination of atoms. (Note that the symbol $\neg$ in multi-valued formulas corresponds to negation as failure in logic programs.)
A multi-valued interpretation of $\sigma$ is a function that maps every element of $\sigma$ to an element of its domain. An interpretation $I$ satisfies an atom $c = v$ if $I(c) = v$. The satisfaction relation is extended from atoms to arbitrary formulas according to the usual truth tables for the propositional connectives. The reduct $F^I$ of a multi-valued formula $F$ relative to a multi-valued interpretation $I$ is the formula obtained from $F$ by replacing each maximal subformula that is not satisfied by $I$ with $\bot$. We say that $I$ is a stable model of $F$ if $I$ is the only interpretation satisfying $F^I$.\(^3\) Consider the multi-valued signature consisting of - the constants $i : f$ for nonnegative integers $i \leq l$ and all fluent constants $f$, with the same domain as $f$, and - the Boolean constants $i : a$ for nonnegative integers $i < l$ and all action constants $a$. If $F$ is a propositional combination of atoms $f = v$ and action constants, then $i : F$ stands for the formula of this signature obtained from $F$ by prepending $i :$ to every fluent constant and to every action constant. For any action description $D$, by $\text{MV}_l(D)$ we denote the conjunction of the following multi-valued formulas: - the translations $$i : A_1 \land \cdots \land i : A_m \land \neg\neg\, i : A_{m+1} \land \cdots \land \neg\neg\, i : A_n \rightarrow i : A_0$$ ($i \leq l$) of all static laws (1) from $D$, - the translations $$i : A_1 \land \cdots \land i : A_m \land \neg\neg\, (i+1) : A_{m+1} \land \cdots \land \neg\neg\, (i+1) : A_n \rightarrow (i+1) : A_0$$ ($i < l$) of all dynamic laws (2) from $D$, - the formulas $\neg\neg\,(0 : A) \rightarrow 0 : A$ for every atom $A$ containing a regular fluent constant, and - the formulas $\neg\neg\,(i : a = v) \rightarrow i : a = v$ for every action constant $a$, each $v \in \{\mathbf{f}, \mathbf{t}\}$, and every $i < l$. No existence or uniqueness of value formulas are needed here: by definition, a multi-valued interpretation assigns every constant exactly one value.
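The reduct-based definition of stable models above can be executed directly by brute force on small signatures. The following Python sketch is our illustration (the tuple encoding of formulas and all names are ours, not the paper's); it checks that the inertia default of Section 3, encoded as multi-valued formulas, forces the value of a fluent to persist from time 0 to time 1.

```python
from itertools import product

# Sketch (our illustration): brute-force the stable models of a
# multi-valued formula.  Formulas are nested tuples:
#   ("atom", c, v)  the atom c = v
#   ("not", F), ("and", F, G), ("imp", F, G), and the constant "bot".
# An interpretation maps every constant to an element of its domain.

BOT = "bot"

def sat(i, f):
    if f == BOT:
        return False
    if f[0] == "atom":
        return i[f[1]] == f[2]
    if f[0] == "not":
        return not sat(i, f[1])
    if f[0] == "and":
        return sat(i, f[1]) and sat(i, f[2])
    return (not sat(i, f[1])) or sat(i, f[2])  # "imp"

def reduct(i, f):
    # replace each maximal subformula not satisfied by i with bot
    if not sat(i, f):
        return BOT
    if f[0] == "atom":
        return f
    if f[0] == "not":
        return ("not", reduct(i, f[1]))
    return (f[0], reduct(i, f[1]), reduct(i, f[2]))

def interpretations(domains):
    consts = sorted(domains)
    for vals in product(*[domains[c] for c in consts]):
        yield dict(zip(consts, vals))

def stable_models(formula, domains):
    models = []
    for i in interpretations(domains):
        r = reduct(i, formula)
        satisfiers = [j for j in interpretations(domains) if sat(j, r)]
        if satisfiers == [i]:  # i is the only interpretation satisfying F^I
            models.append(i)
    return models

def atom(c, v): return ("atom", c, v)
def nn(f): return ("not", ("not", f))
def conj(fs):
    out = fs[0]
    for g in fs[1:]:
        out = ("and", out, g)
    return out

# choice for the initial value of f, plus "default f=v after f=v" (inertia):
domains = {"0:f": [1, 2], "1:f": [1, 2]}
laws = []
for v in [1, 2]:
    laws.append(("imp", nn(atom("0:f", v)), atom("0:f", v)))           # choice
    laws.append(("imp", ("and", atom("0:f", v), nn(atom("1:f", v))),
                 atom("1:f", v)))                                       # inertia
print(stable_models(conj(laws), domains))  # f keeps its initial value
```

The two stable models are exactly the inertial histories: the fluent stays at 1 or stays at 2, while interpretations that change the value are not stable because the changed value has no support in the reduct.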
The stable models of the program \( PN_l(D) \) from Section 4 can be obtained from the (complete) answer sets of \( PS_l(D) \) (the translation of \( D \) into logic programs with strong negation mentioned in the introduction) by removing all negative literals: Theorem 5 A set \( X \) of atoms of the signature \( \sigma_{D,l} \) is a stable model of \( PN_l(D) \) iff \( X \cup \{ \neg A \mid A \in \sigma_{D,l} \setminus X \} \) is an answer set of \( PS_l(D) \). It follows that the translation \( PN \) in the definition of \( T(D) \) can be replaced with the translation \( PS \). 9 Translation into the Language of Multi-Valued Formulas Multi-valued formulas are defined in [Giunchiglia et al., 2004, Section 2.1], and the stable model semantics is extended to such formulas in [Bartholomew and Lee, 2012]. \footnote{This formulation is based on the characterization of the stable model semantics of multi-valued formulas given in [Bartholomew and Lee, 2012, Theorem 5].} 10 Relation to B The version of the action language B referred to in this section is defined in [Gelfond and Lifschitz, 2012]. For any action description D in the language B, by \( D_< \) we denote the result of replacing each negative literal \( \neg f \) in D with the atom \( \sim f \) (that is, \( f = \mathbf{f} \)). The abbreviations introduced in Sections 2 and 5 above allow us to view \( D_< \) as an action description in the sense of BC, provided that all fluent constants are treated as regular and Boolean. We define the translation of D into BC as the result of extending \( D_< \) by adding the inertiality assumptions (5) for all fluent constants f. We will loosely refer to states and transitions of the transition system represented by D as states and transitions of D. To state the claim that this translation preserves the meaning of D, we need to relate states and transitions in the sense of the semantics of B to states and transitions in the sense of Section 4. In B, a state is a consistent and complete set of literals \( f, \neg f \) for fluent constants f.
For any set s of atoms $f$, $\sim f$, by $s_<$ we denote the set of literals obtained from s by replacing each atom $\sim f$ with the negative literal $\neg f$. Furthermore, an action in B is a consistent and complete set of literals $a$, $\neg a$ for action constants a.

Theorem 7 For any action description D in the language B, (a) a set s of atoms is a state of the translation of D into the language BC iff $s_<$ is a state of D; (b) for any sets $s_0, s_1$ of atoms and any set $\alpha$ of action constants, $(s_0, \alpha, s_1)$ is a transition of the translation of D into the language BC iff $$\langle s_{0<},\ \alpha \cup \{\neg a \mid a \notin \alpha\},\ s_{1<} \rangle$$ is a transition of D.

The description of the blocks world from Section 6 does not correspond to any B-description, in the sense of this translation, for two reasons. First, some fluent constants in it are not regular: it uses statically determined fluents InTower(B), defined recursively in terms of Loc(B). They are similar to “defined fluents” allowed in the extension of B introduced in [Gelfond and Inclezan, 2009]. Second, some fluent constants in it are not Boolean: the values of Loc(B) are locations. The leaking container example (Section 7) does not correspond to any B-description either: the regular fluent Amt is not Boolean, and the default describing how the value of this fluent changes is different from the commonsense law of inertia. An alternative approach to describing the leaking container is based on an extension of B by “process fluents,” called $\mathcal{H}$ [Chintabathina et al., 2005].

11 Relation to C+

The semantics of C+ is based on the idea of universal causation [McCain and Turner, 1997]. Formal relationships between universal causation and stable models are investigated in [McCain, 1997; Ferraris et al., 2012], and it is not surprising that a large fragment of BC is equivalent to a large fragment of C+.
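The bookkeeping in Theorem 7 is plain set manipulation. Below is a small hypothetical sketch (our own; the ASCII prefix `-` stands for $\neg$, `~` for the BC atom, and the fluent/action names are illustrative) of the two mappings it uses: $s_<$ on states, and the completion of a set $\alpha$ of executed actions to a consistent and complete B action:

```python
def to_b_state(s):
    """s_<: replace each BC atom '~f' with the negative literal '-f' (for ¬f)."""
    return {"-" + a[1:] if a.startswith("~") else a for a in s}

def to_b_action(alpha, action_constants):
    """Complete a set alpha of executed actions to a consistent and complete
    set of action literals, as required on the B side of Theorem 7(b)."""
    return {a if a in alpha else "-" + a for a in action_constants}

# Illustrative fluents and actions (not from the paper's examples):
print(to_b_state({"alive", "~loaded"}))          # {'alive', '-loaded'}
print(to_b_action({"load"}, {"load", "shoot"}))  # {'load', '-shoot'}
```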
In C+, just as in BC, some fluent symbols can be designated as “statically determined.” Other fluents are called “simple” in C+; they correspond to regular fluents in our terminology. Fluent symbols in C+ may be non-exogenous; in our first version of BC such fluents are not allowed. Action symbols in C+ may be non-Boolean; in this respect, that language is more general than the version of BC defined above. Consider a BC-description such that, in each of its static laws (1), $m = 0$. In other words, we assume that every static law has the form $$A_0 \text{ ifcons } A_1, \ldots, A_n. \tag{8}$$ Such a description can be translated into C+ as follows:

• all action constants are treated as Boolean;
• every static law (8) is replaced with caused $A_0$ if $A_1 \land \cdots \land A_n$;
• every dynamic law (2) is replaced with caused $A_0$ if $A_{m+1} \land \cdots \land A_n$ after $A_1 \land \cdots \land A_m$;
• for every action constant a, exogenous a is added.

Theorem 8 For any action description D in the language BC such that in each of its static laws (1) $m = 0$, (a) the states of the translation of D into the language C+ are identical to the states of D; (b) the transitions of the translation of D into the language C+ can be characterized as the triples $$(s_0,\ \{a = \mathbf{t} \mid a \in \alpha\} \cup \{a = \mathbf{f} \mid a \in \sigma^A \setminus \alpha\},\ s_1)$$ for all transitions $(s_0, \alpha, s_1)$ of D.

This translation is applicable, for instance, to the leaking container example. The description of the blocks world from Section 6 cannot be translated into C+ in this way, because the static laws in the recursive definition of InTower(B) violate the condition $m = 0$.
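The translation into C+ described above is purely syntactic and easy to mechanize. The sketch below is our own rendering (function name, law encoding, and the string output format are all our conventions, not the paper's or any C+ tool's concrete syntax); it emits C+-style causal laws as strings:

```python
def bc_to_cplus(static_laws, dynamic_laws, action_constants):
    """Translate a BC description whose static laws have m = 0 into C+-style laws.

    static_laws:  list of (A0, [A1, ..., An])            -- 'A0 ifcons A1,...,An'
    dynamic_laws: list of (A0, [A1..Am], [Am+1..An])     -- 'A0 after A1,...,Am
                                                             ifcons Am+1,...,An'
    action_constants: set of action constant names (all treated as Boolean).
    """
    conj = lambda atoms: " & ".join(atoms) if atoms else "true"
    # static law (8)  ->  caused A0 if A1 & ... & An
    rules = [f"caused {a0} if {conj(ifcons)}" for a0, ifcons in static_laws]
    # dynamic law (2) ->  caused A0 if Am+1 & ... & An after A1 & ... & Am
    rules += [
        f"caused {a0} if {conj(ifcons)} after {conj(after)}"
        for a0, after, ifcons in dynamic_laws
    ]
    # every action constant is made exogenous
    rules += [f"exogenous {a}" for a in sorted(action_constants)]
    return rules

# A fragment in the spirit of the leaking container (names illustrative):
print(bc_to_cplus(
    [],
    [("Amt=2", ["Amt=3", "FillUp"], [])],
    {"FillUp"},
))  # ['caused Amt=2 if true after Amt=3 & FillUp', 'exogenous FillUp']
```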
We plan to define the syntax and semantics of BC with variables, in the spirit of [Lifschitz and Ren, 2007], using the generalization of stable models proposed in [Ferraris et al., 2011]. The version of the Causal Calculator described in [Casolary and Lee, 2011] will be extended to cover the expressive capabilities of BC.

Acknowledgements

Joohyung Lee was partially supported by the National Science Foundation under Grant IIS-0916116 and by the South Korea IT R&D program MKE/KIAT 2010-TD-300404-001. Many thanks to Michael Gelfond and to the anonymous referees for valuable advice.

References
This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.

Author(s): Merikoski, Helena; Savolainen, Paula; Ahonen, Jarmo J.
Title: Suppliers’ software development project start-up practices
Year: 2017

Please cite the original version. All material supplied via JYX is protected by copyright and other intellectual property rights, and duplication or sale of all or part of any of the repository collections is not permitted, except that material may be duplicated by you for your research use or educational purposes in electronic or print form. You must obtain permission for any other use. Electronic or print copies may not be offered, whether for sale or otherwise, to anyone who is not an authorised user.

Suppliers’ Software Development Project Start-up Practices

<table> <thead> <tr> <th>Journal:</th> <th>International Journal of Managing Projects in Business</th> </tr> </thead> <tbody> <tr> <td>Manuscript ID</td> <td>IJMPB-10-2016-0083.R1</td> </tr> <tr> <td>Manuscript Type:</td> <td>Research Paper</td> </tr> <tr> <td>Keywords:</td> <td>Supplier, Software development project, Project start-up, Practices</td> </tr> </tbody> </table>

Abstract

Purpose – The purpose of this paper is to present a life cycle phase of a software development project which is substantial for the success of the project. This paper visualizes the project start-up phase from the suppliers’ perspective.

Design/methodology/approach – The method is theory building from case studies. The data was collected from three software supplier firms by conducting process modelling separately in each firm.

Findings – The study resulted in a model of a supplier’s software project start-up which includes start-up practices and the roles involved.
The results indicate that project start-up is an integral and structured phase of the project life cycle, which influences the execution of a software development project, especially from the supplier’s perspective in the project business context.

Research limitations/implications – The study focuses on the start-up phase of software development projects delivered to external customers. Therefore, the developed project start-up model is applicable as such in software supplier firms.

Practical implications – The project start-up model presented in this paper indicates that project start-up is a complex and multi-dimensional activity in a supplier firm. This study suggests that if the project start-up phase is clearly defined, planned and followed in a supplier firm, it reduces confusion and miscommunication among the people involved in the project and helps to achieve the business goals of the project.

Originality/value – This study emphasizes that it is necessary to make a distinction between the perspectives of the customer and the supplier when studying projects in the project business context. The findings contribute new knowledge on managing outsourced software development projects.

Keywords Supplier, Software development project, Project start-up, Practices

Paper type Research paper

Introduction

The trend of procuring software development from outside sources is increasing (Crow, Muthuswamy 2014, Lee 2008). Hence, studies on outsourced software development have been published increasingly during the last decade (Mehta, Bharadwaj 2015). In outsourcing situations, there are at least two parties involved, a customer and a supplier¹, with different roles and responsibilities (Liu, Yuliani 2016). Having two parties involved with different roles and responsibilities means that there are two parties with different perspectives.
The existence of two different perspectives has been brought out, for example, in the studies of Taylor (2005) and Liu and Yuliani (2016) on risks in outsourced IT projects. Taylor (2005) highlighted differences in project risks between the customer’s and the supplier’s sides. Respectively, Liu and Yuliani (2016) found that the risks are different from the point of view of the customer and the supplier in outsourced IT projects. These studies emphasize the different perspectives of the customer and the supplier. Therefore, it is important to distinguish between the customer’s and the supplier’s perspectives on outsourced software development projects. Although there are numerous studies on outsourcing and software development projects in general (Alsudairi, Dwivedi 2010, Hätönen, Eriksson 2009, Aubert, Rivard et al. 2004), studies on outsourced software development projects from the supplier’s perspective have been rare (Taylor 2007, Savolainen, Ahonen et al. 2012, Lee 2008, Levina, Ross 2003). Therefore, this paper concentrates on the supplier’s perspective by considering the commercial relationship between two parties, a customer and a supplier. The commercial relationship between a customer and a supplier entails that a software development project is managed and conducted by a supplier firm and the end product of the project is delivered to an external customer (Kishore, Rao et al. 2003). For a supplier operating in the software industry, outsourcing means business, where it delivers projects to external customers. Thus, project deliveries are one source of many supplier firms’ revenues and the backbone of their business (Artto, Valtakoski et al. 2015, Kujala, Ahola et al. 2013, Andersen, Jessen 2003). Hence, it is essential for the supplier firm to be able to market and sell projects to customers (Jalkala, Cova et al. 2010), because there is no project before a sales case has been successfully completed (Turkulainen, Kujala et al. 2013).
A successful sales case means that the supplier firm gets an order from the customer (Cooper, Budd 2007). After the customer has placed an order, the supplier firm starts preparations for the project. These preparations should be fast, cost-effective and cover the required steps. These preparatory actions take place between the project’s sales and execution phases. The interface between the supplier firm’s sales operations and project operations has started to receive increasing attention at both the organizational and project levels (Cova, Salle 2005, Cooper, Budd 2007, Turkulainen, Kujala et al. 2013, Artto, Valtakoski et al. 2015). Even though the importance of the interface between the supplier firm’s project sales and project execution phases has been noticed, empirical studies on the topic are sparse (Savolainen 2011). However, it has been noted that the supplier firm has a great responsibility for ensuring that the software development project delivery fulfils both the customer’s and the supplier’s objectives (Lee 2008). Therefore, it is important for the supplier firm to be able to manage the project from the beginning to the delivery. From the supplier’s perspective, the software development project begins after a successful sales case with the project start-up phase. In brief, the project start-up phase has been identified earlier (Fangel 1991) and its importance for software development projects from the supplier’s perspective has been highlighted (Savolainen, Ahonen et al. 2015). However, it is still unclear what happens in a software supplier firm after it receives an order from the customer. Especially vague are the first actions that the supplier firm performs after receiving an order from an external customer. To gain a better understanding of the project start-up phase from the supplier’s perspective, we formulated the following research question: What happens in a software supplier firm during the project start-up phase?
Thus, to achieve our goal and to find the answer to our research question, we studied the software development project start-up phase in supplier firms. In the next section, the background from the relevant literature is given. After that, the methodology of this study is described, and then the results are described in detail. Finally, the last sections concentrate on discussion, conclusions and future work.

Background

Software which is delivered to an external customer is usually developed in projects (Gottschalk, Karlsen 2005, Karlsen, Gottschalk 2006). As the focus of our study is on software development projects, we adopted the definition of a project from the standard for software development (ISO/IEC 2008a), which defines a project as an "endeavour with defined start and finish dates undertaken to create a product or service in accordance with specified resources and requirements". Here, the project start date is when the supplier firm receives the order from the customer, and the finish date is when the customer pays the supplier for the delivered project. Because most of the software development work is conducted in projects, suppliers are often project-based firms, as they organize their business operations in projects (Mutka, Aaltonen 2013, Artto, Valtakoski et al. 2015). When a supplier firm conducts all or at least some parts of its business through projects, the firm conducts project business (Artto, Wikström 2005, Hobday 2000). Project business is defined in general as (Artto, Wikström 2005): “the part of business that relates directly or indirectly to projects, with a purpose to achieve objectives of a firm or several firms”. From now on, we use the term ‘project business context’ to emphasise the supplier’s perspective. For project business research, there is a conceptual framework supporting scholars in positioning their research within four major research areas.
These research areas are management of a project, management of a project-based firm, management of a project network, and management of a business network (Artto, Kujala 2008). Here, as the focus of this study is the management of outsourced projects in the project business context, we found the project business framework developed by Artto and Kujala (2008) helpful for positioning our research within one of the four research areas which they have defined. The most relevant research area for this study is management of a project. It seeks answers to the question of how to manage a single project effectively and successfully (Artto, Kujala 2008). Although the topic has been studied extensively, it is still relevant in the case of outsourced software development projects, which have a reputation for failing. Thus, our study is about how to manage a single project in a project business context, and we chose the perspective of a supplier firm. Recently, Hobbs and Besner (2016) have highlighted that projects delivered to internal and external customers differ in how they are managed. In the project business context, the supplier firm starts preparations for the project after a successful completion of a sales case (Turkulainen, Kujala et al. 2013). In the case of software development projects, the project agreement is usually incomplete at the very beginning of a project because of the complex nature of software delivery (Kujala, Nystén-Haarala et al. 2015). In practice, after the customer has placed an order, the project is transferred from sales operations to project operations within the supplier firm (Skaates, Tikkanen et al. 2002). Thus, the transition from sales operations to project operations within the supplier firm means that each project passes through a specific phase. In this paper, this transition is called the project start-up phase, where we adopted the terminology from Fangel (1984, 1991).
Fangel (1991) defines the project start-up phase as “a unified and systematic management process which quickly generates a platform for taking off and getting going effectively”. Thus, the purpose of the project start-up phase is to create the conditions for the success of the project. The basis for understanding project start-up was presented by the INTERNET Committee on Project Start-up, which was founded at the end of 1984. This work can be found in the book ‘Handbook of Project Start-up: How to launch any phase effectively’ (Fangel 1990). It contains several abstracts, articles, and reports which were written for workshops, congresses, symposia and conferences on this theme during 1981-1988. In addition, earlier research has described project start-up in general terms. Silvasti (1987) has studied the project start-up phase in small delivery projects. Egginton (1996) has studied project start-up in large international projects. The results of a study made by Halman and Burger (2002) indicate that project start-up helps to gain a better understanding of a project. Different methods for project start-up have been introduced, for example workshops, reports and ad hoc assistance (Turner 2009). More recently, a study focused on software development projects in the project business context suggested that by investing in the start-up phase of the project, the supplier firm is better placed to achieve the business objectives of the project (Savolainen, Ahonen et al. 2015). In addition to the sparse research on the topic, project start-up is not described in detail in the standards. Even though project management standards such as PMBOK (The Project Management Body of Knowledge) (PMI 2013), PRINCE2 (Office of Government Commerce 2009) and ISO 21500 (Guidance on project management) (ISO 2012) identify the early phases of a project, they do not provide guidance on the project start-up phase for a supplier firm.
Because of the general nature of these standards, they do not take different contexts into account. Therefore, the standards lack, for example, the project business context where marketing and sales precede every project. Thus, it is also somewhat surprising that the early phases of the project lifecycle are not taken into account in software development related standards and frameworks, such as CMMI (Chrissis, Konrad et al. 2009) and ISO/IEC/IEEE 16326 (ISO/IEC/IEEE 2009). The early phases of a project are discussed only in PMBOK (PMI 2013), PRINCE2 (Office of Government Commerce 2009) and ISO 21500 (ISO 2012). PMBOK (PMI 2013) and ISO 21500 (ISO 2012) describe initiating activities with the term ‘Initiating process group’, and PRINCE2 (Office of Government Commerce 2009) defines the processes of ‘Starting up a project’ and ‘Initiating a project’. Although project management and software development are comprehensively covered by different standards, the project start-up phase has been given very little attention in them. In addition, the current literature has not outlined what a successful supplier firm does during the project start-up phase. However, previous studies have implied that at least some administrative effort should be invested in order to get a complex task, such as a software development project, up and running (Barry, Mukhopadhyay et al. 2002). Thus, the early phases of the project lifecycle have been noted to be crucial for the success of a project (Kappelman, McKeeman et al. 2007). In addition, it has been noted that the selection of a project management approach during the project start-up phase increases the probability of project success (Rolstadås, Tommelein et al. 2014). Moreover, there has been some interest in the project start-up activities of supplier firms in the project business context.
Researchers have analysed failed software development projects and found that often the reason for failure can be traced to the start-up phase of the project (Ahonen, Savolainen 2010, Jørgensen 2014). In addition, researchers have highlighted that a software supplier firm encounters several challenges during the start-up phase of software development projects (Savolainen, Ahonen 2015). Those challenges include lost knowledge, communication problems, and resource management challenges, as has been discussed in recent studies (Turkulainen, Kujala et al. 2013, Savolainen, Ahonen 2015). Challenges during the project start-up phase may endanger the supplier’s business success at the organizational level as well as at the project level, and therefore, a well-organized project start-up is necessary for a supplier firm. To conclude, the results of earlier studies suggest that the project start-up phase from the supplier’s perspective requires more attention than it has been given. Therefore, we conducted this study to model the structure of the project start-up phase of a software development project delivered to an external customer. Even though the need for different project management practices in different projects in different contexts has been highlighted (Besner, Hobbs 2013), references to the activities or practices which supplier firms perform during the project start-up phase were not found. Thus, it can also be concluded that there is a need for a description of the actions performed to allow a supplier firm to start up projects quickly and cost-effectively. Consequently, our paper presents project start-up practices which offer one solution for this need.

Methodology

As the project start-up phase within a supplier firm is still not a well-researched phenomenon and the aim of our study is to gain a better understanding of it, we found it reasonable to study project start-up in natural settings together with the practitioners.
Usually, firms do not want outsiders to become familiar with their business in depth. During our study, there was an ongoing research project in which three software supplier firms were involved, and they were willing to participate in the study. This offered us the opportunity to study the project start-up phase in its natural settings and to see what practitioners do during this phase.

Building theories from cases

We chose theory building from case studies as the research strategy. According to Benbasat et al. (1987), the case study approach allows researchers to study a phenomenon in its natural settings and offers a relatively full understanding of it. Rowley (2002) has stated that case study research offers more detailed information about the studied phenomenon than survey research. In addition, Myers (2013) has stated that the complexity of the real-life context can be brought out with a research method where the researchers get to see the actions of practitioners in real-life situations. Further, it is known that it is possible to build theories from case studies (Eisenhardt 1989, Yin 2013). By applying this research strategy, it is possible to build a theory which is novel, testable and empirically valid (Eisenhardt 1989). This research strategy is especially suitable for research areas where existing theory is incomplete (Eisenhardt 1989), as it is in the case of the project start-up phase from the supplier's point of view. Thus, we found the theory building from case studies approach to be an applicable strategy for the needs of our study. The central element of building theories from cases is replication logic (Eisenhardt 1989, Yin 2013). Further, the use of multiple cases helps the researcher to build a more detailed theory than the use of a single case as the data source (Eisenhardt 1991).
Since three software supplier firms participated in our study, we got the opportunity to replicate the same study, which resulted in three independent case descriptions. These case descriptions laid the foundation for the theory building process. When conducting research in close collaboration with firms, it is important that data collection does not take more time than is needed and does not disturb the daily work of the firms. Thus, we wanted to use a data collection method which allowed us to collect detailed information about the project start-up phase efficiently. Therefore, we chose process modelling as the data collection method.

**Data collection**

Process modelling was the main data collection method and thus offered the primary data for our study. In addition to process modelling, the firms offered us their quality manuals and other project-related documentation for analysis. This additional information was the secondary data of our study. In addition to practical reasons, there were several reasons why we selected process modelling as the data collection method in this study. Firstly, to be able to understand and to improve the operations of any organization, it is important to have detailed models which describe its different processes (Giaglis 2001). Secondly, process modelling offers detailed knowledge of the different processes of organizations (Bandara, Gable et al. 2005). Thirdly, process models and process guides have been found to be useful in software firms to avoid problems in software project deliveries (Dingsøyr, Moe 2004). Fourthly, earlier experiences indicate that process modelling is an effective method for modelling processes quickly and cost-effectively (Dingsøyr, Moe 2004). In addition, it is important to define processes together with the people who will follow the defined process in their daily work (Dingsøyr, Moe 2004). We replicated the study by applying the same process modelling technique with each case firm.
We applied the process modelling technique LAPPI, which started to evolve in 1999, almost two decades ago (Raninen, Ahonen et al. 2013). The LAPPI technique has been developed through dozens of industrial cases, mainly in different IT organizations (Raninen, Ahonen et al. 2013). Nowadays, LAPPI is in active use in different software supplier firms (Raninen, Ahonen et al. 2013). The applied LAPPI technique is documented in detail elsewhere (Raninen, Ahonen et al. 2013).

Case firms

We collected the empirical evidence for this study from three software supplier firms. They all operate in a small European country and supply a wide variety of software development projects and related services to their customers. The firms are labeled here as Firm A, Firm B and Firm C. Firm A is a part of a subsidiary of a large, globally operating parent firm. The subsidiary has several business units around the country, which all operate independently. Each business unit operates around its own specific business area, which is part of the parent firm’s business. Firm A is one of these business units. The customers of Firm A are mainly medium-sized and large firms and public sector organizations. Its project deliveries are relatively large. The duration of projects varies from a few months to a few years. In addition, Firm A offers a wide variety of continuous and consulting services to its customers to complement its services. Since Firm A delivers large projects, it is important for the firm to ensure the profitability of the projects. If a large project fails or its financial result is not profitable, it may have a relatively significant impact on the financial performance of the firm. Therefore, Firm A wants to put effort into the start-up phase of projects, when it has better opportunities to affect the profitability of the projects than during the later phases of the project life cycle. Firm B is a medium-sized software supplier with offices in several locations.
Its customers are other firms and public sector organizations. Firm B offers software development and IT consulting services. The project deliveries of Firm B are small; the duration of projects varies from a few days to a few months. This means that they have very limited time for starting up projects after the customer has placed an order. To maintain the profitability of its projects, Firm B must start projects fast and efficiently, avoiding extra work and costs. Therefore, to be able to operate effectively, Firm B wants its project start-up phase to be well planned and carried out by following a certain routine. Firm C is a very small firm with fewer than ten employees. They have one office where all employees work. The project deliveries of Firm C include both hardware and software. Most of the customers of Firm C operate in the construction industry. The duration of the projects of Firm C varies greatly, depending on whether the customer is an existing or a completely new one to the firm. Since Firm C is very small and can deliver only a few projects annually, the profitability of each project is important for the continuity of the firm. Therefore, Firm C must ensure that its projects are profitable, and it wants to invest in the formalization of the start-up phase of its projects.

Model building

This study resulted in a model of a supplier’s software development project start-up phase. The model building followed the process which is presented in Figure 1.

Figure 1. A model building process

The basis of the model building was the firm-specific descriptions of the project start-up phase. To begin with, we conducted process modelling in all three firms (Firm A, Firm B and Firm C). We applied the same process modelling technique in each case. The process modellings resulted in descriptions of the case firms’ project start-up phases.
The firm-specific process descriptions included details of the project start-up practices, the roles of the people who carry them out and the information flows between the roles during the project start-up phase in each firm. We validated each of the firm-specific descriptions separately in the case firms. Based on the validated firm-specific descriptions, we built a model of project start-up by comparing the firm-specific descriptions and then integrating their commonalities into the model. In the first step of model building, two researchers (Researcher1 and Researcher2) worked independently, each producing a draft of a model. After this, during the second step, the same researchers (Researcher1 and Researcher2) compared their drafts of the model and formed a common vision together with two other researchers (Researcher3 and Researcher4), who had not been involved in the model building previously. In addition, we validated the model of project start-up separately in each case firm (Firm A, Firm B and Firm C). During the validation workshops, each firm gave improvement suggestions on the model. After the validation was done in the case firms, we produced the final version of the model of project start-up, which is described in detail in the next section.

**A model of a supplier’s software project start-up**

In the model, the project start-up phase begins in the supplier firm when the supplier has received an order from the customer, or when the sales case of the project is near its closure and the supplier can be sure that they are going to win the deal. There are altogether 16 practices in the model of project start-up, which is presented in Figure 2. There were initially fewer practices in the firm-specific process descriptions than in the project start-up model. This was because the firms initially combined several project start-up practices, but in the validation phase, these practices were divided into smaller entities.
Therefore, there are more practices in the project start-up model than in the firm-specific descriptions. In general, the term practice is widely used in the literature discussing the activities related to project management (Loo 2002, Besner, Hobbs 2008, Besner, Hobbs 2013). For this study, we adopted the term practice from CMMI (Chrissis, Konrad et al. 2009), where a “generic practice is the description of an activity that is considered important in achieving the associated generic goal”, because it describes the activities performed during the start-up phase of a software development project. According to this definition of the term practice, the implementation of a single project start-up practice must be well defined, planned and organized. Further, each project start-up practice receives information, either in verbal or documented form, as input from a person who has a role in project start-up. The output of a project start-up practice can be, for example, a project-related document, such as a project plan. It can also be a decision relating to the project, such as information about who the project manager of the project will be. The purpose of a single project start-up practice is to ensure that the issues associated with it are considered before the project begins, so that possible challenges and risks can be better managed during the execution of the project. Further, the purpose of the project start-up practices is to help the supplier firm to ensure the success of the software development project by setting up the project management environment for the project. The start-up practices help the supplier firm to prepare for the forthcoming project during the project start-up phase and to manage and develop its business. There is no strict execution order of the project start-up practices in the model, except for two practices: Inform Production Unit of Future Project and Organize Internal Kick-off Meeting.
The former begins the project start-up phase and the latter closes it in the supplier firm. It is noteworthy that the emphasis of the practices varies in each project.

Figure 2. Model of a supplier’s software project start-up

Most of the project start-up practices are carried out only in the supplier firm. Some of the practices require cooperation with the customer and third parties, such as subcontractors. There are six different roles involved in the project start-up practices. Four of them are internal (Sales Manager, Business Manager, Project Manager, Project Team) and two of the roles are external (Customer, 3rd Party). These roles are listed and connected to the project start-up practices in Table 1.

Table 1. Project start-up practices and participating roles.

Since the project start-up phase is an interface between sales operations and project operations, the practices in the model are mainly related to business and project management. Only a small part of the work during the start-up phase is related to software development. The business related practices direct the supplier’s business by helping to ensure the achievement of the business objectives of the project. The project management related practices establish the conditions for the successful management and execution of the project. The software development related practice ensures that the required technical environment is available for the project on the supplier’s side and on the customer’s side. To achieve the business objectives, the cooperation between the project sales team and the project team within the supplier firm is very important (Turkulainen, Kujala et al. 2013). Thus, the Inform Production Unit of Future Project practice is needed, as it begins the project start-up phase and builds a bridge between the supplier’s sales operations and project operations. When the project is transferred from sales operations to project operations, it is important that the project manager is appointed as soon as possible.
The project manager is one of the most important members of the project team in the supplier firm. Thus, the Appoint Project Manager practice requires attention during the project start-up phase. The project manager does not only manage the project, but he or she also manages the customer relationship and the business around the project. Therefore, the selection of a project manager is an important decision for the supplier firm (Mainela, Ulkuniemi 2013). The supplier would benefit if the project manager were appointed already during the sales phase of the project. Then, the project manager would be familiar with the project and the customer from the very beginning (Savolainen, Ahonen 2015). During the project start-up, seamless cooperation between the supplier’s sales team and the project team is required, so that the supplier can create the conditions for successful project delivery. The Transfer Project to Production practice helps to transfer the project and all project-related information from the sales team to the project team within the supplier firm. After that, the project team has responsibility for the project, and the role of the sales team will be primarily consultative. The cooperation between sales operations and project operations is essential to ensure that the solution which was sold to the customer is doable within the limits of the agreement (Ahonen, Savolainen 2010). Therefore, it is necessary to ensure that the supplier and the customer have achieved a consistent view of the content of the project agreement before project execution begins. Thus, the Prepare Project Agreement practice is necessary when the project is delivered to an external customer. During sales, the project is only a piece of paper (Mainela, Ulkuniemi 2013). After the sales case is successfully closed, the project becomes visible in the supplier firm when the information about the customer’s order is saved in the supplier’s information system.
The Save Order Information in System practice is needed to update the information about the customer in the system. The information about the customer supports customer relationship management and helps the supplier improve products, services and processes (Khodakarami, Chan 2014). During the project start-up phase, the supplier firm needs to assign and engage the project team in the forthcoming project. If there is a lack of requisite skills in the project team, the project manager should plan how they will be acquired on time (Kappelman, McKeeman et al. 2007). It should be noted that the supplier is usually under pressure to allocate resources for multiple simultaneous projects (Browning, Yassine 2010). Clearly defined responsibilities of a project team help to meet the cost and time targets of the project (Papke-Shields, Beise et al. 2010). The Allocate Resources for Project practice is needed to ensure that the requisite human resources are available at the right time during the execution phase of the project. In addition, if there are third parties involved in the project, the Manage 3rd Parties practice takes place at project start-up. This practice helps to plan and manage the cooperation which may be carried out with third parties during the forthcoming project. In the project business context, the supplier must be able to manage the discontinuity of customer relationships (Mainela, Ulkuniemi 2013). The continuity of customer relationships requires that customers are satisfied with the delivered projects (Narayanan, Balasubramanian et al. 2011). According to Bose (2002), customer relationship management involves tasks the purpose of which is to acquire, analyze and use knowledge about the customer to build and maintain long-term customer relationships. Therefore, the Manage Customer Relationship practice takes place during the project start-up phase. In addition to long-term customer relationships, the supplier strives for profitable business.
The Ensure Project Profitability practice helps to ensure the profitability of the project. Usually, the outcome of the project can be implemented in various ways. However, the supplier firm must offer the customer the option that produces the best possible result from the supplier’s business perspective. This requires clarifying the needs of the customer during the project start-up phase. Personal interaction between the supplier’s project manager and the representatives of the customer during the early phase of a project is important not only for the forthcoming project but also for the management of the customer relationship (Mainela, Ulkuniemi 2013). Meeting with the customer during project start-up helps the supplier to build trust in the customer relationship, which will help to increase the commitment of the customer to the project and may eventually lead to higher customer satisfaction (Smyth, Gustafsson et al. 2010). The Meet Customer practice helps to organize a meeting where the supplier can clarify unclear matters related to the project with the customer. The Define Technical Environment practice is the only practice which is directly related to the software development work. This practice ensures that the requisite technical environment is available during the execution phase of the project. The supplier firm must separately define the technical environment for development and testing and for the customer. If the technical environment of the forthcoming project is not defined with sufficient care during the project start-up phase, it may lead to project failure (Ahonen, Savolainen 2010). The identification of project risks during the project start-up phase increases the awareness of risks among the stakeholders of the project and contributes to the success of the project (De Bakker, Boonstra et al. 2010).
Thus, if the supplier is prepared for the potential risks and their occurrence at the very beginning of the project, the project has a better chance of being completed in accordance with the plan (Papke-Shields, Beise et al. 2010). Therefore, the Analyze Project Risks practice is necessary during the project start-up phase. When a project is delivered to an external customer, the supplier must report the progress of the project to the customer. Thus, the supplier must agree separately on its internal reporting practices and on reporting to the customer. If the supplier organizes regular project status meetings, the achievement of the project objectives is more likely (Papke-Shields, Beise et al. 2010). In addition, effective communication with the customer helps to increase the customer’s understanding of the progress of the project and leads to higher customer satisfaction (Papke-Shields, Beise et al. 2010). Therefore, the Plan Project Monitoring practice takes place during the project start-up phase. Although the project scope is defined in the project agreement, it may have changed during or after the sales phase of the project. Therefore, the scope of the project must be redefined during the start-up phase in cooperation between the customer and the supplier. The project will be more likely to be completed on time and on budget if changes to the scope of the project are planned and implemented in accordance with formal practices (Papke-Shields, Beise et al. 2010). Thus, the Redefine Project Scope practice is necessary during the project start-up. The customer–supplier context is the reason to re-write the project plan during the project start-up phase. It is a usual situation that, after the sales case is closed, the representatives in both the customer’s and the supplier’s organizations may change. Therefore, it is necessary to re-create the shared vision of the project between the customer and the supplier.
Thus, the Prepare Project Plan practice ensures that the project plan is updated. The Organize Internal Kick-off Meeting practice completes the project start-up phase, after which the project execution phase can begin.

Discussion

Previous research has revealed that the project start-up phase is essential to the success of the project. Thus, a disciplined project start-up is a prerequisite for successful project management. Further, successful project management is a prerequisite for the success of the project. Therefore, supplier firms should invest in the project start-up phase and follow formal practices. Even though project start-up related issues can be found in standards, such as PMBOK (PMI 2013), PRINCE2 (Office of Government Commerce 2009) and ISO 21500 (ISO 2012), earlier research has left the structure of, and the practices performed during, project start-up very vague. Thus, project start-up related issues have not been assembled together in the manner in which a supplier firm faces them after completing a successful sales case, before the actual project starts. Earlier studies have revealed that project success can be endangered if the start-up is not properly performed (Ahonen, Savolainen 2010, Rolstadås, Tommelein et al. 2014). In addition, the boundaries of the supplier’s software development project start-up phase have been defined, and action points in the project start-up process have been identified (Savolainen, Ahonen et al. 2015). Based on our findings reported in this paper, it can be said that there is a structured project start-up phase between the project sales and execution phases within the supplier firm. Our study indicates that, during the project start-up phase, the supplier firm implements several start-up practices before the project starts, and they are repeated for every project which is delivered to an external customer.
It should, however, be noted that although our study reveals the practices that are essential for the project start-up phase, the Project Manager and Project Team do not perform all of them. Certain practices are performed by other internal roles, and the external Customer is involved in some of the supplier’s software development project start-up practices, such as Prepare Project Agreement, Allocate Resources for Project, Meet Customer, Define Technical Environment, Plan Project Monitoring, Redefine Project Scope and Prepare Project Plan. Therefore, successful project start-up requires cooperation at the organizational level as well as at the project level, with different units within the supplier firm and with the customer. As several roles are involved during the supplier’s project start-up, this raises the question of costs and how they are covered. The question of cost allocation was also discussed in Fulford (2013) and Savolainen et al. (2015). Thus, we assume that the effort used for project start-up practices may not be logged as costs for the project. In other words, these costs are likely to be considered a part of the general overhead, although they are clearly related to individual projects and will influence the profitability of the project. The project start-up practices presented in this paper suggest that project start-up is a more complex and multi-dimensional activity in supplier firms than one would expect. However, if the project start-up practices are clearly defined and followed, confusion and miscommunication among the people involved in the project are reduced. This further reduces challenges during project implementation and helps software suppliers to achieve the business goals of a project. Given the importance of project start-up as an interface between sales operations and project operations, it is unclear why project start-up has been a neglected subject in the related standards and previous research.
One of the reasons might be that research has not distinguished between the suppliers’ and customers’ perspectives on projects. Our study is one step among others in making the supplier’s project start-up phase visible.

Conclusions and future work

The aim of this study was to answer the question of what happens in a software supplier firm during the project start-up phase. To find the answer to the research question, we modelled the software development project start-up phase in three supplier firms and built a model of project start-up. Our study contributes to the knowledge of project management by building a theory of project start-up from three case studies. The project start-up model offers a missing piece to the theory of project management in the project business context. The process modelling technique we applied in this study is documented in detail, and the study can be replicated in other supplier firms by following a similar process. Our study is a good starting point for project start-up research, and our results can later be compared with both smaller and larger software supplier firms to gain a better understanding of what happens before a software project starts. In addition, the findings of our study confirm what previous studies have shown and answer the research question. In previous studies, project start-up has been identified and defined, but its structure and practices have not been established from the supplier’s perspective in the project business context. Thus, our study deepened our understanding of the software development project start-up phase, especially from the supplier firms’ point of view. Our study was conducted in close cooperation with the practitioners working in the software supplier firms. The study was conducted as a multiple case study, and the data collection method was process modelling implemented in the software supplier firms. Finally, we followed the theory building process to develop the model of the project start-up phase.
The results of our study indicate that project start-up is a structured phase of a project’s life cycle and includes several practices. We used the project business framework (Artto, Kujala 2008) to position our study in a particular research area. We found the research area of management of a project relevant for this study. The findings contribute new knowledge to this research area, which has gained wide interest among researchers and practitioners for decades. Our study emphasizes that it is necessary to make a distinction between the perspectives of the customer and the supplier when studying projects in the project business context. This point has also been highlighted recently in a study focusing on the customer’s and supplier’s risks in outsourced IT projects (Liu, Yuliani 2016). So far, the study of Liu and Yuliani (2016) is one of the best examples in which the different perspectives of the customer and the supplier have been separated from each other. As this study focused on the supplier’s perspective, it is important to continue to study this topic also from the customer’s point of view.

Acknowledgements

References

SAVOLAINEN, P., 2011. *Why do software development projects fail?: emphasising the supplier’s perspective and the project start-up*. University of Jyväskylä.

Figure 1. A model building process

- Firm A: process modelling (8 practices, 7 roles, 41 information flows)
- Firm B: process modelling (15 practices, 6 roles, 36 information flows)
- Firm C: process modelling (7 practices, 8 roles, 20 information flows)
- Building a Model of Project Start-up: a draft built by Researcher 1; a draft built by Researcher 2; a common vision of Researcher 1, Researcher 2, Researcher 3, and Researcher 4
- A Model of Project Start-Up
- Validation in Firm A, Firm B and Firm C
- A Validated Model of Project Start-Up

Figure 2.
Model of a supplier's software project start-up

- Inform Production Unit of Future Project
- Appoint Project Manager
- Meet Customer
- Redefine Project Scope
- Transfer Project to Production
- Manage Customer Relationship
- Manage 3rd Parties
- Prepare Project Agreement
- Define Technical Environment
- Ensure Project Profitability
- Save Order Information in System
- Analyze Project Risks
- Prepare Project Plan
- Allocate Resources for Project
- Plan Project Monitoring
- Organize Internal Kick-off Meeting

Table 1. Project start-up practices and participating roles.

<table>
<thead>
<tr>
<th>Practice</th>
<th>Internal role(s)</th>
<th>External role(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inform Production Unit of Future Project</td>
<td>Sales Manager, Business Manager</td>
<td></td>
</tr>
<tr>
<td>Appoint Project Manager</td>
<td>Business Manager, Project Manager</td>
<td></td>
</tr>
<tr>
<td>Transfer Project to Production</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td></td>
</tr>
<tr>
<td>Prepare Project Agreement</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td>Customer</td>
</tr>
<tr>
<td>Save Order Information in System</td>
<td>Project Manager</td>
<td></td>
</tr>
<tr>
<td>Allocate Resources for Project</td>
<td>Sales Manager, Business Manager, Project Manager, Project Team</td>
<td>Customer</td>
</tr>
<tr>
<td>Manage Customer Relationship</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td></td>
</tr>
<tr>
<td>Ensure Project Profitability</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td></td>
</tr>
<tr>
<td>Meet Customer</td>
<td>Project Manager, Project Team</td>
<td>Customer</td>
</tr>
<tr>
<td>Define Technical Environment</td>
<td>Project Manager, Project Team</td>
<td>Customer</td>
</tr>
<tr>
<td>Analyze Project Risks</td>
<td>Business Manager, Project Manager</td>
<td></td>
</tr>
<tr>
<td>Plan Project Monitoring</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td>Customer</td>
</tr>
<tr>
<td>Redefine Project Scope</td>
<td>Sales Manager, Business Manager, Project Manager</td>
<td>Customer</td>
</tr>
<tr>
<td>Manage 3rd Parties</td>
<td>Business Manager, Project Manager</td>
<td>3rd Party</td>
</tr>
<tr>
<td>Prepare Project Plan</td>
<td>Business Manager, Project Manager</td>
<td>Customer</td>
</tr>
<tr>
<td>Organize Internal Kick-off Meeting</td>
<td>Project Manager, Project Team</td>
<td></td>
</tr>
</tbody>
</table>
olmocr_science_pdfs
2024-12-10
2024-12-10
4008493042c876c034b8c2426e2f4ce9ebf06fac
AN APPROACH OF SOFTWARE ENGINEERING THROUGH MIDDLEWARE Ganesh S. Wedpathak, Sagar R. Mali, Sagar P. Mali Dept. of Computer Science, AITRC, Vita, India ABSTRACT The challenge for software engineering research is to devise notations, techniques, methods and tools for distributed system construction that systematically build and exploit the capabilities that middleware delivers. The construction of a large class of distributed systems can be simplified by leveraging middleware, which is layered between network operating systems and application components and which facilitates the communication and coordination of distributed components. Existing middleware products enable software engineers to build systems that are distributed across a local-area network. State-of-the-art middleware research aims to push this boundary towards Internet-scale distribution, adaptive and reconfigurable middleware, and middleware for dependable and wireless systems. I. INTRODUCTION Various commercial trends have led to an increasing demand for distributed systems. Firstly, the number of mergers between companies is continuing to increase. The different divisions of a newly merged company have to deliver unified services to their customers, and this usually demands an integration of their IT systems. The time available for delivery of such an integration is often so short that building a new system is not an option, and therefore existing system components have to be integrated into a distributed system that appears as an integrated computing facility. Secondly, the time available for providing new services is decreasing. Often this can only be achieved if components are procured off-the-shelf and then integrated into a system rather than built from scratch.
Components to be integrated may have incompatible requirements for their hardware and operating system platforms; they have to be deployed on different hosts, forcing the resulting system to be distributed. Finally, the Internet provides new opportunities to offer products and services to a vast number of potential customers. In this setting, it is difficult to estimate the scalability requirements. An e-commerce site that was designed to cope with a given number of transactions per day may suddenly find itself exposed to demand that is orders of magnitude larger. The required scalability cannot usually be achieved by centralized or client-server architectures but demands a distributed system. Distributed systems can integrate legacy components, thus preserving investment; they can decrease the time to market; and they can be scalable and tolerant of failures. The caveat, however, is that the construction of a truly distributed system is considerably more difficult than building a centralized or client/server system. This is because there are multiple points of failure in a distributed system, and system components need to communicate with each other through a network, which complicates communication and opens the door for security attacks. Middleware has been devised in order to conceal these difficulties from application engineers as much as possible. As they solve a real problem and simplify distributed system construction, middleware products are rapidly being adopted in industry [6]. In order to build distributed systems that meet their requirements, software engineers have to know what middleware is available, which one is best suited to the problems at hand, and how middleware can be used in the architecture, design and implementation of distributed systems. The principal contribution of this paper is an assessment of both the state of the practice, embodied in current middleware products, and the state of the art in middleware research.
Software engineers increasingly use middleware to build distributed systems. Any research into distributed software engineering that ignores this trend will only have limited impact. We, therefore, analyze the influence that the increasing use of middleware should have on the software engineering research agenda. We argue that requirements engineering techniques are needed that focus on non-functional requirements, as these influence the selection and use of middleware. We identify that software architecture research should produce methods that guide engineers towards selecting the right middleware and employing it so that it meets a set of non-functional requirements. We then highlight that the use of middleware is not transparent for system design and that design methods are needed that address this issue. This paper is further structured as follows. In Section 2, we discuss some of the difficulties involved in building distributed systems and delineate requirements for middleware. In Section 3, we use these requirements to attempt an assessment of the support that current middleware products provide for distributed system construction. We then present an overview of ongoing middleware research in Section 4 in order to provide a preview of what future middleware products might be capable of. In Section 5, we delineate a research agenda for distributed software engineering that builds on the capabilities of current and future middleware, and we conclude the paper in Section 6. II. REQUIREMENTS OF MIDDLEWARE In this section, we review the difficulties that arise during distributed system construction. We argue that it is too expensive and time-consuming if application designers have to resolve these problems by directly using network operating system primitives. Instead, they require middleware that provides higher-level primitives. This approach to distributed system construction with middleware is sketched in Figure 1.
Figure 1: Middleware in Distributed System Construction

Thus middleware is layered between network operating systems and application components [13]. Middleware facilitates the communication and coordination of components that are distributed across several networked hosts. The aim of middleware is to provide application engineers with high-level primitives that simplify distributed system construction. The idea of using middleware to build distributed systems is comparable to that of using database management systems when building information systems. It enables application engineers to abstract from the implementation of low-level details, such as concurrency control, transaction management and network communication, and allows them to focus on application requirements. NETWORK COMMUNICATION As shown in Figure 1, the different components of a distributed system may reside on different hosts. In order for the distributed system to appear as an integrated computing facility, the components have to communicate with each other. This communication can only be achieved by using network protocols, which are often classified by the ISO/OSI reference model [25]. Distributed systems are usually built on top of the transport layer, of which TCP and UDP are good examples. The layers underneath are provided by the network operating system. Different transport protocols have in common that they can transmit messages between different hosts. If the communication between distributed systems is programmed at this level of abstraction, application engineers need to implement the session and presentation layers themselves. This is too costly, too error-prone and too time-consuming. Instead, application engineers should be able to request parameterized services from possibly more than one remote component and may wish to execute them as atomic and isolated transactions, leaving the implementation of the session and presentation layers to the middleware.
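As a concrete illustration of this presentation-layer work, the following sketch (not from the paper; the `marshal`/`unmarshal` helpers and the service name are hypothetical) transforms a structured service request into a byte sequence suitable for a transport protocol, and back again on the receiving host:

```python
import json

def marshal(service: str, params: dict) -> bytes:
    """Flatten a structured service request into a byte sequence
    that a transport protocol such as TCP or UDP can carry."""
    return json.dumps({"service": service, "params": params}).encode("utf-8")

def unmarshal(payload: bytes) -> tuple:
    """Reverse transformation performed on the receiving host."""
    request = json.loads(payload.decode("utf-8"))
    return request["service"], request["params"]

# A client-side request with a complex parameter structure:
wire = marshal("transfer", {"from": "acct-1", "to": "acct-2", "amount": 100})
service, params = unmarshal(wire)
print(service, params["amount"])  # transfer 100
```

Real middleware performs this transformation transparently in generated code, so that application engineers never handle the byte sequence themselves.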
The parameters that a component requesting a service needs to pass to a component that provides a service are often complex data structures. The presentation layer implementation of the middleware should provide the ability to transform these complex data structures into a format that can be transmitted using a transport protocol, i.e. a sequence of bytes. This transformation is referred to as marshalling, and the reverse is called unmarshalling. COORDINATION By virtue of the fact that components reside on different hosts, distributed systems have multiple points of control. Components on the same host execute concurrently, which leads to a need for synchronization when components communicate with each other. This synchronization needs to be implemented in the session layer implementation provided by the middleware. Synchronization can be achieved in different ways. A component can be blocked while it waits for another component to complete execution of a requested service. This form of communication is often called synchronous. After issuing a request, a component can also continue to perform its operations and synchronize with the service-providing component at a later point. If this synchronization is initiated by the client component (using, for example, polling), the interaction is called deferred synchronous. Synchronization that is initiated by the server is referred to as asynchronous communication. Thus, application engineers need some basic mechanisms that support these various forms of synchronization between communicating components. Sometimes more than two components are involved in a service request. These forms of communication are also referred to as group requests. This is often the case when more than one component is interested in events that occur in some other component.
An example is a distributed stock ticker application where an event, such as a share price update, needs to be communicated to multiple distributed display components, to inform traders about the update. Although the basic mechanisms for this push-style communication are available in multicast networking protocols, additional support is needed to achieve reliable delivery and marshalling of complex request parameters. A slightly different coordination problem arises due to the sheer number of components that a distributed system may have. The components, i.e. modules or libraries, of a centralized application reside in virtual memory while the application is executing. This is inappropriate for distributed components for the following reasons:
- Hosts sometimes have to be shut down, and components hosted on these machines have to be stopped and restarted when the host resumes operation;
- The resources required by all components on a host may be greater than the resources the host can provide; and
- Depending on the nature of the application, components may be idle for long periods, thus wasting resources if they were kept in virtual memory all the time.
For these reasons, distributed systems need to use a concept called activation that allows component-executing processes to be started (activated) and terminated (deactivated) independently from the applications that they execute. The middleware should manage persistent storage of components' state prior to deactivation and restore components' state during activation. Middleware should also enable application programmers to determine the activation policies that define when components are activated and deactivated. Given that components execute concurrently on distributed hosts, a server component may be requested from different client components at the same time. The middleware should support different mechanisms, called threading policies, to control how the server component reacts to such concurrent requests.
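One such threading policy, a fixed-size worker pool that queues requests once all workers are busy, can be sketched with Python's standard library (the `handle_request` service body is a hypothetical stand-in for a real server component):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Hypothetical service implementation; executes in a worker thread.
    return f"result-{request_id}"

# A pool of 4 threads: up to 4 requests execute concurrently, and
# further requests are queued until a worker becomes free.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(10)]
    results = [f.result() for f in futures]

print(results[0], results[9])  # result-0 result-9
```

A single-threaded policy corresponds to `max_workers=1`, while the thread-per-request policy spawns a fresh thread for every submitted request.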
The server component may be single-threaded, queueing requests and processing them in the order of their arrival. Alternatively, the component may spawn new threads and execute each request in its own thread. Finally, the component may use a hybrid threading policy that uses a pool with a fixed number of threads to execute requests, but starts queueing once there are no free threads in the pool. RELIABILITY Network protocols have varying degrees of reliability. Protocols that are used in practice do not necessarily guarantee that every packet that a sender transmits is actually received by the receiver, or that the order in which packets are sent is preserved. Thus, distributed system implementations have to put error detection and correction mechanisms in place to cope with these unreliabilities. Unfortunately, reliable delivery of service requests and service results does not come for free: reliability has to be paid for with decreases in performance. To allow engineers to trade off reliability and performance in a flexible manner, different degrees of service request reliability are needed in practice. For communication about service requests between two components, the reliabilities that have been suggested in the distributed system literature are best-effort, at-most-once, at-least-once and exactly-once [13]. Best-effort service requests do not give any assurance about the execution of the request. At-most-once requests are guaranteed to execute no more than once; it may happen that they are not executed at all, but then the requester is notified about the failure. At-least-once service requests are guaranteed to be executed, possibly more than once. The highest degree of reliability is provided by exactly-once requests, which are guaranteed to be executed once and only once. Additional reliabilities can be defined for group requests. In particular, the literature mentions k-reliability, time-outs, and totally-ordered requests.
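The at-most-once reliability defined above is commonly approximated by having the receiver filter retransmitted requests by identifier; the following minimal sketch assumes that technique (the class, the request identifiers, and the doubling service body are all hypothetical, not from the paper):

```python
class AtMostOnceServer:
    """Discards retransmitted requests so that each logical request
    is executed at most once, even if the transport redelivers it."""

    def __init__(self):
        self.seen = {}  # request_id -> cached reply

    def handle(self, request_id: str, amount: int) -> int:
        if request_id in self.seen:       # duplicate: do not re-execute
            return self.seen[request_id]
        reply = amount * 2                # hypothetical service body
        self.seen[request_id] = reply
        return reply

server = AtMostOnceServer()
first = server.handle("req-42", 10)
again = server.handle("req-42", 10)     # retransmission of the same request
print(first, again, len(server.seen))   # 20 20 1
```

The cached reply lets the server answer a retransmission without executing the service body a second time.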
K-reliability denotes that at least K components receive the communication. Time-outs allow the specification of periods after which no delivery of the request should be attempted to any of the addressed components. Finally, totally-ordered group communication denotes that a request never overtakes a request of a previous group communication. The above reliability discussion applies to individual requests. We can extend it and consider more than one request. Transactions [18] are important primitives that are used in reliable distributed systems. Transactions have ACID properties: atomicity demands that either all requests in a transaction are executed or none; consistency demands that every completed transaction preserves the consistency of the data; isolation demands that a transaction is isolated from concurrent transactions; and durability demands that once the transaction is completed, its effect cannot be undone. Every middleware that is used in critical applications needs to support distributed transactions. Reliability may also be increased by replicating components [4], i.e. making components available in multiple copies on different hosts. If one component is unavailable, for example because its host needs to be rebooted, a replica on a different host can take over and provide the requested service. Sometimes components have an internal state, and then the middleware should support replication in such a way that these states are kept in sync. SCALABILITY Scalability denotes the ability to accommodate a growing future load. In centralized or client/server systems, scalability is limited by the load that the server host can bear. This can be overcome by distributing the load across several hosts. The challenge of building a scalable distributed system is to support changes in the allocation of components to hosts without changing the architecture of the system or the design and code of any component.
This can only be achieved by respecting the different dimensions of transparency identified in the ISO Open Distributed Processing (ODP) reference model [24] in the architecture and design of the system. Access transparency, for example, demands that the way a component accesses the services of another component is independent of whether that component is local or remote. Another example is location transparency, which demands that components do not know the physical location of the components they interact with. A detailed discussion of the different transparency dimensions is beyond the scope of this paper, and the reader is referred to [13]. If components can access services without knowing the physical location and without changing the way they request them, load balancing mechanisms can migrate components between machines in order to reduce the load on one host and increase it on another. It should again be transparent to users whether or not such a migration occurred; this is referred to as migration transparency. Replication can also be used for load balancing. Components whose services are in high demand may have to exist in multiple copies. Replication transparency means that it is transparent to the requesting components whether they obtain a service from the master component itself or from a replica. The different transparency criteria that lead to scalable systems are very difficult to achieve if distributed systems are built directly on network operating system primitives. To overcome these difficulties, we demand that middleware support access, location, migration and replication transparency. HETEROGENEITY The components of distributed systems may be procured off-the-shelf and may include legacy as well as new components. As a result, they are often rather heterogeneous. This heterogeneity comes in different dimensions: hardware and operating system platforms, programming languages, and indeed the middleware itself.
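The first of these dimensions, hardware heterogeneity, is easy to make concrete with byte order: the same 32-bit integer has different wire representations on big-endian and little-endian hosts, which Python's `struct` module can show directly:

```python
import struct

value = 0x01020304

big    = struct.pack(">I", value)  # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# A receiver must know the sender's byte order to decode correctly:
assert struct.unpack(">I", big)[0] == struct.unpack("<I", little)[0] == value
```

Middleware hides this conversion by agreeing on a single network representation and swapping bytes at each end as needed.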
Hardware platforms use different encodings for atomic data types, such as numbers and characters. Mainframes use the EBCDIC character set, Unix servers may use 7-bit ASCII characters, while Windows-based PCs use 16-bit Unicode character encodings. Thus the character encoding of alphanumeric data that is sent across different types of platforms has to be adjusted. Likewise, mainframes and RISC servers, for example, use big-endian representations for numbers, i.e. the most significant byte encoding an integer, long or floating point number comes first. PCs, however, use a little-endian representation, in which the least significant byte comes first. Thus, whenever a number is sent from a little-endian host to a big-endian host or vice versa, the order of bytes with which this number is encoded needs to be swapped. This heterogeneity should be resolved by the middleware rather than the application engineer. When integrating legacy components with newly built components, it often occurs that different programming languages need to be used. These programming languages may follow different paradigms. While legacy components tend to be written in imperative languages, such as COBOL, PL/I or C, newer components are often implemented using object-oriented programming languages. Even different object-oriented languages have considerable differences in their object models, type systems, and approaches to inheritance and late binding. These differences need to be resolved by the middleware. As we shall see in the next section, there is not just one, but many approaches to middleware. The availability of different middleware solutions may present a selection problem, but sometimes there is no optimal single middleware, and multiple middleware systems have to be combined. This may be for a variety of reasons. Different middleware may be required due to the availability of programming language bindings, and particular forms of middleware may be more appropriate for particular hardware platforms (e.g.
COM on Windows and CORBA on mainframes). Finally, different middleware systems will have different performance characteristics, and depending on the deployment, a different middleware may have to be used as a backbone than the middleware that is used for local components. Thus middleware will have to be interoperable with other implementations of the same middleware, or even with different types of middleware, in order to facilitate distributed system construction. III. MIDDLEWARE SOLUTIONS In this section, we review the state of current middleware products. We identify the extent to which they address the above requirements and highlight their shortcomings. As it is impossible to review individual middleware products in this paper, we first present a classification, which allows us to abstract from particular product characteristics and which provides a conceptual framework for comparing the different approaches. The four categories that we consider are transactional, message-oriented, procedural, and object or component middleware. We have chosen this classification based on the primitives that middleware products provide for the interaction between distributed components, namely distributed transactions, message passing, remote procedure calls and remote object requests. TRANSACTIONAL MIDDLEWARE Transactional middleware supports transactions involving components that run on distributed hosts; it relies on the two-phase commit protocol to implement distributed transactions. The products in this category include IBM's CICS [22], BEA's Tuxedo [19] and Transarc's Encina. Network Communication: Transactional middleware enables application engineers to define the services that server components offer, implement those server components, and then write client components that request several of those services within a transaction. Client and server components can reside on different hosts, and therefore requests are transported via the network in a way that is transparent to client and server components.
Coordination: The client components can request services using synchronous or asynchronous communication. Transactional middleware supports various activation policies and allows services to be activated on demand and deactivated when they have been idle for some time; activated components can also remain resident in memory. Reliability: A client component can cluster more than one service request into a transaction, even if the server components reside on different machines. In order to implement these transactions, transactional middleware has to assume that the participating servers implement the two-phase commit protocol. If server components are built using database management systems, they can delegate the implementation of the two-phase commit to these database management systems. For this implementation to be portable, a standard has been defined: the Distributed Transaction Processing (DTP) Protocol, which has been adopted by the Open Group, defines a programmatic interface for two-phase commit in its XA protocol [43]. DTP is widely supported by relational and object-oriented database management systems. This means that distributed components that have been built using any of these database management systems can easily participate in distributed transactions. This makes them fault-tolerant, as they automatically recover to a state that reflects all completed transactions. Scalability: Most transaction monitors support load balancing and replication of server components. Replication of servers is often based on the replication capabilities of the database management systems upon which the server components rely. Heterogeneity: Transactional middleware supports heterogeneity because the components can reside on different hardware and operating system platforms. Also, different database management systems can participate in transactions, due to the standardized DTP protocol.
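The two-phase commit voting that the XA interface standardizes can be sketched as a toy, in-process coordinator (all class and method names here are hypothetical illustrations, not the XA API):

```python
class Participant:
    """A resource manager that votes in the prepare phase."""
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit, self.state = name, can_commit, "active"

    def prepare(self) -> bool:   # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):            # phase 2, on unanimous yes
        self.state = "committed"

    def rollback(self):          # phase 2, on any no vote
        self.state = "aborted"

def two_phase_commit(participants) -> bool:
    # Phase 1: ask every participant to prepare; any "no" vote aborts all.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

ok = two_phase_commit([Participant("db1"), Participant("db2")])
failed = two_phase_commit([Participant("db1"),
                           Participant("db2", can_commit=False)])
print(ok, failed)  # True False
```

A real coordinator must additionally log each decision durably so that participants can recover the outcome after a crash.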
Resolution of data heterogeneity is, however, not well supported by transactional middleware, as the middleware does not provide primitives to express complex data structures that could be used as service request parameters and therefore also does not marshal them. The above discussion has shown that transactional middleware can simplify the construction of distributed systems. Transactional middleware, however, has several weaknesses. Firstly, it creates an undue overhead if there is no need to use transactions, or if transactions with ACID semantics are inappropriate. This is the case, for example, when the client requests services for which transactional guarantees are unnecessary. Secondly, marshalling between the data structures that a client uses and the parameters that services require needs to be done manually in many products. Thirdly, although the API for the two-phase commit is standardized, there is no standardized approach for defining the services that server components offer. This reduces the portability of a distributed system between different transaction monitors. MESSAGE-ORIENTED MIDDLEWARE (MOM) Message-oriented middleware (MOM) supports the communication between distributed system components by facilitating message exchange. Products in this category include IBM's MQSeries [16] and Sun's Java Message Queue [20]. Network Communication: Client components use MOM to send a message to a server component across the network. The message can be a notification about an event, or a request for a service execution from a server component. The content of such a message includes the service parameters. The server responds to a client request with a reply message containing the result of the service execution. Coordination: A strength of MOM is that this paradigm supports asynchronous message delivery very naturally. The client continues processing as soon as the middleware has taken the message. Eventually the server will send a message including the result, and the client will be able to collect that message at an appropriate time.
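This decoupled request/reply exchange can be sketched with in-process queues standing in for the MOM's message queues (the upper-casing "service" and the correlation identifier are hypothetical):

```python
import queue
import threading

# In-process stand-ins for the middleware's request and reply queues.
requests, replies = queue.Queue(), queue.Queue()

def server():
    corr_id, payload = requests.get()        # blocks until a message arrives
    replies.put((corr_id, payload.upper()))  # hypothetical service: upcase

threading.Thread(target=server, daemon=True).start()

# Client: hand the message to the middleware and keep working ...
requests.put(("msg-1", "hello"))
# ... then collect the reply at a convenient time, matched by its id.
corr_id, result = replies.get(timeout=5)
print(corr_id, result)  # msg-1 HELLO
```

The correlation identifier is what lets a client match a reply to the request it sent earlier, since the two are delivered as independent messages.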
This achieves de-coupling of client and server and leads to more scalable systems. The weakness, at the same time, is that the implementation of synchronous requests is cumbersome, as the synchronization needs to be implemented manually in the client. A further strength of MOM is that it supports group communication by distributing the same message to multiple receivers in a transparent way. Reliability: MOM achieves fault tolerance by implementing message queues that store messages temporarily on persistent storage. The sender writes the message into the message queue, and if the receiver is unavailable due to a failure, the message queue retains the message until the receiver is available again. Scalability: MOMs do not support access transparency very well, because client components use message queues for communication with remote components, while it does not make sense to use queues for local communication. This lack of access transparency also impedes migration and replication transparency. Queues need to be set up by administrators, and the use of queues is hard-coded in both client and server components, which leads to rather inflexible and poorly adaptable architectures. Heterogeneity: MOM does not support data heterogeneity very well either, as the application engineers have to write the code that marshals complex data structures into messages, although with most products there are different programming language bindings available. In assessing the strengths and weaknesses of MOM, we can note that this class of middleware is particularly well suited for implementing distributed event notification and publish/subscribe-based architectures. The persistence of message queues means that this event notification can be achieved in fault-tolerant ways, so that components receive events when they restart after a failure. However, message-oriented middleware also has some weaknesses. It only supports at-least-once reliability, so the same message could be delivered more than once.
Moreover, MOM does not support transactional properties, such as atomic delivery of messages to all receivers or to none. There is only limited support for scalability and heterogeneity. PROCEDURAL MIDDLEWARE Remote Procedure Calls (RPCs) were devised by Sun Microsystems in the early 1980s as part of the Open Network Computing (ONC) platform. Sun provided remote procedure calls as part of all their operating systems and submitted RPCs as a standard to the X/Open consortium, which adopted them as part of the Distributed Computing Environment (DCE) [36]. RPCs are now available on most Unix implementations and also on Microsoft's Windows operating systems. Network Communication: RPCs support the definition of server components as RPC programs. An RPC program exports a number of parameterized procedures and associated parameter types. Clients that reside on other hosts can invoke those procedures across the network. Procedural middleware implements these procedure calls by marshalling the parameters into a message that is sent to the host where the server component is located. The server component unmarshalls the message, executes the procedure, and transmits marshalled results back to the client, if required. Marshalling and unmarshalling are implemented in client and server stubs that are automatically created by a compiler from an RPC program definition. Coordination: RPCs are synchronous interactions between exactly one client and one server. Asynchronous and multicast communication is not supported directly by procedural middleware. Procedural middleware provides different forms of activating server components. Activation policies define whether a remote procedure program is always available or has to be started on demand. For startup on demand, the RPC server is started by an inetd daemon as soon as a request arrives.
The inetd requires an additional configuration table that provides a mapping between remote procedure program names and the location of programs in the file system. Reliability: RPCs are executed with at-most-once semantics. The procedural middleware returns an exception if an RPC fails. Exactly-once semantics or transactions are not supported by RPC programs. Scalability: The scalability of RPCs is rather limited. Unix and Windows RPCs do not have any replication mechanisms that could be used to scale RPC programs. Thus replication has to be addressed by the designer of the RPC-based system, which means in practice that RPC-based systems are only deployed on a limited scale. **Heterogeneity:** Procedural middleware can be used with different programming languages. Moreover, it can be used across different hardware and operating system platforms. Procedural middleware standards define standardized data representations that are used as the transport representation of requests and results.
DCE, for example, standardizes the Network Data Representation (NDR) for this purpose. When marshalling RPC parameters, the stubs translate hardware-specific data representations into the standardized form, and the reverse mapping is performed during unmarshalling. Procedural middleware is weaker than transactional middleware and MOM in that it is not as fault-tolerant and scalable. Moreover, the coordination primitives that are available in procedural middleware are more restricted, as they only support synchronous invocation directly. Procedural middleware improves upon transactional middleware and MOM by providing interface definitions, from which stubs that automatically marshal and unmarshal service parameters and results are generated. A disadvantage of procedural middleware is that this interface definition is not reflexive. This means that procedures exported by one RPC program cannot return another RPC program. Object and component middleware resolve this problem. **OBJECT AND COMPONENT MIDDLEWARE** Object middleware evolved from RPCs. The development of object middleware mirrored similar evolutions in programming languages, where object-oriented programming languages, such as C++, evolved from procedural programming languages such as C. The idea here is to make object-oriented principles, such as object identification through references and inheritance, available for the development of distributed systems. Systems in this class of middleware include the Common Object Request Broker Architecture (CORBA) of the OMG [34, 37], the latest versions of Microsoft’s Component Object Model (COM) [5], and the Remote Method Invocation (RMI) capabilities that have been available since Java 1.1 [28]. More recent products in this category include middleware that supports distributed components, such as Enterprise JavaBeans [30]. Unfortunately, we can only discuss and compare this important class of middleware briefly and refer to [8, 13] for more details.
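The stub-based marshalling described above can be sketched in miniature. This is a hypothetical stand-in for a generated client/server stub pair, not any real RPC toolkit: JSON plays the role of the standardized transport representation (as NDR does in DCE), and the network hop is simulated by a direct function call.

```python
import json

# The "RPC program": exported, parameterized procedures.
PROCEDURES = {"add": lambda a, b: a + b}

def client_stub(proc_name, *args):
    # Marshal procedure name and parameters into the wire format.
    request = json.dumps({"proc": proc_name, "args": list(args)})
    response = transport(request)          # the "network" hop
    return json.loads(response)["result"]  # unmarshal the result

def server_stub(request):
    call = json.loads(request)             # unmarshal the request
    result = PROCEDURES[call["proc"]](*call["args"])
    return json.dumps({"result": result})  # marshal the result

def transport(request):
    # In a real RPC system this would be a network send/receive pair.
    return server_stub(request)

assert client_stub("add", 2, 3) == 5
```

Because both stubs agree on one wire representation, a client on one platform can call a server on another without either side knowing the other's native data layout.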
**Network Communication:** Object middleware supports distributed object requests, which means that a client object requests the execution of an operation from a server object that may reside on another host. The client object has to have an object reference to the server object. Marshalling operation parameters and results is again achieved by stubs that are generated from an interface definition. **Coordination:** The default synchronization primitive in object middleware is the synchronous request, which blocks the client object until the server object has returned the response. However, other synchronization primitives are supported, too. CORBA 3.0, for example, supports both deferred synchronous and asynchronous object requests. Object middleware supports different activation policies. These include whether server objects are active all the time or started on demand. Threading policies are available that determine whether new threads are started if more than one operation is requested by concurrent clients, or whether requests are queued and executed sequentially. CORBA also supports group communication through its Event and Notification services. These services can be used to implement push-style architectures. **Reliability:** The default reliability for object requests is at-most-once. Object middleware supports exceptions, which clients catch in order to detect that a failure occurred during execution of the request. CORBA Messaging, or the Notification service [33], can be used to achieve exactly-once reliability. Object middleware also supports the concept of transactions. CORBA has an Object Transaction service [32] that can be used to cluster requests from several distributed objects into transactions. COM is integrated with Microsoft’s Transaction Server [21], and the Java Transaction Service [7] provides the same capability for RMI. **Scalability:** The support of object middleware for building scalable applications is still somewhat limited.
Some CORBA implementations support load balancing, for example by using name servers that return an object reference for a server on the least loaded host, or by using factories that create server objects on the least loaded host, but support for replication is still rather limited. **Heterogeneity:** Object middleware supports heterogeneity in many different ways. CORBA and COM both have multiple programming language bindings, so that client and server objects do not need to be written in the same programming language. They both have a standardized data representation that they use to resolve heterogeneity of data across platforms. Java/RMI takes a different approach, as heterogeneity is already resolved by the Java Virtual Machine in which both client and server objects reside. The different forms of object middleware interoperate. CORBA defines the Internet Inter-ORB Protocol (IIOP) standard [34], which governs how different CORBA implementations exchange request data. Java/RMI leverages this protocol and uses it as a transport protocol for remote method invocations, which means that a Java client can perform a remote method invocation of a CORBA server and vice versa. CORBA also specifies an inter-working specification to Microsoft’s COM. Object middleware provides very powerful component models, which integrate most of the capabilities of transactional, message-oriented, and procedural middleware. However, the scalability of object middleware is still rather limited, and this prevents use of the distributed object paradigm on a large scale. IV. MIDDLEWARE STATE-OF-THE-ART While middleware products are already successfully employed in industrial practice, they still have several shortcomings, which prevent their use in many application domains.
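The load-balancing name-server idea mentioned above can be sketched as follows. This is a schematic illustration, not the API of any CORBA product; the host names and load figures are invented for the example.

```python
class NameServer:
    """Name server that, when several replicas are bound under one
    name, resolves the name to the replica on the least loaded host."""

    def __init__(self):
        self._bindings = {}   # service name -> list of {"host", "load"}

    def bind(self, name, host, load):
        self._bindings.setdefault(name, []).append(
            {"host": host, "load": load})

    def resolve(self, name):
        # Return a reference to the server on the least loaded host.
        replicas = self._bindings[name]
        return min(replicas, key=lambda r: r["load"])["host"]

ns = NameServer()
ns.bind("AccountService", "host-a", load=0.9)
ns.bind("AccountService", "host-b", load=0.2)
assert ns.resolve("AccountService") == "host-b"
```

Note that the balancing decision is made only at binding-resolution time; once the client holds the reference, subsequent requests all go to the same host, which is one reason this form of load balancing remains limited.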
These weaknesses lead to relatively inflexible systems that do not respond well to changing requirements; they do not really scale beyond local area networks; they are not yet dependable; and they are not suited for use in wireless networks. In this section, we review the state of the art in middleware research that addresses these weaknesses and that will influence the next generation of middleware products. We discuss trading, reflection, and application-level transport mechanisms that support the construction of more flexible software architectures. We present replication techniques that will lead to better scalability and fault-tolerance. We then discuss research into middleware that supports real-time applications and finally address middleware research results for mobile and pervasive computing. FLEXIBLE MIDDLEWARE **Trading:** Most middleware products use naming for component identification: MOMs use named message queues, DCE has a Directory service, CORBA has a Naming service, COM uses monikers, and Java/RMI uses the RMI Registry to bind names to components. Before a client component can make a request, it has to resolve a name binding in order to obtain a reference to the server component. This means that clients need to uniquely identify their servers, albeit in a location-transparent way. In many application domains, it is unreasonable to assume that client components can identify the component from which they can obtain a service. Even if they can, this leads to inflexible architectures where client components cannot dynamically adapt to better service providers becoming available. Trading has been suggested as an alternative to naming, and it offers more flexibility. The ISO/ODP standard defines the principal characteristics of trading [2]. The idea is similar to the yellow pages of the telephone directory. Instead of using names, components are located based on service types.
The trader registers the type of service that a server component offers and the particular qualities of service (QoS) that it guarantees. Clients can then query the trader for server components that provide a particular service type and demand particular QoS guarantees from them. The trader matches such a service query with the service offers that it knows about and returns a component reference to the client. From then on, the client and the server communicate without involvement of the trader. The idea of trading has matured and is starting to be adopted in middleware products. The OMG has defined a Trading service [32] that adapts the ODP trader ideas to the distributed object paradigm, and first implementations of this service are becoming available. Thus trading enables the dynamic connection of clients with server components based on the service characteristics rather than the server’s name. **Reflection:** Another approach to more flexible execution environments for components is reflection. Reflection is a well-known paradigm in programming languages [17]. Programs use reflection mechanisms to discover the types or classes of objects and to define method invocations at run-time. Reflection is already supported to some extent by current middleware products. The interface repository and dynamic invocation interface of CORBA enable client programmers to discover the types of server components that are currently known and then dynamically create requests that invoke operations from these components. Current research into reflective middleware [9] goes beyond reflective object and component models. It aims to support meta-object protocols [29]. These protocols are used for inspection and adaptation of the middleware execution environment itself. In [12] it is suggested, for example, to use an environment meta-model.
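The export/query protocol of a trader can be sketched in a few lines. This toy example is in the spirit of the ODP/CORBA Trading service but does not follow the OMG IDL; the service types, QoS property names, and references are all illustrative.

```python
class Trader:
    """Toy trader: servers export (service type, QoS, reference) offers,
    clients query by service type plus minimum QoS constraints."""

    def __init__(self):
        self._offers = []

    def export(self, service_type, qos, reference):
        self._offers.append((service_type, qos, reference))

    def query(self, service_type, constraints):
        # Return the first offer whose type matches and whose QoS meets
        # every constraint; client and server then talk directly.
        for stype, qos, ref in self._offers:
            if stype == service_type and all(
                    qos.get(k, 0) >= v for k, v in constraints.items()):
                return ref
        return None

t = Trader()
t.export("Printer", {"pages_per_min": 10}, "printer-1")
t.export("Printer", {"pages_per_min": 40}, "printer-2")
assert t.query("Printer", {"pages_per_min": 30}) == "printer-2"
```

The client never names a particular printer; it only states the service type and the QoS it demands, which is exactly the flexibility that naming-based lookup lacks.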
Inspection of the environment meta-model supports queries of the middleware’s behavior upon events, such as message arrival, enqueuing of requests, marshalling and unmarshalling, thread creation, and scheduling of requests. Adaptation of the environment meta-model enables components to adjust the behavior of the middleware to any of those events. **Application-level Transport Protocols:** While marshalling and unmarshalling are usually best done by the middleware, there are applications where the middleware creates undue overhead. One important application of reflection is therefore marshalling. This is particularly the case when there is an application-specific data representation that is amenable to transmission through a network and for which heterogeneity does not need to be resolved by the middleware. In [14], we investigate the combined use of middleware and markup languages, such as XML. We suggest transmitting XML documents as uninterpreted byte strings using middleware. This combination is motivated by the fact that XML supports semantic translations between data structures and by the fact that existing markup language definitions, such as FpML [15] or FIXML [23], can be leveraged. On the other hand, the HTTP protocol with which XML was originally used is clearly inappropriate to meet reliability requirements. It can be expected that interoperability between application-level and middleware data structures will become available in due course, because the OMG has started an adoption process for technology that will provide seamless interoperability between CORBA data structures and XML structured documents [35]. **SCALABLE MIDDLEWARE** Although middleware is successfully used in scalable applications on local-area networks, current middleware standards and products impose limitations that prevent their use in globally distributed systems.
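The discover-then-invoke pattern behind CORBA's interface repository and dynamic invocation interface can be illustrated with the reflection facilities of an ordinary programming language. This sketch uses Python's built-in reflection as an analogy only; the `Account` class and its operation are invented for the example.

```python
class Account:
    """Stand-in for a server object whose interface the client does
    not know at compile time."""
    def deposit(self, amount):
        return f"deposited {amount}"

def discover_operations(obj):
    # Analogous to querying an interface repository: list the
    # operations the object exports.
    return [n for n in dir(obj)
            if not n.startswith("_") and callable(getattr(obj, n))]

def dynamic_invoke(obj, op_name, *args):
    # Analogous to a dynamic invocation interface: build and issue
    # the request at run-time, without a compiled stub.
    return getattr(obj, op_name)(*args)

acct = Account()
assert "deposit" in discover_operations(acct)
assert dynamic_invoke(acct, "deposit", 100) == "deposited 100"
```

The point is that the client commits to an operation name only at run-time, which is what makes reflective middleware architectures adaptable.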
In particular, current middleware platforms do not support replication to the extent necessary to achieve global distribution [31]. State-of-the-art research addresses this problem through non-transparent replication. Replication: Tanenbaum is addressing this problem for distributed object middleware in the Globe project [42]. The aim of Globe is to provide an object-based middleware that scales to a billion users. To achieve this aim, Globe makes extensive use of replication. Unlike other replication mechanisms, such as Isis [4], Globe does not assume the existence of a universally applicable replication strategy. It rather suggests that replication policies have to be object-type specific, and therefore they have to be determined by server object designers. Thus, Globe assumes that each object type determines its own strategy for proactively replicating objects. REAL-TIME MIDDLEWARE A good summary of the state of the art in real-time middleware has been produced in the EU-funded CaberNet network of excellence by [1]. Most current middleware products are only of limited use in real-time and embedded systems because all requests have the same priority. Moreover, the memory requirements of current middleware products prevent deployment in embedded systems. These problems have been addressed by various research groups. TAO [39] is a real-time CORBA prototype that supports request prioritization and the definition of scheduling policies. The CORBA 3.0 specification [41] builds on this research and standardizes real-time and minimal middleware. MIDDLEWARE FOR MOBILE COMPUTING Current middleware products assume continuous availability of high-bandwidth network connections. This cannot be achieved with physically mobile hosts for various reasons. Wireless local area network protocols, such as WaveLAN, do achieve reasonable bandwidth. However, they only operate if hosts are within a few hundred metres of their base station.
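Globe's idea that replication policy is object-type specific can be sketched as each type carrying its own strategy object. The strategy names and the dictionary-based "replicas" below are illustrative, not Globe's actual API.

```python
class ActiveReplication:
    """Strategy: update all replicas eagerly on every write."""
    def write(self, replicas, key, value):
        for r in replicas:
            r[key] = value

class PrimaryBackup:
    """Strategy: only the primary is updated eagerly; backups would
    be brought up to date lazily (omitted here)."""
    def write(self, replicas, key, value):
        replicas[0][key] = value

class ReplicatedObject:
    """The designer of an object type picks its replication strategy,
    rather than the middleware imposing one universal policy."""
    def __init__(self, strategy, n_replicas=3):
        self.strategy = strategy
        self.replicas = [{} for _ in range(n_replicas)]

    def write(self, key, value):
        self.strategy.write(self.replicas, key, value)

obj = ReplicatedObject(ActiveReplication())
obj.write("x", 1)
assert all(r["x"] == 1 for r in obj.replicas)
```

Swapping in `PrimaryBackup()` changes the replication behavior of the object without touching its clients, which is the essence of per-type, designer-chosen policies.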
Network outages occur if mobile hosts roam across areas covered by different base stations or if they enter ‘radio shadows’. Wide-area wireless network protocols, such as GSM, have similar problems during cell handovers. In addition, their bandwidth is orders of magnitude smaller; GSM achieves at most 9,600 baud. State-of-the-art wireless wide-area protocols, such as GPRS and UMTS, will improve this situation. However, they will not be widely available for another two years. Several problems occur when current middleware products are used with these wireless network protocols. Firstly, they all treat unreachability of server or client components as an exceptional situation and raise errors that client or server component programmers have to deal with. Secondly, the transport representation is chosen with wired networks in mind, where bandwidth efficiency is not the primary concern. Middleware products are therefore optimized to simplify both the translation between different heterogeneous data representations and the routing of messages to their intended receivers. Such optimizations do not choose size-efficient encodings for the network protocol and are therefore inappropriate when packets are sent through a 9,600 baud wireless connection. Research into middleware for mobile computing aims to overcome these issues by providing coordination primitives, such as tuple spaces, that treat unreachability as a normal rather than an exceptional situation. Moreover, they use compressed transport representations to save bandwidth. A good overview of the state of the art in mobile middleware is given by [38], and we therefore do not delve into further detail in this paper. V. MIDDLEWARE AND SOFTWARE ENGINEERING RESEARCH In this section, we analyze the consequences of the availability of middleware products and their evolution as a result of middleware research on the software engineering research agenda.
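The tuple-space style of coordination mentioned above can be sketched as follows. Linda-style operation names (`out`, `rdp`, `inp`) are used loosely here, and the whole class is a single-process toy: the point is only that a missing partner yields an ordinary `None` result rather than a raised error.

```python
class TupleSpace:
    """Toy tuple space: a producer 'out's tuples, consumers probe with
    templates in which None is a wildcard. A failed probe returns None,
    so an unreachable partner is a normal outcome, not an exception."""

    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rdp(self, template):            # non-destructive probe
        for tup in self._tuples:
            if self._match(template, tup):
                return tup
        return None                     # nothing there: normal result

    def inp(self, template):            # destructive probe
        tup = self.rdp(template)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

ts = TupleSpace()
assert ts.rdp(("msg", None)) is None    # partner absent: no error raised
ts.out(("msg", "hello"))
assert ts.inp(("msg", None)) == ("msg", "hello")
```

Because producer and consumer interact only through the space, neither needs the other to be reachable at the moment of communication, which suits intermittently connected mobile hosts.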
We argue for the importance of non-functional requirements in building software systems with existing and upcoming middleware and identify a need for requirements engineering techniques that focus on non-functional requirements. We identify that software architecture research should produce methods that systematically guide engineers towards selecting the right middleware and employing it in such a way that it meets a set of non-functional requirements. We then highlight that the use of middleware is not transparent for system design and that design methods are needed that address this issue. Two trends are important for the discussion of the impact of middleware on software engineering research. Firstly, middleware products are conceived to deliver immediate benefits in the construction of distributed systems. They are therefore rapidly adopted in industry. Secondly, middleware vendors have a proven track record of incorporating middleware research results into their products. An example is the ISO/ODP Trader, which was defined in 1993, adopted as a CORBA standard in 1997, and last year became available in the first CORBA products. There is therefore a good chance that some of the state-of-the-art research in the areas of flexible, scalable, real-time, and mobile middleware will become state of the practice in 3-5 years. Unless research into software engineering for distributed systems delivers principles, notations, methods, and tools that are compatible with the capabilities that current middleware products provide and that middleware research will generate in the future, software engineering research results will only be of limited industrial significance. Industry will adopt the middleware that is known to deliver the benefits and ignore incompatible software engineering methods and tools. Middleware products and research, however, only support programming and largely ignore all other activities that are needed in software processes for distributed systems.
We, therefore, have a chance to achieve a symbiosis between software engineering and middleware. The aim of this section is to identify the software engineering research themes that will lead to the principles, notations, methods, and tools that are needed to support all life cycle activities when building distributed systems using middleware. REQUIREMENTS ENGINEERING The challenges of co-ordination, reliability, scalability, and heterogeneity in distributed system construction that we discussed in Section 2, and that engineers are faced with, are of a non-functional nature. Software engineers thus have to define software architectures that meet these non-functional requirements. However, the relationship between non-functional requirements and software architectures is only very poorly understood. We first discuss the requirements engineering end of this relationship. Existing requirements engineering methods tend to have a very strong focus on functional requirements. In particular, the object-oriented and use-case driven approaches of Jacobson [27] and, more recently, Rational [26] more or less completely ignore non-functional concerns. A goal-oriented approach, such as [10], seems to provide a much better basis, but needs to be augmented to specifically address non-functional concerns. For non-functional goals to be a useful input to middleware-oriented architecting, these goals need to be quantified. For example, engineers need quantitative requirements models for the required response time, peak loads, and overall transaction or data volume that an architecture is expected to scale up to. Thus requirements engineering research needs to devise methods and tools that can be used to elicit and model non-functional requirements from a quantitative point of view. Once a particular middleware system has been chosen for a software architecture, it is extremely expensive to revert that choice and adopt a different middleware or a different architecture.
The choice is influenced by the non-functional requirements. Unfortunately, requirements tend to be unstable and change over time. Non-functional requirements often change with the setting in which the system is embedded, for example when new hardware or operating system platforms are added as a result of a merger, or when scalability requirements increase as a result of having to build web-based interfaces that customers use directly. Requirements engineering methods, therefore, not only have to identify the current requirements, but also elicit and estimate the ranges in which they can evolve during the planned lifetime of the distributed system. SOFTWARE ARCHITECTURE There is very little work on the influence of middleware on software architectures, with [11] being a notable exception. Indeed, we believe that research on software architecture description languages has over-emphasized functionality and not sufficiently addressed the specification of how global properties and non-functional requirements are achieved in an architecture. These requirements cannot be attributed to individual components or connectors and can therefore not be specified in current architecture description languages. Distributed software engineering research needs to identify notations, methods, and tools that support architecting. Research needs to provide methods that help software engineers to systematically derive software architectures that will meet a set of non-functional requirements and overcome the guesswork that is currently being done. This includes support for identifying the appropriate middleware, or combinations of middlewares, for the problem at hand. Moreover, software engineering research needs to define architecting processes that are capable of mitigating the risks of choosing the wrong middleware or architecture.
These processes will need to rely on methods that quantitatively model the performance and scalability that a particular middleware-based architecture will achieve and use validation techniques, such as model checking, to validate that models actually do meet the requirements. The models need to be calibrated using metrics that have been collected by observing middleware performance in practice. Many architecture description languages support the explicit modeling of connectors by means of which components communicate [40]. A main contribution of [11] is the observation that connectors are most often implemented using middleware primitives. We would like to add the observation that each middleware only supports a very limited set of connectors. Specifying the behaviour of connectors explicitly in an ADL is therefore modelling overkill that is only needed if architects opt out of using middleware at all. For most applications, the specification of each connector is completely unnecessary. Instead, software architecture research should develop middleware-oriented ADLs that have built-in support for all connectors provided by the middleware that practitioners actually use. DESIGN In [13], we have argued that the use of middleware in a design is not, and never will be, entirely transparent to designers. There are a number of factors that, despite the ISO/ODP transparency dimensions, necessitate designers to be aware of the involvement of middleware in the communication between components. These factors are: - Network latency implies that the communication between two distributed components is orders of magnitude slower than a local communication. - Activation and deactivation of stateful components create a need to implement persistence for these components. - Components need to be designed so that they can cope with the concurrent interactions that occur in a distributed environment.
- The components have a choice of the different synchronization primitives a particular middleware offers, and need to exploit them properly. In particular, they have to avoid the deadlocks or liveness problems that can occur as a result of using these synchronization primitives. The software engineering community needs to develop middleware-oriented design notations, methods, and tools that take the above concerns into account. In discussing the state of the art in middleware research above, we have highlighted a trend to give the programmer more influence on how the middleware behaves. Globe’s replication strategies, TAO’s scheduling policies, and reflection capabilities that influence the middleware execution engine have to be used by the designer. This means, effectively, that the programmer gets to see more of the middleware and that distribution and heterogeneity become less transparent. If this is really necessary, and the middleware research community puts forward good reasons, programmers will have to be aided even more in the design of distributed components. Thus appropriate principles, notations, methods, and tools for the design of replication strategies, scheduling policies, and the use of reflection capabilities are needed from software engineering research. VI. SUMMARY We have discussed why the construction of distributed systems is difficult and indicated the support that software engineers can expect from current middleware products to simplify the task. We have then reviewed the current state of the art in middleware research and used this knowledge to derive a software engineering research agenda that will produce the principles, notations, methods, and tools that are needed to support all activities during the life cycle of a software engineering process. REFERENCES
A Simple Language for Expressing Properties of Telecommunication Services and Features*

C.A. Middelburg**
Dept. of Network & Service Control, PTT Research and Dept. of Philosophy, Utrecht University

October 1994

*This work was partially supported by the RACE project no. 2017, SCORE. It represents the view of the author.
**e-mail: C.A.Middelburg@research.ptt.nl

Abstract

This paper reports on a quest for a language for expressing properties of telecommunication services and features, which may play a part in feature interaction detection. A language is sought with a restricted, but practically sufficient, expressive power, which can be complemented with computer-based tools for verification of models described in SDL with respect to properties expressed in the language. A language is suggested which allows the observer technique to be used for checking whether properties expressed in the language are satisfied by models described in SDL. This language can be viewed as a restricted version of the full branching-time temporal logic ACTL*.

1 Introduction

New features are added to telecommunication systems to provide new telecommunication facilities. However, these facilities may be in conflict with the existing ones. Feature interaction is the general name for this phenomenon. An extreme kind of feature interaction occurs when characteristic properties of a new feature are inconsistent with properties of the core service or additional features that are already provided. For example, with the Terminating Call Screening feature subscriber C may enforce that he will not receive calls from subscriber A, but subscriber B may use the Call Forwarding Unconditional feature to enforce that subscriber C will receive all his incoming calls – including calls from subscriber A – which is impossible. There are, however, many milder kinds of feature interactions conceivable, from unforeseen, undesirable ones to intended, desirable ones. An undesirable feature interaction occurs, for example, if the Call Forwarding Unconditional feature can be used to circumvent the blocking intended when using the Originating Call Screening feature. This is possible if the number being forwarded to is not considered to be a dialled number. However, although it is undesirable, this interaction does not give rise to an inconsistency.\footnote{Because each is described in isolation, the properties of Call Forwarding Unconditional and Originating Call Screening, as expressed in Section 4, are not inconsistent.} A comprehensive survey of current features can be found in [5], which also suggests a categorization of feature interactions.

New features are usually viewed as functionality extensions to the existing services. There are, of course, no feature interactions present if the existing functionality remains unaffected by the newly created functionality, but that is rather the exception than the rule. It is appropriate to call the feature concerned a conservative extension if this favourable situation occurs. However, the situation is unlikely to occur, because new facilities tend to add to the special cases that have to be taken into account by the existing ones. For example, all three above-mentioned features affect dialling such that certain properties of the existing services concerning dialling will no longer hold in the general case. There is at least empirical evidence that useful features are almost inevitably non-conservative extensions. This means among other things that the interactions caused by them are at least partly intended. Furthermore, unintended interactions may be unforeseen but desirable ones. However, this does not mean that there are no undesirable interactions, or even ones that should be absolutely excluded. This paper reports on work done concerning feature interaction detection. It is about a language for expressing properties of telecommunication services and features.
The quest is for a “property language” with a restricted, but practically sufficient, expressive power, which can be complemented with computer-based tools for verification of models described in SDL [2] with respect to properties expressed in the language. The underlying idea is that a property language for SDL together with a suitable “model checker” may play a part in feature interaction detection before new features are actually added (see also [3]). The current section already touched upon the fact that feature interactions are generally inescapable and not necessarily undesirable. Section 2 explains that the way in which a property language for SDL, together with an accompanying model checker, may play a part in feature interaction detection is currently still very limited. Section 3 sketches the approach followed to find a suitable property language and Section 4 presents the results. In Section 5, the observer technique is proposed to check whether properties expressed in the language concerned are satisfied by models described in SDL. Pre-defined functions and predicates that are needed in this language are mentioned in Section 6. Finally, some closing remarks are made in Section 7.

2 Connections with feature interaction detection

In order to minimize the danger of having modelled services and features different from the intended ones, it is important to check whether the models concerned satisfy anticipated properties. Because SDL is the language generally used in the telecommunication world for modelling, it is useful to have a property language for SDL and an accompanying model checker – to check whether models described in SDL satisfy properties formulated in the property language. But what part can they play in a systematic approach to detecting feature interactions?
Let us assume that the existing services have been modelled using SDL, that their characteristic properties have been given in the property language, and that it has been checked that the model satisfies these properties. An example of such properties is the following crucial property of the Terminating Call Screening feature: subscriber B should not receive calls from subscriber A when A is on B’s screening list. It must be considered to be a major problem in itself to reach the situation sketched above, but we suppose that this problem can be solved. In [16], the following desirable, but not very realistic scenario for the addition of a new feature is mentioned. The characteristic properties of the new feature are identified and they are expressed in the property language. Next, a model of the new feature is described in SDL and it is checked, using the model checker, whether this model satisfies the characteristic properties. Finally, the model of the existing services and the model of the new feature are combined and it is checked whether the combined model satisfies the union of all the characteristic properties concerned (it is not made precise in [16] what is meant by combining models). Because new features are frequently non-conservative extensions, there are several shortcomings of this scenario. If the final step does not succeed, this does not have to mean that a case of inconsistent properties has been detected; it does not even have to mean that there is an undesirable feature interaction. It might indicate that the model of the existing services is not suited to extension without changes—the question remains how this must be dealt with. It is moreover likely that the characteristic properties of the existing services have to be adapted—to take new special cases into account—and that the model described in SDL has to be changed accordingly. This has to be accomplished before the final step can be performed. 
Suppose, for the sake of simplicity, that only the Basic Call service exists and that the Call Forwarding Unconditional feature has to be added. The latter feature changes the effect of dialling such that the characteristic properties of the Basic Call service concerning dialling will no longer hold. The original characteristic properties would now, for example, lead to such impossible situations as getting at the same time a ring-back tone from the subscriber that a call is intended for as well as a busy tone from the subscriber that the call is forwarded to. So clearly, they must be adapted and so must be the model of the Basic Call service. The adaptations needed in cases such as the one described above are the result of interactions introduced by the new feature. They are needed to turn the undesirable interactions into acceptable ones. But how do we detect these undesirable interactions? They must have been detected before the step meant for the detection of undesirable feature interactions can be performed! In the example just given, we should have detected beforehand that the addition of the Call Forwarding Unconditional feature leads to situations in which a subscriber gets a ring-back tone as well as a busy tone.\footnote{Some generic model checkers, e.g. SMV [4], can be used for checking the consistency of properties.} All this indicates that the systematic approach to adding new features, which incorporates detection of undesirable interactions with existing ones, as suggested by the scenario given above, is not to be expected soon. We are, for example, far away from having identified the kinds of feature interactions that are likely to be undesirable; the empirical results needed to identify them require a lot of experiments in adding new features to existing ones and in detecting feature interactions.
However, as explained at the beginning of this section, it remains useful to check whether a new model, obtained by adapting an existing one to the needs of a new feature, satisfies all relevant properties.

3 Finding a suitable property language

This section describes the approach followed to find a property language with a restricted, but practically sufficient, expressive power, which can be complemented with a model checker for verifying models described in SDL with respect to properties expressed in the language. Characteristic properties of features have been expressed, from the subscriber’s point of view, in a highly expressive language, viz. a first-order version of ACTL* [15]. It has then been investigated whether there are common forms of formulae of this logic which suffice for expressing these properties. As in [10], we focussed on the subscriber’s point of view. This means that we dealt with properties that can be expressed in terms of:

- the events that can be produced by subscribers at their telephones, such as taking off-hook, putting on-hook and dialling a number;
- the observable states of the subscribers’ telephones, such as being idle, emitting a dial tone, etc.;
- the phases of a call that are recognizable through the observable states, such as the ready phase, the calling phase, etc.;
- the features that subscribers have activated.

Observable states and phases are global states, i.e. they comprise a succession of internal states of the telecommunication system. They are viewed as (basic and derived, respectively) predicates on the internal states. The properties on which we focus in this manner are the ones that are really relevant to telephone subscribers. This also means that, in an explanation of features meant for their (potential) users, other properties do not matter.
An alternative to the official semantics of SDL [17], defining the meaning of SDL specifications as labelled transition systems as in [11], is assumed. The connection with models described in SDL is clear. Events correspond to signals from the environment. Each observable state may encompass many consecutive SDL states satisfying a common predicate explicitly definable in terms of the values of certain variables, the contents of the input port queue of certain process instances, etc.; and so does each phase (see also Section 6). Note, however, that by giving the characteristic properties these predicates are only specified up to the point that is relevant from the subscriber’s point of view. This is unlike the way in which characteristic properties of features are expressed in [7]. A specialization of linear-time temporal logic for SDL is used there to express the properties directly in terms of the specifics of a given model of the services and features involved. In order to make analysis practically feasible, a suitable abstraction is made. For example, an event produced by a subscriber is regarded as happening at the moment that the telecommunication system starts to handle the event. Thus, some exceptional cases are not covered; but otherwise even the Basic Call service appears to be a complete chaos. ACTL*, for Action-based CTL*, can be viewed as CTL* extended with relativized next operators.\(^4\) This highly expressive logic is a sublogic of the logic used as the general SPECS property language (SPECS PL) described in [18]; it is essentially the SPECS PL without its fixed-point operator, but with an until operator – which can be introduced as an abbreviation in the SPECS PL using the fixed-point operator.
ACTL* has, in addition to the usual logical operators – \(\top\) (true), \(\neg\) (not), \(\land\) (and), \(\forall\) (for all) – of classical first-order logic, the following temporal operators: \(X\) (nexttime), \(X_{\alpha}\) for each transition label \(\alpha\) (relativized nexttime), \(U\) (until) and \(A\) (for all paths). A transition label \(\alpha\) is either an element \(a\) from a set of actions \(A\) or the special label \(\tau\) (silent action). The intuition behind these operators is as follows:

- \(X\varphi\) means that \(\varphi\) will be true after the next transition,
- \(X_{\alpha}\varphi\) means that the next transition will be an \(\alpha\) transition and \(\varphi\) will be true after this transition,
- \(\varphi U \psi\) means that \(\psi\) will eventually be true and until then \(\varphi\) will be true,
- \(A\varphi\) means that \(\varphi\) will be true for all paths starting from the current state.

\(\tau\) transitions are used to model transitions where the action involved is hidden from the environment, e.g. the internal steps of a system. The first-order version of ACTL* is precisely defined in Appendix A. Some well-known temporal operators that can be introduced as abbreviations are \(F\) (finally or sometime), \(G\) (globally or always), and \([a]\) (inevitably after \(a\)):

- \(F\varphi\) stands for \(\top U \varphi\),
- \(G\varphi\) stands for \(\neg F \neg\varphi\),
- \([a]\varphi\) stands for \(A \neg X_{a} \neg ((X_{\tau} \top) U \varphi)\).

\([a]\varphi\) means that for all paths from the current state with an \(a\) transition as their first transition, after this \(a\) transition and zero or more directly following \(\tau\) transitions, \(\varphi\) will be true. So the operator \([a]\) is slightly different from the one in the standard Hennessy-Milner Logic of [12], where it is allowed to have \(\tau\) transitions directly preceding the \(a\) transition as well.
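As an illustration of these definitions, the path semantics of the core operators can be sketched in code. This is a minimal sketch, not from the paper: it evaluates formulae over a single finite path, so the path quantifier \(A\) is omitted and the end of the trace is treated as having no successor; all function and proposition names are illustrative.

```python
# Minimal sketch: linear-time evaluation of ACTL*-style formulae over
# one finite path. The paper works with infinite paths and branching
# time; this finite-trace restriction is only for illustration.

TRUE = ('true',)

def holds(f, states, labels, i=0):
    """Evaluate formula f at position i of a finite path.

    states: list of sets of atomic propositions, one set per position
    labels: list of transition labels; labels[i] leads from i to i + 1
    """
    op = f[0]
    if op == 'true':
        return True
    if op == 'prop':                       # atomic proposition
        return f[1] in states[i]
    if op == 'not':
        return not holds(f[1], states, labels, i)
    if op == 'and':
        return holds(f[1], states, labels, i) and \
               holds(f[2], states, labels, i)
    if op == 'X':                          # nexttime
        return i < len(labels) and holds(f[1], states, labels, i + 1)
    if op == 'Xl':                         # relativized nexttime X_a
        return i < len(labels) and labels[i] == f[1] and \
               holds(f[2], states, labels, i + 1)
    if op == 'U':                          # until
        return any(holds(f[2], states, labels, j) and
                   all(holds(f[1], states, labels, k) for k in range(i, j))
                   for j in range(i, len(states)))
    raise ValueError('unknown operator: %r' % (op,))

# Derived operators, exactly as abbreviated above:
def F(f):                                  # finally: true U f
    return ('U', TRUE, f)

def G(f):                                  # globally: not F not f
    return ('not', F(('not', f)))
```

For example, on the two-position path with states `[{'ready'}, {'calling'}]` and the single label `'dial'`, the formula \(X_{dial}\,\text{calling}\) holds at position 0, and so does \(G\,\neg\text{busy}\).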
We also use the abbreviation \([a_1, \ldots, a_n] \varphi\) for \([a_1] \varphi \land \ldots \land [a_n] \varphi\). Before we proceed with our quest for a suitable property language, it is worth recalling that we are not looking here for a language which allows us to formulate any property as elegantly or naturally as possible; the emphasis is on the practical feasibility of model checking. We are also not looking for a language which allows us to express the purpose of a new feature as described by the service provider who wants to provide the feature; it is the functionality extension agreed with the service provider to reach this purpose that matters to feature interaction detection. The purpose is elusive; what it means to reach the purpose is usually not even considered. For example, the purpose of the Terminating Call Screening feature is: not to be disturbed by telephone calls from certain people. This gives us little clue about the interactions this feature may cause; what is offered to prevent such disturbance determines these interactions. Of course, the purpose of the feature suggests potential interactions to experts. This means that it is suitable for guesswork, but it is not amenable to rigorous analysis.

4 Common forms of formulae

In practice, the crucial properties of most features can be expressed by formulae of the following general forms:

1. \( AG\varphi \)
2. \( AG(\varphi \Rightarrow [a] \psi) \)

where \(\varphi\) and \(\psi\) are formulae without temporal operators, i.e. formulae of classical first-order logic. They are mainly built from atomic formulae concerning the observable states of the subscribers’ telephones, the phases of a call that are recognizable through the observable states, and the features that subscribers have activated. \(a\) is an action label corresponding to an event that can be produced by subscribers at their telephones. In principle, we might also need formulae of the following general form:\(^5\)

3.
\( AG(\varphi \Rightarrow \neg [a] \psi) \)

Indeed, we actually need a few formulae of this additional form to formulate some general response properties of the system. For example, the following formula is needed to express that if the telephone of a subscriber is ready for dialling, it is possible for him or her to dial another subscriber:

\[ AG(A \neq B \land \text{ready}(A) \Rightarrow \neg [\text{dial}(A, B)] \bot) \]

None of the properties of this kind is specific to a certain feature. The technique to check whether properties are satisfied by a model described in SDL, which is explained in Section 5, can be adapted to properties expressed by formulae of this additional form. But, unlike the original technique, the use of the adapted one is not supported by any commercially available SDL toolset.

\(^5\)The conjecture is that the additional form allows any degree of non-bisimilarity to be distinguished (see [14]).

A formula of the form 1 expresses a state invariance property, i.e. a property that will hold at all states along all possible paths. A formula of the form 2 is a transition rule; it expresses a property that will hold for all state transitions along all possible paths. For example, the crucial properties of the Originating Call Screening (OCS) feature can be expressed by the following formulae if we assume that only the Basic Call service exists:

\[ AG(A \neq B \land OCS(A, B) \Rightarrow \neg \text{calling}(A, B)) \]
\[ AG(A \neq B \land OCS(A, B) \land \text{ready}(A) \land \text{idle}(B) \Rightarrow [\text{dial}(A, B)] \text{rejecting}(A)) \]
\[ AG(A \neq B \land \neg OCS(A, B) \land \text{ready}(A) \land \text{idle}(B) \Rightarrow [\text{dial}(A, B)] \text{calling}(A, B)) \]

The first formula is of the form 1. It expresses that the phase where subscriber A is calling subscriber B will never occur when B is on A’s screening list. The last two formulae are of the form 2.
The first of them expresses that, if A’s telephone is ready for dialling and B’s telephone is idle, but B is on A’s screening list, the phase where A’s call attempt is rejected will occur after A dials B. The second of them expresses that the calling phase will occur after A dials B, as was usual before, if B is not on A’s screening list. For clarity’s sake, we mention here that \(\text{calling}(A, B)\) indicates the phase during which B’s telephone is ringing and A gets a ring-back tone, and that \(\text{rejecting}(A)\) indicates the phase during which A gets a busy tone. Note that the last two formulae change the property of the Basic Call service expressed by the following formula:

\[ AG(A \neq B \land \text{ready}(A) \land \text{idle}(B) \Rightarrow [\text{dial}(A, B)] \text{calling}(A, B)) \]

This adaptation is needed because the Originating Call Screening feature affects dialling. The crucial properties of many other features can be expressed in the same vein, e.g. Terminating Call Screening, Call Forwarding Unconditional and Call Forwarding on Busy/No Answer.
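Read this way, a form-1 formula is a check over all states of a trace and a form-2 formula is a check over all of its transitions. The following is a minimal sketch under simplifying assumptions (a single finite trace, no \(\tau\) steps after the triggering event, a fixed subscriber pair, and invented proposition names):

```python
# Illustrative reading of form-1 and form-2 formulae as trace checks,
# for a fixed pair (A, B) with B on A's screening list (OCS).

def check_invariant(states, inv):
    """Form 1, AG(phi): phi must hold at every state along the trace."""
    return all(inv(s) for s in states)

def check_rule(states, events, guard, event, post):
    """Form 2, AG(phi => [a] psi), simplified: whenever `guard` holds
    and the next event is `event`, `post` must hold in the resulting
    state. (The paper's [a] also skips directly following silent tau
    steps; that refinement is omitted here for brevity.)"""
    return all(post(states[i + 1])
               for i, e in enumerate(events)
               if e == event and guard(states[i]))

# Toy trace: A is ready, dials B, and the call attempt is rejected.
states = [
    {'ocs_ab': True, 'ready_a': True, 'idle_b': True,
     'calling_ab': False, 'rejecting_a': False},
    {'ocs_ab': True, 'ready_a': False, 'idle_b': True,
     'calling_ab': False, 'rejecting_a': True},
]
events = ['dial_ab']

never_calls = lambda s: not (s['ocs_ab'] and s['calling_ab'])
guard = lambda s: s['ocs_ab'] and s['ready_a'] and s['idle_b']
post = lambda s: s['rejecting_a']
```

On this toy trace both the form-1 invariant (`never_calls`) and the form-2 rule (dialling while screened leads to the rejecting phase) come out true.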
Call Forwarding Unconditional (CFU), for example, can be described as follows if we again assume that only the Basic Call service exists:

\[ AG(A \neq B \land A \neq C \land CFU(A, B) \land \text{ready}(C) \land \text{idle}(B) \Rightarrow [\text{dial}(C, A)] \text{calling}(C, B)) \]
\[ AG(A \neq B \land A \neq C \land CFU(A, B) \land \text{ready}(C) \land \text{busy}(B) \Rightarrow [\text{dial}(C, A)] \text{rejecting}(C)) \]
\[ AG(A \neq C \land \neg (\exists B \cdot CFU(A, B)) \land \text{ready}(C) \land \text{idle}(A) \Rightarrow [\text{dial}(C, A)] \text{calling}(C, A)) \]
\[ AG(A \neq C \land \neg (\exists B \cdot CFU(A, B)) \land \text{ready}(C) \land \text{busy}(A) \Rightarrow [\text{dial}(C, A)] \text{rejecting}(C)) \]

The first two formulae express how dialling is affected if Call Forwarding Unconditional is activated by the called party and the last two formulae express that dialling is not affected if it is not activated. Note that if we instead assume that both the Basic Call service and the Originating Call Screening feature exist, several additional formulae are needed to cover the combinations of features as well. For the sake of simplicity, we will also assume in the remaining examples that only the Basic Call service exists. The Abbreviated Dialling (ABD) feature can also be described naturally in the same way:

\[ AG(A \neq B \land \text{ABD}(A, B, N) \land \text{ready}(A) \land \text{idle}(B) \Rightarrow [\text{dial}(A, \text{abbr}(N))] \text{calling}(A, B)) \]
\[ AG(A \neq B \land \text{ABD}(A, B, N) \land \text{ready}(A) \land \text{busy}(B) \Rightarrow [\text{dial}(A, \text{abbr}(N))] \text{rejecting}(A)) \]

Here \(\text{abbr}(N)\) is used to tag \(N\) as an abbreviated number – to make it distinguishable from a subscriber number. Most features need mainly formulae of the form 2, but some features only need formulae of the form 1, e.g.
Calling Number Delivery (CND) and Unlisted Number (UN):

\[ AG(A \neq B \land \text{CND}(A) \land \text{calling}(B, A) \Rightarrow \text{delivering}(B, A)) \]
\[ AG(\text{UN}(B) \Rightarrow \forall C \cdot \neg \text{delivering}(B, C)) \]

Note that these two features are trivially inconsistent. If subscriber \(B\) is calling subscriber \(A\) while \(A\) has activated CND, \(A\) should have \(B\)’s number delivered during the calling phase. But if additionally \(B\) has activated UN, this is in contradiction to \(B\)’s demand that his or her number should never be delivered to anybody. The Automatic Call Back (ACB) feature can be described as well, but at the cost of introducing an auxiliary predicate \(\text{acbsubscr}\) indicating which subscriber is called if the Automatic Call Back code is dialled:

\[ AG(A \neq B \land \text{ACB}(A) \Rightarrow [\text{dial}(B, A)] \text{acbsubscr}(A, B)) \]
\[ AG(A \neq B \land \text{acbsubscr}(A, B) \land \text{ready}(A) \land \text{idle}(B) \Rightarrow [\text{dial}(A, \text{acbcode})] \text{calling}(A, B)) \]
\[ AG(A \neq B \land \text{acbsubscr}(A, B) \land \text{ready}(A) \land \text{busy}(B) \Rightarrow [\text{dial}(A, \text{acbcode})] \text{rejecting}(A)) \]
\[ AG(A \neq B \land \text{acbsubscr}(A, B) \Rightarrow [\text{dial}(A, \text{acbcode})] \text{acbsubscr}(A, B)) \]
\[ AG(A \neq B \land \text{acbsubscr}(A, B) \Rightarrow [\text{offhook}(C), \text{onhook}(C)] \text{acbsubscr}(A, B)) \]
\[ AG(A \neq B \land C \neq D \land A \neq D \land \text{acbsubscr}(A, B) \Rightarrow [\text{dial}(C, D)] \text{acbsubscr}(A, B)) \]
\[ AG(\text{ACB}(A) \land \text{acbsubscr}(A, B) \land \text{acbsubscr}(A, C) \Rightarrow B = C) \]

The first formula expresses that, if subscriber \(A\) has activated ACB, subscriber \(B\) becomes the subscriber to be automatically called back immediately after \(B\) has dialled \(A\).\(^6\) The second and third formulae express that such an automatic call back will take place immediately after \(A\) dials the ACB code –
*acbcode* is used to represent this code. The following three formulae together express that events other than another subscriber dialling $A$ keep the subscriber to be automatically called back unchanged. The last formula expresses that there can be at most one subscriber to be automatically called back. In formulae of the form 2, the until operator only plays an inessential part; it is only used to deal with internal steps of the system as modelled by means of SDL. In general, one expects intuitively that the until operator will play an essential part in describing the Automatic Call Back feature. Surprisingly, it turned out to be relatively easy to devise the formulae given above, whereas a satisfactory formulation using the until operator in an essential way could not be found. In the case of the Automatic Call Back feature, the auxiliary predicate that had to be introduced does not seem to be artificial; the notion of the subscriber to be automatically called back, in case the ACB code is dialled, is natural and very relevant to the subscriber having activated the feature. A similar remark applies to the Automatic Recall feature – as could be expected – and also to various other features that require the introduction of auxiliary predicates. It also applies to the predicate needed for the Call Waiting feature. It represents the essentials of a call waiting situation, viz. the subscriber being spoken to and the subscriber on hold. However, this predicate could also be viewed as a phase of a call where at least one subscriber having activated the Call Waiting feature is involved. Note that, from this viewpoint, the Call Waiting feature adds to certain calls a new phase which did not occur before its introduction.

5 Observers to check properties

Observers, which are explained below, can be used to check whether properties expressed by linear-time formulae are satisfied.
Linear-time formulae are formulae of the general form $A \varphi$ where $\varphi$ is a temporal formula without the operator $A$. The formulae of the form 1 given in the previous section are linear-time formulae, but the formulae of the form 2 are not (they are true branching-time formulae). However, for formulae $\varphi$ and $\psi$ without temporal operators,

$$AG(\varphi \Rightarrow [a] \psi)$$

is equivalent to

$$AG(\varphi \Rightarrow \neg X_a \neg((X_{\tau} \top) U \psi))$$

in ACTL* – where all paths through the underlying transition systems are considered (see also [6]). We used the branching-time form first because, for the linear-time form, it is intuitively less clear that it is an appropriate one to express state transition rules. An observer is essentially a deterministic automaton accepting certain paths. The principle of model checking by means of observers is that, for a formula $A \varphi$ to be checked, a deterministic automaton is constructed that accepts a path if and only if \( \varphi \) is true of that path. This means that, for the formula \( A \varphi \) to hold, all paths must be accepted by the corresponding observer. Such an observer can always be constructed for a linear-time formula if variables range over a finite domain. The use of observers is supported by, for example, the SDL toolset GEODE [1], but the construction of observers from temporal formulae – as described in, for example, [8] – still has to be done manually. However, the observer for a formula of the form 1 is trivial and the observer for a formula of the form 2, i.e. \( AG(\varphi \Rightarrow [a] \psi) \), is a two-state automaton with states 0 and 1: a self-loop \( p \) on state 0, a transition \( q \) from state 0 to state 1, a self-loop \( r \) on state 1, and a transition \( s \) from state 1 back to state 0, where transition \( p \) is labelled with \( \neg(\varphi \land X_a \top) \), \( q \) with \( \varphi \land X_a \top \), \( r \) with \( \neg \psi \land X_{\tau} \top \) and \( s \) with \( \psi \).
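The observer just described can be sketched as a small program. This is an illustrative reading, not GEODE's actual observer mechanism: \(\varphi\) and \(\psi\) are Python predicates on states, all names are invented, and a finite trace that ends with a pending obligation is rejected here, a design choice the paper (which considers infinite paths) does not need to make.

```python
# Runnable sketch of the two-state observer for AG(phi => [a] psi)
# over a single finite trace.

def observer_accepts(states, labels, a, phi, psi, tau='tau'):
    """states: list of system states; labels[i] leads from i to i + 1."""
    obs = 0                                    # observer state, 0 or 1
    i = 0
    while i < len(states):
        if obs == 0:
            # transition q: phi holds and the next transition is an `a`
            if phi(states[i]) and i < len(labels) and labels[i] == a:
                obs = 1
            # transition p: otherwise, stay in state 0
            i += 1
        else:
            if psi(states[i]):
                obs = 0                        # transition s: obligation discharged
                # stay at position i: the trigger may fire again here
            elif i < len(labels) and labels[i] == tau:
                i += 1                         # transition r: wait through a tau step
            else:
                return False                   # neither r nor s applies: reject
    return obs == 0                            # a pending obligation rejects

# OCS-style instance: after `dial`, the rejecting phase must follow,
# possibly after silent internal steps.
phi = lambda s: 'ready' in s
psi = lambda s: 'rejecting' in s
```

For instance, the trace `[{'ready'}, {}, {'rejecting'}]` with labels `['dial', 'tau']` is accepted, while `[{'ready'}, {'busy'}]` with label `['dial']` is rejected.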
The observer will start in state 0 and it will keep this state as long as \( \neg(\varphi \land X_a \top) \) holds. It will pass to state 1 as soon as \( \varphi \land X_a \top \) holds and it will keep this state as long as \( \neg \psi \land X_{\tau} \top \) holds. It will pass back to state 0 as soon as \( \psi \) holds. Thereafter, this pattern of behaviour will repeat itself indefinitely. Note that it may not occur that \( \neg \psi \land \neg X_{\tau} \top \) holds while the observer is in state 1. Thus, it will accept exactly the paths that satisfy \( G(\varphi \Rightarrow [a] \psi) \). So, if we stick to temporal formulae of the above-mentioned two general forms, the construction of observers for them is quite easy. The automation of this construction will require only a small effort.

6 Pre-defined functions and predicates

In the previous sections, we used predicates for the observable states of telephones, the phases of telephone calls, etc. To check whether a model described in SDL satisfies properties formulated in (a restricted version of) ACTL*, these predicates must be defined explicitly in terms of the values of certain variables, and the like, that are extant in the model. This section gives an overview of a specialization of ACTL* suited for SDL. In Appendix A, the uninterpreted first-order version of ACTL* is defined. A partially interpreted version appears to be more suited for SDL. This means that temporal structures with specific functions and predicates are assumed for certain function and predicate symbols. It also seems useful to have several different domains instead of one, which requires a many-sorted version with sort symbols to distinguish between them. Specific domains are assumed for certain sort symbols as well. Thus, we get a specialization of ACTL* for SDL which offers a number of pre-defined sorts, functions and predicates. First the envisaged specialization is sketched and next the grounds for it are given.
In the following, saved signals and timers are not taken into account. The pre-defined sorts needed include the pre-defined sorts of SDL (including $PId$, the set of process instance identifiers) and additionally:

- **ChanNm**: a finite set of channel names;
- **StateNm**: a finite set of state names;
- **Signal**: a countably infinite set of signals.

Each pre-defined sort of SDL comes together with certain pre-defined functions to construct values of the sort or to extract other values from them; and so does the pre-defined sort **Signal**. In addition to these state-independent functions, some state-dependent functions and predicates are needed. For each variable declared in an SDL system definition by $\text{dcl } vnm \ Snm$, a function $vnm : PId \rightarrow Snm$ is needed; $vnm(pid)$ yields the current contents of the variable with name $vnm$ owned by the process instance with identifier $pid$. Besides, the following functions are needed:

- **state** : $PId \rightarrow StateNm$ – $\text{state}(pid)$ yields the name of the current process state of the process instance with identifier $pid$;
- **ipfirst** : $PId \rightarrow Signal$ – $\text{ipfirst}(pid)$ yields the first signal in the input port queue of the process instance with identifier $pid$;
- **chfirst** : $ChanNm \rightarrow Signal$ – $\text{chfirst}(cnm)$ yields the first signal in the channel with name $cnm$.

The following predicates are also needed:

- **init** : $PId$ – $\text{init}(pid)$ is true if the process instance with identifier $pid$ is in the start state;
- **ipempty** : $PId$ – $\text{ipempty}(pid)$ is true if there are no signals in the input port queue of the process instance with identifier $pid$;
- **chempty** : $ChanNm$ – $\text{chempty}(cnm)$ is true if there are no signals in the channel with name $cnm$;
- **existing** : $PId$ – $\text{existing}(pid)$ is true if the process instance with identifier $pid$ exists.
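The state-dependent functions and predicates above can be pictured as accessors over a snapshot of the system state. The following is a minimal sketch; the structure and all names are illustrative (real SDL semantics is far richer), `vnm` is generalized here to take the variable name as an argument, and the start-state name `'start'` is an invented convention.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ProcessInstance:
    state_name: str                                      # StateNm
    variables: Dict[str, object] = field(default_factory=dict)
    input_port: List[str] = field(default_factory=list)  # queued Signals

@dataclass
class SystemState:
    processes: Dict[int, ProcessInstance] = field(default_factory=dict)  # keyed by PId
    channels: Dict[str, List[str]] = field(default_factory=dict)         # keyed by ChanNm

    # state-dependent functions
    def state(self, pid: int) -> str:
        return self.processes[pid].state_name

    def vnm(self, pid: int, name: str):
        """Current contents of variable `name` owned by process `pid`."""
        return self.processes[pid].variables[name]

    def ipfirst(self, pid: int) -> Optional[str]:
        q = self.processes[pid].input_port
        return q[0] if q else None

    def chfirst(self, cnm: str) -> Optional[str]:
        ch = self.channels[cnm]
        return ch[0] if ch else None

    # state-dependent predicates
    def init(self, pid: int) -> bool:
        # assumes the start state is named 'start' (illustrative convention)
        return self.processes[pid].state_name == 'start'

    def ipempty(self, pid: int) -> bool:
        return not self.processes[pid].input_port

    def chempty(self, cnm: str) -> bool:
        return not self.channels[cnm]

    def existing(self, pid: int) -> bool:
        return pid in self.processes
```

The subscriber-level predicates of Sections 3 and 4 (ready, calling, rejecting, etc.) would then be defined as boolean combinations of such accessors over a concrete model's variables, process states and queues.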
In what precedes, we viewed the observable states of the subscriber's telephones, the phases of a call that are recognizable through the observable states, and the features that subscribers have activated as predicates on the internal states of the telecommunication system. In order to verify a model described in SDL with respect to properties expressed as ACTL* formulae of certain forms, these predicates have to be defined explicitly in terms of functions and predicates that depend on the system states extant in the model. Such a system state consists of a collection of process instances, where each instance is uniquely identified by a process instance identifier, and a collection of named channels. The process instances contained in the former collection may vary from system state to system state, but the channels contained in the latter collection are the same in all system states. The channels convey signals, which may incur a delay. A process instance comprises a named process state prescribing its future behaviour, a storage of variables, which determines the values of the variables associated with it, and an input port queue containing signals received by the process instance but not yet consumed by it. Clearly, the sorts, functions and predicates proposed above as the pre-defined ones are sufficient to consult any detail of the system states. Taken together, they are essentially a subset of the sorts, functions and predicates proposed as the pre-defined ones in [19]; only the fundamental ones needed to consult all details of the system states are retained in the current paper.

7 Closing remarks

Although the crucial properties of many features can currently be expressed by formulae of the above-mentioned two general forms, it is possible that these forms will turn out to be too restrictive in the future.
Preliminary investigations indicate that the SMV model checker [4] can be adapted such that it can be used to check whether models described in SDL satisfy properties formulated in ACTL. Roughly, ACTL differs from ACTL* in that each occurrence of $A$ in an ACTL formula must be followed by a boolean combination of formulae of the forms $X\, \varphi$, $X_\alpha \, \varphi$ and $\varphi \, U \, \psi$, where $\varphi$ and $\psi$ must be ACTL formulae as well. Of course, quantification could also be allowed, provided that the variables concerned range over finite domains.

The description of features by formulae of the above-mentioned two general forms seems closely related to the specification style with state invariants and pre- and post-condition style specifications of operations that is used with VDM [13]. This style makes it possible to associate a number of relatively simple proof obligations with two specifications of a program, whose discharge is sufficient to show that one specification correctly refines the other in the following sense: each program satisfying the former specification simulates, when viewed as a transition system, some program satisfying the latter one. A similar approach is advocated for use with Z [20]. It is useful to investigate whether there are connections between this notion of refinement and undesirable feature interactions.

There are also close connections with the approach to feature interaction detection proposed in [10]. What is new in the current paper is that $\tau$ transitions are taken into account. The “network properties” from that paper are exactly the temporal formulae of form 1, and the “declarative transition rules” are the temporal formulae of form 2, apart from the account of $\tau$ transitions. There would be an exact match if we had chosen $[\alpha] \, \varphi$ to stand for $A\, \neg X_\alpha\, \neg \varphi$. There is nothing like $\tau$ transitions in the official semantics of SDL [17].
However, recent proposals for an operational semantics, by which the meaning of a system definition is defined as a labelled transition system, all introduce $\tau$ transitions to model the internal steps of the system (see e.g. [11]). The current paper also indicates how the notation used in [10] relates to ACTL* and how it can be checked with existing techniques whether models described in SDL satisfy properties expressed in such a notation.

Acknowledgements

Thanks go to Wiet Bouma, Alfo Melisse, Hugo Velthuijsen, Anders Gammelgaard and Simon Pickin for helpful conversations and feedback on the subject of this paper.

References

A Definition of first-order ACTL*

Syntax

The language of the first-order version of ACTL*, containing terms and formulae, is defined over a set \( F \) of function symbols, a set \( P \) of predicate symbols, a set \( X \) of variable symbols, and a set \( A \) of actions. The silent action \( \tau \) is not in \( A \). Every function or predicate symbol has an arity \( n \) (\( n \geq 0 \)).

The terms of ACTL* are inductively defined by the following formation rules:

- variable symbols are terms;
- if \( f \) is a function symbol of arity \( n \) and \( t_1, \ldots, t_n \) are terms, then \( f(t_1, \ldots, t_n) \) is a term.
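The term formation rules transcribe directly into a small abstract syntax. The constructor names below are assumptions made for this sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    """A variable symbol; variable symbols are terms."""
    name: str

@dataclass(frozen=True)
class App:
    """An application f(t1, ..., tn) of a function symbol of arity n."""
    fn: str
    args: tuple

def variables(t):
    """Collect the variable symbols occurring in a term."""
    if isinstance(t, Var):
        return {t.name}
    # App case: union of the variables of the argument terms
    return set().union(*map(variables, t.args)) if t.args else set()
```

For example, `App("f", (Var("x"), App("c", ())))` represents the term f(x, c) with a nullary function symbol c.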
The formulae of ACTL* are inductively defined by the following formation rules:

- \( \top \) is a formula;
- if \( P \) is a predicate symbol of arity \( n \) and \( t_1, \ldots, t_n \) are terms, then \( P(t_1, \ldots, t_n) \) is a formula;
- if \( t_1 \) and \( t_2 \) are terms, then \( t_1 = t_2 \) is a formula;
- if \( \varphi \) is a formula, then \( \neg \varphi \) is a formula;
- if \( \varphi_1 \) and \( \varphi_2 \) are formulae, then \( \varphi_1 \land \varphi_2 \) is a formula;
- if \( \varphi \) is a formula and \( x \) is a variable symbol, then \( \forall x \cdot \varphi \) is a formula;
- if \( \varphi \) is a formula, then \( X \varphi \) is a formula;
- if \( \varphi \) is a formula and \( \alpha \in A \cup \{\tau\} \), then \( X_\alpha \varphi \) is a formula;
- if \( \varphi_1 \) and \( \varphi_2 \) are formulae, then \( \varphi_1 \, U \, \varphi_2 \) is a formula;
- if \( \varphi \) is a formula, then \( A \varphi \) is a formula.

The string representation of formulae suggested by these formation rules can lead to syntactic ambiguities; parentheses are used to avoid such ambiguities.

Semantics

The semantics of ACTL* terms and formulae is defined with respect to a temporal structure. A (first-order) temporal structure $K$ is a quintuple $\langle S, A, \rightarrow, D, L \rangle$ where:

- $S$ is a set of states;
- $A$ is the set of actions;
- $\rightarrow \subseteq S \times (A \cup \{\tau\}) \times S$ is the transition relation; the elements of $\rightarrow$ are called transitions;
- $D$ is a set, the domain of $K$;
- $L$ maps each state $s \in S$ to an interpretation $L(s)$ that assigns an appropriate meaning over $D$ to all function and predicate symbols of the temporal language, i.e.:
  - for every $n$-ary function symbol $f$, a total function $f^{L(s)} : D^n \rightarrow D$;
  - for every $n$-ary predicate symbol $P$, a total function $P^{L(s)} : D^n \rightarrow \{T, F\}$.
A temporal structure $\langle S, A, \rightarrow, D, L \rangle$ can be viewed as a labelled transition system $\langle S, A, \rightarrow \rangle$ together with a structure $\langle D, L(s) \rangle$ of classical first-order logic for each state $s \in S$, so as to provide for functions and predicates which may vary from state to state.

The truth of formulae is defined for fullpaths in $K$. A path in $K$ is an element $\pi = \langle s_0, \langle (\alpha_1, s_1), (\alpha_2, s_2), \ldots \rangle \rangle$ of $S \times ((A \cup \{\tau\}) \times S)^{\infty}$ such that $\langle s_i, \alpha_{i+1}, s_{i+1} \rangle \in \rightarrow$ for all $i < |\pi|$. Here $|\pi|$ is the length of the second component of $\pi$, i.e. the number of transitions represented. $\pi$ is a fullpath in $K$ iff there is no $\langle \alpha, s \rangle \in (A \cup \{\tau\}) \times S$ that, appended to the second component of $\pi$, yields again a path. Let $\pi = \langle s_0, \langle (\alpha_1, s_1), (\alpha_2, s_2), \ldots \rangle \rangle$ be a path. Then we write $\pi_0$ for $s_0$, $\pi_{i+1}$ for $\alpha_{i+1}$, and $\pi^i$ ($0 \leq i \leq |\pi|$) for the path $\langle s_i, \langle (\alpha_{i+1}, s_{i+1}), \ldots \rangle \rangle$. We also write $\text{fullpaths}_K(s)$ for the set $\{\pi \mid \pi \text{ is a fullpath in } K \text{ and } \pi_0 = s\}$.

The interpretation of terms and formulae of ACTL* in $K$ is further given relative to an assignment in $K$, which assigns a value to each variable symbol. An assignment in $K$ is a function $\xi : \mathcal{X} \rightarrow D$. For every assignment $\xi$, variable symbol $x$ and element $d \in D$, we write $\xi(x \rightarrow d)$ for the assignment $\xi'$ such that $\xi'(y) = \xi(y)$ if $y \neq x$ and $\xi'(x) = d$.

The meaning of terms is given by a function mapping term $t$, structure $K$, fullpath $\pi$ and assignment $\xi$ to the element of $D$ that is the value of $t$ in the first state of $\pi$ under assignment $\xi$.
We write $[t]^{K,\pi}_\xi$ to denote the value of this function for the arguments $t$, $K$, $\pi$ and $\xi$. Similarly, the meaning of formulae is given by a relation that holds between formula $\varphi$, structure $\mathcal{K}$, fullpath $\pi$ and assignment $\xi$ exactly when $\varphi$ is true of $\pi$ in $\mathcal{K}$ under assignment $\xi$. We write $\mathcal{K}, \pi \models_\xi \varphi$ to indicate that this relation holds for the arguments $\varphi$, $\mathcal{K}$, $\pi$ and $\xi$. The interpretation functions for terms and formulae are inductively defined by

$$
\begin{align*}
[x]^{K,\pi}_\xi &= \xi(x), \\
[f(t_1, \ldots, t_n)]^{K,\pi}_\xi &= f^{L(\pi_0)}([t_1]^{K,\pi}_\xi, \ldots, [t_n]^{K,\pi}_\xi)
\end{align*}
$$

and

\[ \mathcal{K}, \pi \models_\xi \top \quad \text{always}, \]
\[ \mathcal{K}, \pi \models_\xi P(t_1, \ldots, t_n) \quad \text{iff } P^{L(\pi_0)}([t_1]^{K,\pi}_\xi, \ldots, [t_n]^{K,\pi}_\xi) = T, \]
\[ \mathcal{K}, \pi \models_\xi t_1 = t_2 \quad \text{iff } [t_1]^{K,\pi}_\xi = [t_2]^{K,\pi}_\xi, \]
\[ \mathcal{K}, \pi \models_\xi \neg \varphi \quad \text{iff not } \mathcal{K}, \pi \models_\xi \varphi, \]
\[ \mathcal{K}, \pi \models_\xi \varphi_1 \land \varphi_2 \quad \text{iff } \mathcal{K}, \pi \models_\xi \varphi_1 \text{ and } \mathcal{K}, \pi \models_\xi \varphi_2, \]
\[ \mathcal{K}, \pi \models_\xi \forall x \cdot \varphi \quad \text{iff for all } d \in D, \ \mathcal{K}, \pi \models_{\xi(x \rightarrow d)} \varphi, \]
\[ \mathcal{K}, \pi \models_\xi X \varphi \quad \text{iff } |\pi| \geq 1 \text{ and } \mathcal{K}, \pi^1 \models_\xi \varphi, \]
\[ \mathcal{K}, \pi \models_\xi X_\alpha \varphi \quad \text{iff } |\pi| \geq 1 \text{ and } \pi_1 = \alpha \text{ and } \mathcal{K}, \pi^1 \models_\xi \varphi, \]
\[ \mathcal{K}, \pi \models_\xi \varphi_1 \, U \, \varphi_2 \quad \text{iff for some } i \leq |\pi|, \ \mathcal{K}, \pi^i \models_\xi \varphi_2 \text{ and for all } j < i, \ \mathcal{K}, \pi^j \models_\xi \varphi_1, \]
\[ \mathcal{K}, \pi \models_\xi A \varphi \quad \text{iff for all } \pi' \in \text{fullpaths}_{\mathcal{K}}(\pi_0), \ \mathcal{K}, \pi' \models_\xi \varphi. \]

A formula \( \varphi \) is true of fullpath \( \pi \) in temporal structure \( \mathcal{K} \), written \( \mathcal{K}, \pi \models \varphi \), iff \( \mathcal{K}, \pi \models_\xi \varphi \) for every assignment \( \xi \). A formula \( \varphi \) is valid, written \( \models \varphi \), iff \( \mathcal{K}, \pi \models \varphi \) for every temporal structure \( \mathcal{K} \) and every fullpath \( \pi \) in \( \mathcal{K} \).
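To make the satisfaction clauses concrete, the quantifier-free path connectives can be prototyped over a single finite fullpath. Formulas are encoded as nested tuples and predicates as nullary atoms; both encodings, and the restriction to one path (so $A$ and $\forall$ are omitted, since they need the whole temporal structure rather than one path), are assumptions of this sketch.

```python
def holds(f, props, actions, i=0):
    """Evaluate a quantifier-free path formula on the suffix pi^i of a
    finite fullpath.  props[k] is the set of atomic predicates true in
    state s_k; actions[k] is the action alpha_{k+1} of the (k+1)-st
    transition ("tau" for a silent step)."""
    tag = f[0]
    if tag == "top":
        return True
    if tag == "atom":                       # P -- a nullary predicate symbol
        return f[1] in props[i]
    if tag == "not":
        return not holds(f[1], props, actions, i)
    if tag == "and":
        return (holds(f[1], props, actions, i)
                and holds(f[2], props, actions, i))
    if tag == "X":                          # |pi^i| >= 1 and pi^{i+1} |= f1
        return i < len(actions) and holds(f[1], props, actions, i + 1)
    if tag == "Xa":                         # additionally alpha_{i+1} = a
        return (i < len(actions) and actions[i] == f[1]
                and holds(f[2], props, actions, i + 1))
    if tag == "U":                          # f1 U f2
        return any(holds(f[2], props, actions, k)
                   and all(holds(f[1], props, actions, j)
                           for j in range(i, k))
                   for k in range(i, len(props)))
    raise ValueError("unknown connective: %r" % (tag,))
```

For instance, over the path with states {p}, {q}, {r} and actions a, tau, the formula $X_a\, q$ holds while $X_b\, q$ does not.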
# KDE Fundamentals

# Contents

- 1 Introduction
- 2 Installing KDE Plasma Desktop and KDE Applications
  - 2.1 Installing Packages
    - 2.1.1 Linux®
    - 2.1.2 Microsoft® Windows®
    - 2.1.3 Mac® OS
    - 2.1.4 BSD™
    - 2.1.5 Mobile Devices
    - 2.1.6 Live Media
    - 2.1.7 Building from Source Code
- 3 Finding Your Way Around
  - 3.1 A Visual Dictionary
    - 3.1.1 The Big Picture
      - 3.1.1.1 Windows
      - 3.1.1.2 Elements of Graphical User Interface (GUI)
    - 3.1.2 The Widgets
  - 3.2 Common Menus
    - 3.2.1 The File Menu
    - 3.2.2 The Edit Menu
    - 3.2.3 The View Menu
    - 3.2.4 The Tools Menu
    - 3.2.5 The Settings Menu
    - 3.2.6 The Help Menu
    - 3.2.7 Thanks and Acknowledgments
  - 3.3 Common Keyboard Shortcuts
    - 3.3.1 Working with Windows
      - 3.3.1.1 Starting and Stopping Applications
      - 3.3.1.2 Moving Around
      - 3.3.1.3 Panning and Zooming
    - 3.3.2 Working with Activities and Virtual Desktops
    - 3.3.3 Working with the Desktop
    - 3.3.4 Getting Help
    - 3.3.5 Working with Documents
    - 3.3.6 Working with Files
    - 3.3.7 Changing Volume and Brightness
    - 3.3.8 Leaving Your Computer
    - 3.3.9 Modifying Shortcuts
- 4 Common Tasks
  - 4.1 Navigating Documents
    - 4.1.1 Scrolling
    - 4.1.2 Zooming
  - 4.2 Opening and Saving Files
    - 4.2.1 Introduction
    - 4.2.2 The File Selection Window
    - 4.2.3 Thanks and Acknowledgments
  - 4.3 Check Spelling
    - 4.3.1 Check Spelling
    - 4.3.2 Automatic Spell Checking
    - 4.3.3 Configuring Sonnet
    - 4.3.4 Thanks and Acknowledgments
  - 4.4 Find and Replace
    - 4.4.1 The Find Function
    - 4.4.2 The Replace Function
    - 4.4.3 Thanks and Acknowledgments
  - 4.5 Choosing Fonts
  - 4.6 Choosing Colors
    - 4.6.1 Using Basic Colors
    - 4.6.2 Mixing Colors
      - 4.6.2.1 Using the Grid
      - 4.6.2.2 Using the Screen Colors
      - 4.6.2.3 Hue/Saturation/Value
      - 4.6.2.4 Red/Green/Blue
      - 4.6.2.5 HTML Hexadecimal Code
    - 4.6.3 Custom Colors
    - 4.6.4 Thanks and Acknowledgments
- 5 Customizing KDE software
  - 5.1 Customizing Toolbars
    - 5.1.1 Modifying Toolbar Items
      - 5.1.1.1 Adding an Item
      - 5.1.1.2 Removing an Item
      - 5.1.1.3 Changing the Position of Items
      - 5.1.1.4 Adding a Separator
      - 5.1.1.5 Restoring Defaults
      - 5.1.1.6 Changing Text and Icons
    - 5.1.2 Customizing Toolbar Appearance
      - 5.1.2.1 Text Position
      - 5.1.2.2 Icon Size
      - 5.1.2.3 Moving Toolbars
      - 5.1.2.4 Show/Hide Toolbars
    - 5.1.3 Thanks and Acknowledgments
  - 5.2 Using and Customizing Shortcuts
    - 5.2.1 Introduction
    - 5.2.2 Changing a Shortcut
    - 5.2.3 Resetting Shortcuts
    - 5.2.4 Removing a Shortcut
    - 5.2.5 Working with Schemes
    - 5.2.6 Printing Shortcuts
    - 5.2.7 Thanks and Acknowledgments
- 6 Credits and License

## Abstract

This guide provides an introduction to the Plasma workspace and applications and describes many common tasks that can be performed.

# Chapter 1: Introduction

Welcome to KDE! This guide will introduce you to the many features of the Plasma workspace and applications and describe many common tasks you can perform. For more information on KDE, visit the KDE website.

# Chapter 2: Installing KDE Plasma Desktop and KDE Applications

You can install KDE applications, including KDE Plasma Desktop, on a variety of different platforms, ranging from smartphones and tablets to computers running Microsoft® Windows®, Mac® OS, UNIX®, BSD™ or Linux®. Binary packages are available for many different platforms and distributions, or advanced users may build from source code.

## 2.1 Installing Packages

Hundreds of developers worldwide have done a lot of work to make it easy to install KDE onto a variety of different devices and platforms.

### 2.1.1 Linux®

Nearly every Linux® distribution provides binary packages for individual applications and the KDE Plasma Desktop as a whole. To install an individual application, look for its name in your distribution's package collection. To install one of the KDE Plasma Workspaces, like KDE Plasma Desktop, look for a metapackage or package group, typically plasma-desktop.
**NOTE**
Some applications may be installed together with other applications in a combined package named after the KDE package they are provided in. For instance, Konqueror might be found in the kde-baseapps package.

If you have trouble locating KDE packages for your distribution, please contact their support resources. Many distributions also have a team dedicated to packaging applications by KDE that can provide assistance specific to them.

### 2.1.2 Microsoft® Windows®

The KDE on Windows Initiative provides binary packages of KDE applications for Microsoft® Windows®. They also provide a special installer application that permits you to install individual applications or groups and all necessary dependencies easily. For more information on the initiative and to download the installer, visit the KDE on Windows Initiative.

### 2.1.3 Mac® OS

Individual KDE applications can be installed through several different ‘ports’ systems available for Mac® OS. Several different KDE applications also provide their own binary builds for Mac® OS. For more information, visit KDE on Mac OSX.

### 2.1.4 BSD™

Most BSD™ distributions allow you to install KDE applications and the KDE Plasma Desktop through their ‘ports’ system. For more information on installing ports, see your BSD™ distribution's documentation.

### 2.1.5 Mobile Devices

Plasma Mobile is an exciting initiative to bring a new KDE experience to mobile devices like smartphones or tablets. Binary releases are provided for several different devices. For more information, visit Plasma Mobile.

### 2.1.6 Live Media

Several Linux® and BSD™ distributions offer live media. This permits you to try out the KDE Plasma Desktop without installing anything to your system. All you have to do is insert a CD or connect a USB drive and boot from it. If you like what you see, most offer an option to install it to your hard drive. There is a list of distributions that offer the KDE workspace and applications on live media on the KDE website.
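The package-manager route described in Section 2.1.1 can be sketched programmatically. The manager-to-command table below is an assumption for illustration only (package names vary by distribution; consult your distribution's documentation), and the sketch prints a suggested command rather than running it:

```python
import shutil

# Hypothetical mapping from detected package-manager tools to install
# commands for the "plasma-desktop" metapackage mentioned above.
MANAGERS = {
    "apt-get": "sudo apt-get install plasma-desktop",
    "dnf": "sudo dnf install plasma-desktop",
    "pacman": "sudo pacman -S plasma-desktop",
    "zypper": "sudo zypper install plasma-desktop",
}

def suggest_install_command():
    """Return a suggested install command for the first package manager
    found on the PATH, or a pointer to the distribution docs."""
    for tool, command in MANAGERS.items():
        if shutil.which(tool):
            return command
    return "no known package manager found; see your distribution's documentation"

print(suggest_install_command())
```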
2.1.7 Building from Source Code For detailed information on how to compile and install KDE Plasma Desktop and applications see Build from source. Since KDE software uses cmake you should have no trouble compiling it. Should you run into problems please report them to the KDE mailing lists. The recommended tool to build Frameworks, KDE Plasma Desktop and all the other applications is kdesrc-build Chapter 3 Finding Your Way Around 3.1 A Visual Dictionary The KDE Plasma Workspaces feature many different graphical user interface elements, commonly known as ‘widgets’. This guide will introduce you to their names and functions. 3.1.1 The Big Picture 3.1.1.1 Windows This is the window of the KWite, a text editor. Click on part of the window to learn more about it. 1. The **Window Menu** 2. The **Titlebar** 3. Buttons to minimize, maximize, or close the window 4. The Menubar 5. The Toolbar 6. A very large Text Area that acts as this program’s Central Widget 7. A vertical Scrollbar (there is also a horizontal scrollbar below the text box) 8. The Statusbar This is the another window, that of the Dolphin file manager. Click on part of the window to learn more about it. 1. A panel that contains a list of Places on the computer system 2. A Breadcrumb of the path of the displayed folder 3. A folder Icon 4. A file Icon 5. A highlighted Icon 6. A Context Menu listing actions that can be performed on a file 7. A Slider that changes the size of the Icons displayed 8. More Panels 3.1.1.2 Elements of Graphical User Interface (GUI) This screenshot, from the System Settings Formats panel, shows some GUI elements. Click on part of the window to learn more about it. 1. An **Icon List** (the second item is selected) 2. An open **Drop Down Box** 3. An item in the Drop Down Box that has been selected 4. Some more **Buttons** KDE Fundamentals This screenshot, from the System Settings Custom Shortcuts panel, shows some more GUI elements. 1. A **Tree View** 2. 
A **Check Box** that has been selected 3. A pair of **Spin Boxes** 4. A **Menu Button** This screenshot, from the System Settings Default Applications panel, shows even more GUI elements. Click on part of the window to learn more about it. 1. A **List Box** 2. A pair of **Radio Buttons** 3. A **Text Box** Finally, this screenshot, from the System Settings Colors panel, shows five **Tabs** ### 3.1.2 The Widgets <table> <thead> <tr> <th>Name</th> <th>Description</th> <th>Screenshot</th> </tr> </thead> <tbody> <tr> <td>Central Widget</td> <td>The main area of the running application. This might be the document you are editing in a word processor or the board of a game like Chess.</td> <td><img src="image" alt="Central Widget Screenshot" /></td> </tr> <tr> <td>Button</td> <td>These can be clicked with the left mouse button to perform an action.</td> <td><img src="image" alt="OK Button Screenshot" /></td> </tr> <tr> <td>Breadcrumb</td> <td>Shows the path in a hierarchical system, such as a filesystem. Click on any part of the path to go up in the tree to that location. Click on the arrow to the right of part of the path to go to another child element of that path.</td> <td><img src="image" alt="Breadcrumb Screenshot" /></td> </tr> </tbody> </table> ## KDE Fundamentals <table> <thead> <tr> <th>Element</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Check Box</td> <td>These can be clicked to select and unselect items. They are typically used in a list of several selections. Selected items typically display a check mark, whereas unselected items will have an empty box.</td> </tr> <tr> <td>Color Selector</td> <td>This allows a color to be selected for various purposes, such as changing the color of text. For more information, see Section 4.6</td> </tr> <tr> <td>Combo Box</td> <td>A combination of a Drop Down Box and a Text Box. You can select an option from the list, or type in the text box. 
Some combo boxes may automatically complete entries for you, or open the list with a list of choices that match what you have typed.</td> </tr> <tr> <td>Context Menu</td> <td>Many user interface elements in the Plasma Workspace and in Applications contain a context menu, which can be opened by clicking on something with the right mouse button. This menu contains options and actions that usually affect the user interface element that was right-clicked.</td> </tr> <tr> <td>Dialog Box</td> <td>A small window that appears above a larger application window. It may contain a message, warning, or configuration panel, among others.</td> </tr> <tr> <td>Drop Down Box</td> <td>This provides a list of items, from which you may select one.</td> </tr> <tr> <td></td> <td>KDE Fundamentals</td> </tr> <tr> <td>---</td> <td>---</td> </tr> <tr> <td><strong>Icon</strong></td> <td>A graphical representation of something, such as a file or action. They typically, but not always, also contain a text description, either beneath or to the right of the icon.</td> </tr> <tr> <td><strong>Icon List</strong></td> <td>This provides a list of items represented by an Icon and a description. It is typically used in the left panel of configuration panels to allow selection from various types of configuration categories.</td> </tr> <tr> <td><strong>List Box</strong></td> <td>This is a list of items that typically allows multiple items to be selected. To select a group of contiguous items, press the <strong>Shift</strong> key and click the first and last items. To select multiple items that are not contiguous, press the <strong>Ctrl</strong> key and select the desired items.</td> </tr> <tr> <td><strong>Menu</strong></td> <td>A list of selections, that usually perform an action, set an option, or opens a window. 
These can be opened from <strong>menubars</strong> or <strong>menu buttons</strong>.</td> </tr> <tr> <td>KDE Fundamentals</td> <td></td> </tr> <tr> <td>------------------</td> <td></td> </tr> <tr> <td><strong>Menubar</strong></td> <td>These are located at the top of nearly every window and provide access to all the functions of the running application. For more information, see Section 3.2.</td> </tr> <tr> <td><strong>Menu Button</strong></td> <td>A special type of button that opens a menu.</td> </tr> <tr> <td><strong>Panel or Sidebar or Tool View</strong></td> <td>These are located on the sides or bottom of the central widget and allow you to perform many different tasks in an application. A text editor might provide a list of open documents in one, while a word processor might allow you to select a clip art image.</td> </tr> <tr> <td><strong>Progress Bar</strong></td> <td>A small bar that indicates that a long-running operation is being performed. The bar may indicate how much of the operation has completed, or it may simply bounce back and forth to indicate that the operation is in progress.</td> </tr> <tr> <td><strong>Radio Button</strong></td> <td>These are used in a list of options, and only permit one of the options in the list to be selected.</td> </tr> <tr> <td><strong>Scrollbar</strong></td> <td>Allows you to navigate a document.</td> </tr> </tbody> </table> ### KDE Fundamentals <table> <thead> <tr> <th>Component</th> <th>Description</th> <th>Image</th> </tr> </thead> <tbody> <tr> <td><strong>Slider</strong></td> <td>Allows a numeric value to be selected by moving a small bar either horizontally or vertically across a line.</td> <td><img src="image1.png" alt="Slider Image" /></td> </tr> <tr> <td><strong>Spin Box</strong></td> <td>This permits a numerical value to be selected, either by using the up and down arrows to the right of the box to raise or lower the value, respectively, or by typing the value into the text box.</td> <td><img src="image2.png" 
alt="Spin Box Image" /></td> </tr> <tr> <td><strong>Status Bar</strong></td> <td>These are located at the bottom of many applications and display information about what the application is currently doing. For instance, a web browser might indicate the progress of loading a web page, while a word processor might display the current word count.</td> <td><img src="image3.png" alt="Status Bar Image" /></td> </tr> <tr> <td><strong>Tab</strong></td> <td>These appear at the top of an area of a window, and permit that area of the window to be changed to a variety of different selections.</td> <td><img src="image4.png" alt="Tab Image" /></td> </tr> <tr> <td><strong>Text Area</strong></td> <td>Allows a large amount of text to be typed in, typically multiple lines or paragraphs. Unlike a Text Box, pressing Enter will usually result in a line break.</td> <td><img src="image5.png" alt="Text Area Image" /></td> </tr> <tr> <td><strong>Text Box</strong></td> <td>A single-line text entry that allows a small amount of text to be typed in. Typically, pressing Enter will perform the same action as clicking the OK button.</td> <td><img src="image6.png" alt="Text Box Image" /></td> </tr> <tr> <td><strong>Titlebar</strong></td> <td>This is located at the top of every window. It contains the name of the application and usually information about what the application is doing, like the title of the web page being viewed in a web browser or the filename of a document open in a word processor.</td> <td><img src="image7.png" alt="Titlebar Image" /></td> </tr> </tbody> </table> ### KDE Fundamentals <table> <thead> <tr> <th>Toolbar</th> <th>These are located near the top of many applications, typically directly underneath the menu bar. They provide access to many common functions of the running application, like Save or Print.</th> </tr> </thead> <tbody> <tr> <td>Tree View</td> <td>A Tree View allows you to select from a hierarchical list of options. 
A section, or category, of the Tree View may be unexpanded, in which case no options will appear beneath it and the arrow to the left of the title will be pointing right, toward the title. It may also be expanded, in which case several options will be listed below it, and the arrow to the left of the title will be pointing down, toward the options. To expand a portion of the tree view, click the arrow to the left of the title of the section you wish to expand, double-click on the title, or select the title using your keyboard’s arrow keys and press the Enter or + key. To minimize a portion of the tree view, you may also click the arrow, double-click on the title, or press the Enter or - key.</td> </tr> </tbody> </table>

### 3.2 Common Menus

Many KDE applications contain these menus. However, most applications will have more menu entries than those listed here, and others may be missing some of the entries listed here.

![The menubar in Gwenview.](image)

Some applications, like Dolphin, do not show a menubar by default. You can show it by pressing Ctrl+M. You can also use this shortcut to hide the menubar in applications that support doing so.

### 3.2.1 The File Menu

The **File** menu allows you to perform operations on the currently open file and access common tasks in applications. Common menu items include:

- **File → New (Ctrl+N)** - Creates a new file.
- **File → New Window** - Opens a new window.
- **File → Open... (Ctrl+O)** - Opens an already existing file.
- **File → Save (Ctrl+S)** - Saves the file. If the file already exists, it will be overwritten.
- **File → Save As...** - Saves the file with a new filename.
- **File → Save All** - Saves all open files.
- **File → Reload (F5)** - Reloads the current file.
- **File → Reload All** - Reloads all open files.
- **File → Print... (Ctrl+P)** - Prints the file. Use **Print to File (PDF)** to generate a PDF file, or select a range of pages to print only these pages to a new PDF file.
- **File → Close (Ctrl+W)** - Closes the current file.
- **File → Close All** - Closes all open files.
- **File → Quit (Ctrl+Q)** - Exits the program.

### 3.2.2 The Edit Menu

The **Edit** menu allows you to modify the currently open file.

- **Edit → Undo (Ctrl+Z)** - Undo the last action you performed in the file.
- **Edit → Redo (Ctrl+Shift+Z)** - Redo the last action you performed in the file.
- **Edit → Cut (Ctrl+X)** - Removes the currently selected portion of the file, if any, and places a copy of it in the clipboard buffer.
- **Edit → Copy (Ctrl+C)** - Places a copy of the currently selected portion of the file, if any, in the clipboard buffer.
- **Edit → Paste (Ctrl+V)** - Copies the first item in the clipboard buffer to the current location in the file.
- **Edit → Select All (Ctrl+A)** - Selects the entire contents of the currently open file.
- **Edit → Find... (Ctrl+F)** - Allows you to search for text in the currently open file.
- **Edit → Replace... (Ctrl+R)** - Allows you to search for text in the currently open file and replace it with something else.
- **Edit → Find Next (F3)** - Go to the next match of the last Find operation.
- **Edit → Find Previous (Shift+F3)** - Go to the previous match of the last Find operation.

### 3.2.3 The View Menu

The **View** menu allows you to change the layout of the currently open file and/or the running application. This menu has different options depending on the application you are using.

### 3.2.4 The Tools Menu

The **Tools** menu allows you to perform certain actions on the currently open file.

**Tools → Automatic Spell Checking (Ctrl+Shift+O)** Check for spelling errors as you type. For more information, see Section 4.3.2.

**Tools → Spelling...** This initiates the spellchecking program, which is designed to help you catch and correct any spelling errors. For more information, see Section 4.3.1.
**Tools → Spelling (from cursor)...** This initiates the spellchecking program, but only checks the portion of the document from the current location of the cursor to the end. For more information, see Section 4.3.1.

**Tools → Spellcheck Selection...** This initiates the spellchecking program, but only checks the currently selected text in the document. For more information, see Section 4.3.1.

**Tools → Change Dictionary...** This allows you to change the dictionary used to check spellings. For more information, see Section 4.3.3.

### 3.2.5 The Settings Menu

The **Settings** menu allows you to customize the application. This menu typically contains the following items:

**Settings → Show Menubar (Ctrl+M)** Toggles the menubar display on and off. Once hidden, it can be made visible using the shortcut Ctrl+M again. If the menubar is hidden, the context menu opened with a right mouse button click anywhere in the view area has an extra entry Show Menubar.

**Settings → Show Statusbar** Toggles the display of the statusbar on and off. Some KDE applications use a statusbar at the bottom of their window to display useful information about what the application is currently doing.

**Settings → Toolbars Shown** Allows you to show and hide the various toolbars supported by the application.

**Settings → Configure Shortcuts...** Allows you to enable, disable, and modify keyboard shortcuts. For more information, see Section 5.2.

**Settings → Configure Toolbars...** Allows you to customize the contents, layout, text, and icons of toolbars. For more information, see Section 5.1.

**Settings → Configure Notifications...** This item displays a standard KDE notifications configuration dialog, where you can change the notifications (sounds, visible messages, etc.) used by the application.
For more information on how to configure notifications, please read the documentation for the System Settings module Manage Notifications.

**Settings → Configure Application...** Opens the configuration panel for the currently running application.

### 3.2.6 The Help Menu

The **Help** menu gives you access to the application’s documentation and other useful resources.

**Help → Application Handbook (F1)** Invokes the KDE Help system starting at the running application’s handbook.

**Help → What’s This? (Shift+F1)** Changes the mouse cursor to a combination arrow and question mark. Clicking on items within the application will open a help window (if one exists for the particular item) explaining the item’s function.

**Help → Tip of the Day** This command opens the Tip of the Day dialog. You can page through all the tips by using the buttons on the dialog and choose whether to show the tips at startup. Note: Not all applications provide these tips.

**Help → Report Bug...** Opens the Bug report dialog where you can report a bug or request a ‘wishlist’ feature.

**Help → Donate** Opens the Donations page where you can support KDE and its projects.

**Help → Switch Application Language...** Opens a dialog where you can edit the Primary language and Fallback language for this application.

**Help → About Application** This will display version and author information for the running application.

**Help → About KDE** This displays the KDE Development Platform version and other basic information.

### 3.2.7 Thanks and Acknowledgments

Special thanks to an anonymous Google Code-In 2011 participant for writing much of this document.

### 3.3 Common Keyboard Shortcuts

The KDE Plasma Workspaces provide keyboard shortcuts that allow you to perform many tasks without touching your mouse. If you use your keyboard frequently, using these can save you lots of time. This list contains the most common shortcuts supported by the workspace itself and by many applications available within it.
Every application also provides its own shortcuts, so be sure to check its manual for a comprehensive listing.

### 3.3.1 Working with Windows

These shortcuts allow you to perform all kinds of operations with windows, whether it be opening, closing, moving, or switching between them.

### 3.3.1.1 Starting and Stopping Applications

These shortcuts make it easy to start and stop programs.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Meta</td> <td>Open the Application Launcher</td> </tr> <tr> <td>Alt+Space / Alt+F2</td> <td>Run Command Interface</td> </tr> <tr> <td>Ctrl+Esc</td> <td>System Activity</td> </tr> <tr> <td>Alt+F4</td> <td>Close</td> </tr> <tr> <td>Ctrl+Q</td> <td>Quit</td> </tr> <tr> <td>Ctrl+Alt+Esc</td> <td>Force Quit</td> </tr> </tbody> </table>

### 3.3.1.2 Moving Around

These shortcuts allow you to navigate between windows, activities, and desktops efficiently.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Ctrl+F10</td> <td>Present Windows</td> </tr> <tr> <td>Ctrl+F9</td> <td>Present Windows on current desktop</td> </tr> <tr> <td>Ctrl+F7</td> <td>Present Windows of current application only</td> </tr> <tr> <td>Ctrl+F12</td> <td>Show Desktop</td> </tr> <tr> <td>Ctrl+Alt+A</td> <td>Activate Window Demanding Attention</td> </tr> <tr> <td>Alt+Tab</td> <td>Walk through windows</td> </tr> <tr> <td>Alt+Shift+Tab</td> <td>Walk through windows (Reverse)</td> </tr> <tr> <td>Alt+F3</td> <td>Open the Window Operations menu</td> </tr> <tr> <td>Meta+Alt+Up</td> <td>Switch to Window Above</td> </tr> <tr> <td>Meta+Alt+Down</td> <td>Switch to Window Below</td> </tr> <tr> <td>Meta+Alt+Left</td> <td>Switch to Window to the Left</td> </tr> <tr> <td>Meta+Alt+Right</td> <td>Switch to Window to the Right</td> </tr> </tbody> </table>

### 3.3.1.3 Panning and Zooming

Need to get a closer look?
The KDE Plasma Workspaces allow you to zoom in and out and move your entire desktop around, so you can zoom in even when the application you are using doesn’t support it.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Meta+=</td> <td>Zoom In</td> </tr> <tr> <td>Meta+-</td> <td>Zoom Out</td> </tr> <tr> <td>Meta+0</td> <td>Zoom Normal</td> </tr> <tr> <td>Meta+Up</td> <td>Pan Up</td> </tr> <tr> <td>Meta+Down</td> <td>Pan Down</td> </tr> <tr> <td>Meta+Left</td> <td>Pan Left</td> </tr> <tr> <td>Meta+Right</td> <td>Pan Right</td> </tr> </tbody> </table>

### 3.3.2 Working with Activities and Virtual Desktops

These shortcuts allow you to switch between and manage Activities and virtual desktops.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Meta+Q / Alt+D,Alt+A</td> <td>Manage Activities</td> </tr> <tr> <td>Meta+Tab</td> <td>Next Activity</td> </tr> <tr> <td>Meta+Shift+Tab</td> <td>Previous Activity</td> </tr> <tr> <td>Ctrl+F1</td> <td>Switch to Desktop 1</td> </tr> <tr> <td>Ctrl+F2</td> <td>Switch to Desktop 2</td> </tr> <tr> <td>Ctrl+F3</td> <td>Switch to Desktop 3</td> </tr> <tr> <td>Ctrl+F4</td> <td>Switch to Desktop 4</td> </tr> </tbody> </table>

### 3.3.3 Working with the Desktop

These shortcuts allow you to work with the KDE Plasma Desktop and panels.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Alt+D A</td> <td>Add Widgets</td> </tr> <tr> <td>Alt+D R</td> <td>Remove this Widget</td> </tr> <tr> <td>Alt+D L</td> <td>Lock/Unlock Widgets</td> </tr> <tr> <td>Alt+D S</td> <td>Widget Settings</td> </tr> <tr> <td>Ctrl+F12</td> <td>Show Desktop</td> </tr> <tr> <td>Alt+D T</td> <td>Run the Associated Application</td> </tr> <tr> <td>Alt+D,Alt+S</td> <td>Desktop Settings</td> </tr> </tbody> </table>

### 3.3.4 Getting Help

Need some help?
The manual for the current application is only a keypress away, and some programs even have additional help that explains the element in focus.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>F1</td> <td>Help</td> </tr> <tr> <td>Shift+F1</td> <td>What’s This?</td> </tr> </tbody> </table>

### 3.3.5 Working with Documents

Whether it’s a text document, spreadsheet, or web site, these shortcuts make performing many kinds of tasks with them easy.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>F5</td> <td>Refresh</td> </tr> <tr> <td>Ctrl+A</td> <td>Select All</td> </tr> </tbody> </table>

### 3.3.6 Working with Files

Whether you are in an Open/Save dialog or the Dolphin file manager, these shortcuts save you time when performing operations on files. Note that some of the concepts used with files are the same as with documents, so several of the shortcuts are identical to their counterparts listed above.

<table> <thead> <tr> <th>Shortcut</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Ctrl+Z</td> <td>Undo</td> </tr> <tr> <td>Ctrl+Shift+Z</td> <td>Redo</td> </tr> <tr> <td>Ctrl+X</td> <td>Cut</td> </tr> <tr> <td>Ctrl+C</td> <td>Copy</td> </tr> <tr> <td>Ctrl+V</td> <td>Paste</td> </tr> <tr> <td>Ctrl+N</td> <td>New</td> </tr> <tr> <td>Ctrl+P</td> <td>Print</td> </tr> <tr> <td>Ctrl+S</td> <td>Save</td> </tr> <tr> <td>Ctrl+F</td> <td>Find</td> </tr> <tr> <td>Ctrl+W</td> <td>Close Document/Tab</td> </tr> <tr> <td>Ctrl+A</td> <td>Select All</td> </tr> <tr> <td>Ctrl+L</td> <td>Replace Location</td> </tr> <tr> <td>Ctrl+Shift+A</td> <td>Invert Selection</td> </tr> <tr> <td>Alt+Left</td> <td>Back</td> </tr> <tr> <td>Alt+Right</td> <td>Forward</td> </tr> <tr> <td>Alt+Up</td> <td>Up (to folder that contains this one)</td> </tr> <tr> <td>Alt+Home</td> <td>Home Folder</td> </tr> <tr> <td>Delete</td> <td>Move to Trash</td> </tr> <tr> <td>Shift+Delete</td> <td>Delete Permanently</td> </tr>
</tbody> </table>

### 3.3.7 Changing Volume and Brightness

In addition to the standard keys, many computer keyboards and laptops nowadays have special keys or buttons to change the speaker volume, as well as the brightness of your monitor if applicable. If present, you can use these keys in the KDE Plasma Workspaces to perform those tasks. If you do not have such keys, see Section 3.3.9 for information on how to assign keys for these tasks.

### 3.3.8 Leaving Your Computer

All done? Use these shortcuts and put your computer away!

### 3.3.9 Modifying Shortcuts

The shortcuts described in *Working with Windows*, *Leaving Your Computer*, *Changing Volume and Brightness*, and *Working with Activities and Virtual Desktops* are called *global shortcuts*, since they work regardless of which window you have open on your screen. These can be modified in the *Global Shortcuts* panel of *System Settings*, where they are separated by KDE component.

The shortcuts described in *Working with the Desktop* cannot be modified.

The shortcuts described in *Working with Documents* and *Getting Help* are set by individual programs. Most KDE programs allow you to use the common shortcut editing dialog to modify these. The shortcuts described in *Working with Files* can be edited in the same manner when used inside a file manager like Dolphin or Konqueror, but cannot be modified in the case of Open/Save dialogs, etc.

## Chapter 4: Common Tasks

### 4.1 Navigating Documents

### 4.1.1 Scrolling

You’re probably familiar with the scrollbar that appears on the right side (and sometimes the bottom) of documents, allowing you to move within them. However, there are several other ways you can navigate documents, some of which are faster and easier.

Many mice have a wheel in the middle. You can move it up and down to scroll within a document. If you press the `Shift` key while using the mouse wheel, the document will scroll faster.
If you’re using a portable computer like a laptop, you might also be able to scroll using the touchpad. Some computers allow you to scroll vertically by moving your finger up and down the right edge of the touchpad, and allow you to scroll horizontally by moving your finger across the bottom edge of the touchpad. Others let you scroll using two fingers: move both fingers up and down anywhere on the touchpad to scroll vertically, and move them left and right to scroll horizontally. Since this functionality emulates the mouse wheel functionality described above, you can also press the `Shift` key while you do this to scroll faster.

If you use the KDE Plasma Workspaces, you can control mouse wheel behavior in the Mouse module in System Settings, and you can control touchpad scrolling behavior in the Touchpad module in System Settings. Otherwise, look in the configuration area of your operating system or desktop environment.

Additionally, the scrollbar has several options in its context menu. You can access these by right-clicking anywhere on the scrollbar. The following options are available:

- **Scroll here** Scroll directly to the location represented by where you right-clicked on the scrollbar. This is the equivalent of simply clicking on that location on the scrollbar.
- **Top (Ctrl+Home)** Go to the beginning of the document.
- **Bottom (Ctrl+End)** Go to the end of the document.
- **Page up (PgUp)** Navigate to the previous page in a document that represents a printed document, or one screen up in other types of documents.
- **Page down (PgDn)** Navigate to the next page in a document that represents a printed document, or one screen down in other types of documents.
- **Scroll up** Scroll up one unit (usually a line) in the document. This is the equivalent of clicking the up arrow at the top of the scrollbar.
- **Scroll down** Scroll down one unit (usually a line) in the document. This is the equivalent of clicking the down arrow at the bottom of the scrollbar.
### 4.1.2 Zooming

Many applications permit you to zoom. This makes the text or image you are viewing larger or smaller. You can generally find the zoom function in the View menu, and sometimes in the status bar of the application. You can also zoom using the keyboard by pressing Ctrl++ to zoom in, or Ctrl+- to zoom out. If you can scroll with the mouse wheel or touchpad as described in Section 4.1.1, you can also zoom by pressing Ctrl and scrolling that way.

### 4.2 Opening and Saving Files

### 4.2.1 Introduction

Many KDE applications work with files. Most applications have a File menu with options that allow you to open and save files. For more information on that, see Section 3.2.1. However, there are lots of different operations that require selecting a file. Regardless of the method, all KDE applications generally use the same file selection window.

### 4.2.2 The File Selection Window

The Toolbar This contains standard navigation tool buttons:

Back Causes the folder view to change to the previously displayed folder in its history. This button is disabled if there is no previous item.

Forward Causes the folder view to change to the next folder in its history. This button is disabled if there is no next folder.

Parent Folder This will cause the folder view to change to the immediate parent of the currently displayed folder, if possible.

Reload (F5) Reloads the folder view, displaying any changes that have been made since it was first loaded or last reloaded.

Show Preview Displays a preview of each file inside the folder view.

Zoom Slider This allows you to change the size of the icon or preview shown in the folder view.

Options

Options → Sorting

- Options → Sorting → By Name Sort files listed in the folder view alphabetically by name.
- Options → Sorting → By Size Sort files listed in the folder view in order of their file size.
- Options → Sorting → By Date Sort files listed in the folder view by the date they were last modified.
- Options → Sorting → By Type Sort files listed in the folder view alphabetically by their file type. - Options → Sorting → Descending When unchecked (the default), files in the folder view will be sorted in ascending order. (For instance, files sorted alphabetically will be sorted from A to Z, while files sorted numerically will be sorted from smallest to largest.) When checked, files in the folder will be sorted in descending order (in reverse). - Options → Sorting → Folders First When enabled (the default), folders will appear before regular files. Options → View - Options → View → Short View Displays only the filenames. - Options → View → Detailed View Displays Name, Date and Size of the files. - Options → View → Tree View Like Short View, but folders can be expanded to view their contents. - Options → View → Detailed Tree View This also allows folders to be expanded, but displays the additional columns available in Detailed View. Options → Show Hidden Files (Alt+.) Displays files or folders normally hidden by your operating system. The alternate shortcut for this action is F8. Options → Show Places Navigation Panel (F9) Displays the places panel which provides quick access to bookmarked locations and disks or other media. Options → Show Bookmarks Displays an additional icon on the toolbar that provides access to ‘bookmarks’, a list of saved locations. Options → Show Aside Preview Displays a preview of the currently highlighted file to the right of the folder view. Bookmarks Opens a submenu to edit or add bookmarks and to add a new bookmark folder. Location Bar The location bar, which can be found on top of the folder view, displays the path to the current folder. The location bar has two modes: Bread Crumb Mode In the ‘bread crumb’ mode, which is the default, each folder name in the path to the current folder is a button which can be clicked to quickly open that folder. 
Moreover, clicking the ‘>’ sign to the right of a folder opens a menu which permits you to quickly open a subfolder of that folder.

Editable Mode When in bread crumb mode, clicking in the gray area to the right of the path with the left mouse button switches the location bar to the ‘editable’ mode, in which the path can be edited using the keyboard. To switch back to bread crumb mode, click the check mark at the right of the location bar with the left mouse button.

The context menu of the location bar offers actions to switch between the modes and to copy and paste the path using the clipboard. Use the last option in this context menu to choose whether to display the full path, starting with the root folder of the file system, or the path starting with the current places entry.

The Places List This provides the standard KDE list of Places, shared with Dolphin and other file management tools.

The Folder View The largest part of the file selection window is the area that lists all items in the current directory. To select a file, you can double-click on it, or choose one and hit Open or Save.

You can also select multiple files at once. To select specific files, or to unselect specific files that are already selected, press and hold Ctrl, click on each file, then release Ctrl. To select a contiguous group of files, click the first file, press and hold Shift, click on the last file in the group, and release Shift.

The Folder View supports a limited set of file operations, which can be accessed by right-clicking on a file to access its context menu, or by using keyboard shortcuts. The following items are available in the context menu:

Create New... Create a new file or folder.

Move to Trash... (Del) Move the currently selected item to the trash.

Sorting This submenu can also be accessed from the toolbar, and is described in Options → Sorting.

View This submenu can also be accessed from the toolbar, and is described in Options → View.
Open File Manager Opens the current folder in your default file manager application.

Properties (Alt+Enter) Open the Properties window that allows you to view and modify various types of metadata related to the currently selected file.

The Preview Pane If enabled, this displays a preview of the currently highlighted file.

The Name Entry When a file is selected, a name will appear in this text box. You may also manually enter a file name or path in this text box.

The Filter Entry When opening a file, the Filter entry allows you to enter a filter for the files displayed in the folder view. The filter uses standard globs; patterns must be separated by white space. For instance, you can enter `*.cpp *.h *.moc` to display several different common Qt™ programming files. To display all files, enter a single asterisk (`*`). The filter entry saves the last 10 filters entered between sessions. To use one, press the arrow button on the right of the entry and select the desired filter string. You can disable the filter by pressing the Clear text button to the left of the autocompletion arrow button. When saving a file, the Filter entry will instead display a drop-down box that allows you to select from all the file types the application supports. Select one to save a file in that format.

Automatically select filename extension When saving a file, this check box will appear. When selected (the default), the application will automatically append the default file extension for the selected file type to the end of the file name, if it does not already appear in the Name entry. The file extension that will be used is listed in parentheses at the end of the check box label.

![Image of Save As - Konqueror dialog] Saving a file in Konqueror.

The Open or Save Button Depending on the action being performed, an Open or Save button will be displayed. Clicking on this button will close the file selection window and perform the requested action.
The Cancel Button Clicking Cancel will close the file dialog without performing any action.

### 4.2.3 Thanks and Acknowledgments

Special thanks to Google Code-In 2011 participant Alexey Subach for writing parts of this section.

### 4.3 Check Spelling

Sonnet is the spelling checker used by KDE applications such as Kate, KMail, and KWord. It is a GUI frontend to various free spell checkers. To use Sonnet, you need to install a spell checker like GNU Aspell, Enchant, Hspell, ISpell or Hunspell, and additionally the corresponding dictionaries for the languages that you wish to use.

### 4.3.1 Check Spelling

Correcting a misspelling.

To check spelling, go to Tools → Spelling.... If a word is possibly misspelled in your document, it is displayed in the top line in the dialog. Sonnet tries to suggest appropriate word(s) for replacement. The best guess is displayed to the right of Replace with. To accept this replacement, click on Replace. Sonnet also allows you to select a word from the list of suggestions and replace the misspelled word with that selected word. With the help of the Suggest button, you can add more suggestions from the dictionary to the suggestions list.

Click on Ignore to keep your original spelling. Click on Finished to stop spellchecking and keep the changes made. Click on Cancel to stop spellchecking and cancel the changes already made. Click on Replace All to automatically replace the misspelled word(s) with the chosen replacement word if they appear again in your document later. Click on Ignore All to ignore the spelling at that point and all future occurrences of the misspelled word. Click on Add to Dictionary to add the misspelled word to your personal dictionary. The personal dictionary is distinct from the system dictionary, and the additions made by you will not be seen by others.

The drop-down box Language at the bottom of this dialog allows you to switch to another dictionary temporarily.
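The suggestion list in the dialog above comes from whichever spell-checking backend is installed; Sonnet itself only presents the results. Conceptually, though, ranking dictionary words by their similarity to a misspelled word can be sketched in a few lines of Python. The word list and the `suggest` helper below are invented for this illustration; real backends such as Hunspell also use affix rules and phonetic data rather than plain string similarity.

```python
import difflib

# Toy stand-in for a system dictionary (an assumption for the example;
# real backends ship affix-compressed word lists, not Python lists).
DICTIONARY = ["spelling", "spilling", "spell", "selling", "dwelling"]

def suggest(word, n=3):
    """Return up to n replacement candidates, best guess first."""
    return difflib.get_close_matches(word, DICTIONARY, n=n, cutoff=0.6)

print(suggest("speling"))  # the best guess is listed first
```

In this sketch, the first entry plays the role of the best guess shown next to Replace with, and the rest fill the suggestions list.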
### 4.3.2 Automatic Spell Checking

In many applications, you can automatically check spelling as you type. To enable this feature, select Tools → Automatic Spell Checking. Potentially misspelled words will be underlined in red. To select a suggestion, right click on the word, select the Spelling submenu, and select the suggestion. You may also instruct Sonnet to ignore this spelling for this document by selecting Ignore Word, or you may select Add to Dictionary to save it in your personal dictionary.

### 4.3.3 Configuring Sonnet

To change your dictionary, go to Tools → Change Dictionary.... A small window will appear at the bottom of the current document that will allow you to change your dictionary.

For more information on configuring Sonnet, see the Spell Checker System Settings module documentation.

### 4.3.4 Thanks and Acknowledgments

Special thanks to Google Code-In 2011 participant Salma Sultana for writing much of this section.

### 4.4 Find and Replace

The Find function of many KDE applications lets you find specific text in a document, while the Replace function allows you to replace text that is found with different text you provide. You can find both of these functions in the Edit menu of many KDE applications. For more information on this menu, see Section 3.2.2.

### 4.4.1 The Find Function

The Find function searches for text in a document and selects it.

![Find Text - Calligra Sheets](image)

Searching for income in Calligra Sheets.

To use Find in many applications, go to Edit → Find... or press Ctrl+F. Then, in the Text to find: text box, enter the text that you want to find. If Regular expression is checked, you will be able to search using regular expressions. Click on Edit to select from and enter commonly used regular expression characters, like White Space or **Start of Line**. If Kate is installed, you can find more information on writing regular expressions in its documentation.
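Searching with regular expressions, as Find allows when Regular expression is checked, can be pictured with Python's `re` module. This is only a conceptual sketch with invented sample text, not how Sonnet or KatePart actually implements searching.

```python
import re

text = "This is history. this is now."

# A case-sensitive, whole-word search: \b marks word boundaries, so the
# lowercase "this" is skipped and the "his" inside "history" cannot match.
print(re.findall(r"\bThis\b", text))  # ['This']

# A replacement with a backreference: \g<0> stands for the whole match,
# so every standalone "message" gains an "s".
print(re.sub(r"\bmessage\b", r"\g<0>s", "one message, two message"))
```

The `\g<0>` backreference here plays the same role as the ‘\0’ placeholder that the Replace function offers.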
You can limit the found results by configuring these options:

**Case sensitive** Capital and lowercase characters are considered different. For example, if you search for ‘This’, results that contain ‘this’ will not be returned.

**Whole words only** By default, when the application searches for text, it will return results even in the middle of other text. For instance, if you search for ‘is’ it will stop on every word that contains it, like ‘this’ or ‘history’. If you check this option, the application will only return results when the search text is a word by itself, that is, surrounded by whitespace.

**From cursor** The search will start from the location of the cursor and stop at the end of the text.

**Find backwards** By default, the application searches for the text starting at the beginning of the document, and moves through it to the end. If you check this option, it will instead start at the end and work its way to the beginning.

**Selected text** Select this option to search only in the text that is currently selected, not the entire document. It is disabled when no text is selected.

Many applications show a search bar instead of the Find window. See the KatePart documentation for additional information on the search bar.

The Find function only selects the first match that it finds. You can continue searching by selecting Edit → Find Next or by pressing F3.

Kate displays a match for **KDE**.

### 4.4.2 The Replace Function

The Replace function searches for text in a document and replaces it with other text. You can find it in many applications at Edit → Replace... or by pressing Ctrl+R.

Replacing *income* with *expense* in Calligra Sheets

The window of the Replace function is separated into three sections:

Find Here you may enter the text you wish to search for. See Section 4.4.1 for more information on the options provided here.

Replace Here you may enter the text you wish to replace the found text with.
You can reuse the found text in the replacement text by selecting the Use placeholders checkbox. Placeholders, sometimes known as backreferences, are a special character sequence that will be replaced with all or part of the found text. For instance, ‘\0’ represents the entire found string. You may insert placeholders into the text box by clicking the Insert Placeholder button, then selecting an option from the menu like Complete Match. For example, if you are searching for ‘message’ and you want to replace it with ‘messages’, insert the Complete Match placeholder and add an ‘s’. The replace field will then contain ‘\0s’. If Kate is installed, you can learn more about placeholders in the Regular Expressions appendix of its documentation.

Options This contains all the same options that the Find function does, with one addition: if the Prompt on replace option is checked, a window will appear for every found word, allowing you to confirm whether you would like to replace the found text.

Many applications show a search and replace bar instead of the Replace window. See the KatePart documentation for additional information on the search and replace bar.

Replacing kmail with kate in Kate’s search and replace bar

### 4.4.3 Thanks and Acknowledgments

Thanks to an anonymous Google Code-In 2011 participant for writing much of this section.

### 4.5 Choosing Fonts

The font selector appears in many different KDE applications. It lets you select the font face, font style, and font size of the text that appears in your application.

Selecting a font in System Settings.

**Font** This is the leftmost selection box, and lets you choose the font face from a list of fonts on your system.

**Font style** This is the center selection box, and lets you choose the font style from the following choices:

• **Italic** - this displays text in a cursive, or slanted fashion, and is commonly used for citations or to emphasize text.
• **Regular** - the default.
Text is displayed without any special appearance. • **Bold Italic** - a combination of both **Bold** and **Italic** • **Bold** - this displays text in a darker, thicker fashion, and is commonly used for titles of documents or to emphasize text. **Size** This is the rightmost selection box, and lets you select the size of your text. Font size is measured in *points*, a standard unit of measure in typography. For more information on this, see the [Point (typography) article on Wikipedia](https://en.wikipedia.org/wiki/Point_(typography)). **Preview** The bottom of the font selector displays a preview of text using the font settings that are currently selected. You may change this text if you wish. ### 4.6 Choosing Colors The color chooser appears in many KDE applications, whenever you need to select a color. It lets you pick from a **Basic colors** palette consisting of many predefined colors, or mix your own when you want a specific color. Selecting a color in *System Settings*. #### 4.6.1 Using Basic Colors The basic colors group is a set of predefined colors. You can find it on the top side of the color chooser window. To select a color from the basic colors, simply click on it. The color will be displayed at the right of the palette, along with its HTML hexadecimal code. 4.6.2 Mixing Colors The color chooser also lets you mix your own colors. There are several ways to do this: 4.6.2.1 Using the Grid The right side of the color chooser contains a large box, and a thinner box immediately to its right. You can use the left box to select the Hue and Saturation of the desired color based on the visual guide provided in the box. The right bar adjusts the Value. Adjust these to select the desired color, which is displayed in the middle of the window. For more information on Hue, Saturation, and Value, see Section 4.6.2.3. 4.6.2.2 Using the Screen Colors The eyedropper tool allows you to select a color from your screen.
To use it, select the Pick Screen Color button below the basic colors, and then click anywhere on your screen to select that color. 4.6.2.3 Hue/Saturation/Value The lower-right corner of the screen allows you to manually enter the coordinates of the desired color in the Hue/Saturation/Value (HSV) color space. For more information on this, see the HSL and HSV article on Wikipedia. These values are also updated when selecting a color by other methods, so they always accurately represent the currently selected color. 4.6.2.4 Red/Green/Blue The lower-right corner of the screen also allows you to manually enter the coordinates of the desired color in the Red/Green/Blue (RGB) color model. For more information on this, see the RGB color model article on Wikipedia. These values are also updated when selecting a color by other methods, so they always accurately represent the currently selected color. 4.6.2.5 HTML Hexadecimal Code You may enter the HTML hexadecimal code representing the color in the lower-right corner of the screen. For more information on this, see the Web colors article on Wikipedia. This value is also updated when selecting a color by other methods, so it always accurately represents the currently selected color. 4.6.3 Custom Colors After selecting a color, you may add it to the Custom Colors group so you can use it later. To do so, click the Add to Custom Colors button. Chapter 5 Customizing KDE software 5.1 Customizing Toolbars 5.1.1 Modifying Toolbar Items To customize an application’s toolbars, go to Settings → Configure Toolbars... or right-click on a toolbar and select Configure Toolbars.... On the left side of the toolbar configuration panel, the available items that you can put in your toolbar are shown. On the right, the ones that already appear on the toolbar are shown. At the top, you can select the toolbar you wish to modify or view. Above each side of the panel there is a Filter text box you can use to easily find items in the list. 
5.1.1.1 Adding an Item You can add an item to your toolbar by selecting it from the left side and clicking on the right arrow button. 5.1.1.2 Removing an Item You can remove an item by selecting it and clicking the left arrow button. 5.1.1.3 Changing the Position of Items You can change the position of the items by moving them lower or higher in the list. To move items lower, press the down arrow button, while to move items higher press the up arrow button. You can also change items’ position by dragging and dropping them. On horizontal toolbars, the item that’s on top will be the one on the left. On vertical toolbars, items are arranged as they appear in the toolbar. 5.1.1.4 Adding a Separator You can add separator lines between items by adding a --- separator --- item to the toolbar. 5.1.1.5 Restoring Defaults You can restore your toolbar to the way it was when you installed the application by pressing the Defaults button at the bottom of the window and then confirming your decision. 5.1.1.6 Changing Text and Icons You can change the icon and text of individual toolbar items by selecting an item and clicking either the Change Icon... or Change Text... button. 5.1.2 Customizing Toolbar Appearance You can change the appearance of toolbars by right-clicking on a toolbar to access its context menu. 5.1.2.1 Text Position You can change the appearance of text on toolbars in the Text Position submenu of a toolbar’s context menu. You can choose from: - **Icons** - only the icon for each toolbar item will appear. - **Text** - only the text label for each toolbar item will appear. - **Text Alongside Icons** - the text label will appear to the right of each toolbar item’s icon. - **Text Under Icons** - the text label will appear underneath each toolbar item’s icon. You can also show or hide text for individual toolbar items by right-clicking on an item and checking or unchecking the item under Show Text.
5.1.2.2 Icon Size You can change the size of toolbar items’ icons by selecting Icon Size from the toolbar’s context menu. You can choose from the following options: (each lists the icon size in pixels) - **Small** (16x16) - **Medium** (22x22) [the default value] - **Large** (32x32) - **Huge** (48x48) 5.1.2.3 Moving Toolbars In order to move toolbars, you must ‘unlock’ them. To do so, uncheck Lock Toolbar Positions from a toolbar’s context menu. To restore the lock, simply recheck this menu item. You can change a toolbar’s position from the Orientation submenu of its context menu. You can choose from: - **Top** [the default in many applications] - **Left** - **Right** - **Bottom** You can also move a toolbar by clicking and holding onto the dotted line at the left of horizontal toolbars or the top of vertical toolbars and dragging it to your desired location. 5.1.2.4 Show/Hide Toolbars If your application has only one toolbar, you can hide a toolbar by deselecting Show Toolbar from either the toolbar’s context menu or the Settings menu. To restore the toolbar, select Show Toolbar from the Settings menu. Note that toolbars must be ‘unlocked’ to hide them from their context menu; see Section 5.1.2.3 for more information. If your application has more than one toolbar, a submenu called Toolbars Shown will appear in the context menu and Settings menu instead of the above menu entry. From that menu you may select individual toolbars to hide and show. 5.1.3 Thanks and Acknowledgments Thanks to an anonymous Google Code-In 2011 participant for writing much of this section. 5.2 Using and Customizing Shortcuts 5.2.1 Introduction Many KDE applications allow you to configure keyboard shortcuts. To open the standard keyboard shortcuts configuration panel, go to Settings → Configure Shortcuts.... In the Configure Shortcuts window, you will see a list of all the shortcuts available in the current application. 
You can use the search box at the top to search for the shortcut you want. Searching for shortcuts with file in Dolphin. 5.2.2 Changing a Shortcut To change a shortcut, first click on the name of the shortcut you want to change. You will see a radio group where you can choose whether to set the shortcut to its default value, or to select a new shortcut for the selected action. To set a new shortcut, choose Custom and click on the button next to it. Then just type the shortcut you would like to use, and your changes will be saved. 5.2.3 Resetting Shortcuts There is a button at the bottom of the window called **Defaults**. Clicking on this button will reset all your custom shortcuts to their default values. You can also reset an individual shortcut to its default value by selecting it and choosing the **Default** radio button. 5.2.4 Removing a Shortcut To remove a shortcut, select it from the list, then click the remove icon (a black arrow with a cross) to the right of the button that allows you to select a shortcut. 5.2.5 Working with Schemes Schemes are keyboard shortcut configuration profiles, so you can create several profiles with different shortcuts and switch between them easily. Editing a scheme called work. To see a menu allowing you to edit schemes, click on the Manage Schemes button at the bottom of the form. The following options will appear: Current Scheme Allows you to switch between your schemes. New... Creates a new scheme. This opens a window that lets you select a name for your new scheme. Delete Deletes the current scheme. More Actions Opens the following menu: - Save Shortcuts to scheme Save the current shortcuts to the current scheme. - Export Scheme... Exports the current scheme to a file. - Import Scheme... Imports a scheme from a file. 5.2.6 Printing Shortcuts You can print out a list of shortcuts for easy reference by clicking the Print button at the bottom of the window.
5.2.7 Thanks and Acknowledgments Special thanks to Google Code-In 2011 participant Alexey Subach for writing much of this section. Chapter 6 Credits and License The original idea for this guide was proposed by Chusslove Illich and brought to fruition with input from Burkhard Lück, Yuri Chornoivan, and T.C. Hollingsworth. Much of it was written by participants of Google Code-In 2011. Thanks to Google for sponsoring their excellent work! This documentation is licensed under the terms of the GNU Free Documentation License. This program is licensed under the terms of the GNU General Public License.
Abstract We propose a practical algorithm for Streett automata model checking of higher-order recursion schemes (HORS, for short) [13, 25], which checks whether the tree generated by a given HORS is accepted by a given Streett automaton. The Streett automata model checking of HORS is useful in the context of liveness verification of higher-order functional programs. The previous approach to Streett automata model checking converted Streett automata to parity automata and then invoked a parity tree automata model checker. We show through experiments that our direct approach outperforms the previous approach. Besides being able to directly deal with Streett automata, our algorithm is the first practical Streett or parity automata model checking algorithm that runs in time polynomial in the size of HORS, assuming that the other parameters are fixed. Previous practical fixed-parameter polynomial time algorithms for HORS could only deal with the class of trivial tree automata. We have confirmed through experiments that (a parity automata version of) our model checker outperforms previous parity automata model checkers for HORS. 1998 ACM Subject Classification F.3.1 Specifying and Verifying and Reasoning about Programs Keywords and phrases Higher-order model checking, higher-order recursion schemes, Streett automata Digital Object Identifier 10.4230/LIPIcs.FSCD.2017.32 1 Introduction Model checking of higher-order recursion schemes (HORS, for short) [13, 25] has been actively studied and applied to automated verification of higher-order programs [17, 21, 24, 26, 34, 23, 41]. The model checking problem asks whether the tree generated by a given HORS is accepted by a given tree automaton. Despite the extremely high complexity (\(k\)-EXPTIME complete for order-\(k\) HORS), practical model checkers that work reasonably well for typical inputs have been developed [14, 16, 3, 31, 29, 7, 28]. 
In particular, the state-of-the-art trivial automata model checkers for HORS (i.e., model checkers which handle the restricted class of automata called trivial tree automata [1]) [3, 18, 31] can handle thousands of lines of input in a few seconds. The state-of-the-art model checkers for the full class of tree automata (which is equi-expressive to the modal $\mu$-calculus and MSO) have, however, lagged far behind the trivial automata model checkers. Indeed, whilst the state-of-the-art trivial automata model checkers for HORS employ fixed-parameter polynomial time algorithms, existing parity tree automata model checkers for HORS [7, 28] do not. Another limitation of the state-of-the-art model checkers for HORS is that the class of automata is restricted to trivial or parity tree automata. Whilst parity tree automata are equi-expressive to other classes of tree automata like Streett, Rabin, and Muller automata, the translation from those automata to parity tree automata significantly increases the number of states. Thus, it may be desirable for model checkers to support other classes of automata directly. To address the limitations above, we propose a practical Streett automata model checking algorithm for HORS, which checks whether the tree generated by a given HORS is accepted by a given Streett automaton. Compared with the previous model checking algorithms for HORS that can deal with the full class of tree automata [7, 28], our new algorithm has the following advantages: (i) It can directly deal with Streett automata, which naturally arise in the context of liveness verification of higher-order programs [41]. (ii) More importantly, it runs in time polynomial in the size of HORS, assuming that the other parameters (the size of the automaton and the largest order and arity of non-terminals in HORS) are fixed.
The previous parity automata model checkers for HORS \[7, 28\] did not satisfy this property, and suffered from hyper-exponential time complexity in the size of HORS. We develop the algorithm in two steps. First, following Kobayashi and Ong’s type system for parity automata model checking of HORS \[19\], we prepare a type system for Streett automata model checking such that the tree generated by a HORS $G$ is accepted by a Streett automaton $A$ if and only if $G$ is typable in the type system parameterized by $A$. We prove its correctness by showing that the type system can actually be viewed as an instance of Tsukada and Ong’s type system \[38\]. Secondly, we develop a practical algorithm for checking the typability. The algorithm has been inspired by Broadbent and Kobayashi’s saturation-based algorithm \[3\] for trivial automata model checking; in fact, the algorithm is a simple modification of their HorSatT algorithm. The proof of the correctness of our algorithm is, however, non-trivial and much more involved than the correctness proof for HorSatT. The correctness proof is one of the main contributions of the present paper. We have implemented a new model checker HorSatS based on the proposed algorithm and its variation, called HorSatP,\(^1\) for parity tree automata model checking, and experimentally confirmed the two advantages above. For the advantage (i), we have confirmed that HorSatS is often faster than the combination of a converter from Streett to parity tree automata, and HorSatP. For (ii), we have confirmed that HorSatP often outperforms previous parity automata model checkers \[7, 28\]. The rest of the paper is organized as follows. Section 2 reviews basic definitions. Section 3 provides a type-based characterization of Streett automata model checking of HORS and proves its correctness. Section 4 develops a practical algorithm for Streett automata model checking and proves its correctness. Section 5 reports experimental results. 
Section 6 discusses related work and Section 7 concludes the paper. Proofs omitted in this paper are found in a longer version of the paper [36]. \(^1\) Actually, HorSatP was implemented in 2015 and used in Watanabe et al.’s work [41]. It has not been properly formalized, however. 2 Preliminaries In this section we review the definitions of higher-order recursion schemes (HORS) [12, 25, 17] and (alternating) Streett tree automata [35, 8] to define Streett automata model checking of HORS. We write dom(f) for the domain of a map f, and $\tilde{x}$ for a sequence $x_1, x_2, \ldots, x_m$ (for some $m$). Given a set $\Sigma$ of symbols (which we call terminals), a $\Sigma$-labeled (unranked) tree is a partial map $T$ from $\{1, \ldots, N\}^*$ (for some fixed $N \in \mathbb{N}$) to $\Sigma$ such that if $\pi i \in \text{dom}(T)$, then $\pi \in \text{dom}(T)$ and $\pi j \in \text{dom}(T)$ for all $1 \leq j \leq i$. If $\Sigma$ is a ranked alphabet (i.e., a map from terminals to natural numbers), a $\Sigma$-labeled ranked tree $T$ is a $\text{dom}(\Sigma)$-labeled tree such that for each $\pi \in \text{dom}(T)$, $\{i \mid \pi i \in \text{dom}(T)\} = \{1, \ldots, \Sigma(T(\pi))\}$. 2.1 Higher-Order Recursion Schemes The set of sorts, ranged over by $\kappa$, is defined by $\kappa ::= \circ \mid \kappa_1 \rightarrow \kappa_2$. We sometimes abbreviate $\kappa \rightarrow \cdots \rightarrow \kappa \rightarrow \kappa'$ ($n$ occurrences of $\kappa$) to $\kappa^n \rightarrow \kappa'$. The sorts can be viewed as the types of the simply-typed $\lambda$-calculus with the single base type $\circ$, which is the type of trees. We write $\text{Terms}_{\mathcal{K}, \kappa}$ for the set of simply-typed $\lambda$-terms that have the sort $\kappa$ under the sort environment $\mathcal{K}$.
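For concreteness, the sort grammar above, together with the order and arity functions defined in the next paragraph, can be sketched in a few lines of Python. This is a hypothetical encoding for illustration only, not part of the paper’s formal development: a sort is the string `"o"` or a pair `(k1, k2)` representing the arrow sort `k1 -> k2`.

```python
# Hypothetical encoding of sorts: "o" is the base sort of trees,
# and a pair (k1, k2) represents the arrow sort k1 -> k2.

def order(kappa):
    # ord(o) = 0; ord(k1 -> k2) = max(ord(k1) + 1, ord(k2))
    if kappa == "o":
        return 0
    k1, k2 = kappa
    return max(order(k1) + 1, order(k2))

def arity(kappa):
    # arity(o) = 0; arity(k1 -> k2) = arity(k2) + 1
    if kappa == "o":
        return 0
    _, k2 = kappa
    return arity(k2) + 1

# The sort (o -> o) -> o -> o has order 2 and arity 2.
b_sort = (("o", "o"), ("o", "o"))
print(order(b_sort), arity(b_sort))  # 2 2
```

For instance, `("o", "o")` encodes $\circ \rightarrow \circ$, which has order 1 and arity 1.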
The order and arity of each sort, denoted by $\text{ord}(\kappa)$ and $\text{arity}(\kappa)$, are defined inductively by: $\text{ord}(\circ) = \text{arity}(\circ) = 0$, $\text{ord}(\kappa_1 \rightarrow \kappa_2) = \max(\text{ord}(\kappa_1) + 1, \text{ord}(\kappa_2))$, and $\text{arity}(\kappa_1 \rightarrow \kappa_2) = \text{arity}(\kappa_2) + 1$. Definition 1 (higher-order recursion scheme). A higher-order recursion scheme (HORS) is a quadruple $G = (\Sigma, N, \mathcal{R}, S)$ where: (i) $\Sigma$ is a ranked alphabet. (ii) $N$ is a map from symbols called non-terminals to sorts. (iii) $\mathcal{R}$ is a map from non-terminals to simply-typed terms of the form $\lambda \tilde{x}.t$, where $t$ does not include $\lambda$-abstractions and has the sort $\circ$. It is required that, for each $F \in \text{dom}(N)$, $\mathcal{R}(F) \in \text{Terms}_{\mathcal{K}, N(F)}$, where $\mathcal{K}$ is the sort environment that maps each terminal $a$ to $\circ^{\Sigma(a)} \rightarrow \circ$ and each non-terminal $F'$ to $N(F')$. (iv) $S$ is a special non-terminal called the start symbol, with $N(S) = \circ$. The rewriting relation $\rightarrow_G$ on terms is defined inductively by: (i) $F\, s_1 \ldots s_m \rightarrow_G [s_1/x_1, \ldots, s_m/x_m]t$ if $\mathcal{R}(F) = \lambda x_1.\cdots\lambda x_m.t$, and (ii) $s\, t \rightarrow_G s'\, t$ and $t\, s \rightarrow_G t\, s'$ if $s \rightarrow_G s'$. Here, $[s_1/x_1, \ldots, s_m/x_m]t$ denotes the term obtained from $t$ by replacing each $x_i$ with $s_i$. The tree $[G]$ generated by $G$, called the value tree of $G$, is defined as the least upper bound of $\{t^\perp \mid S \rightarrow_G^* t\}$ with respect to $\sqsubseteq$, where $t^\perp$ is defined by (i) $t^\perp = t$ if $t$ is a terminal, (ii) $(t_1\, t_2)^\perp = t_1^\perp\, t_2^\perp$ if $t_1^\perp \neq \perp$, and (iii) $t^\perp = \perp$ otherwise. Here, the partial order $\sqsubseteq$ is defined by: $t_1 \sqsubseteq t_2$ if $t_2$ is obtained by replacing some of the $\perp$’s in $t_1$ with some trees. The value tree $[G]$ is a $(\Sigma \cup \{\perp\})$-labeled ranked tree. Example 2 (HORS).
Let $G_1 = (\Sigma, N, \mathcal{R}, S)$ where $\Sigma = \{a \mapsto 2, b \mapsto 1, c \mapsto 0\}$, $N = \{F \mapsto ((\circ \rightarrow \circ) \rightarrow \circ), B \mapsto ((\circ \rightarrow \circ) \rightarrow \circ \rightarrow \circ), I \mapsto (\circ \rightarrow \circ), S \mapsto \circ\}$, and $\mathcal{R} = \{F \mapsto (\lambda f.\, f\, (a\, (f\, c)\, (F\, (B\, f)))),\; B \mapsto (\lambda f.\lambda x.\, b\, (f\, x)),\; I \mapsto (\lambda x.\, x),\; S \mapsto (F\, I)\}$. From the start non-terminal $S$, the reduction proceeds as follows. \[ S \rightarrow_G F\, I \rightarrow_G I\, (a\, (I\, c)\, (F\, (B\, I))) \rightarrow_G a\, (I\, c)\, (F\, (B\, I)) \rightarrow_G a\, c\, (F\, (B\, I)) \\ \rightarrow_G a\, c\, (B\, I\, (a\, (B\, I\, c)\, (F\, (B\, (B\, I))))) \rightarrow_G^{*} a\, c\, (b\, (a\, (b\, c)\, (F\, (B\, (B\, I))))) \rightarrow_G \cdots \] The value tree $[G_1]$ has a finite path $abab^2 \cdots ab^n ab^n c$ for each $n \in \mathbb{N}$ and also has an infinite path $abab^2ab^3 \cdots$. 2.2 Streett Tree Automata Given a set $X$, the set $\mathcal{B}^+(X)$ of positive boolean formulas over $X$, ranged over by $\varphi$, is defined by $\varphi ::= \text{true} \mid \text{false} \mid x \mid \varphi_1 \land \varphi_2 \mid \varphi_1 \lor \varphi_2$ where $x$ ranges over $X$. Given a subset $Y$ of $X$, one can calculate the boolean value $[\varphi]_Y$ of $\varphi$ by assigning true to the elements of $Y$ and false to those of $X \setminus Y$. We say $Y$ satisfies $\varphi$ if $[\varphi]_Y = \text{true}$. Definition 3 (Streett tree automaton [35, 8]).
A Streett tree automaton is a tuple \( \mathcal{A} = (\Sigma, Q, \delta, q_0, C) \) where \( \Sigma \) is a ranked alphabet, \( Q \) is a finite set of states, \( \delta \) is a map from \( Q \times \text{dom}(\Sigma) \) to \( \mathcal{B}^+ (\mathbb{N} \times Q) \), called a transition function, such that \( \delta(q, a) \in \mathcal{B}^+ (\{1, \ldots, \Sigma(a)\} \times Q) \) for each \( q \) and \( a \), \( q_0 \in Q \) is a special state called the initial state, and \( C \) is a Streett acceptance condition of the form \( C = \{(E_1, F_1), \ldots, (E_k, F_k)\} \) where \( E_i, F_i \subseteq Q \) for each \( i \). Given a \( \Sigma \)-labeled ranked tree \( T \), a run-tree of \( \mathcal{A} \) over \( T \) is a \((\text{dom}(T) \times Q)\)-labeled (unranked) tree \( R \) such that (i) \( \varepsilon \in \text{dom}(R) \), (ii) \( R(\varepsilon) = (\varepsilon, q_0) \), (iii) for every \( \pi \in \text{dom}(R) \) with \( R(\pi) = (\xi, q) \) and every \( j \) with \( \pi j \in \text{dom}(R) \), there exist \( i \in \mathbb{N} \) and \( q' \in Q \) such that \( R(\pi j) = (\xi i, q') \), and (iv) for every \( \pi \in \text{dom}(R) \) with \( R(\pi) = (\xi, q) \), the set \( \{(i, q') \mid R(\pi j) = (\xi i, q') \text{ for some } j\} \) satisfies \( \delta(q, T(\xi)) \). Let \( \text{Paths}(R) \) be defined by \( \text{Paths}(R) = \{\pi \in \mathbb{N}^\omega \mid \text{every (finite) prefix of } \pi \text{ is in } \text{dom}(R)\} \) and \( \text{Inf}_R : \text{Paths}(R) \rightarrow 2^Q \) be defined by \( \text{Inf}_R(\pi) = \{q \in Q \mid \text{the state label of } R(\pi_i) \text{ is } q \text{ for infinitely many } i\} \), where \( \pi_i \) is the prefix of \( \pi \) of length \( i \). A run-tree \( R \) is accepting if for every \( \pi \in \text{Paths}(R) \) and every \( (E_i, F_i) \in C \), \( \text{Inf}_R(\pi) \cap E_i \neq \emptyset \Rightarrow \text{Inf}_R(\pi) \cap F_i \neq \emptyset \) holds.
A Streett automaton \( \mathcal{A} \) accepts a \( \Sigma \)-labeled ranked tree \( T \) if there exists an accepting run-tree of \( \mathcal{A} \) over \( T \). Example 4 (Streett tree automaton). Let \( A_1 = (\Sigma, Q, \delta, q_0, C) \) be a Streett tree automaton where \( \Sigma = \{a \mapsto 2, b \mapsto 1, c \mapsto 0\} \), \( Q = \{q_0, q_a, q_b\} \), \( \delta(q, a) = (1, q_a) \land (2, q_a) \) for each \( q \), \( \delta(q, b) = (1, q_b) \) for each \( q \), \( \delta(q, c) = \text{true} \) for each \( q \), and \( C = \{(E_1, F_1)\} \) where \( E_1 = \{q_a\} \) and \( F_1 = \{q_b\} \). This automaton accepts the \( \Sigma \)-labeled ranked trees such that for each infinite path, if \( a \) appears infinitely often in it, then \( b \) also appears infinitely often in it. 2.3 Streett Automata Model Checking Problem for HORS We can now define the Streett automata model checking problem for HORS. Definition 5 (Streett automata model checking problem for HORS). The Streett automata model checking problem for HORS is the decision problem of checking whether \( [\mathcal{G}] \) is accepted by \( \mathcal{A} \), for a given HORS \( \mathcal{G} \) and a Streett automaton \( \mathcal{A} \). The need for Streett automata model checking of HORS naturally arises in the context of verifying liveness properties of higher-order programs. A popular method for verification of temporal properties of programs is to use Vardi’s reduction to fair termination [40], the problem of checking whether all the fair execution sequences of a given program are terminating [5, 2, 27, 41]. Here, a fairness constraint is of the form \( \{(A_1, B_1), \ldots, (A_n, B_n)\} \), which means that if the event \( A_i \) occurs infinitely often, so does \( B_i \), for each \( i \).
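To make the Streett acceptance condition of Definition 3 concrete, the following Python sketch (hypothetical code, not from the paper) checks, for a single path, whether the set Inf of states occurring infinitely often satisfies a Streett condition. The condition used below is the \( C \) of Example 4, with states written as plain strings.

```python
# Check the Streett condition for one path: for every pair (E_i, F_i),
# Inf ∩ E_i ≠ ∅ must imply Inf ∩ F_i ≠ ∅.

def path_accepting(inf_states, condition):
    return all((not (inf_states & e)) or bool(inf_states & f)
               for (e, f) in condition)

# C = {({q_a}, {q_b})} from Example 4.
C = [({"qa"}, {"qb"})]

# Both a and b occur infinitely often along the path: condition satisfied.
print(path_accepting({"qa", "qb"}, C))  # True
# a occurs infinitely often but b does not: condition violated.
print(path_accepting({"qa"}, C))  # False
```

A run-tree is accepting exactly when this per-path check succeeds for every infinite path of the run-tree.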
For proving/disproving fair termination of a higher-order functional program, a natural approach is to convert the program to a HORS that generates a tree representing all the possible event sequences, and then check whether the tree contains a fair but infinite event sequence. For example, consider the program: \[ \text{let} \quad \text{rec} \quad f() = \text{if} \quad *\text{int} < 0 \quad \text{then} \quad (\text{event} \ B; ()) \quad \text{else} \quad (\text{event} \ A; f()) \] where \(*\text{int}\) represents a non-deterministic integer. It can be converted to the following HORS, whose value tree represents all the possible event sequences. \[ S \rightarrow F \quad F \rightarrow \text{br} \ (\text{ev}_B \ \text{end}) \ (\text{ev}_A \ F) \] Then, the problem of checking that the original program is not terminating with respect to the fairness constraint \( \{(A, B)\} \) is reduced to the problem of checking that the tree generated by the HORS has a fair infinite path (i.e., an infinite path such that if \( ev_A \) occurs infinitely often, so does \( ev_B \)). In the case of the HORS above, the only infinite path contains \( ev_A \) infinitely often but \( ev_B \) only finitely often, so the tree has no fair infinite path, from which we can conclude that the original program is fair terminating. Indeed, Watanabe et al. [41] took such an approach for disproving fair termination of functional programs.\(^2\) The resulting decision problem for HORS can naturally be expressed as a Streett model checking problem; in the above case, we can use the Streett automaton \( \mathcal{A} = (\{ br \mapsto 2, ev_A \mapsto 1, ev_B \mapsto 1, \text{end} \mapsto 0 \}, \{ q_0, q_A, q_B \}, \delta, q_0, \{ (\{ q_A \}, \{ q_B \}) \} ) \) where \( \delta(q, ev_A) = (1, q_A), \delta(q, ev_B) = (1, q_B), \delta(q, \text{br}) = (1, q_0) \land (2, q_0), \) and \( \delta(q, \text{end}) = \text{true} \) for every \( q \in \{ q_0, q_A, q_B \}.
\) As usual [25, 19], we assume in the rest of the paper that the value tree \( [G] \) of a HORS \( G \) does not contain \( \bot \). Note that this is not a limitation, because any instance of the model checking problem for a HORS \( G \) and a Streett automaton \( \mathcal{A} \) can be reduced to an equivalent one for \( G' \) and \( \mathcal{A}' \) such that \( [G'] \) does not contain \( \bot \).

### 3 A Type System for Streett Automata Model Checking

This section presents an intersection type system (parameterized by a Streett automaton \( \mathcal{A} \)) for Streett automata model checking of HORS, such that a HORS \( G \) is typable in the type system if and only if \( [G] \) is accepted by \( \mathcal{A} \). The type system is obtained by modifying the Kobayashi-Ong type system [19] for parity automata model checking. We prove the correctness of our type system by showing that it is actually an instance of Tsukada and Ong's type system for model checking Böhm trees [38].

Let \( \mathcal{A} = (\Sigma, Q, \delta, q_0, C) \) be a Streett automaton with \( C = \{ (E_1,F_1), \ldots, (E_k,F_k) \} \). We define the set of *effects* by \( E = 2^{\{E_1, \ldots, E_k, F_1, \ldots, F_k\}} \).\(^3\) Here, \( E_1, \ldots, E_k, F_1, \ldots, F_k \) in effects should be considered just symbols (so that they are different from each other, even if \( E_i \) and \( F_j \) in \( C \) happen to be the same set of states), although they intuitively represent the sets used in \( C \). The set of *prime types*, ranged over by \( \theta \), and the set of *intersection types*, ranged over by \( \tau \), are defined by:
\[ \theta \ (\text{prime types}) ::= q \mid \tau \rightarrow \theta \qquad \tau \ (\text{intersection types}) ::= \{ (\theta_i, e_i) \}_{i \in I} \]
where \( q \in Q \), \( e_i \in E \), and \( I \) is a finite index set. Note that \( \{ (\theta_i, e_i) \}_{i \in I} \) is a shorthand for \( \{ (\theta_i, e_i) \mid i \in I \} \).
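The grammar can be transliterated into a small data representation. The following Python sketch is our own illustrative encoding (not part of the paper); effects are modeled as sets of the symbols \( E_i, F_i \):

```python
from dataclasses import dataclass

# theta ::= q | tau -> theta
# tau   ::= finite set of (theta, effect) pairs; an effect is a set of symbols.
@dataclass(frozen=True)
class State:
    q: str                 # the prime type q

@dataclass(frozen=True)
class Arrow:
    arg: frozenset         # tau: frozenset of (prime type, effect) pairs
    ret: object            # the result prime type

# The type (q1, {E1}) /\ (q2, {F1}) -> q discussed below:
tau = frozenset({(State("q1"), frozenset({"E1"})),
                 (State("q2"), frozenset({"F1"}))})
example = Arrow(tau, State("q"))
```

Making both classes frozen keeps type values hashable, so intersection types can themselves appear inside `frozenset`s, mirroring the set-based definition of \( \tau \).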
We sometimes write \( (\theta_1, e_1) \land \cdots \land (\theta_n, e_n) \) for \( \{ (\theta_i, e_i) \mid i \in \{1, \ldots, n\} \} \), and \( \top \) for the empty intersection type \( \emptyset \). Intuitively, \( q \) is the type of trees accepted from \( q \) by \( \mathcal{A} \) (i.e., by \( \mathcal{A}_q = (\Sigma, Q, \delta, q, C) \)), and \( \{ (\theta_i, e_i) \}_{i \in I} \rightarrow \theta' \) is the type of functions that take an argument that has type \( \theta_i \) for every \( i \in I \), and return a value of type \( \theta' \). Here, \( e_i \) describes what states may/must be visited before the argument is used as a value of type \( \theta_i \). For example, the type \( (q_1, \{ E_1 \}) \land (q_2, \{ F_1 \}) \rightarrow q \) describes the type of functions that take a tree that can be accepted from both \( q_1 \) and \( q_2 \) as an argument, and return a tree of type \( q \). Furthermore, the effect parts \( \{ E_1 \} \) and \( \{ F_1 \} \) describe that in an accepting run of \( \mathcal{A}_q \) over the returned tree, the argument can be used as a value of type \( q_1 \) (i.e., visited with state \( q_1 \)) only after visiting states in \( E_1 \), and used as a value of type \( q_2 \) only after visiting states in \( F_1 \).

---

\(^2\) The actual method in [41] is more complicated due to a combination with predicate abstraction.

\(^3\) One can use \( E = 2^{\{E_1, \ldots, E_k, F_1, \ldots, F_k\}}/{\sim} \) instead, where \( \sim \) is an equivalence relation defined in the proof of Theorem 8. This improves the time complexity of our algorithm. We use \( E = 2^{\{E_1, \ldots, E_k, F_1, \ldots, F_k\}} \) here for understandability, however.

A *type environment* is a set of type bindings of the form \( x : (\theta, e) \), where \( x \) is a variable or a non-terminal, \( \theta \) is a prime type, and \( e \) is an effect. The part \( e \) represents *when* \( x \) may be used as a value of type \( \theta \).
Note that a type environment may contain multiple bindings for each variable. The type judgement relation \( \vdash_{\mathcal{A}} \) among type environments, terms, and prime types is defined inductively by the typing rules in Figure 1. The operations used in the rules are defined by:
\[ e_{\text{id}} = \emptyset \qquad \text{Eff}(q) = \{X \in \{E_1, \ldots, E_k, F_1, \ldots, F_k\} \mid q \in X\} \qquad \Gamma \uparrow e = \{x : (\theta, e \circ e') \mid (x : (\theta, e')) \in \Gamma\} \]
where \( e \circ e' = e \cup e' \). In the rule (VAR), the effect \( e_{\text{id}} \) indicates that no state has been visited before the use of \( x \). The rule (CONST) is for terminals (i.e., tree constructors); the premise "\( \{(i, q_{ij}) \mid i \in \{1, \ldots, n\}, j \in J_i\} \) satisfies \( \delta(q, a) \)" implies that a tree \( a\ T_1 \cdots T_n \) is accepted from \( q \) if each \( T_i \) is accepted from \( q_{ij} \) for every \( j \in J_i \). In the rule (APP), each type environment \( \Gamma_i \) is "lifted" by \( e_i \), to reflect the condition (as indicated by the argument type of \( t_0 \)) that the argument \( t_1 \) is used as a value of type \( \theta_i \) only after the effect \( e_i \) occurs.
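For the Streett instantiation, the operations above are plain set manipulations. A minimal sketch (our own encoding, for a one-pair condition with \( E_1 = \{q_a\} \) and \( F_1 = \{q_b\} \) as in Example 6 below):

```python
# Effects are subsets of the symbols E1,...,Ek,F1,...,Fk.  Eff(q) collects
# the symbols of the sets containing q; the identity effect is the empty set;
# composition is union, matching e o e' = e ∪ e'.
sets = {"E1": {"qa"}, "F1": {"qb"}}   # pair names mapped to their state sets

def eff(q):
    return frozenset(name for name, states in sets.items() if q in states)

e_id = frozenset()

def compose(e1, e2):
    return e1 | e2
```

For the parity variant described in the Remark below, the same interface would carry priorities instead, with `e_id = 0` and `compose = max`.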
\begin{figure}[h]
\centering
\[
\frac{}{\{x : (\theta, e_{\text{id}})\} \vdash_{\mathcal{A}} x : \theta} \ \text{(VAR)}
\]
\[
\frac{q \in Q \quad a \in \Sigma \quad n = \Sigma(a) \quad \{(i, q_{ij}) \mid i \in \{1, \ldots, n\}, j \in J_i\} \text{ satisfies } \delta(q, a) \quad e_{ij} = \text{Eff}(q_{ij})}{\emptyset \vdash_{\mathcal{A}} a : \{(q_{1j}, e_{1j})\}_{j \in J_1} \rightarrow \cdots \rightarrow \{(q_{nj}, e_{nj})\}_{j \in J_n} \rightarrow q} \ \text{(CONST)}
\]
\[
\frac{\Gamma_0 \vdash_{\mathcal{A}} t_0 : \{(\theta_i, e_i)\}_{i \in I} \rightarrow \theta \quad \Gamma_i \vdash_{\mathcal{A}} t_1 : \theta_i \text{ for each } i \in I}{\Gamma_0 \cup \bigcup_{i \in I} (\Gamma_i \uparrow e_i) \vdash_{\mathcal{A}} t_0\, t_1 : \theta} \ \text{(APP)}
\]
\[
\frac{\Gamma \cup \{x : (\theta_i, e_i) \mid i \in I\} \vdash_{\mathcal{A}} t : \theta \quad \Gamma \text{ has no bindings for } x \quad I \subseteq J}{\Gamma \vdash_{\mathcal{A}} \lambda x.t : \{(\theta_i, e_i)\}_{i \in J} \rightarrow \theta} \ \text{(ABS)}
\]
\caption{Typing Rules.}
\end{figure}

\textbf{Example 6} (type judgement). Let \( A_1 = (\Sigma, Q, \delta, q_0, C) \) where \( \Sigma = \{a \mapsto 2, b \mapsto 1\} \), \( Q = \{q_0, q_a, q_b\} \), \( C = \{(E_1, F_1)\} \) with \( E_1 = \{q_a\} \) and \( F_1 = \{q_b\} \), and \( \delta(q, a) = (1, q_a) \land (2, q_a) \) for each \( q \) and \( \delta(q, b) = (1, q_b) \) for each \( q \). Types of the terminals can be determined by the (CONST) rule; for example, one can derive \( \emptyset \vdash_{\mathcal{A}} a : (q_a, \{E_1\}) \rightarrow (q_a, \{E_1\}) \rightarrow q_0 \). By using the typing rules, one can derive a type judgement: \( \{x : (q_a, \{E_1\}),\ x : (q_b, \{E_1, F_1\})\} \vdash_{\mathcal{A}} a\ (b\ x)\ x : q_0 \).

\textbf{Remark.} The difference from the Kobayashi-Ong type system [19, 20] is condensed into the definitions of \( E \), \( e_{\text{id}} \), \( \text{Eff} \), and \( \circ \).
Indeed, a variant of the Kobayashi-Ong type system for a parity automaton \( (\Sigma, Q, \delta, q_0, \Omega) \) is produced by the following definitions: \( E = \{0, 1, \ldots, M\} \) where \( M = \max\{\Omega(q) \mid q \in Q\} \), \( e_{\text{id}} = 0 \), \( \text{Eff}(q) = \Omega(q) \), and \( e \circ e' = \max\{e, e'\} \). This variant actually deviates from Kobayashi and Ong's type system [20] in the way the priorities of visited states are counted in the rules (VAR) and (CONST). The variant is an instance of Tsukada and Ong's type system [38], and is also close to the type system of Grellois and Melliès [10, 9].

The typability of a HORS is defined using Streett games.

**Definition 7** (Streett game). A Streett game is a quadruple \( G = (V_\exists, V_\forall, E, C) \) where \( V_\exists \) and \( V_\forall \) are disjoint sets of vertices (we write \( V = V_\exists \cup V_\forall \)), \( E \subseteq V \times V \) is a set of edges, and \( C \) is a Streett acceptance condition on vertices. A play on \( G \) from \( v_0 \in V_\exists \) is a finite or infinite path from \( v_0 \) in the directed graph \( (V, E) \). A play is maximal if it is infinite, or if it is a finite play \( v_0 \ldots v_n \) and there is no \( v_{n+1} \in V \) such that \( (v_n, v_{n+1}) \in E \). A (maximal) play is winning either if it is finite and its last vertex is in \( V_\forall \), or if it is infinite and it satisfies the acceptance condition \( C \), i.e., for each \( (E_i, F_i) \in C \), a vertex in \( F_i \) occurs infinitely often whenever a vertex in \( E_i \) occurs infinitely often. A strategy \( W \) is a partial map from \( V^* V_\exists \) to \( V \) that is edge-respecting, i.e., for every \( \bar{v} = v_0 \ldots v_n \in \text{dom}(W) \), \( (v_n, W(\bar{v})) \in E \). A play \( \bar{v} = v_0 v_1 \ldots \) follows a strategy \( W \) if for every \( i \), \( v_i \in V_\exists \) and \( v_{i+1} \) being defined imply \( W(v_0 \ldots v_i) = v_{i+1} \).
A strategy \( W \) is a winning strategy from \( v_0 \in V_\exists \) if every play that follows \( W \) does not get stuck (i.e., if \( \bar{v} \in V^* V_\exists \) follows \( W \), then \( \bar{v} \in \text{dom}(W) \)), and every maximal play from \( v_0 \) that follows \( W \) is winning.

For a HORS \( \mathcal{G} = (\Sigma, \mathcal{N}, \mathcal{R}, S) \), we define the typability game \( G_{\mathcal{G}, \mathcal{A}} \) as the Streett game \( (V_\exists, V_\forall, E_\exists \cup E_\forall, C^\uparrow) \) where:

- \( V_\exists = \{(F, \theta, e) \mid F \in \text{dom}(\mathcal{N}),\ \theta :: \mathcal{N}(F), \text{ and } e \in E\} \)
- \( V_\forall = \{\Gamma \mid \forall (F : (\theta, e)) \in \Gamma.\ F \in \text{dom}(\mathcal{N}),\ \theta :: \mathcal{N}(F), \text{ and } e \in E\} \)
- \( E_\exists = \{((F, \theta, e), \Gamma) \in V_\exists \times V_\forall \mid \Gamma \vdash_{\mathcal{A}} \mathcal{R}(F) : \theta\} \)
- \( E_\forall = \{(\Gamma, (F, \theta, e)) \in V_\forall \times V_\exists \mid (F : (\theta, e)) \in \Gamma\} \)
- \( C^\uparrow = \{(E_1^\uparrow, F_1^\uparrow), \ldots, (E_k^\uparrow, F_k^\uparrow)\} \), where \( E_i^\uparrow = \{(F, \theta, e) \in V_\exists \mid E_i \in e\} \) and \( F_i^\uparrow = \{(F, \theta, e) \in V_\exists \mid F_i \in e\} \)

Intuitively, in a position \( (F, \theta, e) \), Player tries to show why \( F \) has type \( \theta \) by giving a type environment \( \Gamma \) such that \( \Gamma \vdash_{\mathcal{A}} \mathcal{R}(F) : \theta \); in a position \( \Gamma \), Opponent tries to challenge Player by picking a type binding from \( \Gamma \), and asking why that assumption is valid. A HORS \( \mathcal{G} \) is well-typed, denoted by \( \vdash_{\mathcal{A}} \mathcal{G} \), when \( G_{\mathcal{G}, \mathcal{A}} \) has a winning strategy from \( (S, q_0, e_{\text{id}}) \).

**Theorem 8** (Correctness). Given a HORS \( \mathcal{G} \) and a Streett automaton \( \mathcal{A} \), the value tree of \( \mathcal{G} \) is accepted by \( \mathcal{A} \) if and only if \( \vdash_{\mathcal{A}} \mathcal{G} \).

We sketch a proof below;\(^4\) see the longer version [36] for more details.
**Proof.** We are to define a winning condition that instantiates Tsukada and Ong's type system for model checking Böhm trees [38], so that the resulting type system is equivalent to ours and the correctness of our type system follows from the correctness of Tsukada and Ong's. A winning condition is a structure \( (\mathcal{E}, \mathcal{F}, \Omega) \) where \( \mathcal{E} \) and \( \mathcal{F} \) are partially ordered sets (we denote both orders by \( \preceq \)) and \( \Omega \) is a downward-closed subset of \( \mathcal{F} \), equipped with four operations \( \circ : \mathcal{E} \times \mathcal{E} \to \mathcal{E} \), \( \oplus : \mathcal{E} \times \mathcal{F} \to \mathcal{F} \), \( \pi : \mathcal{F} \to \mathcal{F} \), and \( \setminus : \mathcal{E} \times \mathcal{F} \to \mathcal{E} \) that satisfy additional requirements. Let \( \mathcal{E} \) be defined by \( \mathcal{E} = 2^{\{E_1, \ldots, E_k, F_1, \ldots, F_k\}} \). Let \( S_i : \mathcal{E} \to \{-1, 0, 1\} \) for each \( i \in \{1, \ldots, k\} \) be defined by: (i) \( S_i(e) = -1 \) if \( F_i \in e \); (ii) \( S_i(e) = 0 \) if \( E_i \notin e \) and \( F_i \notin e \); and (iii) \( S_i(e) = 1 \) otherwise. A preorder \( \preceq \) on \( \mathcal{E} \) is defined by \( e \preceq e' \iff S_i(e) \leq S_i(e') \) for every \( i \); it induces an equivalence relation \( \sim \) on \( \mathcal{E} \) and a partial order \( \preceq \) on \( \mathcal{E}/{\sim} \). The winning condition is then constructed from \( \mathcal{E}/{\sim} \); see the longer version [36] for the remaining details.

---

\(^4\) Here we assume some familiarity with Tsukada and Ong's type system [38].

\(^5\) We use the notation \( f(x) = g(O(h(x))) \) to mean that \( f(x) \) is bounded by \( g(h'(x)) \) for some \( h'(x) \) with \( h'(x) = O(h(x)) \).

This section proposes HorSatS and HorSatP, new practical algorithms for Streett and parity automata model checking of HORS, respectively. The type-based characterization in the previous section actually yields a straightforward model checking algorithm, which first constructs the typability game \( G_{\mathcal{G},\mathcal{A}} \) and solves it, as discussed at the end of the previous section. It is, however, impractical, since the size of the typability game is huge: \( N \)-fold exponential for an order-\( N \) HORS, because the number of intersection types for order-\( N \) functions is \( N \)-fold exponential.

The overall structure of the algorithm HorSatS is shown in Figure 2. An *effectless type environment*, denoted by \( \Theta \), is a set of type bindings \( F : \theta \) where \( F \) is a non-terminal and \( \theta \) is a prime type. The algorithm starts with a certain initial effectless type environment \( \Theta_0 \) (which will be defined below), and then expands it by repeatedly applying \( F \), until it reaches a fixpoint \( \Theta^{\text{fix}} \). The algorithm then constructs a subgame of \( G_{\mathcal{G},\mathcal{A}} \) consisting of only the types occurring in \( \Theta^{\text{fix}} \) and solves it. The algorithm HorSatP has exactly the same structure; we just need to adapt \( F \) and ConstructGame for parity games.

\[
\begin{array}{l}
\Theta := \Theta_0; \\
\textbf{while } F(\Theta) \neq \Theta \ \textbf{do } \Theta := F(\Theta); \\
\textbf{return } \text{whether } \text{ConstructGame}(\Theta) \text{ has a winning strategy}
\end{array}
\]

**Figure 2** The proposed algorithm HorSatS.

We describe the construction of \( \Theta_0 \) and the expansion function \( F \) below; it has been inspired by Broadbent and Kobayashi's HorSat algorithm for trivial automata model checking [3]. Let an input HORS be \( \mathcal{G} = (\Sigma, \mathcal{N}, \mathcal{R}, S) \) and an input Streett automaton be \( \mathcal{A} = (\Sigma, Q, \delta, q_0, C) \). The initial effectless type environment \( \Theta_0 \) is defined by:
\[ \Theta_0 = \{\, F : \underbrace{\top \rightarrow \cdots \rightarrow \top}_{m} \rightarrow q \mid F \in \text{dom}(\mathcal{N}),\ m = \text{arity}(\mathcal{N}(F)),\ q \in Q \,\}. \]
The expansion function \( F \) is defined by:
\[
F(\Theta) = \Theta \cup \left\{ F : \tau_1 \rightarrow \cdots \rightarrow \tau_m \rightarrow q \ \middle|\
\begin{array}{l}
\mathcal{R}(F) = \lambda x_1 \ldots x_m.\ t, \\
(\tau_1 \rightarrow \cdots \rightarrow \tau_m \rightarrow q) :: \mathcal{N}(F), \\
\tau_i \subseteq \text{Types}_\Theta(\text{Flow}(x_i)) \text{ for each } i \in \{1, \ldots, m\}, \\
\Gamma \cup \{x_1 : \tau_1, \ldots, x_m : \tau_m\} \vdash_{\mathcal{A}} t : q \\
\text{for some } \Gamma \text{ such that } \forall (G : (\theta, e)) \in \Gamma.\ (G : \theta) \in \Theta
\end{array}
\right\}.
\]
Here, \( \text{Flow}(x) \) is an overapproximation of the (possibly infinite) set of terms to which \( x \) may be bound in a reduction sequence from \( S \); it can be obtained by a flow analysis algorithm like 0CFA. For a set \( U \) of terms, \( \text{Types}_\Theta(U) \) is defined as \( \{\theta \mid \Gamma \vdash_{\mathcal{A}} u : \theta \text{ for some } u \in U \text{ and some } \Gamma \text{ such that } \forall (F : (\theta', e)) \in \Gamma.\ (F : \theta') \in \Theta\} \). The notation \( \{x_1 : \tau_1, \ldots, x_m : \tau_m\} \) stands for the type environment \( \{x_i : (\theta_{i,j}, e_{i,j}) \mid i \in \{1, \ldots, m\},\ (\theta_{i,j}, e_{i,j}) \in \tau_i\} \). Finally, \( \text{ConstructGame}(\Theta) \) returns the subgame \( G_{\mathcal{G},\mathcal{A},\Theta} \) of \( G_{\mathcal{G},\mathcal{A}} \), obtained by restricting \( E_\exists \) to the following subset:
\[ E'_\exists = \{((F, \theta, e), \Gamma) \in V_\exists \times V_\forall \mid \Gamma \vdash_{\mathcal{A}} \mathcal{R}(F) : \theta \text{ and } \forall (G : (\theta', e')) \in \Gamma.\ (G : \theta') \in \Theta \}. \]
The algorithm HorSatP is obtained by (i) replacing the type judgment relation used in the expansion function with that of the Kobayashi-Ong type system, and (ii) modifying \( \text{ConstructGame}(\Theta) \) to produce a subgame of the typability game for the Kobayashi-Ong type system. See the longer version [36] for more details.

**Example 9** (a sample run of the algorithm).
Consider a HORS \( G_2 = (\Sigma, \mathcal{N}, \mathcal{R}, S) \) where \( \Sigma = \{a \mapsto 2, b \mapsto 1, c \mapsto 0\} \), \( \mathcal{N} = \{F \mapsto (o \rightarrow o),\ S \mapsto o\} \), and \( \mathcal{R} = \{F \mapsto (\lambda x.\ a\ x\ (F\ (b\ x))),\ S \mapsto (F\ c)\} \), and a Streett automaton \( A_2 = (\Sigma, Q, \delta, q_a, C) \) with \( Q = \{q_a, q_b\} \), \( C = \{(E_1, \emptyset)\} \) where \( E_1 = \{q_b\} \), and \( \delta \) defined by \( \delta(q, a) = (1, q_a) \land (2, q_a) \) for each \( q \), \( \delta(q, b) = (1, q_b) \) for each \( q \), and \( \delta(q, c) = \text{true} \) for each \( q \). The automaton \( A_2 \) accepts trees in which \( b \) occurs only finitely often in every path.

The initial effectless type environment is \( \Theta_0 = \{F : \top \rightarrow q_a,\ F : \top \rightarrow q_b,\ S : q_a,\ S : q_b\} \). Let \( \text{Flow}(x) = \{b^n\ c \mid n \geq 0\} \) (hence \( \text{Types}_\Theta(\text{Flow}(x)) = \{q_a, q_b\} \) for any \( \Theta \)). The fixpoint calculation proceeds as follows.
\[
\begin{array}{l}
\Theta_1 = F(\Theta_0) = \Theta_0 \cup \{F : (q_a, \emptyset) \rightarrow q_a,\ F : (q_b, \{E_1\}) \rightarrow q_b\} \\
\Theta_2 = F(\Theta_1) = \Theta_1 \cup \{F : (q_a, \emptyset) \land (q_b, \{E_1\}) \rightarrow q_a,\ F : (q_a, \emptyset) \land (q_b, \{E_1\}) \rightarrow q_b\} \\
\Theta_3 = F(\Theta_2) = \Theta_2
\end{array}
\]
The game constructed by the algorithm is shown in Figure 3, where:
\[ \theta_F = (q_a, \emptyset) \land (q_b, \{E_1\}) \rightarrow q_a \qquad \theta'_F = (q_a, \emptyset) \rightarrow q_a \qquad \theta''_F = \top \rightarrow q_a \]
\[ \Gamma_F = \{F : (\theta_F, \emptyset)\} \qquad \Gamma'_F = \{F : (\theta'_F, \emptyset)\} \qquad \Gamma''_F = \{F : (\theta''_F, \emptyset)\} \]
As the game has a winning strategy, the algorithm returns "Yes."

The proposed algorithm is sound and complete, as stated in the following theorems. The soundness (Theorem 10) follows from the fact that \( \text{ConstructGame}(\Theta) \) produces a subgame of \( G_{\mathcal{G},\mathcal{A}} \) obtained by restricting only Player's moves. The completeness is, however, non-trivial: it is not obvious why the fixpoint \( \Theta^{\text{fix}} \) is sufficiently large that Player can win the subgame \( G_{\mathcal{G},\mathcal{A},\Theta^{\text{fix}}} \) if she can win the whole game \( G_{\mathcal{G},\mathcal{A}} \). Whilst the construction of \( \Theta_0 \) and \( F \) is essentially the same as that of the HorSatT algorithm, the completeness proof for HorSatS is much more involved than that for HorSatT.
The proofs below apply to both HorSatS and HorSatP; we use the notations \( e_{\text{id}} \), \( \text{Eff} \), and \( \circ \) generically for the two instantiations.

**Theorem 10** (Soundness). If HorSatS (resp. HorSatP) returns "Yes" for an input HORS \( \mathcal{G} \) and a Streett (resp. parity) automaton \( \mathcal{A} \), then \( \vdash_{\mathcal{A}} \mathcal{G} \).

**Proof.** Suppose that the algorithm returns "Yes." Then, the Streett game \( G_{\mathcal{G},\mathcal{A},\Theta^{\text{fix}}} \) has a winning strategy \( W \) from \( (S, q_0, e_{\text{id}}) \). As \( G_{\mathcal{G},\mathcal{A},\Theta^{\text{fix}}} \) is a subgame of \( G_{\mathcal{G},\mathcal{A}} \) obtained by only restricting edges in \( E_\exists \), \( W \) is also a winning strategy for \( G_{\mathcal{G},\mathcal{A}} \). Thus, we have \( \vdash_{\mathcal{A}} \mathcal{G} \).

**Theorem 11** (Completeness). Given an input HORS \( \mathcal{G} \) and a Streett (resp. parity) automaton \( \mathcal{A} \), HorSatS (resp. HorSatP) returns "Yes" if \( \vdash_{\mathcal{A}} \mathcal{G} \).

Here we give only a proof sketch; see the longer version [36] for more details.

**Proof Sketch.** Suppose \( \vdash_{\mathcal{A}} \mathcal{G} \). By the definition of \( \vdash_{\mathcal{A}} \mathcal{G} \), there exists a finite-memory winning strategy \( W \) of the typability game \( G_{\mathcal{G},\mathcal{A}} \). By adding a rule for unfolding non-terminals (i.e., a rule for deriving \( \{F : (\theta, e)\} \vdash_{\mathcal{A}} F : \theta \) from \( \Gamma \vdash_{\mathcal{A}} \mathcal{R}(F) : \theta \)), the typability of a HORS can alternatively be described by the existence of a (possibly infinite) type derivation tree \( \Pi_0 \) for \( S : q_0 \) such that every infinite path in it satisfies the Streett/parity condition derived from that of \( \mathcal{A} \).\(^6\) Such a derivation tree \( \Pi_0 \) can be constructed based on \( W \).

---

\(^6\) This alternative view of the typability follows easily from the definition of the typability game. Grellois and Melliès [9, 10] have indeed chosen such a formalization for parity tree automata model checking.
We show that we can transform $\Pi_0$ to another derivation tree $\Pi''_0$ which only uses the types occurring in $\Theta^\text{fix}$ (where $\Theta^\text{fix}$ is the effectless type environment produced by the fixpoint calculation in our algorithm). We first “cut” $\Pi_0$ by stopping unfoldings of non-terminals with a certain threshold for the depth of unfoldings, and replace the type of the non-terminal at each “cut” node with $\top \rightarrow \ldots \rightarrow \top \rightarrow q$, treating $\Gamma \vdash F : \underbrace{\top \rightarrow \ldots \rightarrow \top}_{\text{arity}(F)} \rightarrow q$ as an “axiom”. This axiom corresponds to the initial type environment $\Theta_0$ used in the algorithm. By accordingly reassigning the types in the tree, we have a finite type derivation tree $\Pi'_0$ that uses only the types in $\Theta^\text{fix}$ computed by the algorithm. Unfortunately, $\Pi'_0$ itself does not represent a winning strategy of $G_{G,A,\Theta^\text{fix}}$ as it uses the types of non-terminals in $\Theta_0$ as axioms. If we choose a sufficiently large number as the threshold for the depth of unfoldings, however, by matching each node of $\Pi'_0$ with a corresponding node in $\Pi_0$, we can reconstruct a valid (in the sense that every infinite path satisfies the Streett/parity condition) infinite derivation tree $\Pi''_0$, by replacing some edges in $\Pi'_0$ with “back edges”. Since $\Pi''_0$ has been obtained by only rearranging edges in $\Pi'_0$, $\Pi''_0$ also contains only types in $\Theta^\text{fix}$. By the correspondence between a valid infinite derivation tree and a winning strategy of the typability game, we obtain a winning strategy $W'$ for the subgame $G_{G,A,\Theta^\text{fix}}$ constructed by $\text{ConstructGame}(\Theta^\text{fix})$. Thus, the algorithm should return “Yes”. Appendix 8 shows an example of the construction of $\Pi''_0$. 
The algorithm runs in time polynomial in the size of the HORS if the other parameters (the largest order and arity of terminals/non-terminals in the HORS, and the automaton) are fixed. Here, we assume that, as in [37], the linear-time sub-transitive control flow analysis [11] is used for computing the part \( \text{Types}_\Theta(\text{Flow}(x_i)) \) in \( F \). We also assume that an input HORS is normalized as in [19] so that for each \( F \in \text{dom}(\mathcal{N}) \), \( \mathcal{R}(F) \) is of the form \( \lambda \vec{x}.\ c\ (F_1 \vec{x}_1) \cdots (F_j \vec{x}_j) \) where \( 0 \leq j \), \( F_1, \ldots, F_j \) are non-terminals, \( \vec{x}_1, \ldots, \vec{x}_j \) are variables, and \( c \) is a terminal, a non-terminal, or a variable. As the increase in the size of the HORS caused by this normalization is linear, it does not affect the time complexity result. Under those assumptions, (i) the size of a type environment is bounded by \( O(P) \), where \( P \) is the size of the input HORS, and thus the number of iterations is bounded by \( O(P) \); and (ii) the calculation of \( F(\Theta) \) is done in \( O(P) \) time. Thus, the fixpoint \( \Theta^{\text{fix}} \) can be calculated in time quadratic in \( P \).\(^7\) Since the number of bindings for each \( F \) in \( \Theta^{\text{fix}} \) is bounded above by a constant (under the fixed-parameter assumption), the size of (the relevant part of) the typability game is linear in \( P \). Because we have assumed that the automaton is fixed (which also implies that the index \( k \) of a Streett automaton or the largest priority of a parity tree automaton is fixed), the game can also be solved in time polynomial in \( P \). Thus, the whole algorithm runs in polynomial time under the fixed-parameter assumption.

---

\(^7\) Actually, using the technique of [32], \( \Theta^{\text{fix}} \) can be calculated in linear time.

### 5 Experiments

We have implemented HorSatS and HorSatP, Streett and parity automata model checkers for HORS, respectively, based on the algorithm in Section 4.
The implementations use the parity game solver PGSolver [6] as a backend for solving the typability game. Although the typability games solved by HorSatS are Streett games, they are actually converted to parity games and passed to PGSolver; this is because we could not find a practical implementation of a direct algorithm for Streett game solving. We have conducted two kinds of experiments, as reported below. The first experiment aims to confirm the effectiveness of the proposed fixed-parameter polynomial-time algorithm; to this end, we have compared HorSatP with the previous parity automata model checkers APTRecS [7] and TravMC2 [28]. The second experiment aims to evaluate the effectiveness of the direct approach to Streett automata model checking; to this end, we have compared HorSatS with a combination of HorSatP and a conversion from Streett to parity tree automata.

**HorSatP vs. APTRecS/TravMC2.** We have used a benchmark consisting of 97 inputs of parity automata model checking problems, which include all the inputs used in the evaluation of APTRecS and TravMC2 [7, 28], as well as new inputs derived from verification of tree-processing programs [39, 22]. The experiment was conducted on a laptop computer with an Intel Core i5-6200U CPU and 8 GB of RAM. To achieve the best performance of each model checker, we used Ubuntu 16.04 LTS for APTRecS and HorSatP, and Windows 10 for TravMC2.

Figure 4 shows the results. The horizontal axis is the size (the number of symbols in the HORS) of an input, and the vertical axis is the elapsed time of each model checker. The points at the upper edge are timed-out runs (runs that took more than 50 seconds). For the 27 tiny inputs (of size less than 20), APTRecS tends to be the fastest. For the remaining 70 inputs, APTRecS, TravMC2, and HorSatP won 6, 3, and 61 of them, respectively. The results indicate that HorSatP usually outperforms the existing model checkers except for tiny inputs.
In particular, HorSatP can handle a number of large inputs for which APTRecS and TravMC2 timed out.

**Direct vs. indirect approaches to Streett automata model checking of HORS.** We have compared HorSatS with an indirect approach, which first converts an input Streett automaton to a parity automaton by means of the IAR construction [4, 8] and then performs parity automata model checking using HorSatP. The experiment was conducted on a desktop computer with an Intel Core i7-2600 CPU and 8 GB of RAM; the OS was Ubuntu 16.04 LTS. We have used two benchmark sets. The first one has been prepared by the authors, hand-made or program-generated. The second one has been taken from Watanabe et al.'s fair non-termination verification tool for functional programs [41], with a slight modification to increase \( k \) (the number of pairs in Streett conditions); Watanabe et al.'s original tool supported only the case \( k = 1 \). To evaluate the dependency on \( k \), we have tested some of the inputs for different values of \( k \).

The results are shown in Figure 5. The left figure shows the elapsed time of both approaches for several inputs; the horizontal axis is that of the direct approach, and the vertical axis is that of the indirect approach. (Thus, plots above the line \( y = x \) indicate instances for which the direct approach outperformed the indirect one.) The right figure shows the elapsed time for two series of inputs with the same structure but different values of \( k \). The results suggest that the direct approach often outperforms the indirect approach. In particular, the direct approach seems to be noticeably more scalable with respect to \( k \).
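As background for this comparison: for \( k = 1 \), the Streett-to-parity translation is simple and does not enlarge the state space, so the cost of the indirect route comes from larger \( k \), where IAR-style constructions multiply the states. A sketch of the standard one-pair encoding (ours, for illustration; not the paper's IAR implementation):

```python
# One Streett pair (E, F) as a max-parity objective on the same arena:
# priority 2 for F, 1 for E \ F, 0 otherwise.  A play satisfies
# "E infinitely often implies F infinitely often" iff the maximum priority
# occurring infinitely often is even.
def priority(q, E, F):
    if q in F:
        return 2
    if q in E:
        return 1
    return 0

def parity_wins(cycle, E, F):
    # winner of an ultimately periodic play, judged on its repeated cycle
    return max(priority(q, E, F) for q in cycle) % 2 == 0

def streett_wins(cycle, E, F):
    inf = set(cycle)
    return not (inf & E) or bool(inf & F)
```

The two predicates agree on every cycle; for \( k > 1 \) pairs no such priority labeling on the same arena exists in general, which is where the IAR construction (and its blow-up) comes in.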
### 6 Related Work

As already mentioned, our type system for Streett automata model checking of HORS presented in Section 3 is a variant of the Kobayashi-Ong type system [19] for parity tree automata model checking, and may also be viewed as an instance of Tsukada and Ong's type system [38], an extension/generalization of the Kobayashi-Ong type system. Our main contribution in this respect is the specific design of "effects" suitable for Streett automata model checking. A naive approach would have been to use a set of states (rather than a set consisting of the symbols \( E_i, F_i \)) as an effect; that would suffer from \( (N+1) \)-fold exponential time complexity in the size of the automaton, as opposed to the \( N \)-fold exponential time complexity obtained in the last paragraph of Section 3.

Our algorithms HorSatP and HorSatS are the first practical algorithms for Streett or parity tree automata model checking of HORS that run in time polynomial in the size of the HORS (under the fixed-parameter assumption); the previous algorithms APTRecS [7] and TravMC2 [28] for parity tree automata model checking did not satisfy that property. The advantage of the new algorithms has also been confirmed through experiments. For trivial automata model checking, several fixed-parameter polynomial-time algorithms have been known, including GTRecS [16], HorSat/HorSatT [3], and Preface [31]. Our algorithms are closest to the HorSatT algorithm, although the correctness proof for our new algorithms is much more involved.

Model checking of HORS has been applied to automated verification of higher-order programs [15, 21, 17, 26, 24, 23, 41]. In particular, parity/Streett automata model checking has been applied to liveness verification [24, 7, 41]. Among others, Watanabe et al.
[41] reduced the problem of disproving fair termination of functional programs (which is obtained from general liveness verification problems through Vardi's reduction [40]) to Streett automata model checking of HORS, and used an indirect approach to solving the latter by a further reduction to parity automata model checking of HORS. As confirmed through experiments, our direct approach to Streett automata model checking often outperforms their indirect approach.

### 7 Conclusion

We have proposed a type system and an algorithm for Streett automata model checking of HORS. The main contributions are twofold. First, ours is the first type system and algorithm that can directly be applied to Streett automata model checking of HORS; we have confirmed the advantage of the direct approach through experiments. Secondly, our algorithm HorSatS and its variant HorSatP for parity automata model checking are the first practical algorithms for Streett/parity automata model checking of HORS that run in time polynomial in the size of HORS, under the fixed-parameter assumption. We have also confirmed through experiments that HorSatP often outperforms the previous parity automata model checkers for HORS. Future work includes further optimizations of Streett/parity automata model checkers.

**Acknowledgments.** We would like to thank anonymous referees for useful comments.

### References

Oliver Friedmann and Martin Lange. PGSolver. Available at https://github.com/tcsprojects/pgsolver.

### 8 Appendix

Let \( G_2 \) and \( A_2 \) be the HORS and the Streett automaton in Example 9, respectively. Let \( W \) be the memoryless winning strategy of \( G_{G_2, A_2} \) defined by \( W((S, q_a, \emptyset)) = \Gamma_1 \) and \( W((F, \hat{\theta}_F, \emptyset)) = \Gamma_1 \), where \( \hat{\theta}_F = (q_a, \emptyset) \land (q_b, \{E_1\}) \rightarrow q_a \) and \( \Gamma_1 = \{ F : (\hat{\theta}_F, \emptyset) \} \). We write \( F : \theta \) instead of \( F : (\theta, \emptyset) \) in type environments.
A derivation tree \( \Pi_0 \) that corresponds to \( W \) is as follows:
\[ \vdots \]
We construct \( \Pi'_0 \) by "cutting" the derivation tree at a certain threshold of depth, and then reassigning the types in it. Here, we choose to "cut" at the uppermost unfolding shown in the above tree. The resulting tree \( \Pi'_0 \) looks like:
\[ \vdots \]
Here, \( \theta_F = (q_a, \emptyset) \land (q_b, \{E_1\}) \rightarrow q_a \) and \( \theta'_F = (q_a, \emptyset) \rightarrow q_a \). The uppermost unfolding has been replaced by the axiom. At the next unfolding below it, \( F \) is given the type \( (q_a, \emptyset) \rightarrow q_a \). In the body of this \( F \), \( x \) should have type \( q_a \) because it is used as an argument of \( a \), which has type \( (q_a, \emptyset) \rightarrow (q_a, \emptyset) \rightarrow q_a \). On the other hand, \( x \) in \( b\ x \) need not have type \( q_b \), since \( b\ x \) is not typed in the derivation (the inner occurrence of \( F \) is given the argument type \( \top \) by the axiom). Thus, the type \( (q_a, \emptyset) \rightarrow q_a \) is assigned to \( F \). By repeating this kind of argument for the other unfoldings of \( F \), we update the types of the other occurrences of \( F \) to \( \theta_F \). Note that \( \Pi'_0 \) assigns only the types in \( \Theta^{\text{fix}} \) to non-terminals. In fact, the types \( \top \rightarrow q_a \), \( (q_a, \emptyset) \rightarrow q_a \), and \( \theta_F \), assigned to \( F \) at the axiom, the first unfolding (counted from the top), and the second unfolding, respectively, belong to \( \Theta_0 \), \( F(\Theta_0) \), and \( F^2(\Theta_0) \). Notice that, in \( \Pi'_0 \), there are two unfolding nodes (the second and third unfolding nodes) labeled by the same judgment \( \{ F : \theta_F \} \vdash F : \theta_F \). (This is not a coincidence: since there are only finitely many different type judgments, if the threshold for the number of unfoldings is sufficiently large, then there always exist two such nodes on each sufficiently long path of \( \Pi'_0 \).)
By introducing a “back edge” between them, the following derivation tree $\Pi''_0$ is obtained.

\[
\begin{array}{c}
\{ x : q_a \} \vdash a\ x : (q_a, \emptyset) \rightarrow q_a \\
\{ F : \theta_F \} \vdash F : \theta_F \quad (\text{UNFOLD}) \\
\{ x : (q_a, \emptyset) \} \vdash b\ x : q_a \\
\{ F : \theta_F, x : (q_a, \emptyset) \} \vdash F\ (b\ x) : q_a \\
\{ F : \theta_F, x : (q_a, \emptyset) \} \vdash a\ x\ (F\ (b\ x)) : q_a \\
\{ F : \theta_F \} \vdash \lambda x.\, a\ x\ (F\ (b\ x)) : \theta_F \\
\{ F : \theta_F \} \vdash F\ c : q_a \\
\{ S : q_a \} \vdash S : q_a \quad (\text{UNFOLD})
\end{array}
\]

Note that $\Pi''_0$ uses only types that occur in $\Theta^{\text{fix}}$. Furthermore, this tree is a valid infinite derivation tree; indeed, it corresponds to the winning strategy of the subgame $G_{G_2, A_2, \Theta^{\text{fix}}}$ shown in Figure 3.
Modeling Internet-Based Software Systems Using Autonomous Components

Wenpin Jiao, Pingping Zhu, Hong Mei
Institute of Software, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
{jwp, zhupp}@cs.pku.edu.cn, meih@pku.edu.cn

Abstract. Internet-based software systems (called Internetware) are dynamically formed, task-specific coalitions of distributed autonomous components. We model autonomous components from five aspects, i.e., goal, service, use contract, operating context, and implementation. The model supports semantic reasoning about the autonomous behaviors of such components. Based on the model, Internetware can be constructed from autonomous components in two directions, i.e., goal-driven refinement from top to bottom and cooperation-based composition from bottom to top. Internetware is then feasible if the refinement process can find a group of autonomous components that are willing to cooperate and can cooperatively achieve the goals of the Internetware.

Keywords: Internet-based Software System, Internetware, Autonomous Component, Model

1 Introduction

The widely used Internet is forming a new computing environment, which is becoming increasingly open and dynamic and differs from traditional environments [4] [12] in many aspects. For example, there are no clear boundaries for software systems and no central authority for control or validation; resources (e.g., information, calculation, communication, control, or services) are usually autonomous and heterogeneous. Correspondingly, software systems on the Internet can be viewed as dynamically formed, task-specific coalitions of distributed autonomous resources. Internet-based software systems (which we call Internetware in this work) have many new characteristics:

• Autonomy. Components constructing Internetware may be independent, active, and adaptive entities.
• Cooperativity. Internetware can be considered as a federation of Internet-based components.
To perform the tasks of Internetware, components should cooperate in a coalition.

• Evolutivity and adaptability. The components selected to assemble Internetware may vary over time due to the dynamics of the Internet environment; for instance, components dynamically enter or leave the Internet or alter their service content.
• Environment-awareness and dependency. Internetware situated in the open and dynamic environment should be able to perceive and adapt to changes of the environment so that it can evolve properly.

In this work, we introduce the concept of autonomous component for modeling autonomous computing resources on the Internet. Autonomous components are modeled as a tuple of goals, services, use contracts, operating context, and implementation. Based on the autonomous component model, we present an approach to constructing Internetware by goal-driven refinement and cooperation-based composition.

In the remainder of this paper, Section 2 describes the autonomous component model and its semantics. Section 3 presents the approach to constructing Internetware. Section 4 discusses related work and makes some concluding remarks.

2 Autonomous Component

Generally speaking, components are software entities that can be deployed independently and are subject to composition by third parties [14]. A software component is specified via its interface and the contract that defines how other components can directly invoke the services provided by the component [2]. Existing models for software components are usually based on provided and required services (e.g., [5] [7] [15]); they manage only functional interfaces and can only be used to specify passive components.
However, computing resources on the Internet differ from traditional components in many aspects:

• They are independently developed and delivered, are distributed over the whole network, and their executions are supported and controlled locally by the network nodes where they are located. Thus, they may not be freely available: they can autonomously decide whether or not to provide services, and they may change their behaviors or performance without notification.
• Even though they can be assembled into software systems for performing specific tasks, many of them can also execute alone to achieve their own goals. They are no longer passive entities developed only for assembly, so they may or may not respond to service requests when they are busy with their own affairs.
• They may provide services by themselves, but they may also request services while implementing their functionalities. Due to the dynamics of the Internet, they may provide the same service in different ways on the one hand, and they may request the same service dynamically from different components on the other hand.

In addition, Internet-based computing resources are highly diverse in terms of the type of service, the operating context, the quality of service, the interoperability, the usage, and so on. All of this indicates that existing component models are not sufficient for specifying Internet-based computing resources.

2.1 Autonomous Component Model

Autonomous components are computing resources (or software entities) distributed over the Internet, which are developed independently to provide services for outside use besides pursuing local goals. Autonomous components interconnect via the Internet and interact using the varied communication technologies supported on the Internet, for instance, RPC (remote procedure call) and HTTP.
To announce the services that an autonomous component can provide, those who deliver the component should explicitly specify how the component can provide services successfully and how people can use the services, including how to locate or find the component and how to assemble or interconnect it. The specification of an autonomous component covers the following aspects:

• Goals. Autonomous components are active and even self-interested; they take actions (e.g., providing services) because they want to gain benefits, i.e., to achieve their goals.
• Services. Since autonomous components are often developed for serving the outside world and their local goals are generally user-invisible, autonomous components should declare what services they can provide for outside use.
• Use contract. When autonomous components announce the services they can provide, they should also specify the ways to use the services so that users can obtain them.
• Operating context (or running environment). Autonomous components are situated in different environments, and they would not be able to achieve their goals or provide services if their environments could not supply sufficient resources and technology support.
• Implementation. Although the interfaces for providing services and the implementations of autonomous components are generally separable, we should be clear about how autonomous components achieve their goals and implement their services.

2.1.1 Goal

A goal of an autonomous component represents an intention (or objective) to gain benefits or utilities. In most cases, there must be some reason (or stimulus) for an autonomous component to raise an intention. In addition, when several goals are stimulated simultaneously, an autonomous component should be able to schedule its actions for achieving the goals by ranking their priorities.
A goal can be specified as a triple $G = \langle \text{Pre}, \text{Eff}, \text{Pri} \rangle$ where

• Pre is the premise of pursuing the goal. Pre can be expressed as a function of the environment indicating that the environment evolves into a specific state or that some special stimulus (e.g., a request for services) appears in the environment.
• Eff is the effectiveness of the goal, which can also be expressed as a function of the environment indicating what state the environment will evolve into after the goal is achieved. For a self-interested autonomous component, Eff may also imply how many benefits the component gains.
• Pri is the priority of the goal. When the priorities are not specified, goals can be pursued concurrently.

2.1.2 Service

Generally speaking, a service is a functionality provided for outside use. A service can be described as a pair $S = \langle \text{SID}, F \rangle$ where

• SID is the identifier of the service, to be referenced by users.
• $F$ is the functionality of the service. The functionality can be specified as a pair $\langle \text{Pre}, \text{Post} \rangle$, where Pre and Post are the pre-condition and the post-condition, respectively.

2.1.3 Use Contract

Internet-based services may not be free or public to everyone, so users may first have to be authenticated and pay for the services they are trying to acquire. In addition, since the autonomous components distributed over the Internet are heterogeneous, the ways to request services may vary. For every service provided by a component, a contract is specified. A contract is a quintuple $C = \langle \text{SID}, \text{Auth}, \text{Payment}, \text{Arg}, \text{Req} \rangle$ where

• SID is the identifier of the service.
• Auth is the authentication information that users should provide when they request the service. When the service is public to everyone, Auth can be blank.
• Payment in the software world can be considered as benefits (or resources) that users bring to the autonomous component. Only when the autonomous component obtains the required benefits can it start to provide the service. When the service is free, Payment can be empty.
• Arg is the set of arguments. Arguments can be input or output data items, or both.
• Req is the way to request the service. The way to request the service should conform to the architectural style [11] that the autonomous component is supposed to support.

2.1.4 Environment (or Operating Context)

The environment in which an autonomous component is situated consists of resources related to the achievement of the component’s goals, other autonomous components cooperating with the component to achieve those goals, information (e.g., requests for services) transmitted among components, and constraints affecting the behaviors of components. The environment of an autonomous component $P$ can be specified as a quadruple $E = \langle \text{Rsrc}, \text{Com}, \text{Info}, \text{Cons} \rangle$ where

• Rsrc is the set of resources. A kind of resource can be abstracted as an object class, and a concrete resource can be considered as an instance object of the class.
• Com is the set of autonomous components involved in the environment.
• Info is the information transmitted through the environment. In the real world, the information flowing in an environment can be varied; in the software world, however, the information we care about most is the requests for services and the replies to those requests transferred among components. $\text{Info} = \text{REQ} \cup \text{REP}$, where REQ is the set of requests for services conforming to specific use contracts and REP is the set of replies corresponding to the requests.
• Cons specifies the constraints on the autonomous components involved in the environment.
The constraints of the environment fall into two categories, i.e., interconnection constraints and behavior constraints. Firstly, the environment is unlikely to support all kinds of communication and interoperation technologies, so services provided by autonomous components have to be requested and provided in a way that the environment supports. Secondly, because of the limitation of resources and the interactions among components, the behaviors of the components involved in an environment are affected by the environment, especially by the state transitions of the environment. The behavioral constraints of the environment can be divided further into three sub-categories, i.e., prohibitions, permissions, and obligations [8] [9].

• $\text{Cons} = \langle \text{Interconn}, \text{Beh} \rangle$, where Interconn is the interoperation technology the environment supports and Beh is the set of behavior constraints the environment imposes on autonomous components. An autonomous component perceives requests for services according to Interconn and takes actions by reference to Beh.
• $\text{Beh} = \text{Proh} \cup \text{Perm} \cup \text{Obl}$, where Proh, Perm, and Obl are the sets of prohibitions, permissions, and obligations, respectively; they have similar definitions but different semantics.
• $\text{Proh}\ (\text{Perm or Obl}) = \langle \text{Condi}, \text{Action} \rangle$, where Condi is a function of the environment representing the state of the environment and Action is an action (e.g., providing a service) of a component.

2.1.5 Implementation

Traditionally, the interface (i.e., the use contracts) of a component is independent of the component’s implementation, and users need not care how the component is implemented.
However, from the perspective of developers, both the use contracts and the implementation of a component should be modeled. Nevertheless, while modeling the implementation of an autonomous component, we still do not care how the component is coded. Instead, what we are concerned with is how a component depends on its environment, especially on the resources and other components involved in the environment, to achieve its goals or provide its services.

To achieve goals or provide services, autonomous components should possess sufficient resources and skills, including the capabilities (i.e., provided services) owned by the components themselves and the abilities obtained by asking for assistance (i.e., required services) from other components. With respect to every single goal $G$ or service $S$, there is an implementation.

• $\text{Imp}(G)$ or $\text{Imp}(S) = \langle \text{Rsrc}, \text{Sk} \rangle$, where Rsrc is the set of resources appearing in the environment and Sk is the set of skills (i.e., provided and requested services) required by the component.

2.1.6 Autonomous Component

From the points of view of the different roles who deliver or use autonomous components, the model for describing autonomous components may vary. Vendors who deliver autonomous components should make clear how to develop, use, and run them. From the vendors’ perspective, the autonomous component model can be defined as a sextuple.

• $M_{\text{vendor}} = \langle \text{ID}, \text{G}, \text{S}, \text{C}, \text{E}, \text{J} \rangle$, where ID is the identifier of the component, G is the set of goals of the component, S is the set of services provided by the component, C is the set of use contracts corresponding to the provided services, E is the environment, and J is the implementation of the component.
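As an illustration only, the five aspects and the vendor-side sextuple might be encoded as the following minimal Python sketch; all class and field names here are our own assumptions, not notation from the model.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Goal:
    pre: Callable[[dict], bool]   # Pre: premise, a predicate over the environment
    eff: Callable[[dict], dict]   # Eff: effectiveness, the resulting environment state
    pri: int = 0                  # Pri: priority

@dataclass
class Service:
    sid: str   # SID: service identifier
    pre: str   # functionality pre-condition
    post: str  # functionality post-condition

@dataclass
class Contract:
    sid: str           # identifier of the governed service
    auth: str = ""     # "" = service is public
    payment: str = ""  # "" = service is free
    args: tuple = ()   # input/output arguments
    req: str = "HTTP"  # way to request (a supported architectural style; illustrative)

@dataclass
class AutonomousComponent:
    """Vendor view: the sextuple <ID, G, S, C, E, J>."""
    cid: str
    goals: list
    services: list
    contracts: list
    env: dict = field(default_factory=dict)  # E = <Rsrc, Com, Info, Cons>
    imp: dict = field(default_factory=dict)  # goal/service id -> {"rsrc": ..., "sk": ...}

    def user_view(self):
        """Project onto the user-side triple <ID, S, C>."""
        return (self.cid, self.services, self.contracts)
```

The point of the sketch is only that the user view is a projection of the richer vendor view, so a registry serving users never needs to expose goals, environment, or implementation.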
Unlike the vendors of autonomous components, users usually only care what services autonomous components provide and how to use those services. From the users’ perspective, the autonomous component model can be defined as a triple.

• $M_{\text{user}} = \langle \text{ID}, \text{S}, \text{C} \rangle$, where ID, S, and C have the same meanings as above.

2.2 Semantics of Autonomous Components

Once an autonomous component is modeled, we should be able to reason about the behaviors and properties of the component according to its model.

2.2.1 Activating a Goal

Autonomous components situated in the environment can perceive changes happening in the environment and react to them. When an autonomous component perceives some stimuli and figures out that the occasion (i.e., the premise) for achieving a goal has appeared, it can start pursuing that goal.

\[
G \in \mathbf{G} \land G.\text{Pre} \land G.\text{Pri} \geq \varepsilon \implies \text{CanActivate}(G) \tag{1}
\]

That is, a goal of an autonomous component can be activated if the premise of achieving the goal is satisfied and the priority of the goal is high enough, for instance, higher than a threshold $\varepsilon$, where $\varepsilon$ may be adjusted dynamically.

2.2.2 Responding to a Request for Service

Since autonomous components are goal-driven, they provide services because doing so helps them achieve their own goals. When a user’s request for a service $S$ appears in the environment, an autonomous component will respond to the request if the request stimulates the component to pursue some goal and the achievement of the goal involves providing the requested service. Naturally, the requesting user should also pass authentication, and the payment should be acceptable.
\[
\text{Req} \in E.\text{Info} \land \text{Req} = \text{UseContractOf}(S).\text{Req} \land S.F.\text{Pre} \land \text{Authenticated}(\text{User}) \land \text{PaymentOf}(\text{User}) \supseteq \text{UseContractOf}(S).\text{Payment} \land \exists G \in \mathbf{G}\,(\text{CanActivate}(G) \land S \in \text{Imp}(G).\text{Sk}) \implies \text{WillRespond}(\text{Req}, S) \tag{2}
\]

That is, an autonomous component will respond to a request for a service if the component can perceive the request and providing the service can benefit the achievement of one of its goals.

2.2.3 Feasibility of a Goal

After a goal is stimulated, the goal is achievable if the environment evolves into an appropriate state, i.e., there are sufficient resources and skills for achieving the goal and there are no constraints obstructing the behaviors of the component.

\[
\text{Imp}(G).\text{Rsrc} \subseteq E.\text{Rsrc} \land \text{Imp}(G).\text{Sk} \subseteq (\mathbf{S} \cup \bigcup_{C \in E.\text{Com}} C.\mathbf{S}) \land \neg \exists p \in E.\text{Cons.Beh.Proh}\,(p.\text{Condi} \land \exists s \in \text{Imp}(G).\text{Sk}\,(s = p.\text{Action})) \implies \text{IsFeasible}(G) \tag{3}
\]

2.2.4 Availability of a Service

Similarly, a service is available if the environment evolves into a state that supplies adequate resources and skills for performing the service.

\[
\text{Imp}(S).\text{Rsrc} \subseteq E.\text{Rsrc} \land \text{Imp}(S).\text{Sk} \subseteq (\mathbf{S} \cup \bigcup_{C \in E.\text{Com}} C.\mathbf{S}) \land \neg \exists p \in E.\text{Cons.Beh.Proh}\,(p.\text{Condi} \land \exists s \in \text{Imp}(S).\text{Sk}\,(s = p.\text{Action})) \implies \text{IsAvailable}(S) \tag{4}
\]

2.2.5 Behavior of an Autonomous Component

The behaviors of an autonomous component are related to pursuing goals and providing services.
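The goal-activation and feasibility/availability conditions above can be read as executable predicates. The following Python sketch assumes a simple set-based encoding (resources and skills as sets, prohibitions as (condition, action) pairs) that is our own illustrative choice, not part of the model.

```python
def can_activate(goal, env, eps=0):
    # Activation: the premise holds in the environment and the priority
    # is at least the (dynamically adjustable) threshold eps.
    return goal["pre"](env) and goal["pri"] >= eps

def is_feasible(imp, env):
    # Feasibility/availability: the environment supplies the required
    # resources, every required skill is offered by some component in
    # the environment, and no prohibition blocks a required action.
    skills_on_offer = set()
    for comp in env["com"]:
        skills_on_offer |= comp["services"]
    return (imp["rsrc"] <= env["rsrc"]
            and imp["sk"] <= skills_on_offer
            and not any(cond(env) and action in imp["sk"]
                        for cond, action in env["proh"]))

# Toy example: a goal whose implementation needs resource "cpu" and
# skill "s1", provided by a neighbouring component.
env = {
    "rsrc": {"cpu", "disk"},
    "com": [{"services": {"s1"}}],
    "proh": [(lambda e: False, "s1")],  # prohibition whose condition never holds
}
goal = {"pre": lambda e: "cpu" in e["rsrc"], "pri": 5}
imp = {"rsrc": {"cpu"}, "sk": {"s1"}}
print(can_activate(goal, env, eps=1))  # True
print(is_feasible(imp, env))           # True
```

Note how a prohibition only blocks feasibility when its condition currently holds in the environment, mirroring the $p.\text{Condi}$ conjunct in the formulas.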
\[
\text{CanActivate}(G) \land \text{IsFeasible}(G) \implies \text{Achieved}(G), \qquad \text{WillRespond}(\text{Req}, S) \land \text{IsAvailable}(S) \implies \text{Provided}(S) \tag{5}
\]

Proposition 1. If the achievement of a goal depends on providing some service, then the goal is feasible only if the service is available, and the goal is achieved only if the service has been performed.

\[
(\text{IsFeasible}(G) \land S \in \text{Imp}(G).\text{Sk}) \implies \text{IsAvailable}(S) \tag{6}
\]
\[
(\text{Achieved}(G) \land S \in \text{Imp}(G).\text{Sk}) \implies \text{Provided}(S) \tag{7}
\]

2.3 Composition and Refinement of Autonomous Components

Although autonomous components are modeled from five aspects, the composition of two autonomous components can be considered as the merge of their goals, services, and environments, since their use contracts and implementations depend entirely on their services and environments. Suppose that $P = \langle \text{ID}_P, \text{G}_P, \text{S}_P, \text{C}_P, \text{E}_P, \text{J}_P \rangle$, $Q = \langle \text{ID}_Q, \text{G}_Q, \text{S}_Q, \text{C}_Q, \text{E}_Q, \text{J}_Q \rangle$, and $R$ is the composition of $P$ and $Q$. While composing $P$ and $Q$, we should take the following situations into consideration.

• $P$ and $Q$ are completely independent of each other, i.e., $P$ (or $Q$) neither requests nor provides services from/to $Q$ (or $P$), and they are situated in two isolated environments.
\[
R = \langle \text{ID}_R, \text{G}_P \cup \text{G}_Q, \text{S}_P \cup \text{S}_Q, \text{C}_P \cup \text{C}_Q, \text{E}_P + \text{E}_Q, \text{J}_P + \text{J}_Q \rangle \tag{8}
\]

Here, $\text{E}_P + \text{E}_Q$ means simply putting all resources and components involved in the two environments together, whilst $\text{J}_P + \text{J}_Q$ means collecting all implementations of goals and services into a single collection.

• $P$ and $Q$ are situated in the same environment and share resources, but they do not depend on one another in terms of services.

\[
R = \langle \text{ID}_R, \text{G}_P \cup \text{G}_Q, \text{S}_P \cup \text{S}_Q, \text{C}_P \cup \text{C}_Q, \text{E}_P \oplus \text{E}_Q, \text{J}_P + \text{J}_Q \rangle \tag{9}
\]

Here, $\text{E}_P \oplus \text{E}_Q$ is similar to $\text{E}_P + \text{E}_Q$ except that shared resources are counted only once.

• $P$ may request services from $Q$, i.e., $Q$ is a part of $P$’s environment, on which $P$ relies to implement its functionality.

\[
R = \langle \text{ID}_R, \text{G}_P \cup \text{G}_Q, \text{S}_P \cup \text{S}_Q, \text{C}_P \cup \text{C}_Q, (\text{E}_P \oplus \text{E}_Q) \setminus Q, \text{J}_P + \text{J}_Q \rangle \tag{10}
\]

Here, $(\text{E}_P \oplus \text{E}_Q) \setminus Q$ means excluding $Q$ from the environment.

Inversely, refinement refers to how an autonomous component is decomposed into one or more concretely implemented components that realize the functionality of the refined component. Differently from composition, when discussing the refinement of an autonomous component we need not be concerned with the environment, and a component can be considered successfully refined if the concrete components can achieve the component’s goals and provide the component’s services using the same use contracts.
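To make the three composition cases concrete, here is a small Python sketch of formulas (8)–(10), assuming components are encoded as dicts whose goal/service/contract fields are sets; the flag and field names are illustrative assumptions.

```python
def compose(p, q, rid, p_uses_q=False):
    """Sketch of composition (formulas (8)-(10)).

    p_uses_q=False -> formulas (8)/(9): independent or resource-sharing
        components (set union naturally counts a shared resource once).
    p_uses_q=True  -> formula (10): Q becomes internal to the composite,
        so it is excluded from the composed environment.
    """
    env = {
        "rsrc": p["env"]["rsrc"] | q["env"]["rsrc"],
        "com": p["env"]["com"] | q["env"]["com"],
    }
    if p_uses_q:
        env["com"] -= {q["id"]}   # (E_P (+) E_Q) \ Q
    return {
        "id": rid,
        "goals": p["goals"] | q["goals"],
        "services": p["services"] | q["services"],
        "contracts": p["contracts"] | q["contracts"],
        "env": env,
        "imp": {**p["imp"], **q["imp"]},   # J_P + J_Q
    }

P = {"id": "P", "goals": {"g1"}, "services": {"s1"}, "contracts": {"c1"},
     "env": {"rsrc": {"cpu"}, "com": {"Q"}}, "imp": {"g1": {"sk": {"s2"}}}}
Q = {"id": "Q", "goals": {"g2"}, "services": {"s2"}, "contracts": {"c2"},
     "env": {"rsrc": {"cpu", "disk"}, "com": set()}, "imp": {"g2": {"sk": set()}}}

R = compose(P, Q, "R", p_uses_q=True)
print(sorted(R["services"]))  # ['s1', 's2']
print(R["env"]["com"])        # set(): Q is no longer an external component
```

The sketch makes the intuition of formula (10) visible: once $Q$ is absorbed, its services remain part of $R$, but $Q$ disappears from the set of external components $R$ depends on.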
An autonomous component $R$ is refined by another component $P$, which can be an individual component or a composition of a group of components, if $P$ achieves $R$’s goals, implements $R$’s services, and preserves the use contracts, i.e.,

\[
\forall G \in \text{G}_R\, \exists G' \in \text{G}_P\,(\text{Achieved}(G') \implies \text{Achieved}(G)) \land \forall S \in \text{S}_R\, \exists S' \in \text{S}_P\,(\text{Provided}(S') \implies \text{Provided}(S)) \tag{11}
\]

The composition and refinement of autonomous components provide a foundation for developing Internet-based software systems.

3 Internet-Based Software Systems – Internetware

Traditionally, when we refer to a software component, we always assume that the component’s interface contract and operating context are explicitly specified. Furthermore, according to the specification of a component, we can clearly know how to compose the component into software systems and can predict what outcome we will obtain after the component is deployed and executes. On the Internet, however, Internet-based computing resources, i.e., autonomous components, are autonomous and active, and we cannot be sure or guarantee that an autonomous component will always be committed to the assembly of software systems. When we request services from autonomous components, the outcomes are uncertain. In addition, for a service there often exist multiple autonomous components that can provide it, and the ways of providing it may vary. While constructing Internetware systems, we may not even know which autonomous components are actually assembled into the systems. We cannot construct Internetware systems simply by assembling autonomous components through the requested and provided services of components, as people did when assembling traditional components. Therefore, we put forward an approach to constructing Internetware from two directions, which composes cooperative autonomous components into Internetware so that its goals are achieved cooperatively.
3.1 Goal-Driven Refinement

According to the definition of the refinement of autonomous components described above, for every goal of Internetware there should be an autonomous component that can provide the service specified by the goal and implement the use contract corresponding to the service. Furthermore, if the autonomous component requests services from other components while implementing its functionality, then providing the required services can be considered as sub-goals of the Internetware.

Suppose $\mathcal{R} = \{G_1, G_2, ..., G_n\}$ is the set of goals of Internetware, and for each $G_i$, $\text{Imp}(G_i).\text{Sk} = \{S_i\}$. The refinement of Internetware can be described recursively as follows, where $\mathcal{P}$ is the set of autonomous components obtained in the refinement process.

1. For every goal $G_i$ ($1 \leq i \leq n$) in $\mathcal{R}$, search for (or implement) an autonomous component $P_i$ that provides $S_i$ (i.e., $S_i \in \mathcal{S}_{P_i}$) and implements the use contract corresponding to $S_i$, and then let $\mathcal{P} = \mathcal{P} \cup \{P_i\}$.
2. If $P_i$ requires other services while providing $S_i$, i.e., $\text{Imp}(S_i).\text{Sk} = \{S_{i1}, ..., S_{im}\}$, then consider providing $S_{i1}, ..., S_{im}$ as sub-goals of the Internetware, i.e., let $\text{Imp}(G_{ij}).\text{Sk} = \{S_{ij}\}$ ($1 \leq j \leq m$), and let $\mathcal{R} = \mathcal{R} \cup \{G_{i1}, ..., G_{im}\}$.
3. Stop searching when all goals occurring in $\mathcal{R}$ can be achieved by autonomous components.

Theorem 1. The above refinement process yields a refinement of Internetware, i.e., $\mathcal{P}$ is a refinement of the Internetware.

According to the first step, for every $G_i \in \mathcal{R}$ there is a $P_i \in \mathcal{P}$, a goal $G' \in \mathcal{G}_{P_i}$, and $S_i \in \text{Imp}(G').\text{Sk}$.
Since $\text{Achieved}(G') \Rightarrow \text{Provided}(S_i)$ (formula 7) and $\text{Provided}(S_i) \Leftrightarrow \text{Achieved}(G_i)$ (formula 12), we obtain $\text{Achieved}(G') \Rightarrow \text{Achieved}(G_i)$, i.e., $\mathcal{P}$ will achieve all goals of the Internetware. According to the second step, because all services required for implementing $S_i$ are considered as sub-goals to be refined further and these sub-goals are achieved recursively as well, $S_i$ will be available (formula 4). That is, all services for achieving the goals of the Internetware can be provided by $\mathcal{P}$. Furthermore, according to the definition of refinement (formula 11), we can conclude that $\mathcal{P}$ is a refinement of the Internetware.

3.2 Cooperation-Based Composition

Generally speaking, there are two types of cooperation [3] among the autonomous components involved in Internetware.

• Cooperation with interaction. In this case, autonomous components support each other in terms of capabilities (e.g., one provides services for another), and they are usually involved in the achievement of a single goal of the Internetware.
• Cooperation without interaction, i.e., autonomous components independently take actions to pursue multiple (sub-)goals of the Internetware.

In the discussion of the composition of autonomous components, we listed three situations under which the compositions differ. Correspondingly, the first type of cooperation happens in the third situation, whilst the second takes place in the first two situations.

According to the refinement process discussed above, a refinement of Internetware only shows one possibility of achieving the global goals of the Internetware, under the hypothesis that all services provided by autonomous components are always available and all selected autonomous components will respond to requests for services.
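The three-step refinement above is essentially a worklist algorithm. The following Python sketch assumes a hypothetical registry mapping each required service to a component that provides it, together with the services that component in turn requires; all names are illustrative.

```python
def refine(top_goals, registry):
    """Goal-driven refinement sketch: goals are identified by the single
    service each one requires (Imp(G_i).Sk = {S_i}).

    registry: service id -> (component id, set of services that the
              component requires in order to provide it); this stands in
              for step 1's search over available components.
    Returns the set P of selected components, or raises LookupError if
    some goal cannot be achieved by any component.
    """
    pending = list(top_goals)   # the evolving goal set R
    resolved = set()
    chosen = set()              # the refinement P
    while pending:
        sid = pending.pop()
        if sid in resolved:
            continue
        if sid not in registry:
            raise LookupError(f"no component provides service {sid}")
        comp, required = registry[sid]
        chosen.add(comp)          # step 1: add the providing component
        resolved.add(sid)
        pending.extend(required)  # step 2: required services become sub-goals
    return chosen

registry = {
    "s1": ("P1", {"s2", "s3"}),  # P1 provides s1 but needs s2 and s3
    "s2": ("P2", set()),
    "s3": ("P2", set()),         # one component may provide several services
}
print(sorted(refine({"s1"}, registry)))  # ['P1', 'P2']
```

The `resolved` set guards against the process re-announcing a service that has already been assigned a provider, which is what makes step 3 terminate when the (finite) registry covers all reachable sub-goals.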
However, the refinement does not consider whether the autonomous components obtained in the refinement process are willing to provide the required services. In contrast, when autonomous components intend to cooperate, they must believe that their actions of providing services will benefit themselves, i.e., enable them to achieve their local goals. That is to say, while autonomous components are participating in cooperation, they must be willing to respond to requests for services. Therefore, we say that Internetware is feasible if there is a refinement of Internetware and all autonomous components occurring in the refinement will cooperate.

Theorem 2. If there exists a refinement $\mathcal{P}$ and a composition $\mathcal{D}$ such that $\mathcal{P} = \mathcal{D}$, then Internetware is feasible, i.e., every goal $G \in \mathcal{R}$ is achievable.

Proof. Suppose $\text{Imp}(G).S_k = \{S\}$. According to the definition of refinement, $S$ must be one of the provided services of the refinement $\mathcal{P}$, i.e., $S \in \mathcal{S}_{\mathcal{P}}$. Furthermore, according to the refinement process (Theorem 1), $S$ will be implemented by autonomous components in $\mathcal{P}$, i.e., $\text{IsAvailable}(S)$. On the other hand, since $\mathcal{P} = \mathcal{D}$ and the autonomous components in $\mathcal{D}$ are cooperative, $S \in \mathcal{S}_{\mathcal{D}}$ and $\text{WillRespond}(S, \text{Req}, S)$. Based on formula 5, the composition $\mathcal{D}$ will provide service $S$ for achieving goal $G$.

3.3 Refining Internetware into Cooperative Components

As discussed above, the goal-driven refinement and the cooperation-based composition only provide a theoretical possibility of implementing Internetware. On the Internet, due to the autonomy of components and the dynamics of environments, autonomous components may no longer be willing to provide services, or they may be temporarily (or permanently) inaccessible. The feasibility of Internetware will be affected because the required services are no longer available.
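As a rough illustration only, the recursive refinement procedure of Section 3.1, extended with the cooperation check motivated here, can be sketched in Python. The `Component` class, its field names, and the identification of each goal with a single service name are assumptions made for this sketch, not part of the formal model.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Component:
    """A hypothetical autonomous component: the services it provides,
    the services each provided service requires in turn, and whether
    it is currently willing to cooperate (respond to requests)."""
    name: str
    provides: set[str]
    requires: dict[str, set[str]]
    cooperative: bool = True

def refine(goals: set[str], registry: list[Component]) -> set[str] | None:
    """Recursive goal-driven refinement: each goal G_i is identified
    with the single service S_i that achieves it.  Only components
    willing to cooperate are considered, so a non-None result also
    witnesses the feasibility condition of Theorem 2.  Returns the
    names of the selected components, or None if some (sub-)goal
    cannot be achieved by a cooperative component."""
    selected: set[str] = set()
    pending = set(goals)        # goals/services still to be refined
    resolved: set[str] = set()  # services already assigned a provider
    while pending:
        service = pending.pop()
        if service in resolved:
            continue
        # Step 1: search for a willing provider of the service.
        bidders = [c for c in registry if c.cooperative and service in c.provides]
        if not bidders:
            return None  # some goal is unachievable: not feasible
        winner = bidders[0]  # e.g. the cheapest bidder in a contract net
        selected.add(winner.name)
        resolved.add(service)
        # Step 2: required services become sub-goals of Internetware.
        pending |= winner.requires.get(service, set()) - resolved
    return selected  # Step 3: every goal is now achieved
```

For instance, given a component `A` providing `S1` but requiring `S2`, and a component `B` providing `S2`, `refine({"S1"}, [A, B])` selects both; if `B` stops cooperating, the refinement fails and Internetware is not feasible.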
When discussing the cooperation-based composition of Internetware, we considered two factors that affect the composition: 1) the participating autonomous components desire to cooperate, and 2) they can find adequate capability support in their environments to implement their functionalities. Obviously, when the environments change, some autonomous components previously involved in the composition may no longer be willing to cooperate or may be unable to obtain adequate support from the environments. Thus, the composition for implementing Internetware may differ across different environments.

In contrast, in the refinement process we ignored the autonomy of components and the dynamics of environments, so the obtained refinement of Internetware lacks dynamics. Thus, no matter how the environments change, we may always obtain the same refinement as long as the previously obtained autonomous components do not leave the environments. That is to say, in dynamic environments involving autonomous components, we cannot ensure that there always exist a refinement $\mathcal{P}$ and a composition $\mathcal{D}$ such that $\mathcal{P} = \mathcal{D}$ holds for Internetware, i.e., we cannot always guarantee the feasibility of Internetware.

Nevertheless, if we check the cooperativity of the autonomous components obtained in the refinement process at the same time as we refine Internetware, we are more likely to obtain a feasible Internetware. Therefore, we improve the above refinement process. In the improved refinement, the procedure of searching for autonomous components is like an auction using the contract-net protocol [10] [13], in which autonomous components that require services announce the required services, and autonomous components that are willing to cooperate bid on the tasks of providing those services.

- For every goal $G_i$ ($1 \leq i \leq n$) in $\mathcal{R}$, since $\text{Imp}(G_i).S_k = \{S_i\}$, there is only one task (i.e., providing the service $S_i$) to be announced.
After the task is announced:

- Recruit autonomous components that are willing to cooperate, and then
- Select the autonomous component that requires the least payment for carrying out the task of providing the service.

4 Related Work and Conclusions

The Internet forces the environment of software systems to be open and dynamic, which makes it important to develop flexible, trustworthy, and collaborative software systems to use and share the information resources scattered over the open and dynamic Internet. However, traditional component-based software development methodologies fail to meet these challenges, for the following reasons. First, they generally assume that the system goals are determined before development and the system structures are fixed after development. They do not take the openness and dynamics of the Internet into consideration and cannot meet the requirements for developing adaptive, evolving, and collaborative software systems on the Internet. Second, they usually consider the components involved in systems to be static and passive, which makes it difficult to develop software systems that are required to be autonomous, proactive, and reactive to the changing demands of the environment and the users. Third, they usually require that the ways of interconnection and interaction among components be uniform, and they cannot provide different collaboration support for components, which leaves the components tightly coupled and the structures and behaviors of software systems rigid. In contrast, Internetware is autonomous, cooperative, and adaptive, and it provides a new solution for developing flexible, multi-purpose, and evolving Internet-based software systems.
To implement Internetware, we should make breakthroughs in the theory, methodology, technology, and platforms of Internetware, to enable software systems to be applied to multiple purposes, to integrate autonomous and cooperative software entities, to support flexible interconnections, and to adapt to changes in the dynamic environment. To study the characteristics of Internetware and develop a feasible way to integrate autonomous software components into Internetware, one of the first issues to address is specifying autonomous components in a uniform formal model.

To the best of our knowledge, there is no published work on modeling autonomous components in the way presented here. Even where the literature claims to model autonomous components, those components are assumed to be unable to decide for themselves whether, when, or how to provide their services. For example, [1] presents a component model for encapsulating services, which allows the structure and behavior of autonomous components to be adjusted to changing or previously unknown contexts in which they need to operate. [6] introduces service-oriented concepts into a component model and execution environment and defines a model for dynamically available components, where components provide and require services (i.e., functionality) and all component interaction occurs via services.

In addition, there is much work on modeling traditional components. For example, [15] proposes a unifying component description language for integrated descriptions of the structure and behavior of software components and component-based software systems. [7] describes how components are specified at the interface and design levels and how they are composed. A component consists of a set of interfaces, provided to or required from its environment, and executable code that can be coupled with other components via its interfaces.
[5] introduces a framework dealing with interface signatures, interface constraints, interface packaging and configurations, and non-functional properties of software components. However, those models pay little attention to the autonomy of components or to the impact of the environment on components' autonomous behaviors. On the contrary, most of those models assume that components are not completely self-controlled and will always respond as soon as they perceive requests for services.

In this paper, we introduce the notion of autonomous component and put forward a new definition for it. First of all, autonomous components are software entities for use in the assembly of Internetware. Furthermore, autonomous components are self-controlled, environment dependent, and pursue their goals locally. Based on the definition, autonomous components are modeled from five aspects: the goals they are committed to achieving, the services they promise to provide in order to achieve their goals, the use contracts specifying how users may use them, the environment on which they depend, and the implementation describing how they use environmental resources and skills to achieve their goals. In the model, we give a concrete description of the environment: the environment of an autonomous component consists not only of the resources and skills on which the component depends to implement its goals and services, but also of the constraints that affect the behaviors and relationships of autonomous components. Unlike traditional component models, our model can semantically reason about the autonomous behaviors of autonomous components, such as when and how autonomous components start to pursue their goals, whether they are committed to providing services, and so on. The model can also reason about the relationships among autonomous components, such as composition and refinement.
By using the autonomous components modeled in this paper, we can construct Internetware in two directions, i.e., goal-driven refinement from top to bottom and cooperation-based composition from bottom to top. Then, Internetware is feasible if the refinement process can find a group of cooperative autonomous components that can cooperatively achieve the goals of Internetware. In the next stage, we will develop a running environment for supporting the automated assembly of Internetware from autonomous components.

References

Wenpin Jiao received his BA and MS degrees in computer science from East China University of Science and Technology in 1991 and 1997, respectively, and his Ph.D. degree in computer science from the Institute of Software at the Chinese Academy of Sciences. From 2000 to 2002, he was a postdoctoral fellow in the Department of Computer Science at the University of Victoria, Canada. Since 2004, he has been an associate professor in the School of Electronics Engineering at Peking University. His major research focus is on autonomous component technology, multi-agent systems, and software engineering. E-mail: jwp@sei.pku.edu.cn.

Pingping Zhu received her BA and MS degrees in computer science from Peking University in 2002 and 2005, respectively. Her research interests include software reuse and software component technology.

Hong Mei received his BA and MS degrees in computer science from Nanjing University of Aeronautics and Astronautics in 1984 and 1987, respectively, and his Ph.D. degree in computer science from Shanghai Jiaotong University in 1992. From 1992 to 1994, he was a postdoctoral research fellow at Peking University. Since 1997, he has been a professor and Ph.D. advisor in the Department of Computer Science and Engineering at Peking University.
He has also served as vice dean of the School of Electronics Engineering and of the Capital Development Institute at Peking University. His current research interests include software engineering and software engineering environments, software reuse and software component technology, distributed object technology, software production technology, and programming languages. He is a member of the Expert Committee for Computer Science and Technology of the State 863 High-Tech Program, a chief scientist of the State 973 Fundamental Research Program, a consultant of Bell Labs Research China, the director of the Special Interest Group of Software Engineering of the China Computer Federation (CCF), a member of the editorial boards of Sciences in China (Series F), ACTA ELECTRONICA SINICA, and the Journal of Software, and a guest professor of NUAA. He has also served on various program committees of international conferences.