Exercises
12. The Preprocessor
    Preprocessor Directives
    Macro Definition
    Quote and Concatenation Operators
    File Inclusion
    Conditional Compilation
    Other Directives
    Predefined Identifiers
    Exercises
Solutions to Exercises
Preface

Since its introduction less than a decade ago, C++ has experienced growing acceptance as a practical object-oriented programming language suitable for teaching, research, and commercial software development. The language has also rapidly evolved during this period and acquired a number of new features (e.g., templates and exception handling) which have added to its richness.

This book serves as an introduction to the C++ language. It teaches how to program in C++ and how to properly use its features. It does not attempt to teach object-oriented design to any depth, which I believe is best covered in a book in its own right.

In designing this book, I have strived to achieve three goals. First, to produce a concise introductory text, free from unnecessary verbosity, so that beginners can develop a good understanding of the language in a short period of time. Second, I have tried to combine a tutorial style (based on explanation of concepts through examples) with a reference style (based on a flat structure). As a result, each chapter consists of a list of relatively short sections (mostly one or two pages), with no further subdivision. This, I hope, further simplifies the reader's task. Finally, I have consciously avoided trying to present an absolutely complete description of C++. While no important topic has been omitted, descriptions of some of the minor idiosyncrasies have been avoided for the sake of clarity and to avoid overwhelming beginners with too much information. Experience suggests that any small knowledge gaps left as a result will be easily filled over time through self-discovery.

Intended Audience

This book introduces C++ as an object-oriented programming language. No previous knowledge of C or any other programming language is assumed. Readers who have already been exposed to a high-level programming language (such as C or Pascal) will be able to skip over some of the earlier material in this book.

Although the book is primarily designed for use in undergraduate computer science courses, it will be equally useful to professional programmers and hobbyists who intend to learn the language on their own. The entire book can be easily covered in 10-15 lectures, making it suitable for a one-term or one-semester course. It can also be used as the basis of an intensive 4-5 day industrial training course.

Structure of the Book

The book is divided into 12 chapters. Each chapter has a flat structure, consisting of an unnumbered sequence of sections, most of which are limited to one or two pages. The aim is to present each new topic in a confined space so that it can be quickly grasped. Each chapter ends with a list of exercises. Readers are encouraged to attempt as many of the exercises as feasible and to compare their solutions against the ones provided. Answers to all of the exercises are provided in an appendix. For the convenience of readers, the sample programs presented in this book (including the solutions to the exercises) are provided in electronic form.
1. Preliminaries

This chapter introduces the basic elements of a C++ program. We will use simple examples to show the structure of C++ programs and the way they are compiled. Elementary concepts such as constants, variables, and their storage in memory will also be discussed. The following is a cursory description of the concept of programming for the benefit of those who are new to the subject.

Programming

A digital computer is a useful tool for solving a great variety of problems. A solution to a problem is called an algorithm; it describes the sequence of steps to be performed for the problem to be solved. An algorithm is expressed in abstract terms. A simple example of a problem and an algorithm for it would be:

Problem:    Sort a list of names in ascending lexicographic order.
Algorithm:  Call the given list list1; create an empty list, list2, to hold the sorted list. Repeatedly find the 'smallest' name in list1, remove it from list1, and make it the next entry of list2, until list1 is empty.

To be intelligible to a computer, an algorithm needs to be expressed in a language understood by it. The only language really understood by a computer is its own machine language. Programs expressed in the machine language are said to be executable. A program written in any other language needs to be first translated to the machine language before it can be executed.

A machine language is far too cryptic to be suitable for the direct use of programmers. A further abstraction of this language is the assembly language, which provides mnemonic names for the instructions and a more intelligible notation for the data. An assembly language program is translated to machine language by a translator called an assembler.

Even assembly languages are difficult to work with. High-level languages such as C++ provide a much more convenient notation for implementing algorithms. They liberate programmers from having to think in very low-level terms, and help them to focus on the algorithm instead. A program written in a high-level language is translated to assembly language by a translator called a compiler. The assembly code produced by the compiler is then assembled to produce an executable program.
A Simple C++ Program

Listing 1.1 shows our first C++ program, which when run, simply outputs the message Hello World.

Listing 1.1
1   #include <iostream.h>
2   int main (void)
3   {
4       cout << "Hello World\n";
5   }

Annotation

1   This line uses the preprocessor directive #include to include the contents of the header file iostream.h in the program. Iostream.h is a standard C++ header file and contains definitions for input and output.
2   This line defines a function called main. A function may have zero or more parameters; these always appear after the function name, between a pair of brackets. The word void appearing between the brackets indicates that main has no parameters. A function may also have a return type; this always appears before the function name. The return type for main is int (i.e., an integer number). All C++ programs must have exactly one main function. Program execution always begins from main.
3   This brace marks the beginning of the body of main.
4   This line is a statement. A statement is a computation step which may produce a value. The end of a statement is always marked with a semicolon (;). This statement causes the string "Hello World\n" to be sent to the cout output stream. A string is any sequence of characters enclosed in double-quotes. The last character in this string (\n) is a newline character, which is similar to a carriage return on a typewriter. Cout is the standard output stream in C++ (standard output usually means your computer monitor screen). The symbol << is an output operator which takes an output stream as its left operand and an expression as its right operand, and causes the value of the latter to be sent to the former. In this case, the effect is that the string "Hello World\n" is sent to cout, causing it to be printed on the computer monitor screen. A stream is an object which performs input or output.
5   This brace marks the end of the body of main.
Compiling a Simple C++ Program

Dialog 1.1 shows how the program in Listing 1.1 is compiled and run in a typical UNIX environment. User input appears in bold and system response in plain.

Dialog 1.1
1   $ CC hello.cc
2   $ a.out
3   Hello World
4   $

Annotation

1   The command for invoking the AT&T C++ translator in a UNIX environment is CC. The argument to this command (hello.cc) is the name of the file which contains the program. As a convention, the file name should end in .c, .C, or .cc. (This ending may be different in other systems.) The naming convention under MS-DOS and Windows is that C++ source file names should end in .cpp. The UNIX command line prompt appears as a dollar symbol ($).
2   The result of compilation is an executable file which is by default named a.out. To run the program, we just use a.out as a command.
3   This is the output produced by the program.
4   The return of the system prompt indicates that the program has completed its execution.

The CC command accepts a variety of useful options. An option appears as -name, where name is the name of the option (usually a single letter). Some options take arguments. For example, the output option (-o) allows you to specify a name for the executable file produced by the compiler instead of a.out. Dialog 1.2 illustrates the use of this option by specifying hello as the name of the executable file.

Dialog 1.2
1   $ CC hello.cc -o hello
2   $ hello
3   Hello World
4   $

Although the actual command may be different depending on the make of the compiler, a similar compilation procedure is used under MS-DOS. Windows-based C++ compilers offer a user-friendly environment where compilation is as simple as choosing a menu command.
How C++ Compilation Works

Compiling a C++ program involves a number of steps (most of which are transparent to the user):

•   First, the C++ preprocessor goes over the program text and carries out the instructions specified by the preprocessor directives (e.g., #include). The result is a modified program text which no longer contains any directives. (Chapter 12 describes the preprocessor in detail.)
•   Then, the C++ compiler translates the program code. The compiler may be a true C++ compiler which generates native (assembly or machine) code, or just a translator which translates the code into C. In the latter case, the resulting C code is then passed through a C compiler to produce native object code. In either case, the outcome may be incomplete due to the program referring to library routines which are not defined as a part of the program. For example, Listing 1.1 refers to the << operator which is actually defined in a separate IO library.
•   Finally, the linker completes the object code by linking it with the object code of any library modules that the program may have referred to. The final result is an executable file.

In practice all these steps are usually invoked by a single command (e.g., CC) and the user will not even see the intermediate files generated. Figure 1.1 illustrates the above steps for both a C++ translator and a C++ native compiler.

Figure 1.1 C++ Compilation

    C++ Program -> C++ TRANSLATOR -> C Code -> C COMPILER -> Object Code -> LINKER -> Executable
    C++ Program -> C++ NATIVE COMPILER -----------------> Object Code -> LINKER -> Executable
Variables

A variable is a symbolic name for a memory location in which data can be stored and subsequently recalled. Variables are used for holding data values so that they can be utilized in various computations in a program. All variables have two important attributes:

•   A type which is established when the variable is defined (e.g., integer, real, character). Once defined, the type of a C++ variable cannot be changed.
•   A value which can be changed by assigning a new value to the variable. The kind of values a variable can assume depends on its type. For example, an integer variable can only take integer values (e.g., 2, 100, -12).

Listing 1.2 illustrates the uses of some simple variables.

Listing 1.2
1   #include <iostream.h>
2   int main (void)
3   {
4       int   workDays;
5       float workHours, payRate, weeklyPay;
6       workDays = 5;
7       workHours = 7.5;
8       payRate = 38.55;
9       weeklyPay = workDays * workHours * payRate;
10      cout << "Weekly Pay = ";
11      cout << weeklyPay;
12      cout << '\n';
13  }

Annotation

4   This line defines an int (integer) variable called workDays, which will represent the number of working days in a week. As a general rule, a variable is defined by specifying its type first, followed by the variable name, followed by a semicolon.
5   This line defines three float (real) variables which, respectively, represent the work hours per day, the hourly pay rate, and the weekly pay. As illustrated by this line, multiple variables of the same type can be defined at once by separating them with commas.
6   This line is an assignment statement. It assigns the value 5 to the variable workDays. Therefore, after this statement is executed, workDays denotes the value 5.
7   This line assigns the value 7.5 to the variable workHours.
8   This line assigns the value 38.55 to the variable payRate.
9   This line calculates the weekly pay as the product of workDays, workHours, and payRate (* is the multiplication operator). The resulting value is stored in weeklyPay.
10-12   These lines output three items in sequence: the string "Weekly Pay = ", the value of the variable weeklyPay, and a newline character.

When run, the program will produce the following output:

Weekly Pay = 1445.625

When a variable is defined, its value is undefined until it is actually assigned one. For example, weeklyPay has an undefined value (i.e., whatever happens to be in the memory location which the variable denotes at the time) until line 9 is executed. It is important to ensure that a variable is initialized before it is used in any computation.

It is possible to define a variable and initialize it at the same time. The assigning of a value to a variable for the first time is called initialization. This is considered a good programming practice, because it pre-empts the possibility of using the variable prior to it being initialized. Listing 1.3 is a revised version of Listing 1.2 which uses this technique. For all intents and purposes, the two programs are equivalent.

Listing 1.3
1   #include <iostream.h>
2   int main (void)
3   {
4       int   workDays = 5;
5       float workHours = 7.5;
6       float payRate = 38.55;
7       float weeklyPay = workDays * workHours * payRate;
8       cout << "Weekly Pay = ";
9       cout << weeklyPay;
10      cout << '\n';
11  }
Simple Input/Output

The most common way in which a program communicates with the outside world is through simple, character-oriented Input/Output (IO) operations. C++ provides two useful operators for this purpose: >> for input and << for output. We have already seen examples of output using <<. Listing 1.4 also illustrates the use of >> for input.

Listing 1.4
1   #include <iostream.h>
2   int main (void)
3   {
4       int   workDays = 5;
5       float workHours = 7.5;
6       float payRate, weeklyPay;
7       cout << "What is the hourly pay rate? ";
8       cin >> payRate;
9       weeklyPay = workDays * workHours * payRate;
10      cout << "Weekly Pay = ";
11      cout << weeklyPay;
12      cout << '\n';
13  }

Annotation

7   This line outputs the prompt What is the hourly pay rate? to seek user input.
8   This line reads the input value typed by the user and copies it to payRate. The input operator >> takes an input stream as its left operand (cin is the standard C++ input stream which corresponds to data entered via the keyboard) and a variable (to which the input data is copied) as its right operand.
9-13    The rest of the program is as before.

When run, the program will produce the following output (user input appears in bold):

What is the hourly pay rate? 33.55
Weekly Pay = 1258.125

Both << and >> return their left operand as their result, enabling multiple input or multiple output operations to be combined into one statement. This is illustrated by Listing 1.5 which now allows the input of both the daily work hours and the hourly pay rate.

Listing 1.5
1   #include <iostream.h>
2   int main (void)
3   {
4       int   workDays = 5;
5       float workHours, payRate, weeklyPay;
6       cout << "What are the work hours and the hourly pay rate? ";
7       cin >> workHours >> payRate;
8       weeklyPay = workDays * workHours * payRate;
9       cout << "Weekly Pay = " << weeklyPay << '\n';
10  }

Annotation

7   This line reads two input values typed by the user and copies them to workHours and payRate, respectively. The two values should be separated by white space (i.e., one or more space or tab characters). This statement is equivalent to:

        (cin >> workHours) >> payRate;

    Because the result of >> is its left operand, (cin >> workHours) evaluates to cin which is then used as the left operand of the next >> operator.
9   This line is the result of combining lines 10-12 from Listing 1.4. It outputs "Weekly Pay = ", followed by the value of weeklyPay, followed by a newline character. This statement is equivalent to:

        ((cout << "Weekly Pay = ") << weeklyPay) << '\n';

    Because the result of << is its left operand, (cout << "Weekly Pay = ") evaluates to cout which is then used as the left operand of the next << operator.

When run, the program will produce the following output:

What are the work hours and the hourly pay rate? 7.5 33.55
Weekly Pay = 1258.125
Comments

A comment is a piece of descriptive text which explains some aspect of a program. Program comments are totally ignored by the compiler and are only intended for human readers. C++ provides two types of comment delimiters:

•   Anything after // (until the end of the line on which it appears) is considered a comment.
•   Anything enclosed by the pair /* and */ is considered a comment.

Listing 1.6 illustrates the use of both forms.

Listing 1.6
1   #include <iostream.h>
2
3   /* This program calculates the weekly gross pay for a worker,
4      based on the total number of hours worked and the hourly pay rate. */
5   int main (void)
6   {
7       int   workDays = 5;        // Number of work days per week
8       float workHours = 7.5;     // Number of work hours per day
9       float payRate = 33.50;     // Hourly pay rate
10      float weeklyPay;           // Gross weekly pay
11      weeklyPay = workDays * workHours * payRate;
12      cout << "Weekly Pay = " << weeklyPay << '\n';
13  }

Comments should be used to enhance (not to hinder) the readability of a program. The following points, in particular, should be noted:

•   A comment should be easier to read and understand than the code which it tries to explain. A confusing or unnecessarily-complex comment is worse than no comment at all.
•   Over-use of comments can lead to even less readability. A program which contains so much comment that you can hardly see the code can by no means be considered readable.
•   Use of descriptive names for variables and other entities in a program, and proper indentation of the code, can reduce the need for using comments.

The best guideline for how to use comments is to simply apply common sense.
Memory

A computer provides a Random Access Memory (RAM) for storing executable program code as well as the data the program manipulates. This memory can be thought of as a contiguous sequence of bits, each of which is capable of storing a binary digit (0 or 1). Typically, the memory is also divided into groups of 8 consecutive bits (called bytes). The bytes are sequentially addressed. Therefore each byte can be uniquely identified by its address (see Figure 1.2).

Figure 1.2 Bits and bytes in memory.

    ... | Byte 1211 | Byte 1212 | Byte 1213 | Byte 1214 | Byte 1215 | Byte 1216 | Byte 1217 | ...
          (each byte consists of 8 bits, e.g. 1 1 0 1 0 0 0 1)

The C++ compiler generates executable code which maps data entities to memory locations. For example, the variable definition

    int salary = 65000;

causes the compiler to allocate a few bytes to represent salary. The exact number of bytes allocated and the method used for the binary representation of the integer depends on the specific C++ implementation, but let us say two bytes encoded as a 2's complement integer. The compiler uses the address of the first byte at which salary is allocated to refer to it. The above assignment causes the value 65000 to be stored as a 2's complement integer in the two bytes allocated (see Figure 1.3).

Figure 1.3 Representation of an integer in memory.

    ... | 1211 | 1212 | 1213 | 1214  1215 | 1216 | 1217 | ...
                              \___________/
                               salary (a two-byte integer whose address is 1214)

While the exact binary representation of a data item is rarely of interest to a programmer, the general organization of memory and use of addresses for referring to data items (as we will see later) is very important.
Integer Numbers

An integer variable may be defined to be of type short, int, or long. The only difference is that an int uses more or at least the same number of bytes as a short, and a long uses more or at least the same number of bytes as an int. For example, on the author's PC, a short uses 2 bytes, an int also 2 bytes, and a long 4 bytes.

    short age = 20;
    int   salary = 65000;
    long  price = 4500000;

By default, an integer variable is assumed to be signed (i.e., have a signed representation so that it can assume positive as well as negative values). However, an integer can be defined to be unsigned by using the keyword unsigned in its definition. The keyword signed is also allowed but is redundant.

    unsigned short age = 20;
    unsigned int   salary = 65000;
    unsigned long  price = 4500000;

A literal integer (e.g., 1984) is always assumed to be of type int, unless it has an L or l suffix, in which case it is treated as a long. Also, a literal integer can be specified to be unsigned using the suffix U or u. For example:

    1984L   1984l   1984U   1984u   1984LU   1984ul

Literal integers can be expressed in decimal, octal, and hexadecimal notations. The decimal notation is the one we have been using so far. An integer is taken to be octal if it is preceded by a zero (0), and hexadecimal if it is preceded by a 0x or 0X. For example:

    92      // decimal
    0134    // equivalent octal
    0x5C    // equivalent hexadecimal

Octal numbers use the base 8, and can therefore only use the digits 0-7. Hexadecimal numbers use the base 16, and therefore use the letters A-F (or a-f) to represent, respectively, 10-15. Octal and hexadecimal numbers are calculated as follows:

    0134 = 1 × 82 + 3 × 81 + 4 × 80 = 64 + 24 + 4 = 92
    0x5C = 5 × 161 + 12 × 160 = 80 + 12 = 92
Real Numbers

A real variable may be defined to be of type float or double. The latter uses more bytes and therefore offers a greater range and accuracy for representing real numbers. For example, on the author's PC, a float uses 4 and a double uses 8 bytes.

    float  interestRate = 0.06;
    double pi = 3.141592654;

A literal real (e.g., 0.06) is always assumed to be of type double, unless it has an F or f suffix, in which case it is treated as a float, or an L or l suffix, in which case it is treated as a long double. The latter uses more bytes than a double for better accuracy (e.g., 10 bytes on the author's PC). For example:

    0.06F   0.06f   3.141592654L   3.141592654l

In addition to the decimal notation used so far, literal reals may also be expressed in scientific notation. For example, 0.002164 may be written in the scientific notation as:

    2.164E-3   or   2.164e-3

The letter E (or e) stands for exponent. The scientific notation is interpreted as follows:

    2.164E-3 = 2.164 × 10-3
Characters

A character variable is defined to be of type char. A character variable occupies a single byte which contains the code for the character. This code is a numeric value and depends on the character coding system being used (i.e., is machine-dependent). The most common system is ASCII (American Standard Code for Information Interchange). For example, the character A has the ASCII code 65, and the character a has the ASCII code 97.

    char ch = 'A';

Like integers, a character variable may be specified to be signed or unsigned. By default (on most systems) char means signed char; however, on some systems it may mean unsigned char. A signed character variable can hold numeric values in the range -128 through 127. An unsigned character variable can hold numeric values in the range 0 through 255. As a result, both are often used to represent small integers in programs (and can be assigned numeric values like integers):

    signed char   offset = -88;
    unsigned char row = 2, column = 26;

A literal character is written by enclosing the character between a pair of single quotes (e.g., 'A'). Nonprintable characters are represented using escape sequences. The general escape sequence \ooo (i.e., a backslash followed by up to three octal digits) is used for this purpose. For example (assuming ASCII):

    '\12'    // newline (decimal code = 10)
    '\11'    // horizontal tab (decimal code = 9)
    '\101'   // 'A' (decimal code = 65)
    '\0'     // null (decimal code = 0)
Strings

A string is a consecutive sequence (i.e., array) of characters which are terminated by a null character. A string variable is defined to be of type char* (i.e., a pointer to character). A pointer is simply the address of a memory location. (Pointers will be discussed in Chapter 5.) A string variable, therefore, simply contains the address of where the first character of a string appears. For example, consider the definition:

    char *str = "HELLO";

Figure 1.4 illustrates how the string variable str and the string "HELLO" might appear in memory.

Figure 1.4 A string and a string variable in memory.

    str [1212] --> | 'H' | 'E' | 'L' | 'L' | 'O' | '\0' |
                    1212  1213  1214  1215  1216  1217

A literal string is written by enclosing its characters between a pair of double quotes (e.g., "HELLO"). The compiler always appends a null character to a literal string to mark its end. The shortest possible string is the null string ("") which simply consists of the null character. The characters of a string may be specified using any of the notations for specifying literal characters. For example:

    "Name\tAddress\tTelephone"   // tab-separated words
    "ASCII character 65: \101"   // 'A' specified as '101'

A long string may extend beyond a single line, in which case each of the preceding lines should be terminated by a backslash. For example:

    "Example to show \
    the use of backslash for \
    writing a long string"

The backslash in this context means that the rest of the string is continued on the next line. The above string is equivalent to the single-line string:

    "Example to show the use of backslash for writing a long string"

A common programming error results from confusing a single-character string (e.g., "A") with a single character (e.g., 'A'). These two are not equivalent: the former consists of two bytes (the character 'A' followed by the character '\0'), whereas the latter consists of a single byte.
Names

Programming languages use names to refer to the various entities that make up a program. We have already seen examples of an important category of such names (i.e., variable names). Other categories include: function names, type names, and macro names, which will be described later in this book.

Names are a programming convenience, which allow the programmer to organize what would otherwise be quantities of plain data into a meaningful and human-readable collection. As a result, no trace of a name is left in the final executable code generated by a compiler. For example, a temperature variable eventually becomes a few bytes of memory which is referred to by the executable code by its address, not its name.

C++ imposes the following rules for creating valid names (also called identifiers). A name should consist of one or more characters, each of which may be a letter (i.e., 'A'-'Z' and 'a'-'z'), a digit (i.e., '0'-'9'), or an underscore character ('_'), except that the first character may not be a digit. Upper and lower case letters are distinct. For example:

    salary    // valid identifier
    salary2   // valid identifier
    2salary   // invalid identifier (begins with a digit)
    _salary   // valid identifier
    Salary    // valid but distinct from salary

C++ imposes no limit on the number of characters in an identifier. However, most implementations do. But the limit is usually so large that it should not cause a concern (e.g., 255 characters).

Certain words are reserved by C++ for specific purposes and may not be used as identifiers. These are called reserved words or keywords and are summarized in Table 1.1:

Table 1.1 C++ keywords.

    asm       continue   float     new        signed    try
    auto      default    for       operator   sizeof    typedef
    break     delete     friend    private    static    union
    case      do         goto      protected  struct    unsigned
    catch     double     if        public     switch    virtual
    char      else       inline    register   template  void
    class     enum       int       return     this      volatile
    const     extern     long      short      throw     while
Exercises

1.1  Write a program which inputs a temperature reading expressed in Fahrenheit and outputs its equivalent in Celsius, using the formula:

        °C = 5/9 × (°F - 32)

     Compile and run the program. Its behavior should resemble this:

        Temperature in Fahrenheit: 41
        41 degrees Fahrenheit = 5 degrees Celsius

1.2  Which of the following represent valid variable definitions?

        int n = -100;
        unsigned int i = -100;
        signed int = 2.9;
        long m = 2, p = 4;
        int 2k;
        double x = 2 * m;
        float y = y * 2;
        unsigned double z = 0.0;
        double d = 0.67F;
        float f = 0.52L;
        signed char = -1786;
        char c = '$' + 2;
        sign char h = '\111';
        char *name = "Peter Pan";
        unsigned char *num = "276811";

1.3  Which of the following represent valid identifiers?

        identifier
        seven_11
        _unique_
        gross-income
        gross$income
        2by2
        default
        average_weight_of_a_large_pizza
        variable
        object.oriented

1.4  Define variables to represent the following entities:

        •  Age of a person.
        •  Income of an employee.
        •  Number of words in a dictionary.
        •  A letter of the alphabet.
        •  A greeting message.
2. Expressions

This chapter introduces the built-in C++ operators for composing expressions. An expression is any computation which yields a value.

When discussing expressions, we often use the term evaluation. For example, we say that an expression evaluates to a certain value. Usually the final value is the only reason for evaluating the expression. However, in some cases, the expression may also produce side-effects. These are permanent changes in the program state. In this sense, C++ expressions are different from mathematical expressions.

C++ provides operators for composing arithmetic, relational, logical, bitwise, and conditional expressions. It also provides operators which produce useful side-effects, such as assignment, increment, and decrement. We will look at each category of operators in turn. We will also discuss the precedence rules which govern the order of operator evaluation in a multi-operator expression.
Arithmetic Operators

C++ provides five basic arithmetic operators. These are summarized in Table 2.2.

Table 2.2 Arithmetic operators.

    Operator   Name             Example
    +          Addition         12 + 4.9    // gives 16.9
    -          Subtraction      3.98 - 4    // gives -0.02
    *          Multiplication   2 * 3.4     // gives 6.8
    /          Division         9 / 2.0     // gives 4.5
    %          Remainder        13 % 3      // gives 1

Except for remainder (%), all other arithmetic operators can accept a mix of integer and real operands. Generally, if both operands are integers then the result will be an integer. However, if one or both of the operands are reals then the result will be a real (or double to be exact).

When both operands of the division operator (/) are integers then the division is performed as an integer division and not the normal division we are used to. Integer division always results in an integer outcome (i.e., the result is always rounded down). For example:

    9 / 2     // gives 4, not 4.5!
    -9 / 2    // gives -5, not -4!

Unintended integer divisions are a common source of programming errors. To obtain a real division when both operands are integers, you should cast one of the operands to be real:

    int    cost = 100;
    int    volume = 80;
    double unitPrice = cost / (double) volume;    // gives 1.25

The remainder operator (%) expects integers for both of its operands. It returns the remainder of integer-dividing the operands. For example, 13%3 is calculated by integer-dividing 13 by 3 to give an outcome of 4 and a remainder of 1; the result is therefore 1.

It is possible for the outcome of an arithmetic operation to be too large for storing in a designated variable. This situation is called an overflow. The outcome of an overflow is machine-dependent and therefore undefined. For example:

    unsigned char k = 10 * 92;    // overflow: 920 > 255

It is illegal to divide a number by zero. This results in a run-time division-by-zero failure which typically causes the program to terminate.
Relational Operators

C++ provides six relational operators for comparing numeric quantities. These are summarized in Table 2.3. Relational operators evaluate to 1 (representing the true outcome) or 0 (representing the false outcome).

Table 2.3 Relational operators.

   Operator   Name                    Example
   ==         Equality                5 == 5       // gives 1
   !=         Inequality              5 != 5       // gives 0
   <          Less Than               5 < 5.5      // gives 1
   <=         Less Than or Equal      5 <= 5       // gives 1
   >          Greater Than            5 > 5.5      // gives 0
   >=         Greater Than or Equal   6.3 >= 5     // gives 1

Note that the <= and >= operators are only supported in the form shown. In particular, =< and => are both invalid and do not mean anything.

The operands of a relational operator must evaluate to a number. Characters are valid operands since they are represented by numeric values. For example (assuming ASCII coding):

   'A' < 'F'    // gives 1 (is like 65 < 70)

The relational operators should not be used for comparing strings, because this will result in the string addresses being compared, not the string contents. For example, the expression

   "HELLO" < "BYE"

causes the address of "HELLO" to be compared to the address of "BYE". As these addresses are determined by the compiler (in a machine-dependent manner), the outcome may be 0 or may be 1, and is therefore undefined. C++ provides library functions (e.g., strcmp) for the lexicographic comparison of strings. These will be described later in the book.
Logical Operators

C++ provides three logical operators for combining logical expressions. These are summarized in Table 2.4. Like the relational operators, logical operators evaluate to 1 or 0.

Table 2.4 Logical operators.

   Operator   Name               Example
   !          Logical Negation   !(5 == 5)        // gives 0
   &&         Logical And        5 < 6 && 6 < 6   // gives 0
   ||         Logical Or         5 < 6 || 6 < 5   // gives 1

Logical negation is a unary operator, which negates the logical value of its single operand. If its operand is nonzero it produces 0, and if it is 0 it produces 1.

Logical and produces 0 if one or both of its operands evaluate to 0. Otherwise, it produces 1. Logical or produces 0 if both of its operands evaluate to 0. Otherwise, it produces 1.

Note that here we talk of zero and nonzero operands (not zero and 1). In general, any nonzero value can be used to represent the logical true, whereas only zero represents the logical false. The following are, therefore, all valid logical expressions:

   !20          // gives 0
   10 && 5      // gives 1
   10 || 5.5    // gives 1
   10 && 0      // gives 0

C++ does not have a built-in boolean type. It is customary to use the type int for this purpose instead. For example:

   int sorted = 0;      // false
   int balanced = 1;    // true
Bitwise Operators

C++ provides six bitwise operators for manipulating the individual bits in an integer quantity. These are summarized in Table 2.5.

Table 2.5 Bitwise operators.

   Operator   Name                   Example
   ~          Bitwise Negation       ~'\011'            // gives '\366'
   &          Bitwise And            '\011' & '\027'    // gives '\001'
   |          Bitwise Or             '\011' | '\027'    // gives '\037'
   ^          Bitwise Exclusive Or   '\011' ^ '\027'    // gives '\036'
   <<         Bitwise Left Shift     '\011' << 2        // gives '\044'
   >>         Bitwise Right Shift    '\011' >> 2        // gives '\002'

Bitwise negation is a unary operator which reverses the bits in its operand. Bitwise and compares the corresponding bits of its operands and produces a 1 when both bits are 1, and 0 otherwise. Bitwise or compares the corresponding bits of its operands and produces a 0 when both bits are 0, and 1 otherwise. Bitwise exclusive or compares the corresponding bits of its operands and produces a 0 when both bits are 1 or both bits are 0, and 1 otherwise.

The bitwise left shift operator and the bitwise right shift operator both take a bit sequence as their left operand and a positive integer quantity n as their right operand. The former produces a bit sequence equal to the left operand but which has been shifted n bit positions to the left. The latter produces a bit sequence equal to the left operand but which has been shifted n bit positions to the right. Vacated bits at either end are set to 0.

To avoid worrying about the sign bit (which is machine dependent), it is common to declare a bit sequence as an unsigned quantity:

   unsigned char x = '\011';
   unsigned char y = '\027';

Table 2.6 illustrates bit sequences for the sample operands and results in Table 2.5.

Table 2.6 How the bits are calculated.

   Example   Octal Value   Bit Sequence
   x         011           0 0 0 0 1 0 0 1
   y         027           0 0 0 1 0 1 1 1
   ~x        366           1 1 1 1 0 1 1 0
   x & y     001           0 0 0 0 0 0 0 1
   x | y     037           0 0 0 1 1 1 1 1
   x ^ y     036           0 0 0 1 1 1 1 0
   x << 2    044           0 0 1 0 0 1 0 0
   x >> 2    002           0 0 0 0 0 0 1 0
Increment/Decrement Operators

The auto increment (++) and auto decrement (--) operators provide a convenient way of, respectively, adding and subtracting 1 from a numeric variable. These are summarized in Table 2.7. The examples assume the following variable definition:

   int k = 5;

Table 2.7 Increment and decrement operators.

   Operator   Name                       Example
   ++         Auto Increment (prefix)    ++k + 10    // gives 16
   ++         Auto Increment (postfix)   k++ + 10    // gives 15
   --         Auto Decrement (prefix)    --k + 10    // gives 14
   --         Auto Decrement (postfix)   k-- + 10    // gives 15

Both operators can be used in prefix and postfix form. The difference is significant. When used in prefix form, the operator is first applied and the outcome is then used in the expression. When used in the postfix form, the expression is evaluated first and then the operator applied.

Both operators may be applied to integer as well as real variables, although in practice real variables are rarely useful in this form.
Assignment Operator

The assignment operator is used for storing a value at some memory location (typically denoted by a variable). Its left operand should be an lvalue, and its right operand may be an arbitrary expression. The latter is evaluated and the outcome is stored in the location denoted by the lvalue.

An lvalue (standing for left value) is anything that denotes a memory location in which a value may be stored. The only kind of lvalue we have seen so far in this book is a variable. Other kinds of lvalues (based on pointers and references) will be described later in this book.

The assignment operator has a number of variants, obtained by combining it with the arithmetic and bitwise operators. These are summarized in Table 2.8. The examples assume that n is an integer variable.

Table 2.8 Assignment operators.

   Operator   Example         Equivalent To
   =          n = 25
   +=         n += 25         n = n + 25
   -=         n -= 25         n = n - 25
   *=         n *= 25         n = n * 25
   /=         n /= 25         n = n / 25
   %=         n %= 25         n = n % 25
   &=         n &= 0xF2F2     n = n & 0xF2F2
   |=         n |= 0xF2F2     n = n | 0xF2F2
   ^=         n ^= 0xF2F2     n = n ^ 0xF2F2
   <<=        n <<= 4         n = n << 4
   >>=        n >>= 4         n = n >> 4

An assignment operation is itself an expression whose value is the value stored in its left operand. An assignment operation can therefore be used as the right operand of another assignment operation. Any number of assignments can be concatenated in this fashion to form one expression. For example:

   int m, n, p;
   m = n = p = 100;          // means: m = (n = (p = 100));
   m = (n = p = 100) + 2;    // means: m = (n = (p = 100)) + 2;

This is equally applicable to other forms of assignment. For example:

   m = 100;
   m += n = p = 10;          // means: m = m + (n = p = 10);
Conditional Operator

The conditional operator takes three operands. It has the general form:

   operand1 ? operand2 : operand3

First operand1 is evaluated, which is treated as a logical condition. If the result is nonzero then operand2 is evaluated and its value is the final result. Otherwise, operand3 is evaluated and its value is the final result. For example:

   int m = 1, n = 2;
   int min = (m < n ? m : n);    // min receives 1

Note that of the second and the third operands of the conditional operator only one is evaluated. This may be significant when one or both contain side-effects (i.e., their evaluation causes a change to the value of a variable). For example, in

   int min = (m < n ? m++ : n++);

m is incremented because m++ is evaluated, but n is not incremented because n++ is not evaluated.

Because a conditional operation is itself an expression, it may be used as an operand of another conditional operation, that is, conditional expressions may be nested. For example:

   int m = 1, n = 2, p = 3;
   int min = (m < n ? (m < p ? m : p) : (n < p ? n : p));
Comma Operator

Multiple expressions can be combined into one expression using the comma operator. The comma operator takes two operands. It first evaluates the left operand and then the right operand, and returns the value of the latter as the final outcome. For example:

   int m, n, min;
   int mCount = 0, nCount = 0;
   //...
   min = (m < n ? (mCount++, m) : (nCount++, n));

Here when m is less than n, mCount++ is evaluated and the value of m is stored in min. Otherwise, nCount++ is evaluated and the value of n is stored in min. (The inner brackets are needed because the comma operator has a lower precedence than the conditional operator.)
The sizeof Operator

C++ provides a useful operator, sizeof, for calculating the size of any data item or type. It takes a single operand which may be a type name (e.g., int) or an expression (e.g., 100) and returns the size of the specified entity in bytes. The outcome is totally machine-dependent. Listing 2.7 illustrates the use of sizeof on the built-in types we have encountered so far.

Listing 2.7

   #include <iostream.h>

   int main (void)
   {
      cout << "char      size = " << sizeof(char) << " bytes\n";
      cout << "char*     size = " << sizeof(char*) << " bytes\n";
      cout << "short     size = " << sizeof(short) << " bytes\n";
      cout << "int       size = " << sizeof(int) << " bytes\n";
      cout << "long      size = " << sizeof(long) << " bytes\n";
      cout << "float     size = " << sizeof(float) << " bytes\n";
      cout << "double    size = " << sizeof(double) << " bytes\n";
      cout << "1.55      size = " << sizeof(1.55) << " bytes\n";
      cout << "1.55L     size = " << sizeof(1.55L) << " bytes\n";
      cout << "HELLO     size = " << sizeof("HELLO") << " bytes\n";
   }

When run, the program will produce the following output (on the author's PC):

   char      size = 1 bytes
   char*     size = 2 bytes
   short     size = 2 bytes
   int       size = 2 bytes
   long      size = 4 bytes
   float     size = 4 bytes
   double    size = 8 bytes
   1.55      size = 8 bytes
   1.55L     size = 10 bytes
   HELLO     size = 6 bytes
Operator Precedence

The order in which operators are evaluated in an expression is significant and is determined by precedence rules. These rules divide the C++ operators into a number of precedence levels (see Table 2.9). Operators in higher levels take precedence over operators in lower levels.

Table 2.9 Operator precedence levels.

   Level     Operator                                         Order
   Highest   ::                                               Left to Right
             ()  []  ->  .                                    Left to Right
             +  -  ++  --  !  ~  *  &  new  delete  sizeof    Right to Left
             ->*  .*                                          Left to Right
             *  /  %                                          Left to Right
             +  -                                             Left to Right
             <<  >>                                           Left to Right
             <  <=  >  >=                                     Left to Right
             ==  !=                                           Left to Right
             &                                                Left to Right
             ^                                                Left to Right
             |                                                Left to Right
             &&                                               Left to Right
             ||                                               Left to Right
             ? :                                              Left to Right
             =  +=  -=  *=  /=  %=  &=  |=  ^=  <<=  >>=      Right to Left
   Lowest    ,                                                Left to Right

For example, in

   a == b + c * d

c * d is evaluated first because * has a higher precedence than + and ==. The result is then added to b because + has a higher precedence than ==, and then == is evaluated. Operators with the same precedence level are evaluated in the order specified by the last column of Table 2.9. For example, in

   a = b += c

the evaluation order is right to left, so first b += c is evaluated, followed by a = b.

Precedence rules can be overridden using brackets. For example, rewriting the above expression as

   a == (b + c) * d

causes + to be evaluated before *.
Simple Type Conversion

A value in any of the built-in types we have seen so far can be converted (type-cast) to any of the other types. For example:

   (int) 3.14              // converts 3.14 to an int to give 3
   (long) 3.14             // converts 3.14 to a long to give 3L
   (double) 2              // converts 2 to a double to give 2.0
   (char) 122              // converts 122 to a char whose code is 122
   (unsigned short) 3.14   // gives 3 as an unsigned short

As shown by these examples, the built-in type identifiers can be used as type operators. Type operators are unary (i.e., take one operand) and appear inside brackets to the left of their operand. This is called explicit type conversion. When the type name is just one word, an alternate notation may be used in which the brackets appear around the operand:

   int(3.14)               // same as: (int) 3.14

In some cases, C++ also performs implicit type conversion. This happens when values of different types are mixed in an expression. For example:

   double d = 1;           // d receives 1.0
   int    i = 10.5;        // i receives 10
   i = i + d;              // means: i = int(double(i) + d)

In the last example, i + d involves mismatching types, so i is first converted to double (promoted) and then added to d. The result is a double which does not match the type of i on the left side of the assignment, so it is converted to int (demoted) before being assigned to i.

The above rules represent some simple but common cases for type conversion. More complex cases will be examined later in the book after we have discussed other data types and classes.
Exercises

2.5   Write expressions for the following:
      •  To test if a number n is even.
      •  To test if a character c is a digit.
      •  To test if a character c is a letter.
      •  To do the test: n is odd and positive or n is even and negative.
      •  To set the n-th bit of a long integer f to 1.
      •  To reset the n-th bit of a long integer f to 0.
      •  To give the absolute value of a number n.
      •  To give the number of characters in a null-terminated string literal s.

2.6   Add extra brackets to the following expressions to explicitly show the order in which the operators are evaluated:

      (n <= p + q && n >= p - q || n == 0)
      (++n * q-- / ++p - q)
      (n | p & q ^ p << 2 + q)
      (p < q ? n < p ? q * n - 2 : q / n + 1 : q - n)

2.7   What will be the value of each of the following variables after its initialization:

      double d = 2 * int(3.14);
      long   k = 3.14 - 3;
      char   c = 'a' + 2;
      char   c = 'p' + 'A' - 'a';

2.8   Write a program which inputs a positive integer n and outputs 2 raised to the power of n.

2.9   Write a program which inputs three numbers and outputs the message Sorted if the numbers are in ascending order, and outputs Not sorted otherwise.
3. Statements

This chapter introduces the various forms of C++ statements for composing programs. Statements represent the lowest-level building blocks of a program. Roughly speaking, each statement represents a computational step which has a certain side-effect. (A side-effect can be thought of as a change in the program state, such as the value of a variable changing because of an assignment.) Statements are useful because of the side-effects they cause, the combination of which enables the program to serve a specific purpose (e.g., sort a list of names).

A running program spends all of its time executing statements. The order in which statements are executed is called flow control (or control flow). This term reflects the fact that the currently executing statement has the control of the CPU, which when completed will be handed over (flow) to another statement. Flow control in a program is typically sequential, from one statement to the next, but may be diverted to other paths by branch statements. Flow control is an important consideration because it determines what is executed during a run and what is not, therefore affecting the overall outcome of the program.

Like many other procedural languages, C++ provides different forms of statements for different purposes. Declaration statements are used for defining variables. Assignment-like statements are used for simple, algebraic computations. Branching statements are used for specifying alternate paths of execution, depending on the outcome of a logical condition. Loop statements are used for specifying computations which need to be repeated until a certain logical condition is satisfied. Flow control statements are used to divert the execution path to another part of the program. We will discuss these in turn.
Simple and Compound Statements

A simple statement is a computation terminated by a semicolon. Variable definitions and semicolon-terminated expressions are examples:

   int i;                // declaration statement
   ++i;                  // this has a side-effect
   double d = 10.5;      // declaration statement
   d + 5;                // useless statement!

The last example represents a useless statement, because it has no side-effect (d is added to 5 and the result is just discarded).

The simplest statement is the null statement which consists of just a semicolon:

   ;                     // null statement

Although the null statement has no side-effect, it has some genuine uses, as we will see later in the chapter.

Multiple statements can be combined into a compound statement by enclosing them within braces. For example:

   {
      int min, i = 10, j = 20;
      min = (i < j ? i : j);
      cout << min << '\n';
   }

Compound statements are useful in two ways: (i) they allow us to put multiple statements in places where otherwise only single statements are allowed, and (ii) they allow us to introduce a new scope in the program. A scope is a part of the program text within which a variable remains defined. For example, the scope of min, i, and j in the above example is from where they are defined till the closing brace of the compound statement. Outside the compound statement, these variables are not defined.

Because a compound statement may contain variable definitions and defines a scope for them, it is also called a block. Blocks and scope rules will be described in more detail when we discuss functions in the next chapter.
The if Statement

It is sometimes desirable to make the execution of a statement dependent upon a condition being satisfied. The if statement provides a way of expressing this, the general form of which is:

   if (expression)
      statement;

First expression is evaluated. If the outcome is nonzero then statement is executed. Otherwise, nothing happens.

For example, when dividing two values, we may want to check that the denominator is nonzero:

   if (count != 0)
      average = sum / count;

To make multiple statements dependent on the same condition, we can use a compound statement:

   if (balance > 0) {
      interest = balance * creditRate;
      balance += interest;
   }

A variant form of the if statement allows us to specify two alternative statements: one which is executed if a condition is satisfied and one which is executed if the condition is not satisfied. This is called the if-else statement and has the general form:

   if (expression)
      statement1;
   else
      statement2;

First expression is evaluated. If the outcome is nonzero then statement1 is executed. Otherwise, statement2 is executed.

For example:

   if (balance > 0) {
      interest = balance * creditRate;
      balance += interest;
   } else {
      interest = balance * debitRate;
      balance += interest;
   }

Given the similarity between the two alternative parts, the whole statement can be simplified to:

   if (balance > 0)
      interest = balance * creditRate;
   else
      interest = balance * debitRate;
   balance += interest;

Or simplified even further using a conditional expression:

   interest = balance * (balance > 0 ? creditRate : debitRate);
   balance += interest;

Or just:

   balance += balance * (balance > 0 ? creditRate : debitRate);

If statements may be nested by having an if statement appear inside another if statement. For example:

   if (callHour > 6) {
      if (callDuration <= 5)
         charge = callDuration * tarrif1;
      else
         charge = 5 * tarrif1 + (callDuration - 5) * tarrif2;
   } else
      charge = flatFee;

A frequently-used form of nested if statements involves the else part consisting of another if-else statement. For example:

   if (ch >= '0' && ch <= '9')
      kind = digit;
   else {
      if (ch >= 'A' && ch <= 'Z')
         kind = upperLetter;
      else {
         if (ch >= 'a' && ch <= 'z')
            kind = lowerLetter;
         else
            kind = special;
      }
   }

For improved readability, it is conventional to format such cases as follows:

   if (ch >= '0' && ch <= '9')
      kind = digit;
   else if (ch >= 'A' && ch <= 'Z')
      kind = upperLetter;
   else if (ch >= 'a' && ch <= 'z')
      kind = lowerLetter;
   else
      kind = special;
The switch Statement

The switch statement provides a way of choosing between a set of alternatives, based on the value of an expression. The general form of the switch statement is:

   switch (expression) {
      case constant1:
           statements;
      ...
      case constantn:
           statements;
      default:
           statements;
   }

First expression (called the switch tag) is evaluated, and the outcome is compared to each of the numeric constants (called case labels), in the order they appear, until a match is found. The statements following the matching case are then executed. Note the plural: each case may be followed by zero or more statements (not just one statement). Execution continues until either a break statement is encountered or all intervening statements until the end of the switch statement are executed. The final default case is optional and is exercised if none of the earlier cases provide a match.

For example, suppose we have parsed a binary arithmetic operation into its three components and stored these in the variables op, operand1, and operand2. The following switch statement performs the operation and stores the result in result:

   switch (op) {
      case '+': result = operand1 + operand2;
                break;
      case '-': result = operand1 - operand2;
                break;
      case '*': result = operand1 * operand2;
                break;
      case '/': result = operand1 / operand2;
                break;
      default:  cout << "unknown operator: " << op << '\n';
                break;
   }

As illustrated by this example, it is usually necessary to include a break statement at the end of each case. The break terminates the switch statement by jumping to the very end of it. There are, however, situations in which it makes sense to have a case without a break. For example, if we extend the above statement to also allow x to be used as a multiplication operator, we will have:

   switch (op) {
      case '+': result = operand1 + operand2;
                break;
      case '-': result = operand1 - operand2;
                break;
      case 'x':
      case '*': result = operand1 * operand2;
                break;
      case '/': result = operand1 / operand2;
                break;
      default:  cout << "unknown operator: " << op << '\n';
                break;
   }

Because case 'x' has no break statement (in fact no statement at all!), when this case is satisfied, execution proceeds to the statements of the next case and the multiplication is performed.

It should be obvious that any switch statement can also be written as multiple if-else statements. The above statement, for example, may be written as:

   if (op == '+')
      result = operand1 + operand2;
   else if (op == '-')
      result = operand1 - operand2;
   else if (op == 'x' || op == '*')
      result = operand1 * operand2;
   else if (op == '/')
      result = operand1 / operand2;
   else
      cout << "unknown operator: " << op << '\n';

However, the switch version is arguably neater in this case. In general, preference should be given to the switch version when possible. The if-else approach should be reserved for situations where a switch cannot do the job (e.g., when the conditions involved are not simple equality expressions, or when the case labels are not numeric constants).
The while Statement

The while statement (also called while loop) provides a way of repeating a statement while a condition holds. It is one of the three flavors of iteration in C++. The general form of the while statement is:

   while (expression)
      statement;

First expression (called the loop condition) is evaluated. If the outcome is nonzero then statement (called the loop body) is executed and the whole process is repeated. Otherwise, the loop is terminated.

For example, suppose we wish to calculate the sum of all numbers from 1 to some integer denoted by n. This can be expressed as:

   i = 1;
   sum = 0;
   while (i <= n)
      sum += i++;

For n set to 5, Table 3.10 provides a trace of the loop by listing the values of the variables involved and the loop condition.

Table 3.10 While loop trace.

   Iteration   i   n   i <= n   sum += i++
   First       1   5   1        1
   Second      2   5   1        3
   Third       3   5   1        6
   Fourth      4   5   1        10
   Fifth       5   5   1        15
   Sixth       6   5   0

It is not unusual for the body of a while loop to be just a null statement. The following loop, for example, sets n to its greatest odd factor:

   while (n % 2 == 0 && (n /= 2))
      ;

Here the loop condition provides all the necessary computation, so there is no real need for a body. The loop condition not only tests that n is even, it also divides n by two and ensures that the loop will terminate should n be zero. (The brackets around n /= 2 are needed because assignment has a lower precedence than &&.)
The return Statement

The return statement enables a function to return a value to its caller. It has the general form:

   return expression;

where expression denotes the value returned by the function. The type of this value should match the return type of the function. For a function whose return type is void, expression should be empty:

   return;

The only function we have discussed so far is main, whose return type is always int. The return value of main is what the program returns to the operating system when it completes its execution. Under UNIX, for example, it is conventional to return 0 from main when the program executes without errors. Otherwise, a non-zero error code is returned. For example:

   int main (void)
   {
      cout << "Hello World\n";
      return 0;
   }

When a function has a non-void return value (as in the above example), failing to return a value will result in a compiler warning. The actual return value will be undefined in this case (i.e., it will be whatever value happens to be in its corresponding memory location at the time).
Exercises

3.10  Write a program which inputs a person's height (in centimeters) and weight (in kilograms) and outputs one of the messages: underweight, normal, or overweight, using the criteria:

      Underweight:  weight < height/2.5
      Normal:       height/2.5 <= weight <= height/2.3
      Overweight:   height/2.3 < weight

3.11  Assuming that n is 20, what will the following code fragment output when executed?

      if (n >= 0)
         if (n < 10)
            cout << "n is small\n";
      else
         cout << "n is negative\n";

3.12  Write a program which inputs a date in the format dd/mm/yy and outputs it in the format month dd, year. For example, 25/12/61 becomes: December 25, 1961

3.13  Write a program which inputs an integer value, checks that it is positive, and outputs its factorial, using the formulas:

      factorial(0) = 1
      factorial(n) = n × factorial(n-1)

3.14  Write a program which inputs an octal number and outputs its decimal equivalent. The following example illustrates the expected behavior of the program:

      Input an octal number: 214
      Octal(214) = Decimal(140)

3.15  Write a program which produces a simple multiplication table of the following format for integers in the range 1 to 9:

      1 x 1 = 1
      1 x 2 = 2
      ...
      9 x 9 = 81
4. Functions

This chapter describes user-defined functions as one of the main building blocks of C++ programs. The other main building block, user-defined classes, will be discussed in Chapter 6.

A function provides a convenient way of packaging a computational recipe, so that it can be used as often as required. A function definition consists of two parts: interface and body. The interface of a function (also called its prototype) specifies how it may be used. It consists of three entities:

•  The function name. This is simply a unique identifier.
•  The function parameters (also called its signature). This is a set of zero or more typed identifiers used for passing values to and from the function.
•  The function return type. This specifies the type of value the function returns. A function which returns nothing should have the return type void.

The body of a function contains the computational steps (statements) that comprise the function.

Using a function involves 'calling' it. A function call consists of the function name followed by the call operator brackets '()', inside which zero or more comma-separated arguments appear. The number of arguments should match the number of function parameters. Each argument is an expression whose type should match the type of the corresponding parameter in the function interface.

When a function call is executed, the arguments are first evaluated and their resulting values are assigned to the corresponding parameters. The function body is then executed. Finally, the function return value (if any) is passed to the caller.

Since a call to a function whose return type is non-void yields a return value, the call is an expression and may be used in other expressions. By contrast, a call to a function whose return type is void is a statement.
A Simple Function

Listing 4.8 shows the definition of a simple function which raises an integer to the power of another.

Listing 4.8

   1   int Power (int base, unsigned int exponent)
   2   {
   3      int result = 1;
   4      for (int i = 0; i < exponent; ++i)
   5         result *= base;
   6      return result;
   7   }

Annotation

1     This line defines the function interface. It starts with the return type of the function (int in this case). The function name appears next, followed by its parameter list. Power has two parameters (base and exponent) which are of types int and unsigned int, respectively. Note that the syntax for parameters is similar to the syntax for defining variables: type identifier followed by the parameter name. However, it is not possible to follow a type identifier with multiple comma-separated parameters:

         int Power (int base, exponent)    // Wrong!

2     This brace marks the beginning of the function body.
3     This line is a local variable definition.
4-5   This for-loop raises base to the power of exponent and stores the outcome in result.
6     This line returns result as the return value of the function.
7     This brace marks the end of the function body.

Listing 4.9 illustrates how this function is called. The effect of this call is that first the argument values 2 and 8 are, respectively, assigned to the parameters base and exponent, and then the function body is evaluated.

Listing 4.9

   1   #include <iostream.h>
   2
   3   main (void)
   4   {
   5      cout << "2 ^ 8 = " << Power(2,8) << '\n';
   6   }

When run, this program will produce the following output:

   2 ^ 8 = 256
In general, a function should be declared before it is used. A function declaration simply consists of the function prototype, which specifies the function name, parameter types, and return type. Line 2 in Listing 4.10 shows how Power may be declared for the above program. Although a function may be declared without its parameter names, this is not recommended unless the role of the parameters is obvious.

Listing 4.10

   1    #include <iostream.h>
   2    int Power (int, unsigned int);    // function declaration
   3
   4    main (void)
   5    {
   6       cout << "2 ^ 8 = " << Power(2,8) << '\n';
   7    }
   8
   9    int Power (int base, unsigned int exponent)
   10   {
   11      int result = 1;
   12      for (int i = 0; i < exponent; ++i)
   13         result *= base;
   14      return result;
   15   }

Because a function definition contains a prototype, it also serves as a declaration. Therefore if the definition of a function appears before its use, no additional declaration is needed. Use of function prototypes is nevertheless encouraged for all circumstances. Collecting these in a separate header file enables other programmers to quickly access the functions without having to read their entire definitions.
Parameters and Arguments

C++ supports two styles of parameters: value and reference. A value parameter receives a copy of the value of the argument passed to it. As a result, if the function makes any changes to the parameter, this will not affect the argument. For example, in

   #include <iostream.h>

   void Foo (int num)
   {
      num = 0;
      cout << "num = " << num << '\n';
   }

   int main (void)
   {
      int x = 10;
      Foo(x);
      cout << "x = " << x << '\n';
      return 0;
   }

the single parameter of Foo is a value parameter. As far as this function is concerned, num behaves just like a local variable inside the function. When the function is called and x passed to it, num receives a copy of the value of x. As a result, although num is set to 0 by the function, this does not affect x. The program produces the following output:

   num = 0
   x = 10

A reference parameter, on the other hand, receives the argument passed to it and works on it directly. Any changes made by the function to a reference parameter are in effect directly applied to the argument. Within the context of function calls, the two styles of passing arguments are, respectively, called pass-by-value and pass-by-reference. The former is used much more often in practice. It is perfectly valid for a function to use pass-by-value for some of its parameters and pass-by-reference for others. Reference parameters will be further discussed in Chapter 5.
Each block in a program defines a local scope .com Chapter 4: Functions 49 . } // global variable // global function // global function Uninitialized global variables are automatically initialized to zero. int main (void) { //. as we will see later.. The parameters of a function have the same scope as the function body. void Foo (int xyz) { if (xyz > 0) { double xyz. for example. } } // xyz is global // xyz is local to the body of Foo // xyz is local to this block there are three distinct scopes. ¨ www.. So. (However. outside functions and classes) is said to have a global scope .pragsoft. a variable need only be unique within its own scope. Generally. int Max (int. in int xyz. whereas the memory space for local variables is allocated on the fly during program execution. while local variables are created when their scope is entered and destroyed when their scope is exited..e. int).Global and Local Scope Everything defined at the program scope level (i.. global variables last for the duration of program execution. the lifetime of a variable is limited to its scope. This means that the same global variable or function may not be defined more than once at the global level. in which case the inner scopes override the outer scopes.) Global entities are generally accessible everywhere in the program. Variables defined within a local scope are visible to that scope only. Hence. each containing a distinct xyz. For example. Thus the sample functions we have seen so far all have a global scope.. Variables may also be defined at the global scope: int year = 1994. they must also be unique at the program level. a function name may be reused so long as its signature remains unique. Thus the body of a function represents a local scope. Local scopes may be nested. //. The memory space for global variables is reserved prior to program execution commencing. Since global entities are visible at the program level.
Scope Operator
Because a local scope overrides the global scope, having a local variable with the same name as a global variable makes the latter inaccessible to the local scope. For example, in

    int error;

    void Error (int error)
    {
        //...
    }

the global error is inaccessible inside Error, because it is overridden by the local error parameter. This problem is overcome using the unary scope operator :: which takes a global entity as argument:

    int error;

    void Error (int error)
    {
        //...
        if (::error != 0)    // refers to global error
            //...
    }
Auto Variables
Because the lifetime of a local variable is limited and is determined automatically, these variables are also called automatic. The storage class specifier auto may be used to explicitly specify a local variable to be automatic. For example:

    void Foo (void)
    {
        auto int xyz;    // same as: int xyz;
        //...
    }

This is rarely used because all local variables are by default automatic.
Register Variables
As mentioned earlier, variables generally denote memory locations where variable values are stored. When the program code refers to a variable (e.g., in an expression), the compiler generates machine code which accesses the memory location denoted by the variable. For frequently-used variables (e.g., loop variables), efficiency gains can be obtained by keeping the variable in a register instead, thereby avoiding memory access for that variable. The storage class specifier register may be used to indicate to the compiler that the variable should be stored in a register if possible. For example:

    for (register int i = 0; i < n; ++i)
        sum += i;

Here, each time round the loop, i is used three times: once when it is compared to n, once when it is added to sum, and once when it is incremented. Therefore it makes sense to keep i in a register for the duration of the loop.

Note that register is only a hint to the compiler. One reason for this is that any machine has a limited number of registers and it may be the case that they are all in use, so in some cases the compiler may choose not to use a register when it is asked to do so. Even when the programmer does not use register declarations, many optimizing compilers try to make an intelligent guess and use registers where they are likely to improve the performance of the program. Use of register declarations can be left as an afterthought; they can always be added later by reviewing the code and inserting them in appropriate places.
. In other words. Static local variables are useful when we want the value of a local variable to persist across the calls to the function in which it appears. consider a puzzle game program which consists of three files for game generation. The game solution file would contain a Solve function and a number of other functions ancillary to Solve. game solution.com Chapter 4: Functions 53 .. and user interface.. //. The variable will remain only accessible within its local scope. ¨ www. } // static local variable Like global variables.pragsoft. // static global variable A local variable in a function may also be defined as static. For example. a static local variable is a global variable which is only accessible within its local scope. but will instead be global. it is best not to make them accessible outside the file: static int FindNextRoute (void) // only accessible in this file { //. int Solve (void) { //. For example.. } // accessible outside this file The same argument may be applied to the global variables in this file that are for the private use of the functions in the file.. For example.. This is facilitated by the storage class specifier static. its lifetime will no longer be confined to this scope. } //. Because the latter are only for the private use of Solve.. if (++count > limit) Abort().. static local variables are automatically initialized to 0.. a global variable which records the length of the shortest route so far is best defined as static: static int shortestRoute. however.
Extern Variables and Functions
Because a global variable may be defined in one file and referred to in other files, some means of telling the compiler that the variable is defined elsewhere may be needed. Otherwise, the compiler may object to the variable as undefined. This is facilitated by an extern declaration. For example, the declaration

    extern int size;    // variable declaration

informs the compiler that size is actually defined somewhere (may be later in this file or in another file). This is called a variable declaration (not definition) because it does not lead to any storage being allocated for size.

It is a poor programming practice to include an initializer for an extern variable, since this causes it to become a variable definition and have storage allocated for it:

    extern int size = 10;    // no longer a declaration!

If there is another definition for size elsewhere in the program, it will eventually clash with this one.

Function prototypes may also be declared as extern, but this has no effect when a prototype appears at the global scope. It is more useful for declaring function prototypes inside a function. For example:

    double Tangent (double angle)
    {
        extern double sin(double);    // defined elsewhere
        extern double cos(double);    // defined elsewhere
        return sin(angle) / cos(angle);
    }

The best place for extern declarations is usually in header files so that they can be easily included and shared by source files.
2. A constant must be initialized to some value when it is defined. const double pi = 3. // illegal! str2[0] = 'P'. const char *const str3 = "constant pointer to constant". and the object pointed to. } The usual place for constant definition is within header files so that they can be shared by source files. // maxSize is of type int With pointers. // illegal! str1 = "ptr to const". // illegal! A constant with no type specifier is assumed to be of type int: const maxSize = 128. Once defined.e.com Chapter 4: Functions 55 . the value of a constant cannot be changed: maxSize = 256. // ok str3 = "const to const ptr". For example: const int maxSize = 128. ¨ www.. // ok str2 = "const ptr".. char *const str2 = "constant pointer". two aspects need to be considered: the pointer itself.Symbolic Constants Preceding a variable definition by the keyword const makes that variable readonly (i. const unsigned int exponent) { //.141592654.. a symbolic constant). } A function may also return a constant result: const char* SystemVersion (void) { return "5.pragsoft. either of which or both can be constant: const char *str1 = "pointer to constant". // illegal! A function parameter may also be declared to be constant. This may be used to indicate that the function does not change the value of a parameter: int Power (const int base. // illegal! str3[0] = 'C'. str1[0] = 'P'.1".
where the name becomes a user-defined type. This is useful for declaring a set of closely-related constants. For example. d can only be assigned one of the enumerators for Direction. east = 0. //. introduces four enumerators which have integral values starting from 0 (i. west}. Enumerations are particularly useful for naming the cases of a switch statement. This is useful for defining variables which can only be assigned a limited set of values.. in enum Direction {north.. Direction d. switch (d) { case north: case south: case east: case west: } //.. which are readonly variables. however.. west}. //. We will extensively use the following enumeration for representing boolean values in the programs in this book: enum Bool {false. true}.Enumerations An enumeration of symbolic constants is introduced by an enum declaration. An enumeration can also be named.. ¨ 56 C++ Essentials Copyright © 2005 PragSoft . Here. south. For example. north is 0. The default numbering of enumerators can be overruled by explicit initialization: enum {north = 10. south is 1. enum {north. //. east..) Unlike symbolic constants. enumerators have no allocated memory. west}.. east. south. etc.e.. south is 11 and west is 1. south..
... and local variables.5 Function call stack frames.. this overhead is negligible compared to the actual computation the function performs. memory space is allocated on this stack for the function parameters...5 illustrates the stack frame when Normalize is being executed. consider a situation where main calls a function called Solve which in turn calls another function called Normalize: int Normalize (void) { //. Normalize(). the allocated stack frame is released so that it can be reused. Figure 4. } Figure 4... //. For most functions. For example.Runtime Stack Like many other modern programming languages. Solve(). C++ function call execution is based on a runtime stack. } int Solve (void) { //. //.. When a function is called. return value. } int main (void) { //. as well as a local stack area for expression evaluation.pragsoft. ¨ www.. When a function returns.com Chapter 4: Functions 57 . main Solve Normalize It is important to note that the calling of a function involves the overheads of creating a stack frame for it and removing the stack frame when it returns. The allocated space is called a stack frame.
For a value denoted by n. the use of inline should be restricted to simple. expands and substitutes the body of Abs in place of the call. if a function is defined inline in one file. Use of inline for excessively long and complex functions is almost certainly ignored by the compiler. Therefore. the compiler. this may be expressed as: (n > 0 ? n : -n) However. then it will have an impact on performance. ¨ 58 C++ Essentials Copyright © 2005 PragSoft . Consequently. it is better to define it as a function: int Abs (int n) { return n > 0 ? n : -n. } The function version has a number of advantages. frequently used functions. if Abs is used within a loop which is iterated thousands of times. is that its frequent use can lead to a considerable performance penalty due to the overheads associated with calling a function. Generally. it is reusable. no trace of the function itself will be left in the compiled code. A function which contains anything more than a couple of statements is unlikely to be a good candidate. no function call is involved and hence no stack frame is allocated. } The effect of this is that when Abs is called. For example. While essentially the same computation is performed. Second. Because calls to an inline function are expanded. Like the register keyword. inline is a hint which the compiler is not obliged to observe. inline functions are commonly placed in header files so that they can be shared. it may not be available to other files. it leads to a more readable program. First. The overhead can be avoided by defining Abs as an inline function: inline int Abs (int n) { return n > 0 ? n : -n. instead of replicating this expression in many places in the program. however.Inline Functions Suppose that a program frequently requires to find the absolute value of an integer quantity. instead of generating code to call Abs. it avoid undesirable sideeffects when the argument is itself an expression with side-effects. The disadvantage of the function version. 
And third.
Take the factorial problem. has the termination condition n == 0 which. all recursive functions can be rewritten using iteration. Table 4. the elegance and simplicity of the recursive version may give it the edge. As a general rule. return result. An iterative version is therefore preferred in this case: int Factorial (unsigned int n) { int result = 1. } ¨ www. one after the other.11 provides a trace of the calls to Factorial. } For n set to 3.11 Factorial(3) execution trace. the function will call itself indefinitely until the runtime stack overflows. Table 4. The Factorial function. (Note that for a negative n this condition will never be satisfied and Factorial will fail). while (n > 0) result *= n--.Recursion A function which calls itself is said to be recursive. The stack frames for these calls appear sequentially on the runtime stack. The second line clearly indicates that factorial is defined in terms of itself and hence can be expressed as a recursive function: int Factorial (unsigned int n) { return n == 0 ? 1 : n * Factorial(n-1).pragsoft. for instance. In situations where the number of stack frames involved may be quite large. for example. which is defined as: • Factorial of 0 is 1. for example. For factorial. In other cases. the iterative version is preferred. Otherwise. • Factorial of a positive number n is n times the factorial of n-1. when satisfied.com Chapter 4: Functions 59 . causes the recursive calls to fold back.. a very large argument will lead to as many stack frames. Recursion is a general programming technique applicable to problems which can be defined in terms of themselves.
int severity = 0). The accepted convention for default arguments is to specify them in function declarations. To avoid ambiguity. int severity).g. Default arguments are suitable for situations where certain (or all) function parameters frequently take the same values. ¨ 60 C++ Essentials Copyright © 2005 PragSoft . A less appropriate use of default arguments would be: int Power (int base. a default argument may be overridden by explicitly specifying an argument. For example. not function definitions. illegal to specify two different default arguments for the same function in a file. so long as the variables used in the expression are available to the scope of the function definition (e. for example. 3). global variables). Error("Round off error"). It is. Here. unsigned int exponent = 1). Because function declarations appear in header files. consider a function for reporting errors: void Error (char *message. this enables the user of a function to have control over the default arguments. all default arguments must be trailing arguments. severity 0 errors are more common than others and therefore a good candidate for default argument. In Error. // severity set to 3 // severity set to 0 As the first call illustrates. both the following calls are therefore valid: Error("Division by zero".Default Arguments Default argument is a programming convenience which removes the burden of having to specify argument values for all of a function’s parameters. however. Thus different default arguments can be specified for different situations. severity has a default argument of 0.. Because 1 (or any other value) is unlikely to be a frequently-used one in this situation. // Illegal! A default argument need not necessarily be a constant. The following declaration is therefore illegal: void Error (char *message = "Bomb". Arbitrary expressions can be used.
Args is initialized by calling va_start.h> 2 #include <stdarg. Menu can access its arguments using a set of macro definitions in the header file stdarg. return (choice > 0 && choice <= count) ? choice : 0. The second argument to va_arg must be the expected type of that argument (i. va_start(args. // argument list char* option = option1.Variable Number of Arguments It is sometimes desirable. option1). Va_arg is called repeatedly until this 0 is reached. and allows the user to choose one of the options. do { cout << ++count << ". the last argument must be a 0. For this technique to work. as illustrated by Listing 4.. Listing 4.e. to have functions which take a variable number of arguments. " << option << '\n'.h> 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Annotation int Menu (char *option1 .11. the function should be able to accept any number of options as arguments.. char*)) != 0).com Chapter 4: Functions 61 . This may be expressed as int Menu (char *option1 . if not necessary. cin >> choice.e.11 1 #include <iostream.. args is declared to be of type va_list. 11 Subsequent arguments are retrieved by calling va_arg. marking the end of the argument list. To be general. www. choice = 0.). } while ((option = va_arg(args.h. A simple example is a function which takes a set of menu options as arguments. option1 here). // clean up args cout << "option? ". } // initialize args 5 8 To access the arguments. The relevant macros are highlighted in bold. char* here)... displays the menu. int count = 0.pragsoft. The second argument to va_start must be the last function parameter explicitly declared in the function header (i. va_end(args). which states that Menu should be given one argument or more.) { va_list args..
"Quit application". Revert to saved file 4. "Revert to saved file". Open file 2. The sample call int n = Menu( "Open file". Delete file 5. will produce the following output: 1.12 Finally. Close file 3. 0). Quit application option? ¨ 62 C++ Essentials Copyright © 2005 PragSoft . va_end is called to restore the runtime stack (which may have been modified by the earlier calls). "Delete file". "Close file".
is an array of the string constants which represent the arguments. const char* argv[]). Listing 4.pragsoft. argv. return 0. it can be passed zero or more arguments. The second parameter. As an example. Because they appear on the same line as where operating system commands are issued. There are two ways in which main can be defined: int main (void).4 12. which is defined in stdlib.4" argv[2] is "12. denotes the number of arguments passed to the program (including the name of the program itself). i < argc. they are called command line arguments.h> 2 #include <stdlib. consider a program named sum which prints out the sum of a set of numbers provided to it as command line arguments.5" Listing 4. int main (int argc.3.com Chapter 4: Functions 63 . given the command line in Dialog 4. for (int i = 1. The latter is used when the program is intended to accept command line arguments.Command Line Arguments When a program is executed under an operating system (such as DOS or UNIX).12 1 #include <iostream.3 illustrates how two numbers are passed as arguments to sum ($ is the UNIX prompt). argc. Strings are converted to real numbers using atof. These arguments appear after the program executable name and are separated by blanks.h> 3 4 5 6 7 8 9 10 int main (int argc.9 $ Command line arguments are made available to a C++ program via the main function. ++i) sum += atof(argv[i]). we have: argc is 3 argv[0] is "sum" argv[1] is "10. Dialog 4.h. const char *argv[]) { double sum = 0. The first parameter.3 1 2 3 $ sum 10. } ¨ 22. Dialog 4. cout << sum << '\n'. For example.12 illustrates a simple implementation for sum.
Exercises

4.16 Given the following definition of a Swap function

        void Swap (int x, int y)
        {
            int temp = x;
            x = y;
            y = temp;
        }

     what will be the value of x and y after the following call:

        x = 10;
        y = 20;
        Swap(x, y);

4.17 Write the programs in exercises 1.1 and 3.1 as functions.

4.18 What will the following program output when executed?

        #include <iostream.h>

        char *str = "global";

        void Print (char *str)
        {
            cout << str << '\n';
            {
                char *str = "local";
                cout << str << '\n';
                cout << ::str << '\n';
            }
            cout << str << '\n';
        }

        int main (void)
        {
            Print("Parameter");
            return 0;
        }

4.19 Write a function which outputs all the prime numbers between 2 and a given positive integer n:

        void Primes (unsigned int n);

     A number is prime if it is only divisible by itself and 1.

4.20 Define an enumeration called Month for the months of the year and use it to define a function which takes a month as argument and returns it as a constant string.
4.21 Define an inline function called IsAlpha which returns nonzero when its argument is a letter, and zero otherwise.

4.22 Define a recursive version of the Power function described in this chapter.

4.23 Write a function which returns the sum of a list of real values

        double Sum (int n, double val ...);

     where n denotes the number of values in the list.
The act of getting to an object via a pointer to it. and reference data types and illustrates their use for defining variables.5. especially when large objects are being passed to functions. The dimension of an array is fixed and predetermined. A reference provides an alternative symbolic name ( alias) for an object. Pointers are useful for creating dynamic objects during program execution. Pointers. and References This chapter introduces the array. objects can be accessed in two ways: directly by their symbolic name. Accessing an object through a reference is exactly the same as accessing it through its original name. Dynamic objects do not obey the normal scope rules. www. and References 65 . An array consists of a set of objects (called its elements). Pointers. Pointer variables are defined to point to objects of a specific type so that when the pointer is dereferenced. it cannot be changed during program execution. Their scope is explicitly controlled by the programmer. pointer. a table of world cities and their current temperatures. A pointer is simply the address of an object in memory. They are used to support the call-by-reference style of function parameters. not its elements. or indirectly through a pointer. Unlike normal (global and local) objects which are allocated storage on the runtime stack. References offer the power of pointers and the convenience of direct access to objects. only the array itself has a symbolic name. all of which are of the same type and are arranged contiguously in memory.pragsoft.com Chapter 5: Arrays. Arrays. The number of elements in an array is called its dimension. Arrays are suitable for representing composite data which consist of many similar. a dynamic object is allocated memory from a different storage area called the heap. individual items. Generally. Each element is identified by an index which denotes the position of the element in the array. is called dereferencing the pointer. 
or the monthly transactions for a bank account. Examples include: a list of names. a typed object is obtained. In general.
int nums[3] = {5. 2 3 4 5 6 7 8 double Average (int nums[size]) { double average = 0. So. } Like other variables. the remaining elements are initialized to zero: int nums[3] = {5. Attempting to access a nonexistent array element (e. 10}. ++i) average += nums[i]. return average/size. Therefore. an array representing 10 height measurements (each being an integer quantity) may be defined as: int heights[10]. Listing 5.13 1 const int size = 3. 15}. initializes the three elements of nums to 5. The individual elements of the array are accessed by indexing the array. // nums[2] initializes to 0 66 C++ Essentials Copyright © 2005 PragSoft .g. i < size. for (register i = 0. and 15. Braces are used to specify a list of comma-separated initial values for array elements. Listing 5. for example. 10..Arrays An array variable is defined by specifying its dimension and the type of its elements. 10. we may write: heights[2] = 177. Each of heights elements can be treated as an integer variable. The first array element always has the index 0. heights[0] and heights[9] denote. For example. Processing of an array usually involves a loop which goes through the array element by element. to set the third element to 177. For example. respectively.13 illustrates this using a function which takes an array of integers and returns the average of its elements. the first and last element of heights. When the number of values in the initializer is less than the number of elements. an array may have an initializer. heights[-1] or heights[10]) leads to a serious runtime error (called ‘index out of bounds’ error). respectively.
When a complete initializer is used. but specified by an additional parameter. It is easy to calculate the dimension of an array using the sizeof operator. defines str to be an array of six characters: five letters and a null character. 'E'. 15}. Pointers.com Chapter 5: Arrays.pragsoft. For example. and References 67 . // no dimension needed Another situation in which the dimension can be omitted is for an array function parameter. 4 5 6 7 for (register i = 0. Listing 5. i < size. char str[] = {'H'. defines str to be an array of five characters.14 1 double Average (int nums[]. By contrast. The terminating null character is inserted by the compiler. return average/size. the dimension of ar is: sizeof(ar) / sizeof(Type) ¨ www. 'L'. 'O'}. 'L'. int size) 2 { 3 double average = 0. the Average function above can be improved by rewriting it so that the dimension of nums is not fixed to a constant. given an array ar whose element type is Type. Listing 5. The first definition of nums can therefore be equivalently written as: int nums[] = {5. For example.14 illustrates this. } A C++ string is simply an array of characters. because the number of elements is implicit in the initializer. char str[] = "HELLO". 10. For example. the array dimension becomes redundant. ++i) average += nums[i].
25. elements are accessed by indexing the array. however. 28. Figure 5. can imagine it as three rows of four integer entries each (see Figure 5. but the programmer’s perceived organization of the elements is different. it is equivalent to: int seasonTemp[3][4] = { 26. it is more versatile. 19. suppose we wish to represent the average seasonal temperature for three major Australian capital cities (see Table 5. 20 }. 22.Multidimensional Arrays An array may have more than one dimension (i. 38.6 . 19.12). 32.. 13}. 38. 17. The organization of this array in memory is as 12 consecutive integer elements. or higher).. 22. First row Second row Third row As before. Organization of seasonTemp in memory. A separate index is needed for each dimension.e. 13.. second column) is given by seasonTemp[0][1].12 Average seasonal temperature. For example. Because this is mapped to a one-dimensional array of 12 elements in memory. 17}. 20} }. 34. Table 5. 24. three. {28. For example. The nested initializer is preferred because as well as being more informative.. two. For example.. {24. 25. The programmer. 34. The array may be initialized using a nested initializer: int seasonTemp[3][4] = { {26. The organization of the array in memory is still the same (a contiguous sequence of elements). 26 34 22 17 24 32 19 13 28 38 25 20 . 32. it makes it possible to initialize only the first element of each row and have the rest default to zero: 68 C++ Essentials Copyright © 2005 PragSoft . Sydney’s average summer temperature (first row.6). Sydney Melbourne Brisbane Spring 26 24 28 Summer 34 32 38 Autumn 22 19 25 Winter 17 13 20 This may be represented by a two-dimensional array of integers: int seasonTemp[3][4].
25. 25. ++i) for (register j = 0. int seasonTemp[rows][columns] = { {26. 17}. We can also omit the first dimension (but not subsequent dimensions) and let it be derived from the initializer: int seasonTemp[][4] = { {26.15 1 const int rows 2 const int columns 3 4 5 6 7 8 9 10 11 12 13 14 15 16 = 3. = 4. Pointers.pragsoft. 20} }. i < rows. return highest. j < columns. } ¨ www. 20} }.int seasonTemp[3][4] = {{26}. 32. {28. {24}. 17}. 38. but uses nested loops instead of a single loop. ++j) if (temp[i][j] > highest) highest = temp[i][j]. 13}. 32. {24. 19. 34. 19. 38. 34. Listing 5. Processing a multidimensional array is similar to a one-dimensional array. {28. for (register i = 0. Listing 5.15 illustrates this by showing a function for finding the highest temperature in seasonTemp. 13}. 22. {24. {28}}. int HighestTemp (int temp[rows][columns]) { int highest = 0. 22.com Chapter 5: Arrays. and References 69 .
Pointers A pointer is simply the address of a memory location and provides an indirect way of accessing data in memory. Regardless of its type. The symbol & is the address operator. Figure 5. For example. and for marking the end of pointer-based data structures (e. it takes a pointer as argument and returns the contents of the location to which it points. linked lists). and is therefore equivalent to num. we say that ptr1 points to num. Figure 5. This is useful for defining pointers which may point to data of different types. will match any type. The null pointer is used for initializing pointers. it takes a variable as argument and returns the memory address of that variable. For example. we can write: ptr1 = &num. ¨ 70 C++ Essentials Copyright © 2005 PragSoft . converts ptr1 to char pointer before assigning it to ptr2. The symbol * is the dereference operator. num ptr1 Given that ptr1 points to num.. or whose type is originally unknown. *ptr2. A pointer variable is defined to ‘point to’ data of a specific type. // pointer to an int // pointer to a char The value of a pointer variable is the address to which it points. A pointer may be cast (type converted) to another type.7 illustrates this diagrammatically. given the definitions int num. In general.g. Therefore. For example: int char *ptr1. however. the type of a pointer must match the type of the data it is set to point to. ptr2 = (char*) ptr1. The effect of the above assignment is that the address of num is assigned to ptr1. a pointer may be assigned the value 0 (called the null pointer).7 A simple integer pointer. the expression *ptr1 dereferences ptr1 to get to what it points to. A pointer of type void*.
a block for storing a single integer and a block large enough for storing an array of 10 characters. For example. respectively. delete [] str.. For example. in void Foo (void) { char *str = new char[10].16 www. The heap is used for dynamically allocating memory blocks during program execution. Listing 5. As a result. The latter remains allocated until explicitly released by the programmer. a serious runtime error may occur. it is also called dynamic memory. It is harmless to apply delete to the 0 pointer. // delete an object // delete an array of objects Note that when the block to be deleted is an array. a variable on the stack). Similarly. } when Foo returns.Dynamic Memory In addition to the program stack (which is used for storing global variables and stack frames for function calls). int *ptr = new int. Should delete be applied to a pointer which points to anything but a dynamically-allocated object (e. called the heap. It takes a pointer as argument and releases the memory block to which it points. Pointers..com Chapter 5: Arrays.g. It returns a pointer to the allocated block. Dynamic objects are useful for creating data which last beyond the function call which creates them. but the memory block pointed to by str is not. For example: delete ptr. //. an additional [] should be included to indicate this. and References 71 . the program stack is also called static memory. another memory area. The delete operator is used for releasing memory blocks allocated by new..16 illustrates this using a function which takes a string parameter and returns a copy of the string. char *str = new char[10]. Two operators are used for allocating and deallocating memory blocks on the heap. is provided. the local variable str is destroyed. allocate. The significance of this will be explained later when we discuss classes. Memory allocated from the heap does not obey the same scope rules as normal variables.pragsoft. 
The new operator takes a type as argument and allocates a memory block for an object of that type. Listing 5.
The strlen function (declared in string. character by character. It is the responsibility of the programmer to deal with such possibilities. } 1 4 This is the standard string header file which declares a variety of functions for manipulating strings.h) copies its second argument to its first. str). The exception handling mechanism of C++ (explained in Chapter 10) provides a practical method of dealing with such problems. we add 1 to the total and allocate an array of characters of that size.h) counts the characters in its string argument up to (but excluding) the final null character.1 2 3 4 5 6 7 Annotation #include <string. ¨ 72 C++ Essentials Copyright © 2005 PragSoft . Should new be unable to allocate a block of the requested size. including the final null character. Because the null character is not included in the count. strcpy(copy.h> char* CopyOf (const char *str) { char *copy = new char[strlen(str) + 1]. 5 Because of the limited memory resources. The strcpy function (declared in string. it will return 0 instead. return copy. especially when many large blocks are allocated and none released. there is always the possibility that dynamic memory may be exhausted during program execution.
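A cleaned-up sketch of the CopyOf function from Listing 5.16, with behavior as described above (the std:: qualification is an addition for portability):

```cpp
#include <cstring>

// Returns a heap-allocated copy of str. The block is one byte larger
// than the string length, to accommodate the final null character.
// The caller is responsible for releasing the copy with delete [].
char* CopyOf(const char *str)
{
    char *copy = new char[std::strlen(str) + 1];
    std::strcpy(copy, str);
    return copy;
}
```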
one byte) so that it points to the second character of "HELLO". Pointers. because the outcome depends on the size of the object pointed to.e. *(ptr + 1). therefore. that the elements of "HELLO" can be referred to as *str.e. int n = ptr2 . Another form of pointer arithmetic allowed in C++ involves subtracting two pointers of the same type.pragsoft. Listing 5.17 1 void CopyString (char *dest.. suppose that an int is represented by 4 bytes. Listing 5. For example. 5 } www. and References 73 . four bytes) so that it points to the second element of nums. the elements of nums can be referred to as *ptr. 40}. whereas ptr++ advances ptr by one int (i.ptr1.17 shows as an example a string copying function similar to strcpy. For example: int *ptr1 = &nums[1]. *(str + 2). int *ptr2 = &nums[3].8 illustrates this diagrammatically. // pointer to first element str++ advances str by one char (i. Figure 5. int *ptr = &nums[0]. H E L L O \0 10 20 30 40 str str++ ptr ptr++ It follows. char *src) 2 { 3 while (*dest++ = *src++) 4 .com Chapter 5: Arrays. etc. given char *str = "HELLO".8 Pointer arithmetic.Pointer Arithmetic In C++ one can add an integer quantity to or subtract an integer quantity from a pointer. Pointer arithmetic is not the same as integer arithmetic. *(ptr + 2). // n becomes 2 Pointer arithmetic is very handy when processing the elements of an array. *(str + 1). 20. Figure 5. int nums[] = {10. Now. and *(ptr + 3).. This is frequently used by programmers and is called pointer arithmetic. Similarly. 30.
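The arithmetic rules above can be captured in two small helper functions (illustrative names, not from the text):

```cpp
#include <cstddef>

// Returns the element n positions past ptr; ptr + n advances by
// n ints (n * sizeof(int) bytes), not by n bytes.
int ElementAt(const int *ptr, int n)
{
    return *(ptr + n);
}

// Subtracting two pointers into the same array counts elements,
// not bytes.
std::ptrdiff_t Span(const int *first, const int *last)
{
    return last - first;
}
```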
Annotation

3 The condition of this loop assigns the contents of src to the contents of dest and then increments both pointers. This condition becomes 0 when the final null character of src is copied to dest.

It turns out that an array variable (such as nums) is itself the address of the first element of the array it represents. Hence the elements of nums can also be referred to using pointer arithmetic on nums, that is, nums[i] is equivalent to *(nums + i). The difference between nums and ptr is that nums is a constant, so it cannot be made to point to anything else, whereas ptr is a variable and can be made to point to any other integer.

Listing 5.18 shows how the HighestTemp function (shown earlier in Listing 5.15) can be improved using pointer arithmetic.

Listing 5.18
1   int HighestTemp (const int *temp, const int rows, const int columns)
2   {
3       int highest = 0;
4       for (register i = 0; i < rows; ++i)
5           for (register j = 0; j < columns; ++j)
6               if (*(temp + i * columns + j) > highest)
7                   highest = *(temp + i * columns + j);
8       return highest;
9   }

Annotation

1 Instead of passing an array to the function, we pass an int pointer and two additional parameters which specify the dimensions of the array. In this way, the function is not restricted to a specific array size. The expression *(temp + i * columns + j) is equivalent to temp[i][j] in the previous version of this function.

6 HighestTemp can be simplified even further by treating temp as a one-dimensional array of row * column integers. This is shown in Listing 5.19.

Listing 5.19
1   int HighestTemp (const int *temp, const int rows, const int columns)
2   {
3       int highest = 0;
4       for (register i = 0; i < rows * columns; ++i)
5           if (*(temp + i) > highest)
6               highest = *(temp + i);
7       return highest;
8   }
// Compare points to strcmp function Alternatively. For example. typically because the latter requires different versions of the former in different circumstances. Therefore: Compare = &strcmp. const char*).20. Compare("Tom". for example. The above definition is valid because strcmp has a matching function prototype: int strcmp(const char*. is such. defines a function pointer named Compare which can hold the address of any function that takes two constant character pointers as arguments and returns an integer. the two types must match. "Tim"). by making the comparison function a parameter of the search function. // Compare points to strcmp function The & operator is not necessary and can be omitted: Compare = strcmp. (*Compare)("Tom". As shown in Listing 5. This might not be appropriate for all cases. int (*Compare)(const char*. // direct call // indirect call // indirect call (abbreviated) A common use of a function pointer is to pass it as an argument to another function. or indirectly via Compare. we can make the latter independent of the former. Given the above definition of Compare. This function may use a comparison function (such as strcmp) for comparing the search string against the array strings. The pointer can then be used to indirectly call the function. strcmp can be either called directly.20 www. const char*) = strcmp. A good example is a binary search function for searching through a sorted array of strings. "Tim"). Listing 5. Pointers. When a function address is assigned to a function pointer.pragsoft. and References 75 . If we wanted to do the search in a non-case-sensitive manner then a different comparison function would be needed. "Tim"). The following three calls are equivalent: strcmp("Tom". strcmp is case-sensitive.com Chapter 5: Arrays.Function Pointers It is possible to take the address of a function and store it in a function pointer. const char*). the pointer can be defined and initialized at once: int (*Compare)(const char*. For example. 
The string comparison library function strcmp.
// not found } 1 Binary search is a well-known algorithm for searching through a sorted list of items. 14 If item is greater than the middle item. strcmp) << '\n'. while (bot <= top) { mid = (bot + top) / 2. char *table[]. // return item index else if (cmp < 0) top = mid . the search span is reduced by half. The following example shows how BinSearch may be called with strcmp passed as the comparison function: char *cities[] = {"Boston". int (*Compare)(const char*. int top = n . const char*)) { int bot = 0. then the search is restricted to the upper half of the array. // restrict search to lower half else bot = mid + 1. cmp. 4. cout << BinSearch("Sydney". Compare is the function pointer to be used for comparing item against the 2 7 array elements. int mid. "Tokyo"}. 16 Returns -1 to indicate that there was no matching item. This is repeated until the two ends of the search span (denoted by bot and top) collide. int n. then the search is restricted to the lower half of the array.1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Annotation int BinSearch (char *item. The search list is denoted by table which is an array of strings of dimension n. // restrict search to upper half } return -1. ¨ 76 C++ Essentials Copyright © 2005 PragSoft .1.1.table[mid])) == 0) return mid. the latter’s index is returned. or until a match is found. The search item is denoted by item. 9 10 If item matches the middle item. 11 If item is less than the middle item. "Sydney". The item is compared against the middle item of the array. This will output 2 as expected. Each time round this loop. if ((cmp = Compare(item. cities. "London".
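A minimal sketch of calling through a function pointer parameter, in the style of BinSearch's Compare argument (ApplyCompare is an illustrative name):

```cpp
#include <string.h>

// Invokes a strcmp-style comparison function through a pointer.
// (*compare)(a, b) and compare(a, b) are equivalent call forms.
int ApplyCompare(int (*compare)(const char*, const char*),
                 const char *a, const char *b)
{
    return (*compare)(a, b);
}
```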
References

A reference introduces an alias for an object. The notation for defining references is similar to that of pointers, except that & is used instead of *. For example,

    double num1 = 3.14;
    double &num2 = num1;    // num2 is a reference to num1

defines num2 as a reference to num1. After this definition num1 and num2 both refer to the same object, as if they were the same variable. It should be emphasized that a reference does not create a copy of an object, but merely a symbolic alias for it. Hence, after

    num1 = 0.16;

both num1 and num2 will denote the value 0.16.

A reference must always be initialized when it is defined: it should be an alias for something. It would be illegal to define a reference and initialize it later:

    double &num3;    // illegal: reference without an initializer
    num3 = num1;

You can also initialize a reference to a constant. In this case a copy of the constant is made (after any necessary type conversion) and the reference is set to refer to the copy:

    int &n = 1;    // n refers to a copy of 1

The reason that n becomes a reference to a copy of 1 rather than 1 itself is safety. Consider what could happen if this were not the case:

    int &x = 1;
    ++x;
    int y = x + 1;

The 1 in the first and the 1 in the third line are likely to be the same object (most compilers do constant optimization and allocate both 1's in the same memory location). So although we expect y to be 3, it could turn out to be 4. However, by forcing x to be a copy of 1, the compiler guarantees that the object denoted by x will be different from both 1's.

The most common use of references is for function parameters. Reference parameters facilitate the pass-by-reference style of arguments, as opposed to the pass-by-value style which we have used so far. To observe the differences, consider the three swap functions in Listing 5.21.

Listing 5.21
cout << i << ". cout << i << ". } void Swap3 (int &x. 20 ¨ 78 C++ Essentials Copyright © 2005 PragSoft . 10 10. because Swap1 receives a copy of the arguments. int *y) { int temp = *x. " << j << '\n'. &j). 20 20. j). " << j << '\n'. j). " << j << '\n'. it will produce the following output: 10. y = temp. Swap3(i. *x = *y. this has no effect on the arguments passed to the function. *y = temp. Swap2 overcomes the problem of Swap1 by using pointer parameters instead. x = y. Swap2(&i. x = y. } void Swap2 (int *x. int y) { int temp = x. int &y) { int temp = x. The following main function illustrates the differences: int main (void) { int i = 10. Swap2 gets to the original values and swaps 7 them. Swap1(i. What happens to the copy does not affect the original. The parameters become aliases for the arguments passed to the function and therefore swap them as intended. j = 20. 13 Swap3 overcomes the problem of Swap1 by using reference parameters instead. Swap3 has the added advantage that its call syntax is the same as Swap1 and involves no addressing or dereferencing. } // pass-by-value (objects) // pass-by-value (pointers) // pass-by-reference 1 Although Swap1 swaps x and y. By dereferencing the pointers. } When run.1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Annotation void Swap1 (int x. cout << i << ". y = temp.
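For reference, a standalone sketch of the pass-by-reference version (Swap3 above):

```cpp
// The parameters are aliases for the caller's arguments, so the
// swap affects the originals; the call syntax is the same as for
// pass-by-value.
void Swap3(int &x, int &y)
{
    int temp = x;
    x = y;
    y = temp;
}
```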
Typedef char Name[12]. Therefore: String Name uint str.pragsoft.. // is the same as: char *str. and uint becomes an alias for unsigned int.20 is a good candidate for typedef: typedef int (*Compare)(const char*. table[mid])) == 0) return mid.. ¨ Chapter 5: Arrays. a typedef defines an alias for a type. // is the same as: unsigned int n. Its main use is to simplify otherwise complicated type declarations as an aid to improved readability. int n. name.. The complicated declaration of Compare in Listing 5. n. This makes BinSearch’s signature arguably simpler. Compare comp) { //. if ((cmp = comp(item. Just as a reference defines an alias for an object. typedef unsigned int uint. and References 79 . } The typedef introduces Compare as a new type name for any function with the given prototype. const char*). char *table[].Typedefs Typedef is a syntactic facility for introducing symbolic names for data types. Pointers. The effect of these definitions is that String becomes an alias for char*. //. // is the same as: char name[12]. int BinSearch (char *item.. Here are a few examples: typedef char *String. Name becomes an alias for an array of 12 chars.
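A sketch combining the typedefs above (SameString is an illustrative helper, not from the text):

```cpp
#include <string.h>

typedef char *String;                                // alias for char*
typedef unsigned int uint;                           // alias for unsigned int
typedef int (*Compare)(const char*, const char*);    // function pointer alias

// The Compare alias simplifies the signature, as in BinSearch above.
// Returns 1 when the two strings compare equal, 0 otherwise.
uint SameString(String a, String b, Compare comp)
{
    return comp(a, b) == 0;
}
```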
const int size).2g Top Flake Cornabix Oatabix Ultrabran Write a function which outputs this table element by element.3g 0.27 Define a function to input a list of names and store them as dynamically-allocated strings in an array. return result. } 80 C++ Essentials Copyright © 2005 PragSoft . 5. ++i) result[i] = str[len . and a function to output them: void ReadNames (char *names[]. i < len.25 Define a function which reverses the order of the elements of an array of reals: void Reverse (double nums[]. void WriteArray (double nums[]. 5. input values for the elements of an array of reals and output the array elements: void ReadArray (double nums[].i . char *result = new char[len + 1]. A scan which involves no swapping indicates that the list is sorted.28 Rewrite the following function using pointer arithmetic: char* ReverseString (char *str) { int len = strlen(str). void WriteNames (char *names[]. 5. const int size). result[len] = '\0'.Exercises 5. 5. Define a two-dimensional array to capture this data: Fiber 12g 22g 28g 32g Sugar 25g 4g 5g 7g Fat 16g 8g 9g 2g Salt 0. Bubble sort involves repeated scans of the list. respectively. const int size). Write another function which sorts the list using bubble sort: void BubbleSort (char *names[]. where during each scan adjacent items are compared and swapped if out of order. const int size). const int size).24 Define two functions which. for (register i = 0.4g 0. const int size).5g 0.1].26 The following table specifies the major contents of four brands of breakfast cereals.
char *&name.5.29 Rewrite BubbleSort (from 5. Pointers. and References 81 . Rewrite the following using typedefs: void (*Swap)(double. double).30 www. usigned long *values[10][20].pragsoft. char *table[]. ¨ 5.27) so that it uses a function pointer for comparison of names.com Chapter 5: Arrays.
Destructors

Just as a constructor is used to initialize an object when it is created, a destructor is used to clean up the object just before it is destroyed. A destructor always has the same name as the class itself, but is preceded with a ~ symbol. Unlike constructors, a class may have at most one destructor. A destructor never takes any arguments and has no explicit return type.

Destructors are generally useful for classes which have pointer data members which point to memory blocks allocated by the class itself. In such cases it is important to release member-allocated memory before the object is destroyed. A destructor can do just that.

For example, our revised version of Set uses a dynamically-allocated array for the elems member. This memory should be released by a destructor:

    class Set {
    public:
        Set  (const int size);
        ~Set (void) {delete elems;}    // destructor
        //...
    private:
        int *elems;     // set elements
        int maxCard;    // maximum cardinality
        int card;       // set cardinality
    };

Now consider what happens when a Set is defined and used in a function:

    void Foo (void)
    {
        Set s(10);
        //...
    }

When Foo is called, the constructor for s is invoked, allocating storage for s.elems and initializing its data members. Next the rest of the body of Foo is executed. Finally, before Foo returns, the destructor for s is invoked, deleting the storage occupied by s.elems. Hence, as far as storage allocation is concerned, s behaves just like an automatic variable of a built-in type, which is created when its scope is entered and destroyed when its scope is left.

In general, an object's destructor is applied just before the object is destroyed. This in turn depends on the object's scope. For example, a global object is destroyed when program execution is completed, an automatic object is destroyed when its scope is left, and a dynamic object is destroyed when the delete operator is applied to it.
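A compilable sketch of such a class: the constructor allocates from the heap and the destructor releases it. AddElem and Card are minimal illustrative members, and delete [] is used here because elems is an array:

```cpp
#include <cassert>

class Set {
public:
    Set (const int size) : maxCard(size), card(0) { elems = new int[size]; }
    ~Set (void) { delete [] elems; }    // runs when the Set goes out of scope
    void AddElem (const int e) { if (card < maxCard) elems[card++] = e; }
    int Card (void) const { return card; }
private:
    int *elems;     // set elements
    int maxCard;    // maximum cardinality
    int card;       // set cardinality
};
```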
private: int elems[maxCard]. }. Examples of the first case will be provided in Chapter 7. class RealSet { //. This can be arranged by declaring SetToReal as a friend of RealSet. one for sets of integers and one for sets of reals: class IntSet { public: //. We want to define a function. }. int card. the overhead of calling AddElem for every member of the set may be unacceptable.AddElem((float) elems[i]). class RealSet { public: //. i < card. The implementation can be improved if we could gain access to the private members of both IntSet and RealSet. } Although this works.. ++i) set.. www. We can do this by making the function a member of IntSet: void IntSet::SetToReal (RealSet &set) { set. SetToReal. for (register i = 0..pragsoft. There are two possible reasons for requiring this access: • • It may be the only correct way of defining the function. It may be necessary if the function is to be implemented efficiently..Friends Occasionally we may need to grant a function access to the nonpublic members of a class.com Chapter 6: Classes 93 .. An example of the second case is discussed below. private: float elems[maxCard]. which converts an integer set to a real set.EmptySet(). }. friend void IntSet::SetToReal (RealSet&).. Such an access is obtained by declaring the function a friend of the class. when we discuss overloaded input/output operators. int card. Suppose that we have defined two variants of the Set class.
card. }. protected..card = iSet. for (register i = 0. // abbreviated form Another way of implementing SetToReal is to define it as a global function which is a friend of both classes: class IntSet { //. i < card. for (int i = 0.. }. RealSet&).. In general. it has the same meaning.void IntSet::SetToReal (RealSet &set) { set. that does not make the function a member of that class.elems[i] = (float) elems[i].. ++i) rSet. i < iSet.elems[i] = (float) iSet. the position of a friend declaration in a class is irrelevant: whether it appears in the private. class RealSet { //. ¨ 94 C++ Essentials Copyright © 2005 PragSoft ... RealSet&). }. ++i) set. RealSet &rSet) { rSet. friend class A. friend void SetToReal (IntSet&.card. friend void SetToReal (IntSet&.card = card. class B { //. void SetToReal (IntSet &iSet. } The extreme case of having all member functions of a class A as friends of another class B can be expressed in an abbreviated form: class A. } Although a friend declaration appears inside a class.elems[i]. or the public section.
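A reduced, compilable sketch of the global-friend version of SetToReal; the classes are pared down to what the example needs, and the AddElem and First members are illustrative additions:

```cpp
#include <cassert>

const int maxCard = 100;

class RealSet;    // forward declaration

class IntSet {
public:
    IntSet (void) : card(0) {}
    void AddElem (int e) { elems[card++] = e; }
    friend void SetToReal (IntSet&, RealSet&);
private:
    int elems[maxCard];
    int card;
};

class RealSet {
public:
    RealSet (void) : card(0) {}
    float First (void) const { return elems[0]; }
    friend void SetToReal (IntSet&, RealSet&);
private:
    float elems[maxCard];
    int card;
};

// As a friend of both classes, SetToReal accesses their private
// members directly, avoiding a call to AddElem per element.
void SetToReal (IntSet &iSet, RealSet &rSet)
{
    rSet.card = iSet.card;
    for (int i = 0; i < iSet.card; ++i)
        rSet.elems[i] = (float) iSet.elems[i];
}
```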
yVal.Default Arguments As with global functions.. // polar coordinates the following definition will be rejected as ambiguous. because it matches both constructors: Point p. p3(10. Point (float x = 0. 0) Careless use of default arguments can lead to undesirable ambiguity.. For example. // same as: p1(0. and the argument should be an expression consisting of objects defined within the scope in which the class appears. int y = 0). a member function of a class may have default arguments. public: Point (int x = 0. given the class class Point { int xVal. }. p2(10). //. public: Point (int x = 0.. int y = 0).com Chapter 6: Classes 95 . For example.pragsoft. The same rules apply: all default arguments should be trailing arguments. 20). //.. Given this constructor. 0) // same as: p2(10. the following definitions are all valid: Point Point Point p1. a constructor for the Point class may use default arguments to provide more variations of the way a Point object may be defined: class Point { int xVal. }. // ambiguous! ¨ www. float y = 0). yVal.
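A compilable sketch of the Point constructor with trailing default arguments; the X and Y accessors are additions for testing:

```cpp
#include <cassert>

class Point {
public:
    Point (int x = 0, int y = 0) : xVal(x), yVal(y) {}
    int X (void) const { return xVal; }    // accessors added for testing
    int Y (void) const { return yVal; }
private:
    int xVal, yVal;
};
```

With this single constructor, Point p1, Point p2(10), and Point p3(10, 20) are all valid, and there is no second overload to make Point p ambiguous.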
it receives an implicit argument which denotes the particular object (of the class) for which the function is invoked. int y) { this->xVal += x.Implicit Member Argument When a class member function is called.OffsetPt(2. when discussing overloaded operators. in Point pt(10. programming cases where the use of the this pointer is essential. Within the body of the member function. however. pt. // equivalent to: xVal += x. OffsetPt can be rewritten as: Point::OffsetPt (int x. it is undefined for global functions (including global friend functions). pt is an implicit argument to OffsetPt. The this pointer can be used for referring to member functions in exactly the same way as it is used for data members.20). ¨ 96 C++ Essentials Copyright © 2005 PragSoft . that this is defined for use within member functions of a class only. Using this. this->yVal += y. We will see examples of such cases in Chapter 7. There are. In particular. It is important to bear in mind. however.2). } Use of this in this particular example is redundant. For example. one can refer to this implicit argument explicitly as this. which denotes a pointer to the object for which the member is invoked. // equivalent to: yVal += y.
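A compilable sketch showing explicit use of the implicit this argument; the accessors are additions for testing:

```cpp
#include <cassert>

class Point {
public:
    Point (int x, int y) : xVal(x), yVal(y) {}
    void OffsetPt (int x, int y)
    {
        this->xVal += x;    // equivalent to: xVal += x
        this->yVal += y;    // equivalent to: yVal += y
    }
    int X (void) const { return xVal; }    // accessors added for testing
    int Y (void) const { return yVal; }
private:
    int xVal, yVal;
};
```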
Scope Operator When calling a member function.. using the scope operator is essential. the case where the name of a class member is hidden by a local variable (e. In some situations.pragsoft. ¨ www. // abbreviated form This is equivalent to the full form: pt. // full form The full form uses the binary scope operator :: to indicate that OffsetPt is a member of Point.com Chapter 6: Classes 97 .. The latter are referred to explicitly as Point::x and Point::y. For example. member function parameter) can be overcome using the scope operator: class Point { public: Point (int x..g.Point::OffsetPt(2. int y) //.2).OffsetPt(2.2). Point::y = y. we usually use an abbreviated syntax. y. } { Point::x = x. private: int x. } Here x and y in the constructor (inner scope) hide x and y in the class (outer scope). For example: pt.
For example: class Image { public: Image (const int w.. The first approach involves initializing the data members using assignments in the body of a constructor. } The effect of this declaration is that width is initialized to w and height is initialized to h.. private: int width. const int h) : width(w). For example: class Image { public: Image (const int w. const int h). height(h) { //. //. const int h) { width = w. int height.. A colon is used to separate it from the header.Member Initialization List There are two ways of initializing the data members of a class.. Image::Image (const int w. The only difference between this approach and the previous one is that here members are initialized before the body of the constructor is executed. //. ¨ 98 C++ Essentials Copyright © 2005 PragSoft . It should consist of a comma-separated list of data members whose initial value appears within a pair of brackets. const int h). }.. int height. Image::Image (const int w. }. A member initialization list may be used for initializing any data member of a class. //. It is always placed between the constructor header and body.... private: int width. } The second approach uses a member initialization list in the definition of a constructor. height = h.
Constant Members

A class data member may be defined as constant. For example:

    class Image {
        const int width;
        const int height;
        //...
    };

However, data member constants cannot be initialized using the same syntax as for other constants:

    class Image {
        const int width = 256;     // illegal initializer!
        const int height = 168;    // illegal initializer!
        //...
    };

The correct way to initialize a data member constant is through a member initialization list:

    class Image {
    public:
        Image (const int w, const int h);
    private:
        const int width;
        const int height;
        //...
    };

    Image::Image (const int w, const int h) : width(w), height(h)
    {
        //...
    }

As one would expect, no member function is allowed to assign to a constant data member.

A constant data member is not appropriate for defining the dimension of an array data member. For example, in

    class Set {
    public:
        Set (void) : maxCard(10) { card = 0; }
        //...
    private:
        const int maxCard;
        int elems[maxCard];    // illegal!
        int card;
    };
. A constant object can only be modified by the constant member functions of the class: const Set s. This is used to specify which member functions of a class may be invoked for a constant object. Member functions may also be defined as constant. } defines Member as a constant member function. but when the program is run and the constructor is invoked. s. The reason for this being that maxCard is not bound to a value during compilation. Constructors and destructors need never be defined as constant members. ¨ 100 C++ Essentials Copyright © 2005 PragSoft .Member(10). To do so. { card = 0. both inside the class and in the function definition. it would be illegal for it to attempt to modify any of the class data members.AddElem(10). Bool Set::Member (const int elem) const { //.. the keyword const is inserted after the function header.. (const int). For example. since they have permission to operate on constant objects. } Set Member AddElem (void) (const int) const. class Set { public: Bool void //.the array elems will be rejected by the compiler for not having a constant dimension.. }. unless the data member is itself a constant. // illegal: AddElem not a const member // ok Given that a constant member function is allowed to be invoked for constant objects. They are also exempted from the above rule and can assign to a data member of a constant object. s.
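A compilable sketch of constant data members together with a constant member function; the Width accessor is an illustrative addition:

```cpp
#include <cassert>

class Image {
public:
    Image (const int w, const int h) : width(w), height(h) {}
    int Width (void) const { return width; }    // usable on const objects
private:
    const int width;     // must be initialized via the initialization list
    const int height;
};
```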
no matter how many objects of type Window are defined. For example.. the Window class might use a call-back function for repainting exposed areas of the window: class Window { //.. Member functions can also be defined to be static. This ensures that there will be exactly one copy of the member. shared by all objects of the class.com Chapter 6: Classes 101 . ¨. Static member functions are useful for defining call-back routines whose parameter lists are predetermined and outside the control of the programmer. *first. It does not receive an implicit argument and hence cannot refer to this. The alternative is to make such variables global. but inaccessible outside the class. For example. there will be only one instance of first. // linked-list of all windows // pointer to next window Here. Semantically. Like other static variables. global functions). we can ensure that it will be inaccessible to anything outside the class. Public static members can be referred to using this syntax by nonmember functions (e. by including the variable in a class. }. // call-back Because static members are shared and do not rely on the this pointer. they are best referred to using the class::member syntax. For example. a static member function is like a global function which is a friend of the class. first and PaintProc would be referred to as Window::first and Window::PaintProc... It can be initialized to an arbitrary value in the same scope where the member function definitions appear: Window *Window::first = &myWindow. a static data member is by default initialized to 0. }.Static Members A data member of a class can be defined to be static. *next. but this is exactly what static members are intended to avoid. static void PaintProc (Event *event).. consider a Window class which represents windows on a bitmap display: class Window { static Window Window //.
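A compilable sketch of a static data member shared by all objects of a class; the counting scheme here is an illustrative example, not the Window class from the text:

```cpp
#include <cassert>

class Window {
public:
    Window (void) { ++count; }
    ~Window (void) { --count; }
    static int Count (void) { return count; }    // static member function
private:
    static int count;    // one shared instance, however many Windows exist
};

int Window::count = 0;   // definition and initialization at file scope
```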
Member Pointers

Recall how a function pointer was used in Chapter 5 to pass the address of a comparison function to a search function. It is possible to obtain and manipulate the address of a member function of a class in a similar fashion. As before, the idea is to make a function more flexible by making it independent of another function.

The syntax for defining a pointer to a member function is slightly more complicated, since the class name must also be included in the function pointer type. For example,

    typedef int (Table::*Compare)(const char*, const char*);

defines a member function pointer type called Compare for a class called Table. This type will match the address of any member function of Table which takes two constant character pointers and returns an int.

Compare may be used for passing a pointer to a Search member of Table:

    class Table {
    public:
        Table (const int slots);
        int Search (char *item, Compare comp);
        int CaseSensitiveComp (const char*, const char*);
        int NormalizedComp (const char*, const char*);
    private:
        int slots;
        char **entries;
    };

The definition of Table includes two sample comparison member functions which can be passed to Search. Search has to use a slightly complicated syntax for invoking the comparison function via comp:

    int Table::Search (char *item, Compare comp)
    {
        int bot = 0;
        int top = slots - 1;
        int mid, cmp;

        while (bot <= top) {
            mid = (bot + top) / 2;
            if ((cmp = (this->*comp)(item, entries[mid])) == 0)
                return mid;          // return item index
            else if (cmp < 0)
                top = mid - 1;       // restrict search to lower half
            else
                bot = mid + 1;       // restrict search to upper half
        }
        return -1;                   // not found
    }
// illegal: need brackets! The last attempt will be interpreted as: this->*(comp(item. will work: (*comp)(item. The address of a data member can be obtained using the same syntax as for a member function. The above class member pointer syntax applies to all members except for static. // illegal: no class object! this->*comp(item. In general.*comp)(item. // illegal: no class object! (Table::*comp)(item. ¨ www. entries[mid]).Note that comp can only be invoked via a Table object (the this pointer is used in this case). entries[mid]) Search can be called and passed either of the two comparison member functions of Table. For example.Search("Sydney". entries[mid])). entries[mid]). int m = this->*n. (tab. a function which does not have access to the private members of a class cannot take the address of any of those members. the same protection rules apply as before: to take the address of a class member (data or function) one should have access to it. For example. Using a Table object instead of this will require the following syntax: Table tab(10).*n. Table::NormalizedComp). None of the following attempts. int p = tab. Static members are essentially global entities whose scope has been limited to a class. entries[mid]). Pointers to static members use the conventional syntax of global entities.pragsoft. though seemingly reasonable. // unintended precedence! Therefore the brackets around this->*comp are necessary.com Chapter 6: Classes 103 . int Table::*n = &Table::slots. For example: tab.
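A self-contained sketch of the ->* and .* call syntax; this reduced Table class, with its Twice, Thrice, and Apply members, is an illustration rather than the class from the text:

```cpp
#include <cassert>

class Table {
public:
    int Twice (int n) { return 2 * n; }
    int Thrice (int n) { return 3 * n; }
    // Invokes a member function via this; the brackets around
    // this->*f are required, as noted above.
    int Apply (int (Table::*f)(int), int n) { return (this->*f)(n); }
};
```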
Reference Members

A class data member may be defined as a reference. For example:

    class Image {
        int width;
        int height;
        int &widthRef;
        //...
    };

As with data member constants, a data member reference cannot be initialized using the same syntax as for other references:

    class Image {
        int width;
        int height;
        int &widthRef = width;    // illegal!
        //...
    };

The correct way to initialize a data member reference is through a member initialization list:

    class Image {
    public:
        Image (const int w, const int h);
    private:
        int width;
        int height;
        int &widthRef;
        //...
    };

    Image::Image (const int w, const int h) : widthRef(width)
    {
        //...
    }

This causes widthRef to be a reference for width.
int top. defining the constructor as follows would not change the initialization (or destruction) order: Rectangle::Rectangle (int left. a Rectangle class may be defined using two Point data members which represent the top-left and bottom-right corners of the rectangle: class Rectangle { public: Rectangle (int left.com Chapter 6: Classes 105 . First. int bottom) : botRight(right. The constructor for Rectangle should also initialize the two object members of the class. and finally the constructor for Rectangle itself. the constructor for topLeft is invoked. this is done by including topLeft and botRight in the member initialization list of the constructor for Rectangle: Rectangle::Rectangle (int left. but because it appears before botRight in the class itself. topLeft(left. Assuming that Point has a constructor. //. int bottom).bottom) { } If the constructor for Point takes no parameters. an object of another class. followed by the destructor for botRight.top) { } ¨ www. }. and finally for topLeft. The order of initialization is always as follows. private: Point topLeft. then the above member initialization list may be omitted.Class Object Members A data member of a class may be of a user-defined type.. Object destruction always follows the opposite direction. or if it has default arguments for all of its parameters. followed by the constructor for botRight. Point botRight. int bottom) : topLeft(left. int top.top). Therefore.bottom).pragsoft. The reason that topLeft is initialized before botRight is not that it appears first in the member initialization list. int right. the constructor is still implicitly called. botRight(right. int right. int right.. Of course. that is. int top. For example. First the destructor for Rectangle (if any) is invoked.
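A compilable sketch of class object members: Rectangle contains two Point objects, initialized through the member initialization list. The Width accessor is an illustrative addition:

```cpp
#include <cassert>

class Point {
public:
    Point (int x, int y) : xVal(x), yVal(y) {}
    int X (void) const { return xVal; }
private:
    int xVal, yVal;
};

class Rectangle {
public:
    Rectangle (int left, int top, int right, int bottom)
        : topLeft(left, top), botRight(right, bottom) {}
    int Width (void) const { return botRight.X() - topLeft.X(); }
private:
    Point topLeft;     // initialized first: it is declared first
    Point botRight;
};
```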
Object Arrays

An array of a user-defined type is defined and used much in the same way as an array of a built-in type. For example, a pentagon can be defined as an array of 5 points:

    Point pentagon[5];

This definition assumes that Point has an 'argument-less' constructor (i.e., one which can be invoked without arguments). The constructor is applied to each element of the array.

The array can also be initialized using a normal array initializer. Each entry in the initialization list would invoke the constructor with the desired arguments. When the initializer has less entries than the array dimension, the remaining elements are initialized by the argument-less constructor. For example,

    Point pentagon[5] = {
        Point(10,20), Point(10,30), Point(20,30), Point(30,20)
    };

initializes the first four elements of pentagon to explicit points, and the last element is initialized to (0,0).

When the constructor can be invoked with a single argument, it is sufficient to just specify the argument. For example,

    Set sets[4] = {10, 20, 20, 30};

is an abbreviated version of:

    Set sets[4] = {Set(10), Set(20), Set(20), Set(30)};

An array of objects can also be created dynamically using new:

    Point *pentagon = new Point[5];

When the array is finally deleted using delete, a pair of [] should be included:

    delete [] pentagon;    // destroys all array elements

Unless the [] is included, delete will have no way of knowing that pentagon denotes an array of points and not just a single point. The destructor (if any) is applied to the elements of the array in reverse order before the array is deleted. Omitting the [] will cause the destructor to be applied to just the first element of the array:

    delete pentagon;       // destroys only the first element!
Since the objects of a dynamic array cannot be explicitly initialized at the time of creation, the class must have an argument-less constructor to handle the implicit initialization. When this implicit initialization is insufficient, the programmer can explicitly reinitialize any of the elements later:

    pentagon[0] = Point(10, 20);
    pentagon[1] = Point(10, 30);
    //...

Dynamic object arrays are useful in circumstances where we cannot predetermine the size of the array. For example, a general polygon class has no way of knowing in advance how many vertices a polygon may have:

    class Polygon {
    public:
        //...
    private:
        Point *vertices;    // the vertices
        int   nVertices;    // the number of vertices
    };
Class Scope

A class introduces a class scope much in the same way a function (or block) introduces a local scope. All the class members belong to the class scope and thus hide entities with identical names in the enclosing scope. For example, in

    int fork (void);    // system fork

    class Process {
        int fork (void);
        //...
    };

the member function fork hides the global system function fork. The former can refer to the latter using the unary scope operator:

    int Process::fork (void)
    {
        int pid = ::fork();    // use global system fork
        //...
    }

A class itself may be defined at any one of three possible scopes:

•  At the global scope, whereby it can be referred to by all other scopes. This leads to a global class. The great majority of C++ classes (including all the examples presented so far in this chapter) are defined at the global scope.

•  At the class scope of another class, where a class is contained by another class. This leads to a nested class.

•  At the local scope of a block or function, where the class is completely contained by a block or function. This leads to a local class.

A nested class is useful when a class is used only by one other class. For example,

    class Rectangle {    // a nested class
    public:
        Rectangle (int, int, int, int);
        //...
    private:
        class Point {
        public:
            Point (int, int);
        private:
            int x, y;
        };
        Point topLeft, botRight;
    };
defines Point as nested by Rectangle. The member functions of Point may be defined either inline inside the Point class or at the global scope. The latter would require further qualification of the member function names by preceding them with Rectangle::

    Rectangle::Point::Point (int x, int y)
    {
        //...
    }

A nested class may still be accessed outside its enclosing class by fully qualifying the class name. The following, for example, would be valid at any scope (assuming that Point is made public within Rectangle):

    Rectangle::Point pt(1,1);

A local class is useful when a class is used by only one function (be it a global function or a member function) or even just one block. For example,

    void Render (Image &image)
    {
        class ColorTable {
        public:
            ColorTable (void)                { /* ... */ }
            AddEntry   (int r, int g, int b) { /* ... */ }
            //...
        };
        ColorTable colors;
        //...
    }

defines ColorTable as a class local to Render. Unlike a nested class, a local class is not accessible outside the scope within which it is defined. The following, therefore, would be illegal at the global scope:

    ColorTable ct;    // undefined!

A local class must be completely defined inside the scope in which it appears. All of its function members, therefore, need to be defined inline inside the class. This implies that a local scope is not suitable for defining anything but very simple classes.
Structures and Unions

A structure is a class all of whose members are by default public. (Remember that all of the members of a class are by default private.) Structures are defined using the same syntax as classes, except that the keyword struct is used instead of class. For example,

    struct Point {
        Point        (int, int);
        void OffsetPt (int, int);
        int x, y;
    };

is equivalent to:

    class Point {
    public:
        Point        (int, int);
        void OffsetPt (int, int);
        int x, y;
    };

The struct construct originated in C, where it could only contain data members. It has been retained mainly for backward compatibility reasons.

In C, a structure can have an initializer with a syntax similar to that of an array. C++ allows such initializers for structures and classes all of whose data members are public:

    class Employee {
    public:
        char   *name;
        int    age;
        double salary;
    };

    Employee emp = {"Jack", 24, 38952.25};

The initializer consists of values which are assigned to the data members of the structure (or class) in the order they appear. This style of initialization is largely superseded by constructors. Furthermore, it cannot be used with a class that has a constructor.

A union is a class all of whose data members are mapped to the same address within its object (rather than sequentially as is the case in a class). The size of an object of a union is, therefore, the size of its largest data member.

The main use of unions is for situations where an object may assume values of different types, but only one at a time. For example, consider an interpreter for a simple programming language, called P, which supports a number of data types such as: integers, reals, strings, and lists. A value in this language may be defined to be of the type:
    union Value {
        long   integer;
        double real;
        char   *string;
        Pair   list;
        //...
    };

where Pair is itself a user-defined type for creating lists:

    class Pair {
        Value *head;
        Value *tail;
        //...
    };

Assuming that a long is 4 bytes, a double 8 bytes, and a pointer 4 bytes, an object of type Value would be exactly 8 bytes, i.e., the same as the size of a double or a Pair object (the latter being equal to two pointers).

An object in P can be represented by the class

    class Object {
    private:
        enum ObjType {intObj, realObj, strObj, listObj};
        ObjType type;    // object type
        Value   val;     // object value
        //...
    };

where type provides a way of recording what type of value the object currently has. For example, when type is set to strObj, val.string is used for referring to its value.

Like a structure, all of the members of a union are by default public. The keywords private, public, and protected may be used inside a struct or a union in exactly the same way they are used inside a class for defining private, public, and protected members.

Because of the unique way in which its data members are mapped to memory, a union may not have a static data member or a data member which requires a constructor.
Bit Fields

It is sometimes desirable to directly control an object at the bit level, so that as many individual data items as possible can be packed into a bit stream without worrying about byte or word boundaries. For example, in data communication, data is transferred in discrete units called packets. In addition to the user data that it carries, each packet also contains a header which is comprised of network-related information for managing the transmission of the packet across the network. To minimize the cost of transmission, it is desirable to minimize the space taken by the header. Figure 6.9 illustrates how the header fields are packed into adjacent bits to achieve this.

Figure 6.9 Header fields of a packet.
    [ type | acknowledge | channel | sequenceNo | moreData ]

These fields can be expressed as bit field data members of a Packet class. A bit field may be defined to be of type int or unsigned int:

    typedef unsigned int Bit;

    class Packet {
        Bit type        : 2;    // 2 bits wide
        Bit acknowledge : 1;    // 1 bit wide
        Bit channel     : 4;    // 4 bits wide
        Bit sequenceNo  : 4;    // 4 bits wide
        Bit moreData    : 1;    // 1 bit wide
        //...
    };

A bit field is referred to in exactly the same way as any other data member. Because a bit field does not necessarily start on a byte boundary, it is illegal to take its address. For the same reason, a bit field cannot be defined as static.

Use of enumerations can make working with bit fields easier. For example, given the enumerations

    enum PacketType {dataPack, controlPack, supervisoryPack};
    enum Bool       {false, true};

we can write:

    Packet p;
    p.type = controlPack;
    p.acknowledge = true;
Exercises

6.31  Explain why the Set parameters of the Set member functions are declared as references.

6.32  Define a class named Complex for representing complex numbers. A complex number has the general form a + ib, where a is the real part and b is the imaginary part (i stands for imaginary). Complex arithmetic rules are as follows:

        (a + ib) + (c + id)  =  (a + c) + i(b + d)
        (a + ib) - (c + id)  =  (a - c) + i(b - d)
        (a + ib) * (c + id)  =  (ac - bd) + i(bc + ad)

      Define these operations as member functions of Complex.

6.33  Define a class named Menu which uses a linked-list of strings to represent a menu of options. Use a nested class, Option, to represent the menu options. Define a constructor, a destructor, and the following member functions for Menu:

      •  Insert which inserts a new option at a given position. Provide a default argument so that the item is appended to the end.
      •  Delete which deletes an existing option.
      •  Choose which displays the menu and invites the user to choose an option.

6.34  Redefine the Set class as a linked-list so that there would be no restriction on the number of elements a set may have. Use a nested class, Element, to represent the set elements.

6.35  Define a class named Sequence for storing sorted strings. Define a constructor, a destructor, and the following member functions for Sequence:

      •  Insert which inserts a new string into its sort position.
      •  Delete which deletes an existing string.
      •  Find which searches the sequence for a given string and returns true if it finds it, and false otherwise.
      •  Print which prints the sequence strings.

6.36  Define a class named BinTree for storing sorted strings as a binary tree. Define the same set of member functions as for Sequence from the previous exercise.
6.37  Define a member function for BinTree which converts a sequence to a binary tree, as a friend of Sequence. Use this function to define a constructor for BinTree which takes a sequence as argument.

6.38  Add an integer ID data member to the Menu class (Exercise 6.33) so that all menu objects are sequentially numbered, starting from 0. Define an inline member function which returns the ID. How will you keep track of the last allocated ID?

6.39  Modify the Menu class so that an option can itself be a menu, thereby allowing nested menus.
7.  Overloading

This chapter discusses the overloading of functions and operators in C++. The term overloading means 'providing multiple definitions of'. Overloading of functions involves defining distinct functions which share the same name, each of which has a unique signature. Function overloading is appropriate for:

•  Defining functions which essentially do the same thing, but operate on different data types.
•  Providing alternate interfaces to the same function.

Function overloading is purely a programming convenience.

Operators are similar to functions in that they take operands (arguments) and return a value. Most of the built-in C++ operators are already overloaded. For example, the + operator can be used to add two integers, two reals, or two addresses. Therefore, it has multiple definitions. The built-in definitions of the operators are restricted to built-in types. Additional definitions can be provided by the programmer, so that they can also operate on user-defined types. Each additional definition is implemented by a function.

The overloading of operators will be illustrated using a number of simple classes. We will discuss how type conversion rules can be used to reduce the need for multiple overloadings of the same operator. We will present examples of overloading a number of popular operators, including << and >> for IO, [] and () for container classes, and the pointer operators. We will also discuss memberwise initialization and assignment, and the importance of their correct implementation in classes which use dynamically-allocated data members.

Unlike functions and operators, classes cannot be overloaded; each class must have a unique name. However, classes can be altered and extended through a facility called inheritance, as we will see in Chapter 8. Also functions and classes can be written as templates, so that they become independent of the data types they employ. We will discuss templates in Chapter 9.
Function Overloading

Consider a function, GetTime, which returns in its parameter(s) the current time of the day, and suppose that we require two variants of this function: one which returns the time as seconds from midnight, and one which returns the time as hours, minutes, and seconds. Given that these two functions serve the same purpose, there is no reason for them to have different names. C++ allows functions to be overloaded, that is, the same function to have more than one definition:

    long GetTime (void);                                   // seconds from midnight
    void GetTime (int &hours, int &minutes, int &seconds);

To avoid ambiguity, each definition of an overloaded function must have a unique signature.

When GetTime is called, the compiler compares the number and type of arguments in the call against the definitions of GetTime and chooses the one that matches the call. For example:

    int h, m, s;
    long t = GetTime();    // matches GetTime(void)
    GetTime(h, m, s);      // matches GetTime(int&, int&, int&)

Member functions of a class may also be overloaded:

    class Time {
        //...
        long GetTime (void);                                   // seconds from midnight
        void GetTime (int &hours, int &minutes, int &seconds);
    };

Overloaded functions may also have default arguments:

    void Error (int errCode, char *errMsg = "");
    void Error (char *errMsg);

Function overloading is useful for obtaining flavors that are not possible using default arguments alone.
Operator Overloading

C++ allows the programmer to define additional meanings for its predefined operators by overloading them. For example, we can overload the + and - operators for adding and subtracting Point objects:

    class Point {
    public:
        Point (int x, int y)        {Point::x = x; Point::y = y;}
        Point operator + (Point &p) {return Point(x + p.x, y + p.y);}
        Point operator - (Point &p) {return Point(x - p.x, y - p.y);}
    private:
        int x, y;
    };

After this definition, + and - can be used for adding and subtracting points, much in the same way as they are used for adding and subtracting numbers:

    Point p1(10,20), p2(10,20);
    Point p3 = p1 + p2;
    Point p4 = p1 - p2;

The use of an overloaded operator is equivalent to an explicit call to the function which implements it. For example:

    operator+(p1, p2)    // is equivalent to: p1 + p2

Alternatively, an operator may be overloaded globally:

    class Point {
    public:
        Point (int x, int y) {Point::x = x; Point::y = y;}
        friend Point operator + (Point &p, Point &q)
            {return Point(p.x + q.x, p.y + q.y);}
        friend Point operator - (Point &p, Point &q)
            {return Point(p.x - q.x, p.y - q.y);}
    private:
        int x, y;
    };

In general, to overload a predefined operator λ, we define a function named operatorλ. If λ is a binary operator:

•  operatorλ must take exactly one argument if defined as a class member, or two arguments if defined globally.

However, if λ is a unary operator:
•  operatorλ must take no arguments if defined as a member function, or one argument if defined globally.

Table 7.10 summarizes the C++ operators which can be overloaded. The remaining five operators cannot be overloaded:

    .    .*    ::    ?:    sizeof    // not overloadable

Table 7.10 Overloadable operators.
    Unary:   +  -  *  !  ~  &  ++  --  ()  ->  ->*  new  delete
    Binary:  +  -  *  /  %  &  |  ^  <<  >>  =  +=  -=  *=  /=  %=
             &=  |=  ^=  <<=  >>=  ==  !=  <  >  <=  >=  &&  ||  []  ()  ,

Operators ++ and -- can be overloaded as prefix as well as postfix. Equivalence rules do not hold for overloaded operators. For example, overloading + does not affect +=, unless the latter is also explicitly overloaded. Operators ->, =, [], and () can only be overloaded as member functions, and not globally. A strictly unary operator (e.g., ~) cannot be overloaded as binary, nor can a strictly binary operator (e.g., =) be overloaded as unary.

C++ does not support the definition of new operator tokens, because this can lead to ambiguity. Furthermore, the precedence rules for the predefined operators is fixed and cannot be altered. For example, no matter how you overload *, it will always have a higher precedence than +.

To avoid the copying of large objects when passing them to an overloaded operator, references should be used. Pointers are not suitable for this purpose because an overloaded operator cannot operate exclusively on pointers.
Example: Set Operators

The Set class was introduced in Chapter 6. Most of the Set member functions are better defined as overloaded operators. Listing 7.25 illustrates.

Listing 7.25
    #include <iostream.h>

    const int maxCard = 100;
    enum Bool {false, true};

    class Set {
    public:
        friend Bool operator &  (const int, Set&);    // membership
        friend Bool operator == (Set&, Set&);         // equality
        friend Bool operator != (Set&, Set&);         // inequality
        friend Set  operator *  (Set&, Set&);         // intersection
        friend Set  operator +  (Set&, Set&);         // union
        //...
        void AddElem (const int elem);
        void Copy    (Set &set);
        void Print   (void);
    private:
        int elems[maxCard];    // set elements
        int card;              // set cardinality
    };

Here, we have decided to define the operator functions as global friends. They could have just as easily been defined as member functions. The implementation of these functions is as follows:

    Bool operator & (const int elem, Set &set)
    {
        for (register i = 0; i < set.card; ++i)
            if (elem == set.elems[i])
                return true;
        return false;
    }

    Bool operator == (Set &set1, Set &set2)
    {
        if (set1.card != set2.card)
            return false;
        for (register i = 0; i < set1.card; ++i)
            if (!(set1.elems[i] & set2))    // use overloaded &
                return false;
        return true;
    }

    Bool operator != (Set &set1, Set &set2)
    {
        return !(set1 == set2);    // use overloaded ==
    }
    Set operator * (Set &set1, Set &set2)
    {
        Set res;
        for (register i = 0; i < set1.card; ++i)
            if (set1.elems[i] & set2)    // use overloaded &
                res.elems[res.card++] = set1.elems[i];
        return res;
    }

    Set operator + (Set &set1, Set &set2)
    {
        Set res;
        set1.Copy(res);
        for (register i = 0; i < set2.card; ++i)
            res.AddElem(set2.elems[i]);
        return res;
    }

The syntax for using these operators is arguably neater than those of the functions they replace, as illustrated by the following main function:

    int main (void)
    {
        Set s1, s2, s3;

        s1.AddElem(10); s1.AddElem(20); s1.AddElem(30); s1.AddElem(40);
        s2.AddElem(30); s2.AddElem(50); s2.AddElem(10); s2.AddElem(60);

        cout << "s1 = ";  s1.Print();
        cout << "s2 = ";  s2.Print();
        if (20 & s1) cout << "20 is in s1\n";
        cout << "s1 intsec s2 = ";  (s1 * s2).Print();
        cout << "s1 union s2 = ";   (s1 + s2).Print();
        if (s1 != s2) cout << "s1 /= s2\n";
        return 0;
    }

When run, the program will produce the following output:

    s1 = {10,20,30,40}
    s2 = {30,50,10,60}
    20 is in s1
    s1 intsec s2 = {10,30}
    s1 union s2 = {10,20,30,40,50,60}
    s1 /= s2
Type Conversion

The normal built-in type conversion rules of the language also apply to overloaded functions and operators. For example, in

    if ('a' & set)
        //...

the first operand of & (i.e., 'a') is implicitly converted from char to int, because overloaded & expects its first operand to be of type int.

Any other type conversion required in addition to these must be explicitly defined by the programmer. For example, suppose we want to overload + for the Point type so that it can be used to add two points, or to add an integer value to both coordinates of a point:

    class Point {
        //...
        friend Point operator + (Point, Point);
        friend Point operator + (int,   Point);
        friend Point operator + (Point, int);
    };

To make + commutative, we have defined two functions for adding an integer to a point: one for when the integer is the first operand, and one for when the integer is the second operand. It should be obvious that if we start considering other types in addition to int, this approach will ultimately lead to an unmanageable variations of the operator.

A better approach is to use a constructor to convert the object to the same type as the class itself so that one overloaded operator can handle the job. In this case, we need a constructor which takes an int, specifying both coordinates of a point:

    class Point {
        //...
        Point (int x) { Point::x = Point::y = x; }
        friend Point operator + (Point, Point);
    };

For constructors of one argument, one need not explicitly call the constructor:

    Point p = 10;    // equivalent to: Point p(10);

Hence, it is possible to write expressions that involve variables or constants of type Point and int using the + operator:

    Point p(10,20), q = 0;
    q = p + 5;    // equivalent to: q = p + Point(5);
Here, 5 is first converted to a temporary Point object and then added to p. The temporary object is then destroyed. The overall effect is an implicit type conversion from int to Point. The final value of q is therefore (15,25).

What if we want to do the opposite conversion, from the class type to another type? In this case, constructors cannot be used because they always return an object of the class to which they belong. Instead, one can define a member function which explicitly converts the object to the desired type.

For example, given a Rectangle class, we can define a type conversion function which converts a rectangle to a point, by overloading the type operator Point in Rectangle:

    class Rectangle {
    public:
        Rectangle (int left, int top, int right, int bottom);
        Rectangle (Point &p, Point &q);
        //...
        operator Point () {return botRight - topLeft;}
    private:
        Point topLeft;
        Point botRight;
    };

This operator is defined to convert a rectangle to a point, whose coordinates represent the width and height of the rectangle. Therefore, in the code fragment

    Point     p(5,5);
    Rectangle r(10,10,20,30);
    r + p;

rectangle r is first implicitly converted to a Point object by the type conversion operator, and then added to p.

The type conversion Point can also be applied explicitly using the normal type cast notation. For example:

    Point(r);    // explicit type-cast to a Point
    (Point)r;    // explicit type-cast to a Point

In general, given a user-defined type X and another (built-in or user-defined) type Y:

•  A constructor defined for X which takes a single argument of type Y will implicitly convert Y objects to X objects when needed:

    X (Y&);    // convert Y to X

•  Overloading the type operator Y in X will implicitly convert X objects to Y objects when needed:
    operator Y ();    // convert X to Y

One of the disadvantages of user-defined type conversion methods is that, unless they are used sparingly, they can lead to programs whose behaviors can be very difficult to predict. There is also the additional risk of creating ambiguity. Ambiguity occurs when the compiler has more than one option open to it for applying user-defined type conversion rules, and is therefore unable to choose. All such cases are reported as errors by the compiler.

To illustrate possible ambiguities that can occur, suppose that we also define a type conversion constructor for Rectangle (which takes a Point argument) as well as overloading the + and - operators:

    class Rectangle {
    public:
        Rectangle (int left, int top, int right, int bottom);
        Rectangle (Point &p);
        Rectangle (Point &p, Point &q);
        //...
        operator Point () {return botRight - topLeft;}
        friend Rectangle operator + (Rectangle &r, Rectangle &t);
        friend Rectangle operator - (Rectangle &r, Rectangle &t);
    private:
        Point topLeft;
        Point botRight;
    };

Now, in

    Point     p(5,5);
    Rectangle r(10,10,20,30);
    r + p;

r + p can be interpreted in two ways. Either as

    r + Rectangle(p)    // yields a Rectangle

or as:

    Point(r) + p        // yields a Point

Unless the programmer resolves the ambiguity by explicit type conversion, this will be rejected by the compiler.
Example: Binary Number Class

Listing 7.26 defines a class for representing 16-bit binary numbers as sequences of 0 and 1 characters.

Listing 7.26
     1  #include <iostream.h>
     2  #include <string.h>
     3  int const binSize = 16;    // binary quantity
     4  class Binary {
     5  public:
     6          Binary  (const char*);
     7          Binary  (unsigned int);
     8      friend Binary operator + (const Binary, const Binary);
     9          operator int (void);    // type conversion
    10      void    Print   (void);
    11  private:
    12      char    bits[binSize];
    13  };

Annotation

6   This constructor produces a binary number from its bit pattern.

7   This constructor converts a positive integer to its equivalent binary representation.

8   The + operator is overloaded for adding two binary numbers. Addition is done bit by bit. For simplicity, no attempt is made to detect overflows.

9   This type conversion operator is used to convert a Binary object to an int object.

10  This function simply prints the bit pattern of a binary number.

12  This array is used to hold the 0 and 1 bits of the 16-bit quantity as characters.

The implementation of these functions is as follows:

    Binary::Binary (const char *num)
    {
        int iSrc  = strlen(num) - 1;
        int iDest = binSize - 1;

        while (iSrc >= 0 && iDest >= 0)    // copy bits
            bits[iDest--] = (num[iSrc--] == '0' ? '0' : '1');
        while (iDest >= 0)                 // pad left with zeros
            bits[iDest--] = '0';
    }

    Binary::Binary (unsigned int num)
    {
        for (register i = binSize - 1; i >= 0; --i) {
            bits[i] = (num % 2 == 0 ? '0' : '1');
            num >>= 1;
        }
    }

    void Binary::Print (void)
    {
        char str[binSize + 1];

        strncpy(str, bits, binSize);
        str[binSize] = '\0';
        cout << str << '\n';
    }

    Binary::operator int ()
    {
        unsigned value = 0;

        for (register i = 0; i < binSize; ++i)
            value = (value << 1) + (bits[i] == '0' ? 0 : 1);
        return value;
    }

    Binary operator + (const Binary n1, const Binary n2)
    {
        unsigned carry = 0;
        unsigned value;
        Binary   res = "0";

        for (register i = binSize - 1; i >= 0; --i) {
            value = (n1.bits[i] == '0' ? 0 : 1) +
                    (n2.bits[i] == '0' ? 0 : 1) + carry;
            res.bits[i] = (value % 2 == 0 ? '0' : '1');
            carry = value >> 1;
        }
        return res;
    }

The following main function creates two objects of type Binary and tests the + operator:

    main ()
    {
        Binary n1 = "01011";
        Binary n2 = "11010";

        n1.Print();
        n2.Print();
        (n1 + n2).Print();
        cout << n1 + Binary(5) << '\n';    // add and then convert to int
        cout << n1 - 5 << '\n';            // convert n1 to int and then subtract
    }
The last two lines of main behave completely differently. The first of these converts 5 to Binary, does the addition, and then converts the Binary result to int, before sending it to cout. This is equivalent to:

    cout << (int) operator+(n1, Binary(5)) << '\n';

The second converts n1 to int (because - is not defined for Binary), performs the subtraction, and then sends the result to cout. This is equivalent to:

    cout << ((int) n1) - 5 << '\n';

In either case, the user-defined type conversion operator is applied implicitly. The output produced by the program is evidence that the conversions are performed correctly:

    0000000000001011
    0000000000011010
    0000000000100101
    16
    6
Overloading << for Output

The simple and uniform treatment of output for built-in types is easily extended to user-defined types by further overloading the << operator. For any given user-defined type T, we can define an operator<< function which outputs objects of type T:

    ostream& operator << (ostream&, T&);

The first parameter must be a reference to ostream so that multiple uses of << can be concatenated. The second parameter need not be a reference, but this is more efficient for large objects.

For example, instead of the Binary class's Print member function, we can overload the << operator for the class. Because the first operand of << must be an ostream object, it cannot be overloaded as a member function. It should therefore be defined as a global function:

    class Binary {
        //...
        friend ostream& operator << (ostream&, Binary&);
    };

    ostream& operator << (ostream &os, Binary &n)
    {
        char str[binSize + 1];

        strncpy(str, n.bits, binSize);
        str[binSize] = '\0';
        os << str;
        return os;
    }

Given this definition, << can be used for the output of binary numbers in a manner identical to its use for the built-in types:

    Binary n1 = "01011", n2 = "11010";
    cout << n1 << " + " << n2 << " = " << n1 + n2 << '\n';

will produce the following output:

    0000000000001011 + 0000000000011010 = 0000000000100101

In addition to its simplicity and elegance, this style of output eliminates the burden of remembering the name of the output function for each user-defined type. Without the use of overloaded <<, the last example would have to be written as (assuming that \n has been removed from Print):

    Binary n1 = "01011", n2 = "11010";
    n1.Print();
    cout << " + ";
    n2.Print();
    cout << " = ";
    (n1 + n2).Print();
    cout << '\n';
Overloading >> for Input

Input of user-defined types is facilitated by overloading the >> operator, in a manner similar to the way << is overloaded. For any given user-defined type T, we can define an operator>> function which inputs objects of type T:

    istream& operator >> (istream&, T&);

The first parameter must be a reference to istream so that multiple uses of >> can be concatenated. The second parameter must be a reference, since it will be modified by the function.

Continuing with the Binary class example, we overload the >> operator for the input of bit streams. Again, because the first operand of >> must be an istream object, it cannot be overloaded as a member function:

    class Binary {
        //...
        friend istream& operator >> (istream&, Binary&);
    };

    istream& operator >> (istream &is, Binary &n)
    {
        char str[binSize + 1];

        is >> str;
        n = Binary(str);    // use the constructor for simplicity
        return is;
    }

Given this definition, >> can be used for the input of binary numbers in a manner identical to its use for the built-in types:

    Binary n;
    cin >> n;

will read a binary number from the keyboard into n.
Overloading []
Listing 7.27 defines a simple associative vector class. An associative vector is a one-dimensional array in which elements can be looked up by their contents rather than their position in the array. In AssocVec, each element has a string name (via which it can be looked up) and an associated integer value.

Listing 7.27
1   #include <iostream.h>
2   #include <string.h>
3   class AssocVec {
4   public:
5                AssocVec    (const int dim);
6                ~AssocVec   (void);
7       int&     operator [] (const char *idx);
8   private:
9       struct VecElem {
10          char *index;
11          int value;
12      } *elems;           // vector elements
13      int dim;            // vector dimension
14      int used;           // elements used so far
15  };

Annotation
5   The constructor creates an associative vector of the dimension specified by its argument.
7   The overloaded [] operator is used for accessing vector elements. The function which overloads [] must have exactly one parameter. Given a string index, it searches the vector for a match. If a matching index is found then a reference to its associated value is returned. Otherwise, a new element is created and a reference to this value is returned.
12  The vector elements are represented by a dynamic array of VecElem structures. Each vector element consists of a string (denoted by index) and an integer value (denoted by value).

The implementation of the member functions is as follows:

    AssocVec::AssocVec (const int dim)
    {
        AssocVec::dim = dim;
        used = 0;
        elems = new VecElem[dim];
    }

    AssocVec::~AssocVec (void)
    {
        for (register i = 0; i < used; ++i)
            delete [] elems[i].index;
        delete [] elems;
    }

    int& AssocVec::operator [] (const char *idx)
    {
        for (register i = 0; i < used; ++i)     // search existing elements
            if (strcmp(idx, elems[i].index) == 0)
                return elems[i].value;
        if (used < dim &&                       // create new element
            (elems[used].index = new char[strlen(idx)+1]) != 0) {
            strcpy(elems[used].index, idx);
            elems[used].value = used + 1;
            return elems[used++].value;
        }
        static int dummy = 0;
        return dummy;
    }

Note that, because AssocVec::operator[] must return a valid reference, a reference to a dummy static integer is returned when the vector is full or when new fails.

A reference expression is an lvalue and hence can appear on both sides of an assignment. If a function returns a reference then a call to that function can be assigned to. This is why the return type of AssocVec::operator[] is defined to be a reference. Using AssocVec we can now create associative vectors that behave very much like normal vectors:

    AssocVec count(5);
    count["apple"] = 5;
    count["orange"] = 10;
    count["fruit"] = count["apple"] + count["orange"];

This will set count["fruit"] to 15.
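The essence of AssocVec — an operator[] that returns a reference, creating the element on first use — can be sketched in a few lines with standard containers. The class and member names below (Assoc, Entry) are ours, not the text's; the lookup-or-insert logic mirrors the book's version but new elements start at 0.

```cpp
#include <string>
#include <vector>

// A compact modern sketch of the AssocVec idea: operator[] returns
// an int&, so an element can appear on either side of '='.
class Assoc {
public:
    int& operator[](const std::string &key)
    {
        for (Entry &e : entries)            // search existing elements
            if (e.key == key)
                return e.value;
        entries.push_back({key, 0});        // create new element
        return entries.back().value;
    }
private:
    struct Entry {
        std::string key;
        int value;
    };
    std::vector<Entry> entries;
};
```

One design caveat worth noting: because the backing store is a std::vector, a returned reference is only safe to use before the next insertion, since growth can relocate the elements (the book's fixed-size array does not have this issue).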
2). element 20 of M (i. ostream& operator << (ostream&.. A matrix is a table of values (very similar to a two-dimensional array) whose size is denoted by the number of rows and columns in the table.Overloading () Listing 7. Matrix&). ~Matrix (void) {delete elems. Matrix operator * (Matrix&. The overloaded () operator is used for accessing matrix elements.> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Annotation class Matrix { public: Matrix (const short rows. const short cols).} double& operator () (const short row. Matrix algebra provides a set of operations for manipulating matrices. Matrix&). Listing 7. which includes addition. An example of a simple 2 x 3 matrix would be: 10 20 30 21 52 19 M= The standard mathematical notation for referring to matrix elements uses brackets. These overloaded operators provide basic matrix operations. Matrix operator + (Matrix&.28 defines a matrix class. 7 8-10 14 The matrix elements are represented by a dynamic array of doubles. Matrix&). subtraction. const short col). const short cols. The function which overloads () may have zero or more parameters. The overloaded << is used for printing a matrix in tabular form. Matrix operator .(Matrix&. double *elems.com Chapter 7: Overloading 131 . all of whose elements are initialized to 0. friend friend friend friend private: const short rows. in the first row and second column) is referred to as M(1. }. For example. // matrix rows // matrix columns // matrix elements 4 6 The constructor creates a matrix of the size specified by its arguments.28 1 #include <iostream.pragsoft. It returns a reference to the specified element’s value.e. and multiplication. Matrix&).
m(2.1) = 10.1) = 15. The following code fragment illustrates that matrix elements are lvalues: Matrix m(2.cols. This will produce the following output: 10 15 20 25 30 35 ¨ 132 C++ Essentials Copyright © 2005 PragSoft . const short c) : rows(r). m(1.3). } As before. const short col) { static double dummy = 0.2) = 20.1)] : dummy. cols(c) { elems = new double[rows * cols]. } return os.3) = 30.0.rows. m(1. c <= m. r <= m. } double& Matrix::operator () (const short row. os << '\n'. ++r) { for (int c = 1.1)*cols + (col . ++c) os << m(r. cout << m << '\n'. return (row >= 1 && row <= rows && col >= 1 && col <= cols) ? elems[(row .The implementation of the first three member functions is as follows: Matrix::Matrix (const short r. } ostream& operator << (ostream &os.2) = 25.c) << " ". because Matrix::operator() must return a valid reference.3) = 35. m(2. a reference to a dummy static double is returned when the specified element does not exist. Matrix &m) { for (register r = 1. m(1. m(2.
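Since operator() is the only subscripting operator that accepts two parameters (classic operator[] takes exactly one), it is the natural choice for two-dimensional indexing. A minimal runnable sketch of the same row-major mapping, under our own names (Grid, not the text's Matrix) and with 1-based indices to match the Matrix convention:

```cpp
#include <vector>

// Function-call indexing for a 2-D table: operator() may take any
// number of parameters. Grid is a hypothetical name, not from the
// text; storage is row-major, indices are 1-based as in Matrix.
class Grid {
public:
    Grid(int rows, int cols)
        : rows_(rows), cols_(cols), elems_(rows * cols, 0.0) {}

    double& operator()(int row, int col)
    {
        // map (row, col) onto the flat row-major array
        return elems_[(row - 1) * cols_ + (col - 1)];
    }
private:
    int rows_, cols_;
    std::vector<double> elems_;
};
```

Because operator() returns a reference, g(r, c) is an lvalue and can be assigned to, exactly as in the Matrix example.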
Memberwise Initialization
Consider the following definition of the overloaded + operator for Matrix:

    Matrix operator + (Matrix &p, Matrix &q)
    {
        Matrix m(p.rows, p.cols);
        if (p.rows == q.rows && p.cols == q.cols)
            for (register r = 1; r <= p.rows; ++r)
                for (register c = 1; c <= p.cols; ++c)
                    m(r,c) = p(r,c) + q(r,c);
        return m;
    }

This function returns a matrix object which is initialized to m. The initialization is handled by an internal constructor which the compiler automatically generates for Matrix:

    Matrix::Matrix (const Matrix &m) : rows(m.rows), cols(m.cols)
    {
        elems = m.elems;
    }

This form of initialization is called memberwise initialization because the special constructor initializes the object member by member. If the data members of the object being initialized are themselves objects of another class, then those are also memberwise initialized.

As a result of the default memberwise initialization, the elems data member of both objects will point to the same dynamically-allocated block, as Figure 7.11 illustrates. However, m is destroyed upon the function returning. Hence the destructor deletes the block pointed to by m.elems, leaving the returned object's elems data member pointing to an invalid block! This ultimately leads to a runtime failure (typically a bus error).

Figure 7.11 The memberwise initialization of objects with pointer members (both objects' elems point to one shared block).

Memberwise initialization takes place in the following situations:

•  When defining and initializing an object in a declaration statement that uses another object as its initializer, e.g., Matrix n = m in Foo below.
•  When passing an object argument to a function (not applicable to a reference or pointer argument), e.g., m in Foo below.
•  When returning an object value from a function (not applicable to a reference or pointer return value), e.g., return n in Foo below.

    Matrix Foo (Matrix m)       // memberwise copy argument to m
    {
        Matrix n = m;           // memberwise copy m to n
        //...
        return n;               // memberwise copy n and return copy
    }

The problems caused by the default memberwise initialization can be avoided by explicitly defining the constructor in charge of memberwise initialization. For any given class X, the constructor always has the form:

    X::X (const X&);

For example, for the Matrix class, this may be defined as follows:

    class Matrix {
        Matrix (const Matrix&);
        //...
    };

    Matrix::Matrix (const Matrix &m) : rows(m.rows), cols(m.cols)
    {
        int n = rows * cols;
        elems = new double[n];              // same size
        for (register i = 0; i < n; ++i)    // copy elements
            elems[i] = m.elems[i];
    }

It should be obvious that default memberwise initialization is generally adequate for classes which have no pointer data members (e.g., Point).
Memberwise Assignment
Objects of the same class are assigned to one another by an internal overloading of the = operator which is automatically generated by the compiler. For example, to handle the assignment in

    Matrix m(2,2), n(2,2);
    //...
    m = n;

the compiler automatically generates the following internal function:

    Matrix& Matrix::operator = (const Matrix &m)
    {
        rows = m.rows;
        cols = m.cols;
        elems = m.elems;
        return *this;
    }

This is identical in its approach to memberwise initialization and is called memberwise assignment. It suffers from exactly the same problems, which in turn can be overcome by explicitly overloading the = operator. For example, for the Matrix class, the following overloading of = would be appropriate:

    Matrix& Matrix::operator = (const Matrix &m)
    {
        if (rows == m.rows && cols == m.cols) {    // must match
            int n = rows * cols;
            for (register i = 0; i < n; ++i)       // copy elements
                elems[i] = m.elems[i];
        }
        return *this;
    }

In general, for any given class X, the = operator is overloaded by the following member of X:

    X& X::operator = (X&)

Operator = can only be overloaded as a member, and not globally.
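The two remedies — an explicit memberwise-initialization constructor and an explicit operator= — usually travel together with the destructor. The following self-contained sketch combines them for a hypothetical Buffer class (our name, not the text's); like the book's Matrix version, the assignment operator copies only when the sizes already match.

```cpp
#include <algorithm>
#include <cstddef>

// Deep-copying copy constructor and assignment operator, so two
// Buffer objects never share one heap block. Buffer is an
// illustrative name, not a class from the text.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size(n), data(new double[n]()) {}

    Buffer(const Buffer &b) : size(b.size), data(new double[b.size])
    {
        std::copy(b.data, b.data + size, data);     // copy elements
    }

    Buffer& operator=(const Buffer &b)
    {
        if (this != &b && size == b.size)           // must match
            std::copy(b.data, b.data + size, data); // copy elements
        return *this;
    }

    ~Buffer() { delete [] data; }

    double& operator[](std::size_t i) { return data[i]; }
private:
    std::size_t size;
    double *data;
};
```

With these definitions, initialization and assignment copy element by element, so destroying one object can no longer invalidate the other.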
Overloading new and delete
Objects of different classes usually have different sizes and frequency of usage. As a result, they impose different memory requirements. Small objects, in particular, are not efficiently handled by the default versions of new and delete. Every block allocated by new carries some overhead used for housekeeping purposes. For large objects this is not significant, but for small objects the overhead may be even bigger than the block itself. In addition, having too many small blocks can severely slow down subsequent allocation and deallocation. The performance of a program that dynamically creates many small objects can be significantly improved by using a simpler memory management strategy for those objects.

The dynamic storage management operators new and delete can be overloaded for a class, in which case they override the global definition of these operators when used for objects of that class.

As an example, suppose we wish to overload new and delete for the Point class, so that Point objects are allocated from an array:

    #include <stddef.h>
    #include <iostream.h>

    const int maxPoints = 512;

    class Point {
    public:
        //...
        void* operator new    (size_t bytes);
        void  operator delete (void *ptr, size_t bytes);
    private:
        int xVal, yVal;
        static union Block {
            int xy[2];
            Block *next;
        } *blocks;                 // points to our freestore
        static Block *freeList;    // free-list of linked blocks
        static int used;           // blocks used so far
    };

The type name size_t is defined in stddef.h. The parameter of new denotes the size of the block to be allocated (in bytes). The corresponding argument is always automatically passed by the compiler. New should always return a void*. The first parameter of delete denotes the block to be deleted. The second parameter is optional and denotes the size of the allocated block. The corresponding arguments are automatically passed by the compiler.

The static members are initialized as follows:

    Point::Block *Point::blocks = new Block[maxPoints];
    Point::Block *Point::freeList = 0;
    int Point::used = 0;

Since blocks, freeList, and used are static they do not affect the size of a Point object (it is still two integers).

New takes the next available block from blocks and returns its address. When used reaches maxPoints, new removes and returns the first block in the linked-list, but fails (returns 0) when the linked-list is empty:

    void* Point::operator new (size_t bytes)
    {
        Block *res = freeList;
        return used < maxPoints
               ? &(blocks[used++])
               : (res == 0 ? 0
                           : (freeList = freeList->next, res));
    }

Delete frees a block by inserting it in front of the linked-list denoted by freeList:

    void Point::operator delete (void *ptr, size_t bytes)
    {
        ((Block*) ptr)->next = freeList;
        freeList = (Block*) ptr;
    }

Point::operator new and Point::operator delete are invoked only for Point objects. Calling new with any other type as argument will invoke the global definition of new, even if the call occurs inside a member function of Point. For example:

    Point *pt = new Point(1,1);    // calls Point::operator new
    char *str = new char[10];      // calls ::operator new
    delete pt;                     // calls Point::operator delete
    delete str;                    // calls ::operator delete

Even when new and delete are overloaded for a class, global new and delete are used when creating and destroying object arrays:

    Point *points = new Point[5];  // calls ::operator new
    delete [] points;              // calls ::operator delete

The functions which overload new and delete for a class are always assumed by the compiler to be static, which means that they will not have access to the this pointer and therefore the nonstatic class members. This is because when these operators are invoked for an object of the class, the object does not exist: new is invoked before the object is constructed, and delete is called after it has been destroyed.
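The same pooling scheme can be written as a small self-contained, standard-C++ sketch. Node, Slot, and poolSize are our names (not the text's Point example), but the mechanics follow it: a class-specific operator new hands out fixed-size slots from a static array, and operator delete chains freed slots into a free list for reuse.

```cpp
#include <cstddef>
#include <new>

// Fixed-size pool allocator via class-specific new/delete.
// Hypothetical Node class; the scheme mirrors the Point example.
class Node {
public:
    int value = 0;

    static void* operator new(std::size_t)
    {
        if (freeList) {                   // reuse a freed slot first
            Slot *s = freeList;
            freeList = freeList->next;
            return s;
        }
        if (used < poolSize)              // otherwise take a fresh slot
            return &pool[used++];
        throw std::bad_alloc();           // pool exhausted
    }

    static void operator delete(void *p, std::size_t)
    {
        Slot *s = static_cast<Slot*>(p);  // push slot onto the free list
        s->next = freeList;
        freeList = s;
    }

private:
    // A slot is big enough for a Node or a free-list link.
    union Slot {
        char space[sizeof(int)];
        Slot *next;
    };
    enum { poolSize = 4 };
    static Slot pool[poolSize];
    static Slot *freeList;
    static int used;
};

Node::Slot  Node::pool[Node::poolSize];
Node::Slot *Node::freeList = nullptr;
int         Node::used = 0;
```

As in the book's version, these functions are implicitly static: new runs before the Node is constructed and delete after it is destroyed, so neither can touch nonstatic members.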
Overloading ->, *, and &
It is possible to divert the flow of control to a user-defined function before a pointer to an object is dereferenced using -> or *, or before the address of an object is obtained using &. This can be used to do some extra pointer processing, and is facilitated by overloading the unary operators ->, *, and &.

For classes that do not overload ->, this operator is always binary: the left operand is a pointer to a class object and the right operand is a class member name. When the left operand of -> is an object or reference of type X (but not pointer), X is expected to have overloaded -> as unary. In this case, -> is first applied to the left operand to produce a result p. If p is a pointer to a class Y then p is used as the left operand of binary -> and the right operand is expected to be a member of Y. Otherwise, p is used as the left operand of unary -> and the whole procedure is repeated for class Y.

Consider the following classes that overload ->:

    class A {
        //...
        B& operator -> (void);
    };

    class B {
        //...
        Point* operator -> (void);
    };

The effect of applying -> to an object of type A

    A obj;
    int i = obj->xVal;

is the successive application of the overloaded -> in A and B:

    int i = (B::operator->(A::operator->(obj)))->xVal;

In other words, A::operator-> is applied to obj to give p, B::operator-> is applied to p to give q, and since q is a pointer to Point, the final result is q->xVal.

Unary operators * and & can also be overloaded so that the semantic correspondence between ->, *, and & is preserved.

As an example, consider a library system which represents a book record as a raw string of the following format:

    "%Aauthor\0%Ttitle\0%Ppublisher\0%Ccity\0%Vvolume\0%Yyear\0\n"
Each field starts with a field specifier (e.g., %A specifies an author) and ends with a null character (i.e., \0). The fields can appear in any order. Also, some fields may be missing from a record, in which case a default value must be used.

For efficiency reasons we may want to keep the data in this format but use the following structure whenever we need to access the fields of a record:

    struct Book {
        char    *raw;          // raw format (kept for reference)
        char    *author;
        char    *title;
        char    *publisher;
        char    *city;
        short   vol;
        short   year;
    };

The default field values are denoted by a global Book variable:

    Book defBook = {
        "raw", "Author?", "Title?", "Publisher?", "City?", 0, 0
    };

We now define a class for representing raw records, and overload the unary pointer operators to map a raw record to a Book structure whenever necessary. To reduce the frequency of mappings from RawBook to Book, we have used a simple cache memory of 10 records.

    #include <iostream.h>
    #include <stdlib.h>        // needed for atoi() below

    int const cacheSize = 10;

    class RawBook {
    public:
                RawBook     (char *str)    { data = str; }
        Book*   operator -> (void);
        Book&   operator *  (void);
        Book*   operator &  (void);
    private:
        Book*   RawToBook   (void);
        char    *data;
        static Book  *cache;    // cache memory
        static short curr;      // current record in cache
        static short used;      // number of used cache records
    };

The corresponding static members are initialized as follows:

    Book  *RawBook::cache = new Book[cacheSize];
    short RawBook::curr = 0;
    short RawBook::used = 0;

The private member function RawToBook searches the cache for a RawBook and returns a pointer to its corresponding Book structure. If the book is not in the cache, RawToBook loads the book at the current position in the cache:

    Book* RawBook::RawToBook (void)
    {
        char *str = data;
        for (register i = 0; i < used; ++i)    // search cache
            if (data == cache[i].raw)
                return cache + i;
        curr = used < cacheSize                // update curr and used
               ? used++
               : (curr < 9 ? ++curr : 0);
        Book *bk = cache + curr;               // the book
        *bk = defBook;                         // set default values
        bk->raw = data;
        for (;;) {
            while (*str++ != '%')              // skip to next specifier
                ;
            switch (*str++) {                  // get a field
                case 'A': bk->author    = str;       break;
                case 'T': bk->title     = str;       break;
                case 'P': bk->publisher = str;       break;
                case 'C': bk->city      = str;       break;
                case 'V': bk->vol       = atoi(str); break;
                case 'Y': bk->year      = atoi(str); break;
            }
            while (*str++ != '\0')             // skip till end of field
                ;
            if (*str == '\n')                  // end of record
                break;
        }
        return bk;
    }

The overloaded operators ->, *, and & are easily defined in terms of RawToBook:

    Book* RawBook::operator -> (void)  { return RawToBook(); }
    Book& RawBook::operator *  (void)  { return *RawToBook(); }
    Book* RawBook::operator &  (void)  { return RawToBook(); }

The identical definitions for -> and & should not be surprising, since -> is unary in this context and semantically equivalent to &.

The following test case demonstrates that the operators behave as expected. It sets up two book records and prints each using different operators:

    main ()
    {
        RawBook r1("%AA. Peters\0%TBlue Earth\0%PPhedra\0%CSydney\0%Y1981\0\n");
        RawBook r2("%TPregnancy\0%AF. Jackson\0%Y1987\0%PMiles\0\n");
        cout << r1->author    << ", " << r1->title  << ", "
             << r1->publisher << ", " << r1->city   << ", "
             << (*r1).vol     << ", " << (*r1).year << '\n';
        Book *bp = &r2;                        // note use of &
        cout << bp->author    << ", " << bp->title << ", "
             << bp->publisher << ", " << bp->city  << ", "
             << bp->vol       << ", " << bp->year  << '\n';
    }

It will produce the following output:

    A. Peters, Blue Earth, Phedra, Sydney, 0, 1981
    F. Jackson, Pregnancy, Miles, City?, 0, 1987
Overloading ++ and --
The auto increment and auto decrement operators can be overloaded in both prefix and postfix form. To distinguish between the two, the postfix version is specified to take an extra integer argument. For example, the prefix and postfix versions of ++ may be overloaded for the Binary class as follows:

    class Binary {
        //...
        friend Binary operator ++ (Binary&);          // prefix
        friend Binary operator ++ (Binary&, int);     // postfix
    };

Although we have chosen to define these as global friend functions, they can also be defined as member functions. Both are easily defined in terms of the + operator defined earlier:

    Binary operator ++ (Binary &n)              // prefix
    {
        return n = n + Binary(1);
    }

    Binary operator ++ (Binary &n, int)         // postfix
    {
        Binary m = n;
        n = n + Binary(1);
        return m;
    }

Note that we have simply ignored the extra parameter of the postfix version. When this operator is used, the compiler automatically supplies a default argument for it.

The following code fragment exercises both versions of the operator:

    Binary n1 = "01011";
    Binary n2 = "11010";
    cout << ++n1 << '\n';
    cout << n2++ << '\n';
    cout << n2 << '\n';

It will produce the following output:

    0000000000001100
    0000000000011010
    0000000000011011

The prefix and postfix versions of -- may be overloaded in exactly the same way.
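The prefix/postfix distinction can be demonstrated on any type. The sketch below uses a hypothetical Counter class of our own; as with Binary, the dummy int parameter marks the postfix form, which must copy the old value before incrementing so it can return it.

```cpp
// Prefix and postfix ++ on a simple counter (Counter is our
// illustrative name, not a class from the text).
class Counter {
public:
    explicit Counter(int v = 0) : val(v) {}
    int value() const { return val; }

    friend Counter operator++(Counter &c)        // prefix
    {
        c.val = c.val + 1;
        return c;                                // incremented value
    }

    friend Counter operator++(Counter &c, int)   // postfix
    {
        Counter old = c;                         // copy, then increment
        c.val = c.val + 1;
        return old;                              // value before increment
    }
private:
    int val;
};
```

The compiler supplies the dummy argument automatically whenever the postfix form n++ is used; the parameter exists only to select the right overload.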
Exercises

7.40  Write overloaded versions of a Max function which compares two integers, two reals, or two strings, and returns the 'larger' one.

7.41  Overload the following two operators for the Set class:
      •  Operator - which gives the difference of two sets (e.g., s - t gives a set consisting of those elements of s which are not in t).
      •  Operator <= which checks if a set is contained by another (e.g., s <= t is true if all the elements of s are also in t).

7.42  Overload the following two operators for the Binary class:
      •  Operator - which gives the difference of two binary values. For simplicity, assume that the first operand is always greater than the second operand.
      •  Operator [] which indexes a bit by its position and returns its value as a 0 or 1 integer.

7.43  Sparse matrices are used in a number of numerical methods (e.g., finite element analysis). A sparse matrix is one which has the great majority of its elements set to zero. In practice, sparse matrices of sizes up to 500 x 500 are not uncommon. On a machine which uses a 64-bit representation for reals, storing such a matrix as an array would require 2 megabytes of storage. A more economic representation would record only nonzero elements together with their positions in the matrix. Define a SparseMatrix class which uses a linked-list to record only nonzero elements, and overload the +, -, and * operators for it. Also define an appropriate memberwise initialization constructor and memberwise assignment operator for the class.

7.44  Complete the implementation of the following String class. Note that two versions of the constructor and = are required: one for initializing/assigning to a String using a char*, and one for memberwise initialization/assignment. Operator [] should index a string character using its position. Operator + should concatenate two strings.

      class String {
      public:
                   String      (const char*);
                   String      (const String&);
                   String      (const short);
                   ~String     (void);
          String&  operator =  (const char*);
          String&  operator =  (const String&);
          char&    operator [] (const short);
          int      Length      (void)  {return(len);}
          friend String   operator +  (const String&, const String&);
          friend ostream& operator << (ostream&, String&);
      private:
          char     *chars;    // string characters
          short    len;       // length of string
      };

7.45  A bit vector is a vector with binary elements, that is, each element is either a 0 or a 1. Small bit vectors are conveniently represented by unsigned integers. For example, an unsigned char can represent a bit vector of 8 elements. Larger bit vectors can be defined as arrays of such smaller bit vectors. Complete the implementation of the BitVec class, as defined below. It should allow bit vectors of any size to be created and manipulated using the associated operators.

      enum Bool {false, true};
      typedef unsigned char uchar;

      class BitVec {
      public:
                   BitVec      (const short dim);
                   BitVec      (const char* bits);
                   BitVec      (const BitVec&);
                   ~BitVec     (void)  { delete [] vec; }
          BitVec&  operator =  (const BitVec&);
          BitVec&  operator &= (const BitVec&);
          BitVec&  operator |= (const BitVec&);
          BitVec&  operator ^= (const BitVec&);
          BitVec&  operator <<=(const short);
          BitVec&  operator >>=(const short);
          int      operator [] (const short idx);
          void     Set         (const short idx);
          void     Reset       (const short idx);
          BitVec   operator ~  (void);
          BitVec   operator &  (const BitVec&);
          BitVec   operator |  (const BitVec&);
          BitVec   operator ^  (const BitVec&);
          BitVec   operator << (const short n);
          BitVec   operator >> (const short n);
          Bool     operator == (const BitVec&);
          Bool     operator != (const BitVec&);
          friend ostream& operator << (ostream&, BitVec&);
      private:
          uchar *vec;     // vector of 8*bytes bits
          short bytes;    // bytes in the vector
      };
8. Derived Classes

In practice, most classes are not entirely unique, but rather variations of existing ones. Consider, for example, a class named RecFile which represents a file of records, and another class named SortedRecFile which represents a sorted file of records. These two classes would have much in common. For example, they would have similar member functions such as Insert, Delete, and Find, as well as similar data members. In fact, SortedRecFile would be a specialized version of RecFile with the added property that its records are organized in sorted order. Most of the member functions in both classes would therefore be identical, while a few which depend on the fact that the file is sorted would be different. For example, Find would be different in SortedRecFile because it can take advantage of the fact that the file is sorted to perform a binary search instead of the linear search performed by the Find member of RecFile.

Given the shared properties of these two classes, it would be tedious to have to define them independently. Clearly this would lead to considerable duplication of code. The code would not only take longer to write, it would also be harder to maintain: a change to any of the shared properties would have to be consistently applied to both classes.

Object-oriented programming provides a facility called inheritance to address this problem. Under inheritance, a class can inherit the properties of an existing class. Inheritance makes it possible to define a variation of a class without redefining the new class from scratch. Shared properties are defined only once, and reused as often as desired.

In C++, inheritance is supported by derived classes. A derived class is like an ordinary class, except that its definition is based on one or more existing classes, called base classes. A derived class can share selected properties (function as well as data members) of its base classes, but makes no changes to the definition of any of its base classes. A derived class is also called a subclass, because it becomes a subordinate of the base class in the hierarchy. Similarly, a base class may be called a superclass, because from it many other classes may be derived.

A derived class can itself be the base class of another derived class. The inheritance relationship between the classes of a program is called a class hierarchy.
An Illustrative Class
We will define two classes for the purpose of illustrating a number of programming concepts in later sections of this chapter. The two classes are defined in Listing 8.29 and support the creation of a directory of personal contacts.

Listing 8.29
1   #include <iostream.h>
2   #include <string.h>
3   class Contact {
4   public:
5                   Contact  (const char *name,
6                             const char *address, const char *tel);
7                   ~Contact (void);
8       const char* Name    (void) const {return name;}
9       const char* Address (void) const {return address;}
10      const char* Tel     (void) const {return tel;}
11      friend ostream& operator << (ostream&, Contact&);
12  private:
13      char *name;       // contact name
14      char *address;    // contact address
15      char *tel;        // contact telephone number
16  };
17  //------------------------------------------------------------------
18  class ContactDir {
19  public:
20                  ContactDir (const int maxSize);
21                  ~ContactDir(void);
22      void        Insert (const Contact&);
23      void        Delete (const char *name);
24      Contact*    Find   (const char *name);
25      friend ostream& operator << (ostream&, ContactDir&);
26  private:
27      int         Lookup (const char *name);
28      Contact     **contacts;    // list of contacts
29      int         dirSize;       // current directory size
30      int         maxSize;       // max directory size
31  };

Annotation
3   Contact captures the details (i.e., name, address, and telephone number) of a personal contact.
18  ContactDir allows us to insert into, delete from, and search a list of personal contacts.
22  Insert inserts a new contact into the directory. This will overwrite an existing contact (if any) with identical name.
23  Delete deletes a contact (if any) whose name matches a given name.
24  Find returns a pointer to a contact (if any) whose name matches a given name.
27  Lookup returns the slot index of a contact whose name matches a given name. If none exists then Lookup returns the index of the slot where such an entry should be inserted. Lookup is defined as private because it is an auxiliary function used only by Insert, Delete, and Find.

The implementation of the member functions and friends is as follows:

    Contact::Contact (const char *name,
                      const char *address, const char *tel)
    {
        Contact::name = new char[strlen(name) + 1];
        Contact::address = new char[strlen(address) + 1];
        Contact::tel = new char[strlen(tel) + 1];
        strcpy(Contact::name, name);
        strcpy(Contact::address, address);
        strcpy(Contact::tel, tel);
    }

    Contact::~Contact (void)
    {
        delete [] name;
        delete [] address;
        delete [] tel;
    }

    ostream &operator << (ostream &os, Contact &c)
    {
        os << "(" << c.name << " , " << c.address << " , " << c.tel << ")";
        return os;
    }

    ContactDir::ContactDir (const int max)
    {
        typedef Contact *ContactPtr;
        dirSize = 0;
        maxSize = max;
        contacts = new ContactPtr[maxSize];
    }

    ContactDir::~ContactDir (void)
    {
        for (register i = 0; i < dirSize; ++i)
            delete contacts[i];
        delete [] contacts;
    }

    void ContactDir::Insert (const Contact& c)
    {
        if (dirSize < maxSize) {
            int idx = Lookup(c.Name());
            if (idx > 0 &&
                strcmp(c.Name(), contacts[idx]->Name()) == 0) {
                delete contacts[idx];
            } else {
                for (register i = dirSize; i > idx; --i)   // shift right
                    contacts[i] = contacts[i-1];
                ++dirSize;
            }
            contacts[idx] = new Contact(c.Name(), c.Address(), c.Tel());
        }
    }

    void ContactDir::Delete (const char *name)
    {
        int idx = Lookup(name);
        if (idx < dirSize) {
            delete contacts[idx];
            for (register i = idx; i < dirSize; ++i)       // shift left
                contacts[i] = contacts[i+1];
            --dirSize;
        }
    }

    Contact *ContactDir::Find (const char *name)
    {
        int idx = Lookup(name);
        return (idx < dirSize &&
                strcmp(contacts[idx]->Name(), name) == 0)
               ? contacts[idx]
               : 0;
    }

    int ContactDir::Lookup (const char *name)
    {
        for (register i = 0; i < dirSize; ++i)
            if (strcmp(contacts[i]->Name(), name) == 0)
                return i;
        return dirSize;
    }

    ostream &operator << (ostream &os, ContactDir &c)
    {
        for (register i = 0; i < c.dirSize; ++i)
            os << *(c.contacts[i]) << '\n';
        return os;
    }

The following main function exercises the ContactDir class by creating a small directory and calling the member functions:

    int main (void)
    {
        ContactDir dir(10);
        dir.Insert(Contact("Mary", "11 South Rd", "282 1324"));
        dir.Insert(Contact("Peter", "9 Port Rd", "678 9862"));
        dir.Insert(Contact("Jane", "321 Yara Ln", "982 6252"));
        dir.Insert(Contact("Fred", "2 High St", "458 2324"));
        dir.Insert(Contact("Jack", "42 Wayne St", "663 2989"));
        cout << dir;
        cout << "Find Jane: " << *dir.Find("Jane") << '\n';
        dir.Delete("Jack");
        cout << "Deleted Jack\n";
        cout << dir;
        return 0;
    }

When run, it will produce the following output:

    (Mary , 11 South Rd , 282 1324)
    (Peter , 9 Port Rd , 678 9862)
    (Jane , 321 Yara Ln , 982 6252)
    (Fred , 2 High St , 458 2324)
    (Jack , 42 Wayne St , 663 2989)
    Find Jane: (Jane , 321 Yara Ln , 982 6252)
    Deleted Jack
    (Mary , 11 South Rd , 282 1324)
    (Peter , 9 Port Rd , 678 9862)
    (Jane , 321 Yara Ln , 982 6252)
    (Fred , 2 High St , 458 2324)
Listing 8. Find is redefined so that it can record the last looked-up entry. *recent. all the public members of ContactDir become public members of SmartDir. } Because ContactDir is a public base class of SmartDir. return c. The keyword public before ContactDir specifies that ContactDir is used as a public base class.30 1 class SmartDir : public ContactDir { 2 public: 3 SmartDir(const int max) : ContactDir(max) {recent = 0. A colon separates the two. SmartDir is best defined as a derivation of ContactDir. Here. This recent pointer is set to point to the name of the last looked-up entry.30.A Simple Derived Class We would like to define a class called SmartDir which behaves the same as ContactDir. if (c != 0) recent = (char*) c->Name(). SmartDir has its own constructor which in turn invokes the base class 3 4 5 7 constructor in its member initialization list. as illustrated by Listing 8. } Contact* SmartDir::Find (const char *name) { Contact *c = ContactDir::Find(name). 6 7 8 Annotation private: char }. 5 Contact* Find (const char *name). ContactDir is specified to be the base class from which SmartDir is derived. but also keeps track of the most recently looked-up entry. // the most recently looked-up name 1 A derived class header includes the base classes from which it is derived.} 4 Contact* Recent (void). The member functions are defined as follows: Contact* SmartDir::Recent (void) { return recent == 0 ? 0 : ContactDir::Find(recent). Recent returns a pointer to the last looked-up contact (or 0 if there is none). This means that we can invoke a member function such as Insert on a SmartDir object and this 150 C++ Essentials Copyright © 2005 PragSoft .
Similarly, all the private members of ContactDir become private members of SmartDir. In accordance with the principles of information hiding, the private members of ContactDir will not be accessible by SmartDir. Therefore, SmartDir will be unable to access any of the data members of ContactDir, as well as the private member function Lookup.

SmartDir redefines the Find member function. This should not be confused with overloading: there are two distinct definitions of this function, ContactDir::Find and SmartDir::Find (both of which have the same signature). Invoking Find on a SmartDir object causes the latter to be invoked. As illustrated by the definition of Find in SmartDir, the former can still be invoked using its full name.

The following code fragment illustrates that SmartDir behaves the same as ContactDir, but also keeps track of the most recently looked-up entry:

    SmartDir dir(10);
    dir.Insert(Contact("Mary", "11 South Rd", "282 1324"));
    dir.Insert(Contact("Peter", "9 Port Rd", "678 9862"));
    dir.Insert(Contact("Jane", "321 Yara Ln", "982 6252"));
    dir.Insert(Contact("Fred", "2 High St", "458 2324"));
    dir.Find("Jane");
    dir.Find("Peter");
    cout << "Recent: " << *dir.Recent() << '\n';

This will produce the following output:

    Recent: (Peter . 9 Port Rd . 678 9862)

An object of type SmartDir contains all the data members of ContactDir as well as any additional data members introduced by SmartDir. Figure 8.12 illustrates the physical make up of a ContactDir and a SmartDir object.

Figure 8.12 Base and derived class objects. [A ContactDir object holds contacts, dirSize, and maxSize; a SmartDir object holds the same three members plus recent.]
Class Hierarchy Notation

A class hierarchy is usually illustrated using a simple graph notation. Figure 8.13 illustrates the UML notation that we will be using in this book.

Figure 8.13 A simple class hierarchy. [ContactDir joined to Contact by a line with a diamond and the label n; SmartDir joined to ContactDir by a directed line.]

Figure 8.13 is interpreted as follows. Contact, ContactDir, and SmartDir are all classes. Each class is represented by a box which is labeled with the class name. Inheritance between two classes is illustrated by a directed line drawn from the derived class to the base class. Here, SmartDir is derived from ContactDir. A line with a diamond shape at one end depicts composition (i.e., a class object is composed of one or more objects of another class). The number of objects contained by another object is depicted by a label (e.g., n). A ContactDir is composed of zero or more Contact objects.
Constructors and Destructors

A derived class may have constructors and a destructor. Since a derived class may provide data members on top of those of its base class, the role of the constructor and destructor is to, respectively, initialize and destroy these additional members.

When an object of a derived class is created, the base class constructor is applied to it first, followed by the derived class constructor. When the object is destroyed, the destructor of the derived class is applied first, followed by the base class destructor. In other words, constructors are applied in order of derivation and destructors are applied in the reverse order. For example, consider a class C derived from B, which is in turn derived from A:

    class A            { /* ... */ };
    class B : public A { /* ... */ };
    class C : public B { /* ... */ };

Figure 8.14 illustrates how an object c of type C is created and destroyed.

Figure 8.14 Derived class object construction and destruction order.
    c being constructed:   A::A   then  B::B   then  C::C
    c being destroyed:     C::~C  then  B::~B  then  A::~A

The constructor of a derived class whose base class constructor requires arguments should specify these in the definition of its constructor. To do this, the derived class constructor explicitly invokes the base class constructor in its member initialization list. For example, the SmartDir constructor passes its argument to the ContactDir constructor in this way:

    SmartDir::SmartDir (const int max) : ContactDir(max) { /* ... */ }

In general, all that a derived class constructor requires is an object from the base class. In some situations, this may not even require referring to the base class constructor:

    extern ContactDir cd;    // defined elsewhere
    SmartDir::SmartDir (const int max) : ContactDir(cd) { /* ... */ }
Protected Class Members

Although the private members of a base class are inherited by a derived class, they are not accessible to it. For example, SmartDir inherits all the private (and public) members of ContactDir, but is not allowed to directly refer to the private members of ContactDir. This restriction may prove too prohibitive for classes from which other classes are likely to be derived. Denying the derived class access to the base class private members may convolute its implementation or even make it impractical to define.

The restriction can be relaxed by defining the base class private members as protected instead. As far as the clients of a class are concerned, a protected member is the same as a private member: it cannot be accessed by the class clients. However, a protected base class member can be accessed by any class derived from it. The idea is that private members should be completely hidden so that they cannot be tampered with by the class clients.

For example, the private members of ContactDir can be made protected by substituting the keyword protected for private:

    class ContactDir {
        //...
    protected:
        int      Lookup    (const char *name);
        Contact  **contacts;   // list of contacts
        int      dirSize;      // current directory size
        int      maxSize;      // max directory size
    };

As a result, Lookup and the data members of ContactDir are now accessible to SmartDir.

The access keywords private, public, and protected can occur as many times as desired in a class definition. Each access keyword specifies the access characteristics of the members following it until the next access keyword:

    class Foo {
    public:
        // public members...
    private:
        // private members...
    protected:
        // protected members...
    public:
        // more public members...
    protected:
        // more protected members...
    };
Private, Public, and Protected Base Classes

A base class may be specified to be private, public, or protected. Unless so specified, the base class is assumed to be private:

    class A {
    private:    int x;    void Fx (void);
    public:     int y;    void Fy (void);
    protected:  int z;    void Fz (void);
    };
    class B : A {};            // A is a private base class of B
    class C : private A {};    // A is a private base class of C
    class D : public A {};     // A is a public base class of D
    class E : protected A {};  // A is a protected base class of E

The behavior of these is as follows (see Table 8.13 for a summary):

•  All the members of a private base class become private members of the derived class. So x, Fx, y, Fy, z, and Fz all become private members of B and C.
•  The members of a public base class keep their access characteristics in the derived class. So x and Fx become private members of D, y and Fy become public members of D, and z and Fz become protected members of D.
•  The private members of a protected base class become private members of the derived class, whereas the public and protected members of a protected base class become protected members of the derived class. So x and Fx become private members of E, and y, Fy, z, and Fz become protected members of E.

Table 8.13 Base class access inheritance rules.
    Base Class         Private Derived   Public Derived   Protected Derived
    Private Member     private           private          private
    Public Member      private           public           protected
    Protected Member   private           protected        protected

It is also possible to individually exempt a base class member from the access changes specified by a derived class, so that it retains its original access characteristics. To do this, the exempted member is fully named in the derived class under its original access characteristic. For example:

    class C : private A {
        //...
    public:     A::Fy;    // makes Fy a public member of C
    protected:  A::z;     // makes z a protected member of C
    };
Virtual Functions

Consider another variation of the ContactDir class, called SortedDir, which ensures that new contacts are inserted in such a manner that the list remains sorted at all times. The obvious advantage of this is that the search speed can be improved by using the binary search algorithm instead of linear search.

The actual search is performed by the Lookup member function, so we need to redefine this function in SortedDir to use the binary search algorithm. However, all the other member functions refer to ContactDir::Lookup. We can also redefine these so that they refer to SortedDir::Lookup instead, but if we follow this approach, the value of inheritance becomes rather questionable, because we would have practically redefined the whole class.

What we really want is a way of expressing this: Lookup should be tied to the type of the object which invokes it. If the object is of type SortedDir, then invoking Lookup (from anywhere, even from within the member functions of ContactDir) should mean SortedDir::Lookup. Similarly, if the object is of type ContactDir, then calling Lookup (from anywhere) should mean ContactDir::Lookup.

This can be achieved through the dynamic binding of Lookup: the decision as to which version of Lookup to call is made at runtime, depending on the type of the object. In C++, dynamic binding is supported by virtual member functions. A member function is declared as virtual by inserting the keyword virtual before its prototype in the base class. To do this, Lookup should be declared as virtual in ContactDir:

    class ContactDir {
        //...
    protected:
        virtual int Lookup (const char *name);
        //...
    };

Only nonstatic member functions can be declared as virtual. Constructors cannot be virtual; destructors can (and, for a class intended as a base class, usually should) be. Virtual functions can be overloaded like other member functions. A virtual function redefined in a derived class must have exactly the same prototype as the one in the base class.

Listing 8.31 shows the definition of SortedDir as a derived class of ContactDir.
Listing 8.31
1  class SortedDir : public ContactDir {
2  public:
3      SortedDir (const int max) : ContactDir(max) {}
4  protected:
5      virtual int Lookup (const char *name);
6  };

Annotation
3  The constructor simply invokes the base class constructor.
5  Lookup is again declared as virtual to enable any class derived from SortedDir to redefine it.

The new definition of Lookup is as follows:

    int SortedDir::Lookup (const char *name)
    {
        int bot = 0;
        int top = dirSize - 1;
        int pos = 0;
        int mid, cmp;

        while (bot <= top) {
            mid = (bot + top) / 2;
            if ((cmp = strcmp(name, contacts[mid]->Name())) == 0)
                return mid;                 // return item index
            else if (cmp < 0)
                pos = top = mid - 1;        // restrict search to lower half
            else
                pos = bot = mid + 1;        // restrict search to upper half
        }
        return pos < 0 ? 0 : pos;           // expected slot
    }

The following code fragment illustrates that SortedDir::Lookup is called by ContactDir::Insert when invoked via a SortedDir object:

    SortedDir dir(10);
    dir.Insert(Contact("Mary", "11 South Rd", "282 1324"));
    dir.Insert(Contact("Peter", "9 Port Rd", "678 9862"));
    dir.Insert(Contact("Jane", "321 Yara Ln", "982 6252"));
    dir.Insert(Contact("Jack", "42 Wayne St", "663 2989"));
    dir.Insert(Contact("Fred", "2 High St", "458 2324"));
    cout << dir;

It will produce the following output:

    (Fred . 2 High St . 458 2324)
    (Jack . 42 Wayne St . 663 2989)
    (Jane . 321 Yara Ln . 982 6252)
    (Mary . 11 South Rd . 282 1324)
    (Peter . 9 Port Rd . 678 9862)
Multiple Inheritance

The derived classes encountered so far in this chapter represent single inheritance, because each inherits its attributes from a single base class. Alternatively, a derived class may have multiple base classes. This is referred to as multiple inheritance.

For example, suppose we have defined two classes for, respectively, representing lists of options and bitmapped windows:

    class OptionList {
    public:
        OptionList (int n);
        ~OptionList (void);
        //...
    };

    class Window {
    public:
        Window (Rect &bounds);
        ~Window (void);
        //...
    };

A menu is a list of options displayed within its own window. It therefore makes sense to define Menu by deriving it from OptionList and Window:

    class Menu : public OptionList, public Window {
    public:
        Menu (int n, Rect &bounds);
        ~Menu (void);
        //...
    };

Under multiple inheritance, a derived class inherits all of the members of its base classes. As before, each of the base classes may be private, public, or protected, and the same base member access principles apply. Figure 8.15 illustrates the class hierarchy for Menu.

Figure 8.15 The Menu class hierarchy. [Menu with directed lines to both OptionList and Window.]

Since the base classes of Menu have constructors that take arguments, the constructor for the derived class should invoke these in its member initialization list:
    Menu::Menu (int n, Rect &bounds) : OptionList(n), Window(bounds)
    {
        //...
    }

The order in which the base class constructors are invoked is the same as the order in which they are specified in the derived class header (not the order in which they appear in the derived class constructor's member initialization list). For Menu, for example, the constructor for OptionList is invoked before the constructor for Window, even if we change their order in the constructor:

    Menu::Menu (int n, Rect &bounds) : Window(bounds), OptionList(n)
    {
        //...
    }

The destructors are applied in the reverse order: ~Menu, followed by ~Window, followed by ~OptionList.

The obvious implementation of a derived class object is to contain one object from each of its base classes. Figure 8.16 illustrates the relationship between a Menu object and its base class objects.

Figure 8.16 Base and derived class objects. [A Menu object contains the OptionList data members, followed by the Window data members, followed by Menu's own data members.]

In general, a derived class may have any number of base classes, all of which must be distinct:

    class X : A, B, A {    // illegal: A appears twice
        //...
    };
Ambiguity

Multiple inheritance further complicates the rules for referring to the members of a class. For example, suppose that both OptionList and Window have a member function called Highlight for highlighting a specific part of either object type:

    class OptionList {
    public:
        //...
        void Highlight (int part);
    };

    class Window {
    public:
        //...
        void Highlight (int part);
    };

The derived class Menu will inherit both these functions. As a result, the call

    m.Highlight(0);

(where m is a Menu object) is ambiguous and will not compile, because it is not clear whether it refers to OptionList::Highlight or Window::Highlight.

The ambiguity is resolved by making the call explicit:

    m.Window::Highlight(0);

Alternatively, we can define a Highlight member for Menu which in turn calls the Highlight members of the base classes:

    class Menu : public OptionList, public Window {
    public:
        //...
        void Highlight (int part);
    };

    void Menu::Highlight (int part)
    {
        OptionList::Highlight(part);
        Window::Highlight(part);
    }
Type Conversion

For any derived class there is an implicit type conversion from the derived class to any of its public base classes. This can be used for converting a derived class object to a base class object, be it a proper object, a reference, or a pointer:

    Menu   menu(n, bounds);
    Window win  = menu;
    Window &wRef = menu;
    Window *wPtr = &menu;

The first assignment, for example, causes the Window component of menu to be assigned to win. Such conversions are safe because the derived class object always contains all of its base class objects.

By contrast, there is no implicit conversion from a base class to a derived class. The reason is that such a conversion is potentially dangerous: the derived class object may have data members not present in the base class object, and these extra data members would end up with unpredictable values. All such conversions must be explicitly cast to confirm the programmer's intention:

    Menu &mRef = (Menu&) win;     // caution!
    Menu *mPtr = (Menu*) &win;    // caution!

A base class object cannot be assigned to a derived class object unless there is a type conversion constructor in the derived class defined for this purpose. For example, given

    class Menu : public OptionList, public Window {
    public:
        //...
        Menu (Window&);
    };

the following would be valid, and would use the constructor to convert win to a Menu object before assigning:

    menu = win;    // invokes Menu::Menu(Window&)
Inheritance and Class Object Members

Consider the problem of recording the average time required for a message to be transmitted from one machine to another in a long-haul network. This can be represented as a table, as illustrated by Table 8.14.

Table 8.14 Message transmission time (in seconds).
                 Sydney    Melbourne    Perth
    Sydney        0.00        3.55      12.45
    Melbourne     2.34        0.00      10.31
    Perth        15.36        9.32       0.00

The row and column indices for this table are strings rather than integers, so the Matrix class (Chapter 7) will not be adequate for representing the table. We need a way of mapping strings to indices. This is already supported by the AssocVec class (Chapter 7). As shown in Listing 8.32, Table1 can be defined as a derived class of Matrix and AssocVec.

Listing 8.32
1   class Table1 : Matrix, AssocVec {
2   public:
3       Table1 (const short entries)
4           : Matrix(entries, entries),
5             AssocVec(entries) {}
6       double& operator () (const char *src, const char *dest);
7   };

8   double& Table1::operator () (const char *src, const char *dest)
9   {
10      return this->Matrix::operator()(
11                 this->AssocVec::operator[](src),
12                 this->AssocVec::operator[](dest));
13  }

Here is a simple test of the class:

    Table1 tab(3);
    tab("Sydney", "Perth") = 12.45;
    cout << "Sydney -> Perth = " << tab("Sydney", "Perth") << '\n';

which produces the following output:

    Sydney -> Perth = 12.45

Another way of defining this class is to derive it from Matrix and include an AssocVec object as a data member (see Listing 8.33).
Listing 8.33
1   class Table2 : Matrix {
2   public:
3       Table2 (const short entries)
4           : Matrix(entries, entries),
5             index(entries) {}
6       double& operator () (const char *src, const char *dest);
7   private:
8       AssocVec index;    // row and column index
9   };

10  double& Table2::operator () (const char *src, const char *dest)
11  {
12      return this->Matrix::operator()(index[src], index[dest]);
13  }

The inevitable question is: which one is a better solution, Table1 or Table2? The answer lies in the relationship of table to matrix and associative vector:

•  A table is a form of matrix.
•  A table is not an associative vector, but rather uses an associative vector to manage the association of its row and column labels with positional indexes.

In general, an is-a relationship is best realized using inheritance, because it implies that the properties of one object are shared by another object. On the other hand, a uses-a (or has-a) relationship is best realized using composition, because it implies that one object is contained by another object. Table2 is therefore the preferred solution.

It is worth considering which of the two versions of table better lends itself to generalization. One obvious generalization is to remove the restriction that the table should be square, and to allow the rows and columns to have different labels. To do this, we need to provide two sets of indexes: one for rows and one for columns. Hence we need two associative vectors. It is arguably easier to expand Table2 to do this rather than modify Table1 (see Listing 8.34).

Figure 8.17 shows the class hierarchies for the three variations of table.

Figure 8.17 Variations of table. [Table1 derived from both Matrix and AssocVec; Table2 derived from Matrix with one contained AssocVec; Table3 derived from Matrix with two contained AssocVecs.]

Listing 8.34
1   class Table3 : Matrix {
2   public:
3       Table3 (const short rows, const short cols)
4           : Matrix(rows, cols),
5             rowIdx(rows),
6             colIdx(cols) {}
7       double& operator () (const char *src, const char *dest);
8   private:
9       AssocVec rowIdx;    // row index
10      AssocVec colIdx;    // column index
11  };

12  double& Table3::operator () (const char *src, const char *dest)
13  {
14      return this->Matrix::operator()(rowIdx[src], colIdx[dest]);
15  }

For a derived class which also has class object data members, the order of object construction is as follows. First, the base class constructors are invoked in the order in which they appear in the derived class header. Then the class object data members are initialized by their constructors being invoked in the same order in which they are declared in the class. Finally, the derived class constructor is invoked. As before, the derived class object is destroyed in the reverse order of construction. Figure 8.18 illustrates this for a Table3 object.

Figure 8.18 Table3 object construction and destruction order.
    table being constructed:
        Matrix::Matrix  then  rowIdx.AssocVec::AssocVec  then
        colIdx.AssocVec::AssocVec  then  Table3::Table3
    table being destroyed:
        Table3::~Table3  then  colIdx.AssocVec::~AssocVec  then
        rowIdx.AssocVec::~AssocVec  then  Matrix::~Matrix
Virtual Base Classes

Recall the Menu class, and suppose that its two base classes are also multiply derived:

    class OptionList : public Widget, List { /*...*/ };
    class Window     : public Widget, Port { /*...*/ };

Since Widget is a base class for both OptionList and Window, each menu object will have two widget objects (see Figure 8.19a). This is not desirable (because a menu is considered a single widget) and may lead to ambiguity. For example, when applying a widget member function to a menu object, it is not clear as to which of the two widget objects it should be applied.

The problem is overcome by making Widget a virtual base class of OptionList and Window. A base class is made virtual by placing the keyword virtual before its name in the derived class header:

    class OptionList : virtual public Widget, List { /*...*/ };
    class Window     : virtual public Widget, Port { /*...*/ };

This ensures that a Menu object will contain exactly one Widget object. In other words, OptionList and Window will share the same Widget object. An object of a class which is derived from a virtual base class does not directly contain the latter's object, but rather a pointer to it (see Figures 8.19b and 8.19c). This enables multiple occurrences of a virtual class in a hierarchy to be collapsed into one (see Figure 8.19d).

Figure 8.19 Nonvirtual and virtual base classes.

If in a class hierarchy some instances of a base class X are declared as virtual and other instances as nonvirtual, then the derived class object will contain an X object for each nonvirtual instance of X, and a single X object for all virtual occurrences of X.

A virtual base class object is initialized, not necessarily by its immediate derived class, but by the derived class farthest down the class hierarchy. For example, in a menu object, the widget object is initialized by the Menu constructor (which overrides the invocation of the Widget constructor by OptionList or Window):

    Menu::Menu (int n, Rect &bounds)
        : Widget(bounds), OptionList(n), Window(bounds)
    {
        //...
    }

Regardless of where it appears in a class hierarchy, a virtual base class object is always constructed before nonvirtual objects in the same hierarchy. This rule ensures that the virtual base class object is initialized only once.

If a virtual base class is declared with different accessibility in different places (i.e., any combination of private, protected, and public), then the most accessible will dominate. For example, if Widget were declared a private base class of OptionList and a public base class of Window, then it would still be a public base class of Menu.
Overloaded Operators

Except for the assignment operator, a derived class inherits all the overloaded operators of its base classes. Memberwise initialization and assignment (see Chapter 7) extend to derived classes. For any given class Y derived from X, memberwise initialization is handled by an automatically-generated (or user-defined) constructor of the form:

    Y::Y (const Y&);

Similarly, memberwise assignment is handled by an automatically-generated (or user-defined) overloading of the = operator:

    Y& Y::operator = (Y&);

Memberwise initialization (or assignment) of a derived class object involves the memberwise initialization (or assignment) of its base classes, as well as of its class object members.

Special care is needed when a derived class relies on the overloading of the new and delete operators for its base class. For example, recall the overloading of these two operators for the Point class in Chapter 7, and suppose that we wish to use them for a derived class:

    class Point3D : public Point {
    public:
        //...
    private:
        int depth;
    };

Its inheritance by the Point3D class leads to a problem: the base class version fails to account for the extra space needed by the data member of the latter (i.e., depth). To avoid this problem, an overloading of new should attempt to allocate the exact amount of storage specified by its size parameter, rather than assuming a predefined size. Similarly, an overloading of delete should note the size specified by its second parameter and attempt to release exactly those many bytes.
Exercises

8.46  Consider a Year class which divides the days in a year into work days and off days. Because each day has a binary value, Year is easily derived from BitVec:

    enum Month { Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec };

    class Year : public BitVec {
    public:
                Year     (const short year);
        void    WorkDay  (const short day);      // set day as work day
        void    OffDay   (const short day);      // set day as off day
        Bool    Working  (const short day);      // true if a work day
        short   Day      (const short day,       // convert date to day
                          const Month month, const short year);
    protected:
        short   year;                            // calendar year
    };

Days are sequentially numbered from the beginning of the year, starting at 1 for January 1st. Complete the Year class by implementing its member functions.

8.47  Consider an educational application program which, given an arbitrary set of values, X = [x1, x2, ..., xn], generates a set of n linear equations whose solution is X, and then proceeds to illustrate this by solving the equations using Gaussian elimination. Derive a class named LinearEqns from Matrix for this purpose, and define the following member functions for it:

•  A constructor which accepts X as a matrix, and a destructor.
•  Generate, which randomly generates a system of linear equations as a matrix M. It should take a positive integer (coef) as argument and generate a set of equations, ensuring that the range of the coefficients does not exceed coef. Use a random number generator (e.g., random under UNIX) to generate the coefficients. To ensure that X is a solution for the equations denoted by M, the last element of a row k is given by:

       M[k, n+1] = sum over i = 1..n of M[k, i] * X[i]

•  Solve, which uses Gaussian elimination to solve the equations generated by Generate. Solve should use the output operator of Matrix to display the augmented matrix each time the elements below a pivot are eliminated.

8.48  Enumerations introduced by an enum declaration are small subsets of integers. In certain applications we may need to construct sets of such enumerations. For example, in a parser, each parsing routine may be passed a set of symbols that should not be skipped when the parser attempts to recover from a syntax error.
These symbols are typically the reserved words of the language:

    enum Reserved {classSym, privateSym, publicSym, protectedSym,
                   friendSym, ifSym, elseSym, switchSym, ...};

Given that there may be at most n elements in a set (n being a small number), the set can be efficiently represented as a bit vector of n elements. Derive a class named EnumSet from BitVec to facilitate this. EnumSet should overload the following operators:

•  Operator + for set union.
•  Operator - for set difference.
•  Operator * for set intersection.
•  Operator % for set membership.
•  Operators <= and >= for testing if a set is a subset of another.
•  Operators >> and << for, respectively, adding an element to and removing an element from a set.

8.49  An abstract class is a class which is never used directly, but provides a skeleton for other classes to be derived from it. Typically, all the member functions of an abstract class are virtual and have dummy implementations. The following is a simple example of an abstract class:

    class Database {
    public:
        virtual void Insert (Key, Data)  {}
        virtual void Delete (Key)        {}
        virtual Data Search (Key)        {return 0;}
    };

It provides a skeleton for database-like classes. Examples of the kind of classes which could be derived from Database include: linked-list, binary tree, and B-tree. First derive a B-tree class from Database, and then derive a B*-tree from B-tree:

    class BTree : public Database { /*...*/ };
    class BStar : public BTree    { /*...*/ };

For the purpose of this exercise, use the built-in type int for Key and double for Data. See Comer (1979) for a description of B-tree and B*-tree.
9.  Templates

This chapter describes the template facility of C++ for defining functions and classes. Templates facilitate the generic definition of functions and classes so that they are not tied to specific implementation types. They are invaluable in that they dispense with the burden of redefining a function or class so that it will work with yet another data type. This in turn makes them an ideal tool for defining generic libraries. Templates provide direct support for writing reusable code.

A function template defines an algorithm. An algorithm is a generic recipe for accomplishing a task, independent of the particular data types used for its implementation. For example, the binary search algorithm operates on a sorted array of items, whose exact type is irrelevant to the algorithm. Binary search can therefore be defined as a function template with a type parameter which denotes the type of the array items. This template then becomes a blueprint for generating executable functions by substituting a concrete type for the type parameter. This process is called instantiation, and its outcome is a conventional function.

A class template defines a parameterized type. A parameterized type is a data type defined in terms of other data types, one or more of which are unspecified. Most data types can be defined independently of the concrete data types used in their implementation. For example, the stack data type involves a set of items whose exact type is irrelevant to the concept of stack. Stack can therefore be defined as a class template with a type parameter which specifies the type of the items to be stored on the stack. This template can then be instantiated, by substituting a concrete type for the type parameter, to generate executable stack classes.

We will present a few simple examples to illustrate how templates are defined, instantiated, and specialized. We will describe the use of nontype parameters in class templates, and discuss the role of class members, friends, and derivations in the context of class templates.
Function Template Definition

A function template definition (or declaration) is always preceded by a template clause, which consists of the keyword template and a list of one or more type parameters. Each type parameter consists of the keyword class followed by the parameter name. When multiple type parameters are used, they should be separated by commas. Type parameters always appear inside <>. For example,

    template <class T> T Max (T, T);

declares a function template named Max for returning the maximum of two objects. T denotes an unspecified (generic) type.

The definition of a function template is very similar to that of a normal function, except that the specified type parameters can be referred to within the definition. The definition of Max is shown in Listing 9.35.

Listing 9.35
1   template <class T>
2   T Max (T val1, T val2)
3   {
4       return val1 > val2 ? val1 : val2;
5   }

A type parameter is an arbitrary identifier whose scope is limited to the function itself. Max is specified to compare two objects of the same type and return the larger of the two. Both arguments and the return value are therefore of the same type T.

Each specified type parameter must actually be referred to in the function prototype, and the keyword class cannot be factored out:

    template <class T1, class T2, class T3>
    T3 Relation (T1, T2);                  // ok

    template <class T1, class T2>
    int Compare (T1, T1);                  // illegal! T2 not used

    template <class T1, T2>
    int Compare (T1, T2);                  // illegal! class missing for T2

For static, inline, and extern functions, the respective keyword must appear after the template clause, and not before it:

    template <class T>
    inline T Max (T val1, T val2);         // ok

    inline template <class T>
    T Max (T val1, T val2);                // illegal! inline misplaced
Function Template Instantiation

A function template represents an algorithm from which executable implementations of the function can be generated by binding its type parameters to concrete (built-in or user-defined) types. When the compiler encounters a call to a template function, it attempts to infer the concrete type to be substituted for each type parameter by examining the type of the arguments in the call. For example, given the earlier template definition of Max, the code fragment

   cout << Max(19, 5) << ' ' << Max(10.5, 20.3) << ' ' << Max('a', 'b') << '\n';

will produce the following output:

   19 20.5 b

In the first call to Max, both arguments are integers, hence T is bound to int. In the second call, both arguments are reals, hence T is bound to double. In the final call, both arguments are characters, hence T is bound to char. A total of three functions are therefore generated by the compiler to handle these cases:

   int    Max (int, int);
   double Max (double, double);
   char   Max (char, char);

The compiler does not attempt any implicit type conversions to ensure a match. As a result, it cannot resolve the binding of the same type parameter to reasonable but unidentical types. For example:

   Max(10, 12.6);

would be considered an error because it requires the first argument to be converted to double so that both arguments can match T.

The same restriction even applies to the ordinary parameters of a function template. For example, consider the alternative definition of Max in Listing 9.36 for finding the maximum value in an array of values. The ordinary parameter n denotes the number of array elements. A matching argument for this parameter must be of type int:

   unsigned nValues = 4;
   double values[] = {10.3, 19.5, 20.6, 12.5};
   Max(values, 4);          // ok
   Max(values, nValues);    // illegal! nValues does not match int

Listing 9.36
1   template <class T>
2   T Max (T *vals, int n)
3   {
4      T max = vals[0];
5      for (register i = 1; i < n; ++i)
6         if (vals[i] > max)
7            max = vals[i];
8      return max;
9   }

The obvious solution to both problems is to use explicit type conversion:

   Max(double(10), 12.6);
   Max(values, int(nValues));

As illustrated by Listings 9.35 and 9.36, function templates can be overloaded in exactly the same way as normal functions. The same rule applies: each overloaded definition must have a unique signature.

Both definitions of Max assume that the > operator is defined for the type substituted in an instantiation. When this is not the case, the compiler flags it as an error:

   Point pt1(10,20), pt2(20,30);
   Max(pt1, pt2);         // illegal: pt1 > pt2 undefined

For some other types, the operator may be defined but not produce the desired effect. For example, using Max to compare two strings will result in their pointer values being compared, not their character sequences:

   Max("Day", "Night");   // caution: "Day" > "Night" undesirable

This case can be correctly handled through a specialization of the function, which involves defining an instance of the function to exactly match the proposed argument types:

   #include <string.h>

   char* Max (char *str1, char *str2)   // specialization of Max
   {
      return strcmp(str1, str2) > 0 ? str1 : str2;
   }

Given this specialization, the above call now matches this function and will not result in an instance of the function template to be instantiated for char*.
Example: Binary Search

Recall the binary search algorithm implemented in Chapter 5. Binary search is better defined as a function template so that it can be used for searching arrays of any type. Listing 9.37 provides a template definition.

Listing 9.37
1   template <class Type>
2   int BinSearch (Type &item, Type *table, int n)
3   {
4      int bot = 0;
5      int top = n - 1;
6      int mid, cmp;
7      while (bot <= top) {
8         mid = (bot + top) / 2;
9         if (item == table[mid])
10           return mid;              // return item index
11        else if (item < table[mid])
12           top = mid - 1;           // restrict search to lower half
13        else
14           bot = mid + 1;           // restrict search to upper half
15     }
16     return -1;                     // not found
17  }

Annotation
1   This is the template clause. It introduces Type as a type parameter, the scope for which is the entire definition of the BinSearch function.
2   BinSearch searches for an item denoted by item in the sorted array denoted by table, the dimension for which is denoted by n.
9   This line assumes that the operator == is defined for the type to which Type is bound in an instantiation.
11  This line assumes that the operator < is defined for the type to which Type is bound in an instantiation.

Instantiating BinSearch with Type bound to a built-in type such as int has the desired effect. For example,

   int nums[] = {10, 12, 30, 38, 52, 100};
   cout << BinSearch(52, nums, 6) << '\n';

produces the expected output: 4
Now let us instantiate BinSearch for a user-defined type such as RawBook (see Chapter 7). First, we need to ensure that the comparison operators are defined for our user-defined type:

   class RawBook {
   public:
      //...
      int operator <  (RawBook &b) {return Compare(b) < 0;}
      int operator >  (RawBook &b) {return Compare(b) > 0;}
      int operator == (RawBook &b) {return Compare(b) == 0;}
   private:
      int Compare (RawBook&);
      //...
   };

   int RawBook::Compare (RawBook &b)
   {
      int cmp;
      Book *b1 = RawToBook();
      Book *b2 = b.RawToBook();
      if ((cmp = strcmp(b1->title, b2->title)) == 0)
         if ((cmp = strcmp(b1->author, b2->author)) == 0)
            return strcmp(b1->publisher, b2->publisher);
      return cmp;
   }

All are defined in terms of the private member function Compare which compares two books by giving priority to their titles, then authors, and finally publishers. The code fragment

   RawBook books[] = {
      RawBook("%APeters\0%TBlue Earth\0%PPhedra\0%CSydney\0%Y1981\0\n"),
      RawBook("%TPregnancy\0%AJackson\0%Y1987\0%PMiles\0\n"),
      RawBook("%TZoro\0%ASmiths\0%Y1988\0%PMiles\0\n")
   };
   cout << BinSearch(RawBook("%TPregnancy\0%AJackson\0%PMiles\0\n"),
                     books, 3) << '\n';

produces the output 1 which confirms that BinSearch is instantiated as expected.
Class Template Definition

A class template definition (or declaration) is always preceded by a template clause. For example,

   template <class Type> class Stack;

declares a class template named Stack. A class template clause follows the same syntax rules as a function template clause. The definition of a class template is very similar to a normal class, except that the specified type parameters can be referred to within the definition. The definition of Stack is shown in Listing 9.38.

Listing 9.38
1   template <class Type>
2   class Stack {
3   public:
4      Stack (int max) : stack(new Type[max]),
5                        top(-1), maxSize(max) {}
6      ~Stack (void) {delete [] stack;}
7      void  Push (Type &val);
8      void  Pop  (void) {if (top >= 0) --top;}
9      Type& Top  (void) {return stack[top];}
10     friend ostream& operator << (ostream&, Stack&);
11  private:
12     Type *stack;          // stack array
13     int  top;             // index of top stack entry
14     const int maxSize;    // max size of stack
15  };

The member functions of Stack are defined inline except for Push. The << operator is also overloaded to display the stack contents for testing purposes. These two are defined as follows:

   template <class Type>
   void Stack<Type>::Push (Type &val)
   {
      if (top+1 < maxSize)
         stack[++top] = val;
   }

   template <class Type>
   ostream& operator << (ostream& os, Stack<Type>& s)
   {
      for (register i = 0; i <= s.top; ++i)
         os << s.stack[i] << " ";
      return os;
   }

Except for within the class definition itself, a reference to a class template must include its template parameter list. This is why the definitions of Push and << use the name Stack<Type> instead of Stack.
Class Template Instantiation

A class template represents a generic class from which executable implementations of the class can be generated by binding its type parameters to concrete (built-in or user-defined) types. For example, given the earlier template definition of Stack, it is easy to generate stacks of a variety of types through instantiation:

   Stack<int>    s1(10);   // stack of integers
   Stack<double> s2(10);   // stack of doubles
   Stack<Point>  s3(10);   // stack of points

Each of these instantiations causes the member functions of the class to be accordingly instantiated. So, in the first instantiation, the member functions will be instantiated with Type bound to int. Therefore,

   s1.Push(10);
   s1.Push(20);
   s1.Push(30);
   cout << s1 << '\n';

will produce the following output:

   10 20 30

The combination of a class template and arguments for all of its type parameters (e.g., Stack<int>) represents a valid type specifier. It may appear wherever a C++ type may appear. When a nontemplate class or function refers to a class template, it should bind the latter's type parameters to defined types. For example:

   class Sample {
      Stack<int>  intStack;    // ok
      Stack<Type> typeStack;   // illegal! Type is undefined
      //...
   };

If a class template is used as a part of the definition of another class template (or function template), then the former's type parameters can be bound to the latter's template parameters. For example:

   template <class Type>
   class Sample {
      Stack<int>  intStack;    // ok
      Stack<Type> typeStack;   // ok
      //...
   };
Nontype Parameters

Unlike a function template, not all of the parameters of a class template are required to represent types. Value parameters (of defined types) may also be used. Listing 9.39 shows a variation of the Stack class, where the maximum size of the stack is denoted by a template parameter (rather than a data member).

Listing 9.39
1   template <class Type, int maxSize>
2   class Stack {
3   public:
4      Stack (void) : stack(new Type[maxSize]), top(-1) {}
5      ~Stack (void) {delete [] stack;}
6      void Push (Type &val);
7      void Pop  (void) {if (top >= 0) --top;}
8      Type &Top (void) {return stack[top];}
9   private:
10     Type *stack;   // stack array
11     int  top;      // index of top stack entry
12  };

Both parameters are now required for referring to Stack outside the class. For example, Push is now defined as follows:

   template <class Type, int maxSize>
   void Stack<Type, maxSize>::Push (Type &val)
   {
      if (top+1 < maxSize)
         stack[++top] = val;
   }

Unfortunately, the operator << cannot be defined as before, since value template parameters are not allowed for nonmember functions:

   template <class Type, int maxSize>   // illegal!
   ostream &operator << (ostream&, Stack<Type, maxSize>&);

Instantiating the Stack template now requires providing two arguments: a defined type for Type and a defined integer value for maxSize. The type of the value must match the type of the value parameter exactly. The value itself must be a constant expression which can be evaluated at compile-time. For example:

   Stack<int, 10>  s1;   // ok
   Stack<int, 10u> s2;   // illegal! 10u doesn't match int
   Stack<int, 5+5> s3;   // ok
   int n = 10;
   Stack<int, n>   s4;   // illegal! n is a run-time value
Class Template Specialization

The algorithms defined by the member functions of a class template may be inappropriate for certain types. For example, instantiating the Stack class with the type char* may lead to problems because the Push function will simply push a string pointer onto the stack without copying it. As a result, if the original string is destroyed the stack entry will be invalid. Such cases can be properly handled by specializing the inappropriate member functions. Like a global function template, a member function of a class template is specialized by providing an implementation of it based on a particular type. For example,

   void Stack<char*>::Push (char* &val)
   {
      if (top+1 < maxSize) {
         stack[++top] = new char[strlen(val) + 1];
         strcpy(stack[top], val);
      }
   }

specializes the Push member for the char* type. To free the allocated storage, Pop needs to be specialized as well:

   void Stack<char*>::Pop (void)
   {
      if (top >= 0)
         delete stack[top--];
   }

It is also possible to specialize a class template as a whole, in which case all the class members must be specialized as a part of the process:

   typedef char* Str;

   class Stack<Str> {
   public:
      Stack<Str>::Stack (int max) : stack(new Str[max]),
                                    top(-1), maxSize(max) {}
      ~Stack (void)      {delete [] stack;}
      void Push (Str val);
      void Pop  (void);
      Str  Top  (void)   {return stack[top];}
      friend ostream& operator << (ostream&, Stack<Str>&);
   private:
      Str *stack;           // stack array
      int top;              // index of top stack entry
      const int maxSize;    // max size of stack
   };

Although the friend declaration of << is necessary, its earlier definition suffices, because this is a nonmember function.
Class Template Members

A class template may have constant, reference, and static members just like an ordinary class. The use of constant and reference members is exactly as before. Static data members are shared by the objects of an instantiation. There will therefore be an instance of each static data member per instantiation of the class template.

As an example, consider adding a static data member to the Stack class to enable Top to return a value when the stack is empty:

   template <class Type>
   class Stack {
   public:
      //...
      Type& Top (void);
   private:
      //...
      static Type dummy;   // dummy entry
   };

   template <class Type>
   Type& Stack<Type>::Top (void)
   {
      return top >= 0 ? stack[top] : dummy;
   }

There are two ways in which a static data member can be initialized: as a template or as a specific type. For example,

   template <class Type> Type Stack<Type>::dummy = 0;

provides a template initialization for dummy. This is instantiated for each instantiation of Stack. (Note, however, that the value 0 may be inappropriate for non-numeric types.) Alternatively, an explicit instance of this initialization may be provided for each instantiation of Stack. A Stack<int> instantiation, for example, could use the following initialization of dummy:

   int Stack<int>::dummy = 0;
Class Template Friends

When a function or class is declared as a friend of a class template, the friendship can take one of three forms, as illustrated by the following example. Consider the Stack class template and a function template named Foo:

   template <class T> void Foo (T&);

We wish to define a class named Sample and declare Foo and Stack as its friends. The following makes a specific instance of Foo and Stack friends of all instances of Sample:

   template <class T>
   class Sample {                  // one-to-many friendship
      friend Foo<int>;
      friend Stack<int>;
      //...
   };

Alternatively, we can make each instance of Foo and Stack a friend of its corresponding instance of Sample:

   template <class T>
   class Sample {                  // one-to-one friendship
      friend Foo<T>;
      friend Stack<T>;
      //...
   };

This means that, for example, Foo<int> and Stack<int> are friends of Sample<int>, but not Sample<double>. The extreme case of making all instances of Foo and Stack friends of all instances of Sample is expressed as:

   template <class T>
   class Sample {                  // many-to-many friendship
      template <class T> friend Foo;
      template <class T> friend class Stack;
      //...
   };

The choice as to which form of friendship to use depends on the intentions of the programmer.
Example: Doubly-linked Lists

A container type is a type which in turn contains objects of another type. A linked-list represents one of the simplest and most popular forms of container types. It consists of a set of elements, each of which contains a pointer to the next element in the list. In a doubly-linked list, each element also contains a pointer to the previous element in the list. Figure 9.20 illustrates a doubly-linked list of integers.

Figure 9.20 A doubly-linked list of integers.
First   10   20   30   Last

Because a container class can conceivably contain objects of any type, it is best defined as a class template. Listing 9.40 shows the definition of doubly-linked lists as two class templates.

Listing 9.40
1   #include <iostream.h>
2   enum Bool {false, true};
3   template <class Type> class List;   // forward declaration
4
5   template <class Type>
6   class ListElem {
7   public:
8      ListElem (const Type elem) : val(elem) {prev = next = 0;}
9      Type&     Value (void) {return val;}
10     ListElem* Prev  (void) {return prev;}
11     ListElem* Next  (void) {return next;}
12     friend class List<Type>;         // one-to-one friendship
13  protected:
14     Type val;           // the element value
15     ListElem *prev;     // previous element in the list
16     ListElem *next;     // next element in the list
17  };
18  //---------------------------------------------------------
19  template <class Type>
20  class List {
21  public:
22     List (void) {first = last = 0;}
23     ~List (void);
24     virtual void Insert (const Type&);
25     virtual void Remove (const Type&);
26     virtual Bool Member (const Type&);
27     friend ostream& operator << (ostream&, List&);
28  protected:
29     ListElem<Type> *first;   // first element in the list
30     ListElem<Type> *last;    // last element in the list
31  };
Annotation
3     This forward declaration of the List class is necessary because ListElem refers to List before the latter's definition.
5-17  ListElem represents a list element. It consists of a value whose type is denoted by the type parameter Type, and two pointers which point to the previous and next elements in the list. All of the member functions of ListElem are defined inline. List is declared as a one-to-one friend of ListElem, because the former's implementation requires access to the nonpublic members of the latter.
20    List represents a doubly-linked list. Insert, Remove, and Member are all defined as virtual to allow a class derived from List to override them.
24    Insert inserts a new element in front of the list.
25    Remove removes the list element, if any, whose value matches its parameter.
26    Member returns true if val is in the list, and false otherwise.
27    Operator << is overloaded for the output of lists.
29-30 first and last, respectively, point to the first and last element in the list. Note that these two are declared of type ListElem<Type>* and not ListElem*, because the declaration is outside the ListElem class.

The definition of the List member functions is as follows:

   template <class Type>
   List<Type>::~List (void)
   {
      ListElem<Type> *handy;
      ListElem<Type> *next;

      for (handy = first; handy != 0; handy = next) {
         next = handy->next;
         delete handy;
      }
   }
   //-----------------------------------------------------------------
   template <class Type>
   void List<Type>::Insert (const Type &elem)
   {
      ListElem<Type> *handy = new ListElem<Type>(elem);

      handy->next = first;
      if (first != 0)
         first->prev = handy;
      if (last == 0)
         last = handy;
      first = handy;
   }
   //-----------------------------------------------------------------
   template <class Type>
   void List<Type>::Remove (const Type &val)
   {
      ListElem<Type> *handy;
      ListElem<Type> *next;

      for (handy = first; handy != 0; handy = next) {
         next = handy->next;   // saved first: handy may be deleted below
         if (handy->val == val) {
            if (handy->next != 0)
               handy->next->prev = handy->prev;
            else
               last = handy->prev;
            if (handy->prev != 0)
               handy->prev->next = handy->next;
            else
               first = handy->next;
            delete handy;
         }
      }
   }
   //-----------------------------------------------------------------
   template <class Type>
   Bool List<Type>::Member (const Type &val)
   {
      ListElem<Type> *handy;

      for (handy = first; handy != 0; handy = handy->next)
         if (handy->val == val)
            return true;
      return false;
   }

The << is overloaded for both classes. The overloading of << for ListElem does not require it to be declared a friend of the class because it is defined in terms of public members only:
   template <class Type>
   void Stack<Type>::Push (Type &val)
   {
      if (top+1 < maxSize)
         stack[++top] = val;
      else
         throw Overflow();
   }

•  An attempt to pop from an empty stack results in an underflow. We raise an Underflow exception in response to this:

   template <class Type>
   void Stack<Type>::Pop (void)
   {
      if (top >= 0)
         --top;
      else
         throw Underflow();
   }

•  Attempting to examine the top element of an empty stack is clearly an error. We raise an Empty exception in response to this:

   template <class Type>
   Type &Stack<Type>::Top (void)
   {
      if (top < 0)
         throw Empty();
      return stack[top];
   }

Suppose that we have defined a class named Error for exception handling purposes. The above exceptions are easily defined as derivations of Error:

   class Error              { /* ... */ };
   class BadSize   : public Error {};
   class HeapFail  : public Error {};
   class Overflow  : public Error {};
   class Underflow : public Error {};
   class Empty     : public Error {};

www.pragsoft.com Chapter 10: Exception Handling 191
The Try Block and Catch Clauses

A code fragment whose execution may potentially raise exceptions is enclosed by a try block, which has the general form

   try {
      statements
   }

where statements represents one or more semicolon-terminated statements. In other words, a try block is like a compound statement preceded by the try keyword. A try block is followed by catch clauses for the exceptions which may be raised during the execution of the block. A catch clause (also called a handler) has the general form

   catch (type par) {
      statements
   }

where type is the type of the object raised by the matching exception, par is optional and is an identifier bound to the object raised by the exception, and statements represents zero or more semicolon-terminated statements. For example, continuing with our Stack class, we may write:

   try {
      Stack<int> s(3);
      s.Push(10);
      //...
      s.Pop();
      //...
   }
   catch (Underflow) {cout << "Stack underflow\n";}
   catch (Overflow)  {cout << "Stack overflow\n";}
   catch (HeapFail)  {cout << "Heap exhausted\n";}
   catch (BadSize)   {cout << "Bad stack size\n";}
   catch (Empty)     {cout << "Empty stack\n";}

For simplicity, the catch clauses here do nothing more than outputting a relevant message. The role of the catch clauses is to handle the respective exceptions. When an exception is raised by the code within the try block, the catch clauses are examined in the order they appear. The first matching catch clause is selected and its statements are executed. The remaining catch clauses are ignored. A catch clause (of type C) matches an exception (of type E) if:

•  C and E are the same type, or
•  One is a reference or constant of the other type, or
•  One is a nonprivate base class of the other type, or
•  Both are pointers and one can be converted to another by implicit type conversion rules.

Because of the way the catch clauses are evaluated, their order of appearance is significant. Care should be taken to place the types which are likely to mask other types last. For example, the clause type void* will match any pointer and should therefore appear after other pointer type clauses:

   try {
      //...
   }
   catch (char*)  {/*...*/}
   catch (Point*) {/*...*/}
   catch (void*)  {/*...*/}

The special catch clause type

   catch (...) { /* ... */ }

will match any exception type and, if used, should always appear last, like a default case in a switch statement.

The statements in a catch clause can also throw exceptions. The case where the matched exception is to be propagated up can be signified by an empty throw:

   catch (char*)
   {
      //...
      throw;   // propagate up the exception
   }

An exception which is not matched by any catch clause after a try block is propagated up to an enclosing try block. This process is continued until either the exception is matched or no more enclosing try block remains. The latter causes the predefined function terminate to be called, which simply terminates the program. The default terminate function can be overridden by calling set_terminate and passing the replacing function as its argument:

   TermFun set_terminate(TermFun);

This function has the following type:

   typedef void (*TermFun)(void);

set_terminate returns the previous setting.
Function Throw Lists

It is a good programming practice to specify what exceptions a function may throw. This enables function users to quickly determine the list of exceptions that their code will have to handle. A function prototype may be appended with a throw list for this purpose:

   type function (parameters) throw (exceptions);

where exceptions denotes a list of zero or more comma-separated exception types which function may directly or indirectly throw. For example,

   void Encrypt (File &in, File &out, char *key)
        throw (InvalidKey, BadFile, const char*);

specifies that Encrypt may throw an InvalidKey, BadFile, or const char* exception, but none other. The list is also an assurance that the function will not throw any other exceptions. An empty throw list specifies that the function will not throw any exceptions:

   void Sort (List list) throw ();

In absence of a throw list, the only way to find the exceptions that a function may throw is to study its code (including other functions that it calls). It is generally expected to at least define throw lists for frequently-used functions.

Should a function throw an exception which is not specified in its throw list, the predefined function unexpected is called. The default behavior of unexpected is to terminate the program. This can be overridden by calling set_unexpected (which has the same signature as set_terminate) and passing the replacing function as its argument:

   TermFun set_unexpected(TermFun);

As before, set_unexpected returns the previous setting.
Exercises

10.54  Consider the following function which is used for receiving a packet in a network system:

   void ReceivePacket (Packet *pack, Connection *c)
   {
      switch (pack->Type()) {
         case controlPack:  //...
            break;
         case dataPack:     //...
            break;
         case diagnosePack: //...
            break;
         default:           //...
      }
   }

Suppose we wish to check for the following errors in ReceivePacket:
•  That connection c is active. Connection::Active() will return true if this is the case.
•  That no errors have occurred in the transmission of the packet. Packet::Valid() will return true if this is the case.
•  That the packet type is known (the default case is exercised otherwise).
Define suitable exceptions for the above and modify ReceivePacket so that it throws an appropriate exception when any of the above cases is not satisfied. Also define a throw list for the function.

10.55  Define appropriate exceptions for the Matrix class (see Chapter 7) and modify its functions so that they throw exceptions when errors occur, including the following:
•  When the sizes of the operands of + and - are not identical.
•  When the number of the columns of the first operand of * does not match the number of rows of its second operand.
•  When the row or column specified for () is outside its range.
•  When heap storage is exhausted.
11. The IO Library

C++ has no built-in Input/Output (IO) capability. Instead, this capability is provided by a library. The standard C++ IO library is called the iostream library. The definition of the library classes is divided into three header files. An additional header file defines a set of manipulators which act on streams. These are summarized by Table 11.15.

Table 11.15 Iostream header files.
Header File    Description
iostream.h     Defines a hierarchy of classes for low-level (untyped character-level) IO and high-level (typed) IO. This includes the definition of the ios, istream, ostream, and iostream classes.
strstream.h    Derives a set of classes from those defined in iostream.h for IO with respect to character arrays. This includes the definition of the istrstream, ostrstream, and strstream classes.
fstream.h      Derives a set of classes from those defined in iostream.h for file IO. This includes the definition of the ifstream, ofstream, and fstream classes.
iomanip.h      Defines a set of manipulators which operate on streams to produce useful effects.

A user of the iostream library typically works with these classes only. Table 11.16 summarizes the role of these high-level classes.

Table 11.16 Highest-level iostream classes.
Form of IO         Input        Output       Input and Output
Standard IO        istream      ostream      iostream
File IO            ifstream     ofstream     fstream
Array of char IO   istrstream   ostrstream   strstream

The library also provides four predefined stream objects for the common use of programs. These are summarized by Table 11.17.

Table 11.17 Predefined streams.
Stream  Type     Buffered  Description
cin     istream  Yes       Connected to standard input (e.g., the keyboard)
cout    ostream  Yes       Connected to standard output (e.g., the monitor)
clog    ostream  Yes       Connected to standard error (e.g., the monitor)
cerr    ostream  No        Connected to standard error (e.g., the monitor)

Figure 11.22 relates these header files to a class hierarchy for a UNIX-based implementation of the iostream class hierarchy. The highest-level classes appear unshaded.
Figure 11.22 Iostream class hierarchy.
[Class hierarchy diagram: ios (built on streambuf, with unsafe_ios and stream_MT bases) is a virtual base of istream and ostream (via unsafe_istream and unsafe_ostream), which combine into iostream (iostream.h); fstream.h adds fstreambase, unsafe_fstreambase, and filebuf, deriving ifstream, ofstream, and fstream; strstream.h adds strstreambase, unsafe_strstreambase, and strstreambuf, deriving istrstream, ostrstream, and strstream. A "v" in the diagram marks a virtual base class.]

A stream may be used for input, output, or both. The act of reading data from an input stream is called extraction. It is performed using the >> operator (called the extraction operator) or an iostream member function. Similarly, the act of writing data to an output stream is called insertion, and is performed using the << operator (called the insertion operator) or an iostream member function. We therefore speak of ‘extracting data from an input stream’ and ‘inserting data into an output stream’.

www.pragsoft.com Chapter 11: The IO Library 197
The Role of streambuf

The iostream library is based on a two layer model. The upper layer deals with formatted IO of typed objects (built-in or user-defined). The lower layer deals with unformatted IO of streams of characters, and is defined in terms of streambuf objects (see Figure 11.23).

Figure 11.23 Two-layer IO model.
[Diagram: inserted and extracted objects pass through the stream layer, which sits on top of the streambuf layer, which handles the output and input characters.]

The streambuf layer provides buffering capability and hides the details of physical IO device handling. All stream classes contain a pointer to a streambuf object or one derived from it. When a stream is created, a streambuf is associated with it. Therefore, under normal circumstances, the user need not worry about or directly work with streambuf objects. However, a basic understanding of how a streambuf operates makes it easier to understand some of the operations of streams.

Think of a streambuf as a sequence of characters which can grow or shrink. Depending on the type of the stream, one or two pointers are associated with this sequence (see Figure 11.24):
•  A put pointer points to the position of the next character to be deposited into the sequence as a result of an insertion.
•  A get pointer points to the position of the next character to be fetched from the sequence as a result of an extraction.
For example, istream only has a get pointer, ostream only has a put pointer, and iostream has both pointers.

Figure 11.24 Streambuf put and get pointers.
[Diagram: a character sequence with the get pointer at the start of the data present and the put pointer at its end.]

The stream classes provide constructors which take a streambuf* argument. All stream classes also overload the insertion and extraction operators for use with a streambuf* operand. The insertion or extraction of a streambuf causes the entire stream represented by it to be copied. These are indirectly employed by streams.
Stream Output with ostream

Ostream provides formatted output capability. Use of the insertion operator << for stream output was introduced in Chapter 1, and employed throughout this book. The overloading of the insertion operator for user-defined types was discussed in Chapter 7. This section looks at the ostream member functions.

The put member function provides a simple method of inserting a single character into an output stream. For example, assuming that os is an ostream object,

   os.put('a');

inserts 'a' into os. Similarly, write inserts a string of characters into an output stream. For example,

   os.write(str, 10);

inserts the first 10 characters from str into os.

An output stream can be flushed by invoking its flush member function. Flushing causes any buffered data to be immediately sent to output:

   os.flush();   // flushes the os buffer

The position of an output stream put pointer can be queried using tellp and adjusted using seekp. For example,

   os.seekp(os.tellp() + 10);

moves the put pointer 10 characters forward. An optional second argument to seekp enables the position to be specified relatively rather than absolutely. The second argument may be one of:
•  ios::beg for positions relative to the beginning of the stream,
•  ios::cur for positions relative to the current put pointer position, or
•  ios::end for positions relative to the end of the stream.
These are defined as a public enumeration in the ios class. For example, the above is equivalent to:

   os.seekp(10, ios::cur);

All output functions with an ostream& return type return the stream for which they are invoked. Multiple calls to such functions can be concatenated (i.e., combined into one statement).
For example,

    os.put('a').put('b');

is valid and is equivalent to:

    os.put('a');
    os.put('b');

Table 11.18 Member functions of ostream.

ostream (streambuf*);
    The constructor associates a streambuf (or its derivation) with the class to provide an output stream.
ostream& put (char);
    Inserts a character into the stream.
ostream& write (const signed char*, int n);
ostream& write (const unsigned char*, int n);
    Inserts n signed or unsigned characters into the stream.
ostream& flush ();
    Flushes the stream.
long tellp ();
    Returns the current stream put pointer position.
ostream& seekp (long, seek_dir = ios::beg);
    Moves the put pointer to a character position in the stream relative to the beginning, the current, or the end position: enum seek_dir {beg, cur, end};
Stream Input with istream

Istream provides formatted input capability. Use of the extraction operator >> for stream input was introduced in Chapter 1. The overloading of the extraction operator for user-defined types was discussed in Chapter 7. This section looks at the istream member functions.

The get member function provides a simple method of extracting a single character from an input stream. For example, assuming that is is an istream object,

    int ch = is.get();

extracts and returns the character denoted by the get pointer of is, and advances the get pointer. The return type of get is an int (not char). This is because the end-of-file character (EOF) is usually given the value -1.

The behavior of get is different from the extraction operator in that the former does not skip blanks. For example, an input line consisting of x y (i.e., 'x', space, 'y', newline) would be extracted by four calls to get. Of course, the same line would be extracted by two applications of >>.

The effect of a call to get can be canceled by calling putback, which deposits the extracted character back into the stream:

    is.putback(ch);

A variation of get, called peek, does the same but does not advance the get pointer. In other words, it allows you to examine the next input character without extracting it.

The read member function extracts a string of characters from an input stream. For example,

    char buf[64];
    is.read(buf, 64);

extracts up to 64 characters from is and deposits them into buf. If EOF is encountered in the process, fewer characters will be extracted. The actual number of characters extracted is obtained by calling gcount.

A variation of read, called getline, allows extraction of characters until a user-specified delimiter is encountered. For example,

    is.getline(buf, 64, '\t');

is similar to the above call to read but stops the extraction if a tab character is encountered. The delimiter, although extracted if encountered within the specified number of characters, is not deposited into buf. Other variations of get are also provided; see Table 11.19 for a summary.
Input characters can be skipped by calling ignore. For example,

    is.ignore(10, '\n');

extracts and discards up to 10 characters but stops if a newline character is encountered. The delimiter itself is also extracted and discarded.

The position of an input stream get pointer can be queried using tellg and adjusted using seekg. For example,

    is.seekg(is.tellg() - 10);

moves the get pointer 10 characters backward. An optional second argument to seekg enables the position to be specified relatively rather than absolutely. As with seekp, the second argument may be one of ios::beg, ios::cur, or ios::end. For example, the above is equivalent to:

    is.seekg(-10, ios::cur);

All input functions with an istream& return type return the stream for which they are invoked. Multiple calls to such functions can therefore be concatenated. For example,

    is.get(ch1).get(ch2);

is valid and is equivalent to:

    is.get(ch1);
    is.get(ch2);

Table 11.19 summarizes the istream member functions.

The iostream class is derived from the istream and ostream classes and inherits their public members as its own public members:

    class iostream : public istream, public ostream {
        //...
    };

An iostream object is used for both insertion and extraction; it can invoke any of the functions listed in Tables 11.18 and 11.19.

202 C++ Essentials Copyright © 2005 PragSoft
Table 11.19 Member functions of istream.

istream (streambuf*);
    The constructor associates a streambuf (or its derivation) with the class to provide an input stream.
int get ();
istream& get (signed char&);
istream& get (unsigned char&);
istream& get (streambuf&, char = '\n');
    The first version extracts the next character (including EOF). The second and third versions are similar but instead deposit the character into their parameter. The last version extracts and deposits characters into the given streambuf until the delimiter denoted by its last parameter is encountered.
istream& getline (signed char*, int n, char = '\n');
istream& getline (unsigned char*, int n, char = '\n');
    Extracts at most n-1 characters, and deposits them into the given array, which is always null-terminated, but stops if the delimiter denoted by the last parameter is encountered. The delimiter, if encountered and extracted, is not deposited into the array.
istream& read (signed char*, int n);
istream& read (unsigned char*, int n);
    Extracts up to n characters into the given array, but stops if EOF is encountered.
int gcount ();
    Returns the number of characters last extracted as a result of calling read or getline.
istream& ignore (int n = 1, int = EOF);
    Skips up to n characters, or until the delimiter denoted by the last parameter or EOF is encountered.
int peek ();
    Returns the next input character without extracting it.
istream& putback (char);
    Pushes an extracted character back into the stream.
long tellg ();
    Returns the current stream get pointer position.
istream& seekg (long, seek_dir = ios::cur);
    Moves the get pointer to a character position in the stream relative to the beginning, the current, or the end position: enum seek_dir {beg, cur, end};

www.pragsoft.com Chapter 11: The IO Library 203
Using the ios Class

Ios provides capabilities common to both input and output streams. It uses a streambuf for buffering of data and maintains operational information on the state of the streambuf (i.e., IO errors). It also keeps formatting information for the use of its client classes (e.g., istream and ostream).

The definition of ios contains a number of public enumerations whose values are summarized by Table 11.20. The io_state values are used for the state data member, which is a bit vector of IO error flags. The formatting flags are used for the x_flags data member (a bit vector). The open_mode values are bit flags for specifying the opening mode of a stream. The seek_dir values specify the seek direction for seekp and seekg.

Table 11.20 Useful public enumerations in ios.

io_state — provides values for stream state (IO errors):
• When state is set to this value, it means that all is ok.
• End-of-file has been reached.
• The last IO operation attempted has failed.
• An invalid operation has been attempted.
• An unrecoverable error has taken place.

Formatting flags — provides formatting flags:
• Skip blanks (white spaces) on input.
• Left-adjust the output.
• Right-adjust the output.
• Output padding indicator.
• Convert to decimal.
• Convert to octal.
• Convert to hexadecimal.
• Show the base on output.
• Show the decimal point on output.
• Use upper case for hexadecimal output.
• Show the + symbol for positive integers.
• Use the scientific notation for reals.
• Use the floating notation for reals.
• Flush all streams after insertion.

open_mode — provides values for stream opening mode:
• Stream open for input.
• Stream open for output.
• Upon opening the stream, seek to EOF.
• Append data to the end of the file.
• Truncate existing file.
• Open should fail if file does not already exist.
• Open should fail if file already exists.
• Binary file (as opposed to default text file).

seek_dir — provides values for relative seek:
• Seek relative to the beginning of the stream.
• Seek relative to the current put/get pointer position.
• Seek relative to the end of the stream.

204 C++ Essentials Copyright © 2005 PragSoft
IO operations may result in IO errors, which can be checked for using a number of ios member functions. good returns nonzero if no error has occurred:

    if (s.good()) // all is ok...

bad returns nonzero if an invalid IO operation has been attempted:

    if (s.bad()) // invalid IO operation...

and fail returns true if the last attempted IO operation has failed (or if bad() is true):

    if (s.fail()) // last IO operation failed...

A shorthand for this is provided, based on the overloading of the ! operator:

    if (!s) // ... same as: if (s.fail())

The opposite shorthand is provided through the overloading of void*, so that it returns zero when fail returns nonzero. This makes it possible to check for errors in the following fashion:

    if (cin >> str) // no error occurred

The entire error bit vector can be obtained by calling rdstate, and cleared by calling clear. User-defined IO operations can report errors by calling setstate. For example,

    s.setstate(ios::eofbit | ios::badbit);

sets the eofbit and badbit flags.

Ios also provides various formatting member functions. For example, precision can be used to change the precision for displaying floating point numbers:

    cout.precision(4);
    cout << 233.1235 << '\n';

This will produce the output:

    233.1
The width member function is used to specify the minimum width of the next output object. For example,

    cout.width(5);
    cout << 10 << '\n';

will use exactly 5 characters to display 10:

       10

An object requiring more than the specified width will not be restricted to it. Also, the specified width applies only to the next object to be output. By default, spaces are used to pad the object up to the specified minimum size. The padding character can be changed using fill. For example,

    cout.width(5);
    cout.fill('*');
    cout << 10 << '\n';

will produce:

    ***10

The formatting flags listed in Table 11.20 can be manipulated using the setf member function. For example,

    cout.setf(ios::scientific);
    cout << 3.14 << '\n';

will display:

    3.14e+00

Another version of setf takes a second argument which specifies formatting flags which need to be reset beforehand. The second argument is typically one of:

    ios::basefield   ≡ ios::dec | ios::oct | ios::hex
    ios::adjustfield ≡ ios::left | ios::right | ios::internal
    ios::floatfield  ≡ ios::scientific | ios::fixed

For example,

    cout.setf(ios::hex | ios::uppercase, ios::basefield);
    cout << 123456 << '\n';

will display:

    1E240
Formatting flags can be reset by calling unsetf, and set as a whole or examined by calling flags. For example, to disable the skipping of leading blanks for an input stream such as cin, we can write:

    cin.unsetf(ios::skipws);

Table 11.21 summarizes the member functions of ios.

Table 11.21 Member functions of ios.

ios (streambuf*);
    The constructor associates the specified streambuf with the stream.
void init (streambuf*);
    Associates the specified streambuf with the stream.
streambuf* rdbuf (void);
    Returns a pointer to the stream's associated streambuf.
int rdstate (void);
    Returns ios::state.
void clear (int = 0);
    Sets the ios::state value to the value specified by the parameter.
void setstate (int);
    Sets the ios::state bits specified by the parameter.
int good (void);
    Examines ios::state and returns zero if bits have been set as a result of an error.
int eof (void);
    Examines the ios::eofbit in ios::state and returns nonzero if the end-of-file has been reached.
int fail (void);
    Examines the ios::failbit, ios::badbit, and ios::hardfail bits in ios::state and returns nonzero if an operation has failed.
int bad (void);
    Examines the ios::badbit and ios::hardfail bits in ios::state and returns nonzero if an IO error has occurred.
int precision (void);
int precision (int);
    The first version returns the current floating-point precision. The second version sets the floating-point precision and returns the previous floating-point precision.
int width (void);
int width (int);
    The first version returns the current field width. The second version sets the field width and returns the previous setting.
char fill (void);
char fill (char);
    The first version returns the current fill character. The second version changes the fill character and returns the previous fill character.
long setf (long);
long setf (long, long);
    The first version sets the formatting flags denoted by the parameter. The second version also clears the flags denoted by its second argument. Both return the previous setting.
long unsetf (long);
    Clears the formatting flags denoted by its parameter, and returns the previous setting.
long flags (void);
long flags (long);
    The first version returns the format flags (this is a sequence of formatting bits). The second version sets the formatting flags to a given value (flags(0) restores default formats), and returns the previous setting.
ostream* tie (void);
ostream* tie (ostream*);
    The first version returns the tied stream, if any, and zero otherwise. The second version ties the stream denoted by its parameter to this stream and returns the previously-tied stream. When two streams are tied, the use of one affects the other. For example, because cin, cerr, and clog are all tied to cout, using any of the first three causes cout to be flushed first.

208 C++ Essentials Copyright © 2005 PragSoft
Stream Manipulators

A manipulator is an identifier that can be inserted into an output stream or extracted from an input stream in order to produce a desired effect. For example, endl is a commonly-used manipulator which inserts a newline into an output stream and flushes it. Therefore,

    cout << 10 << endl;

has the same effect as:

    cout << 10 << '\n';
    cout.flush();

In general, most formatting operations are more easily expressed using manipulators than using setf. For example,

    cout << oct << 10 << endl;

is an easier way of saying:

    cout.setf(ios::oct, ios::basefield);
    cout << 10 << endl;

Some manipulators also take parameters. For example, the setw manipulator is used to set the field width of the next IO object:

    cout << setw(8) << 10;    // sets the width of 10 to 8 characters

Table 11.22 summarizes the predefined manipulators of the iostream library.

Table 11.22 Predefined manipulators.

Manipulator          Stream Type    Description
endl                 output         Inserts a newline character and flushes the stream.
ends                 output         Inserts a null-terminating character.
flush                output         Flushes the output stream.
dec                  input/output   Sets the conversion base to decimal.
hex                  input/output   Sets the conversion base to hexadecimal.
oct                  input/output   Sets the conversion base to octal.
ws                   input          Extracts blanks (white space) characters.
setbase(int)         input/output   Sets the conversion base to one of 8, 10, or 16.
resetiosflags(long)  input/output   Clears the status flags denoted by the argument.
setiosflags(long)    input/output   Sets the status flags denoted by the argument.
setfill(int)         input/output   Sets the padding character to the argument.
setprecision(int)    input/output   Sets the floating-point precision to the argument.
setw(int)            input/output   Sets the field width to the argument.
File IO with fstreams

A program which performs IO with respect to an external file should include the header file fstream.h. Because the classes defined in this file are derived from iostream classes, fstream.h also includes iostream.h.

A file can be opened for output by creating an ofstream object and specifying the file name and mode as arguments to the constructor. For example,

    ofstream log("log.dat", ios::out);

opens a file named log.dat for output (see Table 11.20 for a list of the open mode values) and connects it to the ofstream log. Because ofstream is derived from ostream, all the public member functions of the latter can also be invoked for ofstream objects. First, we should check that the file is opened as expected:

    if (!log)
        cerr << "can't open 'log.dat'\n";
    else {
        char *str = "A piece of text";
        log.write(str, strlen(str));
        log << endl;
    }

The external file connected to an ostream can be closed and disconnected by calling close:

    log.close();

It is also possible to create an ofstream object first and then connect the file later by calling open:

    ofstream log;
    log.open("log.dat", ios::out);

A file can be opened for input by creating an ifstream object. For example,

    ifstream inf("names.dat", ios::in);

opens the file names.dat for input and connects it to the ifstream inf. Because ifstream is derived from istream, all the public member functions of the latter can also be invoked for ifstream objects.

The fstream class is derived from iostream and can be used for opening a file for input as well as output. For example:

    fstream iof;
    iof.open("names.dat", ios::out);
    iof << "Adam\n";    // output
    iof.close();
    iof.open("names.dat", ios::in);
    char name[64];
    iof >> name;        // input
    iof.close();

Table 11.23 summarizes the member functions of ofstream, ifstream, and fstream (in addition to those inherited from their base classes).

Table 11.23 Member functions of ofstream, ifstream, and fstream.

ofstream (void);
ofstream (int fd);
ofstream (int fd, char* buf, int size);
ofstream (const char*, int = ios::out, int = filebuf::openprot);
    The first version makes an ofstream which is not attached to a file. The second version makes an ofstream and connects it to an open file descriptor. The third version does the same but also uses a user-specified buffer of a given size. The last version makes an ofstream and opens and connects a specified file to it for writing.
ifstream (void);
ifstream (int fd);
ifstream (int fd, char* buf, int size);
ifstream (const char*, int = ios::in, int = filebuf::openprot);
    Similar to ofstream constructors.
fstream (void);
fstream (int fd);
fstream (int fd, char* buf, int size);
fstream (const char*, int, int = filebuf::openprot);
    Similar to ofstream constructors.
void open (const char*, int, int = filebuf::openprot);
    Opens a file for an ofstream, ifstream, or fstream.
void close (void);
    Closes the associated filebuf and file.
void setbuf (char*, int size);
    Assigns a user-specified buffer to the filebuf.
void attach (int);
    Connects to an open file descriptor.
filebuf* rdbuf (void);
    Returns the associated filebuf.

www.pragsoft.com Chapter 11: The IO Library 211
Array IO with strstreams

The classes defined in strstream.h support IO operations with respect to arrays of characters. Insertion and extraction on such streams causes the data to be moved into and out of its character array. Because these classes are derived from iostream classes, this file also includes iostream.h. The three highest-level array IO classes (ostrstream, istrstream, strstream) are very similar to the file IO counterparts (ofstream, ifstream, fstream). As before, they are derived from iostream classes and therefore inherit their member functions.

An ostrstream object is used for output. It can be created with either a dynamically-allocated internal buffer, or a user-specified buffer:

    ostrstream odyn;                  // dynamic buffer
    char buffer[1024];
    ostrstream ssta(buffer, 1024);    // user-specified buffer

The static version (ssta) is more appropriate for situations where the user is certain of an upper bound on the stream buffer size. In the dynamic version, the object is responsible for resizing the buffer as needed.

After all the insertions into an ostrstream have been completed, the user can obtain a pointer to the stream buffer by calling str:

    char *buf = odyn.str();

This freezes odyn (disabling all future insertions). If str is not called before odyn goes out of scope, the class destructor will destroy the buffer. However, when str is called, this responsibility rests with the user; the user should make sure that when buf is no longer needed it is deleted:

    delete buf;

An istrstream object is used for input. Its definition requires a character array to be provided as a source of input:

    char data[128];
    //...
    istrstream istr(data, 128);

Alternatively, the user may choose not to specify the size of the character array:

    istrstream istr(data);

The advantage of the former is that extraction operations will not attempt to go beyond the end of the data array. Table 11.24 summarizes the member functions of ostrstream, istrstream, and strstream (in addition to those inherited from their base classes).

212 C++ Essentials Copyright © 2005 PragSoft
Table 11.24 Member functions of ostrstream, istrstream, and strstream.

ostrstream (void);
ostrstream (char *buf, int size, int mode = ios::out);
    The first version creates an ostrstream with a dynamically-allocated buffer. The second version creates an ostrstream with a user-specified buffer of a given size.
istrstream (const char *);
istrstream (const char *, int n);
    The first version creates an istrstream using a given string. The second version creates an istrstream using the first n bytes of a given string.
strstream (void);
strstream (char *buf, int size, int mode);
    Similar to ostrstream constructors.
int pcount (void);
    Returns the number of bytes currently stored in the buffer of an output stream.
char* str (void);
    Freezes and returns the output stream buffer which, if dynamically allocated, should eventually be deallocated by the user.
strstreambuf* rdbuf (void);
    Returns a pointer to the associated buffer.

www.pragsoft.com Chapter 11: The IO Library 213
Example: Program Annotation

Suppose we are using a language compiler which generates error messages of the form:

    Error 21, invalid expression

where 21 is the number of the line in the program file where the error has occurred. We would like to write a tool which takes the output of the compiler and uses it to annotate the lines in the program file which are reported to contain errors, so that, for example, instead of the above we would have something like:

    0021 x = x * y +;
         Error: invalid expression

Listing 11.43 provides a function which performs the proposed annotation.

Annotation

6      Annotate takes two arguments: inProg denotes the program file name and inData denotes the name of the file which contains the messages generated by the compiler.
8-9    InProg and inData are, respectively, connected to istreams prog and data.
12     Line is defined to be an istrstream which extracts from dLine.
22-26  Each time round this loop, a line of text is extracted from data into dLine, and then processed. We are only interested in lines which start with the word Error. When a match is found, we reset the get pointer of line back to the beginning of the stream, extract the line number into lineNo, and then ignore the remaining characters up to the comma following the line number (i.e., where the actual error message starts).
27-29  This loop skips prog lines until the line denoted by the error message is reached.
30-33  These insertions display the prog line containing the error and its annotation. Note that as a result of the re-arrangements, the line number is effectively removed from the error message and displayed next to the program line.
36-37  The ifstreams are closed before the function returning.

214 C++ Essentials Copyright © 2005 PragSoft
Listing 11.43

 1   #include <fstream.h>
 2   #include <iomanip.h>
 3   #include <string.h>
 4   #include <strstream.h>
 5   const int lineSize = 128;
 6   int Annotate (const char *inProg, const char *inData)
 7   {
 8       ifstream prog(inProg, ios::in);
 9       ifstream data(inData, ios::in);
10       char pLine[lineSize];               // for prog lines
11       char dLine[lineSize];               // for data lines
12       istrstream line(dLine, lineSize);
13       char *prefix = "Error";
14       int prefixLen = strlen(prefix);
15       int progLine = 0;
16       int lineNo;
17       if (!prog || !data) {
18           cerr << "Can't open input files\n";
19           return -1;
20       }
21
22       while (data.getline(dLine, lineSize, '\n')) {
23           if (strncmp(dLine, prefix, prefixLen) == 0) {
24               line.seekg(0);              // rewind line for this message
25               line.ignore(lineSize, ' ');
26               line >> lineNo;
27               line.ignore(lineSize, ',');
28               while (progLine < lineNo && prog.getline(pLine, lineSize))
29                   ++progLine;
30               cout << setw(4) << setfill('0')
31                    << progLine << " " << pLine << endl;
32               cout << "     " << prefix << ":"
33                    << dLine + line.tellg() << endl;
34           }
35       }
36       prog.close();
37       data.close();
38       return 0;
39   }

The following main function provides a simple test for Annotate:

    int main (void)
    {
        return Annotate("prog.dat", "data.dat");
    }

www.pragsoft.com Chapter 11: The IO Library 215
The contents of these two files are as follows:

prog.dat:

    #defone size 100
    main (void)
    {
        integer n = 0;
        while (n < 10]
            ++n;
        return 0;
    }

data.dat:

    Error 1, Unknown directive: defone
    Note 3, Return type of main assumed int
    Error 5, unknown type: integer
    Error 7, ) expected

When run, the program will produce the following output:

    0001 #defone size 100
         Error: Unknown directive: defone
    0005 integer n = 0;
         Error: unknown type: integer
    0007 while (n < 10]
         Error: ) expected
Exercises

11.56  Use the istream member functions to define an overloaded version of the >> operator for the Set class (see Chapter 7) so that it can input sets expressed in the conventional mathematical notation (e.g., {2, 5, 1}).

11.57  Write a program which copies its standard input, line by line, to its standard output.

11.58  Write a program which copies a user-specified file to another user-specified file. Your program should be able to copy text as well as binary files.

11.59  Write a program which reads a C++ source file and checks that all instances of brackets are balanced, that is, each '(' has a matching ')', and similarly for [] and {}, except for when they appear inside comments or strings. A line which contains an unbalanced bracket should be reported by a message such as the following sent to standard output:

           '{' on line 15 has no matching '}'

www.pragsoft.com Chapter 11: The IO Library 217
12. The Preprocessor

Prior to compiling a program source file, the C++ compiler passes the file through a preprocessor. The role of the preprocessor is to transform the source file into an equivalent file by performing the preprocessing instructions contained by it. These instructions facilitate a number of features, such as: file inclusion, conditional compilation, and macro substitution.

Figure 12.25 illustrates the effect of the preprocessor on a simple file. It shows the preprocessor performing the following:

• Removing program comments by substituting a single white space for each comment.
• Performing the file inclusion (#include) and conditional compilation (#ifdef, etc.) commands as it encounters them.
• 'Learning' the macros introduced by #define. It compares these names against the identifiers in the program, and does a substitution when it finds a match.

The preprocessor performs very minimal error checking of the preprocessing instructions. Because it operates at a text level, it is unable to check for any sort of language-level syntax errors. This function is performed by the compiler.

Figure 12.25 The role of the preprocessor.

prog.h:

    int num = -13;

prog.cpp:

    #include "prog.h"
    #define two 2
    #define Abs(x) ((x) > 0 ? (x) : -(x))
    int main (void) // this is a comment
    {
        int n = two * Abs(num);
    }

After preprocessing:

    int num = -13;
    int main (void)
    {
        int n = 2 * ((num) > 0 ? (num) : -(num));
    }

218 C++ Essentials Copyright © 2005 PragSoft
Preprocessor Directives

Programmer instructions to the preprocessor (called directives) take the general form:

    # directive tokens

The # symbol should be the first non-blank character on the line (i.e., only spaces and tabs may appear before it). Blank symbols may also appear between the # and directive. Most directives are followed by one or more tokens. A token is anything other than a blank.

A directive usually occupies a single line. A line whose last non-blank character is \ is assumed to continue on the line following it, thus making it possible to define multiple line directives. For example, the following multiple line and single line directives have exactly the same effect:

    #define CheckError \
        if (error) exit(1)

    #define CheckError if (error) exit(1)

A directive line may also contain comments; these are simply ignored by the preprocessor. A # appearing on a line on its own is simply ignored. The following are therefore all valid and have exactly the same effect:

    #define size 100
    #define size     100
    #   define size 100

Table 12.25 summarizes the preprocessor directives, which are explained in detail in subsequent sections.

Table 12.25 Preprocessor directives.

Directive   Explanation
#define     Defines a macro
#undef      Undefines a macro
#include    Textually includes the contents of a file
#ifdef      Makes compilation of code conditional on a macro being defined
#ifndef     Makes compilation of code conditional on a macro not being defined
#endif      Marks the end of a conditional compilation block
#if         Makes compilation of code conditional on an expression
#else       Specifies an else part for a #if, #ifdef, or #ifndef directive
#elif       Combination of #else and #if
#line       Change current line number and file name
#error      Outputs an error message
#pragma     Is implementation-specific

www.pragsoft.com Chapter 12: The Preprocessor 219
Macro Definition

Macros are defined using the #define directive, which takes two forms: plain and parameterized. A plain macro has the general form:

    #define identifier tokens

It instructs the preprocessor to substitute tokens for every occurrence of identifier in the rest of the file (except for inside strings). The substitution tokens can be anything, even empty (which has the effect of removing identifier from the rest of the file). For example:

    #define size  512
    #define word  long
    #define bytes sizeof(word)

Plain macros are used for defining symbolic constants. Use of macros for defining symbolic constants has its origins in C, which had no language facility for defining constants. In C++, macros are less often used for this purpose, because consts can be used instead, with the added benefit of proper type checking.

Because macro substitution is also applied to directive lines, an identifier defined by one macro can be used in a subsequent macro (e.g., use of word in bytes above). Given the above definitions, the code fragment

    word n = size * bytes;

is macro-expanded to:

    long n = 512 * sizeof(long);

A parameterized macro has the general form

    #define identifier(parameters) tokens

where parameters is a list of one or more comma-separated identifiers. There should be no blanks between the identifier and (. Otherwise, the whole thing is interpreted as a plain macro whose substitution tokens part starts from (. For example,

    #define Max(x,y) ((x) > (y) ? (x) : (y))

defines a parameterized macro for working out the maximum of two quantities. A parameterized macro is matched against a call to it, which is syntactically very similar to a function call. A call must provide a matching number of arguments.

220 C++ Essentials Copyright © 2005 PragSoft
pragsoft.2.com Chapter 12: The Preprocessor 221 . before a macro is redefined. ¨ www. is macro-expanded to: n = (n . C++ templates provide the same kind of flexibility as macros for defining generic functions and classes. k +6). Two facilities of C++ make the use of parameterized macros less attractive than in C. Overlooking the fundamental difference between macros and functions can lead to subtle programming errors. Second. every occurrence of a parameter in the substituted tokens is substituted by the corresponding argument. j) is expanded to ((++i) > (j) ? (++i) : (j)) which means that i may end up being incremented twice.2) : (k + 6). For example. the call n = Max (n . Macros can also be redefined. Note that the ( in a macro call may be separated from the macro identifier by blanks. C++ inline functions provide the same level of code efficiency as macros. without the semantics pitfalls of the latter. For example: #undef size #define size #undef Max 128 Use of #undef on an undefined identifier is harmless and has no effect. However. This protects the macro against undesirable operator precedence effects after macro expansion. the tokens part of the macro is substituted for the call. it should be undefined using the #undef directive.2) > (k + 6) ? (n . First. For example. This is called macro expansion. the macro call Max(++i. the semantics of macro expansion is not necessarily equivalent to function call. Because macros work at a textual level. It is generally a good idea to place additional brackets around each occurrence of a parameter in the substitution tokens (as we have done for Max). with the added benefit of proper syntax analysis and type checking. Where as a function version of Max would ensure that i is only incremented once.As before. Additionally.
Quote and Concatenation Operators

The preprocessor provides two special operators for manipulating macro parameters. The quote operator (#) is unary and takes a macro parameter operand. It transforms its operand into a string by putting double-quotes around it. For example, given the definition

    #define CheckPtr(ptr) \
        if ((ptr) == 0) cout << #ptr << " is zero!\n"

the call

    CheckPtr(tree->left);

is expanded as:

    if ((tree->left) == 0) cout << "tree->left" << " is zero!\n";

Note that defining the macro as

    #define CheckPtr(ptr) \
        if ((ptr) == 0) cout << "ptr is zero!\n"

would not produce the desired effect, because macro substitution is not performed inside strings.

The concatenation operator (##) is binary and is used for concatenating two tokens. For example, given the definition

    #define internal(var) internal##var

the call

    long internal(str);

expands to:

    long internalstr;

This operator is rarely used for ordinary programs. It is very useful for writing translators and code generators, as it makes it easy to build an identifier out of fragments.
File Inclusion

A file can be textually included in another file using the #include directive. For example, placing

    #include "constants.h"

inside a file f causes the contents of constants.h to be included in f in exactly the position where the directive appears.

The included file is usually expected to reside in the same directory as the program file. Otherwise, a full or relative path to it should be specified. For example:

    #include "../file.h"          // include from parent dir (UNIX)
    #include "/usr/local/file.h"  // full path (UNIX)
    #include "..\file.h"          // include from parent dir (DOS)
    #include "\usr\local\file.h"  // full path (DOS)

When including system header files for standard libraries, the file name should be enclosed in <> instead of double-quotes. For example:

    #include <iostream.h>

When the preprocessor encounters this, it looks for the file in one or more prespecified locations on the system (e.g., the directory /usr/include/cpp on a UNIX system). On most systems the exact locations to be searched can be specified by the user, either as an argument to the compilation command or as a system environment variable.

Although the preprocessor does not care about the ending of an included file (i.e., whether it is .h or .cc, .cpp, etc.), it is customary to only include header files in other files.

File inclusions can be nested. For example, if a file f includes another file g which in turn includes another file h, then effectively f also includes h. Multiple inclusion of files may or may not lead to compilation problems. For example, if a header file contains only macros and declarations then the compiler will not object to their reappearance. But if it contains a variable definition, for example, the compiler will flag it as an error. The next section describes a way of avoiding multiple inclusions of the same file.
Conditional Compilation

The conditional compilation directives allow sections of code to be selectively included for or excluded from compilation, depending on programmer-specified conditions being satisfied. This is usually used as a portability tool for tailoring the program code to specific hardware and software architectures. Table 12.26 summarizes the general forms of these directives (code denotes zero or more lines of program text, and expression denotes a constant expression).

Table 12.26 General forms of conditional compilation directives.

    Form                    Explanation
    #ifdef identifier       If identifier is a #defined symbol then code is
    code                    included in the compilation process. Otherwise,
    #endif                  it is excluded.

    #ifndef identifier      If identifier is not a #defined symbol then code
    code                    is included in the compilation process.
    #endif                  Otherwise, it is excluded.

    #ifdef identifier       If identifier is a #defined symbol then code1 is
    code1                   included in the compilation process and code2 is
    #else                   excluded. Otherwise, code2 is included and code1
    code2                   is excluded.
    #endif

    #if expression          If expression evaluates to nonzero then code is
    code                    included in the compilation process. Otherwise,
    #endif                  it is excluded.

    #if expression1         If expression1 evaluates to nonzero then only
    code1                   code1 is included in the compilation process.
    #elif expression2       Otherwise, if expression2 evaluates to nonzero
    code2                   then only code2 is included. Otherwise, code3 is
    #else                   included.
    code3
    #endif

As before, the #else part is optional, and #else can also be used with #ifndef and #if. Also, any number of #elif directives may appear after a #if directive. For example:

    #ifdef ...
        ...();
    #else
        CheckRegistration();
    #endif

    // Ensure Unit is at least 4 bytes wide:
    #if sizeof(int) >= 4
        typedef int Unit;
    #elif sizeof(long) >= 4
        typedef long Unit;
    #else
        typedef char Unit[4];
    #endif
One of the common uses of #if is for temporarily omitting code. This is often done during testing and debugging when the programmer is experimenting with suspected areas of code. Code is omitted by giving #if an expression which always evaluates to zero:

    #if 0
    ...code to be omitted
    #endif

Although code may also be omitted by commenting it out (i.e., placing /* and */ around it), this approach does not work if the code already contains /*...*/ style comments, because such comments cannot be nested.

The preprocessor provides an operator called defined for use in expression arguments of #if and #elif. For example,

    #if defined BETA

has the same effect as:

    #ifdef BETA

However, use of defined makes it possible to write compound logical expressions. For example:

    #if defined ALPHA || defined BETA

Conditional compilation directives can be used to avoid the multiple inclusion of files. For example, given an include file called file.h, we can avoid multiple inclusions of file.h in any other file by adding the following to file.h:

    #ifndef _file_h_
    #define _file_h_
    contents of file.h goes here
    #endif

When the preprocessor reads the first inclusion of file.h, the symbol _file_h_ is undefined, hence the contents is included, causing the symbol to be defined. Subsequent inclusions have no effect because the #ifndef directive causes the contents to be excluded.
Other Directives

The preprocessor provides three other, less-frequently-used directives.

The #error directive is used for reporting errors by the preprocessor. It has the general form

    #error error

where error may be any sequence of tokens. When the preprocessor encounters this, it outputs error and causes compilation to be aborted. It should therefore be only used for reporting errors which make further compilation pointless or impossible. For example:

    #ifndef UNIX
    #error This software requires the UNIX OS.
    #endif

The #line directive is used to change the current line number and file name. The directive is useful for translators which generate C++ code: it allows the line numbers and file name to be made consistent with the original input file, instead of any intermediate C++ file. It has the general form:

    #line number file

where file is optional. For example,

    #line 20 "file.h"

makes the compiler believe that the current line number is 20 and the current file name is file.h. The change remains effective until another #line directive is encountered.

The #pragma directive is implementation-dependent. It is used by compiler vendors to introduce nonstandard preprocessor features, specific to their own implementation. Examples from the SUN C++ compiler include:

    // align name and val starting addresses to multiples of 8 bytes:
    #pragma align 8 (name, val)
    char name[9];
    double val;

    // call MyFunction at the beginning of program execution:
    #pragma init (MyFunction)

226 C++ Essentials Copyright © 2005 PragSoft
\n" defines an assert macro for testing program invariants.27 Standard predefined identifiers.. "12:30:55") __FILE__ __LINE__ __DATE__ __TIME__ The predefined identifiers can be used in programs just like program constants. For example. Most implementations augment this list with many nonstandard predefined identifiers.com Chapter 12: The Preprocessor 227 . Assuming that the sample call Assert(ptr != 0). "25 Dec 1995") Current time as a string (e.g.. the following message is displayed: prog. Table 12.Predefined Identifiers The preprocessor provides a small set of predefined identifiers which denote useful information.cpp: assertion on line 50 failed.cpp on line 50.g. ¨. appear in file prog. when the stated condition fails. Identifier Denotes Name of the file being processed Current line number of the file being processed Current date as a string (e. #define Assert(p) \ if (!(p)) cout << __FILE__ << ": assertion on line " \ << __LINE__ << " failed. The standard ones are summarized by Table 12.
Exercises

12.60 Define plain macros for the following:
   • An infinite loop structure called forever.
   • Pascal style begin and end keywords.
   • Pascal style if-then-else statements.
   • Pascal style repeat-until loop.

12.61 Define parameterized macros for the following:
   • Swapping two values.
   • Finding the absolute value of a number.
   • Finding the center of a rectangle whose top-left and bottom-right
     coordinates are given (requires two macros).
   Redefine the above as inline functions or function templates as appropriate.

12.62 Write directives for the following:
   • Defining Small as an unsigned char when the symbol PC is defined, and as
     unsigned short otherwise.
   • Including the file basics.h in another file when the symbol CPP is not
     defined.
   • Including the file debug.h in another file when release is 0, or beta.h
     when release is 1, or final.h when release is greater than 1.

12.63 Write a macro named When which returns the current date and time as a
   string (e.g., "25 Dec 1995, 12:30:55"). Similarly, write a macro named Where
   which returns the current location in a file as a string (e.g.,
   "file.h: line 25").
1.1
    #include <iostream.h>

    int main (void)
    {
        double fahrenheit;
        double celsius;

        cout << "Temperature in Fahrenheit: ";
        cin >> fahrenheit;
        celsius = 5 * (fahrenheit - 32) / 9;
        cout << fahrenheit << " degrees Fahrenheit = "
             << celsius << " degrees Celsius\n";
        return 0;
    }

1.2
    int n = -100;                  // valid
    unsigned int i = -100;         // valid (but dangerous!)
    signed int = 2;                // invalid: no variable name
    long m = 2;                    // valid
    char *name = "Peter Pan";      // valid
    unsigned char *num = "276811"; // valid
    int 2k;                        // invalid: 2k not an identifier
    double x = 2 * m;              // valid
    float y = y * 2;               // valid
    unsigned double z = 0.9;       // invalid: can't be unsigned
    double d = 0.67F;              // valid
    float f = 0.52L;               // valid
    signed char = -1786;           // invalid: no variable name
    int p = 4;                     // valid
    char c = '$' + 2;              // valid
    sign char h = '\111';          // invalid: 'sign' not recognized

1.3
    identifier                       // valid
    seven_11                         // valid
    _unique_                         // valid
    gross-income                     // invalid: - not allowed in id
    gross$income                     // invalid: $ not allowed in id
    2by2                             // invalid: can't start with digit
    default                          // invalid: default is a keyword
    average_weight_of_a_large_pizza  // valid
    variable                         // valid
    object.oriented                  // invalid: . not allowed in id
1.4
    int    age;            // age of a person
    double employeeIncome; // employee income
    long   wordsInDictn;   // number of words in dictionary
    char   letter;         // letter of alphabet
    char   *greeting;      // greeting message

2.1
    ((n <= (p + q)) && (n >= (p - q))) || (n == 0)
    (((++n) * (q--)) / ((++p) - q))
    (n | ((p & q) ^ (p << (2 + q))))
    ((p < q) ? ((n < p) ? ((q * n) - 2) : ((q / n) + 1)) : (q - n))

2.2
    double d = 2 * int(3.14);   // initializes d to 6
    long   k = 3.14 - 3;        // initializes k to 0
    char   c = 'a' + 2;         // initializes c to 'c'
    char   c = 'p' + 'A' - 'a'; // initializes c to 'P'

2.4
    #include <iostream.h>

    int main (void)
    {
        long n;

        cout << "What is the value of n? ";
        cin >> n;
        cout << "2 to the power of " << n << " = " << (1L << n) << '\n';
        return 0;
    }

2.5
    #include <iostream.h>

    int main (void)
    {
        double n1, n2, n3;
        cout << "Input three numbers: ";
        cin >> n1 >> n2 >> n3;
        cout << (n1 <= n2 && n2 <= n3 ? "Sorted" : "Not sorted") << '\n';
        return 0;
    }

3.1
    #include <iostream.h>

    int main (void)
    {
        double height, weight;

        cout << "Person's height (in centimeters): ";
        cin >> height;
        cout << "Person's weight (in kilograms): ";
        cin >> weight;
        if (weight < height/2.5)
            cout << "Underweight\n";
        else if (height/2.5 <= weight && weight <= height/2.3)
            cout << "Normal\n";
        else
            cout << "Overweight\n";
        return 0;
    }

3.2 It will output the message "n is negative". This is because the else
    clause is associated with the if clause immediately preceding it; the
    fragment is understood by the compiler as:

        if (n >= 0)
            if (n < 10)
                cout << "n is small\n";
            else
                cout << "n is negative\n";

    The indentation in the code fragment is therefore misleading. The problem
    is fixed by placing the second if within a compound statement:

        if (n >= 0) {
            if (n < 10)
                cout << "n is small\n";
        } else
            cout << "n is negative\n";
month.4 #include <iostream. int factorial = 1. switch (month) { case 1: cout << "January". case 4: cout << "April".h> int main (void) { int n. break. } 3. } 3. break. break.3. case 9: cout << "September". case 2: cout << "February".h> int main (void) { www. case 10: cout << "October". break. break. break. } cout << ' ' << day << ".h> int main (void) { int day. case 8: cout << "August". if (n >= 0) { for (register int i = 1.com Solutions to Exercises 233 . ++i) factorial *= i. return 0. break. case 6: cout << "June".break. case 3: cout << "March". year. cin >> day >> ch >> month >> ch >> year. case 12: cout << "December". cout << "Input a positive integer: ". break. case 11: cout << "November".5 #include <iostream.3 #include <iostream. char ch. cout << "Input a date as dd/mm/yy: ". cout << "Factorial of " << n << " = " << factorial << '\n'. break. cin >> n. break. i <= n. " << 1900 + year << '\n'. case 5: cout << "May". break. case 7: cout << "July". } return 0.pragsoft.
int octal. digit. cout << fahrenheit << " degrees Fahrenheit = " << FahrenToCelsius(fahrenheit) << " degrees Celsius\n". } cout << "Octal(" << octal << ") = Decimal(" << decimal << ")\n". double weight) { if (weight < height/2. power *= 8. j <= 9. for (int n = octal. i <= 9. n /= 10) { // process each digit digit = n % 10.6 #include <iostream. cin >> octal. n > 0.h> char* CheckWeight (double height.h> double FahrenToCelsius (double fahren) { return 5 * (fahren .5) 234 C++ Essentials Copyright © 2005 PragSoft . } 3. ++j) cout << i << " x " << j << " = " << i*j << '\n'. cin >> fahrenheit. ++i) for (register j = 1. } 4. // right-most digit decimal = decimal + power * digit. } int main (void) { double fahrenheit.h> int main (void) { for (register i = 1. int decimal = 0. return 0. cout << "Temperature in Fahrenhait: ". return 0.32) / 9.1b #include <iostream. } 4. int power = 1.1a #include <iostream. return 0. cout << "Input an octal number: ".
Nov.3 www. void Primes (unsigned int n) { Bool isPrime.pragsoft. cin >> height.com Solutions to Exercises 235 . for (register num = 2. Apr. Dec 4. Sep. } if (isPrime) cout << num << '\n'. it swaps a copy of x and y and not the originals.5 enum Month { Jan. Consequently.return "Underweight". Aug. for (register i = 2.2 The value of x and y will be unchanged because Swap uses value parameters.4 enum Bool {false. } 4. Mar. cout << "Person's weight (in kilograms: ". cout << CheckWeight(height. return 0. true}. Jun. } int main (void) { double height. Oct.3) return "Normal".5 <= weight && weight <= height/2. weight. return "Overweight". weight) << '\n'. num <= n. May. cin >> weight. ++num) { isPrime = true. if (height/2. The program will output: Parameter Local Global Parameter 4. Feb. } } 4. i < num/2. ++i) if (num%i == 0) { isPrime = false. Jul. break. cout << "Person's height (in centimeters): ".
case Nov: return "November".6 inline int IsAlpha (char ch) { return ch >= 'a' && ch <= 'z' || ch >= 'A' && ch <= 'Z'. case Mar: return "March". char* MonthStr (Month month) { switch (month) { case Jan: return "January". const int size) 236 C++ Essentials Copyright © 2005 PragSoft . // initialize args 4. default: return "". case Feb: return "february". va_start(args. // argument list double sum = 0. val).7 4. case Dec: return "December"...}.) { va_list args.1 void ReadArray (double nums[].8 while (n-. } double Sum (int n. case Jul: return "July". case Oct: return "October". case Jun: return "June".> 0) { sum += val. } 5. case May: return "May". val = va_arg(args. case Sep: return "September". double val . case Apr: return "April". double). } } 4. unsigned int exponent) { return (exponent <= 0) ? 1 : base * Power(base. } va_end(args).1). // clean up args return sum. } int Power (int base. exponent . case Aug: return "August".
1] = temp. 25. ++i) { temp = nums[i].]. nums[size . { 22. 2. names[i] = new char[strlen(name) + 1]. const int size) { double temp.{ for (register i = 0.3 }. j < cols. } } 5. void WriteContents (const double *contents. ++i) { cout << "names[" << i << "] = ".i . const int cols) { for (register i = 0.5 }. cin >> nums[i].com Solutions to Exercises 237 . 16. ++j) cout << *(contents + i * rows + j) << ' '. for (register i = 0. const int rows.2 } }. nums[i] = nums[size . } 5. ++i) { for (register j = 0. 5. 9. for (register i = 0. 0.4 }. } } void WriteArray (double nums[]. i < size. true}. 7. cin >> name. } } 5. i < rows. i < size/2. const int size) { for (register i = 0. i < size. 4. i < size. ++i) { cout << "nums[" << i << "] = ".4 enum Bool {false.2 void Reverse (double nums[]. cout << '\n'. 0. 8. 0. ++i) cout << nums[i] << '\n'.3 double contents[][4] = { { 12. 0. void ReadNames (char *names[].pragsoft. { 32. { 28.i . const int size) { char name[128].
strcpy(names[i].1. names[i+1]) > 0 ) { temp = names[i]. } } } while (swapped). char *result = new char[len + 1]. names[i+1]) > 0 ) { temp = names[i].1.= '\0'. i < size . ++i) { if (strcmp(names[i]. } void BubbleSort (char *names[]. ++i) cout << names[i] << '\n'. ++i) { if (comp(names[i]. while (*str) *res-. names[i+1] = temp. do { swapped = false. const int size) { for (register i = 0. for (register i = 0. Compare comp) { Bool swapped. } 5.5 char* ReverseString (char *str) { int len = strlen(str). const char*). swapped = true. *res-. char *temp. const int size) { Bool swapped. 238 C++ Essentials Copyright © 2005 PragSoft . do { swapped = false.= *str++. name). names[i] = names[i+1]. return result. } 5. const int size. i < size. i < size . } } void WriteNames (char *names[]. void BubbleSort (char *names[]. char *res = result + len.6 typedef int (*Compare)(const char*. for (register i = 0. char *temp.
real + real * c. imag .imag). SwapFun Swap. typedef unsigned long *Values[10][20]. // real part double imag.imag). } Complex Complex::Multiply (Complex &c) { return Complex( real * c. swapped = true.} Complex Add (Complex &c).imag). imag + c.2 www. double).real.imag. double i = 0) {real = r. } void Complex::Print (void) { 6.real. Complex Complex::Add (Complex &c) { return Complex(real + c. names[i+1] = temp. Declaring Set parameters as references avoids their being copied in a call.pragsoft. Complex Multiply(Complex &c).com Solutions to Exercises 239 . Name name.c. Complex Subtract(Complex &c). Table table. } 5. imag = i.1 6.names[i] = names[i+1]. private: double real.imag * c. } Complex Complex::Subtract (Complex &c) { return Complex(real . Values values. // imaginary part }. Call-byreference is generally more efficient than call-by-value when the objects involved are larger than the built-in type objects. } } } while (swapped). imag * c. typedef char *&Name. void Print (void). typedef char *Table[].real . class Complex { public: Complex (double r = 0.c.7 typedef void (*SwapFun)(double.
(const int pos = end). Option }. // option name Option *next. handy != 0. delete handy. const int pos = end).} Option*& Next (void) {return next. // denotes the end of the list void void int private: class Option { public: Option (const char*).} (void). Menu::Option::Option (const char* str) { name = new char [strlen(str) + 1].} private: char *name. *next. for (handy = first. *first. } } void Menu::Insert (const char *str.} const char* Name (void) {return name. str). const int pos) { Menu::Option *option = new Menu::Option(str). // next option }. // first option in the menu 240 C++ Essentials Copyright © 2005 PragSoft . (const char *str.h> #include <string. next = 0.3 #include <iostream. strcpy(name. } 6. handy = next) { next = handy->Next(). ~Option (void) {delete name. } Menu::~Menu (void) { Menu::Option *handy. (void). class Menu { public: Menu ~Menu Insert Delete Choose (void) {first = 0.h> const int end = -1.cout << real << " + i" << imag << '\n'.
else // it's not the first prev->Next() = handy->Next(). handy != 0 && handy->Next() != 0 && idx++ != pos. if (prev == 0) { // empty list option->Next() = first. " << handy->Name() << '\n'. if (handy != 0) { if (prev == 0) // it's the first entry first = handy->Next(). handy != 0. } while (choice <= 0 || choice > n). int idx = 0.Menu::Option *handy. for (handy = first. handy = handy->Next()) prev = handy. return choice. } else { // insert option->Next() = handy. handy = handy>Next()) prev = handy. // first entry first = option. Menu::Option *handy = first. *prev = 0. delete handy. prev->Next() = option. } www. choice. handy != 0 && idx++ != pos. int idx = 0. // set prev to point to before the insertion position: for (handy = first.com Solutions to Exercises 241 . cin >> choice. handy = handy->Next()) cout << ++n << ". } } int Menu::Choose (void) { int n. *prev = 0. cout << "Option? ". } } void Menu::Delete (const int pos) { Menu::Option *handy. // set prev to point to before the deletion position: for (handy = first.pragsoft. do { n = 0.
Set&). (void). } } int Set::Card (void) { Set::Element *handy. Element *first.} Value (void) {return value.} value. (Set&. handy != 0. for (handy = first. // element value // next element // first element in the list 242 C++ Essentials Copyright © 2005 PragSoft .h> const int enum Bool maxCard = 10. for (handy = first. }. class Set { public: Set ~Set Card Member AddElem RmvElem Copy Equal Intersect Union Print (void) { first = 0. (const int) const. handy = next) { next = handy->Next().6.} Next (void) {return next. handy != 0. handy = handy->Next()) Element (const int val) {value = val. } (void). true}. delete handy. (const int). int Bool void void void Bool void void void private: class Element { public: int Element*& private: int Element }. int card = 0. (Set&). *next. Set&). (void). (Set&). (Set&. {false. (const int). Set::~Set (void) { Set::Element *handy. next = 0. *next.4 #include <iostream.
for (handy = first. } Bool Set::Equal (Set &set) www. handy != 0 && handy->Next() != 0 && handy->Value() != elem. return false. delete handy. for (handy = first. } void Set::AddElem (const int elem) { if (!Member(elem)) { Set::Element *option = new Set::Element(elem). handy = handy->Next()) if (handy->Value() == elem) return true. else // it's not the first prev->Next() = handy->Next(). return card. handy != 0. } Bool Set::Member (const int elem) const { Set::Element *handy.AddElem(handy->Value()). if (handy != 0) { if (prev == 0) // it's the first entry first = handy->Next().++card. } } void Set::RmvElem (const int elem) { Set::Element *handy. handy = handy->Next()) prev = handy. handy != 0.com Solutions to Exercises 243 . option->Next() = first. int idx = 0. // prepend first = option. } } void Set::Copy (Set &set) { Set::Element *handy. // set prev to point to before the deletion position: for (handy = first.pragsoft. *prev = 0. handy = handy->Next()) set.
Set &res) { Copy(res). (void) {delete entries. for (handy = first. if (Card() != set. class Sequence { public: Sequence ~Sequence (const int size). typedef char *String.Card()) return false.{ Set::Element *handy. handy = handy->Next()) { cout << handy->Value(). } 6.'. } cout << "}\n".} 244 C++ Essentials Copyright © 2005 PragSoft .AddElem(handy->Value()).Member(handy->Value())) return false. Set &res) { Set::Element *handy. class BinNode.5 #include <iostream.Member(handy->Value())) res. } void Set::Union (Set &set. set. cout << '{'. handy != 0. } void Set::Intersect (Set &set. } void Set::Print (void) { Set::Element *handy. handy = handy->Next()) if (!set. true}. return true.h> enum Bool {false. handy != 0. for (handy = first. handy != 0.Copy(res). if (handy->Next() != 0) cout << '. class BinTree. for (handy = first.h> #include <string. handy = handy->Next()) if (set.
for (register i = 0. int Size (void) {return used. } **entries.} friend BinNode* BinTree::MakeTree (Sequence &seq. ++i) { if (strcmp(str. int low. void Sequence::Insert (const char *str) { if (used >= slots) return. return false. } for (register j = used.void Insert (const char*). void Delete (const char*). Bool Find (const char*). // sorted array of string entries // number of sequence slots // number of slots used so far www. used. j < used-1. j > i. str). entries[i] = new char[strlen(str) + 1]. i < used. ++i) { if (strcmp(str. slots. void Print (void). --used.entries[i]) < 0) break. int high). break. i < used.entries[i]) == 0) { delete entries[i]. ++used. ++j) entries[j] = entries[j+1].entries[i]) == 0) return true. i < used. --j) entries[j] = entries[j-1].pragsoft. strcpy(entries[i]. ++i) if (strcmp(str.com Solutions to Exercises 245 . } void Sequence::Delete (const char *str) { for (register i = 0. } } } Bool Sequence::Find (const char *str) { for (register i = 0. protected: char const int int }. for (register j = i.
cout << '\n'.} BinTree (Sequence &seq). BinNode *&subtree).h> #include <string. }. (void) {delete (void) {return (void) {return (void) {return value. ~BinTree(void) {root->FreeSubtree(root).} protected: BinNode* root. }. (const char*.} right.} left. void Delete (const char *str) {root->DeleteNode(str. BinNode *&subtree). root). if (i < used-1) cout << '.} char*& BinNode*& BinNode*& void void void const BinNode* void (BinNode *subtree).6 #include <iostream. class BinNode { public: BinNode ~BinNode Value Left Right FreeSubtree InsertNode DeleteNode FindNode PrintNode (const char*). BinNode *left. } 6.true}. BinNode *right.} void Print (void) {root->PrintNode(root). // node value // pointer to left child // pointer to right child class BinTree { public: BinTree (void) {root = 0. (const BinNode *node). ++i) { cout << entries[i]. for (register i = 0. (BinNode *node. i < used. const BinNode *subtree). root) != 0. (const char*.} void Insert (const char *str).'. private: char *value. } cout << "]\n".} value. // root node of the tree 246 C++ Essentials Copyright © 2005 PragSoft .} Bool Find (const char *str) {return root->FindNode(str.h> enum Bool {false.void Sequence::Print (void) { cout << '['.
else if (subtree->right == 0) // no right subtree subtree = subtree->left. delete node. FreeSubtree(node->right). left = right = 0.BinNode::BinNode (const char *str) { value = new char[strlen(str) + 1]. else InsertNode(node.com Solutions to Exercises 247 . } void BinNode::FreeSubtree (BinNode *node) { if (node != 0) { FreeSubtree(node->left). else { // left and right subtree subtree = subtree->right.pragsoft. strcpy(value. else { BinNode* handy = subtree. subtree->left). subtree->right). subtree->right). subtree->left). } void BinNode::DeleteNode (const char *str. } } void BinNode::InsertNode (BinNode *node. BinNode *&subtree) { int cmp. www. subtree->right). } delete handy. if (subtree == 0) return. else if (cmp > 0) DeleteNode(str. str). if ((cmp = strcmp(str. // insert left subtree into right subtree: InsertNode(subtree->left. BinNode *&subtree) { if (subtree == 0) subtree = node. subtree->value)) < 0) DeleteNode(str. subtree->value) <= 0) InsertNode(node. if (subtree->left == 0) // no left subtree subtree = subtree->right. else if (strcmp(node->value.
1). friend BinNode* BinTree::MakeTree (Sequence &seq. } void BinNode::PrintNode (const BinNode *node) { if (node != 0) { PrintNode(node->left).7 class Sequence { //. 0. class BinTree { public: //.. } 6. int high). int low.Size() . }.} } const BinNode* BinNode::FindNode (const char *str. const BinNode *subtree) { int cmp. cout << node->value << ' '. int low. } void BinTree::Insert (const char *str) { root->InsertNode(new BinNode(str). } } BinTree::BinTree (Sequence &seq) { root = MakeTree(seq.. subtree->value)) < 0 ? FindNode(str. subtree->right) : subtree)). int high).. seq. return (subtree == 0) ? 0 : ((cmp = strcmp(str. BinNode* MakeTree (Sequence &seq. subtree->left) : (cmp > 0 ? FindNode(str. root).. BinTree::BinTree (Sequence &seq) 248 C++ Essentials Copyright © 2005 PragSoft . BinTree (Sequence &seq). PrintNode(node->right). }... //.
id = lastId++. 0. class Option. node->Right() = (mid == high ? 0 : MakeTree(seq.entries[mid]). seq.pragsoft.h> #include <string. class Menu { public: //.. class Menu { public: Menu ~Menu Insert Delete Print Choose ID (void) {first = 0.8 A static data member is used to keep track of the last allocated ID (see lastId below).} // denotes the end of the list {return id. }. const Menu *submenu. (const char *str. void int int int private: class Option { public: Option (const char*. mid-1)). node->Left() = (mid == low ? 0 : MakeTree(seq.. (void) const. (void). int Menu::lastId = 0. low.9 #include <iostream. const int (const int pos = end).} (void). } 6. (void) {return id. int id. static int lastId..{ root = MakeTree(seq.com Solutions to Exercises 249 . mid+1. const Menu* = 0). int ID (void) private: //. www. return node.1). } BinNode* BinTree::MakeTree (Sequence &seq. int low. int high) { int mid = (low + high) / 2.Size() ..} // menu ID // last allocated ID void pos = end). 6. high)). BinNode* node = new BinNode(seq.h> const int end = -1.
Option int *first. (void) {return name. delete submenu. } void Menu::Option::Print (void) { cout << name. const Menu *menu) : submenu(menu) { name = new char [strlen(str) + 1]. } Menu::Option::~Option (void) { delete name. if (submenu != 0) cout << " ->". // option name // submenu // next option // first option in the menu // menu ID // last allocated ID static int }. else return submenu->Choose().} const. id. } int Menu::lastId = 0.} {return submenu. Menu::Option::Option (const char *str. str). Menu::~Menu (void) { 250 C++ Essentials Copyright © 2005 PragSoft . strcpy(name. next = 0. cout << '\n'. lastId. *next.const char* const Menu* Option*& void int private: char const Menu Option }. *submenu. (void) (void) (void) (void). } int Menu::Option::Choose (void) const { if (submenu == 0) return 0. *name.} {return next. ~Option Name Submenu Next Print Choose (void).
// set prev to point to before the deletion position: for (handy = first. *prev = 0. } } int Menu::Print (void) { int n = 0. } } void Menu::Insert (const char *str. prev->Next() = option. www. int idx = 0. } else { // insert option->Next() = handy. if (handy != 0) { if (prev == 0) // it's the first entry first = handy->Next(). // first entry first = option. } } void Menu::Delete (const int pos) { Menu::Option *handy. handy != 0 && idx++ != pos. handy != 0. handy = next) { next = handy->Next(). Menu::Option *handy. int idx = 0. delete handy. // set prev to point to before the insertion position: for (handy = first. const Menu *submenu. handy != 0 && handy->Next() != 0 && idx++ != pos. delete handy. if (prev == 0) { // empty list option->Next() = first. handy = handy->Next()) prev = handy.Menu::Option *handy. for (handy = first. handy = handy>Next()) prev = handy. Menu::Option *handy = first. const int pos) { Menu::Option *option = new Option(str. *next.pragsoft. submenu). *prev = 0.com Solutions to Exercises 251 . else // it's not the first prev->Next() = handy->Next().
friend Set operator - (Set&. } return n. Set&). const double y) { return x >= y ? x : y. return (n == 0 ? choice : n). cin >> choice. } while (choice <= 0 || choice > n)..h> const int Max (const int x. do { n = Print(). handy->Print(). handy = handy>Next()) ++n. const int y) { return x >= y ? x : y.y) >= 0 ? x : y. // choose the option: n = handy->Choose(). } int Menu::Choose (void) const { int choice. handy != 0..2 class Set { //.for (handy = first. } const double Max (const double x. const char *y) { return strcmp(x. n = 1. } const char* Max (const char *x. handy = handy->Next()) { cout << ++n << ". n. Menu::Option *handy = first. n != choice && handy != 0. } 7. cout << "Option? ". ".1 #include <string. // move to the chosen option: for (handy = first. } 7. // difference 252 C++ Essentials Copyright © 2005 PragSoft .
    friend Bool operator <= (Set&, Set&);       // subset
    //...
};

Set operator - (Set &set1, Set &set2)
{
    Set res;
    for (register i = 0; i < set1.card; ++i)
        if (!(set1.elems[i] & set2))
            res.elems[res.card++] = set1.elems[i];
    return res;
}

Bool operator <= (Set &set1, Set &set2)
{
    if (set1.card > set2.card)
        return false;
    for (register i = 0; i < set1.card; ++i)
        if (!(set1.elems[i] & set2))
            return false;
    return true;
}

7.3
class Binary {
    //...
    friend Binary operator - (const Binary, const Binary);
    int operator [] (const int n) {return bits[15-n] == '1' ? 1 : 0;}
    //...
};

Binary operator - (const Binary n1, const Binary n2)
{
    unsigned borrow = 0;
    unsigned value;
    Binary res = "0";
    for (register i = 15; i >= 0; --i) {
        value = (n1.bits[i] == '0' ? 0 : 1) -
                (n2.bits[i] == '0' ? 0 : 1) + borrow;
        res.bits[i] = (value == -1 || value == 1 ? '1' : '0');
        borrow = (value == -1 || borrow != 0 && value == 1 ? 1 : 0);
    }
    return res;
}

7.4
#include <iostream.h>

class Matrix {
public:
    Matrix (const int rows, const int cols);
if (prev == 0) first = copy. Matrix&). Matrix&). cols. list->Value()).} protected: class Element { // nonzero element public: Element (const int row.} const int Col (void) {return col. } Matrix::Element* Matrix::Element::CopyList (Element *list) { Element *prev = 0. Element *first = 0. list = list->Next()) { copy = new Element(list->Row(). ~Matrix (void). col(c) { value = val. // linked-list of elements }. double val) : row(r). Matrix& operator = (const Matrix&). friend Matrix operator + (Matrix&. const int row. for (. col. const int col.(Matrix&.} Element* CopyList(Element *list).} Element*& Next (void) {return next. const int c. // row and column of element double value. double& InsertElem col).Matrix (const Matrix&).} double& Value (void) {return value. Matrix&). int rows. else (Element *elem. Matrix&). double& operator () (const int row. Element *copy. private: const int row. Matrix::Element::Element (const int r. const int Row (void) {return row. friend Matrix operator . const int col). // matrix dimensions Element *elems. const int 254 C++ Essentials Copyright © 2005 PragSoft . list->Col(). friend ostream& operator << (ostream&. next = 0. void DeleteList (Element *list). double).} int Cols (void) {return cols. friend Matrix operator * (Matrix&. int Rows (void) {return rows. // element value Element *next. list != 0. // pointer to next element }.
prev->Next() = copy. } Matrix::Matrix (const Matrix &m) { rows = m. list != 0. elems = m. const int cols) { Matrix::rows = rows. Matrix::cols = cols.elems->CopyList(m. elems = 0. if (elem == elems && (elems == 0 || row < elems->Row() || row == elems->Row() && col < elems->Col())) { // insert in front of the list: newElem->Next() = elems.cols.pragsoft.rows. } else { // insert after elem: newElem->Next() = elem->Next(). list = next) { next = list->Next().elems).com Solutions to Exercises 255 . delete list. cols = m. double& Matrix::InsertElem (Element *elem. } return first. 0. col. for (. elems = newElem. prev = copy.0). const int col) { Element* newElem = new Element(row. } Matrix::~Matrix (void) www. elem->Next() = newElem. } } // InsertElem creates a new element and inserts it before // or after the element denoted by elem. } return newElem->Value(). const int row. } void Matrix::Element::DeleteList (Element *list) { Element *next. } Matrix::Matrix (const int rows.
// check if it's the first element in the list: if (row == elems->Row() && col == elems->Col()) return elems->Value(). col <= m. return *this.0 << '\t'. else if (col < elem->Next()->Col()) break. col). } ostream& operator << (ostream &os. } Matrix& Matrix::operator = (const Matrix &m) { elems->DeleteList(elems).{ elems->DeleteList(elems). Matrix &m) { Matrix::Element *elem = m. os << '\n'. } double& Matrix::operator () (const int row. const int col) { if (elems == 0 || row < elems->Row() || row == elems->Row() && col < elems->Col()) // create an element and insert in front: return InsertElem(elems.rows. cols = m. row. rows = m.elems).cols. 0. for (register row = 1.rows. col).elems->CopyList(m. elem = elem->Next()) // found it! // doesn't exist // doesn't exist elem: 256 C++ Essentials Copyright © 2005 PragSoft . ++row) { for (register col = 1. } return os. } else os << 0. // create new element and insert just after return InsertElem(elem. row <= m. // search the rest of the list: for (Element *elem = elems.cols. elems = m. row. elem->Next() != if (row == elem->Next()->Row()) { if (col == elem->Next()->Col()) return elem->Next()->Value(). ++col) if (elem != 0 && elem->Row() == row && elem->Col() == col) { os << elem->Value() << '\t'. elem = elem->Next(). } else if (row < elem->Next()->Row()) break.elems.
qe != 0. pe != 0. // copy p: for (Matrix::Element *pe = p. qe != 0. (const short). } Matrix operator * (Matrix &p.h> #include <iostream. pe->Col()) = pe->Value(). qe != 0. = q. } Matrix operator . Matrix &q) { Matrix m(p. (const String&). pe != 0.5 #include <string.elems.rows. } 7.cols). q. (const String&). qe->Col()) += qe->Value().qe->Col()) += pe->Value() * qe->Value().com Solutions to Exercises 257 .rows.cols).elems. pe = pe->Next()) for (Element *qe = q. (const short). q. return m.(Matrix &p.cols).} Matrix operator + (Matrix &p.elems. qe = qe->Next()) m(qe->Row().pragsoft. // add q: for (Matrix::Element *qe = q. // subtract q: for (Element *qe m(qe->Row().h> class String { public: String String String ~String operator = operator = operator [] Length (const char*). String& String& char& int www.} = p.rows. return m. for (Element *pe = p. // copy p: for (Element *pe m(pe->Row(). pe != 0.elems.elems. (void) {return(len). pe = pe->Next()) pe->Col()) = pe->Value(). return m. (void). qe = qe->Next()) if (pe->Col() == qe->Row()) m(pe->Row(). q. (const char*).elems. qe = qe->Next()) qe->Col()) -= qe->Value(). Matrix &q) { Matrix m(p. pe = pe->Next()) m(pe->Row(). Matrix &q) { Matrix m(p.
    friend String   operator + (const String&, const String&);
    friend ostream& operator << (ostream&, String&);
protected:
    char    *chars;     // string characters
    short   len;        // length of chars
};

String::String (const char *str)
{
    len = strlen(str);
    chars = new char[len + 1];
    strcpy(chars, str);
}

String::String (const String &str)
{
    len = str.len;
    chars = new char[len + 1];
    strcpy(chars, str.chars);
}

String::String (const short size)
{
    len = size;
    chars = new char[len + 1];
    chars[0] = '\0';
}

String::~String (void)
{
    delete chars;
}

String& String::operator = (const char *str)
{
    short strLen = strlen(str);
    if (len != strLen) {
        delete chars;
        len = strLen;
        chars = new char[strLen + 1];
    }
    strcpy(chars, str);
    return(*this);
}

String& String::operator = (const String &str)
{
    if (this != &str) {
        if (len != str.len) {
            delete chars;
            len = str.len;
            chars = new char[len + 1];
        }
        strcpy(chars, str.chars);
    }
    return(*this);
}

char& String::operator [] (const short index)
{
    static char dummy = '\0';
    return(index >= 0 && index < len ? chars[index] : dummy);
}

String operator + (const String &str1, const String &str2)
{
    String result(str1.len + str2.len);
    strcpy(result.chars, str1.chars);
    strcpy(result.chars + str1.len, str2.chars);
    return(result);
}

ostream& operator << (ostream &out, String &str)
{
    out << str.chars;
    return(out);
}

7.6
#include <string.h>
#include <iostream.h>

enum Bool {false, true};
typedef unsigned char uchar;

class BitVec {
public:
    BitVec   (const short dim);
    BitVec   (const char* bits);
    BitVec   (const BitVec&);
    ~BitVec  (void) { delete vec; }
    BitVec&  operator =   (const BitVec&);
    BitVec&  operator &=  (const BitVec&);
    BitVec&  operator |=  (const BitVec&);
    BitVec&  operator ^=  (const BitVec&);
    BitVec&  operator <<= (const short);
    BitVec&  operator >>= (const short);
    int      operator []  (const short idx);
    void     Set   (const short idx);
    void     Reset (const short idx);
} 260 C++ Essentials Copyright © 2005 PragSoft . short n). } inline BitVec& BitVec::operator >>= (const short n) { return (*this) = (*this) >> n. } inline BitVec& BitVec::operator &= (const BitVec &v) { return (*this) = (*this) & v. BitVec&). } inline BitVec& BitVec::operator <<= (const short n) { return (*this) = (*this) << n. bytes. BitVec&). BitVec&). ostream& operator << (ostream&. short n). } inline BitVec& BitVec::operator ^= (const BitVec &v) { return (*this) = (*this) ^ v. } // reset the bit denoted by idx to 0 inline void BitVec::Reset (const short idx) { vec[idx/8] &= ~(1 << idx%8). } inline BitVec& BitVec::operator |= (const BitVec &v) { return (*this) = (*this) | v. protected: uchar short }. *vec. BitVec&).BitVec BitVec BitVec BitVec Bool Bool friend operator | operator ^ operator << operator >> operator == operator != (const (const (const (const (const (const BitVec&). // vector of 8*bytes bits // bytes in the vector // set the bit denoted by idx to 1 inline void BitVec::Set (const short idx) { vec[idx/8] |= (1 << idx%8).
} BitVec::BitVec (const BitVec &v) { bytes = v.1.bytes : bytes). i < bytes.// return the bit denoted by idx inline int BitVec::operator [] (const short idx) { return vec[idx/8] & (1 << idx%8) ? true : false.bytes < bytes ? v. --i) if (*bits++ == '1') // set the 1 bits vec[i/8] |= (1 << (i%8)). vec = new uchar[bytes]. for (register i = 0. vec = new uchar[bytes]. for (register i = 0. i < bytes. } // bitwise COMPLEMENT www. bytes = len / 8 + (len % 8 == 0 ? 0 : 1). } BitVec& BitVec::operator = (const BitVec& v) { for (register i = 0.vec[i]. } inline Bool BitVec::operator != (const BitVec &v) { return *this == v ? false : true. i < (v. i >= 0. } BitVec::BitVec (const short dim) { bytes = dim / 8 + (dim % 8 == 0 ? 0 : 1). i < bytes. ++i) vec[i] = 0. ++i) // copy bytes vec[i] = v. // copy bytes for (. ++i) vec[i] = v.com Solutions to Exercises 261 . // all bits are initially zero } BitVec::BitVec (const char *bits) { int len = strlen(bits). ++i) // extra bytes in *this vec[i] = 0. i < bytes. ++i) vec[i] = 0. vec = new uchar[bytes].bytes. for (register i = 0. return *this.vec[i]. // initialize all bits to zero for (i = len .pragsoft.
--i) r.vec[i] = vec[i] | v.bytes) * 8).bytes ? bytes : v. i < (bytes < v. // zero left bytes for (i = zeros. return r. // bytes on the left to become zero int shift = n % 8.bytes ? bytes : v.vec[i] = vec[i] ^ v. } // bitwise exclusive-OR BitVec BitVec::operator ^ (const BitVec &v) { BitVec r((bytes > v.vec[i].zeros]. return r.1.bytes ? bytes : v.bytes ? bytes : v.vec[i] << shift) | prev. i < bytes. i >= 0. } // bitwise AND BitVec BitVec::operator & (const BitVec &v) { BitVec r((bytes > v.BitVec BitVec::operator ~ (void) { BitVec r(bytes * 8).vec[i].bytes). i < r. for (. } // bitwise OR BitVec BitVec::operator | (const BitVec &v) { BitVec r((bytes > v. 262 C++ Essentials Copyright © 2005 PragSoft . // left shift for remaining bytes register i. i < (bytes < v. --i) // shift bytes left r.vec[i].bytes. i >= zeros.vec[i] = ~vec[i]. for (register i = 0. ++i) r. ++i) { // shift bits left r.vec[i] = 0.bytes) * 8).bytes) * 8). } // SHIFT LEFT by n bits BitVec BitVec::operator << (const short n) { BitVec r(bytes * 8). ++i) r. for (i = bytes . for (register i = 0.bytes ? bytes : v. unsigned char prev = 0. return r. i < (bytes < v. return r.bytes).vec[i] = vec[i . ++i) r.bytes). ++i) r. for (register i = 0.vec[i] = vec[i] & v.vec[i] = (r. for (register i = 0. int zeros = n / 8.bytes ? bytes : v.
for (i = 0.pragsoft. for (i = 0. ++i) // extra bytes in first operand if (vec[i] != 0) return false.prev = vec[i .1. i < bytes. ++i) r. ++i) // extra bytes in second operand if (v.zeros .vec[i] = 0. // right shift for remaining bytes register i.bytes.bytes ? bytes : v. www. prev = vec[i + zeros] << (8 .bytes. } return r.shift).bytes.vec[i] = (r. // shift bytes right // zero right bytes uchar prev = 0.com Solutions to Exercises 263 . int n = v. } ostream& operator << (ostream &os.vec[i] = vec[i + zeros]. --i) { // shift bits right r. i < v. int zeros = n / 8.vec[i]) return false.bytes .zeros] >> (8 .bytes > maxBytes ? maxBytes : v. i < bytes. } return r. // bytes on the right to become zero int shift = n % 8. i < bytes . char *str = buf. } // SHIFT RIGHT by n bits BitVec BitVec::operator >> (const short n) { BitVec r(bytes * 8).zeros. i >= 0. for (i = smaller.shift).vec[i] >> shift) | prev. return true. ++i) r. for (. register i.vec[i] != 0) return false. for (i = smaller. BitVec &v) { const int maxBytes = 256. } Bool BitVec::operator == (const BitVec &v) { int smaller = bytes < v. char buf[maxBytes * 8 + 1]. for (i = r. ++i) // compare bytes if (vec[i] != v. i < smaller.
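The shift operators of Solution 7.6 move whole bytes first and then shift the remaining bits, carrying each byte's spill-over bits into its neighbor. The carry pattern can be seen in isolation on a plain byte array (a sketch of the technique, not the book's BitVec member; the function name and byte layout are assumptions for illustration):

```cpp
typedef unsigned char uchar;

// Shift an array of bytes left by 'shift' bits (1 <= shift <= 7),
// with vec[0] holding the least significant byte. The high bits of
// each byte spill over into the next byte, as in BitVec::operator<<.
void ShiftLeftBits (uchar *vec, int bytes, int shift)
{
    uchar prev = 0;
    for (int i = 0; i < bytes; ++i) {
        uchar next = vec[i] >> (8 - shift);     // bits that spill over
        vec[i] = (vec[i] << shift) | prev;      // shift and pull in carry
        prev = next;
    }
}
```

For example, shifting the two bytes {0x80, 0x01} (the 16-bit value 0x0180) left by one bit yields {0x00, 0x03} (0x0300): the top bit of the low byte carries into the high byte.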
    for (register i = n-1; i >= 0; --i)
        for (register j = 7; j >= 0; --j)
            *str++ = v.vec[i] & (1 << j) ? '1' : '0';
    *str = '\0';
    os << buf;
    return os;
}

8.1
#include "bitvec.h"

enum Month { Jan, Feb, Mar, Apr, May, Jun,
             Jul, Aug, Sep, Oct, Nov, Dec };

inline Bool LeapYear (const short year) {return year%4 == 0;}

class Year : public BitVec {
public:
    Year            (const short year);
    void  WorkDay   (const short day);      // set day as work day
    void  OffDay    (const short day);      // set day as off day
    Bool  Working   (const short day);      // true if a work day
    short Day       (const short day,       // convert date to day
                     const Month month, const short year);
protected:
    short year;                             // calendar year
};

Year::Year (const short year) : BitVec(366)
{
    Year::year = year;
}

void Year::WorkDay (const short day)
{
    Set(day);
}

void Year::OffDay (const short day)
{
    Reset(day);
}

Bool Year::Working (const short day)
{
    return (*this)[day] == 1 ? true : false;
}

short Year::Day (const short day, const Month month, const short year)
{
    static short days[12] = { 31, 28, 31, 30, 31, 30,
                              31, 31, 30, 31,
Cols()) = 0. return res. ++r) solution(r.pragsoft. private: Matrix solution. 1) { for (register r = 1.1]. double* soln) : Matrix(n.0. int res = day.com Solutions to Exercises 265 . c) = (double) (mid .random(1000) % coef). void Solve (void).} class LinEqns : public Matrix { public: LinEqns (const int n. srand((unsigned int) time(0)). double *soln). c < Cols(). // set random seed for (register r = 1. } void LinEqns::Generate (const int coef) { int mid = coef / 2. c) * solution(c. i < month.h> #include "matrix. r <= n. n+1). ++c) { (*this)(r. LinEqns::LinEqns (const int n. } } } // solve equations using Gaussian elimination www. ++r) { (*this)(r. days[Feb] = LeapYear(year) ? 29 : 28. ++i) res += days[i]. Cols()) += (*this)(r.2 #include <stdlib. void Generate (const int coef). 1). // initialize right-hand side // generate equations whose coefficients // do not exceed coef: for (register c = 1.h> #include <time. solution(n. 1) = soln[r .}. }.h" inline double Abs(double n) {return n >= 0 ? n : -n. r <= Rows(). for (register i = Jan. (*this)(r. } 8.
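Returning to Solution 8.1 for a moment: the date-to-day conversion in Year::Day is a running sum over the month lengths, with February patched for leap years. The same computation as a standalone free function, using the book's simplified leap-year rule (year % 4 == 0) and standard bool in place of the Bool enum (a restated sketch, not the book's exact member function):

```cpp
// Day-of-year computation mirroring Year::Day in Solution 8.1.
enum Month { Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec };

inline bool LeapYear (short year) { return year % 4 == 0; }  // simplified rule

short DayOfYear (short day, Month month, short year)
{
    static short days[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    days[Feb] = LeapYear(year) ? 29 : 28;   // patch February for this year
    short res = day;
    for (int i = Jan; i < month; ++i)       // add up the preceding months
        res += days[i];
    return res;
}
```

For example, 1 March 2000 maps to day 61 (31 + 29 + 1), while 1 March 2001 maps to day 60.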
++r) { double factor = (*this)(r. return.void LinEqns::Solve (void) { double const epsilon = 1e-5. r. diag). r >= 1.0. ++r) // upper triangle if (Abs((*this)(piv. c) = (*this)(diag. c) * factor. c. ++diag) { // diagonal piv = diag. cout << *this. diag] = 1: temp = (*this)(diag.// 'almost zero' quantity double temp. c). Cols())) < epsilon) cout << "infinite solutions\n". diag). (*this)(diag. diag)) < Abs((*this)(r. r <= Rows(). --r) { double sum = 0. diag))) piv = r. c <= Cols(). ++c) { temp = (*this)(diag. // choose new pivot // make sure there is a unique solution: if (Abs((*this)(piv. c) = (*this)(piv. for (r = Rows() . (*this)(diag.1. // the last unknown // the rest 266 C++ Essentials Copyright © 2005 PragSoft . ++c) (*this)(diag. ++c) (*this)(r. // now eliminate entries below the pivot: for (r = diag + 1. c) = temp. c) / temp. c). c) -= (*this)(diag. soln(Rows(). // pivot for (r = diag + 1. c <= Cols(). for (diag = 1. int diag. diag) = 0. (*this)(r. diag)) < epsilon) { if (Abs((*this)(diag. 1). diag <= Rows(). } // display elimination step: cout << "eliminated below pivot in column " << diag << '\n'. } } // normalise diag row so that m[diag. 1) = (*this)(Rows(). Cols()). r <= Rows(). c <= Cols(). for (c = diag + 1.0. piv. else cout << "no solution\n". } if (piv != diag) { // swap pivit with diagonal: for (c = 1. for (c = diag + 1. diag) = 1. } // back substitute: Matrix soln(Rows(). (*this)(piv.0.
        for (diag = r + 1; diag <= Rows(); ++diag)
            sum += (*this)(r, diag) * soln(diag, 1);
        soln(r, 1) = (*this)(r, Cols()) - sum;
    }
    cout << "solution:\n";
    cout << soln;
}

8.3
#include "bitvec.h"

class EnumSet : public BitVec {
public:
    EnumSet (const short maxCard) : BitVec(maxCard) {}
    friend EnumSet  operator +  (EnumSet &s, EnumSet &t);   // union
    friend EnumSet  operator -  (EnumSet &s, EnumSet &t);   // difference
    friend EnumSet  operator *  (EnumSet &s, EnumSet &t);   // intersection
    friend Bool     operator %  (const short elem, EnumSet &t);
    friend Bool     operator <= (EnumSet &s, EnumSet &t);
    friend Bool     operator >= (EnumSet &s, EnumSet &t);
    friend EnumSet& operator << (EnumSet &s, const short elem);
    friend EnumSet& operator >> (EnumSet &s, const short elem);
};

inline EnumSet operator + (EnumSet &s, EnumSet &t)      // union
{
    return s | t;
}

inline EnumSet operator - (EnumSet &s, EnumSet &t)      // difference
{
    return s & ~t;
}

inline EnumSet operator * (EnumSet &s, EnumSet &t)      // intersection
{
    return s & t;
}

inline Bool operator % (const short elem, EnumSet &t)
{
    return t[elem];
}

inline Bool operator <= (EnumSet &s, EnumSet &t)
{
    return (t & s) == s;
}

inline Bool operator >= (EnumSet &s, EnumSet &t)
{
    return (t & s) == t;
}
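Each EnumSet operator in Solution 8.3 reduces a set operation to a single bitwise operation on the underlying bit vector. The identities are easy to verify on a plain machine word (a sketch of the same algebra; the Mask typedef and function names are made up for illustration, not part of the book's class):

```cpp
// Set algebra on a machine word, mirroring the EnumSet operators:
// each set operation becomes one bitwise instruction.
typedef unsigned long Mask;

inline Mask Union     (Mask s, Mask t) { return s | t; }         // s + t
inline Mask Diff      (Mask s, Mask t) { return s & ~t; }        // s - t
inline Mask Intersect (Mask s, Mask t) { return s & t; }         // s * t
inline bool Subset    (Mask s, Mask t) { return (t & s) == s; }  // s <= t
inline bool Member    (int elem, Mask t) { return ((t >> elem) & 1) != 0; }
```

Subset falls out of intersection: s is a subset of t exactly when intersecting the two leaves s unchanged, which is the same test EnumSet's operator<= performs on whole bit vectors.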
Reset(elem). This ensures that at least 50% of the storage capacity is utilized. private: Key key. } 8. return s. true }. const short elem) { s. Data). Furthermore. // item's key 268 C++ Essentials Copyright © 2005 PragSoft .h> #include "database. // max tree order class BTree : public Database { public: class Page. Key& KeyOf (void) {return key. The most important property of a B-tree is that the insert and delete operations are designed so that the tree remains balanced at all times. #include <iostream. Data &data) {} {} {return false. class Database { public: virtual void Insert virtual void Delete virtual Bool Search }.} Item (Key. The number n is called the order of the tree.} Data& DataOf (void) {return data.} friend ostream& operator << (ostream&.} Page*& Subtree (void) {return right. enum Bool { false. Item&).} A B-tree consists of a set of nodes. (Key key. where each node may contain up to 2n records and have 2n+1 children. return s. } EnumSet& operator >> (EnumSet &s. const short elem) { s. Data data) (Key key) (Key key.Set(elem).} EnumSet& operator << (EnumSet &s. class Item { // represents each stored item public: Item (void) {right = 0. typedef double Data. a nonleaf node that contains m records must have exactly m+1 children. Every node in the tree (except for the root node) must have at least n records.h" const int maxOrder = 256.4 typedef int Key.
void PrintPage (ostream& os. // no. // pointer to the left-most subtree Item *items. of items on the page Page *left.Page *page. Data data). Data &data). of items per page int used. // pointer to right subtree // represents each tree node Page (const int size). int &idx). // the items on the page }.} Item& operator [] (const int n) {return items[n]. int idx.} (Key key. (Key key. const int destIdx. const int srcIdx. class Page { public: data. protected: const int order. (Key key). const int Size (void) {return size. (void) {FreePages(root). int atIdx). Bool &underflow). BTree&). (ostream&.} Bool BinarySearch(Key key.com Solutions to Exercises 269 . // buffer page for distribution/merging virtual void virtual Item* virtual Item* virtual void virtual void virtual void }. // item's data *right. (Item *item. (Page *page. BTree::Item::Item (Key k. Page *page). int CopyItems (Page *dest. (Page *tree. Page *page. Bool InsertItem (Item &item. Bool DeleteItem (int atIdx). private: const int size. Bool &underflow). const int count). // max no.// order of tree Page *root.pragsoft. public: BTree ~BTree virtual void Insert virtual void Delete virtual Bool Search friend ostream& operator << (const int order). www. // root of the tree Page *bufP. const int idx. (Key key. Page*& Right (const int ofItem). (Page *parent.} int& Used (void) {return used. Bool &underflow). Data d) { FreePages SearchAux InsertAux DeleteAux1 DeleteAux2 Underflow (Page *page). const int margin).Data Page }. Key key). ~Page (void) {delete items.} Page*& Left (const int ofItem). Page *child.
int mid. } while (low <= high). do { mid = (low + high) / 2.data. } BTree::Page::Page (const int sz) : size(sz) { used = 0.KeyOf()) high = mid . if (key <= items[mid].1. if (key >= items[mid].1]. Bool found = low . } // return the left subtree of an item BTree::Page*& BTree::Page::Left (const int ofItem) { return ofItem <= 0 ? left: items[ofItem .key = k.KeyOf()) low = mid + 1. int &idx) { int low = 0. idx = found ? mid : high.Subtree().high > 1.1. // restrict to lower half // restrict to upper half 270 C++ Essentials Copyright © 2005 PragSoft . } // do a binary search on items of a page // returns true if successful and false otherwise Bool BTree::Page::BinarySearch (Key key. items = new Item[size]. BTree::Item &item) { os << item. data = d. right = 0. left = 0. } ostream& operator << (ostream& os.Subtree().key << ' ' << item. int high = used . } // return the right subtree of an item BTree::Page*& BTree::Page::Right (const int ofItem) { return ofItem < 0 ? left : items[ofItem]. return os.
1. ++i) { os << margBuf. i <= margin. const int destIdx. // print page and remaining children: for (i = 0. } // insert an item into a page Bool BTree::Page::InsertItem (Item &item. ++i) // shift left items[i] = items[i + 1]. i < count. if (Right(i) != 0) www. // build the margin string: for (int i = 0. margin + 8). i < used .return found. i < used. items[atIdx] = item. --i) // shift right items[i] = items[i . const int margin) { char margBuf[128]. const int count) { for (register i = 0. return count. return --used < size/2.com Solutions to Exercises 271 . os << (*this)[i] << '\n'. i > atIdx. margBuf[i] = '\0'. int atIdx) { for (register i = used. ++i) // straight copy dest->items[destIdx + i] = items[srcIdx + i]. // overflow? } // delete an item from a page Bool BTree::Page::DeleteItem (int atIdx) { for (register i = atIdx. } // copy a set of items from page to page int BTree::Page::CopyItems (Page *dest.1]. // insert return ++used >= size. ++i) margBuf[i] = ' '. const int srcIdx.pragsoft. // underflow? } // recursively print a page and its subtrees void BTree::Page::PrintPage (ostream& os. // print the left-most child: if (Left(0) != 0) Left(0)->PrintPage(os.
root != 0) tree. 0). } } void BTree::Delete (Key key) { Bool underflow. delete temp. BTree &tree) { if (tree. root = page. underflow).Right(i)->PrintPage(os.root->PrintPage(os. root. margin + 8). return true. data). Data &data) { Item *item = SearchAux(root. } ostream& operator << (ostream& os. bufP = new Page(2 * order + 2). // dispose root 272 C++ Essentials Copyright © 2005 PragSoft . } else if ((receive = InsertAux(&item. return os. } } BTree::BTree (const int ord) : order(ord) { root = 0. key). DeleteAux1(key. } void BTree::Insert (Key key. // new root page->InsertItem(*receive. if (root == 0) { // empty tree root = new Page(2 * order). root)) != 0) { Page *page = new Page(2 * order). if (item == 0) return false. root = root->Left(0). } } Bool BTree::Search (Key key. 0). root->InsertItem(item. data = item->DataOf(). 0). if (underflow && root->Used() == 0) { Page *temp = root. Data data) { Item item(key. page->Left(0) = root. *receive.
0. idx)) return 0. or passed up if (page->Used() < 2 * order) { // insert in the page page->InsertItem(*item. } else { // page is full. Item *item. delete page. Page *page) { Page *child.pragsoft. int size = bufP->Used(). if (page->BinarySearch(item->KeyOf().com Solutions to Exercises 273 . child). if (tree == 0) return 0. int idx. bufP->Used() = page->CopyItems(bufP. bufP->InsertItem(*item. for (register i = 0. 0. split Page *newP = new Page(2 * order). ++i) FreePages(page->Right(i)). } // insert an item into a page and split the page if it overflows BTree::Item* BTree::InsertAux (Item *item. i < page->Used(). // already in tree if ((child = page->Right(idx)) != 0) item = InsertAux(item. if (tree->BinarySearch(key. Key key) { int idx. return SearchAux(idx < 0 ? tree->Left(0) : tree->Right(idx). key).} // recursively free a page and its subtrees void BTree::FreePages (Page *page) { if (page != 0) { FreePages(page->Left(0)). page->Used()). www. idx + 1). idx)) return &((*tree)[idx]). } } // recursively search the tree for an item with matching key BTree::Item* BTree::SearchAux (Page *tree. idx + 1). // child is not a leaf if (item != 0) { // page is a leaf.
1. newP->Left(0) = bufP->Right(half). } } // delete an item and deal with underflows by borrowing // items from neighboring pages or merging two pages void BTree::DeleteAux2 (Page *parent. if (page == 0) return. const int idx. page->Used() = bufP->CopyItems(page. if (page->BinarySearch(key. 0. Page *page. child. child. Bool &underflow) { int idx.1). idx. underflow). child. idx. child.1).int half = size/2. 0. half). half + 1. } // delete an item from a page and deal with underflows void BTree::DeleteAux1 (Key key. underflow). item->Subtree() = newP. 0. Bool &underflow) { Page *child = page->Right(page->Used() . *item = (*bufP)[half]. } } else { // is not on child = page->Right(idx). underflow). size half . underflow). Page *child. } } return 0.Page *page. idx . underflow = false. DeleteAux1(key. idx)) { if ((child = page->Left(idx)) == 0) { // page is a underflow = page->DeleteItem(idx). return item. // should be if (underflow) Underflow(page. if (child != 0) { // page is not a leaf // the mid item leaf subtree this page in child 274 C++ Essentials Copyright © 2005 PragSoft . newP->Used() = bufP->CopyItems(newP. if (underflow) Underflow(page. } else { // page is a // delete from subtree: DeleteAux2(page.
1 ? child : page->Left(idx).1). A B*-tree www. 0. half). (*page)[idx] = (*bufP)[half]. } else { // page is a leaf // save right: Page *right = parent->Right(idx). half + 1. right->Used()). idx. underflow = page->DeleteItem(idx). 0. idx. right->Used() = bufP->CopyItems(right. left->Used()). bufP->Right(size++) = right->Left(0). Page *child. Bool &underflow) { Page *left = idx < page->Used() . page->Right(idx) = right. 0. underflow). size).1. // borrow an item from page for parent: page->CopyItems(parent. underflow). // copy contents of left. delete right. page->Used() . // restore right: parent->Right(idx) = right. left->Used() = bufP->CopyItems(left.1). parent item.com Solutions to Exercises 275 . Instead of splitting a node as soon as it becomes full. } } A B*-tree is a B-tree in which most nodes are at least 2/3 full (instead of 1/2 full). size. underflow = page->DeleteItem(page->Used() . // go another level down if (underflow) Underflow(page. underflow = false.half . 0. size += right->CopyItems(bufP. int idx. 0. child. child. size . 0. an attempt is made to evenly distribute the contents of the node and its neighbor(s) between them. right->Left(0) = bufP->Right(half). } else { // merge. (*bufP)[size] = (*page)[idx].DeleteAux2(parent. 0. Page *right = left == child ? page->Right(++idx) : child.pragsoft. A node is split only when one or both of its neighbors are full too. 0. if (size > 2 * order) { // distribute bufP items between left and right: int half = size/2.1. 1). } } // handle underflows void BTree::Underflow (Page *page. and right onto bufP: int size = left->CopyItems(bufP. page->Used() . and free the right page: left->Used() = bufP->CopyItems(left.
276 C++ Essentials Copyright © 2005 PragSoft . 0. root->Left(0) = left. // right is underflown (size == 0): Underflow(root. protected: virtual Item* virtual Item* }. root)) != 0) { left = root. int idx. class BStar : public BTree { public: BStar (const int order) : BTree(order) {} virtual void Insert (Key key. 0). the height of the tree is smaller. which in turn improves the search speed. right = new Page (2 * order). } else if ((overflow = InsertAux(&item. Page *page) { Page *child. Page *page). right.facilitates more economic utilization of the available store. root->InsertItem(*overflow. // already in tree InsertAux Overflow (Item *item. As a result. } } // inserts and deals with overflows Item* BStar::InsertAux (Item *item. // the right child of root right->Left(0) = overflow->Subtree(). Page *left. The search and delete operations are exactly as in a B-tree. only the insertion operation is different. Bool dummy. data). since it ensures that at least 66% of the storage occupied by the tree is actually used. (Item *item. // the left-most child of root root->Right(0) = right. if (page->BinarySearch(item->KeyOf(). 0). root->InsertItem(item. Data data). int idx). dummy). // root becomes a left child root = new Page(2 * order). *right. if (root == 0) { // empty tree root = new Page(2 * order). Page *child. Data data) { Item item(key. idx)) return 0. // insert with overflow/underflow handling void BStar::Insert (Key key. Item *overflow. Page *page.
bufP->Used() = page->CopyItems(bufP. 0. bufP->Used()). 0. half + 1. idx). bufP->InsertItem((*page)[idx]. } else if (page->Used() < 2 * order) { // item fits in node page->InsertItem(*item. 0. child)) != 0) return Overflow(item. bufP->Used()). } return 0. } if (bufP->Used() < 4 * order + 2) { // distribute buf between left and right: int size = bufP->Used(). bufP->Used()). bufP->Right(bufP->Used() . 0. and right into buf: bufP->Used() = left->CopyItems(bufP. *item = (*bufP)[size]. half.pragsoft. bufP->Used()). return item. Page *page. overflown and parent items. right->Used() = bufP->CopyItems(right. left->Used() = bufP->CopyItems(left. if (child == left ) { bufP->InsertItem(*item. right->Used()). 0. (*page)[idx] = (*bufP)[half]. bufP->Used(). // copy left.1) = right->Left(0).if ((child = page->Right(idx)) != 0) { // child not a leaf: if ((item = InsertAux(item. bufP->CopyItems(page. } // handles underflows Item* BStar::Overflow (Item *item. bufP->Right(bufP->Used() . } else { bufP->InsertItem((*page)[idx]. idx + 1). 0. } else { www. } else { // node is full int size = page->Used(). size). Page *right = left == child ? page->Right(++idx) : child. bufP->Used() += right->CopyItems(bufP. 0. right->Used()). Page *child. left->Used()). int idx) { Page *left = idx < page->Used() .half . size .1). half = size/2). child.com Solutions to Exercises 277 . 0. return 0. right->Left(0) = bufP->Right(half). bufP->Used().1 ? child : page->Left(idx). bufP->Used() += right->CopyItems(bufP. 0. idx + 1).1) = right->Left(0). size). page->Right(idx) = right. bufP->InsertItem(*item. 0. 0. page. bufP->InsertItem(*item.
        // split into 3 pages:
        Page *newP = new Page(2 * order);
        int mid1, mid2;
        mid1 = left->Used() = bufP->CopyItems(left, 0, 0, (4 * order + 1) / 3);
        mid2 = right->Used() = bufP->CopyItems(right, mid1 + 1, 0, 4 * order / 3);
        mid2 += mid1 + 1;
        newP->Used() = bufP->CopyItems(newP, mid2 + 1, 0, (4 * order + 2) / 3);
        right->Left(0) = bufP->Right(mid1);
        bufP->Right(mid1) = right;
        newP->Left(0) = bufP->Right(mid2);
        bufP->Right(mid2) = newP;
        if (page->Used() < 2 * order) {
            page->InsertItem((*bufP)[mid1], idx);
            (*page)[idx] = (*bufP)[mid2];
            return 0;
        } else {
            *item = (*page)[page->Used() - 1];
            (*page)[page->Used() - 1] = (*bufP)[mid1];
            (*page)[idx] = (*bufP)[mid2];
            return item;
        }
    }
}

9.1
template <class Type>
void Swap (Type &x, Type &y)
{
    Type tmp = x;
    x = y;
    y = tmp;
}

9.2
#include <string.h>
enum Bool {false, true};

template <class Type>
void BubbleSort (Type *names, const int size)
{
    Bool swapped;
    do {
        swapped = false;
        for (register i = 0; i < size - 1; ++i) {
            if (names[i] > names[i+1]) {
                Type temp = names[i];
                names[i] = names[i+1];
                names[i+1] = temp;
                swapped = true;
            }
        }
    } while (swapped);
}

// specialization:
void BubbleSort (char **names, const int size)
{
    Bool swapped;
    do {
        swapped = false;
        for (register i = 0; i < size - 1; ++i) {
            if (strcmp(names[i], names[i+1]) > 0) {
                char *temp = names[i];
                names[i] = names[i+1];
                names[i+1] = temp;
                swapped = true;
            }
        }
    } while (swapped);
}

9.3
#include <string.h>
#include <iostream.h>

enum Bool {false, true};
typedef char *Str;

template <class Type>
class BinNode {
public:
    BinNode         (const Type&);
    ~BinNode        (void) {}
    Type&           Value (void)    {return value;}
    BinNode*&       Left  (void)    {return left;}
    BinNode*&       Right (void)    {return right;}
    void            FreeSubtree (BinNode *subtree);
    void            InsertNode  (BinNode *node, BinNode *&subtree);
    void            DeleteNode  (const Type&, BinNode *&subtree);
    const BinNode*  FindNode    (const Type&, const BinNode *subtree);
    void            PrintNode   (const BinNode *node);
private:
    Type    value;          // node value
    BinNode *left;          // pointer to left child
    BinNode *right;         // pointer to right child
};
void Delete (const Type &val). void Print (void). else if (node->value <= subtree->value) InsertNode(node. subtree->right).template <class Type> class BinTree { public: BinTree (void). FreeSubtree(node->right). void Insert (const Type &val). left = right = 0. delete node. } // specialization: BinNode<Str>::BinNode (const Str &str) { value = new char[strlen(str) + 1]. } 280 C++ Essentials Copyright © 2005 PragSoft . } } template <class Type> void BinNode<Type>::InsertNode (BinNode<Type> *node. left = right = 0. BinNode<Type> *&subtree) { if (subtree == 0) subtree = node. Bool Find (const Type &val). else InsertNode(node. } template <class Type> void BinNode<Type>::FreeSubtree (BinNode<Type> *node) { if (node != 0) { FreeSubtree(node->left). template <class Type> BinNode<Type>::BinNode (const Type &val) { value = val.// root node of the tree }. protected: BinNode<Type> *root. subtree->left). strcpy(value. str). ~BinTree(void).
if (val < subtree->value) DeleteNode(val. } template <class Type> void BinNode<Type>::DeleteNode (const Type &val. // insert left subtree into right subtree: InsertNode(subtree->left.// specialization: void BinNode<Str>::InsertNode (BinNode<Str> *node. if ((cmp = strcmp(str. else if (subtree->right == 0) // no right subtree subtree = subtree->left. } } // specialization: void BinNode<Str>::DeleteNode (const Str &str. www. BinNode<Str> *&subtree) { if (subtree == 0) subtree = node. subtree->right).pragsoft. else InsertNode(node. else if (val > subtree->value) DeleteNode(val. } delete handy. BinNode<Str> *&subtree) { int cmp. subtree->right). subtree->value)) < 0) DeleteNode(str. if (subtree->left == 0) // no left subtree subtree = subtree->right. subtree->right).com Solutions to Exercises 281 . else { // left and right subtree subtree = subtree->right. if (subtree == 0) return. BinNode<Type> *&subtree) { int cmp. subtree->right). subtree->left). subtree->value) <= 0) InsertNode(node. else if (cmp > 0) DeleteNode(str. if (subtree == 0) return. subtree->left). subtree->left). else if (strcmp(node->value. else { BinNode* handy = subtree.
return (subtree == 0) ? 0 : ((cmp = strcmp(str. else { // left and right subtree subtree = subtree->right. subtree->right). else if (subtree->right == 0) // no right subtree subtree = subtree->left. const BinNode<Type> *subtree) { if (subtree == 0) return 0. subtree->right). } template <class Type> void BinNode<Type>::PrintNode (const BinNode<Type> *node) { if (node != 0) { PrintNode(node->left). subtree->left). } delete handy. } // specialization: const BinNode<Str>* BinNode<Str>::FindNode (const Str &str. if (subtree->left == 0) // no left subtree subtree = subtree->right.else { BinNode<Str>* handy = subtree. cout << node->value << ' '. subtree->value)) < 0 ? FindNode(str. } } template <class Type> const BinNode<Type>* BinNode<Type>::FindNode (const Type &val. const BinNode<Str> *subtree) { int cmp. subtree->right) : subtree)). // insert left subtree into right subtree: InsertNode(subtree->left. subtree->left) : (cmp > 0 ? FindNode(str. 282 C++ Essentials Copyright © 2005 PragSoft . return subtree. if (val > subtree->value) return FindNode(val. if (val < subtree->value) return FindNode(val. PrintNode(node->right).
class Data> class Page. cout << '\n'.pragsoft. } template <class Type> BinTree<Type>::~BinTree(void) { root->FreeSubtree(root). } template <class Type> void BinTree<Type>::Print (void) { root->PrintNode(root). } template <class Type> BinTree<Type>::BinTree (void) { root = 0.h> enum Bool { false. } 9. true }. root). Data data) virtual void Delete (Key key) virtual Bool Search (Key key.} #include <iostream.} } template <class Type> void BinTree<Type>::Insert (const Type &val) { root->InsertNode(new BinNode<Type>(val). } template <class Type> void BinTree<Type>::Delete (const Type &val) { root->DeleteNode(val. root). } template <class Type> Bool BinTree<Type>::Find (const Type &val) { return root->FindNode(val. template <class Key. class Data> class Database { public: virtual void Insert (Key key. template <class Key.com Solutions to Exercises 283 . {} {} {return false. Data &data) }. root) != 0.
const int count). Bool InsertItem (Item<Key. // root of the tree 284 C++ Essentials Copyright © 2005 PragSoft . class Data> class Item { // represents each stored item public: Item (void) {right = 0. ~Page (void) {delete items. Key& KeyOf (void) {return key. Data> *items. BTree&).} Data& DataOf (void) {return data. private: const int size.} Item<Key. friend ostream& operator << (ostream&. private: Key key. Data> { public: BTree (const int order).} Page<Key. int atIdx). Data> *root.} Page*& Left (const int ofItem). const int srcIdx. Data>*& Subtree (void) {return right. Data). virtual Bool Search (Key key. Data data). // item's key Data data. const int destIdx. Page*& Right (const int ofItem). Item&). // pointer to the left-most subtree Item<Key.} Item (Key. // the items on the page }. of items on the page Page *left. int &idx). // no. of items per page int used. Data> &item. Bool DeleteItem (int atIdx).} friend ostream& operator << (ostream&. void PrintPage (ostream& os.} Bool BinarySearch(Key key.template <class Key. Data &data).} int& Used (void) {return used. class Data> class BTree : public Database<Key. template <class Key. ~BTree (void) {FreePages(root). // pointer to right subtree }. int CopyItems (Page *dest. Data> *right. const int Size (void) {return size. Data>& operator [] (const int n) {return items[n]. // item's data Page<Key. virtual void Delete (Key key). protected: const int order. template <class Key. const int margin). class Data> class Page { // represents each tree node public: Page (const int size). // max no.// order of tree Page<Key.} virtual void Insert (Key key.
Page<Key. Page<Key. class Data> Item<Key. Bool &underflow).1]. Data> *parent.Page<Key. const int idx. class Data> Page<Key. Item<Key. (Page<Key. Data> *page. template <class Key. Data>*& Page<Key. } template <class Key.com Solutions to Exercises 285 . Bool &underflow).data. Data> *item. Data>::Right (const int ofItem) www. } // return the right subtree of an item template <class Key. class Data> ostream& operator << (ostream& os. Data>*SearchAux (Page<Key. Data>::Left (const int ofItem) { return ofItem <= 0 ? left: items[ofItem . Data>::Item (Key k. Data>::Page (const int sz) : size(sz) { used = 0.Subtree(). items = new Item<Key. Data>[size]. int idx. Bool &underflow).key << ' ' << item. Data> *tree. } // return the left subtree of an item template <class Key. class Data> Page<Key. (Page<Key.Data> &item) { os << item. virtual void virtual void DeleteAux1 DeleteAux2 (Key key. Page<Key. Data>*& Page<Key. Data> *page). Key key). virtual void Underflow }.pragsoft. left = 0. virtual Item<Key. return os. Data> *page). class Data> Page<Key. Data>*InsertAux (Item<Key. } template <class Key. Data> *page. Data> *bufP. virtual Item<Key. Data d) { key = k. Page<Key. // buffer page for distribution/merging virtual void FreePages (Page<Key. right = 0. Data> *child. Data> *page. data = d.
if (key <= items[mid]. int atIdx) { for (register i = used. const int count) { for (register i = 0. ++i) // straight copy dest->items[destIdx + i] = items[srcIdx + i]. Bool found = low . return count. class Data> int Page<Key.1. Data>::InsertItem (Item<Key. int mid. Data>::BinarySearch (Key key.1. do { mid = (low + high) / 2. } while (low <= high).KeyOf()) low = mid + 1. return found. // overflow? } // delete an item from a page // restrict to lower half // restrict to upper half 286 C++ Essentials Copyright © 2005 PragSoft . } // copy a set of items from page to page template <class Key. const int destIdx. // insert return ++used >= size. } // do a binary search on items of a page // returns true if successful and false otherwise template <class Key.Subtree().{ return ofItem < 0 ? left : items[ofItem]. --i) // shift right items[i] = items[i . int high = used . Data>::CopyItems (Page<Key. int &idx) { int low = 0. class Data> Bool Page<Key. items[atIdx] = item.1]. const int srcIdx.KeyOf()) high = mid . i > atIdx. Data> &item. i < count. Data> *dest. if (key >= items[mid]. class Data> Bool Page<Key. idx = found ? mid : high. } // insert an item into a page template <class Key.high > 1.
margin + 8). ++i) { os << margBuf. } else if ((receive = InsertAux(&item. ++i) // shift left items[i] = items[i + 1]. return --used < size/2. Data>::Insert (Key key. bufP = new Page<Key. Data> item(key. } } template <class Key. margin + 8). // new root www. Data>::PrintPage (ostream& os. *receive. // print the left-most child: if (Left(0) != 0) Left(0)->PrintPage(os. i < used. Data>(2 * order + 2). class Data> void BTree<Key. Data> *page = new Page<Key. i < used . class Data> Bool Page<Key.pragsoft. Data>(2 * order). if (root == 0) { // empty tree root = new Page<Key. Data>::BTree (const int ord) : order(ord) { root = 0. data). // underflow? } // recursively print a page and its subtrees template <class Key. ++i) margBuf[i] = ' '.com Solutions to Exercises 287 . } template <class Key.1. root->InsertItem(item. // print page and remaining children: for (i = 0. root)) != 0) { Page<Key. Data>::DeleteItem (int atIdx) { for (register i = atIdx. if (Right(i) != 0) Right(i)->PrintPage(os. class Data> BTree<Key.template <class Key. 0). margBuf[i] = '\0'. Data data) { Item<Key. i <= margin. // build the margin string: for (int i = 0. const int margin) { char margBuf[128]. Data>(2 * order). class Data> void Page<Key. os << (*this)[i] << '\n'.
page->Left(0) = root. Data> *page) { if (page != 0) { FreePages(page->Left(0)). if (item == 0) return false. root = root->Left(0). Data>::Search (Key key. Data &data) { Item<Key. } } template <class Key. class Data> ostream& operator << (ostream& os. root. ++i) FreePages(page->Right(i)). 0). Data> *item = SearchAux(root. class Data> void BTree<Key. i < page->Used(). for (register i = 0. key). class Data> void BTree<Key. delete temp. 0).root != 0) tree. Data>::Delete (Key key) { Bool underflow. return true. Data> &tree) { if (tree. data = item->DataOf(). } } // recursively search the tree for an item with matching key // dispose root 288 C++ Essentials Copyright © 2005 PragSoft .page->InsertItem(*receive. BTree<Key. underflow).root->PrintPage(os. class Data> Bool BTree<Key. delete page. Data>::FreePages (Page<Key. } template <class Key. } } template <class Key. return os. DeleteAux1(key. } // recursively free a page and its subtrees template <class Key. if (underflow && root->Used() == 0) { Page<Key. Data> *temp = root. root = page.
key). Data>::InsertAux (Item<Key. if (tree == 0) return 0. bufP->Used() = page->CopyItems(bufP. 0. Data> *item.pragsoft. return item. child). idx + 1). int idx.template <class Key. bufP->InsertItem(*item. if (page->BinarySearch(item->KeyOf(). } else { // page is full. newP->Used() = bufP->CopyItems(newP. split Page<Key. Data> *newP = new Page<Key. idx)) return 0. if (tree->BinarySearch(key. Data>(2 * order). Data> *item. 0. class Data> Item<Key. } // insert an item into a page and split the page if it overflows template <class Key. half + 1. idx + 1). class Data> Item<Key. 0. Data> *child. half). Data>:: SearchAux (Page<Key. size half . Data>* BTree<Key. int half = size/2. Data> *page) { Page<Key. // child is not a leaf if (item != 0) { // page is a leaf. page->Used()).com Solutions to Exercises 289 . idx)) return &((*tree)[idx]). int size = bufP->Used(). } // the mid item www. Data>* BTree<Key. page->Used() = bufP->CopyItems(page. Data> *tree. *item = (*bufP)[half]. return SearchAux(idx < 0 ? tree->Left(0) : tree->Right(idx). 0. Item<Key. or passed up if (page->Used() < 2 * order) { // insert in the page page->InsertItem(*item. 0. // already in tree if ((child = page->Right(idx)) != 0) item = InsertAux(item. item->Subtree() = newP. newP->Left(0) = bufP->Right(half).1). Page<Key. Key key) { int idx.
} // delete an item from a page and deal with underflows template <class Key. Page<Key. child. // go another level down if (underflow) Underflow(page. Bool &underflow) { Page<Key. Data> *right = parent->Right(idx). class Data> void BTree<Key. child. Data>::DeleteAux1 (Key key.1. idx. } else { // page is a // delete from subtree: DeleteAux2(page. child. Data> *page. class Data> void BTree<Key. Page<Key. idx . } } // delete an item and deal with underflows by borrowing // items from neighboring pages or merging two pages template <class Key. page->Used() . Data> *parent. if (page == 0) return.1.1). Page<Key.} return 0. underflow). Bool &underflow) { int idx. Data> *child = page->Right(page->Used() . if (child != 0) { // page is not a leaf DeleteAux2(parent. idx)) { if ((child = page->Left(idx)) == 0) { // page is a underflow = page->DeleteItem(idx). idx. } else { // page is a leaf // save right: Page<Key. Data> *child. underflow = false. underflow). // should be if (underflow) Underflow(page. if (underflow) Underflow(page. leaf subtree this page in child 290 C++ Essentials Copyright © 2005 PragSoft . child. Data>::DeleteAux2 (Page<Key. underflow). underflow). underflow). const int idx. underflow). DeleteAux1(key. Data> *page. child. if (page->BinarySearch(key. } } else { // is not on child = page->Right(idx). idx. child.
right->Used() = bufP->CopyItems(right. Data>::Underflow (Page<Key. right->Left(0) = bufP->Right(half). Data> *right = left == child ? page->Right(++idx) : child. Data data). half). left->Used()).1 ? child : page->Left(idx). class Data> void BTree<Key.1.1). 0. www. Data>(order) {} virtual void Insert (Key key. underflow = page->DeleteItem(idx). Data> *left = idx < page->Used() . right->Used()). // restore right: parent->Right(idx) = right. underflow = page->DeleteItem(page->Used() . 0. Data> { public: BStar (const int order) : BTree<Key. and right onto bufP: int size = left->CopyItems(bufP.half . size . page->Right(idx) = right. } else { // merge. 0. Page<Key. and free the right page: left->Used() = bufP->CopyItems(left. (*bufP)[size] = (*page)[idx].com Solutions to Exercises 291 . Page<Key. parent item. bufP->Right(size++) = right->Left(0). delete right.// borrow an item from page for parent: page->CopyItems(parent. Bool &underflow) { Page<Key. half + 1. page->Used() .1). idx. Data> *child. 0. underflow = false. } } // handle underflows template <class Key. 0. size. 0. class Data> class BStar : public BTree<Key.pragsoft. int idx. 1). size += right->CopyItems(bufP. 0. Data> *page. 0. size). } } //------------------------------------------------------------template <class Key. left->Used() = bufP->CopyItems(left. if (size > 2 * order) { // distribute bufP items between left and right: int half = size/2. // copy contents of left. (*page)[idx] = (*bufP)[half].
Data>(2 * order). Page<Key. idx)) return 0. root->InsertItem(item. Data>*InsertAux (Item<Key. // root becomes a left child root = new Page<Key. int idx. Bool dummy. Data>*Overflow (Item<Key. 0). right = new Page<Key. } else if ((overflow = InsertAux(&item. Data> *page. Data> *child. class Data> Item<Key. Data> *page) { Page<Key. Data data) { Item<Key. Data>::Insert (Key key. // the left-most child of root root->Right(0) = right. Data> *child. Page<Key. // the right child of root right->Left(0) = overflow->Subtree(). idx). virtual Item<Key. Data>(2 * order). 0). child. Data> *item. Data> *overflow. dummy). Page<Key. 0. if (page->BinarySearch(item->KeyOf(). class Data> void BStar<Key. Page<Key. Data> *page). Data> item(key. // already in tree if ((child = page->Right(idx)) != 0) { // child not a leaf: if ((item = InsertAux(item. Data> *item. Data> *item. // insert with overflow/underflow handling template <class Key. root)) != 0) { left = root. root->InsertItem(*overflow. } else if (page->Used() < 2 * order) { // item fits in node 292 C++ Essentials Copyright © 2005 PragSoft . root->Left(0) = left. Page<Key. }.protected: virtual Item<Key. int idx). data). *right. child)) != 0) return Overflow(item. // right is underflown (size == 0): Underflow(root. Data> *left. Data>(2 * order). right. Item<Key. page. Data>::InsertAux (Item<Key. } } // inserts and deals with overflows template <class Key. Data>* BStar<Key. if (root == 0) { // empty tree root = new Page<Key.
1) = right->Left(0). 0. half. 0. bufP->Used() += right->CopyItems(bufP. bufP->Right(bufP->Used() . half = size/2). idx + 1). overflown and parent items. 0.1) = right->Left(0). if (child == left ) { bufP->InsertItem(*item. bufP->Used()). 0. } if (bufP->Used() < 4 * order + 2) { // distribute buf between left and right: int size = bufP->Used(). 0. bufP->Used() = page->CopyItems(bufP.half . and right into buf: bufP->Used() = left->CopyItems(bufP. left->Used() = bufP->CopyItems(left. Page<Key. left->Used()). half + 1. 0. bufP->Used() += right->CopyItems(bufP. size).com Solutions to Exercises 293 . return item. } // handles underflows template <class Key. bufP->Used(). } else { bufP->InsertItem((*page)[idx].pragsoft. bufP->Used()). bufP->InsertItem(*item.1 ? child : page->Left(idx). Data> *left = idx < page->Used() . bufP->Used()). size). Data> *page. int idx) { Page<Key. idx + 1). Data> *child. bufP->InsertItem(*item. } else { // node is full int size = page->Used(). bufP->CopyItems(page. right->Used()). Page<Key. Data>* BStar<Key. 0. bufP->Used()). 0. } else { // split int 3 pages: www. bufP->Used(). Data> *item. } return 0. Data> *right = left == child ? page->Right(++idx) : child. // copy left. right->Used()). 0. bufP->Right(bufP->Used() . Data>::Overflow (Item<Key. right->Used() = bufP->CopyItems(right. 0. 0.1). bufP->InsertItem((*page)[idx]. right->Left(0) = bufP->Right(half). (*page)[idx] = (*bufP)[half]. Page<Key. return 0.page->InsertItem(*item. size . class Data> Item<Key. *item = (*bufP)[size]. page->Right(idx) = right.
idx). dataPack. mid1 = left->Used() = bufP->CopyItems(left. mid1 + 1.} (void) {return true. 0. mid2. InvalidPack. UnknownPack) { if (!c->Active()) throw InactiveConn(). class InactiveConn class InvalidPack class UnknownPack (void) (void) {return dataPack. } else { *item = (*page)[page->Used() .1] = (*bufP)[mid1]. } } } 10. (4 * order + 1) / 3).1 enum PType enum Bool {controlPack.} {}.} {return true. (*page)[idx] = (*bufP)[mid2]. 0. if (!pack->Valid()) 294 C++ Essentials Copyright © 2005 PragSoft . int mid1. bufP->Right(mid2) = newP. newP->Left(0) = bufP->Right(mid2). {}. Data>(2 * order).1].. mid2 += mid1 + 1. 4 * order / 3). mid2 = right->Used() = bufP->CopyItems(right. (4 * order + 2) / 3).Page<Key. right->Left(0) = bufP->Right(mid1). Data> *newP = new Page<Key. PType Type Bool Valid }.. if (page->Used() < 2 * order) { page->InsertItem((*bufP)[mid1]. 0. return 0. return item. 0. void ReceivePacket (Packet *pack. {}. newP->Used() = bufP->CopyItems(newP.. bufP->Right(mid1) = right. Connection *c) throw(InactiveConn.. class Packet { public: //. class Connection { public: //. true}. Bool Active }. diagnosePack}. (*page)[page->Used() . mid2 + 1. {false.
. class Matrix { public: Matrix Matrix ~Matrix double& operator () Matrix& operator = (const (const (void) (const (const short rows. break. break. friend ostream& operator << (ostream&.. } www. Matrix&). elems = new double[rows * cols]. {}.. break. case diagnosePack: //. }. default: //..com Solutions to Exercises 295 .(Matrix&. friend Matrix operator . {}. {}. const short cols). Matrix&). cols(c) { if (rows <= 0 || cols <= 0) throw BadDims()..} private: const short rows. friend Matrix operator + (Matrix&. double *elems. {delete elems..h> class class class class class DimsDontMatch BadDims BadRow BadCol HeapExhausted {}. // matrix rows // matrix columns // matrix elements Matrix::Matrix (const short r. case dataPack: //. throw UnknownPack().} const short Cols (void) {return cols.throw InvalidPack(). const short cols. } } 10. const short col). Matrix&). switch (pack->Type()) { case controlPack: //. if (elems == 0) throw HeapExhausted(). const short c) : rows(r). {}.2 #include <iostream... friend Matrix operator * (Matrix&. Matrix&).pragsoft. const short Rows (void) {return rows.} short row. Matrix&). Matrix&).
c) << '\t'. ++c) os << m(r. r <= p. Matrix m(p.cols. ++i) // copy elements elems[i] = m.cols != q. return *this. } return os.rows != q.rows == q. ++c) m(r.c) = p(r. for (register i = 0. c <= p.cols) for (register r = 1. } ostream& operator << (ostream &os.rows || p. i < n. if (elems == 0) throw HeapExhausted().cols) { // must match int n = rows * cols.rows && cols == m. if (col <= 0 || col > cols) throw BadCol().rows). const short col) { if (row <= 0 || row > rows) throw BadRow().1)].cols). if (p. if (rows <= 0 || cols <= 0) throw BadDims().rows. 296 C++ Essentials Copyright © 2005 PragSoft . Matrix &m) { for (register r = 1.c) + q(r. os << '\n'. elems = new double[n]. } Matrix operator + (Matrix &p. } double& Matrix::operator () (const short row.cols == q. return elems[(row .rows. c <= m. i < n.rows.cols) throw DimsDontMatch().c).1)*cols + (col . ++r) for (register c = 1.cols) { int n = rows * cols.rows && p. Matrix &q) { if (p.elems[i].elems[i].cols. ++r) { for (int c = 1. for (register i = 0. } Matrix& Matrix::operator = (const Matrix &m) { if (rows == m. p. cols(m.Matrix::Matrix (const Matrix &m) : rows(m. ++i) // copy elements elems[i] = m. r <= m. } else throw DimsDontMatch().
cols == q.pragsoft.cols. c <= q. ++c) m(r. }) for (register r = 1. } Matrix operator * (Matrix &p. i <= p.cols == q.rows. r <= p.rows || p. if (p. Matrix m(p.rows != q.c) = p(r.cols.c) += p(r.0. Matrix &q) { if (p.cols). q.(Matrix &p.cols != q. } return m. c <= p.q(r.return m.cols) throw DimsDontMatch(). r <= p. Matrix m(p.c).com Solutions to Exercises 297 .rows) throw DimsDontMatch().rows && p.cols).rows. return m.cols != q. ++r) for (register c = 1. for (register i = 1. p.c) = 0.c).rows.rows. ++c) { m(r. if (p. Matrix &q) { if (p. ++r) for (register c = 1.cols) for (register r = 1.rows == q. ++i) m(r.c) * q(r. } Matrix operator .cols.c) .
https://www.scribd.com/doc/60830066/c-Essentials
Remove Duplicate words in a sentence using Java
This program removes repeated words from a sentence, eliminating duplicates and reducing the sentence length.
Duplicate words add redundancy to the sentence and can alter the meaning of the sentence.
Hence they should be removed.
This program is purely to remove the visible duplicates present in a sentence, and not to count the duplicates.
If you want to count or find the duplicates in a sentence, you can refer to the following link:
Checking for Duplicacy in an array using Hashing Technique in Java
Now, let’s figure out how to remove these duplicate words.
The following program is strictly case-sensitive, i.e., make sure lower-case and upper-case characters are entered exactly as intended.
Giving the right input containing duplicate words/numbers
- In the following code, we can input the sentence containing any characters, numbers etc.
- Make sure that the words you want to remove as duplicates are exactly the same. For example, if there is a name like “Rahul” in the sentence and you want to remove its duplicate, make sure the duplicate word is “Rahul” and not “rahul”, since the comparison is case-sensitive.
- We can remove duplicate numbers in the sentence as well.
Program:
import java.io.*;

class RemoveDuplicates
{
    public static void main(String[] args) throws IOException
    {
        String input = "";   // Sentence to be inputted by the user
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter the sentence: ");
        input = br.readLine();
        // Splitting the sentence into words with the help of spaces
        String[] words = input.split(" ");

        for (int i = 0; i < words.length; i++)   // Outer loop for comparison; also skips deleted words
        {
            if (words[i] != null)
            {
                for (int j = i + 1; j < words.length; j++)   // Inner loop to compare two words for duplicacy
                {
                    if (words[i].equals(words[j]))   // Checking if both the compared strings are equal
                    {
                        words[j] = null;   // Deletes the duplicate word
                    }
                }
            }
        }

        System.out.println("The string without duplicate words is: ");
        for (int k = 0; k < words.length; k++)   // Displaying the string without the duplicate words
        {
            if (words[k] != null)
            {
                System.out.print(words[k] + " ");
            }
        }
    }
}
As seen above it is clearly explained how to remove the duplicate words in a sentence.
Please see the comments as it makes the code easier to understand and implement.
The following code gives the following output.
Output:
Enter the sentence: 
Welcome to this program this program where we remove duplicate words words
The string without duplicate words is: 
Welcome to this program where we remove duplicate words
As seen above, we have successfully removed duplicate words from a sentence
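The nested-loop approach above compares every pair of words, which is O(n²). As a hedged alternative (not part of the original article), the same visible-duplicate removal can be sketched in one pass with a LinkedHashSet, which remembers insertion order and silently ignores repeated additions, so each word survives only at its first position. Like the program above, this variant is case-sensitive; switching to a case-insensitive comparison would require normalizing the words first.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class RemoveDuplicatesSet {
    public static String removeDuplicates(String sentence) {
        // LinkedHashSet keeps first occurrences in their original order
        // and drops any word that has already been added.
        Set<String> seen = new LinkedHashSet<>();
        for (String word : sentence.split(" ")) {
            seen.add(word);   // duplicate words are ignored here
        }
        return String.join(" ", seen);
    }

    public static void main(String[] args) {
        String input = "Welcome to this program this program where we remove duplicate words words";
        // prints: Welcome to this program where we remove duplicate words
        System.out.println(removeDuplicates(input));
    }
}
```

The trade-off is memory for the set versus the quadratic comparisons of the array version; for long sentences the set-based pass is usually the better choice.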
https://www.codespeedy.com/remove-duplicate-words-in-a-sentence-using-java/
Building and Consuming Async WCF Services in .NET Framework 4.5
In .NET Framework 4.5, the async and await keywords are very useful additions. In this article I am going to explain how simple it is to build and consume WCF services using the async and await keywords in .NET Framework 4.5. I will also provide code examples of how asynchrony was achieved in previous versions of the .NET Framework.
Significance of Async and Await Keywords
The difference that the async and await keywords bring to asynchronous programming is that the code is simpler to understand and follows the same structure as synchronous code. They avoid the complexity of callbacks and also provide more control over the asynchronous operation, since it is task-based.
Async – A method declared with async can perform await operations.
Await – Awaits the completion of an asynchronous task.
You can read more about these keywords in Introduction to Async and Await Keywords in C# 5.0.
Sample Newsfeed WCF Service
Create a WCF service named FeedService.svc in a web application and host it in IIS. In the service contract, IFeedService.cs, add the code as follows.
namespace MyWcfServiceApplication
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change
    // the interface name "IFeedService" in both code and config file together.
    [ServiceContract]
    public interface IFeedService
    {
        [OperationContract]
        List<string> GetGeneralNewsFeed();

        [OperationContract]
        List<string> GetSportsNewsFeed();
    }
}
Now let us go and add the implementation for the two service methods GetGeneralNewsFeed and GetSportsNewsFeed. FeedService.svc.cs concrete class code is as follows.
namespace MyWcfServiceApplication
{
    // NOTE: You can use the "Rename" command on the "Refactor" menu to change
    // the class name "FeedService" in code, svc and config file together.
    // NOTE: In order to launch WCF Test Client for testing this service, please select
    // FeedService.svc or FeedService.svc.cs at the Solution Explorer and start debugging.
    public class FeedService : IFeedService
    {
        #region IFeedService Members

        public List<string> GetGeneralNewsFeed()
        {
            // Delay a bit and return some sample news content.
            // Consider the delay as the time taken for parsing the feed in real time.
            Thread.Sleep(3000);
            return new List<string>() {
                "This is general news number 1",
                "This is general news number 2",
                "This is general news number 3" };
        }

        public List<string> GetSportsNewsFeed()
        {
            // Delay a bit and return some sample news content.
            // Consider the delay as the time taken for parsing the feed in real time.
            Thread.Sleep(3000);
            return new List<string>() { "Some cricket news...", "Some soccer news..." };
        }

        #endregion
    }
}
I have used Thread.Sleep in order to simulate a delay in returning the feed data.
Consuming the WCF Service in the Client
You will also notice there won't be any changes with respect to the way the WCF service is built. It is all to do with the client proxy generation. Now create the client console application and Add service reference to the WCF service.
Older Method – Callbacks
In this section let us see the sample client code to know how the asynchronous WCF calls were made in the earlier versions of .NET Framework. Below is the client code, which uses callbacks.
namespace OldAsyncWay
{
    class Program
    {
        static FeedServiceClient client;

        static void Main(string[] args)
        {
            client = new FeedServiceClient();
            // Begin asynchronous service calls and provide callback functions
            client.BeginGetGeneralNewsFeed(GeneralNewsCallback, null);
            client.BeginGetSportsNewsFeed(SportsNewsCallback, null);
            Console.ReadLine();
        }

        private static void GeneralNewsCallback(IAsyncResult asyncResult)
        {
            string[] generalNews = client.EndGetGeneralNewsFeed(asyncResult);
        }

        private static void SportsNewsCallback(IAsyncResult asyncResult)
        {
            string[] sportsNews = client.EndGetSportsNewsFeed(asyncResult);
        }
    }
}
Begin and End calls for each service method, a callback definition, and an IAsyncResult object: all of these increase the complexity of the code.
Async Calls – Async and Await
With .NET Framework 4.5 you have the power of using the async and await keywords. All you need to do when adding the service reference is go to the advanced service settings and select the option to generate task-based operations. This will take care of adding the async methods to the proxy class, appending the word Async as a suffix to each service method name. Fig 1.0 shows the option selected while adding the service reference.
Fig 1.0 - Service Reference Settings
Following is the client code that consumes the WCF methods asynchronously using the async and await keywords.
namespace NewsClient
{
    class Program
    {
        static void Main(string[] args)
        {
            FetchNewsAsync();
            Console.ReadLine();
        }

        private static async void FetchNewsAsync()
        {
            FeedServiceClient client = new FeedServiceClient();
            // Makes the first async service call
            var task1 = client.GetGeneralNewsFeedAsync();
            // Makes the second async service call without waiting for task1 to complete
            var task2 = client.GetSportsNewsFeedAsync();
            // Now awaits task1
            string[] generalNews = await task1;
            // Awaits task2
            string[] sportsNews = await task2;
        }
    }
}
The code is now as clean and simple as synchronous code. Hope this article explained clearly how to consume WCF services using the async and await keywords in .NET Framework 4.5.
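The key point is that both calls are started before either is awaited, so the two 3-second service delays overlap instead of running back-to-back. That pattern is language-independent; as a hedged analogy (not WCF code, and not from the original article), the same start-both-then-await shape can be sketched with Java's CompletableFuture, where the service calls are simulated with sleeps:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class NewsClientSketch {
    // Simulated async service call; a real client proxy would perform network I/O instead.
    static CompletableFuture<List<String>> getFeedAsync(List<String> feed, long delayMillis) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(delayMillis);   // stands in for the service's processing time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return feed;
        });
    }

    public static void main(String[] args) {
        // Start both "service calls" first (like invoking the two *Async proxy methods)...
        CompletableFuture<List<String>> general = getFeedAsync(List.of("general news"), 300);
        CompletableFuture<List<String>> sports  = getFeedAsync(List.of("sports news"), 300);
        // ...then wait for each result (like the two await statements).
        // Because both futures were already running, the delays overlap.
        System.out.println(general.join());
        System.out.println(sports.join());
    }
}
```

If the second call were only started after the first result arrived, the total latency would be roughly the sum of the two delays rather than their maximum.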
Happy reading!
https://www.codeguru.com/columns/experts/building-and-consuming-async-wcf-services-in-.net-framework-4.5.htm
OneAnd
The OneAnd[F[_], A] data type represents a single element of type A that is guaranteed to be present (head) and, in addition to this, a second part that is wrapped inside a higher-kinded type constructor F[_]. By choosing the F parameter, you can model, for example, non-empty lists by choosing List for F, giving:

import cats.data.OneAnd

type NonEmptyList[A] = OneAnd[List, A]

which used to be the implementation of non-empty lists in cats but has been replaced by the cats.data.NonEmptyList data type. By having the higher-kinded type parameter F[_], OneAnd is also able to represent other “non-empty” data structures, e.g.:

type NonEmptyStream[A] = OneAnd[Stream, A]
https://typelevel.org/cats/datatypes/oneand.html
Introduction:
Here we will learn how to solve the error "cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.Collections.Generic.List`1[userdetails]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly". Generally we get this error whenever the deserialized JSON contains a list of items but we try to hold the result in a single object.
Description:
In previous articles I explained ASP.NET JSON serialization and deserialization in C# and VB.NET, setting a custom error page in web.config, the "unrecognized escape sequence in file path" error in C# and VB.NET, the "cannot convert string to type double is not valid" error in VB.NET, showing an error message in a jQuery AJAX call response, converting a JSON string to a JSON object, and many more articles related to JSON, ASP.NET, MVC, C#, and VB.NET. Now I will explain how to solve the problem "cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.Collections.Generic.List`1[userdetails]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly".
We get the following error in our application whenever we try to deserialize a JSON string in ASP.NET using Newtonsoft Json.
This problem occurs whenever our JSON deserialization method returns more than one list item but we try to hold the result as a single item, as shown below.
C# Code
VB.NET Code
We need to change the userdetails user parameter to var user, as shown below, because our deserialization method returns more than one list item.
C# Code
VB.NET Code
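The original C# and VB.NET snippets did not survive extraction, but the failure mode is language-neutral and can be illustrated in Python (the names "Alice" and "Bob" below are hypothetical sample data): a JSON array deserializes to a collection, and treating that collection as one object is the same mistake as deserializing into a single userdetails instead of List<userdetails>.

```python
import json

# A JSON *array* of user records, like the one the service returns.
payload = '[{"name": "Alice"}, {"name": "Bob"}]'

# Deserializing an array yields a list of objects, not a single object.
users = json.loads(payload)
assert isinstance(users, list)      # not a single dict

# The correct approach: hold the result as a collection and iterate.
names = [u["name"] for u in users]
assert names == ["Alice", "Bob"]

# A JSON *object* (note the braces, not brackets) is the only shape
# that deserializes to a single record.
single = json.loads('{"name": "Alice"}')
assert isinstance(single, dict)
```

The fix in the article is the same idea: let the deserializer's return type match the JSON shape (a list for `[...]`, a single object for `{...}`).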
If you want a complete example implementing serialization and deserialization of JSON data, create a new web application, open your Default.aspx page, and write the code shown below.
Now open the code-behind file and write the code shown below.
C# Code
VB.NET Code
If you observe the above code, we added the namespace "Newtonsoft.Json", which we can get by adding a reference using Manage NuGet Packages. To add the reference, right-click on your application -> select Manage NuGet Packages -> go to the Browse tab -> search for Newtonsoft -> from the list select Newtonsoft.Json and install it. Once we install the component, it will appear as shown below.
Demo
Now run the application to see the result, which will look like the output below. The following is the result of serializing the data.
The following is the result of deserializing the JSON data.
http://www.aspdotnet-suresh.com/2017/02/cannot-deserialize-current-json-object-name-value.html
|
Details
Description
Using the genapp plugin (2.2) of Maven-1.1-beta-1, I created a simple jar project.
Running pom:validate on the POM gave errors due to the fact that project.xml generated by the genapp plugin isn't a model-3.0.0 POM, because e.g. the <package> element generated by the genapp wizard isn't defined in the model-3.0.0 xsd. So I modified project.xml in order to be conform with the model-3.0.0 xsd. However running pom:validate still gives the following error:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$maven -e pom:validate
__ __
build:start:
pom:verify-version:
pom:validate:
[echo] ====== CUSTOM ADDED IN POM PLUGIN ====
[echo] XSD file : c:\devtools\maven-1.1-beta-1/maven-project-3.xsd
[echo] POM file : C:\tmp\myapp\project.xml
[echo] =======================================
[java] C:\tmp\myapp\project.xml:19:10: error: cvc-elt.1: Cannot find the declaration of element 'project'.
[java] [ERROR] Java Result: 1
BUILD SUCCESSFUL
Total time : 4 seconds
Finished at : zondag 31 juli 2005 12:58:49 CEST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The POM file project.xml is included as attachment.
Additional question: is the pom plugin still necessary from maven-1.1-beta-1 onwards, since I thought the Maven core would verify the POM itself more strictly (cf)?
Regards,
Davy Toch
Issue Links
- is depended upon by
MAVEN-1653 Executing the "pom:validate" goal on a reconstituted project.xml file using the content from the ten minute test page produces error messages
- is related to
-
Activity
I also noticed the namespace problem. I'm checking to see if it's not possible to have a workaround.
For the packageName attribute, I fixed the xsd today on the web site.
You can download it.
packageName isn't an attribute but an element
I suppose the best solution is to modify the maven genapp plugin so it can create Maven projects with v3.0.0 POM's (instead of adding a workaround in the maven pom plugin).
Yes I'll do it
This is due to this "issue"
We must do something reusable
Lukas is already working on it (he's trying to see whether it's possible and better to use MSV to do the validation)
Reopen to change fix version
The case can be closed.
I noticed that you now need a namespace "" in the root of the POM xml document. After adding this I still had a little problem because the genapp wizard creates an element <package>, while in POM 3.0.0 <packageName> is declared. From other posts on the Maven user/dev mailing lists, it seems the latter is incorrect and will be corrected in the final 1.1 version of Maven.
So sorry for wasting your time.
http://jira.codehaus.org/browse/MPPOM-5
|
You can extend Zope by creating your own types of objects that are customized to your application's needs. New kinds of objects are installed in Zope by Products. Products are extensions to Zope that Zope Corporation and many other third party developers create. There are hundreds of different Products, and many serve very specific purposes. A complete library of Products is at the Download Section of Zope.org.
Products can be developed in two ways: through the web using ZClasses, and in the Python programming language. Products can even be a hybrid of both through-the-web objects and Python code. This chapter discusses building new products through the web, a topic to which you've already had some brief exposure in Chapter 11, "Searching and Categorizing Content". Developing a Product entirely in Python is beyond the scope of this chapter; visit Zope.org for specific Product developer documentation.
This chapter shows you how to:
The first step in customizing Zope starts in the next section, where you learn how to create new Zope Products.
Through-the-web Products are stored in the Product Management folder in the Control Panel. Click on the Control_Panel in the root folder and then click Products. You are now in the screen shown in Figure 12-1.
Figure 12-1 Installed Products
Each blue box represents an installed Product. From this screen, you can manage these Products. Some Products are built into Zope by default or have been installed by you or your administrator. These products have a closed box icon, as shown in Figure 12-1. Closed-box products cannot be managed through the web. You can get information about these products by clicking on them, but you cannot change them.
You can also create your own Products that you can manage through the web. Your products let you create new kinds of objects in Zope. These through-the-web manageable products have open-box icons. If you followed the examples in Chapter 11, "Searching and Categorizing Content", then you have a News open-box product.
Why would you want to create products? For example, all of the various caretakers in the Zoo want an easy way to build simple on-line exhibits about the Zoo. The exhibits must all be in the same format and contain a similar information structure, and each will be specific to a certain animal in the Zoo.
To accomplish this, you could build an exhibit for one animal, and then copy and paste it for each exhibit, but this would be a difficult and manual process. All of the information and properties would have to be changed for each new exhibit. Further, there may be thousands of exhibits.
To add to this problem, let's say you now want to have information on each exhibit that tells whether the animal is endangered or not. You would have to change each exhibit, one by one, to do this by using copy and paste. Clearly, copying and pasting does not scale up to a very large zoo, and could be very expensive.
You also need to ensure each exhibit is easy to manage. The caretakers of the individual exhibits should be the ones providing information, but none of the Zoo caretakers know much about Zope or how to create web sites and you certainly don't want to waste their time making them learn. You just want them to type some simple information into a form about their topic of interest, click submit, and walk away.
By creating a Zope Product, you can accomplish these goals quickly and easily. You can create easy-to-manage objects that your caretakers can use, and you can define exhibit templates that you can change once to affect all of the exhibits.
Using Products you can solve the exhibit creation and management problems. Let's begin with an example of how to create a simple product that will allow you to collect information about exhibits and create a customized exhibit. Later in the chapter you see more complex and powerful ways to use products.
The chief value of a Zope product is that it allows you to create objects in a central location and it gives you access to your objects through the product add list. This gives you the ability to build global services and make them available via a standard part of the Zope management interface. In other words a Product allows you to customize Zope.
Begin by going to the Products folder in the Control Panel. To create a new Product, click the Add Product button on the Product Management folder. This will take you to the Product add form. Enter the id "ZooExhibit" and click Generate. You will now see your new Product in the Product Management folder. It should be a blue box with an open lid. The open lid means you can click on the Product and manage it through the web.
Select the ZooExhibit Product. This will take you to the Product management screen.
The management screen for a Product looks and acts just like a Folder except for a few differences:
In the Contents View create a DTML Method named hello with these contents:
<dtml-var standard_html_header>
<h2>Hello from the Zoo Exhibit Product</h2>
<dtml-var standard_html_footer>
This method will allow you to test your product. Next create a Factory. Select Zope Factory from the product add list. You will be taken to a Factory add form as shown in Figure 12-2.
Figure 12-2 Adding A Factory
Factories create a bridge from the product add list to your Product. Give your Factory an id of myFactory. In the Add list name field enter Hello, and in the Method selection, choose hello. Now click Generate. Next, click on the new Factory, change the Permission to Add Documents, Images, and Files, and click Save Changes. This tells Zope that you must have the Add Documents, Images, and Files permission to use the Factory. Congratulations, you've just customized the Zope management interface. Go to the root folder and click the product add list. Notice that it now includes an entry named Hello. Choose Hello from the product add list. It calls your hello method.
One of the most common things to do with methods that you link to with Factories is to copy objects into the current Folder. In other words, your methods can get access to the location from which they were called and can then perform operations on that Folder, including copying objects into it. Just because you can do all kinds of crazy things with Factories and Products doesn't mean that you should. In general, people expect that when they select something from the product add list, they will be taken to an add form where they specify the id of a new object. Then they expect that when they click Add, a new object with the id they specified will be created in their folder. Let's see how to fulfill these expectations.
First create a new Folder named exhibitTemplate in your Product. This will serve as a template for exhibits. Also in the Product folder, create a DTML Method named addForm and a Python Script named add. These objects will create new exhibit instances. Now go back to your Factory and change it so that the Add list name is Zoo Exhibit and the method is addForm.
So what happens is that when someone chooses Zoo Exhibit from the product add list, the addForm method runs. This method should collect the id and title of the exhibit. When the user clicks Add, it should call the add script, which will copy the exhibitTemplate folder into the calling folder and rename it to have the specified id. The next step is to edit the addForm method to have these contents:
<dtml-var manage_page_header>
<h2>Add a Zoo Exhibit</h2>
<form action="add" method="post">
id <input type="text" name="id"><br>
title <input type="text" name="title"><br>
<input type="submit" value=" Add ">
</form>
<dtml-var manage_page_footer>
Admittedly this is a rather bleak add form. It doesn't collect much data and it doesn't tell the user what a Zoo Exhibit is and why they'd want to add one. When you create your own web applications you'll want to do better than this example.
Notice that this method doesn't include the standard HTML headers and footers. By convention, Zope management screens don't use the same headers and footers that your site uses. Instead, management screens use manage_page_header and manage_page_footer. The management view header and footer ensure that management views have a common look and feel.
Also notice that the action of the form is the add script. Now paste the following body into the add script:
## Script (Python) "add"
##parameters=id, title, REQUEST=None
##
"""
Copy the exhibit template to the calling folder
"""
# Clone the template, giving it the new id. This will be placed
# in the current context (the place the factory was called from).
exhibit = context.manage_clone(container.exhibitTemplate, id)
# Change the clone's title
exhibit.manage_changeProperties(title=title)
# If we were called through the web, redirect back to the context
if REQUEST is not None:
    try:
        u = context.DestinationURL()
    except:
        u = REQUEST['URL1']
    REQUEST.RESPONSE.redirect(u + '/manage_main?update_menu=1')
This script clones the exhibitTemplate and copies it to the current folder with the specified id. Then it changes the title property of the new exhibit. Finally, it redirects back to the current folder's main management screen, manage_main.
Congratulations, you've now extended Zope by creating a new product. You've created a way to copy objects into Zope via the product add list. However, this solution still suffers from some of the problems we discussed earlier in the chapter. Even though you can edit the exhibit template in a centralized place, it's still only a template. So if you add a new property to the template, it won't affect any of the existing exhibits. To change existing exhibits you'll have to modify each one manually.
ZClasses take you one step further by allowing you to have one central template that defines a new type of object; when you change that template, all of the objects of that type change along with it. This central template is called a ZClass. In the next section, we'll show you how to create a new Exhibit ZClass.
ZClasses are tools that help you build new types of objects in Zope by defining a class. A class is like a blueprint for objects. When defining a class, you are defining what an object will be like when it is created. A class can define methods, properties, and other attributes.
Objects that you create from a certain class are called instances of that class. For example, there is only one Folder class, but you may have many Folder instances in your application.
Instances have the same methods and properties as their class. If you change the class, then all of the instances reflect that change. Unlike the templates that you created in the last section, classes continue to exert control over instances. Keep in mind that this only works one way: if you change an instance, no changes are made to the class or any other instances.
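Plain Python classes behave the same way, so the one-way relationship described above can be demonstrated outside Zope with a minimal (non-Zope) sketch:

```python
# A class-level attribute is shared by every instance that hasn't
# overridden it -- analogous to a ZClass controlling its instances.
class Exhibit:
    description = "A zoo exhibit"


a = Exhibit()
b = Exhibit()

# Changing the class is reflected by all instances.
Exhibit.description = "An updated zoo exhibit"
assert a.description == "An updated zoo exhibit"
assert b.description == "An updated zoo exhibit"

# Changing one instance shadows the class attribute for that
# instance only; the class and its other instances are untouched.
a.description = "Fanged rabbits"
assert b.description == "An updated zoo exhibit"
assert Exhibit.description == "An updated zoo exhibit"
```

The final two assertions are the key point: instance changes never propagate back to the class or to sibling instances.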
A good real-world analogy to ZClasses is word processor templates. Most word processors come with a set of predefined templates that you can use to create a certain kind of document, like a resume. There may be hundreds of thousands of resumes in the world based on the Microsoft Word Resume template, but there is only one template. Just as the Resume template is to all those resumes, a ZClass is a template for any number of similar Zope objects.
ZClasses are classes that you can build through the web using Zope's management interface. Classes can also be written in Python, but this is not covered in this book.
ZClasses can inherit attributes from other classes. Inheritance allows you to define a new class that is based on another class. For example, say you wanted to create a new kind of document object that had special properties you were interested in. Instead of building all of the functionality of a document from scratch, you can just inherit all of that functionality from the DTML Document class and add only the new information you are interested in.
Inheritance also lets you build generalization relationships between classes. For example, you could create a class called Animal that contains information that all animals have in general. Then, you could create Reptile and Mammal classes that both inherit from Animal. Taking it even further, you could create two additional classes, Lizard and Snake, that both inherit from Reptile, as shown in Figure 12-3.
Figure 12-3 Example Class Inheritance
ZClasses can inherit from most of the objects you've used in this book. In addition, ZClasses can inherit from other ZClasses defined in the same Product. We will use this technique and others in this chapter.
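The generalization hierarchy of Figure 12-3 can be sketched as ordinary Python classes (the method names below are illustrative, not part of Zope): each subclass inherits everything its base defines and adds only what is new.

```python
class Animal:
    # Behavior shared by all animals in general.
    def kingdom(self):
        return "Animalia"


class Mammal(Animal):
    pass


class Reptile(Animal):
    # Behavior specific to reptiles, inherited by Lizard and Snake.
    def cold_blooded(self):
        return True


class Lizard(Reptile):
    pass


class Snake(Reptile):
    pass


# A Snake instance answers methods defined at every level above it.
s = Snake()
assert s.kingdom() == "Animalia"      # inherited from Animal
assert s.cold_blooded() is True       # inherited from Reptile
assert isinstance(s, Reptile) and isinstance(s, Animal)
```

This is the payoff of inheritance: Snake and Lizard only state how they differ from Reptile, and Reptile only how it differs from Animal.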
Before going on with the next example, you should rename the existing ZooExhibit Product in your Zope Products folder to something else, like ZooTemplate so that it does not conflict with this example. Now, create a new Product in the Product folder called ZooExhibit.
Select ZClass from the add list of the ZooExhibit Contents view and go to the ZClass add form. This form is complex, and has lots of elements. We'll go through them one by one:
Use the -> button to put selected classes in your base class list. The <- button removes any base classes you select on the right. For this example, don't select any base classes. Later in this chapter, we'll explain some of the more interesting base classes, like ObjectManager.
Now click Add. This will take you back to the ZooExhibit Product and you will see five new objects, as shown in Figure 12-4.
Figure 12-4 Product with a ZClass
The five objects Zope created are all automatically configured to work properly; you do not need to change them for now. Here is a brief description of each object that was created:
That's it, you've created your first ZClass. Click on the new ZClass and click on its Basic tab. The Basic view on your ZClass lets you change some of the information you specified on the ZClass add form. You cannot change the base classes of a ZClass. As you learned earlier in the chapter, these settings include:
At this point, you can start creating new instances of the ZooExhibit ZClass. First though, you probably want a common place where all exhibits are defined, so go to your root folder and select Folder from the add list and create a new folder with the id "Exhibits". Now, click on the Exhibits folder you just created and pull down the Add list. As you can see, ZooExhibit is now in the add list.
Go ahead and select ZooExhibit from the add list and create a new Exhibit with the id "FangedRabbits". After creating the new exhibit, select it by clicking on it.
As you can see, your object already has three views: Undo, Ownership, and Security. You don't have to define these parts of your object; Zope does that for you. In the next section, we'll add some more views for you to edit your object.
All Zope objects are divided into logical screens called Views. Views are commonly used when you work with Zope objects in the management interface; the tabbed screens on all Zope objects are views. Some views, like Undo, are standard and come with Zope.
Views are defined on the Views view of a ZClass. Go to your ZooExhibit ZClass and click on the Views tab. The Views view looks like Figure 12-5.
Figure 12-5 The Views view.
On this view you can see the three views that come automatically with your new object, Undo, Ownership, and Security. They are automatically configured for you as a convenience, since almost all objects have these interfaces, but you can change them or remove them from here if you really want to (you generally won't).
The table of views is broken into three columns, Name, Method, and Help Topic. The Name is the name of the view and is the label that gets drawn on the view's tab in the management interface. The Method is the method of the class or property sheet that gets called to render the view. The Help Topic is where you associate a Help Topic object with this view. Help Topics are explained more later.
Views also work with the security system to make sure users only see views on an object that they have permission to see. Security will be explained in detail a little further on, but it is good to know at this point that views not only divide an object's management interface into logical chunks, they also control who can see which view.
The Method column on the Views view has select boxes that let you choose which method generates which view. The method associated with a view can be either an object in the Methods view or a Property Sheet in the Property Sheets view.
Properties are collections of variables that your object uses to store information. A Zoo Exhibit object, for example, would need properties to contain information about the exhibit, like what animal is in the exhibit, a description, and who the caretakers are.
Properties for ZClasses work a little differently than properties on Zope objects. In ZClasses, Properties come in named groups called Property Sheets. A Property Sheet is a way of organizing a related set of properties together. Go to your ZooExhibit ZClass and click on the Property Sheets tab. To create a new sheet, click Add Common Instance Property Sheet. This will take you to the Property Sheet add form. Call your new Property Sheet "ExhibitProperties" and click Add.
Now you can see that your new sheet, ExhibitProperties, has been created in the Property Sheets view of your ZClass. Click on the new sheet to manage it, as shown in Figure 12-6.
Figure 12-6 A Property Sheet
As you can see, this sheet looks very much like the Properties view on Zope objects. Here, you can create new properties on this sheet. Properties on Property Sheets are exactly like properties on Zope objects: they have a name, a type, and a value.
Create three new properties on this sheet:
Property Sheets have two uses. As you've seen with this example, they are a tool for organizing related sets of properties about your objects; second, they are used to generate HTML forms and actions for editing those sets of properties. The HTML edit forms are generated automatically for you; you only need to associate a view with a Property Sheet to see the sheet's edit form. For example, return to the ZooExhibit ZClass, click on the Views tab, and create a new view with the name Edit associated with the method propertysheets/ExhibitProperties/manage_edit.
Since you can use Property Sheets to create editing screens you might want to create more than one Property Sheet for your class. By using more than one sheet you can control which properties are displayed together for editing purposes. You can also separate private from public properties on different sheets by associating them with different permissions.
Now, go back to your Exhibits folder and either look at an existing ZooExhibit instance or create a new one. As you can see, a new view called Edit has been added to your object, as shown in Figure 12-7.
Figure 12-7 A ZooExhibit Edit view
This edit form has been generated for you automatically. You only needed to create the Property Sheet, and then associate that sheet with a View. If you add another property to the ExhibitProperties Property Sheet, all of your instances will automatically get a new updated edit form, because when you change a ZClass, all of the instances of that class inherit the change.
It is important to understand that changes made to the class are reflected by all of the instances, but changes to an instance are not reflected in the class or in any other instance. For example, on the Edit view for your ZooExhibit instance (not the class), enter "Fanged Rabbit" for the animal property, the description "Fanged, carnivorous rabbits plagued early medieval knights. They are known for their sharp, pointy teeth." and two caretakers, "Tim" and "Somebody Else". Now click Save Changes.
As you can see, your changes have obviously affected this instance, but what happened to the class? Go back to the ZooExhibit ZClass and look at the ExhibitProperties Property Sheet. Nothing has changed! Changes to instances have no effect on the class.
You can also provide default values for properties on a Property Sheet. You could, for example, enter the text "Describe your exhibit in this box" in the description property of the ZooExhibit ZClass. Now, go back to your Exhibits folder, create a new ZooExhibit object, and click on its Edit view. Here, you see that the value provided in the Property Sheet is the default value for the instance. Remember, if you change this instance, the default value of the property in the Property Sheet is not changed. Default values let you set up useful information in the ZClass for properties that can later be changed on an instance-by-instance basis.
You may want to go back to your ZClass and click on the Views tab and change the "Edit" view to be the first view by clicking the First button. Now, when you click on your instances, they will show the Edit view first.
The Methods View of your ZClass lets you define the methods for the instances of your ZClass. Go to your ZooExhibit ZClass and click on the Methods tab. The Methods view looks like Figure 12-8.
Figure 12-8 The Methods View
You can create any kind of Zope object on the Methods view, but generally only callable objects (DTML Methods and Scripts, for example) are added.
Methods are used for several purposes:
For example, consider the isHungry method of the ZooExhibit ZClass defined later in this section. It does not define a view for a ZooExhibit; it just provides very specific information about the ZooExhibit. Methods in a ZClass can call each other just like any other Zope methods, so logic methods can be used from a presentation method even though they don't define a view.
A good example of a presentation method is a DTML Method that displays a Zoo Exhibit to your web site viewers. This is often called the public interface to an object and is usually associated with the View view found on most Zope objects.
Create a new DTML Method on the Methods tab of your ZooExhibit ZClass called index_html. Like all objects named index_html, this will be the default representation for the object it is defined in, namely, instances of your ZClass. Put the following DTML in the index_html Method you just created:
<dtml-var standard_html_header>
<h1><dtml-var animal></h1>
<p><dtml-var description></p>
<p>The <dtml-var animal> caretakers are:<br>
<dtml-in caretakers>
<dtml-var sequence-item><br>
</dtml-in>
</p>
<dtml-var standard_html_footer>
Now you can visit one of your ZooExhibit instances directly through the web; for example, visiting your FangedRabbits instance will show you the public interface for the Fanged Rabbit exhibit.
You can use Python-based or Perl-based Scripts, and even Z SQL Methods, to implement logic. Your logic objects can call each other, and can be called from your presentation methods. To create the isHungry method, first create two new properties in the ExhibitProperties property sheet: "last_meal_time", of type date, and "isDangerous", of type boolean. This adds two new fields to your Edit view where you can enter the last time the animal was fed and select whether or not the animal is dangerous.
Here is an example of an implementation of the isHungry method in Python:
## Script (Python) "isHungry"
##
"""
Returns true if the animal hasn't eaten in over 8 hours
"""
from DateTime import DateTime
if (DateTime().timeTime() - container.last_meal_time.timeTime() > 60 * 60 * 8):
    return 1
else:
    return 0
The container of this method refers to the ZClass instance, so you can use container in a ZClass instance in the same way as you use self in normal Python methods.
You could call this method from your index_html display method using this snippet of DTML:
<dtml-if isHungry>
<p><dtml-var animal> is hungry</p>
</dtml-if>
You can even call a number of logic methods from your display methods. For example, you could improve the hunger display like so:
<dtml-if isHungry>
<p><dtml-var animal> is hungry.
<dtml-if isDangerous>
<a href="notify_hunger">Tell</a> an authorized caretaker.
<dtml-else>
<a href="feed">Feed</a> the <dtml-var animal>.
</dtml-if>
</p>
</dtml-if>
Your display method now calls logic methods to decide what actions are appropriate and creates links to those actions. For more information on Properties, see Chapter 3, "Using Basic Zope Objects".
If you choose ZClasses:ObjectManager as a base class for your ZClass, then instances of your class will be able to contain other Zope objects, just like Folders. Container classes are identical to other ZClasses except that they have an additional view, Subobjects.
From this view you can control what kinds of objects your instances can contain. For example, if you created a FAQ container class, you might restrict it to holding Question and Answer objects. Select one or more meta-types from the select list and click the Change button. The "Objects should appear in folder lists" check box controls whether or not instances of your container class are shown in the Navigator pane as expandable objects.
Container ZClasses can be very powerful. A very common pattern for web applications is to have two classes that work together. One class implements the basic behavior and holds data. The other class contains instances of the basic class and provides methods to organize and list the contained instances. You can model many problems this way: a ticket manager can contain problem tickets, a document repository can contain documents, an object router can contain routing rules, and so on. Typically the container class will provide methods to add, delete, and query or locate contained objects.
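Outside Zope, the two-class container pattern described above can be sketched in plain Python. The class and method names here (Ticket, TicketManager, add, delete, query) are illustrative, not part of any Zope API:

```python
# The "basic" class: implements behavior and holds data.
class Ticket:
    def __init__(self, ticket_id, status="open"):
        self.ticket_id = ticket_id
        self.status = status


# The "container" class: holds Ticket instances and provides
# methods to add, delete, and query them.
class TicketManager:
    def __init__(self):
        self._tickets = {}

    def add(self, ticket_id):
        self._tickets[ticket_id] = Ticket(ticket_id)

    def delete(self, ticket_id):
        del self._tickets[ticket_id]

    def query(self, status):
        # Locate contained instances by a property of the basic class.
        return [t for t in self._tickets.values() if t.status == status]


manager = TicketManager()
manager.add(1)
manager.add(2)
manager._tickets[2].status = "closed"
open_tickets = manager.query("open")
assert [t.ticket_id for t in open_tickets] == [1]
```

In Zope, the ObjectManager base class plays the role of the dictionary here, and the add/delete/query methods would live on the container ZClass's Methods view.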
When building new types of objects, security can play an important role. For example, the following three Roles are needed in your Zoo:
As you learned in Chapter 7, "Users and Security", creating new roles is easy, but how can you control who can create and edit new ZooExhibit instances? To do this, you must define some security policies on the ZooExhibit ZClass that control access to the ZClass and its methods and property sheets.
By default, Zope tries to be sensible about ZClasses and security. You may, however, want to control access to instances of your ZClass in special ways.
For example, Zoo Caretakers are really only interested in seeing the Edit view (and perhaps the Undo view, which we'll show later), but definitely not the Security or Ownership views. You don't want Zoo Caretakers changing the security settings on your Exhibits; you don't even want them to see those aspects of an Exhibit. You just want to give them the ability to edit an exhibit and nothing else.
To do this, you need to create a new Zope Permission object in the ZooExhibit Product (not the ZClass; permissions are defined in Products only). Go to the ZooExhibit Product and select Zope Permission from the add list. Give the new permission the Id "edit_exhibit_permission" and the Name "Edit Zoo Exhibits" and click Generate.
Now, select your ZooExhibit ZClass and click on the Permissions tab. This will take you to the Permissions view, as shown in Figure 12-9.
Figure 12-9 The Permissions view.
Now, click on the Property Sheets tab and select the ExhibitProperties Property Sheet. Click on the Define Permissions tab.
You want to tell this Property Sheet that only users who have the Edit Zoo Exhibits permission you just created can manage the properties on the ExhibitProperties sheet. On this view, pull down the select box and choose Edit Zoo Exhibits. This maps Edit Zoo Exhibits to the Manage Properties permission on the sheet. The list of permissions you can select from comes from the ZClass Permissions view you were just on; because you selected the Edit Zoo Exhibits permission on that screen, it shows up on this list for you to select. Notice that all options default to disabled, which means the property sheet cannot be edited by anyone.
Now, you can go back to your Exhibits folder and select the Security view. Here, you can see your new Permission on the left in the list of available permissions. What you want to do now is create a new Role called Caretaker and map that new Role to the Edit Zoo Exhibits permission.
Now, users must have the Caretaker role in order to see or use the Edit view on any of your ZooExhibit instances.
Access to objects on your ZClass's Methods view is controlled in the same way.
The previous section explained how you can control access to instances of your ZClass's Methods and Properties. Another aspect of access control is controlling who can create new instances of your ZClass. As you saw earlier in the chapter, instances are created by Factories, and Factories are associated with permissions. In the case of the Zoo Exhibit, the Add Zoo Exhibits permission controls the ability to create Zoo Exhibit instances.
Normally only Managers will have the Add Zoo Exhibits permission, so only Managers will be able to create new Zoo Exhibits. However, like all Zope permissions, you can change which roles have this permission in different locations of your site. It's important to realize that this permission is controlled separately from the Edit Zoo Exhibits permission. This makes it possible to allow some people, such as Caretakers, to change, but not create, Zoo Exhibits.
On the Views screen of your ZClass, you can see that each view can be associated with a Help Topic. This allows you to provide a link to a different help topic depending on which view the user is looking at. For example, let's create a Help Topic for the Edit view of the ZooExhibit ZClass.
First, you need to create an actual help topic object. This is done by going to the ZooExhibit Product which contains the ZooExhibit ZClass, and clicking on the Help folder. The icon should look like a folder with a blue question mark on it.
Inside this special folder, pull down the add list and select Help Topic. Give this topic the id "ExhibitEditHelp" and the title "Help for Editing Exhibits" and click Add.
Now you will see that the Help folder contains a new help topic object called ExhibitEditHelp. You can click on this object and edit it; it works just like a DTML Document. In this document, you should place the help information you want to show to your users:
<dtml-var standard_html_header>

<h1>Help!</h1>

<p>To edit an exhibit, click on either the <b>animal</b>,
<b>description</b>, or <b>caretakers</b> boxes to edit them.</p>

<dtml-var standard_html_footer>
Now that you have created the help topic, you need to associate it with the Edit view of your ZClass. To do this, select the ZooExhibit ZClass and click on the Views tab. At the right, in the same row as the Edit view, pull down the help select box, select ExhibitEditHelp, and click Change. Now go to one of your ZooExhibit instances; the Edit view now has a *Help!* link that you can click to look at your Help Topic for this view.
In the next section, you'll see how ZClasses can be combined with standard Python classes to extend their functionality into raw Python.
ZClasses give you a web-manageable interface for designing new kinds of objects in Zope. At the beginning of this chapter, we showed you how you can select from a list of base classes to subclass your ZClass from. Most of these base classes are actually written in Python, and in this section you'll see how you can take your own Python classes and include them in that list so that your ZClasses can inherit their methods.
Writing Python base classes is easy, but it involves a few installation details. To create a Python base class you need access to the filesystem. Create a directory inside your lib/python/Products directory named AnimalBase. In this directory create a file named Animal.py with these contents:
class Animal:
    """ A base class for Animals """

    _hungry=0

    def eat(self, food, servings=1):
        """ Eat food """
        self._hungry=0

    def sleep(self):
        """ Sleep """
        self._hungry=1

    def hungry(self):
        """ Is the Animal hungry? """
        return self._hungry
This class defines a couple of related methods and one default attribute. Notice that, like External Methods, the methods of this class can access private attributes.
Next you need to register your base class with Zope. Create an __init__.py file in the AnimalBase directory with these contents:
from Animal import Animal

def initialize(context):
    """ Register base class """
    context.registerBaseClass(Animal)
Now you need to restart Zope in order for it to find out about your base class. After Zope restarts, you can verify that your base class has been registered in a couple of different ways. First, go to the Products Folder in the Control Panel and look for an AnimalBase package. You should see a closed box product. If you see a broken box, it means that there is something wrong with your AnimalBase product.
Click on the Traceback view to see a Python traceback showing you what problem Zope ran into while trying to register your base class. Once you resolve any problems that your base class might have, you'll need to restart Zope again. Continue this process until Zope successfully loads your product. Now you can create a new ZClass, and you should see AnimalBase:Animal as a choice in the base classes selection field.
To test your new base class, create a ZClass that inherits from AnimalBase:Animal. Embellish your animal however you wish. Create a DTML Method named care with these contents:
<dtml-var standard_html_header>

<dtml-if give_food>
  <dtml-call expr="eat('food')">
</dtml-if>

<dtml-if give_sleep>
  <dtml-call sleep>
</dtml-if>

<dtml-if hungry>
  <p>I am hungry</p>
<dtml-else>
  <p>I am not hungry</p>
</dtml-if>

<form>
  <input type="submit" value="Feed" name="give_food">
  <input type="submit" value="Sleep" name="give_sleep">
</form>

<dtml-var standard_html_footer>
Now create an instance of your animal class and test out its care method. The care method lets you feed your animal and put it to sleep by calling methods defined in its Python base class. Also notice how, after feeding, your animal is not hungry, but if you give it a nap, it wakes up hungry.
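Because the Animal base class is ordinary Python, you can sanity-check this feed/sleep cycle outside Zope with the plain interpreter. A minimal sketch (the class body is restated from the Animal.py listing above so the snippet runs on its own):

```python
# Restated from the Animal.py listing above so this sketch runs standalone.
class Animal:
    """ A base class for Animals """
    _hungry = 0

    def eat(self, food, servings=1):
        """ Eat food """
        self._hungry = 0

    def sleep(self):
        """ Sleep """
        self._hungry = 1

    def hungry(self):
        """ Is the Animal hungry? """
        return self._hungry

a = Animal()
print(a.hungry())  # 0: the class-level _hungry default applies
a.sleep()
print(a.hungry())  # 1: a nap makes the animal wake up hungry
a.eat('treat')     # 'treat' is an arbitrary placeholder food
print(a.hungry())  # 0: feeding satisfies it again
```

This mirrors what the care DTML Method does through dtml-call, only without the web form around it.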
As you can see, creating your own Products and ZClasses is an involved process, but simple to understand once you grasp the basics. With ZClasses alone, you can create some pretty complex web applications right in your web browser.
In the next section, you'll see how to create a distribution of your Product, so that you can share it with others or deliver it to a customer.
Now you have created your own Product that lets you create any number of exhibits in Zope. Suppose you have a buddy at another Zoo who is impressed by your new online exhibit system, and wants to get a similar system for his Zoo.
Perhaps you even belong to the Zoo keeper's Association of America and you want to be able to give your product to anyone interested in an exhibit system similar to yours. Zope lets you distribute your Products as one easy-to-transport package that other users can download from you and install in their Zope system.
To distribute your Product, click on the ZooExhibit Product and select the Distribution tab. This will take you to the Distribution view.
The form on this view lets you control the distribution you want to create. The Version box lets you specify the version for your Product distribution. For every distribution you make, Zope will increment this number for you, but you may want to specify it yourself. Just leave it at the default of "1.0" unless you want to change it.
The next two radio buttons let you select whether or not you want others to be able to customize or redistribute your Product. If you want them to be able to customize or redistribute your Product with no restrictions, select the Allow Redistribution button. If you want to disallow their ability to redistribute your Product, select the Disallow redistribution and allow the user to configure only the selected objects: button. If you disallow redistribution, you can choose on an object by object basis what your users can customize in your Product. If you don't want them to be able to change anything, then don't select any of the items in this list. If you want them to be able to change the ZooExhibit ZClass, then select only that ZClass. If you want them to be able to change everything (but still not be able to redistribute your Product) then select all the objects in this list.
Now, you can create a distribution of your Product by clicking Create a distribution archive. Zope will now automatically generate a file called ZooExhibit-1.0.tar.gz. This Product can be installed in any Zope just like any other Product, by unpacking it into the root directory of your Zope installation.
Don't forget that when you distribute your Product, you'll also need to include any files, such as External Method files and Python base classes, that your class relies on. This requirement makes distribution more difficult, and for this reason folks sometimes try to avoid relying on Python files when creating through-the-web Products for distribution.
Configuring the Production
This chapter describes the process of creating and configuring an X12 Production. It contains the following sections:
Creating a new X12 Production
Adding an X12 Business Service
Adding an X12 Business Process
Adding an X12 Routing Rule
Adding an X12 Business Operation
Be sure to perform all tasks in the same namespace that contains your production. When you create rule sets and transformations, do not use reserved package names; see “Reserved Package Names” in Developing Productions.
Also see “Overriding the Validation Logic” in Using Virtual Documents in Productions.
Creating a new X12 Production
You can add X12 components to an already existing production. However, if you want to create a new production explicitly for handling X12, follow the steps below.
In the Management Portal, switch to the appropriate namespace.
To do so, click Switch in the title bar, click the namespace, and click OK.
Click Interoperability.
Click Configure.
Click Production and then click Go.
InterSystems IRIS® then displays the last production you accessed, within the Production Configuration page.
Click the Actions tab on the Production Settings menu.
Click New to invoke the Production Wizard.
Enter a Package Name, Production Name, and Description.
Choose the Generic production type and click OK.
InterSystems IRIS creates a blank production from which you can add components such as business services, business processes, and business operations. See the sections below for more details.
As you build your production, it frequently happens that while configuring one component you must enter the name of another component that you have not yet created. A clear naming convention is essential to avoid confusion. For suggestions, see “Naming Conventions” in Best Practices for Creating Productions. For rules, see “Configuration Names,” in Configuring Productions.
Adding an X12 Business Service
Add one X12 business service for each application or source from which the production receives X12 documents.
To add an X12 business service to a production:
Access the Business Service Wizard as usual; see Configuring Productions.
Click the X12 Input tab.
Click one of the following from the Input type list:
TCP
File
FTP
For X12 Service Name, type the name of this business service.
For X12 Service Target, select one of the following:
Create Target Automatically — InterSystems IRIS adds a business process to the production and configures the business service to use it as a target. You can edit the business process details later.
None for Now — Do not specify a target for this business service. If you make this selection, ensure that you specify a target later.
Choose an Existing Production Item as Target — In this case, also select an existing business host from the drop-down list.
Click OK.
Adding an X12 Business Process
To add an X12 business process to a production:
Access the Business Process Wizard as usual; see Configuring Productions.
Click the X12 Router tab; the router class defaults to EnsLib.EDI.X12.MsgRouter.RoutingEngine.
For Routing Rule Name, do one of the following:
Select an existing routing rule from the Routing Rule Name drop-down list.
Select Auto-Create Rule and type a rule name into Routing Rule Name. In this case, the wizard creates the routing rule class in the same package as the production.
Later you must edit the routing rule and add your logic to it.
For X12 business process Name, type the name of this business process.
Click OK.
Ensure that your X12 business service is connected to the new X12 Business Process. To connect the process:
Select your X12 business service.
Click the Settings tab and open the Basic Settings menu in the menu to the right of the screen.
Enter the name of the new X12 business process in the Target Config Names field.
Configure additional settings of the business process, as needed. For details, see “Settings for X12 Business Processes”.
Adding an X12 Routing Rule
For general information on defining business rules, see Developing Business Rules.
When you create an X12 routing rule, the message class is EnsLib.EDI.X12.Document.
In all other respects, the structure and syntax for both types of rule set are the same.
Adding an X12 Business Operation
To send X12 messages from a production to a file or application, you must add an X12 business operation. Add an X12 business operation for each output destination.
You might also want to add business operations to handle bad messages (for background, see “Business Processes for Virtual Documents” in Using Virtual Documents in Productions).
To add an X12 business operation to a production:
Access the Business Operation Wizard as usual; see Configuring Productions.
Click the X12 Output tab.
Click one of the following from the Output type list:
TCP
File
FTP
For X12 Operation Name, type the name of this business operation.
Click OK.
Ensure that the business operation is connected to the relevant business services or business processes:
For a routing rule, enter the name of your X12 business operation in the Target field of the routing rule set.
If your design uses a pass-through interface that simply relays messages from the incoming business service to the outgoing business operation, enter the name of your X12 business operation in the Target Config Names field of the X12 business service.
Configure additional settings of the business operation, as needed. For details, see “Settings for X12 Business Operations”.
If you want the production to send data that is not an X12 message, see “Defining Business Operations” in Developing Productions. Also see “Connectivity Options” in Introducing Interoperability Productions.
Why SCO won't show the code
Posted Aug 19, 2003 16:08 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
Posted Aug 19, 2003 16:15 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
Posted Aug 19, 2003 18:36 UTC (Tue) by ken (subscriber, #625)
[Link]
I found it in V5 also.
The comment in that file is not the same and some whitespace differs, but it is the same code.
This is 30 year old source code!
Posted Aug 19, 2003 19:09 UTC (Tue) by mdrejhon (guest, #14189)
[Link]
Photo of SCO Slide:
Near exact match to 30 year old source code in V5:
What the heck is SCO doing, claiming 30 year old source code as their own?
Posted Aug 19, 2003 19:11 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
#
/*
*/
struct map {
char *m_size;
char *m_addr;
};
malloc(mp, size)
struct map *mp;
{
register int a;
register struct map *bp;
for (bp =);
return(a);
}
}
return(0);
}
mfree(mp, size, aa)
struct map *mp;
{
register struct map *bp;
register int t;
register int a;
a = aa;
for (bp = mp; bp->m_addr<=a && bp->m_size!=0; bp++);
if (bp>mp && (bp-1)->m_addr+(bp-1)->m_size == a) {
(bp-1)->m_size =+ size;
if (a+size == bp->m_addr) {
(bp-1)->m_size =+ bp->m_size;
while (bp->m_size) {
bp++;
(bp-1)->m_addr = bp->m_addr;
(bp-1)->m_size = bp->m_size;
}
}
} else {
if (a+size == bp->m_addr && bp->m_size) {
bp->m_addr =- size;
bp->m_size =+ size;
} else if (size) do {
t = bp->m_addr;
bp->m_addr = a;
a = t;
t = bp->m_size;
bp->m_size = size;
bp++;
} while (size = t);
}
}
This product includes software developed or owned by Caldera International, Inc.
Posted Aug 19, 2003 23:50 UTC (Tue) by Xman (subscriber, #10620)
[Link]
Posted Aug 21, 2003 20:31 UTC (Thu) by Ross (subscriber, #4065)
[Link]
Seems like an invalid copyright extension to me.
Posted Aug 19, 2003 19:37 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
SCO's copyrights
Posted Aug 19, 2003 16:27 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
SCO might well have a complaint that SGI did not properly
give credit for the code it used. But there is no possible way the company
can argue that this code's presence in Linux is an infringement of its
Two words: advertising clause.
even without the advertising clause
Posted Aug 19, 2003 17:52 UTC (Tue) by stevenj (subscriber, #421)
[Link]
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
(Note that it was a proprietary software company that did this, moreover, not the horrible hordes of hairy hackers. But, to be fair, I think that very few programmers are careful about this sort of thing.)
The code's being published in a book doesn't remove its copyrighted status or allow re-distribution, by the way, so that argument is a red herring (except regarding trade-secret claims).
Posted Aug 21, 2003 6:33 UTC (Thu) by bojan (subscriber, #14302)
[Link]
These guys have officially killed the advertising clause in 1999 on all versions of BSD software:
So, I'm guessing all the code is "off the hook", right?
Bojan
Make copies of that wayback machine page!!
Posted Aug 19, 2003 16:32 UTC (Tue) by rknop (guest, #66)
[Link]
Wayback machine does let site owners remove things from the archive. I know, because I've done this. ( has plays people have put online, and sometimes they ask us to remove them. Once I discovered wayback machine, I told them to remove the old plays and stop archiving the new ones.)
SCO may figure out that they have left tracks behind and try to erase them. Print-outs or copies of those tracks should be made before they do this. (I'm to lazy to do this myself, but *somebody* probably ought to.)
Of course, I wouldn't worry, because this is the same company that has distributed Linux on it's own ftp site while claiming that those who distribute Linux under the GPL are violating its intellectual property. This is a company whose left hand doesn't know what its left hand is doing. But, still.
-Rob
Posted Aug 19, 2003 22:50 UTC (Tue) by mmarq (guest, #2332)
[Link]
But which ones?
I don't believe that they will throw more fiascos such as this one, plenty commented on in this site.
Now that the super-fast response of the Linux/OSS community seems to have made a hole in the SCO FUD war chest and marked a clear victory... you are telling us that they could be back in business?
Guess there must be copies of "everything" UNIX-related... maybe OSI could hire a guy to google and copy everything Unix-related... how many days could it take?
Posted Aug 19, 2003 23:58 UTC (Tue) by Arker (guest, #14205)
[Link]
I won't say there's no worry here. Please someone archive this stuff on your personal machine. And don't tell anyone it's there. Just keep it until it's needed, or this mess is over.
I'd just say I've done that myself, as I've done in past cases (I have an untouched copy of 2.4 source from Caldera for instance,) but it's almost 2am in my timezone and I've done enough for the day. I know there are thousands of geeks who haven't, and I know a lot of us have a little hard drive space to spare. Grab this stuff. If only one of us has it, it means nothing, but if a couple hundred have byte-identical copies with the same time and date and the same story on how it was obtained, we have a legal chain of evidence that can be proven beyond a reasonable doubt. So please, just in case, do it now. Burn it to a CD or something, along with a description of exactly when and how you obtained it. You'll almost certainly be wasting a CD, but they're cheap, and if it does become an issue, you'll be glad you did.
I'm going to bed now, I leave it up to you.
Posted Feb 11, 2006 23:26 UTC (Sat) by kbolino (guest, #35879)
[Link]
Posted Aug 19, 2003 16:36 UTC (Tue) by apwiggins (guest, #14171)
[Link]
The Lions book (more nitpicking)
Posted Aug 19, 2003 17:05 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
The "6th Edition" part is part of the title (actually part of the name of
"UNIX 6th Edition"), and refers to the version of Unix it covers. It is
not the 6th edition of the book, as Amazon and others seem to think. (The
"with Source Code" part is the subtitle.)
Also, it's Lions', not Lion's, since the author's name is
John Lions
rather than John Lion.
Posted Aug 19, 2003 18:15 UTC (Tue) by ikegami (guest, #14174)
[Link]
: Also, it's Lions', not Lion's, since the author's name is John Lions rather than John Lion.
If you're gonna nitpick on spelling, get it right! The possessive form of "Lions" is "Lions's" because it's singular.
(singular) pro, James, Lions
(plural) pros, Jameses, Lionses
(possessive singular) pro's, James's, Lions's
(possessive plural) pros', Jameses', Lionses'
Posted Aug 19, 2003 18:27 UTC (Tue) by rfunk (subscriber, #4054)
[Link]
If you're gonna nitpick on spelling, get it right!
I did. I gave the title of the book as it is on the cover, not what some
manual on English says it should be. Then I explained why it's that way
on the cover.
The possessive form of "Lions" is "Lions's" because it's singular.
That's a matter of some debate. For example:
It's a living human language, therefore there are inconsistencies and
uncertainties. Deal with it. The title of this book, however, is not
uncertain.
Posted Aug 20, 2003 8:19 UTC (Wed) by rjamestaylor (guest, #339)
[Link]
;)
The Lions book
Posted Aug 19, 2003 23:47 UTC (Tue) by nicku (subscriber, #777)
[Link]
I have the two books from the class, which I treasure; one is A commentary on the UNIX Operating System, fourth printing, the other is UNIX Operating System Source Code Level Six. There, on Sheet 25 is the code that matches the slide.
They are printed on the big old Computer Centre line printers in fixed width font on A4 paper, landscape orientation, bound in yellow (the commentary) and red (the code). The commentary is printed justified in three columns, the code in two.
Posted Aug 19, 2003 17:24 UTC (Tue) by dwalters (subscriber, #4207)
[Link]
I keep seeing FUD articles about SCO in eWeek, CNet, etc. It would be nice to read about some fact instead of FUD for a change.
Posted Aug 19, 2003 18:37 UTC (Tue) by freethinker (guest, #4397)
[Link]
Posted Aug 19, 2003 19:06 UTC (Tue) by dwalters (subscriber, #4207)
[Link]
To my mind, the simple fact that SCOX is trading at over $10 a share, tells me that the shareholders may only be hearing one side of the story.
Posted Aug 19, 2003 17:36 UTC (Tue) by gups (guest, #14053)
[Link].
Posted Aug 19, 2003 17:47 UTC (Tue) by BrucePerens (subscriber, #2510)
[Link]
Bruce
It seems we do have a real copyright/license violation then
Posted Aug 19, 2003 17:55 UTC (Tue) by JoeBuck (subscriber, #2330)
[Link]
Then it appears that Linux does have a problem that must be corrected.
There is no Caldera copyright notice in the Linux copy of the code, and the license described in
the letter includes the GPL-incompatible advertising clause.
Posted Aug 19, 2003 18:05 UTC (Tue) by hch (guest, #5625)
[Link]
Removed due to severe crappiness...
Information on "SCO Ancient Unix" web page appears to be factually incorrect.
Posted Aug 19, 2003 18:10 UTC (Tue) by dwalters (subscriber, #4207)
[Link]
Indeed. The wording on the web site is:
Use of all versions is strictly for educational use only, and is restricted by the license agreed upon by entering this site.
If the code was already released under the BSD-style license, the above
statement is inaccurate and misleading. The BSD license-covered code is most certainly not limited to educational use only. Either SCO doesn't realize this, or they're trying to close the stable door after the horse has bolted!
However, unless the code in question was also released under some other BSD license which does not include the Caldera advertising clause, any advertising materials mentioning features or use of the code must acknowledge Caldera to be in compliance with the license.
Posted Aug 20, 2003 3:20 UTC (Wed) by lakeland (subscriber, #1157)
[Link]
The "Greek" says (in English); Typo; WayBack
Posted Aug 19, 2003 18:03 UTC (Tue) by leonbrooks (guest, #1494)
[Link]
If you presume that the editor was in a hurry, all that happened was that they didn't go rightwards enough words (or perhaps went back a word instead of forward) before appending the closing parenthesis to the ulong_t cast.
Naturally, all of this is moot since the code is BSD-derived anyway.
WRT WayBack, Reasons To Believe
moved to a totally dynamic website not long ago, even for pages with static
content, and the only reason I can see for this is that archive.org will then not
cache it, but there will be no record of them (nor will they ever have to admit)
asking WayBack to remove it. The reason for this is that their story changes
often, and if their ideas aren't crazy and inconsistent enough as-is, any archive
of their site would completely shred the credibility of what remains.
In point of fact TSG's conference stuff (from which I drew a
contact list) isn't archived, and was hurriedly pulled by TSG (who will say
"but it finished" despite leaving last year's page up for ages), but Google still
has a cache of it as at now.
Posted Aug 20, 2003 0:33 UTC (Wed) by neoprene (guest, #8520)
[Link]
THE TWO SECOND CRYPTO LESSON:
It is not Greek, just Greek letters. Most word editors allow you to switch fonts; switch to AA-Symbol and you look like a learned man already ;)
/*
 * As part of the kernel evolution toward modular naming, the
 * functions malloc and mfree are being renamed to rmalloc and rmfree.
 * Compatibility will be maintained by the following assembler code:
 * (also see mfree/rmfree below)
 */
Is this supposed to be the great mystery code SCOX is hiding? So lame.
Posted Aug 20, 2003 2:10 UTC (Wed) by Jotham (guest, #14211)
[Link]
Dear Sir,
You have just HACKED our encryption method and further *published* your method. This is in clear contravention of the DMCA act and shows that you, and those in collusion, LWN.net, are clearly terrorists. We estimate that this has damaged our company in excess of 1 billion dollars in lost revenue and FUD. See you in court.
Sincerely and best wishes,
Dr Evil
SCO Legal Department
Posted Nov 4, 2005 16:27 UTC (Fri) by rmosler2100 (guest, #33631)
[Link]
You should get a lawyer because SCO will be coming for you.
Bee-gees are immortal :-)
Posted Aug 19, 2003 18:33 UTC (Tue) by nraynaud (guest, #532)
[Link]
PEACE MY BROTHER !
Well, more seriously, why did such a badly designed allocator end up here? I don't care about the legal problem, but from a technical point of view, I think the flower-power mallocs are a little bit outdated.
Should somebody call the SEC?
Posted Aug 19, 2003 19:01 UTC (Tue) by tavis (guest, #14187)
[Link]
Posted Aug 19, 2003 20:01 UTC (Tue) by tjc (subscriber, #137)
[Link]
I'm curious if this is part of the code that SCO claims Linux "stole" and used to provide enterprise capabilities that it would otherwise not have. If so, it's interesting that a Caldera Product Manager and a Public Relations Manager referred to this code as "ancient."
If this is the best SCO has to show, then they're up to their armpits already.
Posted Aug 21, 2003 23:00 UTC (Thu) by onetimepost (guest, #14340)
[Link]
In other news ..
Posted Aug 19, 2003 19:08 UTC (Tue) by nx_in (guest, #14162)
[Link]
Posted Aug 19, 2003 19:26 UTC (Tue) by TheQuietGuy (guest, #14190)
[Link]
Posted Aug 19, 2003 19:35 UTC (Tue) by lyda (guest, #7429)
[Link]
Editors, please note this.
Posted Aug 20, 2003 12:46 UTC (Wed) by nx_in (guest, #14162)
[Link]
The code which they are displaying as from linux kernel, is NOT
actually from linux kernel. See in the second picture:
if (size == 0)
return) ((ulong_t NULL);
Now, this code doesn't even compile!! The actual code that exists in linux
is this:
if (size == 0)
return((ulong_t) NULL);
Now, this could be really dangerous.
Moreover, this file is part of only the ia64 port, which not even 2% of Linux users use. Just why am I supposed to pay SCO??
Re: Editors, please note this.
Posted Aug 19, 2003 21:14 UTC (Tue) by lsweeks (guest, #14198)
[Link]
DNA Argument
Posted Aug 19, 2003 19:53 UTC (Tue) by burbank15 (guest, #6401)
[Link]
Granted I would be surprised if this is really what will form the basis of their case. How could you argue that ideas and processes that are available in any number of college texts without requiring an NDA can really belong to one company? Wouldn't this line of reasoning require that any programmer who attended any classes on OS design in the last 30+ years be willing to enter into some sort of contract with SCO. And SCO would have to convince the courts that the OS design theories that influenced Unix can be used without any contractual encumbrance, while processes developed for Unix do have contractual encumbrances.
*sigh* The more I read about this case, the more I think that someone such as Arianna Huffington should be writing a column about SCO. Any suggestions for the title? Note that she has already used "Pigs at the Trough" for a book.
Posted Aug 19, 2003 22:28 UTC (Tue) by nexex (guest, #14202)
[Link]
DNA Argument etc..
Posted Aug 21, 2003 2:38 UTC (Thu) by maguska (guest, #14290)
[Link]
The other: Since Linux is open source, SCO could also have used it, and it's hard for them to prove that Linux is the copy and not their UNIX parts.
3rd: They could also show the copied parts in Linux's open source code; they don't have to show their own. This would make it possible to rewrite the problematic parts without signing secrecy agreements (it has another name, but I've forgotten it by now :) ).
4th: I was thinking about copyright law, and... It would be funny if - for instance - the A-bomb team (A. Einstein, Fermi, Neumann Janos, Wiegner Jeno, Szilard Leo and Teller Ede) announced suing Russia for breaking the A-bomb copyright (or patent, I don't know which). Teller Ede is still living, so it could happen!
Again, excuse me for grammatical mistakes.
Maguska
Posted Aug 19, 2003 20:08 UTC (Tue) by ak_hepcat (guest, #14192)
[Link]
Seems like if I were DMcB (i'd kill myself, ha!) that would be the first thing that I'd sic my hounds on -- destroying the evidence..
The second, and third, shoes
Posted Aug 19, 2003 20:48 UTC (Tue) by ccchips (guest, #3222)
[Link]
Second shoe: When I was in NORML, I saw news "reports", on major television outlets, of sperm swimming in circles due to marijuana use. The scene they showed was actually a snippet from an old educational film about what sperm looked like under a microscope, while the announcer boldly announced these "important findings" by "research groups." Could this "code" actually be just something they threw up on a slide to mislead people that they were showing evidence?
The third shoe:
On this forum and others, I've occasionally seen quiet, second-hand and third-hand posts (a friend, a friend of a friend who worked at SCO) about SCO employees doing the *exact opposite* of what they claimed IBM did, but with no specification of where or when; in other words, these people claimed that they knew SCO employees who stole code, and were encouraged to do so.
I am waiting. If this is true, it's got to be heavy on *someone*'s conscience.
Getting through to LWN
Posted Aug 19, 2003 21:13 UTC (Tue) by corbet (editor, #1)
Trust us, we noticed. Our bandwidth is there, but our poor server is sweating under the load. I think there's people at Rackspace standing by with fire extinguishers.
That's why we should have released the article under subscription - it would have kept the load down.
Seriously, though, it's clear that a stronger server has got to be in our future plans at some point.
Posted Aug 20, 2003 7:51 UTC (Wed) by ekj (subscriber, #1524)
Lots of people reading this can only be good. Let it sweat. Keep the extinguishers handy.
Posted Aug 20, 2003 16:43 UTC (Wed) by colink (guest, #274)
And why doesn't somebody like OSDL just buy y'all? Wouldn't it be great if an industry-wide consortium of companies interested in Linux gave back to the community that way?
Posted Aug 21, 2003 0:04 UTC (Thu) by Germ (guest, #14284)
Posted Aug 21, 2003 0:36 UTC (Thu) by corbet (editor, #1)
Awaiting your subscription...:)
Color changing is easy.
Posted Aug 21, 2003 5:43 UTC (Thu) by frazier (subscriber, #3060)
That was easy.
Clean BSD code
Posted Aug 19, 2003 20:59 UTC (Tue) by muon113 (guest, #14196)
Posted Aug 19, 2003 23:19 UTC (Tue) by Arker (guest, #14205)
Thanks for crediting me. Just because of you, I made an account here.
I'm always late on these things. I've been posting to slashdot since just after they went live, but I have an embarrassingly high uid because I never bothered to register until it became necessary because of the trolls.
Enough about me, the case at hand, much as I'd like to agree with you I don't. It's my understanding that BSD before 4.4lite is not necessarily unencumbered. If someone can show that I'm wrong on that please do, I'd love to hear it.
However, the good news is that the comment does not prove copyright infringement; only the code could do that (although the comment could be used as evidence to help convince a judge that this was a cut and paste rather than just a tight algorithm which any competent programmer would have hit on quickly). At worst, what we have here is a failure of a contributor to include a legally required copyright notice, with actual damages being the statutory limit to the award against him.
So, IANAL but it looks to me like the best SCO can get out of this particular example is $1 from whoever originally contributed the code. No? Why not?
Do they think the BSD license is restrictive?
Posted Aug 19, 2003 21:09 UTC (Tue) by iabervon (subscriber, #722)
Surely if the GPL's restrictions are not sufficient to prevent a work from being essentially in the public domain, then the BSD license, which has nothing but the advertising clause to make it more restrictive than the GPL, is similarly unenforceable. This means that they must believe the code on which they're founding their claims to be in the public domain.
Too trivial to copyright (pseudo code)
Posted Aug 19, 2003 21:28 UTC (Tue) by darkonc (guest, #14197)
static struct { size_t m_size; char *m_addr; } *chunk;

while (more chunks) {
    if (current chunk bigger than request) {
        take what we need out of the chunk
        point the pool pointer to the rest of the chunk
        adjust the size indicator
        if (we're using the entire chunk) {
            move this node to the end of the list   /* so it doesn't block the search */
        }
        return pointer
    }
}
/* couldn't find a big enough chunk */
return
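For the curious, here is a compilable toy version of that first-fit search. This is my own sketch, not the original UNIX code: the name toy_malloc, the 256-byte pool, and the 4-entry map are all invented for illustration, and it compacts the free map rather than moving the exhausted node to the end of the list.

```c
#include <stddef.h>

/* A first-fit allocator over a map of free chunks, in the spirit of
 * the old UNIX malloc discussed above. */
struct map { size_t m_size; char *m_addr; };

static char pool[256];
/* One big free chunk to start with; remaining entries are the
 * zero-sized terminator. */
static struct map freemap[4] = { { sizeof pool, pool } };

char *toy_malloc(size_t size)
{
    for (struct map *bp = freemap; bp->m_size != 0; bp++) {
        if (bp->m_size >= size) {
            char *a = bp->m_addr;   /* take what we need from the front */
            bp->m_addr += size;     /* point the entry at the rest      */
            bp->m_size -= size;     /* adjust the size indicator        */
            if (bp->m_size == 0) {
                /* entire chunk used: slide later entries down */
                struct map *p = bp;
                do { p[0] = p[1]; } while ((p++)->m_size != 0);
            }
            return a;
        }
    }
    return NULL;  /* couldn't find a big enough chunk */
}
```

Even at this size, the pointer bookkeeping is tight enough that independent implementations would tend to look alike, which is the point being made above.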
Too trivial to copyright
Posted Aug 19, 2003 22:15 UTC (Tue) by rfunk (subscriber, #4054)
It would be pretty difficult to produce a tight version of this algorithm without a high degree of duplication.
Indeed, in his 1977 commentary, John Lions wrote about malloc and mfree:
The code for these two procedures has been written very tightly. There is little, if any, "fat" which could be removed to improve run time efficiency. However it would be possible write [sic] these procedures in a more transparent fashion.
Posted Aug 19, 2003 22:17 UTC (Tue) by mammothrept (guest, #14201)
The implication of what you are saying is that this section of code is not copyrightable under US law, regardless of whether a copyright notification is attached. First, algorithms themselves are not copyrightable because of a legal doctrine called the idea/expression merger. Ideas cannot be copyrighted. Only specific expressions of ideas can be. Further, if there are only a small number of ways to express an idea, then even the expression is not copyrightable. If there are only "4 meaningful permutations of this algorithm" then a developer could block anyone else from using this idea merely by writing the algorithm four different ways. This would be using copyright to attain patent-like protection which is not allowed.
Not even patentable... 25 year limit just about over
Posted Aug 20, 2003 14:53 UTC (Wed) by egberts (guest, #14248)
So, just run it through 'ident' and recopyright it.
Posted Aug 19, 2003 21:46 UTC (Tue) by renco (guest, #11927)
The link to the ancient Unix license is removed!
You first need to obtain an Ancient UNIX license from SCO's web site. Once you have clicked on the license, go to the bottom and follow the hyperlink to obtain access to the Unix Archive.
After you fill in your details, you will be emailed the access details. Then go to the Unix Archive sites list to find your nearest mirror of the archive.
Once you have been granted access, you can also order the Unix Archive on CD-ROM or other media. This is done on a volunteer basis, so it may take some time. Please volunteer to help out with this effort if you can.
SCO is making a lot of noise. I think I would agree with one statement from SCO if I saw the code and the history. Linux is not the operating system that makes a lot of innovation. Don't use Samba in your new OS anymore either. It is also GNU!!
Posted Aug 19, 2003 22:36 UTC (Tue) by djabsolut (guest, #12799)
Posted Aug 19, 2003 22:09 UTC (Tue) by spshealy (guest, #14200)
I ran across the following flash cartoon with Darl in it... Pretty funny!
Posted Aug 20, 2003 3:42 UTC (Wed) by vijayandra (guest, #13932)
Ancient Unix Mirror
Posted Aug 20, 2003 6:57 UTC (Wed) by mcbridematt (guest, #10302)
Posted Aug 20, 2003 10:02 UTC (Wed) by luckybit (guest, #14235)
Perens' analysis mirror
Posted Aug 21, 2003 14:02 UTC (Thu) by jbh (subscriber, #494)
Posted Sep 1, 2003 14:50 UTC (Mon) by lundberg1000 (guest, #14689)
It would be interesting to have the opinion of more Linux people.
Lars
Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/45019/
Create Lightning Web Components
Learning Objectives
After completing this unit, you’ll be able to:
- Click New to start a new project.
- Paste the following fields of the same name in the corresponding JavaScript class.
Here’s a JavaScript file to support this HTML. Paste this into
app.js.
import { LightningElement } from 'lwc'; export default class App extends LightningElement { name = 'Electra X4'; description = 'A sweet bike built for comfort.'; category = 'Mountain'; material = 'Steel'; price = '$2,700'; pictureUrl = ''; }
The Preview now shows a bike with some details. If not, click Run. Next, update app.js to the following:
import { LightningElement } from 'lwc'; export default class App extends LightningElement { name = 'Electra X4'; description = 'A sweet bike built for comfort.'; category = 'Mountain'; material = 'Steel'; price = '$2,700'; pictureUrl = ''; ready = false; connectedCallback() { setTimeout(() => { this.ready = true; }, 3000); } }
Click Run to see the conditional directive working.
Base Reference. badges. It’s that simple.
OK. Let’s look at the JavaScript.
Working with JavaScript
Here’s where you make stuff happen. As you’ve seen so far, JavaScript methods define what to do with input, data, events, changes to state, and more to make your component work.
Lightning Web Components uses modules (built-in modules were introduced in ECMAScript 6) to bundle core functionality and make it accessible to the JavaScript in your component file. The core module for Lightning web components is lwc.
Begin the module with the import statement and specify the functionality of the module that your component uses.
The import statement indicates the JavaScript uses the LightningElement functionality from the lwc module.
// import module elements import { LightningElement} from 'lwc'; // declare class to expose the component export default class App extends LightningElement { ready = false; // use lifecycle hook connectedCallback() { setTimeout(() => { this.ready = true; }, 3000); } }
- LightningElement is the base class for Lightning web components, which allows us to use connectedCallback().
- The connectedCallback() method is one of our lifecycle hooks. You'll learn more about lifecycle hooks in the next section. For now, know that the method is triggered when a component is inserted in the document object model (DOM). In this case, it starts the timer.
Lifecycle Hooks
Lightning Web Components provides lifecycle hooks, and our example already uses one:
import { LightningElement } from 'lwc'; export default class App extends LightningElement { ready = false; connectedCallback() { setTimeout(() => { this.ready = true; }, 3000); } }
Also, notice that we used the this keyword. Keyword usage should be familiar if you've written JavaScript, and behaves just like it does in other environments. The this keyword in JavaScript refers to the top level of the current context. Here, the context is this class. The connectedCallback() method assigns the value for the top-level ready variable. It's a great example of how Lightning Web Components lets you bring JavaScript features into your development. You can find a link to good information about this in the Resources section.
We’re moving fast and you’ve been able to try out some things. In the next unit, we take a step back and talk more about the environment where the components live.
Decorators
Decorators are often used in JavaScript to modify the behavior of a property or function.
To use a decorator, import it from the lwc module and place it before the property or function.
import { LightningElement, api } from 'lwc'; export default class MyComponent extends LightningElement{ @api message; }
You can import multiple decorators, but a single property or function can have only one decorator. For example, a property can't have both the @api and @wire decorators.
Examples of Lightning Web Components decorators include:
- @api: Marks a field as public. Public properties define the API for a component. An owner component that uses the component in its HTML markup can access the component’s public properties. All public properties are reactive, which means that the framework observes the property for changes. When the property’s value changes, the framework reacts and rerenders the component.
Tip: Field and property are almost interchangeable terms. A component author declares fields in a JavaScript class. An instance of the class has properties. To component consumers, fields are properties. In a Lightning web component, only fields that a component author decorates with @api are publicly available to consumers as object properties.
- @track: Tells the framework to observe changes to the properties of an object or to the elements of an array. If a change occurs, the framework rerenders the component. All fields are reactive. If the value of a field changes and the field is used in a template (or in the getter of a property used in a template), the framework rerenders the component. You don't need to decorate the field with @track. Use @track only if a field contains an object or an array and if you want the framework to observe changes to the properties of the object or to the elements of the array. If you want to change the value of the whole property, you don't need to use @track.
Prior to Spring '20, you had to use @track to mark fields (also known as private properties) as reactive. You're no longer required to do that. Use @track only to tell the framework to observe changes to the properties of an object or to the elements of an array. Some legacy examples may still use @track where it isn't needed, but that's OK because using the decorator doesn't change the functionality or break the code. For more information, see this release note.
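As a concrete sketch of the @track rules above, consider a component that mutates an array in place. The component and field names here are invented for illustration and are not part of this unit:

```javascript
// bikeList.js -- illustrative sketch only
import { LightningElement, track } from 'lwc';

export default class BikeList extends LightningElement {
    // @track makes the framework observe the *elements* of the array,
    // so the in-place push() below triggers a rerender.
    @track bikes = ['Electra X4'];

    addBike() {
        this.bikes.push('Fuse 3');
    }
}
```

Without @track, you would instead reassign the whole field (this.bikes = [...this.bikes, 'Fuse 3']), which also rerenders, because all fields are reactive to reassignment.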
- @wire: Gives you an easy way to get and bind data from a Salesforce org.
Here's an example using the @api decorator to render a value from one component (bike) in another component (app). In the component playground, the files look like this:
The app component uses the following HTML.
<!-- app.html --> <template> <div> <c-bike bike={bike}></c-bike> </div> </template>
The app component uses the following JavaScript.
// app.js import { LightningElement } from 'lwc'; export default class App extends LightningElement { bike = { name: 'Electra X4', picture: '' }; }
The bike component uses the following HTML.
<!-- bike.html --> <template> <img src={bike.picture}> <p>{bike.name}</p> </template>
The bike component uses the following JavaScript.
// bike.js import { LightningElement, api } from 'lwc'; export default class Bike extends LightningElement { @api bike; }
Try it in the component playground with and without the @api decorator to see the behavior.
Bonus! Try These Components in the Playground
Now that you’re familiar with the component playground, take a little time to experiment with some of the examples in the Lightning Web Components Recipes. Try pasting some of the HTML, JavaScript, and CSS from the examples there into the component playground as a great way to get familiar with core component concepts.
Resources
- Lightning Web Components Developer Guide: Reactivity
- Lightning Web Components Developer Guide: Reference (includes HTML Template Directives, Decorators, and more)
- MDN web docs: this
https://trailhead.salesforce.com/en/content/learn/modules/lightning-web-components-basics/create-lightning-web-components
…and Ken Fogel (Canada). Now let's travel to the US and meet Leon LaSpina. -- NetBeans Team
My name is Leon LaSpina and I teach computer science at Bethpage Senior High School in New York.
I moved to NetBeans IDE in my classroom a few years ago and have been very pleased with that decision. Here are some reasons why NetBeans IDE is great for teaching Java from the perspective of a high school computer science teacher:
Code Navigation
I love the fact that I can CTRL-click any method and the IDE jumps to the source code definition for that method. Then navigation works just like a web browser; for example, CTRL-left arrow works like a back button. Sometimes I might use CTRL-click a few times in a row to dig down into what a method is doing. After I've learned what I needed or made some change, I can hit CTRL-left arrow a few times to get back to where I was.
For students this is quite helpful. As they try to understand some code I might give them, they can spend more time reading through the code and less time hunting around for the code they should be reading. The first time a student is presented with a project that involves more than a few classes with more than a few methods each, just navigating around the code can be intimidating.
Auto-Generation of Try/Catch Blocks
I like to introduce file access early on, because it makes it easy to test the programs students write. They run the program on an input file I give them, and the student and I both know right away if the program did what it was supposed to do. I can quickly test a student's work on a number of different input values without having to type them in.
Auto-Generation of Method Skeletons
Auto-generation of method skeletons for methods required by an interface is also supported by NetBeans IDE. For my own purposes, this is just a nice time saver. For my students, it helps them through the syntax and emphasizes the requirement to provide concrete implementations for abstract methods and interface methods.
For example, we might write "public class Student implements Comparable<Student>". As soon as the student starts writing this class, an error is visible.
When they press Alt-Enter over the error, NetBeans IDE offers to generate the "compareTo" method.
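To make that concrete, here is a hypothetical Student class of the kind described (the fields and the GPA ordering are invented for illustration), with the generated compareTo stub filled in:

```java
// NetBeans generates the compareTo skeleton required by
// Comparable<Student>; we fill it in to order students by GPA.
public class Student implements Comparable<Student> {
    private final String name;
    private final double gpa;

    public Student(String name, double gpa) {
        this.name = name;
        this.gpa = gpa;
    }

    @Override
    public int compareTo(Student other) {
        // Higher GPA sorts after lower GPA.
        return Double.compare(this.gpa, other.gpa);
    }
}
```

The IDE supplies the method signature and the @Override annotation; the student only has to think about the comparison logic.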
Generation of Getters, Setters, and Constructors
Taking care of boilerplate is great for everyone, but when students have limited class time to write code, this is very helpful. Once they understand what a "getter" and "setter" is and how to write them, creating these is really just a typing exercise. In a class with 4 or 5 fields, having NetBeans IDE generate getters, setters, and constructors saves time for more interesting things.
Git Support
I love the built-in Git support.
It has allowed me to use source control for my own projects and to coordinate work with students without really knowing how to use Git!
https://dzone.com/articles/netbeans-great-for-teaching-7
I am fairly new to Scala, only a few months in, and I've noticed some wild signatures. I have worked through generics with covariance/contravariance/extensions/invariance and most of the basics. However, I still find some method signatures a bit confusing. While I find examples and know what the signatures produce, I am still a bit at a loss as to some of the functionality. Googling my questions has left me with no answers. I do have the general idea that people like to beat the basic CS 1 stuff to death. I have even tried to find answers on the Scala website. Perhaps I am phrasing things like "expanded method signature" and "defining function use in scala signature" wrong. Can anyone explain this signature?
futureUsing[I <: Closeable, R](resource: I)(f: I => Future[R])(implicit ec: ExecutionContext):Future[R]
Is it invoked like this:
futureUsing(resourceOfI)({stuff => doStuff(stuff)})(myImplicit)
or like this:
futureUsing(resourceOfI)(myImplicit)({stuff => doStuff(stuff)})
Can anyone explain this signature?
futureUsing[I <: Closeable, R]
futureUsing works with two separate types (two type parameters). We don't know exactly what types they are, but we'll call one I (input), which is (or is derived from) Closeable, and the other R (result).
(resource: I)
The 1st curried argument to futureUsing is of type I. We'll call it resource.
(f: I => Future[R])
The 2nd curried argument, f, is a function that takes an argument of type I and returns a Future that will (eventually) contain something of type R.
(implicit ec: ExecutionContext)
The 3rd curried argument, ec, is of type ExecutionContext. This argument is implicit, meaning if it isn't supplied when futureUsing is invoked, the compiler will look for an ExecutionContext in scope that has been declared implicit and it will pull that in as the 3rd argument.
:Future[R]
futureUsing returns a Future that contains the result of type R.
Is there a specific ordering to this?
Implicit parameters are required to be the last (right-most) parameters. Other than that, no: resource and f could have been declared in either order. When invoked, of course, the order of arguments must match the order as declared in the definition.
Do I need ... implicits to drag in?
In the case of ExecutionContext, let the compiler use what's available from import scala.concurrent.ExecutionContext. Only on rare occasions would you need something different.
...how would Scala use the 2nd curried argument...
In the body of futureUsing I would expect to see f(resource). f takes an argument of type I, and resource is of type I. f returns Future[R] and so does futureUsing, so the line f(resource) might be the last statement in the body of futureUsing.
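To tie the pieces together, here is one plausible body for futureUsing that is consistent with that signature. This is only a sketch, not the actual implementation the question refers to; a real version might also handle exceptions thrown by f itself:

```scala
import java.io.Closeable
import scala.concurrent.{ExecutionContext, Future}

def futureUsing[I <: Closeable, R](resource: I)(f: I => Future[R])(implicit ec: ExecutionContext): Future[R] = {
  // Apply the 2nd curried argument to the resource.
  val result = f(resource)
  // Close the resource once the Future settles, using the implicit ec.
  result.onComplete(_ => resource.close())
  result
}
```

Called as futureUsing(resourceOfI)(stuff => doStuff(stuff)), with the ExecutionContext supplied implicitly from scope.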
https://codedump.io/share/knXuIq35eBY6/1/on-expanded-scala-method-signatures
How to add more than one EllipseItem to a Scene..?
I want to add around 11-12 QGraphicsEllipseItems to a scene. How can I achieve this? Currently I am using:
ellipseOne = scene->addEllipse(0,0,10,10,blackpen,redBrush); . . . . ellipseTwelve = scene->addEllipse(0,0,10,10,blackpen,redBrush);
Apart from this, is there any other way?
Thanks in advance,
Rohith.G
@Rohith The documentation says:
To add items to a scene, you start off by constructing a QGraphicsScene object. Then, you have two options: either add your existing QGraphicsItem objects by calling addItem(), or you can call one of the convenience functions addEllipse(), addLine(), addPath(), addPixmap(), addPolygon(), addRect(), or addText(), which all return a pointer to the newly added item. The dimensions of the items added with these functions are relative to the item's coordinate system, and the items position is initialized to (0, 0) in the scene.
From here:
- Chris Kawa Moderators
@Rohith said in How to add more than one EllipseItem to a Scene..?:
Apart from this is there any other way.
Yes, use a for loop and store the items in a container, not named individual items.
@Rohith Hi friend, I think you are new to programming. So, if you want to get good at this, please write more and more code. This question is not complex.
Snippet Code
#include <QGridLayout> #include <QGraphicsEllipseItem> #include <QGraphicsView> #include <QGraphicsScene> Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget) { ui->setupUi(this); ///< QGraphicsScene *scene = new QGraphicsScene; ///< QGraphicsView *view = new QGraphicsView(this); QGraphicsView *view = new QGraphicsView; QGraphicsScene *scene = new QGraphicsScene(this); ///< Note: this pointer, if not have, will let memory leak view->setScene(scene); int x,y,w,h; int num = 10; for(int i=0; i < num; i++){ x = y = i * 10; w = h = i * 10; scene->addItem(new QGraphicsEllipseItem(QRect(x,y,w,h))); } QGridLayout* lyt = new QGridLayout; lyt->addWidget(view); setLayout(lyt); }
- Chris Kawa Moderators
@joeQ That's wrong.
ui->setupUi(this); sets up the ui and most probably sets a layout already, so instead of
QGridLayout* lyt = new QGridLayout; lyt->addWidget(view); setLayout(lyt);
EDIT: I haven't noticed that it's a QWidget not a QMainWindow, so you'll need a layout indeed, but
setupUi probably sets it anyway.
Also the view does not take ownership of the scene so you've got a memory leak. Instead of giving a parent to the view you should give it to the scene:
QGraphicsScene *scene = new QGraphicsScene(this); QGraphicsView *view = new QGraphicsView();
@Chris-Kawa In that case, now there is another memory leak; for the view. So, it should probably be:
QGraphicsScene *scene = new QGraphicsScene(this); QGraphicsView *view = new QGraphicsView(this);
@Chris-Kawa (⊙o⊙)!, Thank U very much. I get it. Thank u.
- Chris Kawa Moderators
@c64zottel No, there's not. When a widget is put in a layout it gets re-parented to the widget governed by the layout. Similarly when it is set as central widget the main window becomes its parent. Widgets are released by their parents so there's no leak.
@Chris-Kawa Argh..., I missed that.
@Chris-Kawa Hi, I used Debug -> Memcheck to check my code for memory leaks. There weren't any. Is the valgrind tool inaccurate?
- Chris Kawa Moderators
@joeQ I don't know. I never used valgrind. But just add
connect(scene , &QGraphicsScene::destroyed, []{ qDebug() << "destroyed!"; });
and see that the destructor is never called if there's no parent.
@Chris-Kawa Yes, You are right. thank u again.
https://forum.qt.io/topic/79904/how-to-add-more-than-one-ellipseitem-to-a-scene
I'm writing a program which organizes my school marks, and for every subject I created a file.pck where all the marks for that subject are saved. Since I have to open and pickle.load 10+ files, I decided to make 2 functions, starting with files_open():
subj1 = open(subj1_file)
subj1_marks = pickle.load(subj1)
subj2 = open(subj2_file)
subj2_marks = pickle.load(subj2)
subj1.close()
subj2.close()
file_open.subj1
Since you just want to open the file, load from it, and close it afterwards, I would suggest a simple helper function:
def load_marks(filename): with open(filename,"rb") as f: # don't forget to open as binary marks = pickle.load(f) return marks
Use like this:
subj1_marks = load_marks(subj1_file)
The file is closed when going out of scope of the with block, and your data remains accessible even after the file is closed, which may be your (unjustified) concern with your question.
Note: someone suggested that what you really want (maybe) is to save all your data in one big pickle file. In that case, you could create a dictionary containing your data:
d = dict() d["mark1"] = subj1_marks d["mark2"] = subj2_marks ...
and perform one sole pickle.dump() and pickle.load() on the dictionary (if data is picklable, then a dictionary of this data is also picklable): it is simpler to handle 1 big file than a lot of them, knowing that you need all of them anyway.
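For instance, a pair of helpers built on that idea (file and variable names are invented here) round-trips the whole gradebook through one file:

```python
import os
import pickle
import tempfile

def save_all_marks(filename, marks_by_subject):
    """Dump every subject's mark list into one pickle file."""
    with open(filename, "wb") as f:
        pickle.dump(marks_by_subject, f)

def load_all_marks(filename):
    """Load the whole dictionary back in one call."""
    with open(filename, "rb") as f:
        return pickle.load(f)

# Usage: one dict instead of 10+ separate .pck files.
marks = {"subj1": [8, 9, 7], "subj2": [6, 10]}
path = os.path.join(tempfile.mkdtemp(), "marks.pck")
save_all_marks(path, marks)
assert load_all_marks(path) == marks
```

Adding a subject then means adding a key to the dictionary, not a new file and a new pair of open/load lines.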
https://codedump.io/share/5c0SFVqoK2Sh/1/opening-and-closing-a-large-number-of-files-on-python
This is a sample application which makes AJAX calls to web service. This article discusses the server side aspect of the application.
The web service performs the task of fetching and storing the data from SQL server database at server. This article explains the server side aspect of the application. For discussion about how the UI layer works at client end please read: Calling A Web Service From HTML Page.....(for all browsers)
The model used is as the image shown below:
The Client side UI layer is discussed in article Calling A Web Service From HTML Page..(for all browsers).
The Application Logic layer has two layers: Business Object Layer(BL) and data access layer(DAL). DAL contains the class PollService. PollService is made a web service class by exposing some of its methods as WebMethods.
There are two more details which are optional: 1. Inheriting your web service class from the WebService class of the System.Web.Services namespace. 2. Applying the WebService attribute to the class declaration. This will give us the advantage of having access to the built-in Asp.Net objects Application, Context, Server, Session, and User. If we don't need to use these objects, we can skip these details.
Here, PollService class exposes the methods: AddPoll ,CastMyVote ,CastVoteByPollId ,GetAllPollResults ,GetLatestPoll , GetListOfPolls ,GetPollById ,GetPollResultById as web methods to web services interface. This is done by decorating their method declarations with [WebMethod()] attribute.
A further important step is to specify the xml-namespace. The xml-namespace identifies your web service over the internet. When no xml-namespace is provided for the class, .Net provides a default namespace which is suitable for testing purposes only. We give our webservice a namespace with the following declaration:
[WebService(Namespace="", Description="This is demo polling Web Service.")] public class PollService : System.Web.Services.WebService {
(Please make sure to be comfortable with creation of web services beforehand as this is a demo project only and this article should not be considered as a complete tutorial for web services at all. This article by Chris Maunder would be a very good start.)
The web methods of our web-service class look somewhat like this:
namespace pollLogicalLayer.LogicalLayer.DAL { public class PollService : System.Web.Services.WebService { ...... [WebMethod()] public List<PollServiceBO> GetListOfPolls() { //code to access database, fill the list of PollServiceBO objects and return this list. //PollServiceBO is discussed below. } [WebMethod()] public PollServiceBO GetLatestPoll() { //............ } [WebMethod()] public string CastVoteByPollId(string UserId,int PollId,Int16 SelectedOption) { ............. } [WebMethod()] public PollServiceBO GetPollResultById(int PollId) { ............. } [WebMethod()] public List<PollServiceBO> GetAllPollResults() { ................ } [WebMethod()] public PollServiceBO GetPollById(int PollId) { .............. } [WebMethod()] public int AddPoll(string Question,string option1,string option2,string option3,string option4) { ................. } [WebMethod()] public string CastMyVote(string UserId, Int16 SelectedOption) { ............... } } }
Above methods take simple parameters as arguments like int,string,DateTime etc. as these are serializable data-types. The information is returned in form of objects of the class PollServiceBO. The business object PollServiceBO is kept in the namespace pollLogicalLayer.LogicalLayer.BL. Generally Web service business objects are kept separate from actual business objects. Web service objects should then instantiate business objects for actual data access operations. The code for PollServiceBO class is self explanatory:
public class PollServiceBO { private int _id; private string _question; private string _option1; private string _option2; private string _option3; private string _option4; private DateTime _dateAdded; //the getters and setters for the above fields }
The default database used for Database Layer is the SQL Server file poll.mdf kept in App_Data folder of the application. This layer contains simple stored procedures with the same names(AddPoll ,CastMyVote ,CastVoteByPollId....) as above for fetching/inserting data. The SQL script for the database(pollDb.sql) could be downloaded from above. If poll Database is to be changed to alternate Server, the connection String should be changed accordingly. Please make sure to place some data in poll database before making request to fetch. When server is changed, the connectionstring can be changed like this:
<connectionStrings> <add name="pollConnectionString" connectionString="Data Source={Target Server Name}; Initial Catalog=poll;Integrated Security=SSPI;Connect Timeout=10" providerName="System.Data.OleDb" /> </connectionStrings>
If the application is set up, PollService.asmx can be viewed in browser like this:
All the methods exposed by the web service are visible as in the image above. Please make sure to test the web service by invoking a web method, before using the UI. e.g.- click on GetLatestPoll and press invoke button. If xml response is visible then UI layer will be able to fetch the data.
If there is no response when web method is invoked, please browse Default.aspx which checks the objects directly. General cause for this is improper access to the database file.
If XML response is visible but UI is not able to fetch the data, please correct the location of the web service.
For example, in the CastMyVote UI we have the location of the WebService as
var url = "";
This says that PollService.asmx is hosted in the poll folder on localhost, i.e. it is in the c:\inetpub\wwwroot\poll directory. If we want to host it from Visual Studio's inbuilt web server or any other server, the location needs to be changed in all pages. E.g.:
"" //above one is location when my inbuilt web server hosts my website on port 2080 "" //this would be location if we host the PollService.asmx from a folder named poll in website
Index.htm is the start Page. Some of the pages are based upon the webservice.htc approach which works for Internet Explorer only.
Please note that this application exposes the functions of data access layer directly to the UI for demonstration purpose only. Web service business objects are kept different from application business objects. This is because to expose any field of the object returned by a web service, we will have to keep it as public read/write field. This is needed to keep this object serializable. We would never like to expose all the fields of our business objects and the methods of Data Access Layer directly. Hence a more legible approach will have:
public class PollServiceBO { public PollServiceBO(PollBO b) { this._id = b.ID; this._question = b.question; this._option1 = b.option1; this._option2 = b.option2; this._option3 = b.option3; this._option4 = b.option4; this._dateAdded = b.dateAdded; } /***** the usual getters and setters ****/ }
And the web service class should use the Data Access Layer and application business objects to interact with application (after validation).
[WebService(Namespace="", Description="This is demo polling Web Service.")] public class PollService : System.Web.Services.WebService { [WebMethod(CacheDuration = 30,Description="Returns list of latest 100 polls.")] public List<PollServiceBO> GetListOfPolls() { List<PollBO> polls = PollDAL.GetListOfPolls(); List<PollServiceBO> returnpolls = new List<PollServiceBO>(); foreach (PollBO b in polls) { PollServiceBO pl = new PollServiceBO(b); returnpolls.Add(pl); } return returnpolls; } .... }
Authentication, abstraction of the DAL from the UI layer, and state management (if required) are further important considerations.
Further, I have enabled HTTP GET & POST interaction for this application by modifying the Web.config file, and I am processing the returned XML response using JavaScript in the UI. However, if we do not want to do this and wish to enable sessions and authentication/authorization for this application, there is a workaround. We create proxy classes using wsdl.exe, or in Visual Studio using the Web Reference approach. We then write aspx pages which consume our web service through these proxy classes. These aspx pages collect simple parameters from, and return XML responses to, our AJAX-enabled UI.
The proxy class makes our life very simple, as it takes care of generating the correct SOAP message and sending it over HTTP. It also takes care of converting the response message to the corresponding .NET data types.
For information regarding how the UI layer works, please see the article: Calling A Web Service From HTML Page (for all browsers).
|
http://www.codeproject.com/KB/webservices/WebServiceFromHTMLSample.aspx
|
crawl-002
|
refinedweb
| 1,308
| 50.33
|
#include <sys/types.h>
#include <unistd.h>

pid_t tcgetpgrp(int fildes);
Upon successful completion, the tcgetpgrp() function returns the value of the process group ID of the foreground process group associated with the terminal. Otherwise, −1 is returned and errno is set to indicate the error.
The tcgetpgrp() function will fail if:
EBADF — The fildes argument is not a valid file descriptor.
ENOTTY — The calling process does not have a controlling terminal, or the file is not the controlling terminal.
See attributes(5) for descriptions of the following attributes:
setpgid(2), setsid(2), tcsetpgrp(3C), attributes(5), standards(5), termio(7I)
|
http://docs.oracle.com/cd/E36784_01/html/E36874/tcgetpgrp-3c.html
|
CC-MAIN-2015-18
|
refinedweb
| 102
| 56.05
|
Apache Spark 2 on CML
Apache Spark is a general purpose framework for distributed computing that offers high performance for both batch and stream processing. It exposes APIs for Java, Python, R, and Scala, as well as an interactive shell for you to run jobs.
In Cloudera Machine Learning (CML), Spark and its dependencies are bundled directly into the CML engine Docker image.
CML supports fully-containerized execution of Spark workloads via Spark's support for the Kubernetes cluster backend. Users can interact with Spark both interactively and in batch mode.
Dependency Management: In both batch and interactive modes, dependency management, including for Spark executors, is transparently handled by CML and Kubernetes. No extra configuration is required. In interactive mode, CML leverages your cloud provider for scalable project storage, and in batch mode, CML manages dependencies through container images.
Autoscaling: CML supports cloud autoscaling via Kubernetes, including the use of preemptible or spot instances.
Workload Isolation: In CML, each project is owned by a user or team. Users can launch multiple sessions in a project. Workloads are launched within a separate Kubernetes namespace for each user, thus ensuring isolation between users at the K8s level.
|
https://docs.cloudera.com/machine-learning/1.0/product/topics/ml-apache-spark-overview.html
|
CC-MAIN-2021-04
|
refinedweb
| 185
| 55.95
|
D Programming - Literals
Constant values that are typed in the program as a part of the source code are called literals.
Literals can be of any of the basic data types and can be divided into Integer Numerals, Floating-Point Numerals, Characters, Strings, and Boolean Values.
Again, literals are treated just like regular variables except that their values cannot be modified after their definition.
Integer Literals
An integer literal can be of any of the following types −
Decimal uses the normal number representation; the first digit cannot be 0, as that digit is reserved for indicating the octal system. This does not include 0 on its own: 0 is zero.
Octal uses 0 as prefix to number.
Binary uses 0b or 0B as prefix.
Hexadecimal uses 0x or 0X as prefix.
An integer literal can also have a suffix that is a combination of U and L, for unsigned and long, respectively. The suffix can be uppercase or lowercase and can be in any order.
When you don't use a suffix, the compiler itself chooses between int, uint, long, and ulong based on the magnitude of the value.
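A minimal sketch of the prefix and suffix rules above (variable names are illustrative only):

```d
import std.stdio;

void main()
{
    auto bin = 0b1010; // binary literal: 10
    auto hex = 0x1F;   // hexadecimal literal: 31
    auto u   = 100U;   // suffix U: uint
    auto l   = 100L;   // suffix L: long
    auto ul  = 100UL;  // suffixes combined: ulong

    writeln(bin, " ", hex);                              // 10 31
    writeln(typeid(u), " ", typeid(l), " ", typeid(ul)); // uint long ulong
}
```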
Floating Point Literals
The floating point literals can be specified in either the decimal system as in 1.568 or in the hexadecimal system as in 0x91.bc.
In the decimal system, an exponent can be represented by adding the character e or E and a number after that. For example, 2.3e4 means "2.3 times 10 to the power of 4". A "+" character may be specified before the value of the exponent, but it has no effect. For example, 2.3e4 and 2.3e+4 are the same.
The “-” character added before the value of the exponent changes the meaning to be "divided by 10 to the power of". For example, 2.3e-2 means "2.3 divided by 10 to the power of 2".
In the hexadecimal system, the value starts with either 0x or 0X. The exponent is specified by p or P instead of e or E. The exponent does not mean "10 to the power of", but "2 to the power of". For example, the P4 in 0xabc.defP4 means "abc.def times 2 to the power of 4".
Here are some examples of floating-point literals −
3.14159       // Legal
314159E-5L    // Legal
510E          // Illegal: incomplete exponent
210f          // Illegal: no decimal or exponent
.e55          // Illegal: missing integer or fraction
0xabc.defP4   // Legal hexadecimal with exponent
0xabc.defe4   // Legal hexadecimal without exponent
By default, the type of a floating point literal is double. The f and F mean float, and the L specifier means real.
Boolean Literals
There are two Boolean literals and they are part of standard D keywords −
A value of true representing true.
A value of false representing false.
You should not consider the value of true equal to 1, nor the value of false equal to 0.
Character Literals
Character literals are enclosed in single quotes.
A character literal can be a plain character (e.g., 'x'), an escape sequence (e.g., '\t'), an ASCII character (e.g., '\x21'), a Unicode character (e.g., '\u011e'), or a named character entity (e.g., '\&copy;', '\&hearts;', '\&euro;').
Certain characters in D have a special meaning when preceded by a backslash; they are used to represent, for example, a newline (\n) or a tab (\t). Here is a list of some of these escape sequence codes −
The following example shows a few escape sequence characters −
import std.stdio;

int main(string[] args) {
    writefln("Hello\tWorld%c\n", '\x21');
    writefln("Have a good day%c", '\x21');
    return 0;
}
When the above code is compiled and executed, it produces the following result −
Hello World!
Have a good day!
import std.stdio;

void main(string[] args) {
    writeln(q"MY_DELIMITER
Hello World
Have a good day
MY_DELIMITER");
    writefln("Have a good day%c", '\x21');
    auto str = q{int value = 20; ++value;};
    writeln(str);
}
In the above example, you can find the use of q"MY_DELIMITER MY_DELIMITER" to represent multi-line characters. You can also see q{} used to represent a D language statement itself.
|
https://www.tutorialspoint.com/d_programming/d_programming_literals.htm
|
CC-MAIN-2018-51
|
refinedweb
| 686
| 59.19
|
DEBSOURCES
Patches / puredata /0.49.0-3
src/g_editor_extras.c |
4 2 + 2 - 0 !
1 file changed, 2 insertions(+), 2 deletions(-)
fixed memleak in triggerize code by making variables more local
CID:190664
src/g_editor_extras.c |
8 6 + 2 - 0 !
1 file changed, 6 insertions(+), 2 deletions(-)
fixed crasher bug when iterating over canvas' glist while modifying
it...
Closes:
src/g_editor.c |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
fixed undo for "tidy up" when nothing is selected.
Closes:
po/de.po |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
fixed missing letter in german translation
configure.ac |
2 1 + 1 - 0 !
src/s_main.c |
2 1 + 1 - 0 !
2 files changed, 2 insertions(+), 2 deletions(-)
rename "pd" to "puredata"
in order to allow multiple flavours of Pd
to be installed simultaneously, puredata will install itself as
"/usr/bin/puredata" and provide an alternative as "pd"
This patch ensures that the "puredata" namespace is used throughout
(both installed binary and library-paths)
src/s_path.c |
4 4 + 0 - 0 !
1 file changed, 4 insertions(+)
search /usr/lib/pd/extra/ for externals
since we install into /usr/lib/puredata/extra, the ordinary install path for
externals (/usr/lib/pd/extra) won't get searched automatically; so we need to
add it manually
man/pd.1 |
26 16 + 10 - 0 !
1 file changed, 16 insertions(+), 10 deletions(-)
fix uris in manpage to point to some meaningful place
tcl/pd_menucommands.tcl |
15 14 + 1 - 0 !
1 file changed, 14 insertions(+), 1 deletion(-)
fix menu-entries in case puredata-doc is not available
if the puredata-doc package is not installed, some of the menu-entries (namely
the manual section) will not work correctly.
This patch checks whether the files are there, and if not, a dialog is raised
asking the user to install puredata-doc
src/s_main.c |
12 8 + 4 - 0 !
1 file changed, 8 insertions(+), 4 deletions(-)
remove c-macros for timestamps
tcl/pd-gui.tcl |
2 1 + 1 - 0 !
1 file changed, 1 insertion(+), 1 deletion(-)
|
https://sources.debian.org/patches/puredata/0.49.0-3/
|
CC-MAIN-2021-21
|
refinedweb
| 345
| 58.28
|
I am trying to resolve a schedules parameter list with families to identify which families have missing shared parameters. Is there a way to get if a parameter is a type parameter?
let me open dynamo real quick.
I think it's "parameter.type"
or this:
looked at that but that's a global parameter. The issue is gathering a list of scheduled parameters for doors, but some are type and some are instance AND some are hidden Revit parameters (to-Room)
Someone else here may be able to provide a more complete answer, but it is possible through the Revit API. Assuming my variable param is a valid Parameter:
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import ElementType

param_elem = param.Element

# If the host element is an ElementType, assume the parameter is a Type Parameter.
# True if type, false if instance.
OUT = isinstance(param_elem, ElementType)
The function isinstance() has nothing to do with Revit specifically, despite the similar naming. The above code will not actually run either, as param is still undefined; however, if I could see what you have done so far, I may be able to rework it into an actual solution.
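To illustrate just the isinstance() pattern outside of Revit, here is a plain-Python sketch; Element, ElementType, and Parameter are hypothetical stand-ins, not the real Revit API classes:

```python
# Hypothetical stand-ins for the Revit API classes, to show the pattern only.
class Element:
    pass

class ElementType(Element):
    pass

class Parameter:
    def __init__(self, element):
        self.Element = element  # mirrors Parameter.Element in the Revit API

type_param = Parameter(ElementType())
instance_param = Parameter(Element())

# True if the host element is an ElementType, i.e. a type parameter.
print(isinstance(type_param.Element, ElementType))      # True
print(isinstance(instance_param.Element, ElementType))  # False
```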
|
https://forum.dynamobim.com/t/is-there-a-way-to-tell-if-a-parameter-is-a-type-parameter/46335
|
CC-MAIN-2020-45
|
refinedweb
| 194
| 57.06
|
A Beginner's Guide to Kotlin
A Beginner's Guide to Kotlin
Why is Kotlin so popular?
In 2017, Kotlin was announced as an officially supported language for Android development by Google, which accelerated its adoption.
Hello World
Kotlin is a statically-typed language that runs on the JVM and boasts 100 percent interoperability with existing Java code. The program below should look very familiar to most Java Developers:
package com.bugsnag.kotlin;

public class App {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
And the following will print “Hello World” in Kotlin:
fun main(args: Array<String>) {
    println("Hello World!")
}
A few differences are obvious, such as the lack of semicolons and how concise our code is.
Kotlin Vs. Java
To get a feel for Kotlin, let’s take a closer look at its features and how they compare to Java.
Null Safety
We'll start by exploring one of the most useful features of Kotlin — its support for null safety. In Java, any object can be null. This means that runtime checks must be added throughout a codebase in order to prevent NullPointerException crashes; the null reference has often been called a Billion Dollar Mistake by language designers.
static class User {
    String name;
}

public void printUsername(User user) {
    if (user.name != null) {
        foo(user.name.length());
    }
}
In Kotlin, references to objects must either be nullable or non-null:
class User(var name: String?) // the name property can be null class User(var name: String) // the name property cannot be null
If a developer attempted to pass a nullable object to the second class, a compile-time error would occur.
Safe Call Operator
The following will be very familiar to most Java developers. The user parameter may be null, so a runtime check is required to ensure an NPE is avoided.
void printUsername(User user) {
    if (user.getName() != null) {
        foo(user.getName().length());
    } else {
        foo(null); // supply a null Integer
    }
}

void foo(Integer length) {}
Kotlin can simplify this with the Safe Call operator. If name is not null, then its length will be passed as an argument. Otherwise, a null reference will be passed.
fun printUsername(user: User) {
    foo(user.name?.length) // returns null if user.name is null
}

fun foo(length: Int?) {}
Alternatively, if it didn't make sense to execute code when a value was null, we could use let:
fun foo(nullableUser: User?) {
    nullableUser?.let { printUsername(nullableUser) } // only print non-null usernames
}

fun printUsername(user: User) {} // User is a non-null reference
Class Definitions
Kotlin classes are incredibly concise compared to Java. The following one-line class defines three fields; the equivalent Java class, with its getters and setters, would be over 30 lines long!
class User(val name: String, var age: Int = 18, var address: String?)
Immutable references are also much easier. It's simply a matter of switching from the var keyword to val.
Data Classes
Things get even more concise if the primary purpose of our class is to hold data, such as a JSON payload from an API. In Kotlin, these are known as data classes.
data class User(val name: String, var age: Int = 18, var address: String?)
Just adding the data keyword will automatically generate equals(), hashCode(), toString(), and copy() implementations for our class. The equivalent Java implementation of this class is omitted to save both reader sanity and our bandwidth costs.
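A quick sketch of what those generated members buy us, using the data class above:

```kotlin
data class User(val name: String, var age: Int = 18, var address: String?)

fun main() {
    val u = User("Ada", 30, null)

    // generated toString()
    println(u)                          // User(name=Ada, age=30, address=null)

    // generated equals() compares by field values, not identity
    println(u == User("Ada", 30, null)) // true

    // generated copy() clones the object, changing only what we name
    val older = u.copy(age = 31)
    println(older.age)                  // 31
}
```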
Type Inference
Kotlin uses type inference, which further increases its brevity. Consider this mouthful of a Java class:
class AbstractSingletonProxyFactoryBean { }

public void foo() {
    AbstractSingletonProxyFactoryBean bean = new AbstractSingletonProxyFactoryBean();
}
Whereas the equivalent in Kotlin would look like this:
class AbstractSingletonProxyFactoryBean

fun foo() {
    val bean = AbstractSingletonProxyFactoryBean() // type automatically inferred
}
Functions
Type inference permeates throughout the language. It is possible to be either explicit or implicit when required, as shown by the two approaches to defining the same function below:
int add(int a, int b) {
    return a + b;
}
fun add(a: Int, b: Int): Int { // explicit return type
    return a + b
}

fun add(a: Int, b: Int) = a + b // inferred return type
Properties
Kotlin Properties are simply awesome. Consider the following Java class, which defines a single field with accessor methods:
class Book {
    String author;

    String getAuthor() {
        return author;
    }

    void setAuthor(String author) {
        this.author = author;
    }
}

Book book = new Book();
book.setAuthor("Kurt Vonnegut");
System.out.println(book.getAuthor());
Equivalent functionality can be achieved in four lines of Kotlin by defining a class that declares an author property. Our getters and setters will automatically be generated:
class Book(var author: String? = null)

val book = Book()
book.author = "Kurt Vonnegut"
println(book.author)
Custom Accessors
If custom behavior is required for getters and setters, it's possible to override the default behavior. For example:

var name: String = ""
    set(value) {
        if ("Santa Claus".equals(value)) field = "Ho Ho Ho"
    }
Interoperability
Another advantage of Kotlin is that it can be called from Java code, and vice versa, from within the same project.
Extension Functions
Utility or helper classes will look very familiar to all Java developers: a static method performs some useful operation that isn't available in the Java standard library and is called across the codebase. In Kotlin, we can instead add a sortStringChars function directly to the String class:
fun String.sortStringChars(): String {
    val chars = this.toCharArray()
    Arrays.sort(chars)
    return String(chars)
}

fun main(args: Array<String>) {
    "azbso".sortStringChars() // returns "abosz"
}
This results in a far more readable syntax — but beware. With great power comes great responsibility.
Functional Programming
Kotlin fully supports lambda expressions. Limited Java 8 support has only just been added to Android, which makes Kotlin’s functional programming features particularly welcome.
// filter a list for all authors whose name starts with 'J'
val input = listOf("JK Rowling", "Charles Darwin")
val authors = input.filter { author -> author.startsWith("J") }
println(authors) // [JK Rowling]
It's also possible to use constructs such as filter and map directly on Collections, which, again, is not currently supported on most Android devices.
val cereals = listOf("Kellogs Coroutines", "Cocoa Pods", "Locky Charms")
cereals.toObservable() // perform some intensive/complex computation on background thread
    .subscribeBy(onNext = {
        println(it) // observe each cereal on the main thread and print it
    })
Beyond the JVM, there are also Kotlin Native and Kotlin JavaScript. An honorable mention should go to Gradle Script Kotlin, which brings all the benefits of static typing to the existing Gradle DSL, and Spring Boot, which provides official support for Kotlin as of 1.5.
Potential Downsides
Why Kotlin Beats Java
Let’s summarize some of the main advantages of Kotlin:
- Kotlin is far more concise than Java
- Lambdas and functional constructs have been supported out of the box for years
- 100 percent interoperability with existing Java code
- Kotlin practically eradicates one of the most common Java errors, the dreaded NullPointerException
- IntelliJ IDEA provides great tooling support
- The language has been dogfooded from the ground-up, and as a result, it feels like a language designed by someone who programs in it every day.
Advice on Getting Started With Kotlin
Published at DZone with permission of Jamie Lynch , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/a-beginners-guide-to-kotlin
|
CC-MAIN-2020-34
|
refinedweb
| 1,177
| 55.64
|
Apple keeps breaking AirPrint on older gear for no reason, and I keep fixing it.
Nearly two years ago I had a go at making it work with Leopard, and in the meantime a few things changed - I upgraded my home server setup to Snow Leopard, iOS is now in its sixth revision, and I also changed printers.
AirPrint is not something I need to use often (if at all), but it’s handy enough for me to want it to work. It’s basically a set of tweaked CUPS filters and Bonjour announcements, and it’s somewhat aggravating that Apple can’t seem to bother to retrofit it to older Macs when it’s an altogether trivial hack.
Picking up where we left off earlier, to get this working on Snow Leopard I first added the “new” MIME types to CUPS (inspired by this post, which was my first and last stop while investigating the status quo):
sudo launchctl stop org.cups.cupsd
sudo sh -c "echo 'image/urf urf string(0,UNIRAST<00>)' > /usr/share/cups/mime/airprint.types"
sudo sh -c "echo 'image/urf application/pdf 100 cgpdftoraster' > /usr/share/cups/mime/airprint.convs"
sudo launchctl start org.cups.cupsd
However, I'm pretty sure this isn't actually used, since I've yet to see my iOS devices send URF data to the printer - and because there is no apparent way to tweak the way CUPS registers printers in Bonjour, I set up a duplicate service with the URF record set to none like I did earlier - but this time, I whipped up the requisite Python script:
from subprocess import Popen, PIPE
from signal import alarm, signal, SIGALRM

class Alarm(Exception):
    pass

def handler(signum, frame):
    # map the signal to an exception
    raise Alarm

def dnssd(params, pattern='local.', timeout=3):
    h = Popen('dns-sd %s' % params, shell=True, stdout=PIPE)
    # set up a timeout
    if timeout:
        signal(SIGALRM, handler)
        alarm(timeout)
    result = []
    try:
        # readline will block when dns-sd enters its loop
        while True:
            line = h.stdout.readline()
            if pattern in line:
                result.append(line)
    except Alarm:
        h.kill()
    return result

# grab all instances of _ipp._tcp services
printers = map(
    lambda entry: entry.split('_ipp._tcp.')[1].strip(),
    dnssd('-B _ipp._tcp.', 'local.', 3)
)

# tack on 'URF=none' to each service info
entries = []
for p in printers:
    entries.extend(map(
        lambda record: (p.split("@")[0].split(' ')[0], record.strip() + ' URF=none'),
        dnssd('-L "%s" _ipp' % p, 'txtvers', 1)
    ))

# now advertise the first entry (advertising more than one would require
# something like multiprocessing.Pool() or just plain exec(), and I have
# just the one printer...)
dnssd('-R "%s AirPrint" _ipp._tcp,_universal local. 631 %s' % entries[0], timeout=0)
This is somewhat of a hack since dns-sd isn't really supposed to be used this way - but it's simpler and easier to understand than my attempt at doing the same with ctypes.
You can grab the script here, and if you right-click on it and pick the Applet Builder you'll get an application bundle you can toss into your Login Items and forget about.
I’ve also tinkered with the notion of setting up an AirPrint renderer/redirector on a Raspberry Pi, and will probably end up doing that later - it should be pretty much the same thing, really, except you’ll want to look at airprint-generate to make it easier to set up the Bonjour announcements.
|
https://taoofmac.com/space/blog/2012/12/15/1830
|
CC-MAIN-2018-47
|
refinedweb
| 578
| 58.72
|
Visual Studio Productivity Tips
In this episode, Robert is joined by Allison Bucholtz-Au, who shows us how IntelliSense cuts down the number of keystrokes required to write code in Visual Studio. Even if you have been using IntelliSense for years, you are sure to see a thing or two you didn't know it could do.
She is my fav now. She is just great, smile and teach.. like that
niiiiiiiiiiice !
hmmm, so any clue that intellisense may be related contextually by the namespace and the API, say denoted by signatures of procedures, functions, parameter lists, types - anything that can be seen by reflection, including names just created. So not done by 'guessing', unless you consider the modern take on AI assisted coding.
|
https://channel9.msdn.com/Shows/Visual-Studio-Toolbox/More-Code-Less-Typing?WT.mc_id=DX_MVP4025064
|
CC-MAIN-2019-43
|
refinedweb
| 123
| 62.38
|
In my “NetCF v2 now supports NTLM” post, I mentioned that the 3 ways to gain NTLM support with existing code are:
- Building your application against the v2 beta
- Uninstall v1 from your device (not possible on PPC 2003 devices, as v1 is installed in ROM)
- Construct an application configuration file for your executable, instructing it to run against the v2 beta.
How do I have my app run with the newest (highest version number) framework installed? And I mean at all times, not just today, so when 3.0 is released it will run under 3.0.
Jerry, Wildcards are not supported. The NetCF supportedRuntime feature mirrors that of the full framework (specific build numbers only). You can, however, list more than one supportedVersion element. I created the following sample listing two fictitious builds of NetCF. My application correctly falls back to v2.0.4135 since that is listed and installed on the device.
<configuration>
<startup>
<supportedRuntime version="v2.0.7766" />
<supportedRuntime version="v2.0.5555" />
<supportedRuntime version="v2.0.4135" />
</startup>
</configuration>
The following link is to the .NET Framework documentation related to the supportedRuntime element:
— DK
Heya,
after installing the 2.0 beta SDK on my PC, and the runtime on my Ipaq, VS .NET 2003 can no longer debug the application. If I remove the config file so it can use the older version of the framework I can debug again.
Is there are work around for this, apart from installing a beta of Visual Studio 2005?
Thanks,
Mikal
Mikal,
Visual Studio .NET 2003 only knows how to debug NetCF v1. If you use a configfile to force your application to run against NetCF v2, you will either need to debug using the Visual Studio 2005 beta or cordbg (from the v2 SDK beta).
— DK
I want to use the System.Configuration namespace to get settings out of the app.config file, but a smart device application does not seem to allow that… why not?
|
https://blogs.msdn.microsoft.com/davidklinems/2004/08/03/how-do-i-force-my-netcf-v1-application-to-run-against-netcf-v2/
|
CC-MAIN-2017-47
|
refinedweb
| 330
| 67.15
|
Back to Chapter 4 -- Index -- Chapter 6

y = *ip + 1

takes whatever ip points at, adds 1, and assigns the result to y, while
*ip += 1

increments what ip points to, as do

++*ip and (*ip)++

Finally, since pointers are variables, they can be used without dereferencing. For example, if iq is another pointer to int,

iq = ip

copies the contents of ip into iq, thus making iq point to whatever ip pointed to.
swap(a, b);

where the swap function is defined as

void swap(int x, int y)  /* WRONG */
{
    int temp;

    temp = x;
    x = y;
    y = temp;
}

Because of call by value, swap can't affect the arguments a and b in the routine that called it; it merely swaps copies of a and b.
Pointer arguments enable a function to access and change objects in the function that called it. As an example, the following loop fills an array with integers by calls to getint, which performs free-format input conversion and returns EOF at end of input:
int n, array[SIZE], getint(int *);

for (n = 0; n < SIZE && getint(&array[n]) != EOF; n++)
    ;

Each call sets array[n] to the next integer found in the input and increments n. Notice that it is essential to pass the address of array[n] to getint. Otherwise there is no way for getint to communicate the converted integer back to the caller.
The expression pa+i points i elements after pa, and pa-i points i elements before. Thus, if pa points to a[0],

*(pa+1)

refers to the contents of a[1], pa+i is the address of a[i], and *(pa+i) is the contents of a[i]. Since the name of an array is a synonym for the location of its initial element, the expressions &a[i] and a+i are also identical.
/* strlen: return length of string s */
int strlen(char *s)
{
    int n;

    for (n = 0; *s != '\0'; s++)
        n++;
    return n;
}

Since s is a pointer, the calls

strlen("hello, world");  /* string constant */
strlen(array);           /* char array[100]; */
strlen(ptr);             /* char *ptr; */

all work. It is also possible to pass part of an array to a function, by passing a pointer to the beginning of the subarray. For example, f(&a[2]) and f(a+2) both pass to the function f the address of the subarray that starts at a[2]. Within f, the parameter declaration can read

f(int arr[]) { ... }

or

f(int *arr) { ... }

So as far as f is concerned, the fact that the parameter refers to part of a larger array is of no consequence.
#define ALLOCSIZE 10000 /* size of available space */

The symbolic constant NULL is often used in place of zero, as a mnemonic to indicate more clearly that this is a special value for a pointer. NULL is defined in <stdio.h>. We will use NULL henceforth.
Tests like
if (allocbuf + ALLOCSIZE - allocp >= n) { /* it fits */and
if (p >= allocbuf && p < allocbuf + ALLOCSIZE)show several important facets of pointer arithmetic. First, pointers may be compared under certain circumstances. If p and q point to members of the same array, then relations like ==, !=, <, >=, etc., work properly. For example,
p < q

is true if p points to an earlier element of the array than q does. Any pointer can be meaningfully compared for equality or inequality with zero. Second, pointer arithmetic is consistent: it would be straightforward to write a version of alloc that maintains floats instead of chars, merely by changing char to float throughout alloc and afree. All the pointer manipulations automatically take into account the size of the objects pointed to. It is not legal to add two pointers, to multiply or divide or shift or mask them, to add float or double to them, or even, except for void *, to assign a pointer of one type to a pointer of another type without a cast.
Consider the array-subscript version of strcpy:

/* strcpy: copy t to s; array subscript version */
void strcpy(char *s, char *t)
{
    int i;

    i = 0;
    while ((s[i] = t[i]) != '\0')
        i++;
}

String comparison is performed by strcmp:

/* strcmp: return <0 if s<t, 0 if s==t, >0 if s>t */
int strcmp(char *s, char *t)
{
    int i;

    for (i = 0; s[i] == t[i]; i++)
        if (s[i] == '\0')
            return 0;
    return s[i] - t[i];
}

Since pointers are themselves variables, they can be stored in an array. Two lines can be compared by passing their pointers to strcmp. When two out-of-order lines have to be exchanged, the pointers in the pointer array are exchanged, not the text lines themselves. The quicksort for lines of text looks like this:

/* qsort: sort v[left]...v[right] into increasing order */
void qsort(char *v[], int left, int right)
{
    int i, last;
    void swap(char *v[], int i, int j);

    if (left >= right)  /* do nothing if array contains */
        return;         /* fewer than two elements */
    swap(v, left, (left + right)/2);
    last = left;
    for (i = left+1; i <= right; i++)
        if (strcmp(v[i], v[left]) < 0)
            swap(v, ++last, i);
    swap(v, left, last);
    qsort(v, left, last-1);
    qsort(v, last+1, right);
}

Similarly, the swap routine needs only trivial changes:

/* swap: interchange v[i] and v[j] */
void swap(char *v[], int i, int j)
{
    char *temp;

    temp = v[i];
    v[i] = v[j];
    v[j] = temp;
}
The syntax is similar to previous initializations:

char *name[] = { "Illegal month", "Jan", "Feb", "Mar" };

An important advantage of the pointer array is that the rows of the array may be of different lengths. That is, each element of name need not point to a string of the same length. Compare the declarations for the pointer array
with those for a two-dimensional array:
char aname[][15] = { "Illegal month", "Jan", "Feb", "Mar" };
Exercise 5-9. Rewrite the routines day_of_year and month_day with pointers instead of indexing.
The simplest illustration is the program echo, which echoes its command-line arguments on a single line, separated by blanks. That is, the command
echo hello, world

prints the output
hello, world

By convention, argv[0] is the name by which the program was invoked, so argc is at least 1.

Exercise 5-10. Write the program expr, which evaluates a reverse Polish expression from the command line, where each operator or operand is a separate argument. For example,
expr 2 3 4 + *evaluates 2 * (3+4).
Exercise 5-11. Modify the program entab and detab (written as exercises in Chapter 1) to accept a list of tab stops as arguments. Use the default tab settings if there are no arguments.
Exercise 5-12. Extend entab and detab to accept the shorthand
entab -m +nto mean tab stops every n columns, starting at column m. Choose convenient (for the user) default behavior.
Exercise 5-13. Write the program tail, which prints the last n lines of its input. By default, n is set to 10, let us say, but it can be changed by an optional argument so that
tail -n

prints the last n lines.

The sort program can be extended to sort numerically by adding a function numcmp, which compares two lines on the basis of numeric value and returns the same kind of condition indication as strcmp does. These functions are declared ahead of main:

int readlines(char *lineptr[], int nlines);
void writelines(char *lineptr[], int nlines);
void qsort(void *lineptr[], int left, int right,
           int (*comp)(void *, void *));
int numcmp(char *, char *);

/* sort input lines */
main(int argc, char *argv[])
{
    ...
}

Exercise 5-14. Modify the sort program to handle a -r flag, which indicates sorting in reverse (decreasing) order. Be sure that -r works with -n.
Exercise 5-15. Add the option -f to fold upper and lower case together, so that case distinctions are not made during sorting; for example, a and A compare equal.
Exercise 5-16. Add the -d (``directory order'') option, which makes comparisons only on letters, numbers and blanks. Make sure it works in conjunction with -f.
Exercise 5-17. Add a field-searching capability, so sorting may bee done on fields within lines, each field sorted according to an independent set of options. (The index for this book was sorted with -df for the index category and -n for the page numbers.)
int *f();    /* f: function returning pointer to int */

and

int (*pf)(); /* pf: pointer to function returning int */

illustrate the problem: * is a prefix operator and it has lower precedence than (), so parentheses are necessary to force the proper association.
Although truly complicated declarations rarely arise in practice, it is important to know how to understand them, and, if necessary, how to create them. One good way to synthesize declarations is in small steps with typedef, which is discussed in Section 6.7. As an alternative, in this section we will present a pair of programs that convert from valid C to a word description and back again. The word description reads left to right.
The first, dcl, is the more complex. It converts a C declaration into a word description, as in these examples:

char **argv
    argv: pointer to pointer to char
int (*daytab)[13]
    daytab: pointer to array[13] of int
char (*(*x[3])())[5]
    x: array[3] of pointer to function returning pointer to array[5] of char

dcl is based on the grammar that specifies a declarator, which is spelled out precisely in Appendix A, Section 8.5; this is a simplified form:
dcl:        optional *'s direct-dcl
direct-dcl: name
            (dcl)
            direct-dcl()
            direct-dcl[optional size]

In words, a dcl is a direct-dcl, perhaps preceded by *'s. A direct-dcl is a name, or a parenthesized dcl, or a direct-dcl followed by parentheses, or a direct-dcl followed by brackets with an optional size.
This grammar can be used to parse declarations. For instance, consider this declarator:
(*pfa[])()pfa will be identified as a name and thus as a direct-dcl. Then pfa[] is also a direct-dcl. Then *pfa[] is recognized as a dcl, so (*pfa[]) is a direct-dcl. Then (*pfa[])() is a direct-dcl and thus a dcl. We can also illustrate the parse with a tree like this (where direct-dcl has been abbreviated to dir-dcl):
The heart of the dcl program is a pair of functions, dcl and dirdcl, that parse a declaration according to this grammar. Because the grammar is recursively defined, the functions call each other recursively as they recognize pieces of a declaration; the program is called a recursive-descent parser.
/* dcl: parse a declarator */
void dcl(void)
{
    int ns;

    for (ns = 0; gettoken() == '*'; )   /* count *'s */
        ns++;
    dirdcl();
    while (ns-- > 0)
        strcat(out, " pointer to");
}

/* dirdcl: parse a direct declarator */
void dirdcl(void)
{
    int type;

    if (tokentype == '(') {             /* ( dcl ) */
        dcl();
        if (tokentype != ')')
            printf("error: missing )\n");
    } else if (tokentype == NAME)       /* variable name */
        strcpy(name, token);
    else
        printf("error: expected name or (dcl)\n");
    while ((type = gettoken()) == PARENS || type == BRACKETS)
        if (type == PARENS)
            strcat(out, " function returning");
        else {
            strcat(out, " array");
            strcat(out, token);
            strcat(out, " of");
        }
}

Since the programs are intended to be illustrative, not bullet-proof, there are significant restrictions on dcl. It can only handle a simple data type like char or int. It does not handle argument types in functions, or qualifiers like const. Spurious blanks confuse it. It doesn't do much error recovery, so invalid declarations will also confuse it. These improvements are left as exercises.
Here are the global variables and the main routine:
#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MAXTOKEN 100

enum { NAME, PARENS, BRACKETS };

void dcl(void);
void dirdcl(void);
int gettoken(void);

int tokentype;            /* type of last token */
char token[MAXTOKEN];     /* last token string */
char name[MAXTOKEN];      /* identifier name */
char datatype[MAXTOKEN];  /* data type = char, int, etc. */
char out[1000];

main()  /* convert declaration to words */
{
    while (gettoken() != EOF) {    /* 1st token on line */
        strcpy(datatype, token);   /* is the datatype */
        out[0] = '\0';
        dcl();                     /* parse rest of line */
        if (tokentype != '\n')
            printf("syntax error\n");
        printf("%s: %s %s\n", name, out, datatype);
    }
    return 0;
}

The function gettoken skips blanks and tabs, then finds the next token in the input; a ``token'' is a name, a pair of parentheses, a pair of brackets perhaps including a number, or any other single character.
int gettoken(void)  /* return next token */
{
    int c, getch(void);
    void ungetch(int);
    char *p = token;

    while ((c = getch()) == ' ' || c == '\t')
        ;
    if (c == '(') {
        if ((c = getch()) == ')') {
            strcpy(token, "()");
            return tokentype = PARENS;
        } else {
            ungetch(c);
            return tokentype = '(';
        }
    } else if (c == '[') {
        for (*p++ = c; (*p++ = getch()) != ']'; )
            ;
        *p = '\0';
        return tokentype = BRACKETS;
    } else if (isalpha(c)) {
        for (*p++ = c; isalnum(c = getch()); )
            *p++ = c;
        *p = '\0';
        ungetch(c);
        return tokentype = NAME;
    } else
        return tokentype = c;
}

getch and ungetch are discussed in Chapter 4.
The inverse program, undcl, converts a word description, which we express in an abbreviated form such as

    x () * [] * () char

into the corresponding declaration

    char (*(*x())[])()

The abbreviated input syntax lets us reuse the gettoken function. undcl also uses the same external variables as dcl does.
/* undcl: convert word descriptions to declarations */
main()
{
    int type;
    char temp[MAXTOKEN];

    while (gettoken() != EOF) {
        strcpy(out, token);
        while ((type = gettoken()) != '\n')
            if (type == PARENS || type == BRACKETS)
                strcat(out, token);
            else if (type == '*') {
                sprintf(temp, "(*%s)", out);
                strcpy(out, temp);
            } else if (type == NAME) {
                sprintf(temp, "%s %s", token, out);
                strcpy(out, temp);
            } else
                printf("invalid input at %s\n", token);
    }
    return 0;
}

Exercise 5-18. Make dcl recover from input errors.
Exercise 5-19. Modify undcl so that it does not add redundant parentheses to declarations.
Exercise 5-20. Expand dcl to handle declarations with function argument types, qualifiers like const, and so on.
Back to Chapter 4 -- Index -- Chapter 6
From: Bronek Kozicki (brok_at_[hidden])
Date: 2005-11-04 05:06:22
David Abrahams <dave_at_[hidden]> wrote:
> Do you understand what _CRT_NOFORCE_MANIFEST really does?
No :-( I see this has been raised already on boost-testing
As it turned out in discussion with Martyn Lovell, mixing C headers with
_DEBUG and without this symbol *is* a bug, at least in Visual C++, and
this is exactly what we are doing in wrap_python.hpp . We have a block
of code where we #undef _DEBUG, then #include Python headers (which in
turn #include some C headers) and several C headers too, and then we
#define _DEBUG back. Any following #include of C header will trigger the
error in vc8, and is a (silent one!) source of problems in previous
versions of MSVC. What I think we could do is to #include all required C
headers *before* #undef _DEBUG, and depend on their own inclusion guards
(or #pragma once) to prevent them being parsed in the following part of
the code where _DEBUG is #undef-ed. This means modifying part of
wrap_python.hpp to something like (I will test it right away):
#ifdef _DEBUG
# ifndef BOOST_DEBUG_PYTHON
# ifdef _MSC_VER
# include <io.h>
# include <stdio.h>
# include <limits.h>
# include <float.h>
# include <basetsd.h>
# include <string.h>
# include <errno.h>
# include <stdlib.h>
# include <unistd.h>
# include <stddef.h>
# include <assert.h> // possibly more headers, to be verified
# endif
# undef _DEBUG // Don't let Python force the debug library
# define DEBUG_UNDEFINED_FROM_WRAP_PYTHON_H
# endif
#endif
// No changes below this line
Dave, what do you think?
As it currently stands (without this or a similar fix) we are actually
injecting a bug into all MSVC projects (not only vc8 - it's only the
protection built into the vc8 headers that exposed the problem) that happen
to #include wrap_python.hpp . I think this is a showstopper for 1.33.1,
even though regression tests are currently all green for other versions
of MSVC.
B.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
lp:~canonical-ci-engineering/uci-engine/doc-msging
Created by Vincent Ladeuil on 2015-01-13 and last modified on 2015-01-15
- Get this branch:
- bzr branch lp:~canonical-ci-engineering/uci-engine/doc-msging
Members of Canonical CI Engineering can upload to this branch.
Branch merges
- Francis Ginther: Needs Fixing on 2015-01-15
- Paul Larson: Needs Fixing on 2015-01-13
- Diff: 293 lines (+251/-1), 4 files modified:
docs/conf.py (+2/-1)
docs/index.rst (+2/-0)
docs/messaging.rst (+34/-0)
docs/snappy.rst (+213/-0)
Related bugs
Related blueprints
Branch information
- Owner:
- Canonical CI Engineering
- Project:
- Ubuntu CI Engine
- Status:
- Development
Recent revisions
- 933. By Francis Ginther on 2015-01-15
Remove 'Workflow Definition' and replace with description using default queue names.
- 932. By Vincent Ladeuil on 2015-01-14
Clarify progress queue role.
- 931. By Vincent Ladeuil on 2015-01-14
s/test_source/test_source_branch/
- 930. By Vincent Ladeuil on 2015-01-14
Use the already existing uci/ namespace for glance images.
- 929. By Vincent Ladeuil on 2015-01-14
s/image_version/image_revision/ as it's the established word.
- 928. By Vincent Ladeuil on 2015-01-14
Rename image.builds queue to snappy.images.
Define snappy request.
Mention that reduced workflow can be defined.
- 927. By Vincent Ladeuil on 2015-01-14
Fix formatting.
- 926. By Francis Ginther on 2015-01-13
Add workflow definition and message examples for snappy workflow.
- 925. By Vincent Ladeuil on 2015-01-13
Fix typo.
- 924. By Vincent Ladeuil on 2015-01-13
Describes snappy components and their queues at a high level.
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:uci-engine
import "golang.org/x/build/internal/lru"
Package lru implements an LRU cache.
Cache is an LRU cache, safe for concurrent access.
New returns a new cache with the provided maximum items.
Add adds the provided key and value to the cache, evicting an old item if necessary.
Get fetches the key's value from the cache. The ok result will be true if the item was found.
Len returns the number of items in the cache.
RemoveOldest removes the oldest item in the cache and returns its key and value. If the cache is empty, the empty string and nil are returned.
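The package lives under golang.org/x/build/internal, so it cannot be imported from outside that repository. A minimal self-contained sketch with the same surface (signatures assumed from the descriptions above, not copied from the source) looks like this:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// Cache is an LRU cache, safe for concurrent access.
type Cache struct {
	mu    sync.Mutex
	max   int
	ll    *list.List               // front = most recently used
	cache map[string]*list.Element // key -> list element
}

type entry struct {
	key   string
	value interface{}
}

// New returns a new cache with the provided maximum items.
func New(max int) *Cache {
	return &Cache{max: max, ll: list.New(), cache: make(map[string]*list.Element)}
}

// Add adds the provided key and value to the cache, evicting an old item if necessary.
func (c *Cache) Add(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.cache[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).value = value
		return
	}
	c.cache[key] = c.ll.PushFront(&entry{key, value})
	if c.ll.Len() > c.max {
		c.removeOldestLocked()
	}
}

// Get fetches the key's value from the cache; ok is true if the item was found.
func (c *Cache) Get(key string) (value interface{}, ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, hit := c.cache[key]; hit {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).value, true
	}
	return nil, false
}

// Len returns the number of items in the cache.
func (c *Cache) Len() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.ll.Len()
}

// RemoveOldest removes the oldest item and returns its key and value.
// If the cache is empty, the empty string and nil are returned.
func (c *Cache) RemoveOldest() (string, interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.removeOldestLocked()
}

func (c *Cache) removeOldestLocked() (string, interface{}) {
	el := c.ll.Back()
	if el == nil {
		return "", nil
	}
	c.ll.Remove(el)
	ent := el.Value.(*entry)
	delete(c.cache, ent.key)
	return ent.key, ent.value
}

func main() {
	c := New(2)
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // evicts "a", the least recently used
	_, ok := c.Get("a")
	fmt.Println(ok, c.Len())
}
```

The container/list plus map combination is the standard Go idiom for LRU caches; the mutex gives the "safe for concurrent access" property the documentation promises.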
Package lru imports 2 packages and is imported by 2 packages. Updated 2017-08-16.
i have
#include <string.h>
but it doesn't recognize strtok()
I Love Jesus
post your code
Free the weed!! Class B to class C is not good enough!!
And the FAQ is here :-
Perhaps it is not implemented? Take a look into the file string.h, I guess there should be a prototype of the function.
#include <cassert.h>
#include <conio.h>
#include <cstdlib.h>
#include <iostream.h>
#include <fstream.h>
#include <string.h>
#pragma argsused
using namespace std;
int main()
{
const int MAX_RECORD = 75;
int position;
char inChar;
char quit;
string seq_line;
struct myRecord
{
char title[35];
char system[4];
double score;
};
fstream seq_in("sequential_file.txt", ios::in);
fstream rel_file("relative_file.txt", ios::in | ios::out);
for (position = 1; position <=MAX_RECORD; ++position)
{
getline(seq_in, seq_line);
strtok(position, '|');
cout << seq_line << "\n";
}
cin >> quit;
}
As I thought, you are not using strtok() correctly.
The first time you call strtok to tokenise a string you pass the address of the string to be tokenised. Subsequent calls to strtok pass NULL and this tells strtok to keep tokenising the same string.
here is an example...
also your headers are wrong....

Code:

/* strtok()'s a string and returns a singly linked list of the tokens */
Node* Tokenise(char CopyString[], char* Delim)
{
    size_t Len;
    char* TokenPtr;
    Node* Head = NULL;

    TokenPtr = strtok(CopyString, Delim);
    Len = strlen(TokenPtr) + SPACE;
    Head = AddNode(Head, TokenPtr, Len);
    while ((TokenPtr = strtok(NULL, Delim)) != NULL) {
        Len = strlen(TokenPtr) + SPACE;
        Head = AddNode(Head, TokenPtr, Len);
    }
    return Head;
}
you should be using
<cassert> not <cassert.h>
<cconio> not <conio.h> be aware that this is not a standard header
<cstdlib> not <cstdlib.h>
<iostream> not <iostream.h>
<fstream> not <fstream.h>
<cstring> not <string.h>
i know those header files are wrong. i usually use the non-.h ones, but for some reason i didn't here. thanks.
Created on 2003-11-10 11:32 by dcjim, last changed 2010-07-23 09:56 by mark.dickinson. This issue is now closed.
You can't use iterators on weakref dicts because items
might be removed from the dictionaries while iterating
due to GC.
I've attached a script that illustrates the bug with
Python 2.3.2. It doesn't matter whether you use weak
key or weak value dicts.
If this can't be fixed, then the iteration methods should
either be removed or made to (lamely) create intermediate
lists to work around the problem.
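The "intermediate list" workaround mentioned above is what user code has to do on affected versions: materialise the live items before iterating, so a GC run cannot shrink the dict mid-loop. A small sketch (class and variable names are illustrative, not from the bug report):

```python
import weakref


class Obj:
    """Plain object; instances can be the targets of weak references."""


objs = [Obj() for _ in range(100)]
d = weakref.WeakValueDictionary(dict(enumerate(objs)))

# Unsafe on affected versions: iterating d directly can raise
# "dictionary changed size during iteration" if GC collects a value.
#
# Safe: snapshot the items into a plain list first.  The (key, value)
# pairs hold strong references, so nothing in the snapshot can vanish
# while we walk it.
items = list(d.items())
for key, value in items:
    assert d[key] is value
```

The patch discussed below takes the other route: it delays the cleanup of dead weakrefs while any iterator over the dict is alive, so plain iteration becomes safe.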
I made a patch to fix the problem. The cleaning up of the weakref keys or
values will be held until all references to iterators created by the
weakdict are dead.
I also couldn't resist removing duplication of code in items(),
keys() and values().
At first, I couldn't understand why this whole remove(), _remove() and
selfref() mechanism was in place. I had removed them and replaced them
with methods, and the tests still passed. Then I realized it was to make
sure keys and values didn't prevent the weak dicts from being freed. I
added tests for this.
Patch has tests, may need updating.
Interesting patch. I think the intermediate assertEquals in
test_weak_*_dict_flushed_dead_items_when_iters_go_out are just testing
an implementation detail, only the final one should remain.
Also, it is likely the "code duplication" you are talking about was
there for performance reasons, so I'd suggest putting it back in.
About duplicated code and performance:
When I look at the duplicated code, I don't see anything that remotely
looks like a performance tweak. Just to make sure, I made a bench:
#!/usr/bin/env python
import sys
sys.path.insert(0, 'Lib')
import timeit
import weakref
class Foo(object): pass
def setup():
L = [Foo() for i in range(1000)]
global d
d = weakref.WeakValueDictionary(enumerate(L))
del L[:500] # have some dead weakrefs
def do():
d.values()
print timeit.timeit(do, setup, number=100000)
Results without the patch:
./python.exe weakref_bench.py
0.804216861725
Results with the patch:
$ ./python.exe weakref_bench.py
0.813000202179
I think the small difference in performance is more attributable to the
extra processing the weakref dict does than the deduplication of the
code itself.
Oh, that's me again not correctly reading my own tests. It's the
*_are_not_held_* tests that test that no reference is kept.
I agree about the *_flushed_dead_items_* being an implementation detail.
> Results without the patch:
> ./python.exe weakref_bench.py
> 0.804216861725
>
> Results with the patch:
> $ ./python.exe weakref_bench.py
> 0.813000202179
Thanks for the numbers, I see my worries were unfounded.
I was talking about doing `self.assertEqual(len(d), self.COUNT)` before
deleting the iterators.
I can confirm that the patch applies with minimal fuzz to the
release26-maint branches and the trunk, and that the added tests fail
without the updated implementation in both cases.
Furthermore, Jim's original demo script emits it error with my stock 2.6.5
Python, but is silent with the patched trunk / 2.6 branch.
Probably old news, but this also affects 2.5.4.
If this is to go forward the patch will need porting to 2.7, 3.1 and 3.2
It looks like this issue has been fixed in issue7105 already. Can we close this ticket?
It's not yet fixed in 2.7 or 2.6. Updating versions.
We might as well backport Antoine's patch rather than take this one (even if mine for 2.x already). It would be weird to have 2 wildly different patches to solve the same problem.
Maybe close this ticket and flag issue7105 for backporting?
Agreed.
HTML and Java: Two Sides of the Same Wicket Coin
By Geertjan on Jul 13, 2005
Each web page in Wicket is like a coin. It has two sides -- a Java class and an HTML file.
- Setting Up the Coin: HelloWorldApplication.java. The application object creates the application that contains the web pages. The application object is a Java class. The absolute minimum content of the application object is the definition of the home page. The home page is the first web page displayed by the application object.
Here the compiled HelloWorld.class web page is set as the home page:
getPages().setHomePage(HelloWorld.class);
The web page consists of two sides -- the Java component (the back or tails side) and its HTML rendering (the front or heads side). Importantly, since they are two sides of the same coin, the two sides have the same name and are stored in the same folder structure. Normally, this means that they are stored in the same package. So, in this case, the two sides are called HelloWorld.java and HelloWorld.html and are stored together in the same package.
- Tails: HelloWorld.java. Here a Label component is created:
add(new Label("message", "Hello World!"));
There are two parameters: the component identifier ("message") and the content that the Label component should render ("Hello World!").
- Heads: HelloWorld.html. A Java component is used in an HTML file:
<span wicket:id="message">Message goes here</span>
The <span> element has one attribute, in the wicket namespace: the identifier ("id") which is defined as "message". Note that the Wicket identifier in the HTML file must match the component identifier in the Java component. The "Message goes here" text is a placeholder. You could write anything you like there -- it will be replaced by the Java component.
- Flipping the Coin. A web.xml file specifies the Wicket servlet wicket.protocol.http.WicketServlet that handles requests for the application object. A server, such as Tomcat or Jetty, is needed in order to deploy the application.
To see all of the above in practice, within the context of the two Java classes and HTML file that make up the "Hello World" application, use yesterday's blog entry to set everything up in NetBeans IDE. Alternatively, use another IDE. All Wicket applications are based on the above principles -- except that most Wicket applications have more than one coin. If you have enough coins, you can create a really rich application...
After more tracking, I think the bug is on line 180 in
org.apache.cxf.jaxrs.model.URITemplate. The final group should be gotten by
the following statement:
String finalGroup = m.group(m.groupCount());
The original statement fails because it does not assume uri template could
also contains group in it.
Regards,
Rice
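The off-by-one Rice describes is easy to reproduce with plain java.util.regex: once the URI template contributes a capture group of its own, the group appended by the spec's '(/.*)?' is no longer at a fixed small index, but it is always the last one, which is why m.group(m.groupCount()) is the right way to read it. A standalone sketch (the pattern is hand-built here to mirror the template, not taken from CXF's code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FinalGroupDemo {
    public static void main(String[] args) {
        // {resource:.+\.(js|css|gif|png)} with the JAX-RS 3.7.3 step-5 suffix appended
        Pattern p = Pattern.compile("(.+\\.(js|css|gif|png))(/.*)?");
        Matcher m = p.matcher("/image/g1.png");
        if (m.matches()) {
            // 3 groups: the template's outer and inner groups shift the appended one to the end
            System.out.println(m.groupCount());
            // group 2 is the template's own inner group -- "png", the wrong thing to use as a path
            System.out.println(m.group(2));
            // the true final group is unmatched here, i.e. null -> path should be ""
            System.out.println(m.group(m.groupCount()));
        }
    }
}
```

With a fixed index instead of groupCount(), the code picks up "png" (the address extension) as the remaining path, which is exactly the symptom reported at the start of the thread.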
On Sat, Jul 10, 2010 at 10:13 AM, Rice Yeh <riceyeh@gmail.com> wrote:
> Hi,
> I traced the code to
> org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod method where the path
> is gotten from the final match group (line 284, see below). Based on the
> JAX-RS 1.0 spec, this is correct. However, on step 5 in 3.7.3 in JAX-RS
> spec, a ‘(/.*)?’ should be appended to the uri template. CXF seems missing
> this step and the final matching group becomes the address extension in my
> case. Otherwise, it should be "".
>
> Regards,
> Rice
>
> String path = values.getFirst(URITemplate.FINAL_MATCH_GROUP);
>
>
> On Sat, Jul 10, 2010 at 4:26 AM, Sergey Beryozkin <sberyozkin@gmail.com>wrote:
>
>> Hi
>>
>> On Fri, Jul 9, 2010 at 10:29 AM, Rice Yeh <riceyeh@gmail.com> wrote:
>>
>> > Hi,
>> > I have root resource class which match request's address extension. For
>> > example, htttp://localhost:8080/image/g1.png. The root resource class is
>> > successfully matched and then try to match the right method. However, in
>> > CXF, the path used to match to the methods is the extension "png". In
>> > Jboss's RESTEasy, it is the empty string "". I think the empty string is
>> > more correct.
>> >
>> > RestEasy is a good implementation all right :-)
>>
>>
>> >
>> > @Path("{resource:.+\\.(js|css|gif|png)}")
>> > public class PassThroughResource
>> > {
>> >
>> > @GET
>> > @Path("")
>> > public InputStream get(@Context UriInfo uri) {
>> > ...
>> > }
>> > }
>> >
>> >
>> I'm presuming you debugged it - how did it happen that .gif was not
>> 'eaten'
>> by the above regular expression ?
>>
>> thanks, Sergey
>>
>>
>> > Regards,
>> > Rice
>> >
>>
>
>
B
backbone
(1)
A set of nodes and their interconnecting links providing the primary data path across a network.
(2)
A central high-speed computer network that connects smaller, independent networks.
background color
In the GDDM function, the first color of the display medium; for example, black on a display or white on a printer. Contrast with
neutral color
.
background skulk time
In the Distributed Computing Environment (DCE), an automatic timer that guarantees a maximum lapse time between skulks of a Cell Directory Service (CDS) directory, regardless of other factors, such as namespace management activities and user-initiated skulks. Every 24 hours, a CDS server checks each master replica in its clearinghouse and initiates a skulk if changes were made in a replica since the last time a skulk of that replica completed successfully. See
skulk
.
back out
(1)
To remove changes from a physical file member in the inverse order from which the changes were originally made.
(2)
An operation that reverses all the changes made during the current unit of recovery or unit of work. After the operation is complete, a new unit of recovery or unit of work begins.
backout recovery
The process of returning a file to a particular point by removing journaled changes to the file. Contrast with
forward recovery
.
back up
To save some or all of the objects on a system, usually to tape or diskette, for safekeeping.
backup
(1)
Pertaining to an alternative copy used as a substitute if the original is lost or destroyed, such as a backup log.
(2)
The act of saving some or all of the objects on a system to a tape, diskette, or save file.
(3)
The tapes, diskettes, or save files with the saved objects.
(4)
For communications, see
switched network backup (SNBU)
.
(5)
In Backup Recovery and Media Services, a service that makes a duplicate copy of current direct access data on removable media for use in recovery.
backup control group
In Backup Recovery and Media Services, a group of libraries, system keywords, and lists that share common backup characteristics. The default values for a backup control group are defined in the backup policy and can be used or overridden by each backup control group.
backup list
(1)
In Backup Recovery and Media Services, a group of objects or folders that are grouped together for processing in a backup control group. Each list is assigned a unique list name.
(2)
In the Operational Assistant function, a list of libraries or folders to be saved on a regular basis, such as daily or weekly.
backup node
A cluster node on which there is a secondary copy of a cluster resource. The copy is kept current through replication. See also
primary node
and
replicate node
.
backup policy
In Backup Recovery and Media Services, a policy that is used in backup control groups. Backup policy values can be overridden at the individual backup control group level. The backup policy inherits defaults from the system policy. System policy defaults can be used or overridden in the backup policy.
Backup Recovery and Media Services (BRMS)
An IBM licensed program that provides user-modifiable backup, archive, recovery, and media management functions and policies.
bandwidth
The capacity of a communications line, normally expressed in bits per second (bps).
bar chart
In the GDDM function, a chart consisting of several bars of equal width. The value of the dependent variable is indicated by the height of each bar.
bar code
A pattern of bars of various widths containing data to be interpreted by a scanning device.
bar graph
In Performance Tools, a graph consisting of several bars of equal width. The value of the dependent variable is indicated by the height of each bar.
base
The numbering system in which an arithmetic value is represented.
base aggregate table
A target table that contains data collected at intervals from a user table or point-in-time table.
baseband
A frequency band that uses the complete bandwidth of a transmission and requires all stations in the network to participate in every transmission. See also
broadband
.
base number
In SDA, the part of a self-check field from which the check digit is calculated.
base pool
A storage area that contains all unassigned main storage on the system and whose minimum size is specified in the system value QBASPOOL. The system-recognized identifier is *BASE.
base project
In VisualAge RPG, a collection of files that make up a VRPG component.
basic assistance level
The type of displays that provides the most assistance. Basic assistance level supports the more common user and operator tasks, and does not use computer terminology.
BASIC (beginner's all-purpose symbolic instruction code)
A programming language with a small list of commands and a simple syntax, primarily designed for numeric applications.
basic characters
Frequently used double-byte characters. Contrast with
extended characters
. See also
extended character processing
.
basic conversation
In APPC, a temporary connection between an application program and an APPC session in which the user must provide all the information on how the data is formatted. Contrast with
mapped conversation
.
basic data exchange
A file format for exchanging data on diskettes or tape between systems or devices.
basic DST capability
A dedicated service tools (DST) capability used by a service representative or an experienced system user that provides access to DST functions that do not access sensitive data. See also
full DST capability
and
security DST capability
.
Basic Encoding Rules (BER)
A set of rules used to encode ASN.1 values as strings of octets.
basic information unit (BIU)
In SNA, the unit of data and control information passed between the transmission and control layers. It consists of a request or response header followed by a request or response unit.
basic input and output system (BIOS)
The personal computer code that controls the basic hardware operations of diskette drives, hard disk drives, and the keyboard on a personal computer.
basic link unit (BLU)
In SNA, the unit of data and control information transmitted over a communications line by data link control.
basic mapping support (BMS)
(1)
A CICS facility that handles data stream input and output from a terminal. Its use provides device and format independence for application programs.
(2)
In the Distributed Computing Environment (DCE), a facility that moves data streams to and from a terminal in CICS. It is an interface between CICS and its application programs. It formats input and output display data in response to BMS commands in programs.
basic rate interface (BRI)
In ISDN, an interface that provides two 64 000 bps data channels (B-channels) and one 16 000 bps signaling channel (D-channel). Also known as
2B + D
. Contrast with
primary rate interface (PRI)
.
basic telecommunications access method (BTAM)
A System/370-type access method that permits read or write communications with BSC remote devices.
batch
Pertaining to a group of jobs to be run on a computer sequentially with the same program with little or no operator action. Contrast with
interactive
.
batch accumulator
In DFU, an accumulator in which subtotals for a field are stored. Contrast with
total accumulator
.
batch device
Any device that can read serial input or write serial output, or both, but cannot be used to communicate interactively with the system. Examples of batch devices are printers, magnetic tape units, or diskette units.
batch file
A personal computer file that contains DOS commands organized for sequential processing. Batch files are identified with the .BAT file name extension.
batch job
A predefined group of processing actions submitted to the system to be performed with little or no interaction between the user and the system. Contrast with
interactive job
. See also
autostart job
,
communications job
,
prestart job
,
scheduled job
,
spooling job
, and
system job
.
batch mode
In query management, the query mode associated with a query instance that does not allow users to interact with the query commands while a procedure is running.
batch processing
A method of running a program or a series of programs in which one or more records (a batch) are processed with little or no action from the user or operator. Contrast with
interactive processing
.
batch shell
In CICS, a shell started to handle CICS interval control timer requests. The batch shell is transparent to the user; each user's program runs under its own user shell. Contrast with
user shell
.
batch subsystem
A part of main storage where batch jobs are processed.
BCC
See
block-check character (BCC)
.
B-channel
In ISDN, a duplex channel for transmitting data or digital voice across the network. Contrast with
D-channel
.
beacon message
A message frame sent repeatedly by an adapter indicating a serious network problem, such as a broken cable. See also
beaconing
.
bean
In Java, a reusable software component. Beans can be combined to create an application.
BEC
See
bus extension card (BEC)
.
BED
See
bus extension driver (BED) card
.
before-image
The contents of a record in a physical file before the data is changed by a write, an update, or a delete operation. Contrast with
after-image
.
BER
See
bus extension receiver (BER) card
or
Basic Encoding Rules
.
BEST/1 for the AS/400
The capacity planner for the AS/400 system. The BEST/1 for the AS/400 capacity planner is a function of the IBM Performance Tools licensed program.
bezel
A rim or surrounding part to keep another part.
BGU
See
Business Graphics Utility (BGU)
.
BID
(1)
In SNA, a command used to request permission to start a bracket.
(2)
In BSC, a protocol exchange in preparation for sending and receiving data. The sending station sends an ENQ character and the receiving station acknowledges receipt of the ENQ character by sending an ACK0 control character.
bidder
An SNA LU-LU half-session that is defined as requesting and receiving permission from another LU-LU half-session to begin a bracket at the start of a session. Contrast with
first speaker
. See also
bracket protocol
.
bidirectional language
The ability to write and read a language in two directions, such as from left to right and from right to left.
big endian
In the Distributed Computing Environment (DCE), an attribute of data representation that reflects how multi-octet data is stored in memory. In big endian representation, the lowest addressed octet of a multi-octet data item is the most significant. See
endian
and
little endian
.
bin
In AFP support, the standard-size paper source on the IBM 3820.
binary
(1)
Pertaining to a selection, choice, or condition that has two possible values. (I)
(2)
A numbering system with a base of two (0 and 1).
(3)
In DB2 UDB for AS/400, a data type indicating that the data is a binary number with a precision of 15 (halfword) or 31 (fullword) bits.
binary file
A file that contains codes that are not part of the ASCII character set. Binary files can utilize all 256 possible values for each byte in the file.
binary integer
In DB2 UDB for AS/400, a basic data type that can be further classified as small integer or large integer.
binary item
Numeric data that is represented internally as a number in the base 2 numbering system; internally, each bit of the item is a binary number with the sign as the far left bit.
binary large object
A binary string that contains bytes with no associated code page. Also known as
BLOB
.
binary operator
A symbol representing an operation to be performed on two data items, arrays, or expressions. The four types of binary operators are numeric, character, logical, and relational. Contrast with
unary operator
.
binary stream
In the C language, a sequence of characters that corresponds on a one-to-one basis with the characters in the file. No character translation is performed on binary streams.
binary string
In REXX, a literal string expressed using a binary (base 2) representation of a value. The binary representation is a sequence of zero or more binary digits (the characters 0 or 1) enclosed in quotation marks and followed by the character b.
binary synchronous communications (BSC)
A data communications line protocol that uses a standard set of transmission control characters and control character sequences to send binary-coded data over a communications line.
binary synchronous communications equivalence link (BSCEL) support
The intersystem communications function (ICF) support on the AS/400 system that provides binary synchronous communications with another AS/400 system, System/36, System/38, and many other BSC computers and devices.
binary timestamp
In the Distributed Computing Environment (DCE), an opaque 128-bit (16-octet) structure that represents a Distributed Time Service (DTS) time value.
bind
(1)
In DB2 UDB for AS/400, to convert the output from the SQL precompiler to a usable structure called an access plan. The process of converting is the one during which access paths to the data are selected and some authorization checking is performed. See also
automatic bind
and
dynamic bind
.
(2)
To create a program, which can be run, by combining one or more modules created by an Integrated Language Environment (ILE) compiler. See also
binder
and
binding
.
BIND command
In SNA, a command used to start a session and define the characteristics of that session. Contrast with
UNBIND command
.
binder
The system component that creates a bound program by packaging Integrated Language Environment (ILE) modules and resolving symbols passed between those modules.
binder language
A small set of commands (STRPGMEXP, EXPORT, and ENDPGMEXP) that defines the external interface (signature) for a service program. These commands cannot be run alone and are of the source type BND. See also
public interface
.
binding
(1)
The process of creating a program by packaging Integrated Language Environment (ILE) modules and resolving symbols passed between those modules.
(2)
In the Distributed Computing Environment (DCE), a relationship between a client and a server involved in a remote procedure call.
binding handle
In the Distributed Computing Environment (DCE), a reference to a binding. See
binding information
.
binding information
In the Distributed Computing Environment (DCE), information about one or more potential bindings, including a Remote Procedure Call (RPC) protocol sequence, a network address, an endpoint, at least one transfer syntax, and an RPC protocol version number. See
binding
. See also
endpoint
,
network address
,
RPC protocol
,
RPC protocol sequence
, and
transfer syntax
.
BIOS
See
basic input and output system (BIOS)
.
bit
A contraction of binary digit. Either of the binary digits, 0 or 1. Compare with
byte
.
bit data
In DB2 UDB for AS/400, data that is not associated with a coded character set; therefore, it is never converted.
bit mask
A pattern of bits designed to be logically compared to an existing bit value. The mask pattern allows only certain desired parts of the existing bit value to appear in the result of the comparison.
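A quick Python illustration of masking (with illustrative values):

```python
# Keep only the low four bits of a byte by ANDing with a mask;
# the mask lets only the desired bits appear in the result.
value = 0b10110110
mask = 0b00001111
result = value & mask
print(result)
```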
bit string
A series of bits consisting of the values 0 and 1.
BIU
See
basic information unit (BIU)
.
blank after
In RPG, an output specification option that changes the contents of a field so that it contains either zeros (if it is a numeric field) or blanks (if it is a character field) after that field is written to the output record.
BLOB
See
binary large object
.
block
(1)
A group of records that are recorded or processed as a unit.
(2)
A set of adjacent records stored as a unit on a disk, diskette, or magnetic tape
(3)
In data communications, a group of records that are received, processed, or sent as a unit.
(4)
A sequential group of statements (defined using line commands) that are processed as a unit.
(5)
In the OfficeVision program, a sequential string of text (defined using cursor-movement keys or line commands) that is processed as a unit.
(6)
In COBOL, a unit of data that is moved into or out of the computer storage.
(7)
In SEU, a group of records (defined using line commands) that are processed as a unit.
block-check character (BCC)
The BSC transmission control character that is used to determine if all of the bits that were sent were also received.
block control byte (BCB)
In a multileaving telecommunications access method, a control character used for transmission block status and sequence count.
block copy
(1)
In the OfficeVision program, to copy a sequential string of text (defined using the cursor-movement keys) from one part of a document to another part.
(2)
In SEU, to copy two or more adjoining source records from one part of a source member to another part, or from one source member to another.
block delete
(1)
In the OfficeVision program, to delete a sequential string of text (defined using the cursor-movement keys) in a document.
(2)
In SEU, to delete two or more adjoining source records from a source member.
blocked signal
See signal. Contrast with unblocked signal.
block exclude
In SEU, to exclude two or more adjoining records from the Edit or Browse display.
blocking call
In the Distributed Computing Environment (DCE), a call in which the caller is suspended until a called procedure is completed.
blocking factor
The number of records in a block. A blocking factor is calculated by dividing the size of the block by the size of the record.
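For example (with illustrative sizes), in Python:

```python
# A 4096-byte block of 128-byte records holds 32 records per block.
block_size = 4096
record_size = 128
blocking_factor = block_size // record_size
print(blocking_factor)
```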
block move
(1)
In the OfficeVision program, to move sequential strings of text (defined using the cursor-movement keys) from one part of a document to another part.
(2)
In SEU, to move two or more adjoining source records from one part of a source member to another part, or from one source member to another.
block overlay
In SEU, to overlay two or more adjoining records with other records defined by the Copy or Move line command.
block statement
In the C language, a group of data definitions, declarations, and statements appearing between a left brace and a right brace that are processed as a unit. The block statement is considered to be a single, C-language statement.
BLU
See
basic link unit (BLU)
.
BMS
See
basic mapping support (BMS)
.
BMS, minimum function
In CICS, support that is provided for 3270 displays and printers only. Minimum BMS supports extended attributes and large displays. It does not support cumulative mapping, terminal operator paging, routing, or message switching.
bookshelf
A grouping of online books within a softcopy library.
Boolean data
In COBOL, a category of data items that are limited to a value of 1 or 0.
Boolean literal
In COBOL, a literal composed of a Boolean character enclosed in double quotation marks and preceded by a B; for example, B "1". See also
literal
.
Boolean operator
In REXX, an operator each of whose operands and whose result take one of two values (0 or 1).
BOOTP
See
Bootstrap Protocol (BOOTP)
.
bootstrap
See
Bootstrap Protocol (BOOTP)
.
Bootstrap Protocol (BOOTP)
A protocol that allows a client to find both its Internet Protocol (IP) address and the name of a file from a server on the network.
border system
A system that exists within a trusted system but communicates between trusted and untrusted systems. A border system prevents security from being compromised.
both field
A field that can be used for either input data or output data.
BOT marker
See
beginning-of-tape marker (BOT marker)
.
bottleneck
In CICS, a symptom that characterizes a performance problem. It can be due to a task failing to start, failing to continue after starting, or taking a long time to complete.
bottom margin
In COBOL, an empty area that follows the page body.
boundary violation
In COBOL, an attempt to write beyond the externally defined boundaries of a sequential file.
bound program
An AS/400 object that combines one or more modules created by an Integrated Language Environment (ILE) compiler. See also
service program
.
box
In AFP Utilities, a continuous line constructing a rectangle.
bpi
Bits per inch.
bps
Bits per second.
bracket
In SNA, one or more chains of request units and their responses, representing a complete transaction, exchanged between two logical unit (LU) half-sessions. See also
RU chain
.
bracketed DBCS
A character string in which each character is represented by 2 bytes. The character string starts with a shift-out (SO) character and ends with a shift-in (SI) character. Contrast with
DBCS-graphic
.
bracket protocol
In SNA, the rules for controlling the data flow in which exchanges between the two logical unit (LU) half-sessions are achieved through the use of brackets, with one LU assigned at the beginning of the session as first speaker and the other LU as the bidder. The bracket protocol involves bracket start and stop rules. See also
first speaker
.
branch instruction
An instruction that changes the sequence of instructions processed in a computer program. The sequence of instructions continues at the address specified in the branch instruction.
breakpoint
(1)
A place in a program (specified by a command or a condition) where the system stops the processing of that program and gives control to the display station user or to a specified program.
(2)
In CoOperative Development Environment/400, a place in a program, usually specified by a command or a condition, where processing may be interrupted and control given to the workstation user or to a specified debugger program.
breakpoint program
For a batch job, a user program that can be called when a breakpoint is specified.
BRI
See
basic rate interface (BRI)
.
bridge
(1)
A device that interconnects two local area networks that use the same logical link control protocol but may use different medium access control protocols.
(2)
A device that interconnects multiple LANs (locally or remotely) that use the same logical link control protocol but that can use different medium access control protocols. A bridge forwards a frame to another bridge based on the medium access control (MAC) address.
(3)
A device that connects two or more networks; for example, an Ethernet-to-Ethernet network or Ethernet to token-ring network. A bridge stores and forwards information in packets between the networks. See also
VM/MVS bridge
.
British thermal unit (Btu)
A measurement of heat produced in one hour.
BRMS
See
Backup Recovery and Media Services (BRMS)
.
broadband
A communication channel having a wider band of frequencies than a voice-grade channel, and therefore capable of higher-speed data transmission.
broadcast
(1)
In the Distributed Computing Environment (DCE), a notification sent to all members within an arbitrary grouping, such as nodes in a network or threads in a process. See also
signal
.
(2)
The simultaneous transmission of the same data to all nodes connected to a network.
broadcast and unknown server
A server that provides necessary frame-forwarding and broadcast-related services to its clients. Each local area network (LAN) emulation domain must contain a broadcast and unknown server.
broadcast message
A message sent to all work stations.
broadcast semantics
In the Distributed Computing Environment (DCE), a form of idempotent semantics that indicates that the operation is always broadcast to all host systems on the local network, rather than delivered to a specific system. An operation with broadcast semantics is implicitly idempotent. Broadcast semantics are supported only by connectionless protocols. See
at-most-once semantics
,
idempotent semantics
, and
maybe semantics
.
browse
In MQSeries message queueing, to copy a message without removing it from the queue. See also
get
.
browse cursor
In MQSeries message queuing, an indicator used when browsing a queue to identify the message that is next in sequence.
browser
(1)
See
Web browser
.
(2)
In the Distributed Computing Environment (DCE), a Motif-based program that lets users view the contents and structure of a cell namespace.
BSC
See
binary synchronous communications (BSC)
.
BSCEL support
See
binary synchronous communications equivalence link (BSCEL) support
.
BSC 3270 device emulation
A function of the operating system that allows an AS/400 system to appear to a BSC host system as a 3274 Control Unit.
BTAM
See
basic telecommunications access method (BTAM)
.
Btu
See
British thermal unit (Btu)
.
Btu/hr
British thermal unit per hour. An English unit of measure for heat produced in one hour.
buffer
(1)
A routine or an area of storage that corrects for the different speeds of data flow or timings of events, when transferring data from one device to another.
(2)
A portion of storage used to hold input or output data temporarily.
build
In the Application Development Manager feature of the Application Development ToolSet licensed program, the procedure that processes a part into a program.
build process
In the Application Development Manager feature of the Application Development ToolSet licensed program, the procedure that determines which parts of an application have changed, and based on the relationship between those parts, compiles them in the correct order.
build report
In the Application Development Manager feature of the Application Development ToolSet licensed program, a report that describes the results of the build process. This report can be printed or viewed on a display.
built-in function
(1)
In C and CL, a predefined function, such as a commonly used arithmetic function or a function necessary to high-level language compilers (for example, a function for manipulating character strings or converting data). It is automatically called by a built-in function reference.
(2)
In REXX, a function that is supplied by a language. These functions, defined as part of the REXX language, include character manipulation, conversion, and information functions.
built-in function reference
In CL, a built-in function name, having an optional, and possibly empty, argument list that holds the value returned by the built-in function.
bullet
A heavy-dot symbol used to call attention to an item in a list or a printed passage.
bundle
A group of journal entries that are deposited together by the system.
burst
In AFP support, to separate continuous-forms paper into separate sheets.
bus
One or more conductors used for transmitting signals or power.
bus expansion
An AS/400 expansion unit that attaches to an AS/400 system unit for the purpose of increasing the number of buses on the system and which allows for additional I/O processor cards to be attached.
bus extension card (BEC)
The bus extension driver card or the bus extension receiver card.
bus extension driver (BED) card
See bus extension card (BEC).
bus extension receiver (BER) card
See bus extension card (BEC).
Business Conferencing
See
Ultimedia Business Conferencing
.
business graphics
See
graphics
.
Business Graphics Utility (BGU)
The IBM licensed program that can be used to design, plot, display, and print business charts.
business intelligence
Software products and services that are used to gather, manage, analyze, and disseminate information for making strategic business decisions.
business management
In System Manager, the discipline that encompasses inventory management, security management, financial administration, business planning, and management services for all enterprise-wide information systems.
bus-level partitioning
The dedicated allocation of an entire bus and all accompanying resources (input/output processors and input/output devices) to a particular logical partition. Contrast with
IOP-level partitioning
.
button
(1)
A mechanism on a pointing device, such as a mouse, used to request or start an action.
(2)
A graphical mechanism in a window that, when selected, results in an action. An example of a button is a list button that when selected produces a list of choices.
(3)
A graphical device that identifies a choice. See also
radio button
and
push button
.
bypass plug
Allows power to flow through an unused outlet in the power control compartment.
byte
(1)
The smallest unit of storage that can be addressed directly.
(2)
A group of 8 adjacent bits. In the EBCDIC coding system, 1 byte can represent a character. In the double-byte coding system, 2 bytes represent a character.
bytecode
Intermediate code that is generated by the Java compiler. The code must be interpreted or translated to run on a specific platform or processor.
linking to and from Tracy panel
I don't know whether I'm trying to do something impossible. Anyway, while coding a simple Tracy panel I stumbled upon two problems:
- In my implementation of \Tracy\IBarPanel I need to generate a link.
- I need Nette to route that link to a presenter located in vendor/.
Please note that moving files to app/ is out of the question, since I want to distribute the panel through Composer.
So far I've tried to hardcode the link and modify nette.application.mapping, but given that the presenter is in a custom namespace, Nette can't match it.
Last edited by meridius (2014-09-03 23:21)
Flask 101: How to Add a Search Form
In this post, we take a look at how to add a search form to your Flask-based web application. Read on for the awesome tutorial.
In our last article, we added a database to our Flask web application but didn’t have a way to add anything to our database. We also didn’t have a way to view anything, so, basically, we ended up having a pretty useless web application. This article will take the time to teach you how to do the following:
- Create a form to add data to our database.
- Use a form to edit data in our database.
- Create some kind of view of what’s in the database.
Adding forms to Flask is pretty easy too, once you figure out what extension to install. I had heard good things about WTForms so I will be using that in this tutorial. To install WTForms you will need to install Flask-WTF. Installing Flask-WTF is pretty easy; just open up your terminal and activate the virtual environment we set up in our first tutorial. Then run the following command using pip:
pip install Flask-WTF
This will install WTForms and Flask-WTF (along with any dependencies) to your web app’s virtual environment.
Serving HTML Files
Originally when I started this series, all I was serving up on the index page of our web application was a string. We should probably spruce that up a bit and use an actual HTML file. Create a folder called “templates” inside the “musicdb” folder. Now create a file called “index.html” inside the “templates” folder and put the following contents in it:
<!DOCTYPE html>
<head>
    <title>Flask Music Database</title>
</head>
<h2>Flask Music Database</h2>
Now before we update our web application code, let’s go ahead and create a search form for filtering our music database’s results.
Adding a Search Form
When working with a database, you will want a way to search for items in it. Fortunately creating a search form with WTForms is really easy. Create a Python script called “forms.py” and save it to the “musicdb” folder with the following contents:
# forms.py

from wtforms import Form, StringField, SelectField


class MusicSearchForm(Form):
    choices = [('Artist', 'Artist'),
               ('Album', 'Album'),
               ('Publisher', 'Publisher')]
    select = SelectField('Search for music:', choices=choices)
    search = StringField('')
Here we just import the items we need from the wtforms module and then we subclass the Form class. In our subclass, we create a selection field (a combobox) and a string field. This allows us to filter our search to the Artist, Album, or Publisher categories and enter a string to search for.
Now we are ready to update our main application.
Updating the Main Application
Let’s rename our web application’s script from “test.py” to “main.py” and update it so it looks like this:
# main.py

from app import app
from db_setup import init_db, db_session
from forms import MusicSearchForm
from flask import flash, render_template, request, redirect
from models import Album

init_db()


@app.route('/', methods=['GET', 'POST'])
def index():
    search = MusicSearchForm(request.form)
    if request.method == 'POST':
        return search_results(search)
    return render_template('index.html', form=search)


@app.route('/results')
def search_results(search):
    results = []
    search_string = search.data['search']

    if search.data['search'] == '':
        qry = db_session.query(Album)
        results = qry.all()

    if not results:
        flash('No results found!')
        return redirect('/')
    else:
        # display results
        return render_template('results.html', results=results)


if __name__ == '__main__':
    app.run()
We changed the index() function so it works with both POST and GET requests and told it to load our MusicSearchForm. You will note that when you first load the index page of your web app, it will execute a GET and the index() function will render the index.html that we just created. Of course, we didn't actually add the form to our index.html yet, so that search form won't appear yet.
There is also a search_results() function that we added to handle really basic searches. However, this function won't be called until we actually implement a way to display the results. So let's go ahead and make the search form visible to our users.
When I was learning how to create forms with WTForms, the Flask-WTF website recommended creating a template with a macro called “_formhelpers.html.” Go ahead and create a file of that name and save it to your “templates” folder. Then add the following to that file:
{% macro render_field(field) %}
  <dt>{{ field.label }}
  <dd>{{ field(**kwargs)|safe }}
  {% if field.errors %}
    <ul class=errors>
    {% for error in field.errors %}
      <li>{{ error }}</li>
    {% endfor %}
    </ul>
  {% endif %}
  </dd>
{% endmacro %}
This syntax might look a little odd since it is obviously not just HTML. This is actually Jinja2 syntax, which is the templating language used by Flask. Basically, anywhere you see the squiggly braces (i.e. {} ), you are seeing Jinja syntax. Here we pass in a field object and access its label and errors attributes. Feel free to look up the documentation for additional information.
Now open up your “index.html” file and update it so that it has the following contents:
<!DOCTYPE html>
<head>
    <title>Flask Music Database</title>
</head>
<h2>Flask Music Database</h2>

{% from "_formhelpers.html" import render_field %}
<form method=post>
  <dl>
    {{ render_field(form.select) }}
    <p>
    {{ render_field(form.search) }}
  </dl>
  <p><input type=submit value=Search>
</form>
The new code in this example shows how you can import the macro you created into your other HTML file. Next, we set the form method to post and we pass the select widget and the search widget to our render_field macro. We also create a submit button with the following label: Search. When you press the Search button, it will post the data in the other two fields of the form to the page that it is on, which in this case is our index.html or “/”.
When that happens, the index() method in our main.py script will execute:
@app.route('/', methods=['GET', 'POST']) def index(): search = MusicSearchForm(request.form) if request.method == 'POST': return search_results(search) return render_template('index.html', form=search)
You will note that we check which request method it is, and if it's the POST method, then we call the search_results() function. If you actually press the Search button at this stage, you will receive an Internal Server Error because we haven't implemented "results.html" as of yet. Anyway, right now your web application should look something like this:
Let’s take a moment and get the results function doing something useful.
Updating the Results Functionality
Right now, we don’t actually have any data in our database, so when we try to query it, we won’t get any results back. Thus we need to make our web application indicate that no results were found. To do that we need to update the “index.html” page:
<!DOCTYPE html>
<head>
    <title>Flask Music Database</title>
</head>
<h2>Flask Music Database</h2>

{% with messages = get_flashed_messages() %}
  {% if messages %}
    <ul class=flashes>
    {% for message in messages %}
      <li>{{ message }}</li>
    {% endfor %}
    </ul>
  {% endif %}
{% endwith %}

{% from "_formhelpers.html" import render_field %}
<form method=post>
  <dl>
    {{ render_field(form.select) }}
    <p>
    {{ render_field(form.search) }}
  </dl>
  <p><input type=submit value=Search>
</form>
You will note that the new code is a new block of Jinja. Here we grab “flashed” messages and display them. Now we just need to run the web application and try searching for something. If all goes as planned, you should see something like this when you do a search:
Wrapping Up
Now we have a neat little search form that we can use to search our database, although it frankly doesn’t really do all that much since our database is currently empty. In our next article, we will focus on finally creating a way to add data to the database, display search results, and edit the data too!
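Since the real filtering is deferred to the next installment, here is a plain-Python sketch of the kind of category search the results function will eventually perform. The album records and field names are hypothetical stand-ins for the SQLAlchemy Album model, not code from the article:

```python
# Hypothetical stand-in for the future search filtering: a list of
# album dicts plays the role of the Album table.
albums = [
    {'artist': 'Nirvana', 'album': 'Nevermind', 'publisher': 'DGC'},
    {'artist': 'Pearl Jam', 'album': 'Ten', 'publisher': 'Epic'},
]

def search_albums(records, category, term):
    """Return records whose chosen field contains term (empty term = all)."""
    if term == '':
        return list(records)
    key = category.lower()  # 'Artist' -> 'artist', matching the dict keys
    return [r for r in records if term.lower() in r[key].lower()]

print(search_albums(albums, 'Artist', 'nir'))
```

In the actual app, the list comprehension would be replaced by a SQLAlchemy query filter against the Album model.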
Download Code
Download a tarball of the code from this article: flask_musicdv_part_iii.tar
Other Articles in the Series
- Part I – Flask 101: Getting Started
- Part II – Flask 101: Adding a Database
Related Readings
- The Jinja2 Website
- Flask-WTForms website
- Flask-SQLAlchemy website
- SQLAlchemy website
- A Simple SQLAlchemy tutorial
Published at DZone with permission of Mike Driscoll , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
#include <gnet.h>

enum        GIPv6Policy;
void        gnet_ipv6_set_policy (GIPv6Policy policy);
GIPv6Policy gnet_ipv6_get_policy (void);
The IPv6 module provides functions for setting and getting the "IPv6 policy". The IPv6 policy affects domain name resolution and server binding. The possible policies are: IPv4 only, IPv6 only, IPv4 then IPv6, and IPv6 then IPv4. GNet attempts to set the policy when gnet_init() is called, based on two environment variables (if set) or on the host's interfaces. This can be overridden by calling gnet_ipv6_set_policy().
IPv6 policy affects domain name resolution. A domain name can be resolved to several addresses. Most programs will only use the first address in the list. The problem then is what order the addresses should be in. For example, if there are both IPv4 and IPv6 addresses in the list and the system cannot connect to IPv6 hosts then an IPv6 address should not be first in the list. Otherwise, the host will attempt to connect to it and fail. IPv6 policy determines the order of the list. If the policy is "IPv4 only", only IPv4 addresses will be returned. If the policy is "IPv6 then IPv4", IPv6 addresses will come before IPv4 addresses in the list.
IPv6 policy also affects server binding. When a server socket is created, GNet binds to the "any" address by default. There are IPv4 and IPv6 "any" addresses. GNet needs to know which one to use. If the IPv6 policy allows IPv6, GNet will use the IPv6 "any" address. If the IPv6 policy allows only IPv4, GNet will use the IPv4 "any" address.
GNet sets IPv6 policy in gnet_init(). First it checks two environment variables: GNET_IPV6_POLICY and IPV6_POLICY. If either is set, it uses the value to set IPv6 policy. Set the value to "4" for GIPV6_POLICY_IPV4_ONLY, "6" for GIPV6_POLICY_IPV6_ONLY, "46" for GIPV6_POLICY_IPV4_THEN_IPV6, or "64" for GIPV6_POLICY_IPV6_THEN_IPV4.
If neither environment variable is set, GNet sets the policy based on the host's interfaces. If there are only IPv4 interfaces, the policy is set to GIPV6_POLICY_IPV4_ONLY. If there are only IPv6 interfaces, the policy is set to GIPV6_POLICY_IPV6_ONLY. If there are both, the policy is set to GIPV6_POLICY_IPV4_THEN_IPV6.
At runtime on Windows, GNet will check whether the computer can support IPv6. If so, it will set the policy to GIPV6_POLICY_IPV4_THEN_IPV6. If not, it will set the policy to GIPV6_POLICY_IPV4_ONLY. Environment variables are not checked on Windows.
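The selection order described above can be modeled in a short sketch. This is an illustration of the documented rules in Python, not GNet source code, and the interface flags are passed in rather than detected:

```python
# Python model of GNet's documented policy-selection order:
# environment variables first, then the host's interfaces.
POLICIES = {
    '4': 'GIPV6_POLICY_IPV4_ONLY',
    '6': 'GIPV6_POLICY_IPV6_ONLY',
    '46': 'GIPV6_POLICY_IPV4_THEN_IPV6',
    '64': 'GIPV6_POLICY_IPV6_THEN_IPV4',
}

def choose_policy(env, have_ipv4, have_ipv6):
    # Environment variables win, checked in the documented order.
    for var in ('GNET_IPV6_POLICY', 'IPV6_POLICY'):
        if env.get(var) in POLICIES:
            return POLICIES[env[var]]
    # Otherwise fall back to the host's interfaces.
    if have_ipv4 and have_ipv6:
        return 'GIPV6_POLICY_IPV4_THEN_IPV6'
    if have_ipv6:
        return 'GIPV6_POLICY_IPV6_ONLY'
    return 'GIPV6_POLICY_IPV4_ONLY'

print(choose_policy({'IPV6_POLICY': '64'}, True, True))
```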
typedef enum {
    GIPV6_POLICY_IPV4_THEN_IPV6,
    GIPV6_POLICY_IPV6_THEN_IPV4,
    GIPV6_POLICY_IPV4_ONLY,
    GIPV6_POLICY_IPV6_ONLY
} GIPv6Policy;
Policy for IPv6 use in GNet. This affects domain name resolution and server binding. gnet_init() attempts to set a reasonable default based on environment variables or the interfaces available on the host.
void gnet_ipv6_set_policy (GIPv6Policy policy);
Sets the IPv6 policy.
GIPv6Policy gnet_ipv6_get_policy (void);
Gets the IPv6 policy.
This class describes contact related information for each body in contact with other bodies in the world. More...
#include <drake/systems/controllers/qp_inverse_dynamics/qp_inverse_dynamics_common.h>
This class describes contact related information for each body in contact with other bodies in the world.
Each contact body has a set of point contacts. For each contact point, only point contact forces can be applied, and the friction cone is approximated by a set of basis vectors.
The stationary contact condition can be described as:
\[ J * vd + J_dot_times_vd = Kd * (0 - v_contact_pt). \]
Only the linear velocities and accelerations are considered here. Kd >= 0 is a stabilizing velocity gain to damp out contact velocity. This condition can be enforced either as an equality constraint or as a cost term.
Constructs a ContactInformation object for body.
Computes a matrix (Basis) that converts a vector of scalars (Beta) to the stacked point contact forces (F).
All point forces are in the world frame, and are applied at the contact points in the world frame. Basis is (3 * N_c) by (N_basis * N_c), where N_c is the number of contact points, and N_basis is the number of basis per contact point.
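As a toy numeric illustration of basis-to-force conversion at a single contact point (plain Python, not Drake's API; the friction coefficient and basis count are arbitrary assumptions), nonnegative weights beta recover a point force as a weighted sum of the basis vectors:

```python
# Approximate the friction cone at one contact point with 4 basis
# directions (tangential component scaled by mu, unit normal), then
# convert weights beta into the point force F = Basis @ beta.
import math

mu = 0.5          # assumed friction coefficient
num_basis = 4
basis = []        # one 3-vector per basis direction
for k in range(num_basis):
    ang = 2.0 * math.pi * k / num_basis
    basis.append((mu * math.cos(ang), mu * math.sin(ang), 1.0))

def force_from_beta(basis, beta):
    # F is the weighted sum of the basis vectors; beta >= 0 keeps F in the cone.
    return [sum(b[i] * w for b, w in zip(basis, beta)) for i in range(3)]

print(force_from_beta(basis, [1.0, 0.0, 1.0, 0.0]))
```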
Computes the contact points and reference point location in the world frame.
Computes the linear part of the stacked Jacobian for all the contact points.
Computes the linear part of the stacked Jacobian dot times v vector for all the contact points.
Computes the stacked velocities for all the contact points.
Computes a matrix that converts a vector of stacked point forces to an equivalent wrench in a frame that has the same orientation as the world frame, but located at reference_point. The stacked point forces are assumed to have the same order as contact_points.
ASP.NET and Web Tools for Visual Studio 2013 Release Notes
Summary for lazy readers:
Top links:
Okay, for those of you who are still with me, let's dig in a bit.
I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine.
If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools.
Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GB's of VM's). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later.
If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.
Visual Studio 2013 development settings and color theme
When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you -whatever suits your development style - and you can change them later.
As I said, these are completely up to you - whatever suits your development style - and you can change them later. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset.
Web Development (code only) settings
Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.
Sigh. Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore.
So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.
During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.
There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.
You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right?
Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them.
So, here it is.
When you create a new ASP.NET application, you just create an ASP.NET application.
Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references.
If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray!
I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.
In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates.
Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host).
The default is Individual User Accounts:
This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second.
The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.
There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility.
I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:
You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).
The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:
Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display:
When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown:
For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project.
Now when I refresh the page, I've got a new theme:
The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus.
For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon.
This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.
This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser.
I love it.
A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal.
Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal.
This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things.
Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio-specific HTML docs open in the Visual Studio browser.
The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET.
In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.
ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo
public class SampleController : Controller
{
[Route("spaghetti/with-nesting/where-is-waldo")]
public string Hiding()
{
return "You found me!";
}
}
I enable that in my RouteConfig.cs by calling routes.MapMvcAttributeRoutes(), and I can use it in conjunction with my other MVC routes.
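For context, the relevant part of RouteConfig.cs in a new MVC 5 project looks roughly like this sketch; MapMvcAttributeRoutes() is the call that turns attribute routing on, and conventional routes keep working beside it:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Enable the [Route] attributes on controllers and actions.
        routes.MapMvcAttributeRoutes();

        // Conventional routing still applies to everything else.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```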
You can read more about Attribute Routing in ASP.NET MVC 5 here.
There are two new additions to filters: Authentication Filters and Filter Overrides.
Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.
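As a sketch of how that looks (the controller and role names here are illustrative, not from the post), a controller-level authorization filter can be excluded for one action with the MVC 5 override attributes:

```csharp
using System.Web.Mvc;
using System.Web.Mvc.Filters;

[Authorize(Roles = "Admin")] // applies to every action in the controller...
public class ReportsController : Controller
{
    public ActionResult Full()
    {
        return View();
    }

    // ...except this one: OverrideAuthorization excludes the controller-level
    // (and global) authorization filters, and a looser filter is applied instead.
    [OverrideAuthorization]
    [Authorize(Roles = "Viewer")]
    public ActionResult Summary()
    {
        return View();
    }
}
```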
ASP.NET Web API 2 includes a lot of new features.
ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.
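A quick sketch of the same idea in Web API 2 (controller and route names are illustrative):

```csharp
using System.Web.Http;

[RoutePrefix("api/orders")]
public class OrdersController : ApiController
{
    // Matches GET api/orders/42/items; the {orderId:int} constraint
    // rejects non-numeric values before the action runs.
    [Route("{orderId:int}/items")]
    public IHttpActionResult GetItems(int orderId)
    {
        return Ok(new[] { "item1", "item2" });
    }
}
```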
ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.
ASP.NET Web API now has full OData support, including some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.
Lots more
There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.
I've written about OWIN and Katana recently. I'm a big fan.
OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host.
Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation.
Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack.
If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.
Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release.
Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries.
Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.
Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators.
You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.
The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc.
Here's a 3 minute tour from Mads Kristensen.
The Windows Azure portal is good as websites go, but it's another step to have to go to the portal to create a site, then download the publish profile, then import it into my site. It's like ten clicks or something and it just gets really fatiguing and sometimes I need a nap.
They've updated the Server Explorer in Visual Studio 2013 so I can just right-click on the Windows Azure node to create a site. Then when I'm publishing, I can directly import the site publish profile and go. That means I can create a new Windows Azure Web Site, with a free 20 MB SQL Database, and publish it to Windows Azure all without leaving Visual Studio. That's really nice.
More about that on this post: Creating New Windows Azure Web Site from Visual Studio 2013 RC
That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.
Lots, lots more
Okay, that's just a summary, and it's still quite a bit. Head on over to for more information, and download Visual Studio 2013 now to get started!
Great compilation, Jon, thanks for sharing!
The new features are really great, especially the new ASP.NET Identity where I can really control the Membership tables.
However, is there (or will there be) any tutorial on properly migrating from EF5, MVC4 to the new Identity system based on OWIN.
The tooling stuffs are really crazy. Really excited to use VS2013. :)
Selected Single Page Application template, once created I ran it and no SPA??
Hi Jon,
Thanks for this post. I think Visual Studio 2013 is the best release.
Cheers.
Cooooool....!!
Dats great..
Great...
Thanks Jon for the list, I am enjoying VS 2013 so far, however mine (RC) has Bootstrap 2.3 and not version 3. I was wondering which version you were referring to that has Bootstrap 3 natively.
Thanks
Val
It's funny that you reference Mads in this article, when you also have a section dedicated to not opening a browser window when a plugin is installed. Web Essentials not only does it when it's installed but every time you update too. 😃
Great ! thank you so much !
I have been "playing" with it since the RC.
1) The Web-API seems to be really well-backed (ready for prime time)
2) The ASPNet Identity - the parts at least for authentication with MicrosoftAccount; Facebook and Google do not seem to be ready for prime time -- it is so unnecessarily difficult to use for retrieving scope info (eg email, first name etc) -- even if facebook etc sends it to the app.
They do not match the capabilities of the "equivalent" javascript versions of WinLive, FB and gauth. The lack of readiness may explain why there is such scanty documentation on it.
You can see the links below for some of the gyrations we are going through just to get things to work smoothly:
katanaproject.codeplex.com/.../145
katanaproject.codeplex.com/.../82
I appreciate the technical aspects of this post, but not the editorial ones.
Just so you are aware, I am not lazy, just strapped for time. You see those of us in the real world are rarely able to keep up-to-date with the latest MS changes in our workplace, and must use our personal time for learning the latest changes. Because I have a life outside of MS, your technical synopsis (aka cliff notes) helped me a lot today.
I also like the Blue theme, not because I like "old" things, but because I have problems with my eyesight. Starting with VS2012, the UI looks like a joke, as if MS decided to throw away all of its years of UX research; and I'm not the only one who thinks that. I embrace change, but this change was not good, especially for those of us who are visually challenged. So if you want to criticize my physical disability, I wonder about you.
I'm not sure if you're trying to be funny (if so, please stop because you aren't) or are just simply in need of some maturity.
Thanks for the information Mr Galloway. I am very much learning and guides such as the one you have posted up do help me a lot. It is muchly appreciated, thank you
http://weblogs.asp.net/jgalloway/archive/2013/10/17/top-things-web-developers-should-know-about-the-visual-studio-2013-release.aspx
Hey,
I am trying to link a button in a header to open a lightbox on hover to act as a customised drop down menu. I have created the button and "onMouseIn" function in the properties panel and entered the code:
import window from 'wix-window';
export function whatwedo_onMouseIn() {
window.openLightbox('lightbox1');
}
I am so new to code, so I'm struggling to understand even the simplest things. If someone could help, that would be greatly appreciated!
Hi Luca!
First thing is that you got a mistake on the first row.
It should be this:
And did you add an event named "whatwedo_mouseIn" to the button's properties?
Without inspecting your site, those are the two reasons I can think of to solve your issue.
If for some reason you don't manage to make it work, post a link of the page you're having troubles with and I'd be glad to check it out.
Hope it helps.
Best of luck!
Doron. :)
Hi Doron,
Thanks for your reply, but unfortunately it still does not work. I have pasted the code into the "Site" area as it is to do with the header and still no luck. I have opened the properties panel for the button (which is now called button20) and entered "button20_mouseIn"
This is the code I have used:
import wixWindow from 'wix-window';
export function button20_onMouseIn() {
window.openLightbox('lightbox1');
}
It is saying that "window" is undefined. I have posted a link to my website below:
I am trying to trigger the button in the menu on the homepage saying "WHAT WE DO" to bring the drop down menu that is shown when you click "WHAT WE DO" but I want to code it so that on mouse hover it does this.
I look forward to your reply,
Luca
Hi again Luca!
Again, you have an error in your code.
It should be:
Also, I've noticed that you've put the button20 function both in the site code and the home page code.
I think you should try and comment out the one in the page's code.
Doron.
Hi Doron,
Code is now as follows:
import wixWindow from 'wix-window'; export function button20_mouseIn(event, $w) { wixWindow.openLightbox("lightbox1"); }
STILL no luck, I have also removed the code from the page code so it is only in the site code. Please advise!
Many thanks in advance,
Luca
Hi Luca!
Just checked out your site.
The previous state is what I saw, hence no changes in the code as you just posted.
After trying it myself (commenting out the page code and change the code to its correct form), I can assure you it does work.
Please make sure you change the code according to what I suggested and save it properly.
Doron.
Hi Doron,
Please could you post exactly the code you have used to make it work as I am still not having any luck?
Thank you for all your help.
Kind regards,
Luca
Hi Luca!
Your code should look like this:
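The screenshot Doron posted isn't preserved here, but based on the corrections earlier in the thread the working handler presumably looked like this sketch (the Lightbox name is a placeholder - use your Lightbox's actual name):

```javascript
import wixWindow from 'wix-window';

export function button20_mouseIn(event) {
    // Open the Lightbox by its *name* (set in the Lightbox settings),
    // not by its element ID.
    wixWindow.openLightbox("My Lightbox Name"); // placeholder name
}
```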
Note that what the function is doing is opening a Lightbox according to its name and not by its field name.
This means that instead of "lightbox1" you should write the name of the Lightbox you want to open.
Another thing is that I noticed that you both added an event to the button and connected it by link.
If you wish the function to work you need to disable the link option.
Doron.
Hi Doron,
Amazing, thank you so much it finally works! Is there any way to make it instant and avoid the (very slight) delay when hovering?
Secondly, how can i make it so when i move the cursor off the lightbox it closes?
Many thanks,
Luca
Just as you used the 'onMouseIn' method to open the Lightbox you can use the 'onMouseOut' to close it.
Note that the function close( ) should be written in the Lightbox's code page and not in the site page.
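A sketch of the closing half, assuming the mouseOut event is wired up in the Lightbox's own properties panel (the handler name is illustrative):

```javascript
// This goes in the Lightbox's page code, not the site code.
import wixWindow from 'wix-window';

export function lightbox_mouseOut(event) {
    wixWindow.lightbox.close();
}
```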
As for the delay,
I believe that your intention* was to create an expandable menu for this certain tab.
If so, although using a Lightbox is a very creative work-around and I am impressed, I would suggest using the same method of mouseIn/mouseOut on a box object that you show( ) / hide( ) accordingly.
The delay is due to the fact that even if openLightbox is not a page redirection (such as wixLocation.to( ) ),
it does need to open a new window and the loading time seems like a delay.
Using the other method I described should solve this issue.
*If your intention was different than what I assumed I'd really love to hear what you had in mind.
Hope it helps.
Best of luck!
Doron. :)
https://www.wix.com/corvid/forum/community-discussion/how-to-open-a-lightbox-with-a-button-in-a-header
|
The commit candidate structure. More...
#include <svn_client.h>
The commit candidate structure.
In order to avoid backwards compatibility problems clients should use svn_client_commit_item3_create() to allocate and initialize this structure instead of doing so themselves.
Definition at line 464 of file svn_client.h.
incoming_prop_changes

An array of svn_prop_t *'s, which are incoming changes from the repository to WC properties.
These changes are applied post-commit.
When adding to this array, allocate the svn_prop_t and its contents in
incoming_prop_changes->pool, so that it has the same lifetime as this data structure.
See for a description of what would happen if the post-commit process didn't group these changes together with all other changes to the item.
Definition at line 508 of file svn_client.h.
moved_from_abspath

When committing a move, this contains the absolute path where the node was directly moved from.
(If an ancestor at the original location was moved then it points to where the node itself was moved from; not the original location.)
Definition at line 535 of file svn_client.h.
outgoing_prop_changes

An array of svn_prop_t *'s, which are outgoing changes to make to properties in the repository.
These extra property changes are declared pre-commit, and applied to the repository as part of a commit.
When adding to this array, allocate the svn_prop_t and its contents in
outgoing_prop_changes->pool, so that it has the same lifetime as this data structure.
Definition at line 519 of file svn_client.h.
path

Absolute working-copy path of the item.
Always set during normal commits (and copies from a working copy) to the repository. Can only be NULL when stub commit items are created for operations that only involve direct repository operations. During WC->REPOS copy operations, this path is the WC source path of the operation.
Definition at line 473 of file svn_client.h.
session_relpath

When processing the commit this contains the relative path for the commit session. NULL until the commit item is preprocessed.
Definition at line 526 of file svn_client.h.
url

Commit URL for this item.
Points to the repository location of PATH during commits, or to the final URL of the item when copying from the working copy to the repository.
Definition at line 481 of file svn_client.h.
https://subversion.apache.org/docs/api/latest/structsvn__client__commit__item3__t.html
|
Comparison Operators
The set of relational operators referred to as comparison operators are less than (<), less than or equal to (<=), greater than (>), greater than or equal to (>=), equal to (==), and not equal to (!=). The meaning of each of these operators is obvious when working with numbers, but how each operator works on objects isn't so obvious. Here's an example:
using System;

class NumericTest
{
    public NumericTest(int i)
    {
        this.i = i;
    }
    protected int i;
}

class RelationalOps1App
{
    public static void Main()
    {
        NumericTest test1 = new NumericTest(42);
        NumericTest test2 = new NumericTest(42);
        Console.WriteLine("{0}", test1 == test2);
    }
}
If you're a Java programmer, you know what's going to happen here. However, most C++ developers will probably be surprised to see that this example prints False. Remember that when you instantiate an object, you get a reference to a heap-allocated object. Therefore, when you use a relational operator to compare two objects, the C# compiler doesn't compare the contents of the objects. Instead, it compares the addresses of these two objects. Once again, to fully understand what's going on here, we'll look at the MSIL for this code:
.method public hidebysig static void Main() il managed
{
    .entrypoint
    // Code size 39 (0x27)
    .maxstack 3
    .locals (class NumericTest V_0,
             class NumericTest V_1,
             bool V_2)
    IL_0000: ldc.i4.s 42
    IL_0002: newobj instance void NumericTest::.ctor(int32)
    IL_0007: stloc.0
    IL_0008: ldc.i4.s 42
    IL_000a: newobj instance void NumericTest::.ctor(int32)
    IL_000f: stloc.1
    IL_0010: ldstr "{0}"
    IL_0015: ldloc.0
    IL_0016: ldloc.1
    IL_0017: ceq
    IL_0019: stloc.2
    IL_001a: ldloca.s V_2
    IL_001c: box ['mscorlib']System.Boolean
    IL_0021: call void ['mscorlib']System.Console::WriteLine(class System.String, class System.Object)
    IL_0026: ret
} // end of method 'RelationalOps1App::Main'
Take a look at the .locals line. The compiler is declaring that this Main method will have three local variables. The first two are NumericTest objects, and the third is a Boolean type. Now skip down to lines IL_0002 and IL_0007. It's here that the MSIL instantiates the test1 object and, with the stloc opcode, stores the returned reference to the first local variable. However, the key point here is that the MSIL is storing the address of the newly created object. Then, in lines IL_000a and IL_000f, you can see the MSIL opcodes to create the test2 object and store the returned reference in the second local variable. Finally, lines IL_0015 and IL_0016 simply load the local variables on the stack via a call to ldloc, and then line IL_0017 calls the ceq opcode, which compares the top two values on the stack (that is, the references to the test1 and test2 objects). The returned value is then stored in the third local variable and later printed via the call to System.Console.WriteLine.
How can one produce a member-by-member comparison of two objects? The answer is in the implicit base class of all .NET Framework objects. The System.Object class has a method called Equals designed for just this purpose. For example, the following code performs a comparison of the object contents as you would expect and returns a value of true:
using System;

class RelationalOps2App
{
    public static void Main()
    {
        Decimal test1 = new Decimal(42);
        Decimal test2 = new Decimal(42);
        Console.WriteLine("{0}", test1.Equals(test2));
    }
}
Note that the RelationalOps1App example used a self-made class (NumericTest), and the second example used a .NET class (Decimal). The reason for this is that the System.Object.Equals method must be overridden to do the actual member-by-member comparison. Therefore, using the Equals method on the NumericTest class wouldn't work because we haven't overridden the method. However, because the Decimal class does override the inherited Equals method, it does work like you'd expect.
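To illustrate, here is one way NumericTest could override Equals to compare contents. This is a sketch, not code from the original text; GetHashCode is overridden too, since equal objects must produce equal hash codes:

```csharp
using System;

class NumericTest
{
    public NumericTest(int i) { this.i = i; }
    protected int i;

    // Member-by-member comparison instead of reference comparison.
    public override bool Equals(object obj)
    {
        NumericTest other = obj as NumericTest;
        return other != null && this.i == other.i;
    }

    public override int GetHashCode()
    {
        return i;
    }
}

class RelationalOps3App
{
    public static void Main()
    {
        NumericTest test1 = new NumericTest(42);
        NumericTest test2 = new NumericTest(42);
        Console.WriteLine("{0}", test1.Equals(test2)); // True
    }
}
```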
Another way to handle an object comparison is through operator overloading. Overloading an operator defines the operations that take place between objects of a specific type. For example, with string objects, the + operator concatenates the strings rather than performing an add operation. We'll be getting into operator overloading in Chapter 13.
http://www.brainbell.com/tutors/C_Sharp/Comparison_Operators.htm
|
Enumeration e = t.elements(); while(e.hasMoreElements()){ System.out.println((String)e.nextElement()); }
import java.util.Enumeration; import java.util.Hashtable;
> System.out.println((String)e.nextElement());
that should also be:
System.out.println(e.nextElement());
HashTable t = new HashTable(0);
And I tried the other way,
Enumeration e = t.keys();
while (e.hasMoreElements()) {
String key = (String) e.nextElement();
System.out.println(key);
}
it also gives a compile error. This time it cannot find the method keys.
How can I fix it? Do I need to create a keys() / element() method ?
import java.util.Scanner;
import java.util.Enumeration;
import java.util.Hashtable;
public class Hashtable {
public static void main(String[] args) throws Exception {
if( args.length != 1 ) return;
HashTable t = new HashTable(0);
Scanner in1 = new Scanner(new FileInputStream(args[0]));
while( in1.hasNext()){
String s = in1.next();
int v = t.getvalue(s);
t.assign(s, v);
}
Enumeration e = t.elements();
while (e.hasMoreElements()) {
    System.out.println((String) e.nextElement());
}
}
}
you need to change the name of your class (and the filename)
eg.
public class HashtableTest {
cannot find symbol
symbol : method elements()
location: class HashTable
Enumeration e = t.elements();
should be:
Hashtable t = new Hashtable(0);
and
t.assign(s, v);
Should give you compile errors as well. Usually you would use:
t.put("one", 1);
instead.
Are you able to read from a text file ok?
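Pulling the answers together - rename the class so it doesn't shadow java.util.Hashtable, use put() to store entries, and enumerate with keys() or elements() - a working version might look like this (the dump helper and the sample entries are illustrative):

```java
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.List;

public class HashtableTest {

    // Walk the table's keys with an Enumeration and collect "key=value" lines.
    static List<String> dump(Hashtable<String, Integer> t) {
        List<String> lines = new ArrayList<String>();
        Enumeration<String> keys = t.keys();
        while (keys.hasMoreElements()) {
            String key = keys.nextElement();
            lines.add(key + "=" + t.get(key));
        }
        return lines;
    }

    public static void main(String[] args) {
        Hashtable<String, Integer> t = new Hashtable<String, Integer>();
        t.put("one", 1);
        t.put("two", 2);
        for (String line : dump(t)) {
            System.out.println(line);
        }
    }
}
```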
https://www.experts-exchange.com/questions/26606948/How-to-print-out-all-elements-in-HashTable.html
|
Creating minimal ASP.NET Core web application
By its nature, an ASP.NET Core application is a .NET Core command-line application with some web hosting plumbing that is set up when the application starts. Although Visual Studio has an ASP.NET Core web site template that comes with a simple default web site and start-up class, it's possible to be even more minimal. This blog post introduces the minimal ASP.NET Core web application.
Creating ASP.NET Core web application
Let's start Visual Studio and create a new ASP.NET Core web application.
For application type let’s select Empty.
The Empty application is still not as empty as it can be.
Making application minimal
After creating the default ASP.NET Core web application with Visual Studio, I removed most of the things that come with it:
- wwwroot folder
- bower dependencies
- web.config
- start-up class.
The image on the left shows what's left when all the clutter is removed. Technically the Dependencies folder is also not needed, but as it is something Visual Studio shows dynamically, it's not possible to remove it.
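For reference, the project file that remains is minimal too. A sketch of what it might look like for an app of this era (the SDK and target framework are assumptions based on the post's timeframe, not taken from the original):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>
```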
There’s one problem now. The application doesn’t build anymore.
Fixing Program.cs
Let's open Program.cs and make it look like the code shown below. After adding the Kestrel web server, the web host is configured to respond "Hello, world" to every request that comes in. Without this there would be no response, and browsers would see the site as dead.
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
namespace AdvancedAspNetCore.MinimalApp
{
public class Program
{
public static void Main(string[] args)
{
new WebHostBuilder()
.UseKestrel()
.Configure(a => a.Run(r => r.Response.WriteAsync("Hello, world")))
.Build()
.Run();
}
}
}
Running the web application results in something like shown on screenshot below.
This response is given to all requests no matter what is asked.
Wrapping up
Although it is possible to create a small and minimal ASP.NET Core web site from the Visual Studio template, the resulting site is still not as minimal as possible. After removing all the clutter and modifying the web host to respond "Hello, world" to all requests, the application got as minimal as it can be. It is actually a good example of how using ASP.NET MVC is not mandatory when creating ASP.NET Core web applications.
https://gunnarpeipman.com/aspnet/minimal-aspnet-core-app/
By default, using a class as a static extension brings all of its static methods into the context, allowing them to be used as extensions of the appropriate types. In certain situations, the class can provide other static methods which are not intended for static extension. To make sure they do not interfere with the proper methods of the type, these methods can be marked with @:noUsing:
using Main.IntExtender;

class IntExtender {
    @:noUsing
    static public function double(i:Int) {
        return i * 2;
    }

    static public function triple(i:Int) {
        return i * 3;
    }
}

class Main {
    static public function main() {
        // works:
        trace(12.triple());
        // does not work because the method is marked with @:noUsing:
        // trace(12.double());
        // works as a normal static method:
        trace(IntExtender.double(12));
    }
}
It is also possible to always enable particular static extensions for a given type, by annotating the type with the @:using(args...) metadata. The arguments are the full dot paths of static extension classes that will be applied on the type:
@:using(Main.TreeTools)
enum Tree {
    Node(l:Tree, r:Tree);
    Leaf(value:Int);
}

class TreeTools {
    public static function sum(tree:Tree):Int {
        return (switch (tree) {
            case Node(l, r): sum(l) + sum(r);
            case Leaf(value): value;
        });
    }
}

class Main {
    static public function main() {
        var a = Node(Node(Leaf(1), Leaf(2)), Leaf(3));
        // works, even though there was no 'using Main.TreeTools' in this module
        trace(a.sum());
    }
}
https://haxe.org/manual/lf-static-extension-metadata.html
Python's "batteries included" nature makes it easy to interact with just about anything… except speakers and a microphone! As of this moment, there are still no standard libraries which allow cross-platform interfacing with audio devices. There are some pretty convenient third-party modules, but I hope in the future a standard solution will be distributed with python. I appreciate the differences of Linux architectures such as ALSA and OSS, but toss Windows and MacOS into the mix and it gets to be a huge mess. For Linux, would I even need anything fancy? I can run "cat file.wav > /dev/dsp" from a command prompt to play audio. There are some standard libraries for operating-system-specific sound (i.e., winsound), but I want something more versatile. The official audio wiki page on the subject lists a small collection of third-party platform-independent libraries. After excluding those which don't support microphone access (the ultimate goal of all my poking around in this subject), I dove a little deeper into sounddevice and PyAudio. Both of these I installed with pip (i.e., pip install pyaudio).
For a more modern, cleaner, and more complete GUI-based viewer of realtime audio data (and the FFT frequency data), check out my Python Real-time Audio Frequency Monitor project.
I really like the structure and documentation of sounddevice, but I decided to keep developing with PyAudio for now. Sounddevice seemed to take more system resources than PyAudio (in my limited test conditions: Windows 10 with very fast and modern hardware, Python 3), and would audibly “glitch” music as it was being played every time it attached or detached from the microphone stream. I tried streaming, but after about an hour I couldn’t get clean live access to the microphone without glitching audio playback. Furthermore, every few times I ran this script it crashed my python kernel! I very rarely see this happening. iPython complained: “It seems the kernel died unexpectedly. Use ‘Restart kernel’ to continue using this console” and I eventually moved back to PyAudio. For a less “realtime” application, sounddevice might be a great solution. Here’s the minimal case sounddevice script I tested with (that crashed sometimes). If you have a better one to do live high-speed audio capture, let me know!
import sounddevice  # pip install sounddevice

for i in range(30):  # 30 updates in 1 second
    rec = sounddevice.rec(int(44100/30))
    sounddevice.wait()
    print(rec.shape)
Here’s a simple demo to show how I get realtime microphone audio into numpy arrays using PyAudio. This isn’t really that special. It’s a good starting point though. Note that rather than have the user define a microphone source in the python script (I had a fancy menu system handling this for a while), I allow PyAudio to just look at the operating system’s default input device. This seems like a realistic expectation, and saves time as long as you don’t expect your user to be recording from two different devices at the same time. This script gets some audio from the microphone and shows the values in the console (ten times).
import pyaudio
import numpy as np

CHUNK = 2**11
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

for i in range(10):
    data = np.fromstring(stream.read(CHUNK), dtype=np.int16)
    print(data)

# close the stream gracefully
stream.stop_stream()
stream.close()
p.terminate()
I tried to push the limit a little bit and see how much useful data I could get from this console window. It turns out that it’s pretty responsive! Here’s a slight modification of the code, made to turn the console window into an impromptu VU meter.
import pyaudio
import numpy as np

CHUNK = 2**11
RATE = 44100

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

for i in range(int(10*44100/1024)):  # go for a few seconds
    data = np.fromstring(stream.read(CHUNK), dtype=np.int16)
    peak = np.average(np.abs(data))*2
    bars = "#"*int(50*peak/2**16)
    print("%04d %05d %s" % (i, peak, bars))

stream.stop_stream()
stream.close()
p.terminate()
The results are pretty good! The advantage here is that no libraries are required except PyAudio. For people interested in doing simple math (peak detection, frequency detection, etc.) this is a perfect starting point. Here’s a quick cellphone video:
I’ve made realtime audio visualization (realtime FFT) scripts with Python before, but 80% of that code was creating a GUI. I want to see data in real time while I’m developing this code, but I really don’t want to mess with GUI programming. I then had a crazy idea. Everyone has a web browser, which is a pretty good GUI… with a Python script to analyze audio and save graphs (a lot of them, quickly) and some JavaScript running in a browser to keep refreshing those graphs, I could get an idea of what the audio stream is doing in something kind of like real time. It was intended to be a hack, but I never expected it to work so well! Check this out…
Here’s the python script to listen to the microphone and generate graphs:
import pyaudio
import numpy as np
import pylab
import time

RATE = 44100
CHUNK = int(RATE/20)  # RATE / number of updates per second

def soundplot(stream):
    t1 = time.time()
    data = np.fromstring(stream.read(CHUNK), dtype=np.int16)
    pylab.plot(data)
    pylab.title(i)
    pylab.grid()
    pylab.axis([0, len(data), -2**16/2, 2**16/2])
    pylab.savefig("03.png", dpi=50)
    pylab.close('all')

if __name__ == "__main__":
    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)
    for i in range(int(20*RATE/CHUNK)):  # do this for a few seconds
        soundplot(stream)
    stream.stop_stream()
    stream.close()
    p.terminate()
Here’s the HTML file with JavaScript to keep reloading the image…
<html>
<script language="javascript">
function RefreshImage(){
    document.pic0.src = "03.png?a=" + Math.random();
    setTimeout("RefreshImage()", 50);
}
</script>
<body onload="RefreshImage()">
<img name="pic0" src="03.png">
</body>
</html>
Here’s the result! I couldn’t believe my eyes. It’s not elegant, but it’s kind of functional!
Why stop there? I went ahead and wrote a microphone listening and processing class which makes this stuff easier. My ultimate goal hasn't been revealed yet, but I'm sure it'll be clear in a few weeks. Let's just say there's a lot of use in me visualizing streams of continuous data. Anyway, this class is a truly terrible attempt at a word pun, merging the words "SWH", "ear", and "Hear" into the official title "SWHear", which seems to be unique on Google. This class is a minimal case, but can be easily modified to implement threaded recording (which won't cause the rest of the functions to hang) as well as mathematical manipulation of data, such as FFT. With the same HTML file as used above, here's the new python script and some video of the output:
import pyaudio
import time
import pylab
import numpy as np

class SWHear(object):
    """
    The SWHear class is made to provide access to continuously recorded
    (and mathematically processed) microphone data.
    """

    def __init__(self, device=None, startStreaming=True):
        """fire up the SWHear class."""
        print(" -- initializing SWHear")
        self.chunk = 4096  # number of data points to read at a time
        self.rate = 44100  # time resolution of the recording device (Hz)
        # for tape recording (continuous "tape" of recent audio)
        self.tapeLength = 2  # seconds
        self.tape = np.empty(self.rate*self.tapeLength)*np.nan
        self.p = pyaudio.PyAudio()  # start the PyAudio class
        if startStreaming:
            self.stream_start()

    ### LOWEST LEVEL AUDIO ACCESS
    # pure access to microphone and stream operations
    # keep math, plotting, FFT, etc out of here.

    def stream_read(self):
        """return values for a single chunk"""
        data = np.fromstring(self.stream.read(self.chunk), dtype=np.int16)
        #print(data)
        return data

    def stream_start(self):
        """connect to the audio device and start a stream"""
        print(" -- stream started")
        self.stream = self.p.open(format=pyaudio.paInt16, channels=1,
                                  rate=self.rate, input=True,
                                  frames_per_buffer=self.chunk)

    def stream_stop(self):
        """close the stream but keep the PyAudio instance alive."""
        if 'stream' in locals():
            self.stream.stop_stream()
            self.stream.close()
        print(" -- stream CLOSED")

    def close(self):
        """gently detach from things."""
        self.stream_stop()
        self.p.terminate()

    ### TAPE METHODS
    # tape is like a circular magnetic ribbon of tape that's continuously
    # recorded and recorded over in a loop. self.tape contains this data.
    # the newest data is always at the end. Don't modify data on the tape,
    # but rather do math on it (like FFT) as you read from it.

    def tape_add(self):
        """add a single chunk to the tape."""
        self.tape[:-self.chunk] = self.tape[self.chunk:]
        self.tape[-self.chunk:] = self.stream_read()

    def tape_flush(self):
        """completely fill tape with new data."""
        readsInTape = int(self.rate*self.tapeLength/self.chunk)
        print(" -- flushing %d s tape with %dx%.2f ms reads" %
              (self.tapeLength, readsInTape, self.chunk/self.rate))
        for i in range(readsInTape):
            self.tape_add()

    def tape_forever(self, plotSec=.25):
        t1 = 0
        try:
            while True:
                self.tape_add()
                if (time.time()-t1) > plotSec:
                    t1 = time.time()
                    self.tape_plot()
        except:
            print(" ~~ exception (keyboard?)")
            return

    def tape_plot(self, saveAs="03.png"):
        """plot what's in the tape."""
        pylab.plot(np.arange(len(self.tape))/self.rate, self.tape)
        pylab.axis([0, self.tapeLength, -2**16/2, 2**16/2])
        if saveAs:
            t1 = time.time()
            pylab.savefig(saveAs, dpi=50)
            print("plotting saving took %.02f ms" % ((time.time()-t1)*1000))
        else:
            pylab.show()
            print()  # good for IPython
        pylab.close('all')

if __name__ == "__main__":
    ear = SWHear()
    ear.tape_forever()
    ear.close()
    print("DONE")
I don’t really intend anyone to actually do this, but it’s a cool alternative to recording a small portion of audio, plotting it in a pop-up matplotlib window, and waiting for the user to close it to record a new fraction. I had a lot more text in here demonstrating real-time FFT, but I’d rather consolidate everything FFT related into a single post. For now, I’m happy pursuing microphone-related python projects with PyAudio.
UPDATE: Displaying a single frequency
Use NumPy's fft() and fftfreq() to turn the linear data into frequencies. Set a target frequency and grab the FFT value corresponding to it. I haven't tested this to be sure it's working, but it should at least be close…
import pyaudio
import numpy as np

np.set_printoptions(suppress=True)  # don't use scientific notation

CHUNK = 4096   # number of data points to read at a time
RATE = 44100   # time resolution of the recording device (Hz)
TARGET = 2100  # show only this one frequency

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

for i in range(int(10*RATE/CHUNK)):  # go for a few seconds
    data = np.fromstring(stream.read(CHUNK), dtype=np.int16)
    fft = abs(np.fft.fft(data).real)
    fft = fft[:int(len(fft)/2)]  # keep only first half
    freq = np.fft.fftfreq(CHUNK, 1.0/RATE)
    freq = freq[:int(len(freq)/2)]  # keep only first half
    assert freq[-1] > TARGET, "ERROR: increase chunk size"
    val = fft[np.where(freq > TARGET)[0][0]]
    print(val)

# close the stream gracefully
stream.stop_stream()
stream.close()
p.terminate()
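As an offline sanity check of the same bin-selection idea (my addition, not part of the original post): the constants mirror the script above, but the tone is synthesized instead of read from a microphone, and the magnitude spectrum is used rather than just the real part.

```python
import numpy as np

CHUNK = 4096
RATE = 44100
TARGET = 2100  # frequency of interest (Hz)

# synthesize one chunk of a pure 2100 Hz tone instead of reading a microphone
t = np.arange(CHUNK) / RATE
data = np.sin(2 * np.pi * TARGET * t) * 2**14

# magnitude spectrum of the positive-frequency half
fft = np.abs(np.fft.fft(data))[:CHUNK // 2]
freq = np.fft.fftfreq(CHUNK, 1.0 / RATE)[:CHUNK // 2]

assert freq[-1] > TARGET, "ERROR: increase chunk size"
bin_index = np.where(freq > TARGET)[0][0]  # first bin above the target
peak = freq[np.argmax(fft)]                # strongest bin in the spectrum

print("bin above target: %.1f Hz, peak bin: %.1f Hz" % (freq[bin_index], peak))
```

The peak bin should land within one bin width (RATE/CHUNK, about 10.8 Hz here) of the synthesized frequency, which confirms the indexing is right before pointing the script at live audio.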
UPDATE: Display peak frequency
If your goal is to determine which frequency is producing the loudest tone, use this function. I also added a few lines to graph the output in case you want to observe how it operates. I recommend testing this script with a tone generator, or a YouTube video containing tones of a range of frequencies like this one.
import pyaudio
import numpy as np
import matplotlib.pyplot as plt

np.set_printoptions(suppress=True)  # don't use scientific notation

CHUNK = 4096  # number of data points to read at a time
RATE = 44100  # time resolution of the recording device (Hz)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True,
                frames_per_buffer=CHUNK)

for i in range(int(10*RATE/CHUNK)):  # go for a few seconds
    data = np.fromstring(stream.read(CHUNK), dtype=np.int16)
    data = data * np.hanning(len(data))  # smooth the FFT by windowing data
    fft = abs(np.fft.fft(data).real)
    fft = fft[:int(len(fft)/2)]  # keep only first half
    freq = np.fft.fftfreq(CHUNK, 1.0/RATE)
    freq = freq[:int(len(freq)/2)]  # keep only first half
    freqPeak = freq[np.where(fft == np.max(fft))[0][0]] + 1
    print("peak frequency: %d Hz" % freqPeak)

    # uncomment this if you want to see what the freq vs FFT looks like
    #plt.plot(freq, fft)
    #plt.axis([0, 4000, None, None])
    #plt.show()
    #plt.close()

# close the stream gracefully
stream.stop_stream()
stream.close()
p.terminate()
This program shows left vs right audio level:
import pyaudio
import numpy as np

maxValue = 2**16
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=2, rate=44100,
                input=True, frames_per_buffer=1024)
while True:
    data = np.fromstring(stream.read(1024), dtype=np.int16)
    dataL = data[0::2]  # even samples are the left channel
    dataR = data[1::2]  # odd samples are the right channel
    peakL = np.abs(np.max(dataL)-np.min(dataL))/maxValue
    peakR = np.abs(np.max(dataR)-np.min(dataR))/maxValue
    print("L:%00.02f R:%00.02f"%(peakL*100, peakR*100))
Output
L:47.26 R:45.17 L:47.55 R:45.63 L:49.44 R:45.98 L:45.27 R:49.80 L:44.39 R:45.75 L:47.50 R:46.96 L:41.49 R:42.64 L:42.95 R:41.39 L:49.56 R:49.62 L:48.29 R:48.80 L:45.03 R:47.62 L:47.99 R:49.35 L:41.58 R:49.21
Or with a tweak…
import pyaudio
import numpy as np

maxValue = 2**16
bars = 35
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=2, rate=44100,
                input=True, frames_per_buffer=1024)
while True:
    data = np.fromstring(stream.read(1024), dtype=np.int16)
    dataL = data[0::2]
    dataR = data[1::2]
    peakL = np.abs(np.max(dataL)-np.min(dataL))/maxValue
    peakR = np.abs(np.max(dataR)-np.min(dataR))/maxValue
    lString = "#"*int(peakL*bars)+"-"*int(bars-peakL*bars)
    rString = "#"*int(peakR*bars)+"-"*int(bars-peakR*bars)
    print("L=[%s]\tR=[%s]"%(lString, rString))
graphical output:
https://www.swharden.com/wp/category/electronics/qrss-slow-speed-fsk-cw-and-mept-manned-experimental-propagation-transmitter/
As noted in the comments here, the main bottleneck in the Python matrix solver functions presented recently was not in the data transfer from Excel, but rather in the creation of the Numpy arrays from very long lists of short lists (see this Stackoverflow thread for more details of the background). It seems there is a substantial time saving in converting the long array into 1D vectors, which can be converted into Numpy arrays much more quickly. The VBA code below converts a 3D array of any length (up to the maximum allowed in Excel) to 3 vectors.
Function Vectorize(x As Variant, x_1 As Variant, x_2 As Variant, x_3 As Variant) As Long
    Dim nr As Long
    nr = x.Rows.Count
    x_1 = x.Resize(nr, 1).Value2
    x_2 = x.Offset(0, 1).Resize(nr, 1).Value2
    x_3 = x.Offset(0, 2).Resize(nr, 1).Value2
    Vectorize = nr
End Function
To transfer these vectors to Python, via ExcelPython, the PyObj function must be used:
Set x = Range("SSA")            ' Excel range with 500500 rows and 3 columns
n = Vectorize(x, x_1, x_2, x_3) ' Convert range values to 3 vectors

' Create ExcelPython objects for transfer to Python
Set x_1 = PyObj(x_1, 1)
Set x_2 = PyObj(x_2, 1)
Set x_3 = PyObj(x_3, 1)
In Python the three vectors are converted to Numpy arrays:
def xl_getnpvect(x_1, x_2, x_3):
    timea = np.zeros(4)
    timea[0] = time.clock()
    x_1 = np.array(x_1)
    timea[1] = time.clock()
    x_2 = np.array(x_2)
    timea[2] = time.clock()
    x_3 = np.array(x_3)
    timea[3] = time.clock()
    return timea.tolist()
The table below compares the data transfer and conversion times using this method on an Excel range of 500500 rows x 3 columns, with the same operation using a 2D variant array.
The times for solution of a large sparse matrix system (10945×10945 matrix), using the new vector transfer routine, are shown below:
The data transfer and array creation times are now a relatively small proportion of the total solution time, even for the iterative solver with a solve time of only 0.28 seconds.
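The underlying claim — that one flat conversion beats converting a long list of short lists — is easy to illustrate on the Python side alone. A small sketch (the sizes here are my own stand-ins for the 500500 x 3 range, not the timings from the tables above):

```python
import numpy as np
import time

rows, cols = 100000, 3  # scaled-down stand-in for the 500500 x 3 Excel range
nested = [[float(r + c) for c in range(cols)] for r in range(rows)]
flat = [v for row in nested for v in row]

t0 = time.perf_counter()
a = np.array(nested)                    # NumPy walks a long list of short lists
t1 = time.perf_counter()
b = np.array(flat).reshape(rows, cols)  # one flat conversion, then reshape
t2 = time.perf_counter()

assert (a == b).all()  # identical result either way
print("nested: %.3f s, flat: %.3f s" % (t1 - t0, t2 - t1))
```

On typical machines the flat conversion is several times faster, which is the same effect the vector-transfer routine exploits.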
Your code looks good. Thanks for sharing.
Maybe it might be of help to force VBA to pass data as a ONE-dimensional array (as a list), and then do the reshape to the right rows/columns in numpy (maybe also specifying the type and the column order).
With a tweak, it is possible to do it WITHOUT copying the range, but simply forcing the array to think (temporarily) that it is one-dimensional. The following code does just this, and then restores the array:
Option Explicit
Option Base 1
Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (pvDest As Any, lpvSource As Any, ByVal cbCopy As Long)
Declare Function VarPtrArray Lib "msvbvm60.dll" Alias "VarPtr" (Ptr() As Any) As Long
Private Const VT_BYREF = &H4000&
Public Type SAFEARRAYBOUND
cElements As Long
lLbound As Long
End Type
Public Type SAFEARRAY
cDims As Integer
fFeatures As Integer
cbElements As Long
cLocks As Long
pvData As Long
rgsabound(3) As SAFEARRAYBOUND
End Type
Function Rank(A) As Long
Dim lp As Long, VType As Integer, sa As SAFEARRAY, n As Long
Rank = 0, 2
Rank = .cDims
End With
End Function
Function Make1Dim(A, m As Long)
Dim lp As Long, VType As Integer, sa As SAFEARRAY, n As Long))
m = .rgsabound(2).cElements
n = .rgsabound(1).cElements
.rgsabound(1).cElements = m * n
.rgsabound(2).cElements = m * n
.cDims = 1
CopyMemory ByVal lp + 16, .rgsabound(1), 2 * Len(.rgsabound(1))
CopyMemory ByVal lp, .cDims, 16
End With
Make1Dim = A
End Function
Sub reset2Dim(A, m As Long)
Dim lp As Long, n As Long, VType As Integer, sa As SAFEARRAY
If Not IsArray(A) Then Exit Sub))
.rgsabound(2).cElements = m
.rgsabound(1).cElements = .rgsabound(1).cElements \ m
.cDims = 2
CopyMemory ByVal lp + 16, .rgsabound(1), 2 * Len(.rgsabound(1))
CopyMemory ByVal lp, .cDims, 16
End With
End Sub
Sub sho(r, A)
Dim n As Integer
n = Rank(A)
Select Case n
Case 0
r = A
Case 1
r.Resize(1, UBound(A, 1)).Value2 = A
Case 2
r.Resize(UBound(A, 1), UBound(A, 2)).Value2 = A
End Select
End Sub
Sub Test()
Dim A, m As Long ' m saves the right no. of rows
A = [B3:D6]
sho [B10], Make1Dim(A, m)
reset2Dim A, m
sho [B13], A
End Sub
just put numbers in the range B3:D6, and run Test.
Hi Maurizio, thanks for the suggestion and the code.
I had some problems with the code:
After pasting into the VBE, and replacing the WordPress ” and ‘ with the standard versions, I still had three lines marked as errors:
If (VType And VT_BYREF) 0 Then
I have deleted the 0, and the code now seems to work. Does that make sense?
I’m still not sure that I see the advantage of this approach though. With my code I get three 1D lists transferred to Python, which is quite convenient. I guess with a large rectangular array your way would be better though. Are there other advantages that I’m missing?
the original line had "If (VType And VT_BYREF) <> 0 Then" (i.e., not equal to zero), so your change is equivalent. In reality, I think that in the present context it's even safe to delete those three lines. Regarding your question:
1) as you guess, for a large dense matrix definitively there should be a speed advantage. In python it is quite easy to restore the dense matrix, something like (on the premise that i’m NOT a python expert)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
A = np.reshape(np.asfortranarray(x, dtype='f8'), (3, -1), order='F')
(it makes a 3 x 2 matrix).
2) Similarly, if you need three vectors, i think you can get them in the python side, and possibly the whole process might be faster than the three calls;
3) This approach can give some speed advantage in any VBA code, in cases where it is appropriate to process a matrix in VBA with a single loop and a single index (add a scalar to the whole matrix, etc.). Something that was already possible in the old VB6, just by blanking the check on the array indexes and using just one index, but that is impossible in VBA.
Is it worth it? Difficult to answer; maybe it is an option to have just in case.
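For point 2 above, splitting the flat (column-ordered) data into three column vectors on the Python side could look something like the following sketch (my illustration of the idea; the names and sizes are invented):

```python
import numpy as np

# flat data for a 4-row x 3-column range, packed column by column (Fortran order)
flat = [1.0, 2.0, 3.0, 4.0,       # column 1
        5.0, 6.0, 7.0, 8.0,       # column 2
        9.0, 10.0, 11.0, 12.0]    # column 3

A = np.reshape(np.asarray(flat, dtype='f8'), (4, 3), order='F')
x_1, x_2, x_3 = A[:, 0], A[:, 1], A[:, 2]  # the three column vectors

print(x_1, x_2, x_3)
```

This replaces the three separate transfer calls with one flat transfer plus a cheap reshape and slice.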
A final comment on your interface to ExcelPython: it is perfect for spreadsheets formulas, as it works on the sequence
range -> variant -> py object -> python processing -> py object -> variant
but this is overkill for VBA programming, where I'd also like to have internal routines like
py object -> python processing -> py object
in vba, just use
dim A
set A = ….. (py call)
and then the object A is alive and it can be used as such in other vba/ExcelPython processing. I think this “double” interface might increase the appeal of the whole thing.
Best regards, maurizio
https://newtonexcelbach.com/2014/06/08/data-transfer-to-python-update/
Hey Numpy people!
Do anyone know how to disable underflow exception errors in Numeric?
I have a lot of these in my code. It is now a very important problem in my calculations. The only solution I see is to make a for loop and make the arithmetic in python instead of Numeric.
Thanks for your help,
Jean-Bernard
Python 2.0 (#9, Feb 2 2001, 12:17:02) [GCC 2.95.2 19991024 (release)] on linux2 Type "copyright", "credits" or "license" for more information. Hello from .pythonrc.py
import Numeric
Numeric.__version__
'17.2.0'
Numeric.array([2.9e-131])**3
Traceback (most recent call last): File "<stdin>", line 1, in ? OverflowError: math range error
2.9e-131**3
0.0
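For what it's worth, modern NumPy (the successor to Numeric) makes this behavior configurable through np.seterr; Numeric 17.x predates that API, so the snippet below only shows the present-day equivalent of what's being asked, not a fix for Numeric itself:

```python
import numpy as np

old = np.seterr(under='ignore')   # flush underflows to zero silently
result = np.array([2.9e-131]) ** 3  # far below the smallest float64
np.seterr(**old)                  # restore the previous error handling

print(result[0])  # underflows quietly to 0.0 instead of raising
```

np.seterr also accepts 'warn', 'raise', 'call', and 'print' for each error class, so underflow handling can be tightened or loosened per calculation.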
https://mail.python.org/archives/list/numpy-discussion@python.org/message/M46XUSOO7TSQIKRQ5W2PLT23JRIXU6ZB/
Cameron Laird revisits the practice and concepts of multithreaded programming in Java, this time focusing on more intermediate programming solutions for today's distributed computing problems. Build on what you know about the java.util.concurrent package while learning techniques to improve inter-thread communication and avoid Java concurrency pitfalls.
Multithreaded programming in Java has a reputation for difficulty, but most developers can untangle it with smart, designed-for-concurrency constructs that are standard with the Java platform. In this follow-up to my survey of basic modern threading techniques, I'll introduce some of the constructs in Doug Lea's java.util.concurrent package and also discuss a few standbys of Java threading horror -- which aren't actually such a big deal when properly worked around. All in all, I will touch on seven topics that can help you make the best, or the worst, of your multithreaded programs:
- Thread management (a recurring theme)
- Runnable vs Callable
- Shared resources and immutability
- Synchronized blocks
- Inter-thread communication: Signals and locks
- Deadlocks
- Executors and thread pools
Note that some of the examples in this article build on my discussion in "Modern threading: A Java concurrency primer" (JavaWorld, June 2012).
Thread management (a recurring theme)
Programming with Java threads isn't really so hard -- it's thread management that keeps most software developers up at night. Consider the analogy of painting a room: more of your time will be spent on preparation than execution -- choosing and matching colors, clearing and taping the room, and so on. This is because today's paints and brushes make the painting part about as simple and "goofproof" as can be. Setup, as it turns out, is more than two-thirds of the game.
Thread programming works similarly: Using threads is generally much easier than managing (or cultivating) them over the long term. Thread management will be a recurring theme in your study of thread programming, so you might as well start thinking about it now.
For instance, in my previous article I introduced thread management as a simple evaluation of new ExampleThread(). That thread was intended to be destroyed at the end of scope, which is fine for a simple program. But now we're ready to dig into some more sophisticated schemes. In the next sections, look for programs that do some of the following:
- Delete or re-use Thread instances
- Manage different varieties of a Thread
- Require introspection on Thread characteristics such as memory use or life history
Runnable vs Callable
In my last article I introduced a MonitorModel based on a Runnable rather than a Thread. Runnable's more flexible inheritance model gives it the advantage over Thread. On the other hand, both Runnable and Thread share certain limits: neither returns values or throws Exceptions.
For even more capable thread programming, go beyond both Thread and Runnable to use Callable. Callable communicates better than its two friends, because Callable returns results. Callable is part of the java.util.concurrent package, which first appeared in the Java 5 end-of-summer 2004 release. The program in Listing 1 illustrates both the flexibility and the complications of using Callable:
Listing 1. RockScissorsPaper with Callable
import java.util.concurrent.*;

public class RockScissorsPaper {
    public static class PlayerCallable implements Callable {
        String name;
        int call_sequence = 0;
        static String[] SelectionTable = { "Rock", "Scissors", "Paper" };

        PlayerCallable(String given_name) {
            name = given_name;
        }

        public String call() throws InterruptedException {
            int delay = (int) (2000 * Math.random());
            call_sequence++;
            System.out.format("%s pauses %d microseconds on the %d-th invocation.\n",
                              name, delay, call_sequence);
            Thread.sleep(delay);
            String choice = SelectionTable[three_sided_coin()];
            System.out.format("%s selects %s.\n", name, choice);
            return choice;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        PlayerCallable player1 = new PlayerCallable("player1");
        PlayerCallable player2 = new PlayerCallable("player2");
        for (int i = 10; i > 0; i--) {
            Future future1 = pool.submit(player1);
            Future future2 = pool.submit(player2);
            System.out.println(payoff((String) future1.get(), (String) future2.get()));
        }
        pool.shutdown();
    }

    public void run() {
        FutureTask player1 = new FutureTask(new ThisCallable());
    }

    public static int three_sided_coin() {
        return (int) (Math.random() * 3);
    }

    public static String payoff(String first_hand, String second_hand) {
        if (first_hand.equals(second_hand)) {
            return String.format("'%s' from both hands is a tie.", first_hand);
        }
        if ((first_hand.equals("Rock") & second_hand.equals("Scissors"))
                || (first_hand.equals("Scissors") & second_hand.equals("Paper"))
                || (first_hand.equals("Paper") & second_hand.equals("Rock"))) {
            return String.format("One's '%s' beats Two's '%s'.", first_hand, second_hand);
        }
        return String.format("Two's '%s' beats One's '%s'.", second_hand, first_hand);
    }

    public class ThisCallable implements Callable {
        public Integer call() throws java.io.IOException {
            return 1;
        }
    }
}
In this game of rock-scissors-paper, two players report to an umpire. Output from a typical run of this program would be as follows:
Listing 2. Output from Callable RockScissorsPaper
player1 pauses 1350 microseconds on the 1-th invocation.
player2 pauses 581 microseconds on the 1-th invocation.
player2 selects Rock.
player1 selects Paper.
One's 'Paper' beats Two's 'Rock'.
player1 pauses 942 microseconds on the 2-th invocation.
player2 pauses 314 microseconds on the 2-th invocation.
player2 selects Rock.
player1 selects Rock.
'Rock' from both hands is a tie.
player2 pauses 292 microseconds on the 3-th invocation.
player1 pauses 703 microseconds on the 3-th invocation.
player2 selects Paper.
player1 selects Paper.
'Paper' from both hands is a tie.
player1 pauses 1261 microseconds on the 4-th invocation.
player2 pauses 354 microseconds on the 4-th invocation.
player2 selects Scissors.
player1 selects Paper.
Two's 'Scissors' beats One's 'Paper'.
player1 pauses 1534 microseconds on the 5-th invocation.
player2 pauses 1860 microseconds on the 5-th invocation.
player1 selects Paper.
player2 selects Rock.
One's 'Paper' beats Two's 'Rock'.
player1 pauses 1025 microseconds on the 6-th invocation.
player2 pauses 906 microseconds on the 6-th invocation.
player2 selects Paper.
player1 selects Rock.
Two's 'Paper' beats One's 'Rock'.
player1 pauses 554 microseconds on the 7-th invocation.
player2 pauses 1975 microseconds on the 7-th invocation.
player1 selects Rock.
player2 selects Scissors.
One's 'Rock' beats Two's 'Scissors'.
player1 pauses 1175 microseconds on the 8-th invocation.
player2 pauses 774 microseconds on the 8-th invocation.
player2 selects Rock.
player1 selects Paper.
One's 'Paper' beats Two's 'Rock'.
player1 pauses 134 microseconds on the 9-th invocation.
player2 pauses 708 microseconds on the 9-th invocation.
player1 selects Scissors.
player2 selects Scissors.
'Scissors' from both hands is a tie.
player1 pauses 531 microseconds on the 10-th invocation.
player2 pauses 126 microseconds on the 10-th invocation.
player2 selects Scissors.
player1 selects Paper.
Two's 'Scissors' beats One's 'Paper'.
Some things to note about the program: First, the two players operate independently. Each waits a randomized time, from zero to two seconds, then chooses among Rock, Scissors, or Paper. Also note that the choices take place in separate threads, in indeterminate sequence. For example, in the tenth "hand" Player1 takes four times as long to choose, so Player2's choice appears on stdout first:
player1 pauses 531 microseconds on the 10-th invocation.
player2 pauses 126 microseconds on the 10-th invocation.
player2 selects Scissors.
player1 selects Paper.
Communication to and from each thread is fully programmable using the call() and get() methods. Invocation of the Callable completes immediately. Afterward, when the result of the calculation is ready, it appears through the Future mechanism. And finally, the ExecutorService assumes responsibility for assigning Callables to available Threads for execution. (More about that later.)
While using Callable involves more plumbing than using Runnable, it also makes for cleaner communication. Callable is generally a better choice than Runnable for use cases where computing threads need to exchange data with their invoking process. Runnable also might play rock-scissors-paper, but it would need a way to return the selection of Rock, Scissors, or Paper. An individual programmer would be hard-pressed to code such a communication more elegantly than Callable already does.
ExecutorService vs ForkJoinPool
ExecutorService was introduced in the java.util.concurrent package to help manage progress-tracking and termination for asynchronous tasks. Learn about ExecutorService (and its modernized sidekick, ForkJoinPool) in the Java tip, "When to use ExecutorService vs ForkJoinPool."
Shared resources and immutability
Multithreaded programming makes it much harder to reason about or understand code segments locally. That's because many resources have the potential to be shared between threads, so what happens in one code segment might depend on a distant source, executing in a different thread. The example in Listing 3 illustrates my point.
Listing 3. Example thread-hazardous code segment
...
common.balance = getBalance();
if (common.balance > common.threshold) {
...
If you're disturbed by what you see in Listing 3 then you are not alone! In a multithreaded context, common.balance might have a different value when tested than when it was assigned. While the two statements are consecutive in source code, during execution other source code in a different thread could intervene and update the common.balance value.
Worse, from a programmer's standpoint that sequence of execution isn't deterministic: it might vary from one run to the next.
An effective response to such difficulties is to program with immutable objects. For reasons that go beyond their use in threads, Joshua Bloch famously recommends that developers use immutable classes "unless there's a very good reason to make them mutable." For cases where a class cannot be immutable, he proposes that we limit the mutability "as much as possible" (see Effective Java in Resources).
Using immutable objects ensures thread safety. You can also attain thread safety by doing calculations on mutable objects whose only reference is within the local scope: if a thread can be guaranteed to have the only references to a resource, then using that resource is safe even if it's mutable.
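A minimal sketch of such an immutable class (the Balance name and its API are invented for illustration, not from the article):

```java
// A minimal immutable value class: all fields final, no setters;
// "mutating" operations return a fresh object instead.
final class Balance {
    private final long cents;

    Balance(long cents) {
        this.cents = cents;
    }

    long cents() {
        return cents;
    }

    // Never modifies this object, so a Balance can be shared across threads freely.
    Balance deposit(long amount) {
        return new Balance(cents + amount);
    }
}
```

Because every "update" yields a new object, a reader in another thread can never observe a half-written state.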
More about immutability and thread safety
See Bill Venners's "Design for thread safety" for a quick tutorial on three ways to make an object thread safe. Vladimir Roubtsov's "Mutable or immutable?" defines and discusses immutable objects and patterns.
Synchronized blocks
Sometimes a calculation requires mutability, with references that can't be confined to a single thread; what to do then? This situation demands synchronization, which is a kind of locking that guarantees exclusive access by a thread to a shared resource.
Syntactically, the
synchronized keyword can be applied to both methods and blocks. In broad terms, block synchronization is more useful. For instance, using block synchronization would transform the code sample from Listing 3 to the following:
Listing 4. Block synchronization enforces thread safety
...
synchronized (common.balance) {
    common.balance = getBalance();
    if (common.balance > common.threshold) {
        ...
    }
}
The
synchronized keyword locks the source code segment so that only one thread can execute at a time. You are thus guaranteed that
common.balance will have the same value on reading within the thread as when it was written.
Alternately, you could use a slightly different syntax to lock resources for the span of an entire method:
...
public static synchronized int getBalance() {
    ...
Synchronized locking guarantees that computation of
getBalance() is atomic or transactional across its resources.
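As a generic illustration of that guarantee (not the article's code, and the class names are made up): if every access to a shared counter goes through synchronized methods on the same object, increments from competing threads are never lost.

```java
// Every method that touches 'n' synchronizes on the same object (this),
// so each increment is atomic with respect to the others.
class SafeCounter {
    private int n = 0;

    synchronized void increment() { n++; }

    synchronized int value() { return n; }
}

class SafeCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.value());   // always 20000, never less
    }
}
```

Remove the synchronized keywords and the same program can print a smaller number, because unsynchronized n++ is a read-modify-write that two threads can interleave.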
Synchronization is a relatively delicate matter: it applies only to blocks and methods, not variables. If misused, it can result in pathologies like deadlock. Synchronization applies only to final fields, and it's managed by methods like
wait() and
notify(). You can also configure synchronization with
java.util.concurrent.locks to yield interruptible or re-entrant locks, which I discuss in the next section. (Also see Resources.)
Avoid synchronization deadlocks
Brian Goetz's "Avoid synchronization deadlocks" is an in-depth look at how
synchronized can lead to deadlock, followed by tips for working around it.
http://www.javaworld.com/article/2078679/java-concurrency/modern-threading-for-not-quite-beginners.html
IRC log of sml on 2008-10-16
Timestamps are in UTC.
18:02:19 [RRSAgent]
RRSAgent has joined #sml
18:02:19 [RRSAgent]
logging to
18:02:23 [johnarwe_]
johnarwe_ has joined #sml
18:02:24 [Zakim]
Zakim has joined #sml
18:03:00 [johnarwe_]
zakim, this is sml
18:03:00 [Zakim]
ok, johnarwe_; that matches XML_SMLWG()2:00PM
18:03:06 [pratul]
Agenda is at
18:03:28 [Kumar]
Kumar has joined #sml
18:04:17 [Zakim]
+[Microsoft]
18:04:26 [pratul]
Zakim, Microsoft is me
18:04:26 [Zakim]
+pratul; got it
18:06:47 [johnarwe_]
regrets: Sandy, MSM, Ginny
18:07:46 [kirkw]
kirkw has joined #sml
18:08:15 [kirkw]
chair: Pratul Dublish
18:08:26 [kirkw]
scribenick: kirkw
18:08:33 [kirkw]
schribe: Kirk Wilson
18:08:37 [johnarwe_]
zakim, who's here?
18:08:37 [Zakim]
On the phone I see +1.425.836.aaaa, johnarwe_, ??P15, pratul
18:08:38 [Zakim]
On IRC I see kirkw, Kumar, Zakim, johnarwe_, RRSAgent, pratul, Kirk, trackbot
18:09:12 [johnarwe_]
zakim, aaaa is kumar
18:09:12 [Zakim]
+kumar; got it
18:09:21 [johnarwe_]
zakim, ??P15 is Kirk
18:09:21 [Zakim]
+Kirk; got it
18:09:22 [kirkw]
scribe: Kirk Wilson
18:09:36 [johnarwe_]
zakim, who's here?
18:09:36 [Zakim]
On the phone I see kumar, johnarwe_, Kirk, pratul
18:09:37 [Zakim]
On IRC I see kirkw, Kumar, Zakim, johnarwe_, RRSAgent, pratul, Kirk, trackbot
18:10:00 [johnarwe_]
minutes at
18:10:19 [kirkw]
Topic: Approval of minutes from 10/2
18:10:37 [kirkw]
RESOLUTION: Minutes approved without objection.
18:10:59 [kirkw]
TOPIC: Issues opened by John
18:11:28 [kirkw]
Issue 5053:
18:11:51 [kirkw]
rrsagent, make log public
18:12:18 [kirkw]
John: Issue has to do with word order and clarification.
18:12:43 [kirkw]
Pratul: Issue is, Shall we endorse the resolution?
18:14:04 [kirkw]
RESOLUTION: We endorse the resolution.
18:14:27 [kirkw]
Issue 5155:
18:16:08 [kirkw]
RESOLUTION: We endorse the resolution.
18:16:34 [kirkw]
TOPIC: Action Items
18:17:01 [kirkw]
John: Only two open, from MSM and Pratul for draft of XLink note.
18:17:23 [kirkw]
Pratul: Will have XLink note for F2F.
18:17:55 [kirkw]
TOPIC: Latest draft of Test Case Document
18:18:04 [Zakim]
-Kirk
18:18:40 [kirkw]
My phone connect just dead on me. Let me get back on.
18:19:11 [Zakim]
+??P4
18:19:32 [kirkw]
zakim, ??P4 is kirkw
18:19:32 [Zakim]
+kirkw; got it
18:19:34 [johnarwe_]
zakim, ??P4 is Kirk
18:19:34 [Zakim]
I already had ??P4 as kirkw, johnarwe_
18:20:41 [kirkw]
Discussion of section 2
18:23:01 [kirkw]
Correction to p. 1 line 25:
18:23:11 [johnarwe_]
inconsistency betw 1.25 and 3.21-22 to be corrected
18:23:41 [johnarwe_]
btw, for the IRC record, for today I am repping IBM since Sandy is not able to attend
18:24:23 [Kumar]
from : Therefore, each test will be represented by an SML-IF document.
18:24:23 [Kumar]
to : Therefore, all tests, except the tests thatt test the locator element, will be represented by an SML-IF document.
18:24:41 [johnarwe_]
2.16 documentS
18:24:54 [kirkw]
s/thatt/that
18:27:52 [kirkw]
RESOLUTION: Text as pasted in IRC is approved.
18:28:00 [johnarwe_]
4. 16 resultS
18:28:15 [johnarwe_]
4.16 and -> or
18:28:23 [johnarwe_]
4.17 resultS
18:29:35 [johnarwe_]
4.23 This -> Comparing test results (so it refers back to 1st sentence, not 2nd, which seems like the original intent)
18:34:23 [kirkw]
John: bottom of p. 4.37: We have additional question if SML-IF document is valid, whether the model is SML valid. This leads to the possibility of a tertiary value of the results. Results, therefore, are not simply a boolean value.
18:35:34 [kirkw]
...There are states: SML-IF invalid vs. SML-IF valid (which can be SML valid or invalid)
18:36:18 [kirkw]
Kumar: Addressed by lines 1 - 8 on p. 5.
18:36:35 [kirkw]
s/are states/are three states
18:42:00 [kirkw]
Kumar: This is not a problem for the two implementations that we know. It will be clear from the test time what the source of the error is.
18:42:38 [kirkw]
John: Boolean is correct: Issue is what can be guaranteed from the spec and what you can know as a human. The two are not the same.
18:45:56 [kirkw]
RESOLUTION: No objections from current attendees to approving the text-plan doc with the specific change on p. 1.
18:46:12 [johnarwe_]
s/text/test/
18:46:14 [kirkw]
s/text-/test-
18:46:36 [kirkw]
TOPIC: Review of COSMOS Test Plan.
18:46:46 [kirkw]
See Ginny's email.
18:47:21 [pratul]
18:51:45 [pratul]
an SML reference (sml:ref = true) using only unrecognized schemes. (#1 above)
18:52:20 [kirkw]
Discussion: SML references using unrecognized schemes.
18:52:32 [kirkw]
...What is the expected result of the test?
18:52:58 [kirkw]
John: If targetRequired, then SML reference is invalid.
18:54:12 [kirkw]
Kumar: Doesn't see much value in writing such a test case. If both implementations doesn't understand the reference schemes, there is no issue of interoperability.
18:55:38 [kirkw]
John: We need to answer the question of whether we are starting with COSMOS and then just discuss additional test cases?
18:56:15 [kirkw]
...Pratul agrees we should start with this question.
18:57:20 [kirkw]
RESOLUTION: Agreed without objection to start with accepting the COSMOS set suite.
18:57:37 [kirkw]
s/set/test
18:58:22 [kirkw]
Returning to considering Ginny's list:
18:59:48 [pratul]
- an SML reference (sml:ref = true) using only unrecognized schemes. (#1 above)
19:00:27 [kirkw]
Kumar: Proposal is NOT to add it.
19:00:50 [kirkw]
John: If Ginny was to write such a case, we would not reject it.
19:01:09 [johnarwe_]
s/we/I/
19:01:48 [kirkw]
RESOLUTION: The group will not write such a case, but if Ginny were to write a case, we would accept it.
19:02:48 [kirkw]
Second Test Case: does not look like there are tests that test the necessary processing to identify identical targets (section 4.2.3) E.g., bullet #2 is not tested.
19:05:42 [kirkw]
Kumar: MS implementation could not test such a condition, since it supports only the SML URI reference scheme.
19:06:50 [kirkw]
Pratul: We have different aliases pointing to the same element.
19:07:01 [johnarwe_]
sml 4.2.3 #2 starts Otherwise, a model validator MUST consider both targets to be different when
19:10:27 [kirkw]
Pratul: Proposal is to add this test case.
19:10:58 [kirkw]
RESOLUTION: We should add a test case to cover this scenario.
19:11:28 [kirkw]
...Pratul: have one or more test cases.
19:11:40 [pratul]
I don't see deref() tests for each bullet in section 4.2.7, 1.b.
19:11:53 [kirkw]
Third Test Case: no test for section 4.3.1, bullet 1 (wrong namespace for 'uri') and bullet 1.a.
19:11:54 [pratul]
Ginny: I don't see deref() tests for each bullet in section 4.2.7, 1.b.
19:13:20 [kirkw]
NOTE: to myself--correct this copy error during editing.
19:13:53 [kirkw]
Pratul: Proposal is to add test case to cover this scenario: 1.b test case.
19:14:04 [pratul]
Proposal: Add test case(s) to cover 4.2.7, 1(b)
19:14:30 [kirkw]
Kumar: Since MS supports only one scheme, MS could not test it.
19:15:18 [kirkw]
RESOLUTION: If anybody can write the test case, we will accept it.
19:15:35 [kirkw]
...Attendees are "neutral" to this test case.
19:15:49 [kirkw]
s/can/is willing to
19:16:02 [kirkw]
Fourth issue: no test for section 4.3.1, bullet 1 (wrong namespace for 'uri') and bullet 1.a.
19:16:05 [pratul]
Ginny: no test for section 4.3.1, bullet 1 (wrong namespace for 'uri') and bullet 1.a.
19:16:06 [Kumar]
This test case will fall into the optional features test bucket.
19:17:10 [kirkw]
s/This test/Third bullet
19:17:55 [kirkw]
Pratul: Proposal is to add test cases for this scenario.
19:20:00 [kirkw]
RESOLUTION: We should add a test case to cover this scenario.
19:20:15 [kirkw]
...Kumar: there may be a test case for this.
19:20:39 [kirkw]
Fifth bullet: no targetRequired tests for derivation by restriction or substitution groups (there are tests for these in targetElement and targetType) - section 5.1.2.1, bullet 1.b and section 5.1.2.2 (for targetRequired).
19:21:06 [kirkw]
Pratul: Proposal is to add these test cases.
19:21:13 [kirkw]
Kumar: Agreed.
19:22:02 [kirkw]
RESOLUTION: We should add test cases to cover this scenario.
19:22:35 [kirkw]
Sixth bullet: no deref() test for sml:selector or sml:field, sections 5.2.1.2 - bullets 1 and 2.
19:23:21 [kirkw]
Kumar: We have test cases for this; also COSMOS.
19:24:06 [Kumar]
id-constraint-KeyDuplicate-invalid.xml
19:26:58 [kirkw]
Kumar: Ginny may mean what happens if there are invalid XPath.
19:28:23 [kirkw]
Pratul: We need more information from Ginny regarding what she means and go on from there.
19:29:29 [kirkw]
...Pratul will write Ginny an email after the call.
19:29:53 [kirkw]
Seventh bullet: no test for section 5.2.1.2, bullet 4.
19:31:33 [kirkw]
Pratul: Proposal is to add test cases for this, if COSMOS has no test cases for this scenario.
19:32:28 [kirkw]
RESOLUTION: Agreed, no objections.
19:32:54 [kirkw]
Eigth bullet: acyclic tests do not mention "intra-document references" so I assume there may not be a test for this. The tests only mention "inter-document references".
19:34:35 [kirkw]
Pratul: Proposal is to add test cases to cover intra-document acyclic constraint for intra-document references.
19:35:02 [kirkw]
RESOLUTION: We agree with no objections.
19:35:52 [Kumar]
s/id-constraint-KeyDuplicate-invalid.xml/InValidKeyDuplicate.xml/
19:35:53 [kirkw]
Pratul: Should we have a meeting next week?
19:37:26 [kirkw]
Pratul: We will meet next wekk.
19:37:26 [Zakim]
-pratul
19:37:28 [Zakim]
-kumar
19:37:32 [Zakim]
-kirkw
19:37:40 [kirkw]
s/wekk/week
19:37:45 [johnarwe_]
rrsagent, generate minutes
19:37:45 [RRSAgent]
I have made the request to generate
johnarwe_
19:37:52 [johnarwe_]
rrsagent, make log public
19:38:07 [kirkw]
Thank you.
19:38:27 [johnarwe_]
looks like it worked too
19:38:35 [Zakim]
-johnarwe_
19:38:36 [Zakim]
XML_SMLWG()2:00PM has ended
19:38:38 [Zakim]
Attendees were +1.425.836.aaaa, johnarwe_, pratul, kumar, Kirk, kirkw
19:38:51 [johnarwe_]
I've gotten so paranoid I don't hang up until the log is public
19:40:19 [kirkw]
Actually, I did that at some point during the start of session. Have it on my check off list.
20:48:39 [Zakim]
Zakim has left #sml
23:54:28 [MSM]
MSM has joined #sml
http://www.w3.org/2008/10/16-sml-irc
Anyone know how to K64F Flash programming ?
Topic last updated 26 Mar 2015. 13 replies.
Thank you everyone.

<<quote Sissors>> I haven't checked it on the K64F, but this should work on it too to write to the flash memory: <</quote>>

It is very good. I will try immediately.

<<quote Kojto>> I believe Erik is right, his library should work for K64F. Please share if you test it. Thanks. <</quote>>

Yes. I will report the test result. Thanks.
I was successful in writing to the flash on the K64F. However, modifications were needed.

== 1. Define FTFE ==

The K64F has neither FTFA nor FTFL; the FTFE must be used instead.

<<code>>
#ifdef TARGET_K64F
#include "MK64F12.h"
#define FTFA FTFE
#define FTFA_FSTAT_FPVIOL_MASK FTFE_FSTAT_FPVIOL_MASK
#define FTFA_FSTAT_ACCERR_MASK FTFE_FSTAT_ACCERR_MASK
#define FTFA_FSTAT_RDCOLERR_MASK FTFE_FSTAT_RDCOLERR_MASK
#define FTFA_FSTAT_CCIF_MASK FTFE_FSTAT_CCIF_MASK
#define FTFA_FSTAT_MGSTAT0_MASK FTFE_FSTAT_MGSTAT0_MASK
#else
// Different names used on at least the K20:
#ifndef FTFA_FSTAT_FPVIOL_MASK
#define FTFA FTFL
#define FTFA_FSTAT_FPVIOL_MASK FTFL_FSTAT_FPVIOL_MASK
#define FTFA_FSTAT_ACCERR_MASK FTFL_FSTAT_ACCERR_MASK
#define FTFA_FSTAT_RDCOLERR_MASK FTFL_FSTAT_RDCOLERR_MASK
#define FTFA_FSTAT_CCIF_MASK FTFL_FSTAT_CCIF_MASK
#define FTFA_FSTAT_MGSTAT0_MASK FTFL_FSTAT_MGSTAT0_MASK
#endif
#endif
<</code>>

== 2. Add the "ProgramPhrase" command and change the code to write in 64-bit units ==

The K64F does not have ProgramLongword; ProgramPhrase must be used instead.
<<code>>
enum FCMD {
    Read1s          = 0x01,
    ProgramCheck    = 0x02,
    ReadResource    = 0x03,
    ProgramLongword = 0x06,
    ProgramPhrase   = 0x07,
    EraseSector     = 0x09,
    Read1sBlock     = 0x40,
    ReadOnce        = 0x41,
    ProgramOnce     = 0x43,
    EraseAll        = 0x44,
    VerifyBackdoor  = 0x45
};
<</code>>

<<code>>
IAPCode program_flash(int address, char *data, unsigned int length) {
    #ifdef IAPDEBUG
    printf("IAP: Programming flash at %x with length %d\r\n", address, length);
    #endif
    if (check_align(address))
        return AlignError;

    IAPCode eraseCheck = verify_erased(address, length);
    if (eraseCheck != Success)
        return eraseCheck;

    IAPCode progResult;
    #ifdef TARGET_K64F
    // The K64F writes 8-byte phrases:
    for (unsigned int i = 0; i < length; i += 8) {
        progResult = program_word(address + i, data + i);
        if (progResult != Success)
            return progResult;
    }
    #else
    for (unsigned int i = 0; i < length; i += 4) {
        progResult = program_word(address + i, data + i);
        if (progResult != Success)
            return progResult;
    }
    #endif
    return Success;
}
<</code>>

<<code>>
IAPCode program_word(int address, char *data) {
    #ifdef IAPDEBUG
    printf("IAP: Programming word at %x, %d - %d - %d - %d\r\n", address, data[0], data[1], data[2], data[3]);
    #endif
    if (check_align(address))
        return AlignError;

    // Setup command
    #ifdef TARGET_K64F
    FTFA->FCCOB0 = ProgramPhrase;
    FTFA->FCCOB8 = data[7];
    FTFA->FCCOB9 = data[6];
    FTFA->FCCOBA = data[5];
    FTFA->FCCOBB = data[4];
    #else
    FTFA->FCCOB0 = ProgramLongword;
    #endif

    run_command();
    return check_error();
}
<</code>>

The write size is 64 bits, but with this patch FreescaleIAP worked.
Why can't Freescale stick to their names for once :/ They just insist on changing them all the time. Anyway, nice that you figured it out; I will also have another look at it later.

Edit: So I assumed the KSDK targets (K22F and K64F currently for mbed) would have the same flash module. I was wrong. According to the reference manual, the K22F should work fine with the current code since it does have program longword, but indeed the K64F does not have it.

I am going to add you to the developers of the library. That way you can commit your changes and then publish them. (It will ask if you want to fork; that should not be needed, you can modify the library directly.)
Erik Olieman-san, thank you for adding me as a developer. I published the patched code. I could not test other boards, as I do not have a non-K64F board. My product is in C, so I want to convert the code to C and import it into my program. I would like to use the IAP source code under the Apache 2 license. Would you allow it? I was not able to make the base IAP code for Freescale myself. Thank you for making the library.
Hey, your patch looks good and shouldn't affect other targets. And since it is effectively just C code, converting it should be easy enough. You (and others) may use it for your product freely. I enabled the tickbox for the Apache license for the library. I believe technically I should put it on every document of my code, but I consider that too much work :).
Hi there
I am looking for a way to write the flash memory of K64F from the program.
FlexRAM seems to good but I can not find out good sample and application note.
Anyone know application notes or sample programs?
https://developer.mbed.org/forum/platform-38-FRDM-K64F-community/topic/5176/?page=1
Sort and store files with same extension in Python
Suppose you have a folder with hundreds of files that are not managed properly, creating a mess, and now you want to arrange them into different folders. To store files with the same extension together, you just need this Python program: it will do the work in seconds and you are good to go.
For example, I have this folder with 111 files of different extensions, and I want to arrange them in folders according to their extensions, so that files with the same extension will be present in the same folder.
Modules Required inside the program
We need some Python modules, namely os and shutil; using these we can easily sort and store files with the same extension in a Python program. These modules are imported into our program with the import statement.
- OS Module is used here to change the directories and check for the existence of another directory in the current directory using os.path.exists() command in which the path of the directory is given.
This module is also being used to split the file name and extension into different variables as per our requirement using the os.path.splitext() command in which the name of the file is given.
This module is also being used to make a list of all files using the os.listdir() command in which the name of the directory is given.
- Shutil Module is used here to move files from one directory to another using shutil.move() command. The first attribute refers to the current location of the file and the second attribute refers to the future location of the file with the file name also i.e, where the file to be moved and the name of the file is to be specified as the second attribute.
Program Functioning for storing and sorting files with same extension in Python
I already discussed the modules used in this program which covers most of the explanation. Now, comes the remaining explanation of the program as follows:
- The input() command is used to take the directory name from the user. The directory should always be present in the same directory in which you have your Python program.
- The for loop is used to iterate through the list of file names stored in list li. This is the most important part of our program as all steps of moving files are done in this part.
- extension = extension[1:], this simply slice down the extension part having no dots(.) in it. For Example, the extension is (.jpg) but we just need (jpg), that’s what this line of code does for us.
- Here, the if statement is used to check whether any extension exists or not. If no, then continue is used to check for the next file but if yes, then simply move towards the next line of code.
- Then, the next if-else statement is used here to check whether the directory for an extension already exists or not. If yes, then just move the file to that directory, and if no, then make one and move that file to that newly-created directory.
import os
import shutil

dirName = input('Enter folder name: ')
li = os.listdir(dirName)
for i in li:
    fileName, extension = os.path.splitext(i)
    extension = extension[1:]
    if extension == "":
        continue
    if os.path.exists(dirName + '/' + extension):
        shutil.move(dirName + '/' + i, dirName + '/' + extension + '/' + i)
    else:
        os.makedirs(dirName + '/' + extension)
        shutil.move(dirName + '/' + i, dirName + '/' + extension + '/' + i)
Output
Enter folder name: Files
Here, you can see that I have now all files with the same extensions are moved to the different folders and folder names are set to their extension name.
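For comparison, the same behavior can be sketched with pathlib from the standard library, which makes the path joins and extension handling less error-prone. The function name sort_by_extension is my own, not from the article:

```python
import shutil
from pathlib import Path

def sort_by_extension(dir_name):
    """Move every file directly inside dir_name into a subfolder named after its extension."""
    root = Path(dir_name)
    for path in list(root.iterdir()):   # materialize first: we mutate the directory below
        ext = path.suffix[1:]           # '.jpg' -> 'jpg'
        if not path.is_file() or not ext:
            continue                    # skip subfolders and extension-less files
        target = root / ext
        target.mkdir(exist_ok=True)     # create the extension folder on first use
        shutil.move(str(path), str(target / path.name))
```

mkdir(exist_ok=True) replaces the explicit os.path.exists/os.makedirs branch in the original program, and path.suffix replaces the os.path.splitext slicing.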
https://www.codespeedy.com/sort-and-store-files-with-same-extension-in-python/
On Jan 17, 3:55 pm, Steven D'Aprano <st... at REMOVE-THIS-cybersource.com.au> wrote:
> Both very good points, but consider that you're not comparing apples with
> apples.
>
> >>> __import__("os", fromlist=["system"])
> <module 'os' from '/usr/lib/python2.5/os.pyc'>
> >>> system
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> NameError: name 'system' is not defined

I must confess I've rarely had a need to use __import__ and don't think I've ever used the fromlist arg. I'm confused, though, because the docstring states:

    The fromlist should be a list of names to emulate ``from name import ...''

But it also states that __import__ always returns a module, so I'm utterly confused as to the purpose of fromlist, or how to inject the specified entries into the calling namespace. If anyone could explain this for me, I'd really appreciate it.

> I mention this only to be pedantic, because I agree with your point that
> exec can introduce security issues, and be significantly slower.

Correcting misinformation isn't pedantry, especially when I've learned something :)
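For what it's worth, fromlist only changes which module object __import__ returns for a dotted name; it never injects anything into the caller's namespace. A quick sketch:

```python
import os

# Without fromlist, __import__ on a dotted name returns the TOP-LEVEL package:
top = __import__("os.path")
print(top.__name__)                 # -> os

# With a non-empty fromlist, it returns the submodule itself:
sub = __import__("os.path", fromlist=["join"])
print(sub is os.path)               # -> True

# Either way nothing is bound in the caller's namespace automatically;
# to emulate "from os.path import join" you bind the name yourself:
join = getattr(sub, "join")
print(join("a", "b"))               # 'a/b' on POSIX systems
```

So the docstring's "emulate from name import ..." refers only to resolving the submodule, which is why `system` was still undefined in the session quoted above.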
https://mail.python.org/pipermail/python-list/2009-January/520625.html
Can't add my own method to my Analyzer subclass - Metaclass behaviour?
- Richard O'Regan last edited by Richard O'Regan
Hey guys,
So I'm working on another Analyzer to calculate basic statistics. It works great, just when I move some of the code out of notify_trade() to a new method I defined called 'calculate_statistics'
def calculate_statistics(self):
    # Stats for winning trades..
    self.rets.won.closedTrades = np.size(self._won_pnl_list)
    ...
When I call
self.calculate_statistics()
I get an error
AttributeError: 'BasicTradeStats' object has no attribute 'calculate_statistics'
I've checked obvious things, tried defining other functions e.g.
def hello(self): print('Hello')
in other Analyzer code, like sqn.py etc., and I get the same error when trying to call them.
I don't know about meta classes and assuming there is some restriction deliberately in place? Or am I just missing a trick somewhere?!
Thanks
- backtrader administrators last edited by
Adding methods to a subclass of analyzer is of course possible.
The problem here is that understanding and diagnosing your problem is close to impossible. See:
- Title: "... metaclass behavior" but there is nothing about metaclasses in the message
- There is no indication as to where
def calculate_statistics(self) is actually being declared and where it is being called
- There is no hint as to how the subclass is actually being created
- Richard O'Regan last edited by Richard O'Regan
Hey Backtrader, back to coding after partying all weekend.
I didn't put my whole code up because it was in prototype mode and messy - though in hindsight I didn't give nearly enough information. Apologies for my lack of clarity.
I loaded up everything just now, and it all works fine?? I think must have been something to do with reloading modules and Atom editor Beta I'm using..
I subclassed Analyzer (which uses metaclasses), and I honestly could not use any functions I defined, only 'notify_trade' etc. I figured there must be some restriction with metaclasses (which I have never used before), but I couldn't find anything on Google, so I was confused.
All sorted now - funny how restarting everything can often fix problem. Hopefully will have a new useful Analyzer up later today/tomorrow. Stay tuned
Thank-you
- backtrader administrators last edited by
Although metaclasses can be used to enforce restrictions, it's not the usage pattern in backtrader. They are in place to make things lighter (in the humble opinion of the author) both for the internals and the externals
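As a generic demonstration of that point (with made-up class names, not backtrader's real machinery), a metaclass on the base class does not prevent a subclass from defining and calling its own methods:

```python
# A do-nothing metaclass standing in for whatever machinery a framework uses.
class Meta(type):
    def __new__(mcs, name, bases, dct):
        return super().__new__(mcs, name, bases, dct)

# Stand-in for a framework base class built with that metaclass.
class Analyzer(metaclass=Meta):
    def notify_trade(self, trade):
        pass

class BasicTradeStats(Analyzer):
    def calculate_statistics(self):          # a plain extra method on the subclass
        return "stats"

    def notify_trade(self, trade):
        return self.calculate_statistics()   # calling it works exactly as usual

print(BasicTradeStats().notify_trade(None))  # -> stats
```

An AttributeError like the one reported above therefore points at something environmental (a stale module import, as it turned out here), not at the metaclass.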
- Richard O'Regan last edited by
@backtrader ah ok cool. All noted, thank-you
https://community.backtrader.com/topic/631/can-t-add-my-own-method-to-my-analyzer-subclass-metaclass-behaviour/1
Terence Parr
Last updated: August 30, 2006
This lecture takes you through the basic commands and then shows you how to combine them in simple patterns or idioms to provide sophisticated functionality like histogramming. This lecture assumes you know what a shell is and that you have some basic familiarity with UNIX.
[By the way, this page gets a lot of attention on the net and unfortunately I get mail from lots of people that have better solutions or stuff I should add. I'm only showing what I've learned from watching good UNIX people so I am not saying these tips are the optimal solutions. I'd make a pretty ignorant sys admin.]
The first thing you need to know is that UNIX is based upon the idea of a stream. Everything is a stream, or appears to be. Device drivers look like streams, terminals look like streams, processes communicate via streams, etc... The input and output of a program are streams that you can redirect into a device, a file, or another program.
Here is an example device, the null device, that lets you throw output away. For example, you might want to run a program but ignore the output.
$ ls > /dev/null # ignore output of ls
where "# ignore output of ls" is a comment.
Most of the commands covered in this lecture process stdin and send results to stdout. In this manner, you can incrementally process a data stream by hooking the output of one tool to the input of another via a pipe. For example, the following piped sequence prints the number of files in the current directory modified in August.
$ ls -l | grep Aug | wc -l
Imagine how long it would take you to write the equivalent C or Java program. You can become an extremely productive UNIX programmer if you learn to combine the simple command-line tools. Even when programming on a PC, I use MKS's UNIX shell and command library to make it look like a UNIX box. Worth the cash.
If you need to know about a command, ask for the "man" page. For example, to find out about the ls command, type
$ man ls
LS(1) System General Commands Manual LS(1)
NAME
ls - list directory contents
SYNOPSIS
ls [-ACFLRSTWacdfgiklnoqrstux1] [file ...]
DESCRIPTION
For each operand that names a file of a type other than directory, ls
...
You will get a summary of the command and any arguments.
If you cannot remember the command's name, try using apropos which finds commands and library routines related to that word. For example, to find out how to do checksums, type
$ apropos checksum
cksum(1), sum(1) - display file checksums and block counts
md5(1) - calculate a message-digest fingerprint (checksum) for a file
A shortcut for your home directory, /home/username, is ~username. For example, ~parrt is my home directory, /home/parrt.
When you are using the shell, there is the notion of a current directory. The dot '.' character is a shorthand for the current directory and '..' is a shorthand for the directory above the current one. So to access file test in the current directory, ./test is the same as plain test. If test is in the directory above, use ../test.
/ is the root directory; there is no drive specification in UNIX.
The .bash_profile file is very important, as it is how your shell
session is initialized, including your ever-important CLASSPATH
environment variable. Your bash shell initialization file is
~username/.bash_profile and contains setup code like the following:
PATH=$PATH:$HOME/bin
Typically, you will go in and set your CLASSPATH so that you don't
have to set it all the time.
export CLASSPATH=".:/home/public/cs601/junit.jar"
The export means that the assignment to CLASSPATH is visible to all child processes (that is, visible to all programs you run from the shell).
Changing a directory is done with cd dir where dir can be "." or ".." to move to current directory (do nothing) or go up a directory.
Display files in a directory with ls. The -l option is used to
display details of the files:
total 9592
-rw-r--r-- 1 parrt staff 5600 Aug 19 2005 C-Java-relationship.html
...
drwxr-xr-x 13 parrt staff 442 Oct 19 2005 sessions
-rw-r--r-- 1 parrt staff 2488 Oct 19 2005 sessions.html
...
"staff" is parrt's group.
If you want to see hidden files (those starting with "."), use "-a".
Combinations are possible: use "ls -la" to see details of all files including hidden ones.
There are 4 useful ways to display the contents or portions of a file.
The first is the very commonly used command cat. For example, to
display my list of object-oriented keywords used in this course, type:
$ cat /home/public/cs601/oo.keywords.txt
If a file is really big, you will probably want to use more, which
spits the file out in screen-size chunks.
$ more /var/log/mail.log
If you only want to see the first few lines of a file or the last few
lines use head and tail.
$ head /var/log/mail.log
$ tail /var/log/mail.log
You can specify a number as an argument to get a specific number of lines:
$ head -30 /var/log/mail.log
The most useful incantation of tail prints the last few lines of a
file and then waits, printing new lines as they are appended to the
file. This is great for watching a log file:
$ tail -f /var/log/mail.log
If you need to know how many characters, words, or lines are in a
file, use wc:
$ wc /var/log/mail.log
164 2916 37896 /var/log/mail.log
Where the numbers are, in order, lines, words, then characters. For clarity, you can use wc -l to print just the number of lines.
Instead of cd you can use pushd to save the current dir and then automatically cd to the specified directory. For example,
$ pwd
/Users/parrt
$ pushd /tmp
/tmp ~
$ pwd
/tmp
$ popd
~
$ pwd
/Users/parrt
To watch a dynamic display of the processes on your box in action, use top.
To print out (wide display) all processes running on a box, use ps auxwww.
To change the privileges of a file or directory, use chmod. The privileges are 3-digit octal words with 3 bits per digit: rwxrwxrwx, where the first digit is for the file owner, the 2nd for the group, and the 3rd for anybody. 644 is a common value for a file; it means 110100100, or rw-r--r--. When you do ls -l you will see these bits. 755 is a common value for directories: rwxr-xr-x. A directory needs to be executable for cd to be able to enter it. 755 can also be written as the more readable argument u=rwx,go=rx, where u is user, g is group, and o is other.
Use chmod -R for recursively applying to all the dirs below the argument as well.
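A few concrete invocations, using a scratch file in /tmp (the file name is invented for the demo):

```shell
touch /tmp/perm_demo                 # create a scratch file
chmod 644 /tmp/perm_demo             # rw-r--r--: owner writes, everyone reads
ls -l /tmp/perm_demo
chmod u=rwx,go=rx /tmp/perm_demo     # symbolic form of 755: rwxr-xr-x
ls -l /tmp/perm_demo
```

Running ls -l between the two chmod calls shows the permission bits flipping from rw-r--r-- to rwxr-xr-x.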
One of the most useful tools available on UNIX and the one you may use the most is grep. This tool matches regular expressions (which includes simple words) and prints matching lines to stdout.
The simplest incantation looks for a particular character sequence in a set of files. Here is an example that looks for any reference to System in the java files in the current directory.
grep System *.java
You may find the dot '.' regular expression useful. It matches any
single character but is typically combined with the star, which
matches zero or more of the preceding item. Be careful to enclose the
expression in single quotes so the command-line expansion doesn't
modify the argument. The following example looks for references to
any forum page in a server log file:
$ grep '/forum/.*' /home/public/cs601/unix/access.log
or equivalently:
$ cat /home/public/cs601/unix/access.log | grep '/forum/.*'
The second form is useful when you want to process a collection of files as a single stream as in:
cat /home/public/cs601/unix/access*.log | grep '/forum/.*'
If you need to look for a string at the beginning of a line, use caret '^':
$ grep '^195.77.105.200' /home/public/cs601/unix/access*.log
This finds all lines in all access logs that begin with IP address 195.77.105.200.
If you would like to invert the pattern matching to find lines that do not match a pattern, use -v. Here is an example that finds references to non image GETs in a log file:
$ cat /home/public/cs601/unix/access.log | grep -v '/images'
Now imagine that you have an http log file and you would like to filter out page requests made by nonhuman spiders. If you have a file called spider.IPs, you can find all nonspider page views via:
$ cat /home/public/cs601/unix/access.log | grep -v -f /tmp/spider.IPs
Finally, to ignore the case of the input stream, use -i.
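For instance, with a tiny made-up stream, -i matches regardless of case:

```shell
# matches both "Error" and "error", but not "ok"
printf 'Error: disk full\nerror: retrying\nall ok\n' | grep -i error
```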
Morphing a text stream is a fundamental UNIX operation. Perl is a good tool for this, but since I don't like Perl I stick with three tools: tr, sed, and awk. Perl and these tools are line-by-line tools in that they operate well only on patterns fully contained within a single line. If you need to process more complicated patterns like XML, or you need to parse a programming language, use a context-free grammar tool like ANTLR.
For manipulating whitespace, you will find tr very useful.
If you have columns of data separated by spaces and you would like the columns to collapse so there is a single column of data, tell tr to replace space with newline tr ' ' '\n'. Consider input file /home/public/cs601/unix/names:
jim scott mike
bill randy tom
To get all those names in a column, use
$ cat /home/public/cs601/unix/names | tr ' ' '\n'
If you would like to collapse all sequences of spaces into one single space, use tr -s ' '.
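For example, on a made-up line with uneven spacing:

```shell
# squeeze each run of spaces down to a single space
printf 'jim    scott  mike\n' | tr -s ' '
```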
To convert a PC file to UNIX, you have to get rid of the '\r' characters. Use tr -d '\r'.
If dropping or translating single characters is not enough, you can
use sed (stream editor) to replace or delete text chunks matched by
regular expressions. For example, to delete all references to word
scott in the names file from above, use
$ cat /home/public/cs601/unix/names | sed 's/scott//'
which replaces scott with nothing, deleting it. If there are multiple references to scott on a single line, use the g suffix to indicate "global" on that line, otherwise only the first occurrence will be removed:
$ ... | sed 's/scott//g'
If you would like to replace references to view.jsp with index.jsp, use
$ ... | sed 's/view.jsp/index.jsp/'
If you want any .asp file converted to .jsp, you must match the file name with a regular expression and refer to it via \1 (note the escaped dot so it matches a literal '.'):
$ ... | sed 's/\(.*\)\.asp/\1.jsp/'
The \(...\) grouping collects text that you can refer to with \1.
If you want to kill everything from the ',' character to end of line, use the end-of-line marker $:
$ ... | sed 's/,.*$//' # kill from comma to end of line
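Here are the same two rules as self-contained sketches on made-up input, so you can see the effect directly:

```shell
# group the base name with \(...\), then refer to it as \1
echo 'GET /pages/view.asp' | sed 's/\(.*\)\.asp/\1.jsp/'   # -> GET /pages/view.jsp
# chop everything from the first comma to end of line
echo 'Chen, Wei' | sed 's/,.*$//'                           # -> Chen
```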
When you need to work with columns of data or execute a little bit of code for each line matching a pattern, use awk. awk programs are pattern-action pairs. While some awk programs are complicated enough to require a separate file containing the program, you can do some amazing things using an argument on the command-line.
awk thinks input lines are broken up into fields (i.e., columns) separated by whitespace. Fields are referenced in an action via $1, $2, ..., while $0 refers to the entire input line.
A pattern-action pair looks like:
pattern {action}
If you omit the pattern, the action is executed for each input line. Omitting the action means print the line. You can separate the pairs by newline or semicolon.
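For instance, on a tiny made-up stream: a pattern with no action prints matching lines whole, while a pattern plus an action runs the action only on matching lines:

```shell
# pattern only: print whole lines whose first field exceeds 2
printf '3 parrt\n2 jcoker\n8 tombu\n' | awk '$1 > 2'
# pattern plus action: print just the name on those lines
printf '3 parrt\n2 jcoker\n8 tombu\n' | awk '$1 > 2 {print $2;}'
```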
Consider input
aasghar Asghar, Ali
wchen Chen, Wei
zchen Chen, Zhen-Jian
If you want a list of login names, ask awk to print the first column:
$ cat /home/public/cs601/unix/emails.txt | awk '{print $1;}'
If you want to convert the login names to email addresses, use the printf C-lookalike function:
$ cat /home/public/cs601/unix/emails.txt | awk '{printf("%s@cs.usfca.edu,",$1);}'
Because of the missing \n in the printf string, you'll see the output all on one line ready for pasting into a mail program:
aasghar@cs.usfca.edu,wchen@cs.usfca.edu,zchen@cs.usfca.edu
You might also want to reorder columns of data. To print firstname, lastname, you might try:
$ cat /home/public/cs601/unix/emails.txt | awk '{printf("%s %s\n", $3, $2);}'
but you'll notice that the comma is still there as it is part of the column:
Ali Asghar,
Wei Chen,
Zhen-Jian Chen,
You need to pipe the output thru tr (or sed) to strip the comma:
$ cat /home/public/cs601/unix/emails.txt | \
awk '{printf("%s %s\n", $3, $2);}' | \
tr -d ','
Then you will see:
Ali Asghar
Wei Chen
Zhen-Jian Chen
You can also use awk to examine the value of content. To sum up the first column of the following data (in file /home/public/cs601/unix/coffee):
3 parrt
2 jcoker
8 tombu
use the following simple command:
$ awk '{n+=$1;} ; END {print n;}' < /home/public/cs601/unix/coffee
where END is a special pattern that means "after processing the stream."
If you want to filter or sum all values less than or equal to, say 3, use an if statement:
$ awk '{if ($1<=3) n+=$1;} END {print n;}' < /home/public/cs601/unix/coffee
In this case, you will see output 5 (3+2).
Using awk to grab a particular column is very common when processing log files. Consider a page view log file, /home/public/cs601/unix/pageview-20021022.log, whose lines are of the form:
date-stamp(thread-name): userID-or-IPaddr URL site-section
So, the data looks like this:
20021022_00.00.04(tcpConnection-80-3019): 203.6.152.30 /faq/subtopic.jsp?topicID=472&page=2 FAQs
20021022_00.00.07(tcpConnection-80-2981): 995134 /index.jsp Home
20021022_00.00.08(tcpConnection-80-2901): 66.67.34.44 /faq/subtopic.jsp?topicID=364 FAQs
20021022_00.00.12(tcpConnection-80-3003): 217.65.96.13 /faq/view.jsp?EID=736437 FAQs
20021022_00.00.13(tcpConnection-80-3019): 203.124.210.98 /faq/topicindex.jsp?topic=JSP FAQs/JSP
20021022_00.00.15(tcpConnection-80-2988): 202.56.231.154 /faq/index.jsp FAQs
20021022_00.00.19(tcpConnection-80-2976): 66.67.34.44 /faq/view.jsp?EID=225150 FAQs
20021022_00.00.21(tcpConnection-80-2974): 143.89.192.5 /forums/most_active.jsp?topic=EJB Forums/EJB
20021022_00.00.21(tcpConnection-80-2996): 193.108.239.34 /guru/edit_account.jsp Guru
20021022_00.00.21(tcpConnection-80-2996): 193.108.239.34 /misc/login.jsp Misc
...
When a user is logged in, the log file has their user ID rather than their IP address.
Here is how you get a list of URLs that people view on say October 22, 2002:
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log
/faq/subtopic.jsp?topicID=472&page=2
/index.jsp
/faq/subtopic.jsp?topicID=364
/faq/view.jsp?EID=736437
/faq/topicindex.jsp?topic=JSP
/faq/index.jsp
/faq/view.jsp?EID=225150
/forums/most_active.jsp?topic=EJB
/guru/edit_account.jsp
/misc/login.jsp
...
If you want to count how many page views there were that day that were not processing pages (my processing pages are all of the form process_xxx), pipe the results through grep and wc:
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log | \
grep -v process | \
wc -l
67850
If you want a unique list of URLs, you can sort the output and then use uniq:
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log | \
sort | \
uniq
uniq just collapses all repeated lines into a single line--that is why you must sort the output first. You'll get output like:
/article/index.jsp
/article/index.jsp?page=1
/article/index.jsp?page=10
/article/index.jsp?page=2
...
Note: the name tarball comes from a similar word, hairball (stuff that cats throw up), I'm pretty sure.
To collect a bunch of files and directories together, use tar. For example, to tar up your entire home directory and put the tarball into /tmp, do this
$ cd ~parrt
$ cd .. # go one dir above dir you want to tar
$ tar cvf /tmp/parrt.backup.tar parrt
By convention, use .tar as the extension. To untar this file use
$ cd /tmp
$ tar xvf parrt.backup.tar
tar untars things in the current directory!
After running the untar, you will find a new directory, /tmp/parrt, that is a copy of your home directory. Note that the way you tar things up dictates the directory structure when untarred. The fact that I mentioned parrt in the tar creation means that I'll have that dir when untarred. In contrast, the following will also make a copy of my home directory, but without having a parrt root dir:
$ cd ~parrt
$ tar cvf /tmp/parrt.backup.tar *
It is a good idea to tar things up with a root directory so that when you untar you don't generate a million files in the current directory. To see what's in a tarball, use
$ tar tvf /tmp/parrt.backup.tar
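Here is a throwaway end-to-end sketch in /tmp (all names made up) showing that mentioning the directory at creation time puts it at the root of the tarball:

```shell
# build a tiny tree, tar it up with a root dir, then list the contents
mkdir -p /tmp/tardemo/sub
echo hi > /tmp/tardemo/sub/file.txt
cd /tmp/tardemo
tar cf /tmp/demo.tar sub
tar tf /tmp/demo.tar    # lists sub/ and sub/file.txt
```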
Most of the time you can save space by using the z argument. The tarball will then be gzip'd and you should use file extension .tar.gz:
$ cd ~parrt
$ cd .. # go one dir above dir you want to tar
$ tar cvfz /tmp/parrt.backup.tar.gz parrt
Unzipping requires the z argument also:
$ cd /tmp
$ tar xvfz parrt.backup.tar.gz
If you have a big file to compress, use gzip:
$ gzip bigfile
After execution, your file will have been renamed bigfile.gz. To uncompress, use
$ gzip -d bigfile.gz
To display a text file that is currently gzip'd, use zcat:
$ zcat bigfile.gz
When you need to have a directory on one machine mirrored on another machine, use rsync. It compares all the files in a directory subtree and copies over any that have changed to the mirrored directory on the other machine. For example, here is how you could "pull" all logs files from livebox.jguru.com to the box from which you execute the rsync command:
$ hostname
jazz.jguru.com
$ rsync -rabz -e ssh -v 'parrt@livebox.jguru.com:/var/log/jguru/*' \
/backup/web/logs
rsync will delete or truncate files to ensure the files stay the same. This is bad if you erase a file by mistake--it will wipe out your backup file. Add an argument called --suffix to tell rsync to make a copy of any existing file before it overwrites it:
$ hostname
jazz.jguru.com
$ rsync -rabz -e ssh -v --suffix .rsync_`date '+%Y%m%d'` \
'parrt@livebox.jguru.com:/var/log/jguru/*' /backup/web/logs
where `date '+%Y%m%d'` (in reverse single quotes) means "execute this date command".
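You can see the substitution by itself with a quick sketch:

```shell
# the back-ticked command runs first and its output is pasted into the argument
suffix=".rsync_`date '+%Y%m%d'`"
echo "$suffix"    # something like .rsync_20021022, depending on today's date
```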
To exclude certain patterns from the sync, use --exclude:
$ rsync -rabz --exclude=entitymanager/ --suffix .rsync_`date '+%Y%m%d'` \
-e ssh -v 'parrt@livebox.jguru.com:/var/log/jguru/*' /backup/web/logs
To copy a file or directory manually, use scp:
$ scp lecture.html parrt@nexus.cs.usfca.edu:~parrt/lectures
Just like cp, use -r to copy a directory recursively.
Most GUIs for Linux or PCs have a search facility, but from the command-line you can use find. To find all files named .p4 starting in directory ~/antlr/depot/projects, use:
$ find ~/antlr/depot/projects -name '.p4'
The default "action" is to -print.
You can specify a regular expression to match. For example, to look under your home directory for any xml files, use:
$ find ~ -name '*.xml' -print
Note the use of the single quotes to prevent command-line expansion--you want the '*' to go to the find command.
You can execute a command for every file or directory found that matches a name. For example, to delete all xml files, do this:
$ find ~ -name '*.xml' -exec rm {} \;
where "{}" stands for "current file that matches". The end of the command must be terminated with ';' but because of the command-line expansion, you'll need to escape the ';'.
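Here is a safe, self-contained sketch using throwaway files in /tmp rather than your home directory:

```shell
# make some scratch files, then delete only the .xml ones
mkdir -p /tmp/finddemo
touch /tmp/finddemo/a.xml /tmp/finddemo/b.txt
find /tmp/finddemo -name '*.xml' -exec rm {} \;
ls /tmp/finddemo    # only b.txt survives
```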
You can also specify time information in your query. Here is a shell script that uses find to delete all files older than 14 days.
#!/bin/sh
BACKUP_DIR=/var/data/backup
# number of days to keep backups
AGE=14 # days
AGE_MINS=$(( AGE * 60 * 24 ))
# delete dirs/files
find $BACKUP_DIR/* -cmin +$AGE_MINS -type d -exec rm -rf {} \;
Use find in back ticks as an argument:
vi `find . -name '*.java'` # open all java files below current dir
If you want to know who is using a port such as HTTP (80), use fuser. You must be root to use this:
$ sudo /sbin/fuser -n tcp 80
80/tcp: 13476 13477 13478 13479 13480
13481 13482 13483 13484 13486 13487 13489 13490 13491
13492 13493 13495 13496 13497 13498 13499 13500 13501 13608
The output indicates the list of processes associated with that port.
Sometimes you want to use a command but it's not in your PATH and you can't remember where it is. Use whereis to look in standard unix locations for the command.
$ whereis fuser
fuser: /sbin/fuser /usr/man/man1/fuser.1 /usr/man/man1/fuser.1.gz
$ whereis ls
ls: /bin/ls /usr/man/man1/ls.1 /usr/man/man1/ls.1.gz
whereis also shows man pages.
Sometimes you might be executing the wrong version of a command and you want to know which version of the command your PATH indicates should be run. Use which to ask:
$ which ls
alias ls='ls --color=tty'
/bin/ls
$ which java
/usr/local/java/bin/java
If nothing is found in your path, you'll see:
$ which fuser
/usr/bin/which: no fuser in (/usr/local/bin:/usr/local/java/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/X11R6/bin:/home/parrt/bin)
To send a signal to a process, use kill. Typically you'll want to just say kill pid where pid can be found from ps or top (see below).
Use kill -9 pid when you can't get the process to die; this means kill it with "extreme prejudice".
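As a harmless sketch, you can start a throwaway background process and kill it; a process killed by SIGTERM exits with status 128+15:

```shell
# launch a dummy process, send it the default TERM signal, and check its exit status
sleep 60 &
pid=$!
kill $pid
wait $pid
echo $?    # 128 + 15 (SIGTERM) = 143
```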
If you are having trouble getting to a site, use traceroute to watch the sequence of hops used to get there; give it the destination hostname or IP address as an argument:
$ /usr/sbin/traceroute <hostname>
1 65.219.20.145 (65.219.20.145) 2.348 ms 1.87 ms 1.814 ms
2 loopback0.gw5.sfo4.alter.net (137.39.11.23) 3.667 ms 3.741 ms 3.695 ms
3 160.atm3-0.xr1.sfo4.alter.net (152.63.51.190) 3.855 ms 3.825 ms 3.993 ms
...
To see your machine's network interfaces and IP address, use ifconfig:
$ /sbin/ifconfig
Under the eth0 interface, you'll see the inet addr:
eth0 Link encap:Ethernet HWaddr 00:10:DC:58:B1:F0
inet addr:138.202.170.4 Bcast:138.202.170.255 Mask:255.255.255.0
...
If you want to kill all java processes running for parrt, you can
either run killall java if you are parrt or generate a "kill"
script via:
$ ps auxwww|grep java|grep parrt|awk '{print "kill -9 ",$2;}' > /tmp/killparrt
$ bash /tmp/killparrt # run resulting script
The /tmp/killparrt file would look something like:
kill -9 1021
kill -9 1023
kill -9 1024
Note: you can also do this common task with:
$ killall java
Please be aware that this behavior is Linux-specific; I'm told that on other UNIXen like Solaris it will kill all processes!
A histogram is a set of count, value pairs indicating how often each value occurs. The basic operation is to sort, count how many identical values occur in a row, and then reverse sort so that the value with the highest count is at the top of the report.
$ ... | sort |uniq -c|sort -r -n
Note that sort sorts on the whole line, but the first column is the most significant, just as the first letter in someone's last name significantly positions their name in a sorted list.
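The -n on the final sort matters: a plain sort compares the counts as text, which puts 10 before 2. A tiny sketch on made-up data:

```shell
printf '10 b\n2 a\n' | sort      # text order: "10 b" comes first ("1" < "2")
printf '10 b\n2 a\n' | sort -n   # numeric order: "2 a" comes first
```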
uniq -c collapses all repeated sequences of values but prints the number of occurrences in front of the value. Recall the previous sorting:
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log | \
sort | \
uniq
/article/index.jsp
/article/index.jsp?page=1
/article/index.jsp?page=10
/article/index.jsp?page=2
...
Now add -c to uniq:
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log | \
sort | \
uniq -c
623 /article/index.jsp
6 /article/index.jsp?page=1
10 /article/index.jsp?page=10
109 /article/index.jsp?page=2
...
Now all you have to do is reverse sort the lines according to the first column numerically.
$ awk '{print $3;}' < /home/public/cs601/unix/pageview-20021022.log | \
sort | \
uniq -c | \
sort -r -n
6170 /index.jsp
2916 /search/results.jsp
1397 /faq/index.jsp
1018 /forums/index.jsp
884 /faq/home.jsp?topic=Tomcat
...
In practice, you might want to get a histogram that has been "despidered" and only has faq related views. You can filter out all page view lines associated with spider IPs and filter in only faq lines:
$ grep -v -f /tmp/spider.IPs /home/public/cs601/unix/pageview-20021022.log | \
awk '{print $3;}'| \
grep '/faq' | \
sort | \
uniq -c | \
sort -r -n
1397 /faq/index.jsp
884 /faq/home.jsp?topic=Tomcat
525 /faq/home.jsp?topic=Struts
501 /faq/home.jsp?topic=JSP
423 /faq/home.jsp?topic=EJB
...
If you want to only see despidered faq pages that were referenced more than 500 times, add an awk command to the end.
$ grep -v -f /tmp/spider.IPs /home/public/cs601/unix/pageview-20021022.log | \
awk '{print $3;}'| \
grep '/faq' | \
sort | \
uniq -c | \
sort -r -n | \
awk '{if ($1>500) print $0;}'
1397 /faq/index.jsp
884 /faq/home.jsp?topic=Tomcat
525 /faq/home.jsp?topic=Struts
501 /faq/home.jsp?topic=JSP
A student asked if I knew of a program that generated class hierarchy
diagrams. I said "no", but then realized we don't need one. Here's
the one-liner to do it:
# pulls out superclass and class as $5 and $3:
# public class A extends B ...
# only works for public classes and usual formatting
cat *.java | grep 'public class' | \
awk 'BEGIN {print "digraph foo {";} {print $5 "->" $3;} END {print "}"}'
It generates DOT format graph files. Try it.
It's amazing. Works for most cases. Output looks like:
digraph foo {
antlr.CharScanner->JavaLexer
antlr.LLkParser->Mantra
->TestLexer
}
I like to automate as much as possible. Sometimes that means writing a program that generates another program or script.
I wanted to get a sequence of SQL commands that would update our database whenever someone's email bounced. Processing the mail file is pretty easy since you can look for the error code followed by the email address. A bounced email looks like:
From MAILER-DAEMON@localhost.localdomain Wed Jan 9 17:32:33 2002
Return-Path: <>
Received: from web.jguru.com (web.jguru.com [64.49.216.133])
by localhost.localdomain (8.9.3/8.9.3) with ESMTP id RAA18767
for <notifications@jguru.com>; Wed, 9 Jan 2002 17:32:32 -0800
Received: from localhost (localhost)
by web.jguru.com (8.11.6/8.11.6) id g0A1W2o02285;
Wed, 9 Jan 2002 17:32:02 -0800
Date: Wed, 9 Jan 2002 17:32:02 -0800
From: Mail Delivery Subsystem <MAILER-DAEMON@web.jguru.com>
Message-Id: <200201100132.g0A1W2o02285@web.jguru.com>
To: <notifications@jguru.com>
MIME-Version: 1.0
Content-Type: multipart/report; report-type=delivery-status;
boundary="g0A1W2o02285.1010626322/web.jguru.com"
Subject: Returned mail: see transcript for details
Auto-Submitted: auto-generated (failure)
This is a MIME-encapsulated message
--g0A1W2o02285.1010626322/web.jguru.com
The original message was received at Wed, 9 Jan 2002 17:32:02 -0800
from localhost [127.0.0.1]
----- The following addresses had permanent fatal errors -----
<pain@intheneck.com>
(reason: 550 Host unknown)
----- Transcript of session follows -----
550 5.1.2 <pain@intheneck.com>... Host unknown (Name server: intheneck.com: host not found)
...
Notice the SMTP 550 error message. Look for that at the start of a line, then kill the angle brackets, remove the ..., and use awk to print out the SQL:
# This script works on one email or a file full of other emails
# since it just looks for the SMTP 550 or 554 results and then
# converts them to SQL commands.
grep -E '^(550|554)' | \
sed 's/[<>]//g' | \
sed 's/\.\.\.//' | \
awk "{printf(\"UPDATE PERSON SET bounce=1 WHERE email='%s';\n\",\$3);}" >> bounces.sql
I have to escape the $3 because it means something to the surrounding bash shell script, and I want awk to see the dollar sign.
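You can see the same quoting rule in a tiny sketch: single quotes shield the dollar sign from the shell, while double quotes require a backslash:

```shell
# both forms hand awk the same program {print $2;}
echo 'a b c' | awk '{print $2;}'     # single quotes: $2 passes through untouched
echo 'a b c' | awk "{print \$2;}"    # double quotes: the shell needs \$2
```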
Here is another generator script: from a type and a name (plus a first-letter-capitalized version of the name), it emits a Java getter and setter.
#!/bin/bash
# From a type and name (plus firstlettercap version),
# generate a Java getter and setter
#
# Example: getter.setter String name Name
#
TYPE=$1
NAME=$2
UPPER_NAME=$3
echo "public $TYPE get$UPPER_NAME() {"
echo "    return $NAME;"
echo "}"
echo
echo "void set$UPPER_NAME($TYPE $NAME) {"
echo "    this.$NAME = $NAME;"
echo "}"
echo
Things to check when monitoring a box for security problems:
Failed logins: /var/log/messages
last, w, uptime
/etc/passwd changed?
fuser for ports
portscans in server report
weird processes hogging CPU?
https://www.cs.usfca.edu/~parrt/course/601/lectures/unix.util.html
MASTER THESIS - FALL 2016

Tifig

Supervisor: Prof. Peter Sommerlad
Author: Toni Suter

March 20, 2017

Abstract

In 2014, Apple introduced a new programming language called Swift which replaced Objective-C as the default programming language to develop applications for Apple's platforms. Since then, the language has evolved a lot and was open-sourced at the end of 2015. There are now official releases for both macOS and Ubuntu and there are efforts from the community to bring the language to other platforms as well. With its Xcode IDE (integrated development environment) Apple focuses mainly on the development of iOS and macOS apps. However, Xcode is not available on platforms other than the Mac and there aren't a lot of alternatives yet. Additionally, many programmers are interested in using Swift for other areas such as web development. The main goal of this project is to create a cross-platform Swift IDE based on Eclipse which contains the basic components required to develop Swift programs. Over the course of the term project a simple Swift IDE called Tifig has been developed. The user can edit source files, build projects and run the resulting executables all from within the IDE. Every time the user changes a source file, the code is re-parsed and syntax errors are reported in the form of markers in the editor. In the subsequent master thesis the parser has been further improved and updated for Swift 3. Additionally, an indexer has been implemented. The semantic knowledge that is obtained by the indexer allowed for the development of the code navigation features Open Type and Jump to Definition. A lot of Swift's core types and operators are not part of the language itself, but are instead declared in the standard library. For this reason, Tifig also indexes the standard library and makes its public symbols available in each project.

Management Summary

Tifig is a simple, cross-platform Swift IDE [App17b] based on the Eclipse platform [ecl17c]. I developed it during my term project and extended it in my subsequent master thesis. This management summary gives an overview over the motivation and the goals for this project. It also describes the results of the project as well as work that could be done to further improve Tifig in the future.

Motivation

Swift is a relatively new programming language that was originally invented by Apple and is now available as an open-source project. Apple also develops Xcode [App17g], which is probably the most well-known IDE with support for Swift. Other than that, there are not a lot of compelling options yet. While Xcode is a very powerful and mature IDE, it has a few drawbacks. It is heavily focused on the development of iOS and macOS apps and is less suited for other areas such as web development. Additionally, it doesn't yet support the refactoring of Swift code and is only available on macOS. Therefore, this is a good time to develop a new, cross-platform Swift IDE. The Eclipse platform is a good foundation to build on, since it is very extendable and there are already a lot of other well-known IDEs such as Eclipse JDT [ecl17b] and Eclipse CDT [ecl17a] that are based on it.

Goals

The goal of the term project was to create an Eclipse-based Swift IDE with the following features:

• Wizards to create new Swift projects & files
• Source code editor with syntax-highlighting support for Swift
• Automatic parsing of the code and reporting of syntax errors
• Building & running of Swift programs from within the IDE

Afterwards, the following goals were set for the subsequent master thesis:

• Improve the parser to make it fully compatible with Swift 3
• Develop an indexer for the Swift programming language
• Integrate Swift's standard library in the indexing process
• Add IDE features that rely on the index (e.g., Open Type, Jump to Definition)
• Add refactoring support

Results

Most of the project goals were achieved and the result is a simple Swift IDE called Tifig. It automatically parses and indexes Swift code as it is entered by the user. In addition to the user's code, Tifig also indexes the Swift standard library and makes its public symbols available in each project. Errors are reported in the form of markers in the editor and programs are built with the help of the Swift Package Manager. Finally, the two features Open Type and Jump to Definition have been implemented in order to make it more convenient to navigate a Swift project. A screenshot of Tifig is shown below:

[Screenshot of Tifig]

Unfortunately, the development of the indexer took longer than expected and there was not enough time to implement refactoring support.

Further work

While Tifig is already a functioning IDE for small projects, there is still a lot of room for improvement. The following list contains a few things that could be added or improved in the future:

• Improve the accuracy of reported syntax / semantic errors
• Improve indexer (e.g., better generics support, support for partial imports, etc.)
• Add more code navigation features (e.g., Open Call Hierarchy)
• Add support for auto-completion in the Swift editor
• Add support for debugging
• Add support for automated refactorings

Contents

1 Task Description - Term Project
  1.1 Motivation
  1.2 Project Goals
  1.3 Expected Results
    1.3.1 Optional features
  1.4 Time management
  1.5 Deliverables
2 Task Description - Master Thesis
  2.1 Motivation
  2.2 Project Goals
  2.3 Expected Results
    2.3.1 Optional Features
  2.4 Time management
  2.5 Deliverables
3 Analysis
  3.1 Introduction to Swift
    3.1.1 Development
    3.1.2 Vision
    3.1.3 Type Safety
    3.1.4 Variables / Properties
    3.1.5 Standard Library Types
    3.1.6 Tuples
    3.1.7 Functions
    3.1.8 Closures
    3.1.9 Optionals
    3.1.10 Extensions
    3.1.11 Operators
    3.1.12 Pattern Matching
    3.1.13 Protocol-Oriented Programming
    3.1.14 Memory Management
    3.1.15 Interoperability with Objective-C and C
  3.2 Overview
    3.2.1 Features
    3.2.2 Components
  3.3 Conclusion
4 Lexer
  4.1 Keywords
  4.2 Identifiers
  4.3 Operators
  4.4 Literals
    4.4.1 Integer Literals
    4.4.2 Floating-Point Literals
    4.4.3 String Literals
    4.4.4 Boolean Literals
    4.4.5 Nil Literal
    4.4.6 Compiler Literals
  4.5 Punctuation
  4.6 Comments
    4.6.1 Single-line Comments
    4.6.2 Multi-line Comments
  4.7 Implementation Status
5 Parser
  5.1 Architecture
    5.1.1 Parser Modules
    5.1.2 Recursive-Descent Parsing
    5.1.3 Speculative Parsing and Backtracking
  5.2 AST
    5.2.1 Requirements
    5.2.2 Structure
    5.2.3 Visiting an AST
  5.3 Error handling
  5.4 Testing
  5.5 Implementation Status
6 Indexer
  6.1 The job of an Indexer
  6.2 Architecture Overview
  6.3 Definition Pass
    6.3.1 Bindings
    6.3.2 Unavailable Declarations
    6.3.3 Conditions
    6.3.4 Extensions
    6.3.5 Implicit Operator Bindings
    6.3.6 Implicit Variable Bindings
    6.3.7 Implicit Closure Parameters
    6.3.8 Imports
    6.3.9 Standard Library
  6.4 Type-Annotation Pass
    6.4.1 Index Types
    6.4.2 Tasks of the Type-Annotation Pass
  6.5 Type-Check Pass
    6.5.1 Type Inference in Swift
    6.5.2 Implementation Approach
  6.6 Constraint-Based Type Checker
    6.6.1 Overview
    6.6.2 Example 1: Literals
    6.6.3 Example 2: Overload Resolution
    6.6.4 Example 3: Binary Expressions
    6.6.5 Example 4: Explicit Member Expression
    6.6.6 Example 5: Implicit Member Expression
    6.6.7 Example 6: Optionals
    6.6.8 Example 7: Initializer Call
    6.6.9 Example 8: Generic Function
    6.6.10 Solver Algorithm
    6.6.11 Ranking Rules
    6.6.12 Contextual Type Constraints
    6.6.13 Pattern Matching
    6.6.14 Conversions
  6.7 Testing
    6.7.1 Single-File Test Cases
    6.7.2 Multi-File Test Cases
    6.7.3 Multi-Module Test Cases
  6.8 Implementation Status
7 User Interface
  7.1 Wizards
    7.1.1 Project Wizard
    7.1.2 File Wizards
  7.2 Project Nature
  7.3 Swift Perspective
  7.4 Editor
    7.4.1 Auto Indenting
    7.4.2 Syntax Highlighting
    7.4.3 Reconciler
  7.5 Type Information Hover
  7.6 Open Type Dialog
  7.7 Builder
  7.8 Launcher
  7.9 Implementation Status
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 78 80 84 89 95 99 102 105 108 111 113 115 117 118 121 121 122 123 124 . . . . . . . . . . . . . . 125 . 125 . 125 . 126 . 127 . 127 . 128 . 128 . 129 . 130 . 134 . 135 . 136 . 137 . 139 8 Conclusion 140 8.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 8.2 Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 8.3 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 Bibliography 142 3 1 Task Description - Term Project This section outlines the goals and the scope of the term project. 1.1 Motivation In 2014 Apple introduced a new programming language called Swift. It is a modern, statically-typed language that is meant to replace Objective-C as the default programming language to develop applications for Apple’s platforms. On December 3rd 2015, Swift was made open source under the Apache 2.0 license [Fou04]. Apple provides the Xcode IDE as the default tool to program in Swift. While Apple and the open source community are already working on porting the language and its standard library to Linux, there seem to be no plans to do the same for Xcode. Thus, it would be great to have a cross-platform alternative to Xcode. The Eclipse platform with its plug-in system seems like a good foundation to build on. 1.2 Project Goals The main goal of this term project is to create a collection of plug-ins for the Eclipse platform that add support for the Swift programming language. 
Since there will not be enough time to develop all the features that are expected from a modern IDE, I want to focus on creating a good foundation, which can later be extended during my upcoming master thesis.

1.3 Expected Results

• Wizards
  The wizards will allow the creation of new Swift projects and files, classes, etc.

• Perspective
  The perspective allows programmers to configure the views they want to see in the workbench while writing Swift code.

• Editor
  The editor will likely be the bulk of the work. A parser will be required to support features like syntax highlighting, code formatting and static analysis / refactorings.

• Builder
  The builder is responsible for the correct invocation of the Swift compiler.

• Launch Configuration
  The launch configuration knows how to launch a Swift application.

1.3.1 Optional Features

The following features are expected in a modern IDE but are considered optional for this term project due to the limited time available:

• Debugging
• Refactorings
• Interoperability with C/C++ (Eclipse CDT)

1.4 Time Management

The project started on February 29th 2016. It will end on July 18th 2016 at 12:00 p.m., which is when the final release has to be submitted in full.

1.5 Deliverables

The following items will be included in the final release of the project:

• 2 printed copies of the documentation
• PDF of poster for presentation
• 2 CDs that contain the code, project resources, documentation
• 1 CD for archive with the documentation and abstract without personal information

2 Task Description - Master Thesis

This section outlines the goals and the scope of the master thesis.

2.1 Motivation

This master thesis is based on my previous term project in which I developed an Eclipse-based Swift IDE called Tifig. Currently, Tifig is still a very basic IDE. It has a parser that checks the syntax and it can build and run programs using the Swift Package Manager.
However, the semantic analysis part is far from complete and there are a lot of opportunities for improvement. In addition, the Swift 3 language definition was still in flux and changed in the last weeks of the term project. Not all of those changes have made it into Tifig's parser yet.

2.2 Project Goals

The goal of this master thesis is to extend the capabilities of Tifig. The main focus will be on improving the indexer and on adding refactoring support. Additionally, the remaining issues of the parser should be resolved in order to support the full Swift 3 feature set.

2.3 Expected Results

Parser Improvements

• Implement Swift 3 grammar changes that happened since the term project
• Improve the error reporting of the parser

Semantic Analysis and Symbol Table Improvements

• Improve and extend the implementation and the tests of the indexer
• Add imported symbols and standard library symbols to the index
• Report semantic errors in the UI
• Add more IDE features that rely on the index (e.g., Open Type, Open Call Hierarchy)

Refactoring Support

• Add the ability for plug-ins to programmatically modify a Swift program's AST and to reflect those changes in the editor and source code.
• Implement some useful refactorings (e.g., extract method, rename symbol)

2.3.1 Optional Features

The following features are expected in a modern IDE but are considered optional for this master thesis due to the limited time available:

Debugging Support

• Set breakpoints
• Step through the statements of a program
• Inspect local variables

Integration with Eclipse CDT / Cevelop

• Add the ability to develop C/C++ code that is called from the Swift language within a single IDE

Add Unit Testing Support

2.4 Time Management

The project started on September 1st 2016. It will end on February 28th 2017 at 12:00 p.m., which is when the final release has to be submitted in full.
2.5 Deliverables

The following items will be included in the final release of the project:

• 2 printed copies of the documentation
• Poster for presentation
• 2 CDs that contain the code, project resources, documentation
• 1 CD for archive with the documentation and abstract without personal information

3 Analysis

Chapters 1 and 2 described the motivation for this project and gave an overview of the project goals. This included a list of the major components that should be implemented as part of this project. This chapter gives a short introduction to Swift and describes the components in more detail.

3.1 Introduction to Swift

This section gives a short introduction to the Swift programming language. It is by no means a comprehensive tutorial but rather a quick overview of the language's key characteristics and goals.

3.1.1 Development

The development of the Swift programming language started internally at Apple in 2010 [Lat10]. For the first few years Chris Lattner was the lead developer of the project [Lat17a]. He also gave the first public demo of Swift on June 2, 2014 at Apple's annual Worldwide Developers Conference (WWDC) [App14]. Since Lattner left Apple in early 2017, Ted Kremenek has taken over his role as lead developer [Lat17b].

Since its inception, Swift has undergone many significant changes and was open-sourced on December 3, 2015 [swi15]. Apple is the project lead, but there are already many contributions from non-Apple contributors and the community is very active [swi17a].

On September 13, 2016, Swift 3.0 was released [Kre16]. It was the first major release since Swift was open-sourced. At the time of writing, Swift 4.0 is being developed, which is expected to be released in the fall of 2017 [Kre17].

3.1.2 Vision

Apple's vision for Swift is to create a safe programming language that is friendly to beginners but also provides modern features that make programming easier, more flexible and more fun.
Apple also states that Swift is "the first industrial-quality systems programming language that is as expressive and enjoyable as a scripting language" and that it is "designed to scale from 'hello, world' to an entire operating system" [tsp17a].

3.1.3 Type Safety

Swift is statically typed, which means that the type of each expression has to be known at compile time. This way, a lot of bugs can be caught and fixed early. Luckily, explicit type annotations are not always necessary, because the compiler can often infer the types automatically.

3.1.4 Variables / Properties

Variables / properties are introduced with a let- or var-declaration. If such a declaration is located within a named type (e.g., a struct or a class) it is considered to be a property of that type. However, a let- or var-declaration can also appear in the global scope or in a local scope in which case it is considered to be a global variable or a local variable, respectively. There are three different kinds of variable / property declarations which are described in the following sections.

Stored Variables / Properties

Listing 3.1 shows a few different examples of declarations of stored variables:

Listing 3.1: Stored Variables

    1  var x = 41                  // x is of type Int
    2  x += 1
    3
    4  let y = "hello"             // y is of type String
    5  y = y.uppercased()          // error: cannot assign to value: 'y' is a 'let' constant
    6
    7  let a1 = 1.5, b1 = true     // a1 is of type Double, b1 is of type Bool
    8
    9  let (a2, b2) = (1.5, true)  // a2 is of type Double, b2 is of type Bool

On line 1, an Int variable called x is declared. This variable is mutable, because we used the var keyword. Thus, the subsequent assignment x += 1 works and x has the value 42 after the assignment. On line 4, a String variable called y is declared. This variable is immutable, because we used the let keyword. Therefore, the assignment y = y.uppercased() results in a compilation error.
On line 7 two immutable variables called a1 and b1 are declared in the same declaration. Finally, on line 9 two immutable variables called a2 and b2 are declared in the same declaration. However, this time there is a tuple pattern on the left hand side of the equals sign that is matched against a tuple expression on the right hand side.

Computed Variables / Properties

Since the values of computed variables / properties usually change during the execution of the program, they always have to be declared with the var keyword. Listing 3.2 shows an example of a read-only, computed variable:

Listing 3.2: Read-only, computed variable

    import Darwin // import required for arc4random()

    var rand: Int {
        return Int(arc4random() % 100)
    }

    print(rand, rand) // prints two random numbers between 0 and 99

The computed variable rand produces a random number between 0 and 99 every time it is evaluated. Listing 3.3 shows an example of a read-write, computed variable:

Listing 3.3: Read-write, computed variable

    import Foundation // import required for sqrt()

    var radius = 5.5

    var area: Double {
        get {
            return radius * radius * Double.pi
        }
        set {
            radius = sqrt(newValue / Double.pi)
        }
    }

When the computed variable area is accessed, its value is automatically computed from the value of the stored variable radius. On the other hand, when a new value is assigned to area, its setter is executed which updates the stored variable radius. Note that the newValue variable is implicitly available in the setters of computed variables.

Observed Variables / Properties

Observed variables / properties always have to be declared with the var keyword, because otherwise their value could not change and it would make no sense to observe them.
Listing 3.4 shows an example of an observed variable:

Listing 3.4: Observed Variable

    var x = 0 {
        willSet {
            print("value will change from \(x) to \(newValue)")
        }
        didSet {
            print("value did change from \(oldValue) to \(x)")
        }
    }

    x = 5 // prints 'value will change from 0 to 5'
          // and 'value did change from 0 to 5'

The willSet-clause is executed shortly before the value is updated. Within that code block the newValue variable is implicitly available to refer to the new value of the variable. Similarly, the didSet-clause is executed shortly after the value is updated and within its code block we can use the implicit oldValue variable to refer to the old value of the variable.

3.1.5 Standard Library Types

Many of the types that seem to be built into the Swift programming language are actually declared in the standard library. This includes the types Int, Double, Bool, String, Array and Dictionary, among others. All of these types are structs with value semantics [Gal16]. Two notable exceptions are tuple types and function types, which are built into Swift.

3.1.6 Tuples

Tuples provide a way to quickly group multiple values without defining a new named type. Tuples have a fixed number of elements and their elements can have different types. An example of this is shown in Listing 3.5:

Listing 3.5: Tuple Example

    let tuple = (8640, "Rapperswil") // tuple is of tuple type (Int, String)
    print(tuple.0)                   // prints '8640'
    print(tuple.1)                   // prints 'Rapperswil'

The example shows that individual elements of a tuple can be accessed by element index. Alternatively, the elements of a tuple can be named. An example of this is shown in Listing 3.6:

Listing 3.6: Tuple with named elements

    let tuple = (zip: 8640, name: "Rapperswil") // tuple is of tuple type (zip: Int, name: String)
    print(tuple.zip)                            // prints '8640'
    print(tuple.name)                           // prints 'Rapperswil'

Note that there is no such thing as a single-element tuple.
Instead, a parenthesized expression that only contains a single element has the same type and value as the element itself.

3.1.7 Functions

In Swift, functions can be declared in many different locations. Apart from declaring methods within named types it is also valid to declare free functions at the file level or to declare a function in the local scope of another function. The declaration syntax always looks the same. Listing 3.7 shows a free function that computes the factorial of a number:

Listing 3.7: Factorial function in Swift

    func factorial(_ n: Int) -> Int {
        guard n > 0 else {
            return 1
        }
        return n * factorial(n - 1)
    }

    print(factorial(5)) // 120

Note that the return type is specified after an arrow (->) which follows the parameter list. Each parameter can have an external name and an internal name. The internal name is used to refer to the parameter from within the function body. If a parameter has an external name, this name has to be specified in an argument label at the call site. In the example above, the parameter doesn't have an external name because it was suppressed with the underscore _.

By default, the external name and the internal name of a parameter are the same. Alternatively, it is also possible to specify two parameter names in which case the first one is the external name and the second one is the internal name. This can lead to code that is more readable, especially if a function has many parameters. Listing 3.8 shows an example:

Listing 3.8: External and internal parameter names

    func greet(person: String, from hometown: String) {
        print("Hello, \(person) from \(hometown)!")
    }

    greet(person: "Tim", from: "Cupertino") // Hello, Tim from Cupertino!

The first parameter only specifies the name person. Therefore, its internal and external names are both person. For the second parameter two different names are specified. The external parameter name is from and the internal parameter name is hometown.
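As noted at the beginning of this section, a function can also be declared in the local scope of another function, using the same declaration syntax. The following sketch is my own illustration (the makeCounter() example is not one of the thesis listings); it shows a nested function that additionally captures a local variable of its enclosing function:

```swift
// Hypothetical example: a function declared in the local scope of
// another function. The nested function 'increment' captures the
// local variable 'count' of its enclosing function.
func makeCounter() -> () -> Int {
    var count = 0
    func increment() -> Int {
        count += 1
        return count
    }
    // Functions are first-class values, so 'increment' can be returned.
    return increment
}

let counter = makeCounter()
print(counter()) // 1
print(counter()) // 2
```

Note that the nested function keeps 'count' alive even after makeCounter() has returned, which is the same capture behaviour closures exhibit.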
Swift functions are first-class values of the language. They can be stored in variables, passed as arguments to functions and returned from functions. This also means that functions have a type. The factorial() function in Listing 3.7 has the type (Int) -> Int and the greet() function in Listing 3.8 has the type (String, String) -> (). Note that a function that doesn't return anything implicitly has the return type (). Listing 3.9 shows an example of how to use the higher-order method map() to transform an array of numbers [Har96]:

Listing 3.9: Passing a function to map()

    func square(_ n: Int) -> Int {
        return n * n
    }

    let numbers = [0, 1, 2, 3, 4, 5]
    let squares = numbers.map(square)
    print(squares) // 0, 1, 4, 9, 16, 25

3.1.8 Closures

Closures are expressions that can be used in the same places as functions. Listing 3.10 shows code that is very similar to the example in Listing 3.9, but this time a closure is passed to the map() method:

Listing 3.10: Passing a closure to map()

    let numbers = [0, 1, 2, 3, 4, 5]
    let squares = numbers.map({ (n: Int) -> Int in return n * n })
    print(squares) // 0, 1, 4, 9, 16, 25

Listing 3.11 shows another version of the same code. This time the closure's parameter type and return type are inferred from the context. Additionally, the expression n * n is now returned implicitly. This is only possible if the body of a closure consists of a single expression:

Listing 3.11: Passing a closure to map()

    let numbers = [0, 1, 2, 3, 4, 5]
    let squares = numbers.map({ n in n * n })
    print(squares) // 0, 1, 4, 9, 16, 25

Finally, to make the code even more concise, we can use the shorthand parameter names $0, $1, $2 and so on to refer to the first, second or third parameter, respectively. Additionally, if a closure is the last argument in a function call, we can use the trailing closure syntax, which allows us to write the closure after the function call's parentheses.
If the closure is the only argument in a function call, the parentheses can be omitted entirely. This is shown in Listing 3.12:

Listing 3.12: Passing a closure to map()

    let numbers = [0, 1, 2, 3, 4, 5]
    let squares = numbers.map { $0 * $0 }
    print(squares) // 0, 1, 4, 9, 16, 25

3.1.9 Optionals

Optionals are a core concept in the Swift programming language. The goal of this feature is to improve the safety of programs by formalizing the notion of optionality so that it can be enforced by the compiler. In languages like Objective-C and Java, you never know whether it is really safe to dereference, say, a parameter, because it might be null. Good programmers insert null checks to avoid exceptions. However, that requires a lot of discipline and is therefore error-prone.

In Swift, a variable can only become nil if its value is wrapped in an optional. For example, a String cannot be nil, but an Optional<String> can. The compiler prohibits access to properties or methods of an optional. Instead, the programmer has to unwrap the underlying value first. Another benefit of optionals is that they also work with basic types such as integers and floating-point numbers, which means that programmers don't have to resort to arbitrary sentinel values such as -1 [tsp17e].

Swift provides different language constructs to unwrap optionals, which are described in the following sections.

Optional Binding

Optional binding is a special kind of condition that can be used in if, guard and while statements. An example of this is shown in Listing 3.13:

Listing 3.13: Optional Binding

    // String? is syntactic sugar for Optional<String>
    func createGreeting(name: String? = nil) -> String {
        // here, the type of 'name' is Optional<String>
        if let name = name {
            // here, the type of 'name' is String
            return "Hi \(name)!"
        }
        return "Hello there!"
    }

    print(createGreeting(name: "Toni")) // Hi Toni!
    print(createGreeting())             // Hello there!
If the optional parameter name contains a value, the condition let name = name creates a new name variable that shadows the parameter and contains the unwrapped value of the optional. Note that the new name variable is only in scope within the then clause of the if statement. On the other hand, if the optional parameter name is nil, the condition is considered to be false and the then clause of the if statement is not executed.

Optional Chaining

Sometimes we might want to access a property or a method of an optional without unwrapping it first. This can be done with optional chaining. An example of this is shown in Listing 3.14:

Listing 3.14: Optional Chaining

    import Foundation // required for sqrt()

    struct Vec2D {
        var x: Double
        var y: Double

        func length() -> Double {
            return sqrt(x * x + y * y)
        }
    }

    var vec: Vec2D? = Vec2D(x: 2.5, y: 4.0)
    let len = vec?.length() // len is of type Optional<Double>

The expression vec?.length() is an example of optional chaining. The type of this expression is Optional<Double>. If vec is nil, the value of the entire expression is nil as well. Otherwise, the value of the expression is the result of calling the method length(), wrapped in an optional.

Nil Coalescing Operator

The nil coalescing operator is a convenient way to provide a default value that can be used when an optional is nil. An example of this is shown in Listing 3.15:

Listing 3.15: Nil Coalescing Operator

    print("Please enter your name:")
    let name = readLine() ?? "user" // name is of type String
    print("Hello \(name)!")

The standard library function readLine() reads a line from stdin. The return type of this function is Optional<String>, because the input might be EOF (end of file) before a line can be read. In such a situation we can use the operator ?? (nil coalescing operator) in order to provide a default value.
In the example above, the variable name is set to the name that was entered or to the default value "user" if readLine() returned nil.

Force Unwrapping

Force unwrapping is another way to unwrap an optional. An example of this is shown in Listing 3.16:

Listing 3.16: Force Unwrapping

    var optInt: Int? = 42
    var x = optInt! // x has type Int and value 42

    optInt = nil
    x = optInt!     // fatal error: unexpectedly found nil while unwrapping an Optional value

If the optional optInt contains a value, the expression optInt! evaluates to the unwrapped value of that optional. Otherwise, the program is aborted with a fatal error. Thus, this feature should only be used if you are absolutely sure that an optional contains a value.

3.1.10 Extensions

Extensions are a way of adding new members (e.g., computed properties, methods, initializers) to an existing named type (i.e., structs, classes, enums and protocols). This feature serves two main purposes. Firstly, it allows programmers to add functionality to types that they don't control (e.g., types that are imported from an external framework or from the standard library). Secondly, it is sometimes useful to divide the declaration of your own types into multiple extensions in order to group the members a certain way or to spread the declaration across multiple files.

Most of Swift's core types such as Int, Double, String and Array are regular struct types that are declared in the standard library. This means that they can be extended as well. An example of this is shown in Listing 3.17:

Listing 3.17: Extensions in Swift

    extension String {
        func isPalindrome() -> Bool {
            let lowercased = self.lowercased()
            return lowercased == String(lowercased.characters.reversed())
        }
    }

    print("Anna".isPalindrome()) // true
    print("John".isPalindrome()) // false

In this example, the function isPalindrome() is added to the standard library type String.
Of course we could provide this functionality without using an extension (e.g., by making isPalindrome() a free function that takes a String), but one could argue that the use of an extension makes the code more uniform, because it lets us treat isPalindrome() like any other method of the type String.

Another important feature of extensions is that they can add protocol conformance to an existing type. For example, let's assume that there is a new protocol called Squarable which requires a single method squared(). If we need to, we can make an existing type conform to this protocol through an extension. An example of this is shown in Listing 3.18:

Listing 3.18: Extensions in Swift

    protocol Squarable {
        func squared() -> Self
    }

    extension Int: Squarable {
        func squared() -> Int {
            return self * self
        }
    }

    func printSquare(_ n: Squarable) {
        print(n.squared())
    }

    printSquare(5) // 25

This is sometimes called retroactive modelling [App15].

3.1.11 Operators

Swift not only supports overloading of existing operators but it also provides the ability to declare entirely new operators. Most operators that are available in Swift by default are declared in the standard library. Note that this works for unary (i.e., prefix or postfix) and binary (i.e., infix) operators but not for ternary operators. There is only one ternary operator, the conditional expression operator a ? b : c that is also common in other languages. This operator cannot be overloaded and we cannot add a new ternary operator.

Overloading an existing operator

To overload an existing operator, we declare a function that has the same name as the operator. This function can either be a free function or a static method on one of the operand types. For example, the standard library protocol Equatable has a single requirement, which says that any type that conforms to this protocol must overload the infix operator ==.
An example of this is shown in Listing 3.19:

Listing 3.19: Overloading the == operator using a free function

    struct Point: Equatable {
        var x: Int
        var y: Int
    }

    func ==(lhs: Point, rhs: Point) -> Bool {
        return lhs.x == rhs.x && lhs.y == rhs.y
    }

    let a = Point(x: 2, y: 2)
    let b = Point(x: 1, y: 3)
    print(a == b) // false
    print(a != b) // true

This example declares a struct type called Point that conforms to Equatable. The == operator is implemented using a free function. The code a == b is translated by the compiler into the function call (==)(a, b). The parentheses around == are required here, because otherwise the parser would parse it as a prefix operator instead of a function name. Also, note that an operator function never has external parameter names, which is why we don't have to specify argument labels.

When a type conforms to Equatable, it automatically inherits a default implementation of the != operator which simply calls == and negates the result. We could add our own implementation of != that overrides the default behaviour (e.g., for performance reasons) but often this is not necessary.

Listing 3.20 shows how to implement the == operator using a static method instead of a free function:

Listing 3.20: Overloading the == operator using a static method

    struct Point: Equatable {
        var x: Int
        var y: Int

        static func ==(lhs: Point, rhs: Point) -> Bool {
            return lhs.x == rhs.x && lhs.y == rhs.y
        }
    }

    let a = Point(x: 2, y: 2)
    let b = Point(x: 1, y: 3)
    print(a == b) // false
    print(a != b) // true

Declaring a new prefix / postfix operator

Sometimes we might want to add an operator that doesn't exist yet.
For example, Listing 3.21 shows how to declare a new prefix operator called ||:

Listing 3.21: Declaring a new prefix operator

     1  import Foundation // required for sqrt()
     2
     3  struct Vec2D {
     4      var x: Double
     5      var y: Double
     6  }
     7
     8  prefix operator ||
     9
    10  prefix func ||(vec: Vec2D) -> Double {
    11      return sqrt(vec.x * vec.x + vec.y * vec.y)
    12  }
    13
    14  let vec = Vec2D(x: 4, y: 3)
    15  print(||vec) // 5.0

Since the prefix operator || does not exist in the standard library, we declare it with an operator declaration on line 8. Additionally, an operator function is implemented on lines 10-12. It defines the actual behaviour of the operator. In this example, the operator is used to determine the length of two-dimensional vectors (i.e., instances of Vec2D). Often there are multiple operator functions provided for the same operator, each of which handles a different kind of operand. For example, we could add additional operator functions to handle three-dimensional vectors and so on.

Note that there are now two operators called ||. One is a prefix operator that is used to compute the length of vectors (this is the operator from the example above) and the other is an infix operator that is used for logical disjunction (this operator is declared in the standard library). Thus, it is possible to have two operators with the same name as long as they have a different notation (i.e., prefix, infix, postfix). This is also the reason why the declarations of prefix and postfix operator functions need to have a corresponding prefix or postfix modifier.

Declaring a new infix operator

The precedence of prefix and postfix operators is defined through Swift's grammar. Postfix operators have a higher precedence than prefix operators, which in turn have a higher precedence than infix operators. However, the precedence and associativity of individual infix operators in relationship to each other cannot be defined by the grammar, because the operators are not yet known at parse time.
Thus, the declaration of a new infix operator can also specify the precedence and associativity of that operator. This is done through so-called precedence groups. Each infix operator belongs to a precedence group and each group can specify its associativity as well as its precedence in relation to other groups. Listing 3.22 shows an excerpt from the standard library which defines the operators && and || along with the corresponding precedence groups and operator functions:

Listing 3.22: Declaring a new infix operator

    precedencegroup LogicalDisjunctionPrecedence {
        associativity: left
        higherThan: TernaryPrecedence // not shown in this listing
    }

    precedencegroup LogicalConjunctionPrecedence {
        associativity: left
        higherThan: LogicalDisjunctionPrecedence
    }

    infix operator && : LogicalConjunctionPrecedence
    infix operator || : LogicalDisjunctionPrecedence

    extension Bool {
        public static func && (lhs: Bool, rhs: @autoclosure () -> Bool) -> Bool {
            return lhs ? rhs() : false
        }

        public static func || (lhs: Bool, rhs: @autoclosure () -> Bool) -> Bool {
            return lhs ? true : rhs()
        }
    }

Both operators are left associative and the && operator has higher precedence than the || operator, because the precedence group LogicalConjunctionPrecedence is higher than the precedence group LogicalDisjunctionPrecedence.

Note that the second operand of the two operator functions doesn't have type Bool. Instead, it has a function type that returns a Bool and is marked with the @autoclosure attribute. This means that the second argument can be any expression of type Bool. However, unlike other arguments, this expression is not evaluated immediately. Instead, it is automatically wrapped in a closure that returns this expression. This allows for the implementation of short-circuiting operators such as the && operator, which doesn't evaluate its second argument if the first argument evaluates to false.
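The same @autoclosure technique is available to user code. As a sketch (the ==> operator and its implication semantics are my own illustration, not part of the standard library), a short-circuiting logical implication operator could be declared like this:

```swift
// Hypothetical example: a custom short-circuiting infix operator '==>'
// (logical implication). It reuses the standard library's
// LogicalDisjunctionPrecedence group and the same @autoclosure
// technique the standard library uses for && and ||.
infix operator ==> : LogicalDisjunctionPrecedence

func ==> (lhs: Bool, rhs: @autoclosure () -> Bool) -> Bool {
    // If lhs is false, the implication holds and rhs() is never evaluated.
    return lhs ? rhs() : true
}

let divisor = 0
// The right-hand side would trap (division by zero) if it were evaluated,
// but short-circuiting skips it because the left-hand side is false.
print(divisor != 0 ==> (10 % divisor == 0)) // true
```

Because ==> sits in LogicalDisjunctionPrecedence, it binds more loosely than the comparison operators, so the expression above parses as (divisor != 0) ==> (10 % divisor == 0).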
Built-in operators

Most operators are declared in the Swift standard library as described in the preceding sections. However, the following list describes a few infix operators that are built into the compiler and cannot be overloaded:

• Assignment Operator
The assignment operator = assigns the expression on the right hand side to the lvalue on the left hand side. The type checker ensures that the type of the expression is convertible to the type of the lvalue. An example of this is shown in Listing 3.23:

Listing 3.23: Assignment Operator
1  let x: Int
2  x = 10

Note that it is possible to declare a variable without an initializer expression. However, the variable doesn't have a default value and it cannot be used until it has been assigned a value.

• Type Cast Operator
There are three different type cast operators: as, as? and as!. The operator as is used for upcasting as shown in Listing 3.24:

Listing 3.24: Type Cast Operator as
1  class Animal {}
2  class Dog: Animal {}
3  class Cat: Animal {}
4
5  let cat = Cat()            // cat is of type Cat
6  let animal = cat as Animal // animal is of type Animal

The operator as? is used for downcasting. Since downcasting may fail at runtime, the result of the as? operator is an optional. Thus, if the expression on the left hand side cannot be downcast to the type on the right hand side, the result is nil. An example of this is shown in Listing 3.25:

Listing 3.25: Type Cast Operator as?
1  class Animal {}
2  class Dog: Animal {}
3  class Cat: Animal {}
4
5  let animal: Animal = Cat() // animal has static type Animal
6  let cat = animal as? Cat   // cat is of type Optional<Cat>; cat != nil
7  let dog = animal as? Dog   // dog is of type Optional<Dog>; dog == nil

The operator as! is used for downcasting as well. However, in contrast to the as? operator, its result is not an optional. Instead, the as! operator aborts the program with a fatal error if downcasting fails at runtime.
An example of this is shown in Listing 3.26:

Listing 3.26: Type Cast Operator as!
1  class Animal {}
2  class Dog: Animal {}
3  class Cat: Animal {}
4
5  let animal: Animal = Cat() // animal has static type Animal
6  let cat = animal as! Cat   // cat is of type Cat
7  let dog = animal as! Dog   // results in fatal error

• Type Check Operator
The type check operator is can be used to determine whether a value is an instance of a certain subclass. An example of this is shown in Listing 3.27:

Listing 3.27: Type Check Operator
 1  class Animal {}
 2  class Dog: Animal {}
 3  class Cat: Animal {}
 4
 5  func describeAnimal(_ animal: Animal) {
 6      if animal is Dog {
 7          print("This is a dog.")
 8      } else if animal is Cat {
 9          print("This is a cat.")
10      }
11  }

In the above example, the expression animal is Dog evaluates to true, if the parameter animal holds a reference to an instance of the subclass Dog.

• Conditional Expression Operator
The conditional expression operator ?: is the only ternary operator in Swift. It takes a boolean condition as well as one expression for the then clause and one expression for the else clause. If the condition evaluates to true, the result of the overall expression is the expression of the then clause. Otherwise, the result of the overall expression is the expression of the else clause. Thus, the two expressions need to have the same type or they need to be convertible to a common supertype. An example of this is shown in Listing 3.28:

Listing 3.28: Conditional Expression Operator
1  var x = 42
2  print(x >= 0 ? "positive" : "negative")

3.1.12 Pattern Matching

In Swift, pattern matching can be used in switch, if, guard, while and for statements. There are different kinds of patterns that can be nested within each other to represent the structure of a single value or a composite value. The following sections describe the most common kinds of patterns:

Expression Pattern

An expression pattern consists of a single expression.
An example is shown in Listing 3.29:

Listing 3.29: Expression Pattern Example 1
1  let x = 0
2  switch x {
3  case 42:
4      print("x is 42")
5  default:
6      print("x has some other value")
7  }

Here the pattern 42 in the switch case is an expression pattern that contains a nested integer literal expression. A value matches an expression pattern, if pattern ~= value evaluates to true. The pattern matching operator ~= is a regular infix operator that can be overloaded like any other operator. In the example above, the expression 42 ~= x is valid, because there is a generic overload of the ~= operator in the standard library that works with all Equatable types and simply returns pattern == value. Listing 3.30 shows a more interesting expression pattern:

Listing 3.30: Expression Pattern Example 2
1  let x = 25
2  if case 0...50 = x {
3      print("x is in the range 0...50")
4  }

In this example, case 0...50 = x is a so-called case condition. It matches the value x against the expression pattern 0...50. In the expression 0...50 the ... operator is a regular infix operator that creates a closed range from 0 to 50. The expression 0...50 ~= x is valid, because there is an overload of the ~= operator that takes a range and an element of a range, and returns true, if the element lies within the range.

Wildcard Pattern

A wildcard pattern consists of a single _ and matches any value. An example is shown in Listing 3.31:

Listing 3.31: Wildcard Pattern Example
1  let x = "test"
2  switch x {
3  case _:
4      break
5  }

Since the pattern _ in the example above always matches x, the switch statement doesn't need a default case.

Tuple Pattern

A tuple pattern consists of multiple tuple pattern elements. It matches a tuple, if the tuple has the same number of elements and each tuple element matches the corresponding tuple pattern element.
An example of this is shown in Listing 3.32:

Listing 3.32: Tuple Pattern Example
 1  let point = (5, 11)
 2  switch point {
 3  case (0...10, 0...10):
 4      print("x and y are in range 0...10")
 5  case (0...10, _):
 6      print("x is in range 0...10")
 7  case (_, 0...10):
 8      print("y is in range 0...10")
 9  default:
10      print("no element is in range 0...10")
11  }

In this example, different tuple patterns are used to check whether both, either or none of the elements of the tuple point lie in the range 0...10. For the tuple (5, 11) the output is "x is in range 0...10". Note that the tuple patterns in this example contain nested expression patterns and wildcard patterns.

Value-Binding Pattern

A value-binding pattern consists of the keyword let or var followed by a nested pattern. Any identifier that occurs within the nested pattern of a value-binding pattern is considered to be an identifier pattern. If pattern matching succeeds, a new local variable is created for each identifier pattern and initialized with the corresponding sub value of the matched value. An example of this is shown in Listing 3.33:

Listing 3.33: Value-Binding Pattern Example
1  let point = (1, 1)
2  switch point {
3  case let (x, y) where x == y:
4      print("\(x) == \(y)")
5  case let (x, y):
6      print("\(x) != \(y)")
7  }

In the first switch case, the pattern let (x, y) binds the two elements of the tuple point to the new variables x and y. The additional where clause ensures that the pattern only matches if x and y are equal. The second switch case matches all remaining cases.

Optional Pattern

An optional pattern matches an optional if it is not equal to nil and if the wrapped value matches the nested pattern of the optional pattern. An example of this is shown in Listing 3.34:

Listing 3.34: Optional Pattern Example
1  let numbers = [1, 2, nil, 4, nil, nil, 7] // numbers is of type Array<Optional<Int>>
2  for case let number? in numbers {
3      print(number) // number is of type Int
4  }

In this example, pattern matching is used to loop over an array of optionals while ignoring the elements that are nil. The pattern let number? first checks whether the current element is not nil. If that is the case, it unwraps the optional and uses the wrapped value to initialize a new variable called number. Thus, the for loop prints the numbers 1, 2, 4 and 7.

Enum Case Pattern

An enum case pattern can be used to match an instance of an enum type. If the corresponding enum case has associated values, nested patterns can be used to match these values as well. An example of this is shown in Listing 3.35:

Listing 3.35: Enum Case Pattern Example
 1  enum Result<T> {
 2      case success(T)
 3      case error
 4  }
 5
 6  func handleResult<T>(_ result: Result<T>) {
 7      switch result {
 8      case .success(let value):
 9          print("Result is \(value)")
10      case .error:
11          print("Error")
12      }
13  }

The pattern .success(let value) matches the enum case Result.success and binds the associated value to the new variable value. Note that we can write .success instead of Result.success, because the enum type is known from the context (i.e., from the control expression of the switch statement).

3.1.13 Protocol-Oriented Programming

Protocol-Oriented Programming is a programming technique that facilitates the writing of reusable code without relying on inheritance. The term was first introduced by Apple's Dave Abrahams in a WWDC session in 2015 [App15]. It uses a feature called protocol extensions and is used heavily in the Swift standard library.
To explain how it works, Listing 3.36 shows an example:

Listing 3.36: Protocol-Oriented Programming in Swift
 1  extension Sequence where Iterator.Element: Equatable {
 2      func countOccurrencesOfElement(_ element: Iterator.Element) -> Int {
 3          var count = 0
 4          for x in self {
 5              if x == element {
 6                  count += 1
 7              }
 8          }
 9          return count
10      }
11  }
12
13  let array = [1, 5, 2, 2, 8, 1, 2, 4, 2]
14  print(type(of: array), array.countOccurrencesOfElement(2))   // prints 'Array<Int> 4'
15
16  let set: Set<Int> = [1, 5, 2, 2, 8, 1, 2, 4, 2]
17  print(type(of: set), set.countOccurrencesOfElement(2))       // prints 'Set<Int> 1'
18
19  let chars = "hello, world!".characters
20  print(type(of: chars), chars.countOccurrencesOfElement("l")) // prints 'CharacterView 3'

In the example, we extend the Sequence protocol by adding a new method called countOccurrencesOfElement(). To do its job, the method needs to be able to check whether two elements of a sequence are equal. That is why there is a where-clause constraint on the protocol extension which makes sure that the extension only applies if the elements of the sequence conform to the Equatable protocol. Further down, the example shows that we can now use the method on various sequence types such as Array, Set and the CharacterView of a String.

3.1.14 Memory Management

Swift uses automatic reference counting (ARC) to manage memory. ARC keeps track of the number of references that refer to an object. As soon as this reference count goes to 0, the object is destroyed and the corresponding memory deallocated. This means that there are no garbage collector pauses and memory is deallocated at a deterministic point in time. However, it also means that programmers occasionally need to be careful about memory management and e.g., resolve a strong reference cycle by introducing a weak reference [tsp17b].
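As a minimal sketch of such a cycle (the Person and Apartment types and the deallocationLog array are hypothetical, introduced only for illustration), declaring the back reference as weak allows both objects to be deallocated:

```swift
var deallocationLog: [String] = []

class Person {
    var apartment: Apartment?
    deinit { deallocationLog.append("Person") }
}

class Apartment {
    // A weak reference does not increase the reference count,
    // so it breaks the Person <-> Apartment cycle.
    weak var tenant: Person?
    deinit { deallocationLog.append("Apartment") }
}

var person: Person? = Person()
var apartment: Apartment? = Apartment()
person?.apartment = apartment
apartment?.tenant = person

// Releasing the last strong references deallocates both objects at a
// deterministic point. With a strong `tenant` property, the two
// objects would keep each other alive and neither deinit would run.
person = nil
apartment = nil
print(deallocationLog) // ["Person", "Apartment"]
```

If `tenant` were declared as a regular (strong) property, the final print would show an empty array, because the reference counts of the two objects would never reach 0.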
3.1.15 Interoperability with Objective-C and C

For almost 20 years, Objective-C has been the main programming language used to develop software for Apple's platforms. It is a superset of the C programming language and can therefore directly call C functions. Since most of Apple's frameworks are written either in Objective-C or in C, the transition to Swift cannot be done at the flick of a switch. Thus, it is important that they are mostly interoperable in order to make it possible to translate large code bases one file at a time [App17f]. Now that Swift is becoming more and more mature, Apple has begun to make the frameworks "swiftier". For example, most of the types in the Foundation framework are now available in Swift as value types [App16].

3.2 Overview

This section describes the features that are expected from a modern IDE and outlines the high-level components which are required to implement these features.

3.2.1 Features

The following list describes features that are commonly expected from a modern IDE. If implemented well, they can have a tremendous impact on programmer productivity.

• Code Presentation
The way code is presented to the programmer can matter a lot. For example, a text editor with a monospace font and support for syntax highlighting is usually much more convenient for reading and writing code than a general word processor application. Similarly, many IDEs provide some sort of outline view which lists all the types and functions defined in a file in order to give a quick overview of the file content. Another feature that may be worth considering is code folding, which allows the programmer to temporarily hide sections of the code.

• Editing Assistance
During editing it can be very useful to get some assistance from the IDE. For example, it may automatically reindent code according to configurable formatting rules, insert closing parentheses or display a list of auto-complete suggestions.
• Error Reporting
Many IDEs parse the compiler output and display any errors or warnings as problem markers directly in the source code editor. This makes it easier for programmers to quickly grasp the location and the cause of the problem. Additionally, IDEs may perform some syntactic and semantic analysis already during editing. This can be very important, especially for large projects where compilation may take a while.

• Code Navigation
Large software projects may consist of hundreds of source files. Manually navigating through such code bases can be very tedious. Thus, most IDEs have some code navigation features such as "Jump to Definition" or "Open Call Hierarchy".

• Code Rewriting
Another useful feature is the ability to make automatic changes to the code. This may come in the form of a list of predefined refactorings (e.g., Extract Method, Inline Temp, etc.) [Mar99], but it is also useful to provide quick-fixes for small problems that may occur during editing. For example, the IDE could offer to transform a local variable into a constant if it is not modified anywhere.

• Program Compilation
A major difference between a normal text editor and an IDE is that IDEs are capable of building, executing and debugging programs. The programmer also usually expects the option to configure the commands that are used to build a program in order to customize the build process. The output of the builder should be displayed to the user so that it is possible to diagnose build problems.

• Program Execution
After a project has been compiled, the user usually wants to launch the executable from within the IDE. The IDE should display the standard output that is printed by the process and the user should be able to provide input, if the process asks for it. Additionally, there should be some options to configure how the program is launched (e.g., setting the program arguments / environment variables, etc.).
• Debugging
The IDE should have the ability to launch the executable with a debugger and allow the user to set breakpoints, step through the statements of the program and inspect the values of the variables that are currently in scope.

3.2.2 Components

Figure 3.1 shows a high-level overview of the most important components that are required to implement the features listed in subsection 3.2.1. In the following subsections, each component is described in more detail. Note that this is a simplified view of the whole system and that each component itself consists of various subcomponents.

Figure 3.1: High-level overview

Editor

The editor is the main interface through which the user interacts with the IDE. It is responsible for displaying the code in a syntax-highlighted form and it needs various other features such as marker annotations (for error reporting) and hyperlinking (for code navigation).

Reconciler

Whenever the user edits the source code, the reconciler is notified about those changes. It waits until there is a short pause in the editing process (e.g., 500ms), in which case it starts the reconciliation in a background thread. During reconciliation, the internal model of the source code (abstract syntax trees and index) is updated and any errors or warnings are displayed in the editor.

Lexer

The lexer is responsible for turning the source code (a stream of characters) into a stream of tokens. It doesn't do any syntactic or semantic analysis but merely groups characters that belong together according to the language's lexical structure.

Parser

The parser takes the stream of tokens emitted by the lexer and builds an abstract syntax tree (AST) according to the language's grammar rules. In the process, it shows problem markers in the editor for any syntax errors that it finds during parsing.

Indexer

The indexer is responsible for the semantic analysis of the code.
It traverses the ASTs of the source files in the project and builds an index. The index is a symbol table and can be used to implement features such as auto-completion, "Jump to Definition" and "Open Call Hierarchy".

Builder

The user can trigger the builder to compile the source code into an executable or a library. Most IDEs use an external compiler to build the project. In that case the builder is responsible for the correct invocation of the compiler and for presenting the build output to the user.

Launcher / Debugger

The executable can be launched after the project has been built successfully. The user can also customize the run configuration to set things like the program arguments or the environment variables. When the user starts the executable in the debugging mode, the launcher launches the executable with a debugger (e.g., LLDB) and lets the user control it during the debugging session.

3.3 Conclusion

This chapter has identified the components that are required to implement a modern IDE. The chapters that follow describe how some of these components have been implemented during the project.

4 Lexer

A lexer turns a stream of characters into a stream of tokens. This makes it easier for the parser to check the syntax and to build an AST (abstract syntax tree). Each token stores its kind, text, offset and length. Figure 4.1 illustrates this process:

Figure 4.1: Lexing Process

The token kinds of the Swift programming language are defined in its lexical structure [tsp17d]. The following subsections give an overview of the different token kinds.

4.1 Keywords

Keywords are special words that are reserved by the programming language and cannot be used as identifiers. Swift 3 has 57 keywords such as if, let, var and struct. These are reserved in almost all contexts and therefore cannot be used to name a program entity (e.g., a variable or a class). There is one notable exception.
All keywords except inout, var and let can be used as parameter names in a function declaration. This allows for more natural function calls, because external parameter names such as in and for are valid. An example of this is shown in Listing 4.1:

Listing 4.1: Using keyword in as parameter name
 1  func find<T: Equatable>(element: T, in xs: [T]) -> Array<T>.Index? {
 2      for (i, x) in xs.enumerated() {
 3          if x == element {
 4              return i
 5          }
 6      }
 7      return nil
 8  }
 9
10  if let i = find(element: 5, in: [1, 2, 3, 4, 5, 6, 7, 8]) {
11      print("Found number 5 at index \(i)")
12  }

Additionally, there are 26 keywords (e.g., willSet, didSet, etc.) that are only reserved in particular contexts. These are treated as identifiers by the lexer because it doesn't have enough context to decide whether it should be a keyword instead. It is the parser's job to make this distinction later.

Finally, there are 15 keywords that start with a number sign (#) such as #line, #function, #if, etc. Note that these are not preprocessor macros, because Swift doesn't have a preprocessor. Instead, they are special compiler directives to do conditional compilation or to log things such as the current file, line or function for debugging purposes.

4.2 Identifiers

Identifiers are used to give names to the entities of a program. This includes variables, properties and functions as well as custom types such as classes, structs, enums and protocols. In many programming languages the characters in an identifier are limited to characters from the English alphabet, the digits 0-9 and the underscore (_). In comparison, identifiers in Swift can contain most characters from Unicode's Basic Multilingual Plane and even some characters from the supplementary planes (e.g., emoji) [uni17].
Listing 4.2 shows an example of valid Swift code that uses the Greek letters α and π as identifiers:

Listing 4.2: Identifiers in Swift
1  let sectorArea = 9.8125
2  let radius = 5.0
3  let π = 3.14
4  let α = sectorArea * 360.0 / (π * radius * radius)
5  print("α = \(α)")

Words that are keywords but also match the rules of an identifier are treated as keywords by default. However, in Swift it is possible to use any keyword as an identifier by wrapping it in backticks as shown in Listing 4.3:

Listing 4.3: Keyword as identifier
1  let `protocol` = "https"
2  print(`protocol`)

4.3 Operators

In Swift, most of the basic types and operators are not actually part of the language but are predefined in the standard library. For example, the type Bool is a struct type defined in the standard library. Since structs have value semantics in Swift, instances of Bool behave as expected. Similarly, the logical operators &&, || and ! are also defined in the standard library. A full list of all the operators can be found in the Swift Standard Library Operators Reference [App17d]. Programmers can overload these predefined operators for their own types. However, it is even possible to define completely new operators. Listing 4.4 shows how to define a custom power operator:

Listing 4.4: Custom power operator
 1  infix operator **: MultiplicationPrecedence
 2
 3  func **(x: Int, y: Int) -> Int {
 4      var result = 1
 5      for _ in 0..<y {
 6          result *= x
 7      }
 8      return result
 9  }
10
11  print(2 ** 8) // prints '256'

On line 1 of the listing, a new infix operator called ** is declared. Additionally, the operator is added to the precedence group MultiplicationPrecedence. This means that it has the same associativity and precedence as the * operator. On line 3 of the listing, an operator function for the power operator ** is declared. In order to determine if a given operator application is syntactically correct, the parser needs to know whether the operator is a prefix, infix or postfix operator.
Thus, the lexer needs to encode that information into the operator tokens it emits. It differentiates between the three cases based on the characters before and after the operator. Listing 4.5 shows an example:

Listing 4.5: Lexing / Parsing of Operators
 1  prefix operator +++
 2  infix operator +++
 3  postfix operator +++
 4
 5  prefix func +++(x: Int) -> String { return "prefix +++" }
 6  func +++(x: Int, y: Int) -> String { return "infix +++" }
 7  postfix func +++(x: Int) -> String { return "postfix +++" }
 8
 9  let x = 0
10  let y = 1
11
12  print(+++x)    // 1) prints 'prefix +++'
13  print(x+++y)   // 2) prints 'infix +++'
14  print(x +++ y) // 3) prints 'infix +++'
15  print(x+++)    // 4) prints 'postfix +++'
16  print(+++ x)   // 5) syntax error
17  print(x +++)   // 6) syntax error
18  print(x+++ y)  // 7) syntax error
19  print(x +++y)  // 8) syntax error

In this example, the custom operator +++ is declared to be a prefix, an infix as well as a postfix operator. Each operator has its own operator function which simply returns a String that describes the operator. Whether a given operator is interpreted as a prefix, infix or postfix operator depends on the spacing between the operator and its operand(s). The following list explains each case:

1. The operator +++ appears before the operand x and there is no whitespace in between. Thus, it is treated as a prefix operator.
2. The operator +++ appears between the two operands x and y with no spacing. In this case +++ is treated as an infix operator.
3. The operator +++ appears between the two operands x and y with a space on each side of the operator. Again, +++ is treated as an infix operator.
4. The operator +++ appears after the operand x and there is no whitespace in between. Thus, it is treated as a postfix operator.
5. In this case the lexer generates an infix operator token for +++ and an identifier token for x.
That then leads to a syntax error during parsing, because infix operators expect two operands and not just one.

6. The lexer generates an identifier token for x and an infix operator token for +++. Again, this leads to a syntax error during parsing, because infix operators expect two operands.
7. The lexer first generates an identifier token for x and a postfix operator token for +++. So far this would be syntactically correct, but since it is followed by an additional identifier token for y, the parser reports a syntax error.
8. This is a similar problem where an operator appears between two operands, but the operator isn't treated as an infix operator because of the inconsistent spacing around it.

There is another rule that is worth mentioning. In general, operators cannot contain periods (.). This makes it easy to access a member of the result of a postfix operator expression as shown in Listing 4.6:

Listing 4.6: Accessing member of postfix operator expression
 1  postfix operator -!-
 2
 3  postfix func -!-(s: String) -> String {
 4      return s.lowercased()
 5  }
 6
 7  let greeting = "Hello, World"
 8  for c in greeting-!-.characters {
 9      print(c)
10  }

Otherwise, it would be impossible for the lexer to know whether this is an infix operator -!-. or a postfix operator -!- followed by a period. However, there is an exception to this rule. If an operator starts with a period, it can also contain periods in the rest of the operator. This allows for operators like Swift's closed range operator (e.g., 0...10).

4.4 Literals

Swift supports 8 different types of literals. The first 6 are described in this section. The others (array literals and dictionary literals) consist of multiple tokens and therefore don't belong in the lexer section.

4.4.1 Integer Literals

Integer literals are used to express integral numbers and can be written in four different numeral systems (binary, octal, decimal and hexadecimal).
Examples are shown in Listing 4.7:

Listing 4.7: Integer literals in Swift
1  print("Binary 0b10101010 == \(0b10101010)") // prefix 0b => binary numeral system
2  print("Octal 0o252 == \(0o252)")            // prefix 0o => octal numeral system
3  print("Decimal 170 == \(170)")              // no prefix => decimal numeral system
4  print("Hexadecimal 0xAA == \(0xAA)")        // prefix 0x => hexadecimal numeral system

For better readability Swift allows the use of underscores within integer literals (e.g., 200_000, 0xAA_BB_CC, 0o111_222). These underscores do not affect the value of the literal but merely serve as visual separators. The default inferred type of an integer literal is the Swift standard library type Int, which represents a 32-bit or a 64-bit signed integer value type depending on the architecture of the current platform [tsp17e]. If the literal doesn't fit into the inferred type, the compiler emits an error.

4.4.2 Floating-Point Literals

Floating-point literals are used to express floating-point numbers. There are decimal and hexadecimal floating-point literals. Listing 4.8 shows examples for both forms:

Listing 4.8: Floating-point literals in Swift
1  // decimal floating-point literals
2  print("Decimal fp-literal with fraction 99.5 == \(99.5)")                       // 99.5
3  print("Decimal fp-literal with exponent 123e4 == \(123e4)")                     // 123 * (10 ^ 4)
4  print("Decimal fp-literal with fraction & exponent 765.4e-3 == \(765.4e-3)")    // 765.4 * (10 ^ -3)
5
6  // hexadecimal floating-point literals
7  print("Hexadecimal fp-literal with exponent 0xFFp2 = \(0xFFp2)")                // 255 * (2 ^ 2)
8  print("Hexadecimal fp-literal with fraction & exponent 0x12.Ap4 = \(0x12.Ap4)") // 18.625 * (2 ^ 4)

Note that it is not possible to leave out the exponent for a hexadecimal floating-point literal. This is because it could lead to confusing code like print(0x123.beef). Does this code print a floating-point number or the result of accessing the property beef of the integer literal 0x123?
It could be both, so the compiler reports an error in such ambiguous cases. The default inferred type of a floating-point literal is the Swift standard library type Double, which represents a 64-bit floating-point number. Note that there are also the types Float to represent 32-bit floating-point numbers and Float80 to represent 80-bit floating-point numbers.

4.4.3 String Literals

Swift supports two kinds of string literals. There are static string literals which look similar to the string literals of other languages (e.g., "hello"). Additionally, there are so-called interpolated string literals. These can contain expressions that are evaluated at run time and concatenated with the rest of the string. An example of this is shown in Listing 4.9:

Listing 4.9: Interpolated string literal in Swift
1  import Foundation
2
3  let x = 36.0
4  print("sqrt(x) == \(sqrt(x))")         // string interpolation
5  print("sqrt(x) == " + String(sqrt(x))) // string concatenation

On line 4, string interpolation is used. On line 5, the same result is achieved with string concatenation. Additionally, both kinds of string literals may contain any of the escape sequences listed in Table 4.1:

Table 4.1: Escape sequences

  Description        Escape Sequence
  Null Character     \0
  Backslash          \\
  Horizontal Tab     \t
  Line Feed          \n
  Carriage Return    \r
  Double Quote       \"
  Single Quote       \'
  Unicode scalar     \u{...}

The escape sequence for Unicode scalars \u{...} takes between one and eight hexadecimal digits that denote a Unicode code point.

4.4.4 Boolean Literals

The boolean literals consist of the two keywords true and false. If any of these values is assigned to a variable or a constant, its type is inferred as a Bool.

4.4.5 Nil Literal

The keyword nil is a literal that is used to denote that an optional does not have a value. An example is shown in Listing 4.10:

Listing 4.10: nil literal in Swift
1  let x: Int? = nil
2
3  if let x = x {
4      print("x = \(x)")
5  } else {
6      print("x is nil")
7  }

Note that the type annotation for the variable x is required, because the compiler cannot infer a type from nil.

4.4.6 Compiler Literals

There are four keywords that start with a number sign (#) and that can be used as literal expressions (#file, #line, #column and #function). They are useful for logging and debugging purposes. An example is shown in Listing 4.11:

Listing 4.11: Compiler literals in Swift
1  func f() {
2      print("This literal appears in file \(#file).")
3      print("This literal appears on line \(#line).")
4      print("This literal appears in column \(#column).")
5      print("This literal appears in function \(#function).")
6  }
7
8  f()

4.5 Punctuation

The following tokens are reserved for punctuation and cannot be used as custom operators: (, ), {, }, [, ], ., ,, :, ;, =, @, #, ->, ` and ?. Additionally, the & character cannot be used as a prefix operator and the ! character cannot be used as a postfix operator.

4.6 Comments

Swift supports comments that extend to the end of the current line (single-line comments) and comments that can spread over several lines (multi-line comments). Comments are ignored by the compiler and could therefore already be discarded by the lexer. However, for advanced IDE functionality such as refactoring, we may want to keep them around, in order to be able to perform valid code transformations. Thus, the lexer defines a separate token kind for comments.

4.6.1 Single-line Comments

Single-line comments begin with a // and extend to the end of the current line. Listing 4.12 shows an example:

Listing 4.12: Single-line comments in Swift
1  print("Hello, ...")
2  print("... world!") // a single-line comment

4.6.2 Multi-line Comments

Multi-line comments begin with a /* and end with a */.
In contrast to some other languages such as Java and C++, multi-line comments in Swift can be nested within each other as shown in Listing 4.13: Listing 4.13: Multi-line comments in Swift 1 2 3 4 5 6 7 8 /* a multi−line comment /* a nested multi−line comment */ */ print("Hello, world!") 4.7 Implementation Status For the most part, the lexer works as expected and there aren’t a lot of changes coming in Swift 3 that are relevant to this component. However, there are two known issues that need to be addressed in the future. For example, the lexer does not yet support unicode characters that take up more than 1 character in the UTF-16 encoding which Java uses to encode strings. This means for example, that it cannot correctly recognize identifiers that contain emoji. The second issue is related to interpolated string literals. At the moment, the lexer emits a single string token for the entire interpolated string literal. It would be better to generate individual tokens for the expressions used within the literal. This way the parser and the indexer can analyze that code as well and report syntax and semantic errors if there are any. 36 5 Parser The parser consumes the tokens generated by the lexer and uses the rules of the Swift grammar to verify the syntax of the program. At the same time it builds an AST (abstract syntax tree). The AST is an intermediate representation of the code that can later be traversed to perform semantic analysis, type checking, etc. Figure 5.1 shows the AST that results from the tokens for the expression try sqrt(x: 5.2): Figure 5.1: Parsing Process 5.1 Architecture Tifig uses a recursive-descent parser with arbitrary lookahead and support for backtracking [Alf06]. Its architecture is very much influenced by the patterns in the book Language Implementation Patterns by Terence Parr [Ter10]. Most Swift code can be parsed with only one or two tokens of lookahead. 
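Such token lookahead can be sketched with a small token-stream wrapper. The classes and names below are hypothetical simplifications, not Tifig's actual code: a method la(k) peeks k tokens ahead without consuming them, while match() consumes the current token if it is the expected one.

```java
import java.util.List;

// Hypothetical sketch of a lookahead token stream. Real tokens would
// carry a kind, text and source offsets instead of a plain String.
public class TokenStream {
    private final List<String> tokens;
    private int pos = 0;

    public TokenStream(List<String> tokens) { this.tokens = tokens; }

    public String la(int k) {           // 1-based lookahead: la(1) is the next token
        int i = pos + k - 1;
        return i < tokens.size() ? tokens.get(i) : "<EOF>";
    }

    public String match(String expected) {
        String t = la(1);
        if (!t.equals(expected))
            throw new RuntimeException("expected " + expected + " but got " + t);
        pos++;                          // consume the token
        return t;
    }

    public static void main(String[] args) {
        TokenStream ts = new TokenStream(List.of("if", "x", "{", "}"));
        System.out.println(ts.la(1));   // if
        System.out.println(ts.la(2));   // x
        ts.match("if");
        System.out.println(ts.la(1));   // x
    }
}
```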
However, there are a few situations that require the ability to speculatively parse code and to backtrack if necessary. This section gives an overview of the parser architecture and explains recursive-descent parsing and backtracking in more detail. 37 5 Parser 5.1.1 Parser Modules Swift is a general-purpose programming language with relatively many language features. Thus, it is best not to implement the whole parser in one large and complex class. For this reason, the parser has been split up into six modules, each of which is responsible for parsing a group of related language elements. They are the same groups as the ones used in Swift’s official Language Reference [tsp17c]: declarations, statements, expressions, patterns, types and attributes. Each group is described in the following list: • Declarations Declarations introduce new names into a program. They can be used to declare new named objects (e.g., variables, constants, functions, etc.) or new named types (e.g., classes, structs, enums, etc.). Additionally, you can also use a declaration to extend the behaviour of an existing named type (with extension declarations) and to import external symbols into your program (with import declarations). Listing 5.1 shows an example of a struct declaration with two nested property declarations: Listing 5.1: Declarations in Swift 1 2 3 4 struct Point { let x: Int let y: Int } • Statements In an imperative programming language like Swift, statements are the instructions that are executed when the program is running. There are simple statements (e.g., declaration statements and expression statements) and there are statements that influence the control flow of the program (e.g., if statements, loop statements, return statements, etc.). Additionally, there are compiler-control statements which can be used to conditionally compile parts of the code. Listing 5.2 shows a for loop statement. 
The loop’s body is a code block that contains an expression statement in this example: Listing 5.2: Statements in Swift 1 2 3 for i in 1...10 { print("i = \(i)") } A statement can optionally be terminated with a semicolon (;). However, this is only required if two statements appear on the same line. • Expressions In Swift, there are four groups of expressions. Primary expressions are things like identifier expressions (e.g., referring to a variable / constant) and literal expressions (e.g., numeric literals, array literals, etc.). Postfix expressions are things like function call expressions, subscript expressions or the application of a postfix operator to a primary expression. Postfix expressions are a superset of primary expressions. Thus, primary expressions by themselves are also considered to be postfix expressions. 38 5 Parser Prefix expressions are things like the application of a prefix operator to a postfix expression or inout expressions (used for the argument supplied for an inout parameter). Prefix expressions are a superset of postfix expressions. Thus, postfix expressions by themselves are also considered to be prefix expressions. Finally, binary expressions combine one or more prefix expressions using binary operators. Since a binary expression can consist of a single prefix expression, binary expressions are a superset of prefix expressions. Expressions can be evaluated to a value and have a type. They can be used in various places (e.g., expression statements, variable initializers, if conditions, etc.). As an example, let’s look at the expression that is printed out in Listing 5.3: Listing 5.3: Expressions in Swift 1 2 3 let arr = [1, 2, 3, 4] let x = 10 print(−arr[2] * x) A slightly simplified AST for that expression is shown in Figure 5.2: Figure 5.2: AST for expression −arr[2] * x Note that child nodes are evaluated before their parents. Thus, primary expressions (e.g., arr, 2, x) are evaluated first. 
Then the postfix expressions (e.g., arr[2]) are evaluated and after that the prefix expressions (e.g., -arr[2]). Finally, binary expressions are evaluated (e.g., -arr[2] * x). Therefore, postfix operators have a higher precedence than prefix operators, which in turn have a higher precedence than infix operators. If multiple binary operators and operands appear on the same level, the precedence and associativity of the individual operators determine the order of evaluation.

• Patterns
Patterns represent the structure of a single value or a composite value. One pattern can be matched with many different values that have the same structure. For example, the tuple pattern (_, 2) matches any two-element tuple (pair) whose second element is the integer value 2. Patterns can also be used to extract parts of a composite value. For example, the value-binding pattern let (x, y) binds the two elements of a pair to the constants x and y. Patterns can appear in several places (e.g., variable declarations, switch statements, for loops, etc.). As an example, Listing 5.4 shows how patterns can be used to match certain value structures and to extract information from a composite value:

Listing 5.4: Patterns in Swift
let point = (0, 42)

switch point {
case (0, 0):
    print("Point is at the origin.")
case (let x, 0):
    print("Point is on x-axis at offset \(x).")
case (0, let y):
    print("Point is on y-axis at offset \(y).")
default:
    print("Point is not on an axis.")
}

• Types
Swift differentiates between named types and compound types. Named types are introduced through a type declaration (e.g., classes, enums, structs and protocols). Basic types such as Int and Double are declared in the standard library in the form of named struct types. Struct types cannot be subclassed, but like all named types, they can be extended with extensions. Compound types don't have a name and they are defined by the language itself.
There are two kinds of compound types in Swift: function types and tuple types. Compound types cannot be extended with extensions. Listing 5.5 shows examples of different kinds of type annotations in Swift:

Listing 5.5: Types in Swift
import Darwin

let a: Int = 0                           // Int is a named type
let b: [Int] = [1, 2, 3]                 // [Int] is syntactic sugar for the named type Array<Int>
let c: (Int, String) = (5, "Test")       // (Int, String) is a tuple type
let d: (Double, Double) -> Double = pow  // (Double, Double) -> Double is a function type

• Attributes
Attributes are used to provide more information about a declaration or a type. Listing 5.6 shows an example of the declaration attribute @discardableResult which specifies that the compiler should not emit a warning if the function writeToFile() is called and its return value is discarded:

Listing 5.6: Declaration attribute @discardableResult
@discardableResult
func writeToFile(str: String) -> Int {
    // write to file
    return bytesWritten
}

writeToFile(str: "File content") // no compiler warning

Additionally, attributes may have arguments as shown in Listing 5.7:

Listing 5.7: Declaration attribute @available
@available(iOS 9.0, OSX 10.11, *)
class MyClass {
    // class definition
}

The attribute @available has two arguments (iOS 9.0 and OSX 10.11) which specify the operating system versions with which the class was introduced.

Figure 5.3 shows an overview of the parser architecture with its modules. Note that throughout the project the abbreviations Decl, Stmt and Expr are used for the terms declaration, statement and expression, respectively.

Figure 5.3: Parser modules

The individual parser modules inherit from a common, abstract superclass ParserModule. An instance of Parser aggregates one instance of each of the six parser modules. The individual modules need to be able to talk to each other and to the parser which maintains the parse state.
This is why the class ParserModule has a reference back to the class Parser.

5.1.2 Recursive-Descent Parsing

Recursive-descent parsing is a top-down parsing technique in which each production from the language grammar is implemented with a separate method. For each non-terminal on the right hand side of a production, the corresponding method is called. For each terminal, the parser matches the current token with the expected token and consumes it [Alf06]. In order to illustrate this process, let's look at an example using the production rule for if statements:

if-statement → if condition-list code-block else-clause_opt
else-clause  → else code-block | else if-statement

Terminals are written in bold and the non-terminal else-clause is marked as optional using the opt subscript. Note that the two production rules if-statement and else-clause are mutually recursive, which means that they are defined in terms of each other. This is valid and quite common in language grammars. Listing 5.8 shows the implementation of the if-statement production rule in the parser module StmtParser:

Listing 5.8: Implementation of production rule for if statements
private IfStmt ifStmt() throws RecognitionException {
    match(Kind.KW_IF);
    final ConditionList conditionList = parse(this::conditionList);
    final CodeBlock thenBlock = parse(this::codeBlock);

    CodeBlock elseBlock = null;
    IfStmt elseIfStmt = null;
    if(la(1).is(Kind.KW_ELSE)) {
        match(Kind.KW_ELSE);
        if(la(1).is(Kind.KW_IF)) {
            elseIfStmt = parse(this::ifStmt);
        } else {
            elseBlock = parse(this::codeBlock);
        }
    }

    return new IfStmt(conditionList, thenBlock, elseBlock, elseIfStmt);
}

Note that the name la in the method call la(1) is an abbreviation for "lookahead". Thus, the method call la(1) returns the next token without consuming it. Similarly, the method call la(2) would return the token that comes after the next token. This allows the parser to decide which path to take next.
Let’s examine the code in detail: 1. First the parser matches the terminal if. If the next token is of type Kind.KW_IF (i.e., the keyword if), the call to match() succeeds and the token is consumed. If the next token is something else, match() throws a RecognitionException which has to be handled somewhere up the call chain. 2. Then the non-terminals condition-list and code-block are parsed by calling their corresponding production rule methods. These methods are not called directly but by way of the higher-order function parse(). Each production rule method returns an AST node and the purpose of the parse() method is to abstract away the task of capturing the tokens that make up a specific node. 3. After that, the else-clause is parsed. Note that its production rule has been integrated into the production rule for the if-statement because it is not used anywhere else. Since the else-clause is optional we first check whether the next token is the else keyword. If it is, the parser matches the token. The else-clause can either be a code-block (statements wrapped in braces) or another if-statement. Depending on the next token, it either calls the codeBlock() or the ifStmt() method. 42 5 Parser 4. Finally, the parser instantiates and returns the IfStmt node with the child nodes that were parsed throughout the ifStmt() method. 5.1.3 Speculative Parsing and Backtracking The example in Listing 5.8 never needed more than one token lookahead (la(1)) to fulfill its task. This is the case for most production rule methods. There are a few that require two tokens lookahead (la(2)) but those are very similar to the method shown in Listing 5.8. However, there are some situations in which a fixed amount of lookahead is not enough to decide which path to take next. The example in Listing 5.9 illustrates this problem. Note that the type annotations are only there to clarify the meaning of the program. 
They could be inferred by the compiler:

Listing 5.9: Closures require speculative parsing
var x = 1, y = 2, z = 3

let closure1: () -> [Int] = { [x, y, z] }
print(closure1()) // prints '[1, 2, 3]'

let closure2: () -> Int = { [x, y, z] in x + y + z }
print(closure2()) // prints '6'

The example defines two closures which both start with the same sequence of tokens [x, y, z]. However, they are very different from each other. In closure1, [x, y, z] is an array literal that is implicitly returned by the closure. In closure2, [x, y, z] is a capture list which explicitly captures the global variables x, y and z by making a copy of them. When the parser reaches the opening square bracket ([), it cannot know whether the following tokens represent an array literal or a capture list. It is only when it reaches the token following the closing square bracket (]) that it knows for certain which path to take. Since array literals and capture lists can be arbitrarily long, a fixed amount of lookahead is not sufficient and the parser needs the ability to do speculative parsing and backtracking. Listing 5.10 shows how the higher-order function speculate() can be used to do speculative parsing:

Listing 5.10: Example of speculative parsing
private ClosureExpr closureExpr() throws RecognitionException {
    match(Kind.LBRACE);

    CaptureList captureList = null;
    if(speculate(this::captureList)) {
        captureList = parse(this::captureList);
    }

    // ...
}

If speculate(this::captureList) returns true, it means that the following tokens are indeed a capture list. Internally, speculate() calls captureList() to see if it can successfully parse a capture list. The method captureList() throws an exception if it is not followed by the keyword in (or by a closure signature, but that is not relevant right now). This indicates to the speculate() method that it should backtrack to the previous position and return false.
Listing 5.11 shows the implementation of speculate():

Listing 5.11: Implementation of speculate()
<T extends IASTNode> boolean speculate(ParseFunction<T> parseFunc) {
    boolean success = true;
    markers.push(pos);

    try {
        parseFunc.apply();
    } catch(final RecognitionException e) {
        success = false;
    }

    pos = markers.pop();
    return success;
}

First, a boolean variable called success is declared and set to true. This variable tracks whether the speculation was successful or not. Then, the method stores the current token index by pushing it on the markers stack. The reason why the token index is pushed onto a stack and not just stored in a simple instance variable is because there may be nested calls to speculate() within the production rule method that was passed in for the parameter parseFunc. Next, the method stored in parseFunc is called. In Java 8, function types are expressed by specifying a functional interface with a single method [tjl17b] [tjl17a]. In this case the functional interface is called ParseFunction and its only method is apply(). If a RecognitionException is thrown, success is set to false to indicate that the speculation failed. The speculate() method then backtracks to its original position by resetting the current token index to the top marker on the markers stack. Finally, the success variable is returned to indicate to the caller whether the speculation was successful.

5.2 AST

The AST is an intermediate representation of the code. The indexer traverses the AST in order to resolve names, infer expression types and to find semantic errors. A semantic error means that the code is syntactically correct, but there is some other problem with it. For example, a name may be used without it being declared or a value of type A is assigned to a variable of type B where A and B are incompatible.
Once refactoring support is added, we also want to be able to analyze the AST to detect problems and to modify and rewrite the AST in order to reflect those changes in the code.

5.2.1 Requirements

As with the parser itself, the requirements for the AST are similar to but not the same as those of an AST that a compiler might generate. Most importantly, the AST should be as abstract as possible but simultaneously as close to the original source as necessary. For example, consider the code examples in Listing 5.12 and Listing 5.13:

Listing 5.12: Code example A
import Darwin

var x: Int = 42
var y = (4 * 5) + 3
var z: UInt32 {
    get {
        return arc4random() % 100
    }
}

Listing 5.13: Code example B
import Darwin

var x = 42
var y = 4 * 5 + 3
var z: UInt32 {
    return arc4random() % 100
}

Both examples declare three variables x, y and z. Each example does it in a syntactically slightly different way, but both are semantically equivalent. With the variable x, the type annotation is redundant, because it can be inferred from the initial value 42. The parentheses in the initializer expression of variable y are also unnecessary, because the * operator has a higher precedence than the + operator. Finally, z is a read-only, computed property. If there is no setter, one can leave out the get keyword as shown in Listing 5.13 and get the same result. One could imagine an AST that is the same for both code examples. However, for the purposes of an IDE that would be too abstract. For example, we may want to provide refactorings to transform the declarations to their shorter forms. That would not be possible if the AST does not contain these details. Additionally, since Swift allows the declaration of custom operators, we may not know the precedence of an operator at parse time and therefore cannot tell whether the parentheses are necessary or not.
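One way for an AST to retain such source-level details is an explicit node for parenthesized expressions. The Java sketch below is hypothetical (these are not Tifig's actual node classes): it records the parentheses the user wrote, and lets a later analysis, once all operator precedences are known, decide whether they were redundant.

```java
// Hypothetical AST nodes that preserve user-written parentheses.
public class ParenDemo {
    interface Expr {}

    static class IntLiteral implements Expr {
        final int value;
        IntLiteral(int value) { this.value = value; }
    }

    static class BinaryExpr implements Expr {
        final Expr lhs, rhs;
        final String op;
        BinaryExpr(Expr lhs, String op, Expr rhs) { this.lhs = lhs; this.op = op; this.rhs = rhs; }
    }

    // Explicit node: records that the user wrote parentheses around 'inner'.
    static class ParenExpr implements Expr {
        final Expr inner;
        ParenExpr(Expr inner) { this.inner = inner; }
    }

    // Builds the AST for: (4 * 5) + 3 — the ParenExpr keeps the
    // parentheses visible to a "remove redundant parentheses" refactoring.
    static Expr example() {
        Expr mul = new BinaryExpr(new IntLiteral(4), "*", new IntLiteral(5));
        return new BinaryExpr(new ParenExpr(mul), "+", new IntLiteral(3));
    }

    // A paren node is redundant if its inner operator binds more tightly
    // than the surrounding one (a deliberately simplified precedence check).
    static boolean isRedundant(ParenExpr paren, String outerOp, java.util.Map<String, Integer> prec) {
        if (!(paren.inner instanceof BinaryExpr)) return true;
        String innerOp = ((BinaryExpr) paren.inner).op;
        return prec.get(innerOp) > prec.get(outerOp);
    }

    public static void main(String[] args) {
        var prec = java.util.Map.of("+", 1, "*", 2);
        BinaryExpr plus = (BinaryExpr) example();
        System.out.println(isRedundant((ParenExpr) plus.lhs, plus.op, prec)); // true
    }
}
```

The check can only run after the indexer has resolved custom operator declarations, which is exactly why the parser itself must keep the ParenExpr node rather than discard it.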
5.2.2 Structure The AST consists of nodes that are instances of subclasses of the abstract superclass ASTNode. Each language construct (e.g., an if statement, a function call expression, a class declaration) has its own node class. These node classes are grouped in the same way as the parser modules described in section 5.1. There are declarations, statements, expressions, types, patterns and attributes. Each node has zero or more child nodes (getChildren()) and also has a reference to its parent node (getParent()). As an example, Listing 5.14 shows how the node class for the repeat-while statement is implemented: 45 5 Parser Listing 5.14: Node class for repeat-while statement 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 public class RepeatWhileStmt extends Stmt { private final CodeBlock body; private final IExpr condition; public RepeatWhileStmt(CodeBlock body, IExpr condition) { this.body = body; this.condition = condition; } public CodeBlock getBody() { return body; } public IExpr getConditionExpr() { return condition; } @Override public String getTreeStringTagName() { return "repeat_while_stmt"; } @Override public boolean accept(ASTVisitor visitor) { return acceptVisitor(visitor, body, condition); } } The following list explains the most important aspects of this implementation: • RepeatWhileStmt extends the abstract superclass Stmt which in turn extends the abstract superclass ASTNode. Thus, it inherits the methods getChildren() and getParent() which were mentioned above. • The node class has two child nodes which are stored in the instance variables body and condition. The body has to be a code block whereas the condition can be any kind of expression (in a later stage the semantic analyzer will have to make sure that the condition expression is of type Bool). The node class also defines getter methods for these two instance variables. • The method getTreeStringTagName() returns a short name for the node class RepeatWhileStmt. 
This is used to create a string description of the AST and will be described in more detail in section 5.4. • The method accept() is part of the visitor pattern [E. 94] which is used to traverse an AST. This will be described in the next section. 5.2.3 Visiting an AST To analyze a program we can traverse its AST using the visitor pattern. There are several variations of this pattern. The one used in this project is very much influenced by the visitor / AST structure of the Eclipse CDT project [ecl17a]. In order to visit an AST, one has to create a subclass of the abstract class ASTVisitor. The ASTVisitor class defines a visit() and leave() method for each kind of AST node. The default implementations of these methods do nothing and simply continue 46 5 Parser the visitation process. In the ASTVisitor subclass one can customize this behaviour by overriding one or more visit() / leave() methods. An instance of the visitor class is then passed to the accept() method of the AST’s root node in order to start the visitation process. The example in Listing 5.15 shows how an ASTVisitor can be used to collect all Name nodes in a source file: Listing 5.15: Visiting an AST to collect Name nodes 1 2 3 4 5 6 7 8 9 10 11 Stream<Name> collectNames(SourceFile ast) { final List<Name> names = new ArrayList<>(); ast.accept(new ASTVisitor() { @Override public int visit(Name name) { names.add(name); return PROCESS_CONTINUE; } }); return names.stream(); } The visit() and leave() methods return an integer, which can be used to abort the visitation process (by returning PROCESS_ABORT) or to skip a subtree of the AST (by returning PROCESS_SKIP). The default implementations in ASTVisitor return PROCESS_CONTINUE which continues the visitation process. By overriding the visit() methods, the tree can be visited in preorder and by overriding the leave() methods it can be visited in postorder [Rob11]. 
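The visit()/leave() protocol and its return values can be illustrated with a stripped-down stand-in. The classes below are hypothetical miniatures, not the actual Tifig or CDT types:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the visit()/leave() visitation protocol.
public class MiniVisitorDemo {
    static final int PROCESS_CONTINUE = 0;
    static final int PROCESS_SKIP = 1;
    static final int PROCESS_ABORT = 2;

    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name, Node... kids) {
            this.name = name;
            for (Node k : kids) children.add(k);
        }

        // Returns false if the visitation was aborted.
        boolean accept(Visitor v) {
            int r = v.visit(this);              // preorder hook
            if (r == PROCESS_ABORT) return false;
            if (r != PROCESS_SKIP) {            // PROCESS_SKIP prunes the subtree
                for (Node c : children)
                    if (!c.accept(v)) return false;
            }
            return v.leave(this) != PROCESS_ABORT; // postorder hook
        }
    }

    static class Visitor {
        int visit(Node n) { return PROCESS_CONTINUE; }
        int leave(Node n) { return PROCESS_CONTINUE; }
    }

    // Collects node names in preorder, skipping the subtree rooted at "skip".
    static List<String> preorderNames(Node root) {
        List<String> names = new ArrayList<>();
        root.accept(new Visitor() {
            @Override int visit(Node n) {
                names.add(n.name);
                return n.name.equals("skip") ? PROCESS_SKIP : PROCESS_CONTINUE;
            }
        });
        return names;
    }

    public static void main(String[] args) {
        Node tree = new Node("root",
                new Node("skip", new Node("hidden")),
                new Node("b"));
        System.out.println(preorderNames(tree)); // [root, skip, b]
    }
}
```

Overriding visit() yields a preorder traversal and overriding leave() a postorder one; returning PROCESS_SKIP prunes a subtree, which is why "hidden" never appears in the collected names.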
5.3 Error handling When the parser encounters a token that it did not expect, it throws a RecognitionException. A method up the call chain will then catch this exception and handle it. To do that, it creates an appropriate ProblemNode and inserts it into the AST. This way, the errors can be displayed in the editor simply by visiting the AST and creating a marker for each ProblemNode as shown in subsection 7.4.3. Additionally, the indexer, the outline view and other components that rely on the AST can just skip over the ProblemNodes and look at the valid parts of the AST. Since we want to insert the problem nodes in places, where declarations (IDecl), statements (IStmt) or expressions (IExpr) are expected, there are various subclasses of ProblemNode that implement the corresponding interfaces. This is shown in Figure 5.4: 47 5 Parser Figure 5.4: Problem Nodes When the parser catches a RecognitionException it keeps consuming tokens until it reaches a point, where it can start to parse again. In order to illustrate this process, Listing 5.16 shows an example of how RecognitionExceptions are handled in the production rule method for declarations: Listing 5.16: Example of error handling 1 2 3 4 5 6 7 8 9 10 11 12 13 14 IDecl decl() { pushStartTokenIndex(getTokenIndex()); final int size = getCurrentStartTokenIndexStackSize(); try { // try to parse a declaration } catch(final RecognitionException e) { reduceStartTokenIndexStackToSize(size); consumeWhile(t −> { return !isStartOfDecl(t); }); return addTokens(new ProblemDecl(e.getMessage())); } } First, the current token index is pushed onto a stack. This is required in order to be able to assign the tokens that belong to the resulting AST node at the end of the method. Normally, this is encapsulated in the higher-order function parse() which was shown in subsection 5.1.2, but here we need to customize the default behaviour. The method also stores the current size of the token index stack. 
If a RecognitionException is thrown, the normal control flow is interrupted and some token indices may not be popped off the stack. Thus, the original size is restored in the catch clause with the call to the method reduceStartTokenIndexStackToSize(). 48 5 Parser Then, additional tokens are consumed until the start of a new declaration is found. Finally, the parser creates and returns a ProblemDecl which contains the error message. At this point, the parser resumes the normal parsing process. 5.4 Testing Tifig uses a set of automated tests to ensure that the quality of the parser does not deteriorate and that existing functionality keeps working after changes are made. The test cases use the parser to parse an input program and compare the resulting AST to an expected value. In order to compare the resulting ASTs, they are first transformed into a string representation. Listing 5.17 shows an example of a simple function declaration and Figure 5.5 shows the string representation of this program’s AST: Listing 5.17: Function declaration 1 2 3 private func myfunc(x: Int) { } Figure 5.5: String representation of an AST (function_decl modifiers='private' throwing_behaviour='none' (name text='myfunc') (parameter_clause (parameter variadic='false' (name text='x') (type_annotation inout='false' (type_identifier (type_identifier_element (name text='Int')))))) (code_block)) All parser test cases inherit from the superclass ParserTestCase which implements a few helper methods. 
An example of this is shown in Listing 5.18: 49 5 Parser Listing 5.18: Example of a parser test case 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 public class ClassDeclTests extends ParserTestCase { // class MyClass { // func f() {} // } // (class_decl modifiers='' // (name text='MyClass') // (decl_body // (function_decl modifiers='' throwing_behaviour='none' // (name text='f') // (parameter_clause) // (code_block)))) @Test public void testClassDeclWithInstanceMethod() { assertEqualSourceFileContent(); } // other test cases } The method assertEqualSourceFileContent() is a helper method that is declared in the superclass ParserTestCase. First, it reads the two comments above the test method. The first comment contains the Swift source code and the second comment contains the string representation of the expected AST. The code in the first comment is then parsed and the resulting AST’s string representation is compared with the expected value specified in the second comment. In addition to assertEqualSourceFileContent(), there are other helper methods that allow us to test certain language constructs without having to specify a full, valid program each time (e.g., assertEqualType(), assertEqualExpr(), etc). 5.5 Implementation Status Tifig’s parser is fully compatible with the Swift 3 grammar. However, there are still a few remaining issues that should be fixed in the future: • Improve Performance When the parser needs to speculate, it parses the same code twice. This should be improved in the future (e.g., with a memoizing parser). • Interpolated String Literals As mentioned in section 4.7, the lexer currently creates only a single token for an interpolated string literal. Once this is fixed, the parser must be updated as well and parse the expressions that are embedded within the string literal. • Error handling The errors reported by the parser are still too imprecise. This should be improved in the future. 
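The memoizing-parser improvement mentioned above could work roughly as follows. This is a simplified sketch with hypothetical names, not Tifig's code: the outcome of speculating a rule at a token position is cached, so re-speculating the same rule at the same position becomes a table lookup instead of a second parse.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of memoized speculation: cache results per (rule, token position).
public class MemoDemo {
    private final Map<String, Boolean> memo = new HashMap<>();
    int ruleEvaluations = 0; // counts how often the expensive parse work runs

    interface Rule { boolean apply(); }

    boolean speculate(String ruleName, int tokenIndex, Rule rule) {
        String key = ruleName + "@" + tokenIndex;
        Boolean cached = memo.get(key);
        if (cached != null) return cached;  // answered from the memo table
        ruleEvaluations++;
        boolean ok = rule.apply();          // parse speculatively (and backtrack)
        memo.put(key, ok);
        return ok;
    }

    public static void main(String[] args) {
        MemoDemo p = new MemoDemo();
        Rule captureList = () -> false;     // pretend the speculation fails
        p.speculate("captureList", 17, captureList);
        p.speculate("captureList", 17, captureList); // second call is cached
        System.out.println(p.ruleEvaluations); // 1
    }
}
```

A real memoizing (packrat) parser would also have to invalidate or scope the table when the token stream changes, e.g. after an edit in the IDE.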
50 6 Indexer This chapter describes what the Indexer does and how it is implemented in the Tifig IDE. The code shown in this chapter is in part influenced by what is shown in the book Language Implementation Patterns [Ter10]. 6.1 The job of an Indexer The indexer is responsible for semantic analysis of the source code which enables more advanced IDE features such as “Jump to Definition” or “Open Call Hierarchy”. The indexer visits the ASTs of the files in the project in order to create bindings for all the named entities of the program (e.g., variables, functions, classes). A binding is like an entry in a symbol table. Names that refer to the same entity also have the same binding. However, note that two occurrences of the same name do not necessarily refer to the same entity. For example, consider the code in Listing 6.1: Listing 6.1: Same name, two different variables 1 2 3 4 5 6 7 8 let x = 5 func f() { let x = 10 print(x) } print(x) In this program, there are two entities called x; a global variable and a local variable. Thus, there are two bindings with the name x. The name x on line 5 refers to the x declared on line 4 and is therefore associated with the binding for the local variable. On the other hand, the name x on line 8 refers to the x declared on line 1 and is therefore associated with the binding for the global variable. The process of finding the correct binding for a specific name is called binding resolution. Because Swift supports function overloading, binding resolution for function calls doesn’t only depend on the scope in which the function call occurs, but also on the type of the arguments (and on the contextual type, but this is explained in subsection 6.5.1). This means, that the indexer must be able to infer the types of expressions. 
For example, consider the code in Listing 6.2: Listing 6.2: Type Inference and Overload Resolution 1 2 3 4 5 func log(_ value: Bool) { print("Bool: \(value)") } func log(_ value: Int) { print("Int: \(value)") } let x = 2 < 1 log(x) // x is of type Bool // calls log: (Bool) −> () 51 6 Indexer In this example, there are two functions called log(). The first function takes an argument of type Bool and the second function takes an argument of type Int. During binding resolution, the indexer needs to figure out, whether the name log in the function call log(x) refers to the first function or the second function. To do that, it needs to know the type of the variable x and since x doesn’t have an explicit type annotation, it needs to be able to infer the type from the expression 2 < 1. Thus, being able to resolve bindings requires the ability to infer the types of arbitrary Swift expressions. Since type inference in Swift can be quite complicated (see subsection 6.5.1), this is in many ways the most complex task that the indexer has to perform. 6.2 Architecture Overview The indexer takes a set of ASTs as input and analyzes them in order to obtain the semantic knowledge that is required by the IDE. After indexing, each Name node should be backed by a corresponding binding. A binding contains additional information about an entity which might be useful to implement more advanced IDE features. For example, from a class binding we can get to the bindings of its members and from an operator binding we can get to the binding of its precedence group. The indexer performs three passes to index a Swift project. This is shown in Figure 6.1: Figure 6.1: Indexer Overview An example of why indexing is a multi-pass process is shown in Listing 6.3: Listing 6.3: Indexing is a multi-pass process 1 2 3 4 5 6 7 8 9 extension Derived { func f() { g() } } class Derived: Base {} class Base { func g() {} } Definition Pass The definition pass creates scopes and bindings. 
In the example above, an extension binding and two class type bindings are created in the file scope. The member scope of the extension binding and the member scope of the class type binding Base each contain a method binding. Additionally, both method bindings have a parameter scope and a local scope that do not contain any bindings.

Type-Annotation Pass
The type-annotation pass resolves type annotations as well as other names that are not part of an expression (e.g., the name of the base class in a type-inheritance clause). This has to be done in a separate pass, because Swift places very few restrictions on the order in which entities are declared. In the example above, the subclass Derived is declared before its superclass Base. This means that we cannot just perform a single pass from top to bottom, because once we reach the type-inheritance clause of the class Derived, the binding for the class Base has not yet been created.

Type-Check Pass
The type-check pass resolves the types of expressions. In the example above, there is only a single expression: the function call g() on line 3. We cannot combine the type-annotation pass and the type-check pass into a single pass. This is because when we visit the AST from top to bottom, the expression g() is processed before the type-inheritance clause of the class Derived. Thus, the class type binding Derived doesn't yet know anything about its superclass Base, and since the method g() is a member of Base, the indexer cannot find a corresponding binding for the name g.

6.3 Definition Pass

During the definition pass, the indexer only looks at declaration nodes. For each declared name, it creates a corresponding binding. There are different kinds of bindings for different declarations. For example, for a variable declaration the indexer creates an instance of VariableBinding, and for a function declaration it creates an instance of FunctionBinding.
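Tifig is implemented in Java, so the scope-and-binding bookkeeping described here can be sketched as follows. This is a minimal illustration under stated assumptions, not Tifig's actual API: the class members and the names define and resolve are invented for this sketch. A scope holds at most one binding per name and delegates failed lookups to its parent, which is exactly what makes shadowing (as in Listing 6.1) work:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a lexical scope tree; names are assumptions, not Tifig's API.
class Binding {
    final String name;
    final String kind; // e.g., "variable", "function"

    Binding(String name, String kind) {
        this.name = name;
        this.kind = kind;
    }
}

class Scope {
    private final Scope parent;                     // null for the root (file) scope
    private final Map<String, Binding> bindings = new HashMap<>();

    Scope(Scope parent) { this.parent = parent; }

    // Each scope can have at most one binding for a particular name.
    void define(Binding b) { bindings.put(b.name, b); }

    // Binding resolution: search this scope first, then walk up the parent chain.
    Binding resolve(String name) {
        Binding b = bindings.get(name);
        if (b != null) return b;                    // an inner binding shadows outer ones
        return parent != null ? parent.resolve(name) : null;
    }
}

public class ScopeDemo {
    public static void main(String[] args) {
        Scope file = new Scope(null);
        file.define(new Binding("x", "variable"));   // global x from Listing 6.1
        Scope local = new Scope(file);               // local scope of f()
        local.define(new Binding("x", "variable"));  // local x shadows the global one

        System.out.println(local.resolve("x") == file.resolve("x")); // false
        System.out.println(file.resolve("x").kind);                  // variable
    }
}
```

Resolution starting from the local scope finds the shadowing binding, while resolution starting from the file scope still finds the global one — mirroring the two bindings for x in Listing 6.1.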
Each binding is stored in a lexical scope, and each scope can have at most one binding for a particular name. Scopes can have one parent scope and multiple child scopes; therefore, all scopes together form a tree. The definition pass is responsible for creating the bindings and for building the scope tree. As an example, Listing 6.4 contains a simple program and Figure 6.2 shows the scopes and bindings that are created for this program during the definition pass:

Figure 6.2: Scope Tree after Definition Pass

Listing 6.4: Example Program

    let x = 5
    print(x)

    func f() {
        let x = 10
        print(x)
    }

    func g(x: Int) {
        let y = 15
        print(x, y)
    }

The figure shows that each scope has a reference to its parent (enclosing) scope. The bindings for the global variable x and the two global functions f() and g() are stored in the Swift file scope. The binding for the parameter x is stored in the parameter scope of the function g(), and the bindings for the two local variables x and y are stored in their corresponding local scopes. The reason why each function has a separate scope for its parameters is that in Swift it is possible to declare a local variable that has the same name as one of the parameters, in which case the local variable shadows the parameter. Each function binding has a reference to its parameter scope, which allows us to obtain the parameter bindings from a function binding.

Note that the types of the individual bindings are not yet set after the definition pass. This is the job of the subsequent type-annotation pass and type-check pass.

6.3.1 Bindings

All bindings in Tifig have a type and an access level. The type is used during type checking and the access level is used to determine which bindings are accessible from a specific location. Apart from the standard Swift access levels private, fileprivate, internal, public and open, the access level can also be null.
This is used for entities whose declaration cannot have an access level modifier, and it generally means that the access level can be ignored (i.e., the binding is always considered to be accessible). For example, enum case declarations cannot have an access level modifier. Nevertheless, since they are always accessed through their owner, they effectively have the same access level as the corresponding enum type.

Bindings in Tifig can be broadly divided into two groups: declared bindings and implicit bindings.

Declared Bindings
Declared bindings are bindings that have an explicit declaration in the source code. They have a reference to the declaration name, which is the Name node that appears in the corresponding declaration. If a name is backed by a declared binding, the user can click on the name in order to jump to its definition. Examples of declared bindings are VariableBinding, FunctionBinding and ParameterBinding.

Implicit Bindings
Some names don't have an explicit declaration in the source code. For example, the name self is implicitly available within the methods of a named type. While the IDE cannot jump to self's definition (since there is no such definition), we still want to set a binding for that name, because that allows us to set its type. Tifig uses implicit bindings for such names. They don't have a reference to a declaration name, and the "Jump to Definition" feature doesn't generate hyperlinks for names that are backed by an implicit binding. Other examples of implicit bindings are the newValue variable that is implicitly available within the setter of a computed property, and the compiler-generated initializers of named types.

6.3.2 Unavailable Declarations

In Swift, a declaration may be marked as "unavailable", which means that compilation will result in an error if a program tries to use such a declaration.
This is often used in the Swift standard library for declarations that were previously available and have since been removed or replaced by something else. Listing 6.5 shows an example:

Listing 6.5: Example of an unavailable declaration

    @available(*, unavailable, message: "it has been removed in Swift 3")
    @discardableResult
    public prefix func ++ (x: inout Int) -> Int {
        x = x + 1
        return x
    }

The prefix increment operator ++ was deprecated in Swift 2 and removed in Swift 3. However, the above declaration is still part of the standard library because it allows the compiler to emit better error messages. The @available attribute marks the declaration as "unavailable" and specifies an error message that is displayed if a user compiles a program that tries to use this declaration. Additionally, the function has a @discardableResult attribute. This just means that the function has a side effect and the compiler should not emit a warning if the result of the function is not used. Note that Tifig ignores unavailable declarations completely. In the future it might be better to create bindings that are marked as unavailable in order to provide better diagnostics.

6.3.3 Conditions

Optional binding conditions and case conditions can define new variables. These variables live in a separate scope from both the enclosing scope and the local scope of the corresponding if, guard or while statement. An example of this is shown in Listing 6.6:

Listing 6.6: Condition Scopes

    1  let x: Int? = 42
    2  print(x)          // prints 'Optional(42)'
    3
    4  if let x = x {
    5      print(x)      // prints '42'
    6      let x = 0
    7      print(x)      // prints '0'
    8  }

The x in the first print() call refers to the global variable, which is of type Optional<Int>. In the optional binding condition let x = x, a new variable x is declared in a child scope of the global scope. However, the x in the initializer still refers to the global, optional variable.
The x in the second print() call refers to the variable that has been declared by the optional binding condition, since it shadows the global variable. On line 6, a local variable is created which shadows the variable x from the optional binding condition. Finally, the x in the last print() call refers to this new local variable.

6.3.4 Extensions

In Swift, extensions allow us to add additional members (e.g., methods, initializers, computed properties) to an existing type. However, as mentioned in section 6.2, Swift places very few restrictions on the order in which entities are declared. Thus, it is possible to extend a type before it is declared. This is shown in Listing 6.7:

Listing 6.7: Extension can appear before declaration of extended type

    extension Point: Equatable {
        static func ==(lhs: Point, rhs: Point) -> Bool {
            return lhs.x == rhs.x && lhs.y == rhs.y
        }
    }

    struct Point {
        var x: Int
        var y: Int
    }

For this reason, the indexer cannot add the additional member bindings directly to the extended type. Instead, a new extension binding is created for each extension. After the definition pass, extensions are connected to the corresponding extended type. Note that the binding of a specific named type doesn't know anything about its extensions. This is because not all extensions are automatically available everywhere. For example, if a user's program adds a method to the type Int through an internal extension, this method is only available in the corresponding module and not in other modules or in the standard library. Thus, the extensions are stored in the SwiftFile scope, and when the type checker performs a member lookup, it searches the corresponding type binding as well as all of the extensions that are visible from the current file.

6.3.5 Implicit Operator Bindings

Most operators in Swift are not part of the language but are instead declared in the standard library. However, there are a few exceptions.
The infix operators =, as, as?, as!, is as well as the ternary operator ?: are built into the compiler. However, these operators are still part of a precedence group, which is documented in the standard library file Policy.swift. During the definition pass, Tifig generates implicit operator bindings for these built-in operators. Later, during the type-annotation pass, they are added to the corresponding precedence group. This allows us to treat these built-in operators like regular operators for the most part. They are only treated differently during type checking, where a regular operator results in a call to an operator function whereas a built-in operator gets special treatment (see section 3.1.11).

6.3.6 Implicit Variable Bindings

In some situations, Swift implicitly defines special variables. For example, consider the code in Listing 6.8:

Listing 6.8: Implicit Variable Bindings

    import Foundation

    struct Square {
        var side: Double
        var area: Double {
            get {
                return side * side
            }
            set {
                side = sqrt(newValue)
            }
        }
    }

    var square = Square(side: 5.0)
    print(square.area)   // 25.0
    square.area = 64.0
    print(square.side)   // 8.0

In this example, newValue is an implicitly defined variable that contains the new value that was assigned to the computed property area. During the definition pass, Tifig automatically creates an implicit binding for these variables. Later, during the type-annotation pass, it sets the type of these bindings. In the example above, newValue has type Double (i.e., the same type as the computed property area). It is possible to specify a different name for these variables, as shown in Listing 6.9:

Listing 6.9: Specifying a different name for the setter parameter

    var area: Double {
        get {
            return side * side
        }
        set(newArea) {
            side = sqrt(newArea)
        }
    }

In this case, Tifig would create a declared binding instead of an implicit binding, because there exists a corresponding declaration name.
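The split between declared and implicit bindings (subsection 6.3.1), as it plays out for setter parameters, can be sketched in Java. The class names below are illustrative assumptions rather than Tifig's real types; the point is that only a binding with a declaration name can back a "Jump to Definition" hyperlink:

```java
import java.util.Optional;

// Sketch of the declared/implicit binding split; names are assumptions, not Tifig's API.
abstract class VariableBindingSketch {
    final String name;
    String type; // set later, during the type-annotation pass

    VariableBindingSketch(String name) { this.name = name; }

    // Declared bindings reference a declaration name; implicit ones (e.g., newValue) do not.
    abstract Optional<String> declarationName();

    // "Jump to Definition" only creates hyperlinks for declared bindings.
    boolean supportsJumpToDefinition() { return declarationName().isPresent(); }
}

class DeclaredVariable extends VariableBindingSketch {
    private final String declName;

    DeclaredVariable(String name, String declName) {
        super(name);
        this.declName = declName;
    }

    Optional<String> declarationName() { return Optional.of(declName); }
}

class ImplicitVariable extends VariableBindingSketch {
    ImplicitVariable(String name) { super(name); }

    Optional<String> declarationName() { return Optional.empty(); }
}

public class BindingDemo {
    public static void main(String[] args) {
        // Listing 6.8: set { ... } introduces an implicit newValue binding ...
        VariableBindingSketch newValue = new ImplicitVariable("newValue");
        // ... while Listing 6.9's set(newArea) declares the name explicitly.
        VariableBindingSketch newArea = new DeclaredVariable("newArea", "newArea");

        System.out.println(newValue.supportsJumpToDefinition()); // false
        System.out.println(newArea.supportsJumpToDefinition());  // true
    }
}
```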
Note also that these bindings need to be defined in a separate scope, because they can be shadowed by local variables. Additionally, there are other situations in which such implicit variable bindings are generated. For example, in the willSet and didSet clauses of observed variables, there are implicitly defined variables called newValue and oldValue, respectively. Similarly, in the catch clause of a do-catch statement, there is an implicitly defined variable called error.

6.3.7 Implicit Closure Parameters

If a closure uses implicit closure parameters (e.g., $0, $1, $2, ...), the definition pass creates implicit bindings for these parameters. Since there are no explicit type annotations for implicit closure parameters, their types are inferred during the type-check pass.

6.3.8 Imports

Import declarations are different from other declarations. During the definition pass, no new bindings are created for import declarations. Instead, the indexer looks for a module (i.e., an instance of SwiftModule) with the corresponding name. If it finds one, this module is added to the list of imported modules of the current SwiftFile. During binding resolution, if the indexer doesn't find a suitable binding in the current module, it additionally searches the modules that are imported in the current file. It is also possible to import only a specific declaration of a module, but this is currently not yet supported by Tifig's indexer.

6.3.9 Standard Library

As mentioned previously, Swift relies heavily on declarations from the standard library. Basic types such as Int, Double and Bool as well as arithmetic and logical operators are not part of the language but are instead declared in the standard library. Therefore, the standard library also needs to be indexed in order for the rest of the indexing process to work properly. Tifig treats the standard library as a separate module called "Swift".
This module is indexed once after Tifig has launched and is then implicitly imported in every file. Thus, the public declarations from the standard library are available everywhere. To be able to do this, Tifig's application bundle contains a copy of the Swift files that contain the code for the standard library.

6.4 Type-Annotation Pass

As mentioned previously, every binding in Tifig has a type. Sometimes this type has to be inferred by the type-check pass (e.g., for a variable without an explicit type annotation). The main job of the type-annotation pass is to set the type of the bindings that do have an explicit type annotation. To do that, the type-annotation pass transforms the AST types from the type annotations into corresponding index types. Subsection 6.4.1 explains the difference between AST types and index types and gives an overview of the various kinds of index types. Subsection 6.4.2 describes all the tasks that are fulfilled by the type-annotation pass.

6.4.1 Index Types

An AST type is a node in the AST that describes an explicit type annotation. It has a specific location in the source code and is composed of one or more tokens. An index type, on the other hand, is a more abstract representation of a type that is used by the indexer for type checking. It doesn't have a specific location in the source code, and there may not even be a corresponding AST type, because an index type may be the result of inferring the type of an expression. All index types implement the IType interface.

Nominal Types
In Swift, classes, structs, enums and protocols are sometimes called nominal types. These types have a name and are declared somewhere in the source code. Additionally, they can be extended, can conform to protocols and can have members. Nominal types are represented by subclasses of the abstract superclass NominalTypeBinding. Note that these classes are not just index types but also bindings.
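This dual role — one object that is stored in a scope as a binding but also implements the index-type interface — can be sketched in Java. Only the names IType and NominalTypeBinding come from the text; the member map and lookup method are assumptions for illustration (the Sketch suffix marks the simplification):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of index types; interface/class names follow the text, members are assumptions.
interface IType {}

// A nominal type is simultaneously a binding and an index type: the same object
// lives in a scope during binding resolution and is used during type checking.
abstract class NominalTypeBindingSketch implements IType {
    final String name;
    private final Map<String, String> members = new HashMap<>(); // member name -> kind

    NominalTypeBindingSketch(String name) { this.name = name; }

    void addMember(String memberName, String kind) { members.put(memberName, kind); }

    // Member lookup as used by the type checker for <owner>.<member> expressions.
    String lookupMember(String memberName) { return members.get(memberName); }
}

class StructTypeBindingSketch extends NominalTypeBindingSketch {
    StructTypeBindingSketch(String name) { super(name); }
}

public class IndexTypeDemo {
    public static void main(String[] args) {
        StructTypeBindingSketch point = new StructTypeBindingSketch("Point");
        point.addMember("x", "variable");
        point.addMember("y", "variable");

        IType asType = point; // usable wherever an index type is expected
        System.out.println(point.lookupMember("x")); // variable
        System.out.println(point.lookupMember("z")); // null
    }
}
```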
Metatypes
A nominal type is used as the type for instances of the nominal type. In contrast, a metatype is used as the type of the nominal type itself. This is used to distinguish between a member reference to a static member and a member reference to an instance member. Listing 6.10 shows an example:

Listing 6.10: Nominal Types vs. Metatypes

    1  struct S {
    2      func f1() {}
    3      static func f2() {}
    4  }
    5
    6  let s = S()
    7  s.f1()
    8  S.f2()
    9  s.f2() // error: static member 'f2' cannot be used on instance of type 'S'

When the type checker checks an explicit member expression of the form <owner>.<member>, it first evaluates the type of the subexpression <owner>. Afterwards, it looks for members with the name <member> within that owner type. On line 7, the name s resolves to the VariableBinding for s declared on line 6. The type of this binding is a nominal type (more specifically, a struct type). Thus, when the type checker looks for members with the name f1, it only looks for instance members. On line 8, the name S resolves to the StructTypeBinding for S declared on lines 1-4. The type of this binding is a metatype. Therefore, when the type checker looks for members with the name f2, it only looks for static members. Finally, on line 9, we try to access the static member f2 through the variable binding s. This fails because in Swift we cannot access static members through an instance of a nominal type.

Tuple Types
A tuple type is represented by an instance of the class TupleType. Each tuple type consists of multiple tuple type elements. Each tuple type element has an optional name and a type. The names of the individual elements have to be part of the tuple type, because we can later use these names to access individual elements of the tuple.
This is shown in Listing 6.11:

Listing 6.11: Tuple with named elements

    let tuple = (name: "Toni", age: 26)  // tuple is of type (name: String, age: Int)
    let name1 = tuple.name               // name1 is of type String
    let name2 = tuple.0                  // name2 is of type String
    let age1 = tuple.age                 // age1 is of type Int
    let age2 = tuple.1                   // age2 is of type Int

The above example shows that it is possible to access individual elements either by name or by index. The element lookup works similarly to the member lookup of nominal types: first, the type of the owner is determined, and afterwards the type checker looks for an element with the corresponding element name or index within the owner type. It is also possible to have a tuple type with unnamed elements, which means that we can access the individual elements only by index. This is shown in Listing 6.12:

Listing 6.12: Tuple with unnamed elements

    let tuple = ("Toni", 26)  // tuple is of type (String, Int)
    let name = tuple.0        // name is of type String
    let age = tuple.1         // age is of type Int

Function Types
Function types are used for entities that can be called with a function call expression. This includes functions, closures, methods, initializers and enum case constructors. A function type is composed of a parameter type and a return type. In Tifig, the parameter type is a tuple type whose elements represent the individual parameters (i.e., external names and parameter types). The return type can be any index type. However, the return type can never be null; even functions that return nothing have a return type of () (i.e., the empty tuple, which in Swift means Void). Listing 6.13 shows the declaration of a variable called triple which is initialized with a closure expression. Since the type of this variable is the function type (Int) -> Int, it can be called like a regular function.
Listing 6.13: Function Type Example

    let triple = { x in x * 3 }  // triple has function type (Int) -> Int
    let three = 3
    let nine = triple(3)

Any Type
The Any type is a special "top type" that is built into the Swift compiler [Ben02]. In Tifig, it is represented by an instance of the class AnyType. This class is a singleton, because there exists only one Any type.

Lvalue Reference Types
In Swift it is possible to pass an lvalue to a function by reference. To do so, the parameter type has to be preceded by the inout keyword and the argument has to be wrapped in an inout expression (e.g., &arg). Listing 6.14 shows an example:

Listing 6.14: Lvalue Reference Types

    func inc(_ x: inout Int) {
        x += 1
    }

    var x = 0
    print(x)   // prints '0'
    inc(&x)
    inc(&x)
    print(x)   // prints '2'

The type of the argument expression &x is represented by an instance of the class LvalueReferenceType. Internally, this class has a reference to the type of the lvalue itself (in the example above, x is the lvalue). The type checker makes sure that an argument of lvalue reference type is provided for each inout parameter. Additionally, the type wrapped inside the lvalue reference type must exactly match the parameter type. This is different from regular parameters, where the argument expression can be of any type as long as it is convertible to the parameter type. An example of this is shown in Listing 6.15:

Listing 6.15: Regular Parameters vs. Inout Parameters

    class Base {}
    class Derived: Base {}

    func f(_ x: Base) {}
    var d1 = Derived()
    f(d1)   // valid: implicit conversion from Derived to Base

    func f(_ x: inout Base) {}
    var d2: Base = Derived()
    f(&d1)  // invalid: implicit conversion is not possible with inout parameters
    f(&d2)  // valid: no implicit conversion is necessary

At the time of this writing, Tifig does not yet distinguish between lvalues and rvalues.
Thus, the indexer will accept expressions like inc(&5) even though this is invalid code, because the literal expression 5 is not an lvalue. Additionally, Swift has support for pointers, which can also be initialized with an expression like &arg. This is not yet implemented in Tifig either.

Type Aliases
Swift allows the declaration of type aliases in order to introduce a named alias for an existing type in your program. Anywhere in your program, the name of the type alias can be used instead of the existing type. The existing type can be a named type (e.g., a struct or another type alias) or a compound type (e.g., a tuple or a function). A type alias does not create a new type; it just defines another name that refers to an existing type. This is shown in Listing 6.16:

Listing 6.16: Type Aliases

    typealias IntPair = (Int, Int)

    func swap(_ p: IntPair) -> IntPair {
        return (p.1, p.0)
    }

    var pair = (1, 2)   // pair is of type (Int, Int)
    print(pair)         // prints '(1, 2)'
    pair = swap(pair)
    print(pair)         // prints '(2, 1)'

In Tifig, type aliases are represented by instances of the class TypeAliasTypeBinding. As with nominal types, this class is both an index type and a binding. Internally, it stores a reference to the existing type. Therefore, whenever the type checker needs to know the underlying type of a type alias (e.g., in order to perform member lookup), that information can be extracted from the type alias binding.

Protocol Composition Types
A protocol composition type is a compound type that consists of multiple protocol types. Any value whose type conforms to all the protocols listed in a protocol composition type can be assigned to a variable that is of that protocol composition type.
An example of this is shown in Listing 6.17:

Listing 6.17: Protocol Composition Types

    protocol P1 {
        func f1()
    }

    protocol P2 {
        func f2()
    }

    struct S: P1, P2 {
        func f1() {}
        func f2() {}
    }

    func f(_ x: P1 & P2) {
        x.f1()
        x.f2()
    }

    let s = S()
    f(s)

In Tifig, protocol composition types are represented by instances of the class ProtocolCompositionType.

Generic Type Parameters
In Tifig, generic type parameters are represented by instances of the class GenericTypeParameterBinding. As with nominal types and type aliases, this class is both an index type and a binding. In order to be able to correctly type check the bodies of generic functions, Tifig stores the conformance requirements of a generic type parameter within the corresponding GenericTypeParameterBinding instance. Listing 6.18 shows an example:

Listing 6.18: Generic Type Parameters

    protocol P {
        func f()
    }

    func g<T: P>(x: T) {
        x.f()
    }

In this example, Tifig creates a GenericTypeParameterBinding for the generic type parameter T. The requirement that T must conform to the protocol P is stored within the corresponding GenericTypeParameterBinding instance. Later, when the function call expression x.f() is type checked, the indexer knows that all instances of type T have a method f(), because this is required by the protocol P. Note that Swift also supports other kinds of requirements for generic type parameters (e.g., adding restrictions on associated types of a generic type parameter). However, this is currently not yet supported by Tifig.

Generic Type Instances
A generic type instance is composed of a NominalTypeBinding which refers to a generic type as well as a list of type arguments supplied for the generic type parameters. This index type is represented by instances of the class GenericTypeInstance.
An example is shown in Listing 6.19:

Listing 6.19: Generic Type Instances

    struct Pair<T1, T2> {
        let first: T1
        let second: T2
    }

    let pair = Pair(first: 42, second: "hello")
    let x = pair.first
    let y = pair.second

In this example, the type of the variable pair is inferred to be the generic type instance Pair<Int, String>. When a member of a generic type instance is accessed, the generic type parameters in the member's type are replaced by the corresponding type arguments that are included in the generic type instance. Thus, in the example above, the variable x is of type Int and the variable y is of type String.

Associated Types
In Tifig, associated types of protocols are represented by instances of the class ProtocolAssociatedTypeBinding. Similar to generic type parameters, associated types can have conformance requirements. Additionally, an associated type can have a default type. These properties are stored within the binding so that the type checker can later refer back to them.

Swift Modules
In Tifig, the class SwiftModule implements the IType interface. This is because we can use a module name as a qualifier to access public declarations of that module. Thus, the module acts like a type and the public declarations are the members of that type. This is sometimes used to access an entity of an imported module that is shadowed by an entity in the current module because the two entities have the same name. An example of this is shown in Listing 6.20:

Listing 6.20: Using a module name to refer to a shadowed type

    1  struct Int {}
    2  let x: Int        // Int refers to the Int type declared on line 1
    3  let y: Swift.Int  // Swift.Int refers to the Int type declared in the standard library

In this example, a struct type called Int is declared. This type shadows the Int type from the standard library.
However, since the standard library is treated like a module with the name "Swift", we can use the more specific name Swift.Int to refer directly to the Int type that is contained in the standard library.

Type Variables
Type variables are represented by instances of the class TypeVariableType. A type variable is a special kind of index type that is only used during type checking; it acts as a placeholder if the type of an expression is not yet known. Each type variable has an ID. In this thesis, type variables are referred to by the name $Tx where x is the ID of the type variable (e.g., a type variable with the ID 1 is called $T1). Additionally, each type variable has a fixed type. At the beginning of the type checking process, the fixed type is null. The constraint-based type checker then tries to find a fixed type for each type variable (see section 6.6). After the type checking process is done, the type variables are replaced by their corresponding fixed types.

Equality of Index Types
During type checking, index types sometimes need to be compared for equality. Some index types (e.g., tuple types, function types, protocol composition types) are composed of other index types. Two instances of those types are equal if their components are equal. For example, there can be two separate instances of ProtocolCompositionType that are considered to be equal if they are composed of the same protocols. In Tifig, this is implemented by overriding Java's equals() method. On the other hand, there are also index types (e.g., nominal types, type aliases, generic type parameters) for which the equals() method is not overridden. Two variables of such an index type are only considered to be equal if they refer to the exact same instance. This is because it is possible, for example, to declare a struct type called S in one module and another struct type that is also called S in a second module.
Even though these two struct types have the same name, they are still two distinct types.

6.4.2 Tasks of the Type-Annotation Pass

In section 6.3 the scope tree for a small example program was shown. Figure 6.3 shows what this scope tree looks like after the type-annotation pass. As you can see, the types of the two function bindings as well as the type of the parameter binding have been set. The types of the variable bindings have not been resolved yet, because they don't have an explicit type annotation. Thus, their types need to be inferred during the type-check pass.

Figure 6.3: Scope Tree after Type-Annotation Pass

Listing 6.21: Example Program

    let x = 5
    print(x)

    func f() {
        let x = 10
        print(x)
    }

    func g(x: Int) {
        let y = 15
        print(x, y)
    }

Apart from transforming AST types into index types, the type-annotation pass also resolves all names that are not part of an expression. For example, this includes class and protocol names in type-inheritance clauses as well as the conformance requirements of a generic type parameter. The following list describes all the tasks that are fulfilled by the type-annotation pass:

Variables and Parameters
The type of a variable binding can be resolved during the type-annotation pass if there is an explicit type annotation. If a variable declaration doesn't have an explicit type annotation, the type must be inferred from its initializer expression during the type-check pass. Note that function parameters always have an explicit type annotation.

Functions and Methods
The type of a function or a method can always be resolved during the type-annotation pass. The type is a function type that consists of the types of the function's parameters and its return type. If a function declaration doesn't have an explicit return type, the return type is implicitly set to () (the empty tuple type / Void), which means that the function returns nothing.

Initializers
Initializers are similar to methods.
They also have parameters, which must have an explicit type annotation. However, the return type of an initializer is always implicit. For regular initializers, the return type is the enclosing nominal type. For failable initializers, the return type is an optional containing the enclosing nominal type. An example of this is shown in Listing 6.22:

Listing 6.22: Types of Initializers

    struct Rational {
        let numerator: Int
        let denominator: Int

        // This is a failable initializer that
        // has function type (numerator: Int, denominator: Int) -> Rational?
        init?(numerator: Int, denominator: Int) {
            guard denominator != 0 else {
                return nil
            }
            self.numerator = numerator
            self.denominator = denominator
        }

        // This is a regular initializer that
        // has function type (integer: Int) -> Rational
        init(integer: Int) {
            self.numerator = integer
            self.denominator = 1
        }
    }

Subscripts
Tifig uses function types as the types for subscript bindings. These types can be resolved during the type-annotation pass, because a subscript declaration always has zero or more parameters and an explicit return type.

Enum Cases
The type of an enum case can also be resolved during the type-annotation pass. There are two kinds of enum cases: those that have associated values and those that don't. The type of an enum case without associated values is simply the enclosing enum type. An enum case with associated values, on the other hand, acts like an initializer. Thus, its type is a function type with the return type set to the enclosing enum type. An example of this is shown in Listing 6.23:

Listing 6.23: Types of Enum Cases

    enum E {
        case one
        case two(String, Int)
    }

    let x = E.one             // x is of type E
    let y = E.two             // y is of function type (String, Int) -> E
    let z = E.two("test", 2)  // z is of type E

Type Inheritance Clauses
The names that appear in a type inheritance clause are also resolved during the type-annotation pass.
An example of this is shown in Listing 6.24:

Listing 6.24: Resolving names in type inheritance clauses

protocol P {}
class Base {}
class Derived: Base, P {}

After the definition pass, there are three bindings: a protocol type binding called P and two class type bindings called Base and Derived. At this point the indexer has not yet recorded the fact that Derived inherits from Base and conforms to P. This is done during the type-annotation pass. The indexer resolves the names in the type inheritance clause and updates the class type binding Derived accordingly.

In class type declarations the first name in the type inheritance clause can be either the name of a superclass or the name of an adopted protocol. All remaining names in the type inheritance clause must resolve to protocols. The declarations of struct types, enum types and protocol types can only have the names of protocols in their type inheritance clause.

Infix Operators
The declaration of an infix operator can specify which precedence group the operator belongs to. During the type-annotation pass this precedence group name is resolved and the operator binding is updated accordingly. If no precedence group name is specified, the infix operator belongs to the precedence group DefaultPrecedence.

Precedence Groups
All precedence groups together form a partially ordered set. Each precedence group declaration can specify which other precedence groups have higher or lower precedence than the current precedence group.
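For illustration, this partial order can be modeled as a small graph of higherThan edges, where "a binds tighter than b" is a transitive reachability question. The following Java sketch is only an illustration under assumed names (PrecedenceGroups, declare, hasHigherPrecedence are invented here), not Tifig's actual data structures:

```java
import java.util.*;

// A minimal sketch (not Tifig's implementation) of precedence groups as a
// partially ordered set. Each group stores the names of the groups it is
// declared to be "higherThan"; whether one group has higher precedence than
// another is decided by following these edges transitively.
public class PrecedenceGroups {
    private final Map<String, Set<String>> higherThan = new HashMap<>();

    public void declare(String group, String... higherThanGroups) {
        higherThan.computeIfAbsent(group, k -> new HashSet<>())
                  .addAll(Arrays.asList(higherThanGroups));
    }

    // Returns true if 'a' has higher precedence than 'b', i.e. 'b' is
    // reachable from 'a' via higherThan edges.
    public boolean hasHigherPrecedence(String a, String b) {
        Deque<String> worklist = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        worklist.push(a);
        while (!worklist.isEmpty()) {
            String current = worklist.pop();
            for (String lower : higherThan.getOrDefault(current, Set.of())) {
                if (lower.equals(b)) return true;
                if (visited.add(lower)) worklist.push(lower);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        PrecedenceGroups groups = new PrecedenceGroups();
        groups.declare("LogicalDisjunctionPrecedence", "TernaryPrecedence");
        groups.declare("LogicalConjunctionPrecedence", "LogicalDisjunctionPrecedence");
        groups.declare("ComparisonPrecedence", "LogicalConjunctionPrecedence");

        // Transitive: Comparison > Conjunction > Disjunction > Ternary.
        System.out.println(groups.hasHigherPrecedence("ComparisonPrecedence", "TernaryPrecedence"));
        // Unrelated or reversed pairs are not higher (partial order).
        System.out.println(groups.hasHigherPrecedence("TernaryPrecedence", "ComparisonPrecedence"));
    }
}
```

Because the order is only partial, two groups with no path between them are simply incomparable, which is exactly why parenthesization is required when mixing operators from unrelated precedence groups.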
Listing 6.25 shows an excerpt from the standard library that illustrates how some of the predefined precedence groups are ordered:

Listing 6.25: Order of precedence groups

precedencegroup LogicalDisjunctionPrecedence {
    associativity: left
    higherThan: TernaryPrecedence
}
precedencegroup LogicalConjunctionPrecedence {
    associativity: left
    higherThan: LogicalDisjunctionPrecedence
}
precedencegroup ComparisonPrecedence {
    higherThan: LogicalConjunctionPrecedence
}

During the type-annotation pass the indexer resolves the precedence group names after higherThan: and records the order relationships within the individual precedence group bindings.

Typealiases
During the type-annotation pass, the right-hand side (i.e., the aliased type) of a type alias declaration is transformed into an index type. A reference to this index type is then stored in the corresponding TypealiasTypeBinding. This way, the type checker can later obtain the underlying type of a type alias.

Associated Types
During the type-annotation pass, any conformance requirements and default types in associated type declarations are resolved and transformed into index types. References to these index types are then stored in the corresponding associated type bindings.

Implicit Operator Bindings
Section 6.3.5 described how the definition pass creates implicit operator bindings for the infix operators =, as, as?, as!, is as well as for the ternary operator ?:. The type-annotation pass connects these bindings to their corresponding precedence groups (as documented in the standard library file Policy.swift).

Implicit Variable Bindings
Section 6.3.6 described how the definition pass creates implicit variable bindings for implicit variables such as newValue and oldValue. The type-annotation pass assigns a type to each of these bindings. It obtains the type by looking at the context of the corresponding implicit variable.
For example, a newValue variable in a setter clause has the same type as its enclosing computed property.

6.5 Type-Check Pass

During the type-check pass, the indexer checks whether the expressions in the source code are well-typed. If there are no type errors in an expression, the indexer assigns a type to each of its subexpressions. The indexer also assigns a type to each binding whose type depends on type inference. Additionally, overload resolution happens during the type-check pass. This is because overload resolution usually depends on the types of expressions, and those are not yet known before the type-check pass.

In sections 6.3 and 6.4 the scope tree for a small example program was shown. Figure 6.4 shows what this scope tree looks like after the type-check pass.

Figure 6.4: Scope Tree after Type-Check Pass

Listing 6.26: Example Program

let x = 5
print(x)

func f() {
    let x = 10
    print(x)
}

func g(x: Int) {
    let y = 15
    print(x, y)
}

As you can see, the types of the global variable binding and the two local variable bindings have been set. Now all bindings have a corresponding type. Additionally, all the expressions in this example have been type-checked as well (not shown in the figure).

The rest of this section first gives an overview of how type inference works in Swift and then shows how the type-check pass is implemented.

6.5.1 Type Inference in Swift

Section 6.1 described why the indexer needs to be able to infer the types of arbitrary expressions. This section shows some of the main characteristics of type inference in Swift.

In Swift, type annotations can often be omitted, because the type can be inferred by the compiler.
Listing 6.27 shows a few examples:

Listing 6.27: Type Inference Examples

func getNameAndAge() -> (name: String, age: Int) {
    return ("Toni", 26)
}

let number = 5                 // number is of type Int
let array = ["Text"]           // array is of type Array<String>
let age = getNameAndAge().age  // age is of type Int
let fn = { x in x * 2 }        // fn is of function type (Int) -> Int

Note that type inference is not always possible. For example, functions, initializers and subscripts always require type annotations for their parameters and return types. Similarly, computed properties as well as stored properties without an initial value always require a type annotation.

Listing 6.27 shows a few examples where type inference does work. The type of the immutable variable number is inferred from its initializer expression 5. Similarly, the type of the closure parameter x is inferred from the closure body expression x * 2.

Additionally, type inference is limited to a single statement. If a variable or constant is only initialized in a statement that follows the declaration, an explicit type annotation is required. This is shown in Listing 6.28:

Listing 6.28: Type Inference is limited to a single statement

let x: Int  // type annotation is required, because
x = 5       // the constant is initialized in the next statement

Bottom-Up Type Inference
In all the examples that we have seen so far, type information flows from the bottom of the AST to the top. For example, Figure 6.5 shows the AST for the expression (1 < 2, "test"):

Figure 6.5: Typed AST for the expression (1 < 2, "test")

Each expression node is annotated with its type. Note that the ExprElement nodes are not expression nodes and therefore don't have a type. The Name node for the < operator is not an expression node either. However, it is backed by a binding which does have the function type (Int, Int) -> Bool.

The leaf nodes have an intrinsic type.
For example, an integer literal expression defaults to the type Int and a string literal expression defaults to the type String. Similarly, an identifier expression references some entity (e.g., a variable or a function) that also has a type.

The type of an inner expression node depends on the types of its child nodes. For example, the BinaryExpressionsExpr node has type Bool, which is obtained by applying the operator function of type (Int, Int) -> Bool to the two operands of type Int. Similarly, the ParenthesizedExpr node has type (Bool, String), which is obtained by creating a new tuple type with a tuple type element for each expr element.

Bi-directional Type Inference
In addition to this kind of bottom-up type inference, Swift also allows type information to flow from the root of the expression tree down to the leaves. This is called bi-directional type inference and is common in languages that use ML-like type systems. However, it is not present in mainstream languages like C++, Java, C#, or Objective-C [App17e]. To better understand how this works, it is useful to look at a few examples:

• Literals
The first example shows how bi-directional type inference works with Swift's literals. Consider the code in Listing 6.29:

Listing 6.29: Int vs. Integer Literal

let x = 2          // x is of type Int
let y: Double = 2  // OK
let z: Double = x  // error: cannot convert value of type 'Int'
                   // to specified type 'Double'

This code shows that there is a difference between a variable of type Int and an integer literal. The variable x is initialized with an integer literal, and since there is no type annotation, the type of x defaults to Int. The variable y is also initialized with an integer literal, but there is an explicit type annotation that specifies that y should be of type Double. This is valid because it is possible to create a new instance of Double from an integer literal.
However, when we try to initialize the variable z, which is of type Double, with the variable x, which is of type Int, we get a compilation error. This happens because there is no implicit coercion from Int to Double. Instead, one would have to write let z = Double(x) in order to create an instance of Double from x.

Note that you can even write your own type that can be initialized with an integer literal. The way this works is by adopting a special protocol called ExpressibleByIntegerLiteral. An example is shown in Listing 6.30:

Listing 6.30: Conforming to ExpressibleByIntegerLiteral

struct EvenNumber: ExpressibleByIntegerLiteral {
    let value: Int

    init(integerLiteral: Int) {
        guard integerLiteral % 2 == 0 else {
            fatalError("\(integerLiteral) is not an even number")
        }
        self.value = integerLiteral
    }
}

let n: EvenNumber = 4

The protocol's only requirement is that conforming types need to have an initializer with the signature init(integerLiteral: Int). The types Int and Double are declared in the standard library and both conform to the ExpressibleByIntegerLiteral protocol. The only thing that is special about Int is that it is the default type for integer literals if the type is not otherwise constrained.

Other literals work in the same way. For example, an array literal defaults to the type Array, but it can also be used to create a Set because Set conforms to the ExpressibleByArrayLiteral protocol. An example is shown in Listing 6.31:

Listing 6.31: Array Literals

let arr = [1, 2, 3]            // arr is of type Array<Int>
let set: Set<Int> = [1, 2, 3]  // set is of type Set<Int>

There are other ways by which a literal's type can be constrained by its context.
A few examples are shown in Listing 6.32:

Listing 6.32: Expressions with contextual type constraints

// An integer literal is constrained to have type Double,
// because of f()'s return type
func f() -> Double {
    return 0
}

// An array literal containing integer literals is constrained
// to have type Set<Double>, because of g()'s parameter type
func g(_: Set<Double>) {}
g([1, 2, 3])

// An integer literal is constrained to have type Double,
// because of the type of the switch statement's control expression
var x = 2.5
switch x {
case 2:
    print("x == 2")
default:
    print("x != 2")
}

Finally, it is important to note that this bi-directional type inference works even if the contextual type constraint is not coming directly from an immediate ancestor node of the literal expression. An example of this is shown in Listing 6.33:

Listing 6.33: Bi-directional type inference over multiple levels

func id<T>(_ x: T) -> T {
    return x
}

let x = id(id(id(2)))          // x is of type Int (bottom-up, 3 levels)
let y: Double = id(id(id(2)))  // y is of type Double (top-down, 3 levels)

In the declaration of x, the generic type parameter T is inferred to be Int because of the integer literal. Consequently, x is inferred to be of type Int as well. On the other hand, in the declaration of y, the generic type parameter T is inferred to be Double because of the explicit type annotation. Therefore, the type of the integer literal is also set to Double.

• Closures
Closures are another one of Swift's language features that heavily rely on bi-directional type inference. The parameter types and return types of closures are often not specified explicitly but instead inferred from the closure's context.
Listing 6.34 shows an example:

Listing 6.34: Type Inference from Closure Context

let numbers = [1, 2, 3, 4, 5, 6]
let evenNumbers = numbers.filter { n in n % 2 == 0 }
print(evenNumbers)  // [2, 4, 6]

In this example, the filter() method expects a closure (or a function) that takes an Int and returns a Bool. Therefore, the type of the closure that is passed as an argument is inferred to be (Int) -> Bool.

In some cases, it is even possible to determine the type of a closure from its body. This is shown in Listing 6.35:

Listing 6.35: Type Inference from Closure Body

let inc = { $0 + 1 }
print(inc(4))  // 5

In this example, $0 is an implicit closure parameter and the result of the expression $0 + 1 is implicitly used as the return value of the closure. From the closure body, the type checker can figure out that $0 should be an Int and that the return type of the closure should also be Int. Note that this only works with closures whose bodies consist solely of a single expression or return statement.

• Overload Resolution
In Swift, functions can be overloaded. While the bindings for most names can be resolved before type checking, this is not the case for function names. This is because overload resolution depends on the types of a function's arguments, which are not known before type checking. In contrast to many other programming languages, overload resolution not only depends on the argument types but also on the contextual type of a function call. Listing 6.36 shows an example:

Listing 6.36: Overload Resolution in Swift

func f() -> Int {
    return 2
}

func f() -> String {
    return "test"
}

let x = f()          // error: ambiguous use of 'f()'
let y: Int = f()     // Overload Resolution picks f: () -> Int
let z: String = f()  // Overload Resolution picks f: () -> String

In the declaration of x, there is no contextual type constraint, which makes the function call f() ambiguous.
6.5.2 Implementation Approach

As a first approach, a bottom-up type checker was implemented. This approach assumes that the type of each expression depends solely on the types of its subexpressions. To obtain the type of an expression, one can call the getType() method of the root expression node. Listing 6.37 shows what the getType() method of the ParenthesizedExpr node class looked like:

Listing 6.37: getType() method of the ParenthesizedExpr node class

public class ParenthesizedExpr extends ASTNode implements IPrimaryExpr {
    private final ExprElement[] elements;

    public ParenthesizedExpr(ExprElement[] elements) {
        this.elements = elements;
    }

    @Override
    public IType getType() {
        if (elements.length == 1) {
            return elements[0].getExpr().getType();
        }
        final TupleTypeElement[] typeElements = new TupleTypeElement[elements.length];
        for (int i = 0; i < elements.length; i++) {
            final ExprElement exprElement = elements[i];
            final String elementName = exprElement.getName();
            final IType elementType = exprElement.getExpr().getType();
            final TupleTypeElement typeElement = new TupleTypeElement(elementName, elementType);
            typeElements[i] = typeElement;
        }
        return new TupleType(false, typeElements);
    }

    // other methods
}

The getType() method in the example above creates a tuple type element for each expr element. Each tuple type element is composed of the corresponding expr element's name and type. Finally, the method returns a new tuple type that is composed of the tuple type elements.

This approach worked fine for simple expressions, but it soon became clear that it is not powerful enough to deal with Swift's bi-directional type inference. Thus, I decided to translate parts of the Swift compiler's type checker from C++ to Java in order to integrate it into the Tifig IDE.
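As an aside, the bottom-up scheme that Listing 6.37 follows can be shown in a small, self-contained form. The sketch below uses invented names (Expr, Literal, TupleExpr, string-encoded types) rather than Tifig's class hierarchy; it only illustrates the principle that each node computes its type from its children alone:

```java
import java.util.*;

// A stripped-down sketch of bottom-up type computation (illustrative names,
// not Tifig's classes): each node computes its type solely from its children,
// so a tuple expression such as (1 < 2, "test") gets the type (Bool, String).
interface Expr {
    String getType();
}

class Literal implements Expr {
    private final String type;
    Literal(String type) { this.type = type; }
    public String getType() { return type; }
}

class TupleExpr implements Expr {
    private final List<Expr> elements;
    TupleExpr(List<Expr> elements) { this.elements = elements; }
    public String getType() {
        if (elements.size() == 1) {
            return elements.get(0).getType();  // (e) is just e, not a 1-tuple
        }
        StringJoiner joiner = new StringJoiner(", ", "(", ")");
        for (Expr element : elements) {
            joiner.add(element.getType());
        }
        return joiner.toString();
    }
}

public class BottomUpDemo {
    public static void main(String[] args) {
        Expr expr = new TupleExpr(List.of(new Literal("Bool"), new Literal("String")));
        System.out.println(expr.getType());  // (Bool, String)
    }
}
```

The limitation is visible in the structure itself: getType() has no way to receive information from its context, which is exactly what bi-directional type inference requires.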
Apple uses a constraint-based type checker (similar to the Hindley-Milner type inference algorithm [DM82]) in order to deal with bi-directional type inference. This is explained in more detail in the next section.

6.6 Constraint-Based Type Checker

This section gives an overview of the different steps that are performed by the constraint-based type checker. Additionally, it shows various examples that illustrate how different kinds of expressions are type checked.

6.6.1 Overview

The constraint-based type checker performs four steps: Constraint Generation, Constraint Solving, Solution Ranking and Solution Application [App17e].

Constraint Generation
The type checking of an expression starts with constraint generation. During constraint generation, the type checker assigns a type to each subexpression. Since the type of a subexpression is often not yet fully known during constraint generation, type variables are used as placeholders. Additionally, constraints are generated which impose restrictions on the individual type variables. Before a constraint is added to the constraint system, it is simplified. This means that the system checks whether the constraint is already satisfied or whether it can be broken down into smaller constraints.

Constraint Solving
The constraint solver starts by assigning a fixed type to one of the type variables. The fixed type is not chosen at random, but instead represents an educated guess by the constraint solver. For example, if a type variable is used as a placeholder for the type of a literal expression, the constraint solver may start by trying the default type for the corresponding literal kind (e.g., Int for integer literals or Double for floating-point literals). Similarly, if a type variable is used as a placeholder for the type of an overloaded name, the constraint solver may start by choosing the type of one of the overload choices.
Next, the constraint solver simplifies all constraints that involve the type variable that was just assigned a fixed type. A constraint is considered solved by the simplifier if it is satisfied by the current choice of fixed types. If that is the case, it is removed from the constraint system. A constraint is considered unsolved if it still contains type variables that don't have a fixed type. If that is the case, the constraint stays in the constraint system. Finally, if the solver can determine that one of the constraints can never be satisfied with the current choice of fixed types, it backtracks to the previous step in order to try a different fixed type.

If there are no more constraints in the system after the simplification is done, this means that the current choice of fixed types represents a solution to the constraint system. This solution is then stored and the solver backtracks in order to look for additional solutions by trying out different combinations of fixed types.

If there are still unsolved constraints after the simplification is done, the whole process repeats and the solver makes the next guess and assigns a fixed type to a different type variable.

Thus, the solution space explored by the solver can be viewed as a tree. The root node of the tree is the constraint system that directly results from the constraint generation. Each other node is a constraint system that was derived from the root constraint system, and the path from the root node to another node represents the guesses that the solver made in order to derive the corresponding constraint system. The leaves are either constraint systems that represent a solution (i.e., all constraints were simplified and all type variables have a fixed type) or constraint systems where one or more constraints are not satisfiable anymore.
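The guess/simplify/backtrack loop described above can be sketched as a depth-first search. The following Java sketch is a deliberately tiny illustration, not Tifig's solver: type variables and fixed types are plain strings, candidate bindings stand in for the solver's educated guesses, and each constraint is a predicate over the current assignment (returning true while any variable it mentions is still unassigned, i.e. while it counts as unsolved rather than violated):

```java
import java.util.*;
import java.util.function.Predicate;

// A minimal sketch (not Tifig's implementation) of the solver's depth-first
// search: assign candidate fixed types to type variables one by one, prune
// branches where a constraint is already violated, record every complete
// assignment that satisfies all constraints, and backtrack to find more.
public class ToySolver {
    public static List<Map<String, String>> solve(
            List<String> vars,
            Map<String, List<String>> candidates,
            List<Predicate<Map<String, String>>> constraints) {
        List<Map<String, String>> solutions = new ArrayList<>();
        search(vars, 0, candidates, constraints, new HashMap<>(), solutions);
        return solutions;
    }

    private static void search(List<String> vars, int index,
                               Map<String, List<String>> candidates,
                               List<Predicate<Map<String, String>>> constraints,
                               Map<String, String> assignment,
                               List<Map<String, String>> solutions) {
        if (index == vars.size()) {
            solutions.add(new HashMap<>(assignment));  // a leaf that is a solution
            return;
        }
        String var = vars.get(index);
        for (String fixedType : candidates.get(var)) {
            assignment.put(var, fixedType);  // the solver's "educated guess"
            // "Simplification": a constraint whose variables are all assigned
            // must hold; an unsatisfied one causes backtracking.
            if (constraints.stream().allMatch(c -> c.test(assignment))) {
                search(vars, index + 1, candidates, constraints, assignment, solutions);
            }
            assignment.remove(var);  // backtrack
        }
    }

    public static void main(String[] args) {
        // Loosely mirrors Example 1's component 0: $T0 must be convertible
        // to Double, and the candidates are Int (literal default) and Double.
        Map<String, List<String>> candidates = Map.of("$T0", List.of("Int", "Double"));
        List<Predicate<Map<String, String>>> constraints = List.of(
                a -> !a.containsKey("$T0") || a.get("$T0").equals("Double"));
        System.out.println(solve(List.of("$T0"), candidates, constraints));  // [{$T0=Double}]
    }
}
```

The real solver is far more elaborate (it simplifies constraints incrementally, derives candidates from the constraints themselves, and prunes much more aggressively), but the tree-shaped exploration with backtracking is the same.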
Solution Ranking
If the constraint solver didn't find any solutions, it means that the constraint system is unsatisfiable or, in other words, the expression is ill-typed (i.e., it contains a type error). If there is exactly one solution, the expression is well-typed and the type checker applies this solution to the expression. Finally, if there are multiple solutions, the solutions are ranked in order to determine whether there is a single solution that is better than all other solutions. If no such solution is found, the expression is considered ambiguous, which is also a type error. Otherwise, the expression is well-typed and the type checker applies the best solution to the expression.

Solution Application
During solution application, type variables that occur in the types of the original expression and its subexpressions are replaced by their corresponding fixed types. Thus, after solution application the expression should have a valid type that doesn't contain any type variables. Additionally, overloaded names are resolved to the corresponding overload choice that was determined by the constraint solver.

6.6.2 Example 1: Literals

The first example shows how the type checker handles bi-directional type inference of literal expressions. The code for this example is shown in Listing 6.38:

Listing 6.38: Code for Example 1

let x: (Double, String) = (1, "test")

Constraint Generation
The constraint generator walks the AST of the initializer expression (1, "test") in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.6:

Figure 6.6: AST for expression (1, "test") after constraint generation

For literal expressions, the constraint generator creates a fresh type variable. On the other hand, the type of the parenthesized expression is a tuple type that is composed of the types of its elements.
In addition to creating type variables and assigning types to subexpressions, the constraint generator also creates constraints. The following list describes the four constraints that are generated for the example above:

• $T0 LiteralConformsTo ExpressibleByIntegerLiteral
This constraint means that the fixed type of $T0 (i.e., the type of the literal expression 1) has to conform to the ExpressibleByIntegerLiteral protocol.

• $T1 LiteralConformsTo ExpressibleByStringLiteral
This constraint means that the fixed type of $T1 (i.e., the type of the literal expression "test") has to conform to the ExpressibleByStringLiteral protocol.

• $T0 Conversion Double
This constraint means that the fixed type of $T0 must be convertible to Double.

• $T1 Conversion String
This constraint means that the fixed type of $T1 must be convertible to String.

Note that the constraint generator actually first generates the constraint ($T0, $T1) Conversion (Double, String) as a result of the explicit type annotation of the variable x. However, this constraint is immediately simplified into the two smaller constraints $T0 Conversion Double and $T1 Conversion String.

Internally, these type variables and constraints are stored in a constraint graph where the type variables are the vertices and the constraints are the edges. The constraint graph is a hypergraph, which means that edges can join any number of vertices [Sap17]. In other words, an edge is an element of P(V) \ {∅}, where P(V) is the power set of V and V is the set of all vertices. Thus, any given constraint connects a type variable to zero or more other type variables. Figure 6.7 shows the constraint graph for the example above:

Figure 6.7: Constraint Graph

Each of the two type variables has two edges, and each edge is represented as a set that contains only a single type variable. In addition to the constraint graph, the type variables and the constraints are also tracked by the constraint system.
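The simplification of the tuple conversion constraint mentioned above can be sketched as follows. This is only an illustration under assumed representations (types as strings, a tuple type as a list of element types), not Tifig's constraint model:

```java
import java.util.*;

// A minimal sketch (not Tifig's implementation) of how a conversion
// constraint between two tuple types of equal arity is broken down into
// element-wise conversion constraints, e.g.
// "($T0, $T1) Conversion (Double, String)" becomes
// "$T0 Conversion Double" and "$T1 Conversion String".
public class ConstraintSimplifier {
    public static List<String> simplifyTupleConversion(List<String> fromElements,
                                                       List<String> toElements) {
        if (fromElements.size() != toElements.size()) {
            // Different arity: the constraint can never be satisfied.
            throw new IllegalArgumentException("tuple arity mismatch");
        }
        List<String> smallerConstraints = new ArrayList<>();
        for (int i = 0; i < fromElements.size(); i++) {
            smallerConstraints.add(fromElements.get(i) + " Conversion " + toElements.get(i));
        }
        return smallerConstraints;
    }

    public static void main(String[] args) {
        // ($T0, $T1) Conversion (Double, String) from Example 1.
        List<String> result = simplifyTupleConversion(
                List.of("$T0", "$T1"), List.of("Double", "String"));
        System.out.println(result);  // [$T0 Conversion Double, $T1 Conversion String]
    }
}
```

Breaking the compound constraint apart early is what allows the two element constraints to land in separate connected components of the constraint graph.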
Figure 6.8 shows an overview of the root constraint system after constraint generation:

Figure 6.8: Root Constraint System

The type variables $T0 and $T1 are set to null because they don't have a fixed type yet. Note also that the constraints are divided into active and inactive constraints. This is explained in the next section.

Constraint Solving
Figure 6.7 also shows that there are two connected components in the graph [J.A10]. The constraint solver can solve each connected component independently of the other connected components. This can lead to better performance because the solver may have to explore fewer combinations of fixed types. For example, let's assume that there are two type variables $T0 and $T1. For $T0 the type checker tries 2 different fixed types and for $T1 the type checker tries 4 different fixed types. If $T0 and $T1 are in the same connected component, the solver needs to test all 2 * 4 = 8 combinations. On the other hand, if the two type variables are in separate connected components, the solver needs to test 2 combinations for the first component and 4 combinations for the second component, resulting in a total of only 6 combinations.

For the purposes of example 1, the component that contains $T0 is referred to as component 0 and the component that contains $T1 is referred to as component 1. The constraint solving process for this example is illustrated in Figure 6.9:

Figure 6.9: Constraint Solving Process

First, the constraint solver tries to solve component 0. To do this, all type variables and constraints that don't belong to component 0 are temporarily removed from the constraint system. The constraint solver then tries to find a potential binding for the type variable $T0 (i.e., a type that could be used as the fixed type of $T0). In order to do that, the solver looks at the constraints that reference $T0 (i.e., the edges of that type variable in the constraint graph).
The constraint solver determines that the type Double may be a good potential binding for that type variable. Thus, it assigns the fixed type Double to the type variable $T0. At the same time, constraints that reference $T0 are activated (i.e., they are moved from the list of inactive constraints to the list of active constraints). Afterwards, the constraint solver tries to simplify each active constraint. Since Double conforms to the ExpressibleByIntegerLiteral protocol and is also convertible to Double (i.e., convertible to itself), the two constraints are satisfied and therefore removed from the constraint system. Because there are no more constraints in the system, the solver has already found a solution for component 0.

Next, the constraint solver tries to solve component 1. Based on the two constraints that reference $T1, it tries the type String as a potential binding for that type variable. This satisfies both constraints in the system and leads to a solution for component 1.

Finally, the two partial solutions are combined into a single, final solution for the root constraint system. Since there is only one solution, the solution ranking step can be skipped.

Solution Application
The final solution is then applied to the original expression, which results in the fully typed AST shown in Figure 6.10:

Figure 6.10: AST for expression (1, "test") after solution application

6.6.3 Example 2: Overload Resolution

The second example shows how the type checker resolves calls to overloaded functions. The code for this example is shown in Listing 6.39:

Listing 6.39: Code for Example 2

func printDescription(_ value: Any) {
    print("Value: \(value)")
}

func printDescription(_ value: Int) {
    let parity = value % 2 == 0 ? "even" : "odd"
    let sign = value >= 0 ?
        "positive" : "negative"
    print("Value: \(value)")
    print("Parity: \(parity)")
    print("Sign: \(sign)")
}

func printDescription(_ value: String) {
    let count = value.characters.count
    print("Value: \(value)")
    print("Character Count: \(count)")
}

printDescription(5)

In this example, there are three functions called printDescription(). The job of these functions is to print a description of their argument. Depending on the argument type, a different function overload is called. This section explains how the type checker resolves and type checks the function call printDescription(5).

Constraint Generation
The constraint generator walks the AST of the expression printDescription(5) in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.11:

Figure 6.11: AST for expression printDescription(5) after constraint generation

Since the overloaded name printDescription is not yet resolved, the constraint generator creates a fresh type variable for the corresponding identifier expression. As in the first example, the type checker also creates a fresh type variable for the integer literal expression 5. This makes it possible to infer the type of the literal expression based on its context. Finally, since the function name is not yet resolved, the constraint generator also doesn't know the type of the function and therefore cannot determine the type of the overall function call expression. Thus, another fresh type variable is created.

In addition to creating type variables and assigning types to subexpressions, the constraint generator also creates constraints.
The following list describes the constraints that are generated for the example above:

• Disjunction Constraint
  – $T0 BindOverload printDescription: (Any) -> ()
  – $T0 BindOverload printDescription: (Int) -> ()
  – $T0 BindOverload printDescription: (String) -> ()
A disjunction constraint contains multiple nested constraints and is satisfied if one of these nested constraints is satisfied. This kind of constraint is used for overload resolution. In our example, the constraint generator creates a disjunction constraint that contains three nested constraints, each of which binds $T0 to one of the three printDescription overloads.

• $T1 LiteralConformsTo ExpressibleByIntegerLiteral
This constraint means that the fixed type of $T1 (i.e., the type of the literal expression 5) has to conform to the ExpressibleByIntegerLiteral protocol.

• ($T1) -> $T2 ApplicableFunction $T0
This constraint means that the fixed type of $T0 must be a function type with a single required parameter, and that the argument, which is of type $T1, must be convertible to the type of that parameter. Additionally, the return type of that function type must be equal to $T2.

Again, the type variables and the constraints together form a constraint graph, which is shown in Figure 6.12. Additionally, Figure 6.13 shows an overview of the root constraint system after constraint generation:

Figure 6.12: Constraint Graph

Figure 6.13: Root Constraint System

Constraint Solving
Compared to example 1, there is only one connected component in this example, because all three type variables are connected by the ApplicableFunction constraint. The constraint solving process for this example is illustrated in Figure 6.14.

From the root constraint system the solver starts by trying out the different nested constraints of the disjunction constraint. Each nested constraint can immediately be simplified, which causes the type variable $T0 to be bound to the type of the corresponding overload.
This also activates the ApplicableFunction constraint. During simplification this constraint is then replaced by two smaller constraints. The first one is an ArgumentConversion constraint which indicates that the argument, which is of type $T1, must be convertible to the parameter type of the corresponding overload. The second one records the fact that the return type of the chosen function must be equal to $T2 (i.e., the type variable that was used as a placeholder for the type of the function call expression). This constraint is immediately simplified, which causes the type variable $T2 to be bound to the return type of the corresponding function (which is the empty tuple type for all three overloads).

Next, the solver tries different potential bindings for the type variable $T1. Again, the potential bindings are derived from the constraints that involve $T1. For example, in the leftmost branch of the tree, the solver tries the potential binding $T1 := Int because of the constraint $T1 LiteralConformsTo ExpressibleByIntegerLiteral as well as $T1 := Any because of the constraint $T1 ArgumentConversion Any. Only the first one leads to a solution, because the constraint $T1 LiteralConformsTo ExpressibleByIntegerLiteral cannot be satisfied if $T1 is bound to the type Any.

Overall, the solver finds two solutions: one for the function printDescription: (Any) -> () and one for the function printDescription: (Int) -> (). The other leaves of the solver tree contain a constraint that is marked red. This indicates the constraint that cannot be satisfied on that path of the tree, which causes the solving process to backtrack. Note that the constraint solver doesn't search these branches of the solver tree simultaneously. Instead, it explores the solution space in a depth-first manner.
Figure 6.14: Constraint Solving Process

Solution Ranking

Since the constraint solver found more than one solution for the root constraint system, solution ranking comes into play. First, the ranking algorithm computes the difference between the two solutions. In this example, the two solutions differ only in the overload that was chosen for the function name printDescription. The system then checks whether one of the two overloads is more specialized than the other. If this is the case, the system prefers the more specialized overload. Thus, it comes to the conclusion that the solution that picks the overload printDescription: (Int) -> () is better than the solution that picks printDescription: (Any) -> (). More ranking rules will be explained in subsection 6.6.11.

Solution Application

This final solution is then applied to the original expression, which results in the fully typed AST shown in Figure 6.15:

Figure 6.15: AST for expression printDescription(5) after solution application

6.6.4 Example 3: Binary Expressions

Section 3.1.11 showed how new operators can be declared and how existing operators can be overloaded by declaring additional operator functions. For prefix and postfix operators this means that type checking works just like the type checking of a function call as shown in subsection 6.6.3. For example, listings 6.40 and 6.41 are semantically equivalent. Note that the expression !!b1 would be a valid expression that performs double negation in other languages like Java and C++. In Swift, however, we need to add parentheses around the inner prefix expression. Otherwise, the parser would parse this as a single prefix expression with an operator called !!. This is because the operators are not yet known at parse time.

Listing 6.40: Nested Prefix Expressions

let b1 = true
let b2 = !(!b1)

Listing 6.41: Nested Function Calls

let negate = (!)
let b1 = true
let b2 = negate(negate(b1))

Type checking binary expressions is a bit more complicated. Since the infix operators are not yet known at parse time, the parser also doesn't know their precedence and associativity. Thus, Tifig parses a series of binary expressions as a flat list. For example, Figure 6.16 shows the AST for the expression 0 == x % 2:

Figure 6.16: AST for expression 0 == x % 2

However, for type checking we need to know the precedence and associativity of the individual operators. For example, the expression 0 == x % 2 is only valid if the % operator has a higher precedence than the == operator. To make it easier for the type checker, Tifig's indexer temporarily creates a "shadow tree" for each binary expression which encodes the precedence and associativity of the infix operators in the tree structure. To see how this works, let's look at the code for example 3, which is shown in Listing 6.42:

Listing 6.42: Code for Example 3

infix operator ***: MultiplicationPrecedence
infix operator +++: AdditionPrecedence

func ***(lhs: Int, rhs: Int) -> Int { return lhs * rhs }
func +++(lhs: Int, rhs: Int) -> Int { return lhs + rhs }

1 +++ 2 +++ 3 *** 4

This example declares two new infix operators *** and +++ which behave exactly like the standard library operators * and +. This makes it easier to explain the constraint solving process, because all operator overloads are shown in the code for this example. To create the shadow tree, the indexer first looks up the precedence group of each individual operator. These precedence groups are stored in a sorted map. The indexer then repeatedly picks the next operator that belongs to the highest precedence group and collapses the two corresponding operands into an InfixOperatorExpr node. Once there are no more operators that belong to the highest precedence group, the process restarts with the operators that belong to the second-highest precedence group.
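The collapsing procedure described above can be sketched as follows. This is an illustrative model, not Tifig's InfixOperatorExpr implementation: operands are plain strings, collapsed subtrees are shown as parenthesized strings, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the precedence-driven "shadow tree" construction.
public class ShadowTree {

    static String build(List<String> operandList, List<String> operatorList,
                        Map<String, Integer> precedence) {
        List<String> operands = new ArrayList<>(operandList);
        List<String> operators = new ArrayList<>(operatorList);
        while (!operators.isEmpty()) {
            // Find the highest precedence among the remaining operators.
            int highest = operators.stream().mapToInt(precedence::get).max().getAsInt();
            // Collapse the leftmost operator of that precedence group
            // (all operators in this sketch are left-associative).
            for (int i = 0; i < operators.size(); i++) {
                if (precedence.get(operators.get(i)) == highest) {
                    String collapsed = "(" + operands.get(i) + " "
                            + operators.get(i) + " " + operands.get(i + 1) + ")";
                    operands.set(i, collapsed);
                    operands.remove(i + 1);
                    operators.remove(i);
                    break;
                }
            }
        }
        return operands.get(0);
    }

    public static void main(String[] args) {
        System.out.println(build(
                List.of("1", "2", "3", "4"),
                List.of("+++", "+++", "***"),
                Map.of("+++", 1, "***", 2)));
    }
}
```

For the expression 1 +++ 2 +++ 3 *** 4 this produces ((1 +++ 2) +++ (3 *** 4)), matching the collapsing steps shown in Figure 6.17.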
This process is illustrated in Figure 6.17:

Figure 6.17: Building a shadow tree for the expression 1 +++ 2 +++ 3 *** 4

Step 0 shows all the operands and operators of the expression 1 +++ 2 +++ 3 *** 4. In step 1, the process collapses the two operands of the *** operator into a new InfixOperatorExpr node. This is because the *** operator has higher precedence than the +++ operator. Since there are no more *** operators, the process continues with the +++ operator. In step 2, the process collapses the two operands of the first +++ operator into a new InfixOperatorExpr node. Note that it starts with the first +++ operator, because the operator is left-associative. Finally, in step 3 the two operands of the second +++ operator are collapsed into a new InfixOperatorExpr node as well.

Constraint Generation

The constraint generator walks the AST of the expression 1 +++ 2 +++ 3 *** 4 in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.18:

Figure 6.18: AST for expression 1 +++ 2 +++ 3 *** 4 after constraint generation

For literal expressions, the constraint generator creates a fresh type variable. The same thing happens for the identifier expressions of the operators, but in this example there is only one overload for each of the two operator names +++ and ***. Thus, the constraint generator can already fill in the fixed types. This has the effect that the types of the individual infix operator expressions are also already known. In addition to creating type variables and assigning types to subexpressions, the constraint generator also creates constraints.
The following list describes the constraints that are generated for the example above:

• $T0 LiteralConformsTo ExpressibleByIntegerLiteral
$T2 LiteralConformsTo ExpressibleByIntegerLiteral
$T4 LiteralConformsTo ExpressibleByIntegerLiteral
$T6 LiteralConformsTo ExpressibleByIntegerLiteral
These four constraints mean that the fixed types of the type variables that were generated for the individual literal expressions must conform to the ExpressibleByIntegerLiteral protocol.

• $T0 OperatorArgumentConversion Int
$T2 OperatorArgumentConversion Int
$T4 OperatorArgumentConversion Int
$T6 OperatorArgumentConversion Int
These four constraints mean that the fixed types of the type variables that were generated for the individual literal expressions must be convertible to Int. Note that the four OperatorArgumentConversion constraints are the result of the simplification of the constraints ($T0, $T2) -> Int ApplicableOperatorFunction $T1 and ($T4, $T6) -> Int ApplicableOperatorFunction $T5. Additionally, the constraint (Int, Int) -> Int ApplicableOperatorFunction $T3 can be simplified away entirely before it is ever added to the constraint system.

The constraint graph and the root constraint system for this example are shown in Figure 6.19 and Figure 6.20, respectively:

Figure 6.19: Constraint Graph
Figure 6.20: Root Constraint System

Constraint Solving

Figure 6.19 shows that there are seven connected components in the graph. The constraint solver ignores the connected components containing the type variables $T1, $T3 and $T5, because these type variables already have a fixed type and there are no constraints in these components. The constraint solving process for this example is straightforward. The connected components look very similar, since each of them contains only a single type variable, one LiteralConformsTo constraint and one OperatorArgumentConversion constraint.
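Splitting the constraint graph into connected components can be sketched with a simple union-find over the type variables. This is an illustrative model with invented names, not Tifig's actual graph representation: type variables are integers, and each constraint that mentions two type variables contributes an edge.

```java
// Illustrative sketch of splitting the constraint graph into
// connected components with a minimal union-find.
public class Components {

    static int find(int[] parent, int x) {
        while (parent[x] != x) x = parent[x];
        return x;
    }

    // Each constraint connects the type variables it mentions.
    static int count(int numTypeVars, int[][] constraintEdges) {
        int[] parent = new int[numTypeVars];
        for (int i = 0; i < numTypeVars; i++) parent[i] = i;
        for (int[] e : constraintEdges) {
            parent[find(parent, e[0])] = find(parent, e[1]);
        }
        int components = 0;
        for (int i = 0; i < numTypeVars; i++) {
            if (find(parent, i) == i) components++;
        }
        return components;
    }

    public static void main(String[] args) {
        // Example 2: the ApplicableFunction constraint connects $T0, $T1, $T2.
        System.out.println(count(3, new int[][] { {0, 1}, {0, 2} }));
        // Example 3: no constraint connects two type variables, so each of
        // the seven type variables forms its own component.
        System.out.println(count(7, new int[0][]));
    }
}
```

The two calls in main mirror the two situations discussed so far: one component for example 2 and seven components for example 3.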
The constraint solver solves each of these components in turn, as shown in Figure 6.21:

Figure 6.21: Constraint Solving Process

It starts by trying out the fixed type Int for the corresponding type variable. This satisfies both constraints and leads to a solution for the corresponding component. Finally, the four partial solutions are combined into a single, final solution for the root constraint system.

Solution Application

The final solution is then applied to the original expression, which results in the fully typed AST shown in Figure 6.22:

Figure 6.22: AST for expression 1 +++ 2 +++ 3 *** 4 after solution application

6.6.5 Example 4: Explicit Member Expression

This example shows how explicit member expressions are type checked. The code for this example is shown in Listing 6.43:

Listing 6.43: Code for Example 4

struct Person {
    let name: String
}

func f() -> Person {
    return Person(name: "Steve")
}

func f() -> Int {
    return 0
}

f().name

Constraint Generation

The constraint generator walks the AST of the explicit member expression f().name in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.23:

Figure 6.23: AST for expression f().name after constraint generation

Since the overloaded name f is not yet resolved, the constraint generator creates a fresh type variable for the corresponding identifier expression. This also means that we don't yet know the type of the function call expression or the type of the explicit member expression. Thus, two additional type variables are created for these expressions.

The following list describes the constraints that are generated for the example above:

• Disjunction Constraint
– $T0 BindOverload f: () -> Person
– $T0 BindOverload f: () -> Int
This disjunction constraint contains two nested constraints, each of which binds $T0 to one of the two f overloads.
One of these nested constraints must be satisfied in order for the disjunction constraint itself to be satisfied.

• () -> $T1 ApplicableFunction $T0
This constraint means that the fixed type of $T0 must be a function type which has no required parameters. Additionally, the return type of that function type must be equal to $T1.

• $T1.name ValueMember $T2
This constraint means that the fixed type of $T1 must have a member that is called "name" and is of type $T2.

The constraint graph and the root constraint system for this example are shown in Figure 6.24 and Figure 6.25, respectively:

Figure 6.24: Constraint Graph
Figure 6.25: Root Constraint System

Constraint Solving

The constraint solving process for this example is illustrated in Figure 6.26:

Figure 6.26: Constraint Solving Process

The solver starts by picking the first overload and therefore sets the fixed type of $T0 to () -> Person. The simplification of the constraint () -> $T1 ApplicableFunction $T0 then sets the fixed type of $T1 to Person. Finally, the constraint $T1.name ValueMember $T2 is simplified. In the process, the simplifier looks for instance members called "name" in the struct type Person. It finds a name property which is of type String. Thus, the fixed type of $T2 is set to String and all constraints are satisfied. The solver creates a solution that contains the corresponding type bindings for the different type variables. Next, the solver backtracks in order to look for additional solutions. It picks the second overload and therefore sets the fixed type of $T0 to () -> Int. The ApplicableFunction constraint can be simplified and the fixed type of $T1 is set to Int. However, since the Int type doesn't have a member called "name", the simplification of the constraint $T1.name ValueMember $T2 fails. Thus, there is only a single solution for this constraint system.
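The ValueMember simplification described above can be sketched as a simple member lookup. The snippet below is an illustrative model with invented names and a hard-coded member table, not Tifig's implementation: a successful lookup yields the type to bind the member type variable to, while a failed lookup makes the constraint unsatisfiable and forces the solver to backtrack.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of ValueMember constraint simplification.
public class MemberLookup {

    // A tiny "type system": type name -> (member name -> member type).
    static final Map<String, Map<String, String>> MEMBERS = Map.of(
            "Person", Map.of("name", "String"),
            "Int", Map.of());

    // Simplify "base.member ValueMember $T": returns the type to bind $T to,
    // or empty if the constraint is unsatisfiable.
    static Optional<String> simplifyValueMember(String baseType, String member) {
        Map<String, String> members = MEMBERS.get(baseType);
        if (members == null) return Optional.empty();
        return Optional.ofNullable(members.get(member));
    }

    public static void main(String[] args) {
        // Overload f: () -> Person — Person.name exists, so $T2 := String.
        System.out.println(simplifyValueMember("Person", "name"));
        // Overload f: () -> Int — Int has no member "name": backtrack.
        System.out.println(simplifyValueMember("Int", "name"));
    }
}
```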
Solution Application

The final solution is then applied to the original expression, which results in the fully typed AST shown in Figure 6.27:

Figure 6.27: AST for expression f().name after solution application

6.6.6 Example 5: Implicit Member Expression

This example shows how implicit member expressions are type checked. The code for this example is shown in Listing 6.44:

Listing 6.44: Code for Example 5

1 enum DayOfTheWeek {
2     case monday, tuesday, wednesday, thursday, friday, saturday, sunday
3 }
4
5 var day = DayOfTheWeek.monday
6 day = .friday

On line 5 of this example, the explicit member expression DayOfTheWeek.monday is used to initialize the variable day. This works because enum cases are considered to be static members of their enclosing enum type. The type of the variable day is inferred to be DayOfTheWeek, because enum cases are instances of their enclosing enum type. On line 6, the value of the variable day is changed. This time, an implicit member expression is used: the type of day is already known, and the type checker can therefore infer that .friday refers to DayOfTheWeek.friday. Note that this works not just with enum cases but with any kind of static member that is an instance of its enclosing type. For example, Listing 6.45 shows how this can be used in Apple's UI framework UIKit:

Listing 6.45: UIKit Example

import UIKit

let errorLabel = UILabel()
errorLabel.text = "An error occurred!"
errorLabel.textColor = .red

In this example, the textColor property of UILabel is of type UIColor. This type is a class type and not an enum type. It defines several static properties to quickly access well-known colors (e.g., UIColor.blue, UIColor.green, UIColor.red). Since the type is already known from the context (i.e., the assignment to the textColor property), we can use .red instead of UIColor.red.
Constraint Generation

The constraint generator walks the AST of the expression day = .friday in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.28:

Figure 6.28: AST for expression day = .friday after constraint generation

The constraint generator creates a fresh type variable $T0 for the identifier expression day. Since the name day is not overloaded, it can be resolved immediately, and the fixed type of $T0 is set to DayOfTheWeek. The assignment operator is resolved to the implicit operator binding = that was generated during the definition pass. The type of this operator binding is set to null, but that doesn't matter, because the operator is not treated like a normal operator and the built-in behaviour doesn't rely on the type of the operator (see below). For the implicit member expression .friday, the constraint generator creates two type variables: $T1 for the base type and $T2 for the type of the member friday. Finally, the type of the assignment expression is set to the empty tuple (). This is different from other languages like Java or C++, where assignment expressions have the same type as the variable that is assigned a new value.

The following list describes the constraints that are generated for the example above:

• Metatype($T1).friday UnresolvedValueMember $T2
This constraint means that $T1 must have a static member that is called "friday" and is of type $T2.

• $T2 Conversion $T1
This constraint means that the type of the static member "friday" (i.e., $T2) must be convertible to its enclosing type (i.e., $T1).

• $T1 Conversion DayOfTheWeek
This constraint results from the assignment expression, and it means that the type of the right-hand side (i.e., $T1) must be convertible to the type of the left-hand side (i.e., DayOfTheWeek).
Figure 6.29 shows the root constraint system for this example:

Figure 6.29: Root Constraint System

Constraint Solving

The constraint solver starts by trying the potential binding $T1 := DayOfTheWeek, based on the constraint $T1 Conversion DayOfTheWeek. During the simplification of the UnresolvedValueMember constraint, the simplifier looks for static members called "friday" in the enum type DayOfTheWeek. It finds the enum case friday and therefore sets the fixed type of $T2 to DayOfTheWeek. Now, all constraints are satisfied. The final solution is shown in Figure 6.30:

Figure 6.30: Final Solution

Solution Application

This final solution is applied to the original expression, which results in the fully typed AST shown in Figure 6.31:

Figure 6.31: AST for expression day = .friday after solution application

6.6.7 Example 6: Optionals

Swift has several language constructs that are related to the handling of optionals (see subsection 3.1.9). This example shows how a forced-value expression is type checked. Listing 6.46 contains the code for this example:

Listing 6.46: Code for Example 6

func f() -> Int { return 0 }
func f() -> Int? { return 0 }

let x = f()!

In this example there are two functions called f(). One returns an Int and the other returns an Int?. The variable x is initialized with a call to f() that is immediately force-unwrapped. Since only optionals can be unwrapped like this, the type checker chooses the f() overload that returns an Int?. The following subsections show how this is achieved.

Constraint Generation

The constraint generator walks the AST of the expression f()! in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.32:

Figure 6.32: AST for expression f()! after constraint generation

The constraint generator first creates a fresh type variable $T0 for the identifier expression f.
Since there are multiple functions called f, it cannot yet resolve the overload and therefore has to create placeholder type variables for the function call expression as well as for the forced-value expression. Additionally, the system creates the type variable $T3, which represents the type of the variable x. This is not shown in the AST of the initializer expression.

The following list describes the constraints that are generated for the example above:

• Disjunction Constraint
– $T0 BindOverload f: () -> Int
– $T0 BindOverload f: () -> Int?
The disjunction constraint that is generated for the identifier expression f contains two nested constraints, each of which binds $T0 to one of the two f overloads.

• () -> $T1 ApplicableFunction $T0
This constraint means that the fixed type of $T0 must be a function type which has no required parameters and a return type that is equal to $T1.

• $T1 OptionalObject $T2
This constraint means that $T1 (i.e., the return type of the function f()) must be an optional type which results in $T2 (i.e., the type of the forced-value expression) when unwrapped.

• $T2 Conversion $T3
This constraint means that the initializer expression, which has the type $T2, must be convertible to the type $T3 (i.e., the type of the variable x).

Figure 6.33 shows the root constraint system for this example:

Figure 6.33: Root Constraint System

Constraint Solving

The constraint solver first tries to choose the overload f: () -> Int and therefore sets the fixed type of $T0 to () -> Int. During the simplification of the constraint () -> $T1 ApplicableFunction $T0, the fixed type of $T1 is then set to Int. However, this means that the simplification of the constraint $T1 OptionalObject $T2 fails, because the fixed type of $T1 (i.e., Int) is not an optional. Therefore, the solver backtracks and now chooses the overload f: () -> Int?. During the simplification of the ApplicableFunction constraint, $T1 is set to Int?
and during the simplification of the OptionalObject constraint, $T2 is set to Int. Finally, the solver tries the potential binding $T3 := Int, based on the constraint $T2 Conversion $T3. Now, all constraints are satisfied. The final solution is shown in Figure 6.34:

Figure 6.34: Final Solution

Solution Application

This final solution is applied to the original expression, which results in the fully typed AST shown in Figure 6.35:

Figure 6.35: AST for expression f()! after solution application

6.6.8 Example 7: Initializer Call

This example shows how an initializer call is type checked. The code for this example is shown in Listing 6.47:

Listing 6.47: Code for Example 7

struct Circle {
    let radius: Double
}

let circle = Circle(radius: 2.5)

The expression Circle(radius: 2.5) calls the compiler-generated, memberwise initializer, which in turn constructs a new instance of the struct type Circle.

Constraint Generation

The constraint generator walks the AST of the expression Circle(radius: 2.5) in postorder and assigns a type to each subexpression. The resulting AST is shown in Figure 6.36:

Figure 6.36: AST for expression Circle(radius: 2.5) after constraint generation

In this example, much of the work already happens during constraint generation. The constraint generator first creates the type variable $T0 for the identifier expression Circle. Since there is only one entity (i.e., a struct type) with this name, the overload is immediately resolved during constraint generation and the fixed type of $T0 is set to Metatype(Circle). Next, the constraint generator creates a fresh type variable $T1 for the floating-point literal 2.5. Finally, the constraint generator visits the FunctionCallExpr node. It generates the constraint (radius: $T1) -> $T2 ApplicableFunction Metatype(Circle), where $T2 is a fresh type variable that represents the type of the overall function call expression.
However, since the second type of this constraint is a metatype, the constraint simplifier recognizes that this is a call to an initializer. Thus, this constraint is not added to the constraint system, but is instead simplified. The constraint simplifier creates a fresh type variable $T3 which represents the type of the initializer's parameter list. The function type $T3 -> $T2 is therefore considered to be a placeholder for the type of the initializer. The simplifier then looks for initializers in the struct type Circle. If there were multiple initializers, a corresponding disjunction constraint would be generated. In this case there is only the compiler-generated, memberwise initializer. Since the type of this initializer is (radius: Double) -> Circle, the simplifier binds $T3 to (radius: Double) and $T2 to Circle. Finally, the simplifier adds the constraint (radius: $T1) ArgumentTupleConversion $T3, which is immediately simplified to $T1 ArgumentConversion Double. Additionally, the system creates the type variable $T4, which represents the type of the variable circle. This is not shown in the AST of the initializer expression.

The following list describes the constraints that are generated for the example above:

• $T1 LiteralConformsTo ExpressibleByFloatLiteral
This constraint means that the fixed type of $T1 (i.e., the type of the literal expression 2.5) must conform to the ExpressibleByFloatLiteral protocol.

• $T1 ArgumentConversion Double
This constraint means that the fixed type of $T1 must be convertible to Double.

• Circle Conversion $T4
This constraint means that the initializer expression, which has the type Circle, must be convertible to the type $T4 (i.e., the type of the variable circle).
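The splitting of an ArgumentTupleConversion constraint into element-wise ArgumentConversion constraints can be sketched as a label-by-label match between the call's arguments and the initializer's parameters. This is an illustrative model with invented names, not Tifig's implementation; labels and types are plain strings.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of ArgumentTupleConversion simplification.
public class ArgumentMatching {

    // Simplify "(argLabels: argTypes) ArgumentTupleConversion
    // (paramLabels: paramTypes)". Returns the element-wise
    // ArgumentConversion constraints, or null on a label/arity mismatch.
    static List<String> simplify(List<String> argLabels, List<String> argTypes,
                                 List<String> paramLabels, List<String> paramTypes) {
        if (argLabels.size() != paramLabels.size()) return null;
        List<String> constraints = new ArrayList<>();
        for (int i = 0; i < argLabels.size(); i++) {
            if (!argLabels.get(i).equals(paramLabels.get(i))) return null;
            constraints.add(argTypes.get(i) + " ArgumentConversion "
                    + paramTypes.get(i));
        }
        return constraints;
    }

    public static void main(String[] args) {
        // (radius: $T1) ArgumentTupleConversion (radius: Double)
        // simplifies to $T1 ArgumentConversion Double.
        System.out.println(simplify(List.of("radius"), List.of("$T1"),
                List.of("radius"), List.of("Double")));
    }
}
```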
Figure 6.37 shows the root constraint system for this example:

Figure 6.37: Root Constraint System

Constraint Solving

There are two connected components in this constraint graph (note that components that contain only type variables which already have a fixed type are ignored). One component contains the type variable $T1 and the other contains the type variable $T4. The constraint solver starts with the first component and tries the potential binding $T1 := Double. This satisfies both the constraint $T1 LiteralConformsTo ExpressibleByFloatLiteral and the constraint $T1 ArgumentConversion Double. Thus, this component is already solved. For the second component, the constraint solver tries the potential binding $T4 := Circle. This satisfies the constraint Circle Conversion $T4, which means that this component is also solved. Finally, the solver combines the two partial solutions into the final solution shown in Figure 6.38:

Figure 6.38: Final Solution

Solution Application

This final solution is applied to the original expression, which results in the fully typed AST shown in Figure 6.39:

Figure 6.39: AST for expression Circle(radius: 2.5) after solution application

6.6.9 Example 8: Generic Function

This example shows how a call to a generic function is type checked. The code for this example is shown in Listing 6.48:

Listing 6.48: Code for Example 8

func _max<T: Comparable>(_ x: T, _ y: T) -> T {
    return y >= x ? y : x
}

let result = _max(99, 42)

The generic function in the example above works just like the max() function from the standard library. However, in order to avoid ambiguity, it is called _max().

Constraint Generation

The constraint generator walks the AST of the expression _max(99, 42) in postorder and assigns a type to each subexpression.
The resulting AST is shown in Figure 6.41:

Figure 6.41: AST for expression _max(99, 42) after constraint generation

The constraint generator first creates the type variable $T0 for the identifier expression _max. However, since there is only one function with this name, the overload is immediately resolved during constraint generation. In the process, the generic type parameter T in the type of the _max() function is replaced with a fresh type variable $T1. Thus, the fixed type of $T0 is set to the function type ($T1, $T1) -> $T1. The type variables $T2 and $T3 are created for the two integer literals. Additionally, the system creates the type variable $T4, which represents the type of the result variable. This is not shown in the AST of the initializer expression.

The following list describes the constraints that are generated for the example above:

• $T1 ConformsTo Comparable
This constraint is generated due to the conformance requirement T: Comparable in the signature of the function _max(). It means that the fixed type of $T1 must conform to the protocol Comparable.

• $T2 LiteralConformsTo ExpressibleByIntegerLiteral
$T3 LiteralConformsTo ExpressibleByIntegerLiteral
These two constraints mean that the fixed types of the type variables that were generated for the individual literal expressions must conform to the ExpressibleByIntegerLiteral protocol.

• $T2 ArgumentConversion $T1
$T3 ArgumentConversion $T1
These two constraints mean that the fixed types of the type variables that were generated for the individual literal expressions must be convertible to the fixed type of $T1.

• $T1 Conversion $T4
This constraint means that the initializer expression, which has the type $T1, must be convertible to the type $T4 (i.e., the type of the variable result).

Note that the two ArgumentConversion constraints are the result of the simplification of the constraint ($T2, $T3) -> $T1 ApplicableFunction ($T1, $T1) -> $T1.
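The "opening" of a generic function type, i.e., the replacement of each generic parameter with a fresh type variable, can be sketched as follows. The snippet works on textual types with invented names; it is not Tifig's implementation, which operates on type objects rather than strings.

```java
import java.util.List;
import java.util.regex.Matcher;

// Illustrative sketch of opening a generic function type by substituting
// fresh type variables for its generic parameters.
public class GenericOpening {

    // Replace each generic parameter in a textual function type with a
    // fresh type variable, numbering them starting at $T<firstFresh>.
    static String open(String genericType, List<String> typeParams, int firstFresh) {
        String opened = genericType;
        int next = firstFresh;
        for (String param : typeParams) {
            String fresh = "$T" + next++;
            // \b ensures that only whole identifiers are replaced.
            opened = opened.replaceAll("\\b" + param + "\\b",
                    Matcher.quoteReplacement(fresh));
        }
        return opened;
    }

    public static void main(String[] args) {
        // The type of _max is opened to ($T1, $T1) -> $T1 at the call site.
        System.out.println(open("(T, T) -> T", List.of("T"), 1));
    }
}
```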
Figure 6.41 shows the root constraint system for this example:

Figure 6.41: Root Constraint System

Constraint Solving

The constraint solving process for this example is straightforward. Based on the constraint $T2 LiteralConformsTo ExpressibleByIntegerLiteral, the solver tries the potential binding $T2 := Int. From there, the type Int propagates to the type variable $T1 because of the constraint $T2 ArgumentConversion $T1. Since Int conforms to the protocol Comparable, the solver then continues by trying the potential binding $T4 := Int, based on the constraint $T1 Conversion $T4. At this point, all constraints except for the ones that involve $T3 have been simplified and removed. Finally, the solver tries the potential binding $T3 := Int, based on the constraint $T3 LiteralConformsTo ExpressibleByIntegerLiteral. This satisfies the remaining two constraints and the system is solved. Figure 6.42 shows the final solution of this constraint system:

Figure 6.42: Final Solution

Note that the fixed type of $T0 is simplified from ($T1, $T1) -> $T1 to (Int, Int) -> Int when the solution is finalized.

Solution Application

This final solution is applied to the original expression, which results in the fully typed AST shown in Figure 6.43:

Figure 6.43: AST for expression _max(99, 42) after solution application

6.6.10 Solver Algorithm

The preceding examples showed how the constraint solver type checks different kinds of expressions. This section describes how the solver is implemented. The two mutually recursive methods solveRec() and solveSimplified() are the central components of the solver algorithm. Figure 6.44 shows a flow chart of the solveRec() method. For simplicity, the flow chart doesn't show the extra logic that is used if there are multiple connected components in the constraint graph:

Figure 6.44: Flow Chart for solveRec() method

The method solveRec() is called after constraint generation.
Additionally, it is called every time the solver applies a new potential binding, as well as every time the solver picks an overload from a disjunction constraint. The method returns a boolean to indicate success or failure, as well as a list of solutions. It starts by simplifying active constraints. If an active constraint can be simplified, it is removed from the constraint system. Note that the simplification may lead to the generation of smaller constraints, as was shown in some of the preceding examples. If an active constraint cannot be solved yet, it is added back to the list of inactive constraints. Finally, if a constraint is unsatisfiable with the current choice of fixed types, the simplifier returns an error. If there are any unsatisfiable constraints, solveRec() returns true to indicate that there was an error. Otherwise, it proceeds by checking whether there are any inactive constraints. If there are no more inactive constraints, the solver has found a solution. It finalizes the solution and adds it to the list of solutions that is returned from solveRec() through an out parameter. Finally, solveRec() returns false, which indicates success. If there are still inactive constraints, the solver calls the solveSimplified() method, which is explained below. If there are any solutions after solveSimplified() returns, solveRec() returns false (i.e., success). Otherwise, it returns true (i.e., error). Figure 6.45 shows a flow chart of the solveSimplified() method:

Figure 6.45: Flow Chart for solveSimplified() method

The solveSimplified() method first looks for disjunction constraints. If there are disjunction constraints, it picks the smallest one (i.e., the disjunction constraint with the fewest nested constraints). Then, it applies each nested constraint one after the other and calls solveRec() each time in order to explore whether the overload choice leads to a solution.
If there are no disjunction constraints, the system determines potential bindings for the type variables in the constraint system. If there are potential bindings, it applies one after the other and calls solveRec() each time in order to explore whether the potential binding leads to a solution. If there are no potential bindings and no disjunction constraints, solveSimplified() returns true to indicate that there was an error. Otherwise, if there are any solutions in the end, solveSimplified() returns false to indicate success. If there are no solutions, it returns true to indicate that there was an error. When the solver is stuck, either because there is an unsatisfiable constraint or because it found a solution, it backtracks to a point where other potential bindings or nested constraints can be tried out. During backtracking the solver also needs to revert any changes that were made to the constraint system. This includes fixed types that were set on type variables as well as constraints that were added or removed. To do that, the solver uses so-called solver scopes. An example of this is shown in Listing 6.49:

Listing 6.49: Trying Potential Bindings

boolean tryTypeVariableBindings(ConstraintSystem cs, TypeVariableType typeVar,
        List<PotentialBinding> bindings, List<Solution> solutions) {
    boolean anySolved = false;
    for(final PotentialBinding binding : bindings) {
        IType type = binding.getBindingType();

        // Try to solve the system with typeVar := type
        try(final SolverScope scope = new SolverScope(cs)) {
            cs.simplifier().addConstraint(ConstraintKind.Bind, typeVar, type);
            if(!cs.solver().solveRec(solutions)) {
                anySolved = true;
            }
        }
    }

    return !anySolved;
}

The tryTypeVariableBindings() method is used to try out different potential bindings for a specific type variable typeVar. To apply a potential binding, the constraint typeVar Bind type is added to the system.
This is immediately simplified by setting the fixed type of typeVar to type. Afterwards, it calls solveRec() to recursively solve the system. Note that this happens within a try statement. In the beginning of that try statement, a new SolverScope is created. This class records the current state of the constraint system and reverts the system back to that state once execution leaves the scope of the try statement. In the original implementation in the Swift compiler, SolverScope is a C++ RAII class [rai17]. Tifig uses Java’s try-with-resources mechanism to achieve the same effect [try17]. 6.6.11 Ranking Rules This section looks at the ranking rules that are used to determine the “best” solution, if there are multiple solutions. 113 6 Indexer Prefer more specialized overloads If there are two or more solutions that differ in a specific overload choice, the solution ranking algorithm favors the solution which chooses the overload that is more specialized than the other overloads. This was shown in example 2 (see subsection 6.6.3). Another example is shown in Listing 6.50: Listing 6.50: Prefer more specialized overloads 1 2 3 4 func f(_ x: Int?) {} func f(_ x: Int) {} f(42) // picks f: (Int) −> () In this example, the overload f: (Int) -> () is considered to be more specialized than f: (Int?) -> (), because Int is convertible to Int? (see subsection 6.6.14). Prefer overloads with fewer ignored parameters The solution ranking algorithm also prefers overloads for which fewer parameters have been ignored in the function call. A parameter can be ignored (i.e., no argument needs to be provided) if it is either variadic or has a default value. 
An example of this is shown in Listing 6.51: Listing 6.51: Prefer overloads with fewer ignored parameters 1 2 3 4 func f(_ x: Int, _ y: String = "") {} func f(_ x: Int) {} f(42) // picks f: (Int) −> () In this example, the overload f: (Int) -> () is considered to be “better”, because none of its parameters have been ignored whereas with the overload f: (Int, String) -> () the parameter y is ignored. Prefer regular methods over protocol extension methods The solution ranking algorithm prefers regular methods over methods that are inherited from a protocol extension. An example of this is shown in Listing 6.52: 114 6 Indexer Listing 6.52: Prefer regular methods over protocol extension methods 1 2 3 4 5 6 7 8 9 10 11 protocol P {} extension P { func f() {} } struct S: P { func f(_ x: Int = 0) {} } let s = S() s.f() // picks f: (Int) −> () In this example, the ranking algorithm chooses the regular method over the protocol extension method even though the protocol extension method has fewer ignored parameters. Thus, this ranking rule takes precedence over the other rules. Prefer protocol extension methods from derived protocols The solution ranking algorithm prefers one protocol extension method over another, if the first one belongs to a protocol that is derived from the protocol of the second protocol extension method. An example of this is shown in Listing 6.53: Listing 6.53: Prefer protocol extension methods from derived protocols 1 2 3 4 5 6 7 8 9 10 11 12 13 14 protocol P2 {} extension P2 { func f() {} } protocol P1: P2 {} extension P1 { func f(_ x: Int = 0) {} } struct S: P1 {} let s = S() s.f() // picks f: (Int) −> () In this example, the ranking algorithm prefers the method f: (Int) -> () from protocol P1 over the method f: () -> () from protocol P2, because P1 inherits from P2. Again, this ranking rule takes precedence over the other rules. 
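Taken together, the ranking rules above can be thought of as a lexicographic comparison over a few properties of each overload choice, where the earlier rules dominate the later ones. The following Java sketch is purely illustrative: the class, its fields, and the exact tie-breaking order are assumptions made for this example, not Tifig's actual solution type.

```java
import java.util.Comparator;

// Illustrative model of an overload choice; "smaller" compares as "better".
// All names and the rule ordering are assumptions, not Tifig's real code.
final class OverloadChoice {
    final boolean fromProtocolExtension; // regular methods beat protocol extension methods
    final int protocolDepth;             // higher = declared in a more derived protocol
    final int ignoredParameters;         // fewer ignored (defaulted/variadic) parameters wins
    final int specializationRank;        // higher = more specialized parameter types

    OverloadChoice(boolean fromProtocolExtension, int protocolDepth,
                   int ignoredParameters, int specializationRank) {
        this.fromProtocolExtension = fromProtocolExtension;
        this.protocolDepth = protocolDepth;
        this.ignoredParameters = ignoredParameters;
        this.specializationRank = specializationRank;
    }

    // Rules applied in precedence order; an earlier rule dominates all later ones.
    static final Comparator<OverloadChoice> RANKING =
            Comparator.comparing((OverloadChoice c) -> c.fromProtocolExtension) // false < true
                      .thenComparing(c -> -c.protocolDepth)       // more derived protocol wins
                      .thenComparing(c -> c.ignoredParameters)    // fewer ignored parameters wins
                      .thenComparing(c -> -c.specializationRank); // more specialized wins

    public static void main(String[] args) {
        // A regular method with one ignored parameter still beats a protocol
        // extension method with none, because the first rule dominates.
        OverloadChoice regular = new OverloadChoice(false, 0, 1, 0);
        OverloadChoice extension = new OverloadChoice(true, 0, 0, 0);
        System.out.println(RANKING.compare(regular, extension) < 0); // prints "true"
    }
}
```

This mirrors the examples above: in Listing 6.52 the "regular over protocol extension" rule dominates the "fewer ignored parameters" rule, and in Listing 6.53 the "derived protocol" rule likewise dominates.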
6.6.12 Contextual Type Constraints Some expressions in Swift have a contextual type from their enclosing statement or declaration. These contextual types can influence type checking and overload resolution, because the type-check pass creates an additional Conversion constraint which ensures that the type of the expression is convertible to the corresponding contextual type. The following subsections show a few examples. Variable Declarations If a variable declaration has an explicit type annotation and an initializer expression, the type in the type annotation is considered to be the contextual type of the initializer expression. An example of this is shown in Listing 6.54:

Listing 6.54: Contextual Type in Variable Declaration

func f() -> Int { return 42 }
func f() -> String { return "test" }

let x: Int = f() // picks f: () -> Int due to contextual type constraint

Return Statements The contextual type of the expression in a return statement is the return type of the enclosing function. Note that a return statement can also occur in a computed property or an observed property. In that case, the type of the corresponding property is considered to be the contextual type. An example is shown in Listing 6.55:

Listing 6.55: Contextual Type in Return Statement

func f() -> Int { return 42 }
func f() -> String { return "test" }

func g() -> String {
    return f() // picks f: () -> String due to contextual type constraint
}

Boolean Conditions The expression in a boolean condition has a contextual type of Bool. Note that boolean conditions can occur in if, guard, while and repeat-while statements. An example is shown in Listing 6.56:

Listing 6.56: Contextual Type in Boolean Condition

func f() -> Int { return 42 }
func f() -> Bool { return true }

// picks f: () -> Bool due to contextual type constraint
if f() {
    print("is here")
}

Where Clauses The expression in a where clause has a contextual type of Bool.
Note that where clauses can occur in switch, for and do statements. An example is shown in Listing 6.57:

Listing 6.57: Contextual Type in Where Clause

func f() -> Int { return 42 }
func f() -> Bool { return true }

switch (1, 2) {
case let (x, y) where f(): break // picks f: () -> Bool due to contextual type constraint
default: break
}

Throw Statements The expression in a throw statement needs to conform to the standard library protocol Error. Thus, the expression has a contextual type of Error. An example is shown in Listing 6.58:

Listing 6.58: Contextual Type in Throw Statement

enum MyError: Error {
    case error1
    case error2
}

func f() -> Int { return 42 }
func f() -> MyError { return .error1 }

func g() throws {
    throw f() // picks f: () -> MyError due to contextual type constraint
}

6.6.13 Pattern Matching Section 3.1.12 showed how pattern matching works in Swift. The type checker needs to verify whether a pattern is valid for the type of the given value. For example, a tuple pattern is only valid for a tuple value that has the same number of elements. After the type checker has determined that a pattern is valid for a type, it also needs to recursively verify the nested patterns. For identifier patterns, the type checker sets the type of the identifier name's binding. For expression patterns, it type checks the expression pattern ~= type. This way, the pattern matching mechanism is extensible since the user can define a custom kind of pattern by providing a corresponding overload of the ~= operator.
An example of this is shown in Listing 6.59:

Listing 6.59: Custom ~= operator overload

func ~=<T>(pattern: (T) -> Bool, value: T) -> Bool {
    return pattern(value)
}

func greaterThan<T: Comparable>(_ a: T) -> (T) -> Bool {
    return { $0 > a }
}

let x = 11
switch x {
case greaterThan(10): print("x > 10")
default: print("x <= 10")
}

This example defines an overload of the ~= operator which takes a pattern that is a predicate function and applies the predicate to the given value. Additionally, it defines the higher-order function greaterThan(). This function takes a parameter a and returns a predicate that returns true if its argument is bigger than a. This way we can use the expression greaterThan(10) as a pattern [Beg15]. 6.6.14 Conversions The various type-checking examples that were shown in this chapter contained a lot of conversion constraints (e.g., Conversion, ArgumentConversion, OperatorArgumentConversion). These constraints consist of two types and convey to the constraint system that the first type must be convertible to the second type. The conversions are implicit, which means that no explicit casting / coercion syntax is necessary. The following examples show various kinds of implicit conversions that are valid in Swift: • A type T is convertible to itself. An example of this is shown in Listing 6.60:

Listing 6.60: Conversion from Int to Int

let x = 2 // x is of type Int
let y: Int = x

• A type T is convertible to the existential type Any. An example of this is shown in Listing 6.61:

Listing 6.61: Conversion from Int to Any

let x = 2 // x is of type Int
let y: Any = x

• A class type T is convertible to the existential type AnyObject.
An example of this is shown in Listing 6.62: Listing 6.62: Conversion from class type C to AnyObject 1 2 3 class C {} let x = C() // x is of type C let y: AnyObject = x 118 6 Indexer • A type T that conforms to the Hashable protocol is convertible to the existential type AnyHashable. An example of this is shown in Listing 6.63: Listing 6.63: Conversion from Int to AnyHashable 1 2 let x = 2 // x is of type Int let y: AnyHashable = x • A nominal type T is convertible to a protocol type that it conforms to. An example of this is shown in Listing 6.64: Listing 6.64: Conversion from nominal type to protocol type 1 2 3 4 protocol P {} struct S: P {} let x = S() let y: P = x // x is of type S • A nominal type T is convertible to a protocol composition type if it conforms to all the protocols in the protocol composition type. An example of this is shown in Listing 6.65: Listing 6.65: Conversion from nominal type to protocol composition type 1 2 3 4 5 protocol P1 {} protocol P2 {} struct S: P1, P2 {} let x = S() // x is of type S let y: P1 & P2 = x • A class type T is convertible to a class type that it directly or indirectly inherits from. An example of this is shown in Listing 6.66: Listing 6.66: Conversion from class type to base class type 1 2 3 4 class B {} class C: B {} let x = C() let y: B = x // x is of type C • A function type In1 -> Out1 is convertible to a function type In2 -> Out2 if In2 is convertible to In1 (contravariance) and Out1 is convertible to Out2 (covariance). An example of this is shown in Listing 6.67: Listing 6.67: Conversion from one function type to another 1 2 let x = { (_: Any) in 2 } let y: (Int) −> Any = x // x is of type (Any) −> Int • A type T1 is convertible to a function type @autoclosure () -> T2 if T1 is convertible to T2. Note that the @autoclosure attribute is only allowed in parameter types. 
An example of this is shown in Listing 6.68: Listing 6.68: Conversion from Int to @autoclosure () -> Any 1 2 3 func f(_ x: @autoclosure () −> Any) {} let x = 42 // x is of type Int f(x) • A type T1 is convertible to T2? if T1 is convertible to T2. An example of this is shown in Listing 6.69: 119 6 Indexer Listing 6.69: Conversion from Int to Any? 1 2 let x = 2 let y: Any? = x // x is of type Int • A type T1? is convertible to T2? if T1 is convertible to T2. An example of this is shown in Listing 6.70: Listing 6.70: Conversion from Int? to Any? 1 2 let x = Optional(2) let y: Any? = x // x is of type Int? • A type Array<T1> is convertible to Array<T2> if T1 is convertible to T2. An example of this is shown in Listing 6.71: Listing 6.71: Conversion from Array<Int> to Array<Any> 1 2 let x = [1, 2, 3] let y: Array<Any> = x // x is of type Array<Int> Note that this kind of covariance also works with Set and Dictionary. But these types are special cases and other generic types in Swift are invariant. 120 6 Indexer 6.7 Testing Like the parser, the indexer is tested with a comprehensive set of automated tests and the Swift code that is supposed to be tested is provided in the form of comments above the individual JUnit test methods [jun17]. The standard library is parsed and indexed before the first test case is executed. Afterwards, its public members are available in every test case. This is important because core types (e.g., Int, Bool) and operators (e.g., +, &&) are declared in the standard library. There are three different kinds of indexer test cases: single-file test cases, multi-file test cases and multi-module test cases. 6.7.1 Single-File Test Cases For single-file test cases there is only one comment above each test method which contains the contents of a standalone Swift file. 
An example of such a test case is shown in Listing 6.72: Listing 6.72: Example of a single-file indexer test case 1 2 3 4 5 6 7 8 9 10 11 12 public class FunctionBindingTests extends SingleFileIndexerTestCase { // func £f£(x: Int) {} // func f(y: Int) {} // f(x: 0) @Test public void testOverloadResolution() { final IBinding fBinding = getLastOccurrence("f").getBinding(); assertBindingProperties(AccessLevel.Internal, 0, fBinding); } // other test cases } All single-file indexer test cases inherit from the superclass SingleFileIndexerTestCase which implements a few helper methods. In the example above, the test code defines two free functions called f() that only differ in their parameter names. Additionally, there is a function call f(x: 0). Before each test method is executed, the corresponding test code is parsed and indexed. Note that we don’t care about the AST in this test, because the parser is already tested with corresponding parser tests as described in section 5.4. Instead, the goal of this test is to ensure that the name f is resolved to the correct function binding based on the argument label that is provided in the function call. With the call to the helper method getLastOccurrence(), we first obtain a reference to the last Name node with the name f. Note that this corresponds to the f in the function call f(x: 0). Then, we get the binding that this name was resolved to. Finally, we test a few properties of the binding with a call to the helper method assertBindingProperties(). We assert that the access level of the binding is internal and that the definition name of the binding is located at marker position 0. A pair of £ signs in the test code comment indicates a marker. The test code can contain any number of markers each of which has an index starting at 0. Note that these markers are removed from the test code before it is parsed. 
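The marker handling just described can be sketched with a small helper that strips the £ pairs and records each marker's position in the cleaned source. This is an illustrative reimplementation, not Tifig's actual test infrastructure; the class name and structure are made up.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: remove £...£ marker pairs from test code and record the
// start offset of each marked region relative to the cleaned source. The marker
// index corresponds to the position in the markerOffsets list, starting at 0.
final class MarkerExtractor {
    final String cleanedSource;
    final List<Integer> markerOffsets = new ArrayList<>();

    MarkerExtractor(String source) {
        final StringBuilder sb = new StringBuilder();
        boolean inMarker = false;
        for (int i = 0; i < source.length(); i++) {
            final char c = source.charAt(i);
            if (c == '£') {
                if (!inMarker) {
                    markerOffsets.add(sb.length()); // record where the marked region starts
                }
                inMarker = !inMarker;
            } else {
                sb.append(c);
            }
        }
        cleanedSource = sb.toString();
    }

    public static void main(String[] args) {
        MarkerExtractor m = new MarkerExtractor("func £f£(x: Int) {}");
        System.out.println(m.cleanedSource);  // prints "func f(x: Int) {}"
        System.out.println(m.markerOffsets);  // prints "[5]"
    }
}
```

The cleaned source is what gets handed to the parser, while the recorded offsets are what assertions such as assertBindingProperties() compare against.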
121 6 Indexer Listing 6.73 shows a more complex test case which introduces two new helper methods: Listing 6.73: Example of a single-file indexer test case 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 public class EnumTypeBindingTests extends SingleFileIndexerTestCase { // enum E<T> { // case £one£(T) // } // let £e1£ = E.£one£(2) // let £e2£ = E.£one£("test") @Test public void testGenericEnumType() { final IBinding oneBinding1 = getNameAtMarkerIndex(2).getBinding(); assertBindingProperties(null, 0, oneBinding1); final IBinding e1Binding = getNameAtMarkerIndex(1).getBinding(); assertBindingProperties(AccessLevel.Internal, 1, e1Binding); assertEqualType("E<Int>", e1Binding.getType()); final IBinding oneBinding2 = getNameAtMarkerIndex(4).getBinding(); assertBindingProperties(null, 0, oneBinding2); final IBinding e2Binding = getNameAtMarkerIndex(3).getBinding(); assertBindingProperties(AccessLevel.Internal, 3, e2Binding); assertEqualType("E<String>", e2Binding.getType()); } // other test cases } Firstly, there is the method getNameAtMarkerIndex() which allows us to obtain any Name node that is marked. Secondly, the assertEqualType() method provides a convenient way to compare an index type (i.e., a type that implements the IType interface) to an expected type supplied in the form of a String. Note that the access level of enum cases is set to null, because they cannot have an explicit access level modifier and they are accessible anywhere the enclosing enum type is accessible. 6.7.2 Multi-File Test Cases For multi-file test cases there are two comments above every test method each of which contains the contents of an individual Swift file. 
An example of such a test case is shown in Listing 6.74: Listing 6.74: Example of a multi-file indexer test case 1 2 3 4 5 6 7 8 9 10 11 12 13 public class FunctionBindingTests extends MultiFileIndexerTestCase { // fileprivate func f() {} // func £f£(x: Int = 0) {} // f() @Test public void testAccessLevelFileprivate() { final IBinding fBinding = getLastOccurrence("f", 1).getBinding(); assertBindingProperties(AccessLevel.Internal, 0, 1, fBinding); } // other test cases } 122 6 Indexer All multi-file indexer test cases inherit from the superclass MultiFileIndexerTestCase which implements a few helper methods. These are the same helper methods that were shown in subsection 6.7.1 but some of them take an additional argument which indicates the file index. The upper comment has file index 0 and the comment below has file index 1. The example above tests function overload resolution across two files. Normally, the function in file 0 would be preferred over the function in file 1 because it is a better match for the function call f(). However, in this case the function in file 0 is not accessible from file 1 because it has access level fileprivate. Thus, the function call f() resolves to the function in file 1. 6.7.3 Multi-Module Test Cases For multi-module test cases there are two comments above every test method each of which contains the contents of a Swift file that belongs to a separate module. 
An example of such a test case is shown in Listing 6.75: Listing 6.75: Example of a multi-module indexer test case 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 public class StructTypeBindingTests extends MultiModuleIndexerTestCase { // public struct S { // public init() {} // public func £f£() {} // } // import Module0 // let £s£ = S() // s.f() @Test public void testMethodCallOnStructType() { final IBinding fBinding = getLastOccurrence("f", 1).getBinding(); assertBindingProperties(AccessLevel.Public, 0, 0, fBinding); final IBinding sBinding = getNameAtMarkerIndex(0, 1).getBinding(); assertEqualType("S", sBinding.getType()); } // other test cases } All multi-module indexer test cases inherit from the superclass MultiModuleIndexerTestCase which implements a few helper methods. These are the same helper methods that were shown in subsection 6.7.1 but some of them take an additional argument which indicates the module index. The upper comment has module index 0 and the comment below has module index 1. The example above tests the ability to use a struct type that is declared in a different module. To do that, we need to use the access level modifier public for the struct type as well as for the members that we want to use. Additionally, we need to import the module that contains the corresponding type. In a multi-module test case, the two modules are called Module0 and Module1, respectively. 123 6 Indexer 6.8 Implementation Status While the indexer already works quite well for simple programs, there are still a few important pieces that are currently incomplete or missing: • Improve Generics Support Tifig supports indexing of simple generic types and functions. However, especially protocols with associated types are not yet fully supported. Unfortunately, the standard library contains a lot of code that makes extensive use of this feature. Thus, a lot of the code that often occurs in regular Swift programs cannot be indexed yet. 
• For Loops: Currently, for loops cannot be indexed correctly yet. This is because the indexer needs to extract an associated type from the given sequence in order to determine the element type of the sequence. Unfortunately, this is not yet supported by the indexer.
• Partial Imports: Instead of importing an entire module, it is also possible to import only a specific declaration of the module. This is not yet supported by Tifig's indexer.
• Lvalue vs. Rvalue: The indexer should distinguish between lvalues and rvalues. In Swift, an lvalue is an expression that can be assigned to or passed to an inout parameter. Every other expression is considered to be an rvalue [Swi17b].
• Add Support for Pointers: Swift has support for pointers, but Tifig's indexer does not implement this yet.
• Error Handling: The errors reported by the indexer are still too imprecise. Additionally, many kinds of semantic errors are not reported at all. While they are reported by the compiler, it would be nicer if Tifig could display these errors directly in the editor.
• Persistent Index: In the future, it may be worth persisting the index. This way there would be no need to reindex all projects every time a workspace is opened.

7 User Interface This chapter describes the various UI elements that were developed for Tifig. 7.1 Wizards The Tifig IDE contains wizards to create new Swift projects and new files. 7.1.1 Project Wizard The project wizard lets the user create a new Swift project. At the moment, it is still very basic. One can only choose the name of the project and its location in the file system. In the future, it may be extended with more configuration options. The project wizard sets up the initial file structure of the project and opens main.swift in the Swift editor. Additionally, it configures the project with the Swift project nature and switches the workbench to the Swift perspective. This is explained in sections 7.2 and 7.3.
The initial file structure is dictated by the Swift Package Manager [App17c] which is used to build projects in Tifig. Figure 7.1 shows a newly created project with its initial files and folders: Figure 7.1: Initial file structure of a new Swift project Package.swift is the so-called manifest file. It defines the package’s name and its contents. By convention, source files are located in the Sources directory. The main.swift file is special, because it is the only file in the module which can contain top-level statements (all other Swift files can only contain declarations). It is the entry-point for Swift packages with an executable target. 125 7 User Interface 7.1.2 File Wizards In addition to the project wizard, Tifig also has five file wizards. Four of them create a new Swift file with a custom type (a class, a struct, an enum or a protocol). The fifth file wizard creates an empty Swift file. Like with the project wizard, the file wizards only have configuration options to specify the name and the location of the file. Figure 7.2 shows the wizard to create a new struct type: Figure 7.2: Wizard to create a new struct type When the user clicks Finish, a new file called Point.swift is created in the project’s Sources folder. This file contains an empty struct type called Point as shown in Listing 7.1: Listing 7.1: Struct type Point 1 2 3 struct Point { } Plug-in extensions are used to make the wizards available in the usual locations within the workbench (e.g., in the File -> New menu or in the context menu). Figure 7.3 shows the Swift wizards in the context menu: 126 7 User Interface Figure 7.3: Wizards in context menu Note that these wizards only appear in the context menu, if the current project is configured with the Swift project nature or if the active perspective happens to be the Swift perspective. 7.2 Project Nature Project natures allow a plug-in to tag a project as a specific kind of project [ecl17e]. 
The Tifig IDE uses the Swift project nature to add Swift-specific behaviour to projects. When a new project is created with the Swift project wizard, it is automatically configured with the Swift project nature. The nature adds the Swift builder to the project’s build spec. This means that whenever the user triggers a build, Tifig will use the Swift builder to build the project. 7.3 Swift Perspective When a new Swift project is created, the workbench is automatically set to use the Swift perspective. A perspective can configure the layout of the current workbench page. This means that it can set the views that are shown by default as well as configure the action sets that are displayed in the toolbar. Additionally, it can add shortcuts for wizards and views that are often used in this perspective [ecl17d]. By default, the Swift perspective displays the outline view, the problems view and the console view. It also adds shortcuts for the Swift-specific wizards. 127 7 User Interface 7.4 Editor When the user clicks on a Swift source file (file with .swift extension) in the Project Explorer, Tifig opens the file in the Swift editor. The class SwiftEditor is a subclass of TextEditor which is provided by the Eclipse platform. SwiftEditor itself is not very interesting, but it sets up a few other components that implement features which facilitate the editing of Swift source code. These components are described in the following sections. 7.4.1 Auto Indenting During editing, the Swift editor assists the user by automatically indenting the cursor to the correct position based on the code that is being written. Figure 7.4 shows the classes that are involved in this process: Figure 7.4: Classes involved in Auto Indenting The class DefaultIndentLineAutoEditStrategy implements the most basic auto edit behaviour for source code editors and is provided by the Eclipse platform. 
Every time the user enters a line break, it copies the level of indentation that was used on the previous line. That is all it does. The class SwiftAutoIndentStrategy extends this behaviour in two ways. Firstly, if a line break is entered after an opening curly brace ({), it increases the level of indentation, because the user usually wants to indent the statements in a code block or the members of a type declaration (e.g., a class, a struct, an enum). Secondly, once the user types a closing curly brace (}), it automatically reduces the level of indentation to match the level of the corresponding opening curly brace. Note that this implementation has been mostly copied from the Java Editor Example project [jav17]. Whenever the user edits the code, the method customizeDocumentCommand() is called on the SwiftAutoIndentStrategy, passing it a reference to the current IDocument and an instance of the class DocumentCommand. The properties offset, length and text of the document command describe the change that is about to happen. The auto indent strategy can then look at the current document and at the document command and decide to change some of the command's properties in order to customize the code change. It goes without saying that the implementation of the customizeDocumentCommand() method must be very fast. Otherwise the user could experience a lot of lagging in the editing process. 7.4.2 Syntax Highlighting The syntax highlighting process happens in two phases. First the code is divided into several partitions (Partitioning Phase). Then, each partition is split up into tokens, each of which can specify a set of text attributes such as the text color and the font weight (Presentation Reconciliation Phase). A lot of the code described in this section has been directly adopted from a series of articles called "Create a commercial-quality Eclipse IDE" [Dev06].
Partitioning Phase When a Swift source file is opened in Tifig, the SwiftPartitionScanner divides the code into several partitions based on a set of IPredicateRules. Figure 7.5 shows the classes that are involved in this process: Figure 7.5: Classes involved in Partitioning Phase In the Tifig IDE, there are four kinds of partitions: single-line comment, multi-line comment, string literal and default. The class SingleLineRule can be used to recognize partitions that cannot span across multiple lines (e.g., single-line comments and string literals). The class MultiLineRule is used to recognize multi-line comment partitions. The partition type “default” doesn’t require a rule. Instead, everything that is not caught by any of the rules is part of a default partition. The main reason for doing this extra step is to be able to differentiate between code sections during syntax highlighting. For example, we probably don’t want to highlight Swift keywords in a multi-line comment. However, we may want to emphasize Swift documentation markup [App17a] within comments. Thus, it makes sense to partition the code, so that different syntax highlighting rules can be applied for each kind of partition. 129 7 User Interface Another reason is that we don’t want to recompute the syntax highlighting for the whole file each time a character is added or removed. By dividing the code into partitions, the system can efficiently update the syntax highlighting only for the affected regions. Presentation Reconciliation Phase The presentation reconciliation phase is responsible for updating the syntax highlighting in the Swift editor every time the code changes. Figure 7.6 shows the classes that are involved in this process: Figure 7.6: Classes involved in Presentation Reconciliation Phase The PresentationReconciler has a DefaultDamagerRepairer for each type of partition. 
When code changes in a certain partition, the corresponding DefaultDamagerRepairer "computes the damage", which is Eclipse parlance for determining the code regions that are affected by the change. It then "repairs" these regions by updating the syntax highlighting. In order to do that, each DefaultDamagerRepairer has a RuleBasedScanner. The scanner maintains a set of rules that describe how a certain type of partition should be divided into tokens. Each token can have text attributes that specify its text color, font weight, etc. [ecl17f]. For example, the RuleBasedScanner for the default partition type has a rule to recognize Swift keywords. Every time a keyword is recognized, the scanner emits a token with a dark blue, bold font. 7.4.3 Reconciler As was shown in subsection 7.4.2, the PresentationReconciler takes care of updating the syntax highlighting after every code change. In contrast, the Reconciler is responsible for updating the AST and the index, which represent the internal model of the source code. Since parsing Swift source code may take significantly longer than repairing damaged code regions, it is not feasible to run the Reconciler after every code change. Thus, a slightly different approach is necessary. The Reconciler has a 500 ms timer that is restarted after every code change. When the timer reaches 0, it means that the programmer has paused for half a second, which is a good opportunity to start the reconciliation process in a background thread. To update the internal model, the Reconciler invokes the AST Manager, which maintains an AST for each source file. The AST Manager uses the Lexer and the Parser to create an updated AST for the changed source file. After that is done, it notifies the Syntax Error Reconciler and the Outline View about the change, which will in turn update themselves. Finally, the AST Manager invokes the Indexer in order to update the index as well.
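The restartable timer described above amounts to a debounce. The following sketch is illustrative only (the class name and structure are assumptions, not Tifig's implementation); it uses a single-threaded scheduler whose pending task is cancelled and rescheduled on every change.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative debounce: the reconcile job only runs once the user has paused
// for 500 ms, because every code change cancels and reschedules the pending task.
final class DebouncedReconciler {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable reconcileJob;
    private ScheduledFuture<?> pending;

    DebouncedReconciler(Runnable reconcileJob) {
        this.reconcileJob = reconcileJob;
    }

    // Called by the editor after every code change.
    synchronized void codeChanged() {
        if (pending != null) {
            pending.cancel(false); // restart the 500 ms timer
        }
        pending = executor.schedule(reconcileJob, 500, TimeUnit.MILLISECONDS);
    }

    void shutdown() {
        executor.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        DebouncedReconciler reconciler =
                new DebouncedReconciler(() -> System.out.println("reconciling"));
        for (int i = 0; i < 5; i++) {   // five rapid edits...
            reconciler.codeChanged();
            Thread.sleep(50);
        }
        Thread.sleep(800);              // ...trigger only one reconciliation
        reconciler.shutdown();
    }
}
```

In the actual IDE, the scheduled job would additionally have to hand its results back to the UI thread, for the same reason the marker update discussed below must run there.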
Figure 7.7 shows the components that are involved in this reconciliation process: Figure 7.7: Components involved in Reconciliation Process Syntax Error Reconciler The Syntax Error Reconciler is the component that is responsible for updating the syntax error markers in the Swift editor every time the AST changes. First, it deletes all existing markers that indicate a Swift syntax error. Then it visits the AST and adds a new marker for every ProblemNode. This is shown in Listing 7.2:

Listing 7.2: Excerpt from Syntax Error Reconciler

private void updateMarkers(IFile file, SourceFile ast) {
    try {
        file.deleteMarkers(SWIFT_PARSER_PROBLEM_MARKER, true, IResource.DEPTH_INFINITE);
    } catch (final CoreException e) {
        SwiftUIPlugin.logError("Failed to delete syntax error markers.", e);
    }
    ast.accept(new ASTVisitor() {
        @Override
        public int visit(ProblemNode problemNode) {
            try {
                final IMarker marker = file.createMarker(SWIFT_PARSER_PROBLEM_MARKER);
                marker.setAttribute(IMarker.MESSAGE, problemNode.getMessage());
                marker.setAttribute(IMarker.CHAR_START, problemNode.getOffset());
                marker.setAttribute(IMarker.CHAR_END, problemNode.getOffset() + problemNode.getLength());
                marker.setAttribute(IMarker.SEVERITY, IMarker.SEVERITY_ERROR);
            } catch (final CoreException e) {
                SwiftUIPlugin.logError("Failed to add syntax error marker.", e);
            }
            return PROCESS_CONTINUE;
        }
    });
}

Note that the updateMarkers() method needs to be called on the main thread since background threads are not allowed to update the UI [thr17]. Outline View The outline view is a simple tree view that shows an outline of the source file that is currently being edited. It gives the user a convenient overview of the functions, properties and types declared in the file. By clicking on a name in the outline view, the user can jump directly to the declaration of that program entity.
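As a rough sketch of how such an outline tree can be rendered from declaration nodes, consider the following. The interfaces here are simplified stand-ins for the IDecl and IDeclContainer types described in the text, not Tifig's actual AST types, and the example assumes a recent JDK (records, pattern matching for instanceof).

```java
import java.util.List;

// Simplified stand-ins for the outline element types described in the text.
interface IOutlineElement { String name(); }
interface IDecl extends IOutlineElement {}
interface IDeclContainer extends IOutlineElement { List<IOutlineElement> members(); }

record Decl(String name) implements IDecl {}
record Container(String name, List<IOutlineElement> members) implements IDeclContainer {}

final class Outline {
    // Render the declaration tree as an indented outline, one entry per line.
    static void render(IOutlineElement element, int depth, StringBuilder out) {
        out.append("  ".repeat(depth)).append(element.name()).append('\n');
        if (element instanceof IDeclContainer container) {
            for (IOutlineElement member : container.members()) {
                render(member, depth + 1, out);
            }
        }
    }

    public static void main(String[] args) {
        // A source file containing a struct with two properties plus a free property.
        IOutlineElement file = new Container("Point.swift", List.of(
                new Container("Point", List.of(new Decl("x"), new Decl("y"))),
                new Decl("origin")));
        StringBuilder out = new StringBuilder();
        render(file, 0, out);
        System.out.print(out);
    }
}
```

The real outline view hands the same parent/child relationships to a JFace tree content provider instead of rendering strings, but the traversal is the same idea.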
Figure 7.8 shows a screenshot of the Swift editor with the outline view on its right-hand side.

Figure 7.8: Swift Editor with Outline View

The outline view looks for IDecl and IDeclContainer nodes in the AST. IDecl nodes are the leaves in the outline tree view (e.g., properties, functions, methods). IDeclContainer nodes are custom types (e.g., classes, structs, etc.) and type extensions. Note that SourceFile, the node type of the AST’s root node, is also an IDeclContainer. Thus, not every IDeclContainer is also an IDecl.

Hyperlinking

Hyperlinking is a convenient navigation feature that is supported by text editors in the Eclipse platform. It works by registering a hyperlink detector for a specific editor. When the user holds down the Command key (on macOS) or the Control key (on Linux) and hovers the mouse pointer over some text in the editor, the hyperlink detector’s detectHyperlinks() method is called. This method may return zero or more hyperlinks, which are then presented to the user in the UI. When the user clicks on a hyperlink, its open() method is called, which performs some action (e.g., jumping to a specific place in the editor).

Tifig registers a hyperlink detector for the Swift editor, which allows users to quickly jump to the declaration of a specific identifier. To find out whether the mouse is hovering over an identifier, it uses the index. For each file, the index maintains a list of all Name nodes. The hyperlink detector determines whether the cursor lies within the source region of one of these Name nodes and checks whether the corresponding node is backed by a valid binding. If this is the case, it returns an instance of OpenDeclarationHyperlink. When the user clicks on such a hyperlink, it jumps to the identifier’s declaration. Note that the declaration may be located in a different file, in which case Tifig opens the editor for that file.
Figure 7.9 shows the classes that are involved in this process.

Figure 7.9: Classes involved in Hyperlinking

For operator names, the hyperlink detector returns two hyperlinks: one that points to the declaration of the operator and one that points to the declaration of the operator function. In this case, Eclipse automatically shows a popup which lets the user select the corresponding hyperlink. This is shown in Figure 7.10:

Figure 7.10: Hyperlinks for operator names

If the user clicks on the first hyperlink (Open declaration), Tifig jumps to the declaration of the corresponding operator function on line 3. If the user clicks on the second hyperlink (Open operator declaration), Tifig jumps to the declaration of the ++ operator on line 1.

7.5 Type Information Hover

Type inference can be very convenient because it avoids cluttering up the code with redundant type annotations. However, sometimes the user might not be entirely sure which type is inferred by the compiler. To help with this problem, the Swift editor in Tifig has another feature which allows users to quickly find out the inferred type of an entity simply by hovering over the corresponding name in the source code. It can also be useful to discover the capabilities of an API without having to consult the documentation. For example, Figure 7.11 shows that the print() function which is provided by the standard library has additional optional parameters that may be useful:

Figure 7.11: Type Information Hover

Note that this type information is obtained from the binding of the corresponding Name node.

7.6 Open Type Dialog

Tifig also provides an Open Type Dialog which allows users to quickly jump to the declaration of a specific top-level type. The feature uses the class ElementListSelectionDialog which is provided by the Eclipse platform. The necessary data about the individual types is obtained from the index, and it includes types from the standard library.
The Open Type Dialog can be triggered with the keyboard shortcut Command-Shift-T on macOS or Control-Shift-T on Linux. This opens a small dialog that contains a text field and an alphabetically ordered list of all top-level types. The user can type into the text field to filter the list. The user can then select a type using either the arrow keys or the mouse. When the user presses Enter or clicks on the OK button, the dialog is closed and Tifig jumps to the corresponding type declaration. Figure 7.12 shows a screenshot of the Open Type Dialog:

Figure 7.12: Open Type Dialog

7.7 Builder

The SwiftPackageManagerBuilder is responsible for building Swift projects in the Tifig IDE. Tifig knows to use this particular builder because it is added to the project’s build spec when the SwiftProjectNature is configured (see section 7.2). As its name implies, the builder uses the Swift Package Manager to build projects. Provided that the project is structured according to the conventions of the Swift Package Manager (see subsection 7.1.1), one can simply execute the swift build command in order to build the project. This is exactly what the SwiftPackageManagerBuilder does.

By default, the builder is set to “Build Automatically”, which means that the project is compiled every time the user saves a file. This can be turned off, in which case the user has to trigger builds manually. In order to communicate the build result to the user, Tifig prints the output of the Swift Package Manager to a message console in the console view. An example of this is shown in Figure 7.13:

Figure 7.13: Swift Package Manager Build Console

7.8 Launcher

When a user runs a project in Tifig, the following steps are performed under the hood:

1. Tifig looks for an existing launch configuration for the current project.
The launch configuration must be of type SwiftApplicationLaunchConfigurationType and its PROJECT_NAME attribute must be equal to the name of the project.

• If it finds a valid launch configuration, it proceeds to step 2.

• If no valid launch configuration is found, Tifig looks for executable modules in the current project. By default, a new Swift project in Tifig has a single executable module. The user can create multiple modules by adding subfolders to the Sources folder. Each subfolder that contains a main.swift file is considered to be an executable module, and each subfolder without such a file is considered to be a library module. If there are multiple executable modules, Tifig will present a dialog that lets the user select which executable should be launched. This is shown in Figure 7.14:

Figure 7.14: Module Selection Dialog

In this example there are three modules because there are three subfolders in the Sources folder. The first two modules are executable modules because they contain a main.swift file, which represents the entry point for execution. The third module, called Utils, is a library module and can therefore not be launched.

Once the user selects an executable module from the list (or if there is only one executable module), a new launch configuration is instantiated from the SwiftApplicationLaunchConfigurationType. Its PROJECT_NAME attribute is set to the project name and its MODULE_NAME attribute is set to the name of the corresponding module. Tifig then saves the new launch configuration and proceeds to step 2.

2. The launch configuration’s launch() method is called to initiate the launch. The actual work is done by the launch configuration type’s delegate (ILaunchConfigurationDelegate). It executes the binary in the /.build/debug folder and connects the process to a new process console in the console view. This way the user can see the process’ output and the process can get input from stdin.
Additionally, users can customize launch configurations by selecting Run -> Run Configurations... from the menu bar. At the moment there aren’t a lot of customization options, but this may be extended in the future. Figure 7.15 shows how to specify the arguments that are passed to the program when it is launched:

Figure 7.15: Customizing a Run Configuration

7.9 Implementation Status

The following list describes a few UI features that are currently not yet supported by Tifig and should be added in the future:

• Auto Completion
Auto completion is an important feature that users expect from a modern IDE. Thus, this feature should be added to Tifig as well.

• Code Navigation
Tifig currently supports the code navigation features Open Type and Jump to Definition. Two other code navigation features that are commonly used in Eclipse-based IDEs are Open Call Hierarchy and Open Type Hierarchy. These features should be added to Tifig as well.

• Parse Compiler Output
The warnings and errors that are emitted by the compiler are currently only displayed in the console. In the future, it would be more convenient if Tifig were able to parse the compiler output and display the errors in the form of markers in the editor.

8 Conclusion

This chapter evaluates the project results and mentions further work that could be done in the future to improve the Tifig IDE.

8.1 Results

The following list gives an overview of the main features that were implemented during the term project and the subsequent master thesis:

• Perspective & Wizards
A simple Swift perspective and a few wizards to create new Swift projects and files have been developed.

• Parser
Code that is entered by the user is automatically parsed by a custom Swift parser, and syntax errors are reported in the form of markers in the editor.

• Indexer
After the Swift code is parsed, it is indexed by a custom indexer.
The indexer features a constraint-based type checker that is similar to the type checker in Apple’s official Swift compiler. In addition to the user’s own code, the indexer also indexes the standard library and makes its public symbols available in every project.

• Editor
The Swift Editor consists of several smaller components that implement editor features such as auto indenting and syntax highlighting.

• Code Navigation
The semantic knowledge that is obtained by the indexer allowed for the development of the code navigation features Open Type and Jump to Definition.

• Builder
The builder is still very basic, and it is not yet possible to specify build settings. Thus, there is certainly room for improvement in the future. However, I think that using the Swift Package Manager was the right decision, because it is a simple solution that probably suits most people’s needs.

• Launcher
A simple launcher has been developed as well. One can specify the program arguments in the run configuration and select which executable module should be launched. Other than that, there aren’t a lot of customization options yet.

In addition to the development of these components, I made a public website for the project () where I released several alpha versions of the Tifig IDE.

8.2 Outlook

Overall, I think that my term project and the subsequent master thesis were a success. However, there is still a lot that can be done to improve the existing components, as described in the Implementation Status sections in chapters 4, 5, 6 and 7. Also, there are additional features that are still missing and need to be implemented in the future (e.g., debugging support, refactoring support, etc.).

8.3 Acknowledgements

I would like to express my gratitude to my supervisor Prof. Peter Sommerlad for the useful comments and assistance during the weekly meetings throughout both the term project and the master thesis.
I would also like to thank Silvano Brugnoni for allowing me to include a version of his Eclipse plug-in pASTa (Painless AST Analysis) in the Tifig IDE. Furthermore, the product- and the branding-plugin were adopted from the Cevelop IDE [fS17]. Thanks to the Institute for Software for giving me access to the corresponding source code.

Bibliography

[Alf06] Alfred V. Aho, Monica S. Lam, Ravi Sethi and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. 2006.
[App14] Apple. WWDC 2014 Keynote, June 2014. videos/play/wwdc2014/101/.
[App15] Apple. Protocol-Oriented Programming in Swift, June 2015. https://developer.apple.com/videos/play/wwdc2015/408/.
[App16] Apple. What’s New in Foundation for Swift, June 2016. apple.com/videos/play/wwdc2016/207/.
[App17a] Apple. Markup Formatting Reference, March 2017. apple.com/library/content/documentation/Xcode/Reference/xcode_markup_formatting_ref/.
[App17b] Apple. Swift, March 2017.
[App17c] Apple. Swift Package Manager, March 2017. package-manager/.
[App17d] Apple. Swift Standard Library Operators Reference, March 2017. library_operators.
[App17e] Apple. Type Checker Design and Implementation, March 2017. https://github.com/apple/swift/blob/master/docs/TypeChecker.rst.
[App17f] Apple. Using Swift with Cocoa and Objective-C. 2017.
[App17g] Apple. Xcode, March 2017.
[Beg15] Ole Begemann. Pattern Matching in Swift, September 2015. net/blog/2015/09/swift-pattern-matching/.
[Ben02] Benjamin C. Pierce. Types and Programming Languages. 2002.
[Dev06] Prashant Deva. Create a commercial-quality Eclipse IDE, September 2006. os-ecl-commplgin1/.
[DM82] Luis Damas and Robin Milner. Principal type-schemes for functional programs, 1982.
[E. 94] E. Gamma, R. Helm, R. Johnson and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. 1994.
[ecl17a] Eclipse CDT (C/C++ Development Tooling), March 2017. org/cdt/.
[ecl17b] Eclipse Java development tools (JDT), March 2017. jdt/.
[ecl17c] Eclipse Project, March 2017.
[ecl17d] Perspectives, March 2017. jsp?topic=%2Forg.eclipse.platform.doc.isv%2Fguide%2Fworkbench_perspectives.htm.
[ecl17e] Project natures, March 2017. ?topic=%2Forg.eclipse.platform.doc.isv%2Fguide%2FresAdv_natures.htm.
[ecl17f] Syntax coloring, March 2017. jsp?topic=%2Forg.eclipse.platform.doc.isv%2Fguide%2Feditors_highlighting.htm.
[Fou04] The Apache Software Foundation. Apache License, January 2004.
[fS17] Institute for Software. Cevelop - The C++ IDE for professional developers, March 2017.
[Gal16] Alexis Gallagher. A recipe for Value Semantics (not value types!), December 2016.
[Har96] Harold Abelson, Gerald Jay Sussman and Julie Sussman. Structure and Interpretation of Computer Programs. 1996.
[J.A10] J.A. Bondy and U.S.R. Murty. Graph Theory. 2010.
[jav17] Example - Java Editor, March 2017. index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Fsamples%2Forg.eclipse.ui.examples.javaeditor%2Fdoc-html%2Fui_javaeditor_ex.html.
[jun17] JUnit, March 2017.
[Kre16] Ted Kremenek. Swift 3.0 Released!, September 2016. blog/swift-3-0-released/.
[Kre17] Ted Kremenek. Swift 4 Release Process, February 2017. blog/swift-4-0-release-process/.
[Lat10] Chris Lattner. Initial Swift Commit, July 2010. swift/commit/18844bc65229786b96b89a9fc7739c0fc897905e.
[Lat17a] Chris Lattner. Chris Lattner’s Résumé, March 2017. org/sabre/Resume.html.
[Lat17b] Chris Lattner. Update on the Swift Project Lead, January 2017. Week-of-Mon-20170109/030063.html.
[Mar99] Martin Fowler. Refactoring: Improving the Design of Existing Code. 1999.
[rai17] RAII, March 2017.
[Rob11] Robert Sedgewick and Kevin Wayne. Algorithms. 2011.
[Sap17] A.A. Sapozhenko. Hypergraph, March 2017. encyclopediaofmath.org/index.php/Hypergraph.
[swi15] The Swift.org Blog, December 2015.
[swi17a] Community Guidelines, March 2017.
[Swi17b] Lexicon, March 2017. master/docs/Lexicon.rst.
[Ter10] Terence Parr. Language Implementation Patterns. 2010.
[thr17] Threading issues, March 2017. ?topic=%2Forg.eclipse.platform.doc.isv%2Fguide%2Fswt_threading.htm.
[tjl17a] Function Types. In The Java Language Specification. Oracle, 2017.
[tjl17b] Functional Interfaces. In The Java Language Specification. Oracle, 2017.
[try17] The try-with-resources Statement, March 2017. javase/tutorial/essential/exceptions/tryResourceClose.html.
[tsp17a] About Swift. In The Swift Programming Language. Apple, 2017.
[tsp17b] Automatic Reference Counting. In The Swift Programming Language. Apple, 2017.
[tsp17c] Language Reference. In The Swift Programming Language. Apple, 2017.
[tsp17d] Lexical Structure. In The Swift Programming Language. Apple, 2017.
[tsp17e] The Basics. In The Swift Programming Language. Apple, 2017.
[uni17] Unicode, March 2017.
Flask-MongoAlchemy adds support for MongoDB on Flask using MongoAlchemy. Source code and issue tracking are available at Github. If you want to get started, check out the example source code.
You can easily install using pip:
$ [sudo] pip install Flask-MongoAlchemy
If you prefer, you may use the latest source version by cloning the following git repository:
$ git clone
$ cd flask-mongoalchemy
$ [sudo] python setup.py develop
Make sure you have MongoDB installed to use it.
It is very easy and fun to use Flask-MongoAlchemy to proxy between Python and MongoDB.
All you have to do is create a MongoAlchemy object and use it to declare documents. Here is a complete example:
from flask import Flask
from flask.ext.mongoalchemy import MongoAlchemy

app = Flask(__name__)
app.config['MONGOALCHEMY_DATABASE'] = 'library'
db = MongoAlchemy(app)

class Author(db.Document):
    name = db.StringField()

class Book(db.Document):
    title = db.StringField()
    author = db.DocumentField(Author)
    year = db.IntField()
As you can see, extending the Document is all you need to create a document.
Now you can create authors and books:
>>> from application import Author, Book
>>> mark_pilgrim = Author(name='Mark Pilgrim')
>>> dive = Book(title='Dive Into Python', author=mark_pilgrim, year=2004)
And save them:
>>> mark_pilgrim.save()
>>> dive.save()
If you make any changes on a document, you may call save() again:
>>> mark_pilgrim.name = 'Mark Pilgrim, the author'
>>> mark_pilgrim.save()
And you can remove a document from the database by calling its remove() method:
>>> dive.remove()
Another basic operation is querying for documents. Every document has a query class property. It’s very simple to use it:
>>> mark = Author.query.get('76726')
>>> mark.name = 'Mark Pilgrim, the author'
>>> mark.save()
You also can use the filter method instead of the get() method:
>>> mark = Author.query.filter(Author.name == 'Mark Pilgrim').first()
>>> mark.name = 'Mark Pilgrim, the author'
>>> mark.save()
Do you want to go further? Dive deep into the API docs.
It’s possible to use authentication to connect to a MongoDB server. The authentication can be server based or database based.
The default behavior is to use server based authentication, to use database based authentication, you need to turn off the config value MONGOALCHEMY_SERVER_AUTH (see the next section for more detail on configuration values):
>>> app.config['MONGOALCHEMY_SERVER_AUTH'] = False
The following configuration values are present in Flask-MongoAlchemy:
When MongoAlchemy or init_app() is invoked with only one argument (the Flask instance), a configuration value prefix of MONGOALCHEMY is assumed; this can be overridden with the config_prefix argument, for example:
app = Flask(__name__)

# defaulting to the MONGOALCHEMY prefix
mongo1 = MongoAlchemy(app)

# using another database config
app.config['OTHER_DBNAME'] = 'other'
mongo2 = MongoAlchemy(app, config_prefix='OTHER')
This part of the documentation documents all the public classes and functions in Flask-MongoAlchemy.
Class used to control the MongoAlchemy integration to a Flask application.
You can use this by providing the Flask app on instantiation or by calling an init_app() method an instance object of MongoAlchemy. Here an example of providing the application on instantiation:
app = Flask(__name__)
db = MongoAlchemy(app)
And here calling the init_app() method:
db = MongoAlchemy()

def init_app():
    app = Flask(__name__)
    db.init_app(app)
    return app
This callback can be used to initialize an application for the use with this MongoDB setup. Never use a database in the context of an application not initialized that way or connections will leak.
Base class for custom user documents.
an instance of query_class. Used to query the database for instances of this document.
the query class used. The query attribute is an instance of this class. By default BaseQuery is used.
Removes the document itself from database.
The optional safe argument is a boolean that specifies if the remove method should wait for the operation to complete.
Saves the document itself in the database.
The optional safe argument is a boolean that specifies if the save method should wait for the operation to complete.
Base class for custom user query classes.
This class provides some methods and can be extended to provide a customized query class to a user document.
Here an example:
from flask.ext.mongoalchemy import BaseQuery
from application import db

class MyCustomizedQuery(BaseQuery):

    def get_johns(self):
        return self.filter(self.type.first_name == 'John')

class Person(db.Document):
    query_class = MyCustomizedQuery
    name = db.StringField()
And you will be able to query the Person model this way:
>>> johns = Person.query.get_johns().first()
Note: If you are extending BaseQuery and writing an __init__ method, you should always call this class's __init__ via the super keyword.
Here an example:
class MyQuery(BaseQuery):

    def __init__(self, *args, **kwargs):
        super(MyQuery, self).__init__(*args, **kwargs)
This class is instantiated automatically by Flask-MongoAlchemy, don’t provide anything new to your __init__ method.
Returns the first result of this query, or aborts with 404 if the result doesn’t contain any row
Returns a Document instance from its mongo_id or None if not found
Like get() method but aborts with 404 if not found instead of returning None
Returns per_page items from page page. By default, it will abort with 404 if no items were found and the page was larger than 1. This behaviour can be disabled by setting error_out to False.
Returns a Pagination object.
Internal helper class returned by paginate().
Returns True if a next page exists.
Returns True if a previous page exists.
list of items for the current page
Return a Pagination object for the next page.
The next page number.
current page number
The total number of pages
number of items to be displayed per page
Return a Pagination object for the previous page.
The previous page number.
query object used to create this pagination object.
total number of items matching the query
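To make the relationship between these attributes concrete, here is a stdlib-only sketch of the bookkeeping a Pagination object performs. This is an illustrative model, not Flask-MongoAlchemy's actual implementation; the class name PaginationSketch is invented, while the attribute and property names mirror the documented interface:

```python
import math

class PaginationSketch:
    """Illustrative model of the arithmetic behind a Pagination object."""

    def __init__(self, page, per_page, total):
        self.page = page          # current page number (1-based)
        self.per_page = per_page  # number of items displayed per page
        self.total = total        # total number of items matching the query

    @property
    def pages(self):
        # total number of pages
        return int(math.ceil(self.total / float(self.per_page)))

    @property
    def has_prev(self):
        # True if a previous page exists
        return self.page > 1

    @property
    def has_next(self):
        # True if a next page exists
        return self.page < self.pages

p = PaginationSketch(page=2, per_page=10, total=35)
print(p.pages, p.has_prev, p.has_next)  # 4 True True
```

The real object additionally exposes the items for the current page and prev()/next() helpers that re-run the underlying query.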
iFontDeleteNotify Struct Reference
[2D]
Called before a font is deleted. More...
#include <ivideo/fontserv.h>
Inheritance diagram for iFontDeleteNotify:
Detailed Description
Called before a font is deleted.
You can insert any number of callback routines into the font so that when the font will be destroyed all of them will be called in turn. This can be used by canvas driver, for example, if the canvas driver does some kind of caching for fonts, e.g. OpenGL driver pre-caches the font on a texture, it needs some mechanism to be notified when the font is destroyed to free the cache texture associated with the font.
Definition at line 72 of file fontserv.h.
Member Function Documentation
Before delete.
Implemented in csFontCache::FontDeleteNotify.
The documentation for this struct was generated from the following file:
- ivideo/fontserv.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
To get a job as a front end developer, we need to nail the coding interview.
In this article, we’ll look at some quick warmup questions that everyone should know.
Write a function the reverses a string
We can use JavaScript’s string and array methods to reverse a string easily.
To do this, we can write the following function:
const reverseString = (str) => str.split('').reverse().join('');
The function works by splitting the string into an array of characters, then reversing the array, and joining it back together.
We used the JavaScript string's split method and the array's reverse and join methods.
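A quick sanity check of the function (repeated here so the snippet is self-contained):

```javascript
const reverseString = (str) => str.split('').reverse().join('');

console.log(reverseString('hello')); // 'olleh'
```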
Write a function that filters out numbers from a list.
We can do that with the isNaN function and the array's filter method.
To solve this problem, we can write the following code:
const removeNums = (arr) => arr.filter(a => isNaN(+a));
In the removeNums function, we pass in an array arr, then call filter on it with a callback that returns isNaN(+a), which is true if an item is converted to NaN when we try to convert it to a number.
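For example (note that isNaN(+a) treats numeric strings such as '2' as numbers, so they are filtered out as well):

```javascript
const removeNums = (arr) => arr.filter(a => isNaN(+a));

console.log(removeNums(['x', 1, 'y', '2'])); // ['x', 'y']
```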
Write a function that finds an element inside an unsorted list.
We can use the array's findIndex method to find an element inside an unsorted list.
We write the following function:
const search = (arr, searchItem) => arr.findIndex(a => a === searchItem);
to do the search. The findIndex method returns the index of the first item it finds that meets the condition in the callback.
We can use it as follows:
console.log(search(['foo', 'bar', 1], 'foo'));
to search for the string 'foo'. We should then get 0, since 'foo' is the first entry in the array.
Write a function that showcases the usage of closures.
A closure is a function that retains access to the variables of its enclosing scope, typically created by returning an inner function from an outer one. It's usually used to hide some data inside the outer function to keep it from being exposed to the outside.
We can do that by writing:
const multiply = (first) => {
  let a = first;
  return (b) => {
    return a * b;
  };
}
Then we can call it by writing:
console.log(multiply(2)(3));
which gets us 6.
What is a Promise? Write a function that returns a Promise.
A promise is a piece of asynchronous code that runs in an indeterminate amount of time.
It can have the pending, fulfilled or rejected states. Fulfilled means it’s successful, rejected means that it failed.
An example of a promise is the Fetch API. It returns a promise and we can use it to return a promise ourselves.
For instance, we can write:
const getJoke = async () => {
  const res = await fetch('')
  const joke = await res.json();
  return joke;
}
to get a joke from the Chuck Norris API.
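Since the API URL is elided above, here is a self-contained sketch of producing and consuming a promise, with the network call replaced by a hypothetical mock:

```javascript
// hypothetical stand-in for the fetch-based getJoke() above;
// Promise.resolve() simulates an already-fulfilled async response
const getJokeMock = async () => {
  const joke = await Promise.resolve({ value: 'mock joke' });
  return joke;
};

getJokeMock().then((joke) => console.log(joke.value)); // logs 'mock joke'
```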
Write a function that flattens a list of items.
We can use the array flat method to flatten an array of items.
For example, we can write:
const flatten = arr => arr.flat(Infinity);
to flatten all levels of an array to one level.
We can also loop through each item of an array and recursively flatten nested arrays into one level.
For instance, we can write the following:
const flatten = (arr = []) => {
  let result = [];
  for (let item of arr) {
    if (Array.isArray(item)) {
      result = result.concat(flatten(item));
    } else {
      result = result.concat(item);
    }
  }
  return result;
}
to flatten the array.
The code works by looping through each entry of the array. For entries that are themselves arrays, flatten calls itself recursively and appends the flattened result to result. Otherwise, it just adds the item to the result array. After it has gone through all the levels, it returns result.
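Both versions produce the same output on a deeply nested array; here is the recursive one in action:

```javascript
const flatten = (arr = []) => {
  let result = [];
  for (let item of arr) {
    if (Array.isArray(item)) {
      // recurse into nested arrays and append their flattened contents
      result = result.concat(flatten(item));
    } else {
      result = result.concat(item);
    }
  }
  return result;
};

console.log(flatten([1, [2, [3, [4]]], 5])); // [1, 2, 3, 4, 5]
```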
Write a function that accepts two numbers a and b and returns both the division of a and b and the modulo of a and b.
We can do that by using the / operator to divide a by b, and the % operator to find the remainder when we divide a by b.
To do that, we can write the following function:
const divMod = (a, b) => {
  if (b !== 0) {
    return [a / b, a % b];
  }
  return [0, 0];
}
We check if the divisor is zero, then we do the operations as we mentioned and return the computed items in an array.
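For example, dividing 7 by 2 yields a quotient of 3.5 and a remainder of 1:

```javascript
const divMod = (a, b) => {
  if (b !== 0) {
    return [a / b, a % b];
  }
  // guard against division by zero
  return [0, 0];
};

console.log(divMod(7, 2)); // [3.5, 1]
```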
Conclusion
We can use arrays and string methods as much as possible to make our lives easier.
They’re also more efficient than what we implement from scratch.
In the first part of this three-part tutorial series, we saw how to lay out the template structure in a Flask-based application using Jinja2. We also saw how blocks can be used to leverage the inheritance in templates. In this part, we will cover how to write a custom filter, a custom context processor, and a macro.
Getting Started
I will build upon the catalog application we created in the first part of this series. First I will add a custom Jinja2 context processor to show a descriptive name for each product. Then I will create a custom Jinja2 filter to do the same job as the custom context processor. Then I will demonstrate how to create a custom Jinja2 macro for regular form fields.
Creating a Custom Jinja2 Context Processor
Sometimes, we might want to calculate or process a value directly in the templates. Jinja2 maintains a notion that the processing of logic should be handled in views and not in templates, and thus, it keeps the templates clean. A context processor becomes a handy tool in this case. We can pass our values to a method; this will then be processed in a Python method, and our resultant value will be returned. Therefore, we are essentially just adding a function to the template context (thanks to Python for allowing us to pass around functions just like any other object).
So, let's say we want to add a descriptive name for each product in the format
Category / Product-name. For this, a method needs to be added, which has to be decorated with
@app.context_processor.
@app.context_processor
def some_processor():
    def full_name(product):
        return '{0} / {1}'.format(product['category'], product['name'])
    return {'full_name': full_name}
Technically, a context is just a Python dictionary that can be modified to add and remove values. Any method with the specified decorator should return a dictionary that would update the current application context.
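As a rough, framework-free illustration of this mechanism (a simplified model, not Flask's actual internals; the 'Phones'/'iPhone' sample data is invented), the dictionary returned by the processor is merged into the context, which makes full_name callable from templates:

```python
# simplified model of how a context processor extends the template
# context -- not Flask's actual implementation
def some_processor():
    def full_name(product):
        return '{0} / {1}'.format(product['category'], product['name'])
    return {'full_name': full_name}

# the template context is just a dict that gets updated
context = {'product': {'category': 'Phones', 'name': 'iPhone'}}
context.update(some_processor())

print(context['full_name'](context['product']))  # Phones / iPhone
```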
To use this context processor, just add the following Jinja2 tag in the template.
<h4>{{ full_name(product) }}</h4>
If we add this to that
flask_app/templates/product.html of our application, it would look like:
{% extends 'home.html' %}

{% block container %}
  <div class="top-pad">
    <h4>{{ full_name(product) }}</h4>
    <h1>{{ product['name'] }}
      <small>{{ product['category'] }}</small>
    </h1>
    <h3>$ {{ product['price'] }}</h3>
  </div>
{% endblock %}
The resulting product page would now look like:
Creating a Custom Jinja2 Filter
After looking at the above example, experienced developers might think that it was stupid to use a context processor for the purpose. One can simply write a filter to get the same result; this will make things much cleaner. A filter can be written to display the descriptive name of the product as shown below.
@app.template_filter('full_name')
def full_name_filter(product):
    return '{0} / {1}'.format(product['category'], product['name'])
This filter can be used just like a normal filter, i.e, by adding a
| (pipe) symbol and then the filter name.
{{ product|full_name }}
The above filter would yield the same result as the context processor demonstrated a while back.
To take things to a higher level, let's create a filter which will format the currency based on the current browser's local language. For this, first we need to install a Python package named
ccy.
$ pip install ccy
Now we need to add a method for the currency filter.
import ccy
from flask import request

@app.template_filter('format_currency')
def format_currency_filter(amount):
    currency_code = ccy.countryccy(request.accept_languages.best[-2:])
    return '{0} {1}'.format(currency_code, amount)
To use this filter, we need to add the following in our template:
<h3>{{ product['price']|format_currency }}</h3>
Now the product page would look like:
Creating a Custom Jinja2 Macro for Forms
Macros allow us to write reusable pieces of HTML blocks. They are analogous to functions in regular programming languages. We can pass arguments to macros as we do to functions in Python and then use them to process the HTML block. Macros can be called any number of times, and the output will vary as per the logic inside them. Working with macros in Jinja2 is a very common topic and has a lot of use cases. Here, we will just see how a macro can be created and then used after importing.
One of the most redundant pieces of code in HTML is defining input fields in forms. Most fields share similar code with minor variations in style and so on. The following is a macro that creates input fields when called. The best practice is to define the macro in a separate file for better reusability, for example, _helpers.html:
{% macro render_field(name, class='', value='', type='text') -%}
    <input type="{{ type }}" name="{{ name }}" class="{{ class }}" value="{{ value }}"/>
{%- endmacro %}
Now, this macro should be imported into the file where it will be used:
{% from '_helpers.html' import render_field %}
Then, it can simply be called using the following:
<fieldset>
    {{ render_field('username', 'icon-user') }}
    {{ render_field('password', 'icon-key', type='password') }}
</fieldset>
It is always good practice to define macros in a separate file to keep the code clean and readable. If you need a private macro that cannot be accessed outside the current file, prefix the macro's name with an underscore.
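The import-and-call pattern above can be reproduced with a plain Jinja2 environment, which also makes the macro's output easy to verify. The template names and contents below mirror the examples above but are wired together with an in-memory loader rather than Flask's template folder:

```python
# Standalone sketch of the macro import pattern using plain Jinja2.
from jinja2 import Environment, DictLoader

templates = {
    '_helpers.html': (
        "{% macro render_field(name, class='', value='', type='text') -%}"
        '<input type="{{ type }}" name="{{ name }}" '
        'class="{{ class }}" value="{{ value }}"/>'
        "{%- endmacro %}"
    ),
    'form.html': (
        "{% from '_helpers.html' import render_field %}"
        "{{ render_field('username', 'icon-user') }}"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template('form.html').render())
# <input type="text" name="username" class="icon-user" value=""/>
```

Note that the `-%}` and `{%-` whitespace-control markers keep the macro definition from emitting stray blank lines around each rendered field.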
Conclusion
In this tutorial, we have seen how to write a custom filter, a custom context processor and a custom macro for forms. In the next part of this series, we will see how to implement advanced date and time formatting at template level in Jinja2 using moment.js.
Using MPI-2
Scientific and Engineering Computation
Janusz Kowalik, editor
Data-Parallel Programming on MIMD Computers,
Philip J. Hatcher and Michael J. Quinn, 1991
Unstructured Scientific Computation on Scalable Multiprocessors,
edited by Piyush Mehrotra, Joel Saltz, and Robert Voigt, 1992
Parallel Computational Fluid Dynamics: Implementation and Results,
edited by Horst D. Simon, 1992
Enterprise Integration Modeling: Proceedings of the First International Conference,
edited by Charles J. Petrie, Jr., 1992
The High Performance Fortran Handbook,
Charles H. Koelbel, David B. Loveman, Robert S. Schreiber, Guy L. Steele Jr. and Mary E. Zosel, 1994
PVM: Parallel Virtual Machine–A Users' Guide and Tutorial for Network Parallel Computing,
Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Bob Manchek, and Vaidy Sunderam, 1994
Practical Parallel Programming,
Gregory V. Wilson, 1995
Enabling Technologies for Petaflops Computing,
Thomas Sterling, Paul Messina, and Paul H. Smith, 1995
An Introduction to High-Performance Scientific Computing,
Lloyd D. Fosdick, Elizabeth R. Jessup, Carolyn J. C. Schauble, and Gitta Domik, 1995
Parallel Programming Using C++,
edited by Gregory V. Wilson and Paul Lu, 1996
Using PLAPACK: Parallel Linear Algebra Package,
Robert A. van de Geijn, 1997
Fortran 95 Handbook,
Jeanne C. Adams, Walter S. Brainerd, Jeanne T. Martin, Brian T. Smith, Jerrold L. Wagener, 1997
MPI—The Complete Reference: Volume 1, The MPI Core,
Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra, 1998
MPI—The Complete Reference: Volume 2, The MPI-2 Extensions,
William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg,
William Saphir, and Marc Snir, 1998
A Programmer's Guide to ZPL,
Lawrence Snyder, 1999
How to Build a Beowulf,
Thomas L. Sterling, John Salmon, Donald J. Becker, and Daniel F. Savarese, 1999
Using MPI: Portable Parallel Programming with the Message-Passing Interface, second edition,
William Gropp, Ewing Lusk, and Anthony Skjellum, 1999
Using MPI-2: Advanced Features of the Message-Passing Interface,
William Gropp, Ewing Lusk, and Rajeev Thakur, 1999
Using MPI-2
Advanced Features of the Message-Passing Interface
William Gropp
Ewing Lusk
Rajeev Thakur
© 1999
Gropp, William.
Using MPI-2: advanced features of the message-passing interface /
William Gropp, Ewing Lusk, Rajeev Thakur.
p. cm.—(Scientific and engineering computation)
Includes bibliographical references and index.
ISBN 0-262-57133-1 (pb.: alk. paper)
1. Parallel programming (Computer science). 2. Parallel computers—
Programming. 3. Computer interfaces. I. Lusk, Ewing. II. Thakur,
Rajeev. III. Title. IV. Series.
QA76.642.G762 1999
005.2'75–dc21 99-042972
CIP
To Christopher Gropp, Brigid Lusk, and Pratibha and Sharad Thakur
Contents

Series Foreword  xv
Preface  xvii

1 Introduction  1
1.1 Background  1
1.1.1 Ancient History  1
1.1.2 The MPI Forum  2
1.1.3 The MPI-2 Forum  3
1.2 What's New in MPI-2?  4
1.2.1 Parallel I/O  5
1.2.2 Remote Memory Operations  6
1.2.3 Dynamic Process Management  7
1.2.4 Odds and Ends  7
1.3 Reading This Book  9

2 Getting Started with MPI-2  11
2.1 Portable Process Startup  11
2.2 Parallel I/O  12
2.2.1 Non-Parallel I/O from an MPI Program  13
2.2.2 Non-MPI Parallel I/O from an MPI Program  15
2.2.3 MPI I/O to Separate Files  16
2.2.4 Parallel MPI I/O to a Single File  19
2.2.5 Fortran 90 Version  21
2.2.6 Reading the File with a Different Number of Processes  22
2.2.7 C++ Version  24
2.2.8 Other Ways to Write to a Shared File  28
2.3 Remote Memory Access  29
2.3.1 The Basic Idea: Memory Windows  30
2.3.2 RMA Version of cpi  30
2.4 Dynamic Process Management  36
2.4.1 Spawning Processes  37
2.4.2 Parallel cp: A Simple System Utility  38
2.5 More Info on Info  47
2.5.1 Motivation, Description, and Rationale  47
2.5.2 An Example from Parallel I/O  47
2.5.3 An Example from Dynamic Process Management  48
2.6 Summary  50

3 Parallel I/O  51
3.1 Introduction  51
3.2 Using MPI for Simple I/O  51
3.2.1 Using Individual File Pointers  52
3.2.2 Using Explicit Offsets  55
3.2.3 Writing to a File  59
3.3 Noncontiguous Accesses and Collective I/O  59
3.3.1 Noncontiguous Accesses  60
3.3.2 Collective I/O  64
3.4 Accessing Arrays Stored in Files  67
3.4.1 Distributed Arrays  68
3.4.2 A Word of Warning about Darray  71
3.4.3 Subarray Datatype Constructor  72
3.4.4 Local Array with Ghost Area  74
3.4.5 Irregularly Distributed Arrays  78
3.5 Nonblocking I/O and Split Collective I/O  81
3.6 Shared File Pointers  83
3.7 Passing Hints to the Implementation  85
3.8 Consistency Semantics  89
3.8.1 Simple Cases  89
3.8.2 Accessing a Common File Opened with MPI_COMM_WORLD  91
3.8.3 Accessing a Common File Opened with MPI_COMM_SELF  94
3.8.4 General Recommendation  95
3.9 File Interoperability  95
3.9.1 File Structure  96
3.9.2 File Data Representation  97
3.9.3 Use of Datatypes for Portability  98
3.9.4 User-Defined Data Representations  100
3.10 Achieving High I/O Performance with MPI  101
3.10.1 The Four "Levels" of Access  101
3.10.2 Performance Results  105
3.10.3 Upshot Graphs  106
3.11 An Astrophysics Example  112
3.11.1 ASTRO3D I/O Requirements  112
3.11.2 Implementing the I/O with MPI  114
3.11.3 Header Issues  116
3.12 Summary  118

4 Understanding Synchronization  119
4.1 Introduction  119
4.2 Synchronization in Message Passing  119
4.3 Comparison with Shared Memory  127
4.3.1 Volatile Variables  129
4.3.2 Write Ordering  130
4.3.3 Comments  131

5 Introduction to Remote Memory Operations  133
5.1 Introduction  135
5.2 Contrast with Message Passing  136
5.3 Memory Windows  139
5.3.1 Hints on Choosing Window Parameters  141
5.3.2 Relationship to Other Approaches  142
5.4 Moving Data  142
5.4.1 Reasons for Using Displacement Units  146
5.4.2 Cautions in Using Displacement Units  147
5.4.3 Displacement Sizes in Fortran  148
5.5 Completing Data Transfers  148
5.6 Examples of RMA Operations  150
5.6.1 Mesh Ghost Cell Communication  150
5.6.2 Combining Communication and Computation  164
5.7 Pitfalls in Accessing Memory  169
5.7.1 Atomicity of Memory Operations  169
5.7.2 Memory Coherency  171
5.7.3 Some Simple Rules for RMA  171
5.7.4 Overlapping Windows  173
5.7.5 Compiler Optimizations  173
5.8 Performance Tuning for RMA Operations  175
5.8.1 Options for MPI_Win_create  175
5.8.2 Options for MPI_Win_fence  177

6 Advanced Remote Memory Access  181
6.1 Introduction  181
6.2 Lock and Unlock  181
6.2.1 Implementing Blocking, Independent RMA Operations  183
6.3 Allocating Memory for MPI Windows  184
6.3.1 Using MPI_Alloc_mem from C/C++  184
6.3.2 Using MPI_Alloc_mem from Fortran  185
6.4 Global Arrays  185
6.4.1 Create and Free  188
6.4.2 Put and Get  192
6.4.3 Accumulate  194
6.5 Another Version of NXTVAL  194
6.5.1 The Nonblocking Lock  197
6.5.2 A Nonscalable Implementation of NXTVAL  197
6.5.3 Window Attributes  201
6.5.4 A Scalable Implementation of NXTVAL  204
6.6 An RMA Mutex  208
6.7 The Rest of Global Arrays  210
6.7.1 Read and Increment  210
6.7.2 Mutual Exclusion for Global Arrays  210
6.7.3 Comments on the MPI Version of Global Arrays  212
6.8 Differences between RMA and Shared Memory  212
6.9 Managing a Distributed Data Structure  215
6.9.1 A Shared-Memory Distributed List Implementation  215
6.9.2 An MPI Implementation of a Distributed List  216
6.9.3 Handling Dynamically Changing Distributed Data Structures  220
6.9.4 An MPI Implementation of a Dynamic Distributed List  224
6.10 Compiler Optimization and Passive Targets  225
6.11 Scalable Synchronization  228
6.11.1 Exposure Epochs  229
6.11.2 The Ghost-Point Exchange Revisited  229
6.11.3 Performance Optimizations for Scalable Synchronization  231
6.12 Summary  232

7 Dynamic Process Management  233
7.1 Introduction  233
7.2 Creating New MPI Processes  233
7.2.1 Intercommunicators  234
7.2.2 Matrix-Vector Multiplication Example  235
7.2.3 Intercommunicator Collective Operations  238
7.2.4 Intercommunicator Point-to-Point Communication  239
7.2.5 Finding the Number of Available Processes  242
7.2.6 Passing Command-Line Arguments to Spawned Programs  245
7.3 Connecting MPI Processes  245
7.3.1 Visualizing the Computation in an MPI Program  247
7.3.2 Accepting Connections from Other Programs  249
7.3.3 Comparison with Sockets  251
7.3.4 Moving Data between Groups of Processes  253
7.3.5 Name Publishing  254
7.4 Design of the MPI Dynamic Process Routines  258
7.4.1 Goals for MPI Dynamic Process Management  258
7.4.2 What MPI Did Not Standardize  260

8 Using MPI with Threads  261
8.1 Thread Basics and Issues  261
8.1.1 Thread Safety  262
8.1.2 Threads and Processes  263
8.2 MPI and Threads  263
8.3 Yet Another Version of NXTVAL  266
8.4 Implementing Nonblocking Collective Operations  268
8.5 Mixed-Model Programming: MPI for SMP Clusters  269

9 Advanced Features  273
9.1 Defining New File Data Representations  273
9.2 External Interface Functions  275
9.2.1 Decoding Datatypes  277
9.2.2 Generalized Requests  279
9.2.3 Adding New Error Codes and Classes  285
9.3 Mixed-Language Programming  289
9.4 Attribute Caching  292
9.5 Error Handling  295
9.5.1 Error Handlers  295
9.5.2 Error Codes and Classes  297
9.6 Topics Not Covered in This Book  298

10 Conclusions  301
10.1 New Classes of Parallel Programs  301
10.2 MPI-2 Implementation Status  301
10.2.1 Vendor Implementations  301
10.2.2 Free, Portable Implementations  302
10.2.3 Layering  302
10.3 Where Does MPI Go from Here?  302
10.3.1 More Remote Memory Operations  303
10.3.2 More on Threads  303
10.3.3 More Language Bindings  304
10.3.4 Interoperability of MPI Implementations  304
10.3.5 Real-Time MPI  304
10.4 Final Words  304

A Summary of MPI-2 Routines and Their Arguments  307
B MPI Resources on the World Wide Web  355
C Surprises, Questions, and Problems in MPI  357
D Standardizing External Startup with mpiexec  361

References  365
Subject Index  373
Function and Term Index  379
Series Foreword
The Scientific and Engineering Computation series is intended to help scientists and engineers understand the current world of advanced computation and to anticipate
future developments that will affect their computing environments and open up new capabilities and modes of computation.
This book describes how to use advanced features of the Message-Passing Interface (MPI), a communication library
specification for both parallel computers and workstation networks. MPI has been developed as a community standard for
message passing and related operations. Its adoption by both users and implementers has provided the parallel-programming
community with the portability and features needed to develop application programs and parallel libraries that will tap the
power of today's (and tomorrow's) high-performance computers.
JANUSZ S. KOWALIK
Preface
MPI (Message-Passing Interface) is a standard library interface for writing parallel programs. MPI was developed in two
phases by an open forum of parallel computer vendors, library writers, and application developers. The first phase took place in
1993–1994 and culminated in the first release of the MPI standard, which we call MPI-1. A number of important topics in
parallel computing had been deliberately left out of MPI-1 in order to speed its release, and the MPI Forum began meeting
again in 1995 to address these topics, as well as to make minor corrections and clarifications to MPI-1 that had been discovered
to be necessary. The MPI-2 Standard was released in the summer of 1997. The official Standard documents for MPI-1 (the
current version as updated by the MPI-2 forum is 1.2) and MPI-2 are available on the Web at. More
polished versions of the standard documents are published by MIT Press in the two volumes of MPI—The Complete Reference
[27, 79].
These official documents and the books that describe them are organized so that they will be useful as reference works. The
structure of the presentation is according to the chapters of the standard, which in turn reflects the subcommittee structure of
the MPI Forum.
In 1994, two of the present authors, together with Anthony Skjellum, wrote Using MPI: Portable Programming with the
Message-Passing Interface [31], a quite differently structured book on MPI-1, taking a more tutorial approach to the material.
A second edition [32] of that book has now appeared as a companion to this one, covering the most recent additions and
clarifications to the material of MPI-1, and bringing it up to date in various other ways as well. This book takes the same
tutorial, example-driven approach to its material that Using MPI does, applying it to the topics of MPI-2. These topics include
parallel I/O, dynamic process management, remote memory operations, and external interfaces.
About This Book
Following the pattern set in Using MPI, we do not follow the order of chapters in the MPI-2 Standard, nor do we follow the
order of material within a chapter as in the Standard. Instead, we have organized the material in each chapter according to the
complexity of the programs we use as examples, starting with simple examples and moving to more complex ones. We do
assume that the reader is familiar with at least the simpler aspects of MPI-1. It is not necessary to have read Using MPI, but it
wouldn't hurt.
We begin in Chapter 1 with an overview of the current situation in parallel computing, many aspects of which have changed in
the past five years. We summarize the new topics covered in MPI-2 and their relationship to the current and (what we see as)
the near-future parallel computing environment.
MPI-2 is not "MPI-1, only more complicated." There are simple and useful parts of MPI-2, and in Chapter 2 we introduce them
with simple examples of parallel I/O, dynamic process management, and remote memory operations.
In Chapter 3 we dig deeper into parallel I/O, perhaps the "missing feature" most requested by users of MPI-1. We describe the
parallel I/O features of MPI, how to use them in a graduated series of examples, and how they can be used to get high
performance, particularly on today's parallel/high-performance file systems.
In Chapter 4 we explore some of the issues of synchronization between senders and receivers of data. We examine in detail
what happens (and what must happen) when data is moved between processes. This sets the stage for explaining the design of
MPI's remote memory operations in the following chapters.
Chapters 5 and 6 cover MPI's approach to remote memory operations. This can be regarded as the MPI approach to shared
memory, since shared-memory and remote-memory operations have much in common. At the same time they are different,
since access to the remote memory is through MPI function calls, not some kind of language-supported construct (such as a
global pointer or array). This difference arises because MPI is intended to be portable to distributed-memory machines, even
heterogeneous clusters.
Because remote memory access operations are different in many ways from message passing, the discussion of remote memory
access is divided into two chapters. Chapter 5 covers the basics of remote memory access and a simple synchronization model.
Chapter 6 covers more general types of remote memory access and more complex synchronization models.
Chapter 7 covers MPI's relatively straightforward approach to dynamic process management, including both spawning new
processes and dynamically connecting to running MPI programs.
The recent rise of the importance of small to medium-size SMPs (shared-memory multiprocessors) means that the interaction
of MPI with threads is now far more important than at the time of MPI-1. MPI-2 does not define a standard interface to thread
libraries because such an interface already exists, namely, the POSIX threads interface [42]. MPI instead provides a number of
features designed to facilitate the use of multithreaded MPI programs. We describe these features in Chapter 8.
In Chapter 9 we describe some advanced features of MPI-2 that are particularly useful to library writers. These features include
defining new file data representations, using MPI's external interface functions to build layered libraries, support for mixed-language programming, attribute
caching, and error handling.
In Chapter 10 we summarize our journey through the new types of parallel programs enabled by MPI-2, comment on the
current status of MPI-2 implementations, and speculate on future directions for MPI.
Appendix A contains the C, C++, and Fortran bindings for all the MPI-2 functions.
Appendix B describes how to obtain supplementary material for this book, including complete source code for the examples,
and related MPI materials that are available via anonymous ftp and on the World Wide Web.
In Appendix C we discuss some of the surprises, questions, and problems in MPI, including what we view as some
shortcomings in the MPI-2 Standard as it is now. We can't be too critical (because we shared in its creation!), but experience
and reflection have caused us to reexamine certain topics.
Appendix D covers the MPI program launcher, mpiexec, which the MPI-2 Standard recommends that all implementations
support. The availability of a standard interface for running MPI programs further increases the portability of MPI applications,
and we hope that this material will encourage MPI users to expect and demand mpiexec from the suppliers of MPI
implementations.
In addition to the normal subject index, there is an index for the usage examples and definitions of the MPI-2 functions,
constants, and terms used in this book.
We try to be impartial in the use of C, Fortran, and C++ in the book's examples. The MPI Standard has tried to keep the syntax
of its calls similar in C and Fortran; for C++ the differences are inevitably a little greater, although the MPI Forum adopted a
conservative approach to the C++ bindings rather than a complete object library. When we need to refer to an MPI function
without regard to language, we use the C version just because it is a little easier to read in running text.
This book is not a reference manual, in which MPI functions would be grouped according to functionality and completely
defined. Instead we present MPI functions informally, in the context of example programs. Precise definitions are given in
volume 2 of MPI—The Complete Reference [27] and in the MPI-2 Standard [59]. Nonetheless, to increase the usefulness of this
book to someone working with MPI, we have provided the calling sequences in C, Fortran, and C++ for each MPI-2 function
that we discuss. These listings can be found set off in boxes located near where the functions are introduced. C bindings are
given in ANSI C style. Arguments that can be of several types (typically message buffers) are defined as void* in C. In the
Fortran boxes, such arguments are marked as being of type <type>. This means that one of the appropriate Fortran data types
should be used. To
find the "binding box" for a given MPI routine, one should use the appropriate bold-face reference in the Function and Term
Index: C for C, f90 for Fortran, and C++ for C++. Another place to find this information is in Appendix A, which lists all MPI
functions in alphabetical order for each language.
Acknowledgments
We thank all those who participated in the MPI-2 Forum. These are the people who created MPI-2, discussed a wide variety of
topics (many not included here) with seriousness, intelligence, and wit, and thus shaped our ideas on these areas of parallel
computing. The following people (besides ourselves) attended the MPI Forum meetings at one time or another during the
formulation of MPI-2: Greg Astfalk, Robert Babb, Ed Benson, Rajesh Bordawekar, Pete Bradley, Peter Brennan, Ron
Brightwell, Maciej Brodowicz, Eric Brunner, Greg Burns, Margaret Cahir, Pang Chen, Ying Chen, Albert Cheng, Yong Cho,
Joel Clark, Lyndon Clarke, Laurie Costello, Dennis Cottel, Jim Cownie, Zhenqian Cui, Suresh Damodaran-Kamal, Raja
Daoud, Judith Devaney, David DiNucci, Doug Doefler, Jack Dongarra, Terry Dontje, Nathan Doss, Anne Elster, Mark Fallon,
Karl Feind, Sam Fineberg, Craig Fischberg, Stephen Fleischman, Ian Foster, Hubertus Franke, Richard Frost, Al Geist, Robert
George, David Greenberg, John Hagedorn, Kei Harada, Leslie Hart, Shane Hebert, Rolf Hempel, Tom Henderson, Alex Ho,
Hans-Christian Hoppe, Steven Huss-Lederman, Joefon Jann, Terry Jones, Carl Kesselman, Koichi Konishi, Susan Kraus, Steve
Kubica, Steve Landherr, Mario Lauria, Mark Law, Juan Leon, Lloyd Lewins, Ziyang Lu, Andrew Lumsdaine, Bob Madahar,
Peter Madams, John May, Oliver McBryan, Brian McCandless, Tyce McLarty, Thom McMahon, Harish Nag, Nick Nevin,
Jarek Nieplocha, Bill Nitzberg, Ron Oldfield, Peter Ossadnik, Steve Otto, Peter Pacheco, Yoonho Park, Perry Partow, Pratap
Pattnaik, Elsie Pierce, Paul Pierce, Heidi Poxon, Jean-Pierre Prost, Boris Protopopov, James Pruyve, Rolf Rabenseifner, Joe
Rieken, Peter Rigsbee, Tom Robey, Anna Rounbehler, Nobutoshi Sagawa, Arindam Saha, Eric Salo, Darren Sanders, William
Saphir, Eric Sharakan, Andrew Sherman, Fred Shirley, Lance Shuler, A. Gordon Smith, Marc Snir, Ian Stockdale, David
Taylor, Stephen Taylor, Greg Tensa, Marydell Tholburn, Dick Treumann, Simon Tsang, Manuel Ujaldon, David Walker,
Jerrell Watts, Klaus Wolf, Parkson Wong, and Dave Wright. We also acknowledge the valuable input from many persons
around the world who participated in MPI Forum discussions via e-mail.
Our interactions with the many users of MPICH have been the source of ideas,
examples, and code fragments. Other members of the MPICH group at Argonne have made critical contributions to MPICH
and other MPI-related tools that we have used in the preparation of this book. Particular thanks go to Debbie Swider for her
enthusiastic and insightful work on MPICH implementation and interaction with users, and to Omer Zaki and Anthony Chan
for their work on Upshot and Jumpshot, the performance visualization tools we use with MPICH.
We thank PALLAS GmbH, particularly Hans-Christian Hoppe and Thomas Kentemich, for testing some of the MPI-2 code
examples in this book on the Fujitsu MPI implementation.
Gail Pieper, technical writer in the Mathematics and Computer Science Division at Argonne, was our indispensable guide in
matters of style and usage and vastly improved the readability of our prose.
1—
Introduction
When the MPI Standard was first released in 1994, its ultimate significance was unknown. Although the Standard was the
result of a consensus among parallel computer vendors, computer scientists, and application developers, no one knew to what
extent implementations would appear or how many parallel applications would rely on it.
Now the situation has clarified. All parallel computing vendors supply their users with MPI implementations, and there are
freely available implementations that both compete with vendor implementations on their platforms and supply MPI solutions
for heterogeneous networks. Applications large and small have been ported to MPI, and new applications are being written.
MPI's goal of stimulating the development of parallel libraries by enabling them to be portable has been realized, and an
increasing number of applications become parallel purely through the use of parallel libraries.
This book is about how to use MPI-2, the collection of advanced features that were added to MPI by the second MPI Forum. In
this chapter we review in more detail the origins of both MPI-1 and MPI-2. We give an overview of what new functionality has
been added to MPI by the release of the MPI-2 Standard. We conclude with a summary of the goals of this book and its
organization.
1.1—
Background
We present here a brief history of MPI, since some aspects of MPI can be better understood in the context of its development.
An excellent description of the history of MPI can also be found in [36].
1.1.1—
Ancient History
In the early 1990s, high-performance computing was in the process of converting from the vector machines that had dominated
scientific computing in the 1980s to massively parallel processors (MPPs) such as the IBM SP-1, the Thinking Machines CM-5, and the Intel Paragon. In addition, people were beginning to use networks of desktop workstations as parallel computers.
Both the MPPs and the workstation networks shared the message-passing model of parallel computation, but programs were
not portable. The MPP vendors competed with one another on the syntax of their message-passing libraries. Portable libraries,
such as PVM [24], p4 [8], and TCGMSG [35], appeared from the research community and became widely used on workstation
networks. Some of them allowed portability to MPPs as well, but
there was no unified, common syntax that would enable a program to run in all the parallel environments that were suitable for
it from the hardware point of view.
1.1.2—
The MPI Forum
Starting with a workshop in 1992, the MPI Forum was formally organized at Supercomputing '92. MPI succeeded because the
effort attracted a broad spectrum of the parallel computing community. Vendors sent their best technical people. The authors of
portable libraries participated, and applications programmers were represented as well. The MPI Forum met every six weeks
starting in January 1993 and released MPI in the summer of 1994.
To complete its work in a timely manner, the Forum strictly circumscribed its topics. It developed a standard for the strict
message-passing model, in which all data transfer is a cooperative operation among participating processes. It was assumed
that the number of processes was fixed and that processes were started by some (unspecified) mechanism external to MPI. I/O
was ignored, and language bindings were limited to C and Fortran 77. Within these limits, however, the Forum delved deeply,
producing a very full-featured message-passing library. In addition to creating a portable syntax for familiar message-passing
functions, MPI introduced (or substantially extended the development of) a number of new concepts, such as derived datatypes,
contexts, and communicators. MPI constituted a major advance over all existing message-passing libraries in terms of features,
precise semantics, and the potential for highly optimized implementations.
In the year following its release, MPI was taken up enthusiastically by users, and a 1995 survey by the Ohio Supercomputer
Center showed that even its more esoteric features found users. The MPICH portable implementation [30], layered on top of
existing vendor systems, was available immediately, since it had evolved along with the standard. Other portable
implementations appeared, particularly LAM [7], and then vendor implementations in short order, some of them leveraging
MPICH. The first edition of Using MPI [31] appeared in the fall of 1994, and we like to think that it helped win users to the
new Standard.
But the very success of MPI-1 drew attention to what was not there. PVM users missed dynamic process creation, and several
users needed parallel I/O. The success of the Cray shmem library on the Cray T3D and the active-message library on the CM-5
made users aware of the advantages of "one-sided" operations in algorithm design. The MPI Forum would have to go back to
work.
1.1.3—
The MPI-2 Forum
The modern history of MPI begins in the spring of 1995, when the Forum resumed its meeting schedule, with both veterans of
MPI-1 and about an equal number of new participants. In the previous three years, much had changed in parallel computing,
and these changes would accelerate during the two years the MPI-2 Forum would meet.
On the hardware front, a consolidation of MPP vendors occurred, with Thinking Machines Corp., Meiko, and Intel all leaving
the marketplace. New entries such as Convex (now absorbed into Hewlett-Packard) and SGI (now having absorbed Cray
Research) championed a shared-memory model of parallel computation, although they supported MPI (passing messages
through shared memory), and many applications found that the message-passing model was still well suited for extracting peak
performance on shared-memory (really NUMA) hardware. Small-scale shared-memory multiprocessors (SMPs) became
available from workstation vendors and even PC manufacturers. Fast commodity-priced networks, driven by the PC
marketplace, became so inexpensive that clusters of PCs combined with inexpensive networks started to appear as "home-brew" parallel supercomputers. A new federal program, the Accelerated Strategic Computing Initiative (ASCI), funded the
development of the largest parallel computers ever built, with thousands of processors. ASCI planned for its huge applications
to use MPI.
On the software front, MPI, as represented by MPI-1, became ubiquitous as the application programming interface (API) for
the message-passing model. The model itself remained healthy. Even on flat shared-memory and NUMA (nonuniform memory
access) machines, users found the message-passing model a good way to control cache behavior and thus performance. The
perceived complexity of programming with the message-passing model was alleviated by two developments. The first was the
convenience of the MPI interface itself, once programmers became more comfortable with it as the result of both experience
and tutorial presentations. The second was the appearance of libraries that hide much of the MPI-level complexity from the
application programmer. Examples are PETSc [3], ScaLAPACK [12], and PLAPACK [94]. This second development is
especially satisfying because it was an explicit design goal for the MPI Forum to encourage the development of libraries by
including features that libraries particularly needed.
At the same time, non-message-passing models have been explored. Some of these may be beneficial if actually adopted as
portable standards; others may still require interaction with MPI to achieve scalability. Here we briefly summarize two
promising, but quite different approaches.
Explicit multithreading is the use of an API that manipulates threads (see [32] for definitions) within a single address space.
This approach may be sufficient on systems that can devote a large number of CPUs to servicing a single process, but
interprocess communication will still need to be used on scalable systems. The MPI API has been designed to be thread safe.
However, not all implementations are thread safe. MPI-2 therefore allows an application to request a level of thread safety and an MPI implementation to report the level it provides (see Chapter 8).
In some cases the compiler generates the thread parallelism. In such cases the application or library uses only the MPI API, and
additional parallelism is uncovered by the compiler and expressed in the code it generates. Some compilers do this unaided;
others respond to directives in the form of specific comments in the code.
OpenMP is a proposed standard for compiler directives for expressing parallelism, with particular emphasis on loop-level
parallelism. Both C [68] and Fortran [67] versions exist.
Thus the MPI-2 Forum met during a time of great dynamism in parallel programming models. What did the Forum do, and what
did it come up with?
1.2—
What's New in MPI-2?
The MPI-2 Forum began meeting in March of 1995. Since the MPI-1 Forum was judged to have been a successful effort, the
new Forum procedures were kept the same as for MPI-1. Anyone was welcome to attend the Forum meetings, which were held
every six weeks. Minutes of the meetings were posted to the Forum mailing lists, and chapter drafts were circulated publicly
for comments between meetings. At meetings, subcommittees for various chapters met and hammered out details, and the final
version of the standard was the result of multiple votes by the entire Forum.
The first action of the Forum was to correct errors and clarify a number of issues that had caused misunderstandings in the
original document of July 1994, which was retroactively labeled MPI-1.0. These minor modifications, encapsulated as MPI-1.1, were released in May 1995. Corrections and clarifications to MPI-1 topics continued during the next two years, and the
MPI-2 document contains MPI-1.2 as a chapter (Chapter 3) of the MPI-2 release, which is the current version of the MPI
standard. MPI-1.2 also contains a number of topics that belong in spirit to the MPI-1 discussion, although they were added by
the MPI-2 Forum.
MPI-2 has three "large," completely new areas, which represent extensions of the MPI programming model substantially
beyond the strict message-passing model represented by MPI-1. These areas are parallel I/O, remote memory operations, and
dynamic process management. In addition, MPI-2 introduces a number of features designed to make all of MPI more robust
and convenient to use, such as external interface specifications, C++ and Fortran-90 bindings, support for threads, and mixed-
language programming.
1.2.1—
Parallel I/O
The parallel I/O part of MPI-2, sometimes just called MPI-IO, originated independently of the Forum activities, as an effort
within IBM to explore the analogy between input/output and message passing. After all, one can think of writing to a file as
analogous to sending a message to the file system and reading from a file as receiving a message from it. Furthermore, any
parallel I/O system is likely to need collective operations, ways of defining noncontiguous data layouts both in memory and in
files, and nonblocking operations. In other words, it will need a number of concepts that have already been satisfactorily
specified and implemented in MPI. The first study of the MPI-IO idea was carried out at IBM Research [71]. The effort was
expanded to include a group at NASA Ames, and the resulting specification appeared in [15]. After that, an open e-mail
discussion group was formed, and this group released a series of proposals, culminating in [90]. At that point the group merged
with the MPI Forum, and I/O became a part of MPI-2. The I/O specification evolved further over the course of the Forum
meetings, until MPI-2 was finalized in July 1997.
In general, I/O in MPI-2 can be thought of as Unix I/O plus quite a lot more. That is, MPI does include analogues of the basic
operations of open, close, seek, read, and write. The arguments for these functions are similar to those of the
corresponding Unix I/O operations, making an initial port of existing programs to MPI relatively straightforward. The purpose
of parallel I/O in MPI, however, is to achieve much higher performance than the Unix API can deliver, and serious users of
MPI must avail themselves of the more advanced features, which include
· noncontiguous access in both memory and file,
· collective I/O operations,
· use of explicit offsets to avoid separate seeks,
· both individual and shared file pointers,
· nonblocking I/O,
· portable and customized data representations, and
· hints for the implementation and file system.
We will explore in detail in Chapter 3 exactly how to exploit these features. We will find out there just how the I/O API
defined by MPI enables optimizations that the Unix I/O API precludes.
1.2.2—
Remote Memory Operations
The hallmark of the message-passing model is that data is moved from the address space of one process to that of another by
means of a cooperative operation such as a send/receive pair. This restriction sharply distinguishes the message-passing
model from the shared-memory model, in which processes have access to a common pool of memory and can simply perform
ordinary memory operations (load from, store into) on some set of addresses.
In MPI-2, an API is defined that provides elements of the shared-memory model in an MPI environment. These are called
MPI's "one-sided" or "remote memory" operations. Their design was governed by the need to
· balance efficiency and portability across several classes of architectures, including shared-memory multiprocessors (SMPs),
nonuniform memory access (NUMA) machines, distributed-memory massively parallel processors (MPPs), SMP clusters, and
even heterogeneous networks;
· retain the "look and feel" of MPI-1;
· deal with subtle memory behavior issues, such as cache coherence and sequential consistency; and
· separate synchronization from data movement to enhance performance.
The resulting design is based on the idea of remote memory access windows: portions of each process's address space that it
explicitly exposes to remote memory operations by other processes defined by an MPI communicator. Then the one-sided
operations put, get, and accumulate can store into, load from, and update, respectively, the windows exposed by other
processes. All remote memory operations are nonblocking, and synchronization operations are necessary to ensure their
completion. A variety of such synchronization operations are provided, some for simplicity, some for precise control, and
some for their analogy with shared-memory synchronization operations. In Chapter 4, we explore some of the issues of
synchronization between senders and receivers of data. Chapters 5 and 6 describe the remote memory operations of MPI-2 in
detail.
1.2.3—
Dynamic Process Management
The third major departure from the programming model defined by MPI-1 is the ability of an MPI process to participate in the
creation of new MPI processes or to establish communication with MPI processes that have been started separately. The main
issues faced in designing an API for dynamic process management are
· maintaining simplicity and flexibility;
· interacting with the operating system, the resource manager, and the process manager in a complex system software
environment; and
· avoiding race conditions that compromise correctness.
The key to correctness is to make the dynamic process management operations collective, both among the processes doing the
creation of new processes and among the new processes being created. The resulting sets of processes are represented in an
intercommunicator. Intercommunicators (communicators containing two groups of processes rather than one) are an esoteric
feature of MPI-1, but are fundamental for the MPI-2 dynamic process operations. The two families of operations defined in
MPI-2, both based on intercommunicators, are the creation of new sets of processes, called spawning, and the establishment of
communications with pre-existing MPI programs, called connecting. The latter capability allows applications to have parallel-
client/parallel-server structures of processes. Details of the dynamic process management operations can be found in Chapter 7.
1.2.4—
Odds and Ends
Besides the above "big three," the MPI-2 specification covers a number of issues that were not discussed in MPI-1.
Extended Collective Operations
Extended collective operations in MPI-2 are analogous to the collective operations of MPI-1, but are defined for use on
intercommunicators. (In MPI-1, collective operations are restricted to intracommunicators.) MPI-2 also extends the MPI-1
intracommunicator collective operations to allow an "in place" option, in which the send and receive buffers are the same.
C++ and Fortran 90
In MPI-1, the only languages considered were C and Fortran, where Fortran was construed as Fortran 77. In MPI-2, all
functions (including MPI-1 functions) have C++ bindings, and Fortran means Fortran 90 (or Fortran 95 [1]). For C++, the MPI-
2 Forum chose a "minimal" approach in which the C++ versions of MPI functions are quite similar to the C versions, with
classes defined
for most of the MPI objects (such as MPI::Request for the C MPI_Request). Most MPI functions are member functions
of MPI classes (easy to do because MPI has an object-oriented design), and others are in the MPI namespace.
MPI can't take advantage of some Fortran-90 features, such as array sections, and some MPI functions, particularly ones like
MPI_Send that use a "choice" argument, can run afoul of Fortran's compile-time type checking for arguments to routines. This
is usually harmless but can cause warning messages. However, the use of choice arguments does not match the letter of the
Fortran standard; some Fortran compilers may require the use of a compiler option to relax this restriction in the Fortran
language.¹
"Basic" and "extended" levels of support for Fortran 90 are provided in MPI-2. Essentially, basic support requires
that mpif.h be valid in both fixed- and free-form format, and "extended" support includes an MPI module and some new
functions that use parameterized types. Since these language extensions apply to all of MPI, not just MPI-2, they are covered in
detail in the second edition of Using MPI [32] rather than in this book.
Language Interoperability
Language interoperability is a new feature in MPI-2. MPI-2 defines features, both by defining new functions and by specifying
the behavior of implementations, that enable mixed-language programming, an area ignored by MPI-1.
External Interfaces
The external interfaces part of MPI makes it easy for libraries to extend MPI by accessing aspects of the implementation that
are opaque in MPI-1. It aids in the construction of integrated tools, such as debuggers and performance analyzers, and is
already being used in the early implementations of the MPI-2 I/O functionality [88].
Threads
MPI-1, other than designing a thread-safe interface, ignored the issue of threads. In MPI-2, threads are recognized as a potential
part of an MPI programming environment. Users can inquire of an implementation at run time what
¹ Because Fortran uses compile-time data-type matching rather than run-time data-type matching, it is invalid to make two calls
to the same routine in which two different data types are used in the same argument position. This affects the "choice" arguments
in the MPI Standard. For example, calling MPI_Send with a first argument of type integer and then with a first argument of
type real is invalid in Fortran 77. In Fortran 90, when using the extended Fortran support, it is possible to allow arguments of
different types by specifying the appropriate interfaces in the MPI module. However, this requires a different interface for each
type and is not a practical approach for Fortran 90 derived types. MPI does provide for data-type checking, but does so at run
time through a separate argument, the MPI datatype argument.
its level of thread safety is. In cases where the implementation supports multiple levels of thread safety, users can select the
level that meets the application's needs while still providing the highest possible performance.
1.3—
Reading This Book
This book is not a complete reference book for MPI-2. We leave that to the Standard itself [59] and to the two volumes of
MPI—The Complete Reference [27, 79]. This book, like its companion Using MPI, which focuses on MPI-1, is organized around
using the concepts of MPI-2 in application programs. Hence we take an iterative approach. In the preceding section we
presented a very high level overview of the contents of MPI-2. In the next chapter we demonstrate the use of several of these
concepts in simple example programs. Then in the following chapters we go into each of the major areas of MPI-2 in detail.
We start with the parallel I/O capabilities of MPI in Chapter 3, since that has proven to be the single most desired part of MPI-
2. In Chapter 4 we explore some of the issues of synchronization between senders and receivers of data. The complexity and
importance of remote memory operations deserve two chapters, Chapters 5 and 6. The next chapter, Chapter 7, is on dynamic
process management. We follow that with a chapter on MPI and threads, Chapter 8, since the mixture of multithreading and
message passing is likely to become a widely used programming model. In Chapter 9 we consider some advanced features of
MPI-2 that are particularly useful to library writers. We conclude in Chapter 10 with an assessment of possible future directions
for MPI.
In each chapter we focus on example programs to illustrate MPI as it is actually used. Some miscellaneous minor topics will
just appear where the example at hand seems to be a good fit for them. To find a discussion on a given topic, you can consult
either the subject index or the function and term index, which is organized by MPI function name.
Finally, you may wish to consult the companion volume, Using MPI: Portable Parallel Programming with the Message-
passing Interface [32]. Some topics considered by the MPI-2 Forum are small extensions to MPI-1 topics and are covered in
the second edition (1999) of Using MPI. Although we have tried to make this volume self-contained, some of the examples
have their origins in the examples of Using MPI.
Now, let's get started!
2—
Getting Started with MPI-2
In this chapter we demonstrate what MPI-2 "looks like," while deferring the details to later chapters. We use relatively simple
examples to give a flavor of the new capabilities provided by MPI-2. We focus on the main areas of parallel I/O, remote
memory operations, and dynamic process management, but along the way demonstrate MPI in its new language bindings, C++
and Fortran 90, and touch on a few new features of MPI-2 as they come up.
2.1—
Portable Process Startup
One small but useful new feature of MPI-2 is the recommendation of a standard method for starting MPI programs. The
simplest version of this is
mpiexec -n 16 myprog
to run the program myprog with 16 processes.
Strictly speaking, how one starts MPI programs is outside the scope of the MPI specification, which says how to write MPI
programs, not how to run them. MPI programs are expected to run in such a wide variety of computing environments, with
different operating systems, job schedulers, process managers, and so forth, that standardizing on a multiple-process startup
mechanism is impossible. Nonetheless, users who move their programs from one machine to another would like to be able to
move their run scripts as well. Several current MPI implementations use mpirun to start MPI jobs. Since the mpirun
programs are different from one implementation to another and expect different arguments, this has led to confusion, especially
when multiple MPI implementations are installed on the same machine.
In light of all these considerations, the MPI Forum took the following approach, which appears in several other places in the
MPI-2 Standard as well. It recommended to implementers that mpiexec be one of the methods for starting an MPI program,
and then specified the formats of some of the arguments, which are optional. What it does say is that if an implementation
supports startup of MPI jobs with mpiexec and uses the keywords for arguments that are described in the Standard, then the
arguments must have the meanings specified in the Standard. That is,
mpiexec -n 32 myprog
should start 32 MPI processes with 32 as the size of MPI_COMM_WORLD, and not do something else. The name mpiexec was
chosen so as to avoid conflict with the various currently established meanings of mpirun.
Besides the -n <numprocs> argument, mpiexec has a small number of other arguments whose behavior is specified by
MPI. In each case, the format is a reserved keyword preceded by a hyphen and followed (after whitespace) by a value. The
other keywords are -soft, -host, -arch, -wdir, -path, and -file. They are most simply explained by
examples.
mpiexec -n 32 -soft 16 myprog
means that if 32 processes can't be started, because of scheduling constraints, for example, then start 16 instead. (The request
for 32 processes is a "soft" request.)
mpiexec -n 4 -host denali -wdir /home/me/outfiles myprog
means to start 4 processes (by default, a request for a given number of processes is "hard") on the specified host machine
("denali" is presumed to be a machine name known to mpiexec) and have them start with their working directories set to /home/me/outfiles.
mpiexec -n 12 -soft 1:12 -arch sparc-solaris \
-path /home/me/sunprogs myprog
says to try for 12 processes, but run any number up to 12 if 12 cannot be run, on a sparc-solaris machine, and look for myprog
in the path /home/me/sunprogs, presumably the directory where the user compiles for that architecture. And finally,
mpiexec -file myfile
tells mpiexec to look in myfile for instructions on what to do. The format of myfile is left to the implementation. More
details on mpiexec, including how to start multiple processes with different executables, can be found in Appendix D.
2.2—
Parallel I/O
Parallel I/O in MPI starts with functions familiar to users of standard "language" I/O or libraries. MPI also has additional
features necessary for performance and portability. In this section we focus on the MPI counterparts of opening and closing
files and reading and writing contiguous blocks of data from/to them. At this level the main feature we show is how MPI can
conveniently express parallelism in these operations. We give several variations of a simple example in which processes write a
single array of integers to a file.
Figure 2.1
Sequential I/O from a parallel program
2.2.1—
Non-Parallel I/O from an MPI Program
MPI-1 does not have any explicit support for parallel I/O. Therefore, MPI applications developed over the past few years have
had to do their I/O by relying on the features provided by the underlying operating system, typically Unix. The most
straightforward way of doing this is just to have one process do all I/O. Let us start our sequence of example programs in this
section by illustrating this technique, diagrammed in Figure 2.1. We assume that the set of processes have a distributed array of
integers to be written to a file. For simplicity, we assume that each process has 100 integers of the array, whose total length
thus depends on how many processes there are. In the figure, the circles represent processes; the upper rectangles represent the
block of 100 integers in each process's memory; and the lower rectangle represents the file to be written. A program to write
such an array is shown in Figure 2.2. The program begins with each process initializing its portion of the array. All processes
but process 0 send their section to process 0. Process 0 first writes its own section and then receives the contributions from the
other processes in turn (the rank is specified in MPI_Recv) and writes them to the file.
This is often the first way I/O is done in a parallel program that has been converted from a sequential program, since no
changes are made to the I/O part of the program. (Note that in Figure 2.2, if numprocs is 1, no MPI communication
operations are performed.) There are a number of other reasons why I/O in a parallel program may be done this way.
· The parallel machine on which the program is running may support I/O only from one process.
· One can use sophisticated I/O libraries, perhaps written as part of a high-level data-management layer, that do not have
parallel I/O capability.
· The resulting single file is convenient for handling outside the program (by mv, cp, or ftp, for example).
/* example of sequential Unix write into a common file */
#include "mpi.h"
#include <stdio.h>
#define BUFSIZE 100
int main(int argc, char *argv[])
{
int i, myrank, numprocs, buf[BUFSIZE];
MPI_Status status;
FILE *myfile;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
for (i=0; i<BUFSIZE; i++)
buf[i] = myrank * BUFSIZE + i;
if (myrank != 0)
MPI_Send(buf, BUFSIZE, MPI_INT, 0, 99, MPI_COMM_WORLD);
else {
myfile = fopen("testfile", "w");
fwrite(buf, sizeof(int), BUFSIZE, myfile);
for (i=1; i<numprocs; i++) {
MPI_Recv(buf, BUFSIZE, MPI_INT, i, 99, MPI_COMM_WORLD,
&status);
fwrite(buf, sizeof(int), BUFSIZE, myfile);
}
fclose(myfile);
}
MPI_Finalize();
return 0;
}
Figure 2.2
Code for sequential I/O from a parallel program
Figure 2.3
Parallel I/O to multiple files
· Performance may be enhanced because the process doing the I/O may be able to assemble large blocks of data. (In Figure 2.2,
if process 0 had enough buffer space, it could have accumulated the data from other processes into a single buffer for one large
write operation.)
The reason for not doing I/O this way is a single but important one:
· The lack of parallelism limits performance and scalability, particularly if the underlying file system permits parallel physical I/O.
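The buffer-assembly optimization mentioned in the list of advantages can be sketched in plain C, independent of MPI. This is an illustrative sketch, not code from the book: the `write_assembled` name and the use of one address space are assumptions made here for simplicity, whereas a real program would receive each chunk with MPI_Recv before copying it into the large buffer.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NPROCS  4
#define BUFSIZE 100

/* Assemble NPROCS chunks of BUFSIZE ints into one large buffer, then
   issue a single large fwrite instead of NPROCS small ones.
   Returns the number of ints written. */
size_t write_assembled(FILE *fp, int chunks[NPROCS][BUFSIZE])
{
    int big[NPROCS * BUFSIZE];
    for (int r = 0; r < NPROCS; r++)
        memcpy(&big[r * BUFSIZE], chunks[r], BUFSIZE * sizeof(int));
    return fwrite(big, sizeof(int), NPROCS * BUFSIZE, fp);
}
```

A single large write typically performs better than many BUFSIZE-sized writes because it amortizes the per-call and per-seek overhead of the file system.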
2.2.2—
Non-MPI Parallel I/O from an MPI Program
In order to address the lack of parallelism, the next step in the migration of a sequential program to a parallel one is to have
each process write to a separate file, thus enabling parallel data transfer, as shown in Figure 2.3. Such a program is shown in
Figure 2.4. Here each process functions completely independently of the others with respect to I/O. Thus, each program is
sequential with respect to I/O and can use language I/O. Each process opens its own file, writes to it, and closes it. We have
ensured that the files are separate by appending each process's rank to the name of its output file.
The advantage of this approach is that the I/O operations can now take place in parallel and can still use sequential I/O libraries
if that is desirable. The primary disadvantage is that the result of running the program is a set of files instead of a single file.
This has multiple disadvantages:
· The files may have to be joined together before being used as input to another application.
· It may be required that the application that reads these files be a parallel program itself and be started with the exact same
number of processes.
· It may be difficult to keep track of this set of files as a group, for moving them, copying them, or sending them across a
network.
/* example of parallel Unix write into separate files */
#include "mpi.h"
#include <stdio.h>
#define BUFSIZE 100
int main(int argc, char *argv[])
{
int i, myrank, buf[BUFSIZE];
char filename[128];
FILE *myfile;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
for (i=0; i<BUFSIZE; i++)
buf[i] = myrank * BUFSIZE + i;
sprintf(filename, "testfile.%d", myrank);
myfile = fopen(filename, "w");
fwrite(buf, sizeof(int), BUFSIZE, myfile);
fclose(myfile);
MPI_Finalize();
return 0;
}
Figure 2.4
Non-MPI parallel I/O to multiple files
The performance may also suffer because individual processes may find their data to be in small contiguous chunks, causing
many I/O operations with smaller data items. This may hurt performance more than can be compensated for by the parallelism.
We will investigate this topic more deeply in Chapter 3.
2.2.3—
MPI I/O to Separate Files
As our first MPI I/O program we will simply translate the program of Figure 2.4 so that all of the I/O operations are done with
MPI. We do this to show how familiar I/O operations look in MPI. This program has the same advantages and disadvantages as
the preceding version. Let us consider the differences between the programs shown in Figures 2.4 and 2.5 one by one; there are
only four.
First, the declaration FILE has been replaced by MPI_File as the type of myfile. Note that myfile is now a variable of
type MPI_File, rather than a pointer to an object of type FILE. The MPI function corresponding to fopen is (not
surprisingly)
/* example of parallel MPI write into separate files */
#include "mpi.h"
#include <stdio.h>
#define BUFSIZE 100
int main(int argc, char *argv[])
{
int i, myrank, buf[BUFSIZE];
char filename[128];
MPI_File myfile;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
for (i=0; i<BUFSIZE; i++)
buf[i] = myrank * BUFSIZE + i;
sprintf(filename, "testfile.%d", myrank);
MPI_File_open(MPI_COMM_SELF, filename,
MPI_MODE_WRONLY | MPI_MODE_CREATE,
MPI_INFO_NULL, &myfile);
MPI_File_write(myfile, buf, BUFSIZE, MPI_INT,
MPI_STATUS_IGNORE);
MPI_File_close(&myfile);
MPI_Finalize();
return 0;
}
Figure 2.5
MPI I/O to separate files
called MPI_File_open. Let us consider the arguments in the call
MPI_File_open(MPI_COMM_SELF, filename,
MPI_MODE_CREATE | MPI_MODE_WRONLY,
MPI_INFO_NULL, &myfile);
one by one. The first argument is a communicator. In a way, this is the most significant new component of I/O in MPI. Files in
MPI are opened by a collection of processes identified by an MPI communicator. This ensures that those processes operating
on a file together know which other processes are also operating on the file and can communicate with one another. Here, since
each process is opening its own file for its own exclusive use, it uses the communicator MPI_COMM_SELF.
The second argument is a string representing the name of the file, as in fopen. The third argument is the mode in which the
file is opened. Here it is being both created (or overwritten if it exists) and will only be written to by this program. The
constants MPI_MODE_CREATE and MPI_MODE_WRONLY represent bit flags that are or'd together in C, much as they are in
the Unix system call open.
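The or'ing of mode bits can be illustrated with a small self-contained sketch. The flag names and bit values below are invented for illustration; the real MPI_MODE_* constants are defined by each MPI implementation:

```c
#include <assert.h>

/* Hypothetical single-bit mode flags, mimicking how MPI_MODE_* and the
   Unix open(2) O_* flags are laid out so that they can be or'd together. */
enum {
    MODE_RDONLY = 1 << 0,
    MODE_WRONLY = 1 << 1,
    MODE_CREATE = 1 << 2
};

/* Combine modes the way the third argument of MPI_File_open is built. */
int make_amode(void)
{
    return MODE_CREATE | MODE_WRONLY;
}
```

Because each flag occupies a distinct bit, the receiver of the combined mode can test for any individual flag with a bitwise and.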
The fourth argument, MPI_INFO_NULL here, is a predefined constant representing a dummy value for the info argument to
MPI_File_open. We will describe the MPI_Info object later in this chapter in Section 2.5. In our program we don't need
any of its capabilities; hence we pass MPI_INFO_NULL to MPI_File_open. As the last argument, we pass the address of
the MPI_File variable, which the MPI_File_open will fill in for us. As with all MPI functions in C, MPI_File_open
returns as the value of the function a return code, which we hope is MPI_SUCCESS. In our examples in this section, we do not
check error codes, for simplicity.
The next function, which actually does the I/O in this program, is
MPI_File_write(myfile, buf, BUFSIZE, MPI_INT,
MPI_STATUS_IGNORE);
Here we see the analogy between I/O and message passing that was alluded to in Chapter 1. The data to be written is described
by the (address, count, datatype) method used to describe messages in MPI-1. This way of describing a buffer to be written (or
read) gives the same two advantages as it does in message passing: it allows arbitrary distributions of noncontiguous data in
memory to be written with a single call, and it expresses the datatype, rather than just the length, of the data to be written, so
that meaningful transformations can be done on it as it is read or written, for heterogeneous environments. Here we just have a
contiguous buffer of BUFSIZE integers, starting at address buf. The final argument to MPI_File_write is a "status"
argument, of the same type as returned by MPI_Recv. We shall see its use below. In this case we choose to ignore its value.
MPI-2 specifies that the special value MPI_STATUS_IGNORE can be passed to any MPI function in place of a status
argument, to tell the MPI implementation not to bother filling in the status information because the user intends to ignore it.
This technique can slightly improve performance when status information is not needed.
Finally, the function
MPI_File_close(&myfile);
closes the file. The address of myfile is passed rather than the variable itself because the MPI implementation will replace its
value with the constant MPI_FILE_NULL. Thus the user can detect invalid file objects.
/* example of parallel MPI write into a single file */
#include "mpi.h"
#include <stdio.h>
#define BUFSIZE 100
int main(int argc, char *argv[])
{
int i, myrank, buf[BUFSIZE];
MPI_File thefile;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
for (i=0; i<BUFSIZE; i++)
buf[i] = myrank * BUFSIZE + i;
MPI_File_open(MPI_COMM_WORLD, "testfile",
MPI_MODE_CREATE | MPI_MODE_WRONLY,
MPI_INFO_NULL, &thefile);
MPI_File_set_view(thefile, myrank * BUFSIZE * sizeof(int),
MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_write(thefile, buf, BUFSIZE, MPI_INT,
MPI_STATUS_IGNORE);
MPI_File_close(&thefile);
MPI_Finalize();
return 0;
}
Figure 2.6
MPI I/O to a single file
2.2.4—
Parallel MPI I/O to a Single File
We now modify our example so that the processes share a single file instead of writing to separate files, thus eliminating the
disadvantages of having multiple files while retaining the performance advantages of parallelism. We will still not be doing
anything that absolutely cannot be done through language or library I/O on most file systems, but we will begin to see the "MPI way" of sharing a file among processes. The new version of the program is shown in Figure 2.6.
The first difference between this program and that of Figure 2.5 is in the first argument of the MPI_File_open statement.
Here we specify MPI_COMM_WORLD instead of MPI_COMM_SELF, to indicate that all the processes are opening a single file
together. This is a collective operation on the communicator, so all participating processes
Figure 2.7
Parallel I/O to a single file
must make the MPI_File_open call, although only a single file is being opened.
Our plan for the way this file will be written is to give each process access to a part of it, as shown in Figure 2.7. The part of the
file that is seen by a single process is called the file view and is set for each process by a call to MPI_File_set_view. In
our example here, the call looks like
MPI_File_set_view(thefile, myrank * BUFSIZE * sizeof(int),
MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
The first argument identifies the file. The second argument is the displacement (in bytes) into the file where the process's view
of the file is to start. Here we multiply the size of the data to be written (BUFSIZE * sizeof(int)) by the rank of the
process, so that each process's view starts at the appropriate place in the file. This argument is of a new type MPI_Offset,
which on systems that support large files can be expected to be a 64-bit integer. See Section 2.2.6 for further discussion.
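Why a 64-bit offset type matters is easy to see with a little arithmetic: a per-rank displacement of rank * BUFSIZE * sizeof(int) overflows 32-bit int arithmetic once the product passes the 32-bit range. The sketch below uses illustrative sizes and an int64_t where an implementation would use MPI_Offset:

```c
#include <assert.h>
#include <stdint.h>

/* Per-rank byte displacement into the shared file. The cast to int64_t
   forces the multiplication to be done in 64 bits; without it, the
   arithmetic would be done in (typically 32-bit) int and could overflow. */
int64_t view_disp(int rank, int64_t chunk_bytes)
{
    return (int64_t)rank * chunk_bytes;
}
```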
The next argument is called the etype of the view; it specifies the unit of data in the file. Here it is MPI_INT, since we will
always be writing some number of MPI_INTs to this file. The next argument, called the filetype, is a very flexible way of
describing noncontiguous views in the file. In our simple case here, where there are no noncontiguous units to be written, we
can just use the etype, MPI_INT. In general, etype and filetype can be any MPI predefined or derived datatype. See Chapter 3
for details.
The next argument is a character string denoting the data representation to be used in the file. The native representation
specifies that data is to be represented in the file exactly as it is in memory. This preserves precision and results in no
performance loss from conversion overhead. Other representations are internal and external32, which enable various
degrees of file portability across machines with different architectures and thus different data representations. The final
argument is an info object, as in MPI_File_open. Here again it is to be ignored, as dictated by specifying MPI_INFO_NULL
for this argument.

Table 2.1
C bindings for the I/O functions used in Figure 2.6

int MPI_File_open(MPI_Comm comm, char *filename, int amode, MPI_Info info,
                  MPI_File *fh)

int MPI_File_set_view(MPI_File fh, MPI_Offset disp, MPI_Datatype etype,
                      MPI_Datatype filetype, char *datarep, MPI_Info info)

int MPI_File_write(MPI_File fh, void *buf, int count, MPI_Datatype datatype,
                   MPI_Status *status)

int MPI_File_close(MPI_File *fh)
Now that each process has its own view, the actual write operation
MPI_File_write(thefile, buf, BUFSIZE, MPI_INT,
               MPI_STATUS_IGNORE);
is exactly the same as in our previous version of this program. But because the MPI_File_open specified
MPI_COMM_WORLD in its communicator argument, and the MPI_File_set_view gave each process a different view of the
file, the write operations proceed in parallel and all go into the same file in the appropriate places.
Why did we not need a call to MPI_File_set_view in the previous example? The reason is that the default view is that of a
linear byte stream, with displacement 0 and both etype and filetype set to MPI_BYTE. This is compatible with the way we used
the file in our previous example.
C bindings for the I/O functions in MPI that we have used so far are given in Table 2.1.
2.2.5—
Fortran 90 Version
Fortran now officially means Fortran 90 (or Fortran 95 [1]). This has some impact on the Fortran bindings for MPI functions.
We defer the details to Chapter 9, but demonstrate here some of the differences by rewriting the program shown in Figure 2.6
in Fortran. The MPI-2 Standard identifies two levels of Fortran support: basic and extended. Here we illustrate programming
with basic support, which merely requires that the mpif.h file included in Fortran programs be valid in both free-source and
fixed-source format, in other words, that it contain valid syntax for Fortran-90 compilers as well as for Fortran-77 compilers.

Table 2.2
Fortran bindings for the I/O functions used in Figure 2.8

MPI_FILE_OPEN(comm, filename, amode, info, fh, ierror)
    character*(*) filename
    integer comm, amode, info, fh, ierror

MPI_FILE_SET_VIEW(fh, disp, etype, filetype, datarep, info, ierror)
    integer fh, etype, filetype, info, ierror
    character*(*) datarep
    integer(kind=MPI_OFFSET_KIND) disp

MPI_FILE_WRITE(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(MPI_STATUS_SIZE), ierror

MPI_FILE_CLOSE(fh, ierror)
    integer fh, ierror

Extended support requires the use of an MPI "module," in which
the line
    include 'mpif.h'

is replaced by

    use mpi
We also use "Fortran-90 style" comment indicators. The new program is shown in Figure 2.8. Note that the type MPI_Offset in
C is represented in Fortran by the type INTEGER(kind=MPI_OFFSET_KIND). Fortran bindings for the I/O functions used
in Figure 2.8 are given in Table 2.2.
2.2.6—
Reading the File with a Different Number of Processes
One advantage of doing parallel I/O to a single file is that it is straightforward to read the file in parallel with a different
number of processes. This is important in the case of scientific applications, for example, where a parallel program may write a
restart file, which is then read at startup by the same program, but possibly utilizing a different number of processes. If we have
written a single file with no internal structure reflecting the number of processes that wrote the file, then it is not necessary to
restart the run with the same number of processes as before.

! example of parallel MPI write into a single file, in Fortran
PROGRAM main
    ! Fortran 90 users can (and should) use
    !     use mpi
    ! instead of include 'mpif.h' if their MPI implementation provides
    ! an mpi module.
    include 'mpif.h'

    integer ierr, i, myrank, BUFSIZE, thefile
    parameter (BUFSIZE=100)
    integer buf(BUFSIZE)
    integer(kind=MPI_OFFSET_KIND) disp

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
    do i = 1, BUFSIZE
        buf(i) = myrank * BUFSIZE + i - 1
    enddo
    call MPI_FILE_OPEN(MPI_COMM_WORLD, 'testfile', &
                       MPI_MODE_WRONLY + MPI_MODE_CREATE, &
                       MPI_INFO_NULL, thefile, ierr)
    ! assume 4-byte integers
    disp = myrank * BUFSIZE * 4
    call MPI_FILE_SET_VIEW(thefile, disp, MPI_INTEGER, &
                           MPI_INTEGER, 'native', &
                           MPI_INFO_NULL, ierr)
    call MPI_FILE_WRITE(thefile, buf, BUFSIZE, MPI_INTEGER, &
                        MPI_STATUS_IGNORE, ierr)
    call MPI_FILE_CLOSE(thefile, ierr)
    call MPI_FINALIZE(ierr)
END PROGRAM main

Figure 2.8
MPI I/O to a single file in Fortran

Table 2.3
C bindings for some more I/O functions

int MPI_File_get_size(MPI_File fh, MPI_Offset *size)

int MPI_File_read(MPI_File fh, void *buf, int count, MPI_Datatype datatype,
                  MPI_Status *status)

In Figure 2.9 we show a program to read the file we have been writing in our previous examples. This program is independent of
the number of processes that run it. The total size of the file is obtained, and then the views of the various processes are set so
that they each have approximately the same amount to read.
One new MPI function is demonstrated here: MPI_File_get_size. The first argument is an open file, and the second is the
address of a field to store the size of the file in bytes. Since many systems can now handle files whose sizes are too big to be
represented in a 32-bit integer, MPI defines a type, MPI_Offset, that is large enough to contain a file size. It is the type used
for arguments to MPI functions that refer to displacements in files. In C, one can expect it to be a long or long long—at
any rate a type that can participate in integer arithmetic, as it is here, when we compute the displacement used in
MPI_File_set_view. Otherwise, the program used to read the file is very similar to the one that writes it.
One difference between writing and reading is that one doesn't always know exactly how much data will be read. Here,
although we could compute it, we let every process issue the same MPI_File_read call and pass the address of a real
MPI_Status instead of MPI_STATUS_IGNORE. Then, just as in the case of an MPI_Recv, we can use
MPI_Get_count to find out how many occurrences of a given datatype were read. If it is less than the number of items
requested, then end-of-file has been reached.
C bindings for the new functions used in this example are given in Table 2.3.
2.2.7—
C++ Version
The MPI Forum faced a number of choices when it came time to provide C++ bindings for the MPI-1 and MPI-2 functions.
The simplest choice would be to make them identical to the C bindings. This would be a disappointment to C++ programmers,
however. MPI is object-oriented in design, and it seemed a shame not to express this design in C++ syntax, which could be
done without changing the basic structure of MPI. Another choice would be to define a complete class library that might look
quite different from MPI's C bindings.
/* parallel MPI read with arbitrary number of processes */
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int myrank, numprocs, bufsize, *buf, count;
    MPI_File thefile;
    MPI_Status status;
    MPI_Offset filesize;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_File_open(MPI_COMM_WORLD, "testfile", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &thefile);
    MPI_File_get_size(thefile, &filesize);    /* in bytes */
    filesize = filesize / sizeof(int);        /* in number of ints */
    bufsize = filesize / numprocs + 1;        /* local number to read */
    buf = (int *) malloc(bufsize * sizeof(int));
    MPI_File_set_view(thefile, myrank * bufsize * sizeof(int),
                      MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
    MPI_File_read(thefile, buf, bufsize, MPI_INT, &status);
    MPI_Get_count(&status, MPI_INT, &count);
    printf("process %d read %d ints\n", myrank, count);
    free(buf);
    MPI_File_close(&thefile);
    MPI_Finalize();
    return 0;
}
Figure 2.9
Reading the file with a different number of processes
Although the last approach was considered, and one instance was explored in detail [80], in the end the Forum adopted the middle
road. The C++ bindings for MPI can almost be deduced from the C bindings, and there is roughly a one-to-one correspondence
between C++ functions and C functions. The main features of the C++ bindings are as follows.
· Most MPI "objects," such as groups, communicators, files, requests, and statuses, are C++ objects.
· If an MPI function is naturally associated with an object, then it becomes a method on that object. For example, MPI_Send
( . . .,comm) becomes a method on its communicator: comm.Send( . . .).
· Objects that are not components of other objects exist in an MPI name space. For example, MPI_COMM_WORLD becomes
MPI::COMM_WORLD and a constant like MPI_INFO_NULL becomes MPI::INFO_NULL.
· Functions that normally create objects return the object as a return value instead of returning an error code, as they do in C.
For example, MPI::File::Open returns an object of type MPI::File.
· Functions that in C return a value in one of their arguments return it instead as the value of the function. For example,
comm.Get_rank returns the rank of the calling process in the communicator comm.
· The C++ style of handling errors can be used. Although the default error handler remains MPI::ERRORS_ARE_FATAL in
C++, the user can set the default error handler to MPI::ERRORS_THROW_EXCEPTIONS. In this case the C++ exception
mechanism will throw an object of type MPI::Exception.
We illustrate some of the features of the C++ bindings by rewriting the previous program in C++. The new program is shown
in Figure 2.10. Note that we have used the way C++ can defer defining types, along with the C++ MPI feature that functions
can return values or objects. Hence instead of
int myrank;
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
we have
int myrank = MPI::COMM_WORLD.Get_rank();
The C++ bindings for basic MPI functions found in nearly all MPI programs are shown in Table 2.4. Note that the new
Get_rank has no arguments, instead of the two that the C version, MPI_Comm_rank, has, because it is a method on a
communicator and returns the rank as its value.

// example of parallel MPI read from single file, in C++
#include <iostream.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int bufsize, *buf, count;
    MPI::Status status;

    MPI::Init();
    int myrank = MPI::COMM_WORLD.Get_rank();
    int numprocs = MPI::COMM_WORLD.Get_size();
    MPI::File thefile = MPI::File::Open(MPI::COMM_WORLD, "testfile",
                                        MPI::MODE_RDONLY,
                                        MPI::INFO_NULL);
    MPI::Offset filesize = thefile.Get_size();   // in bytes
    filesize = filesize / sizeof(int);           // in number of ints
    bufsize = filesize / numprocs + 1;           // local number to read
    buf = (int *) malloc(bufsize * sizeof(int));
    thefile.Set_view(myrank * bufsize * sizeof(int),
                     MPI::INT, MPI::INT, "native", MPI::INFO_NULL);
    thefile.Read(buf, bufsize, MPI::INT, status);
    count = status.Get_count(MPI::INT);
    cout << "process " << myrank << " read " << count << " ints"
         << endl;
    free(buf);
    thefile.Close();
    MPI::Finalize();
    return 0;
}

Figure 2.10
C++ version of the example in Figure 2.9

Table 2.4
C++ bindings for basic MPI functions

void MPI::Init(int& argc, char**& argv)

void MPI::Init()

int MPI::Comm::Get_size() const

int MPI::Comm::Get_rank() const

void MPI::Finalize()

Table 2.5
C++ bindings for some I/O functions

MPI::File MPI::File::Open(const MPI::Intracomm& comm, const char* filename,
                          int amode, const MPI::Info& info)

MPI::Offset MPI::File::Get_size() const

void MPI::File::Set_view(MPI::Offset disp, const MPI::Datatype& etype,
                         const MPI::Datatype& filetype, const char* datarep,
                         const MPI::Info& info)

void MPI::File::Read(void* buf, int count, const MPI::Datatype& datatype,
                     MPI::Status& status)

void MPI::File::Read(void* buf, int count, const MPI::Datatype& datatype)

void MPI::File::Close()

Note also that there are two versions of MPI::Init. The one with no
arguments corresponds to the new freedom in MPI-2 to pass (NULL, NULL) to the C function MPI_Init instead of
(&argc, &argv).
The C++ bindings for the I/O functions used in our example are shown in Table 2.5. We see that MPI::File::Open returns
an object of type MPI::File, and Read is called as a method on this object.
2.2.8—
Other Ways to Write to a Shared File
In Section 2.2.4 we used MPI_File_set_view to show how multiple processes can be instructed to share a single file. As is
common throughout MPI, there are
multiple ways to achieve the same result. MPI_File_seek allows multiple processes to position themselves at a specific
byte offset in a file (move the process's file pointer) before reading or writing. This is a lower-level approach than using file
views and is similar to the Unix function lseek. An example that uses this approach is given in Section 3.2. For efficiency
and thread-safety, a seek and read operation can be combined in a single function, MPI_File_read_at; similarly, there is
an MPI_File_write_at. Finally, another file pointer, called the shared file pointer, is shared among processes belonging to
the communicator passed to MPI_File_open. Functions such as MPI_File_write_shared access data from the current
location of the shared file pointer and increment the shared file pointer by the amount of data accessed. This functionality is
useful, for example, when all processes are writing event records to a common log file.
2.3—
Remote Memory Access
In this section we discuss how MPI-2 generalizes the strict message-passing model of MPI-1 and provides direct access by one
process to parts of the memory of another process. These operations, referred to as get, put, and accumulate, are called remote
memory access (RMA) operations in MPI. We will walk through a simple example that uses the MPI-2 remote memory access
operations.
The most characteristic feature of the message-passing model of parallel computation is that data is moved from one process's
address space to another's only by a cooperative pair of send/receive operations, one executed by each process. The same
operations that move the data also perform the necessary synchronization; in other words, when a receive operation completes,
the data is available for use in the receiving process.
MPI-2 does not provide a real shared-memory model; nonetheless, the remote memory operations of MPI-2 provide much of
the flexibility of shared memory. Data movement can be initiated entirely by the action of one process; hence these operations
are also referred to as one sided. In addition, the synchronization needed to ensure that a data-movement operation is complete
is decoupled from the (one-sided) initiation of that operation. In Chapters 5 and 6 we will see that MPI-2's remote memory
access operations comprise a small but powerful set of data-movement operations and a relatively complex set of
synchronization operations. In this chapter we will deal only with the simplest form of synchronization.
It is important to realize that the RMA operations come with no particular guarantee of performance superior to that of send
and receive. In particular, they
have been designed to work both on shared-memory machines and in environments without any shared-memory hardware at
all, such as networks of workstations using TCP/IP as an underlying communication mechanism. Their main utility is in the
flexibility they provide for the design of algorithms. The resulting programs will be portable to all MPI implementations and
presumably will be efficient on platforms that do provide hardware support for access to the memory of other processes.
2.3.1—
The Basic Idea:
Memory Windows
In strict message passing, the send/receive buffers specified by MPI datatypes represent those portions of a process's address
space that are exported to other processes (in the case of send operations) or available to be written into by other processes (in
the case of receive operations). In MPI-2, this notion of "communication memory" is generalized to the notion of a remote
memory access window. Each process can designate portions of its address space as available to other processes for both read
and write access. The read and write operations performed by other processes are called get and put remote memory access
operations. A third type of operation is called accumulate. This refers to the update of a remote memory location, for example,
by adding a value to it.
The word window in MPI-2 refers to the portion of a single process's memory that it contributes to a distributed object called a
window object. Thus, a window object is made up of multiple windows, each of which consists of all the local memory areas
exposed to the other processes by a collective window-creation function. A collection of processes can have multiple window
objects, and the windows contributed to a window object by a set of processes may vary from process to process. In Figure
2.11 we show a window object made up of windows contributed by two processes. The put and get operations that move data
to and from the remote memory of another process are nonblocking; a separate synchronization operation is needed to ensure
their completion. To see how this works, let us consider a simple example.
2.3.2—
RMA Version of cpi
In this section we rewrite the cpi example that appears in Chapter 3 of Using MPI [32]. This program calculates the value of π
by numerical integration. In the original version there are two types of communication. Process 0 prompts the user for a
number of intervals to use in the integration and uses MPI_Bcast to send this number to the other processes. Each process
then computes a partial sum, and the total sum is obtained by adding the partial sums with an MPI_Reduce operation.
Figure 2.11
Remote memory access window on two processes. The shaded area covers a single window
object made up of two windows.
In the one-sided version of this program, process 0 will store the value it reads from the user into its part of an RMA window
object, where the other processes can simply get it. After the partial sum calculations, all processes will add their contributions
to a value in another window object, using accumulate. Synchronization will be carried out by the simplest of the window
synchronization operations, the fence.
Figure 2.12 shows the beginning of the program, including setting up the window objects. In this simple example, each window
object consists only of a single number in the memory of process 0. Window objects are represented by variables of type
MPI_Win in C. We need two window objects because window objects are made up of variables of a single datatype, and we
have an integer n and a double pi that all processes will access separately. Let us look at the first window creation call done on
process 0.
MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL,
               MPI_COMM_WORLD, &nwin);
This is matched on the other processes by
MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
MPI_COMM_WORLD, &nwin);
The call on process 0 needs to be matched on the other processes, even though they are not contributing any memory to the
window object, because MPI_Win_create is a collective operation over the communicator specified in its last argument.
This communicator designates which processes will be able to access the window object.
The first two arguments of MPI_Win_create are the address and length (in bytes) of the window (in local memory) that the
calling process is exposing to put/get operations by other processes. Here it is the single integer n on process 0 and no
memory at all on the other processes, signified by a length of 0.

/* Compute pi by numerical integration, RMA version */
#include "mpi.h"
#include <math.h>

int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    MPI_Win nwin, piwin;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL,
                       MPI_COMM_WORLD, &nwin);
        MPI_Win_create(&pi, sizeof(double), 1, MPI_INFO_NULL,
                       MPI_COMM_WORLD, &piwin);
    }
    else {
        MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
                       MPI_COMM_WORLD, &nwin);
        MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
                       MPI_COMM_WORLD, &piwin);
    }

Figure 2.12
cpi: setting up the RMA windows

We use MPI_BOTTOM as the address because it is a valid
address and we wish to emphasize that these processes are not contributing any local windows to the window object being
created.
The next argument is a displacement unit used to specify offsets into memory in windows. Here each window object contains
only one variable, which we will access with a displacement of 0, so the displacement unit is not really important. We specify 1
(byte). The fourth argument is an MPI_Info argument, which can be used to optimize the performance of RMA operations in
certain situations. Here we use MPI_INFO_NULL. See Chapter 5 for more on the use of displacement units and the
MPI_Info argument. The fifth argument is a communicator, which specifies
the set of processes that will have access to the memory being contributed to the window object. The MPI implementation will
return an MPI_Win object as the last argument.
After the first call to MPI_Win_create, each process has access to the data in nwin (consisting of the single integer n) via
put and get operations for storing and reading, and the accumulate operation for updating. Note that we did not have to acquire
6.5 Proxies
In the final section of this chapter, we discuss proxies. You can use a proxy to create, at runtime, new classes that implement a given set of interfaces. Proxies are only necessary when you don’t yet know at compile time which interfaces you need to implement. This is not a common situation for application programmers, and you should feel free to skip this section if you are not interested in advanced wizardry. However, for certain systems programming applications, the flexibility that proxies offer can be very important.
6.5.1 When to Use Proxies
Suppose you want to construct an object of a class that implements one or more interfaces whose exact nature you may not know at compile time. This is a difficult problem. To construct an actual class, you can simply use the newInstance method or use reflection to find a constructor. But you can’t instantiate an interface. You need to define a new class in a running program.
To overcome this problem, some programs generate code, place it into a file, invoke the compiler, and then load the resulting class file. Naturally, this is slow, and it also requires deployment of the compiler together with the program. The proxy mechanism is a better solution. The proxy class can create brand-new classes at runtime. Such a proxy class implements the interfaces that you specify. In particular, the proxy class has the following methods:
- All methods required by the specified interfaces; and
- All methods defined in the Object class (toString, equals, and so on).
6.5.2 Creating Proxy Objects
To create a proxy object, use the newProxyInstance method of the Proxy class. The method has three parameters:
- A class loader. As part of the Java security model, different class loaders can be used for system classes, classes that are downloaded from the Internet, and so on. We will discuss class loaders in Chapter 9 of Volume II. For now, a null class loader means that the default class loader is used.
- An array of Class objects, one for each interface to be implemented.
- An invocation handler.

An invocation handler is an object of any class that implements the InvocationHandler interface. Its single method, invoke, is called whenever a method is invoked on the proxy object. Invocation handlers can be put to many uses, for example:
- Routing method calls to remote servers
- Associating user interface events with actions in a running program
- Tracing method calls for debugging purposes
In our example program, we use proxies and invocation handlers to trace method calls. We define a TraceHandler wrapper class that stores a wrapped object. Its invoke method simply prints the name and parameters of the method to be called and then calls the method with the wrapped object as the implicit parameter.
class TraceHandler implements InvocationHandler
{
   private Object target;

   public TraceHandler(Object t)
   {
      target = t;
   }

   public Object invoke(Object proxy, Method m, Object[] args) throws Throwable
   {
      // print method name and parameters
      . . .
      // invoke actual method
      return m.invoke(target, args);
   }
}
Here is how you construct a proxy object that causes the tracing behavior whenever one of its methods is called:
Object value = . . .;
// construct wrapper
InvocationHandler handler = new TraceHandler(value);
// construct proxy for one or more interfaces
Class[] interfaces = new Class[] { Comparable.class };
Object proxy = Proxy.newProxyInstance(null, interfaces, handler);
Now, whenever a method from one of the interfaces is called on proxy, the method name and parameters are printed out and the method is then invoked on value.
In the program shown in Listing 6.10, we use proxy objects to trace a binary search. We fill an array with proxies to the integers 1 ... 1000. Then we invoke the binarySearch method of the Arrays class to search for a random integer in the array. Finally, we print the matching element.
Object[] elements = new Object[1000];
// fill elements with proxies for the integers 1 . . . 1000
for (int i = 0; i < elements.length; i++)
{
   Integer value = i + 1;
   elements[i] = Proxy.newProxyInstance(. . .); // proxy for value
}
// construct a random integer
Integer key = new Random().nextInt(elements.length) + 1;
// search for the key
int result = Arrays.binarySearch(elements, key);
// print match if found
if (result >= 0) System.out.println(elements[result]);
Since we filled the array with proxy objects, the compareTo calls are routed to the invoke method of the TraceHandler class. That method prints the method name and parameters and then invokes compareTo on the wrapped Integer object. Note that the toString method is proxied even though it does not belong to the Comparable interface; as you will see in the next section, certain Object methods are always proxied.
Listing 6.10 proxy/ProxyTest.java
package proxy;

import java.lang.reflect.*;
import java.util.*;

/**
 * This program demonstrates the use of proxies.
 * @version 1.00 2000-04-13
 * @author Cay Horstmann
 */
public class ProxyTest
{
   public static void main(String[] args)
   {
      Object[] elements = new Object[1000];

      // fill elements with proxies for the integers 1 ... 1000
      for (int i = 0; i < elements.length; i++)
      {
         Integer value = i + 1;
         InvocationHandler handler = new TraceHandler(value);
         Object proxy = Proxy.newProxyInstance(null,
            new Class[] { Comparable.class }, handler);
         elements[i] = proxy;
      }

      // construct a random integer
      Integer key = new Random().nextInt(elements.length) + 1;

      // search for the key
      int result = Arrays.binarySearch(elements, key);

      // print match if found
      if (result >= 0) System.out.println(elements[result]);
   }
}

/**
 * An invocation handler that prints out the method name and parameters, then
 * invokes the original method
 */
class TraceHandler implements InvocationHandler
{
   private Object target;

   /**
    * Constructs a TraceHandler
    * @param t the implicit parameter of the method call
    */
   public TraceHandler(Object t)
   {
      target = t;
   }

   public Object invoke(Object proxy, Method m, Object[] args) throws Throwable
   {
      // print implicit argument
      System.out.print(target);
      // print method name
      System.out.print("." + m.getName() + "(");
      // print explicit arguments
      if (args != null)
      {
         for (int i = 0; i < args.length; i++)
         {
            System.out.print(args[i]);
            if (i < args.length - 1) System.out.print(", ");
         }
      }
      System.out.println(")");

      // invoke actual method
      return m.invoke(target, args);
   }
}
6.5.3 Properties of Proxy Classes
Now that you have seen proxy classes in action, let us go over some of their properties. A proxy class has only one instance field, the invocation handler, which is defined in the Proxy superclass. Any additional data required to carry out the proxy objects' tasks must be stored in the invocation handler. For example, when we proxied Comparable objects in the program shown in Listing 6.10, the invocation handler stored the actual Integer objects being wrapped. The names of proxy classes are not specified; Oracle's virtual machine generates class names that begin with the string $Proxy.
There is only one proxy class for a particular class loader and ordered set of interfaces. That is, if you call the newProxyInstance method twice with the same class loader and interface array, you get two objects of the same class. You can also obtain that class with the getProxyClass method:
Class proxyClass = Proxy.getProxyClass(null, interfaces);
A proxy class is always public and final. If all interfaces that the proxy class implements are public, the proxy class does not belong to any particular package. Otherwise, all non-public interfaces must belong to the same package, and the proxy class will also belong to that package.
You can test whether a particular Class object represents a proxy class by calling the isProxyClass method of the Proxy class.
This ends our final chapter on the fundamentals of the Java programming language. Interfaces, lambda expressions, and inner classes are concepts that you will encounter frequently. However, as we already mentioned, cloning and proxies are advanced techniques that are of interest mainly to library designers and tool builders, not application programmers. You are now ready to learn how to deal with exceptional situations in your programs in Chapter 7.
Given a gold mine of n*m dimensions. Each field in this mine contains a positive integer, which is the amount of gold in tons. Initially the miner is in the first column but can be at any row. From a given cell, the miner can move to the cell diagonally up towards the right, straight right, or diagonally down towards the right. Find out the maximum amount of gold he can collect.
Examples:
Input : mat[][] = {{1, 3, 3},
                   {2, 1, 4},
                   {0, 6, 4}}
Output : 12
{(1,0) -> (2,1) -> (2,2)}

Input : mat[][] = {{1, 3, 1, 5},
                   {2, 2, 4, 1},
                   {5, 0, 2, 3},
                   {0, 6, 1, 2}}
Output : 16
(2,0) -> (1,1) -> (1,2) -> (0,3) OR
(2,0) -> (3,1) -> (2,2) -> (2,3)

Input : mat[][] = {{10, 33, 13, 15},
                   {22, 21,  4,  1},
                   { 5,  0,  2,  3},
                   { 0,  6, 14,  2}}
Output : 83
Source: Flipkart Interview
Create a 2-D matrix goldTable[][] of the same size as the given matrix mat[][]. If we observe the question closely, we can notice the following.
- The amount of gold is always positive, so we would like to cover the maximum-value cells under the given constraints.
- In every move, we move one step toward the right side, so we always end up in the last column. If we are at the last column, then we are unable to move right, so right is 0; otherwise right is goldTable[row][col+1].

If we are at the first row or last column, then we are unable to move right-up, so just assign 0; otherwise assign the value of goldTable[row-1][col+1] to right_up. If we are at the last row or last column, then we are unable to move right-down, so just assign 0; otherwise assign the value of goldTable[row+1][col+1] to right_down.
Now find the maximum of right, right_up, and right_down and then add it to mat[row][col]. At last, find the maximum value in the first column over all rows and return it.
// C++ program to solve Gold Mine problem
#include <bits/stdc++.h>
using namespace std;

const int MAX = 100;

// Returns maximum amount of gold that can be collected
// when the journey starts from the first column and the moves
// allowed are right, right-up and right-down
int getMaxGold(int gold[][MAX], int m, int n)
{
    // Create a table for storing intermediate results
    // and initialize all cells to 0. The first column of
    // goldTable gives the maximum gold that the miner
    // can collect when he starts in that row
    int goldTable[m][n];
    memset(goldTable, 0, sizeof(goldTable));

    for (int col = n-1; col >= 0; col--)
    {
        for (int row = 0; row < m; row++)
        {
            // Gold collected on going to the cell on the right (->)
            int right = (col == n-1) ? 0 : goldTable[row][col+1];

            // Gold collected on going to the cell to the right up (/)
            int right_up = (row == 0 || col == n-1) ? 0
                           : goldTable[row-1][col+1];

            // Gold collected on going to the cell to the right down (\)
            int right_down = (row == m-1 || col == n-1) ? 0
                             : goldTable[row+1][col+1];

            // Max gold collected from taking either of the
            // above 3 paths
            goldTable[row][col] = gold[row][col] +
                                  max(right, max(right_up, right_down));
        }
    }

    // The max amount of gold collected will be the max
    // value in the first column over all rows
    int res = goldTable[0][0];
    for (int i = 1; i < m; i++)
        res = max(res, goldTable[i][0]);
    return res;
}

// Driver Code
int main()
{
    int gold[MAX][MAX] = { {1, 3, 1, 5},
                           {2, 2, 4, 1},
                           {5, 0, 2, 3},
                           {0, 6, 1, 2} };
    int m = 4, n = 4;
    cout << getMaxGold(gold, m, n);
    return 0;
}
Output:
16
Time Complexity: O(m*n)
Space Complexity: O(m*n) for the auxiliary goldTable
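Since each column of goldTable depends only on the column to its right, the auxiliary space can be reduced from O(m*n) to O(m). A sketch of that variant (in Python for brevity; the function name is mine, not from the article):

```python
# Space-optimized variant of the DP above: only the column to the
# right is needed, so two m-sized lists replace the full m*n table.
def get_max_gold(gold):
    m, n = len(gold), len(gold[0])
    nxt = [0] * m                      # plays the role of goldTable[.][col+1]
    for col in range(n - 1, -1, -1):
        cur = [0] * m
        for row in range(m):
            right = nxt[row]
            right_up = 0 if row == 0 else nxt[row - 1]
            right_down = 0 if row == m - 1 else nxt[row + 1]
            cur[row] = gold[row][col] + max(right, right_up, right_down)
        nxt = cur
    return max(nxt)

print(get_max_gold([[1, 3, 1, 5],
                    [2, 2, 4, 1],
                    [5, 0, 2, 3],
                    [0, 6, 1, 2]]))    # 16, matching the output above
```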
https://www.geeksforgeeks.org/gold-mine-problem/
Another coding horror story was reported in the microsoft.public.dotnet.xml newsgroup:
Cool. This is just a reminder for those who use XSLT scripting (msxsl:script) in .NET: watch out, this feature can be pure evil if used unwisely - it leaks memory and there is nothing you can do about it.
The problem is that when an XSLT stylesheet is loaded in .NET, msxsl:script is compiled into an assembly via CodeDOM and then loaded into memory, into the current application domain. Each time the stylesheet is loaded, the above process is repeated - a new assembly is generated and loaded into the application domain. But it's impossible to unload an assembly from an application domain in .NET!
Here is the KB article on the topic. It says it applies to .NET 1.0 only, but don't be confused - the problem exists in .NET 1.1 and 2.0 as well. Moreover, I'm pretty pessimistic about whether it's gonna be fixed in the future.
The solution is simple - just don't use script in XSLT unless you really really really have to. Especially on the server side - XSLT script and ASP.NET should never meet unless you take full responsibility for caching the compiled XslCompiledTransform. Use XSLT extension objects instead.
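For reference, a minimal sketch of the extension-object approach (the namespace URI and the ToUpper function are illustrative names of my own, not from this post): the function lives in an ordinary CLR class that you pass in via XsltArgumentList.AddExtensionObject, and the stylesheet calls it through a bound namespace prefix:

```xml
<!-- Sketch only: "urn:my-ext" and ToUpper() are made-up names -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:ext="urn:my-ext">
  <xsl:template match="/">
    <!-- resolved at run time via reflection against the extension object -->
    <xsl:value-of select="ext:ToUpper(string(.))"/>
  </xsl:template>
</xsl:stylesheet>
```

On the C# side the object is supplied per transform, e.g. args.AddExtensionObject("urn:my-ext", new MyExtensions()), so nothing is compiled and loaded into the AppDomain on each stylesheet load.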
Update. Of course Yuriy reminds me that msxsl:script runs faster than an extension object, because msxsl:script is available at compile time and so the XSLT compiler can generate direct calls, while extension objects are only available at run time and so can only be called via reflection.
That makes msxsl:script a preferable but dangerous solution when your stylesheet makes lots of calls to extension functions.
In a perfect world of course msxsl:script would be compiled into dynamic methods (just like XSLT itself), which are GC reclaimable, but I don't think CodeDOM is capable of doing this currently. I wonder if it's possible to compile C#/VB/J# method source into dynamic method anyway?
Also it's interesting how to improve extension objects performance - what if extension objects could be passed at compile time? They are usually available anyway at that time too. Or what if compiled stylesheet could be "JITted" to direct calls instead of reflection?
Sergey, Anton, can you please comment on this?
TrackBack URL:
Can you provide more details? If you prefer email, you can contact "oleg" at this domain name.
This is not isolated to script in the XSLTs; there's a major bug somewhere in the XslCompiledTransform code. Our site's content all runs through an XSLT to clean up the HTML created by the content teams. There is NO script in the XSLTs (only 2 are used for the whole site) and the XslCompiledTransform has a file-dependency cache on it. Admittedly the problem only occurs when both the GoogleBot and Yahoo Slurp bot handily walk the site at the same time, but the 1k-2k exception emails we get as a result are rather irritating!!! If anyone has any ideas how to fix this, other than removing the transforms (too much work and no time to do it), they'd be very welcome!!
Thanks for the great comment Sergey!
It's amazing that there is still no compiler to dynamic methods; somebody has to work on this.
If not, that must be an interesting project - if only I had any spare time :(
I'd not agree that script in XSLT is evil. I like the feature. The fact that the implementation of this feature in the .NET Framework 1 & 2 has serious usability problems doesn't mean the feature is bad and can't be implemented better.
In the days of MSXML, scripts were slow because they required starting a scripting environment to interpret VBScript or JScript. This is quite expensive, and extension objects were introduced as a workaround.
.NET allowed compiling all scripts to assemblies, and scripts became much faster than extension objects, especially in XslCompiledTransform, which binds to the script functions at compile time. The latter fact benefits from the real advantage of scripts over extension objects – scripts are available at compile time. This makes possible name and type verification, early binding and many forms of optimization including type conversion elimination.
In the script, I also like the ability to write them in the same file where I call them. (And you still can put scripts in the separate file using xsl:include).
The well known problem with scripts in .NET Framework 2.0 is that the CodeDOM, in order to compile scripts, generates types, and types can be unloaded only with the entire AppDomain. The problem is not with the scripts themselves but with the lack of adequate technology in .NET to compile them. The script unload problem would not exist if the CLR solved the type unloading problem (no evidence that this is going to happen soon), or if CodeDOM or some other technology would allow compiling script blocks to dynamic methods (from my perspective C# and Co. compilers should be managed classes that take a TextReader as input and write results to a MethodBuilder).
The script unloading problem also disappears with an XSLT compiler. In this case compiled stylesheets become normal assemblies and can't be unloaded either. Scripts are not special in this case any more. (In this case we have a packaging problem – a stylesheet with script blocks compiles to multiple assemblies instead of one, but this is a different story.)
Thus, scripts have chances to overcome the usability problem. Extension objects are unlikely to become better in the way they are designed.
Instead we discuss adding another CLR binding mechanism where users would be able to describe in the stylesheet which CLR types they want to use and call methods of these types from XPath expressions in the stylesheet. (Similar to what Common Java Binding does.) In this case XslCompiledTransform would be able to bind to methods at compile time.
In addition I'd like to point to one more victim of script unloading problem:
Sergey
maxtoroq, it doesn't matter which language you are using. The above applies to the XslTransform and XslCompiledTransform classes in .NET.
Do you know if this applies if you use JScript or JavaScript as the msxsl:script language?
Good point, Yuriy! Now I have to update my post.
Compare the following call stacks to see the difference in how the same function is invoked depending on whether msxsl:script or an XSLT extension object is used:
EXTENSION OBJECT:
=================
} net2xslt.exe!net2xslt.Program.MSE.test(object o = Position=0, Current=null) Line 19
[Native to Managed Transition]
[Managed to Native Transition]
System.Data.SqlXml.dll!System.Xml.Xsl.Runtime.XmlExtensionFunction.Invoke(object extObj, object[] args) + 0x42 bytes
System.Data.SqlXml.dll!System.Xml.Xsl.Runtime.XmlQueryContext.InvokeXsltLateBoundFunction(string name, string namespaceUri,
System.Collections.Generic.IList<System.Xml.XPath.XPathItem>[] args) + 0x345 bytes
System.Xml.Xsl.CompiledQuery!{xsl:template match="/"}() Line 18 + 0x12b bytes XSLT
MSXSL:SCRIPT
============
} l_yjrpgf.dll!System.Xml.Xsl.CompiledQuery.Script1.test(object o = Position=0, Current=null) Line 10
System.Xml.Xsl.CompiledQuery!{xsl:template match="/"}() Line 18 + 0x123 bytes XSLT
System.Xml.Xsl.CompiledQuery!System.Xml.Xsl.CompiledQuery.Query.{xsl:apply-templates}(System.Xml.Xsl.Runtime.XmlQueryRuntime
{urn:schemas-microsoft-com:xslt-debug}runtime = {System.Xml.Xsl.Runtime.XmlQueryRuntime}) + 0xc5 bytes
However, if used carefully msxsl:script provides much value. The XSLT compiler generates direct calls to msxsl:script functions, while for XSLT extension objects it generates invocations via the reflection API. If you replace calls to EXSLT.NET string handling functions with C# equivalents in msxsl:script blocks you get a significant performance boost if they are invoked often. So, if you can manage to cache compiled XSLT instances in memory, they are very useful. We dramatically increased performance of the application by replacing them with msxsl:script blocks. It applies to XslCompiledTransform only.
This page contains a single entry by Oleg Tkachenko published on October 12, 2006 7:29 PM.
Joe Fawcett is blogging was the previous entry in this blog.
.NET XmlReader API flaw is the next entry in this blog.
Find recent content on the main index or look in the archives to find all content.
http://www.tkachenko.com/blog/archives/000620.html
make generate-plist
cd /usr/ports/lang/gcc9/ && make install clean
pkg install gcc9
Number of commits found: 16
Improve upon revision 532950 by passing GCC optimization options via
MAKE_ARGS instead of trying to do this via the environment (which is
lower priority and required files/patch-Makefile.in which we can now
remove).
PR: 245511
lang/gcc9: build with base GCC on powerpc64 elfv1
Instead of using lang/gcc8 for bootstrapping gcc9 on powerpc64 elfv1, use
directly base gcc.
Necessary changes:
- CFLAGS_FOR_TARGET="-O0" CXXFLAGS_FOR_TARGET="-O0" BOOT_CFLAGS="-O0" in
CONFIGURE_ENV and MAKE_ENV. Otherwise bootstrapped compiler fails later in the
build with segfault.
- CRTSTUFF_T_CFLAGS has changed optimizations to -O0, instead of -O2. -O2 worked
in gcc8, because there was no -fno-asynchronous-unwind-tables flag added to
CRTSTUFF_T_CFLAGS. Since this works when building with clang on powerpc64 elfv2,
this patch is added to EXTRA_PATCHES, only on powerpc64 elfv1,
- BOOT_CFLAGS has added ? before =. This is to allow overriding BOOT_CFLAGS in
CONFIGURE_ENV and MAKE_ENV.
- A patch by Gustavo Romero to gcc/dumpfile.c is necessary to allow compiling
with base GCC, otherwise base GCC hits ICE. Incidentally, this patch alone also
fixes build for powerpc (32 bits) with base GCC.
Bump PORTREVISION for dependency change.
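(An aside on the BOOT_CFLAGS item above - this is a generic illustration, not the port's actual Makefile text: in make, `?=` assigns a value only when the variable is not already set, so a value supplied through the environment, e.g. via CONFIGURE_ENV or MAKE_ENV, wins:)

```make
# Illustrative only: conditional assignment in make
BOOT_CFLAGS ?= -O2   # default, used only if BOOT_CFLAGS is not already set
# BOOT_CFLAGS=-O0 placed in the environment (e.g. via MAKE_ENV) overrides it;
# a plain "=" would clobber the environment value instead
```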
PR: 245511, 242506
Approved by: gerald (maintainer timeout)
Update to the GCC 9.3 release, which fixes some 157 further bugs.
This remains the default version of GCC in the Ports Collection, and
this update mostly addresses regressions.
files/patch-powerpc32 was a backport from this release branch to begin
with and has now become obsolete. [1]
PR: 241125 [1]
Appease portlint when it comes to patch format.
Backport (part of) r521207 | gerald | 2019-12-28 from lang/gcc9-devel:
Enable GCC plugins support by default.
PR: 242644
Submitted by: tobik
Differential Revision:
Temporarily apply a patch from upstream that addresses a build failure
on powerpc ("error: integer constant is too large for 'long' type").
This is already part of lang/gcc9-devel after r518494 and the 20191123
snapshot of GCC 9.2.1; it will be part of the GCC 9.3 release at which
point we can remove this local patch again.
PR: 241125
Forward port r517702 | gerald | 2019-11-15 from lang/gcc9-devel:
On versions of FreeBSD that are new enough and made that switch
already, use ELFv2 ABI on powerpc64.
PR: 239813
Submitted by: pkubaj
Reported by: linimon
Backport 517206 | gerald | 2019-11-10 from lang/gcc10-devel, which already
landed in lang/gcc9-devel as r517355 | gerald | 2019-11-13.
Add a new option PLUGINS that enables GCC's plugin framework. This is off
by default for now, but something to possibly make the default after a bit
of settling.
I plan to backport this to lang/gcc9-devel and then lang/gcc9.
Submitted by: David Carlier <devnexen@gmail.com>
Differential Revision:
Properly push down lang/gcc9/patch-clang-vec_step into the files/
subdirectory.
clang on rs6000/powerpc* unfortunately poisons user namespace by default
(without any special options or include files being required).
Until that changes (or GCC changes) we need to avoid using vec_step as a
variable name.
PR: 239266
Update to GCC 9.2 release, the second in the GCC 9 series, which fixes
some 68 bugs.
This is the default version of GCC in the Ports Collection, and it just
got a bit more polish and stability.
Both files/patch-amd64-gcc-multilib-support and
files/patch-powerpc64-no-_GNU_SOURCE [1] have been integrated upstream
(and also been part of lang/gcc9-devel already), so remove them here.
PR: 239648 [1]
Ensure _GNU_SOURCE is no longer defined on powerpc64 (which was a
regression from the GCC 8 series).
The technical background is that a consolidation in upstream GCC made
non-GNU platforms include gnu-user.h and then undefined some macros
in rs6000/freebsd.h, but missed doing the same in rs6000/freebsd64.h.
The fix has now been included upstream and in the current snapshot that
the lang/gcc9-devel port tracks; carrying files/patch-powerpc64-no-_GNU_SOURCE
in this port should become obsolete with the GCC 9.2 release.
(As this should be a very short-lived measure, bump PORTREVISION only
for powerpc64 to avoid all other users having to rebuild, too.)
PR: 239648
Explicitly depend on GCC 8 (instead of USE_GCC=yes) for powerpc64 to
avoid a dependency loop.
PR: 238330
Reported by: pkub
lang/gcc*: Hide pkg-message during upgrades
PR: 239419
Approved by: gerald (maintainer)
Welcome GCC 9.1, the first release of the GCC 9 series! The upstream
release notes have a comprehensive overview of many improvements and
changes, and the porting guide addresses issues you may encounter porting
to this new version, though this release series should have fewer of
those than previous ones.
To provide a brief overview of some of the more noticable changes:
GCC's diagnostics now print source code with a left margin showing line
numbers. This is configurable via -fno-diagnostics-show-line-numbers.
Plus there have been lots of further improvements around diagnostic
messages in general, such as -fopt-info.
As usual, there are a large number of improvements to code generation.
https://www.freshports.org/lang/gcc9
This blog series discusses C4C tenants: how many you need, and examples of recommended tenant landscapes. The blog series includes the following topics:
- Determine how many tenants you need
- Tenant landscape use case examples
- Considerations for tenant copies
- Tenant landscape recommendation with SDK
- SDK deployment recommendations – this blog
When using the SDK it is important to plan the landscape and how the solution will move through the tenant landscape. This blog highlights key topics discussed in the Deployment Recommendations with SDK presentation.
The presentation provides details on the following topics:
- Recommended Standard Landscape for SDK Development
- Advanced Options with more than 3 Tenants
- Solution Template Option
- Solution and Patch Process in Detail
This blog highlights the major points from the presentation.
Recommended Standard Landscape for SDK Development
As stated in the previous blogs, the recommended landscape is a separate DEV tenant that is used only for development. When setting up the development tenant, there are some very important considerations:
- Development tenant is a normal test tenant which is used solely for SDK developments.
- Create the development tenant from the (main) test tenant - usually a full copy, to carry over the solution profile and test data.
- The test tenant remains the leading tenant for implementation configuration.
Use these tips to avoid obstacles:
- Do not assign the PDI work center to business users.
- Do not use the SDK development user for tests in frontend.
- Do not use the development tenant for integration, data migration preparation, report adoption and master/page layouts, because of the namespace switch in the patch process.
Advanced Options with More than 3 Tenants
Whenever a test tenant is purchased, by default it is on the same system as your other test tenant. With multiple test tenants on the same system, both tenants must have the same version of the SDK solution. This may not be what you need.
Example:
In your company you have developers working on a DEV tenant. There are administration/key users testing on the TEST tenant. Additionally, you have user acceptance testing going on regionally. The user acceptance testing is done by a group of business users. It could be that while user acceptance test is going on, the key users are testing an updated version of the solution that has not yet been rolled out to the user acceptance testing. In this situation, the user acceptance testing may be on version 2.0 of a SDK solution, while the developers and key users are testing version 2.5 of the custom solution.
In order to support the previous example, the user acceptance testing needs to be on a different version of the SDK solution. This requires that the user acceptance tenant be located on a different system.
When test tenants are purchased and created, by default the system where they are located will not be known to you. You need to create an incident when requesting the test tenant and request it be on a different system in order to support multiple versions of the SDK solution.
Solution Template Option
Some consultants prefer to develop using solution template. This enables you to start coding when the design is not ready. There are disadvantages as well, for example business configuration content cannot be created. The presentation describes the pros and cons to help you make your decisions.
Solution and Patch Process in Detail
In order to deploy the solution to the (main) test tenant and production tenant, you need to assemble the solution in the development tenant and upload and activate it in the target tenant. Once the solution is assembled, all further developments and enhancements are done via a patch and must be compatible to previous versions. This means that deletion of objects (BOs, fields, BCOs, etc.) as well as change of data types is not allowed in a patch. However you can add new fields and objects and change the business logic in a patch. When the patch is then deployed to production, the entire solution is deployed to production. There are no ‘delta’ deployments. The actual deployment is a manual step, consisting of uploading a locally stored solution file to the target tenant and activation of the solution in that tenant. There is no transport mechanism between the tenants.
For more details on SDK landscape see Stefan’s Blog:
Thanks, Eduard! Great blog and presentation!!!
Jaroslava Mruzova
Eduard – I’m talking with one of our partners now. The question is – when you are doing SDK with the DEV tenant – do you do the adaptation for custom fields, do we recommend that you do this on the DEV tenant, or do the custom fields in the configuration tenant and redo or download/upload.
-ginger
Jaroslava Mruzova – the answer from Eduard and our other SDK experts is the following:
1. If you know the custom field will require business logic, create the field in the SDK.
2. All other extension fields created by the adaptation tools (key user tools) are created in the main config tenant. If you later find they need logic, export them and import them into the DEV tenant.
That’s our official recommendation.
Hi Ginger and Jaroslava,
the recommendation is to create KUT extension fields in the Test tenant and ex-/import them to the Dev tenant if required.
However if you require business logic for this field, I recommend to add the extension field with SDK, not KUT.
I will update the blog to make that explicit.
Best regards,
Eduard
Thank you, Eduard Stelle – we really appreciate your guidance and deep insight! Thanks for updating the blog as well – you are ‘de best!!
Hi Eduard,
Thanks for this informative blog. This is very helpful especially where you have 3 system landscape.
What if we do not have the original solution and only have the patch solution? Can we safely deploy just the patch?
We tried this in our test environment. Although it gives errors, it imports the patch solution successfully and also activates it without issues. Not sure what those errors mean.
Again is it possible to get your original solution back? I tried enabling the original solution in PDI but it reverts it to the immediate previous version but not the original solution.
Regards,
Sandeep
Hi Sandeep,
A (patch) assembly contains the complete solution, so you can deploy the most recent (patch) version to a “fresh” tenant right away, without deploying previous versions first.
In the development tenant you can switch between the original and patch solution by de-/enabling the solutions. However, original solution does not necessarily mean the first version of the solution. If you deploy a new version to a system, the solution will be updated in all tenants in this system where the solution is available. Check pages 8 and 9 in this presentation:
This becomes important if you have a 4 or more tenant landscape in place.
Regards,
Eduard
Thanks Eduard,
We already did a small POC with this combination in one our QA tenant with a dummy solution and importing the latest patch went in successfully.
The only thing we noticed is that if you have the PDI_AUTHORISATION_FIX work center assigned to you, you may get a Production Authorization Fix error message, so the solution for this is to remove it and then import the patch.
Thanks and Regards,
Sandeep
Thanks Eduard
I would like to know more about the Change Project tenants when it comes to later additions of SDK development. I mean, when Production is running and the client wants to engage in additional functionality development and deployment, do they need to know of what they can and what they cannot do in the Change Project tenant after requesting it from SAP Operations?
Hi Eugene,
This is a good questions.
In the change project case you should also follow the 3 tenant landscape approach. The change project should point to the main configuration tenant, not the development tenant. Once you deploy the custom solution to the test tenant with the change project, you will be able to scope the new custom solution in the solution workspace. Before merging the workspace to production, the custom solution has to be deployed in the production tenant first.
Please check also this blog series on Change Projects:
The actual custom solution is independent of any implementation/change project. Everything you can do before go-live, you can do after go-live as well. Of course you need to consider existing data in the system. If for example you introduce an extension field automatically calculated in the BeforeSave event with the new solution, the extension field will remain empty for all existing records unless they are changed and saved after successful deployment and activation of the new solution.
However there are limitation with patch solution, basically it is not allowed to do incompatible changes like delete objects or change data type. For further details on limitations in patch solutions please check the help document:
Best regards,
Eduard
Thank you s much Eduard. You have answered my question almost fully.
There’s only one minor thing for me to ask. Here it is: if a customer wants to use one of its existing Test tenants as a Change Project tenant instead of requesting a new one from SAP, can the customer do it? Can they ask for a decommissioning of that tenant after the solution has been merged with the Production?
Or they have to request a new Change Project tenant every time they need to modify some of their functionalities?
Thanks in advance for your answer.
Cheers,
Gene
Hi Eugene,
The question is answered in the blog series on Change Projects:
You can use an existing test tenant for the change project, but you should not use the development tenant.
You can request decomissioning of the test tenant afterwards, but you don’t need to. You can use the same test tenant for subsequent change projects.
Best regards,
Eduard
Thank you Eduard very much. I have thoroughly read the blog and found many of its points exceptionally important. I will use those points in my dealing with customers.
There’s one point, however, somewhat missing from the blog, namely, what is the SAP main recommendations for the public cloud customers. I don’t think it’s a question for you specifically but rather a generic statement that our customers very often are looking for.
Cheers,
Gene
Hi Eduard,
One quick pointer and a very important one which can be added in your blog
When you try to download a copy of a solution from a tenant, it will ask you for a description. You need to give the same description as the solution, or else when you upload the zip file in another tenant it will create a new solution with a different namespace. That makes things very complex for later patches of the solution.
This happens when you select Download a Copy not when Assemble and download.
We got to know about this in hard way.
Thanks and Regards,
Ajith J
https://blogs.sap.com/2015/07/07/sap-cloud-for-customer-tenant-landscapes-sdk-deployment-recommendations/
C++ class wouldn't build in bare bone QML application
I have a simple bare bone QML application and I just want to add a c++ class and use it from QML but I get error and it wouldn't even build. Here is the code.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include "mymodel.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    MyModel model;
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    // engine.rootContext()->setContextProperty("_MyModel", &model);

    return app.exec();
}
And the class MyModel is:
#ifndef MYMODEL_H
#define MYMODEL_H

#include <QObject>

class MyModel : public QObject
{
    Q_OBJECT
public:
    explicit MyModel(QObject *parent = 0);

signals:

public slots:
};

#endif // MYMODEL_H
the .cpp file of the class
#include "mymodel.h"

MyModel::MyModel(QObject *parent) :
    QObject(parent)
{
}
When I build, I get the following error:
main.obj:-1: error: LNK2019: unresolved external symbol "public: __thiscall MyModel::MyModel(class QObject *)" (??0MyModel@@QAE@PAVQObject@@@Z) referenced in function _main
debug\QMLCalc1.exe:-1: error: LNK1120: 1 unresolved externals
What is possibly wrong? I can't even get to the setContextProperty() call yet.
Have you included mymodel.cpp in the project?
- p3c0 Moderators
Also, unrelated to the above error: call setContextProperty before loading the QML, or else there will be Reference Errors.
I had to 'Run qmake' from the Build menu, which fixed it - the project now builds fine without any other change - but I still don't understand why I have to do that! It is a simple new project and I just added a class. Could anyone tell me what qmake does? Can I add this as a step to the build process? Thanks.
- dheerendra
It generates the make files again to compile your sources. You added new class files to your project, so the .pro file got updated. I have seen that sometimes it does not regenerate the make file on its own after adding new class files - I have seen this more when using qmake/Qt Creator with VC++. In general it is good practice to re-run qmake once the .pro file is updated.
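The fix can also be checked directly in the .pro file - a sketch using the file names from this thread (nothing else about the project is known): mymodel.cpp must be listed under SOURCES so qmake emits a compile rule for it.

```pro
# Sketch of the relevant .pro entries for this project
SOURCES += main.cpp \
           mymodel.cpp
HEADERS += mymodel.h
```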
https://forum.qt.io/topic/49718/c-class-wouldn-t-build-in-bare-bone-qml-application
NAME
VOP_REVOKE — revoke access to a device and its aliases
SYNOPSIS
#include <sys/param.h>
#include <sys/vnode.h>

int
VOP_REVOKE(struct vnode *vp, int flags);
DESCRIPTION
VOP_REVOKE() will administratively revoke access to the device specified by vp, as well as any aliases created via make_dev_alias(9). Further file operations on any of these devices by processes which have them open will nominally fail. The flags must be set to REVOKEALL to signify that all access will be revoked; any other value is invalid.
LOCKS
The vp must be unlocked on entry, and will remain unlocked upon return.
SEE ALSO
make_dev_alias(9), vnode(9)
AUTHORS
This manual page was written by Brian Fundakowski Feldman.
http://manpages.ubuntu.com/manpages/precise/man9/VOP_REVOKE.9freebsd.html
I want to fetch the stock price from a web site:
For example, the stock price appears as "S&P BSE :25,489.57". I want to fetch the numeric part of it as "25489.57".
This is the code I have written as of now. It is fetching the entire div in which this amount appears, but not the amount.
Below is the code:
from bs4 import BeautifulSoup
from urllib.request import urlopen

page = ""
html_page = urlopen(page)
html_text = html_page.read()
soup = BeautifulSoup(html_text, "html.parser")
divtag = soup.find_all("div", {"class": "sensexquotearea"})
for oye in divtag:
    tdidTags = oye.find_all("div", {"class": "sensexvalue2"})
    for tag in tdidTags:
        tdTags = tag.find_all("div", {"class": "newsensexvaluearea"})
        for newtag in tdTags:
            tdnewtags = newtag.find_all("div", {"class": "sensextext"})
            for rakesh in tdnewtags:
                tdtdsp1 = rakesh.find_all("div", {"id": "tdsp"})
                for texts in tdtdsp1:
                    print(texts)
I had a look at what is going on when that page loads the information, and I was able to simulate what the JavaScript is doing in Python.
I found out it is referencing a page called
IndexMovers.aspx?in=en - check it out here.
That page is a comma-separated list of values. First comes the name, next comes the price, and then a couple of other things you don't care about.
To simulate this in Python, we request the page, split it by the commas, then read every 6th value in the list, adding that value and the value one after it to a new list called stockInformation.
Now we can just loop through stockInformation and get the name using item[0] and the price with item[1].
import requests

newUrl = ""
response = requests.get(newUrl).text
commaItems = response.split(",")

# create list of stocks, each one containing information
# index 0 is the name, index 1 is the price
# the last item is not included because for some reason it has
# no price info on the indexMovers page
stockInformation = []
for i, item in enumerate(commaItems[:-1]):
    if i % 6 == 0:
        newList = [item, commaItems[i+1]]
        stockInformation.append(newList)

# print each item and its price from your list
for item in stockInformation:
    print(item[0], "has a price of", item[1])
This prints out:
S&P BSE SENSEX has a price of 25489.57
SENSEX#S&P BSE 100 has a price of 7944.50
BSE-100#S&P BSE 200 has a price of 3315.87
BSE-200#S&P BSE MidCap has a price of 11156.07
MIDCAP#S&P BSE SmallCap has a price of 11113.30
SMLCAP#S&P BSE 500 has a price of 10399.54
BSE-500#S&P BSE GREENEX has a price of 2234.30
GREENX#S&P BSE CARBONEX has a price of 1283.85
CARBON#S&P BSE India Infrastructure Index has a price of 152.35
INFRA#S&P BSE CPSE has a price of 1190.25
CPSE#S&P BSE IPO has a price of 3038.32
#and many more... (total of 40 items)
This clearly matches the values shown on the page.
So there you have it - you can simulate exactly what the JavaScript on that page is doing to load the information. In fact you now have even more information than was shown to you on the page, and the request is going to be faster because we are downloading just the data, not all that extraneous HTML. This also has the advantage of being a little easier to parse. No BeautifulSoup required.
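As a small follow-up for the original question - the displayed string still needs its commas stripped to become a number. A minimal sketch (the helper name is mine, not from the answer above):

```python
# Helper to turn a display string like "S&P BSE :25,489.57"
# into a float (the function name is my own)
def parse_price(text):
    number = text.split(":")[-1].replace(",", "").strip()
    return float(number)

print(parse_price("S&P BSE :25,489.57"))  # 25489.57
```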
https://codedump.io/share/IweWLP1zrjA2/1/extract-using-beautiful-soup
A python logrotater
Project description
This library is a simple logrotater (or file rotater). Simply pass in the path to the main logfile and this library will rotate all the logs by an increment of 1.
Example:
import logrotater

rotater = logrotater.LogRotate(prefix='/home/kyle/p4.log', verbose=True)
rotater.rotate()
The prefix path should be the path of the main logfile without the .N extension. The previous example would rotate /home/kyle/p4.log.N to /home/kyle/p4.log.N+1, move /home/kyle/p4.log to /home/kyle/p4.log.1 and create a new empty /home/kyle/p4.log
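For illustration, the rotation scheme described above can be sketched with the standard library alone (this is my own sketch, not the library's actual implementation):

```python
import os

# Minimal sketch of the rotation scheme described above
# (stdlib-only illustration, not the library's actual code)
def rotate(prefix):
    # count the existing numbered logs: prefix.1, prefix.2, ...
    n = 1
    while os.path.exists("%s.%d" % (prefix, n)):
        n += 1
    # shift prefix.i -> prefix.i+1, highest number first
    for i in range(n - 1, 0, -1):
        os.rename("%s.%d" % (prefix, i), "%s.%d" % (prefix, i + 1))
    # move the live log to .1 and start a fresh empty one
    if os.path.exists(prefix):
        os.rename(prefix, prefix + ".1")
    open(prefix, "w").close()
```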
|
https://pypi.org/project/logrotater/
|
CC-MAIN-2020-24
|
refinedweb
| 126
| 71.61
|
Since updating Unity from 2019.2 to 2019.3, I am unable to compile the project code in Visual Studio. It still compiles and runs in the Unity Editor but Visual Studio gives hundreds of missing reference errors ("The type or namespace name 'XXX' could not be found (are you missing a using directive or an assembly reference?)"). This includes files which I wrote, i.e. my own created namespaces.
Nothing I've tried works (see below) except for one thing. If I manually remove the references to all four assemblies in visual studio and re-add them, the solution will compile. But when I then close VS and reopen the same error comes up.
The assemblies are Assembly-CSharp, Assembly-CSharp-Editor, Assembly-CSharp-Editor-firstpass and Assembly-CSharp-firstpass. The normal ones that get created as far as I know.
Why does the reference break each time I restart VS? (Edit: and sometimes when rebuilding the solution without restarting)
Visual Studio Community 2017 15.9.22.
Making sure Visual Studio (and the Visual Studio Tools for Unity (which is installed using the Visual Studio installer)) are fully up to date.
Updated VS to the latest version
Removed VS Code package (I'm not using Code) from Unity, then readded.
A number of restarts of Unity and VS in between all the above.
Bump. Any ideas on how to possibly fix this would be appreciated
Bump. Still no solution found. I've deleted a lot of the project but the errors just come up from different scripts. Definitely some issue with VS
Answer by IINovaII
·
May 13, 2020 at 08:15 AM
It's a bug that happens in Unity 2019.3.12f. Try the newer Unity version 2019.3.13 instead. It's been fixed there.
- Nova
Wow. I hit check for updates in the editor and it told me there were none... But found it in the hub. Downloading it now
If I had a million dollars I'd give it to you. I spent 8+ hours trying to figure this out thinking Unity was up to date... It's working now
I'm happy that I was able to help you.
- Nova
Answer by luixodev
·
Sep 05, 2020 at 10:24 AM
Same here when I update to Unity 2020.1.4f.4412, any idea?
Close VS, then create a new c# file inside unity, double click the file, then all namespaces were restored.
This worked for me! Thanks ;)
Answer by metixgosu
·
Apr 08 at 12:13 AM
The trigger is removing MonoBehaviour from a script that is still attached to a game object in Unity. If you want to remove MonoBehaviour from a script for any reason, remove the script from the game object in Unity first instead of editing it in VS.
|
https://answers.unity.com/questions/1728809/namespace-references-keep-breaking-in-visual-studi.html?sort=oldest
|
CC-MAIN-2021-21
|
refinedweb
| 562
| 74.39
|
Matt Powell
Microsoft Corporation
May 2004
Applies to:
Microsoft® .NET Framework
Microsoft® SOAP Toolkit
Microsoft® Visual Basic®
Web Services Enhancements for Microsoft .NET
Summary: Take the next step in developing Web services by using managed code in the .NET Framework, and see how to incrementally transition your SOAP Toolkit-based Web service applications. (14 printed pages)
Introduction
The Basics... Your First Managed Web Service for SOAP Toolkit Developers
The Basics... Calling Web Services in Managed Code for SOAP Toolkit Developers
The Migration Approach
The Natural Solution: Drawing the Migration Line at the Wire
Another Migration Option: Doing as Little Work as Possible to Remove the SOAP Toolkit
What About SOAP Toolkit Feature 'X'?
Conclusion
Related Books
Since the Microsoft SOAP Toolkit was released in June 2000, a number of advances in developing for the Windows platform, most notably the release of the .NET Framework, have raised Web services to the status of a "first class citizen" in terms of application development. This article discusses the features of the SOAP Toolkit, the first Microsoft tool for building and calling Web services, and how to migrate SOAP Toolkit applications to use the .NET Framework.
The Microsoft SOAP Toolkit provides a mechanism for wrapping a COM object in a Web service and exposing its methods via SOAP messages. Similarly, you can take a WSDL file for an existing Web service and bind a proxy object to it, which allows the methods from the Web service to be called as if they are methods on the proxy object. While this functionality is useful, it can also be developed more easily using the .NET Framework, often with better results. I will show you how to accomplish the same thing in managed code using the .NET Framework, as well as look at ways to incrementally transition your Web service applications to the .NET Framework.
There are a number of very good reasons why you would want to move your Web services to the .NET Framework if you have code that uses the SOAP Toolkit or you are developing with the SOAP Toolkit.
First and foremost, Web services have come quite a ways since the SOAP Toolkit was released. Probably the single biggest advance is in interoperability. This is due in large part to the WS-I Basic Profile, which outlines rules for making Web services that will interoperate easily. Arguably the biggest rule in the Basic Profile is that RPC-encoded Web services should not be used. RPC encoding was quite popular with first-generation Web services, but interoperability problems are more likely to occur with RPC encoding. Therefore, the Basic Profile solution to avoid these problems is to use document/literal Web services. The SOAP Toolkit can communicate with document/literal Web services, but by default it creates RPC-encoded Web services.
In addition, the SOAP Toolkit uses a late-bound approach to consuming Web services. Among other things, this means that errors which would be caught at compile time in an early-bound scenario are not found until runtime. This might not seem like a big problem, but it could have disastrous results if the runtime error is not detectable to the application. There are many scenarios where changes to a Web service interface would go undetected in a late-bound scenario but would generate faults in an early-bound scenario. Obviously, these sorts of problems are rife with security vulnerabilities.
By the way, if you haven't heard this already, the .NET Framework rocks! The .NET Framework is much easier to program in than programming in the unmanaged world; it is less susceptible to buffer overruns, memory leaks, and security privilege elevations; and it ultimately provides Web service support as an integral part of the platform. Also, if you compare the performance of a typical scenario where a Visual Basic® 6.0 object is being exposed as a Web service, a completely managed solution is significantly faster.
The world of Web services continues to burst with innovation. For Microsoft®, that means improving upon the .NET Framework implementation of Web services. For instance, Web Services Enhancements for Microsoft .NET provides higher-level services for the .NET Framework that save developers lots of time when it comes to implementing security in their Web service applications. The Microsoft long-term vision for Web services and Indigo ("Indigo" is a set of .NET technologies for building and running connected systems, and is a feature of the next version of Windows code-named "Longhorn") still rests on and in the .NET Framework. So if you want to take advantage of Web service innovation from Microsoft in the future, you should be running the .NET Framework.
Finally, as an inverse corollary to "all innovation at Microsoft is going into .NET Framework Web services," it is true that "no innovation at Microsoft is going into the SOAP Toolkit." Specifically, full support for the SOAP Toolkit is ending in April 2005. If you have been using the SOAP Toolkit, I cannot imagine that you haven't already considered moving to the .NET Framework. Hopefully this information serves as a subtle nudge in that direction.
If you learned to program Web services using the SOAP Toolkit, then you are probably well aware of a typical first Web service written to add two numbers together. The process using the SOAP Toolkit is to write a COM DLL that exposes an IDispatch method. I did this using Visual Basic 6.0 and my method looked like this:
Function Add(x As Integer, y As Integer) As Integer
Add = x + y
End Function
Then I ran the generated COM object through the SOAP Toolkit WSDL Generator utility that created a Project1.WSDL and a Project1.WSML file. The WSDL file describes the Web service interface and the WSML file maps the calls to the proper COM server—in this case, the just-built DLL. Next I used the SOAPVDIR.CMD batch file to configure the virtual directory to use the SOAP Toolkit ISAPI listener, and the Web service was created.
It is important to note that, by default, the SOAP Toolkit WSDL Generator tool creates a WSDL file that uses an RPC/Encoded option that is not WS-I Basic Profile-compliant.
It becomes a little simpler to create a similar, integer-adding Web service using managed code (i.e., using the .NET Framework). The single biggest hurdle may be to get the .NET Framework installed on the machine. Once this is done, all there is to do is create a virtual directory on the IIS server and create a file such as MATH.ASMX listed below.
<%@ WebService Language="VB" Class="MathService" %>
Imports System
Imports System.Web.Services
Public Class MathService : Inherits WebService
<WebMethod> _
Public Function Add(A As Integer, B As Integer) As Integer
Return A + B
End Function
End Class
At first glance, this may seem more complicated than the previous Visual Basic code, but realize that the method I wrote in Visual Basic 6.0 was wrapped in a rather large Class1.cls file that was part of a Visual Basic 6.0 project, and so forth. This ASMX file is all that is needed. There is no need to register a COM object. Also, by default, the .NET Framework creates Web services using the Document/Literal approach instead of the RPC/Encoded approach. Document/Literal is WS-I Basic Profile-compliant.
Using Visual Basic .NET, the job is even easier and the project creation wizards do all the work. Just insert the code for the Add function and flag it with the <WebMethod> attribute.
For more information on attributes and migrating Visual Basic 6.0 code to Visual Basic .NET, see Language Changes in Visual Basic.
Writing a Web service is one thing, but you also want to know how to call a Web service. The SOAP Toolkit would use the following code to call the Add method created by the earlier Web service.
set soapClient3 = CreateObject("MSSOAP.SoapClient30")
Call soapClient3.mssoapinit("Project1.wsdl", "", "", "")
Ret = soapClient3.Add(3, 2)
This requires that Project1.wsdl be in the current directory, although you can also specify a URL for the first parameter to the mssoapinit function call.
The SOAP Toolkit uses a late-bound approach to calling a Web service. That means that it figures out at runtime what the details of the WSDL Web service definition are and how to call the Web service based on that information. This is opposed to an early-bound approach where the interface described by the WSDL is determined at design time when the programmer is writing the code. The problem with the late-bound approach is that you really don't know if the interface you are calling has changed. In the best-case scenario, the code written fails due to the method name changing, or because the function description has drastically changed. In the worst-case scenario, the SOAP Toolkit could coerce the method call into a valid SOAP request to the Web service, which could then cause unforeseen results that neither the client nor the server would be aware of.
An early-bound approach sends an XML message that is fully defined with the namespace the developer intended to send. If the Web service interface has changed, then a SOAP fault would be returned indicating the inconsistency in the message with what was expected. There is less of a chance that you might execute code on the server that was not intended and the vulnerabilities for data corruption and security breaches are reduced.
Making a call to the Web service from managed code requires binding the code to the Web service interface at design time. To do this, you supply the WSDL to the .NET Framework WSDL.EXE tool. WSDL.EXE takes the WSDL for a Web service as input and generates a class that can then be used to call the Web service from within your managed code.
In the case of the sample Web service, you could run WSDL.EXE with the following command line:
WSDL /l:VB
For my particular example, this created a Visual Basic .NET file called MathService.vb. This class can now be used in the code for a console application. The entire listing for the file AddThem.vb is listed below.
Class AddThem
Public Shared Sub Main()
Dim proxy as MathService = new MathService()
Dim ret as Integer = proxy.Add(2, 3)
System.Console.WriteLine(ret)
End Sub
End Class
Next, compile this code with the .NET Framework Visual Basic compiler. The following command line will do the trick:
vbc /r:System.dll /r:System.Web.Services.dll /r:System.Xml.dll /out:AddThem.exe MathService.vb addthem.vb
The /r parameters indicate reference DLLs needed to compile the Web service proxy class. What's created is an application called ADDTHEM.EXE that invokes the Web service and returns the result.
Of course, with Visual Studio .NET this task is even easier. To create an application like the ADDTHEM.EXE program just created, start by choosing to create a Visual Basic .NET Console Application. In the Solution Explorer, right-click the project and choose Add Web Reference... A wizard launches that allows us to browse to the WSDL for the Web service and then it creates the proxy class just as the WSDL.EXE utility did from the .NET Framework SDK. Finally, add the same 3 lines of code to the Main function as in the previous example and the application is written. Choose Build from the menu and get a working .NET-based application that calls the Web service and reports the results.
The .NET Framework supports all the SOAP Toolkit features for doing things such as adding HTTP authentication, adding an HTTP proxy server, and sending the request over SSL. The following code calls the Web service again, but this time, sets HTTP authentication information, changes the URL where the Web service is located, and sets an HTTP proxy server.
Class AddThem
Public Shared Sub Main()
Dim proxy As MathService = New MathService()
proxy.Url = ""
proxy.Credentials _
= New System.Net.NetworkCredential("Joe", "Password")
proxy.Proxy _
= New System.Net.WebProxy("")
Dim ret As Integer = proxy.Add(2, 3)
System.Console.WriteLine(ret)
End Sub
End Class
When moving from the SOAP Toolkit to .NET, one option is to rewrite your entire Web service application using managed code. In certain circumstances where there is not a lot of code to move or where a .NET approach is mandated for other reasons, this might be a viable solution. However, a lot of business scenarios will require that migration of a Web service application occur incrementally. Let's look at several approaches for transitioning from a SOAP Toolkit Web service application to a .NET Web service application.
As a starting point for my discussion, I will take a typical Web service application scenario where the SOAP Toolkit was used to expose a COM object as a Web service. Consuming this Web service is a smart client application written in Visual Basic 6.0. Also consuming this Web service is an ASP Web application. Both the smart client Visual Basic 6.0 application and the ASP Web application are using the SOAP Toolkit to communicate with the Web service. Figure 1 shows how these applications work using the SOAP Toolkit.
Figure 1. A smart client and a Web application consume a Web service. All Web service communication is accomplished using the SOAP Toolkit.
The Web service shown in Figure 1 is created using a COM object written with Visual Basic 6.0. It is exposed as a Web service using the SOAP Toolkit and the ASP listener. Consuming this Web service are two clients—the Visual Basic 6.0 client and an ASP client—both using the SOAP Toolkit to create a proxy in order to call the Web service. Notice that communication happens over RPC-encoded SOAP that is not WS-I Basic Profile-compliant.
Migrating this SOAP Toolkit solution to a .NET Framework solution can be done in a number of ways, and we will look at some of these shortly. One migration option is to migrate the ASP client to an ASP.NET client. You could conceivably do this by 1) using the .NET Framework Web service client capabilities, or 2) using a SOAP Toolkit proxy from within your ASP.NET code. This second option is ridiculous, because if you are running ASP.NET pages, then you are already using the .NET Framework, so you can use the .NET Framework to call your Web service. This avoids the COM interoperability layer and is a much more consistent and integrated approach. I will therefore work under the assumption that if you are migrating an ASP Web service client application to managed code, you will use the .NET Framework to talk to the Web service. My discussion will focus on the other migration options for the bottom client shown in Figure 1: the smart Visual Basic 6.0 client accessing the ASP service, both using the SOAP Toolkit.
If you take the various pieces of the application: the client application, the SOAP Toolkit proxy object, the Web service, and the wrapped COM object, any and all of these portions of the unmanaged solution could be incrementally migrated to a managed solution using the .NET Framework. Figure 2 illustrates some of the permutations of options that you could use to migrate from an unmanaged solution to a managed solution.
Figure 2. Migration options for moving from a SOAP Toolkit solution to a .NET Framework solution
This is not an exhaustive list, but it does illustrate how there should be a migration path that fits your particular business needs.
Next I will be talking about something I call migration lines. A migration line is where the boundary occurs between managed and unmanaged code. This line can be placed at several places for any particular migration point in time and is represented in Figure 2 as the boundary between the yellow (unmanaged) portions in the diagram and the green (managed) portions of the diagram.
Web services are about interoperability, and this is possible because the wire protocol is concretely defined. This is true for Web services talking between different platforms, and is also true for Web services that communicate between different toolkits, such as the .NET Framework and the SOAP Toolkit. Figure 3 shows a managed/unmanaged Web service application in mid-migration where the migration line is drawn on the wire.
Figure 3. Migrating to the .NET Framework with migration line at the wire protocol
If you are going to pick a two-step approach for migrating from a complete SOAP Toolkit application to a complete .NET Framework application, this is probably your best option. If the .NET Framework is not widely distributed on your network, then installing it on the server where your Web service lives is probably a much simpler task than installing it on a potentially large number of clients. Also, you will get the performance benefits of having a complete .NET Framework solution on the server, where performance counts the most.
Notice that the communication from the client to the server is happening over RPC-encoded SOAP. While this is not WS-I Basic Profile-compliant, it does mean that your Web service clients will not have to be updated. And yes, the .NET Framework is flexible enough to do RPC encoded Web services even though it does document literal Web services by default.
In order to implement the migration option shown in Figure 3, two main tasks need to be performed. First, the business logic of the Web service must be migrated; next, the Web service that exposes this logic must be migrated. For this example, the business logic originally lived in a COM object written with Visual Basic 6.0. I am not going to go into all, or any, of the details of migrating the business logic to a .NET managed class since this code could entail using any number of back-end technologies, but I will say that basic design practices and architectures still apply to the managed code world as they did to the unmanaged code world before.
This leaves the second task in reaching the migration point: take the business logic in the newly created managed class and expose it as a Web service. The catch is that in order for the clients to continue working unmodified, the Web service must expose the exact same interface as the previous SOAP Toolkit Web service. Luckily the WSDL.EXE tool used earlier to create the .NET Framework client proxy can also be used to build a Web service that implements the interface defined in a WSDL. A typical use of it for this kind of scenario would look like this:
WSDL /server /l:vb /out:project1.vb project1.wsdl
The result of this command would be that WSDL.EXE would take the project1.wsdl and create the project1.vb file that contains the class requested. The /l flag indicates the language (in this case Visual Basic .NET) and /server generates server-useable code instead of client-useable code. The class generated will look something like this:
<System.Web.Services.WebServiceBindingAttribute( _
Name:="Class1SoapBinding", _
[Namespace]:="")> _
Public MustInherit Class Project1
Inherits System.Web.Services.WebService
...
End Class
This class will have a lot of attributes scattered throughout it mostly because the WSDL was not for a document/literal Web service. For the most part, these can be ignored. The main problem is with the initial class declaration that uses the MustInherit keyword.
MustInherit means that this class cannot be used directly. Instead it is telling you that another class should be created that inherits from this class and then it can override the functions that are declared and get all the benefits of the code in the original class. The problem is that .NET Framework attribution does not get passed down to any classes that inherit from this class. This means that you can override the declared functions, but you will have to recopy the complex attribution to your own function declarations in order for the expected results to occur.
It is generally accepted that with the .NET Framework version 1.1, if you use WSDL.EXE to create a server class as shown here, you should take the generated code and modify it so that it is no longer a MustInherit class, and put your code into the auto-generated class instead of creating another class that inherits from the generated class. The modifications involve 1) removing the MustInherit from the class declaration, and 2) removing the MustOverride from the specific function definitions. Once you have done this, all that remains is to insert the code into the functions that call into your business logic class.
This process should work for most SOAP Toolkit implementations, but be sure to test your new Web service well with your existing client application(s) in case there are some subtleties that get lost in translation.
Now that the first milestone in the migration path has been reached, the next job is to migrate the client applications to the .NET Framework. The details are not discussed since the Web service portion is a trivial extension of the section, The Basics... Calling Web Services in Managed Code for SOAP Toolkit Developers, earlier in this article. But there are a few items to consider when deploying a widely used .NET Framework application.
Deploying the .NET Framework. This is probably the single biggest hurdle to overcome for legacy systems that want to run managed applications. If you already have the .NET Framework on your client machines, great! If not, this can be a fairly large task. See Redistributing the .NET Framework for a good discussion of all your options for distributing the .NET Framework runtime.
Consider deploying with no-touch deployment. Since you are migrating to the .NET Framework, you might as well take advantage of some of its features that makes deploying client applications easier. No-touch deployment allows .NET Framework applications to be downloaded from a Web server. This means updates can reach all clients by updating the application image on the Web server. There are restrictions on what Web services you will be allowed to access from a no-touch application. See No-Touch Deployment in the .NET Framework for information on creating a no-touch application.
Consider moving your Web service to document literal. If you used the default SOAP Toolkit settings, which create an RPC-encoded Web service, you should eventually move the Web service to use document/literal instead. Keeping it as RPC-encoded before you update your clients is probably a smart idea, but when you update your clients it is probably time to also update your Web service. This will ensure better interoperability in the future and will promote more agile application development using your Web service.
Drawing the migration line down the middle at the wire protocol is one particularly nice migration strategy, but your migration strategy may be more focused on doing as little work as possible to remove the SOAP Toolkit code from your Web service application. Such a migration is illustrated in Figure 4 below.
Figure 4. Migrating only the SOAP Toolkit portions of your application
In this case, the migration line lives in two different places: on the client at the point where the application is calling into the Web service proxy, and on the server where the .NET Framework Web service uses the pre-existing Visual Basic 6.0 COM object that implemented the business logic wrapped by the Web service. Both the SOAP Toolkit proxy on the client and the SOAP Toolkit Web service running on ASP have been replaced with .NET Framework versions of the same thing.
The ability to mix and match managed and unmanaged code like this is possible via the .NET Framework support for COM interoperability. The .NET Framework allows a piece of managed code to look like a COM object using a COM Callable Wrapper (CCW). A CCW is used on the client to allow the unmanaged application the ability to call into the managed Web service proxy. The .NET Framework also allows managed code to call COM objects using a Runtime Callable Wrapper (RCW). An RCW is used on the server when the .NET Framework Web service invokes the business logic in the pre-existing COM object. For more information on using this sort of approach, see Integrating Web Services and COM Components.
There are a couple of things to note if you are using this approach. First, you might have noticed that Figure 4 uses document/literal SOAP to communicate between the client and the server. This follows the general guideline that you should move to document/literal and become WS-I Basic Profile compliant if you can. Second, one of the reasons why a completely managed Web service is faster than an unmanaged Web service is that it uses multi-threaded apartments, and the business logic class will run in a multi-threaded apartment (MTA). However, Visual Basic 6.0 creates COM objects that are apartment model, which means that they must run in single-threaded apartments (STAs). When the Visual Basic 6.0 COM object is called from the .NET Framework Web service, the call must be marshaled to a thread in an STA. This can create a performance hit compared to the previous SOAP Toolkit Web service, because the SOAP Toolkit threads were initialized as STAs and marshalling was avoided. If performance is a problem for your Web service, avoid this migration path and migrate your entire Web service to managed code, where you will experience a significant performance increase. Even though you are using only managed code on the server, you can still perform incremental migration by using the wire protocol migration line approach discussed earlier.
The SOAP Toolkit is a relatively full-featured product that has not been significantly explored in this article. Therefore, there may be some feature, X, that has not been described in terms of migrating to the .NET Framework. Rest assured that nearly every feature of the SOAP Toolkit has a similar feature in the .NET Framework. A few that come to mind:
DIME Support, while not in the .NET Framework directly, is supported by Web Services Enhancements (WSE) 1.0 and WSE 2.0, which are free downloads that integrate into the .NET Framework support for Web services.
Type Mappers is a SOAP Toolkit feature that is functionally akin to XML serialization, a fundamental part of the .NET Framework. Most scenarios that used Type Mappers in the SOAP Toolkit will be non-issues with the .NET Framework.
The Trace Utility in the SOAP Toolkit is a nice little tool to have around for looking at the SOAP messages being passed on the wire. There is no trace utility in the .NET Framework SDK, but my personal experience is that Web services have reached a level of maturity such that I rarely have to look at on-the-wire messages anymore. There are tracing capabilities built into the .NET Framework and WSE that are based off of configuration settings, but a smart-looking windows application is not included. There are, however, a number of solutions, free and for purchase, that are available from third parties.
The .NET Framework is the current and future Web services technology for Microsoft, and SOAP Toolkit-based Web service applications should be moved to the .NET Framework in order to improve interoperability, performance, and future innovation. A number of migration strategies are available that can help you find the approach that best meets your business needs. If you are new to the .NET Framework, I think you will be pleasantly surprised at its feature set and ease of use. Happy coding!
.NET Web Services: Architecture and Implementation
Real World XML Web Services: For VB and VB .Net Developers
|
http://msdn.microsoft.com/en-us/library/ms995793.aspx
|
crawl-002
|
refinedweb
| 4,658
| 56.15
|
This Comprehensive Java Graph Tutorial Explains Graph Data Structure in detail. It includes how to Create, Implement, Represent & Traverse Graphs in Java:
A graph data structure mainly represents a network connecting various points. These points are termed as vertices and the links connecting these vertices are called ‘Edges’. So a graph g is defined as a set of vertices V and edges E that connect these vertices.
Graphs are mostly used to represent various networks like computer networks, social networks, etc. They can also be used to represent various dependencies in software or architectures. These dependency graphs are very useful in analyzing the software and also at times debugging it.
What You Will Learn:
- Java Graph Data Structure
- How To Create A Graph?
- Graph Implementation In Java
- Java Graph Library
- Conclusion
Java Graph Data Structure
Given below is a graph having five vertices {A,B,C,D,E} and edges given by {{AB},{AC},{AD},{BD},{CE},{ED}}. As the edges do not show any directions, this graph is known as ‘undirected graph’.
Apart from the undirected graph shown above, there are several variants of the graph in Java.
Let’s discuss these variants in detail.
Different Variants Of Graph
The following are some of the variants of the graph.
#1) Directed Graph
A directed graph or digraph is a graph data structure in which the edges have a specific direction. They originate from one vertex and culminate into another vertex.
The following diagram shows the example of directed graph.
In the above diagram, there is an edge from vertex A to vertex B. But note that, unlike in an undirected graph, A to B is not the same as B to A unless there is an edge specified from B to A.
A directed graph is cyclic if there is at least one path whose first and last vertices are the same. In the above diagram, the path A->B->C->D->E->A forms a directed cycle, making this a cyclic graph.
Conversely, a directed acyclic graph is a graph in which there is no directed cycle i.e. there is no path that forms a cycle.
#2) Weighted Graph
In a weighted graph, a weight is associated with each edge of the graph. The weight normally indicates the distance between the two vertices. The following diagram shows a weighted graph. As no directions are shown, this is an undirected graph.
Note that a weighted graph can be directed or undirected.
How To Create A Graph?
Java does not provide a full-fledged implementation of the graph data structure. However, we can represent the graph programmatically using Collections in Java. We can also implement a graph using dynamic arrays like vectors.
Usually, we implement graphs in Java using HashMap collection. HashMap elements are in the form of key-value pairs. We can represent the graph adjacency list in a HashMap.
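As a minimal sketch of this idea (the class and method names here are made up for illustration, not part of any standard API), an adjacency list can be kept in a HashMap that maps each vertex to the list of its neighbours:

```java
import java.util.*;

public class HashMapGraphDemo {
    // Insert an undirected edge by adding each endpoint to the
    // other's adjacency list, creating the lists on first use.
    static void addEdge(Map<String, List<String>> adj, String u, String v) {
        adj.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
        adj.computeIfAbsent(v, k -> new ArrayList<>()).add(u);
    }

    public static void main(String[] args) {
        // Undirected graph stored as vertex -> list of neighbours
        Map<String, List<String>> adj = new HashMap<>();
        addEdge(adj, "A", "B");
        addEdge(adj, "A", "C");
        addEdge(adj, "B", "D");

        System.out.println(adj.get("A")); // prints [B, C]
    }
}
```

Because `computeIfAbsent` creates the neighbour list lazily, vertices do not have to be registered before edges are added.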
The most common way to create a graph is by using one of the representations of graphs, such as the adjacency matrix or the adjacency list. We will discuss these representations next and then implement the graph in Java using the adjacency list, for which we will use ArrayList.
Graph Representation In Java
Graph representation means the approach or technique using which graph data is stored in the computer’s memory.
We have two main representations of graphs as shown below.
Adjacency Matrix
The adjacency matrix is a sequential, matrix-based representation of a graph. It stores the mapping of the vertices and edges of the graph. In the adjacency matrix, the vertices of the graph represent the rows and columns. This means that if the graph has N vertices, the adjacency matrix has size NxN.
If V is the set of vertices of the graph, then the entry Mij in the adjacency matrix is 1 if there is an edge between vertices i and j.
To understand this concept clearly, let us prepare an adjacency matrix for an undirected graph.
As seen in the above diagram, for vertex A the intersections AB and AE are set to 1, as there is an edge from A to B and from A to E. Similarly, intersection BA is set to 1, as this is an undirected graph and AB = BA. In the same way, we set all the other intersections for which there is an edge to 1.
In case the graph is directed, the intersection Mij will be set to 1 only if there is a clear edge directed from Vi to Vj.
This is shown in the following illustration.
As we can see from the above diagram, there is an edge from A to B. So intersection AB is set to 1 but intersection BA is set to 0. This is because there is no edge directed from B to A.
Consider vertices E and D. We see that there are edges from E to D as well as D to E. Hence we have set both these intersections to 1 in adjacency Matrix.
Now we move on to weighted graphs. As we know, in a weighted graph an integer known as the weight is associated with each edge. We represent this weight in the adjacency matrix for each edge that exists: whenever there is an edge from one vertex to another, its weight is stored in the matrix instead of '1'.
This representation is shown below.
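The matrix layout described above can be built programmatically. The following sketch (the class name is hypothetical) constructs the adjacency matrix for the undirected example graph, with vertices A–E mapped to indices 0–4 and the edges {AB, AC, AD, BD, CE, ED} from the earlier figure:

```java
public class AdjacencyMatrixDemo {
    // Build an undirected adjacency matrix for n vertices:
    // matrix[i][j] == 1 means an edge exists between vertices i and j.
    static int[][] buildMatrix(int n, int[][] edges) {
        int[][] matrix = new int[n][n];
        for (int[] e : edges) {
            matrix[e[0]][e[1]] = 1;
            matrix[e[1]][e[0]] = 1; // mirror entry: AB == BA in an undirected graph
        }
        return matrix;
    }

    public static void main(String[] args) {
        // vertices A..E mapped to 0..4; edges AB, AC, AD, BD, CE, ED
        int[][] m = buildMatrix(5,
                new int[][]{ {0,1}, {0,2}, {0,3}, {1,3}, {2,4}, {4,3} });
        System.out.println(m[0][1]); // edge A-B present, prints 1
        System.out.println(m[1][2]); // no edge B-C, prints 0
    }
}
```

For a weighted graph, the `1` written into the matrix would simply be replaced by the edge's weight.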
Adjacency List
Instead of representing a graph as an adjacency matrix which is sequential in nature, we can also use linked representation. This linked representation is known as the adjacency list. An adjacency list is nothing but a linked list and each node in the list represents a vertex.
The presence of an edge between two vertices is denoted by a pointer from the first vertex to the second. This adjacency list is maintained for each vertex in the graph.
When we have traversed all the adjacent nodes for a particular node, we store NULL in the next pointer field of the last node of the adjacency list.
Now we will use the above graphs that we used to represent the adjacency matrix to demonstrate the adjacency list.
The above figure shows the adjacency list for the undirected graph. We see that each vertex or node has its adjacency list.
In the case of an undirected graph, the sum of the lengths of the adjacency lists is twice the number of edges. In the above graph, the total number of edges is 6, and the sum of the lengths of all the adjacency lists is 12.
Now let’s prepare an adjacency list for the directed graph.
As seen from the above figure, in the directed graph the total length of the adjacency lists of the graph is equal to the number of edges in the graph. In the above graph, there are 9 edges and sum of the lengths of adjacency lists for this graph = 9.
Now let us consider the following weighted directed graph. Note that each edge of the weighted graph has a weight associated with it. So when we represent this graph with the adjacency list, we have to add a new field to each list node that will denote the weight of the edge.
The adjacency list for the weighted graph is shown below.
The above diagram shows the weighted graph and its adjacency list. Note that there is a new space in the adjacency list that denotes the weight of each node.
Graph Implementation In Java
The following program shows the implementation of a graph in Java. Here we have used the adjacency list to represent the graph.
import java.util.*;

// class to store edges of the weighted graph
class Edge {
    int src, dest, weight;

    Edge(int src, int dest, int weight) {
        this.src = src;
        this.dest = dest;
        this.weight = weight;
    }
}

// Graph class
class Graph {
    // node of adjacency list
    static class Node {
        int value, weight;

        Node(int value, int weight) {
            this.value = value;
            this.weight = weight;
        }
    }

    // define adjacency list
    List<List<Node>> adj_list = new ArrayList<>();

    // Graph constructor
    public Graph(List<Edge> edges) {
        // allocate one adjacency list per vertex; the vertex count is
        // derived from the highest vertex index appearing in the edges
        int vertexCount = 0;
        for (Edge e : edges)
            vertexCount = Math.max(vertexCount, Math.max(e.src, e.dest) + 1);
        for (int i = 0; i < vertexCount; i++)
            adj_list.add(new ArrayList<>());

        // add edges to the graph
        for (Edge e : edges) {
            // allocate a new node in the adjacency list from src to dest
            adj_list.get(e.src).add(new Node(e.dest, e.weight));
        }
    }

    // print the adjacency list for the graph
    public static void printGraph(Graph graph) {
        int src_vertex = 0;
        int list_size = graph.adj_list.size();
        System.out.println("The contents of the graph:");
        while (src_vertex < list_size) {
            // traverse the adjacency list and print the edges
            for (Node edge : graph.adj_list.get(src_vertex)) {
                System.out.print("Vertex:" + src_vertex + " ==> " + edge.value
                        + " (" + edge.weight + ")\t");
            }
            System.out.println();
            src_vertex++;
        }
    }
}

class Main {
    public static void main(String[] args) {
        // define the edges of the graph
        List<Edge> edges = Arrays.asList(
                new Edge(0, 1, 2), new Edge(0, 2, 4),
                new Edge(1, 2, 4), new Edge(2, 0, 5),
                new Edge(2, 1, 4), new Edge(3, 2, 3),
                new Edge(4, 5, 1), new Edge(5, 4, 3));

        // call the Graph class constructor to construct a graph
        Graph graph = new Graph(edges);

        // print the graph as an adjacency list
        Graph.printGraph(graph);
    }
}
Output:
Graph Traversal Java
To perform any meaningful action, like searching for the presence of some data, we need to traverse the graph such that each vertex and each edge of the graph is visited at least once. This is done using graph traversal algorithms, which are nothing but sets of instructions that help us visit every node of the graph.
There are two standard algorithms for traversing a graph in Java.
- Depth-first traversal
- Breadth-first traversal
Depth-first Traversal
Depth-first search (DFS) is a technique that is used to traverse a tree or a graph. DFS technique starts with a root node and then traverses the adjacent nodes of the root node by going deeper into the graph. In the DFS technique, the nodes are traversed depth-wise until there are no more children to explore.
Once we reach the leaf node (no more child nodes), the DFS backtracks and starts with other nodes and carries out traversal in a similar manner. DFS technique uses a stack data structure to store the nodes that are being traversed.
Following is the algorithm for the DFS technique.
Algorithm
Step 1: Start with the root node and insert it into the stack
Step 2: Pop the item from the stack and insert into the ‘visited’ list
Step 3: For the node just marked as 'visited', push its adjacent nodes that have not yet been visited onto the stack.
Step 4: Repeat steps 2 and 3 until the stack is empty.
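The four steps above can be sketched directly with an explicit stack. This is an illustrative implementation (the class and method names are made up), distinct from the recursive version the tutorial gives later:

```java
import java.util.*;

public class IterativeDFSDemo {
    // Iterative DFS following the steps above: push the start node,
    // then repeatedly pop, mark visited, and push unvisited neighbours
    // until the stack is empty.
    static List<Integer> dfs(List<List<Integer>> adj, int start) {
        List<Integer> visited = new ArrayList<>();
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (seen[node]) continue; // already reached via another path
            seen[node] = true;
            visited.add(node);
            for (int next : adj.get(node))
                if (!seen[next]) stack.push(next);
        }
        return visited;
    }

    public static void main(String[] args) {
        // small 4-vertex undirected example: edges 0-1, 0-2, 1-3
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        int[][] edges = { {0,1}, {0,2}, {1,3} };
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(dfs(adj, 0)); // prints [0, 2, 1, 3]
    }
}
```

Note that the visit order depends on the order in which neighbours are pushed; the last neighbour pushed is explored first.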
Illustration Of DFS Technique
Now we will illustrate the DFS technique using a proper example of a graph.
Given below is an example graph. We maintain stack to store explored nodes and a list to store visited nodes.
We start with A, mark it as visited, and add it to the visited list. Then we consider all the adjacent nodes of A and push them onto the stack as shown below.
Next, we pop a node from the stack i.e. B and mark it as visited. We then add it to the ‘visited’ list. This is represented below.
Now we consider the adjacent nodes of B, which are A and C. Of these, A is already visited, so we ignore it. Next, we pop C from the stack and mark it as visited. The adjacent node of C, i.e. E, is added to the stack.
Next, we pop the next node E from the stack and mark it as visited. Node E’s adjacent node is C that is already visited. So we ignore it.
Now only node D remains in the stack. So we mark it as visited. Its adjacent node is A which is already visited. So we do not add it to the stack.
At this point the stack is empty. This means we have completed the depth-first traversal for the given graph.
The visited list gives the final sequence of traversal using the depth-first technique. The final DFS sequence for the above graph is A->B->C->E->D.
DFS Implementation
import java.io.*;
import java.util.*;

// DFS technique for an undirected graph
class Graph {
    private int Vertices; // No. of vertices

    // adjacency list declaration
    private LinkedList<Integer> adj_list[];

    // graph constructor: initialize adjacency lists as per the number of vertices
    Graph(int v) {
        Vertices = v;
        adj_list = new LinkedList[v];
        for (int i = 0; i < v; ++i)
            adj_list[i] = new LinkedList();
    }

    // add an edge to the graph
    void addEdge(int v, int w) {
        adj_list[v].add(w); // add w to v's list
    }

    // helper function for the DFS technique
    void DFS_helper(int v, boolean visited[]) {
        // current node is visited
        visited[v] = true;
        System.out.print(v + " ");

        // process all adjacent vertices
        Iterator<Integer> i = adj_list[v].listIterator();
        while (i.hasNext()) {
            int n = i.next();
            if (!visited[n])
                DFS_helper(n, visited);
        }
    }

    void DFS(int v) {
        // initially none of the vertices are visited
        boolean visited[] = new boolean[Vertices];

        // call the recursive DFS_helper function for the DFS technique
        DFS_helper(v, visited);
    }
}

class Main {
    public static void main(String args[]) {
        // create a graph object and add edges to it
        Graph g = new Graph(5);
        g.addEdge(0, 1);
        g.addEdge(0, 2);
        g.addEdge(0, 3);
        g.addEdge(1, 2);
        g.addEdge(2, 4);

        // print the DFS traversal sequence
        System.out.println("Depth First Traversal for given graph "
                + "(with 0 as starting vertex)");
        g.DFS(0);
    }
}
Output:
Applications Of DFS
#1) Detect a cycle in a graph: DFS can detect a cycle in a graph; a cycle exists whenever the traversal encounters a back edge, i.e. an edge leading to an already-visited ancestor.
#2) Pathfinding: As we have already seen in the DFS illustration, given any two vertices we can find the path between these two vertices.
#3) Spanning tree: If we run the DFS technique on an unweighted connected graph, the resulting DFS tree is a spanning tree of the graph. (Note that shortest paths in an unweighted graph are found with BFS rather than DFS.)
#4) Topological sorting: Topological sorting is used when we have to schedule the jobs. We have dependencies among various jobs. We can also use topological sorting for resolving dependencies among linkers, instruction schedulers, data serialization, etc.
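The job-scheduling use case above can be sketched with a DFS-based topological sort. The job numbers and dependency edges here are made up for illustration; an edge u -> v means job u must run before job v:

```java
import java.util.*;

public class TopoSortDemo {
    // DFS-based topological sort: after exploring all of a node's
    // outgoing edges, prepend the node to the ordering, so every node
    // ends up before all the nodes that depend on it.
    static void visit(int node, List<List<Integer>> adj,
                      boolean[] seen, Deque<Integer> order) {
        seen[node] = true;
        for (int next : adj.get(node))
            if (!seen[next]) visit(next, adj, seen, order);
        order.push(node); // finished node goes in front of its dependents
    }

    static List<Integer> topoSort(List<List<Integer>> adj) {
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> order = new ArrayDeque<>();
        for (int v = 0; v < adj.size(); v++)
            if (!seen[v]) visit(v, adj, seen, order);
        return new ArrayList<>(order);
    }

    public static void main(String[] args) {
        // hypothetical job dependencies: 0 -> 1 -> 3 and 0 -> 2 -> 3
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(1); adj.get(0).add(2);
        adj.get(1).add(3); adj.get(2).add(3);
        System.out.println(topoSort(adj)); // prints [0, 2, 1, 3]
    }
}
```

This sketch assumes the dependency graph is acyclic; a full scheduler would detect cycles and report them as unsatisfiable dependencies.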
Breadth-first Traversal
The breadth-first search (BFS) technique uses a queue to store the nodes of the graph. Unlike the DFS technique, in BFS we traverse the graph breadth-wise, i.e. level by level. Only when we have explored all the vertices at one level do we proceed to the next level.
Given below is an algorithm for the breadth-first traversal technique.
Algorithm
Let’s see the algorithm for the BFS technique.
Given a graph G for which we need to perform the BFS technique.
- Step 1: Begin with the root node and insert it into the queue.
- Step 2: Repeat Steps 3 and 4 until the queue is empty.
- Step 3: Remove the node at the front of the queue and add it to the Visited list.
- Step 4: Add all the not-yet-visited adjacent nodes of the removed node to the queue. [END OF LOOP]
- Step 5: EXIT
Illustration Of BFS
Let us illustrate the BFS technique using an example graph shown below. Note that we have maintained a list named ‘Visited’ and a queue. We use the same graph that we used in the DFS example for clarity purposes.
First, we start with root i.e. node A and add it to the visited list. All the adjacent nodes of the node A i.e. B, C, and D are added to the queue.
Next, we remove node B from the queue, add it to the Visited list, and mark it as visited. We then explore the adjacent nodes of B (C is already in the queue). Its other adjacent node, A, is already visited, so we ignore it.
Next, we remove node C from the queue and mark it as visited. We add C to the visited list and its adjacent node E is added to the queue.
Next, we delete D from the queue and mark it as visited. Node D’s adjacent node A is already visited, so we ignore it.
So now only node E is in the queue. We mark it as visited and add it to the visited list. The adjacent node of E is C which is already visited. So ignore it.
At this point, the queue is empty and the visited list has the sequence we obtained as a result of BFS traversal. The sequence is, A->B->C->D->E.
BFS Implementation
The following Java program shows the implementation of the BFS technique.
import java.io.*;
import java.util.*;

// undirected graph represented using an adjacency list
class Graph {
    private int Vertices; // No. of vertices
    private LinkedList<Integer> adj_list[]; // adjacency lists

    // graph constructor: the number of vertices in the graph is passed
    Graph(int v) {
        Vertices = v;
        adj_list = new LinkedList[v];
        for (int i = 0; i < v; ++i) // create adjacency lists
            adj_list[i] = new LinkedList();
    }

    // add an edge to the graph
    void addEdge(int v, int w) {
        adj_list[v].add(w);
    }

    // BFS traversal from root_node
    void BFS(int root_node) {
        // initially no vertices are visited
        boolean visited[] = new boolean[Vertices];

        // BFS queue
        LinkedList<Integer> queue = new LinkedList<Integer>();

        // mark the current node as visited and insert it into the queue
        visited[root_node] = true;
        queue.add(root_node);

        while (queue.size() != 0) {
            // dequeue an entry from the queue and process it
            root_node = queue.poll();
            System.out.print(root_node + " ");

            // get all adjacent nodes of the current node and process them
            Iterator<Integer> i = adj_list[root_node].listIterator();
            while (i.hasNext()) {
                int n = i.next();
                if (!visited[n]) {
                    visited[n] = true;
                    queue.add(n);
                }
            }
        }
    }
}

class Main {
    public static void main(String args[]) {
        // create a graph with 5 vertices
        Graph g = new Graph(5);

        // add edges to the graph
        g.addEdge(0, 1);
        g.addEdge(0, 2);
        g.addEdge(0, 3);
        g.addEdge(1, 2);
        g.addEdge(2, 4);

        // print the BFS sequence
        System.out.println("Breadth-first traversal of graph with 0 as starting vertex:");
        g.BFS(0);
    }
}
Output:
Applications Of BFS Traversal
#1) Garbage collection: One of the algorithms used for copying garbage collection is Cheney's algorithm, which uses the breadth-first traversal technique.
#2) Broadcasting in networks: Broadcasting of packets from one point to another in a network is done using the BFS technique.
#3) GPS navigation: We can use the BFS technique to find adjacent nodes while navigating using GPS.
#4) Social networking websites: BFS technique is also used in social networking websites to find the network of people surrounding a particular person.
#5) Shortest path and minimum spanning tree in un-weighted graph: In the unweighted graph, the BFS technique can be used to find a minimum spanning tree and the shortest path between the nodes.
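The shortest-path property can be demonstrated with a small sketch (the class and method names are illustrative). In an unweighted graph, BFS reaches vertices in order of their distance from the source, so the first time a vertex is dequeued its recorded distance is already minimal. Here we reuse the same 5-vertex graph as the BFS example above (edges 0-1, 0-2, 0-3, 1-2, 2-4, undirected):

```java
import java.util.*;

public class BFSShortestPathDemo {
    // Return the minimum number of edges from src to every vertex;
    // -1 marks vertices that are unreachable from src.
    static int[] distances(List<List<Integer>> adj, int src) {
        int[] dist = new int[adj.size()];
        Arrays.fill(dist, -1); // -1 means "not reached yet"
        Queue<Integer> queue = new ArrayDeque<>();
        dist[src] = 0;
        queue.add(src);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            for (int next : adj.get(node)) {
                if (dist[next] == -1) {      // first (and shortest) time reached
                    dist[next] = dist[node] + 1;
                    queue.add(next);
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // undirected edges 0-1, 0-2, 0-3, 1-2, 2-4
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = { {0,1}, {0,2}, {0,3}, {1,2}, {2,4} };
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(Arrays.toString(distances(adj, 0))); // prints [0, 1, 1, 1, 2]
    }
}
```

Recording the predecessor of each vertex alongside its distance would additionally recover the shortest path itself, not just its length.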
Java Graph Library
Java does not force programmers to always implement graphs from scratch. Many ready-made Java libraries can be used directly to work with graphs in a program. These libraries provide all the graph API functionality required to make full use of graphs and their various features.
Given below is a brief introduction to some of the graph libraries in Java.
#1) Google Guava: Google Guava provides a rich library that supports graphs and algorithms including simple graphs, networks, value graphs, etc.
#2) Apache Commons: Apache Commons is an Apache project that provides Graph data structure components and APIs that have algorithms that operate on this graph data structure. These components are reusable.
#3) JGraphT: JGraphT is one of the widely used Java graph libraries. It provides graph data structure functionality containing simple graph, directed graph, weighted graph, etc. as well as algorithms and APIs that work on the graph data structure.
#4) SourceForge JUNG: JUNG stands for “Java Universal Network/Graph” and is a Java framework. JUNG provides an extensible language for analysis, visualization, and modeling of the data that we want to be represented as a graph.
JUNG also provides various algorithms and routines for decomposition, clustering, optimization, etc.
Frequently Asked Questions
Q #1) What is a Graph in Java?
Answer: A graph data structure mainly stores connected data, for example, a network of people or a network of cities. A graph data structure typically consists of nodes or points called vertices. Each vertex is connected to another vertex using links called edges.
Q #2) What are the types of graphs?
Answer: In the context of the graph data structure, the main types of graphs are listed below.
- Undirected graph: the edges have no direction.
- Directed graph (digraph): each edge points from one vertex to another.
- Weighted graph: a weight, such as a distance or cost, is associated with each edge.
Apart from these main types, we also have variants like cyclic and acyclic graphs, connected and disconnected graphs, etc.
Q #3) What is a connected graph?
Answer: A connected graph is a graph in which every vertex is connected to another vertex. Hence in the connected graph, we can get to every vertex from every other vertex.
Q #4) What are the applications of the graph?
Answer: Graphs are used in a variety of applications. The graph can be used to represent a complex network. Graphs are also used in social networking applications to denote the network of people as well as for applications like finding adjacent people or connections.
Graphs are used to denote the flow of computation in computer science.
Q #5) How do you store a graph?
Answer: There are three ways to store a graph in memory:
#1) We can store Nodes or vertices as objects and edges as pointers.
#2) We can also store a graph as an adjacency matrix whose numbers of rows and columns equal the number of vertices. The intersection of each row and column denotes the presence or absence of an edge. In a non-weighted graph, the presence of an edge is denoted by 1, while in a weighted graph it is replaced by the weight of the edge.
#3) The last approach to storing a graph is by using an adjacency list of edges between graph vertices or nodes. Each node or vertex has its adjacency list.
Conclusion
In this tutorial, we have discussed graphs in Java in detail. We explored the various types of graphs, graph implementation, and traversal techniques. Graphs can be put to use in finding the shortest path between nodes.
In our upcoming tutorials, we will continue to explore graphs by discussing a few ways of finding the shortest path.
Source: https://www.softwaretestinghelp.com/java-graph-tutorial/
Hello everyone.
I'm trying to make a planet out of a normalized cube. I've successfully normalized the cube into a sphere by having 6 different meshes, one for each side of the cube, by normalizing the vertices.
From my point of view, using this method, and considering the (approx) 64000 vertices limit for each mesh, the planet becomes rather small.
That being said, I am attempting to make one side of the cube out of 4 separate meshes. I successfully created 4 different meshes and centered them at the (0, 0, 0) coordinates. However, when I normalize these meshes, the result is wrong because the 4 meshes are normalized separately. I'm adding some images so you can see what I mean.
Thanks a lot in advance ;)
P.S.: I'm only trying to normalize the top face of the cube.
What do you mean by "normalize"? In Unity lingo, normalizing a vector means scaling it to a length of 1.
As I understand it, one can obtain a sphere by normalizing every vertex of the faces of a cube and then multiplying each normalized vertex by the desired radius of the sphere. That is what I mean by normalizing.
Answer by Bunny83
·
Oct 07, 2015 at 05:13 PM
Well, before you normalize your vertex position you have to transform them into a common space. So for example if you have an empty gameobject "center" in the center of your sphere, you have to do this for each patch:
// C#
Vector3 pos = center.InverseTransformPoint(patch.TransformPoint(vertex));
pos = pos.normalized * radius;
vertex = patch.InverseTransformPoint(center.TransformPoint(pos));
"patch" is the Transform component of each mesh and "vertex" is one local space vertex of a mesh.
Edit: Here's a simple script that can create an arbitrarily large planet mesh (as long as your hardware can handle it). As mentioned below, it creates a child object for each mesh. Those objects' origins are all at the cube/sphere center. We just calculate the vertices in each patch relative to its target position inside our cube.
using UnityEngine;
using System.Collections;
public struct PatchConfig
{
public string name;
public Vector3 uAxis;
public Vector3 vAxis;
public Vector3 height;
public PatchConfig(string aName, Vector3 aUAxis, Vector3 aVAxis)
{
name = aName;
uAxis = aUAxis;
vAxis = aVAxis;
height = Vector3.Cross(vAxis, uAxis);
}
}
public class PlanetMesh : MonoBehaviour
{
private static PatchConfig[] patches = new PatchConfig[]
{
new PatchConfig("top", Vector3.right, Vector3.forward),
new PatchConfig("bottom", Vector3.left, Vector3.forward),
new PatchConfig("left", Vector3.up, Vector3.forward),
new PatchConfig("right", Vector3.down, Vector3.forward),
new PatchConfig("front", Vector3.right, Vector3.down),
new PatchConfig("back", Vector3.right, Vector3.up)
};
public int uPatchCount = 2;
public int vPatchCount = 2;
public int xVertCount = 250;
public int yVertCount = 250;
public float radius = 5f;
public Material patchMaterial;
void Start ()
{
GeneratePatches();
}
void GeneratePatch(PatchConfig aConf, int u, int v)
{
GameObject patch = new GameObject("Patch_" + aConf.name + "_" + u + "_" + v);
MeshFilter mf = patch.AddComponent<MeshFilter>();
MeshRenderer rend = patch.AddComponent<MeshRenderer>();
rend.sharedMaterial = patchMaterial;
Mesh m = mf.sharedMesh = new Mesh();
patch.transform.parent = transform;
patch.transform.localEulerAngles = Vector3.zero;
patch.transform.localPosition = Vector3.zero;
Vector2 UVstep = new Vector2(1f / uPatchCount, 1f / vPatchCount);
Vector2 step = new Vector2(UVstep.x / (xVertCount-1), UVstep.y / (yVertCount-1));
Vector2 offset = new Vector3((-0.5f + u * UVstep.x), (-0.5f + v * UVstep.y));
Vector3[] vertices = new Vector3[xVertCount * yVertCount];
Vector3[] normals = new Vector3[vertices.Length];
Vector2[] uvs = new Vector2[vertices.Length];
for (int y = 0; y < yVertCount; y++)
{
for (int x = 0; x < xVertCount; x++)
{
int i = x + y * xVertCount;
Vector2 p = offset + new Vector2(x * step.x, y * step.y);
uvs[i] = p + Vector2.one*0.5f;
Vector3 vec = aConf.uAxis * p.x + aConf.vAxis * p.y + aConf.height*0.5f;
vec = vec.normalized;
normals[i] = vec;
vertices[i] = vec*radius;
}
}
int[] indices = new int[(xVertCount - 1) * (yVertCount - 1) * 4];
for (int y = 0; y < yVertCount-1; y++)
{
for (int x = 0; x < xVertCount-1; x++)
{
int i = (x + y * (xVertCount-1)) * 4;
indices[i] = x + y * xVertCount;
indices[i + 1] = x + (y + 1) * xVertCount;
indices[i + 2] = x + 1 + (y + 1) * xVertCount;
indices[i + 3] = x + 1 + y * xVertCount;
}
}
m.vertices = vertices;
m.normals = normals;
m.uv = uvs;
m.SetIndices(indices, MeshTopology.Quads, 0);
m.RecalculateBounds();
}
void GeneratePatches()
{
for(int i = 0; i < 6; i++)
{
for(int u = 0; u < uPatchCount; u++)
{
for(int v = 0; v < vPatchCount; v++)
{
GeneratePatch(patches[i], u, v);
}
}
}
}
}
Note: the orientation of each cube side is determined by the patches array. The UVs are calculated per cube side. So no matter if you "subdivide" each side further one side will always be in range [0,1]
@Bunny83, first of all, thanks for your answer.
I did as you suggested and I got what is shown in the image. It did the proper spherifying thingy now, but it seems to be offset from the center point: in the image, the sections should cover the entirety of the "top" side of the sphere, but they only cover a part of it.
Any thoughts? Thanks once again.
@Bunny83, I figured it out myself!
As I said in my previous post, the top part of the cube wasn't centered relative to the center of the sphere.
Thanks a lot for your help, it was really helpful!
P.S.: Here goes a screenshot of the real thing.
Maybe I should add that if you create the whole thing procedurally, you don't need to do this for each vertex, since you can simply offset the vertices when you create them. That means you would place the gameobjects for all 24 patches (4 per side * 6 sides) at the center of your cube/sphere. When you create the vertices for a mesh, you don't generate them around the local center, but at their respective positions in your final cube.
I've just written a simple script that can create a procedural planet mesh of arbitrary size. I'll add it to my answer above.
@Bunny83, sorry for only answering today, it has been a long end of the week.
Thanks a lot for your help, including giving me your code. I was attempting to do something like this but it got no where as good as yours. Thanks once subdivide a single face on a mesh?
Source: https://answers.unity.com/questions/1077976/normalizing-several-meshes-towards-a-sphere.html?childToView=1078038
Forum posts by user mdias:
Game engine or programming language?
mdias replied to FatDoggy's topic in the For Beginners Forum:
I think this is something that people either don't know about or just plain ignore when trying to dismiss engines like Unity/UE4. If something isn't supported, or you just don't like how it's done in their scripting language, there is nothing stopping you from creating a native-level plugin to do it. It's worth mentioning that going this route will make your life harder when targeting other platforms, defeating one of the main features of using "vanilla" Unity.
(Super) Smart Pointer
mdias replied to dpadam450's topic in General and Gameplay Programming:
I would go with an event approach. Maybe right now your only requirement is to check if the object exists (which weak_ptr would solve very easily, even though what Ravyne said stands), but in the future you may want your marine's target to look somewhere else even if the target still exists (maybe the enemy has applied a stealth "power up"). For maximum flexibility I would implement an event/observer pattern where your marine could register to the enemy's events and listen for a "destroyed" event, and possibly more events.

void Marine::SetTarget( const ITargettable* target )
{
    if( _currentTarget )
        _currentTarget->RemoveListener( this );
    _currentTarget = target;
    if( _currentTarget )
        _currentTarget->AddListener( this );
}

void Marine::OnTargettableEvent( const ITargettable* target, const TargettableEvent& event )
{
    if( event.type == TargettableEvent_Destroyed || event.type == TargettableEvent_StealthApplied )
    {
        SetTarget( nullptr );
    }
}
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming:
Those were just examples. Shaders are a better example of a resource that has common functionality (set resources, constant buffers) and that is further derived into specific types representing their stage (example: IDeviceChild -> IResource -> IShader -> IVertexShader).
You seem overly inflexible regarding OO principles, which I understand, but again, given the narrow scope of the library I think (and I believe many others do) sacrificing some principles in exchange for an easier/lighter API has more benefits than not doing so. I don't think we can say the D3D API is a bad API because it implements GetType() on its resources, or because it has an interface hierarchy.
Every problem you point to ends up only being relevant to the implementation details, which is why I'm being so "stubborn" with my interface design, which apparently isn't all that bad; we just can't find a way to properly implement it. I'll probably have to abandon the design for an apparent lack of tools in C++ to properly implement it. A simple way to implement a virtual method by creating an alias to another base method (effectively hinting the compiler to reuse the vtable pointer) would solve the extra indirection introduced in my previous example...
I know, I'm doing that right now actually. However, I'm not convinced it's a good way to go. The more I go with it, the more I think I should abandon that idea and have a reference-counting object as the base class, and just provide a helper template function to handle automatic AddRef/Release. Anyway, that's a minor problem I can analyse later.
[updated]: Make certain objects arrive in the scene (a lorry)
mdias replied to lucky6969b's topic in General and Gameplay Programming:
Wouldn't it just be easier to, at load time of the data you have, add/subtract a random number so that you already have "randomized" data and can act deterministically from there? If you want to repeat the process several times, keep a copy of the original data and, every time a new truck goes, recalculate (with some randomness) the next time a truck should go.
SDL Audio - Volume proportional
mdias replied to FGFS's topic in Engines and Middleware:
Your problem occurs because humans don't perceive sound linearly. Try:

float lin_volume = (float)(veN - 50) / 30.f;
alListenerf( AL_GAIN, powf( lin_volume, exp ) );

where for "exp" try e or some other value (> 0.0f) that sounds better to you.
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming:
Not right now, I'm still in the early stages of developing this wrapper. However, I can see myself adding reference counting to all objects through the base class, or adding an IResource::GetType method. Each addition of a method, or worse, change to its behaviour would be prone to add inconsistency if the implementation doesn't share code.
You're probably right. But I think there are valid reasons for D3D to be the way it is, such as it being more important to closely mirror the hardware/driver capabilities than to meticulously follow OO-correctness concepts in a relatively small-scope library. The truth is, however ugly or not their implementation is, to the user things are abstracted in a way that IMO makes sense. And this is what I'm after.
Yeah, that works but looks ugly. I've thought of a similar solution that would remove virtual inheritance and:

class Concrete : public IConcrete, public Base
{
public:
    int MyBaseMethod() { return Base::MyBaseMethod(); }
};

...which would have similarities to the PIMPL idiom suggested by Hodgman, requiring an additional level of indirection. And since the base interfaces are pure interfaces with no data, it should be OK, even if ugly. However, this really sounds like we're fighting the language limitations here... It feels really weird to have workarounds like these to solve an OO problem in a "specialized-in-OOP" language.
How do you handle resource-specific functionality, such as mapping the buffer and so on? This is exactly the reason why I'm after pure interfaces; otherwise I'd just typedef everything in the preprocessor and live a simple life with only a few virtual methods and single inheritance :)
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming
The idea behind a multibind-in-one-call entry point is to closely mirror functionality provided by both OpenGL and Direct3D. Your idea of calling MakeActive on several textures and transparently batching it to the backend API, while attractive and certainly more correct in "human reasoning" terms, will introduce new complexity (such as tracking state to know when to actually bind the textures on the backend API, etc.) which is a step away from the very thin wrapper I'm looking for. Plus, if both OpenGL and D3D already provide entry points with that same exact functionality, why should I create a higher level of management? We can continue to discuss this (happy to do so if it will teach me something), but to be honest I came here looking to solve the interface hierarchy + implementation hierarchy problem. First of all, let me tell you I completely agree with you. However, I don't think it matters much in this specific case. Users are unlikely to expect an OpenGL resource to successfully bind to a Direct3D one. My oglGraphicsContext implementation doesn't need to know the exact implementation type, it only needs to check for compatibility (example: to check if the final implementation implements GetGLHandle()). I'm not fixated on the idea of casting to an implementation pointer, and I actually hate that I see no other option than doing so in order to be able to do things like multi-bind. D3D does this also; they don't have an ID3DTexture2D::MakeActive method. Surely the device checks for compatibility with the object you throw at it, probably through QueryInterface or similar. I was confused myself when I first created this topic. Downcasting is what I mean, because a Resource IS-A GraphicsChild, and a Texture IS-A Resource and so on. If you're asking why I need the implementation hierarchy: well, I don't need it, I just want to share implementation code.
Say I have 10 types of device resources; doesn't make much sense to re-implement IResource methods manually in each of the derived types, no? For sure getting rid of the implementation hierarchy would solve all my problems, but if I have to do that (and there's no other, better way) I begin to wonder if C++ isn't missing something...
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming
I don't think we're understanding each other at all. Maybe the fact that English is not my native language is somehow impeding me from explaining my problem properly. Where exactly should I pass an array/container of ITexture pointers if you're telling me I should use ITexture::MakeActive()? And doesn't that mean again that this container could have ITexture pointers created from another context? I'm not really questioning the validity of LSP. I just don't see a way to fully comply with it in this particular problem. I must add constraints to what is accepted. We don't live in a perfect world; there are exceptions I want to deal with appropriately. Say I give it a texture that is resident in another GPU's memory; I want to return an error from that (instead of invalid rendering or a crash), even if in theory it should work; I could document that behaviour. The *::MakeActive approach would solve this though, but I would like to be able to MakeActive several textures in one call. Again, you misunderstood me. I mean the correctness of the internal state of the objects. I'm not talking about language correctness. In this specific case, by correctness I mean "the texture must belong to the same context you're trying to bind it to for rendering". Curious question: how would D3D behave if it were given, say, a shader from another device? Well, the OpenGL-or-not problem is not relevant at all to the question I'm asking. They still have interface inheritance to which they must provide implementations somewhere. When I said "how does D3D do it", I meant: do they duplicate code for each object that needs to provide an implementation to ID3D11Resource, or do they solve this problem some other way? They still must cast it to an implementation pointer somewhere... This makes sense and I had already thought of it. However, then ITexture::MakeActive() would be nothing but a bridge to the oglGraphicsContext's method.
Which I don't have a problem with, except if there's a better solution. Also, again this would mean a call to MakeActive() for each texture, which is also not the way D3D does it. Exactly. So do you agree (excluding personal preference) with me that oglGraphicsContext could accept an ITexture and down-cast it there? Maybe I should try to put my main question another way: how can I implement an interface hierarchy and reuse implementation code (without ugly "hacks" like macros) while avoiding multiple inheritance?
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming
This would certainly solve some problems, such as not needing to cast anymore. However, the way I see it, the Texture object would be modifying the owner's (oglGraphicsContext) state, which doesn't make much sense. I guess this is also the reason the D3D API goes this way instead of backwards. [Edit] It would also make it impossible to bind more than just one object in a single call. I disagree. The fact that SetTexture takes an ITexture doesn't automatically mean it will be OK to pass any ITexture. I can certainly write in the documentation that the ITexture must be created by the same IGraphicsContext and be in a specific state. I think I would like the API, but not the fact that I would be double-checking whether the objects are valid everywhere. I am trying to adopt concepts of future graphics APIs such as PSOs, which alleviate a lot of this by only checking at PSO creation time. However, some state just doesn't belong in a PSO... I am basing my class design on D3D11's, which seems to do things exactly like I'm trying to implement, but they somehow manage to not need virtual inheritance. I think I might be worrying too much about correctness and should think more about ease of use (adopt ::MakeActive() and ignore the fact that I'm modifying state that doesn't belong to this child object), but I'm afraid that by doing so I might be committing suicide in the long run... This still doesn't solve being unable to static_cast from an IResource to an ITexture, for example. How does D3D do it? The only way I see it working is to avoid shared implementations and only implement things in the final class, which can be a maintenance nightmare...
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming
Actually the current workaround I have found is to have a void* ITexture::GetPrivateDeviceData(), which doesn't feel very clean but works for now. I'm still searching for a better way.

oglTexture::oglTexture( oglGraphicsContext& ctx, const TextureDescription& desc )
{
    glGenTextures( 1, &_gl_texture ); // _gl_texture is a private/protected member variable
    ... // bind and create texture with content from "desc"
}

GLuint oglTexture::GetGLTexture() const // this is the method I wish oglGraphicsContext to see
{
    return _gl_texture;
}

bool oglGraphicsContext::SetTexture( unsigned slot, ITexture* texture )
{
    // check that texture is a texture created by this graphics context
    if( texture->GetDeviceCtx() != this )
        return false;

    auto ogl_texture = static_cast<oglTexture*>(texture); // this is the cast that's needed and won't work with virtual inheritance
    GLuint native_texture = ogl_texture->GetGLTexture();
    ... // use native_texture for whatever...
}

I understand, and I've been thinking myself that maybe I'm not choosing the best approach, but I can't really think of anything as "clean"/"elegant" as this, which is why I posted here. If I implement the ITexture::GetPrivateDeviceData() method as I mentioned above, I can do this, but I still feel this shouldn't really be a visible method. Plus, having virtual inheritance on the interfaces will bring more problems to the end client trying to cast, for example from IResource to ITexture... This could again be worked around by having an IDeviceChild::CastTo( Type ), but then it would have a performance cost and an ugly feel...
diamond problem with interfaces and down-casting
mdias replied to mdias's topic in General and Gameplay Programming
I think I probably didn't explain my problem well enough. I agree with what you're saying, however this is not the problem. The clients using this wrapper will only interact with the interfaces, without ever knowing about the implementation. This is the problem. Imagine this piece of code on the client side:

ITexture* myTexture = ...;
myAbstractedDevice->SetTexture( 0, myTexture );

And this is the implementation:

class ITexture : public IResource { ... };
class oglTexture : public ITexture, public oglResource { ... };

void oglDevice::SetTexture( unsigned slot, ITexture* texture )
{
    assert( texture );
    assert( texture->GetServer() == this->GetServer() );
    oglTexture* ogl_texture = static_cast<oglTexture*>(texture);
    ...
}

If oglTexture only inherited from ITexture, this would work perfectly, but then I'd have to re-implement the other interfaces (IResource, IDeviceChild, ...) with mostly duplicated code, which makes me think it's not the right way to solve this.
diamond problem with interfaces and down-casting
mdias posted a topic in General and Gameplay Programming
Hi, I'm facing a problem I can't seem to find a good answer for. I'm trying to build a wrapper for D3D11 and OpenGL. For this I have some interfaces similar to D3D11's. Let's assume these interfaces:

class IDeviceChild;
class IResource : public IDeviceChild;
class IBuffer : public IResource;

Now, what I wish to do is for each of those interfaces to have their own implementation class, like this:

class oglDeviceChild : public IDeviceChild;
class oglResource : public IResource, public oglDeviceChild;
class oglBuffer : public IBuffer, public oglResource;

Now this obviously won't work like this because of the diamond problem I have here, and the only way to solve it is to have virtual inheritance in the interface classes themselves. But that leads to another problem! If I have an oglResource, I can't static_cast it to an oglBuffer. It surely must be possible to solve this, since D3D does it (and doesn't use virtual inheritance in its interfaces). It also looks like virtual inheritance only marks the class being inherited as virtual, instead of that class plus its parents... The only way out I see right now is to avoid multiple inheritance and only inherit the interface, but that doesn't look like a proper solution to me... Can anybody shed some light? Thanks in advance.
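One way this kind of hierarchy is commonly restructured (a sketch under assumed, simplified interfaces, not the poster's actual classes) is to keep the interfaces free of virtual inheritance and express each implementation level as a class template ("mixin") parameterized on the most-derived interface. The implementation then forms a single-inheritance chain, so there is no diamond and static_cast down-casts keep working:

```cpp
#include <cassert>

// Simplified stand-ins for the interface hierarchy (illustrative only).
struct IDeviceChild { virtual int DeviceId() const = 0; virtual ~IDeviceChild() {} };
struct IResource : IDeviceChild { virtual int Size() const = 0; };
struct IBuffer : IResource { virtual void Bind() = 0; };

// Each implementation level is a mixin template over the final interface,
// so every concrete class has exactly ONE inheritance chain (no diamond).
template <class Interface>
struct oglDeviceChildImpl : Interface {
    int DeviceId() const override { return 42; } // shared IDeviceChild code
};

template <class Interface>
struct oglResourceImpl : oglDeviceChildImpl<Interface> {
    int Size() const override { return 128; }    // shared IResource code
};

struct oglBuffer : oglResourceImpl<IBuffer> {
    void Bind() override {}                      // buffer-specific code only
};

oglBuffer buf; // single IDeviceChild subobject; static_cast down-casts work
```

Here oglBuffer inherits oglResourceImpl&lt;IBuffer&gt;, which inherits oglDeviceChildImpl&lt;IBuffer&gt;, which inherits IBuffer; there is exactly one IDeviceChild subobject, so static_cast from IBuffer* down to oglBuffer* is legal, and the shared code in the mixins is written once.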
How to draw multiple objects to screen reusing draw commands?
mdias replied to SteveHatcher's topic in Graphics and GPU Programming
No. What you may be wasting is bandwidth. The concept to keep in mind is this: passing information to and from the graphics card is slow, therefore you should do it as few times as possible. You should also know that it only really matters when your game is running in interactive mode -> you should do slow stuff at Load Time instead of Run Time, because slow is only slow for real-time stuff; it's still fast enough to do in a Loading screen. Now, every time you tell your GPU to render something, that's information passing to and from the GPU*, so if you could manage to render your whole scene with just 1 draw call, it would be awesome, but you're probably going to need many more. However, if you group (as much as you can) static geometry in big buffers, instead of having all static objects in their own separate buffers, you will be able to render many more objects with just 1 draw call! Example: calling Draw() 1000 times, each time rendering just 1 polygon, is much slower than rendering 10000 polygons with just 1 Draw() call. This is called geometry batching, in case you want to research further. That's a tough question engine developers have to fight with every day! Indeed, dividing your geometry into 2 groups (static geometry and non-static geometry) is a good start! Basically you should find the way that allows you to render more stuff with fewer Draw calls and state changes. * Driver details apply here, but ignore these for now. P.S: Geometry Instancing also allows you to render the same (small?) vertex buffer multiple times with different properties per "object" with just 1 draw call. This is useful to render stuff like a bunch of trees or rocks without actually duplicating the objects in the vertex buffer (wasting memory).
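The batching idea can be shown with a toy, GPU-free sketch (Renderer and Draw below are stand-ins I made up, not a real graphics API): merging many small static meshes into one buffer at load time turns N draw calls into 1.

```cpp
#include <cassert>
#include <vector>

// Toy stand-ins for a vertex buffer and a renderer that counts draw calls.
struct Vertex { float x, y, z; };

struct Renderer {
    int drawCalls = 0;
    void Draw(const std::vector<Vertex>&) { ++drawCalls; } // stands in for a real Draw()
};

// Load-time step: concatenate many small static meshes into one big buffer.
std::vector<Vertex> Batch(const std::vector<std::vector<Vertex>>& meshes) {
    std::vector<Vertex> big;
    for (const auto& m : meshes) big.insert(big.end(), m.begin(), m.end());
    return big;
}

int drawIndividually(const std::vector<std::vector<Vertex>>& meshes) {
    Renderer r;
    for (const auto& m : meshes) r.Draw(m); // one call per object: slow path
    return r.drawCalls;
}

int drawBatched(const std::vector<std::vector<Vertex>>& meshes) {
    Renderer r;
    r.Draw(Batch(meshes)); // one call for all static geometry
    return r.drawCalls;
}
```

The memory layout is the same either way; what changes is the number of API round trips, which is exactly the cost the post describes.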
- I just noticed I posted an example passing the struct by value instead of by reference. I don't have the code at hand, but as far as I remember, to pass by reference you just pass a normal C++ pointer as an argument to the method. Like I said, I never worked with Lua, so I'm not familiar with its strengths or weaknesses. However, it looks weird to me that different parts of the engine are scripted with different languages. I don't know if it applies to your project, but I'd abstract away all interactions with scripting so you could script any part of the engine with any of multiple implementations (including Mono, Lua and others) with relative ease. Maybe this way you could stick to just one of the scripting engines like you want and implement another at a later stage if the need arises. Regarding "Lua vs Mono", I believe you'll have to tell more about the kind of project you're working on.
- You mean pointers to structs or to objects? Here's an example of me passing a pointer to a C++ on-stack struct: bool RayCastResults::call_internalAddResultOrdered( RayCastResult& result ) { MonoObject* excep = nullptr; MonoObject* resObject = mono_value_box( _domain, _classResult, &result ); _method_internalAddResultOrdered( _obj, resObject, &excep ); if( excep ) { mono_print_unhandled_exception( excep ); return false; } return true; } "_classResult" is: "_classResult = mono_class_from_name_case( img, "Engine.Physics", "RayCastResult" );" Which on C# side is: using System; using System.Collections.Generic; namespace Engine { namespace Physics { public class RayCastResults { private void internal_AddResultOrdered( RayCastResult result ) { ... } } public struct RayCastResult { public override string ToString() { return string.Format("[RayCastResult] distance: {0}", distance); } public float distance; public CollisionObject body; public Shape shape; public Vector3 normalWorld; public Vector3 pointWorld; } } } If passing a pointer to an object, you just need to pass the MonoObject* pointer. Sometimes it's a good idea to have internal methods on C# side where you do some processing on the C++ passed data before calling the real final method.
In this tutorial, I will show you how an accelerometer/gyroscope works with an Arduino UNO. In particular, I will use the MPU-6050 sensor.
The MPU-6050 sensor contains, in a single integrated circuit, a 3-axis MEMS accelerometer and a 3-axis MEMS gyroscope. With the gyroscope we can measure a body's angular velocity (its rate of rotation) around each of its axes, while with the accelerometer we can measure the acceleration of a body along one direction. It is very precise, as it has a 16-bit AD (analog-to-digital) converter for each channel; therefore, it captures the x, y and z channels at the same time. The sensor uses the standard I²C communication protocol, so it is easy to interface with the Arduino world.
The MPU-6050 sensor is not even expensive, perhaps it is the cheapest on the market, especially considering the fact that it combines an accelerometer and a gyroscope.
Here are some features of the MPU-6050 sensor:
- Chip with integrated 16-bit AD converter per channel
- Gyroscope measuring range: ±250, ±500, ±1000 and ±2000 °/s
- Accelerometer measuring range: ±2, ±4, ±8 and ±16 g
- Interface: I²C
- Power supply: from 3V to 5V
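Given the 16-bit converter and the selectable full-scale ranges above, converting a raw reading to physical units is a single division by the datasheet sensitivity. A sketch assuming the default ranges (±2 g and ±250 °/s); the function names are mine:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Divide the signed 16-bit raw value by the datasheet sensitivity.
// The constants below assume the DEFAULT full-scale ranges; the wider
// ranges use smaller LSB-per-unit factors.
float rawAccelToG(int16_t raw)  { return raw / 16384.0f; } // 16384 LSB per g at +/-2 g
float rawGyroToDps(int16_t raw) { return raw / 131.0f; }   // 131 LSB per deg/s at +/-250 deg/s
```

So a raw accelerometer reading of about 16384 on the Z axis corresponds to the 1 g of gravity you should see with the module lying flat.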
For my test, I purchased a GY-521 module so that the MPU-6050 sensor is ready for use. Here is the wiring diagram of the GY-521 module for those who want to build it themselves:
Now let’s move on to the actual tutorial by going to see how to use this module with an Arduino Uno.
List of material:
SCHEME
Connections for Arduino Uno:
GY-521 Arduino Uno
VCC 3.3V
GND GND
SCL A5
SDA A4
The scheme and the connections are addressed only to the Arduino Uno, but the tutorial is also valid for all the other Arduino boards. The only things that change in the connections are the two I2C pins, SDA and SCL (Ex..
#include <Wire.h>

// Minimal connection check: probe the MPU-6050 at its I2C address (0x68)
void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(0x68);
  if (Wire.endTransmission() == 0) {
    Serial.println("You have done everything right so far");
  } else {
    Serial.println("MPU-6050 not found: check the wiring");
  }
}

void loop() {
}
After loading the sketch on Arduino, open the Serial Monitor.
If this message comes out:
You have done everything right so far
Good! If you have done the check and everything went well, we just have to continue the tutorial and try the features of the MPU-6050 sensor, i.e. reading the 3 axes (X, Y, Z) of the gyroscope and of the accelerometer, and finally also the temperature measurement in degrees Celsius (°C).
First of all you have to load this sketch on the Arduino board:
// MPU-6050 Short Example Sketch
#include <Wire.h>
const int MPU=0x68; // I2C address of the MPU-6050
int16_t AcX,AcY,AcZ,Tmp,GyX,GyY,GyZ;
void setup(){
  Wire.begin();
  Wire.beginTransmission(MPU);
  Wire.write(0x6B); // PWR_MGMT_1 register
  Wire.write(0);    // wake up the MPU-6050
  Wire.endTransmission(true);
  Serial.begin(9600);
}
void loop(){
  Wire.beginTransmission(MPU);
  Wire.write(0x3B); // starting with register 0x3B (ACCEL_XOUT_H)
  Wire.endTransmission(false);
  Wire.requestFrom(MPU,14,true); // request a total of 14 registers
  AcX=Wire.read()<<8|Wire.read(); AcY=Wire.read()<<8|Wire.read(); AcZ=Wire.read()<<8|Wire.read();
  Tmp=Wire.read()<<8|Wire.read();
  GyX=Wire.read()<<8|Wire.read(); GyY=Wire.read()<<8|Wire.read(); GyZ=Wire.read()<<8|Wire.read();
  Serial.print("Accelerometer: ");
  Serial.print("X = "); Serial.print(AcX);
  Serial.print(" | Y = "); Serial.print(AcY);
  Serial.print(" | Z = "); Serial.println(AcZ);
  //equation for temperature in degrees C from datasheet
  Serial.print("Temperature: "); Serial.print(Tmp/340.00+36.53); Serial.println(" C ");
  Serial.print("Gyroscope: ");
  Serial.print("X = "); Serial.print(GyX);
  Serial.print(" | Y = "); Serial.print(GyY);
  Serial.print(" | Z = "); Serial.println(GyZ);
  Serial.println(" ");
  delay(333);
}
After opening the Serial Monitor you should see the values of the X, Y, Z axes of the accelerometer and gyroscope, and the temperature in °C, as in the following picture:
This last sketch is quite simple to understand, especially if you have a good programming base and a bit of experience with Arduino. The values of the X, Y, Z axes are stored in the variables AcX, AcY, AcZ (accelerometer), Tmp (temperature) and GyX, GyY, GyZ (gyroscope). In this way you can manage these values as you want, as I will show you soon.
PROJECT: MANAGEMENT OF TWO SERVOMOTORS USING A GY-521 MODULE
This project will be a practical example to make you understand how easy it is to interface the GY-521 module with Arduino. I will use only the values of the accelerometer axes and calculate the Pitch and Roll angles (as described in my tutorial) to rotate the two servomotors from 0° to 179° according to the position of the accelerometer. Before proceeding, I suggest you watch the following video to better understand what I'm talking about.
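The pitch/roll step can be sketched as follows (the formulas and axis conventions are assumptions, since they vary between tutorials, and the helper names are mine; on the Arduino you would feed in AcX, AcY, AcZ scaled to g):

```cpp
#include <cassert>
#include <cmath>

const float RAD2DEG = 57.29577951f;

// Roll: rotation about the X axis, from the gravity vector's Y/Z components.
float rollDeg(float ax, float ay, float az) {
    (void)ax; // unused in this convention
    return std::atan2(ay, az) * RAD2DEG;
}

// Pitch: rotation about the Y axis.
float pitchDeg(float ax, float ay, float az) {
    return std::atan2(-ax, std::sqrt(ay * ay + az * az)) * RAD2DEG;
}

// Map an angle in [-90, +90] degrees onto the 0-179 range a servo expects.
int toServo(float angleDeg) {
    int s = static_cast<int>((angleDeg + 90.0f) * 179.0f / 180.0f);
    if (s < 0) s = 0;
    if (s > 179) s = 179;
    return s;
}
```

With the board lying flat (gravity all on Z), both angles are 0 and both servos sit near the middle of their travel; tilting to a full 90° drives the corresponding servo to an end stop.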
You may also like to read these articles
How to Build Rain Detector Using Arduino
Build Distance Measuring System with Arduino UNO and Ultrasonic sensor HC-Sr04
    def __iter__(self):
        return (self.__getitem__(x) for x in itertools.count())

Now your class will be Iterable in the abc sense, and no longer relies on the sequence protocol.

Best, Neil

On Fri, Sep 20, 2013 at 10:41 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:

> Tim Delaney writes:
>
> > Also, pathological is probably not the best term to use. Instead,
> > substitute "deliberately breaks a well-established protocol".
>
> Note that in Neil's use case (the OP) it's not deliberate. His
> function receives an iterable, it naively iterates it and (if an
> iterator) consumes it, and then some other function loses. Silently.
>
> Also, as long as __getitem__(0) succeeds, this *is* the "sequence
> protocol". (A Sequence also has a __len__() method, but iterability
> doesn't depend on that.)
>
> I don't see why Python would deprecate this. For example, consider
> the sequence of factors of integers: [(1,2), (1,3), (1,2,2,4), (1,5),
> (1,2,3,6), ...]. Factorization being in general a fairly expensive
> operation, you might want to define this in terms of __getitem__() but
> __len__() is infinite. I admit this is a somewhat artificial example
> (I don't know of non-academic applications for this sequence, although
> factorization itself is very useful in applications like crypto).
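Both points can be combined into a small self-contained illustration (the class names below are mine, not from the thread): a class with only __getitem__ is iterable via the legacy sequence protocol but is not Iterable in the collections.abc sense, and adding the suggested __iter__ fixes that.

```python
import itertools
from collections.abc import Iterable


class Squares:
    """Hypothetical stand-in for a lazily computed sequence: iteration
    works via the legacy sequence protocol (__getitem__ with integer
    indices), but the class is NOT Iterable in the abc sense."""

    def __getitem__(self, n):
        return n * n  # conceptually infinite, so no IndexError


class IterableSquares(Squares):
    # The suggested fix: an explicit __iter__ built on __getitem__.
    def __iter__(self):
        return (self[n] for n in itertools.count())
```

Note the generator-expression form is only safe when __getitem__ is conceptually infinite: if __getitem__ raised IndexError, the exception would propagate out of the __iter__-based iterator rather than quietly ending iteration, unlike the sequence protocol.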
Web applications (or "Web apps") let you bundle a set of servlets, JSP pages, tag libraries, HTML documents, images, style sheets, and other Web content into a single collection that can be used on any server compatible with servlet version 2.2 or later (JSP 1.1 or later). When designed carefully, Web apps can be moved from server to server or placed at different locations on the same server, all without making any changes to any of the servlets, JSP pages, or HTML files in the application.
This capability lets you move complex applications around with a minimum of effort, streamlining application reuse. In addition, since each Web app has its own directory structure, sessions, ServletContext, and class loader, using a Web app simplifies even the initial development because it reduces the amount of coordination needed among various parts of your overall system.
4.1 Registering Web Applications
With servlets 2.2 and later (JSP 1.1 and later), Web applications are portable. Regardless of the server, you store files in the same directory structure and access them with URLs in identical formats. For example, Figure 4-1 summarizes the directory structure and URLs that would be used for a simple Web application called webapp1. This section will illustrate how to install and execute this simple Web application on different platforms.
Although Web applications themselves are completely portable, the registration process is server specific. For example, to move the webapp1 application from server to server, you don't have to modify anything inside any of the directories shown in Figure 4-1. However, the location in which the top-level directory (webapp1 in this case) is placed will vary from server to server. Similarly, you use a server-specific process to tell the system that URLs that begin with should apply to the Web application. In general, you will need to read your server's documentation to get details on the registration process. I'll present a few brief examples here, then give explicit details for Tomcat, JRun, and ServletExec in the following subsections.
My usual strategy is to build Web applications in my personal development environment and periodically copy them to various deployment directories for testing on different servers. I never place my development directory directly within a server's deployment directory; doing so makes it hard to deploy on multiple servers, hard to develop while a Web application is executing, and hard to organize the files. I recommend you avoid this approach as well; instead, use a separate development directory and deploy by means of one of the strategies outlined in Section 1.8 (Establish a Simplified Deployment Method). The simplest approach is to keep a shortcut (Windows) or symbolic link (Unix/Linux) to the deployment directories of various servers and simply copy the entire development directory whenever you want to deploy. For example, on Windows you can use the right mouse button to drag the development folder onto the shortcut, release the button, and select Copy.
To illustrate the registration process, consider the iPlanet Server 6.0, which provides you with two choices for creating Web applications. First, you can edit iPlanet's web-apps.xml file (not web.xml!) and insert a web-app element with attributes dir (the directory containing the Web app files) and uri (the URL prefix that designates the Web application). Second, you can create a Web Archive (WAR) file and then use the wdeploy command-line program to deploy it. WAR files are simply JAR files that contain a Web application directory and use .war instead of .jar for file extensions. See Section 4.3 for a discussion of creating and using WAR files.
Figure 4-1 Registering Web Applications
With the Resin server from Caucho, you use a web-app element within web.xml and supply app-dir (directory) and id (URL prefix) attributes. Resin even lets you use regular expressions in the id. So, for example, you can automatically give users their own Web apps that are accessed with URLs of the form. With the BEA WebLogic 6 Server, you have two choices. First, you can place a directory (see Section 4.2) containing a Web application into the config/domain/applications directory, and the server will automatically assign the Web application a URL prefix that matches the directory name. Second, you can create a WAR file (see Section 4.3) and use the Web Applications entry of the Administration Console to deploy it.
Registering a Web Application with Tomcat
With Tomcat 4, creating a Web application consists simply of creating the appropriate directory structure and restarting the server. For extra control over the process, you can modify install_dir/conf/server.xml (a Tomcat-specific file) to refer to the Web application. The following steps walk you through what is required to create a Web app that is accessed by means of URLs that start with. These examples are taken from Tomcat 4.0, but the process for Tomcat 3 is very similar.
Create a simple directory called webapp1. Since this is your personal development directory, it can be located at any place you find convenient. Once you have a webapp1 directory, place a simple JSP page called HelloWebApp.jsp (Listing 4.1) in it. Put a simple servlet called HelloWebApp.class (compiled from Listing 4.2) in the WEB-INF/classes subdirectory. Section 4.2 gives details on the directory structure of a Web application.
Finally, although Tomcat doesn't actually require it, it is a good idea to include a web.xml file in the WEB-INF directory. The web.xml file, called the deployment descriptor, is completely portable across servers. We'll see some uses for this deployment descriptor later in this chapter, and Chapter 5 (Controlling Web Application Behavior with web.xml) will discuss it in detail. For now, however, just copy the existing web.xml file from install_dir/webapps/ROOT/WEB-INF or use the version that is online under Chapter 4 of the source code archive at. In fact, for purposes of testing Web application deployment, you might want to start by simply downloading the entire webapp1 directory from.
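As a minimal illustration of what such a starting descriptor might contain (this is a sketch; the servlet 2.3 DTD is shown, so match the DOCTYPE to the version your server supports), an essentially empty web-app element is legal:

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC
  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <!-- servlet and servlet-mapping entries are added here
       as the application grows; see Chapter 5. -->
</web-app>
```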
Copy that directory to install_dir/webapps. For example, suppose that you are running Tomcat version 4.0, and it is installed in C:\jakarta-tomcat-4.0. You would then copy the webapp1 directory to the webapps directory, resulting in C:\jakarta-tomcat-4.0\webapps\ webapp1\HelloWebApp.jsp, C:\jakarta-tomcat-4.0\webapps\webapp1\ WEB-INF\classes\HelloWebApp.class, and C:\jakarta-tomcat-4.0\ webapps\webapp1\WEB-INF\web.xml. You could also wrap the directory inside a WAR file (Section 4.3) and simply drop the WAR file into C:\jakarta-tomcat-4.0\webapps.
Optional: add a Context entry to install_dir/conf/server.xml. If you want your Web application to have a URL prefix that exactly matches the directory name and you are satisfied with the default Tomcat settings for Web applications, you can omit this step. But, if you want a bit more control over the Web app registration process, you can supply a Context element in install_dir/conf/server.xml. If you do edit server.xml, be sure to make a backup copy first; a small syntax error in server.xml can completely prevent Tomcat from running.
The Context element has several possible attributes that are documented at. For instance, you can decide whether to use cookies or URL rewriting for session tracking, you can enable or disable servlet reloading (i.e., monitoring of classes for changes and reloading servlets whose class file changes on disk), and you can set debugging levels. However, for basic Web apps, you just need to deal with the two required attributes: path (the URL prefix) and docBase (the base installation directory of the Web application, relative to install_dir/webapps). This entry should look like the following snippet. See Listing 4.3 for more detail.
<Context path="/webapp1" docBase="webapp1" />
Note that you should not use /examples as the URL prefix; Tomcat already uses that prefix for a sample Web application.
Core Warning
Do not use /examples as the URL prefix of a Web application in Tomcat.
Restart the server. I keep a shortcut to install_dir/bin/startup.bat (install_dir/bin/startup.sh on Unix) and install_dir/bin/shutdown.bat (install_dir/bin/shutdown.sh on Unix) in my development directory. I recommend you do the same. Thus, restarting the server involves simply double-clicking the shutdown link and then double-clicking the startup link.
Access the JSP page and the servlet. The URL webapp1/HelloWebApp.jsp invokes the JSP page (Figure 4-2), and invokes the servlet (Figure 4-3). During development, you probably use localhost for the host name. These URLs assume that you have modified the Tomcat configuration file (install_dir/conf/server.xml) to use port 80 as recommended in Chapter 1 (Server Setup and Configuration). If you haven't made this change, use and.
Figure 4-2 Invoking a JSP page that is in a Web application.
Figure 4-3 Invoking a servlet that is in a Web application.
Listing 4.1 HelloWebApp.jsp
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD><TITLE>JSP: Hello Web App</TITLE></HEAD> <BODY BGCOLOR="#FDF5E6"> <H1>JSP: Hello Web App</H1> </BODY> </HTML>
Listing 4.2 HelloWebApp.java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWebApp extends HttpServlet {
  public void doGet(HttpServletRequest request,
                    HttpServletResponse response)
      throws ServletException, IOException {
    response.setContentType("text/html");
    PrintWriter out = response.getWriter();
    String docType =
      "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 Transitional//EN\">\n";
    String title = "Servlet: Hello Web App";
    out.println(docType +
                "<HTML>\n" +
                "<HEAD><TITLE>" + title + "</TITLE></HEAD>\n" +
                "<BODY BGCOLOR=\"#FDF5E6\">\n" +
                "<H1>" + title + "</H1>\n" +
                "</BODY></HTML>");
  }
}
Listing 4.3 Partial server.xml for Tomcat 4
<?xml version="1.0" encoding="ISO-8859-1"?> <Server> <!-- ... --> <!-- Having the URL prefix (path) match the actual directory (docBase) is a convenience, not a requirement. --> <Context path="/webapp1" docBase="webapp1" /> </Server>
Registering a Web Application with JRun
Registering a Web app with JRun 3.1 involves nine simple steps. The process is nearly identical to other versions of JRun.
Create the directory. Use the directory structure illustrated in Figure 4-1: a webapp1 directory containing HelloWebApp.jsp, WEB-INF/classes/HelloWebApp.class, and WEB-INF/web.xml.
Copy the entire webapp1 directory to install_dir/servers/default. The install_dir/servers/default directory is the standard location for Web applications in JRun. Again, I recommend that you simplify the process of copying the directory by using one of the methods described in Section 1.8 (Establish a Simplified Deployment Method). The easiest approach is to make a shortcut or symbolic link from your development directory to install_dir/servers/default and then simply copy the webapp1 directory onto the shortcut whenever you redeploy. You can also deploy using WAR files (Section 4.3).
Start the JRun Management Console. You can invoke the Console either by selecting JRun Management Console from the JRun menu (on Microsoft Windows, this is available by means of Start, Programs, JRun) or by opening. Either way, the JRun Admin Server has to be running first.
Click on JRun Default Server. This entry is in the left-hand pane, as shown in Figure 4-4.
Figure 4-4 JRun Web application setup screen.
Click on Web Applications. This item is at the bottom of the list that is created when you select the default server in the previous step. Again, see Figure 4-4.
Click on Create an Application. This entry is in the right-hand pane that is created when you select Web Applications from the previous step. If you deploy using WAR files (see Section 4.3) instead of an unpacked directory, choose Deploy an Application instead.
Specify the directory name and URL prefix. To tell the system that the files are in the directory webapp1, specify webapp1 for the Application Name entry. To designate a URL prefix of /webapp1, put /webapp1 in the Application URL text field. Note that you do not have to modify the Application Root Dir entry; that is filled in automatically when you enter the directory name. Press the Create button when done. See Figure 4-5.
Figure 4-5 JRun Web application creation screen. You only need to fill in the Application Name and Application Root Dir entries.
Restart the server. From the JRun Management Console, click on JRun Default Server and then press the Restart Server button. Assuming JRun is not running as a Windows NT or Windows 2000 service, you can also double-click the JRun Default Server icon in the taskbar and then press Restart. See Figure 4-6. This approach assumes that you have modified JRun to use port 80 as recommended in Chapter 1 (Server Setup and Configuration); if you haven't made this change, include JRun's port number in your URLs.
Figure 4-6 You must restart JRun for a newly created Web app to take effect.
Registering a Web Application with ServletExec
The process of registering Web applications is particularly simple with ServletExec 4. To make a Web app with a prefix webapp1, just create a directory called webapp1 with the structure described in the previous two subsections. Drop this directory into install_dir/webapps/default, restart the server, and access resources in the Web app with URLs that begin with the /webapp1 prefix. You can also drop WAR files (Section 4.3) in the same directory; the name of the WAR file (minus the .war extension) is automatically used as the URL prefix.
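The WAR-name-to-prefix rule is mechanical enough to sketch. The helper below is purely illustrative (it is not part of ServletExec); it just mirrors the behavior described above:

```python
import os

def war_url_prefix(war_filename):
    """URL prefix for a dropped-in WAR: the file name minus its .war extension."""
    base, ext = os.path.splitext(os.path.basename(war_filename))
    if ext.lower() != ".war":
        raise ValueError("not a WAR file: %r" % war_filename)
    return "/" + base

print(war_url_prefix("webapp1.war"))  # -> /webapp1
```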
For more control over the process or to add a Web application when the server is already running, perform the following steps. Note that, using this approach, you do not need to restart the server after registering the Web app.
Create a simple directory called webapp1. Use the structure summarized in Figure 4-1: place a simple JSP page called HelloWebApp.jsp (Listing 4.1) in the top-level directory and put a simple servlet called HelloWebApp.class (compiled from Listing 4.2) in the WEB-INF/classes subdirectory. Section 4.2 gives details on the directory structure of a Web app. Later in this chapter (and throughout Chapter 5), we'll see uses for the web.xml file that goes in the WEB-INF directory. For now, however, you can omit this file and let ServletExec create one automatically, or you can copy a simple example from the Web site. In fact, you can simply download the entire webapp1 directory from the Web site.
Optional: copy that directory to install_dir/webapps/default. ServletExec allows you to store your Web application directory at any place on the system, so it is possible to simply tell ServletExec where the existing webapp1 directory is located. However, I find it convenient to keep separate development and deployment copies of my Web applications. That way, I can develop continually but only deploy periodically. Since install_dir/webapps/default is the standard location for ServletExec Web applications, that's a good location for your deployment directories.
Go to the ServletExec Web app management interface. Open the ServletExec administration interface in a browser and select Manage under the Web Applications heading. During development, you will probably use localhost as the host name. See Figure 4-7. This assumes that you have modified ServletExec to use port 80 as recommended in Chapter 1 (Server Setup and Configuration); if you haven't made this change, include the port number in the URL.
Enter the Web app name, URL prefix, and directory location. From the previous user interface, select Add Web Application (see Figure 4-7). This results in an interface (Figure 4-8) with text fields for the Web application configuration information. It is traditional, but not required, to use the same name (e.g., webapp1) for the Web app name, the URL prefix, and the main directory that contains the Web application.
Figure 4-7 ServletExec interface for managing Web applications.
Figure 4-8 ServletExec interface for adding new Web applications.
Add the Web application. After entering the information from Item 4, select Add Web Application. See Figure 4-8. This assumes that you have modified ServletExec to use port 80 as recommended in Chapter 1 (Server Setup and Configuration); if you haven't made this change, include the port number in the URL.
Flatten Schema: Resolved some problems that resulted in invalid schema imports and invalid namespace prefixes in flattened schemas.
XProc Transformation: Fixed some UI issues and made small UI improvements in the XProc transformation scenario configuration.
XSLT editing: Searching for function declarations/references and highlighting function occurrences did not work correctly in some cases.
Author mode: Keyboard navigation did not work on a line that started with fixed content (via CSS) followed by a hidden element (via CSS).
WebHelp-Mobile: The CSS for the main page did not include the common CSS used in the topic pages.
Eclipse/XML Schema: When changing the type of an element in the Design mode, selecting a type from the popup list that was automatically triggered had no effect.
Eclipse 4.2/4.3: Fixed a memory leak that affected Oxygen editors on Eclipse 4.2 and 4.3.
I'm in the process of making a Python-based personal assistant/question answerer, which, in my wildest dreams, will rival the inevitable "Siri for Mac". However, as of now, it requires you to type text into an infinite loop of raw_input calls, and it processes the text each time. But if this is ever to be useful to, well, people, it can't be a .py in a terminal window. As of now, I'm thinking about making it a simple .app with Platypus. But, since there is no text input in the Window app style for Platypus, I would include no GUI, and just have it all be speech-based, for input and output. Output is simple: I can just replace all 'print' lines with 'speakString' from "macspeech". But input would be the tricky part. I can only find libraries to input speech on Windows (pyspeech is EXACTLY what I need, but it's Windows-based). Has anyone heard of something like pyspeech for Mac, or a cross-platform equivalent?
I would look at Sphinx-4 from CMU.
Sadly, it is written in Java. I think its recognition is better than what is built into my Mac. I am just learning Java/Python, so I am struggling with getting the two to talk to each other.
You can interface with the Mac speech engine using AppKit's NSSpeechRecognizer through the PyObjC bridge that ships with OS X:

from AppKit import NSSpeechRecognizer

recognizer = NSSpeechRecognizer.alloc().init()
recognizer.setCommands_(["hello", "goodbye"])  # the phrases to listen for
recognizer.startListening()
# Recognized phrases are delivered to a delegate's
# speechRecognizer:didRecognizeCommand: method.
A final method is to use Google voice search, but that requires shipping a voice snippet to the "cloud".
That approach is the most accurate, but it can take up to 10 seconds for a reply!
Hello,
This is my first post anywhere on CSS-Tricks.com. (1) I searched this CSS forum using a few different search phrases but did not find anything matching my question. (2) I read the instructions on how to ask a good question. One recommendation is to provide a link. I’ll refrain from that just yet as the site owner may be sensitive to having too many people looking at his code.
The CSS I copied from the view source window in Firefox looks as follows (I didn’t copy it all):
@charset “utf-8″}ul{list-style:none}blockquote,q{quotes:none}blockquote:before,blockquote:after,q:before,q:after{content:”;content:none}a{margin:0;padding:0;font-size:100%;vertical-align:baseline;background:transparent}
It looked horrible on the screen. And it looked horrible when I copied it into Notepad. Finally, I tried it in WordPad. I sometimes find that WordPad renders CSS files well even though Notepad does not, but in this case it still remained horribly formatted in WordPad.
Your helpful advice would be appreciated.
Thanks.
Senff,
Wow – that worked. If it didn’t, I might have made fun of your hat, but it did and I won’t.
Thank you.
The majority of that is just a CSS reset.
Just a little background on line endings. Most systems use a carriage return (classic Mac OS), a line feed (UNIX), or a CR+LF combination (Windows) as the characters that mark line endings. Line feeds are very common, but Notepad sadly only recognizes Windows line endings. The difference you see between Notepad and WordPad is probably that WordPad recognizes bare line feeds as line endings as well.
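In other words, a file full of bare line feeds can be made Notepad-friendly just by normalizing everything to CR+LF. A small Python sketch (a hypothetical helper, not something from this thread):

```python
def to_crlf(text):
    # First collapse CRLF and lone CR down to LF, then expand every LF
    # to CRLF, so mixed input never ends up with doubled carriage returns.
    return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", "\r\n")

print(repr(to_crlf("unix\nclassic-mac\rwindows\r\n")))
```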
Cerebral 2.
The status quo
In the JavaScript community today there are many frameworks to choose from, some more ambitious than others. Angular has now become an application platform, Ember has become a whole organization around developing applications with their framework. Vue also has a huge userbase. Even old grandpa Backbone still has many users. These frameworks are all great. They constantly innovate, support, and inspire their communities to build awesome products.
When React was released, it sparked vigorous innovation in state management. Views being functions that receive state and output UI is a beautiful concept which inspires musing on how we can best manage state without considering the UI at all. Solutions like Mobx and Redux are the most prominent out there in this space. They are not frameworks in themselves, but combined with React and other chosen tools we can put together our own framework.
So why would we choose one over the other? There is of course no simple answer to this. For some developers it can be as simple as choosing a framework based on its usage of JSX/Hyperscript vs traditional templates. Other reasons might be that “build our own framework” does not appeal at all; we want it all out of the box. Some developers favor writing as little code as possible, and do not see “magic” as a bad thing. Others want explicit code. The number of team members also affects the decision making. In the last year developer tools and type systems have also proven to be an important decision factor when choosing a framework.
Where does Cerebral fit in?
I saw a fantastic presentation by Preethi Kasireddy which compares Mobx with Redux. The reason I think the presentation is so good is because it nails the nature of application development and compares two very different approaches with their benefits and challenges. I will piggyback this presentation, adding Cerebral to the mix to see where it fits in. If you do not know Mobx or Redux I suggest you keep reading nevertheless as the code examples are quite simple and the concepts will matter to you no matter what framework you use. You might even find some approaches or tools you want to bring back to your existing framework and community. Please do, that is what open source is all about :-)
Learning curve
Every framework has new idioms, these being framework APIs, use of new JS APIs and/or patterns. This, in combination with the amount of “magic” introduced, affects the learning curve. Familiar code and magical code make a framework easier to learn. Angular introduced plenty of magic when it was released and many developers were dazzled by how easily they could make an input sync with some text on the page. But easy code does not mean code that is easy to scale and maintain; it is often in direct contradiction.
MOBX
Mobx has a familiar Object Oriented paradigm. This is what we know from older solutions like Backbone. Simply put, it means that we work with classes. We instantiate classes with state and methods for changing that state. This is a straight forward way of thinking about programming, but can get challenging when different class instances start to depend on each other and together need to express a complex flow of changes.
// Define state and state updates
class MobxState {
  @observable items = []

  addItem (item) {
    this.items.push(item)
  }
}

// Components
@observer
class Items extends Component {
  render() {
    return (
      <div>
        <ul>
          {this.props.store.items.map((item) => <li>{item}</li>)}
        </ul>
        <button onClick={() => this.props.store.addItem('foo')}>
          Add foo
        </button>
      </div>
    )
  }
}

// Pass state to components
const store = new MobxState();
render(<Items store={store} />, document.getElementById('mount'));
Mobx truly is magical in the way our components detect a need for render by tracking the access to observable properties. As a developer you normally do not have to think about how this works and as a result Mobx has a low learning curve.
REDUX
Redux has a functional approach. This means that we do not create class instances, we create reducers. A reducer basically holds an object representing state (much like a class instance), but it has no methods. Requests for change are passed into the reducers, and based on the type of change and its payload, the reducer typically uses a switch to return a brand new state object. Immutability is a strong concept in Redux which definitely has its benefits, especially in its simple render optimization, but also has its drawbacks when it comes to the learning curve.
// Define state and state updates
function ReduxState (state = Immutable.fromJS({items: []}), action) {
  switch (action.type) {
    case 'addItem':
      return state.update('items', (items) => items.push(action.item))
  }
  return state
}

// Components
const Items = connect(
  (state) => {
    return { items: state.get('items') }
  },
  (dispatch) => {
    return {
      onClick: (item) => {
        dispatch({type: 'addItem', item})
      }
    }
  }
)(
  function ItemsComponent ({items, onClick}) {
    return (
      <div>
        <ul>
          {items.map((item) => <li>{item}</li>)}
        </ul>
        <button onClick={() => onClick('foo')}>
          Add foo
        </button>
      </div>
    )
  }
)

// Pass state to components
const store = createStore(ReduxState)
render(
  <Provider store={store}>
    <Items />
  </Provider>,
  document.getElementById('root')
)
CEREBRAL
Cerebral is more functional than it is object oriented. Some of the internals and API calls are object oriented, but at the application abstraction it is fully functional. Object oriented programming is very good for defining state and changing state values; it is expressive and straightforward. But as we start to get into the realm of cross-domain state changes and side effects, a functional approach allows us to write declarative code. Declarative code can be read by the framework beforehand, giving developer tools insight into what we want our code to do before it is even run.
const controller = Controller({
  state: {
    items: []
  },
  signals: {
    itemAdded: push(state`items`, props`item`)
  }
})

// Components
const Items = connect({
  items: state`items`,
  itemAdded: signal`itemAdded`
},
  function ItemsComponent ({items, itemAdded}) {
    return (
      <div>
        <ul>
          {items.map((item) => <li>{item}</li>)}
        </ul>
        <button onClick={() => itemAdded({item: 'foo'})}>
          Add foo
        </button>
      </div>
    )
  }
)

// Pass state to components
render((
  <Container controller={controller}>
    <Items />
  </Container>
), document.querySelector('#app'));
Cerebral is not magical, we explicitly tell the framework (and ourselves) how everything is connected, but it is not as low level as Redux.
Boilerplate
How much code do we have to write? It is important to understand that less code does not mean better code. We could claim that type checking is boilerplate, but it gives us guarantees and arguably makes our code more readable. To avoid boilerplate we often use abstractions, but abstractions can hide logic in a way that makes it difficult for the next developer to understand what is really going on.
MOBX
Mobx is what we call an implicit library. A good example of this is looking at how components render.
@observer
class Items extends Component {
  render() {
    return (
      <div>
        <ul>
          {this.props.store.items.map((item) => <li>{item}</li>)}
        </ul>
        <button onClick={() => this.props.store.addItem('foo')}>
          Add foo
        </button>
      </div>
    )
  }
}
In this component we are not defining what state the component depends on, it automagically understands that by accessing observable properties. It is less code to read, but it is harder to figure out what causes this component to actually update.
REDUX
Redux is very explicit about what state our components use. We basically create a factory that extracts state and actions to be dispatched:
const ADD_ITEM = 'ADD_ITEM'

function addItem (item) {
  return {type: ADD_ITEM, payload: item}
}

function Items ({items, onClick}) {
  return (
    <div>
      <ul>
        {items.map((item) => <li>{item}</li>)}
      </ul>
      <button onClick={() => onClick('foo')}>
        Add foo
      </button>
    </div>
  )
}

connect(
  (state) => {
    return { items: state.items }
  },
  (dispatch) => {
    return {
      onClick: (item) => {
        dispatch(addItem(item))
      }
    }
  }
)(Items)
This contains a lot more boilerplate than Mobx. That said, it is more explicit about what state and state changes this component uses. We understand how the state gets to the component and therefore why it updates.
CEREBRAL
With Cerebral we are explicit, like Redux, but with less code. We connect state and signals where we need them. Redux has a concept of "Container and Presentational" components; I have personally worked on projects where these were completely separated, meaning that you only connect state and actions to certain top-level components, like pages, and end up passing a lot of props around instead of just connecting state and actions exactly where you need them. In Cerebral there are just components, and you connect state and signals exactly where you need them.
connect({
  items: state`items`,
  itemAdded: signal`itemAdded`
},
  function Items ({items, itemAdded}) {
    return (
      <div>
        <ul>
          {items.map((item) => <li>{item}</li>)}
        </ul>
        <button onClick={() => itemAdded({item: 'foo'})}>
          Add foo
        </button>
      </div>
    )
  }
)
Again, the benefit of being explicit is that we know what the state dependencies are, and in Cerebral’s case, what signals the component can trigger. Since this is done through declarative code it can also be extracted and displayed in the devtools, helping us further understand what components depend on, without even running the render code.
Developer tools
There has been a revolution of developer tools in the React ecosystem. One thing is the React debugger itself, but when Redux got all its attention with immutabilty it opened up new possibilities. Especially the time travel debugger got a lot of attention. Time travel was actually one of the early experiments of the Cerebral debugger, almost 3 years ago, but it has ended up as a gimmick. The time travel itself is not the most valuable part; it is the history of state changes, and ideally how these changes came to be. What relates to our mental image of the application is where we find most value.
MOBX
Mobx has a developer tool that “does the job”, as Preethi says. We can investigate renders and what state properties a specific component depends on. This is done in the browser as an overlay.
REDUX
The Redux developer tools have gotten a lot of love. They can be used as an overlay, as an extension, or as a standalone application. There are many different types of debuggers, and we can combine them according to our own preferences.
CEREBRAL
The Cerebral debugger is taken even further. Even though Redux lists mutations, we do not know how they relate to each other and how they came to be. In Cerebral we do not only get an overview of mutations, but also an overview of the complete flow of changes in our application. The debugger itself is a standalone application that allows us to connect to any JS environment, whether browser, server, React Native, Electron etc. We can even combine client and server side execution in a single operation flow.
Debuggability
When something goes wrong, how do we figure out what happened? Depending on the type of problem there are different approaches to debugging, but typically something happened when going from one state to the next. Being able to understand what actually happens when moving between application states is important. This insight can come from reading the source code and/or having devtools to help us visualize this.
MOBX
With Mobx's magical nature it can be hard to track down bugs. Because state transitions happen "behind the scenes", it can be difficult to understand what exactly happens just by reading the code. The fact that changes can happen anywhere does not make things easier either. That said, Mobx can be forced into a one-way dataflow, and the devtools help us understand how a component renders.
REDUX
Redux is very explicit about how things are connected so it is easier to debug by reading the code around where the problem occurs. Also the fact that the state of reducers and changes to that state is isolated also helps a lot. The ability to skim through the mutations log in the developer tools is also a great benefit.
CEREBRAL
Since Cerebral is explicit about its state dependencies, where changes can occur, and has a one-way dataflow, it has the same benefits as Redux. On top of that, we have the devtools giving us a mental image of how state changes occur, with the ability to filter out specific state changes. This makes debugging application logic a great experience. Where Cerebral truly stands out though is the fact that we can see more than individual state transitions; we can see a flow of state transitions and side effects related to a specific event in the application.
Predictability
This is related to the previous point. Does our code behave the way we expect? One of the introductions to Flux was the counter on the Facebook notifications button. There was an issue where it popped up when it was not supposed to. They had many iterations trying to fix the bug, but it kept coming back. This is why Flux was introduced. It gave a predictable way to update state using the concept of one-way dataflow. All requests for changes targets “the top of the application”, which changes the state, and then the components render. The UI is a direct result of the current state of the application.
MOBX
If we do not force one-way-flow in Mobx it can become unpredictable as changes can occur anywhere.
REDUX
Redux, with its explicit definition of what a state change can be and how it is made, is the “king of predictability”. Based on the ideas of Flux, Redux has become THE implementation of Flux and is therefore very predictable.
CEREBRAL
Cerebral is also built on the concepts of Flux. The components only trigger signals that say: “This happened”. It is then up to Cerebral to run the mutations and side effects as defined. Since we define the whole flow in one signal Cerebral is very predictable. Changes are not divided into different parts of our code. Everything happens in one place, composed together, and this composition is also displayed in the debugger.
Testability
Some developers are very aggressive with testing; 100% coverage is the goal. For others, testing takes away too much time because of constant changes in the application itself, which is typical for startups. We can test components, we can test a state change, a flow with side effects, or just a function computing some data. No matter what we test, it is important that things are not intertwined, for example state changes intertwined with component rendering.
MOBX
If we allow Mobx to make changes anywhere and class instances are passed into other class instances, the code becomes harder to test. That said, with testing in mind it is perfectly possible to make Mobx testable using actions and planning out our domains to be as isolated as possible.
REDUX
Redux is basically pure functions. The actions just receive input and return an object. The reducers works the same way, only with state. This makes Redux highly testable. That said, when we get into side effects we are no longer in pure function world and things get harder to test.
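The purity that Redux relies on is framework-independent, and it is exactly what makes testing cheap. The same property can be demonstrated in a few lines of plain Python (used here only to keep the sketch dependency-free; the names are invented for illustration):

```python
def reduce_items(state, action):
    # A reducer in miniature: same inputs always give the same output,
    # and the incoming state dict is never mutated.
    if action.get("type") == "addItem":
        return {"items": state["items"] + [action["item"]]}
    return state

initial = {"items": []}
after = reduce_items(initial, {"type": "addItem", "item": "foo"})
print(after)    # the new state
print(initial)  # the original state, unchanged -- trivial to assert in a test
```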
CEREBRAL
Testing state changes is just a tiny bit of the story. Tests should really be done on the flow of changes in our code, so called integration tests. When a user clicks here, ajax requests are made, state changes are made etc. what state do we end up with at the end? Cerebral separates side effects from execution, even state changes. The functions in the signals get one input, called the context, and this context can easily be mocked during testing… even for a whole signal execution (integration test). Cerebral also has a set of helper tools to write less boilerplate for tests.
Modularity
As developers we tend to favor isolated pieces of code, modules. The challenge, particularly in frontend code, is that these modules need to talk to each other more often than not. They need to access each other's state and trigger logic "within" each other. Planning out how these relationships should work and avoiding circular dependencies can be problematic. Decoupling with events can also decrease the readability of how things relate to each other.
MOBX
Mobx uses classes: state and methods for changing that state and doing side effects are all contained inside a class. In that sense Mobx has really good modularity. The challenge though is when these class instances need to access each other’s state or trigger each other’s methods. It can be difficult to coordinate.
REDUX
Redux does not really have a concept of modularity. We define our actions somewhere over here and our reducers somewhere over there. The great thing about this, is that any action can trigger any state changes in any reducer, and any reducer can react to any action.
CEREBRAL
Cerebral has a concept called modules. This is a way for us to structure our application, but without isolating signals and state: any signal can change and grab any state. Also any signal can compose its logic from other signals. That means Cerebral is highly composable and modular, without the isolation. It is difficult to go wrong planning out the domains and there is basically no risk having circular dependency issues. There is no need to pass class instances around to get access to what we need.
Scalability / Maintainability
Writing the 500th line of code and the 10000th are very different. When the application grows it becomes more important to keep things simple, rather than easy. Simple means having clear concepts and responsibilities. This part of the code handles UI rendering, this part handles request for state changes and this part does the state change. It is tempting to do all of this inside one component for example, but it quickly becomes complex when 50 components all have their own internal state, side effects and state changes, trying to “reach into” each other when necessary. When applications grow we need to have a good separation between these concepts.
MOBX
Mobx is very easy to get going with, but it does not force us into a strict pattern of where to request state changes, where to make those state changes and side effects. It can happen anywhere. That makes Mobx, without good discipline, less than ideal for scalability and maintenance.
REDUX
Redux has clear concepts of what components are for, that actions need to be triggered to request state changes, and reducers are where state changes happen. This makes Redux highly scalable and maintainable. That said, there are no strict opinions on how to handle side effects. There are tools to help us with this though.
CEREBRAL
Cerebral has clear concepts of what goes where. Our components should ideally not handle any state. There are always exceptions for complex UI updates, which is the case for any framework, but for the rest, all the state goes into Cerebral. We can only change state by firing off a signal, and the signals also hold the logic for running side effects, composing it all together in a coherent flow. Scaling Cerebral means adding new state and signals, or new modules for structuring purposes. It is easy to onboard new team members, as they quickly get the mental image of how the application works by clicking around in the UI and looking at the debugger.
Summary
When looking at Preethi's presentation it struck me that Cerebral is a balance between Mobx and Redux. It gives us the predictability, explicitness, and great devtool experience of Redux, but with less boilerplate and a lower learning curve. This is not to say that Cerebral is the perfect solution and that you do not need Mobx or Redux. It is just an alternative that might fit you better if you think Mobx is too magical and "radical", and Redux is too much boilerplate and "conservative".
Thanks for reading through and feel free to check out more about Cerebral on the official Website.
Hadoop keeps data accessible despite hardware failure by storing multiple copies of it. If a machine or any piece of hardware crashes, the data is served via another path.
Hadoop is highly scalable, as new hardware can easily be added as nodes. Hadoop also provides horizontal scalability, which means nodes can be added on the fly without any downtime.
Hadoop is fault tolerant: by default, 3 replicas of each block are stored across the cluster, so if any node goes down, the data on that node can easily be recovered from another node.
In Hadoop, data is reliably stored on the cluster despite machine failure, due to replication of data across the cluster.
Hadoop runs on clusters of commodity hardware, which is not very expensive.
Hadoop is very easy to use, as clients do not need to deal with distributed computing; the framework takes care of all of it.
Like any technology, Hadoop has both strengths and weaknesses. Having seen its features and advantages above, let us now look at the limitations of Hadoop that led to the creation of Apache Spark and Apache Flink.
Limitations of Hadoop
Various limitations of Hadoop are discussed below, along with their solutions.
a. Issue with Small Files
Hadoop is not suited to small data: the Hadoop distributed file system lacks the ability to efficiently support random reading of small files because of its high-capacity design.
Small files are a major problem in HDFS. A small file is one significantly smaller than the HDFS block size (default 128 MB). HDFS cannot handle huge numbers of such files well, because it was designed for a small number of large files holding large data sets, not for a large number of small files. If there are too many small files, the NameNode, which stores the entire namespace of HDFS in memory, will be overloaded.
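The memory pressure is easy to quantify with back-of-the-envelope arithmetic. A commonly cited rule of thumb (an approximation, not an exact figure) is that each file, directory, or block consumes roughly 150 bytes of NameNode heap:

```python
OBJ_BYTES = 150  # rough per-object NameNode heap cost (rule of thumb)

def namenode_bytes(num_files, blocks_per_file=1):
    # One namespace object per file plus one per block.
    return num_files * (1 + blocks_per_file) * OBJ_BYTES

# 1 GB stored as 1024 one-megabyte files vs. 8 files of 128 MB each:
print(namenode_bytes(1024))  # 307200 bytes of NameNode heap
print(namenode_bytes(8))     # 2400 bytes -- over 100x less
```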
Solution
The solution to the small-file issue is simple: merge the small files into bigger files and then copy the bigger files to HDFS.
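A minimal pre-processing step along these lines can be sketched in a few lines of Python (an illustrative helper, not a Hadoop tool): concatenate the small files locally, tagging each chunk with its original name, and copy only the merged result to HDFS.

```python
import os
import tempfile

def merge_small_files(paths, out_path):
    # Concatenate many small files into one big file, keeping a
    # one-line header per file so the contents stay identifiable.
    with open(out_path, "w") as out:
        for p in paths:
            with open(p) as f:
                out.write("### %s\n" % os.path.basename(p))
                out.write(f.read())
                out.write("\n")

# Demo with two throwaway files in a temp directory.
d = tempfile.mkdtemp()
for name, text in [("a.txt", "alpha"), ("b.txt", "beta")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(text)
merged = os.path.join(d, "merged.txt")
merge_small_files([os.path.join(d, "a.txt"), os.path.join(d, "b.txt")], merged)
print(open(merged).read())
```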
HAR (Hadoop Archive) files were introduced to reduce the problem of many files putting pressure on the NameNode's memory; they work by building a layered filesystem on top of HDFS. HAR files are created with the hadoop archive command, which runs a MapReduce job to pack the files being archived into a small number of HDFS files. Reading through files in a HAR is no more efficient than reading through files in HDFS; in fact it is slower, since each HAR file access requires reading two index files as well as the data file.
Sequence files work very well in practice to overcome the small-file problem: we use the filename as the key and the file contents as the value. By writing a program for small files (say, 100 KB each), we can put them into a single sequence file and then process them in a streaming fashion, operating on the sequence file. Because sequence files are splittable, MapReduce can break a sequence file into chunks and operate on each chunk independently.
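The key/value idea can be illustrated with a tiny self-contained container format — a toy stand-in for the real, binary SequenceFile format: each record stores the file name as the key and the length-prefixed contents as the value, so records can hold arbitrary data (including newlines).

```python
def pack(entries):
    # entries: list of (filename, contents) pairs.
    # Record layout: "<name>\t<length>\n<contents>".
    out = []
    for name, data in entries:
        out.append("%s\t%d\n%s" % (name, len(data), data))
    return "".join(out)

def unpack(blob):
    # Walk the blob record by record, using the declared lengths.
    entries = []
    i = 0
    while i < len(blob):
        header_end = blob.index("\n", i)
        name, length = blob[i:header_end].rsplit("\t", 1)
        start = header_end + 1
        end = start + int(length)
        entries.append((name, blob[start:end]))
        i = end
    return entries

packed = pack([("a.txt", "alpha"), ("b.txt", "beta")])
print(unpack(packed))  # round-trips back to the original pairs
```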
Storing files in HBase is a very common design pattern for overcoming the small-file problem in HDFS: rather than storing millions of small files as such, the binary content of each file is added to a cell.
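The sequence-file idea above can be sketched in plain Python. This is only an illustration of the pattern (pack many small files into one container keyed by filename, then stream the records back out), not Hadoop's actual SequenceFile format; the length-prefixed layout and function names are invented for the sketch.

```python
import io
import struct

def pack(files):
    """Pack {filename: bytes} into one length-prefixed container blob."""
    out = io.BytesIO()
    for name, data in files.items():
        key = name.encode("utf-8")
        # record layout: key length, key, value length, value
        out.write(struct.pack(">I", len(key)))
        out.write(key)
        out.write(struct.pack(">I", len(data)))
        out.write(data)
    return out.getvalue()

def unpack(blob):
    """Stream (filename, bytes) records back out of the container."""
    buf = io.BytesIO(blob)
    while True:
        header = buf.read(4)
        if not header:
            return
        key = buf.read(struct.unpack(">I", header)[0]).decode("utf-8")
        vlen = struct.unpack(">I", buf.read(4))[0]
        yield key, buf.read(vlen)

small_files = {"a.log": b"alpha", "b.log": b"beta"}
blob = pack(small_files)
print(dict(unpack(blob)) == small_files)  # True
```

Because the container is read record by record, a consumer never needs all the small files in memory at once, which is the same property that makes sequence files attractive on HDFS.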
b. Slow Processing Speed
In Hadoop, MapReduce processes large data sets with a parallel, distributed algorithm. It performs two tasks, Map and Reduce, and requires a lot of time to carry them out, which increases latency. Because data is distributed and processed over the cluster, MapReduce adds transfer time and reduces processing speed.
Solution
Spark overcomes this issue through in-memory processing of data. In-memory processing is faster because no time is spent moving data and processes in and out of disk; Spark can be up to 100 times faster than MapReduce since it processes everything in memory. Flink is also used, as it processes data even faster than Spark thanks to its streaming architecture; Flink can also be instructed to process only the parts of the data that have actually changed, which significantly increases job performance.
c. Support for Batch Processing Only
Hadoop supports only batch processing; it does not process streamed data, so overall performance is slower. Moreover, the MapReduce framework does not leverage the memory of the Hadoop cluster to the maximum.
Solution
Spark improves performance, but its stream processing is not as efficient as Flink's, since it uses micro-batching. Flink improves overall performance by providing a single runtime for both streaming and batch processing, and its native closed-loop iteration operators make machine learning and graph processing faster.
d. No Real-time Data Processing
Hadoop is designed for batch processing: it takes a huge amount of data as input, processes it, and produces the result. Batch processing is very efficient for high volumes of data, but depending on the size of the data being processed and the computational power of the system, output can be delayed significantly. Hadoop is therefore not suitable for real-time data processing.
Solution
Spark and Flink support stream processing and are the usual choices when data must be processed in near real time.
e. No Iterative Processing
Hadoop is not efficient for iterative processing, as it does not support cyclic data flow (i.e. a chain of stages in which the output of each stage is the input to the next).
Solution
Apache Spark can be used to overcome this issue, as it accesses data from RAM instead of disk, which dramatically improves the performance of iterative algorithms that access the same dataset repeatedly. Spark iterates its data in batches. For iterative processing in Spark, each iteration has to be scheduled and executed separately.
f. Latency
In Hadoop, the MapReduce framework is comparatively slow, since it is designed to support different formats, structures, and huge volumes of data. In MapReduce, Map takes a set of data and converts it into another set in which individual elements are broken down into key-value pairs, and Reduce takes the output of Map as input and processes it further. MapReduce requires a lot of time to perform these tasks, thereby increasing latency.
Solution
Spark is used to reduce this issue. Apache Spark is yet another batch system, but it is relatively faster since it caches much of the input data in memory via RDDs and keeps intermediate data in memory as well. Flink's data streaming achieves low latency and high throughput.
g. Not Easy to Use
In Hadoop, MapReduce developers need to hand-code each and every operation, which makes it very difficult to work with. MapReduce has no interactive mode, though additions such as Hive and Pig make working with it a little easier for adopters.
Solution
Spark addresses this issue: it has an interactive mode, so developers and users alike can get intermediate feedback for queries and other actions, and it is easy to program thanks to its many high-level operators. Flink can also be used easily, as it provides high-level operators as well.
h. Security
Hadoop can be challenging when managing complex applications, and if the people running the platform do not know how to enable its security features properly, data can be at huge risk. Hadoop lacks encryption at the storage and network levels, which is a major point of concern, and the Kerberos authentication it supports is hard to manage.
HDFS supports access control lists (ACLs) and a traditional file-permissions model, and third-party vendors have enabled organizations to leverage Active Directory Kerberos and LDAP for authentication.
Solution
Spark provides a security bonus: if we run Spark on HDFS, it can use HDFS ACLs and file-level permissions, and running Spark on YARN gives it the capability of using Kerberos authentication.
i. No Abstraction
Hadoop has no built-in abstraction layer, so MapReduce developers need to hand-code each and every operation, which makes it very difficult to work with.
Solution
To overcome this, Spark is used, which provides the RDD abstraction for batch processing; Flink provides the DataSet abstraction.
j. Vulnerable by Nature
Hadoop is written entirely in Java, one of the most widely used languages. Java has been heavily exploited by cyber criminals and, as a result, implicated in numerous security breaches.
k. No Caching
Hadoop is not efficient at caching: MapReduce cannot cache intermediate data in memory for further use, which diminishes Hadoop's performance.
Solution
Spark and Flink overcome this by caching data in memory for subsequent iterations, which enhances overall performance.
l. Lengthy Code
Hadoop has about 120,000 lines of code; more lines mean more potential bugs, and a larger code base takes more time to execute.
Solution
Although Spark and Flink are written in Scala and Java, their implementation is in Scala, so the number of lines of code is smaller than in Hadoop and programs take less time to execute.
m. Uncertainty
Hadoop only ensures that a data job completes; it cannot guarantee when the job will complete.
© 2019 Data Science Central ®
https://www.datasciencecentral.com/profiles/blogs/limitations-of-hadoop-how-to-overcome-hadoop-drawbacks
I've seen this problem dozens of times, so it must be a common problem - just not one with a common solution. I have a basic HTML page with a swf that works fine locally, but not after uploading the files to the server. I've moved the HTML, swf, flv and AC_RunActiveContent.js files to the appropriate locations. Is there something I'm doing wrong? The page is here. Any help is appreciated. Thanks! - Mark
Chances are there is something internal to the Flash file that is not working in its new environment. Based on your html code, there is no need for the AC...js file. When I try to view the file alone, outside of the html page, it still does not show anything. What kinds of things are going on with that file when it is first trying to start up?
Well, the original project is structured a bit differently from the norm. It imports the flv to a playback component, and uses an event listener to navigate to a new URL once the video is finished playing. But I don't know that there's a connection here which would cause the video not to even begin. I don't know if this will help, but FWIW, here's the code:
import fl.video.*;

flvPlayer.source = "NEWINTRO.flv";
flvPlayer.addEventListener(VideoEvent.COMPLETE, myHomePage);
flvPlayer.autoPlay = true;

var urlRequest = new URLRequest("");

function myHomePage(eventObject:VideoEvent):void {
    navigateToURL(urlRequest, '_self');
}
Like I said, locally, it all works flawlessly. Thanks for your help. - Mark
"Chances are there is something internal to the Flash file that is not working in its new environment"
This is one of the things I hate about Flash video. It's not self-contained, as QuickTime and Windows Media files are; it requires a number of external files to play a video. So what types of things can cause a Flash file to change its behavior when transferred from one environment to another?
I started over from scratch thinking maybe I missed something. Still the same - works locally, not when uploaded to server. Any suggestions?
Unfortunately (for me) I don't have a lot of experience with video stuff, so I can't offer much in the way of telling you there's something off with the code. I would expect that if it works locally it should work on the server as well. The best I can recommend is triple-checking the locations of the involved files.
If there's any way of adding an event listener to see if the flv actually starts to load, and any other status you can check, I'd try that as a temporary troubleshooting step.
If no one else picks up on this thread, try posting it anew with a summary of what you've included here. I won't interrupt its progress with a response.
https://forums.adobe.com/thread/422542
|
Device and Network Interfaces
sgen - Generic SCSI device driver
#include <sys/scsi/targets/sgendef.h>
sgen@target,lun:<devtype>
The sgen driver exports the uscsi(7I) interfaces to user processes. The sgen driver can be configured to bind to SCSI devices for which no system driver is available. Examples of such devices include SCSI scanners and SCSI processor devices.
Typically, drivers which export the uscsi(7I) interface unconditionally require that the user present superuser credentials. The sgen driver does not, and relies on the filesystem permissions on its device special file to govern who may access that device. By default, access is restricted and device nodes created by the sgen driver are readable and writable by the superuser exclusively.
It is important to understand that SCSI devices coexisting on the same SCSI bus may potentially interact with each other. This may result from firmware bugs in SCSI devices, or may be made to happen programmatically by sending appropriate SCSI commands to a device. Potentially, any application controlling a device via the sgen driver can introduce data integrity or security problems in that device or any other device sharing the same SCSI bus.
Granting unprivileged users access to an sgen-controlled SCSI device may create other problems. It may be possible for a user to instruct a target device to gather data from another target device on the same bus. It may also be possible for malicious users to install new firmware onto a device to which they are granted access. In environments where security is a concern but user access to devices controlled by the sgen driver is nonetheless desired, it is recommended that the devices be separated onto a dedicated SCSI bus to mitigate the risk of data corruption and security violations.
The sgen driver is configurable via the sgen.conf file. In addition to standard SCSI device configuration directives (see scsi(4)), administrators can set several additional properties for the sgen driver.
By default, the sgen driver will not claim or bind to any devices on the system. To do so, it must be configured by the administrator using the inquiry-config-list and/or the device-type-config-list properties.
As with other SCSI drivers, the sgen.conf configuration file enumerates the targets sgen should use. See scsi(4) for more details. For each target enumerated in the sgen.conf file, the sgen driver sends a SCSI INQUIRY command to gather information about the device present at that target. The inquiry-config-list property specifies that the sgen driver should bind to a particular device returning a particular set of inquiry data. The device-type-config-list specifies that the sgen driver should bind to every device that is of a particular SCSI device type. When examining the device, the sgen driver tests to see if it matches an entry in the device-type-config-list or the inquiry-config-list. For more detail on these two properties, see the PROPERTIES section.
When a match against the INQUIRY data presented by a device is made, the sgen driver attaches to that device and creates a device node and link in the /devices and /dev hierarchies. See the FILES section for more information about how these files are named.
It is important for the administrator to ensure that devices claimed by the sgen driver do not conflict with existing target drivers on the system. For example, if the sgen driver is configured to bind to a direct access device, the standard sd.conf file will usually cause sd to claim the device as well. This can cause unpredictable results. In general, the uscsi(7I) interface exported by sd(7D) or st(7D) should be used to gain access to direct access and sequential devices.
The sgen driver is disabled by default. The sgen.conf file is shipped with all of the 'name="sgen" class="scsi" target=...' entries commented out to shorten boot time and to prevent the driver from consuming kernel resources. To use the sgen driver effectively on desktop systems, simply uncomment all of the name="sgen" lines in sgen.conf file. On larger systems with many SCSI controllers, carefully edit the sgen.conf file so that sgen binds only where needed. Refer to driver.conf(4) for further details.
The inquiry-config-list property is a list of pairs of strings that enumerates a list of specific devices to which the sgen driver will bind. Each pair of strings is referred to as <vendorid, productid> in the discussion below.
vendorid is used to match the Vendor ID reported by the device. The SCSI specification limits Vendor IDs to eight characters. Correspondingly, the length of this string should not exceed eight characters. As a special case, "*" may be used as a wildcard which matches any Vendor ID. This is useful in situations where more than one vendor produces a particular model of a product. vendorid is matched against the Vendor ID reported by the device in a case-insensitive manner.
productid is used to match the product ID reported by the device. The SCSI specification limits product IDs to sixteen characters (unused characters are filled with whitespace). Correspondingly, the length of productid should not exceed sixteen characters. When examining the product ID of the device, sgen examines the length l of productid and performs a match against only the first l characters in the device's product ID. productid is matched against the product ID reported by the device in a case-insensitive manner.
For example, to match some fictitious devices from ACME corp, the inquiry-config-list can be configured as follows:

    inquiry-config-list = "ACME", "UltraToast 3000",
                          "ACME", "UltraToast 4000";

To match "UltraToast 4000" devices, regardless of vendor, inquiry-config-list is modified as follows:

    inquiry-config-list = "*", "UltraToast 4000";

To match every device from ACME in the "UltraToast" series (i.e. UltraToast 3000, 4000, 5000, ...), inquiry-config-list is modified as follows:

    inquiry-config-list = "ACME", "UltraToast";
Whitespace characters are significant when specifying productid. For example, a productid of "UltraToast 1000" is fifteen characters in length. If a device reported its ID as "UltraToast 10000", the sgen driver would bind to it because only the first fifteen characters are considered significant when matching. To remedy this situation, specify productid as "UltraToast 1000 ", (note trailing space). This forces the sgen driver to consider all sixteen characters in the product ID to be significant.
The device-type-config-list property is a list of strings that enumerate a list of device types to which the sgen driver will bind. The valid device types correspond to those defined by the SCSI-3 SPC Draft Standard, Rev. 11a. These types are:
Alternately, you can specify device types by INQUIRY type ID. To do this, specify type_0x<typenum> in the sgen-config-list. Case is not significant when specifying device type names.
The sgen-diag property sets the diagnostic output level. This property can be set globally and/or per target/lun pair. sgen-diag is an integer property, and can be set to 0, 1, 2 or 3. Illegal values will silently default to 0. The meaning of each diagnostic level is as follows:
No error reporting [default]
Report driver configuration information, unusual conditions, and indicate when sense data has been returned from the device.
Trace the entry into and exit from routines inside the driver, and provide extended diagnostic data.
Provide detailed output about command characteristics, driver state, and the contents of each CDB passed to the driver.
In ascending order, each level includes the diagnostics that the previous level reports. See the IOCTLS section for more information on the SGEN_IOC_DIAG ioctl.
Driver configuration file. See CONFIGURATION for more details.
The sgen driver categorizes each device in a separate directory by its SCSI device type. The files inside the directory are named according to their controller number, target ID and LUN as follows:
cn is the controller number, tn is the SCSI target ID and dn is the SCSI LUN in a device name of the form cntndn (for example, c0t5d0).
This is analogous to the {controller;target;device} naming scheme, and the controller numbers correspond to the same controller numbers which are used for naming disks. For example, /dev/dsk/c0t0d0s0 and /dev/scsi/scanner/c0t5d0 are both connected to controller c0.
The sgen driver exports the uscsi(7I) interface for each device it manages. This allows a user process to talk directly to a SCSI device for which there is no other driver installed in the system. Additionally, the sgen driver supports the following ioctls:
SGEN_IOC_READY
Send a TEST UNIT READY command to the device and return 0 upon success, non-zero upon failure. This ioctl accepts no arguments.
SGEN_IOC_DIAG
Change the level of diagnostic reporting provided by the driver. This ioctl accepts a single integer argument between 0 and 3. The levels have the same meaning as the sgen-diag property discussed in PROPERTIES above.
The device was opened by another thread or process using the O_EXCL flag, or the device is currently open and O_EXCL is being requested.
During opening, the device did not respond to a TEST UNIT READY SCSI command.
Indicates that the device does not support the requested ioctl function.
Here is an example of how sgen can be configured to bind to scanner devices on the system:
device-type-config-list = "scanner";
The administrator should subsequently uncomment the appropriate name="sgen"... lines for the SCSI target ID to which the scanner corresponds. In this example, the scanner is at target 4.
name= "sgen" class= "scsi" target=4 lun=0;
If it is expected that the scanner will be moved from target to target over time, or that more scanners might be added in the future, it is recommended that all of the name="sgen"... lines be uncommented, so that sgen checks all of the targets on the bus.
For large systems where boot times are a concern, it is recommended that the parent="" property be used to specify which SCSI bus sgen should examine.
driver.conf(4), scsi(4), sd(7D), st(7D), uscsi(7I)
ANSI Small Computer System Interface-2 (SCSI-2)
SCSI-3 SPC Draft Standard, Rev. 11a
http://docs.oracle.com/cd/E23824_01/html/821-1475/sgen-7d.html
In deep learning, the goal is to find the optimum weights of the model to get the desired output. In transfer learning, the network is initialized using the best pre-trained weights. The question is how do you initialize the weights for a non-pretrained model?
Training algorithms for deep learning models are usually iterative in nature and thus require the user to specify some initial point from which to begin the iterations. Moreover, training deep models is a sufficiently difficult task that most algorithms are strongly affected by the choice of initialization. — Deep Learning (book)
There are numerous weight initialization methods:
One way is to initialize all weights to 0. As all the weights are then the same, the activations in all hidden units are also the same, which makes the gradient with respect to each weight the same. The problem then arises of which weight the network should update and by how much, i.e. backpropagation finds it difficult to minimize the loss. The same problem occurs if all weights are initialized to 1.
In PyTorch, nn.init is used to initialize the weights of layers, e.g. to change a Linear layer's initialization method:

def init_weights(m, constant_weight):
    """initialize weight of Linear layer as constant_weight"""
    if type(m) == nn.Linear:
        nn.init.constant_(m.weight, constant_weight)
        m.bias.data.fill_(0)
The other way is to initialize weights randomly from a uniform distribution, in which every number has an equal probability of being picked.
In PyTorch, the Linear layer is initialized with He uniform initialization (nn.init.kaiming_uniform_) by default.
Choosing high values for the weights is not good for the model, as it brings problems of exploding and vanishing gradients. The general approach is to select small random values close to 0.
Good practice is to start your weights in the range [-1/sqrt(n), 1/sqrt(n)], where n is the number of inputs to a given neuron.
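A minimal plain-Python sketch of this common rule (no deep-learning framework assumed; the function name and shapes are illustrative): draw each weight uniformly from [-1/sqrt(n), 1/sqrt(n)], where n is the number of inputs to the neuron.

```python
import math
import random

def init_uniform(n_inputs, n_outputs, seed=0):
    """One layer's weights, each drawn uniformly from [-1/sqrt(n), 1/sqrt(n)],
    where n is the number of inputs feeding each neuron."""
    rng = random.Random(seed)
    bound = 1.0 / math.sqrt(n_inputs)
    return [[rng.uniform(-bound, bound) for _ in range(n_inputs)]
            for _ in range(n_outputs)]

w = init_uniform(256, 128)
flat = [v for row in w for v in row]
print(all(abs(v) <= 1 / math.sqrt(256) for v in flat))  # True
```

The more inputs a neuron has, the smaller its initial weights, which keeps the scale of each neuron's pre-activation roughly constant across layers.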
Another way is to initialize weights randomly from a normal distribution. As most values are concentrated towards the mean, most of the randomly selected values have a higher probability of being close to the mean (say 0).
There are many other ways for weight initialization such as Xavier initialization. It’s an active area of research.
https://kharshit.github.io/blog/2019/02/08/weight-initialization-in-neural-nets
|
Technical Articles
Handling text files in Groovy script of CPI (SAP Cloud Platform Integration).
Introduction:
Handling huge text files (either CSV or fixed-length) is a challenge in CPI (SAP Cloud Platform Integration).
Usually, before converting them to the XML required for mapping, we read them via Groovy scripts and manipulate the data. Most often this is done by converting them to String format, which is very memory-intensive.
In this blog post, I will show alternative ways to handle them: not only how to read large files but also how to manipulate them.
Hope you will enjoy the reading.
Main Section:
In CPI (SAP Cloud Platform Integration), we sometimes come across scenarios where we need to process an input CSV or other character-delimited text file.
Most often these files are huge compared to data received in XML or JSON format.
This data, which can be ",", tab, or "|" delimited, or of fixed length, creates additional complexity: it has to be read, sorted, and converted to XML (for mapping to a target structure) before it can finally be processed. Sometimes we also have to validate the number of fields in a line beforehand, to decide whether it is worth processing and to stop the flow of unnecessary data.
Like: File -> input.csv
A,12234,NO,C,20190711,……
A,26579,NO,D,20190701,…….
……………………………………………..
……………………………………………..
Say, we have to process all lines of above file where fourth field has Flag set to ‘D’, or Debit indicator.
So, in above example after reading the file we should only keep lines which has ‘D’ as fourth field and hence line 1 above should not be processed further.
Below we will see how to handle text and CSV files, especially huge ones, and how to process each line without converting the payload to a String, which consumes more memory.
*. Reading large files :
We normally start our scripts by converting the input payload to a String object:

String content = message.getBody(String) // this line is mostly used in scripts

But in the case of large files, the line above converts all of the data to a String held in memory, which is not at all good practice. Any further changes create or replace String objects, taking yet more space, and there is a real possibility of an OutOfMemoryError exception.
The better way is to handle them as a stream. There are two classes that can handle stream data:
a. java.io.Reader -> handles data as character or text stream
b. java.io.InputStream -> handles data as raw or binary stream.
Depending on the level of control you need over the data, or the business requirement, you can use either of them. The Reader class is usually easier to use, as we get the data as text/characters rather than as raw binary bytes.
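Since Groovy runs on the JVM, the Reader-based pattern can be sketched in plain Java. The class and method names here are illustrative, and the CPI Message API is left out so the sketch stays self-contained; in an iFlow the stream would typically come from message.getBody(java.io.InputStream) wrapped in an InputStreamReader. The sketch reads the payload line by line and keeps only lines whose fourth CSV field is "D", as in the input.csv example above.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class CsvStreamFilter {
    // Read line by line instead of materializing the whole payload as one String.
    public static List<String> debitLines(Reader source) throws IOException {
        List<String> kept = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",", -1);
                // keep only lines whose fourth field is the debit flag 'D'
                if (fields.length >= 4 && "D".equals(fields[3])) {
                    kept.add(line);
                }
            }
        }
        return kept;
    }

    public static void main(String[] args) throws IOException {
        String payload = "A,12234,NO,C,20190711\nA,26579,NO,D,20190701\n";
        List<String> result = debitLines(new StringReader(payload));
        System.out.println(result); // only the 'D' line survives
    }
}
```

Only one line is held in memory at a time, so the same code handles a multi-gigabyte payload as comfortably as a small one.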
Reading Data in CPI groovy script via java.io.Reader:
Reading Data at each field or word level, for each line:
*. Not a good way to do replace on data in CPI Groovy:
The String way of doing it –
The better approach of doing a replace while reading it as Stream:
*. Reading payload as a java.io.InputStream, stream object:
Conclusion:
This blog post is written to ease the pain of developers: while building iFlows, we come across many cases where we need to handle large text files in CSV or other delimited formats, which requires reading the entire file and sometimes working on the data of each line via text parsing.
In all those cases, the above blog post can be helpful to build required groovy scripts quickly, to be used in CPI (SAP Cloud Platform Integration) iflows, to handle these types of data.
It speeds up those developments by providing architecture and reusable code showing how to achieve the outcome.
I will look forward to your inputs and suggestions.
Great one!! Keep Blogging Subhojit 🙂
Thanks Arindam.
HI Subhojit,
I am reading a zip file through a Groovy script. It works fine until the size of the zip file exceeds 4MB. If the zip file is more than 4MB, it gives the below error.
When I tried to print the body.available() in the log, it shows 0 for files more than 4MB.
I used message.getBodySize() method instead of body.available(), but still its not working.
The maximum zip file size that we expect in real time would be more than 70MB.
Below is the program that I use to read through the zip file.
Can you please guide me where I am wrong?
Regards,
Anand...
Hi, can you try to write your code like below? (Remember, you will have to adapt the code to your needs, but the overall concept remains the same.)
def messageLog = messageLogFactory.getMessageLog(message);
InputStream is = message.getBody(InputStream.class);
ByteArrayOutputStream out = new ByteArrayOutputStream();
int n;
boolean canRead = false;
def myData = ''
// skip any leading bytes until the first 'P' (0x50), the start of the zip signature
while ((n = is.read()) > -1) {
    if (n == 80 && !canRead)
        canRead = true;
    if (!canRead)
        continue;
    out.write(n);
}
InputStream is2 = new ByteArrayInputStream(out.toByteArray());
ZipInputStream zipStream = new ZipInputStream(is2);
ZipEntry entry = zipStream.getNextEntry();
byte[] buf = new byte[1024];
while (entry != null) {
    if (entry.getName().contains("PDF")) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int m;
        while ((m = zipStream.read(buf, 0, 1024)) > -1) {
            baos.write(buf, 0, m);
        }
        myData = new String(baos.toByteArray(), StandardCharsets.UTF_8).replace("\"UTF-8\"\n", "")
        message.setBody(myData);
    }
    zipStream.closeEntry();
    entry = zipStream.getNextEntry();
}
messageLog.setStringProperty("Logging#5", "Printing Input Payload As Attachment")
messageLog.addAttachmentAsString("#ZIP CONTENT- payment_gl(PDF)", myData, "text/plain");
message.setBody(myData)
return message;

If it still gives the same error, please open a ticket with the CPI team.
Hi mate,
Thanks for your reply.
My bad, I forgot to mention that I need to encode the pdf content. Actually, I had other set of code, which is able to read through more than 10MB zip file, but I could not encode. It was giving an error like "Stream close". That's why I changed the code to read it into the FileOutputStream.
When I added the encoding part in your code, its giving the same error like Stream close. As I am inserting this PDF into SuccessFactors, I need to do base64Encoding. Please see the actual code.
FYI, your code is able to read through all the files inside zip file. If you could help to do the encoding with your code, then that would be great.
Hi S,
I am able to do the encoding.
Thanks
Anand...
Hi Subhojit,
Great blog!
I have a requirement to read 3 very large csv files and simply combine them and send to receiver.
While I can use an input stream to read the files and use memory efficiently, the combining part using aggregation will require the files to be converted to XML, since aggregation in CPI works only with XML.
And since I will be doing an aggregation, this large data will still be stored in the data store. Isn't it?
Do you have any ideas/work around to manage that?
Thanks,
Shubham
Hi Subhojit,
We have a flat file whose headers contain special characters. We want to apply a replace only to the header line. How do we achieve that using Groovy?
Thanks,
Hemant
Thanks for your blog. I tried your method and it doesn't work.
My interface is extracting an email attachment via the sender mail adapter. I can see the attachment does get extracted by the mail adapter and saved into the body via the trace.
However, when I used the groovy script to read the attachment via the getBody, nothing gets read. I verified by the body.length() and is 0.
Here is the simple code for me to get the body:
String body = message.getBody(java.lang.String)
body.length() is 0.
I used your array method and the array has size() 0.
Is that a bug in CPI as getbody is local CPI method.
Thanks Jonathan.
https://blogs.sap.com/2019/07/15/handling-text-files-in-groovy-script-of-cpi-sap-cloud-platform-integration./
|
XML - Managing Data Exchange/XHTML
From Wikibooks, the open-content textbooks collection
[edit] Introduction
[edit] The Evolution of XHTML
[edit] So What is XHTML?
[edit] XHTML Document Structure
At a minimum, an XHTML document must contain a DOCTYPE declaration and four elements: html, head, title, and body:
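A minimal, valid example (using the Strict DOCTYPE discussed below; the title and body text are placeholders):

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "">
<html xmlns="" xml:lang="en" lang="en">
  <head>
    <title>Minimal XHTML document</title>
  </head>
  <body>
    <p>Hello, world.</p>
  </body>
</html>
```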
The opening html tag of an XHTML document must include a namespace declaration for the XHTML namespace.
The DOCTYPE declaration should appear immediately before the html tag in an XHTML document. It can follow one of three formats.
[edit] XHTML 1.0 Strict
The Strict declaration is the least forgiving and is the preferred DOCTYPE for new documents. Strict documents tend to be streamlined and clean: all formatting belongs in Cascading Style Sheets rather than in the document itself, including the presentational elements and attributes that earlier HTML allowed inline. The Strict DOCTYPE is:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "">
[edit] XHTML 1.0 Transitional

The Transitional declaration is more forgiving: it still permits deprecated presentational elements and attributes, which makes it suitable when converting existing HTML documents. The Transitional DOCTYPE is:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "">
[edit] XHTML 1.0 Frameset
If you are creating a page with frames, this declaration is appropriate. However, since frames are generally discouraged when designing Web pages, this declaration should be used rarely. The Frameset DOCTYPE is:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"
    "">
[edit] XML Prolog
Additionally, XHTML authors are encouraged by the W3C to include the following processing instruction as the first line of each document:

<?xml version="1.0" encoding="UTF-8"?>
Although it is recommended by the standard, this processing instruction may cause errors in older Web browsers including Internet Explorer version 6. It is up to the individual author to decide whether to include the prolog.
[edit] Language
It is good practice to include the optional xml:lang attribute [1] on the html element to describe the document's primary language. For compatibility with HTML, the lang attribute should also be specified with the same value. For an English-language document use:

<html xmlns="" xml:lang="en" lang="en">
The xml:lang and lang attributes can also be specified on other elements to indicate changes of language within the document, e.g. a French quotation in an English document.
[edit] Converting HTML to XHTML
[edit] Documents must be well-formed
Because XHTML conforms to all XML standards, an XHTML document must be well-formed according to the W3C's recommendations for an XML document. Several of the rules here reemphasize this point. We will consider both incorrect and correct examples.
[edit] Tags must be properly nested
Browsers widely tolerate badly nested tags in HTML documents; XHTML does not.
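For example (an illustrative snippet; incorrect first, then correct):

```html
<!-- Incorrect: <i> is closed after its parent <b> -->
<b><i>bold italic text</b></i>

<!-- Correct: tags close in reverse order of opening -->
<b><i>bold italic text</i></b>
```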
[edit] Elements must be closed
Again, XHTML documents must be considered valid XML documents. For this reason, all tags must be closed. HTML specifications listed some tags as having "optional" end tags, such as the <p> and <li> tags.
In XHTML, the end tags must be included.
What about HTML tags that have no closing tag at all? Some empty elements neither require nor imply a separate end tag.
In XHTML, the XML rule of including a closing slash within the tag must be followed.
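For example:

```html
<!-- End tags that were optional in HTML are required in XHTML: -->
<p>First paragraph.</p>
<ul>
  <li>First item</li>
  <li>Second item</li>
</ul>

<!-- Empty elements take a closing slash: -->
<br />
<hr />
```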
Tags must be lowercase
In HTML, tags could be written in either lowercase or uppercase. In fact, some Web authors preferred to write tags in uppercase to make them easier to read. XHTML requires that all tags be lowercase.
This difference is necessary because XML differentiates between cases. XML would read <H1> and <h1> as different tags, causing problems in the above example.
The problem can be easily fixed by changing all tags to lowercase.
Attribute names must be lowercase
Following the pattern of writing all tags in lowercase, all attribute names must also be in lowercase.
The correct tags are easy to create.
Attribute values must be quoted
In HTML, some attribute values could be left unquoted; browsers understood them anyway.
XHTML requires all attributes to be quoted. Even numeric, percentage, and hexadecimal values must appear in quotations for them to be considered part of a proper XHTML document.
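For example (the attribute and value here are illustrative):

```html
<!-- Invalid in XHTML: -->
<table width=100%>

<!-- Valid: -->
<table width="100%">
```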
Attributes cannot be minimized
HTML allowed some attributes to be written in shorthand, such as selected or noresize.
When using XHTML, attribute minimization is forbidden. Instead, use the syntax x="x", where x is the attribute that was formerly minimized.
A complete list of minimized attributes follows:
- checked
- compact
- declare
- defer
- disabled
- ismap
- nohref
- noresize
- noshade
- nowrap
- readonly
- selected
- multiple
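For example, with the checked attribute:

```html
<!-- HTML shorthand (forbidden in XHTML): -->
<input type="checkbox" checked>

<!-- XHTML form: -->
<input type="checkbox" checked="checked" />
```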
The name attribute is replaced with the id attribute
HTML 4.01 standards define a name attribute for the tags a, applet, frame, iframe, img, and map.
XHTML has deprecated the name attribute. Instead, the id attribute is used. However, to ensure backwards compatibility with today's browsers, it is best to use both the name and id attributes.
As technology advances, it will eventually be unnecessary to use both attributes and XHTML 1.1 removed name altogether.
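For example (the anchor name is illustrative):

```html
<!-- name kept alongside id for backwards compatibility: -->
<a id="chapter1" name="chapter1">Chapter 1</a>
```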
Ampersands are not supported
Bare ampersands are illegal in XHTML.
They must instead be replaced with the equivalent character entity &amp;.
Image alt attributes are mandatory
Because XHTML is designed to be viewed on different types of devices, some of which are not image-capable, alt attributes must be included for all images.
Remember that the img tag must include a closing slash in XHTML!
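For example (the filename and description are illustrative):

```html
<img src="logo.png" alt="Company logo" />
```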
Scripts and CSS must be escaped
Internal scripts and CSS often include characters such as the ampersand and less-than sign, which are illegal in XML content.
The type attribute must be included, and the CDATA tags should be used.
Because scripts and CSS may complicate an XHTML document, it is strongly recommended that they be placed in external .js and .css files, respectively. They can then be linked to from your XHTML document.
Some elements may not be nested
The W3C recommendations state that certain elements may not be contained within others in an XHTML document, even when no XML rules are violated by the inclusion. Elements affected are listed below.
When to convert
MIME Types [2]
Help Converting
HTML Tidy
When not to convert
XHTML 1.1
DOCTYPE
The DOCTYPE for XHTML 1.1 is:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "">
Modularization
Invalid XHTML
Code Reusability (July 4, 2010)

Hi guys,

I went to bed but I couldn't sleep, so before I sleep tonight I am going to tell you what Better Programming is.

What is Better Programming? If you don't know the answer, don't worry: read this post to the end.

Writing reusable code is called Better Programming.

Then what is Code Reusability? Write the code once and make it available anywhere and in any place. Make a change in one place and the change is available to every application that uses the code. You don't want to rebuild the code again and again; you want to reuse it.

After Microsoft came up with .dll Dynamic Link Libraries, copying and pasting the code went away.

So write your code in such a manner that it can be reused, so that your manager can say you did Better Programming :-). OK guys, I am going to sleep now. See you in my next blog.

.Net 3.5: Extension Methods (June 27, 2010)

This is the first time I am writing a blog post, and with a coffee at that; the time is 3 o'clock in the morning, but I am not going to sleep until I complete this post.

OK, let us see what Extension Methods are.

What do you do if you want to add a method to a class? That is simple; of course we can add a method to a class. But suppose you want to add a method to a class whose source code you don't have, so you cannot touch the class itself. Or you want to add a method that can be used by several different classes which are not from the same family, and you want to define that method in a central repository.

Extension Methods are the answer to all of the above.

Extension methods enable you to add behavior (a method) to a class externally. You can define a method in a central repository and then attach it to several different classes without disturbing the source code of those classes. You can extend the existing functionality without changing the existing code, which makes extension methods a great way to provide a flexible design.

An extension method is essentially Microsoft's implementation of the Visitor Pattern.

Visitor Pattern: allows you to define new operations without changing the classes on which they operate.

[Figure: Visitor Pattern diagram]

Extension method signature:

    static string reverse(this string str, string value)

static: an extension method must be a static method; static is the keyword that declares it as such.
string: the return type of the reverse extension method.
reverse: the name of the extension method.
this: the 'this' keyword denotes that the method is an extension method of the class named by the first parameter.

The first parameter of an extension method names the class to which the method is attached externally. Here the extension method reverse is attached to the String class. The class in which the extension methods are created must itself be a static class.

Sample code: in this sample I am going to show you how to add a reverse method to the String class using extension methods.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace Extension_Methods
    {
        public static class StringExtension
        {
            // Adding an extension method called reverse to the String class.
            // The 'this' keyword denotes that reverse is an extension method of String.
            public static string reverse(this string str, string value)
            {
                Char[] chars = value.ToCharArray();
                Array.Reverse(chars);
                return new string(chars);
            }
        }
    }

I created a class called StringExtension; inside it I created a method called reverse and attached it to the String class by declaring "this string str" as the first parameter, which says that this reverse method is an extension method of the string class. Don't forget the keyword this, because it is what makes the difference between an ordinary method and an extension method.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace Extension_Methods
    {
        class Program
        {
            static void Main(string[] args)
            {
                String reverseValue = "Welcome";
                // Calling the extension method called reverse.
                reverseValue = reverseValue.reverse(reverseValue);
                Console.WriteLine(reverseValue);
                Console.ReadLine();
            }
        }
    }

Here I call the reverse method that I created in the StringExtension class and print the reverse of "Welcome" using the extension method.

If you later want to make changes to the reverse method, you can do so in the StringExtension class without touching the String class at all.

Interesting, right? Thanks to Extension Methods. OK guys, see you in my next blog.
arghfl.
Not to mention, StringUtils.isBlank() would work fine....
Admin
The first version of the method had a real bug. The method did not do what its name implied, but that's no WTF.
The second version of the method is still no good class design, but it ain't no WTF either, because there are some scenarios where you get objects, you don't know what they are (although you suspect them to be strings...), and you simply want to test whether they happen to be empty strings. Deserialization and generic data structures come to mind.
Of course, a more compact way of doing things would be:
public static boolean isEmptyString( Object obj ) { return (obj instanceof String && ((String)obj).Equals( String.Empty)); }
So where's the WTF?
Admin
OK now I got it. Call it with an Integer and it returns true. Sometimes your mind wants to refuse to see how bad things are.
Admin
From the comment it looks like he wants to return true if there's a string of whitespace so I'd say isEmpty is the one he wants. Also checking to see that the object is a String at all. Other than that I agree this is totally reinventing the wheel, but reinventing it with corners.
Admin
if ( obj instanceof String ) {          // if it's a string
    if ( ((String)obj).length() == 0 )  // and == ""
        return false;                   // then NOT emptyString - WTF?!
}
captcha = whiskey - mmm - need some...
Admin
Edit: I mean return that a string of whitespace isn't an empty string. So false not true.. I think...
CAPTCHA is craptastic, appropriately
Admin
To quote Shakespeare:
Admin
This code is FUBAR!
Check this out...
This prints out:
Admin
It's just a matter of the wrong function name, in both cases.
It should have been isNotEmptyString, since they both return false when the string length is zero. (i.e. when the string is empty).
The matter of the whitespace is good in both cases; since the method isn't documented, nobody can tell what is exactly meant by "empty" so both implementations are correct.
So the real WTF is that the second developer didn't catch on to the real bug, being the name of the method.
Admin
When "".equals(obj) just isn't enough...
Admin
This would have been so much easier, and clearer in C ...
g,d&r
Admin
Admin
Yep. The function works bass-ackwards, not to mention it will return TRUE if I send it an Integer, an Object, or little green men.
No, I doubt the function name's backward, the devel got it backwards. Or he's just done the "Russian Reversal" on his code, making it more WTFy than it already would be...
In Soviet Russia, code writes YOU!!!
Admin
I dont get all these java examples. I guess this article would be a lot more interesting if I understood it.
Oh well!
Admin
(x == null) followed by (x instanceof String) is redundant. instanceof implicitly checks against null.
Admin
yeah, my feeling exactly...
Admin
is it java or is it .NET they all seem the same to me. i googled and found out it was java only. .NET uses typeOf.
Admin
If that is c# then you could use string.isNullOrEmpty. Why would anyone write that useless function that would make no sense even if .net didn't provide us with 10 million string objects already in the framework.
Admin
return (obj instanceof String) && obj.toString().trim().length() == 0;
Admin
Sorry for stating this, but i'm really not impressed with the quality of the comments...
public boolean isEmptyString(CharSequence string) { return "".equals(string); }
It automatically validates if the thing in question is a string-like thing, it allows StringBuilder,CharBuffer etc. and it returns false if string is null or not empty.
Admin
Microsoft gave us String.IsNullOrEmpty(obj) in .NET 2.0 so we don't make the same mistake...
[)amien
Admin
Oh, actually, it looks like they don't want a whitespace string to count as an empty string. Weird.
So, that makes it easy:
return (obj instanceof String) && obj.toString().length() == 0;
Admin
Ah, the classic boolean result is the opposite of what it should be in some cases bug. it doesn't get any better than this, especially when the function has been "fixed" and it still does that. Can anyone say "unit test?"
Admin
Upps, must be "return "".equals(string.toString())....
Admin
I suppose you could do this:
return String.valueOf(obj).equals("");
...and make the assumption that if an incoming Object's toString() method results in "" you should count it as an empty string too.
Admin
I think the WTF here is there was no unit test, which would have caught the original error.
Admin
Gosh !
public boolean isEmptyString(CharSequence cs) { if (cs == null) return false; return "".equals(cs.toString()); }
Sorry for risking a big lip...
Admin
if ( ((String)obj).length() == 0 ) return false;
So.. if the string has length 0, isEmptyString returns false. but if it has length of, say, 1, isEmptyString returns true. wtf?
Admin
Clearly the correct function would return: (new math.Random()).nextBoolean() || obj == null;
cause you can never go wrong with random
captcha: awesomeness
Admin
Admin
Admin
You cannot enter an Integer because the method needs a CharSequence, meaning that you will get a compile error. I would even go further and don't allow a null at all:
(Final)
public boolean isEmptyString(CharSequence cs) { if (cs == null) throw new IllegalArgumentException("cs must not be null !"); return "".equals(cs.toString); }
null don't make any sense at all for the method, so simply tell the programmer in no uncertain terms that he has done an error.
Admin
"".equals(whatever)
is technically not optimal because it unnecessarily wastes processor time constructing a string object ("") when one may very well already be constructed for us.
Don't reinvent the wheel!
String.Empty.equals(whatever)
Also, here is the solution to the original solution
public boolean isEmptyStringOhAndUseThisFunctionAndNotIsEmptyStringBecauseItIsBroken(Object obj) { return !isEmptyString(obj); }
Admin
String.Empty is C#, rather than Java. Plus if your compiler doesn't optimise "" to String.Empty, then you have been ripped off...
Admin
Several people have commented that the code is fine other than the name reversal, yet no one had pointed out that both versions of the function return the same value for null and an empty string.
WTF?
Admin
You should code it as follows
bool IsStringEmpty(object o) {
    if (o is Int32) {
        return false;
    } else if (o is Int64) {
        return false;
    }
    .... abbreviated long list of if statements

    return o is String && String.IsNullOrEmtpy(o);
}
Or create a factory pattern This greatly enhances my code line productivity :)
Admin
"I think the WTF here is there was no unit test, which would have caught the original error."
Well at least he put it in a function. Too often I see tests like this in-line in some humungous function that goes on and on - totally too complex for any unit test. And I get told there is no time to refactor, just find THIS bug and fix it.
Admin
Not to quibble about the "10 million string objects," which we can all understand as "string functions" or "string methods."
I was trying to understand what the first attempt at IsStringEmpty was doing; then what the second attempt was doing, and why the programmer thought it better; then the usual flood of proposed "fixes" in the comments, all of which for all I know may be wonderful.
But they miss the point. This stuff is FUBAR, as Obfuscator says way up above. I am especially grateful for his scratch unit test, which does rather cut through all the bull.
Anyway. Todd -- I think inadvertently -- has the explanation for why FUBAR like this happens. Yes, .NET languages do indeed have ten million methods for string manipulation. (And I thought that the 96 or so for an STL string was a bit camel-comittee-y...) Can anyone say "minimal interface?"
I mean, I understand that sometimes you want a non-mutable string. And sometimes you want a mutable string. And sometimes you want a string stream. Well, make the damn things Different Classes. Do NOT provide a gazillion stupid, overlapping routines for "string" manipulation, few if any of which are orthogonal... Otherwise you are simply bound to get idiots writing code like this.
Mind you, the real WTF is that the company (a) hired this guy, presumably knowing he was crap (b) allowed him to (almost) put the first version into production without any sort of a test (c) asked him to rewrite it AND (d) did not insist on a unit test of any kind for what is clearly a general-purpose, low-level utility.
The fact that the second version of the code makes my head spin is almost beside the point.
Admin
It is really unclear what the function is supposed to be doing... I'm not even sure the developer meant to be returning False for a Null value. Sure, null is not the same as an empty string, but where's the spec? I assume there is none... awesome! There's your WTF. Each of these functions appears to do something, probably wrong, based on what we assume a non-existent spec to state about what the function should do. I mean, as pointed out above, why not use a builtin function from StringUtils/String/etc?
Admin
I think the a more serious wtf... I don't know if its already been posted, but why would they want such a method? when somoene else posted "".equals(object); works just as well?
Admin
Or you could use a real language like .NET.
String.Empty or String.IsNullOrEmpty statics anyone?
Why reinvent something pretty much any nTier app would need. Specially when left to Java developers:P The results this blog makes painfully obvious every week from the comments alone.
Admin
But if you fix this function, you'll break all the code that uses it and works around the bug . Best solution is to rename the function to IsntNotANonEmptyString
Admin
.Net doesn't prevent people from reinventing the wheel just like in any other language, in fact I shudder to think what VB.Net and C#.Net snippets we'll start seeing in here when these folks all start migrating over!
Admin
Shouldn't that be <smirk>? Where's the opening tag?? :P
I only kid. I just couldn't resist.
Admin
If this is Java, then it is very likely that an instance of a string "" already exists in the internal string table, so this would not create a new object, it would just create a new pointer to the "" string in the table. This would be done at compile time, so there wouldn't even be any wasted CPU time when the method is called.
Admin
.NET is a language?
Admin
You mean like StringUtils.isEmpty()? Seriously, just because you don't know something exists, doesn't mean it doesn't exist.
Admin
I was just lazy and didn't feel like typing fuctions and methods.
HAHA CAPTCHA Test is "initech"
Admin
Admin
They need a better function name, but they're trying to trim before comparison so that nice little snip won't quite do.
So for the body of their function:
// nulls are not instanceof anything
if (obj instanceof String) {
    return ((String) obj).trim().length() == 0;
} else {
    return false;
}
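Pulling the thread together, the trim-based body proposed above compiles into a small self-contained helper (the class name here is mine, added for illustration):

```java
public class StringChecks {
    // True only when obj is a String whose trimmed length is zero.
    // Nulls and non-String objects are not empty strings.
    public static boolean isEmptyString(Object obj) {
        // nulls are not instanceof anything
        if (obj instanceof String) {
            return ((String) obj).trim().length() == 0;
        } else {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isEmptyString(""));     // true
        System.out.println(isEmptyString("   "));  // true
        System.out.println(isEmptyString("x"));    // false
        System.out.println(isEmptyString(null));   // false
        System.out.println(isEmptyString(42));     // false
    }
}
```

This is also trivially unit-testable, which several commenters point out would have caught the original bug.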
Crypto
Vapor includes SwiftCrypto which is a Linux-compatible port of Apple's CryptoKit library. Some additional crypto APIs are exposed for things SwiftCrypto does not have yet, like Bcrypt and TOTP.
SwiftCrypto
Swift's Crypto library implements Apple's CryptoKit API. As such, the CryptoKit documentation and the WWDC talk are great resources for learning the API.
These APIs will be available automatically when you import Vapor.
import Vapor

let digest = SHA256.hash(data: Data("hello".utf8))
print(digest)
CryptoKit includes support for:
- Hashing: SHA512, SHA384, SHA256
- Message Authentication Codes: HMAC
- Ciphers: AES, ChaChaPoly
- Public-Key Cryptography: Curve25519, P521, P384, P256
- Insecure hashing: SHA1, MD5
Bcrypt
Bcrypt is a password hashing algorithm that uses a randomized salt to ensure hashing the same password multiple times doesn't result in the same digest.
Vapor provides a Bcrypt type for hashing and comparing passwords.
import Vapor

let digest = try Bcrypt.hash("test")
Because Bcrypt uses a salt, password hashes cannot be compared directly. Both the plaintext password and the existing digest must be verified together.
import Vapor

let pass = try Bcrypt.verify("test", created: digest)
if pass {
    // Password and digest match.
} else {
    // Wrong password.
}
Login with Bcrypt passwords can be implemented by first fetching the user's password digest from the database by email or username. The known digest can then be verified against the supplied plaintext password.
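A sketch of that login flow, assuming a Fluent `User` model with `email` and `passwordHash` fields and a `LoginRequest` content struct (those names are hypothetical; only `Bcrypt.verify` is the API documented above):

```swift
app.post("login") { req -> EventLoopFuture<HTTPStatus> in
    let input = try req.content.decode(LoginRequest.self)
    return User.query(on: req.db)
        .filter(\.$email == input.email)
        .first()
        .unwrap(or: Abort(.unauthorized))
        .flatMapThrowing { user in
            // Compare the supplied plaintext against the stored Bcrypt digest.
            guard try Bcrypt.verify(input.password, created: user.passwordHash) else {
                throw Abort(.unauthorized)
            }
            return .ok
        }
}
```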
OTP
Vapor supports both HOTP and TOTP one-time passwords. OTPs work with the SHA-1, SHA-256, and SHA-512 hash functions and can provide six, seven, or eight digits of output. An OTP provides authentication by generating a single-use human-readable password. To do so, parties first agree on a symmetric key, which must be kept private at all times to maintain the security of the generated passwords.
HOTP
HOTP is an OTP based on an HMAC signature. In addition to the symmetric key, both parties also agree on a counter, which is a number providing uniqueness for the password. After each generation attempt, the counter is increased.
let key = SymmetricKey(size: .bits128)
let hotp = HOTP(key: key, digest: .sha256, digits: .six)
let code = hotp.generate(counter: 25)

// Or using the static generate function
HOTP.generate(key: key, digest: .sha256, digits: .six, counter: 25)
TOTP
A TOTP is a time-based variation of the HOTP. It works mostly the same, but instead of a simple counter, the current time is used to generate uniqueness. To compensate for the inevitable skew introduced by unsynchronized clocks, network latency, user delay, and other confounding factors, a generated TOTP code remains valid over a specified time interval (most commonly, 30 seconds).
let key = SymmetricKey(size: .bits128)
let totp = TOTP(key: key, digest: .sha256, digits: .six, interval: 60)
let code = totp.generate(time: Date())

// Or using the static generate function
TOTP.generate(key: key, digest: .sha256, digits: .six, interval: 60, time: Date())
Range
OTPs are very useful for providing leeway in validation and out of sync counters. Both OTP implementations have the ability to generate an OTP with a margin for error.
let key = SymmetricKey(size: .bits128)
let hotp = HOTP(key: key, digest: .sha256, digits: .six)

// Generate a window of correct counters
let codes = hotp.generate(counter: 25, range: 2)
The example above allows for a margin of 2, which means the HOTP will be calculated for the counter values 23 ... 27, and all of these codes will be returned.
Warning: The larger the error margin used, the more time and freedom an attacker has to act, decreasing the security of the algorithm.
Hi all,
I’m looking to rename a directory as I copy it from one location to another. I have searched; it's easy for files, but apparently not for directories?
Any help is without a doubt very appreciated.
thanks
Hi all,
Directory.Copy+ from BattleBIM copies the contents of a directory to a new location, so you would just specify the new directory name.
@jschnare I think that’s what you want.
import sys
pyt_path = r'C:\Program Files (x86)\IronPython 2.7\Lib'
sys.path.append(pyt_path)
import os

path = IN[0]
try:
    os.mkdir(path)
    OUT = path
except:
    OUT = path
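Outside Dynamo, the same copy-with-a-new-name operation can be sketched in plain Python with `shutil.copytree`: the destination path you pass is effectively the rename (the paths and names below are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

def copy_dir_as(src, dst_parent, new_name):
    """Copy directory `src` into `dst_parent` under `new_name`.

    copytree creates the destination directory itself, so there is no
    separate rename step: you simply choose the target name up front.
    """
    dst = Path(dst_parent) / new_name
    shutil.copytree(src, dst)
    return dst

# Illustrative usage against a throwaway directory tree:
base = Path(tempfile.mkdtemp())
(base / "orig").mkdir()
(base / "orig" / "a.txt").write_text("hello")
copied = copy_dir_as(base / "orig", base, "renamed")
print(copied.name)  # → renamed
```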
thank you gentlemen, greatly appreciated.
Sun, 08/21/2011 - 05:55
Forums:
I need a method to get the minimum distance between all the vertices in a geometry, and I wrote one with two "for" loops. It works well when the geometry is simple, but it takes too much time when there are thousands of vertices.
So I would like some help with this: is there a simpler way provided? Or how can I do it with lower (big-O) complexity? Thanks a lot for any information. Looking forward to replies.
Sun, 08/21/2011 - 13:58
what if you store the vertices of the 2nd shape as a kd-tree? that would speed up things quite a bit...
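As an illustration of why a k-d tree helps, here is a minimal pure-Python version that finds the minimum distance between vertices without the O(n²) double loop. It is only a sketch: it assumes the vertices are distinct coordinate tuples, and production code would use Biopython's or scipy's k-d tree instead:

```python
import math

def build_kdtree(points, depth=0):
    """Build a k-d tree as nested tuples: (point, left, right, axis)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1),
            axis)

def nearest(tree, target, best=None):
    """Return the tree point closest to target, skipping target itself."""
    if tree is None:
        return best
    point, left, right, axis = tree
    d = math.dist(point, target)
    if d > 0 and (best is None or d < math.dist(best, target)):
        best = point
    diff = target[axis] - point[axis]
    close, away = (left, right) if diff <= 0 else (right, left)
    best = nearest(close, target, best)
    # Only descend the far side if the splitting plane is nearer than best.
    if best is None or abs(diff) < math.dist(best, target):
        best = nearest(away, target, best)
    return best

def min_vertex_distance(points):
    """Minimum pairwise distance over distinct vertices via the k-d tree."""
    tree = build_kdtree(points)
    return min(math.dist(p, nearest(tree, p)) for p in points)
```

Each nearest-neighbour query prunes whole subtrees instead of scanning every other vertex, which is where the speedup over the double loop comes from.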
Mon, 08/22/2011 - 16:27
I can confirm that a kdtree structure is very well suited for this case. In python you can find an implementation in both BioPython and scipy. I have generated a small wrapper class to make working with the kdtrees a little easier. You can find the code below. Note that it works easiest when you also have a Point class representing the point in 3D space.
# Imports assumed by this snippet (the original post omitted them);
# "error" and "Point" come from the author's own module.
from itertools import izip          # Python 2
from numpy import empty, array
from Bio import KDTree as _kdtree   # Biopython's KDTree module

class KDTree(object):
    '''Represents a generic KDTree data structure. The current class is a small
    wrapper around the Biopython KDTree class. It makes the interface a little
    more convenient when using the KDTree class with higher level objects.
    '''

    def __init__(self, dim=3, bucket_size=1):
        '''Initializes a new tree and supporting data structures.'''
        self.dim = dim
        self._tree = _kdtree.KDTree(dim, bucket_size)
        self._domain = {}
        self.radii = None
        self._idx_pid_map = {}
        self.closest_points = None
        self.closest_radii = None

    @property
    def domain(self):
        return self._domain.values()

    @domain.setter
    def domain(self, domain):
        '''Property setter for domain.'''
        context = 'KDTree._setDomain()'
        try:
            M = len(domain)  # Number of rows
        except TypeError:
            error('Domain should be a list or a tuple.', context)
        if M == 0:
            error('No points in domain.', context)
        N = self.dim  # Number of columns
        crds = empty((M, N), dtype='float32')
        i = 0
        for p in domain:
            crds[i, :] = p.crds
            self._idx_pid_map[i] = p.pid
            self._domain[p.pid] = p
            i += 1
        self._tree.set_coords(crds)

    def search(self, target, radius):
        '''Searches the domain for points that are located within the distance
        specified by radius to the target point or coordinate.

        The target can be either a Point instance or a cartesian coordinate.
        All points and coordinates are treated as 3 dimensional. This implies
        that additional zero entries are automatically inserted in the
        coordinate arrays.

        Typical usage:
        >> t = KDTree()
        >> t.domain = domain  # domain is a list of Point instances
        >> target = Point(1, (5.0, 5.0, 0.0))
        >> t.search(target, 0.5)
        Point 383, dim = 3, 4.778950, 4.950348, 0.000000, ndata = 0
        Point 423, dim = 3, 5.075207, 5.242366, 0.000000, ndata = 0
        Point 600, dim = 3, 5.160043, 4.737402, 0.000000, ndata = 0
        '''
        # Make sure we start with empty result data sets
        self.radii = {}
        self.closest_points = []
        self.closest_radii = []
        if isinstance(target, Point):
            target = target.crds
        else:
            if len(target) == 1:
                if isinstance(target, tuple):
                    target = list(target)
                target.extend((0.0, 0.0))
            if len(target) == 2:
                if isinstance(target, tuple):
                    target = list(target)
                target.append(0.0)
            if len(target) > 3:
                error('Invalid number of components in target coordinates')
        _c = array(target, dtype='float32')
        self._tree.search(_c, radius)
        idxs = self._tree.get_indices()
        radii = self._tree.get_radii()
        result = []
        for idx, r in izip(idxs, radii):
            pid = self._idx_pid_map[idx]
            result.append(self._domain[pid])
            self.radii[pid] = r
        return result

    def closest(self, target, radius, npoints=1):
        '''Obtain the "npoints" closest points to the target point. By default,
        only a single point will be returned. If more points are required a
        list of these points will be returned.

        The number of points N can be one of the following:
        N = 0
            if there are no points within the specified radius
        N = 1
            if only one point is requested or if there is only 1 point within
            the search radius
        N = more than 1 but smaller than npoints
            if there are less points within the radius than the number of
            requested points
        N >= npoints
            if there are more points within the radius than the number of
            requested points

        Typical usage:
        >> t = KDTree()
        >> t.domain = domain  # domain is a list of Point instances
        >> target = Point(1, (5.0, 5.0, 0.0))
        >> t.closest(target, 0.5, npoints=3)
        Point 383, dim = 3, 4.778950, 4.950348, 0.000000, ndata = 0
        Point 423, dim = 3, 5.075207, 5.242366, 0.000000, ndata = 0
        Point 600, dim = 3, 5.160043, 4.737402, 0.000000, ndata = 0
        '''
        self.search(target, radius)
        result = []
        for pid, r in self.radii.items():
            result.append((r, pid))
        nresult = len(result)
        if nresult > 0:
            result.sort()
            idx = 0
            for r in result:
                if idx == npoints:
                    break
                radius, pid = r
                point = self._domain[pid]
                self.closest_radii.append(radius)
                self.closest_points.append(point)
                idx += 1
            if len(self.closest_points) == 1 or npoints == 1:
                return self.closest_points[0]
            else:
                return self.closest_points
        else:
            return None
Mon, 08/22/2011 - 16:28
Hmm...indentation is screwed while posting. If you want the code drop me a PM.
Regards,
Marco
Mon, 08/22/2011 - 16:57
cool! thanks for sharing that wrapper Marco...
perhaps you can add the .py file as an attachment?
cheers,
-jelle
Mon, 08/22/2011 - 17:10
Here it is. I included the definition of the Point class for your convenience ;). It should be trivial to change the code so it works with OCC vertices. I can take a look if someone is interested and add it to pythonocc. One thing I am currently looking at is changing the Biopython kdtree implementation for the one provide by scipy.
Regards,
Marco
Hi,
is it somehow possible to define different code for the host and the device in a host device function?
Here’s an example:
__host__ __device__ int test() {
#ifdef HOST
    return 0;
#else
    return 1;
#endif
}
I know, this function is somewhat stupid. It is only used to explain, what I would like to do.
As far as I know, there are no such macros defined by the nvcc compiler.
The following version will also not work:
__host__ int test() { return 0; }
__device__ int test() { return 1; }
The compiler will say that this is a redefinition of the function.
Any ideas?
Regards,
porst17
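For what it's worth, nvcc compiles a `__host__ __device__` function once for the host and once for the device, and defines the macro `__CUDA_ARCH__` only during the device pass. It can therefore play the role of the hypothetical `HOST` macro from the question (a sketch, not compiled here):

```cuda
__host__ __device__ int test() {
#ifdef __CUDA_ARCH__
    return 1;   // device compilation pass
#else
    return 0;   // host compilation pass
#endif
}
```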
contrib/contrib.mac - the file responsible for loading other macro files
This file is included by default in tred.mac and serves as a wrapper for other contributed macro package inclusions.
Besides, it provides file_opened_hook and file_resumed_hook in the package TredMacros. These hooks should not be overridden by other macro packages. See below for how to plug in context-specific code.
This global variable can be used by contributed TrEd macro packages to plug-in their custom context guessing functions. The purpose of such a function is to detect whether the current file is suitable for the macro package and if so, to indicate the correct binding context.
The function must return the name of the context to switch to, or undef if the current file does not suit.
The synopsis for a package named 'Foo' is as follows:
#binding-context Foo
package Foo;
BEGIN { import TredMacro; }

context_guessing {
    my ($hook) = @_;
    if (PML::SchemaName() eq 'foo-data') {  # some test that the file suits the macro package
        if ($hook eq 'file_opened_hook') {
            # some open-specific code
        } elsif ($hook eq 'file_resumed_hook') {
            # some resume-specific code
        }
        return 'Foo';  # return name of the macro package (context) to use
    }
    return;
};
TrEd::NodeGroups - macros for visualizing groups of nodes
package MyMacros;
use strict;
BEGIN { import TredMacro }

sub after_redraw_hook {
    my @nodes = GetDisplayedNodes();
    my $group1 = [ @nodes[0 .. $#nodes / 2] ];
    my $group2 = [ @nodes[$#nodes / 2 .. $#nodes] ];
    my $group3 = [ @nodes[$#nodes / 3 .. 2 * $#nodes / 3] ];
    TrEd::NodeGroups::draw_groups(
        $grp,
        [ $group1, $group2, $group3 ],
        {
            colors => [qw(red orange pink)],
            # stipples => [qw(dense1 dense2 ... dense6)],
            # stipples => [qw(dash1 dash2 ... dash6)],  # default
            # group_line_width => 30,                   # default
        }
    );
}
PML.mak - Miscellaneous macros of general use in the Prague Dependency Treebank (PDT) 2.0
#include <contrib/pml/PML.mak>
The following macros (functions) are provided by this package:
Return the name of the root element of a PML instance as specified in the PML schema associated with the current file. PDT uses a root element named adata for analytical layer annotation and tdata for tectogrammatical layer annotation.
Return the content of the element description of the PML schema associated with the current file.
For a Treex::PML::Node object, returns the PML schema associated with that object. If the object is a Treex::PML::Document, returns the PML schema associated with the given file. If no object is given, the current Treex::PML::Document is used. The PML schema is returned in the form of a Treex::PML::Schema object.
Looks up a node from the current file (or a given fsfile) by its ID (or PMLREF, i.e. the ID preceded by a file prefix of the form xy#).
Deletes a given ID from the node hash of the current or specified Treex::PML::Document. The node previously hashed under the given ID is returned.
Adds a node to the node hash (of the given or current Treex::PML::Document) using given ID as the hash key.
Searches for a node with the given ID. Returns the node and the number of its tree.
Return a reference to a hash indexing nodes in a given file (or the current file if no argument is given). If such a hash has not yet been created, it is built upon the first call to this function (or to other functions calling it, such as GetNodeByID). Use clearNodeHash to clear the hash.
Clear the internal hash indexing nodes of a given file (or the current file if called without an argument).
Ask the user for a sentence or node identifier (tree number or ID) and go to that sentence.
DEPRECATED, use non_proj_edges from <contrib/support/non_projectivity.mak> instead.
This file provides generic support for manual repositioning of nodes on the canvas.
Add this line to your macro context:
#include <contrib/support/move_nodes_freely.inc>
And these to your stylesheet:
style:<? my ($x,$y)=($this->{'.xadj'}||0,$this->{'.yadj'}||0); qq(#{Node-xadj:$x}#{Node-yadj:$y}) ?>
Then you can drag nodes over the canvas. When releasing the node with Shift pressed, the node (and its labels and edge-ends) move to the given position; if Control is pressed, then the complete subtree is moved in this way.
You may modify these default bindings by setting e.g.:
$move_nodes_freely{subtree} = 'Alt'; # default is Control
$move_nodes_freely{node}    = 'Meta'; # default is Shift
If you want to wrap this code into a more complex hook, you can do it by using a class:
package MyContext::MoveSupport;
import TredMacro;
#include <contrib/support/move_nodes_freely.inc>

package MyContext;
import TredMacro;
# ...
sub node_release_hook {
    my (@args) = @_;
    my $moved = MyContext::MoveSupport::node_release_hook(@args);
    if ($moved eq 'subtree') {
        # the hook moved a subtree
    } elsif ($moved eq 'node') {
        # the hook moved a node
    } else {
        # the hook moved nothing
    }
}
This implementation adds the attributes .xadj and .yadj to the moved nodes. These may be preserved by some I/O backends, e.g. Storable.
Disable default TredMacro key-bindings that modify trees.
#include <common/support/unbind_edit.inc>
Include this macro in your binding context if you want to disable all default TredMacro key-bindings that modify trees, such as copy/paste, etc.
http://ufal.mff.cuni.cz/~pajas/tred/contrib.html
String – as I understand it in this context, this is a class that inherits from Object.
But if I need to add parameters there (or variables, I am not sure what to call them), how can I do it?
Found several solutions:
ArrayList<String> list = new ArrayList<String>();
list.add(textview.getText().toString());
list.add("B");
list.add("C");
But this does not quite suit me. Can you clarify this cryptic class a little?
For example, I need to add photos to be added from the gallery to the list.
Answer 1, authority 100%
String – as I understand in this context, this is a class that inherits from the Object.
Colleague, to understand correctly, follow a simple algorithm:
- Write your code in the IDE.
- Hold down Ctrl and click on an incomprehensible word.
In your case, String is just a string. That is, your
List<String> allPhoto = new ArrayList<>();
will contain strings, not photos.
It’s just a collection, if you like – a dynamic array, that is, it changes its length when new elements are introduced. As the author told you above, the type of object that will be an element of the array is shown in angle brackets.
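To make the "dynamic array" point above concrete, here is a minimal, self-contained sketch (class and variable names are mine, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    // Build a list of strings; the type in angle brackets fixes
    // what kind of elements the list may hold.
    static List<String> buildNames() {
        List<String> names = new ArrayList<>();
        names.add("A");  // the list grows as elements are added
        names.add("B");
        names.add("C");
        return names;
    }

    public static void main(String[] args) {
        List<String> names = buildNames();
        System.out.println(names.size());  // prints 3
        System.out.println(names.get(1));  // prints B
    }
}
```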
As I understand it, you have a certain class of photo that has some fields. For example, length and width. Let it be implemented something like this (quite simplified):
public class Photo {
    int width;   // width
    int height;  // height

    public int getWidth() {   // width getter
        return width;
    }

    public int getHeight() {  // height getter
        return height;
    }

    public Photo(int width, int height) {  // constructor
        this.width = width;
        this.height = height;
    }
}
Then your list will look like this:
List<Photo> allPhoto = new ArrayList<>();
Let you add a photo class object. Do it like this:
allPhoto.add(0, new Photo(800, 600));   // add an 800*600 photo as element 0
allPhoto.add(1, new Photo(640, 480));   // add a 640*480 photo as element 1
allPhoto.add(2, new Photo(1280, 800));  // add a 1280*800 photo as element 2
If you have, as you say, a “gallery”, then in place of new Photo there should be a call like addFromGallery which returns an object of class Photo.
Suppose your boss told you: there is a collection allPhoto; get me the width and height of the second photo.
You do it like this:
Photo ph2 = allPhoto.get(2);      // got element 2 from the collection
int widthPh2 = ph2.getWidth();    // got the width from that photo
int heightPh2 = ph2.getHeight();  // got the height from that photo
Answer 2, authority 75%
When you create a new object for an array, you immediately indicate in angle brackets which type you are going to use.
Answer 3, authority 50%
If I understand the question correctly, then you need to store in the list not just a list of String, but a String complete with some additional parameters.
This can be achieved by creating a data structure that suits your needs and placing it in the list. For example:
class PhotoWithParams {
    String photo;
    String paramA;
    String paramB;

    public PhotoWithParams(String photo, String paramA, String paramB) {
        this.photo = photo;
        this.paramA = paramA;
        this.paramB = paramB;
    }

    // further getters for the parameters ...
    // public String getPhoto() ... etc.
}
Next is your code:
ArrayList<PhotoWithParams> list = new ArrayList<PhotoWithParams>();
list.add(new PhotoWithParams(textview.getText().toString(), "B", "C"));
https://computicket.co.za/java-add-values-%E2%80%8B%E2%80%8Bto-list-string/
Em 04-11-2010 00:15, Arnaud Lacombe escreveu:
> Hi,
>
> On Wed, Nov 3, 2010 at 11:19 PM, Mauro Carvalho Chehab
> <mchehab@redhat.com> wrote:
>> Em 03-11-2010 22:31, Arnaud Lacombe escreveu:
>>> Hi,
>>>
>>> On Wed, Nov 3, 2010 at 6:29 PM, Mauro Carvalho Chehab
>>> <mchehab@redhat.com> wrote:
>>>> Em 09-10-2010 18:40, Michal Marek escreveu:
>>>>>
>>>>> Arnaud Lacombe (1):
>>>>>   kconfig: delay symbol direct dependency initialization
>>>>
>>>> This patch generated a regression with V4L build. After applying it,
>>>> some Kconfig dependencies that used to work with V4L Kconfig broke.
>>>>
>>> of course, but they were all-likely buggy. If a compiler version N
>>> outputs a new legitimate warning because of a bug in the code, you do
>>> not switch back to the previous version because the warning wasn't
>>> there, you fix the code.
>>>
>>> That said, please point me to a false positive, eventually with a
>>> minimal testcase, and I'll be happy to fix the issue.
>>
>> Arnaud,
>>
>> In the case of V4L and DVB drivers, what happens is that the same
>> USB (or PCI) bridge driver can be attached to lots of
>> different chipsets that do analog/digital/audio decoding/encoding.
>>
>> A normal user won't need to open his USB TV stick just to see TV on it.
>> It just needs to select a bridge driver, and all possible options for encoders
>> and decoders are auto-selected.
>>
>> If you're an advanced user (or are developing an embedded hardware), you
>> know exactly what are the components inside the board/stick. So, the
>> Kconfig allows to disable the automatic auto-selection, doing manual
>> selection.
>>
>> The logic basically implements it, using Kconfig way, on a logic like:
>>
>>	auto = ask_user_if_ancillary_drivers_should_be_auto_selected();
>>	driver_foo = ask_user_if_driver_foo_should_be_selected();
>>	if (driver_foo && auto) {
>>		select(bar1);
>>		select(bar2);
>>		select(bar3);
>>		select(bar4);
>>	}
>>	...
>>	if (!auto) {
>>		open_menu()
>>		ask_user_if_bar1_should_be_selected();
>>		ask_user_if_bar2_should_be_selected();
>>		ask_user_if_bar3_should_be_selected();
>>		ask_user_if_bar4_should_be_selected();
>>		...
>>		close_menu()
>>	}
>>
> no, you are hijacking Kconfig for something "illegal".

It is not a new code that added this logic. The code is there
at least since 2006. It were added on this commit:

commit 1450e6bedc58c731617d99b4670070ed3ccc91b4
Author: Mauro Carvalho Chehab <mchehab@infradead.org>
Date:   Wed Aug 23 10:08:41 2006 -0300

> Note that this
> last word is not mine, it is the word used in the language
> description:
>
> Note:
>	select should be used with care. select will force
>	a symbol to a value without visiting the dependencies.
>	By abusing select you are able to select a symbol FOO even
>	if FOO depends on BAR that is not set.
>	In general use select only for non-visible symbols
>	(no prompts anywhere) and for symbols with no dependencies.
>	That will limit the usefulness but on the other hand avoid
>	the illegal configurations all over.
>	kconfig should one day warn about such things.
>
> I guess the last line will need to be dropped, as this day has come.

All dependencies required by the selected symbols are satisfied. For example,
the simplest case is likely cafe_ccic, as, currently, there's just one possible
driver that can be attached dynamically at runtime to cafe_ccic. We have:

menu "Encoders/decoders and other helper chips"
	depends on !VIDEO_HELPER_CHIPS_AUTO

...

config VIDEO_OV7670
	tristate "OmniVision OV7670 sensor support"
	depends on I2C && VIDEO_V4L2

...

endmenu

config VIDEO_CAFE_CCIC
	tristate "Marvell 88ALP01 (Cafe) CMOS Camera Controller support"
	depends on PCI && I2C && VIDEO_V4L2
	select VIDEO_OV7670

The dependencies needed by ov7670 (I2C and VIDEO_V4L2) are also dependencies
of cafe_ccic. So, it shouldn't have any problem for it to work (and it doesn't
have, really. This is working as-is during the last 4 years).

It should be noticed that, even if we replace the menu dependencies by an
if, won't solve. I tried the enclosed patch, to see if it would produce
something that the new Kconfig behavior accepts. The same errors apply.

It is fine for me if you want/need to change the way Kconfig works, provided
that it won't break (or produce those annoying warnings) the existing logic,
and won't open the manual select menu, if the user selects the auto mode.
Just send us a patch changing it to some other way of doing it.

Thanks,
Mauro

---

Test patch, replacing depends on by if's. It doesn't really work, as Kconfig
seems to be internally converting if's into depends. So, no warning is removed.

--- a/drivers/media/video/Kconfig
+++ b/drivers/media/video/Kconfig
@@ -111,8 +111,8 @@ config VIDEO_IR_I2C
 # Encoder / Decoder module configuration
 #
 
+if !VIDEO_HELPER_CHIPS_AUTO
 menu "Encoders/decoders and other helper chips"
-	depends on !VIDEO_HELPER_CHIPS_AUTO
 
 comment "Audio decoders"
 
@@ -516,6 +516,7 @@ config VIDEO_UPD64083
 	  module will be called upd64083.
 
 endmenu # encoder / decoder chips
+endif
 
 config VIDEO_SH_VOU
 	tristate "SuperH VOU video output driver"
https://lkml.org/lkml/2010/11/4/108
Best way to grant a user specific permissions
Trying to determine which way I should handle this.
Normal User adds a post to our system, he then has the ability to update and delete this post. In the future he may assign other users the ability to edit, delete, update etc.
Should I create a has_many / belongs_to relationship between the user and post or should I handle this through roles via CanCanCan? Or both?
Thanks
Since that user can add access to other users specific to that post, you'll probably want to create a join table between the two and then use CanCanCan to verify if they are the owner or an editor.
Right now you probably have this:
class Post
  belongs_to :user
end

class User
  has_many :posts
end
And if you refactor so that the users are stored in a join table, you can have multiple users with access to a post:
class Post
  has_many :post_users
  has_many :users, through: :post_users
end

class PostUser
  belongs_to :post
  belongs_to :user
end

class User
  has_many :post_users
  has_many :posts, through: :post_users
end
When you create a post, you'll want to add
@post.users << current_user so that the person who created the post is in the users list.
You can add another action to give access to another user which just accepts a user_id and does something like the following:
def add_user
  @user = User.find(params[:user_id])
  @post.users << @user
  redirect_to @post, notice: "#{@user.name} can now edit the post"
end
Then you can simply use CanCanCan to check if the user is in the users array for a post. If they are they can manage the post; if they aren't they can't manage the post.
Does that make sense for what you want?
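The authorization check described above reduces to "is this user in the post's users list?". A plain-Ruby sketch of that predicate, with stand-in classes instead of real ActiveRecord models (in CanCanCan itself this would be expressed as an ability rule rather than a method):

```ruby
# Minimal stand-ins for the ActiveRecord models, just to show the check.
User = Struct.new(:id, :name)

class Post
  attr_reader :users

  def initialize
    @users = []
  end

  # A user may manage the post only if they appear in its users list.
  def manageable_by?(user)
    users.any? { |u| u.id == user.id }
  end
end

owner = User.new(1, "Alice")
other = User.new(2, "Bob")

post = Post.new
post.users << owner  # the creator gets access on creation

puts post.manageable_by?(owner)  # true
puts post.manageable_by?(other)  # false
```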
Absolutely, this is exactly what I was trying to wrap my head around.
Thanks a ton Chris for the clear and straight forward explanation. :)
Hello Chris,
I am having this exact issue now. I don't know how to set up the form that will allow users to add other users to edit their posts, and also how to define my ability in the CanCanCan gem.
Can you help out please?
@oomis, take a look at the Pundit episode. Since I wrote this, I've used the Pundit gem for authorization over CanCanCan. It's less confusing to me.
https://gorails.com/forum/best-way-to-grant-a-user-specific-permissions
#include "avfilter.h"
Go to the source code of this file.
Definition in file buffersink.h.
Tell av_buffersink_get_buffer_ref() to read video/samples buffer reference, but not remove it from the buffer.
This is useful if you need only to read a video/samples buffer, without fetching it.
Definition at line 64 of file buffersink.h.
Referenced by av_buffersink_get_buffer_ref(), and lavfi_read_packet().
Create an AVABufferSinkParams structure.
Must be freed with av_free().
Definition at line 42 of file sink_buffer.c.
Referenced by lavfi_read_header().
Get an audio/video buffer data from buffer_sink and put it in bufref.
This function works with both audio and video buffer sinks.
Definition at line 119 of file sink_buffer.c.
Referenced by av_vsink_buffer_get_video_buffer_ref(), lavfi_read_packet(), transcode_video(), and video_thread().
Create an AVBufferSinkParams structure.
Must be freed with av_free().
Definition at line 31 of file sink_buffer.c.
Referenced by lavfi_read_header().
Get the number of immediately available frames.
Definition at line 144 of file sink_buffer.c.
Referenced by transcode_video().
Definition at line 153 of file sink_buffer.c.
http://ffmpeg.org/doxygen/0.10/buffersink_8h.html
fpathconf, pathconf - get configuration values for files
Synopsis
Description
Notes
Colophon
#include <unistd.h>
long fpathconf(int fd, int name);
long pathconf(const char *path, int name);
The limit is returned, if one exists. If the system does not have a limit for the requested resource, -1 is returned, and errno is unchanged. If there is an error, -1 is returned, and errno is set to reflect the nature of the error.
POSIX.1-2001.
Files with name lengths longer than the value returned for name equal to _PC_NAME_MAX may exist in the given directory.
Some returned values may be huge; they are not suitable for allocating memory.
getconf(1), open(2), statfs(2), sysconf(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/pathconf.3.php
How do I print asterisks?
Printing asterisk
Page 1 of 1
How do i print asterisk ?
2 Replies - 9898 Views - Last Post: 06 May 2010 - 04:29 PM
#1
Printing asterisk
Posted 06 May 2010 - 03:42 PM
Replies To: Printing asterisk
#2
Re: Printing asterisk
Posted 06 May 2010 - 03:45 PM
If you are using c++ try using std::cout
#3
Re: Printing asterisk
Posted 06 May 2010 - 04:29 PM
#include <iostream>
#include <cstdio>   // for getchar()
using namespace std;

int main()
{
    int number;
    cout << "Enter a Number: ";
    cin >> number;
    for (int i(0); i < number; i++)
    {
        cout << "*";
    }
    cout << "\n\n Press any key to continue...";
    getchar();
    return 0;
}
If user enters a letter it exits.
This post has been edited by BlueMelon: 06 May 2010 - 04:29 PM
http://www.dreamincode.net/forums/topic/172362-printing-asterisk/